
Mike P. Wittie, in his computer architecture course curriculum, says:

> Students need to understand computer architecture in order to structure a program so that it runs more efficiently on a real machine.

I'm asking more experienced programmers and professionals who have a background in this topic:

How does learning computer architecture help you? What aspect benefits you most? How did learning computer architecture change the way you structure your programs?

Varaquilex
    @MichaelT Assembly programming is only a subpart of computer architecture; practically you can know assembly programming without knowing what pipeline or superscalar is. – Random42 Mar 26 '13 at 20:52
    @m3th0dman The answers that are showing up here could be copied into the other question without changes and fit just as well - and vice versa. The question is dealing with "how does understanding a lower level than you are working on help understand the higher level?" - and at that level, these two questions are identical. –  Mar 26 '13 at 20:58
    @MichaelT Not necessarily; in the below answers branch prediction or memory hierarchy are mentioned and they don't really have much connection with assembly programming. – Random42 Mar 26 '13 at 21:05
    Duplicate answers do not a duplicate question necessarily make. Close it as a duplicate if you must, but close it because the question is a duplicate, not because the answers are the same. – Robert Harvey Mar 26 '13 at 22:27
    I'm sorry, but `assembly language` != `architecture`. The two have some overlap, but on the whole, are completely different. The memory hierarchy, basic operating system design, and task scheduling are but a small number of categories that fall under "architecture" that have nothing to do with assembly. – riwalk Mar 27 '13 at 01:51
    @Stargazer712 - I think one of the deficiencies of the question is that it treats "computer architecture" as "I know it when I see it," and everybody sees it differently. Based on your comments, it seems that you look more at the various components that make up a computer *system* (a definition that I happen to agree with). However, I think it's just as reasonable to talk about "computer architecture" in terms of the pathways between components on the chip(s). Or something completely different. – parsifal Mar 27 '13 at 16:11

9 Answers


How does understanding physics help people drive a car?

  1. They understand phenomena like brake fade, and will compensate for it.
  2. They understand center of gravity and how tires grip the road.
  3. They understand hydroplaning, and how to avoid it.
  4. They know how to best enter and exit a curve.
  5. They are far less likely to tailgate.

And so on. You can drive a car without knowing much about physics, but understanding physics does make you a better driver.

Two examples of how understanding computer architecture can affect the way you code:
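As one illustration of the kind of effect meant here (a hedged Python sketch, not taken from the original answer): whether you traverse a 2-D array along rows or along columns decides whether the cache works for you or against you.

```python
# Illustrative sketch: row-major vs column-major traversal of a 2-D array.
# On a real machine the row-major loop is cache-friendly because it touches
# memory sequentially; the column-major loop strides across rows and incurs
# far more cache misses. (In pure Python, interpreter overhead dominates,
# so treat this as a sketch of the idea rather than a benchmark.)

N = 500
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    total = 0
    for row in m:                      # walk each row left to right: sequential access
        for value in row:
            total += value
    return total

def sum_col_major(m):
    total = 0
    for col in range(len(m[0])):       # walk down each column: strided access
        for row in range(len(m)):
            total += m[row][col]
    return total

# Both orders compute the same answer; only the memory-access pattern differs.
assert sum_row_major(matrix) == sum_col_major(matrix) == N * N
```

In a language with real arrays (C, Java, etc.) the same transposition of loops routinely changes running time by an integer factor, purely because of the memory hierarchy.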

Robert Harvey
  • That is one nice answer, I'm thankful for it:) Do you, by any chance, have more examples on this topic? – Varaquilex Mar 26 '13 at 20:30
    **I completely disagree with this comparison.** If you compare with a car, the question should be *"How does understanding car mechanics help people drive a car?"* You do not need to know how the engine works to be a good driver. Your comparison is more like comparing to the electronics in a computer, not the architecture as software. – Uooo Mar 27 '13 at 09:13
    @w4rumy - The "computer architecture" *is* the innards of the CPU, the caches, the buses, the nasty edge-cases where analog and digital meet, the... The comparison is truly apt. – Vatine Mar 27 '13 at 14:33
    I think a better comparison would be a mechanical engineer and a mechanic. Each deals with related problems at different levels of abstraction, and both could benefit from the insights of the other. The driver seems more aligned with the user. – ConditionRacer Mar 27 '13 at 15:22
  • An even more fundamental example of how understaning computer architecture can affect the way you code plays out in the way people (should) select data types. – DavidO Mar 27 '13 at 15:23
    Y'all are taking this too literally. Can you know something about the engine that will make you a better driver (the horsepower, the torque curve)? Absolutely. You think race car drivers don't know everything they can possibly know about their car's mechanical design? – Robert Harvey Mar 27 '13 at 15:24
  • @RobertHarvey - "Y'all are taking this too literally" -- this is a hangout for computer programmers after all. I'm surprised that you didn't get a downvote for saying "hydroplaning" rather than "the intercession of a fluid layer" ... not that I can see downvotes, so maybe you did :-) – parsifal Mar 27 '13 at 16:01
  • This is wrong on every level. If anything to be a good driver you need a very rudimentary understanding of why these things occur and an excellent understanding of how they feel. – Ian Mar 27 '13 at 20:33
  • @Ian: I guess you've never met a race car driver. You don't need to know *anything* about your car to be an ordinary driver, any more than you need to know anything about your machine to be a run-of-the-mill programmer. – Robert Harvey Mar 27 '13 at 20:35
  • Analogies like these never stand up to scrutiny. Take two identical people, neither of who can drive and neither of who know about the physics of driving. Teach one to drive and one all the physics and I imagine the one who can drive would win. I would further argue that if we introduced a third person and taught them both the physics and how to drive, they would not necessarily be any faster than the person who only learned how to drive. So I don't agree with your analogy and I don't agree with the point you are using it to try and support. Perhaps we could settle it on the track. – Ian Mar 28 '13 at 13:39
    @Ian: Your argument is fine, except that I never made any mention of speed or winning. No analogy is ever perfect. – Robert Harvey Mar 28 '13 at 14:45
  • But you did mention being a "run-of-the-mill programmer" which suggests a certain competitive comparison. Forget the race metaphor though, processor architecture is pretty meaningless now. Not entirely, but I couldn't tell you how many registers the current Intel processors have for example. Modern hardware is complex, really complex. With a few notable exceptions, most development tools aim to abstract and hide the complexity of the hardware. One exception would be compiler creators. And part of their job is to have their compiler optimise my code in clever ways I would not have thought of. – Ian Mar 28 '13 at 20:45
  • @Ian: No argument there. But even knowing just the basics of rudimentary computer architecture can still be meaningful in many ways to a software developer, even if that knowledge only has a subliminal influence on the way you write code. And if you want to continue to get better as a developer, eventually you have to dig below the surface. – Robert Harvey Mar 28 '13 at 20:50
  • I think we've just gone from not agreeing, through a process of reconciliation to a position of mutual understanding. I'm pretty sure that's an internet first! – Ian Mar 29 '13 at 19:14

It is basically the same reason as for understanding C and pointers, or maybe even algorithms; the only difference is that if you know computer architecture you really understand pointers (in fact, pointers seem trivial once you know computer architecture).

I cannot claim to be an experienced programmer, but a (actually, the) book on computer architecture that I read was, for me, the most interesting book related to computers and programming that I have read. By understanding computer architecture you basically understand how everything is linked together, how a computer works, and why a program actually works; you see the big picture. Without computer architecture you cannot truly understand:

  • memory management: heap, stack, virtual memory, the memory hierarchy, and the much-discussed pointers (why stack overflows happen, why deep recursion is risky, etc.)
  • assembly programming (if you want to program embedded)
  • compilers and interpreters (if you want to understand optimizations, and when it is useless to hand-optimize code because the compiler already does it)
  • linkers (dynamically linked libraries)
  • operating systems (if you want to read Linux kernel code)
  • the list can go on...

From my really subjective point of view it is by far more interesting and maybe even more useful than knowing algorithms.
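The first bullet above, why stack overflows happen and why deep recursion is risky, can be demonstrated even from a high-level language. A minimal Python sketch (CPython surfaces the finite call stack as a RecursionError):

```python
import sys

# Each call pushes a new frame onto the call stack; the stack is finite,
# so unbounded recursion must eventually overflow. CPython guards its own
# stack with a recursion limit and raises RecursionError instead of crashing.

def depth(n):
    return 1 if n == 0 else 1 + depth(n - 1)

try:
    depth(sys.getrecursionlimit() + 1000)   # deeper than the stack allows
    overflowed = False
except RecursionError:
    overflowed = True
assert overflowed

# An iterative rewrite does the same job in one frame, with constant stack space.
def depth_iterative(n):
    count = 0
    while n >= 0:
        count += 1
        n -= 1
    return count

assert depth(10) == depth_iterative(10) == 11
```

The same mechanism, a per-call stack frame of fixed-size storage, is exactly what a computer architecture course explains; once you have seen it, "stack overflow" stops being a mystery word.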

Random42

In today's world, this benefit is negligible, if it is present at all, for the majority of programming situations.

It is applicable when one is writing assembly for a particular processor, working on a problem that requires taking advantage of a particular architecture, or working under significant architectural limits (embedded systems), in which case the previous two points become all the more important.

For programmers of interpreted languages (perl, python, ruby) or languages that run in their own virtual machine (java, C#) the underlying machine is completely abstracted away. A Java program wouldn't be coded differently to run on a massive cluster or on one's desktop.

The cases where the architecture does make a difference, as mentioned, are embedded systems, where it is necessary to consider the very low-level concerns of that environment. The other extreme also exists: writing high-performance code, either in assembly or in something that is compiled to native code (not running in a virtual machine). In these extremes, one is concerned with what fits into the processor cache, how fast it is to access different parts of memory, and which way the branch prediction on the processor goes (if the processor uses branch prediction at all, or delay slots).

The question of branch prediction, delay slots, or the processor cache does not enter into the vast majority of programming issues, and cannot enter into interpreted or virtual-machine languages.

All that said, it is useful to understand what is going on one level deeper than the one the code is being written at; going further than that rapidly reaches diminishing returns. A Java programmer should understand a programming language with manual memory management and pointer math to understand what is going on under the covers. A C programmer should understand assembly, so that one can realize what pointers really are and where memory really comes from. Assembly coders should be familiar with the architecture to understand what the trade-offs of branch prediction and delay slots mean... and to take it even further, those designing processors should be familiar with the quantum mechanics of how semiconductors and gates work at a very basic level.
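The branch-prediction concern above can be made concrete. A Python sketch of the classic sorted-versus-unsorted branch demo (the hardware effect is dramatic in native code; in CPython the interpreter overhead masks most of it, so this is a sketch of the idea only):

```python
import random

# Classic branch-prediction demo: the loop body branches on each element.
# Over sorted data the branch outcome is a long run of False followed by a
# long run of True, so a hardware predictor almost always guesses right;
# over the same data shuffled, the branch is effectively random and
# mispredicts roughly half the time. The computed result is identical
# either way; only the predictability of the branch differs.

random.seed(42)
data = [random.randrange(256) for _ in range(10000)]

def sum_large(values, threshold=128):
    total = 0
    for v in values:
        if v >= threshold:       # the data-dependent branch
            total += v
    return total

shuffled_sum = sum_large(data)
sorted_sum = sum_large(sorted(data))   # same work, predictable branch
assert shuffled_sum == sorted_sum
```

In C or C++ the sorted version of this loop can run several times faster on a modern CPU, which is precisely the kind of fact that only makes sense once you know what branch prediction is.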

  • A pragmatic answer; hopefully you haven't been downvoted too many times. I will note, however that understanding processor caches [can](http://martinfowler.com/articles/lmax.html) be a part of programming on a managed platform. Although in 99.99999% of the cases, it shouldn't. – parsifal Mar 27 '13 at 16:06
@parsifal a most interesting example. Though being cache-friendly doesn't require knowing the specifics. There is much more design in there that deals with working with the JVM to work well with the underlying system than with paying attention to the underlying system itself (the example of a LongToObjectHashMap which uses long primitives rather than Long objects). This goes to the footnote http://martinfowler.com/articles/lmax.html#footnote-collection-imp which reinforces that unless performance is critical, it doesn't matter too much. –  Mar 27 '13 at 22:36
  • Performance or SECURITY, there are all sorts of edge cases that can be exploited if you are not careful, think RowHammer and many of the possible side channel and timing attacks, many of which can absolutely be exploited from an interpreted or JIT-compiled language. In fact I would argue that not just the architecture, but some knowledge of the underlying logic implementation (things like how modern DRAM works at the transistor level and how I might be able to exploit bugs in SMM mode or the DMA hardware) would be useful. That said, I love poking all that stuff, so maybe it is just me. – Dan Mills Jun 02 '17 at 17:13

I'd go so far as to say that anyone who doesn't understand computer organization is doomed to be a lousy programmer. Understand it, and you'll know:

  • how to organize your data structures and algorithms to be more cache efficient
  • how a function call works, and the implications for calling convention
  • the segments of memory, and their implications for variable declarations
  • how to read assembly, and thus interpret the output of a compiler
  • the effects of instruction-level parallelism and out-of-order instruction scheduling, and what that means for branching

Basically, you'll learn how a computer actually works, and thus you'll be able to map your code to it more effectively.
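The "read the output of a compiler" point has an analogue even on a virtual machine: CPython exposes its compiler's output through the standard `dis` module, and reading bytecode is the same skill as reading assembly, one level up. A small sketch:

```python
import dis

# "Reading the output of the compiler," one level up: CPython compiles this
# function to bytecode, and dis lets you inspect the instruction stream
# (loads, a binary operation, a return) much as objdump shows the machine
# instructions a C compiler emitted.

def add(a, b):
    return a + b

ops = [ins.opname for ins in dis.get_instructions(add)]

# The exact opcode names vary between CPython versions, but the shape is
# stable: operands are loaded, combined, and the result is returned.
assert any(op.startswith("LOAD") for op in ops)
assert any("RETURN" in op for op in ops)

dis.dis(add)   # prints the human-readable listing
```

Being comfortable at this level is what lets you check, rather than guess, what your compiler or interpreter actually did with your code.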

chrisaycock
I wouldn't go that far. Functionality and good design will always trump performance unless performance is _unusable_ - and you don't need to know about cache efficiency to make 99% of apps 'usable'. – Telastyn Mar 26 '13 at 20:46
  • When I studied computer science (approx. 20 years ago), all of these things were relevant. Are they still relevant to programmers studying today, who will very possibly go on to a career entirely involving managed and interpreted code? I'm not so sure. Certainly they're a lot less relevant compared to, say, algorithmic design and analysis, which has lost absolutely no relevance in managed and interpreted languages. – Carson63000 Mar 26 '13 at 22:14
    @Carson63000, yes. Managed programming means, "Programming with the assistance of a garbage collector and a JIT compiler" It does *not* mean "Programming in a magical fairy tale land where all the problems of the world disappear." :) Too many people treat managed programming as though it is the second definition. – riwalk Mar 27 '13 at 15:58
  • I'm intrigued: what are "the three segments of memory"? The only answer I can come up with is "text, data, and bss," but then "overhead" seems a strange word. – parsifal Mar 27 '13 at 16:43
@parsifal, Cache, RAM, and Disk. For more information: http://en.wikipedia.org/wiki/Memory_hierarchy. With that said, there are not necessarily 3. Many people consider registers and cache in a different tier. In addition, cache is usually divided up into the L1 and L2 cache (and sometimes an L3 cache). How complicated you want it to be depends on how deep you want to dig. – riwalk Mar 27 '13 at 17:11
  • *"I'd go so far as to say that anyone who doesn't understand computer organization is doomed to be a lousy programmer"*, that is a BOLD statement, especially in the days of high level programming languages which run on virtual machines.. – hanzolo Mar 27 '13 at 17:29
  • @Stargazer712 - exactly. As I noted in an earlier comment, terminology is a big problem with this question. You'll note that the word "segment" does not appear in the Wikipedia article that you linked. So the fact that you linked it is entirely due to your existing interpretation of the word. I used an interpretation that *might* be more familiar to an OS engineer or someone who works with MMUs. For that matter, someone who learned programming on an 8086 (and didn't learn anything new) will have yet another definition, one that's related to the MMU-centric term but isn't interchangeable. – parsifal Mar 27 '13 at 17:59
  • @Stargazer712 The [segments](http://en.wikipedia.org/wiki/Memory_segmentation) are code, [data](http://en.wikipedia.org/wiki/Data_segment) (namely heap), and [stack](http://en.wikipedia.org/wiki/Call_stack). What you're referring to are tiers in the memory hierarchy. – chrisaycock Mar 27 '13 at 19:27
  • @parsifal By overhead, I was referring to allocating on the heap vs merely placing a variable on the stack. – chrisaycock Mar 27 '13 at 19:42
  • @chrisaycock, I think that you are mixed up somewhere. The segments article that you linked to refers to how the computer divides up the address space of RAM to avoid fragmentation. How the memory of a process is divided up is different (and there are usually 4--the Process Control Block, Code, Heap, and the Stack) – riwalk Mar 28 '13 at 01:40
  • @Stargazer712 Isn't the PCB part of the kernel? – chrisaycock Mar 28 '13 at 01:42
  • @chrisaycock, it is a header that is at the beginning of each process's memory. In the sense that only the kernel directly touches the PCB, you could consider it part of the kernel (and who knows! Maybe everything I was taught was a high level abstraction and it is not stored anywhere close to the process!). But with that said, each process has its own unique PCB and can be considered a part of the process. Knowing of its existence is especially helpful if you ever try to use `fork` and `exec`. – riwalk Mar 28 '13 at 01:48
  • @Stargazer712 Now who's advocating computer organization knowledge? ;) Linux uses a linked list of `task_struct` to store process info. The `for_each_task` macro at the bottom of `sched.h` shows how to iterate through this list. – chrisaycock Mar 28 '13 at 02:07
  • @chrisaycock, if it is a linked list, then what stops each block from being at the beginning of the memory space devoted to each process? :) It doesn't really matter. You're probably right--everything is so abstracted that everyone can eventually be shown to be wrong. That was just the level of abstraction that was used to present the concept to me. – riwalk Mar 28 '13 at 03:45

Update 2018: How many Software Developers does it take to change a Lightbulb??? Who Cares!? That's a Hardware Problem!

Generally NO, you don't need to know computer architecture to be a good programmer; that's more in the EE realm, IMO... unless of course you're in embedded systems development, but in that case you're married to the chip and programming right on it, so you'll need to know the architecture of THAT "computer" (and even then it may not matter). Beyond that, having a general architectural understanding of how computers work won't be good for much else than water-cooler discussions.

I would say it's even less important these days, given the rate at which hardware prices are falling, performance is improving, technologies are changing, and languages are evolving. Data structures and design patterns don't really have much to do with physical hardware architecture, as far as I know.

Generally, programmers come from a computer science background, in which case they've more than likely taken computer architecture classes; but nowadays operating systems are going virtual, disk space is shared, memory is scalable, etc., etc.

I have been able to make a great career in programming (10+ years), and I have very little educational knowledge of computer architecture, mostly because... I was an Art major!!!

Update: Just to be fair, MY "little educational knowledge" came from my CPU Sci. minor, and still, I've never needed to use anything I learned in my assembly classes or my computer architecture classes in my "programming" career.

Even now, as I play around with some mesh networking ideas implementing the ZigBee spec, I've found that using the products and tools available (XBee), I'm able to program in Python and plop the code right onto the chip (SoC) and do some really neat stuff with it... all without having to worry about the actual architecture of the chips. There are definitely hardware limitations to be cognizant of, because of the chip size and the intended low price target, but even THAT will matter less in the upcoming years. So I stand by my "Generally NO" answer.

hanzolo
  • I think "generally no" is a good summary. There are certainly fields of programming where it's as important as ever; but the bulk of professional programmers do not work in those fields. – Carson63000 Mar 26 '13 at 22:16
    _"There are certainly fields of programming where it's as important as ever"_ Do any of those fields include programming in high level languages? If yes, to be more specific, what is the case, to give an example? – Varaquilex Mar 26 '13 at 22:25
  • @Volkanİlbeyli Programming in high level languages is just a means to an end. Many programmers who need to know this stuff for part of their job will also have use for some high level languages, if that was your question. –  Mar 26 '13 at 22:28
I disagree. We use some huge machines (32 cores, 60G memory, and 3.3TB drives) run as a cluster of 400 machines. But since our field is BigData, even these machines are minuscule in terms of the data being processed. So the principles are just as relevant. The machines' abilities may be doubling every 18 months, but the data we are attempting to process is doubling every 6 months. – Martin York Mar 26 '13 at 22:48
  • "The machine abilities may be doubling every 18 months but the data we are attempting to processes is doubling every 6 months", that is frickin' awesome.. However, I still stick with "Generally No" – hanzolo Mar 26 '13 at 22:52
-1. You don't know what you don't know. Understanding cache performance, the memory hierarchy, operating systems, time sharing, and countless other things has made me a much better programmer. You're willfully ignorant (and that is fine), as long as you don't set your sights too high. If you want to truly become a "good" programmer, then you must make an effort to understand the machines that you are commanding. – riwalk Mar 27 '13 at 01:42
  • @Stargazer712 - In my business app development experience, I've only ever needed to use high-level languages. Data structures and design patterns and all the new tools are what I spend my (free) time studying. How the OS runs my programs isn't really something I ever need to worry about, especially with the virtualization of everything. You call it "willful ignorance"; I call it a reality check. The machines do what I tell them to do, and they all want me to tell them in different languages; how each of those languages does it doesn't really concern me, especially since it's a moving target. – hanzolo Mar 27 '13 at 06:52
  • I disagree with the fact that computer architecture is in the realm of electrical/electronics engineering; memory hierarchy, memory protection, pipelining, branch prediction, ALUs, registers etc. don't have nothing in common with classical electronics (voltages, transistors etc.). – Random42 Mar 27 '13 at 09:55
  • @hanzolo, exactly. You're doing app development, and that's fine. But don't assume that every developer has the same level of aspiration as you do. – riwalk Mar 27 '13 at 14:35
  • @Stargazer712 - I also do some pretty cool stuff on the side (see updated answer) and I still don't *need* to know anything about the computer architecture of the chips i'm working with. Great Conversation! – hanzolo Mar 27 '13 at 17:21
  • @hanzolo, I stand by my original statement: You don't even know enough to know what you don't know. If you're not willing to learn it, then I'm not willing to make any efforts to describe it. – riwalk Mar 29 '13 at 00:18
  • @Stargazer712 - Generally, that should work out just fine! ;-) – hanzolo Mar 29 '13 at 01:52

Understanding the principles of computer architecture requires learning many important principles of programming. Therefore, a knowledge of computer architecture is relevant to programming in any language, no matter how high level.

These important principles include:

  • Fundamental data structures like arrays and stacks
  • Program structure: Loops, conditionals, subroutines (jump and call)
  • Considerations of time and space efficiency
  • Systems: the way various components fit together through abstract interfaces. Apparently this is controversial, so I will elaborate. Take the instruction set: a construct with a general form (operands, addressing modes, encoding) that is applicable to many different kinds of operations, such as arithmetic, logic, memory modification, and interrupt control. This illustrates a general principle of system design, namely that systems are composed of individual subsystems that all share the same abstract interface, and that an abstract interface can accommodate many specific components. The same principle is visible in a web application that may store the same kind of object (abstract interface) in a database, in memory, or on a web page (subsystems). In each case, the abstract interface specifies the general form without specifying the concrete detail. System design is the art of knowing what to make general and what to make specific; it is a skill honed by designing and understanding systems, in any language and at any level.
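The instruction-set point can be made runnable. A toy Python sketch (the opcode names and the three-operation "ISA" are invented purely for illustration) in which every operation, whatever it does, conforms to one abstract interface:

```python
# A toy "instruction set": every operation, arithmetic or otherwise, shares
# one abstract interface. Each opcode takes the machine state (here, a stack)
# and an operand, and returns the new state. New opcodes plug in without
# changing the execution loop, which is the point about abstract interfaces.

def op_push(stack, operand):
    return stack + [operand]

def op_add(stack, operand):
    return stack[:-2] + [stack[-2] + stack[-1]]

def op_mul(stack, operand):
    return stack[:-2] + [stack[-2] * stack[-1]]

ISA = {"PUSH": op_push, "ADD": op_add, "MUL": op_mul}

def run(program):
    stack = []
    for opcode, operand in program:
        stack = ISA[opcode](stack, operand)   # one interface, many subsystems
    return stack[-1]

# (2 + 3) * 4 == 20
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None), ("PUSH", 4), ("MUL", None)]
assert run(program) == 20
```

The execution loop never cares which opcode it is running; that separation of general form from concrete detail is the same design move an instruction set makes in hardware.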
Lyn Headley
  • Downvote. Your last point, in particular, has nothing to do with computer architecture. – user16764 Mar 27 '13 at 18:44
  • @user16764 - do you have a rigorous definition of what "computer architecture" is? Can you post a link to it? – parsifal Mar 27 '13 at 19:09
  • @parsifal Of course. I'm surprised you'd need to ask for it though: http://en.wikipedia.org/wiki/Computer_architecture – user16764 Mar 27 '13 at 19:16
  • @user16764 - I wondered if that would be your link. So when you read "which described an organization of logical elements" you can't translate that as "components" and "abstract interfaces"? That's just one example. I would claim, for example, that in a modern CPU the entire instruction set is an "abstract interface." – parsifal Mar 27 '13 at 19:18
  • Or do you interpret "abstract interface" as a feature of a specific programming language? – parsifal Mar 27 '13 at 19:19
  • @parsifal, the downvote stands, and your attempt to waste time playing semantic games is not going to change it. – user16764 Mar 27 '13 at 19:19
  • @user16764 - and you might as well leave the ad hom attacks in place (I saw the second comment to *Lyn Headley*, and it's the reason I responded). Either keep a filter on your writing from the start, or leave the filter off. – parsifal Mar 27 '13 at 19:20
  • Very well then. Since you both either are ignorant of what computer architecture is, or are pretending to be in an attempt to be argumentative, the downvote is obviously justified. – user16764 Mar 27 '13 at 19:23
  • LOL at the thought of someone posting a Wikipedia link as an example of a rigorous definition of anything. – HLGEM Mar 27 '13 at 19:38
  • It was "rigorous" enough for the context. Which had nothing to do with accuracy and everything to do with indignation over a perceived "ad hom" on my part, which I had already removed by the time the "request" was made. – user16764 Mar 27 '13 at 19:40
  • @parsifal - instruction set is a great example. This is one of the most fundamental interfaces in all of computing. It ties together memory, cpu, bus, and other components. – Lyn Headley Mar 27 '13 at 20:05
  • @Lyn Headley, can you revise your answer to more clearly explain how understanding an instruction set architecture helps you understand the architecture of a large information system composed of multiple applications? "They're both composed of components that fit together via abstract interfaces" isn't really concrete enough. Especially when you think about how many other things that that would describe. – user16764 Mar 27 '13 at 20:15
  • @user16764 how is that? – Lyn Headley Mar 27 '13 at 20:39
  • @LynHeadley Here's my answer to "how is that": Downvote removed, and previous comments retracted. – user16764 Mar 27 '13 at 20:42
  • @user16764 ah, the sweet glow of consensus :-) – Lyn Headley Mar 27 '13 at 21:48

It can help quite a bit, actually. Understanding concepts such as shared memory and inter-processor communication, and the potential delays involved in them, can help a programmer arrange their data and communication methods so as to avoid relying heavily on those mechanisms where possible. The same is true for other areas of programming, such as horizontal scaling, where distribution of, and communication among, a program or system of programs is a main focal point.

Understanding the pitfalls or tar pits of a physical system can help you arrange a program to negotiate the physical system as quickly and efficiently as possible. Simply throwing data into a communication queue and expecting it to scale underestimates what may really need to be put in place, especially if you must scale your software onto larger systems for better support.

Equally, the benefits of something such as functional programming really stand out in the light of understanding what's going on at the physical, systems level, which in my opinion gives such concepts even more traction.

One last quick example: understanding the concept of stream processing, and why sending data off to a processing unit like a video card may be best done in a very specific manner (send off all the required calculations, then receive back the frame of data in one fell swoop). In something like video graphics, or perhaps physics calculations, you wouldn't want to continually hold open a communication channel with such a device; knowing this, you would arrange that part of your program accordingly.
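The batching idea in that example can be sketched by counting round trips. A hypothetical Python model (the `Device` class is invented for illustration; it stands in for anything behind a slow link, such as a GPU across a bus):

```python
# Hypothetical model of a device behind a slow link (e.g. a GPU across a bus):
# every call costs one round trip, so sending work item-by-item pays that
# latency N times, while batching pays it once. The results are identical;
# only the communication pattern differs.

class Device:
    def __init__(self):
        self.round_trips = 0

    def compute(self, batch):
        self.round_trips += 1          # each call = one bus round trip
        return [x * x for x in batch]  # the device's per-item work

items = list(range(100))

chatty = Device()
chatty_results = [chatty.compute([x])[0] for x in items]   # 100 round trips

batched = Device()
batched_results = batched.compute(items)                   # 1 round trip

assert chatty_results == batched_results
assert chatty.round_trips == 100 and batched.round_trips == 1
```

On real hardware each round trip carries a fixed latency cost, so the batched arrangement is the one that scales; that is the architectural fact driving the "one fell swoop" advice above.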

After all, if programmers did not understand these issues and roadblocks, then these solutions would never exist in the form that they do.

scape

Knowing your architecture allows you to know when something that's being asked for is impossible.

I was once asked to write a program to communicate with a PIC over a PC serial port. The protocol would have the PIC sending nine-bit bytes, with no flow control. I would display in my program's UI the values of the fields in the packets that the PIC sent. Obvious question: how do I read the ninth bit of each byte? The protocol engineer and I decided that we would try setting the parity to MARK and then treat the ninth bit as a parity bit. I would read the value of the ninth bit according to the success or failure of the parity check. Well, I implemented that, and it didn't work. The values being displayed were obviously wrong. And after three continuous days of researching PC UART architecture, I found out why.

Here's why. The PC UART buffers its input. By the time it interrupts the CPU to say "READY TO READ", any number of bytes could have accumulated in its buffer. There is, however, only one Line Status Register to hold the value of the parity check. It is therefore impossible to tell which byte in the buffer failed the parity check. So: make sure that the buffer is only one byte long, you say? That's a flow control setting, and I already mentioned that this protocol didn't have flow control. If I didn't know the architecture of the hardware I was programming, I would never have been able to say: "Reading the ninth bit is impossible and needs to be cut."
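The ambiguity at the heart of this story can be modelled directly. A simplified Python sketch (a model of the problem described above, not of any specific UART):

```python
# Simplified model of the problem described above: the UART buffers several
# received bytes but keeps only ONE parity-error flag in its Line Status
# Register. With MARK parity, a frame whose ninth bit is 0 fails the check.
# Once two or more bytes sit in the buffer, the single flag cannot say
# WHICH byte's ninth bit was 0; that information is simply gone.

def lsr_parity_error(buffered_frames):
    """Each frame is (ninth_bit, data_byte); the LSR ORs the errors together."""
    return any(ninth_bit == 0 for ninth_bit, _ in buffered_frames)

# Two different transmissions...
burst_a = [(0, 0x41), (1, 0x42)]   # first byte's ninth bit is 0
burst_b = [(1, 0x41), (0, 0x42)]   # second byte's ninth bit is 0

# ...are indistinguishable from the software's point of view:
assert lsr_parity_error(burst_a) == lsr_parity_error(burst_b) == True

# Only a one-byte buffer (i.e. flow control, which the protocol lacked)
# would let each byte's ninth bit be recovered from the flag.
```

Knowing this one architectural detail, one status register for a multi-byte buffer, is exactly what turned "it doesn't work" into "it cannot work."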

user16764

Learning computer architecture helps immensely in programming.

Without understanding the environment the program is running in, your mental model is seriously handicapped. We see the world, not as it is, but as we are -- through the mental model.

You won't notice the difference in happy-case scenarios, where everything just happens to work, but it will make a crucial difference when you are working on harder problems or debugging weird bugs (i.e., real-life programming).

It's the difference between "WTF?" and "Ah, of course!".

Maglob