I want to understand how the theoretical foundations of computation relate to real-world computers. As far as I know, Turing machines, recursive functions, lambda calculus, combinatory logic, and other such models are all theoretically equivalent in the computations they can express. To me, these models seem to underlie what computation really is, at the most basic level.

Thus, my primary question is: how do these purely theoretical models of computation relate to real-world computer architecture? How do we go from these theoretical models to a real-world tool that makes use of electrical engineering and physics, aspects which the models of computation do not take into account? For example, is the real-world computer an advanced physical embodiment of the Turing machine, or is this completely off the mark?

EDIT: My question is different from simply asking how computers work. I am specifically asking about the relationship between practical, real-life computers and models of computation.

Wesley
    Possible duplicate of [How Do Computers Work?](http://programmers.stackexchange.com/questions/81624/how-do-computers-work) – gnat Jan 19 '16 at 04:53
    This seems too broad to really be answerable. You could pretty much rattle off a list of implementation details of real-world computers and all of them would qualify as differences from Turing Machines: L1-L3 caches, registers, pipelining, microcode, opcodes, finite memory, heat constraints, speed of light constraints... and then there are quantum computers. Anything resembling a complete answer would probably end up containing all of the same information as the post gnat linked (but I'm voting "too broad", not "duplicate"). Could you try to narrow your question down to something more tractable? – Ixrec Jan 20 '16 at 00:20

1 Answer


> how do these purely theoretical models of computation relate to real-world computer architecture?

They don't.

Both λ-calculus and Turing Machines were designed to model the way a human computes. They weren't designed to model computing machines.

This is most obvious with the Turing Machine, which was heavily modeled on the way a "computer" (which in Alan Turing's time was a job description for a human!) worked: by keeping a limited amount of information in his head and writing down the rest on sheets of paper. Turing made the amount of information arbitrarily large (but finite) and allowed for infinite amounts of paper (cutting the sheets into strips and gluing them together to get a simplified one-dimensional tape). However, even allowing for inhuman amounts of information and a physically impossible infinite tape, he could prove that there are certain things that cannot be computed.
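
To make the "finite control plus unbounded tape" picture concrete, here is a minimal sketch in Python of such a machine; the unary-increment machine and its encoding are my own illustrative choices, not anything from Turing's paper.

```python
# A minimal Turing Machine sketch: a finite control (the `rules` table),
# an unbounded tape (a dict defaulting to blanks), and a read/write head.
from collections import defaultdict

def run_tm(rules, tape_input, start_state, halt_state):
    """Run a TM given as {(state, symbol): (new_state, write, move)}."""
    tape = defaultdict(lambda: '_')              # '_' is the blank symbol
    for i, sym in enumerate(tape_input):
        tape[i] = sym
    state, head = start_state, 0
    while state != halt_state:
        state, tape[head], move = rules[(state, tape[head])]
        head += 1 if move == 'R' else -1
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

# Example machine: append one '1' to a unary number, i.e. n -> n + 1.
rules = {
    ('scan', '1'): ('scan', '1', 'R'),   # move right across the input
    ('scan', '_'): ('done', '1', 'R'),   # write a '1' on the first blank
}
print(run_tm(rules, '111', 'scan', 'done'))  # -> '1111'
```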

The λ-calculus similarly developed out of a desire to formalize what computation means. Instead of starting from the very physical act of a human writing down computations on paper, however, it is based on an intuition of how one would evaluate a function in one's head.
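
As a small illustration of that intuition, the λ-calculus can be mimicked directly with Python's own lambdas; the Church-numeral encoding below is the standard textbook one, used here purely as an example of computing with nothing but functions.

```python
# In the pure λ-calculus there are only functions; even numbers are
# encoded as functions. A Church numeral n applies f to x, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):                        # decode by counting applications of f
    return n(lambda k: k + 1)(0)

two   = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))        # -> 5
```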

Turing proved that a Universal Turing Machine can exist, i.e. a Turing Machine which can read the description of an arbitrary Turing Machine from its tape and perform that machine's operations. This can be considered the very first interpreter, and the very first stored-program computer.
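
The `run_tm` sketch above is, in miniature, exactly this kind of universal machine: the machine it executes arrives as data (the `rules` table) rather than being hard-wired. A hypothetical stored-program version, reusing `run_tm` from the sketch above:

```python
# The machine description lives in "memory" as plain data, and the same
# interpreter executes it -- the stored-program idea in miniature.
stored_program = [
    ('scan', '1', 'scan', '1', 'R'),
    ('scan', '_', 'done', '1', 'R'),
]
rules = {(s, read): (s2, write, move)
         for (s, read, s2, write, move) in stored_program}
print(run_tm(rules, '11', 'scan', 'done'))  # -> '111'
```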

However, people who were actually involved in developing the first digital computers claim that the work of the logicians on models of computation had little influence on the work of the electrical engineers and applied mathematicians building the first digital computers, despite the striking resemblance.

There are models of computation that are much closer to what current mainstream computers look like. The Random Access Machine, for example, is a cost model for algorithm analysis that more closely models how memory is accessed in a typical computer, namely with constant-time random access, as opposed to a Turing Machine, where access time is linear in the distance from your current memory location to the one you want to access (you have to move the head across the tape one cell at a time).
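
A rough sketch of the difference between the two cost models (the access pattern and the accounting here are invented for illustration): the same sequence of memory accesses is cheap on a RAM but expensive on a single-tape machine.

```python
# Illustrative cost accounting only: a RAM charges O(1) per access, while
# a single-tape Turing Machine charges O(distance) to move its head.
def ram_cost(accesses):
    return len(accesses)                  # constant time per access

def tape_cost(accesses, start=0):
    head, total = start, 0
    for target in accesses:
        total += abs(target - head)       # one step per tape cell traversed
        head = target
    return total

pattern = [0, 10_000, 1, 9_999, 2]        # jumping between far-apart cells
print(ram_cost(pattern))                  # -> 5
print(tape_cost(pattern))                 # -> 39994
```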

Note that you use the term "real-world computer architecture" as if there were only one. There are in fact many different architectures. For example, the Reduceron is a graph-reduction CPU specifically designed for Haskell-like languages; it works very differently from, say, a typical x86 CPU. There are architectures which explicitly model communication costs on the chip, such as the FLEET architecture. The Harvard Architecture strictly separates code and data, as opposed to the von Neumann Architecture, which doesn't distinguish between them. And so on.

Jörg W Mittag