90

OK, so I paraphrased. The full quote:

The Internet was done so well that most people think of it as a natural resource like the Pacific Ocean, rather than something that was man-made. When was the last time a technology with a scale like that was so error-free? The Web, in comparison, is a joke. The Web was done by amateurs. -- Alan Kay.

I am trying to understand the history of the Internet and the web, and this statement is hard to make sense of. I have read elsewhere that the Internet is now used for very different things than it was designed for, so perhaps that factors in.

What makes the Internet so well done, and what makes the web so amateurish?

(Of course, Alan Kay is fallible, and no one here is Alan Kay, so we can't know precisely why he said that, but what are some possible explanations?)

*See also the original interview*.

Peter Mortensen
kalaracey
  • Actually Alan Kay has at one point in the past answered a Stack Overflow question... –  Mar 24 '13 at 02:54
  • @WorldEngineer Source? – kalaracey Mar 24 '13 at 03:01
  • http://stackoverflow.com/questions/357813/help-me-remember-a-quote-from-alan-kay –  Mar 24 '13 at 03:04
  • IMHO the biggest missed opportunity was not making HTML parsing strict: predecessors like SGML had strict parsing rules, but the early web browsers/UAs accepted any sort of HTML and tried their best to display it. That made it easy for HTML to get started but caused problems for years. – jqa Mar 25 '13 at 00:53
  • IMHO the fundamental problem is that web usage was extended well beyond its initial application domain (hypertext). – chmike Mar 25 '13 at 09:05
  • @james Parsing isn't the problem. The problem appears once you have the parse and ask *what do I do with this information*? The specs were way too vague, so everyone did things their own way. – Rag Mar 25 '13 at 09:15
  • @chmike Indeed, I think we started with static hypertext and then slowly layered on more dynamic capabilities with JavaScript and other tools, approaching a dynamic end from a static beginning, whereas Kay thought we should have *started* at that dynamic end, with the browser as an "operating system" rather than "an application" (see Karl's answer below). – kalaracey Mar 25 '13 at 21:32
  • I have a lot of respect for Alan Kay's work, but he's talking out of his backside if he truly believes this. As a person who has spent a significant amount of time actually implementing low-level network parsers, I can confidently say the APIs for TCP/IP were equally amateurish and naive. Sure, implement variable-length options extensions (that nobody ever used) but fix the address space at a four-byte length, because that wasn't idiotic. – Evan Plaice Feb 18 '14 at 21:38
  • (cont) How about checksums: let's add one to the IP layer, and another one to the TCP layer, but require that it include a pseudo-header from the IP layer (see the sketch after these comments), because creating protocol interdependencies is a great idea /s. Don't even get me started on the RFC system. Instead of creating a sane version-control mechanism for documentation, anybody who wants to parse the lower protocols has to search through dozens of documents and attempt to discover the intent of the original protocol designers. I gleaned a lot of knowledge about how **not** to design an API from the TCP/IP specs. – Evan Plaice Feb 18 '14 at 21:48
  • @Pacerier Not sure how making an ad-hominem attack adds value to the conversation. The point is, it's true that TCP/IP **was** intentionally designed with a clearly defined spec from the start. That helped the proliferation of TCP/IP but also locked in the strengths/weaknesses that have existed since its inception. HTTP/REST was abstractly defined and its meaning has evolved organically over time, but it's still applicable in its original form today. Talk all the trash you want; when you have a relevant body of work to back up your critiques I'll be happy to listen. – Evan Plaice Apr 06 '17 at 06:59
  • @EvanPlaice, It adds value to the conversation **because** the points you made were completely irrelevant. If they were relevant to the issue at hand, the ad hominems would have no value, as they do not in and of themselves. (Therefore, the ad hominems you were attempting on me have no value, because my points happen to be relevant to the issue at hand.) – Pacerier Apr 11 '17 at 08:20
  • ..The point here is to make you and other readers on this page aware that you have completely missed the point, as demonstrated by your previous comment(s). **The Internet**, btw, is distinct from **the internet layer** and has a lot more to do with [the link layer](https://en.wikipedia.org/wiki/Link_layer) than anything else. That you are *not even* busy nitpicking the insignificant details [as opposed to the overarching traits of the architecture] of the Internet, but busy nitpicking those of TCP/IP, shows that you are *not even* a complete 180 degrees off track from the issue at hand. – Pacerier Apr 11 '17 at 08:28
  • @Pacerier Lemme guess, and we'd all be programming in Lisp/Smalltalk as God intended. Alan Kay created his ideal internet; it was called HyperCard, and it has lived in obscurity (i.e. academia) for the past 25 years. I.e., it's vaporware that came with lofty promises and delivered very little. – Evan Plaice Apr 12 '17 at 00:21
  • @EvanPlaice, HyperCard is Alan Kay's web? Seriously folks, which part of "*You want it to be a mini-operating system*" is unclear? Really, read my answer below. – Pacerier Apr 12 '17 at 01:47
  • As one who experienced the web before NCSA Mosaic: it was very, very boring and unambiguous, and essentially designed as a better Gopher (as I saw it). Nobody could have designed what we see today from what existed back then, because the things we can do now on the web were impossible to do or even envision back then. – Thorbjørn Ravn Andersen Feb 18 '18 at 18:22
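To make the pseudo-header complaint above concrete, here is a minimal Python sketch (an illustration added here, not the commenter's own code) of how RFC 793 has the TCP checksum computed over fields borrowed from the IP layer; the helper names are just illustrative.

```python
# Sketch of the TCP/IP coupling criticized above: checksumming a TCP segment
# requires IP-layer information (source/destination addresses), folded into a
# "pseudo-header" that is never actually transmitted.
import struct
from ipaddress import IPv4Address

def ones_complement_checksum(data: bytes) -> int:
    if len(data) % 2:                       # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def tcp_checksum(src_ip: str, dst_ip: str, tcp_segment: bytes) -> int:
    """Checksum a TCP segment (whose own checksum field is zeroed beforehand)."""
    pseudo_header = struct.pack(
        "!4s4sBBH",
        IPv4Address(src_ip).packed,         # IP-layer source address
        IPv4Address(dst_ip).packed,         # IP-layer destination address
        0, 6,                               # zero byte, protocol number (6 = TCP)
        len(tcp_segment),                   # TCP length
    )
    return ones_complement_checksum(pseudo_header + tcp_segment)
```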

10 Answers

80

In a sense he was right. The original (pre-spec) versions of HTML, HTTP and URL were designed by amateurs (not standards people). And there are aspects of the respective designs ... and the subsequent (original) specs ... that are (to put it politely) not as good as they could have been. For example:

  • HTML did not separate structure/content from presentation, and it has required a series of revisions ... and extra specs (CSS) ... to remedy this.

  • HTTP 1.0 was very inefficient, requiring a fresh TCP connection for each "document" fetched (see the sketch after this list).

  • The URL spec was actually an attempt to reverse engineer a specification for something that was essentially ad hoc and inconsistent. There are still holes in the definition of schemes, and the syntax rules for URLs (e.g. what needs to be escaped where) are baroque.
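As an illustration of the HTTP 1.0 point, here is a minimal Python sketch (not from the original answer) contrasting a connection-per-document client with an HTTP/1.1-style client that reuses one persistent connection; "example.com" is only a placeholder host and response handling is deliberately simplified.

```python
# Sketch of HTTP/1.0's connection-per-document model versus HTTP/1.1 reuse.
import socket

def fetch_http10(host, paths):
    """HTTP/1.0 style: a fresh TCP connection (and handshake) per document."""
    for path in paths:
        with socket.create_connection((host, 80)) as s:
            s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
            while s.recv(4096):          # server signals the end by closing
                pass

def fetch_http11(host, paths):
    """HTTP/1.1 style: one persistent connection reused for every document."""
    with socket.create_connection((host, 80)) as s:
        for path in paths:
            s.sendall(f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode())
            s.recv(65536)                # a real client would parse Content-Length

fetch_http10("example.com", ["/", "/about", "/faq"])   # three TCP handshakes
fetch_http11("example.com", ["/", "/about", "/faq"])   # one TCP handshake
```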

And if there had been more "professional" standards people involved earlier on, many of these missteps might not have been made. (Of course, we will never know.)

However, the web has succeeded magnificently despite these things. And all credit should go to the people who made it happen. Whether or not they were "amateurs" at the time, they are definitely not amateurs now.

Stephen C
  • There is also the issue that HTML was polluted by the browser wars. – ratchet freak Mar 24 '13 at 03:23
  • This goes part of the way to explaining my own dissatisfaction with the current standards. I can't help but think that this is something we need to revisit with the benefit of experience, hindsight, and current technical capabilities. – greyfade Mar 24 '13 at 04:51
  • @ratchetfreak - That is true. – Stephen C Mar 24 '13 at 07:53
  • @greyfade - Unfortunately, the W3C is severely hampered in that goal by 1) millions of legacy web server installations and billions of legacy web pages, and 2) companies that are more interested in playing the "commercial advantage" card than in fixing stuff. – Stephen C Mar 24 '13 at 07:56
  • @StephenC: Nevertheless, I would strongly support an effort to build new, better standards. – greyfade Mar 24 '13 at 08:25
  • @greyfade That's the thing with the Internet too: with millions of routers running the TCP/IP stack and the OSI model, a better, standardized model will not be adopted. – Random42 Mar 24 '13 at 09:03
  • @greyfade - You think they haven't tried? For example, http://www.w3.org/Protocols/HTTP-NG/ – Stephen C Mar 24 '13 at 14:13
  • Sorry, but what was URL trying to reverse-engineer? – Sled Mar 25 '13 at 15:43
  • @ArtB - I can't understand what you are asking. – Stephen C Mar 26 '13 at 00:21
  • `The URL spec was actually an attempt to reverse engineer a specification for a something that was essentially ad hoc and inconsistent.` Was the URL spec trying to reverse engineer itself, or some other standard that isn't mentioned? – Sled Mar 26 '13 at 03:17
  • @ArtB - it was trying to reverse engineer a common syntax from what was used before the URL spec existed. For example: http://www.w3.org/History/19921103-hypertext/hypertext/WWW/Addressing/Addressing.html. This is the problem. They implemented stuff, shipped software, etc ... then thought ... "Umm perhaps we need a specification so that *other people* can implement it too." – Stephen C Mar 26 '13 at 04:52
  • Some corrections: HTML *did* separate content from presentation from the beginning. There just wasn't a standard for specifying the presentation, so the browsers used a hardcoded default style sheet. It was known from the beginning that a style sheet language would be useful, and DSSSL (basically CSS for SGML) was considered, but in the end CSS was developed since DSSSL was more geared towards print. – JacquesB Mar 28 '15 at 10:06
  • The URL spec was an attempt to *unify* (not "reverse engineer") addressing across disparate internet systems. According to TBL's book, the IETF standards people were very skeptical and negative towards the whole idea of URLs, because they didn't really get what TBL was trying to do with the web. – JacquesB Mar 28 '15 at 10:18
  • @JacquesB - Unifying syntaxes and reverse engineering a specification are saying the same thing ... in my opinion. – Stephen C Apr 10 '15 at 10:40
  • @Stephen C: Reverse engineering means to investigate and describe (and often replicate in an independent implementation) the behavior of an existing running system. See: http://en.wikipedia.org/wiki/Reverse_engineering – JacquesB Apr 10 '15 at 13:33
  • I prefer this definition: "Reverse engineer: to study the parts of (something) to see how it was made and how it works so that you can make something that is like it". And note that I am talking about reverse engineering a common **specification** from a bunch of syntaxes. – Stephen C Apr 15 '15 at 10:44
  • The original vision for the web was very close to the role PDF has today: An essentially static presentation of data. It just went another way when it was the hammer available for client-server applications. – Thorbjørn Ravn Andersen Feb 18 '18 at 18:50
63

He actually elaborates on that very topic on the second page of the interview. It's not the technical shortcomings of the protocol he's lamenting, it's the vision of web browser designers. As he put it:

You want it to be a mini-operating system, and the people who did the browser mistook it as an application.

He gives some specific examples, like the Wikipedia page on a programming language being unable to execute any example programs in that language, and the lack of WYSIWYG editing, even though it was available in desktop applications long before the web existed. 23 years later, we're only just starting to work around the limitations imposed by the original web browser design decisions.

gnat
Karl Bielefeldt
  • So he wanted the browser to be a mini operating system, in that it would be more interactive than early HTML (it's getting better now, I agree)? – kalaracey Mar 25 '13 at 00:04
  • What does WYSIWYG have to do with the Web? That is purely a browser feature. Now, the lack of _proper_ editing, that is a true Web failure. `POST` is utterly inadequate for that purpose. – MSalters Mar 25 '13 at 11:45
  • "What does WYSIWYG have to do with the Web?" That is the point: the vision of the web is very limited. Static text files being passed around. No interaction. No logic. No code. That is a very limited vision compared to what computers can do and what Kay had already seen being done years previously. And because the web is so static it needs constant revision. In Kay's vision the browser itself would come with the webpage it is displaying. – Cormac Mulhall Nov 21 '13 at 18:03
  • In an ideal world that would work, and frameworks like Java applets and Flash attempted to make it a reality. When you take into consideration the security aspects, cross-system compatibility, ability to scale, and the work it takes to maintain state between requests, it's no wonder it has taken so long to advance. Some very smart/talented people have spent years working out the fundamental flaws/weaknesses of a naive specification. – Evan Plaice Feb 18 '14 at 21:29
  • Hooking http://stackoverflow.com/q/6817093/632951 – Pacerier Apr 11 '17 at 08:49
  • @EvanPlaice, Wasm and Extensions have taken so long to advance not because of security issues, but **because** Tim Berners-Lee's web isn't built for them. Alan Kay's web, on the other hand, is specifically built for stuff like Wasm and Extensions and.. – Pacerier Apr 11 '17 at 08:50
  • ..if Tim's web had not [killed Alan's web](http://softwareengineering.stackexchange.com/questions/191738/why-did-alan-kay-say-the-internet-was-so-well-done-but-the-web-was-by-amateur#comment742464_191899) by raw strength in marketing (as opposed to quality), the world would have had Wasm and Extensions progress eons faster than their [present progress](http://webassembly.org/roadmap/#past-milestones). – Pacerier Apr 11 '17 at 08:51
  • @MSalters, EvanPlaice. Since I had written so many comments on this page, I'd thought why not write them as an answer instead? Here you go: http://softwareengineering.stackexchange.com/a/347006/24257 – Pacerier Apr 12 '17 at 01:48
  • @EvanPlaice and now we have CPU bugs which make it literally impossible. – user253751 Apr 05 '22 at 17:18
31

It seems to be due to a fundamental disagreement between Alan Kay and the people (primarily Tim Berners-Lee) who designed the web about how such a system should work.

The ideal browser, according to Kay, should really be a mini operating system with only one task: to safely execute code downloaded from the internet. In Kay's design, the web does not consist of pages, but of black-box "objects" which can contain any kind of code (as long as it is safe). This is why he says a browser shouldn't have features. A browser wouldn't need, say, an HTML parser or a rendering engine, since all this should be implemented by the objects. This is also the reason he doesn't seem to like standards. If content is not rendered by the browser but by the object itself, there is no need for a standard.

Obviously this would be immensely more powerful than the web today where pages are constrained by the bugs and limitations of the current browsers and web standards.

The philosophy of Tim Berners-Lee, the inventor of the web, is almost the exact opposite. The document "The Principle of Least Power" outlines the design principles underlying HTTP, HTML, URLs etc. He points out the benefit of limitations. For example, a well-specified declarative language like HTML is easier to analyze, which makes search engines like Google possible. Indexing is not really possible in Kay's web of Turing-complete black-box objects, so the lack of constraints on the objects actually makes them much less useful. How valuable are powerful objects if you can't find them? And without a standard notion of links and URLs, Google's PageRank algorithm couldn't work. Neither would bookmarks, for that matter. And of course the black-box web would also be totally inaccessible to disabled users.
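To illustrate that point, here is a small Python sketch (mine, not part of the original answer): a crawler can pull links out of declarative HTML with a few lines of parsing and without executing any page code, which is exactly the structure an opaque executable object would not expose. The sample page is made up.

```python
# Why declarative markup is indexable: link extraction needs only a parser,
# not execution of the page.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":                       # the <a> tag is what makes crawling possible
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<p>See <a href="/spec">the spec</a> and <a href="http://example.com/">this site</a>.</p>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)   # ['/spec', 'http://example.com/']
```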

Another issue is content production. Now we have various tools, but even from the beginning any amateur could learn to author an HTML page in Notepad. This is what kickstarted the web and made it spread like wildfire. Imagine if the only way to make a web page had required you to start programming your own rendering engine: the barrier to entry would have been immense.

Java applets and Silverlight resemble Kay's vision to some extent. Both systems are much more flexible and powerful than the web (since you could implement a browser in them), but suffer from the problems outlined above. And both technologies are basically dead in the water.

Tim Berners-Lee was a computer scientist who had experience with networks and information systems before inventing the web. It seems that Kay does not understand the ideas behind the web, and therefore he believes the designers are amateurs without knowledge of computing history. But Tim Berners-Lee certainly wasn't an amateur.

JacquesB
  • +1. A lot of what Alan Kay says makes him appear to be the sort of person who would not get the old joke about the difference between theory and practice. He's developed a lot of great theories over the years which have failed horribly in practice, and have been thoroughly out-competed in the "marketplace of ideas" by less theoretically pretty systems that actually work well, and Kay's never seemed to truly understand that. – Mason Wheeler Mar 27 '15 at 15:08
  • "Well specified declarative language like HTML". That's rich. – Andy Mar 28 '15 at 00:31
  • @Andy: Do you disagree? HTML is specified well enough to allow web crawlers to follow links and hence search engines to work. Try doing the same with a web built from Java applets. – JacquesB Mar 28 '15 at 07:54
  • For its designed purpose, hypertext, HTML is fine. But as an application platform it fails miserably. The only advantages were no deployment and platform agnosticism. Searching is not the only thing people do on a computer. Financial planning, games, social interactions, etc. Who cares if I can't search my blackjack game? Given a choice between a web app and a mobile app, people overwhelmingly choose the native app. There's a reason for that. – Andy Mar 28 '15 at 15:43
  • No doubt native applications are more powerful, but that is not really the question. According to Kay the web should *only* be native apps, no HTML at all. Such a web would never have taken off. – JacquesB Mar 29 '15 at 09:07
  • +1 Also, black-box "objects" would take longer to transfer over the wire, no matter what protocol. I'll take 20 kB of JavaScript over an 8 MB JAR any day. Because browsers are also execution environments, they can optimize how they parse HTML and interpret JavaScript simply by limiting what can run inside the browser. – aindurti Oct 21 '15 at 22:38
  • @JacquesB, **No, you're completely missing the point.** Alan's point is that the *web* should be built such that Tim's "*Principle of Least Power*" idea could be just another idea built atop the *web* instead of being hard-baked in as the web itself. Markup code, search engines, PageRank: you could have all that too with Alan's implementation of the web, **and you could have more**. Alan was right; Tim was horribly wrong and short-sighted, resulting in **development of the web being so slow** due to having to work with existing crap and limitations. ... – Pacerier Apr 01 '17 at 04:02
  • ... Google [and others] are trying to right this wrong now with Chrome extensions, Chrome apps, Chrome OS-is-the-browser-is-the-OS, PPAPI, Wasm, HTTPS-ONLY-no-http, binary HTTP, WebSockets, and if Tim hadn't stunted the *web* with his short-sighted "*Principle of Least Power*" viral [read: virus] idea **we would have all that yesterday**, not tomorrow. Not only HTML/search engines/PageRank; we would have had a full-fledged IDE and a VM within a "browser tab" yesterday, no big deal at all. – Pacerier Apr 01 '17 at 04:03
  • @Pacerier: We already had the ability to do all that with Java applets decades ago, and with ActiveX, Silverlight and so on. The web is not stunted at all, since it doesn't limit you to HTML; it can support any media format, including code like Java applets. It is just not used very widely, for the reasons I state in the answer. – JacquesB Apr 01 '17 at 06:40
  • @JacquesB, Nope, you have completely misunderstood why Java, Silverlight, and ActiveX didn't catch on. Your answer completely missed the mark because grepping "security" and "security model" returned exactly zero results. ActiveX and Java are superseded by Wasm and Extensions for **one reason and one reason alone: security**. In fact, for your answer to qualify for sympathy marks, you'll have to talk about performance [load time, latency, etc.], because that's the second reason why Java and ActiveX were superseded, right after security. – Pacerier Apr 11 '17 at 07:46
  • Note well that the web would be slooow if every page were rewritten using Java objects, but it would be blazingly fast if every webpage were rewritten using Wasm. – Pacerier Apr 11 '17 at 07:50
  • @Pacerier: ActiveX did not run in a sandbox, so it was completely unsafe. But Java applets and Silverlight run (or ran) in a sandbox the same way JavaScript and WASM run in a sandbox, so there is no fundamental difference in the security model. – JacquesB Apr 11 '17 at 12:03
  • @JacquesB, No fundamental difference in the security model? Quite frankly, you do not know what you are talking about at all. **Java was removed from Chrome because it is insecure**: check Chrome's bug lists. Or if you can't be bothered digging through the bug lists, a quick Google will find many articles talking about it: http://archive.is/8TNkG#selection-3185.0-3221.101. Extensions and WASM, on the other hand, were created **from the ground up** with cross-origin security in mind. – Pacerier Apr 12 '17 at 02:02
  • @MasonWheeler You are aware that Alan Kay was one of the engineers of the Internet (not the web), helped invent modern UIs at Xerox PARC, created the first true object-oriented language, Smalltalk, and has had more influence on computing than almost anyone? I guess not, as "he's developed a lot of great theories over the years" doesn't do that justice at all. – Rob G Sep 25 '17 at 09:44
  • @RobG The first true object-oriented language was Simula. Alan Kay may have invented the *term* to refer to the mess he was building in Smalltalk, but there are good reasons why the principles he championed have failed in the marketplace of ideas every single time they're introduced, whereas true OO, Simula-style, has gone on to take over the world. – Mason Wheeler Sep 25 '17 at 10:00
21

I read this as Kay being unfamiliar enough with the lower-level protocols to assume they're significantly cleaner than the higher-level web. The “designed by professionals” era he's talking about still had major problems with security (spoofing is still too easy), reliability and performance, which is why there's still new work being done tuning everything for high-speed or high-packet-loss links. Go back just a little further and hostnames were resolved by searching a text file that people had to distribute!
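For a sense of what that looked like, here is a rough Python sketch (with invented entries, purely illustrative) of the kind of lookup that HOSTS.TXT-style resolution amounted to: a linear scan of a shared text file, with no authority, no caching, and no way to scale.

```python
# Name resolution before DNS, roughly: scan a distributed text file line by
# line. The entries below are invented for illustration, not historical data.
def lookup(name, hosts_text):
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()     # drop comments and blank lines
        if not line:
            continue
        address, *names = line.split()
        if name.lower() in (n.lower() for n in names):
            return address
    return None                                   # no wildcard, no fallback, no cache

HOSTS = """\
# address      official name   aliases
10.0.0.5       ALPHA           alpha
10.0.0.9       BETA            beta docs
"""
print(lookup("docs", HOSTS))   # 10.0.0.9
```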

Both are complex, heterogeneous systems with significant backwards-compatibility challenges any time you want to fix a wart. It's easy to spot problems, hard to fix them, and as the array of failed competitors to either shows, it's surprisingly hard to design something equivalent without going through the same learning curve.

As a biologist might tell an intelligent design proponent, if you look at either one and see genius design you're not looking closely enough.

Chris Adams
  • The hosts file is *still* in use on pretty much every platform. It's handy for blacklisting a malicious site. – Rob K Mar 27 '15 at 20:51
  • @RobK Definitely – that history runs deep even if we don't use things like https://tools.ietf.org/html/rfc953 to update it. These days, however, I wonder if the most common usage is malware. – Chris Adams Mar 30 '15 at 18:05
  • Alan Kay is credited as being one of the developers behind Ethernet. How low-level do you need? His criticisms are aimed at the browsers, not the network technologies. Basically he says browsers should have been implemented as virtual OSs, running apps, not sites. – vikingosegundo Oct 16 '21 at 02:13
  • I can't read his mind but I don't think you'd find many people involved in large-scale Internet _operations_ who'd say it's “error-free” (BGP spoofing alone would be worth a discussion) even after decades of development. Ethernet is a LAN protocol & many of the networked computing ideas he's credited with work best in local environments rather than over the internet. Note that I'm not saying the web was perfect — only that the Internet also was not. Both are complex systems built by humans & have needed plenty of course correction over the decades, exactly as you'd expect. – Chris Adams Oct 25 '21 at 14:09
  • You are mixing up the terms web and networking widely, something you do in your answer as well. – vikingosegundo Oct 30 '21 at 22:00
  • @vikingosegundo It seems odd that you would make such a claim without an example — it's not much of a contribution to the discussion otherwise. – Chris Adams Nov 03 '21 at 23:55
  • I just stopped by to tell you that you are accusing a co-inventor of the most important protocol of not understanding rather basic stuff, without ANY evidence, just supposition. – vikingosegundo Nov 04 '21 at 02:36
  • It was a pithy quote, not an academic treatise, and as such overstates both sides. I suspect that if he had been working on the latter he’d have acknowledged the weaknesses which are still being addressed decades later, that the challenges are different at different layers of the stack, and that the web might have made a few good calls to achieve global dominance over better funded academic and commercial rivals. – Chris Adams Nov 05 '21 at 11:47
11

Ahh yes, I've asked Alan this question a number of times, for example when he was in Potsdam and on the fonc mailing list. Here is a more recent quote from the list which to me summed it up quite well:

After literally decades of trying to add more and more features and not yet matching up to the software that ran on the machines the original browser was done on, they are slowly coming around to the idea that they should be safely executing programs written by others. It has only been in the last few years -- with Native Client in Chrome -- that really fast programs can be safely downloaded as executables without having to have permission of a SysAdmin.

My understanding of his various answers is that he thinks web browsers should not display (HTML) documents, possibly enriched, but simply run programs. I personally think he is wrong in this, though I can see where he is coming from. We already had this sort of thing with ActiveX, Java applets, Flash and now "rich" JavaScript apps, and the experience generally wasn't good; my personal opinion is that even now most JavaScript-heavy sites are a step back from good HTML sites, not a step forward.

Theoretically, of course, it all makes sense: trying to add interactivity piecemeal to what is basically a document description language is backwards and akin to adding more and more epicycles to the Ptolemaic system, whereas the "right" answer is figuring out that (rich) text is a special case of a program and therefore we should just send programs.

However, given the practical success of the WWW, I think it's wise to modify our theories rather than slam the WWW for having the gall not to conform to our theories.

mpw
  • I am coming around to this belief too; see my comment on the original question. Native, safe code execution in the browser (as an "operating system"), rather than as a more dynamic version of (perhaps, certainly arguably) fundamentally static documents, is I think what he is getting at. – kalaracey Mar 25 '13 at 21:35
  • Yes, but we already have an operating system, and we can already download programs from the web to run on our operating system, so if we wanted that functionality, we already have it! So the browser, IMHO, is fulfilling a different need for users; the drive to the web as an app-delivery platform seems to be driven more from the supplier side (cool shiny tech + easier deployment). – mpw Mar 26 '13 at 09:22
  • "Yes, but we already have an operating system, and we can already download programs from the web to run on our operating system..." But *trust* is the issue. You would not download the same number of native applications to your machine in one day as the number of websites you visit, simply because you only download applications you trust (the producer of the app) / verify (MD5 / SHA); you don't blindly download tens (hundreds) of them from people you do not know. OTOH, with the browser as an OS, you get the best of both worlds! – kalaracey Mar 26 '13 at 22:34
  • @mpw no, the browser isn't fulfilling that. Browser "apps" are horrible because they try to abuse the browser into being something it's not. It offers the most basic of controls, and JavaScript is used to try to make anything remotely close to the rich control set of desktops. What is pushing Kay's vision forward are the app stores from Microsoft, Apple and Google. I suspect normal users will use browsers less as apps continue their rise. The web will still be there but it will be used behind the scenes by apps. – Andy Mar 28 '15 at 00:37
  • @mpw, We should have that, but we *don't* have it now. **What's the URI for running Eclipse in my browser now?** There is none. This is the problem. This is the difference between Alan's vision and Tim's short-sighted viral idea. With Tim's lame idea, you have to download Eclipse using a URI from your browser to your OS and then run it manually outside the browser. With Alan's idea, you simply download-cache-run Eclipse using a URI. Tim didn't invent the *web*, he killed *it* with his lame, shabby "counterfeit" product. ... – Pacerier Apr 01 '17 at 04:18
  • ... Google and others are trying to fix that now. See my other comment at the top: http://softwareengineering.stackexchange.com/questions/191738/why-did-alan-kay-say-the-internet-was-so-well-done-but-the-web-was-by-amateur/191739#comment742461_277555 – Pacerier Apr 01 '17 at 04:18
  • fonc mailing list link dead now – wlnirvana Feb 22 '21 at 06:22
4

You cannot really say that the Internet or the Web was invented by amateurs or professionals, because those fields were absolutely new; everyone was an amateur in Internet protocols before they were invented, so from one point of view the inventors of the Internet were amateurs too.

If we were to be really judgmental, the Internet was not so great after all: IPv6 is needed. And it is not only about the address space; IPv6 has a new header with fewer and different fields.

Another big difference between the Internet and the Web is how they are perceived by the programmer; a programmer rarely interacts with the Internet directly. From his point of view, in IP you have addresses, in TCP you additionally have a port, and you are assured that the packets are delivered. That's about it... With the Web, on the other hand, the programmer has a much more intense interaction: HTTP methods, headers, HTML, URLs etc. It is normal to see the limits of something with many more possibilities than of something with almost no possibilities at all. With this I don't want to say that the Internet is simple: underneath it is kind of complex, but that complexity is handled by network and telecommunications engineers and is about configuring something within a limited set of possibilities, while on the web you basically have unlimited possibilities but face the task of building complex applications relying only on packet sending.
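A small Python sketch of that contrast (my illustration, with a placeholder host): from the programmer's side the whole Internet API is an address, a port and a byte stream, while everything listed above as "the web" (methods, headers, URLs, markup) lives inside the payload.

```python
# The programmer's view of the Internet versus the web, as described above.
# "example.com" is only a placeholder host.
import socket

with socket.create_connection(("example.com", 80)) as s:   # the Internet part: address + port
    s.sendall(b"GET / HTTP/1.1\r\n"                         # everything from here on is "the web":
              b"Host: example.com\r\n"                      # methods, headers, URLs...
              b"Connection: close\r\n"
              b"\r\n")
    reply = b"".join(iter(lambda: s.recv(4096), b""))       # TCP just hands back bytes

print(reply.split(b"\r\n", 1)[0])   # e.g. b'HTTP/1.1 200 OK'
```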

Regarding the greatness of these two technologies, the Internet is so appreciated because it is a very scalable technology and the idea of layering was a very good one: at the lower levels you can use any technology you want (WLAN, Ethernet, Token Ring etc.), with IP as a standard intermediate protocol upon which TCP and UDP are placed, and above which you can basically add whatever application protocol you want.

The greatness of the Web is strictly related to the greatness of the Internet, because the Web strongly relies on the Internet, having the TCP/IP stack underneath. But I would say the Internet is dependent on the Web too; the Internet existed 20 years before the Web and was relatively obscure, but 20 years after the Web it is ubiquitous, and all of this is thanks to the Web.

Random42
  • This is not quite true. Vinton Cerf studied data packet networking at graduate school and Bob Kahn worked for ARPA's information processing technologies office, so they both _were_ professionals when they developed TCP/IP. Berners-Lee, on the other hand, was in particle physics. –  Mar 24 '13 at 10:42
  • @GrahamLee Berners-Lee was not in physics; according to Wikipedia, in 1980 at CERN he "proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers", and from 1981 to 1984 he worked on "a real-time remote procedure call which gave him experience in computer networking." So by 1989-1990 he was not an amateur... both quotes have references: http://en.wikipedia.org/wiki/Tim_Berners-Lee – Random42 Mar 24 '13 at 12:31
  • Then the answer has more problems: everyone covered by "all people were amateur" turns out to be unamateur :-( –  Mar 24 '13 at 15:13
  • @GrahamLee If we want to be absolutist: I tend to believe that von Neumann really wasn't a professional in the field of computer architecture when he wrote this - http://en.wikipedia.org/wiki/First_Draft_of_a_Report_on_the_EDVAC - basically it wasn't even finished, and it represents the blueprint for most of the computer architecture used today. At that time von Neumann was busy with the Manhattan Project, and before that there was no such thing as computer architecture (or we could go back to Babbage and say the same thing). – Random42 Mar 24 '13 at 17:03
  • No, he wasn't, he was a mathematician. Though people have been looking for ways out of the constraints of von Neumann (or more properly, Turing) machines for decades: https://www.cs.ucf.edu/~dcm/Teaching/COT4810-Fall%202012/Literature/Backus.pdf –  Mar 24 '13 at 20:00
  • @GrahamLee Yes, and a mathematician isn't a computer architecture professional. Anyway, I hope you have understood my point... – Random42 Mar 25 '13 at 18:23
4

I think he was pointing to something less obscure: TBL knew nothing about the hypertext work that had gone on since the 60s, so that work didn't inform the design of the web. He often talks of computing as a pop culture, where practitioners don't know their history and continually "reinvent the flat tire".

4

The Internet has worked remarkably well as a prototype of the packet-switching concept discovered by Baran, Pouzin and contemporaries. Contrary to popular opinion, this does not mean that IPv4 as handed down is the perfect protocol architecture, or that IPv6 is the way to go. John Day, who was deeply involved in the development of ARPANET and IP, explains this in his 2008 book Patterns in Network Architecture.

As for the Web, in the words of Richard Gabriel, "Worse is Better". Tim Berners-Lee's account, Weaving the Web, is decent. How the Web Was Born by Gillies & Cailliau is denser and less readable, but it has lots of detail and some fascinating links with other events in personal computing at the time. I don't think Kay gives it enough credit.

vdm
1

I dunno, some parts of the non-web internet have some horrible warts. Email predates the web and is part of the internet; its standard is very open, and it requires a lot of hacks on top to tackle (but not solve) the spam problem.

Amandasaurus
  • I think by "the internet" he meant TCP/IP, and by "the web", HTTP/HTML/JavaScript, rather than email. He goes on to talk about the browser. – kalaracey Mar 25 '13 at 21:29
  • E-mail relates to the internet exactly the way the web does, so calling the web something separate yet including e-mail as "part of the internet", as you so clearly state it, is simply inaccurate. Furthermore, Kay said that we take the net for granted, just as we do the Pacific Ocean. The fact that you start talking about e-mail in your response pretty much proves the point. :-) – Pelle Feb 03 '17 at 00:10
0

"Amateur" does not refer to the lack of programming skills, but the lack of imagination.

The underlying problem with Tim Berners-Lee's web is that it was never built for developers. (This is in stark contrast to Alan Kay's web.)

Tim's web was built for non-coders who would publish on the web directly by dabbling with files containing their journals/articles interspersed with HT-markup-language: It's like 1980s WordPerfect and MS-Word, except they would use "<b></b>" instead of clicking on the B icon, and would save it as an open ".htm" format instead of a proprietary ".doc" format. The invention here is the "<a>" tag, which allows these static journals/articles to be globally interlinked.

And that's it, that's the entire web vision by Tim: his web is a mere global highway of interlinked static articles. Maybe, if you had the money, you could buy an editor like Dreamweaver, Nexus, Publisher, Citydesk(?), etc., which would help you generate all those "<b></b>" tags by clicking on the B icon.

..And we see how his vision didn't work as intended. Indeed, there were mighty red flags right from the start that the world wanted way more than what Tim's vision offered:

  • Red flag 1: The rapid rise of "smart CGI" (PHP).

  • Red flag 2: The rapid rise of "smart HTML" (Javascript).

These days, we have even more red flags like the rise of Chrome-OS-is-the-browser-is-the-OS (exactly what Alan Kay had intended the browser to be btw) and WASM / browser-extensions.


In contrast to Tim's web, Alan Kay's web is a dynamic web built for programmers: a global highway of interlinked dynamic programs. Non-coders who need a "page" would simply publish one by using a program on the web. (And the program itself would obviously be written by programmers, not HTML-dabblers.)

..This is exactly the status quo of Tim's web in the 2000s, but if we had had Alan's web, it would have been done in the 1990s: instead of the world getting "WordPress and Friendster" only in the 2000s, we would have had them right when the web started, in the 1990s.

..Similarly, instead of getting programs like Steam, Visual Studio, Warcraft and VMware on the web in the 2040s, we would have had them right now, in the 2010s. (The multi-decade delay is due to these programs already having been built for the OS-is-not-the-browser, thus reducing the economic incentive for them to be rebuilt on the OS-is-the-browser-is-the-OS.)

So this is what people mean when they say Tim Berners-Lee had killed the True Dynamic Web by pushing his "shabby static web" onto the world. Ever heard of the terms "web 2.0", "web 3.0"? They would have simply been called "The Web" if we had Alan's web instead of Tim's web. But Tim's web needs constant revision because it is so static.

Obviously, all hope is not lost, as the Web can be remodeled however browser vendors define it to be. But the point is that all this "bleeding edge" stuff they are "inventing" on the web now is stuff that was already invented a long time ago. We could already have it all today, not tomorrow.

Pacerier