172

You often see JavaScript being transported over the web with content that doesn't need to be there: comments (particularly those containing licenses), indentation characters ('\t', '\n'), and so on. Given enough time, this could end up wasting terabytes of data worldwide! Would a JavaScript bytecode format cause another, bigger problem, or has nobody thought of this yet?

brandizzi
zombiesauce
  • 121
    You don't have to compile to bytecode to get rid of comments and whitespace; there are plenty of uglification/minification tools. But before WebAssembly there *was* no consistent low-level representation to ship: your users are running different browsers on different operating systems – jonrsharpe Dec 09 '19 at 12:04
  • 36
    @jonrsharpe, that could have been an answer. – Bart van Ingen Schenau Dec 09 '19 at 12:29
  • 6
    Plus, if you compile it to bytecode it would be called [WebAssembly](https://webassembly.org/). – Greg Burghardt Dec 09 '19 at 13:41
  • 11
    So you can view source, if the author intends. Not seen much in practice any more, due to minification etc, but one of the most brilliant features of the web. – davnicwil Dec 09 '19 at 22:32
  • 18
    Source script (yes, even with comments) transported over a compressed channel (like gzip, brotli or deflate) tends to be much smaller than bytecode, or bytecode over a compressed channel (yes, it's counter-intuitive). Basically HTTP negates the benefits of data being binary. Observe the popularity of JSON and XML over binary packet formats like ASN.1, protobuf and bencoding – slebetman Dec 10 '19 at 08:17
  • 2
    @SvetlinZarev Anecdotally you can look at Java vs javascript - loading binary files was so much slower that it all but killed Java. But I guess you can point at Flash for a counter argument. Practically I went through this twice benchmarking implementations trying to improve my web app (this was in the mid 2000s so think 600MHz Pentiums and 1Mbps internet) and found enabling gzip with JSON and XML to be better than going binary. YMMV but this was my experience – slebetman Dec 10 '19 at 08:55
  • 2
    @slebetman: That is not the reason Java was considered slow. It's not as if loading Java source code instead of Java byte code was going to be faster. – Christoffer Hammarström Dec 10 '19 at 11:32
  • 11
    All content sent over the network is usually gzipped. That means all repetitive things like whitespace and variable names are reduced in size drastically. – Tomáš Zato Dec 10 '19 at 12:14
  • 6
    On information theoretic grounds we should expect that the compressed state of any two equivalent programs should be roughly the same. But many compression algorithms are optimized for human readable forms. – Eric Lippert Dec 11 '19 at 03:56
  • 2
    I can only assume that you have never debugged JS using the web browser's console? If it was bytecode then tracking down bugs would be a nightmare. – MonkeyZeus Dec 11 '19 at 15:11
  • 3
    @MonkeyZeus, if it was bytecode, tools to map that bytecode to a readable source would have been invented. And, in fact, [they were](https://www.html5rocks.com/en/tutorials/developertools/sourcemaps/). – Arturo Torres Sánchez Dec 11 '19 at 18:28

4 Answers

408

Why is JavaScript not compiled to bytecode before sending over the network?

Background: I was on the ECMAScript technical committee in the late 1990s and one of the implementers of Microsoft's JScript engine.

Let me begin by saying what I always say when faced with a "why not?" question: language designers are not required to give good reasons why they did not spend hundreds of millions of other people's dollars on a feature that someone happens to like. Rather, the person pitching the feature is required to give good reasons why that's the best way to spend that time, effort and money. You've made an argument with no numbers attached to it that bytecode would be a cost savings in terms of bandwidth. I would encourage you to work up some actual numbers, and compare that to the costs of creating yet another language; those costs are significant. Remember in your analysis that "implementation" is one of the smallest costs. Also in your analysis include who saves the money vs who spends the money, and you will find that the people spending the money are not the ones saving it; incentives matter.

That said, this is one of the more reasonable "why not?" questions because it is a feature we considered and rejected for reasons.

We considered such a scheme, both within Microsoft and at the TC level; since JScript was already implemented as compiling to a well-designed, principled bytecode language, it would have been straightforward for us to propose it as a standard and we considered doing so.

We decided not to, for a variety of reasons including:

  • Holy goodness it was hard enough to standardize JavaScript. Everyone and their dog would have an opinion about what the ideal characteristics of a bytecode language were, and it would be multiple years of bikeshedding. No one really wanted to go there.
  • It was an expensive solution with no associated costly problem. There's no reason to suppose that a bytecode language would be more efficient in either size or speed. JavaScript already minifies reasonably well and is highly compressible.
  • It would have created an enormous amount of work for browser providers, who were already vexed by the expense of producing an efficient, compliant JS implementation.
  • Creating a secure JS implementation that resists attacks by bad actors is hard enough; should we double the surface area available to attack? Probably not.
  • Standards are an impediment to innovation. If we discovered that a small change to our bytecode language would make a big difference in some previously-unforeseen or previously-unimportant user scenario, we were free to make that change. If it was a standard, we would not be free to create that user benefit.

But that analysis presupposes that the reason to do the feature at all is performance. Interestingly enough, the customer requests that motivated considering this feature back in the 1990s were not primarily about performance.

Why not? The 1990s was a very different time for JS than today; scripts were mostly tiny. The notion that there would someday be frameworks with hundreds of thousands of lines was not even close to being on our radar. Downloading and parsing JS was a tiny fraction of the time spent downloading and parsing HTML.

Nor was the motivation the extension to other languages, though that was of interest to Microsoft as we had VBScript running in the browser as well, which used a very similar bytecode language. (Being developed by the same team and compiled out of the same sources and all.)

Rather, the primary customer scenario motivating bytecode in the browser was to make the code harder to read, understand, decompile, reverse-engineer and tamper with. That a bytecode language is hardly any additional work to understand for any attacker with reasonable resources was a major point against doing this work; we did not want to create a false sense of security.

Basically there were lots of expenses and precious few benefits, so it did not get done. Something must have changed between 1998 and 2015 that made WebAssembly have a reasonable price-to-benefit; what those factors are, I do not know. You'd have to ask an expert on WebAssembly.

Eric Lippert
  • 58
    WASM's goal is not necessarily to be a target for Javascript compilation, but rather to be a better compiler target than Javascript is (for a variety of languages). – Robert Harvey Dec 09 '19 at 20:32
  • 2
    @RobertHarvey: That's a good point, but regardless of whether it is a JS target or not, all the costs associated with a new language are entailed: design, implementation, testing, security, standardization, extensibility model, and so on. I don't know why anyone felt that those considerable costs were worth what seems to be a modest benefit, but like I said, I am not at all an expert on this new language. – Eric Lippert Dec 09 '19 at 21:00
  • 23
    Happily, this question is answered in the [WASM Goals](https://webassembly.org/docs/high-level-goals/) and [WASM faq](https://webassembly.org/docs/faq/). For context, the industry seems to have collectively given up on Java and Flash, security-wise; WASM can help fill that void. Newer standards like service workers get us on the way to web applications as an alternative to mobile applications, but less so as a means to stuff an AAA game into a web page (one of the specific [WASM Use Cases](https://webassembly.org/docs/use-cases/)). – Brian Dec 09 '19 at 21:13
  • 44
    @Brian: Ironically, about half of all sites using WASM are using it [for malicious purposes](https://www.infoq.com/news/2019/10/WebAssembly-wasm-malicious-usage/). – Robert Harvey Dec 09 '19 at 21:16
  • I expect one of the changes is HTTPS stopping most caching of large JavaScript frameworks (previously, including a large JavaScript framework had little download cost) – Ian Dec 10 '19 at 13:01
  • Someone can correct me, but Google was working on NaCl (secure native x86 in the browser) for making better web apps (think Google Earth, Docs, native-level games; at least one shipped), and then PNaCl (portable assembly in the browser), both with their own custom APIs, not the standard existing browser APIs. Mozilla, either in response or in parallel, made asm.js, which ran in plain JS but which a supporting JS engine could convert to native code. A win because, other than being slow, it was backward compatible. Eventually the two teams came together and made WASM. – gman Dec 10 '19 at 17:05
  • Note that there is a TC39 proposal for a [binary AST format](https://github.com/tc39/proposal-binary-ast). – curiousdannii Dec 11 '19 at 05:56
  • 2
    Please use comments for improving or clarifying the provided answer, **in a civil tone.** Personal attacks and accusations are not cool. Rudeness is not cool. If you feel that the answer or comments are rude or abusive then please flag them for moderator attention, otherwise vote for the answer on its own merit. – maple_shaft Dec 11 '19 at 17:37
  • 1
    @curiousdannii: Thanks for posting that; I have not been keeping up with the proposals for TC39 these last decades. I note that in the proposal they call out many of the ways that their proposal differs from bytecode, but nonetheless has many of the same costs. It is interesting to note that *parse time* (which is another way to say "CPU burden") is the performance metric they're attempting to address. – Eric Lippert Dec 11 '19 at 19:04
  • 2
    and of course, besides minification already being a poor man's bytecoding (for purposes of improved transmission), the browser then caches nearly all JS files anyway, so the compilation would be done once - at least back in the 1990s that would have been the case. – gbjbaanb Dec 12 '19 at 00:32
  • 1
    @gbjbaanb: Indeed. The original Microsoft JScript and VBScript engines were designed so that they could re-use compiled state upon demand (though there were some unusual restrictions on the thread affinity of the engines for scenarios where the state was used on different threads). Of course all that code was replaced decades ago, presumably by something better. :) – Eric Lippert Dec 12 '19 at 01:16
  • 28
    I think the second paragraph (`we are not required ... hundreds of millions of dollars ...`) ruins the experience of this otherwise good answer. It appears to be unnecessarily hostile - especially because such a feature would not cost that much to _implement_. Hundreds of millions of dollars is the equivalent of thousands of myselfs for a whole year - how could you even finance a whole compiler then? – phresnel Dec 12 '19 at 08:16
  • 12
    @phresnel: If your question is "what are the economics of implementing languages with industry-wide implications in the corporate world?" that is a complex question that you should do some research on if it interests you, and then ask a more focused question. Consider for example the total cost to Microsoft of C#: a large team has worked on it for 20 years; include in your estimates the costs of design, implementation, testing, documentation, marketing, user education, and so on, and then compare that to offsetting cost savings or revenues. – Eric Lippert Dec 12 '19 at 14:42
  • 4
    @EricLippert why are you so hostile against OP and @phresnel? No one asked this question directly to you, if you don't want to just don't answer it. With such attitude no wonder ES6 came this late - it could've been billions of dollars to draft it if it were in _late 90s_. – ozanmuyes Dec 26 '19 at 11:22
  • 6
    I am not in any way feeling hostile towards those people, so I am unable to answer your ad hominem personal question about my psychology. Helpfully suggesting more specific avenues for research and ways to clarify a question so as to get a good answer is helpful, not hostile. – Eric Lippert Dec 26 '19 at 11:36
  • 6
    @EricLippert, your thinking behind the default "why not" response is fantastic (I'm actually curious if you have other question types and default responses for them). At the same time, the "we" vs "you" semantics add in/out group dynamics. Those are likely to trigger some folks' midbrain threat (i.e. hostility) awareness. In this stack exchange context, semantics from a perspective like "we're all on the same team and learning together" will likely be better received. Regardless, great way to think about quantifying "why not" questions. – Adam Dec 28 '19 at 18:06
  • 4
    @Adam: That is a reasonable point; thanks for making one of the rare "commentary on tone" that does not itself come across as hostile. :) – Eric Lippert Dec 29 '19 at 01:07
  • 2
    @Adam: I never satisfied your curiosity. Yes, I do see many patterns in vague and unanswerable questions. Some highlights: *"I wrote some code that I don't understand/has a bug/won't compile"* -- that's a story, not a question; what's the question? *"I don't know how to start this assignment"* -- start by writing "hello world", and then solve a series of simpler problems that get you moving towards the real problem. *"which of these two program fragments is faster?"* You wrote the code both ways already; now run it both ways and the faster one will be the faster one. – Eric Lippert Jan 02 '20 at 23:51
  • 1
    @Adam: The latter is a variant on *"what does this code do?"* Again, run it in the debugger and you'll see what it does. *"Will technology X be supported in the future?/What should I study to be relevant in five years?..."* It's hard to make predictions especially about the future. *"My program crashes when I do this."* Then don't do that! – Eric Lippert Jan 02 '20 at 23:54
  • @EricLippert Thanks for the insights. And LOL re "I wrote some code that I don't understand". It would be difficult to restrain a laugh in response to that statement. Hopefully you got few of those! – Adam Jan 07 '20 at 01:45
122

View Source

"View Source" was in the beginning, and still is to some extent, considered to be an important feature of the web. It is how generations of web developers learned web development, and the relevant standards bodies (ECMA TC39, W3C, WHATWG) still take it very seriously.

Minification

ECMAScript files are typically "minified" before being deployed. This includes removing comments and unneeded whitespace and renaming identifiers to be as short as possible, plus some higher-level optimizations such as dead-code elimination.
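For illustration, here is what a minifier might do to a small function. The output shown is hypothetical; real tools such as UglifyJS or Terser differ in details.

/* Copyright (c) Example Corp. Licensed under MIT. */
function computeTotal(price, taxRate) {
    // taxRate is a fraction, e.g. 0.2 for 20%
    var total = price * (1 + taxRate);
    return total;
}

After minification, the license comment, the inline comment, the whitespace, and the local names are all gone (the function's own name is kept, since other code may refer to it):

function computeTotal(n,t){return n*(1+t)}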

Compression

Support for compression has existed in HTTP since HTTP/1.0 (early 1996). ECMAScript is text, and text compresses really well. In fact, ECMAScript is text with lots of redundancy (lots of appearances of ;, {, }, (, ), ,, ., function, var, if, for, and so on), and compression algorithms thrive on redundancy. So, the amount of data that is transferred is much smaller than you make it out to be. As an experiment, try compressing an ECMAScript source file with one of the typical compression algorithms used on the web (e.g. gzip or deflate), and compare that to the size of the compiled bytecode of the same file.
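A minimal sketch of that experiment in Node.js (the file name is just a placeholder):

// compare.js: print a script's raw size next to its gzipped size.
const fs = require('fs');
const zlib = require('zlib');

const source = fs.readFileSync('library.min.js'); // any ECMAScript source file
const gzipped = zlib.gzipSync(source, { level: 9 });

console.log('raw:     ' + source.length + ' bytes');
console.log('gzipped: ' + gzipped.length + ' bytes');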

It turns out that compressed source code is actually pretty small, often comparable to or smaller than a typical bytecode file.

Also, there are specialized compression algorithms for what I will now term "web text".

Zopfli is an improved encoding algorithm for web text, compatible with deflate/zlib. This means it can be decoded by any deflate/zlib-compliant decoder; in other words, it can be uncompressed by every browser without changes. Compressing takes about 80 times longer than with deflate, for a 3%–8% improvement in output size over "naked" deflate. This might not make sense to do on-the-fly for dynamically created content, but pre-compressing something like jQuery might make sense.

Brotli is a new compression algorithm based on LZ77, Huffman, context modeling, and some other tricks, e.g. a pre-defined dictionary of frequent text chunks extracted from a large corpus of web sites, texts, ECMAScript source files, CSS files, etc. It can achieve up to 25% better compression than deflate/zlib. It is designed to be efficiently decoded on low-end portable devices.
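Node.js (version 11 and later) happens to ship both gzip and Brotli codecs, so a rough size comparison is easy to sketch (the file name is again a placeholder):

// brotli-vs-gzip.js: compare gzip and Brotli output sizes.
const fs = require('fs');
const zlib = require('zlib');

const source = fs.readFileSync('library.min.js');
console.log('gzip:   ' + zlib.gzipSync(source, { level: 9 }).length + ' bytes');
console.log('brotli: ' + zlib.brotliCompressSync(source).length + ' bytes');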

Bytecode format

Which brings us to the next problem: there is no standardized bytecode format for ECMAScript. In fact, some implementations may not even use bytecode at all! For example, for the first couple of years, V8 compiled ECMAScript straight to native machine code, with no bytecode step in between. Chakra, SquirrelFish Extreme, and SpiderMonkey all use bytecode, but they use different bytecode. dyn.js, TruffleJS, Nashorn, and Rhino don't use ECMAScript-specific bytecode; they compile to JVML bytecode. Likewise, IronJS compiles to CLI CIL bytecode.
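You can see for yourself how engine-specific this is. Assuming a Node.js build that passes V8 flags through (most builds do), the following prints V8's internal Ignition bytecode, a format that no standard documents and that changes between V8 versions:

// peek.js: a trivial function whose engine-internal bytecode we inspect.
function square(x) {
  return x * x;
}
square(4);

// Run with: node --print-bytecode --print-bytecode-filter=square peek.js
// SpiderMonkey, Chakra, or JavaScriptCore would show something entirely
// different for the same source.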

Now, you might say: why not define a standardized bytecode format for ECMAScript? The problems with this are two-fold:

  1. A bytecode format constrains the design of the execution engine. For example, look at JVMs: JVMs are much more similar to each other than ECMAScript engines. Personally, I believe the "performance race" of the late 2000s / early 2010s would not have been possible without the wide range of experimentation that the lack of a standardized bytecode format afforded.

  2. Not only is it hard to get all ECMAScript engine vendors to agree on a common standardized bytecode format, but consider this: it doesn't make sense to add a bytecode format for only ECMAScript to the browser. If you do a common bytecode format, it would be nice if it supported ActionScript, VBScript, Python, Ruby, Perl, Lua, PHP, etc. as well. But now you have the same problem as in #1, except exponentially increased: not only do all ECMAScript engine vendors need to agree on a common bytecode format, you also have to get the PHP, Perl, Ruby, Python, Lua, etc. communities to agree as well!

Caching

Well-known widely-used libraries are hosted at canonical URIs, where they can be referenced from multiple sites. Therefore, they only need to be downloaded once and can be cached client-side.

CDN

Many libraries use CDNs, so they are actually served from a location close to the user.
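Putting those two points together: when a page references a library by its canonical CDN URL, the browser may already have it cached from a visit to some other site that used the same URL (the URL below is illustrative):

<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>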

Wasm / asm.js

WebAssembly (Wasm) is a compact binary instruction format that is currently being standardized by the W3C and already being shipped in Firefox, Chrome, Safari, and Edge. It is, however, not designed as a bytecode format for ECMAScript; rather, it is designed as low-level portable machine code and a compilation target for languages like C, C++, and Rust.

Before Wasm, there was already asm.js, which had similar goals, but it was designed as a syntactic and semantic subset of ECMAScript, so you could run it unmodified in a non-asm.js-aware engine, and it would work, just much slower.
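For a taste of what that subset looks like, here is a minimal sketch of an asm.js-style module (simplified from the spec's module shape): an asm.js-aware engine can compile it ahead of time, while any other engine just runs it as ordinary ECMAScript.

function AsmModule(stdlib, foreign, heap) {
  "use asm"; // opt-in marker recognized by asm.js-aware engines

  function add(a, b) {
    a = a | 0;          // parameter type annotation: int
    b = b | 0;          // parameter type annotation: int
    return (a + b) | 0; // return type annotation: int
  }

  return { add: add };
}

var add = AsmModule(this, {}, new ArrayBuffer(0x10000)).add;
console.log(add(2, 3)); // prints 5 in any engine, fast path or not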

Jörg W Mittag
  • 9
    [WebAssembly reached recommendation status](https://www.w3.org/blog/news/archives/8123) as of December 5, 2019. – Rob Dec 10 '19 at 13:00
  • 1
    This explains it best with main and important points first – tgkprog Dec 11 '19 at 05:02
  • 1
    @tgkprog Well, the most important point is still "It's a specific feature, and you need to have a good reason to include any feature. There was no reason to standardise JS bytecode, rather than JS language, and many reasons why it would be bad idea." :) Features need to be useful enough to not only return on their investment (which this wouldn't), but also be better than alternative features you could implement with similar effort. As Jörg notes, even today, WebAssembly doesn't care about being a target for Javascript applications - there's little point in doing that. – Luaan Dec 11 '19 at 08:13
  • Seriously? Supported ActionScript? ActionScript *is* JavaScript, with bytecode... Also, I'm not buying either of your arguments: what you are saying is that, if bytecode existed, it would've been bad (but you give no evidence for that claim), and then, because you *believe* it would've been bad, you describe how this would make other things bad (but we cannot actually ever get proof that that would've happened). Your second argument is ridiculous. There are tons of languages, and especially those you listed, which compile to many different bytecode sets. – wvxvw Dec 11 '19 at 10:23
  • Your first bullet is "it's important to be able to read the code" and your second bullet is "the code is already rendered unreadable" – Michael Mrozek Dec 11 '19 at 19:59
  • 1
    @MichaelMrozek: The fact that different portions of the ECMAScript community (standards writers and library writers) have different reasons does not invalidate either reason. – Jörg W Mittag Dec 11 '19 at 21:41
  • @JörgWMittag If the code is already rendered unreadable, the "it's important to be able to read the code" supporters are going to be disappointed regardless, so I'm not sure how that's an argument against compiling Javascript server-side – Michael Mrozek Dec 12 '19 at 16:10
  • 3
    @MichaelMrozek: As long as the people on the standards committee are idealists that believe in "View Source", they have no incentive to standardize a bytecode format. As long as the people in the community can work around this using minification, they have no incentive to replace the people on the standards committee. Again: two different groups of people can have different motivations, this does not invalidate either motivation. – Jörg W Mittag Dec 18 '19 at 09:51
  • In your **Compression** section I think it would be more accurate to say there are duplicates or repetitions rather than redundancies. The symbols you reference are almost always required. – Caltor Dec 26 '19 at 09:23
  • The section on caching is no longer relevant as [browsers are now blocking cross domain caching.](https://www.stefanjudis.com/notes/say-goodbye-to-resource-caching-across-sites-and-domains/) Basically, example.com’s copy of jQuery, and example.net’s (even if both reference the same CDN URL) will have their own copy in the cache. In other words, example.com’s cache is *completely* separate from example.net’s. – Cole Tobin Jan 29 '21 at 21:34
19

JavaScript was invented by Netscape in 1995 and was initially positioned as an easy-to-use embedded scripting language which could integrate with HTML and control more complex components written in Java. See the initial press release for the intended use cases.

It was never intended for large amounts of code (since client-side Java was supposed to be used for complex stuff) so the size overhead of comments and source text was just not a concern at the time.

Audience

As described by Brendan Eich, the initial designer of JavaScript:

We aimed to provide a “glue language” for the Web designers and part time programmers who were building Web content from components such as images, plugins, and Java applets. We saw Java as the “component language” used by higher-priced programmers, where the glue programmers—the Web page designers—would assemble components and automate their interactions using [a scripting language].

And another quote:

The answer was that two languages were required to serve the two mostly-disjoint audiences in the programming ziggurat who most deserved dedicated programming languages: the component authors, who wrote in C++ or (we hoped) Java; and the "scripters", amateur or pro, who would write code directly embedded in HTML.

Back in the '90s there was a sharp distinction between scripting languages and compiled programming languages. The compiled languages (like C++) were for professional developers, while scripting languages were accessible to non-programmers. Note that the press release compares JavaScript to Visual Basic, which was used for office macros and other automation tasks. At the time, web page authors were not considered software developers but more like graphic designers or DTP users.

Today the distinction between scripting languages and compiled languages is a lot more blurred, but at the time, catering to non-developers meant scripting language which meant no separate compilation step.

Ease of development

A textual format is much easier for casual developers, since you don't need a development environment to compile it into bytecode. You just type the text and reload the browser to see it run. At the time JavaScript was introduced, dedicated HTML editors barely existed. People wrote web pages in Notepad. There was no such thing as a build pipeline for web pages.

Embedding in HTML

JavaScript was designed to be embedded directly in HTML, like <input type="button" onclick="alert('hello world')">. These days it is frowned upon to embed JavaScript in HTML, but in those days this was the standard way to hook up event handlers. Given this use case, JavaScript basically had to be text-based.

There were also facilities for generating HTML directly in JavaScript, like:

<script>
  document.write("<input type=\"button\" onclick=\"alert('hello world')\">"
</script>

Again, this basically requires JavaScript to be a textual format to be useful.

Pang
JacquesB
  • 3
    Eh? Who claims Javascript was invented by Microsoft? Why would anyone want you to think that? Microsoft was part of the _standardisation_ process (which produced the first version after IE 2 was released, with support for Javascript), but who cares? It's a bit of a silly thing to put on top of an otherwise good answer. – Luaan Dec 11 '19 at 08:33
  • This is a completely solvable problem. It had been solved many times before JavaScript even existed. Just to give you one example: PostScript. A typical PostScript file will have most of its program embedded as compressed payload (which, at run time, is decompressed and fed into the interpreter). If you wanted to embed JS bytecode in an HTML page you could have done the same thing as so-called "data URLs" do: use some text encoding, like Base64! – wvxvw Dec 11 '19 at 10:16
  • 1
    @Luaan: OK, I removed that sentence. It was in response to another answer which suggested this was some decision by MS. It was not. They had no choice but to copy JavaScript exactly as it was designed by Netscape. – JacquesB Dec 11 '19 at 11:02
  • 1
    @wvxvw: Obviously it would be technically possible to embed base64-encoded bytecode in HTML - but it would be an immensely complicated solution compared to just embedding the source code. Just imagine the hassle of debugging. – JacquesB Dec 11 '19 at 11:03
  • 1
    @JacquesB: As one of the people making those choices I can assure you that we *did* have the choice to make changes to JS, and we occasionally made choices contrary to poor implementation decisions in the Netscape version. Once the standardization process started up, we worked closely with our Netscape counterparts as well as interested parties from other companies to ensure that the language evolved in a consistent and reasonable manner that created value for users without undue burden to implementers. – Eric Lippert Dec 12 '19 at 01:22
  • Just to make sure I'm absolutely clear here: the notion "The NS implementation is the reference implementation" was explicitly NOT a core principle of the technical committee. Similarly, the notion "a conforming implementation must do NEITHER MORE NOR LESS than what the specification requires" was explicitly NOT a core principle. Rather, just the opposite. The committee encouraged implementers to experiment with new features in a way that neither broke compat nor closed off future avenues for improvements, because the committee was explicitly interested in a living, evolving language. – Eric Lippert Dec 12 '19 at 01:31
  • 1
    The result of this was that the implementation teams at Microsoft, Netscape and other companies were constantly making independent changes to the language, showing the designs to users, and then bringing the finished work -- sometimes after it had shipped to customers! -- to the committee for standardization. I could give you many examples; for instance, I was in the room when we designed the `switch` statement and the `===` operator, which were not in the Netscape implementation. Again, just so I'm clear, the notion that we had *no choice but to draft Netscape* could not be more wrong. – Eric Lippert Dec 12 '19 at 01:37
  • 2
    @wvxvw Base64 *increases* the size of the encoded text by as much as one third. So, I'm not sure how that helps when you'd prefer lower sizes of your web pages. – VLAZ Dec 12 '19 at 08:45
  • 1
    @wvxvw As well as base64 being the _opposite_ of compression, it's worth noting that most JS will in fact be compressed, completely transparently, by web servers, whether it's embedded in HTML or served as separate files, using deflate or gzip streaming compression This makes network bandwidth much less of a distinguishing factor between text and binary formats. – IMSoP Dec 12 '19 at 18:24
1

Data use is probably not actually a problem.

To respond to the assumption in the body (since the wonderful response by Eric Lippert seems to have the actual questions quite well covered):

Whether you're talking about data caps or bandwidth, my Google-Fu has been unable to unearth any research that suggests that Javascript is actually "wasting terabytes of data" (whatever that means).

As for the rest of your questions, in many things, it is less useful to ask "what problems will this cause?" than to first ask "what benefits will this create?".

sp88
  • Given the question subject I think it would be safe to assume that “wasting terabytes of data” refers to network usage. You could also factor in storage too I suppose when you consider the millions of devices that host JavaScript. – Caltor Dec 26 '19 at 09:34