52

I've been working on a multi-threaded JavaScript runtime implementation for the past week. I have a proof of concept made in C++ using JavaScriptCore and boost.

The architecture is simple: when the runtime finishes evaluating the main script, it launches and joins a thread pool, which begins picking tasks from a shared priority queue. If two tasks try to access a variable concurrently, it gets marked as atomic and they contend for access.

Working multithreaded node.js runtime

The problem is that when I show this design to a JavaScript programmer I get extremely negative feedback, and I have no idea why. Even in private, they all say that JavaScript is meant to be single threaded, that existing libraries would have to be rewritten, and that gremlins will spawn and eat every living being if I continue working on this.

I originally had a native coroutine implementation (using boost contexts) in place as well, but I had to ditch it (JavaScriptCore is pedantic about the stack), and I didn't want to risk their wrath so I decided against mentioning it.

What do you think? Is JavaScript meant to be single threaded, and should it be left alone? Why is everyone against the idea of a concurrent JavaScript runtime?

Edit: The project is now on GitHub, experiment with it yourself and let me know what you think.

The following is a picture of promises running on all CPU cores in parallel with no contention:

Running promises concurrently.
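
For reference, this is roughly the kind of script I've been using to exercise the scheduler (illustrative only; the exact tests in the repository may differ):

```js
function burn(label) {
  // The heavy work sits in a .then callback; on this runtime those callbacks
  // are the tasks the thread pool picks up, so the four of them can run on
  // separate cores instead of queueing on a single event loop.
  return Promise.resolve(label).then(function (l) {
    let acc = 0;
    for (let i = 0; i < 1e8; i++) acc += i % 7; // pure CPU work, no I/O
    return l + ': ' + acc;
  });
}

Promise.all(['a', 'b', 'c', 'd'].map(burn))
  .then(function (results) { console.log(results); });
```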

voodooattack
  • 7
    This seems like a highly opinionated question. Did you ask the people who apparently didn't like your idea *why* they think it will be troublesome? – 5gon12eder Apr 12 '16 at 04:50
  • 26
    Adding threads to something that isn't meant to be multithreaded is like converting a one lane road into an expressway without providing driver's ed. It'll work pretty well most of the time, until people start crashing randomly. With multithreading, you're either going to have subtle timing bugs that you can't reproduce or erratic behavior most of the time. You have to design with that in mind. You need thread synchronization. Just making variables atomic doesn't eliminate race conditions. – mgw854 Apr 12 '16 at 04:51
  • Of course, it is possible to write lockless multi-threaded code without any bugs, but we are but mere mortals. – mgw854 Apr 12 '16 at 04:52
  • Yes, they just make things up. I would like it for someone to point out why this would be a bad idea from a software engineering point of view, and not say that writing multithreaded code is hard. – voodooattack Apr 12 '16 at 04:52
  • I agree that atomic variables won't solve everything, but working on a solution for the synchronisation problem is my next goal. – voodooattack Apr 12 '16 at 04:58
  • 3
    How do you plan on handling multi-threaded access to shared state? "Marked as atomic and contend for access" does not explain how you think this would really work. I would guess that negative attitudes toward the idea are because people have no idea how you'd actually make this work. Or, if you're putting all the burden on the developer like in Java or C++ to use proper mutexes and such, then people are probably thinking why do they want that complication and programming risk in an environment that is free from it. – jfriend00 Apr 12 '16 at 04:59
  • @jfriend00 I was actually thinking of adding my own extensions to the language. – voodooattack Apr 12 '16 at 05:04
  • Well, making anything that automatically coordinates and protects access to shared data between multiple threads is a really hard or nearly impossible problem. Presumably, it can be done for a single variable, but most state is much more involved than that and I can't even imagine how you could automatically do it. So, without any credibility for how you'd do it safely and usefully, put me in the doubting Thomas category. The onus is on you to prove you could do something useful in that regard. – jfriend00 Apr 12 '16 at 05:08
  • @jfriend00 I'm not here to discuss the implementation details, I'm asking why are people automatically dismissing it as witchcraft on principle. – voodooattack Apr 12 '16 at 05:12
  • 17
    Because automatically coordinating random state between threads is considered a nearly impossible problem so you have no credibility that you can offer anything that does it automatically. And, if you're just going to put the burden back on the developer like Java or C++ do, then most node.js programmers don't want that burden - they like that node.js doesn't have to deal with that for the most part. If you want a more sympathetic ear, you will have to explain/show how and what you would offer in this regard and why it would be good and useful. – jfriend00 Apr 12 '16 at 05:27
  • 1
    Since you're dealing in what is essentially Ecmascript, you should consider concurrent mechanisms that more closely align with the actual language, like Promises in ES6, rather than attempting to invent something exotic. – Robert Harvey Apr 12 '16 at 05:31
  • I have two choices here: either provide synchronization primitives as JS objects, or implement a scoped lock statement: `lock(x) { /* do something with x */ }` – voodooattack Apr 12 '16 at 05:32
  • 1
    Or you could go with the third choice, which is to use Promises. – Robert Harvey Apr 12 '16 at 05:32
  • @RobertHarvey That was my idea. I plan to override `.then` and its kin to work truly asynchronously. The engine currently supports ES6 out of the box. – voodooattack Apr 12 '16 at 05:34
  • @RobertHarvey For low-level ES5 stuff though, I'm not so sure. – voodooattack Apr 12 '16 at 05:35
  • Typescript compiles to ES5, and I'm pretty sure it supports promises. See what it compiles a promise to; I suspect it just uses ordinary callbacks. – Robert Harvey Apr 12 '16 at 05:37
  • @RobertHarvey A library like Bluebird uses nextTick or setImmediate (I can't remember which) to schedule promises. With my runtime they would be run in parallel. So, two promises could complete at the same time. – voodooattack Apr 12 '16 at 05:44
  • 3
    Please continue your work. I consider languages without multi-threading as toy languages. I think most JavaScript developers work with the browser, which has a single-thread model. – Chloe Apr 12 '16 at 06:28
  • 2
    @Chloe Thank you, and I will. I really love JS and I'd love to see it complete. The complete lack of concurrency (or the fakeness of it) really put me off using it. I picked C++ again after almost 2 years just to make this. – voodooattack Apr 12 '16 at 06:32
  • 2
    Be careful. Write a non-trivial application (like a trivial Express based server with a database back-end) and benchmark it continuously. Because the way you've described your implementation can potentially make your interpreter much slower than a single-threaded interpreter. Contention and thread-switching kills speed faster than using multiple CPUs gain speed. That's what the other programming languages are starting to learn and why most web frameworks in other languages are event-oriented and single threaded – slebetman Apr 12 '16 at 10:04
  • 1
    related on SO: [Does the EcmaScript specification place any constraints on the process model used to implement the runtime?](http://stackoverflow.com/q/29798949/1048572) and [Why couldn't popular JavaScript runtimes handle synchronous-looking asynchronous script?](http://stackoverflow.com/q/25446353/1048572) – Bergi Apr 12 '16 at 13:50
  • 1
    @slebetman, "Contention and thread-switching kills speed faster than using multiple CPUs gain speed" is not exactly a universally-accepted statement. For lock-based contention that tends to be true, but that's not the only model -- see transactional memory, f'rinstance. Granted, doing STM well is best when you have a language with amenable primitives -- i.e. immutable data structures with fast shared-state copying on update -- and that's a big project in and of itself, but there are places where it *is* used to good effect. – Charles Duffy Apr 12 '16 at 20:35
  • By the way, you are aware that there are nodeJS thread implementations already? (check github). You may not need/want to reinvent the wheel that's already out there. – phyrfox Apr 12 '16 at 22:14
  • @phyrfox Yes, and they all fall within the event loop paradigm, it's the single choking point of JavaScript as we know it. I'm trying to eliminate that choking point by introducing true concurrency via a cooperative thread pool scheduler. There is nothing in the ECMA standard that speaks against it. – voodooattack Apr 12 '16 at 23:22
  • @CharlesDuffy: This specific implementation is lock based. Which is why I warned him – slebetman Apr 13 '16 at 03:22
  • 2
    @voodooattack: Not true. All the thread libraries are true threads that executes outside the loop. You only ever need to interact with the loop to get the results back. That's not a choking point. A choking point would be if multiple threads lock on the same variable because you'd really have code that can run NOW that cannot run because it's locked. Code that doesn't want to run NOW (waiting for results) not being executed is not a choking point. – slebetman Apr 13 '16 at 03:26

8 Answers

71

1) Multithreading is extremely hard, and unfortunately the way you've presented this idea so far implies you're severely underestimating how hard it is.

At the moment, it sounds like you're simply "adding threads" to the language and worrying about how to make it correct and performant later. In particular:

if two tasks try to access a variable concurrently it gets marked atomic and they contend for access.
...
I agree that atomic variables won't solve everything, but working on a solution for the synchronisation problem is my next goal.

Adding threads to Javascript without a "solution for the synchronisation problem" would be like adding integers to Javascript without a "solution for the addition problem". It's so fundamental to the nature of the problem that there's basically no point even discussing whether multithreading is worth adding without a specific solution in mind, no matter how badly we might want it.

Plus, making all variables atomic is the sort of thing that's likely to make a multithreaded program perform worse than its single-threaded counterpart, which makes it even more important to actually test performance on more realistic programs and see if you're gaining anything or not.
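
To make that concrete, here's a hypothetical sketch (not code from your runtime) of the kind of race that per-access atomicity cannot prevent: even if every individual read and write is atomic, the read-modify-write sequence as a whole is not, so two threads can interleave and lose updates.

```js
// Hypothetical: assume the two .then callbacks below end up on separate
// threads, and assume every single read and write of `counter` is made
// atomic, as described in the question.
let counter = 0;

function incrementManyTimes() {
  for (let i = 0; i < 100000; i++) {
    // "read counter, add 1, write counter" is three steps. Two threads can
    // both read 41 and both write back 42, silently losing an increment,
    // even though no individual read or write was ever torn.
    counter = counter + 1;
  }
}

Promise.all([
  Promise.resolve().then(incrementManyTimes),
  Promise.resolve().then(incrementManyTimes),
]).then(function () {
  // Expected 200000; with lost updates it can be anything up to that.
  console.log(counter);
});
```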

It's also not clear to me whether you're trying to keep threads hidden from the node.js programmer or if you plan on exposing them at some point, effectively making a new dialect of Javascript for multithreaded programming. Both options are potentially interesting, but it sounds like you haven't even decided which one you're aiming for yet.

So at the moment, you're asking programmers to consider switching from a single-threaded environment to a brand new multithreaded environment that has no solution for the synchronisation problem, no evidence it improves real-world performance, and seemingly no plan for resolving those issues.

That's probably why people aren't taking you seriously.

2) The simplicity and robustness of the single event loop is a huge advantage.

Javascript programmers know that the Javascript language is "safe" from race conditions and other extremely insidious bugs that plague all genuinely multithreaded programming. The fact that they need strong arguments to convince them to give up that safety does not make them closed-minded; it makes them responsible.

Unless you can somehow retain that safety, anyone who might want to switch to a multithreaded node.js would probably be better off switching to a language like Go that's designed from the ground up for multithreaded applications.

3) Javascript already supports "background threads" (WebWorkers) and asynchronous programming without directly exposing thread management to the programmer.

Those features already solve a lot of the common use cases that affect Javascript programmers in the real world, without giving up the safety of the single event loop.
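
For instance, a plain Web Worker (this sketch uses the browser worker API; Node offers comparable patterns through child processes and worker libraries) keeps CPU-heavy work off the main event loop, and only messages, never shared variables, cross the thread boundary:

```js
// main.js -- nothing here can race with the worker: only copies of data
// cross postMessage, there is no shared mutable state.
var worker = new Worker('prime-worker.js');
worker.onmessage = function (event) {
  console.log('primes below 1e6:', event.data);
};
worker.postMessage(1000000);

// prime-worker.js -- runs on its own thread and owns its own state.
onmessage = function (event) {
  var limit = event.data;
  var count = 0;
  for (var n = 2; n < limit; n++) {
    var isPrime = true;
    for (var d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  postMessage(count);
};
```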

Do you have any specific use cases in mind that these features don't solve, and that Javascript programmers want a solution for? If so, it'd be a good idea to present your multithreaded node.js in the context of that specific use case.


P.S. What would convince me to try switching to a multithreaded node.js implementation?

Write a non-trivial program in Javascript/node.js that you think would benefit from genuine multithreading. Do performance tests on this sample program in normal node and your multithreaded node. Show me that your version improves runtime performance, responsiveness and usage of multiple cores to a significant degree, without introducing any bugs or instability.
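
As a rough sketch of the shape such a comparison could take (the workload below is a placeholder, not a real benchmark; the point is to time the same tasks run sequentially and as supposedly parallel promises, on both runtimes):

```js
// Placeholder CPU-bound workload; swap in something representative.
function crunch(seed) {
  let x = seed;
  for (let i = 0; i < 5e7; i++) x = (x * 16807) % 2147483647;
  return x;
}

function timed(label, fn) {
  const start = Date.now();
  return Promise.resolve()
    .then(fn)
    .then(function () { console.log(label, Date.now() - start, 'ms'); });
}

// Run four tasks sequentially, then as promises that the multithreaded
// runtime claims to parallelise. Compare the numbers on normal node and on
// the multithreaded build.
timed('sequential', function () {
  [1, 2, 3, 4].forEach(crunch);
}).then(function () {
  return timed('promises', function () {
    return Promise.all([1, 2, 3, 4].map(function (i) {
      return Promise.resolve(i).then(crunch);
    }));
  });
});
```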

Once you've done that, I think you'll see people much more interested in this idea.

Ixrec
  • 1
    1) Okay, I'll admit that I've been putting off the issue of synchronization for a while now. When I say that 'two tasks will contend', that's not my design, it's mainly an observation: https://dl.dropboxusercontent.com/u/27714141/Screenshot%20from%202016-04-12%2018-25-34.png — I'm not sure what kind of witchcraft JavaScriptCore is performing here, but shouldn't this string get corrupted if it weren't inherently atomic? 2) I strongly disagree. This is the reason JS is looked upon as a toy language. 3) ES6 promises would be much more performant if implemented using a thread pool scheduler. – voodooattack Apr 12 '16 at 18:26
  • 13
    @voodooattack "This is the reason JS is looked upon as a toy language." No, this is the reason *you* consider it a toy language. Millions of people use JS every day and are perfectly happy with it, shortcomings and all. Make sure you're solving a problem that enough other people actually have, that isn't better solved by just changing languages. – Chris Hayes Apr 12 '16 at 20:51
  • @ChrisHayes The question is, why put up with its shortcomings when I can fix them? Would concurrency as a feature improve JavaScript? – voodooattack Apr 12 '16 at 23:14
  • 1
    @voodooattack That's the question. Would concurrency as a feature improve Javascript? If you can get a community answer to that that isn't "no", then maybe you're on to something. However, it seems as though the event loop and Node's native delegation to worker threads for blocking events suffice for most of what people need to do. I think that if people really need threaded Javascript, they will use Javascript workers. If you can find a way to make workers work with first-class functions instead of JS files, however....then you might *really* be onto something. –  Apr 13 '16 at 02:29
  • @voodooattack Python has the global interpreter lock. It has also traditionally looked to multiprocessing over multithreading to solve problems of concurrency. Additionally, the maintainers of the CPython runtime have been looking into concurrency solutions *aside* from removing the GIL because they believe that other options are better. Does this make it a "toy language"? Certainly not. – jpmc26 Apr 13 '16 at 02:32
  • @lunchmeat317 I could do that. It's not that difficult, since JSC allows the sharing of objects across contexts in the same context group. The problem is what happens when a worker function references a global, then we are back again to the same issue. – voodooattack Apr 13 '16 at 02:33
  • @jpmc26 It's not my term, nor do I consider it a toy language, I was merely reiterating what I heard it called elsewhere. – voodooattack Apr 13 '16 at 02:35
  • @voodooattack Yes, and that's exactly why Workers in JS are designed the way they are - by design, you can't share any variables or closures or state across thread boundaries. If you restricted workers to pure functions, though, you could do it. Further, if you did this, you could probably write your own promise implementation on top of it. It wouldn't (and couldn't) be a general-purpose solution, though. –  Apr 13 '16 at 02:41
  • 7
    @voodooattack People who call JavaScript a toy language don't know what they're talking about. Is it the go-to language for everything? Of course not, but *dismissing* it like that is *certainly* a mistake. If you want to dispel the notion that JS is a toy language, then make a non-trivial, production application in it, or point to an existing one. Just adding concurrency won't change those people's minds, anyway. – jpmc26 Apr 13 '16 at 02:43
  • I have edited the question with proof that promises work in parallel with no synchronization problems of any kind, and I included the source code. Please prove me wrong. – voodooattack Apr 14 '16 at 17:56
16

A decade or so ago Brendan Eich (the inventor of JavaScript) wrote an essay called Threads Suck, which is definitely one of the few canonical documents of JavaScript's design mythology.

Whether it is correct is another question, but I think it had a big influence on how the JavaScript community thinks about concurrency.

Erik Pukinskis
  • 31
    The last person I would take advice from is Brendan Eich. His whole career is based on creating JavaScript, which is so bad we have a heap of tools created to try and get around its innate crapness. Think of how many developer hours have been wasted in the world because of him. – Phil Wright Apr 12 '16 at 06:42
  • 12
    While I understand why you would discount his opinion in general, and even your disdain for JavaScript, I don't understand how your perspective would allow for disregarding his expertise in the domain. – Billy Cravens Apr 12 '16 at 07:10
  • 4
    @BillyCravens haha, even the canonical book on Javascript : "Javascript *the good parts*" by Crockford gives the game away in the title. Treat Eich's attitudes the same way, just stick to the stuff he says that is good :-) – gbjbaanb Apr 12 '16 at 07:29
  • 3
    Yeah, but he is now admitting that #WebAssembly is likely to patch his mess. – Den Apr 12 '16 at 08:23
  • Oh lord his writing style is a nightmare to follow - just like JS! :D – Gusdor Apr 12 '16 at 08:30
  • 13
    @PhilWright: His first implementation of Javascript was a Lisp variant. Which to me earns him huge respect. His boss's decision to force him to replace Lisp syntax with C-like syntax is not his fault. Javascript at its core is still a Lisp runtime. – slebetman Apr 12 '16 at 09:59
  • 7
    @PhilWright: You should not blame Eich for the crappyness of the language. It's bugs in early implementations, and the need of backwards-compatibility for a web language that prevented JS from maturing. The core concepts still form a wonderful language. – Bergi Apr 12 '16 at 14:03
  • @slebetman ...which is a large part of why it's so awful. Between its extreme dynamic typing, `eval`, and no real concept of namespaces--all serious problems in JavaScript that originated in Lisp--(not to mention the massive security holes inherent in the concept of conflating data and code,) Lisp has caused a lot more harm to our craft than benefit. – Mason Wheeler Apr 12 '16 at 18:21
  • 1
    @MasonWheeler *"massive security holes inherent in the concept of conflating data and code"* ...Where? Metaprogramming, macros and homoiconicity are just a few of the best things Lisp has given modern programming. – cat Apr 12 '16 at 20:28
  • 1
    @cat: Metaprogramming and macros are great, done well. Lisp *does not do them well,* and homoiconicity is a big part of the problem, but a comment is too small to discuss this issue properly. But every time you see a data breach caused by injection (of the SQL variety, XSS/XSRF, or otherwise,) that's from someone managing to get a system to *treat data as code* when it shouldn't have been. Conflating data with code (the Lisp problem) and buffer overflows (the C problem) are, together, the source of the vast majority of all serious security holes in computing today. – Mason Wheeler Apr 12 '16 at 20:36
  • @MasonWheeler Well, homoiconicity in a language like C probably isn't a good thing, which is why Lisp exists. – cat Apr 12 '16 at 20:38
  • @cat Homoiconicity is a relic of Lisp's development in ancient days when parsing was still a dark art and not the (mostly) solved problem it is today. The original design papers for Lisp show a more complicated system, but they couldn't figure out how to parse it so they punted, came up with the most stupidly simple thing that could possibly work, and shoved all of the work that a parser ought to be responsible for off onto the developer. And nowhere does this become more painfully obvious than when you try to write macros without a proper contextual AST. – Mason Wheeler Apr 12 '16 at 20:41
  • @PhilWright "Think of how many developer hours have been wasted in the world because of him" I know right, kind of like reading your comment. – NiCk Newman Apr 18 '16 at 02:45
16

Just guessing here to demonstrate a problem in your approach. I can't test it against the real implementation as there is no link anywhere...

I'd say it is because invariants are not always expressed by the value of one variable, and 'one variable' is not sufficient to be the scope of a lock in the general case. For example, imagine we have an invariant that a+b = 0 (a bank's balance with two accounts). The two functions below ensure that the invariant is always held at the end of each function (the unit of execution in single-threaded JS).

```js
// Shared state; invariant: a + b === 0 (a bank balance split over two accounts).
let a = 0, b = 0;

function withdraw(v) {
  a -= v;
  b += v;
}
function deposit(v) {
  b -= v;
  a += v;
}
```

Now, in your multithreaded world, what happens when two threads execute withdraw and deposit at the same time? Thanks, Murphy...
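
Here is one possible interleaving, assuming `withdraw(10)` runs on thread A and `deposit(5)` on thread B, starting from `a = 100, b = -100` (a hypothetical trace, not output from your runtime):

```js
// A: a -= 10  starts: reads a            -> sees 100
// B: b -= 5   runs:                      -> b = -105
// B: a += 5   starts: reads a            -> sees 100
// A: a -= 10  finishes: writes 100 - 10  -> a = 90
// B: a += 5   finishes: writes 100 + 5   -> a = 105   (A's write is lost)
// A: b += 10  runs:                      -> b = -95
//
// Final state: a = 105, b = -95, so a + b === 10. The invariant is broken
// for good, and 10 units of money appeared out of thin air. Even if each
// += / -= were executed atomically as a unit, other threads could still
// observe the broken invariant between the two statements of each function.
```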

(You might have code that treats += and -= specially, but that is no help. At some point, you will have local state in a function, and with no way to 'lock' two variables at the same time your invariant will be violated.)


EDIT: If your code is semantically equivalent to the Go code at https://gist.github.com/thriqon/f94c10a45b7e0bf656781b0f4a07292a, my comment is accurate ;-)

thriqon
  • This comment is irrelevant, but the philosophers are capable of locking multiple objects, but they starve because they lock them in an inconsistent order. – Dietrich Epp Apr 12 '16 at 12:35
  • @thriqon I just tested and the end result is correct, although the intermediate states are not consistent. https://dl.dropboxusercontent.com/u/27714141/Screenshot%20from%202016-04-12%2020-34-08.png – voodooattack Apr 12 '16 at 18:35
  • 3
    @voodooattack "I just tested..." Have you never encountered inconsistent execution order with thread scheduling? It varies from run to run, from machine to machine. Sitting down and running a single test (or even a hundred tests!) without identifying the *mechanism* for how operations are scheduled is useless. – jpmc26 Apr 13 '16 at 02:41
  • @jpmc26 Which is just the nature of multithreading everywhere. When I say it works, I mean it works relatively well on my local machine. I will upload it soon and people will be able to test on their own machines. – voodooattack Apr 13 '16 at 02:46
  • 5
    @voodooattack The problem is that simply testing concurrency isn't useful. You need to be able to *prove* that the invariant will hold, or the mechanism can never be trusted in production. – sapi Apr 13 '16 at 02:53
  • @sapi That's the beauty of it. Promises never contend. My plan is to use the scheduler for promises and warn users against using it haphazardly, then it's up to the user to use the system responsibly. – voodooattack Apr 13 '16 at 02:57
  • 1
    But you haven't given the user any tools to "use the system responsibly" because there's absolutely no locking mechanism. Making everything atomic gives an illusion of thread-safety (and you take the performance hit of everything being atomic, even when you don't need atomic access), but it doesn't actually solve most concurrency problems like the one thriqon gives here. For another example, try iterating over an array on one thread while another thread adds or removes elements from the array. For that matter, what makes you think the engine's implementation of Array is even thread safe? – Zach Lipton Apr 13 '16 at 04:39
  • 2
    @voodooattack So if users can only use your threads for functions with no shared data or side effects (and they'd better not be using them for anything else, because as we've seen here, there's no way to ensure thread safety), then what value are you providing over Web Workers (or one of the many libraries that provide a more usable API around Web Workers)? And Web Workers are much safer, because they make it impossible for the user to use the system "irresponsibly." – Zach Lipton Apr 13 '16 at 04:48
  • @voodooattack One of the criticisms you mentioned was specifically "existing libraries would have to be rewritten". This answer is the core of it. Random existing javascript code will not work, because synchronisation logic needs to be carefully applied (making each variable access atomic won't preserve correctness), and the synchronisation primitives don't exist in existing javascript so they obviously aren't used. So while your concurrent javascript may be brilliant, it won't be **javascript** in that it will require a whole new library ecosystem to be built up from scratch. – Ben Apr 13 '16 at 06:36
  • You can test the implementation yourself now. I have uploaded it to GitHub and edited the question to reflect this. – voodooattack Apr 14 '16 at 17:59
8

Atomic access does not translate into thread-safe behavior.

One example is when a global data structure has to be temporarily invalid during an update, like rehashing a hashmap (when adding a property to an object, for example) or sorting a global array. During that time you cannot allow any other thread to access the variable. This basically means that you need to detect entire read-update-write cycles and lock over them. If the update is non-trivial, that ends up in halting-problem territory.
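
A small sketch of why (hypothetical code, nothing the proposed runtime actually provides): it is the whole read-update-write cycle that needs a critical section, not the individual accesses.

```js
const wordCounts = {};

function tally(word) {
  // Three separate steps: read the property, compute, write it back.
  // Making each property read and each property write atomic does not help:
  // two threads can interleave between the read and the write and lose a
  // count, and the write itself may trigger an internal rehash that a
  // concurrent reader could observe half-done.
  const current = wordCounts[word] || 0; // read
  wordCounts[word] = current + 1;        // update + write
}

// What is actually needed is a critical section spanning the whole cycle,
// e.g. with the scoped-lock statement floated in the comments on the
// question (hypothetical syntax, not part of JavaScript):
//
//   lock (wordCounts) {
//     wordCounts[word] = (wordCounts[word] || 0) + 1;
//   }
```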

JavaScript has been single-threaded and sandboxed from the beginning, and all existing code is written with those assumptions in mind.

This has a great advantage with regard to isolated contexts, and it lets two separate contexts run in different threads. It also means that people writing JavaScript don't need to know how to deal with race conditions and various other multi-threading pitfalls.

ratchet freak
  • This is why I think the scheduler API should be more on the advanced side, and internally used for promises and functions that have no side effects. – voodooattack Apr 12 '16 at 23:18
6

Is your approach going to significantly improve performance?

Doubtful. You really need to prove this.

Is your approach going to make it easier/faster to write code?

Definitely not; multithreaded code is many times harder to get right than single-threaded code.

Is your approach going to be more robust?

No; deadlocks, race conditions, etc. are a nightmare to fix.

Phil Wright
  • 2
    Try building a raytracer using node.js, now try it again with threads. Multi-process is not everything. – voodooattack Apr 12 '16 at 06:21
  • 8
    @voodooattack, no one in their right mind would write a ray-tracer using javascript, with or without threading, because it is a relatively simple, but computation-intensive algorithm that is better written in a fully compiled language, preferably one with SIMD support. For the kind of problems that javascript is used for, multi-process is more than enough. – Jan Hudec Apr 12 '16 at 11:59
  • @JanHudec: Heh, JS is going to get SIMD support as well :-) https://hacks.mozilla.org/2014/10/introducing-simd-js/ – Bergi Apr 12 '16 at 14:05
  • 2
    @voodooattack If you're not aware of them, take a look at [SharedArrayBuffers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/SharedArrayBuffer). JavaScript is getting more concurrency constructs, but they're being added extremely cautiously, addressing specific pain points and trying to minimize bad design decisions that we'll have to live with for years. – Jeremy Apr 13 '16 at 02:20
2

Your implementation is not just about introducing concurrency; rather, it's about introducing a specific way to implement concurrency, i.e. concurrency via shared mutable state. Over the course of history people have used this type of concurrency, and it has led to many kinds of problems. Of course you can create simple programs that work perfectly using shared-mutable-state concurrency, but the real test of any mechanism is not what it can do, but whether it can scale as the program grows more complex and more and more features are added. Remember that software is not a static thing that you build once and are done with; it keeps evolving over time, and if a mechanism or concept can't cope with that evolution, it will only lead to new problems. This is exactly what history has taught us about shared-mutable-memory concurrency.

You can look at other models of concurrency (e.g. message passing) to see what benefits those models provide.

Ankur
  • 1
    I think ES6 promises would benefit from my implementation's model, since they don't contend for access. – voodooattack Apr 12 '16 at 06:10
  • Have a look at the WebWorkers specification. There are npm packages that provide an implementation, but you could implement it in the core of your engine rather than as a package – Ankur Apr 12 '16 at 06:26
  • JavaScriptCore (the webkit implementation of JS I'm using) already implements them, it's just a compilation flag. – voodooattack Apr 12 '16 at 06:28
  • Ok. WebWorkers are concurrency with message passing. You can try your raytracer example with them and compare it with the mutable state approach. – Ankur Apr 12 '16 at 06:31
  • 4
    Message passing is always implemented on top of mutable state, which means that it would be slower. I fail to see your point here. :-/ – voodooattack Apr 12 '16 at 06:34
  • Yes, and it probably uses some locking mechanism under the hood, but that means the mutable-state synchronization in your code base is done in one place only, and a well-tested one at that, rather than having locks spread across the whole code base, which can lead to various problems. It's not just about performance :) – Ankur Apr 12 '16 at 06:37
  • I concede. But at least having the option is better than being forced to use something you don't like. No one approach is the right approach to every problem. – voodooattack Apr 12 '16 at 06:43
  • @voodooattack, exactly, no one approach is the right approach to every problem. Shared-state multi-threading is not the right approach for javascript. – Jan Hudec Apr 12 '16 at 12:00
1

This is needed. The lack of a low-level concurrency mechanism in Node.js limits its applications in fields such as math, bioinformatics, etc. Besides, concurrency with threads doesn't necessarily conflict with the default concurrency model used in Node: there are well-known semantics for threading in an environment with a main event loop, such as UI frameworks (and Node.js), and though they are definitely overly complex for most situations, they still have valid uses.

Sure, your average web app will not require threads, but try doing anything a little less conventional, and the lack of a sound low-level concurrency primitive will quickly steer you away to something else that does offer it.

dryajov
  • 4
    But "math and bioinformatics" would be better written in C#, Java, or C++ – Ian Apr 12 '16 at 11:35
  • Actually Python and Perl are the dominant languages in those areas. – dryajov Apr 12 '16 at 15:03
  • 1
    Actually, the main reason I'm implementing this is because of machine learning applications. So you do have a point. – voodooattack Apr 12 '16 at 18:39
  • @dryajov Be glad that you don't know how big FORTRAN is in some of the computational science areas... – cmaster - reinstate monica Apr 12 '16 at 20:18
  • @dryajov Because they're more accessible to Humans Who Maybe Aren't Programmers Full Time, not because they're inherently better at genome sequencing -- we have purpose-built languages like R and compiled scientific langs like Fortran for that. – cat Apr 12 '16 at 20:32
  • I'd differ on Perl being more accessible to humans. My main point is that, this sort of perspective makes the node runtime a runtime for web apps, which is OK, but why impose such a restriction at the core? It might have made sense to exclude threads when it ran exclusively in the browser, but once you plant it in a different runtime, your needs are going to start becoming more diverse. I don't think that enabling threads in the runtime/language is going to somehow automatically break the language and make it worse. – dryajov Apr 12 '16 at 21:03
0

I really believe it's because it's a different and powerful idea. You are going against belief systems. Stuff becomes accepted or popular through network effects, not on the basis of merit. Also, no one wants to adapt to a new stack. People automatically reject things that are too different.

If you can come up with a way to package it as a regular npm module, which sounds unlikely, then you may get some people using it.

Jason Livesay
  • Do you mean that JS programmers are vendor-locked to npm? – voodooattack Apr 12 '16 at 06:11
  • 3
    The question is about a new runtime system, not some npm module, and mutable shared-state concurrency is really an old idea. – Ankur Apr 12 '16 at 06:55
  • 1
    @voodooattack I mean they are brainlocked to the approach that is already popular and it's going to be next to impossible to overcome the status quo bias. – Jason Livesay Apr 12 '16 at 08:01