32

I have not found many resources about this: I was wondering whether it's possible, or a good idea, to be able to write asynchronous code in a synchronous way.

For example, here is some JavaScript code which retrieves the number of users stored in a database (an asynchronous operation):

getNbOfUsers(function (nbOfUsers) { console.log(nbOfUsers) });

It would be nice to be able to write something like this:

const nbOfUsers = getNbOfUsers();
console.log(nbOfUsers);

And so the compiler would automatically take care of waiting for the response and then execute console.log. It would always wait for asynchronous operations to complete before their results are used anywhere else. We would make far less use of callbacks, promises, async/await and the like, and would never have to worry about whether the result of an operation is available immediately or not.

Errors would still be manageable (did nbOfUsers receive an integer or an error?) using try/catch, or something like optionals as in the Swift language.
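For illustration, a minimal sketch of that idea (hypothetical semantics, not valid JavaScript today):

try {
    const nbOfUsers = getNbOfUsers(); // the compiler would insert the wait here
    console.log(nbOfUsers);
} catch (error) {
    // a failed operation would surface as an ordinary exception
    console.error("Could not fetch the number of users:", error);
}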

Is it possible? It may be a terrible idea/a utopia... I don't know.

Peter Mortensen
Cinn
  • 62
    I don't really understand your question. If you "always wait for the asynchronous operation", then it is not an asynchronous operation, it is a synchronous operation. Can you clarify? Maybe give a specification of the type of behavior you are looking for? Also, "what do you think about it" is off-topic on [softwareengineering.se]. You need to formulate your question in the context of a concrete problem, that has a single, unambiguous, canonical, objectively correct answer. – Jörg W Mittag Mar 29 '19 at 16:10
  • 4
    @JörgWMittag I imagine a hypothetical C# that implicitly `await`s a `Task<T>` to convert it to `T` – Caleth Mar 29 '19 at 16:15
  • 1
    Are you asking about language support for *Transparent Futures*? If yes, why do you need language support, why are the currently existing libraries not good enough? – Jörg W Mittag Mar 29 '19 at 16:16
  • @Caleth I don't know much C#; in JS or Swift we have to adopt a particular syntax to manage operations which resolve in the future (I am avoiding the term asynchronous as it seems to create confusion). And this particular syntax is really heavy, which is why we got promises and later async/await to make it a bit lighter, but still... My question is whether it is possible for a programming language to handle both the same way, as it would make things much easier for the programmer, the code more readable, etc. – Cinn Mar 29 '19 at 16:22
  • 2
    Check out [coroutines](https://kotlinlang.org/docs/reference/coroutines/basics.html) in Kotlin. What you are describing seems similar. – JimmyJames Mar 29 '19 at 16:22
  • @JimmyJames Thanks for the link, I will have a look – Cinn Mar 29 '19 at 16:26
  • 6
    What you propose is not doable. It is not up to the compiler to decide whether you want to await the result or perhaps fire and forget. Or run in the background and await later. Why limit yourself like that? – freakish Mar 29 '19 at 20:29
  • 2
    @freakish I think the idea is that it would run in the background until the result of the promise is used, at which point it would await if necessary. Which is entirely doable, but it removes a lot of the control you would otherwise have over the way the call happens, which is likely a dealbreaker. – DarthFennec Mar 29 '19 at 21:12
  • 2
    @DarthFennec you're missing the point. What if I **don't** want to await? Just fire and go on, continue processing. With default awaiting you say "don't step forward until the result is available". You can potentially create some crazy (input-dependent) branching, so the compiler won't know when to await, or whether it should at all. What if I want to await but don't use the result? How is the compiler supposed to know that? – freakish Mar 29 '19 at 21:14
  • @freakish Again, "I think the idea is that it would run in the background until the result of the promise is used, at which point it would await if necessary." So if you don't want to await, just don't use the result of the promise. And yes, you're right, the compiler can't know that you want to await if you don't want the result (you could just get the result anyway and not use it, but that's unintuitive), and it can't know if you want to pass the promise to a different function to await later. Those are the main drawbacks I was talking about. – DarthFennec Mar 29 '19 at 21:22
  • @DarthFennec About those drawbacks, I know that there is a lot of engineering and optimization to do, but in my opinion a parser would definitely know when and where a value will be used, and could therefore schedule/prioritize operations accordingly – Cinn Mar 29 '19 at 22:10
  • 6
    [Yes, it is a terrible idea.](https://stackoverflow.com/a/25447289/1048572) Just use `async`/`await` instead, which makes the async parts of the execution explicit. – Bergi Mar 29 '19 at 22:42
  • 1
    @Cinn It knows some information about the value, but the important thing it's missing is developer intent, and design context. If you don't use the value for a while (or at all), but you want to block on the promise to allow its side effects to run, there's no way the compiler can know this, so you end up with race conditions. Similarly, if you immediately pass the promise to a function or return it to the caller, with the intent that it will be blocked on by that function, the compiler has no way to tell, so it will block too early and you lose the advantage. – DarthFennec Mar 29 '19 at 22:58
  • 5
    When you say that two things happen concurrently, you are saying that it's ok that these things happen in any order. If your code has no way of making it clear what re-orderings won't break your code's expectations, then it can't make them concurrent. – Rob Mar 29 '19 at 23:11
  • There are also more complicated things you can do with explicit awaits that can't be done with what you're proposing. For example, blocking on multiple promises at once and continuing once the fastest one completes. Blocking on a promise with a timeout, and continuing early if the timeout runs out. Querying a promise to check whether it's completed or not, without blocking. Things like that are pretty useful in many situations (see the sketch after these comments). – DarthFennec Mar 29 '19 at 23:42
  • 1
    Given that the compiler is invoked once, and the compiled code/run-time system is what actually executes, how is it that "the compiler would automatically take care of waiting for the response and then execute console.log"? ??? – Bob Jarvis - Слава Україні Mar 30 '19 at 18:50
  • And if you want to do other stuff while it's running and execute the print whenever it happens to finish, how exactly would you do that? – Kevin Mar 30 '19 at 19:45
  • @BobJarvis The compiler may implicitly transform `const nbOfUsers = getNbOfUsers(); console.log(nbOfUsers);` into valid ECMAScript such as `getNbOfUsers().then(console.log)` or something equivalent... – Cinn Mar 31 '19 at 15:02
  • @Kevin If the compiler does as in my previous comment, then it becomes a non-blocking computation (I am confident in JavaScript, but something equivalent may apply in other languages), so any following instructions will be executed while waiting for the server response – Cinn Mar 31 '19 at 15:06
  • 1
    Congratulations, you've invented threads! Now look up what problems threads have. – user253751 Mar 31 '19 at 21:02
  • Because we have yet to develop a programming language that automatically ensures memory consistency across all problems. And that is a hard problem. *How do you handle two threads updating the same counter?* You could run them sequentially. You could interleave them, swapping ownership of the memory back and forth. You could disallow the behaviour, but how would you detect that behaviour? At runtime, or at compile time? Some people will say that functional programs have solved this; I will simply ask, how do functions at equal levels in a tree update a monad? It's the same problem. – Kain0_0 Apr 01 '19 at 03:37
  • @Kain0_0 "I will simply ask, how do functions at equal levels in a tree update a monad? Its the same problem." The IO monad et al is designed to be used as an escape into imperative style, so in that context you have to manually manage this the same way you have to in other imperative languages (although Haskell makes this particularly easy, see the STM monad). In a normal functional context however, variables are immutable, so this kind of problem simply doesn't exist. You'd use a different approach that doesn't involve threads updating the same counter, and there's no longer an issue. – DarthFennec Apr 01 '19 at 17:21
  • @Cinn your examples are terribly simple. Consider this: `obj.x = call();` and later `console.log(obj[label])` where `label` is an input-dependent variable. How is the parser supposed to deduce what to do? Wrap **everything** into promises? Huge overhead. – freakish Apr 03 '19 at 12:19
  • 1
    This is completely doable by reversing await and not-await so that await becomes implicit and if you really want to defer a task (which happens maybe in 1% of the cases), you would use a special keyword instead. – Tyrrrz Nov 27 '19 at 10:53
  • Read about [continuation-passing style](https://en.wikipedia.org/wiki/Continuation-passing_style) – Basile Starynkevitch Dec 22 '20 at 19:01
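To make the earlier comment about explicit awaits concrete, here is a minimal JavaScript sketch (assumed to run inside an async function; promiseA, promiseB and timeoutMs are hypothetical placeholders):

// Wait for whichever of two operations finishes first.
const fastest = await Promise.race([promiseA, promiseB]);

// Wait for an operation, but give up if a timeout elapses first.
const timeout = new Promise((resolve, reject) =>
    setTimeout(() => reject(new Error("timed out")), timeoutMs));
const result = await Promise.race([promiseA, timeout]);

// Wait for several operations at once and continue when all have finished.
const [a, b] = await Promise.all([promiseA, promiseB]);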

14 Answers

67

Async/await is exactly the kind of automated management that you propose, albeit with two extra keywords. Why are they necessary, aside from backwards compatibility?

  • Without explicit points where a coroutine may be suspended and resumed, we would need a type system to detect where an awaitable value must be awaited. Many programming languages do not have such a type system.

  • By making awaiting a value explicit, we can also pass awaitable values around as first-class objects: promises. This can be super useful when writing higher-order code (see the sketch after this list).

  • Async code has very deep effects on the execution model of a language, similar to the absence or presence of exceptions in the language. In particular, an async function can only be awaited by async functions. This affects all calling functions! But what if we change a function from non-async to async at the end of this dependency chain? This would be a backwards-incompatible change … unless all functions are async and every function call is awaited by default.

    And that is highly undesirable because it has very bad performance implications. You wouldn't be able to simply return cheap values. Every function call would become a lot more expensive.
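To make the first-class-promises point above concrete, here is a minimal JavaScript sketch (assume it runs inside an async function; fetchUser and fetchOrders are hypothetical async functions):

// Promises are ordinary values: they can be stored, passed around and combined
// by higher-order helpers before anything is awaited.
const userPromise = fetchUser(42);     // starts the operation, returns a promise
const ordersPromise = fetchOrders(42); // runs concurrently with the call above

// The caller decides where to combine and await them.
const [user, orders] = await Promise.all([userPromise, ordersPromise]);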

Async is great, but some kind of implicit async won't work in reality.

Pure functional languages like Haskell have a bit of an escape hatch because execution order is largely unspecified and unobservable. Or phrased differently: any specific order of operations must be explicitly encoded. That can be rather cumbersome for real-world programs, especially those I/O-heavy programs for which async code is a very good fit.

amon
  • 2
    You don't necessarily need a type system. Transparent Futures in e.g. ECMAScript, Smalltalk, Self, Newspeak, Io, Ioke, Seph, can be easily implemented without type-system or language support. In Smalltalk and its descendants, an object can transparently change its identity; in ECMAScript, it can transparently change its shape. That is all you need to make Futures transparent, no need for language support for asynchrony. – Jörg W Mittag Mar 29 '19 at 16:26
  • 6
    @JörgWMittag I understand what you're saying and how that could work, but transparent futures without a type system make it rather difficult to simultaneously have first class futures, no? I would need some way to select whether I want to send messages to the future or the future's value, preferably something better than `someValue ifItIsAFuture [self| self messageIWantToSend]` because that's tricky to integrate with generic code. – amon Mar 29 '19 at 16:43
  • 1
    Haskell is a pretty good imperative language. In my opinion, explicit state and IO is not cumbersome but elegant. – skywalker Mar 29 '19 at 18:12
  • @les Monads like Haskell's `IO` have the same composability problems as async, not least because I can write my async code as promises and promises are monads. If I have a Haskell function that operates on a monad, its caller must also operate on a monad. All the way up to `main`. That is what I object to in my answer, aside from that explicit state is wonderful. – amon Mar 29 '19 at 19:54
  • @amon "If I have a Haskell function that operates on a monad, its caller must also operate on a monad. All the way up to `main`." This is only true for `IO` and monads that depend on `IO`. You can certainly have other monads in pure code. – DarthFennec Mar 29 '19 at 21:35
  • @DarthFennec You're right, I should have phrased that more like “you may have to *lift* the calling functions as well” – amon Mar 29 '19 at 21:37
  • 8
    @amon "I can write my async code as promises and promises are monads." Monads aren't actually necessary here. Thunks are essentially just promises. Since almost all values in Haskell are boxed, almost all values in Haskell are already promises. That's why you can toss a [`par`](http://hackage.haskell.org/package/parallel-3.2.2.0/docs/Control-Parallel.html#v:par) pretty much anywhere in pure Haskell code and get paralellism for free. – DarthFennec Mar 29 '19 at 22:39
  • 2
    Async/await reminds me of the continuation monad. – skywalker Mar 30 '19 at 06:43
  • 1
    What the async/await keywords do *is* encoding it in the type system. It's just that the type annotations alter the semantics by following the function objects and not the value objects. – Tim Seguine Mar 30 '19 at 14:59
  • @TimSeguine could you elaborate? The `async` bit is type-ish in that it says “this is a coroutine, not a normal function”. But `await` isn't really a type annotation, more of an unary operator “wait until this promise is resolved”. – amon Mar 30 '19 at 15:36
  • 1
    @amon In asking your question you basically summarized it already. async is a type annotation on functions and await is an operator for semantic interaction with the implied state-machine whose use is syntactically constrained by the type annotation. That's basically static typing in a nutshell. I'm not sure to what degree it's implemented in the type system of C# though for example, but that is a practical consideration and not a modelling consideration. – Tim Seguine Mar 30 '19 at 15:50
  • 3
    In fact, both exceptions and async/await are instances of [_algebraic effects_](https://www.microsoft.com/en-us/research/wp-content/uploads/2017/05/asynceffects-msr-tr-2017-21.pdf). – Alex Reinking Mar 31 '19 at 05:55
  • 1
    @les "Async/await reminds me of the continuation monad." Makes sense, await is essentially just a syntactic call/cc. – DarthFennec Apr 01 '19 at 17:24
  • 1
    Lua's undelimited coroutines allow abstracting away asynchrony and blocking to a large degree without using compile-checked types. Haskell's most noteworthy point regarding async is that blocking IO tends to use `forkIO` rather than async monads (aka generalised await/async), despite those monads being trivial to implement in the language. `forkIO` is a great example of the language abstracting away the need to manually manage asynchronous code, specifically in this case the inherent asynchrony in non-blocking use of system IO. – Louis Jackman Apr 09 '19 at 14:55
  • async/await is not fully automated, it's half-automated. Fully automated is called threads, as I said in my answer. – user253751 Mar 21 '23 at 00:33
27

What you are missing is the purpose of async operations: they allow you to make use of your waiting time!

If you turn an async operation, like requesting some resource from a server, into a synchronous operation by implicitly and immediately waiting for the reply, your thread cannot do anything else with the waiting time. If the server takes 10 milliseconds to respond, about 30 million CPU cycles go to waste. The latency of the response becomes the execution time for the request.

The only reason why programmers invented async operations is to hide the latency of inherently long-running tasks behind other useful computations. If you can fill the waiting time with useful work, that's CPU time saved. If you can't, well, nothing's lost by the operation being async.
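As a minimal JavaScript sketch of filling the waiting time (assume it runs inside an async function; all function names here are hypothetical placeholders):

// Kick off the slow request, but do not wait for it yet.
const reportPromise = fetchReport();

// Useful work happens while the server is busy.
renderSpinner();
const summary = computeLocalSummary();

// Only wait when the result is actually needed.
const report = await reportPromise;
display(summary, report);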

So I recommend embracing the async operations that your languages provide to you. They are there to save you time.

  • I was thinking of a functional language where operations are not blocking, so even if it has a synchronous syntax, a long-running computation will not block the thread – Cinn Mar 29 '19 at 18:26
  • 6
    @Cinn I didn't find that in the question, and the example in the question is Javascript, which does not have this feature. However, generally it's rather hard for a compiler to find meaningful opportunities for parallelization as you describe: Meaningful exploitation of such a feature would require the programmer to explicitly think about *what* they put right after a long latency call. If you make the runtime smart enough to avoid this requirement on the programmer, your runtime will likely eat up the performance savings because it would need to parallelize aggressively across function calls. – cmaster - reinstate monica Mar 29 '19 at 19:55
  • 2
    All computers wait at the same speed. – Bob Jarvis - Слава Україні Mar 30 '19 at 18:53
  • 3
    @BobJarvis Yes. But they differ in how much work they *could* have done in the waiting time... – cmaster - reinstate monica Mar 30 '19 at 18:55
14

Some do.

They're not mainstream (yet) because async is a relatively new feature that we've only just now gotten a good feel for, both in terms of whether it's even a good feature and how to present it to programmers in a way that is friendly/usable/expressive/etc. Existing async features are largely bolted onto existing languages, which requires a slightly different design approach.

That said, it's not clearly a good idea to do everywhere. A common failing is doing async calls in a loop, effectively serializing their execution. Having asynchronous calls be implicit may obscure that sort of error. Also, if you support implicit coercion from a Task<T> (or your language's equivalent) to T, that can add a bit of complexity/cost to your typechecker and error reporting when it's unclear which of the two the programmer really wanted.
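For example, a minimal JavaScript sketch of that failing and its fix (assume it runs inside an async function; fetchUser is a hypothetical async function):

const ids = [1, 2, 3];

// Looks harmless, but each iteration waits for the previous request to finish:
const users = [];
for (const id of ids) {
    users.push(await fetchUser(id)); // serialized: one request at a time
}

// Explicitly concurrent version: start all the requests, then await them together.
const usersConcurrent = await Promise.all(ids.map(id => fetchUser(id)));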

But those are not insurmountable problems. If you wanted to support that behavior you almost certainly could, though there would be trade-offs.

Telastyn
  • 1
    I think an idea could be to wrap everything in async functions; the synchronous tasks would just resolve immediately and we get a single kind to handle (Edit: @amon explained why it's a bad idea...) – Cinn Mar 29 '19 at 16:31
  • 10
    Can you give a few examples for "*Some do*", please? – Bergi Mar 30 '19 at 12:17
  • 2
    Asynchronous programming isn't in any way new, it's just that nowadays people have to deal with it more often. – Cubic Mar 30 '19 at 14:56
  • 1
    @Cubic - it is as a language feature as far as I know. Before it was just (awkward) userland functions. – Telastyn Mar 30 '19 at 16:45
13

There are languages that do this. But, there is actually not much of a need, since it can be easily accomplished with existing language features.

As long as you have some way of expressing asynchrony, you can implement Futures or Promises purely as a library feature; you don't need any special language features. And as long as you have some way of expressing Transparent Proxies, you can put the two features together and you have Transparent Futures.

For example, in Smalltalk and its descendants, an object can change its identity, it can literally "become" a different object (and in fact the method that does this is called Object>>become:).

Imagine a long-running computation that returns a Future<Int>. This Future<Int> has all the same methods as Int, except with different implementations. Future<Int>'s + method does not add another number and return the result, it returns a new Future<Int> which wraps the computation. And so on, and so forth. Methods that cannot sensibly be implemented by returning a Future<Int>, will instead automatically await the result, and then call self become: result., which will make the currently executing object (self, i.e. the Future<Int>) literally become the result object, i.e. from now on the object reference that used to be a Future<Int> is now an Int everywhere, completely transparent to the client.

No special asynchrony-related language features needed.
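As a rough JavaScript sketch of the pipelining half of this idea (JavaScript cannot express Smalltalk's become:, so this only covers forwarding method calls; getNbOfUsers is taken from the question):

function future(promise) {
    return new Proxy({}, {
        get(_, prop) {
            if (prop === "then") {
                // Make the future awaitable by delegating to the underlying promise.
                return (resolve, reject) => promise.then(resolve, reject);
            }
            // Any other method call is forwarded lazily and yields a new future.
            return (...args) => future(promise.then(obj => obj[prop](...args)));
        }
    });
}

// Hypothetical usage, assuming getNbOfUsers() returns a Promise of a number:
// const text = future(getNbOfUsers()).toFixed(0); // still a future
// console.log(await text);                        // e.g. "42"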

Jörg W Mittag
  • Ok, but that has problems if both `Future` and `T` share some common interface and I use functionality from that interface. Should it `become` the result and then use the functionality, or not? I'm thinking of things like an equality operator or a to-string debugging representation. – amon Mar 29 '19 at 16:47
  • I understand that it does not add any features; the thing is we have different syntaxes to write immediately resolving computations and long-running computations, and after that we would use the results the same way for other purposes. I was wondering if we could have a syntax that transparently handles both, making it more readable, so the programmer does not have to handle it. Like doing `a + b`, both integers, no matter if a and b are available immediately or later, we just write `a + b` (making it possible to do `Int + Future`) – Cinn Mar 29 '19 at 16:56
  • @Cinn: Yes, you can do that with Transparent Futures, and you don't need any special language features to do that. You can implement it using the already existing features in e.g. Smalltalk, Self, Newspeak, Us, Korz, Io, Ioke, Seph, ECMAScript, and apparently, as I just read, Python. – Jörg W Mittag Mar 29 '19 at 17:01
  • 4
    @amon: The idea of *Transparent Futures* is that you don't know it's a future. From your point of view, there is no common interface between `Future` and `T` because from your point of view, *there is no `Future`*, only a `T`. Now, there is of course lots of engineering challenges around how to make this efficient, which operations should be blocking vs. non-blocking, etc., but that is really independent of whether you do it as a language or as a library feature. Transparency was a requirement stipulated by the OP in the question, I won't argue that it is hard and might not make sense. – Jörg W Mittag Mar 29 '19 at 17:03
  • 3
    @Jörg That seems like it would be problematic in anything but functional languages since you have no way of knowing when code is actually executed in that model. That generally works fine in say Haskell, but I can't see how this would work in more procedural languages (and even in Haskell, if you care about performance you sometimes have to force an execution and understand the underlying model). An interesting idea nevertheless. – Voo Mar 31 '19 at 12:00
7

They do (well, most of them). The feature you're looking for is called threads.

Threads have their own problems however:

  1. Because the code can be suspended at any point, you can't ever assume that things won't change "by themselves". When programming with threads, you waste a lot of time thinking about how your program should deal with things changing.

    Imagine a game server is processing a player's attack on another player. Something like this:

    if (playerInMeleeRange(attacker, victim)) {
        const damage = calculateAttackDamage(attacker, victim);
        if (victim.health <= damage) {
    
            // attacker gets whatever the victim was carrying as loot
            const loot = victim.getInventoryItems();
            attacker.addInventoryItems(loot);
            victim.removeInventoryItems(loot);
    
            victim.sendMessage("${attacker} hits you with a ${attacker.currentWeapon} and you die!");
            victim.setDead();
        } else {
            victim.health -= damage;
            victim.sendMessage("${attacker} hits you with a ${attacker.currentWeapon}!");
        }
        attacker.markAsKiller();
    }
    

    Three months later, a player discovers that by getting killed and logging off precisely when attacker.addInventoryItems is running, victim.removeInventoryItems will fail: he keeps his items and the attacker also gets a copy of them. He does this several times, creating a million tonnes of gold out of thin air and crashing the game's economy.

    Alternatively, the attacker can log out while the game is sending a message to the victim, and he won't get a "murderer" tag above his head, so his next victim won't run away from him.

  2. Because the code can be suspended at any point, you need to use locks everywhere when manipulating data structures. I gave an example above that has obvious consequences in a game, but it can be more subtle. Consider adding an item to the start of a linked list:

    newItem.nextItem = list.firstItem;
    list.firstItem = newItem;
    

    This isn't a problem if you say that threads can only be suspended when they're doing I/O, and not at any point. But I'm sure you can imagine a situation where there's an I/O operation - such as logging:

    for (player = playerList.firstItem; player != null; player = player.nextPlayer) {
        debugLog("${player.name} is online, they get a gold star");
        // Oops! The player might've logged out while the log message was being written to disk, and now this will throw an exception and the remaining players won't get their gold stars.
        // Or the list might've been rearranged and some players might get two and some players might get none.
        player.addInventoryItem(InventoryItems.GoldStar);
    }
    
  3. Because the code can be suspended at any point, there could potentially be a lot of state to save. The system deals with this by giving each thread an entirely separate stack. But the stack is quite big, so you can't have more than about 2000 threads in a 32-bit program. Or you could reduce the stack size, at the risk of making it too small.

user253751
5

I find a lot of the answers here misleading, because while the question was literally asking about asynchronous programming and not non-blocking IO, I don't think we can discuss one without discussing the other in this particular case.

While asynchronous programming is inherently, well, asynchronous, the raison d'être of asynchronous programming is mostly to avoid blocking kernel threads. Node.js uses asynchrony via callbacks or Promises to allow blocking operations to be dispatched from an event loop, and Netty in Java uses asynchrony via callbacks or CompletableFutures to do something similar.

Non-blocking code does not require asynchrony, however. It depends on how much your programming language and runtime are willing to do for you.

Go, Erlang, and Haskell/GHC can handle this for you. You can write something like var response = http.get('example.com/test') and have it release a kernel thread behind the scenes while waiting for a response. This is done by goroutines, Erlang processes, or forkIO letting go of kernel threads behind the scenes when blocking, allowing it to do other things while awaiting a response.

It's true that a language can't really handle asynchrony for you, but some abstractions let you go further than others, e.g. undelimited continuations or asymmetric coroutines. However, the primary cause of asynchronous code, blocking system calls, absolutely can be abstracted away from the developer.

Node.js and Java support asynchronous non-blocking code, whereas Go and Erlang support synchronous non-blocking code. They're both valid approaches with different tradeoffs.

My rather subjective argument is that those arguing against runtimes managing non-blocking on behalf of the developer are like those arguing against garbage collection in the early noughties. Yes, it incurs a cost (in this case primarily more memory), but it makes development and debugging easier, and makes codebases more robust.

I'd personally argue that asynchronous non-blocking code should be reserved for systems programming in the future and more modern technology stacks should migrate to synchronous non-blocking runtimes for application development.

Louis Jackman
  • 1
    This was a really interesting answer! But I'm not sure I understand your distinction between “synchronous” and “asynchronous” non-blocking code. For me, synchronous non-blocking code means something like a C function like `waitpid(..., WNOHANG)` that fails if it would have to block. Or does “synchronous” here mean “there are no programmer-visible callbacks/state machines/event loops”? But for your Go example, I still have to explicitly await a result from a goroutine by reading from a channel, no? How is this less async than async/await in JS/C#/Python? – amon Apr 09 '19 at 18:34
  • 1
    I use "asynchronous" and "synchronous" to discuss the programming model exposed to the developer and "blocking" and "non-blocking" to discuss the blocking of a kernel thread during which it can't do anything useful, even if there are other computations that need doing and there is a spare logical processor it can use. Well, a goroutine can just wait around for a result without blocking the underlying thread, but another goroutine can communicate with it over a channel if it wishes. The goroutine needn't use a channel _directly_ to wait for a non-blocking socket read though. – Louis Jackman Apr 14 '19 at 11:50
  • Hmm ok, I do understand your distinction now. Whereas I'm more concerned about managing data- and control-flow between coroutines, you are more concerned about never blocking the main kernel thread. I'm not sure Go or Haskell have any advantage over C++ or Java in this regard since they too can kick off background threads, doing so just requires a tad more code. – amon Apr 14 '19 at 20:52
  • @LouisJackman could you elaborate a little on your last statement about async non-blocking for systems programming? What are the pros of the async non-blocking approach? – sunprophit Aug 11 '19 at 09:42
  • @sunprophit Asynchronous non-blocking is just a compiler transformation (usually async/await), whereas synchronous non-blocking requires runtime support like some combination of complex stack manipulation, inserting yield points on function calls (which can collide with inlining), tracking “reductions” (requiring a VM like BEAM), etc. Like garbage collection, it’s trading off less runtime complexity for ease of use and robustness. Systems languages like C, C++, and Rust avoid larger runtime features like this due to their targeted domains, so asynchronous non-blocking makes more sense there. – Louis Jackman Aug 12 '19 at 11:57
  • @LouisJackman isn't it possible to know at compile time whether operation will be blocking or not, i.e. you open all file descriptors with O_NONBLOCK flag – sunprophit Aug 12 '19 at 15:37
  • Excellent answer and helps explain why languages using the Erlang BEAM VM don't seem to need C# stye async. – Frank Hileman Feb 13 '20 at 00:08
3

If I'm reading you right, you are asking for a synchronous programming model, but a high performance implementation. If that is correct then that is already available to us in the form of green threads or processes of e.g. Erlang or Haskell. So yes, it's an excellent idea, but the retrofitting to existing languages can't always be as smooth as you would like.

Peter Mortensen
monocell
3

I appreciate the question, and find the majority of answers to be merely defensive of the status quo. In the spectrum of low- to high-level languages, we've been stuck in a rut for some time. The next higher level is clearly going to be a language that is less focused on syntax (the need for explicit keywords like await and async) and much more focused on intention. (Obvious credit to Charles Simonyi, but thinking of 2019 and the future.)

If I told a programmer to write some code that simply fetches a value from a database, you could safely assume I mean, "and BTW, don't hang the UI" and "don't introduce other considerations that mask hard-to-find bugs". Programmers of the future, with a next generation of languages and tools, will certainly be able to write code that simply fetches a value in one line of code and goes from there.

The highest-level language would be speaking English, and relying on the competence of the task doer to know what you really want done. (Think of the computer in Star Trek, or asking something of Alexa.) We're far from that, but inching closer, and my expectation is that the language/compiler could do more to generate robust, intention-driven code without going so far as to need AI.

On one hand, there are newer visual languages, like Scratch, that do this and aren't bogged down with all the syntactical technicalities. Certainly, there's a lot of behind-the-scenes work going on so the programmer doesn't have to worry about it. That said, I'm not writing business class software in Scratch, so, like you, I have the same expectation that it's time for mature programming languages to automatically manage the synchronous/asynchronous problem.

2

There is a very important aspect that has not been raised yet: reentrancy. If you have any other code (e.g. an event loop) that runs during the async call (and if you don't, then why do you even need async?), then that code can affect the program state. You cannot hide the async calls from the caller, because the caller may depend on parts of the program state remaining unaffected for the duration of its function call. Example:

function foo( obj ) {
    obj.x = 2;
    bar();
    log( "obj.x equals 2: " + obj.x );
}

If bar() is an async function then it may be possible for obj.x to change during its execution. This would be rather unexpected without any hint that bar is async and that this effect is possible. The only alternative would be to suspect every possible function/method of being async, and to re-fetch and re-check any non-local state after each function call. This is prone to subtle bugs and may not even be possible at all if some of the non-local state is fetched via functions. Because of that, the programmer needs to be aware which of the functions have the potential of altering the program state in unexpected ways:

async function foo( obj ) {
    obj.x = 2;
    await bar();
    log( "obj.x equals 2: " + obj.x );
}

Now it is clearly visible that bar() is an async function, and the correct way to handle it is to re-check the expected value of obj.x afterwards and deal with any changes that may have occurred.

As already noted by other answers, pure functional languages like Haskell can escape that effect entirely by avoiding the need for any shared/global state at all. I do not have much experience with functional languages, so I am probably biased, but I do not think the lack of global state is an advantage when writing larger applications.

j_kubik
1

The problem you're describing is two-fold.

  • The program you're writing should behave asynchronously as a whole when viewed from the outside.
  • It should not be visible at the call site whether a function call potentially gives up control or not.

There are a couple of ways to achieve this, but they basically boil down to

  1. having multiple threads (at some level of abstraction)
  2. having multiple kinds of function at the language level, all of which are called like this foo(4, 7, bar, quux).

For (1), I'm lumping together forking and running multiple processes, spawning multiple kernel threads, and green thread implementations that schedule language-runtime level threads onto kernel threads. From the perspective of the problem, they are the same. In this world, no function ever gives up or loses control from the perspective of its thread. The thread itself sometimes doesn't have control and sometimes isn't running but you don't give up control of your own thread in this world. A system fitting this model may or may not have the ability to spawn new threads or join on existing threads. A system fitting this model may or may not have the ability to duplicate a thread like Unix's fork.

(2) is interesting. In order to do it justice we need to talk about introduction and elimination forms.

I'm going to show why implicit await cannot be added to a language like Javascript in a backwards-compatible way. The basic idea is that by exposing promises to the user and having a distinction between synchronous and asynchronous contexts, Javascript has leaked an implementation detail that prevents handling synchronous and asynchronous functions uniformly. There's also the fact that you can't await a promise outside of an async function body. These design choices are incompatible with "making asynchronousness invisible to the caller".

You can introduce a synchronous function using a lambda and eliminate it with a function call.

Synchronous function introduction:

((x) => {return x + x;})

Synchronous function elimination:

f(4)

((x) => {return x + x;})(4)

You can contrast this with asynchronous function introduction and elimination.

Asynchronous function introduction

(async (x) => {return x + x;})

Asynchronous function elimination (note: only valid inside an async function)

await (async (x) => {return x + x;})(4)

The fundamental problem here is that an asynchronous function is also a synchronous function producing a promise object.

Here's an example of calling an asynchronous function synchronously in the node.js repl.

> (async (x) => {return x + x;})(4)
Promise { 8 }

You can hypothetically have a language, even a dynamically typed one, where the difference between asynchronous and synchronous function calls is not visible at the call site and possibly is not visible at the definition site.

Taking a language like that and lowering it to JavaScript is possible; you'd just have to effectively make all functions asynchronous.
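For instance, such a lowering might turn every function async and await every call site (a hypothetical sketch; f and g are made-up names):

// Source in the imagined language:
//     function f(x) { return g(x) + 1; }

// Lowered JavaScript output: every function is async, every call is awaited.
async function f(x) {
    return (await g(x)) + 1;
}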

Greg Nisbet
1

In the case of Javascript, which you used in your question, there is an important point to be aware of: Javascript is single-threaded, and the order of execution is guaranteed as long as there are no async calls.

So if you have a sequence like yours:

const nbOfUsers = getNbOfUsers();

You are guaranteed that nothing else will be executed in the meantime. No need for locks or anything similar.

However, if getNbOfUsers is asynchronous, then:

const nbOfUsers = await getNbOfUsers();

means that while getNbOfUsers runs, execution yields, and other code may run in between. This may in turn require some locking to happen, depending on what you are doing.
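A minimal sketch of the kind of hazard this introduces (readCounter and writeCounter are hypothetical async functions sharing stored state):

async function incrementCounter() {
    const current = await readCounter();  // execution yields here; other code may run
    await writeCounter(current + 1);      // an update made in the meantime can be lost
}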

So, it's a good idea to be aware of when a call is asynchronous and when it isn't, as in some situations you will need to take additional precautions that you wouldn't need if the call were synchronous.

jcaron
  • You are right, my second code sample in the question is invalid, as `getNbOfUsers()` returns a Promise. But that is exactly the point of my question: why do we need to explicitly write it as asynchronous? The compiler could detect it and handle it automatically in a different way. – Cinn Mar 31 '19 at 15:27
  • @Cinn that’s not my point. My point is that the execution flow may get to other parts of your code during the execution of the asynchronous call, while it isn’t possible for a synchronous call. It would be like having multiple threads running but not being aware of it. This can end up in big issues (which are usually hard to detect and reproduce). – jcaron Mar 31 '19 at 15:49
  • Valid point, but if a language is truly implicitly asynchronous, you'd automatically expect other things to happen in the background during *every* statement. I wouldn't see it as a big counter argument. – phil294 Apr 07 '21 at 13:34
1

With Go's goroutines and the Go runtime, you can write all code as if it were synchronous. If an operation blocks in one goroutine, execution continues in other goroutines. And with channels you can communicate easily between goroutines. This is often easier than callbacks as in JavaScript or async/await in other languages. See https://tour.golang.org/concurrency/1 for some examples and an explanation.

Furthermore, I have no personal experience with it, but I hear Erlang has similar facilities.

So, yes, there are programming languages like Go and Erlang which solve the synchronous/asynchronous problem, but unfortunately they are not very popular yet. As those languages grow in popularity, perhaps the facilities they provide will be implemented in other languages as well.

  • I almost never used the Go language, but it seems that you explicitly declare `go ...`, so it looks similar to `await ...`, no? – Cinn Apr 01 '19 at 08:03
  • 1
    @Cinn Actually, no. You can put any call as a goroutine onto its own fiber / green-thread with `go`. And just about any call which might block is done asynchronously by the runtime, which just switches to a different goroutine in the meantime (co-operative multi-tasking). You await by waiting for a message. – Deduplicator Apr 01 '19 at 08:33
  • 2
    While Goroutines are a kind of concurrency, I wouldn't put them into the same bucket as async/await: not cooperative coroutines but automatically (and preemptively!) scheduled green threads. But this doesn't make awaiting automatic either: Go's equivalent to `await` is reading from a channel `<- ch`. – amon Apr 01 '19 at 12:47
  • @amon As far as I know, goroutines are cooperatively scheduled on native threads (normally just enough to max out true hardware parallelism) by the runtime, and those are preemptively scheduled by the OS. – Deduplicator Apr 01 '19 at 15:36
  • The OP asked "to be able to write asynchronous code in a synchronous way". As you have mentioned, with goroutines and the Go runtime, you can do exactly that. You don't have to worry about the details of threading; just write blocking reads and writes as if the code were synchronous, and your other goroutines, if any, will keep on running. You also don't even have to "await" or read from a channel to get this benefit. I therefore think Go is the programming language that meets the OP's desires most closely. –  Apr 03 '19 at 06:51
  • Elixir also seems pretty close; I found this article very helpful: https://blog.codeship.com/comparing-elixir-go/ – Cinn Apr 03 '19 at 08:29
  • @amon they are, however, in the same basket as threads – user253751 May 16 '23 at 21:25
0

There are languages that do what (I understand) you propose - you can add any amount of code, and when you later access the variable that was supposed to be filled asynchronously, the execution waits for it to finish.
Of course, if you access it right in the next line, you don't gain anything - the idea is to do other work first, and then use it when really needed, without worrying about whether it will be ready in time.

I know that ABAP (the SAP language) can do that; but it is an interpreted language, so it is probably easier to handle. A compiled language would have to put more effort into it, but it should be possible.

However, hiding that in the language definition would make it harder to write optimally fast programs, as you no longer have control over what happens. C and C++ have the core principle of not adding any overhead - you only pay for what you use - with the price that you need to handle synchronization yourself, where you want/need it.

Pang
Aganju
-4

This is available in C++ as std::async since C++11.

The template function async runs the function f asynchronously (potentially in a separate thread which may be part of a thread pool) and returns a std::future that will eventually hold the result of that function call.

And with C++20, coroutines can be used.

Robert Andrzejuk
  • 5
    This doesn't seem to answer the question. According to your link: "What does the Coroutines TS give us? Three new language keywords: co_await, co_yield and co_return"... But the question is why do we need an `await` (or `co_await` in this case) keyword in the first place? – Arturo Torres Sánchez Mar 29 '19 at 18:22