7

I've seen this in C#, Hack, and now Kotlin: await or an equivalent operation can only be performed in special "async" contexts. Return values from these are, to borrow Hack's terminology, "awaitable" in turn, so the special async-ness of some low-level async system call bubbles all the way up unless it is transformed into a synchronous operation. This partitions the codebase into synchronous and asynchronous ecosystems. After working for a while with async-await in Hack, however, I'm beginning to question the need. Why does the calling scope need to know that it's calling an async function? Why can't async functions look like sync functions that just happen to throw control somewhere else on occasion?
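For concreteness, this is how the constraint shows up in Kotlin (a minimal sketch, assuming kotlinx.coroutines; `suspend` plays the role of the async marker here):

```kotlin
import kotlinx.coroutines.delay
import kotlinx.coroutines.runBlocking

// A low-level asynchronous operation; delay() stands in for a real async system call.
suspend fun lowLevelIo(): String {
    delay(10)
    return "payload"
}

// Anything that calls it must itself be marked suspend, so the marker bubbles up.
suspend fun businessLogic(): String = lowLevelIo()

fun main() {
    // An ordinary function can't call businessLogic() directly; it has to cross
    // back over the boundary explicitly, e.g. by blocking on the result.
    println(runBlocking { businessLogic() })
}
```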

I've found all of the distinctiveness of async code I've written to come from three consequences:

  1. Race conditions are possible when two parallel coroutines share common state
  2. Time information about the transient/unresolved state might be embedded in async objects, which can enforce certain ordering rules
  3. The underlying work of a coroutine can be mutated by other coroutines (e.g. cancellation)

I'll concede the first one is tempting. Annotating an ecosystem as async screams "beware: race conditions might live here!" However, attention to race conditions can be completely localized to combining functions (e.g. `(Awaitable<Tu>, Awaitable<Tv>, ...) -> Awaitable<(Tu, Tv, ...)>`), since without them two coroutines cannot execute in parallel. Then the problem becomes very specific: "make sure all terms of this combining function do not race." This is beneficial to clarity. So long as it's understood that combining functions are useful for async code (but obviously not limited to it; async code is a superset of sync code), and that there are a finite number of canonical ones (language constructs, as they often are), I feel this better communicates the risks of race conditions by localizing their sources.
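To make the shape concrete, here is a rough Kotlin sketch (assuming kotlinx.coroutines, with `Deferred` standing in for an awaitable; the semantics aren't exactly Hack's, and `expensiveLookup` is a made-up stand-in):

```kotlin
import kotlinx.coroutines.*

// A combining function of the shape (Awaitable<Tu>, Awaitable<Tv>) -> Awaitable<(Tu, Tv)>,
// written with Deferred as the awaitable type: it joins two in-flight coroutines into a
// single awaitable result, and it is the kind of place where race scrutiny can be focused.
fun <A, B> CoroutineScope.zip(a: Deferred<A>, b: Deferred<B>): Deferred<Pair<A, B>> =
    async { a.await() to b.await() }

// Made-up stand-in for some awaitable work.
suspend fun expensiveLookup(id: Int): Int {
    delay(10)
    return id * 100
}

suspend fun example(): Pair<Int, Int> = coroutineScope {
    zip(async { expensiveLookup(1) }, async { expensiveLookup(2) }).await()
}
```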

The other two are a matter of how the language represents the lowest-level async objects (Hack's WaitHandles, for instance). Any mutation of a high-level async object is necessarily confined to a set of operations against the underlying low-level async objects that come from system calls. Whether or not the calling scope is synchronous is irrelevant, since mutability and the effects of mutation are purely functions of that underlying state at a single point in time. Aggregating them into a nondescript async object does not make the behavior any clearer; if anything, to me, it obscures it with the illusion of determinism. This is all moot when the scheduler is opaque (as in Hack and, from what I gather, Kotlin as well), since the information and mutators are hidden anyway.
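To make the third consequence concrete, here is a minimal Kotlin sketch (again assuming kotlinx.coroutines): cancellation is an operation against the underlying `Deferred`/`Job`, and the awaiting scope simply observes it as an ordinary exception, just as it would observe a throw from a synchronous call.

```kotlin
import kotlinx.coroutines.*

fun main() = runBlocking {
    val work: Deferred<String> = async { delay(1_000); "done" }

    work.cancel()                      // mutate the underlying async object (its Job)
    try {
        println(work.await())          // the awaiting scope sees either a value...
    } catch (e: CancellationException) {
        println("work was cancelled")  // ...or an exception, as with any synchronous call
    }
}
```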

Otherwise, the result is all the same for the calling scope: it eventually gets a value or an Exception and does its synchronous thing. Am I missing a part of the design thinking behind this rule? Alternatively, are there examples where async function contracts are indistinguishable from synchronous ones?

concat
  • Recommended reading: [What Color is Your Function?](http://journal.stuffwithstuff.com/2015/02/01/what-color-is-your-function/) (at the risk of spoiling its title, it's directly related to your question). – Andres F. Mar 22 '17 at 19:47
  • Possible duplicate of [async+await == sync?](http://softwareengineering.stackexchange.com/questions/183576/asyncawait-sync) – gnat Mar 22 '17 at 19:59
  • Also worth reading: [Eliding Async and Await](http://blog.stephencleary.com/2016/12/eliding-async-await.html) and [Async/Await FAQ](https://blogs.msdn.microsoft.com/pfxteam/2012/04/12/asyncawait-faq/) – Doval Mar 22 '17 at 20:21
  • Because async "functions" are not functions; they're totally different models of computation. – Mar 22 '17 at 20:57
  • "sync" v. "async" is not the real issue; it's functional v interactional computation. –  Mar 22 '17 at 21:08
  • @mobileink What do you mean by "interactional" computation? I'm not familiar with the term, nor am I particularly sure what is interacting. – concat Mar 23 '17 at 00:32
  • Unfortunately I do not know of any generally available overview. The basic idea is that traditional models of computation like Turing Machines, Lambda Calculus, etc. are inadequate, since they cannot handle e.g. I/O, like talking to databases. But nobody has come up with a more general model that everybody likes. Google "coinduction", and try https://www.amazon.com/Communicating-Mobile-Systems-Pi-Calculus/dp/0521658691 (not free, but well worth the money). – Mar 23 '17 at 21:57
  • The short answer is that (sync) functions are white boxes, but "calling a fn" on an async thing is black-boxery: you don't get a result, you get an observation, which has no intrinsic relation to the internal state of the "fn" you "called". Consider +: you "apply" it to 2 args and it just means what it means; (+ 2 2) just *is* 4, it's technically just another name for the same thing. But +', the async version of the same, is different. When you say (+' 2 2) you perform an action on the +' black box, so... – Mar 23 '17 at 22:06
  • Dammit, hit wrong button. To continue: when you do (+' 2 2) you perform an action on the +' black box, which may or may not terminate; furthermore, a man-in-the-middle may intercept. Either way you must "observe" the result, which usually means a callback. Either way, the point is that the meaning of (+' 2 2) is non-deterministic, while that of (+ 2 2) is deterministic. – Mar 23 '17 at 22:12
  • So, to answer your core question, "Why does the calling scope need to know that it's calling an async function?": the answer is that "calling" a function is not the same as "performing an action on a black box", so you need to handle them differently. – Mar 23 '17 at 22:16
  • @mobileink If the calling scope can't inspect regular function implementations, aren't they just as opaque as async ones? I imagine that whether it's a middle man who's intercepting or the original work that's generating values doesn't matter the majority of the time. – concat Mar 24 '17 at 14:59
  • @concat: I think it's more about language semantics than inspectability. In a compiled functional program you can't inspect the source code, but you (the programmer) know the function definitions are expressed as functional algorithms, guaranteed to terminate with a value. In principle, their correctness could be certified. Plus, functions do not have state; black boxes may. So taking the value of a function at an argument is not the same as observing the response of a black box (state machine) to a stimulus. – Mar 24 '17 at 19:45
  • Do you mean syntactically? Because by necessity, by *definition*, you have to distinguish them semantically... – Miles Rout Mar 28 '17 at 21:38
  • @MilesRout "Semantics" as I understand it describes the meaning of "async"-ness to code that works with it. From this I say not necessarily, because I expect only limited parts of most applications to exploit async properties directly (e.g. parallelizing coroutines). But, I'm open to being corrected on this definition. – concat Mar 29 '17 at 00:31
  • @concat 'Semantics' means 'meaning'. The meaning of asynchronous code is obviously totally distinct from the meaning of synchronous code. – Miles Rout Mar 29 '17 at 20:18

3 Answers

5

The reason you need to mark methods as `async` in C# in order to use `await` as a keyword inside of them is that C# was already a well-established language by the time this feature was added. It's reasonable to assume that there was code out there that used `await` as an identifier, and that code would have broken under the new system.

By introducing new syntax that was never valid before (methods marked as `async` in the method declaration), the C# compiler team could ensure that all existing code continued to work as usual, and the use of `await` as a pseudo-keyword would only come into play when the coder explicitly asked for it in code written for the new feature.

Other languages probably did it that way for similar reasons, or "because that's how C# did it."

Mason Wheeler
  • `await` existed before `async`? – concat Mar 23 '17 at 00:53
  • @concat It was possible to use it as an identifier before `async` was around. For example, there may well have been a class with a method named `await()`. Any code that tried to call this would have broken if they had unilaterally decided that `await` is now a keyword. Since the C# team didn't want to break existing code, they added the `async` keyword to set it up so that this wouldn't happen. – Mason Wheeler Mar 23 '17 at 01:04
  • Ah okay, sorry, I misunderstood. Did the C# team describe this decision online anywhere? – concat Mar 23 '17 at 03:50
  • I also don't quite follow how the `async` annotation fixes the `await` naming conflict. Depending on the ambiguity of `await (...)` (emphasis on the space), isn't that either still a problem, or not a problem to begin with? – concat Mar 23 '17 at 03:52
  • @concat See https://blogs.msdn.microsoft.com/ericlippert/2010/11/11/asynchrony-in-c-5-part-six-whither-async/, where Eric Lippert, from the C# compiler team (at the time), described why the `async` modifier is needed for compatibility reasons. – Mason Wheeler Mar 23 '17 at 09:07
5

I think you're reading too much into this. `async` changes the return type of a function. I don't know how or whether C# denotes that beyond the `async` itself, but in Scala (and I believe Kotlin, though I'm not as familiar with it) your return type literally goes from an `A` to a `Future[A]`. Obviously, that means the calling context must generate different code to retrieve the return value.

It wouldn't be difficult in Scala to add implicit conversions that go from an `A` to a `Future[A]` wherever a `Future[A]` is expected in the calling context, by just wrapping the value in an `async {}` block. Likewise, it would be trivial to add implicit conversions in the other direction by wrapping in an `Await.result` call. Other languages could easily do this conversion in the compiler.
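In Kotlin terms (which, again, I'm less familiar with), the explicit versions of those two conversions would look roughly like this, with `Deferred` playing the role of `Future`; `lift` and `force` are purely illustrative names, not a real API:

```kotlin
import kotlinx.coroutines.*

// sync -> async: wrap an ordinary computation so the caller gets a Deferred<A>.
fun <A> CoroutineScope.lift(block: () -> A): Deferred<A> = async { block() }

// async -> sync: block the calling thread until the value is available,
// the moral equivalent of Scala's Await.result.
fun <A> force(deferred: Deferred<A>): A = runBlocking { deferred.await() }
```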

Adding those conversions would be a mistake, though, because you'd be throwing away all the help your type checker gives you in writing asynchronous code, and in precisely controlling the point at which you want it to become synchronous. Your compiler forces you to keep all the async code in async blocks, so when you accidentally go synchronous deeper in the call stack than you intended, it can give you a helpful error message. With weaker types come weaker protections.

In other words, it's basically the same reason you don't want your compiler to automatically convert strings to integers. When dealing with something as error prone as asynchronous code, you want your compiler to give you as much help as possible.

Karl Bielefeldt
  • C#'s `async` annotation is for the compiler; it's not part of the interface. The important part is the `Task` return type. – Sebastian Redl Mar 24 '17 at 11:59
  • My impression is that most asynchronous code only cares about the return value and a hope that the work executes faster in parallel. I wonder if the operations that depend on the timing of a `Future`, like callbacks, could be polymorphic on non-future types too and just degenerate to synchronous behavior there. Couldn't it be more expressive too if operations that mutate the timing of a `Future` could only operate on the base-level async objects from the system? I'd prefer the explicitness of that anyways. – concat Mar 24 '17 at 14:43
-1

`async` and `await` are just syntactic sugar. You don't have to use them, but the code is cleaner and more readable if you do.

When you mark a function as `async`, the compiler injects a bunch of additional code. You could have written that code yourself, but it always follows the same pattern and can be distracting when reading the code.
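A rough Kotlin sketch of the same idea (the names here are made up): the suspending version is the sugared shape, and the callback-based version is the plumbing you could have written by hand, which the compiler generates for you behind the `suspend` keyword.

```kotlin
import kotlin.coroutines.resume
import kotlin.coroutines.suspendCoroutine

// Made-up callback-based API, standing in for the hand-written version.
fun fetchUser(id: Int, onDone: (String) -> Unit) {
    onDone("user-$id")   // imagine this completing later, on another thread
}

// The sugared shape: callers write straight-line code, and the compiler generates
// the continuation plumbing that resumes it when the callback fires.
suspend fun fetchUserSuspending(id: Int): String =
    suspendCoroutine { cont ->
        fetchUser(id) { result -> cont.resume(result) }
    }
```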

bikeman868