
Consider a simple — and fake — interface:

interface ISuperGetter { Super Get(); }
  • One implementation would get some Super from RAM (sketched below).
  • Another would store what it needs on disk.
  • Yet another could fetch the state needed to initialize the Super from a remote HTTP REST JSON server.
  • A fourth could command a robot arm that seeks through shelves of magnetic tapes and loads the right tape into a mechanical reader.
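
For concreteness, the RAM-based variant might look something like this (Super is just a stub here, and the interface is repeated so the snippet stands alone):

class Super { }

interface ISuperGetter { Super Get(); }

// RAM-based implementation: the Super already lives in memory,
// so Get() returns immediately and synchronously.
class InMemorySuperGetter : ISuperGetter
{
    private readonly Super _super = new Super();

    public Super Get() => _super;
}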

At dev time, it's likely that the RAM-based implementation will be used, but as the software evolves, it is just as likely that the other implementations will have to be developed. As those become increasingly latency-bound (in the order of the list above), the need to make the operation asynchronous will come along...

One issue I have here sits at code level: asynchrony in .NET typically involves a ton of boilerplate, such as adding the async modifier, changing the return type to a Task, appending Async to the method's name, and cluttering the code with await.
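
For example, converting a hypothetical disk-backed getter means something like this (reusing the Super stub from above; Task.Delay stands in for a real awaited read):

using System.Threading.Tasks;

class DiskSuperGetter
{
    // Synchronous version: no ceremony.
    public Super Get()
    {
        return new Super(); // stand-in for a blocking disk read
    }

    // Asynchronous version: async modifier, Task<Super> return type,
    // an Async suffix on the name, and await in the body.
    public async Task<Super> GetAsync()
    {
        await Task.Delay(10); // stand-in for an awaited disk read
        return new Super();
    }
}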

The second (and main) issue I have with the way asynchrony is built into .NET is that I will have to change the interface as well.

interface ISuperGetter { Task<Super> GetAsync(); }

Furthermore, the reason for the change doesn't come from a high-level policy, but from low-level implementation details!

It appears to me this encourages us to make everything asynchronous by default "just in case".

Is .NET's asynchrony really intended to be used this way, or is my understanding erroneous? Are there other asynchronous models out there that are built differently?

lennon310
  • Related (could be interpreted almost as a duplicate): [Aren't the guidelines of async/await usage in C# contradicting the concepts of good architecture and abstraction layering?](https://softwareengineering.stackexchange.com/questions/382486/arent-the-guidelines-of-async-await-usage-in-c-contradicting-the-concepts-of-g) – Doc Brown Sep 25 '19 at 17:36

1 Answer


As those become increasingly latency-bound (in the order of the list above), the need to make the operation asynchronous will come along...

Your reasoning is "requirements in the future might change, so let's design for them now".

Smart. But don't stop there. What about error handling? Sure, some implementations of this interface might throw an exception, but maybe someone in future times is going to want a delegate that's called on error because maybe someone in future times knows how to fix the problem on the fly. So don't stop at:

interface ISuperGetter { Task<Super> GetAsync(); }

You're going to want

interface ISuperGetter { Task<Super> GetAsync(Action onFailure); }

And you know, maybe someone in future times will want to be able to return multiple Supers at once, because it is much more efficient to do batch hits to databases rather than multiple hits, so we'd better write

interface ISuperGetter { Task<IEnumerable<Super>> GetAsync(Action onFailure); }

And so it goes. We can make up stories about what people in future times are going to need all day, and vastly complicate the design of the interface in order to be ready for a day that might never come.

There is a design principle that applies to this situation called YAGNI: You Ain't Gonna Need It. You should design your interfaces to meet the current and immediately anticipated user needs.

Is .NET's asynchrony really intended to be used this way?

No. Asynchronous workflows are intended to be used in programs that are built to handle asynchrony as a matter of course because they need to be responsive to user interactions or make efficient use of resources. Asynchronous workflows are intended to be composed of operations that we know ahead of time to be high latency. If your program does not match those characteristics, don't design for asynchrony. If it does, architect the program for asynchrony from the ground up.

Furthermore, the reason for the change doesn't come from a high-level policy, but from low-level implementation details!

This is an important misunderstanding. An asynchronous program is fundamentally different from a synchronous program at an architectural level. An asynchronous program needs to be built around the notion that all workflows may be "in flight" and partially complete, that some may be in fail states, and that the nodes in the workflow may be arbitrarily re-ordered except where await is used to impose an ordering.
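
As a sketch of that idea, using the Task-returning ISuperGetter from above (the method and variable names are illustrative): two gets are in flight at the same time and may complete in either order; await imposes an ordering only where you ask for one.

using System.Threading.Tasks;

static class Workflow
{
    // Both gets start "in flight" concurrently and may complete
    // in either order; await imposes an ordering only where we ask.
    public static async Task<(Super, Super)> GetBothAsync(ISuperGetter a, ISuperGetter b)
    {
        Task<Super> first = a.GetAsync();   // started, not yet complete
        Task<Super> second = b.GetAsync();  // started concurrently

        Super s1 = await first;             // ordering imposed here
        Super s2 = await second;
        return (s1, s2);
    }
}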

These considerations do not arise from the fact that async methods return a task. They arise from the fact that asynchrony is fundamentally different from a synchronous model of computation.

Are there other asynchronous models out there that are built differently?

Yes. The current model evolved from a much more primitive and difficult model of asynchrony, which in turn evolved from an even more primitive model, and so on. I tell you, these kids today with their terrible music who have never written an asynchronous Windows 3.1 program do not know how good they have it and should get off my lawn. I encourage you to try to write an asynchronous program in an operating system without garbage collection, virtual memory, flat memory, noncooperative multitasking, processes or threads, where asynchrony is represented by hardware interrupt handlers, and see how you like it compared to async/await in C#. Asynchrony today is a utopian dream of powerful tools for building rich workflow topologies.

Eric Lippert
  • We don't need laziness in general, but it comes in handy sometimes, and Haskell went all the way. Why can't a language be asynchronous in its core, just like Haskell is lazy? – Basilevs Oct 19 '21 at 12:46
  • There are prices to be paid for laziness. Two I can think of off the top of my head: you definitely pay in performance (both speed and RAM). You also gain a new kind of storage leak which can kill you yet takes experience to diagnose after the fact or avoid by coding differently before the fact. In fact it makes a lot of debugging more difficult since the mere act of observation in a debugger changes the program behavior. I like Haskell and its laziness but it is not a panacea. – davidbak Oct 19 '21 at 16:24
  • @Basilevs: sure, we could totally build a language where every value was a future-of-T and every operator "natively" understood that when you, say, compute x + y, what you mean is "(await x) + (await y)". A language can be built that way so it is hard to answer your question, which presupposes that a possible thing is impossible. – Eric Lippert Oct 20 '21 at 00:04