8

I'm writing a soft real-time application in C#. Certain tasks, like responding to hardware requests coming in from a network, need to be finished within a certain number of milliseconds; however, it is not 100% mission-critical to do so (i.e. we can tolerate being on time most of the time, and the 1% is undesirable but not a failure), hence the "soft" part.

Now I realize that C# is a managed language, and managed languages aren't particularly suited for real-time applications. However, the speed at which we can get things done in C#, along with language features like reflection and automatic memory management, makes the task of building this application much easier.

Are there any optimizations or design strategies one can take to reduce overhead and increase determinism? Ideally, I would have the following goals:

  • Delay the garbage collection until it is "safe"
  • Allow the garbage collector to work without interfering with real-time processes
  • Thread/process priorities for different tasks

Are there any ways to do these in C#, and are there any other things to look out for with regards to real-time when using C#?

The platform target for the application is .NET 4.0 Client Profile on Windows 7 64-bit. It's currently set to the Client Profile, but that was just the default option and wasn't chosen for any particular reason.

Robert Harvey
9a3eedi
  • [This article](http://blogs.msdn.com/b/ricom/archive/2006/08/22/713396.aspx) might help you out. It's aimed at video game developers, but that's also a kind of soft real-time application. However, the article is a bit old, and while I doubt the basic principles behind how the GC works have changed much, you might want to double-check that it's still up to date. – Doval Aug 27 '14 at 01:01
  • You mention C# but don't mention your target platform or runtime (e.g. .NET on a PC/Unity+Mono/Mono etc) - can you give us a few more details? – J Trana Aug 27 '14 at 03:40
  • @JTrana I've edited with more details – 9a3eedi Aug 27 '14 at 04:49
  • 1) Did you actually notice problems? Thanks to the background GC, your threads should only stop briefly for certain parts of the GC, but many other parts can run in parallel without interference. 2) How many milliseconds are we talking about here? – CodesInChaos Aug 27 '14 at 09:14
  • @CodesInChaos We haven't finished implementation yet, and haven't tested with real hardware. We do however have another C#-based program that works with hardware, and most of the time there aren't any issues. I wasn't planning to optimize early though; I was just curious how C# programmers generally deal with these things. Also, the maximum number of milliseconds before breaking soft real-time constraints is 125ms. However, this amount depends on the hardware (the application is designed to work with different hardware) and we might require tighter time constraints in the future – 9a3eedi Aug 27 '14 at 15:07
  • @Doval The GC of the Compact Framework used by XNA on the Xbox is much worse than the GC in the normal framework. There were significant changes to the GC in .NET 4.0 related to background collection, which should benefit latency. With background collection, the expensive Gen2 collection can run in the background while your application continues working. It can even run Gen0/Gen1 collections while a Gen2 collection is in progress. – CodesInChaos Aug 27 '14 at 15:21

4 Answers

7

The optimization that fulfills your first two bullets is called an Object Pool. It works by

  1. creating a pool of objects when the program starts,
  2. maintaining references to those objects in a list so that they don't get garbage collected,
  3. handing objects to your program from the pool as needed, and
  4. returning objects back to the pool when you're done using them.

You can find an example class that implements an Object Pool using a ConcurrentBag here.
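For illustration, here is a minimal sketch of such a pool built on `ConcurrentBag<T>` (this is not the linked class; the `Func<T>` factory delegate and the warm-up count are simplifications of my own):

```csharp
using System;
using System.Collections.Concurrent;

// Minimal object-pool sketch: objects are created up front and recycled
// instead of being abandoned to the garbage collector.
public class ObjectPool<T>
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public ObjectPool(Func<T> factory, int warmUpCount)
    {
        _factory = factory;
        for (int i = 0; i < warmUpCount; i++)
            _items.Add(_factory());   // step 1: allocate before real-time work begins
    }

    public T Get()                    // step 3: hand out an object from the pool
    {
        T item;
        return _items.TryTake(out item) ? item : _factory();
    }

    public void Return(T item)        // step 4: give the object back for reuse
    {
        _items.Add(item);
    }
}
```

The `ConcurrentBag` itself holds the references, which covers step 2: pooled objects stay reachable and are never collected.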

Thread/process priority can easily be set at runtime. The Thread class has a Priority property; per-thread processor affinity is exposed through the ProcessThread class. The Process class contains similar facilities (PriorityClass and ProcessorAffinity).
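A minimal sketch of both, assuming a dedicated worker for the hardware requests (the thread body and the affinity mask are placeholders):

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class PriorityDemo
{
    static void Main()
    {
        // Run the time-sensitive work on a dedicated, high-priority thread.
        var worker = new Thread(() => { /* respond to hardware requests here */ });
        worker.Priority = ThreadPriority.Highest;
        worker.Start();

        // Raise the whole process's priority class, and optionally pin it to
        // specific cores via an affinity bitmask (0x3 = cores 0 and 1).
        Process self = Process.GetCurrentProcess();
        self.PriorityClass = ProcessPriorityClass.High;
        self.ProcessorAffinity = (IntPtr)0x3;

        worker.Join();
    }
}
```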

Robert Harvey
  • This seems like something that should be done after the application is complete and working, as in "optimize later". Nevertheless, it makes sense to me as a method to increase determinism. – 9a3eedi Aug 27 '14 at 05:28
  • Would any suggestions I make be "optimize earlier?" – Robert Harvey Aug 27 '14 at 05:29
  • I was initially expecting some kind of configuration option for the .NET runtime to change the behaviour of the garbage collector to suit real-time tasks as an answer. I didn't mean to say your answer is incorrect in any way. – 9a3eedi Aug 27 '14 at 05:32
  • The garbage collector has precious few configuration options. It is generally assumed that its behavior is optimal for the vast majority of programs. Note that neither the Windows operating system nor the .NET Framework is intended for hard real-time environments. Yes, I know you said *soft* real-time, but still. – Robert Harvey Aug 27 '14 at 05:34
  • @9a3eedi: With regard to whether an object pool can be bolted on after development is done, I can offer the following advice: (1) you will have to inject an object factory (which doubles as the object pool) into all objects you would like to pool, (2) replace all constructors with factory methods, (3) give each object a recycle method which returns it to the pool-factory, (4) implement your objects to be recyclable: wiped and then rewritten with new information. You may take this into account if you need to design the API interface upfront. – rwong Aug 27 '14 at 06:18
  • In my experience object pools are useful for recycling large arrays (such as buffers). For small objects the runtime tends to do well without manual pooling. – CodesInChaos Aug 27 '14 at 09:15
  • You know, people talk about premature optimization being the root of all evil. And it's true - in many ways. However, patterns like RAII (which this is not, but perhaps [stackalloc](http://msdn.microsoft.com/en-us/library/cx9s2sy4.aspx) might interest you in C# land) are not the sort of thing you just add at the end; object lifetimes should be considered carefully as a pervasive pattern IMO. @rwong, any other advice for us on what went well and what didn't when you've done this in the past? – J Trana Aug 28 '14 at 04:40
  • @JTrana: Please refer to the ["object cesspool anti-pattern" article](http://patrickdelancy.com/2012/07/object-cesspool-anti-pattern/). Summary: (1) not having release method, (2) release method doesn't do its job, (3) release method not being called appropriately, or not at all. – rwong Aug 28 '14 at 18:29
  • @CodesInChaos: Agreed. Anything less than a kilobyte apiece (i.e. arrays of that size) is not worth object-pooling. However, as far as Android is concerned, even the official development guide says to avoid object allocations inside time-sensitive code (especially the `view.onDraw` method), because on Android every allocation carries a tiny risk of triggering a GC, and the consequences of that tiny risk are deemed severe enough for Google to justify issuing such advice. Weird, and hopefully it will go away as software and hardware improve over time. – rwong Aug 30 '14 at 10:11
3

> Delay the garbage collection until it is "safe"

You can do that by setting the GC latency mode to `LowLatency`. For more info, see this answer on SO.
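A rough sketch of how that might look in practice, assuming a hypothetical `criticalWork` delegate: `LowLatency` suppresses expensive full collections while it is in effect (except under memory pressure), so it should only bracket the time-critical region.

```csharp
using System;
using System.Runtime;

static class GcLatency
{
    // Run a time-critical section under LowLatency, then restore the old mode.
    public static void RunTimeCritical(Action criticalWork)
    {
        GCLatencyMode oldMode = GCSettings.LatencyMode;
        try
        {
            GCSettings.LatencyMode = GCLatencyMode.LowLatency;
            criticalWork();
        }
        finally
        {
            GCSettings.LatencyMode = oldMode;  // always restore, even on exceptions
        }
    }
}
```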

svick
  • Excellent. I was looking for something like this. Thanks. I wish I could mark two correct answers – 9a3eedi Sep 01 '14 at 01:16
2

Take the same approach that game development does in a managed environment and keep object creation and destruction to an absolute minimum.

E.g. try to create all the objects likely to be required at the start and pool obsessively, pass by reference wherever possible, and avoid operations that create short-term intermediate objects (see the sketch below).
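A small sketch of that last point, contrasting a per-call allocation with a reusable buffer (the class names, method names, and buffer size are illustrative):

```csharp
using System.IO;

// Naive version: allocates a fresh buffer on every call,
// each of which eventually becomes work for the GC.
class NaivePacketReader
{
    public byte[] ReadPacket(Stream stream)
    {
        var buffer = new byte[4096];
        stream.Read(buffer, 0, buffer.Length);
        return buffer;
    }
}

// Pooled version: one buffer for the reader's lifetime,
// so the hot path produces no per-call garbage at all.
class PooledPacketReader
{
    private readonly byte[] _buffer = new byte[4096];

    public int ReadPacket(Stream stream)
    {
        return stream.Read(_buffer, 0, _buffer.Length);
    }
}
```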

lzcd
  • will changing some classes to (immutable) structs help avoid creating short-term intermediate objects? – 9a3eedi Aug 27 '14 at 04:52
  • The choice between structs and classes generally doesn't have a huge impact on an "object's" lifetime directly (except that structs come with some value-type "singleton" style logic built in). Being immutable may help but it really depends on a case by case basis. – lzcd Aug 27 '14 at 05:03
  • @9a3eedi: using immutable objects actually means that whenever you have to "mutate" something, you have to create a new object with modified values (and probably throw away the old object in exchange for the new). That is exactly the opposite of what you want, since it leaves a lot of memory behind for the GC to clean up. – Doc Brown Aug 27 '14 at 06:37
  • @DocBrown +1. However, GCs are tuned for short-lived objects. You don't pay for objects that are dead when a collection happens (though allocating like crazy will increase the frequency of collections). You're not wrong, but it's worth checking that there's a problem in the first place since pooling amounts to manual memory management and defeats half the purpose of managed code (the other half being avoiding undefined behavior.) – Doval Aug 27 '14 at 11:42
  • @DocBrown Also, objects that the compiler/runtime can prove don't escape their scope get allocated on the stack rather than the heap, so in some cases temporary objects have no effect whatsoever on garbage collection. – Doval Aug 27 '14 at 12:01
  • @DocBrown The thing with immutable structs in C#, though, is that, as far as I understand, they are allocated on the function's stack. There's no garbage collection clean-up involved as a result, but it might mean more copying. – 9a3eedi Aug 29 '14 at 01:20
  • @9a3eedi: I think you are correct (as long as those structs are created at a function's scope, and as long as no boxing/unboxing occurs). So in certain situations a struct may indeed help. But I think this is better discussed using a real-world example. – Doc Brown Aug 29 '14 at 05:44
  • @Doval AFAIK the CLR doesn't have this optimization, instances of reference types are always allocated on the heap. – svick Sep 01 '14 at 04:10
0

Bear in mind that the target environment (Windows) is a pre-emptive multi-tasking O/S. In other words, your soft real-time process will inevitably be pre-empted at some point, which introduces latencies besides those of the .NET garbage collector. You can mitigate this by reducing the time slice (quantum) value (default is something like 16ms) and elevating the priority of your process (but there are practical limits), yet pre-emptions will still occur because other essential processes need to run. None of this has anything to do with the programming language; it's fundamental to the O/S.

GC issues aside, your application running on an out-of-the-box Windows machine will probably be able to deliver latencies of a few milliseconds MOST OF THE TIME. It certainly can't be guaranteed 100% of the time, or anything close. Even with tuning (short quantum, high priority), you'll still get occasional high latencies. It's the nature of the beast.
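If you want to observe that jitter yourself, a quick (and unscientific) probe like the following will usually show occasional large overshoots of a nominal 1 ms sleep, even on an idle machine:

```csharp
using System;
using System.Diagnostics;
using System.Threading;

class LatencyProbe
{
    static void Main()
    {
        var sw = new Stopwatch();
        double worstMs = 0;
        for (int i = 0; i < 10000; i++)
        {
            sw.Restart();
            Thread.Sleep(1);                           // ask for ~1 ms...
            double elapsed = sw.Elapsed.TotalMilliseconds;
            if (elapsed > worstMs) worstMs = elapsed;  // ...record the worst wake-up
        }
        Console.WriteLine("Worst observed wake-up: {0:F2} ms", worstMs);
    }
}
```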

Bottom line: if you need guaranteed response times, especially on the order of a few milliseconds, Windows is probably not the best choice for the delivery O/S.

Zenilogix
  • I agree. I am willing to accept a few milliseconds of delays here and there, hence the "soft real-time" part, but ideally I'd like to minimize as much as possible. Perhaps down the line I can port the application to Linux (with Mono) and maybe that'll allow me to fine-tune these things – 9a3eedi Sep 14 '16 at 12:38