
I hope I can explain my question clearly. I'm planning to do some real-time 2D grayscale image acquisition, and after reading some articles, I noticed that many use buffers, multithreading, or even parallel processing. In the figure below, the top shows how a single-threaded program performs one task at a time, and the bottom shows how idle time can also be used to speed up image formation:

[figure: top — a single-threaded program performing capture, processing, and display one task at a time; bottom — idle time reused to overlap acquisition with processing]

Basically, my program needs to send trigger signals to a line scan camera, receive the resulting lines, process them into an image frame, and plot that frame in real time. Another option is to receive a whole frame at each trigger.

So let's say we have 1024 points in X and 1024 in Y on the plane to be scanned. What would be an efficient way to acquire the next frame while processing the current one at the same time? The figure above suggests achieving this with multithreading. There is this paper about it, but it's rather long.

I want to use C# with async/await since it will be a GUI application. In that case, do we still need explicit threading, or will the Task library take care of it? I'd appreciate it if anyone has experience with such a situation; an example mimicking it would help a lot.

GNZ
  • How real time is “real time”? – Telastyn Jan 01 '22 at 16:17
  • Anything between 5 to 50 frames per second on the GUI is adequate. Some frames can be skipped or averaged before plotting the grayscale frame. – GNZ Jan 01 '22 at 16:26
  • Can you describe the physical setup as well as the current latencies of each step and their dependencies? (e.g. decision Y cannot be made until information X has been processed) What image processing algorithms are involved? What is the processing time? Is the processing algorithm already implemented using state-of-the-art techniques and written in a high performance language and running on a sufficiently fast computing platform? – rwong Jan 01 '22 at 21:07

2 Answers


I would recommend you use thread-safe, in-memory queues for this kind of thing. Here's a high-level visualization of the idea:

[diagram by JimmyJames: capture thread → queue → three processing threads → queue → display thread]

The above shows a configuration with 5 threads. One thread does the image capture, 3 threads do image processing, and one thread updates the display. I'm showing more threads in the processing stage on the assumption that this is where the most work happens. To be clear, there is nothing magical about the number 3 here; the actual number of threads will depend on a number of factors, especially the number of CPUs available. You will most likely not want more threads than CPUs, and you probably want to keep at least one CPU free to handle the capture and display threads.
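A minimal sketch of this layout in C#, using `System.Threading.Channels` as the thread-safe queues. The frame size, channel capacities, frame count, and the pass-through "processing" step are all stand-ins for illustration:

```csharp
using System;
using System.Linq;
using System.Threading.Channels;
using System.Threading.Tasks;

class Pipeline
{
    // Each frame carries a sequence index so the display stage can
    // detect out-of-order arrivals later.
    record Frame(long Index, byte[] Pixels);

    static async Task Main()
    {
        // Bounded channels give back-pressure: if processing falls behind,
        // the capture stage waits instead of exhausting memory.
        var rawFrames = Channel.CreateBounded<Frame>(8);
        var processedFrames = Channel.CreateBounded<Frame>(8);

        // Leave roughly one core each for the capture and display threads.
        int workers = Math.Max(1, Environment.ProcessorCount - 2);

        var capture = Task.Run(async () =>
        {
            for (long i = 0; i < 100; i++)   // stand-in for the triggered camera loop
                await rawFrames.Writer.WriteAsync(new Frame(i, new byte[1024]));
            rawFrames.Writer.Complete();
        });

        var processors = Enumerable.Range(0, workers).Select(_ => Task.Run(async () =>
        {
            await foreach (var frame in rawFrames.Reader.ReadAllAsync())
            {
                // Stand-in for the real per-frame image processing.
                await processedFrames.Writer.WriteAsync(frame);
            }
        })).ToArray();

        var display = Task.Run(async () =>
        {
            long lastShown = -1;
            await foreach (var frame in processedFrames.Reader.ReadAllAsync())
            {
                if (frame.Index <= lastShown) continue;  // drop stale frames
                lastShown = frame.Index;
                // A GUI would marshal the frame to the UI thread here.
            }
            Console.WriteLine($"last frame shown: {lastShown}");
        });

        await capture;
        await Task.WhenAll(processors);
        processedFrames.Writer.Complete();
        await display;
    }
}
```

Completing the writer side of each channel is what lets the readers' `ReadAllAsync` loops terminate cleanly, so shutdown falls out of the design for free.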

The last step is a little tricky. If you are processing images on multiple threads, you will need to account for the situation where the last queue contains processed images that are out of order. I assume you don't want to display things out of order, i.e. show an older image after a newer one. You'll therefore want to mark each image capture with a sequential index and retain it on the processed result. You have a couple of options here:

  • As you process from the second queue, drop frames if the index is less than what you have last displayed
  • Wait for the next frame and show each one in order

The former option is easier and can prevent pauses in the output. The latter can be achieved by repeatedly scanning the queue to find the next frame, or by using properties/attributes and search functionality in the queue. Another way is to try to force the messages to be added to the queue in order, but you need to take care to avoid blocking the processing threads. I don't see much advantage to that approach, but there may be a clever variant that isn't coming to mind.
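The second option can also be sketched as a small reordering buffer that holds early arrivals and releases a run of consecutive indices as soon as the next expected one appears. The type and its API here are hypothetical, and it assumes a single display thread consumes it (the buffer itself is not thread-safe):

```csharp
using System;
using System.Collections.Generic;

// Restores capture order on the display side: frames that arrive early are
// parked until every earlier index has been seen.
class ReorderBuffer<T>
{
    private readonly SortedDictionary<long, T> _pending = new();
    private long _nextIndex;

    // Returns the frames that can now be shown, in capture order.
    public IEnumerable<T> Add(long index, T frame)
    {
        _pending[index] = frame;
        while (_pending.TryGetValue(_nextIndex, out var ready))
        {
            _pending.Remove(_nextIndex);
            _nextIndex++;
            yield return ready;
        }
    }
}

class Demo
{
    static void Main()
    {
        var buffer = new ReorderBuffer<string>();
        // Frame 1 arrives before frame 0, as can happen with parallel workers.
        foreach (var (i, name) in new[] { (1L, "B"), (0L, "A"), (2L, "C") })
            foreach (var frame in buffer.Add(i, name))
                Console.Write(frame);   // prints "ABC": B is held until A arrives
        Console.WriteLine();
    }
}
```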

The nice thing about this kind of design is that you can dynamically adjust the number of threads based on need. If you are delivering this as a package where you don't know the hardware, the end client can increase or decrease the threads according to available resources, potentially automatically.

JimmyJames

You will probably need some manner of multithreading. Async/await will keep the UI responsive, but it won't by itself keep the computer constantly reading from the camera. And depending on what UI framework you're using, you might get weird artifacts as you render the image.

Generally, systems will use a double buffering technique. A search can provide more details, but the basic concept is that you have two images: one is "active" and displayed while the other is being rendered. Then you flip them. The flip is very quick, and since the whole image is rendered before it's shown, it cuts down on visual artifacts.
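A minimal C# sketch of the flip. The buffer size and member names are illustrative; it assumes a single render thread writes `Back` and calls `Flip`, while the UI only ever reads `Front`:

```csharp
using System;
using System.Threading;

// Two frame buffers: the render thread fills the "back" buffer while the
// UI shows the "front" one; Flip() swaps them with one atomic exchange.
class DoubleBuffer
{
    private byte[] _front = new byte[1024 * 1024];  // 1024x1024 grayscale, illustrative
    private byte[] _back  = new byte[1024 * 1024];

    public byte[] Back  => _back;                      // written by the render thread only
    public byte[] Front => Volatile.Read(ref _front);  // read by the UI

    public void Flip()   // call from the render thread once a frame is complete
    {
        var finished = _back;
        _back = Interlocked.Exchange(ref _front, finished);
    }
}

class Demo
{
    static void Main()
    {
        var buffers = new DoubleBuffer();
        buffers.Back[0] = 42;                 // render into the back buffer
        buffers.Flip();                       // publish the finished frame
        Console.WriteLine(buffers.Front[0]);  // prints 42
    }
}
```

Because only references are swapped, the flip costs the same regardless of image size.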

So the main thread would handle the UI: kicking off the second thread, handling the callback to flip the images, and handling errors from the second thread (which is the trickiest part). The second thread would constantly read the camera and call back when a full frame is processed.
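A rough sketch of that division of labor using `Task.Run` and a callback. The camera read is faked and the loop is capped at 10 frames purely for the demo; a real GUI would loop until cancelled and marshal the callback to the UI thread (e.g. via `SynchronizationContext.Post` or `Control.BeginInvoke`):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class AcquisitionDemo
{
    // Camera loop on a pool thread; onFrame is the "full frame ready" callback.
    static Task<int> RunAcquisitionAsync(Action<byte[]> onFrame, CancellationToken ct)
    {
        return Task.Run(() =>
        {
            int produced = 0;
            while (!ct.IsCancellationRequested && produced < 10)  // demo cap
            {
                var frame = new byte[1024];  // stand-in for reading a full frame
                onFrame(frame);              // a GUI would post this to the UI thread
                produced++;
            }
            return produced;
        });
    }

    static async Task Main()
    {
        int received = 0;
        using var cts = new CancellationTokenSource();
        // Awaiting the task is also where exceptions from the camera loop
        // surface, which is the error-handling part mentioned above.
        int produced = await RunAcquisitionAsync(
            _ => Interlocked.Increment(ref received), cts.Token);
        Console.WriteLine($"{produced} produced, {received} received");
    }
}
```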

Unless the processing is particularly weird or the target machine has limited processing power this should provide good enough results for soft real time needs.

Telastyn