2

The context

I have two components, A & B, which communicate through events.

Component B is responsible for long-running background processes, and it listens to the following event:

startProcess(uniqueID)

Component A is sending the startProcess events.

The issue

I need the IDs sent by component A to be unique, because otherwise component B breaks.

Possible solutions

When component B receives an ID that has already been assigned to another background job, it does not accept it and therefore rejects the initiation of a new background process.

The problem here is that component A does not know about the rejection, in contrast to an HTTP request/response design, where the error is communicated through the response.

Question

Is there a better way to handle this, other than component B silently rejecting the new process? One definite solution would be to forget about events and do this the request/response way. But in general, how do event-driven systems handle these kinds of scenarios?

Last

What I am doing at the moment is having component A bear the responsibility of sending unique IDs. Is that the solution I am after?
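To make the approach I'm currently taking concrete: a minimal sketch of component A generating IDs that are unique by construction (using a random UUID, so component B should never see a duplicate; the event shape here is a hypothetical dict, not my actual wire format):

```python
import uuid

def start_process_event():
    """Build a startProcess event whose ID is unique by construction.

    uuid4 yields a 122-bit random identifier, so the collision
    probability is negligible and component B never needs to
    reject a duplicate in practice.
    """
    return {"type": "startProcess", "id": str(uuid.uuid4())}

first = start_process_event()
second = start_process_event()
assert first["id"] != second["id"]  # distinct IDs on every call
```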

Cap Barracudas
  • 1,001
  • 9
  • 17
  • 1
    You are not building an event-driven system. StartProcess is a command, and you are coupling two systems this way. If B needs IDs to be unique, maybe B should be responsible for creating IDs in the first place. – Rik D Dec 15 '19 at 16:35
  • raise a "System B Errored" event and have another consumer handle it – Ewan Dec 17 '19 at 15:28

2 Answers

2

You are asking for a dead letter queue.

I am not sure what drives the uniqueness of the ID on component B. Are you trying to ensure one-time only delivery, or is that a requirement internal to component B? For the former, there are messaging frameworks which can deliver on that promise. For the latter, you can generate that unique ID internally.
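As a rough illustration of surfacing rejections as events rather than dropping them silently (an in-memory stand-in for a dead letter queue; the `EventBus`, `ComponentB`, and `processRejected` names are all hypothetical):

```python
from collections import defaultdict

class EventBus:
    """Toy in-process pub/sub bus standing in for real messaging infra."""
    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        for handler in self.handlers[event_type]:
            handler(payload)

class ComponentB:
    """Rejects duplicate IDs by publishing an event instead of failing silently."""
    def __init__(self, bus):
        self.bus = bus
        self.active = set()
        bus.subscribe("startProcess", self.on_start)

    def on_start(self, payload):
        uid = payload["id"]
        if uid in self.active:
            # Emit a rejection event that A (or any consumer) can handle.
            self.bus.publish("processRejected",
                             {"id": uid, "reason": "duplicate"})
            return
        self.active.add(uid)
        # ... kick off the long-running background job here ...

bus = EventBus()
b = ComponentB(bus)
rejections = []
bus.subscribe("processRejected", rejections.append)

bus.publish("startProcess", {"id": "job-1"})
bus.publish("startProcess", {"id": "job-1"})  # duplicate: rejected
assert rejections == [{"id": "job-1", "reason": "duplicate"}]
```

With a real broker, the `processRejected` channel would typically be a dead letter queue that component A or a monitoring consumer subscribes to.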

Martin K
  • 2,867
  • 6
  • 17
  • Let us say there are 10 B components (B1-B10). Each of them listens to the startProcess event, and after their background job is finished they all publish to a C component, and all their calculations (background processes) are stored inside C. The ID serves as an identifier for C to store all related calculations together. That is what drives the uniqueness of the ID provided by A. I understand what you are saying; however, I cannot find another way to bind all these processes B1-B10 together from the whole system's perspective apart from the uniqueness of an ID. Can there be another solution? – Cap Barracudas Dec 16 '19 at 09:44
  • I'm unclear if you have one or more processes per ID. In your initial post I thought I understood you want only one process started and reject any others. In your comment above it sounds more like map/reduce where several nodes run the identical algorithm on different slices of the data, and then the data gets aggregated. What are the conditions under an event gets rejected, and what is the action taken following the rejection? – Martin K Dec 16 '19 at 22:32
0

My two cents, based on a similar problem that I faced recently, assuming that A and B make their best effort to do their tasks within their scope:

  • If the reliability of the whole A-B-C system depends only on the uniqueness of the ID, then A must be strengthened to create unique IDs, and you can keep your current approach (blind delivery from A to B).
  • On the other hand, if the result of the process performed by B matters (success/failure), A must take on an orchestrator role, perhaps using a queue in A to keep track of the IDs sent and taking an action depending on the result that B provides (if the result is an error: send a notification, log it, retry x times, stop sending new IDs, etc., whatever error handling best applies in your case).
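A minimal sketch of the second option, the orchestrator role (all names here, such as `ComponentA`, `pending`, and `on_result`, are illustrative assumptions, and the retry policy is just one example of an action to take):

```python
class ComponentA:
    """Tracks sent IDs and reacts to result events from B."""
    MAX_RETRIES = 3

    def __init__(self, send):
        self.send = send      # callable that delivers events to B
        self.pending = {}     # id -> retry count for in-flight jobs

    def start(self, uid):
        self.pending[uid] = 0
        self.send({"type": "startProcess", "id": uid})

    def on_result(self, event):
        uid = event["id"]
        if event["status"] == "success":
            self.pending.pop(uid, None)
        elif self.pending.get(uid, 0) < self.MAX_RETRIES:
            self.pending[uid] += 1
            self.send({"type": "startProcess", "id": uid})
        else:
            self.pending.pop(uid, None)  # give up: log / notify here

sent = []
a = ComponentA(sent.append)
a.start("job-1")
a.on_result({"id": "job-1", "status": "error"})    # triggers one retry
assert len(sent) == 2
a.on_result({"id": "job-1", "status": "success"})  # clears the job
assert a.pending == {}
```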

I hope it helps. Regards.

MarioG
  • 1
  • 1