I agree with the posted answer that you are overengineering your approach. Additionally, there are several options here, and you've been quite light on details and considerations that would help decide between those options.
But I happen to have worked on a similar problem not too long ago, so I wanted to give you a real world example of how your issue can be tackled.
Backend
In our case, we were returning a series of events of all types (user created, user updated, ...) but it had to be a single list, without specific filters (other than pagination).
Because there were myriad event types, and because by design they were kept as minimal as possible, we opted to serialize the event data and store it that way. This means that our data store didn't have to be updated every time a new event was developed.
A quick example. These were the captured events:
public class UserCreated
{
    public Guid UserId { get; set; }
}

public class UserDeleted
{
    public Guid UserId { get; set; }
}
Note that our events were truly kept minimal. You'd end up with more data in here, but the principle remains the same.
And instead of storing these directly in a table, we stored their serialized data in a table:
public class StoredEvent
{
    public Guid Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string EventType { get; set; }
    public string EventData { get; set; }
}
EventType contained the type name (e.g. MyApp.Domain.Events.UserCreated), and EventData contained the serialized JSON (e.g. { "UserId": "1c8e816f-6126-4ceb-82b1-fa66e237500b" }).
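For illustration, capturing an event as a StoredEvent row might look roughly like this. The Wrap helper is hypothetical (not part of the original code), and it uses Json.NET, matching the deserialization shown later; here the assembly-qualified type name is stored so the type can be resolved again on the consumer side:

```csharp
using System;
using Newtonsoft.Json;

public class UserCreated
{
    public Guid UserId { get; set; }
}

public class StoredEvent
{
    public Guid Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string EventType { get; set; }
    public string EventData { get; set; }
}

public static class EventStore
{
    // Hypothetical helper: capture any event object as a StoredEvent row.
    public static StoredEvent Wrap(object evt)
    {
        return new StoredEvent
        {
            Id = Guid.NewGuid(),
            Timestamp = DateTime.UtcNow,
            // Assumption: storing the assembly-qualified name so that
            // Type.GetType can resolve it later without extra lookups.
            EventType = evt.GetType().AssemblyQualifiedName,
            EventData = JsonConvert.SerializeObject(evt)
        };
    }
}
```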
This meant that we wouldn't need to update our data store for each event type that was added, instead being able to reuse the same data store for all events, since they were part of a single queue anyway.
Since these events did not need to be filtered (which is also one of your requirements), our API never had to deserialize the data to interpret it. Instead, our API simply returned the StoredEvent data (well, a DTO, but with the same properties) to the consumer.
This concludes how the backend was set up, and it directly answers the question you're posing here.
In short, by returning two properties (i.e. the serialized event data and the specific type of event), you are able to return a large variation of event types in a single list, without needing to update this logic whenever a new event type would be added. It's both future-proof and OCP friendly.
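To make that concrete, a paged feed over that table can be sketched framework-agnostically as a plain method (the DTO and the GetPage name are illustrative, not the actual API code); the point is that the API orders and pages the rows without ever touching EventData:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class StoredEventDto
{
    public Guid Id { get; set; }
    public DateTime Timestamp { get; set; }
    public string EventType { get; set; }
    public string EventData { get; set; }
}

public static class EventFeed
{
    // Illustrative paged read: return stored events as-is, ordered by
    // time. EventData stays an opaque string throughout.
    public static List<StoredEventDto> GetPage(
        IEnumerable<StoredEventDto> store, int page, int pageSize)
    {
        return store
            .OrderBy(e => e.Timestamp)
            .Skip(page * pageSize)
            .Take(pageSize)
            .ToList();
    }
}
```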
The next part focuses on the particular example of how we chose to consume this feed in our consumer applications. This may or may not match with your expectations - it's just an example of what you can do with this.
How you design your consumers is up to you. But the backend design discussed here would be compatible with most if not all ways you could design your consumers.
Frontend
In our case, the consumer was going to be another C# application, so we developed a client library that would consume our API, and would deserialize the stored events back into their own respective event classes.
The consumer would install a NuGet package we made available, which contained the event classes (UserCreated, UserDeleted, ...) and an interface (IHandler&lt;TEventType&gt;) that the consumer would use to define how each event needed to be handled.
Internally, the package also contained an event service. This service would do three things:
- Query the REST API to fetch the events
- Convert the stored events back to their individual classes
- Send each of these events to their registered handler
Step 1 is nothing more than an HTTP GET call to our endpoint.
Step 2 is surprisingly simple when you have the type name and the data. Note that JsonConvert.DeserializeObject needs an actual Type, so the stored name has to be resolved first (Type.GetType expects an assembly-qualified name unless the type lives in the current or core assembly):

var eventType = Type.GetType(storedEvent.EventType);
var originalEvent = JsonConvert.DeserializeObject(storedEvent.EventData, eventType);
Step 3 relied on the consumer having defined handlers for each type they're interested in. For example:
public class UserEventHandlers : IHandler<UserCreated>, IHandler<UserDeleted>
{
    public void Handle(UserCreated e)
    {
        Console.WriteLine($"User {e.UserId} was created!");
    }

    public void Handle(UserDeleted e)
    {
        Console.WriteLine($"User {e.UserId} was deleted!");
    }
}
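Putting the three steps together, the handler-dispatch part of such an event service can be sketched as below. This is a minimal illustration, not the actual package code: the EventService, Register, and Dispatch names are assumptions, and JSON/HTTP concerns are left out so only the routing mechanism is shown. Events whose type has no registered handler are simply ignored, which is what gives the backwards compatibility described next:

```csharp
using System;
using System.Collections.Generic;

// The handler contract the package exposes to consumers.
public interface IHandler<TEvent>
{
    void Handle(TEvent e);
}

// Illustrative dispatcher: maps each event type to a registered handler.
public class EventService
{
    private readonly Dictionary<Type, Action<object>> _handlers = new();

    // Register a handler for one event type.
    public void Register<TEvent>(IHandler<TEvent> handler)
    {
        _handlers[typeof(TEvent)] = e => handler.Handle((TEvent)e);
    }

    // Step 3: route an already-deserialized event to its handler,
    // silently skipping types no handler was registered for.
    public void Dispatch(object evt)
    {
        if (_handlers.TryGetValue(evt.GetType(), out var handle))
            handle(evt);
    }
}
```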
If a consumer wasn't interested in a specific event type, they would simply not create a handler for that type and therefore any events of that type would effectively be ignored.
This also kept things backwards compatible. If a new event type was added tomorrow, but this consumer wouldn't be interested in it, then you could keep this consumer untouched. It wouldn't break because of the new event type (it would just ignore those new types), and it wouldn't force you to redeploy your application.
The only real cause for redeployment would be if a change was made to the event types that the consumer was actually interested in, and that's logically inevitable.