cqrs – Specification Pattern crossing bounded contexts – Domain-Driven Design

I’m trying to understand how to implement the Specification pattern with good practice within my domain-driven design project.

I have a few questions:

For example: is verifying that a customer is subscribed before updating data on an aggregate part of a business rule, and should it sit in the domain layer?

Can specifications cross bounded contexts? For example, I have a Subscription bounded context containing a specification called SubscriptionIsActive. In a second bounded context, let’s call it PropertyManagement, I would like to call the SubscriptionIsActive specification, since a User needs to be a subscriber to create a Property aggregate. My specification here crosses bounded contexts, and I don’t think that’s the right choice. Any recommendations or tips?

Where should the specification be instantiated when we want to use it: the application layer (which contains commands and queries, since we use CQRS) or the domain layer, within the aggregate root?

Finally, where should access control (e.g. checking that a User has the rights to edit some aggregate) sit in a domain-driven design: the domain layer, or the application layer, in services before calling the aggregate?
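For context, by “specification” I mean a small predicate object along these lines (a minimal sketch; only the name SubscriptionIsActive comes from my actual domain, the rest is illustrative):

```python
from abc import ABC, abstractmethod

class Specification(ABC):
    """A reusable business-rule predicate over a candidate object."""

    @abstractmethod
    def is_satisfied_by(self, candidate) -> bool:
        ...

class SubscriptionIsActive(Specification):
    """Defined in the Subscription bounded context."""

    def is_satisfied_by(self, subscription) -> bool:
        return subscription.status == "active"
```

The second question above is essentially whether PropertyManagement may depend on this class directly, or should define its own rule fed with data obtained from the Subscription context.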

Thanks

cqrs – How to detect broken indexes in Lucene?

I am the main author of Squidex, a headless CMS (https://github.com/squidex/squidex).

Squidex has a CQRS-like architecture, so when you create content, a new ContentCreated event is emitted. An event consumer listens to all content events and populates the Lucene indexes.

Unfortunately there is only a single event consumer for the whole application, not one per project. But some installations, including the Squidex cloud offering, can have dozens to thousands of projects.

Let’s say a user performs a search and does not get the expected results. This could happen because an event has somehow been missed. Of course it would be a bug for an event not to be handled by the event consumer, but if it happens, it needs to be handled properly afterwards.

But I cannot just restart the event consumer, because then it would start from the beginning for all events and projects. So it would be great if I could somehow detect that the index is very likely broken and just recreate that one index. The question is “how?”. The index structure is not corrupt, and the number of index entries could even be correct, for example when an update event has been missed.
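To make the “how?” slightly more concrete, the simplest heuristic I can think of is comparing counts between the write store and the index (the interfaces below are hypothetical), but as noted above it cannot catch a missed update event, because the document count stays the same:

```python
def index_looks_broken(project_id, content_store, index) -> bool:
    """Heuristic only: detects missed create/delete events by comparing
    counts, but cannot detect missed *update* events."""
    expected = content_store.count_contents(project_id)  # write-side truth
    actual = index.num_docs(project_id)                  # documents in Lucene
    return expected != actual
```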

Of course I could also add something to the UI to recreate the full-text index per project manually, but it would be great if it just worked.

cqrs – Event Sourcing – Multiple events or a single one for a change on one aggregate?

I have a checklist system where we are implementing CQRS/ES (Event Sourcing). We have a command

updateStatus(taskId: string, status: boolean)

to mark a task or sub-task as completed. If I receive a command that a sub-task is completed, and all its sibling sub-tasks are also completed, I have to mark the parent task as completed as well. So in the example below (sub-tasks 1-3 of task A):

  • ( ) task A – open
    • ( ) task 1 – open
    • (*) task 2 – completed
    • (*) task 3 – completed

Tasks A and 1 are both open initially. Then, when I receive the command

updateStatus(task1, completed)

the CommandHandler needs to generate an event taskCompleted(task1).

My question is which of these is the correct CQRS/ES approach:

  • Generate a single event: taskCompleted(task1)
  • Generate two events: taskCompleted(task1), taskCompleted(taskA)

In the first option I would expect the consumers of the event to work out that the parent aggregate should also be marked as completed. In the second, the command handler takes care of it.

The major downside of option 1 is more processing in the event consumers and the deeper knowledge of the aggregate they need. Another disadvantage is poorer re-use of the events (e.g. say we have logic for sending an email to the task owner when a task is completed; with option 2 there would simply be a second event handler which just listens for these events and acts on them, without knowing the full logic).

The major downside of option 2 is a much larger number of events.

Any suggestions on which is the more correct approach using CQRS/ES?
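For concreteness, the difference between the two options inside the command handler might be sketched like this (the checklist API is made up for the illustration):

```python
def handle_update_status(task_id: str, checklist) -> list:
    """Returns the list of events the command handler would emit."""
    events = [("taskCompleted", task_id)]

    # Option 2: the command handler also emits an event for the parent
    # task when all sibling sub-tasks are now completed.
    parent = checklist.parent_of(task_id)
    if parent and all(checklist.is_completed(s) or s == task_id
                      for s in checklist.subtasks_of(parent)):
        events.append(("taskCompleted", parent))

    return events  # option 1 would return only the first event
```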

design patterns – CQRS for avoiding heavy joins

I’ve been wondering what a perfect use case for CQRS looks like, where the benefits outweigh the complexity and cost that come with the package. To better understand it, I want to share a theoretical use case so we can see together whether it’s a good fit for CQRS or not, and if not, what alternatives would be good in this case.

Let’s say we have the following very simple model:

(diagram: simple model)

As we can see, we have departments which have multiple students, who can pass exams on subjects which are classified into categories. Very simple.

Let’s also say our solution has a screen where we want to display all this information. Here’s our ugly UI for this page:

(screenshot: our ugly UI)

As you can see, the information shown in the table must be extracted from all the entities shown above. So we have a lot of joins.

Suppose that our client has a very successful online school with millions of students, and they do not tolerate performance issues, so the solution must be as fast as possible.

It’s clear that there are multiple joins when displaying this page, so to improve overall read performance we went ahead and started experimenting with CQRS.
We add a new database for read operations which stores all of the above information in a single row, so the read model for our view is as follows:

(diagram: the flattened read model)

This will definitely make reads faster, but it will also make things more complicated. For example, we need to publish events that keep the read database in sync with the write database, etc.
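The synchronisation would be something along these lines (event and field names are invented for the sketch): an event handler that flattens the joined entities into one row of the read database:

```python
def on_exam_passed(event, read_db, write_db):
    """Projects one exam result into a single flat row in the read
    database, so the page can be served without any joins."""
    student = write_db.students.get(event.student_id)
    subject = write_db.subjects.get(event.subject_id)
    read_db.exam_rows.upsert({
        "student_name": student.name,
        "department": student.department_name,   # already resolved, no join
        "subject": subject.name,
        "category": subject.category_name,
        "grade": event.grade,
    })
```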

So my question is: in this particular use case, is CQRS beneficial?

event sourcing – What are the problems with command handlers returning data in CQRS?

This allows the client to update its representation of the affected resources, without having to perform a follow up query immediately

The catch is in the implication here. What you’re effectively saying is:

I have to fire one fewer request if I merge the two requests (i.e. the command and the subsequent query) into one.

What you’re saying is not wrong, but you are violating the core tenet of CQRS, the one it is literally named after: Command Query Responsibility Segregation.

You are correct that merging these two requests saves your frontend a small bit of overhead from firing and waiting for a second web request (though we can definitely argue about whether this overhead is significant or negligible, which I’ll get into in a bit).

If you’re dealing with a slow network connection, this difference can be non-negligible. e.g. I’ve developed 3G mobile software where network requests were minimized as much as possible due to spotty connections.

However, CQRS isn’t focused on optimizing the performance of the frontend, it’s focused on maintainability and scalability of the backend. This will also benefit the frontend, but in an indirect way. CQRS allows for scalability of your read store (since it is then separated from your write store), thus lowering the overall time of your second request; instead of preventing you from having to fire that second request.


I do want to point out here that for connections with no strict limitations, the cost of performing a second call is negligible and not reasonably spotted by an end user. If there is a noticeable lag due to a second request being fired, on a good connection, that actually suggests that your system should be scaled up (as the requests aren’t being handled in a reasonable time), which is what CQRS helps you with.
If this is the situation you find yourself in, then undoing the command/query segregation is effectively perpetuating your performance issues instead of improving them.


Do you have to use CQRS? Of course not. Just like any other principle or pattern, it exists to fix a particular problem. If the problem doesn’t exist in your scenario (or is not considered a problem), then the principle/pattern is not needed.

But your CQRS “variation” is actually undoing what is essentially the first and only commandment of CQRS: separating your commands (writes) from your queries (reads).

Does that mean you shouldn’t do what you’re doing? No, not necessarily. But I wouldn’t call it CQRS anymore as it’s quite the opposite.


which possibly would return data from the out-of-date read model, since read models can be updated asynchronously

This is a cart-before-horse situation. If you don’t want to deal with the consequences of having your read store updated asynchronously, then don’t asynchronously update your read store.

It sounds facetious but it really is as simple as that. Asynchronicity has its upsides and its downsides (just like everything), and if you don’t want the downsides, then don’t do it.

domain driven design – CQRS denormalized data query with multiple aggregates

Let’s say there’s a domain that’s similar to Reddit.

Aggregate roots are

Board // you can ignore this
  - boardId
Post
  - postId
  - userId
User
  - userId
  - username

where each aggregate emits

Post
  - PostCreated
User
  - UserCreated
  - UsernameUpdated

For the query side, the query will try to fetch a page of posts in a specific board.

Given this, the denormalized data should look like the below:

posts: [{
  title: 'some post title',
  body: 'some post body',
  author: 'user123',
}]

Now in the denormalized database, I’d create a post entry when the PostCreated event is received. The received event holds a userId.

To populate the author field of the read model above from that user ID, I can do one of the following:

  1. Read the username from the existing denormalized User data, and save the post read model with the username.
    • This requires updating ALL posts by that user when a UsernameUpdated event is handled.
  2. Create a join column between User and Post, and when the query is requested, join the tables to populate the author field with the username.

Is it so obvious that the second method is the way to do it? What confuses me is that the denormalized database feels almost like a giant monolithic database (thinking of adding other aggregate roots’ events to the read model, e.g. Board). Or is denormalized data just like this?
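To illustrate what worries me about option 1: the UsernameUpdated handler has to fan out over every post of that user (sketch with a hypothetical store API):

```python
def on_username_updated(event, posts_store):
    """Option 1: the author name is copied into every post row, so a
    username change must rewrite ALL posts by that user."""
    for post in posts_store.find_by_user(event.user_id):
        post["author"] = event.new_username
        posts_store.save(post)
```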

cqrs – Merging aggregates with Event sourcing

I’m currently evaluating Event Sourcing and CQRS for an implementation of a new business requirement at my day job. While I can’t really speak about the actual business problem, I can describe my problem using the domain described in this Kata dealing with quiz games.

I think I got the general idea of Event Sourcing and how CQRS links to it. However, all examples I can find use domains with clear separations between aggregates as well as between different instances of the same aggregate (in the Kata mentioned above, quizzes and games have a clear relationship. There’s no interdependence between different quizzes or different games).

The problem

In my case I have the problem that it must be possible to merge different instances of the same aggregate (in our sample domain this could mean that it must be possible to merge different quizzes together into one quiz) as well as undoing this merge later on (reconstructing the two original quizzes from the merged one).

This constraint adds quite some complexity when it comes to constructing the current state of an aggregate, because it’s necessary to read the whole event stream from the beginning to be sure that all relevant events are taken into consideration. It’s not possible to partition the event stream in a useful way because it’s impossible to tell which aggregates will be merged later in the future. It might even be a problem when the event stream gets partitioned, because the temporal order of related events gets lost.
From what I understand, partitioning the event stream allows for a fast provision of the events that are necessary to build up the current state of an aggregate. For instance, if I want to know the current state of the quiz with ID 124ecf, I technically could filter the event streams to just have the events for this exact ID which would drastically reduce the number of events. If this is not possible, like in my case, reading the event stream ad hoc to recreate the state of an aggregate will become very slow and impractical over time.

The solution I came up with so far

The only solution for this problem that seems to be possible to me is to work with rolling snapshots for all necessary projections. The snapshots would update themselves continuously, building up a state optimized for their specific use case (processing commands, answering queries etc.).
I’m skeptical about this idea, because it requires quite some effort. Most of the implementations of typical applications don’t require rolling snapshots for most use cases because building up the desired state from the event stream is fast enough. This simplicity is lost in my case.
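What I mean by a rolling snapshot, as a sketch (the store interfaces are assumed): a projection that folds only the new tail of the stream into a persisted state, instead of replaying from the beginning:

```python
class RollingSnapshot:
    """Keeps a persisted fold of the stream so current state is cheap."""

    def __init__(self, snapshot_store, event_store, apply_event):
        self.snapshots = snapshot_store
        self.events = event_store
        self.apply_event = apply_event          # (state, event) -> new state

    def current_state(self, aggregate_id):
        state, position = self.snapshots.load(aggregate_id)  # last stored fold
        for event in self.events.read_from(position):        # only the tail
            state = self.apply_event(state, event)
            position = event.position
        self.snapshots.save(aggregate_id, state, position)   # roll forward
        return state
```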

The question

My question could be split up in several parts:

  • Is it a good idea to use Event Sourcing for domains like these where it’s not possible to draw clear boundaries between different instances of the same aggregate?
  • Does it make sense to heavily rely on using rolling snapshots to get the desired performance?
  • Is there another way other than rolling snapshots to implement this?
  • I can’t think of a way for partitioning the event stream. Am I missing something? Are there some techniques that allow partitioning/sharding under the given circumstances?

microservices – Read model maintenance in CQRS stateless services

In a microservices-based web backend, most services (Node.js) contain modules to handle read and write data separately. When a particular service is restarted, it pulls data from other microservices and rebuilds its cached read model (sometimes a document database). Each service runs inside a Docker container.

When it comes to clustering, or simply when multiple instances of the same service are running, or when a manual restart is needed, this read model building mechanism runs multiple times, which seems like very bad practice. Imagine especially a scenario where you need to cache a few hundred thousand entities each time; this affects the startup time and performance of the service.
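For comparison, the alternative I keep reading about is to persist the projector’s position so a restart resumes instead of rebuilding (sketch; the store interfaces are assumed):

```python
def catch_up(projector_id, checkpoints, event_source, read_model):
    """Resumes projecting from the last durable position, so a restart
    does not rebuild the whole read model from scratch."""
    position = checkpoints.load(projector_id) or 0
    for event in event_source.read_from(position):
        read_model.apply(event)                        # idempotent upsert
        checkpoints.save(projector_id, event.position)
```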

What are the industry best practices for creating and maintaining read model data when the service is started and restarted?

Distributed computing: Frontend experience design to support asynchronous backend CQRS operations

I am developing a microservice with CQRS and Event Sourcing. When an event is saved in the event store, the service currently also saves the updated aggregate root as a JSON object in a separate table. So far I have separated the write model from the read model (although not into separate data stores).

I want to take this one step further: one microservice for the write model and another for the read model, for scalability reasons, each service with its own database. I want to use messaging to maintain eventual consistency between the write and read models.

I am concerned about how this approach would affect the user experience in a frontend application for this backend.

Let's say a user creates a post. The write service returns OK, and (by design) the user is immediately redirected to the page that shows the newly created post and its content.

However, due to the asynchronous nature of messaging, the read model may not yet be consistent with the write transaction, and I risk a scenario in which the read service does not yet contain the newly created post at the moment the interface requests it.

What is the conventional, industry-standard way to handle a scenario like this?

event sourcing – CQRS – How can a command validate correctly when queries are required?

I am aware that this question has been asked several times, but I have some concerns regarding queries from the write side that I do not see addressed in the existing questions, more specifically with respect to eventual consistency in the command model.

I have a simple CQRS + ES architecture for an application. Customers can buy things from my site, but there is a hard requirement: a customer cannot buy more than $500 of products in our store. If they try, the purchase must not be accepted.

So, this is what my command handler looks like (in Python, simplified to leave out concerns like currencies and dependency injection):

from typing import List

class NewPurchaseCommand:
    customer_id: int
    product_ids: List[int]

class PurchasesCommandHandler:
    purchase_repository: PurchaseRepository
    product_repository: ProductRepository
    customer_query_service: CustomerQueryService

    def handle(self, cmd: NewPurchaseCommand):
        current_amount_purchased = self.customer_query_service.get_total(cmd.customer_id)

        purchase_amount = 0
        for product_id in cmd.product_ids:
            product = self.product_repository.get(product_id)
            purchase_amount += product.amount

        if current_amount_purchased + purchase_amount > 500:
            raise Exception('You cannot purchase over $500')

        new_purchase = Purchase.create(cmd.customer_id, cmd.product_ids)
        self.purchase_repository.save(new_purchase)

        # Then, after the purchase is saved, a PurchaseCreated event is persisted
        # and sent to a queue, which then updates several read projections, one of
        # which is the underlying table that the customer_query_service uses.

The CustomerQueryService uses an underlying table to quickly retrieve the amount the customer has purchased so far. This table is used exclusively by the write side, and it is updated eventually:

CustomerPurchasedAmount table
CustomerId | Amount
10         | 480

While my command handler works in simple scenarios, I want to know how to handle possible edge cases that could occur:

  • User 10, who is malicious, makes two purchases of $20 at the same time. Because the CustomerPurchasedAmount table is only eventually updated, both requests will succeed (this is the case that worries me most).
  • It is also possible that the price of some products changes while the request is being processed (unlikely, but again, it may happen).
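The first bullet can be shown with the numbers from the table above: both concurrent requests validate against the same stale projection, so both pass the check:

```python
LIMIT = 500
stale_total = 480              # CustomerPurchasedAmount read by BOTH requests

request_a_ok = stale_total + 20 <= LIMIT   # True
request_b_ok = stale_total + 20 <= LIMIT   # True: projection not yet updated

final_total = stale_total + 20 + 20        # 520, over the limit once both commit
```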

My questions are:

  • How can I prevent and protect against the concurrency case set out above?
  • How should read models designed specifically for the write side be updated? Synchronously? Asynchronously, as I am doing now?
  • And, in general, how should command validation happen when the information you consult in order to validate can be stale?