
Partitioning in an event sourced invoicing bounded context

In an event-sourced system modeled after tactical domain-driven design, I am having trouble handling the following situation:

  1. Items are selected to be on an invoice.
  2. Either all of the selected items form the invoice, or if any of the selected items cannot be assigned, none of the items are assigned and the invoice doesn't exist.
  3. Invariant: An item must not be on more than one invoice.
  4. As soon as an invoice with items exists, the items should be converted to prices.

My plan would be to have

  • an Invoice aggregate type, so there is an identity for the invoice to group the items by and to store the calculated prices in.
  • and I would have an Item aggregate type, which keeps track of the invariant by storing a single reference to one invoice per aggregate instance.

I imagine I would need to fire a command against a new Invoice aggregate, containing a list of existing item IDs to assign to the new invoice. This would then emit an event about the invoice creation.

Then something would listen for this event and translate it into a list of commands, each assigning one of the selected items to the new invoice.

I see that this can potentially fail: for example, one of the selected items could have been assigned to another invoice after the command was issued. I would therefore somehow need to roll back all the assignments that did not fail and declare the invoice nonexistent again.

On the other hand, to calculate the prices on the invoice, I would need to know when all items initially selected are actually assigned to the invoice, to be sure that the invoice is here to stay.

Currently I am working with the Commanded CQRS/Event Sourcing framework in Elixir, which is based on the Erlang actor model.

My naive idea, coming from a long history of working with non-distributed relational databases, would be to put the whole situation into a synchronous transaction spreading over both aggregates. But the framework doesn't seem to support this and it also more or less defeats the idea of asynchronous distributed aggregates achieving eventual consistency.

Therefore I am looking for a proper solution for my problems. Any help would be appreciated.

In your case, you should use the saga pattern.

Sample flows of your saga can be:

Flow 1

  1. Create an Invoice aggregate with all item IDs and its state initialized, and emit multiple events: one InvoiceCreated, plus one InvoiceCreatedForItem per item.
  2. Somebody (a process manager) listens to each InvoiceCreatedForItem event and tries to assign the item to the invoice, then updates the invoice with a MarkItemSuccessfullyAddedToInvoice command. If the assignment fails, it updates the invoice with MarkItemAdditionFailureToInvoice instead.

After the events for all items have arrived, the Invoice can choose to emit ItemSuccessfullyCreated or handle the failure cases. After ItemSuccessfullyCreated, add the prices to the Invoice.
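The per-item bookkeeping of Flow 1 can be sketched as follows. Python is used here only as a neutral sketch language (your stack is Elixir/Commanded, where this logic would live in a process manager's `interested?/1` and `handle/2` callbacks); the class and method names are placeholders, not framework API.

```python
# Minimal sketch of the Flow 1 process-manager logic (framework-agnostic).
# Tracks which items have confirmed their assignment and decides the
# invoice's fate once every item has replied.

class InvoiceSaga:
    def __init__(self, invoice_id, item_ids):
        self.invoice_id = invoice_id
        self.pending = set(item_ids)   # items not yet confirmed
        self.failed = set()            # items whose assignment failed

    def on_item_result(self, item_id, ok):
        """Record one MarkItem...ToInvoice outcome; return the verdict once all replied."""
        self.pending.discard(item_id)
        if not ok:
            self.failed.add(item_id)
        if self.pending:
            return None  # still waiting for other items
        return "ItemSuccessfullyCreated" if not self.failed else "InvoiceFailed"


saga = InvoiceSaga("inv-1", ["a", "b"])
assert saga.on_item_result("a", ok=True) is None
print(saga.on_item_result("b", ok=False))  # prints "InvoiceFailed" -> trigger compensation
```

An "InvoiceFailed" verdict is where the compensating commands (unassigning the items that did succeed) would be issued.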

Flow 2

  1. Create an Invoice aggregate with all item IDs and its state initialized, and emit one event: InvoiceCreated (containing all item IDs).
  2. Somebody (a process manager) listens to the InvoiceCreated event and either creates another aggregate or records rows of invoice ID vs. item vs. status in a table. On each subsequent event it checks the remaining item IDs, assigns them to the invoice, and updates the invoice in the same way as above; it can update the prices as well. Breaking this into multiple steps probably makes the code easier to manage.
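The "invoice ID vs. item vs. status" table from step 2 can be sketched like this (again Python as a neutral sketch; the table layout and names are illustrative, not prescribed):

```python
# Sketch of the Flow 2 bookkeeping: one row per (invoice, item) with a
# status, updated as assignment events arrive.

PENDING, ASSIGNED, FAILED = "pending", "assigned", "failed"

class AssignmentTable:
    def __init__(self):
        self.rows = {}  # (invoice_id, item_id) -> status

    def on_invoice_created(self, invoice_id, item_ids):
        for item_id in item_ids:
            self.rows[(invoice_id, item_id)] = PENDING

    def on_assignment_result(self, invoice_id, item_id, ok):
        self.rows[(invoice_id, item_id)] = ASSIGNED if ok else FAILED

    def remaining(self, invoice_id):
        return [i for (inv, i), s in self.rows.items()
                if inv == invoice_id and s == PENDING]


t = AssignmentTable()
t.on_invoice_created("inv-1", ["a", "b"])
t.on_assignment_result("inv-1", "a", ok=True)
print(t.remaining("inv-1"))  # prints ['b']
```

When `remaining` comes back empty, the process manager knows all items are settled and can issue the pricing update (or the compensation, if any row ended up FAILED).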

There could be other flows you can think of. You could go ahead and read http://microservices.io/ for more such patterns.

My naive idea, coming from a long history of working with non-distributed relational databases, would be to put the whole situation into a synchronous transaction spreading over both aggregates. But the framework doesn't seem to support this

Yes, writing to multiple event streams in one transaction is typically not good practice. You still have a few options:

Check then create: "sequential" synchronous consistency

Since we're talking about aggregate creation, you can do the invariant check (A) and the Invoice creation per se (B) in a sequence, in a synchronous way but without a transaction. Since the Invoice doesn't exist until B finishes, there is no risk of concurrent access to it. Because A is atomic in itself, you already nailed all possible concurrency cases. You just have to check that nothing went wrong with A before doing B. In the unlikely event that B fails, just log an error or send a notification.

  • If you can find a suitable concept for it in your domain, design an aggregate that is able to enforce A - basically it should contain a map of items and their corresponding invoice. It usually works well if you manage to answer the question "in which scope does the invariant hold? (just one customer? one company? something else?)" and design the aggregate around that scope. Load the aggregate, have it check the invariant and if it's OK then register the new items/invoice associations in it and spawn the new Invoice aggregate.

  • Design an aggregate in such a way that your persistent store is technically able to enforce the uniqueness invariant. For instance, an event stream with Item.Invoiced.(ItemId) as a key. This can be considered as an interim stream on the path to creating the Invoice stream.
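The first option above (an aggregate scoped to wherever the invariant holds, holding the item-to-invoice map) can be sketched roughly as follows. Python as a neutral sketch; `CustomerInvoicing` and `assign_items` are hypothetical names, and the scope ("per customer") is just one possible answer to the scoping question:

```python
# Sketch of "check then create": one aggregate (scoped e.g. per customer)
# owns the item -> invoice map, so the invariant check (A) and the
# registration happen atomically inside a single aggregate.

class CustomerInvoicing:
    def __init__(self):
        self.item_to_invoice = {}  # enforces: at most one invoice per item

    def assign_items(self, invoice_id, item_ids):
        """Step A: check the invariant and register, or refuse as a whole."""
        taken = [i for i in item_ids if i in self.item_to_invoice]
        if taken:
            raise ValueError(f"items already invoiced: {taken}")
        for i in item_ids:
            self.item_to_invoice[i] = invoice_id
        # Step B (spawning the Invoice aggregate) only runs after A succeeded.
        return {"event": "ItemsAssigned", "invoice_id": invoice_id, "items": item_ids}
```

Because an aggregate processes one command at a time, the check and the registration cannot interleave with a competing invoice, which is exactly what makes A atomic.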

Create and check later: eventual consistency

  • On InvoiceCreated, try to insert the new item/invoice associations into a table that has an (itemId) unique constraint, or use the surrogate event stream mentioned above. The table can be in the same database as your Read Models for practical reasons. If the insert fails, trigger a compensation action (remove items from invoice, cancel invoice, etc.)
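The unique-constraint variant can be sketched with an in-memory SQLite table (purely for illustration; the table name and the compensation hook are placeholders):

```python
# Sketch of "create and check later": insert item/invoice rows under a
# UNIQUE constraint on item_id; a violation rolls back the whole batch
# and signals that a compensating action is needed.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoice_items (item_id TEXT PRIMARY KEY, invoice_id TEXT)")

def on_invoice_created(invoice_id, item_ids):
    try:
        with db:  # one transaction: all rows inserted, or none
            db.executemany("INSERT INTO invoice_items VALUES (?, ?)",
                           [(i, invoice_id) for i in item_ids])
        return "ok"
    except sqlite3.IntegrityError:
        return "compensate"  # e.g. emit CancelInvoice / remove items

print(on_invoice_created("inv-1", ["a", "b"]))  # prints "ok"
print(on_invoice_created("inv-2", ["b", "c"]))  # prints "compensate" (item b taken)
```

Note that the transaction only protects the association table; the Invoice itself was already created, which is why the failure path must compensate (cancel the invoice) rather than roll anything back.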
