Mistakes made when adopting Event Sourcing (and how we recovered) (DDD EU 2020 talk summary)
This is really an experience report.
An editorial system for scientific publishing: Submit -> Review -> Revise -> Accept -> Publish, and similar workflows
A big part of this process is detecting fraud
Needed a lot of different views for a lot of different types of users, e.g. authors, journal editors, peer reviewers …
Needed a lot of metadata about these users (who they are)
Thus, an Event Sourced architecture was picked.
They needed the audit log to BE the actual history of entity state.
Used the JVM, Kotlin, http4k, and PostgreSQL
Mistakes we made
Seduced by eventual consistency
They built an HTTP service for storing events, which was a recipe for disaster (no transactionality).
Command processors both stored events and current state (snapshots)
Confusion between event-driven and event-sourced
Used the event store as a message bus (fine for CQRS), and started recording events that had nothing to do with the business (purely technical ones)
Did not use JSON serialization (event payloads were denormalized into table columns instead)
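Two of the mistakes above (the non-transactional HTTP event store, and denormalizing event payloads into columns) can be contrasted with the conventional event-store shape: each event kept as one serialized JSON payload, appended atomically with an optimistic version check. A minimal in-memory Java sketch with all names assumed (the actual system was Kotlin on PostgreSQL, where the atomicity comes from a database transaction):

```java
import java.util.*;

// In-memory stand-in for a Postgres-backed event store (hypothetical names).
// Two things the mistakes above gave up: (1) "check version, then append"
// is atomic, (2) the payload is one serialized JSON blob per event, not
// event fields spread across table columns.
class EventStore {
    record StoredEvent(String streamId, long version, String type, String payloadJson) {}

    private final Map<String, List<StoredEvent>> streams = new HashMap<>();

    // Append atomically; fails if another writer got there first.
    public synchronized void append(String streamId, long expectedVersion,
                                    String type, String payloadJson) {
        List<StoredEvent> stream = streams.computeIfAbsent(streamId, k -> new ArrayList<>());
        if (stream.size() != expectedVersion) {
            throw new IllegalStateException("concurrent write on " + streamId);
        }
        stream.add(new StoredEvent(streamId, expectedVersion + 1, type, payloadJson));
    }

    public synchronized List<StoredEvent> read(String streamId) {
        return List.copyOf(streams.getOrDefault(streamId, List.of()));
    }
}
```

A generic HTTP endpoint in front of storage cannot make the version check and the append atomic across the wire; inside one database transaction it is.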
How we recovered
They later reverted to REST over HTTP for integration
Made the command handler transactional
Needed snapshots anyway (implemented them as a read-through cache). Why were projections not enough?
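The two recovery steps can be sketched together: a command handler whose event append and snapshot update share one transactional boundary, with the snapshot read through like a cache and rebuilt from the events on a miss. A toy Java sketch under assumed names (synchronized stands in for the real database transaction):

```java
import java.util.*;

// Sketch of the recovery shape (all names hypothetical). The handler
// appends new events AND refreshes the snapshot inside one boundary,
// so the snapshot can never drift from the event log. The snapshot is
// read through like a cache: serve it if present, replay events on a miss.
class CommandHandler {
    private final Map<String, List<String>> events = new HashMap<>();  // stream -> event log
    private final Map<String, Integer> snapshots = new HashMap<>();    // stream -> cached state

    // Toy domain state: the number of events so far (so replay = count).
    private int replay(String streamId) {
        return events.getOrDefault(streamId, List.of()).size();
    }

    // Read-through snapshot: cache hit, or rebuild-and-store.
    public synchronized int currentState(String streamId) {
        return snapshots.computeIfAbsent(streamId, this::replay);
    }

    // The transactional boundary: load, decide, append, snapshot together.
    public synchronized int handle(String streamId, String command) {
        int state = currentState(streamId);
        events.computeIfAbsent(streamId, k -> new ArrayList<>()).add(command); // real "decision" elided
        int newState = state + 1;
        snapshots.put(streamId, newState); // updated in the same step as the append
        return newState;
    }
}
```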
But still, these changes turned out to be quite easy to make. Why?
One of the key reasons was that they used hexagonal architecture (more specifically, a domain model that was independent of the infrastructure)
Had extensive automated tests (for the domain model)
Integrated and deployed continuously (everyone pushed straight to master; every commit was built, tested, and promoted to live)
Client-driven compatibility tests were part of the pipeline
Had e2e tests
(Just a set of usual best practices)
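Why the changes were cheap, in miniature: a hexagonal domain model is a pure decision function from (past events, command) to new events, knowing nothing about HTTP, Postgres, or serialization, so it can be exercised by plain given/when/then unit tests and re-wired to any adapter. A small Java sketch; the rule and all names are illustrative, not the talk's actual model:

```java
import java.util.*;

// Infrastructure-free domain model (hypothetical rules). decide() is the
// whole domain logic: given the past events of a stream and a command,
// return the new events. No ports to storage or transport appear here,
// which is what made swapping the infrastructure underneath it cheap.
class Submission {
    static List<String> decide(List<String> pastEvents, String command) {
        boolean submitted = pastEvents.contains("Submitted");
        if (command.equals("Submit")) {
            return submitted ? List.of() : List.of("Submitted");  // idempotent resubmit
        }
        if (command.equals("Accept")) {
            if (!submitted) throw new IllegalStateException("nothing to accept");
            return List.of("Accepted");
        }
        return List.of();  // unknown commands produce nothing
    }
}
```

A domain test then reads as: given these past events, when this command, then these new events, with no database or HTTP stub anywhere in the test.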
Well, if you ask me: “Hexagonal architecture is worth it”
I might also add: do your homework on Event Sourcing before actually using it in a production system.
Build a simple POC and you will avoid the naive mistakes they made.