Proving data consistency in a microservice landscape
Season 2020, Episode 65
Introduction
Even if it doesn't appeal to you, you might want to think about it when you work in a larger microservice landscape or run a serious big data platform: proving data consistency in a microservice landscape. When we google this subject, we already get 1.2 million results, so there's clearly something going on here.
To ensure data consistency, several practices are available:
Saga Pattern
Reconciliation
Event Log
Orchestration vs. Choreography
Single-Write With Events
Change-First
Event-First
Consistency by Design
Accepting Inconsistency
But in this episode, we won't go over these practices.
What the episode covers
We will dive into the verification part: the proof that your implementation operates correctly.
Within bol.com we implemented a Data Quality Service (DQS). Actually, the second generation is already in place: the first generation focused on immutable data, while the second, improved version covers mutable data as well. We will go over these questions to explain how we prove data consistency in a microservice landscape:
- How did we come up with our solution?
- What is our approach?
- How does it relate to our big data storage in BigQuery?
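The episode itself does not walk through code, but to make the idea of "proving" consistency concrete: a reconciliation-style check compares a source-of-truth store with a derived copy (for example an export to BigQuery) and reports every difference. The Kotlin sketch below only illustrates that idea; the ProductRecord type, the fingerprint function and the reconcile function are hypothetical and are not bol.com's actual DQS implementation.

```kotlin
// Hypothetical sketch of a reconciliation-style consistency check.
// These types and functions are illustrative only, not bol.com's DQS.

data class ProductRecord(val id: String, val version: Long, val payload: String)

// Fingerprint a record so two stores can be compared without shipping full payloads.
fun fingerprint(record: ProductRecord): Int =
    listOf(record.id, record.version.toString(), record.payload).hashCode()

// Result of comparing a source-of-truth store with a derived copy.
data class ConsistencyReport(
    val missingInCopy: Set<String>,
    val unexpectedInCopy: Set<String>,
    val mismatched: Set<String>,
) {
    val consistent: Boolean
        get() = missingInCopy.isEmpty() && unexpectedInCopy.isEmpty() && mismatched.isEmpty()
}

fun reconcile(source: List<ProductRecord>, copy: List<ProductRecord>): ConsistencyReport {
    val sourceById = source.associateBy { it.id }
    val copyById = copy.associateBy { it.id }
    return ConsistencyReport(
        missingInCopy = sourceById.keys - copyById.keys,
        unexpectedInCopy = copyById.keys - sourceById.keys,
        mismatched = sourceById.keys.intersect(copyById.keys)
            .filterTo(mutableSetOf()) { id ->
                fingerprint(sourceById.getValue(id)) != fingerprint(copyById.getValue(id))
            },
    )
}

fun main() {
    val source = listOf(
        ProductRecord("p1", 3, "blue bike"),
        ProductRecord("p2", 1, "red kettle"),
    )
    val copy = listOf(
        ProductRecord("p1", 2, "blue bike"), // stale version -> mismatch
    )
    val report = reconcile(source, copy)
    println("Consistent: ${report.consistent}")          // false
    println("Missing in copy: ${report.missingInCopy}")  // [p2]
    println("Mismatched: ${report.mismatched}")          // [p1]
}
```

Running the sample reports p2 as missing and p1 as mismatched, which is the kind of signal a data quality service would surface for follow-up.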
Statements
As a starter, we discuss these statements:
- Why care? It is just data...
- The microservice is not the issue, the independent data storage is, so let's go back to centralized databases (which also makes testing a lot easier)
- An architect should be the guest of this show as it’s part of his/her role to fix this
- Data Consistency is not a problem for Software Engineers. It should be fixed by our infrastructure solutions
Guests
- Mykola Gurov – Of course, you all know him from our very first episode, about Kotlin, or from one of his testing-in-production talks. Jack of all trades.
- Chris Gunnink – Software Engineer on a crusade: DQS
- Sourygna Luangsay – Tech Lead in experimentation, forecasting, the finance product, and a lot more products