In this article, I will show how Event-Driven Data Mesh can be applied to a real-life problem. By the end, you will know the answers to the following questions:
- What is Event-Driven Data Mesh?
- How can it be applied to a business scenario?
- What should a transformation to EDDM look like?
- What opportunities will you enable by implementing this approach?
But let’s start with the basics.
A short introduction to Event-Driven Data Mesh
Event-Driven Data Mesh is an event-centric data architecture.
Events are first-class citizens. Every system that is a source of data (a System of Record) applies the "turning the database inside out" approach and exposes its state changes as a stream of events. The Event Store and the Event Bus are the backbone of the system. The Event Store keeps events indefinitely. As a result, we can build any kind of projection/read model/materialized view we would like to have, at any point in time. Those projections are created from streams of events, and because of that they are updated in real time.
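To make this concrete, here is a minimal sketch of building a projection from a stored event stream. The event shapes and names (`OrderPlaced`, `OrderShipped`, `project_order_status`) are illustrative assumptions, not part of any real system:

```python
# A minimal in-memory event store: an append-only list of events.
# Each event records a state change from a System of Record.
event_store = [
    {"type": "OrderPlaced", "order_id": 1, "item": "orange unicorn"},
    {"type": "OrderPlaced", "order_id": 2, "item": "double-horn unicorn"},
    {"type": "OrderShipped", "order_id": 1},
]

def project_order_status(events):
    """Fold the full event stream into a read model (order -> status)."""
    status = {}
    for event in events:
        if event["type"] == "OrderPlaced":
            status[event["order_id"]] = "placed"
        elif event["type"] == "OrderShipped":
            status[event["order_id"]] = "shipped"
    return status

# Because the store keeps events indefinitely, this projection can be
# rebuilt at any time, and entirely new projections can be added later
# by folding the same stream with a different function.
print(project_order_status(event_store))  # → {1: 'shipped', 2: 'placed'}
```

The key property is that the read model is derived, never authoritative: dropping it and replaying the stream reproduces it exactly.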
Teams are responsible not only for their applications but also for the streams of events they produce or transform. We treat data as a product, and we fulfil certain standards such as the FAIR Data Principles.
The easiest way to understand a new concept is through a specific example. I would like to explain Event-Driven Data Mesh in such a way that even my two-year-old daughter would understand it, and there is no better way to grab her attention than with a Rocking-Unicorn.
Rocking-Unicorn Manufacture System
As we all know, rocking-unicorn toys are currently the fastest-growing market on the planet. Demand surpasses supply. New rocking-unicorn factories spring up like mushrooms. All those factories need ERP, MES and SCADA systems to keep them running. This is why the startup rockingunicorn.com was created.
IIoT and Industry 4.0
You will not sell software today without a good combination of buzzwords, and in the manufacturing industry the most popular ones are currently IIoT (Industrial Internet of Things) and Industry 4.0. Long story short, in these approaches we connect sensors, machines and controllers to the web and run ERP, MES and SCADA systems directly connected to the machines.
ERP, MES, SCADA – definitions
ERP (Enterprise Resource Planning) is a system responsible for managing important business processes such as accounting, human resources, sales, production and customer management. As you can see, it is a very broad term.
MES (Manufacturing Execution System) is the system responsible for managing real-time plant activities; it drives and controls manufacturing operations, and in it we plan, monitor and execute daily work.
SCADA (Supervisory Control and Data Acquisition) is a system connected directly to PLCs (programmable logic controllers) and HMIs (Human-Machine Interfaces).
Rocking-unicorn standard production line
The production line is not very complex; it consists of five steps:
- Wooden elements sawing
- Elements fitting
- Textile cutting
- Upholstering
- Unicorn horn gluing
The rocking-unicorn market is very specific. As we all know, nowadays nobody wants a standard rocking-unicorn; personalization and customization are king. People want orange unicorns, green ones, very big, small, crippled, double-horned, and much more – the sky is the limit! In such an environment, real-time planning and optimization of production make the difference.
The idea is to make use of the IIoT concept: gather all possible data from the plant floor, run AI algorithms on top of that data, and optimize and re-plan the work of the factory whenever a new order comes in.
Current State of the System
Our developers at rockingunicorn.com know what they are doing, which is why they embraced the IIoT approach and used the MQTT protocol to communicate via messages directly with machines (PLC/HMI) and receive state changes from machines in real time. Our application contains two services:
Production Planner Service – responsible for scheduling production for the upcoming day, executing production and controlling machines. To know what was ordered, the Production Planner calls the Ordering Service.
Ordering Service – responsible for managing the life cycle of orders in our system.
The machines our system is connected to are: Saw, Fitter, RoboScissors, Upholsterer station and Fusing machine.
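As a sketch of the machine-facing side, the snippet below parses a raw MQTT-style message into a normalized state change. The topic layout (`factory/<machine>/<signal>`) and JSON payload are assumptions for illustration; in a real deployment this function would be wired into an MQTT client's message callback (e.g. paho-mqtt):

```python
import json

def parse_machine_message(topic: str, payload: bytes) -> dict:
    """Turn a raw MQTT message from a machine into a normalized state change.

    Assumes topics shaped like 'factory/<machine>/<signal>' and JSON
    payloads; both are illustrative, not a real convention of the system.
    """
    _, machine, signal = topic.split("/")
    data = json.loads(payload)
    return {"machine": machine, "signal": signal, "value": data["value"]}

# With a real broker, an MQTT client's on_message callback would invoke
# this for every received message; here we call it directly.
msg = parse_machine_message("factory/saw/blade_rpm", b'{"value": 1200}')
print(msg)  # → {'machine': 'saw', 'signal': 'blade_rpm', 'value': 1200}
```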
As we already know, planning the whole upcoming day is fine, but everyone does that, so it gives us no advantage over the competition; we need to be leaner. This is why the developers came up with a Data Warehouse with Production Optimisation (a machine-learning algorithm) working on top of it. In this solution, an ETL job retrieves the current state of production and newly placed orders, and this information is used to dynamically change the plan for the day.
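The re-planning step of that ETL job could be sketched as below. The function and data shapes are hypothetical, and a trivial rule stands in for the actual machine-learning optimiser:

```python
def replan(current_plan: list, production_state: dict, new_orders: list) -> list:
    """Sketch of the ETL-driven re-planning step: drop jobs already
    completed on the plant floor and append newly placed orders.
    A real optimiser would use an ML model; this trivial rule stands in
    for it purely to show the data flow."""
    remaining = [job for job in current_plan
                 if job not in production_state.get("completed", [])]
    return remaining + new_orders

plan = ["order-1", "order-2"]
state = {"completed": ["order-1"]}      # retrieved from production systems
print(replan(plan, state, ["order-3"]))  # → ['order-2', 'order-3']
```

Because this runs as a periodic batch job, the plan only changes when the ETL fires, which is exactly the latency problem the next section addresses.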
Applying Event-Driven Data Mesh
The previous solution was good, but it was still far from real time, and we could not react in real time to machine outages. To improve the solution, this time the developers used the Event-Driven Data Mesh approach. To make it happen, they introduced changes in several steps:
- The team responsible for the machines became responsible also for Machine Events as their Data Product. Raw messages were transformed into events with an established schema and published to Kafka. Examples of events coming from machines:
- Wooden block received in sawing station
- Torso cut out
- John Smith logged in on Upholstery station
- The Production Planner and Ordering teams took on the same responsibility. Examples of events:
- New order received
- Item added to the order
- These preconditions made it possible to create yet another team, responsible for Production Optimisation, whose Data Product was a stream of real-time recommendations on how to adapt the production process.
- This new stream of recommendations was then consumed by the Production Planner.
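The first step above hinges on wrapping raw machine messages in an established schema before publishing. A minimal sketch of such an event envelope follows; the field names (`event_id`, `source`, `occurred_at`) are invented for illustration, and in practice schemas would be managed in a schema registry and published via a Kafka producer:

```python
import json
import uuid
from datetime import datetime, timezone

def to_machine_event(machine: str, event_type: str, data: dict) -> bytes:
    """Wrap a raw machine state change in an established event schema.

    The envelope fields are illustrative, not a standard: a unique id,
    the producing machine, the event type, a UTC timestamp, and the
    domain payload itself."""
    event = {
        "event_id": str(uuid.uuid4()),
        "source": machine,
        "type": event_type,
        "occurred_at": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }
    return json.dumps(event).encode("utf-8")

# In a real setup this bytes payload would be sent to a Kafka topic,
# e.g. with kafka-python:
#   producer.send("machine-events", value=to_machine_event(...))
payload = to_machine_event("upholstery-station", "OperatorLoggedIn",
                           {"operator": "John Smith"})
```

Keeping the envelope stable while the `data` payload varies per event type is what lets downstream teams consume the stream without coordinating with the producers on every change.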
Now we have a real-time solution, and what is more, we are storing every state change coming from our system and machines in Kafka (we treat it as an Event Store). We will see that this will become our advantage in the future.
Reporting and Employees Assessment
After one year, we hired a data analyst. After creating a temporary model and analyzing historical events coming from the machines, he found correlations between machine operators and assembly speed. The conclusion was that employees have their favorite machines and work much more efficiently on certain shifts. This insight led the company to create the Shift Optimisation Engine.
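The analyst's projection could look like the sketch below: averaging assembly time per (operator, shift) pair over historical events. The event shapes and numbers are made up for illustration; the point is that this analysis is only possible because the events preserved the time axis:

```python
from collections import defaultdict

# Historical machine events (illustrative shapes, not a real schema):
# each assembly event records the operator, the shift, and duration.
events = [
    {"operator": "John", "shift": "day",   "assembly_seconds": 90},
    {"operator": "John", "shift": "night", "assembly_seconds": 150},
    {"operator": "Anna", "shift": "night", "assembly_seconds": 80},
    {"operator": "Anna", "shift": "day",   "assembly_seconds": 130},
]

def avg_assembly_time(events):
    """Average assembly time per (operator, shift) pair - the kind of
    projection the analyst could build over the stored event history."""
    totals, counts = defaultdict(int), defaultdict(int)
    for e in events:
        key = (e["operator"], e["shift"])
        totals[key] += e["assembly_seconds"]
        counts[key] += 1
    return {k: totals[k] / counts[k] for k in totals}

print(avg_assembly_time(events))
```

In this toy data, John is faster on day shifts and Anna on night shifts, which is exactly the kind of correlation a Shift Optimisation Engine would exploit.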
Customers also requested many reporting functionalities, and thanks to adopting the Event-Driven Data Mesh model we were able to create new projections very rapidly.
We were able to make all those changes because:
- Data was FAIR, and we were able to reason based on the collected data.
- We did not lose the time axis in our data – with events we tracked how state changed over time (e.g. how employees behaved over time).
- Everything was loosely coupled, and because of that we were able to extend the system instead of endlessly modifying existing services.
- We were not forced, as with a Data Warehouse, to use one uber data model; we created those models on demand.
- We made organizational changes that shifted responsibility for data – now every team is responsible for exposing its data in the form of a stream of events.
As we can see, the Event-Driven Data Mesh approach can be very helpful in data-centric organizations.
Benefits of Event-Driven Data Mesh:
- Forward compatibility – once we start capturing events, we are able to create new projections whenever we need them in the future.
- We can make real-time decisions based on data.
- We are not losing the time dimension of data.
- We can reuse data in many projections/read models.
- It is compatible with modern software architectures like Event-Driven Architecture.
I have shown you how it can be applied to a real-life scenario. But we are definitely still missing very important pieces of information, such as how we can accomplish FAIR data. I will focus on further details in upcoming articles.