
Real-time Database Replication with Grainite

A day in the life of a cloud engineer
Meet Sarah, a seasoned cloud engineer thrust into the complex world of cloud migration. Sarah begins her day with her ritual cup of coffee, scanning automated migration logs, cloud management consoles, database tools, and other critical applications across her dual-monitor setup.
The project at hand is a colossal one: migrating her firm's extensive on-premises application to the cloud. Each day is filled with intricate tasks, from managing data inconsistencies and network instabilities to ensuring seamless synchronization between the on-premises and cloud databases. Slack messages and emails flood her inbox, and an application team grappling with data-mismatch issues adds to the complexity of the situation.
Sarah is not alone. In our conversations with organizations so far, the same issue keeps coming up under different names: data movement, replication, or synchronization. We at Grainite strongly believe that the complexity of the process calls for a simpler, more streamlined approach.
Why synchronize data at all?
Organizations undertake such herculean data synchronization projects for numerous reasons.
- Cloud migration: Move an on-premises application to the cloud for scalability.
- Application modernization: Upgrade applications to newer and better architectures.
- System integration: Keep a system of record (Marketo, an ERP) always in sync with a single source of truth (a CRM).
- Data warehousing: Compile data from different databases or applications into one unified system for reporting and data analysis.
Other situations include M&A activities, disaster recovery preparation, or even establishing distributed systems. All these scenarios necessitate the seamless synchronization of data across different databases or applications to maintain data consistency and integrity, thereby ensuring uninterrupted business operations.
What does it take to achieve database-database sync today?
To keep two databases in sync, the distributed synchronization architecture an organization adopts might look something like this:
- Database Replication: Choose either primary-secondary replication or multi-master replication to ensure that changes made to one database are automatically replicated to the other.
- Conflict Resolution: If both databases receive independent writes, conflicts can occur when the same data is modified simultaneously in both. A conflict resolution mechanism will be needed, such as a Last-Write-Wins (LWW) strategy in which the most recent write is considered the valid one (see the first sketch after this list).
- Change Tracking: Maintain a change log or a transaction log in each database to track the changes made to the data. This log should include information about the modified data, the operation performed (insert, update, delete), and a timestamp indicating when the change occurred.
- Message Queue: Set up a message queue system to facilitate communication between the databases. Whenever a write operation is performed on one database, the relevant change is pushed to the message queue. The other database can then consume these messages from the queue and apply the corresponding changes (see the consumer sketch after this list).
- Event-Driven Architecture: Use an event-driven architecture to handle database updates. When a change is received from the message queue, trigger an event that initiates the necessary update in the target database. This decouples the databases and allows them to operate independently while still keeping them in sync.
- Error Handling and Retry Mechanism: Implement error handling and retry mechanisms to handle network failures or other issues that may occur during the synchronization process. This ensures that synchronization can resume automatically after a failure without data loss or inconsistency.
- Monitoring and Logging: Implement comprehensive monitoring and logging mechanisms to track the synchronization process, detect potential issues, and facilitate troubleshooting if problems arise.
- Scalability Considerations: Ensure that the architecture is scalable to handle increasing loads and data volumes. This may involve partitioning data across multiple nodes, employing load-balancing techniques, or utilizing sharding strategies to distribute the workload effectively.
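To make the change-tracking and conflict-resolution pieces concrete, here is a minimal, generic sketch in Python. The record shape below is illustrative, not tied to any particular database; real CDC formats such as Debezium's carry the same information in more detail.

```python
# Minimal sketch: a change-log record and Last-Write-Wins (LWW) resolution.
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    table: str   # table the change applies to
    key: str     # primary key of the modified row
    op: str      # "insert", "update", or "delete"
    data: dict   # new column values (empty for deletes)
    ts_ms: int   # when the change occurred, in epoch milliseconds

def resolve_lww(a: ChangeRecord, b: ChangeRecord) -> ChangeRecord:
    """Pick the winner when both databases modified the same row."""
    return a if a.ts_ms >= b.ts_ms else b
```

LWW is simple but lossy: the "losing" write disappears entirely, which is why some systems prefer field-level merges or application-specific resolution rules.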
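The message-queue, event-handling, and retry steps often come together as a consumer loop. The sketch below is generic Python against hypothetical `queue` and `target_db` clients (stand-ins for, say, a Kafka consumer and a database driver, not a specific library), with exponential backoff so transient failures do not drop changes.

```python
# Minimal sketch: consume change events from a queue and apply them to the
# target database, retrying with exponential backoff on transient errors.
import time

MAX_RETRIES = 5

def apply_with_retry(target_db, record) -> None:
    for attempt in range(MAX_RETRIES):
        try:
            if record.op == "delete":
                target_db.delete(record.table, record.key)
            else:
                target_db.upsert(record.table, record.key, record.data)
            return
        except ConnectionError:
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    raise RuntimeError(f"giving up on {record.table}/{record.key}")

def run_consumer(queue, target_db) -> None:
    for record in queue:    # blocks until new changes arrive
        apply_with_retry(target_db, record)
        queue.commit()      # acknowledge only after a successful apply
```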
This is not even an exhaustive list. Depending on the specific situation, more may be needed: initial snapshotting, guarding against write loopbacks, and so on. No wonder all these steps make this endeavor a complex undertaking for most organizations.
Current approaches to data replication and their shortcomings
To navigate these challenges, organizations often resort to one of several approaches. Some choose dedicated vendor solutions like GoldenGate or Qlik. Others opt for a combination of Debezium/Kafka Connect and Kafka to capture changes in the source database and stream them to the target database. Yet others take the path of dual writes, making the source application write to the two databases separately and handling any resulting errors via dead-letter queues (DLQs).
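As an illustration of the Debezium/Kafka Connect route, a source connector is typically registered through the Kafka Connect REST API. The sketch below shows a minimal MySQL example; the hostnames, credentials, and table names are placeholders, and the exact configuration properties vary by Debezium version.

```python
# Minimal sketch: registering a Debezium MySQL source connector via the
# Kafka Connect REST API. All hostnames and credentials are placeholders.
import json
import requests

connector_config = {
    "name": "inventory-source",
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "onprem-db.example.com",
        "database.port": "3306",
        "database.user": "replicator",
        "database.password": "********",
        "database.server.id": "184054",            # unique ID for this connector
        "topic.prefix": "onprem",                  # prefix for change topics
        "table.include.list": "inventory.orders",  # tables to capture
        "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
        "schema.history.internal.kafka.topic": "schema-changes.inventory",
    },
}

resp = requests.post(
    "http://connect:8083/connectors",
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector_config),
)
resp.raise_for_status()
```

Even this path leaves the sink side, transformations, error handling, and cluster operations to the team running it.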
While these methods have their merits, they also carry significant drawbacks. Vendor solutions like GoldenGate are often expensive and can lead to vendor lock-in. Debezium and Kafka, though powerful, require extensive setup and maintenance. Dual writes, on the other hand, increase the complexity of the application code and can lead to consistency issues between the databases if not handled correctly.
Grainite: Real-time data movement platform
Grainite, a unified real-time data movement platform, revolutionizes this process. Designed to simplify the development of event-driven applications, Grainite enables real-time data synchronization between any two databases:
- It pulls Change Data Capture (CDC) data from the source database and organizes it into topics, similar to Kafka.
- Grainite can transform this data, if required, via user-written action handlers to normalize or de-normalize it to match the sink database's schema.
- Finally, Grainite writes the transformed data to the destination database.
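To make the middle step concrete, here is a minimal sketch of what such a transformation could look like. The handler shape and names below are hypothetical illustrations, not Grainite's actual API: they simply assume a handler receives each change event and returns the record to write to the sink.

```python
# Hypothetical action-handler sketch (illustrative only; not Grainite's API).
# It flattens a nested customer record from the source's schema into the
# column layout assumed for the sink table.

def handle_change_event(event: dict) -> dict:
    """Transform one CDC event into a row for the sink database."""
    customer = event["after"]           # post-change image of the row
    address = customer.get("address", {})
    return {
        "customer_id": customer["id"],
        "full_name": f"{customer['first_name']} {customer['last_name']}",
        # De-normalize the nested address into flat columns
        "city": address.get("city"),
        "country": address.get("country"),
        "updated_at": event["ts_ms"],   # change timestamp from the source
    }
```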

How is Grainite different from the rest?
The uniqueness comes from Grainite’s underlying platform. Grainite is the only product so far to integrate a message queue and low-latency event processing directly into its storage engine. In addition to offering a broad set of native connectors, the platform supports Kafka Connect source and sink connectors, allowing most databases to serve as sources and sinks.
- Grainite serves as intelligent middleware between any number of source and sink databases or data warehouses while performing in-line filtering, transformations, and joins - with full crash recovery, fault tolerance, and high availability.
- Custom conflict resolution per document can be executed at scale, e.g., merging changes to two different parts of the same document arriving from two different databases (see the sketch after this list).
- Testing can be done on a single docker image on a VM or laptop, and the same synchronization application can be deployed to a 3-node or larger Grainite cluster within any Kubernetes deployment (AWS, GCP, Azure, On-prem).
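To illustrate the kind of per-document merge described above, here is a minimal generic sketch. It assumes each incoming change carries a timestamp and a set of modified fields; this is plain Python for illustration, not Grainite-specific code.

```python
# Minimal sketch of per-document, field-level conflict resolution.
# Edits to different parts of the document merge cleanly; only true
# field-level conflicts fall back to last-write-wins.

def merge_changes(document: dict, changes: list[dict]) -> dict:
    """Apply field-level changes from multiple databases to one document."""
    merged = dict(document)
    last_seen: dict[str, int] = {}  # field -> timestamp of applied change
    for change in changes:          # changes from DB A and DB B interleaved
        for field, value in change["fields"].items():
            ts = change["ts_ms"]
            if ts >= last_seen.get(field, 0):  # newer write wins per field
                merged[field] = value
                last_seen[field] = ts
    return merged
```

Edits to disjoint fields merge without loss; only writes to the same field compete.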
By automating and streamlining the entire process, Grainite offers numerous benefits.
- Real-time data synchronization - This ensures the most up-to-date data is always at your fingertips.
- Developer velocity - Headaches like error handling, message retries, and conflict resolution are handled automatically by Grainite, significantly reducing the amount of code you need to write.
- Cost reduction - Due to the unification of components and the ease of maintenance and operation, the costs are drastically reduced compared to traditional methods.
A new day in the life of a cloud engineer (with Grainite)
Let’s now imagine a day in Sarah's shoes with Grainite. The time she spent troubleshooting data inconsistencies could now be used more productively. With fewer lines of code to write and less time spent on setup and maintenance, her workload becomes more manageable and less stressful. Sarah's focus can now shift from managing data replication to driving value-adding tasks.
In our next blog post, we will dive into the technical details of how Grainite accomplishes this seamlessly. Stay tuned to learn more about how Grainite could redefine other data movement challenges.