What you need to know about Cosmos DB news from Ignite 2018
Blog|by Mary Branscombe|12 November 2018
Multi-master replication and multiple models and APIs make Cosmos DB the ideal home for intelligent cloud applications
Cosmos DB, Microsoft’s multi-model distributed database service, was designed to match the key features of cloud computing by being globally distributed, elastically scalable and multi-tenant, and by offering a choice of consistency models that govern exactly how data converges across its distributed regions. It’s one of the ‘ring zero’ services that runs in every Azure region, so developers have been able to pick the region that would receive writes to the database (other regions have been read-only), changing that if necessary to suit traffic patterns.
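Cosmos DB’s choice of consistency models spans five levels: strong, bounded staleness, session (the default), consistent prefix and eventual. As a rough sketch of the trade-off (this is an illustrative model, not the Cosmos DB SDK):

```python
from enum import Enum

# The five consistency levels Cosmos DB offers, ordered from strongest
# guarantees (highest read latency, lowest availability) to weakest.
class Consistency(Enum):
    STRONG = 1
    BOUNDED_STALENESS = 2
    SESSION = 3           # the default for new accounts
    CONSISTENT_PREFIX = 4
    EVENTUAL = 5

def is_stronger(a: Consistency, b: Consistency) -> bool:
    """Return True if level `a` gives strictly stronger guarantees than `b`."""
    return a.value < b.value

print(is_stronger(Consistency.STRONG, Consistency.EVENTUAL))  # True
```

In the real service the level is chosen per account (and can be relaxed per request), so an application can default to session consistency and opt down to eventual for latency-sensitive reads.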
Now developers can make all regions writable using multi-master replication, with a write latency that Cosmos DB product manager and architect Rimma Nehme calls ‘single digit milliseconds’. Build an app using Cosmos DB and any of the database query APIs it supports (including Cassandra, MongoDB, Gremlin, SQL and Azure table storage), and writes will automatically be sent to the region closest to the user, and then replicated to all other regions.
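The routing behaviour can be pictured with a toy model (illustrative only, not the Cosmos DB SDK; the region names and latencies are hypothetical): a write lands at the closest replica for low latency, then fans out to every other region.

```python
# Hypothetical client-to-region latencies in milliseconds.
REGIONS = {"westeurope": 10, "eastus": 85, "japaneast": 220}

class Replica:
    """A region-local copy of the data set."""
    def __init__(self, name):
        self.name = name
        self.data = {}

    def write(self, key, value):
        self.data[key] = value

replicas = {name: Replica(name) for name in REGIONS}

def multi_master_write(key, value):
    # Route the write to the nearest region for low write latency...
    nearest = min(REGIONS, key=REGIONS.get)
    replicas[nearest].write(key, value)
    # ...then replicate it to all the other writable regions.
    for name, replica in replicas.items():
        if name != nearest:
            replica.write(key, value)
    return nearest

print(multi_master_write("user:42", {"cart": ["book"]}))  # westeurope
```

In the toy model the fan-out is synchronous for simplicity; in the real service replication happens in the background, which is why conflicting concurrent writes (discussed below) become possible.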
“For the first time you can build multi-master MongoDB applications, multi-master Gremlin graph applications; this is a capability MongoDB on premise or in IaaS doesn’t have,” she noted.
“From a developer standpoint, the benefit of multi-master is that your writes become ubiquitous, and also you get unlimited and elastic write scalability. With a single master configuration, you only have a single dedicated region for accepting writes. Now your write throughput becomes virtually limitless because you can scale across all the Azure regions. You get guaranteed low write latency because now you don’t need to incur that speed of light penalty because you can write it locally.”
That reduces write latency and allows higher write throughput by load balancing and scaling writes across multiple regions, which is key for distributed apps that involve time-sensitive capabilities like machine learning.
“When Cosmos DB performs this global replication enabling writes locally across the world, the data becomes seamlessly consistent across all of these replicas that are being served to applications in the tier above the database, whether they are AI applications, analytics applications, cognitive services, web applications, mobile applications, gaming applications. We’re providing this connective tissue across these global geodistributed locations; wherever the customers are, we can serve them data and be able to accept updates to their data and provide a single system image to all these intelligent compute layers above us.”
It also moves Cosmos DB closer to eventually allowing developers to run database regions on their own local hardware for disconnected and edge scenarios. Microsoft isn’t announcing that yet, but it’s part of the vision of intelligent applications that span the cloud and edge, Nehme explained.
“This masterless replication protocol is designed in such a way that we actually don’t differentiate between replicas that live in the cloud or potentially on the edge itself. Today we are only running inside Azure but at some point, it’s not hard to envision that we will have a replica living on the edge and able to participate in the multi-master replication protocol. That means I can write data on the edge. As a developer, I could come to a portal and see the world map and I could associate regions that could be living outside Azure, that could be running on premise – in an IoT device, on the edge, in Azure Stack – using the same turnkey global distribution capability and then that replica starts participating in this masterless replication protocol and providing a single system image for all these globally distributed resources. That’s our vision, and this multi-master capability is a very important step towards this mesh of replicas that might be living in various form factors.”
With multiple regions being written to at once, conflicting writes are possible where two copies of the record might be changed at the same time. Cosmos DB offers three options for dealing with that; developers can choose between the built-in conflict resolution or a custom policy, with CRDTs – conflict-free replicated data types that can automatically converge on the same state by merging concurrent updates – becoming available later this year.
“In case of clients writing to exactly the same record, they can go with the last write wins policy, where we use the last system timestamp to perform the conflict resolution by taking the latest version,” Nehme explained. This is the default, because it works with all the query APIs that Cosmos DB supports.
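The last-write-wins policy boils down to a simple rule. In this sketch the `_ts` field name mirrors Cosmos DB’s system timestamp property, though the sample documents are hypothetical:

```python
# Last-write-wins (LWW) conflict resolution: given conflicting versions
# of the same record, keep the one with the latest system timestamp.
def resolve_lww(versions):
    return max(versions, key=lambda doc: doc["_ts"])

conflict = [
    {"id": "item1", "qty": 3, "_ts": 1541980800},  # written in one region
    {"id": "item1", "qty": 5, "_ts": 1541980802},  # written elsewhere, 2s later
]
print(resolve_lww(conflict)["qty"])  # 5 - the later write wins
```

The obvious cost is that the earlier concurrent write is simply discarded, which is why the custom and CRDT options below exist.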
“We also will expose custom policy conflict resolution where customers can specify custom logic that will go into a merge procedure; a stored procedure with a specified template for how conflict resolution should be performed if there is a current version of the record and if there are conflicting versions how to go and resolve it.”
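In Cosmos DB the merge procedure itself is a stored procedure registered on the container; this Python sketch only illustrates the kind of logic it might contain, using a hypothetical shopping-cart record. Rather than discarding one conflicting write, the custom policy can combine both:

```python
# Hypothetical merge logic for a custom conflict-resolution policy:
# union the cart contents so no concurrently added item is lost.
def merge_carts(current, conflicting):
    merged = dict(current)
    merged["cart"] = sorted(set(current["cart"]) | set(conflicting["cart"]))
    return merged

a = {"id": "user:42", "cart": ["book"]}   # version written in one region
b = {"id": "user:42", "cart": ["pen"]}    # concurrent version from another
print(merge_carts(a, b)["cart"])  # ['book', 'pen']
```

The trade-off versus last-write-wins is that the developer must write (and test) merge logic for each record shape where conflicts matter.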
Having native CRDTs in Cosmos DB may avoid all conflicts in records because the database engine will be able to resolve the conflicts, Nehme said. “In many cases the conflicts will not occur naturally because the data type is conflict resolution free; in some cases where it may occur we will go and natively take care of it, so developers don’t have to reason about it.”
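A classic example of the data types Nehme is describing is the grow-only counter (G-counter). Each region increments only its own slot, and merging takes the per-region maximum, so replicas converge to the same value without any conflict to resolve. A minimal sketch, independent of Cosmos DB’s actual implementation:

```python
class GCounter:
    """A grow-only counter CRDT: concurrent increments never conflict."""
    def __init__(self, regions):
        self.counts = {r: 0 for r in regions}

    def increment(self, region, n=1):
        self.counts[region] += n  # each region only touches its own slot

    def merge(self, other):
        # Merging is a per-slot max, which is commutative, associative
        # and idempotent - the properties that make convergence automatic.
        for r in self.counts:
            self.counts[r] = max(self.counts[r], other.counts[r])

    def value(self):
        return sum(self.counts.values())

regions = ["eastus", "westeurope"]
a, b = GCounter(regions), GCounter(regions)
a.increment("eastus", 2)        # concurrent updates...
b.increment("westeurope", 3)    # ...in different regions
a.merge(b)
b.merge(a)
print(a.value(), b.value())  # 5 5 - both replicas converge
```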
Migrating to Cosmos DB
Apache Cassandra support in Cosmos DB is now generally available, which Nehme describes as a fully managed Cassandra-as-a-service offering powered by Cosmos DB.
“Customers don’t have to worry about managing their configuration properties and settings, their JVM config settings, their YAML files, managing clusters of Cassandra nodes. They don’t have to go specify a myriad of settings to get a specific consistency guarantee. All the complexity and independency is taken away by virtue of it being a PaaS service.”
Customers are attracted by the combination of Cassandra compatibility and Azure security, Nehme said, which gives them an option to migrate to the cloud but also continue on-premise development.
“The biggest benefit we hear from customers is all the enterprise grade capabilities. All the security, compliance, enterprise grade readiness of Azure become automatically available to all their Cassandra apps.”
Microsoft is working on unifying the options for migrating to Cosmos DB. The Bulk Executor Library that’s part of Azure Data Factory v2 supports bulk migration from MongoDB, and its more than 70 other database connectors will soon support migrating from Cassandra. The Azure Database Migration Service, which currently migrates workloads from Oracle and Teradata to SQL Data Warehouse, has preview support for migrating from MongoDB to Cosmos DB, and will also support Apache Cassandra migrations in future. Some customers have used the Cosmos DB Spark connector to migrate Cassandra applications via Spark clusters.
“By providing wire protocol level compatibility with Apache Cassandra, customers who have existing Cassandra applications, existing DataStax enterprise applications or ScyllaDB or other variants of the Cassandra protocol will be able to bring those applications to Cosmos DB and still enjoy all the value the ecosystem provides in terms of IDE tools, libraries, SDKs and GitHub projects but be able to take advantage of a database that’s natively designed for cloud.”
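Because the compatibility is at the wire-protocol level, an existing Cassandra driver only needs new connection settings rather than code changes. The sketch below shows the settings an application might pass to a standard driver such as the DataStax cassandra-driver; the endpoint shape is the documented Cosmos DB pattern, but the account name and key are placeholders you would substitute from the Azure portal:

```python
import ssl

ACCOUNT = "myaccount"  # hypothetical Cosmos DB account name

# Connection settings for the Cosmos DB Cassandra API: the service
# listens on port 10350, requires TLS, and authenticates with the
# account name and key rather than a Cassandra user.
settings = {
    "contact_points": [f"{ACCOUNT}.cassandra.cosmos.azure.com"],
    "port": 10350,
    "username": ACCOUNT,
    "password": "<primary-key>",          # account key from the portal
    "ssl_version": ssl.PROTOCOL_TLSv1_2,  # TLS 1.2 is required
}
print(settings["contact_points"][0])
```

The application’s CQL queries, schema and tooling stay as they are; only this connection configuration changes.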
The Grey Matter team of Azure specialists can provide a technical consultation. Call them on +44 (0)1364 654100.
Mary Branscombe is a freelance tech journalist. Mary has been a technology writer for nearly two decades, covering everything from early versions of Windows and Office to the first smartphones, the arrival of the web and most things in between.