The vast selection of NoSQL solutions available today share qualities that set them apart from their relational counterparts, including shared-nothing, distributed architectures with fault tolerance and scalability. However, to provide these benefits, many NoSQL solutions have given up the strong data consistency and isolation guarantees of relational databases, coining a new term, “eventually consistent,” to describe their weak data consistency guarantees.
Eventual consistency pushes the pain and confusion of inconsistent reads and unreliable writes onto software developers. Building the complex, scalable systems demanded by today’s highly connected world on such weak guarantees is exceptionally difficult. We need to stop accepting eventual consistency and aggressively explore scalable, distributed database designs that provide strong data consistency.
The concept of eventual consistency comes up frequently in the context of distributed databases. Leading NoSQL databases like Riak, Couchbase, and DynamoDB provide client applications with a guarantee of “eventual consistency”. Others, like MongoDB and Cassandra, are eventually consistent in some configurations.
Eventual consistency means exactly that: the system is eventually consistent. If no updates are made to a given data item for a “long enough” period of time, then sometime after hardware and network failures heal, all reads of that item will eventually return the same consistent value. It’s also important to understand that a client that doesn’t wait “long enough” isn’t guaranteed consistency at all.
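To make that concrete, here is a minimal sketch in Python of the window in which clients can observe stale data. The two dictionaries and the delayed background thread are illustrative stand-ins for real replicas and a real replication channel, not any particular database’s API:

```python
import threading
import time

# Two in-memory "replicas"; a background thread stands in for
# asynchronous replication over an unreliable network.
replica_a = {}
replica_b = {}

def write(key, value, delay=0.5):
    """Write to replica A; the write reaches replica B only after `delay`."""
    replica_a[key] = value
    def replicate():
        time.sleep(delay)        # replication lag / failure recovery
        replica_b[key] = value   # the write eventually arrives
    threading.Thread(target=replicate).start()

write("balance", 100)
print(replica_a.get("balance"))  # 100  -- the writer sees its own write
print(replica_b.get("balance"))  # None -- a stale read in the meantime
time.sleep(1)                    # wait "long enough"
print(replica_b.get("balance"))  # 100  -- the replicas have converged
```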
The problem with eventual consistency
Though eventual consistency is touted as a new model, the word “eventual” should carry the same negative connotation it carries in nearly every other context, like “eventual” honesty, “eventual” fidelity, or “eventually” paying you back. Doesn’t sound very appealing, right? Well, it’s the same with distributed systems.
When an engineer builds an application on an eventually consistent database, they need to answer several tough questions every time data is accessed:
- What is the effect on the application if a database read returns an arbitrarily old value?
- What is the effect on the application if the database sees modifications happen in the wrong order?
- What is the effect on the application of another client modifying the data while it is being read?
- And what is the effect of the application’s own updates on other clients trying to read the data?
That’s an onerous list, and working through it consumes a lot of developer time. Essentially, the engineer must do the hard work by hand to ensure that multiple clients don’t step on each other’s toes and to cope with stale data, along the lines of the sketch below.
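To give a flavor of that manual work, here is a rough sketch, assuming a hypothetical store in which each replica tags values with a version number and the client reconciles reads itself with last-write-wins:

```python
# Hypothetical client-side reconciliation over an eventually
# consistent store: each replica holds (version, value) pairs and the
# client keeps the newest version it can find (last-write-wins).

def reconciled_read(replicas, key):
    """Read every reachable replica and return the highest-versioned
    value, or None if no replica has the key. This narrows the window
    for stale reads but cannot detect a write no replica has seen yet."""
    results = [r.get(key) for r in replicas if r.get(key) is not None]
    if not results:
        return None
    return max(results, key=lambda pair: pair[0])  # newest version wins

# Replica 2 lags behind replica 1 after a recent update.
replicas = [
    {"cart": (7, ["book", "pen"])},
    {"cart": (5, ["book"])},
]
print(reconciled_read(replicas, "cart"))  # (7, ['book', 'pen'])
```

And this only addresses stale reads; ordering anomalies and write conflicts each demand their own machinery, for every piece of data the application touches.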
Eventual consistency represents a dramatic weakening of the guarantees that traditional databases provide and places a huge burden on software developers. Designing applications that behave correctly even when the database cannot be trusted to return accurate data is a huge challenge! In fact, Google addressed the pain points of eventual consistency in a recent paper on its F1 database and noted:
“We also have a lot of experience with eventual consistency systems at Google. In all such systems, we find developers spend a significant fraction of their time building extremely complex and error-prone mechanisms to cope with eventual consistency and handle data that may be out of date. We think this is an unacceptable burden to place on developers and that consistency problems should be solved at the database level.”
How we got here
Building an eventually consistent database has two advantages over building a strongly consistent one: (1) it is much easier to build a system with weak guarantees, and (2) database servers separated from the rest of the cluster by a network partition can still accept writes from applications. Unsurprisingly, the second justification is the one given by the creators of the first-generation NoSQL systems that adopted eventual consistency. Let’s explore that justification more carefully.
Many of the first-generation NoSQL systems that adopted eventual consistency were designed in the context of an early understanding of Eric Brewer’s CAP theorem. The popular but misleading summary was that developers had to “pick two out of three” of (C)onsistency, (A)vailability, and (P)artition tolerance.
The theorem applies to any distributed system whose communication channels can fail, and it appears to have dramatic consequences. On the assumption that system availability was essential, early NoSQL databases abandoned consistency (i.e., adopted eventual consistency), using the CAP theorem as their justification.
How do we get out?
“Availability” in the CAP sense, however, means that every node remains able to read and write even when it cannot communicate with the rest of the system. That would surely be desirable, but it is simple to see the impossibility the CAP theorem highlights: if a node cannot communicate with anything else, of course it cannot remain consistent.
Yet an excellent alternative is possible: a system that keeps some, but not all, of its nodes able to read and write during a partition is not available in the CAP sense, but it is still available in the practical sense that clients can talk to the nodes that remain connected. In this way, fault-tolerant databases with no single point of failure can be built without resorting to eventual consistency; the sketch below shows the idea.
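A minimal sketch of that alternative, assuming a five-node cluster in which each side of a partition serves requests only if it can reach a strict majority of nodes (all names here are illustrative):

```python
TOTAL_NODES = 5  # hypothetical cluster size

def handle_request(reachable_nodes, operation):
    """Serve the operation only on the side of the partition that holds
    a strict majority; the minority side refuses rather than diverge."""
    if len(reachable_nodes) > TOTAL_NODES // 2:
        return f"committed {operation!r} via {len(reachable_nodes)} nodes"
    raise RuntimeError("minority partition: refusing to serve")

# A partition splits the cluster 3-2.
print(handle_request({"n1", "n2", "n3"}, "write x=1"))  # majority: served
try:
    handle_request({"n4", "n5"}, "write x=2")
except RuntimeError as err:
    print(err)  # minority partition: refusing to serve
```

Because any two majorities overlap, no two sides of a partition can accept conflicting writes, so consistency is preserved while most clients keep working.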
Developers shouldn’t have to deal with eventual consistency. Vendors should stop hiding behind the CAP theorem as a justification for it. New distributed, consistent systems like Google Spanner concretely demonstrate that the supposed trade-off between strong consistency and high availability is a false one.
The next generation of commercial distributed databases with strong consistency won’t be as easy to build, but they will be much more powerful than their predecessors. Like the first generation, they will have true shared-nothing distributed architectures, fault tolerance, and scalability. However, rather than accepting eventual consistency, they will adopt far stronger models like ACID transactions, making them far more productive tools in the enterprise; a sketch of what that model looks like to an application follows.
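To illustrate what that stronger model buys developers, here is a minimal sketch of an atomic read-modify-write under optimistic concurrency control. The tiny in-memory `Database` is a hypothetical stand-in, not any particular product’s API:

```python
# A sketch of the ACID programming model: the database (not the
# application) detects conflicting concurrent access, and the client
# simply retries the whole transaction.

class ConflictError(Exception):
    pass

class Database:
    def __init__(self, data):
        self.data, self.version = dict(data), 0

    def begin(self):
        return Transaction(self)

class Transaction:
    def __init__(self, db):
        self.db, self.start_version, self.writes = db, db.version, {}

    def get(self, key):
        return self.writes.get(key, self.db.data[key])

    def set(self, key, value):
        self.writes[key] = value

    def commit(self):
        if self.db.version != self.start_version:
            raise ConflictError()          # another commit interleaved
        self.db.data.update(self.writes)   # apply all writes atomically
        self.db.version += 1

def transfer(db, src, dst, amount):
    """Atomic read-modify-write: others see both updates or neither."""
    while True:                            # retry on conflict
        tr = db.begin()
        try:
            tr.set(src, tr.get(src) - amount)
            tr.set(dst, tr.get(dst) + amount)
            tr.commit()
            return
        except ConflictError:
            continue                       # safe to re-run the transaction

db = Database({"alice": 100, "bob": 0})
transfer(db, "alice", "bob", 30)
print(db.data)  # {'alice': 70, 'bob': 30}
```

Contrast this with the checklist earlier: conflict detection lives in the database, and a simple retry loop is the application’s only concession to concurrency.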
Dave Rosenthal is a co-founder of FoundationDB.
via Gigaom http://feedproxy.google.com/~r/OmMalik/~3/duAQGAw-iM0/