Saturday, June 30, 2012

Datomic: An Initial Comparative Analysis


/*---[ Day of Datomic: Initial Thoughts ]---*/

Yesterday I attended a "Day of Datomic" training course, taught by Stuart Halloway. It was well taught and had quite a bit of lively Q&A and discussion. If you have the opportunity to take the Day-of-Datomic course I highly recommend it.

At the bar at the end of class, one of my fellow students and I were discussing Rich Hickey and he said, "Rich has clarity". Rich has thought long, deep and critically enough to see through to the essence of key issues in modern computer science, bypassing things that are so ingrained as to be a familiarity blindness for the rest of us. Datomic is the latest brain-child of Rich Hickey, co-created by him, Stu and others at ThinkRelevance, here in Durham, where I live.

To be clear, I have not yet attained "clarity" on Datomic. The training course was an excellent deep dive that lays the groundwork for you to climb the hill of understanding, but that hill still lies ahead of me. As Fogus said, this is something very new, at least to most of us who are used to thinking in terms of relational algebra, SQL, and structured tables. Even for those of us now comfortable with document databases (Mongo and Couch) and key-value stores, this is still different in many important ways.

I won't have time to dive deeper into Datomic for a while (too many other things on my to-do/to-research list), but I wanted to capture some key notes and thoughts that I can come back to later and may help others when starting out with Datomic.

I am not going to try to give an overview of Datomic. There are lots of good resources on the web now, starting with the Datomic web site.

I reserve the right to retract any thoughts I lay out here, as I have not yet done my hammock time on Datomic.


/*---[ Compare it to what you know ]---*/

When you get something new, you want to start by comparing it to what you know. I think that's useful for finding the boundaries of what it's not and tuning the radio dial to find the right way to think about it. There is enough new here that comparisons alone are not sufficient to get a handle on Datomic, but it's one helpful place to start.

Here is a partial list of things that could be useful to compare Datomic to:

  1. Git
  2. Document dbs (MongoDB and CouchDB)
  3. Graph dbs (Neo4j)
  4. RDF Triple Stores
  5. Event Sourcing
  6. VoltDB (and other "NewSQL" databases)


/*---[ Datomic v. Git ]---*/

I've seen the "Datomic is like Git" analogy come up a couple of times now - first in the Relevance podcast with Stu, once while describing Datomic to a work colleague, and again yesterday in the class. This seems to resonate with many, so use it as a starting point if it helps you, but the analogy falls short in a number of areas.

In fact, the analogy is useful really only in one way - for the most part we aren't used to thinking in terms of immutability. Stu emphasized this in the course - immutability changes everything. Rich Hickey, in his Clojure West talk on Datomic, said "you cannot correctly represent change without immutability ... and that needs to be recognized in our architectures".

Git has given us the idea of immutable trees (DAGs, actually) of data in our history, each wrapped in a SHA-1 identifier that guarantees the tree has not changed since it was committed. These trees of history could have been created through sequential commits or a variety of branches and merges. Regardless, I can always go back to that point in time.

That idea does resonate with Datomic's design, but after that the analogy falls apart. Datomic doesn't have multiple branches of history - there is only one sequential trail of history. Datomic is not meant to be a source code repository. Another difference is that Datomic is intended to deal with much larger volumes of data than git. Git does not separate storage from query or transactional commits, as Datomic does, because it doesn't need to: you should never have a source code repo so big that you need to put it on a distributed storage service like the ones Datomic uses on AWS.


/*---[ Datomic v. Document data stores ]---*/

"Schema" in Datomic is the thing I struggled with the most in the class. I think it will be important carefully and systematically spell out what is (and isn't) a schema in Datomic.

In my view, the Datomic schema model is non-obvious to a newcomer. On the one hand I got the impression from the earlier talks I'd seen on Datomic that Datomic is just a "database of facts". These facts, called "datoms", do have a structure: Entity, Attribute, Value and Time/Transaction (E/A/V/T for short). But "Datomic is a database of facts" sounds superficially like a set of records that have no other structure imposed on them.

On the other hand, my naive assumption coming into the training was that Datomic is actually quite close to the Mongo/Couch model. When written out, Datomic records look like constrained JSON - just vectors of E/A/V/T entries, some of which can be references to other JSON-looking E/A/V/T facts - which seems like a way to link documents in a document datastore.

It turns out that neither of those presuppositions is right. Datoms are not JSON documents. Stu commented that, of all the databases out there, the Mongo/document-store model is probably the most different from the Datomic model (see Figure 1 below). That was actually quite surprising to me.
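
To make the shape concrete, here is a minimal sketch (my own illustration in Java, not Datomic's internal representation) of a datom as a flat 4-tuple rather than a nested document:

    // A datom is a flat fact: entity, attribute, value, transaction.
    // (Real Datomic datoms also carry an added/retracted flag.)
    record Datom(long entity, String attribute, Object value, long tx) {}

    public class DatomShape {
        public static void main(String[] args) {
            // Two facts about entity 42, both asserted in transaction 1000.
            Datom name = new Datom(42, ":person/name", "Rich", 1000);
            // A ref-typed value is just another entity id, not a nested document:
            Datom friend = new Datom(42, ":person/friend", 43L, 1000);
            System.out.println(name);
            System.out.println(friend);
        }
    }

There is no containing "document" at all - just a flat, uniform set of these facts, plus schema about the attributes themselves.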

I have to go off and think about the differences of these models some more. It will take a bit of tuning of the radio dial back and forth to understand what the Datomic schema is.


/*---[ Datomic v. Graph databases (Neo4j) ]---*/

One area not covered much in the day-of-datomic training was a comparison of Datomic to graph databases (Neo4j in particular).

A significant part of the object-relational impedance mismatch is the translation of relational rectangles to object graphs. When people come to a graph database they get really excited about how this impedance falls away. Walking hierarchies and graphs seems beautifully natural and simple in a graph database, compared to the mind-bending experience of doing that with SQL (or extensions to SQL).

So can Datomic function as a graph database? I don't know. Maybe. This type of db was not on Stu's slides (see Figure 1 below). When setting up attributes on an entity, you have to specify the type of each attribute. One type is a ref - a reference to another entity. By setting up those refs in the schema of attributes, is that basically the same as setting up a Relationship between Nodes in a graph db? Or does Neo4j do things for which Datomic is either unsuited or less performant?


[Update: 02-July-2012]: Stu Halloway sent me an email that gave some feedback on this question. First, he pointed me to an article by Davy Suvee in which he gives a nice set of basic examples of how Datomic keeps a graph of relations that can be queried using Datalog.

Stu pointed out that one difference between Datomic and a graph database like Neo4j is that the latter "reifies edges". In graph theory you have two basic concepts: the Node (Vertex) and the Relationship (Edge) between two Nodes. Both are first class (reified) entities of the graph model - they can have identities and attributes of their own.

Datomic does not make the Relationship a thing unto itself - it is just the "ref" between two entities (nodes). In Stu's words: "We believe many of the common business uses for reified edges are better handled by reified transactions". More food for thought! :-)

That being said, you can wrap Datomic in a full-blown graph API, and that is just what Davy Suvee has done. He implemented Datomic as a backend to the Blueprints graph API and data model, which is pretty good evidence that Datomic can function as a graph database for at least some important subset of use cases.
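
To give a feel for ref traversal, here is a hedged sketch along the lines of Suvee's examples, using Datomic's Java Peer API. The schema attributes (:person/name, :person/friend) are hypothetical, and in a real program the schema and some data would have to be transacted before the query returned anything:

    import datomic.Connection;
    import datomic.Database;
    import datomic.Peer;
    import java.util.Collection;

    public class FriendsOfFriends {
        public static void main(String[] args) {
            // In-memory database for illustration; schema/data loading omitted.
            String uri = "datomic:mem://example";
            Peer.createDatabase(uri);
            Connection conn = Peer.connect(uri);
            Database db = conn.db();

            // Each :where clause is an edge to walk; two ref hops give
            // friends-of-friends, with no join tables in sight.
            Collection<?> results = Peer.q(
                "[:find ?fof-name " +
                " :in $ ?name " +
                " :where [?me  :person/name   ?name] " +
                "        [?me  :person/friend ?f] " +
                "        [?f   :person/friend ?fof] " +
                "        [?fof :person/name   ?fof-name]]",
                db, "Rich");
            System.out.println(results);
        }
    }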



/*---[ Datomic v. RDF Triple Stores ]---*/

Interestingly, the part of the NoSQL world that seems to get the least attention (at least in the circles I mingle in, including sifting through Hacker News with some regularity) is the one whose data model comes closest to Datomic's - RDF triple stores.

Rich, in his Clojure West talk, says that RDF triple stores almost got it right. Triple stores use subject-predicate-object to represent facts, which maps to Datomic's entity-attribute-value concept. The piece missing is time, which Datomic models as, or wraps into, transactions. Transactions in Datomic are first-class creatures: they encapsulate the timestamp and the order of history, and they can have additional attributes added to them (such as the user id associated with the transaction).
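
Here is a sketch of what first-class transactions buy you, using the Java Peer API: because the transaction is itself an entity, you can assert facts about it in the very write it performs. The :audit/user and :person/name attributes are hypothetical and would need to exist in the schema first:

    import datomic.Connection;
    import datomic.Peer;
    import datomic.Util;

    public class AnnotatedTx {
        public static void main(String[] args) throws Exception {
            String uri = "datomic:mem://example";
            Peer.createDatabase(uri);
            Connection conn = Peer.connect(uri);

            // A temp id in the transaction partition resolves to the
            // transaction entity that this very write creates.
            Object txId = Peer.tempid(":db.part/tx");
            conn.transact(Util.list(
                // the business fact...
                Util.list(":db/add", Peer.tempid(":db.part/user"), ":person/name", "Rich"),
                // ...and a fact *about the transaction* that asserted it
                Util.list(":db/add", txId, ":audit/user", "mpeterson")
            )).get();
        }
    }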

Frankly, I don't know a lot about RDF triple stores, so I'm just going to include this slide from Stu's training session for your consideration:

Fig. 1: Comparison of Datomic to other databases using Agility as a metric. (From Stuart Halloway's training slides)


/*---[ Datomic v. Event Sourcing ]---*/

Event sourcing is an intriguing and compelling idea: changes to state should be recorded as a sequence of events. Those events are immutable and often written to an append-only log of "facts". When you do event sourcing with a relational database, you have to decide how to log your facts and then what to represent in the relational schema. Will you keep only "current state"? Will you keep all of history? Will you set up a "write-only" or "write-predominant" database where up-to-date state is kept, and then read-only databases that somehow get populated from this main datastore?
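
To ground the idea, here is a tiny generic sketch (plain Java, nothing Datomic-specific) of the core mechanic: an append-only log of immutable events, with any past or present state derived by replaying the log:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class EventLog {
        // An immutable event: who, what changed, and its position in history.
        record Event(long seq, String account, long delta) {}

        private final List<Event> log = new ArrayList<>(); // append-only

        public void append(String account, long delta) {
            log.add(new Event(log.size(), account, delta));
        }

        // "Current state" is just a replay of the log; stopping the replay
        // early gives you the state as of any past point in time.
        public Map<String, Long> stateAsOf(long seq) {
            Map<String, Long> balances = new HashMap<>();
            for (Event e : log) {
                if (e.seq() > seq) break;
                balances.merge(e.account(), e.delta(), Long::sum);
            }
            return balances;
        }

        public static void main(String[] args) {
            EventLog bank = new EventLog();
            bank.append("alice", 100); // seq 0
            bank.append("alice", -30); // seq 1
            System.out.println(bank.stateAsOf(0)); // {alice=100}
            System.out.println(bank.stateAsOf(1)); // {alice=70}
        }
    }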

There are lots of ways to set it up and thus lots of decisions to make in your architecture. Read Fowler's write-up on it if you are new to the idea. The newness of the idea is something of a risk when trying to figure out the best way to implement it.

As I understand it so far, Datomic is an implementation of an event sourcing model that also answers many of these ancillary implementation questions for you, such as: where and in what format will you write the transaction log? How will you structure your query tables? Will you replicate those tables? Will you project those tables into multiple formats for different types of queries and analytics? Will transactions be first-class entities?

Datomic's answers to those questions include:

  1. There is a component of the system called the transactor, and it writes immutable transactions in a variant of "append-only", since it writes transactional history to trees. As I'll discuss in the next section, all transactions are handled in a serialized (single-threaded) fashion.
  2. You structure your data into E/A/V/T facts using the Datomic schema model constraints. Queries are done on the Peers (app tier) and those Peers have read-only datasets.
  3. You don't need to make multiple projections of your data. Datomic already gives you four "projections" (indexes) out of the box - one index by transaction/time and three other indexes (a toy sketch of the multiple-sort-orders idea follows this list):
    1. An EAVT index, meaning an index ordered first by entity (id), then by attribute, then by value and finally by transaction
      • This is roughly equivalent to row-level index access in a relational system
    2. An AEVT index order, which is the analog of a column store in an analytics database
    3. A VAET index order, covering reference attributes, for navigating from a known value back to the entities that point at it
    4. Lastly, you can also specify that Datomic keep an AVET index, which is useful for range queries on attribute values. It is not turned on by default, since it significantly slows down transactional write throughput
  4. Transactions are first-class and have their own attributes. You can query the database by transaction/time, and you can go back to a point in history and see what the state of the database was at that time.
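
Here is the toy sketch promised above (my own illustration, not Datomic's storage format): the same handful of facts kept in different sort orders, which is essentially what these covering indexes are:

    import java.util.Comparator;
    import java.util.TreeSet;

    public class IndexOrders {
        record Datom(long e, String a, String v, long t) {}

        public static void main(String[] args) {
            Datom[] datoms = {
                new Datom(42, ":person/name", "Rich", 1000),
                new Datom(43, ":person/name", "Stu", 1000),
                new Datom(42, ":person/city", "Durham", 1001),
            };

            // EAVT: all facts about an entity are adjacent (row-like access).
            TreeSet<Datom> eavt = new TreeSet<>(
                Comparator.comparing(Datom::e).thenComparing(Datom::a)
                          .thenComparing(Datom::v).thenComparing(Datom::t));

            // AEVT: all values of one attribute are adjacent (column-like access).
            TreeSet<Datom> aevt = new TreeSet<>(
                Comparator.comparing(Datom::a).thenComparing(Datom::e)
                          .thenComparing(Datom::v).thenComparing(Datom::t));

            for (Datom d : datoms) { eavt.add(d); aevt.add(d); }
            System.out.println("EAVT order: " + eavt);
            System.out.println("AEVT order: " + aevt);
        }
    }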

I think one of the most exciting things about Datomic is that it gives you an event-sourced system with many of the design decisions already made for you.


/*---[ Datomic v. VoltDB ]---*/

I did a short write-up on VoltDB and the "new SQL" space a few months ago.

Like VoltDB, Datomic posits that we do not have to give up ACID transactions in order to improve scalability over traditional relational models. VoltDB and Datomic also share one core premise:

all transactional writes should be serialized by using a single writer.

In VoltDB, there is one thread in one process on one server that handles all transactions. In Datomic, there is one transactor on one machine (either in its own thread or process, I'm not sure which) that handles all transactions. And unlike in most NoSQL systems (e.g., MongoDB), a transaction can involve any number of inserts/updates/deletes (VoltDB) or additions/retractions (Datomic).
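
Here is a toy illustration of that shared premise (plain Java, not the actual machinery of either product): all writes funnel through one queue consumed by one thread, so transactions are serialized by construction, with no locks or latches:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class SingleWriterTransactor {
        interface Tx { void apply(Map<String, Long> state); }

        private final BlockingQueue<Tx> inbox = new LinkedBlockingQueue<>();
        private final Map<String, Long> state = new HashMap<>(); // touched only by the writer thread

        public void start() {
            Thread writer = new Thread(() -> {
                try {
                    while (true) {
                        inbox.take().apply(state); // one tx at a time, in arrival order
                    }
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                }
            });
            writer.setDaemon(true);
            writer.start();
        }

        public void submit(Tx tx) { inbox.add(tx); }

        public static void main(String[] args) throws InterruptedException {
            SingleWriterTransactor t = new SingleWriterTransactor();
            t.start();
            // A multi-operation transaction, applied atomically with no locks
            // because only the writer thread ever touches the state:
            t.submit(s -> { s.merge("alice", -10L, Long::sum);
                            s.merge("bob",    10L, Long::sum); });
            t.submit(s -> System.out.println(s)); // runs strictly after the transfer
            Thread.sleep(200); // crude: give the daemon writer time to drain the queue
        }
    }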

Another similarity is that both VoltDB and Datomic allow you to add embedded functions or stored procedures to the transactor that will run inside the transaction. In both cases, you add or compile the embedded transaction functions to the transactor itself. Those functions do not run in the app tier.

Michael Stonebraker, a key designer of VoltDB, based this design on the observation that traditional relational databases spend on the order of 80% of their cycles doing non-productive work: managing locks and latches, commit logs and buffer caches. By moving to a single-threaded model, the overhead of write contention goes away. VoltDB claims its transactional write throughput is on the order of 40x to 50x that of traditional relational databases.

Where Datomic and VoltDB part ways is that VoltDB retains SQL and a fully relational model, using an in-memory database that is shared across many nodes in a cluster.

Another key difference is that VoltDB really focuses on the OLTP side, not on analytics. Datomic can target both areas.


/*---[ Datomic v. Stonebraker's "10 Rules" ]---*/

In the summer of 2011, Michael Stonebraker and Rick Cattell published an ACM article called "10 Rules for Scalable Performance in ‘Simple Operation’ Datastores". By "Simple Operation Datastore" they mean databases that are not intended to be data warehouses, particularly column store reporting databases. So traditional relational databases and most NoSQL databases, including Datomic, are "Simple Operation" by this definition.

How does Datomic fare against their 10 rules? As I said, I'm still too new to Datomic to give a definitive answer, but here's my first pass at assessing Datomic against them. I'd invite others to chime in with their own analysis. These axes provide another way for us to analyze Datomic.


Rule 1: Look for Shared Nothing Scalability

A shared-nothing architecture is a distributed or clustered environment in which each node shares neither main memory nor disk. Instead, each node has its own private memory, disks and IO devices independent of other nodes, allowing it to be self-contained and possibly self-sufficient, depending on the usage conditions. Typically in a shared nothing environment the data is sharded or distributed across nodes in some fashion.

An alternative architecture is shared disk - a disk that is accessible from all cluster nodes. In a shared disk environment, write contention must be managed by the disk or database server.

So is Datomic a shared nothing architecture? One could argue that the Peers are not entirely self-sufficient, as they rely on the storage service to pull data. And the transactor is not self-sufficient as it has no storage of its own - it depends on the storage service.

However, I think Datomic is shared nothing in that there will never be write contention among nodes - there is only one (active) transactor.

I also don't quite know how the storage service is architected with Datomic. If it appears as a monolithic service to the Peers on the app tier, then if it goes down, none of the Peers can function unless they have cached all the data they need.


Rule 2: High-level languages are good and need not hurt performance.

This is a direct shot at the fact that many NoSQL databases have had to reinvent APIs for querying data, and many have given us low-level programming APIs, not DSLs or high-level languages. (To be fair, some have - Cassandra has CQL and Neo4j has the excellent Cypher language.)

Datomic fares very well on this one - it gives us Datalog, which, as Stu pointed out in the training, is as mathematically sound as relational algebra and is transformable into it.
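
For flavor, here is a hedged sketch of a Datalog join as you might issue it through the Java Peer API; the attribute names are hypothetical and the db argument would come from a live connection:

    import datomic.Database;
    import datomic.Peer;
    import java.util.Collection;

    public class DatalogJoin {
        // Roughly the Datalog analog of:
        //   SELECT c.name, o.total FROM customer c
        //   JOIN orders o ON o.customer_id = c.id
        static Collection<?> orderTotals(Database db) {
            return Peer.q(
                "[:find ?name ?total " +
                " :where [?c :customer/name  ?name] " +
                "        [?o :order/customer ?c] " +
                "        [?o :order/total    ?total]]",
                db);
        }
    }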


Rule 3. Plan to carefully leverage main memory databases.

Datomic allows you to pull arbitrary amounts of data from the datastore into memory. The amount is limited by (at least) two things:

  1. how much memory you can put on the app/Peer machine

  2. how much memory you can put into a JVM without hitting a size where GC pauses interfere with the application

On the GC end, you have a couple of options to ramp up. First, Datomic has a two-tier cache on the Peer - one tier in main memory (on-heap) and one that is "compressed", i.e., treated as a byte array, which could be on- or off-heap, but in either case puts less pressure on the garbage collector.
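
As a toy model of that two-tier idea (plain Java, not Datomic's actual cache): a small hot tier of live objects that evicts entries into a compressed byte-array tier instead of discarding them, trading a little CPU for less GC pressure:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.UncheckedIOException;
    import java.nio.charset.StandardCharsets;
    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class TwoTierCache {
        private static final int HOT_MAX = 2; // tiny, to make eviction visible

        // Tier 2: compressed byte arrays; the GC sees one opaque array per
        // entry instead of a web of small objects.
        private final Map<String, byte[]> warm = new HashMap<>();

        // Tier 1: live objects, LRU-evicted into tier 2 rather than dropped.
        private final LinkedHashMap<String, String> hot =
            new LinkedHashMap<>(16, 0.75f, true) {
                @Override
                protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                    if (size() > HOT_MAX) {
                        warm.put(e.getKey(), gzip(e.getValue())); // demote, don't drop
                        return true;
                    }
                    return false;
                }
            };

        public void put(String key, String value) { hot.put(key, value); }

        public String get(String key) {
            String v = hot.get(key);
            if (v == null) {
                byte[] packed = warm.remove(key);
                if (packed == null) return null; // a real Peer would now hit storage
                v = gunzip(packed);
                hot.put(key, v); // promote back to the hot tier
            }
            return v;
        }

        private static byte[] gzip(String s) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(s.getBytes(StandardCharsets.UTF_8));
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
            return bos.toByteArray();
        }

        private static String gunzip(byte[] b) {
            try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(b))) {
                return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        }

        public static void main(String[] args) {
            TwoTierCache cache = new TwoTierCache();
            cache.put("a", "alpha");
            cache.put("b", "beta");
            cache.put("c", "gamma");            // evicts "a" into the warm tier
            System.out.println(cache.get("a")); // decompressed and promoted: alpha
        }
    }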

Second, if you want really big JVMs with on-heap memory and no GC pauses, you can leverage Azul's JVM, though it will cost you extra.

Datomic scores reasonably well here, but it is not architected to be a sharded in-memory database. However, with a very fast network and SSDs on AWS, Datomic's performance may not suffer significantly. We await some good case studies on this.


Rule 4. High availability and automatic recovery are essential for SO scalability.

From the Stonebraker paper:

Any DBMS acquired for SO applications should have built-in high availability, supporting nonstop operation.

With an AWS deployment, you inherit any weaknesses of the AWS model (e.g., a few prominent outages recently), but overall it's a highly redundant and highly available storage service. With Datomic you can have any number of Peers on the app tier, so replicate those as needed.

You are not required to use AWS. Currently Datomic can also be deployed on relational systems, and they hope to be able to use Riak in the future.

The single point of failure would appear to be the single transactor, but Datomic will provide a failover mode to an online backup transactor. We'll need to learn whether this failover will be provided as an automatic service or will require configuration/set-up by the user/administrator.


Rule 5. Online everything.

This overlaps rule 4 in terms of always being up, but adds to it the idea that schema changes, index changes, reprovisioning and software upgrades should all be possible without interruption of service.

I don't have hard data here from the Datomic guys, but my guess is that Datomic will pass this test with flying colors. Most indexes are already turned on by default, and I suspect turning on the remaining one (see above) can be done without database downtime.

Schema changes ... hmm. Stu said in the training course that attributes cannot (currently) be removed from the schema, but you can add new ones. Would this require rebuilding the indexes? I doubt it, but I'd need more information on this one.

For reprovisioning and software upgrades, it is easy to envision how to add more storage and Peers, since you have redundancy built in from the start. For the transactor, reprovisioning would not mean spinning up more nodes to process transactions; it would mean getting a bigger/faster box. I suspect you could provision the new box offline while your existing transactor stays in motion, then swap to the improved system in real time using the failover procedure.


Rule 6: Avoid multi-node operations.

From the Stonebraker paper:

For achieving SO scalability over a cluster of servers .... Applications rarely perform operations spanning more than one server or shard.

Datomic does not shard data sets. The only operation that needs to be performed over more than one server is when a Peer needs to access data that has not yet been cached on the app tier. It then makes a network lookup to pull in that data. This is certainly slower than a complete main-memory database, but Stu said that AWS lookups are very fast - on the order of 1 ms.


Rule 7: Don’t try to build ACID consistency yourself.

Unlike most NoSQL datastores, Datomic has not abandoned ACID transactions. It handles transactions of arbitrary complexity in a beautiful, fast serialized model. A+ on this one.


Rule 8: Look for administrative simplicity.

From the Stonebraker paper:

One of our favorite complaints about relational DBMSs is their poor out-of-the-box behavior. Most products include many tuning knobs that allow adjustment of DBMS behavior; moreover, our experience is that a DBA skilled in a particular vendor’s product, can make it go a factor of two or more faster than one unskilled in the given product.

As such, it is a daunting task to bring in a new DBMS, especially one distributed over many nodes; it requires installation, schema construction, application design, data distribution, tuning, and monitoring. Even getting a high-performance version of TPC-C running on a new engine takes weeks, though code and schema are readily available.

I don't have a lot of data one way or the other on this for Datomic. I doubt there will be a lot of knobs to turn, however. And if you are deploying to AWS, there is a Datomic AMI, so deployment and set-up of the storage side should be quite simple.


Rule 9: Pay attention to node performance.

Not every performance problem can or should be solved with linear scalability. Make sure your database can have excellent performance on individual nodes.

This one is a no-brainer: A+ for Datomic. You can run all the pieces on the same box, entirely on different boxes, or mix and match as desired to put the faster systems in place where you need them for each piece. You can also put your high-intensity analytics queries on one box and your run-of-the-mill queries on another, and they will not interfere. Performance of the system will be entirely dependent on individual node performance (and data lookups back into your data store).


Rule 10. Open source gives you more control over your future.

Datomic is a commercial product and is not currently open source or redistributable, as I understand it.

From the Stonebraker paper:

The landscape is littered with situations where a company acquired a vendor’s product, only to face expensive upgrades in the following years, large maintenance bills for often-inferior technical support, and the inability to avoid these fees because the cost of switching to a different product would require extensive recoding. The best way to avoid “vendor malpractice” is to use an open source product.

VoltDB, as an example, is open source and seeks commercial viability through the support model that has worked well for Red Hat. Datomic is still young (less than six months old), so we will see how its pricing and support model evolve.


/*---[ Coda ]---*/

Remember that Datomic is more than the union of all these comparisons, even assuming I got all my facts right. Datomic not only combines the best of many existing solutions, but brings new things to the table. Like anything of significant intellectual merit, it will take an investment of your time to learn it and understand it, and for us, as a community of users, to figure out best practices with it.

Datomic is interesting and fascinating from an intellectual and architectural point of view, but it is also eminently practical and ready to use. I look forward to following its progress over the coming months and years.

Lastly, if it is of any use to anyone, here are my raw notes from the day-of-datomic training.

Comments:

  1. Thanks for thematizing, organizing, and posting this set of impressions. As someone on the periphery of, but interested in, Hickey's inventions (Clojure, Datomic), I found it useful.

  2. RDF stores typically do support update. There are several ways to model time and transactions in RDF though. I have been combining event stores and triples stores for a while now using the Talis Change Set model. A transaction is recorded as a "change set", i.e. triples that describe a change to a graph.

    In this scenario, I refer to the sequence of changes over time as the "system of record". It is an event store, and only appends more recent changes to previous changes. Every transaction appends a change set to the event store.

    The graph of "data" per se I do not allow to be directly updated by an application. Applications submit changes as proposed transactions. Approved transactions append to the change set. Subsequently any given "system of reference" which is a graph of the data using a triple store, may update itself to represent the graph as it exists at that new point in time.

    There may be systems of reference that remain at earlier points in time for various reasons. There may be one or more that keep themselves up to date with the latest change in the event store.

    Event stores, and the data of the graph at any point in time, are just RDF triples and can be queried using SPARQL. Some implementations of triple stores, such as Jena and AllegroGraph, have essentially Datalog engines available as well as SPARQL. (Jena has forward and backward rules engines, the backward engine being essentially Datalog. AllegroGraph is written in Franz Common Lisp, which includes a Prolog implementation, and so it offers not Datalog per se but Prolog, which is close enough, and has a bit more capability if desired.)

    Replies
    1. Hi Patrick,

      Thanks for the detailed write-up from the RDF triple store side. Comparing it to something I know, I'd say what you describe in the first half of your comment is a way to build an event-sourced model on an RDF data model. As I said in the blog, there are a lot of design and implementation decisions one has to make when choosing event sourcing, and it looks like you've thought about how to do that with triples.

      I recently saw a tweet (https://twitter.com/tastapod/status/220859303270162433) suggesting that Datomic is a paradigm shift and, if enough people pay attention to it, may very well change how we approach data and databases. I haven't paid much attention to RDF stores in the past, but I suspect there are some serious paradigmatic differences there as well that the broader software and computer science community needs to start digesting. A deep comparison between Datomic and RDF triple stores, and how they may alter our future data landscape, would be a welcome analysis in my book.

      Thanks for giving us a taste of that comparison.
