From MVC to MOVE

Conrad Irwin has a nice post on how to fix some of the problems with the ubiquitous MVC (“Model View Controller”) architecture that’s used in so many user interface frameworks, from rich clients to web apps.

He calls his model MOVE, i.e. Models, Operations, Views and Events: his Models capture the information in the application, Views render it and, when the user interacts with a View, raise Events that are received by Operations that modify the Model.

In InfoGrid, we obviously use the word model more precisely by distinguishing what we call a model (the schema) and the graph of objects (the data) — together what he calls the Model.

Viewlets are exactly what he means by a View.

Where it gets interesting is Operations and Events: since the InfoGrid HttpShell started supporting HttpShellHandlers, these handlers are almost exactly what he describes as Operations. We can think of the symbolic names of handlers, as used in a JSP page, as the Events in his model, and the binding of symbolic names to handlers as the mapping between Events and Operations.
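To make the mapping concrete, here is a minimal sketch in plain Java rather than the actual HttpShell API; EventDispatcher, Operation and the model type are all made up for illustration:

// Minimal sketch of binding symbolic event names to operations that modify
// the model. Illustrative only; not the InfoGrid HttpShell API.
import java.util.HashMap;
import java.util.Map;

interface Operation {
    void perform( Map<String,Object> model ); // an Operation mutates the Model
}

class EventDispatcher {
    private final Map<String,Operation> bindings = new HashMap<>();

    // bind a symbolic event name (as it would appear in a JSP page) to an Operation
    void bind( String eventName, Operation op ) {
        bindings.put( eventName, op );
    }

    // a View raises an Event by name; the bound Operation updates the Model
    void raise( String eventName, Map<String,Object> model ) {
        Operation op = bindings.get( eventName );
        if( op != null ) {
            op.perform( model );
        }
    }
}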

So we like his description of an improved MVC model, and offer an example where it has been implemented.

Database Impedance Mismatches

Alex Popescu argues that document databases have an impedance mismatch with object-based software. He compares the situation to relational databases:

“One of the most often mentioned issues reported by software engineers working with relational databases from object-oriented languages is the object-relational impedance mismatch. Document database adopters are saying that one benefit of document stores is that there is no impedance mismatch between the object and document worlds.

“I don’t think this is entirely true.”

Given that databases and in-memory representations of objects serve different purposes, I don’t think we’ll ever completely manage to do without mappers of some kind. (Otherwise we’d all be using Java Object Serialization as our database.)

However, it appears to me that of all database technologies, graph databases like InfoGrid have the smallest impedance mismatch: a graph of objects is fundamentally closer to what is happening in the memory of an object-based application than any other representation (e.g. document-based / hierarchical / relational / key-value etc.)
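As a rough illustration, in plain Java and with nothing InfoGrid-specific about it: an in-memory object graph already consists of nodes and references, so persisting it as a graph is a near-1:1 mapping, while persisting it as tables requires foreign keys, join tables and an O/R mapping layer.

// An in-memory object graph is already nodes plus edges.
import java.util.ArrayList;
import java.util.List;

class Person {
    final String name;
    final List<Person> knows = new ArrayList<>(); // edges, held directly

    Person( String name ) {
        this.name = name;
    }
}

class Demo {
    public static void main( String [] args ) {
        Person alice = new Person( "Alice" );
        Person bob   = new Person( "Bob" );
        alice.knows.add( bob ); // traversing in memory mirrors traversing the stored graph
    }
}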

One More Word on GraphDBs and Schemas

myNoSQL picked up our recent post on why having a schema is A Good Thing. In the comments to his post, various other graph database vendors speak up. They are far more ambiguous on the matter than we are …

For now, I think of Brian Aker’s take on this subject as the final word. With the comment “I know where everything is… don’t touch”, he shows this picture:

[very messy office]

If you can afford code and data like this, be my (schema-less) guest. Personally, I can’t.

’nuff said.

ACID Transactions Are Overrated

For many years, the canonical example of why we need database transactions has been banking. If you move $100, you don’t want the money to be subtracted from the first account but never added to the second because of some problem in between. You want both the subtraction and the addition to happen, or neither.

Sounds good so far. Except that, apparently, this is not how banks work in the real world, and they certainly use enough database systems that have ACID transactions. The Economist (July 24, 2010, “Computer says no”) quotes a former executive of the Royal Bank of Scotland saying: “The reality was you could never be certain that anything was correct.” Continuing: “Reported numbers for the bank’s exposure were regularly billions of dollars adrift of reality.” The article offers an explanation: “banks tend to operate lots of different databases producing conflicting numbers.” HSBC is quoted as running 55 separate systems for core banking, 24 for credit cards, and 41 for internet banking.

According to traditional transaction wisdom, if a customer makes an internet transaction to pay off his credit card, it should be a single transaction: start transaction, take money from the checking account, put it into the credit card account, commit. But transactions generally cannot span systems. Because, in this real-world example, the system responsible for internet banking is separate from the core banking and credit card systems, no such single ACID transaction is possible. Given the numbers above, it looks very much like the money transfers that actually can follow the canonical ACID transaction pattern (such as transfers from checking to savings) constitute only a very small fraction of all transactions.

If I look at my own banking, the vast majority of my activity isn’t even within the same bank: most bills have to be paid to accounts at other banks. And I have never heard of cross-bank ACID database transactions.

So banking software necessarily has to have functionality that prevents money from being deducted but never arriving, all without depending on database transactions; a sketch of what that might look like follows below. If we have to have this functionality anyway, why then are transactions “indispensable”, as some people still want to make us believe?
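One common shape for such functionality is a compensating action: if the second step fails, undo the first and hand the case to retry or reconciliation. A minimal sketch in Java, with all class and method names hypothetical and the real-world machinery (durable logs, retries, idempotency) omitted:

// Hypothetical compensation-based transfer across two systems that do not
// share an ACID transaction. Names are illustrative only.
class Account {
    private long balanceCents;

    synchronized void debit( long cents )  { balanceCents -= cents; }
    synchronized void credit( long cents ) { balanceCents += cents; }
}

class TransferService {
    void transfer( Account from, Account to, long cents ) {
        from.debit( cents );          // step 1, on system A
        try {
            to.credit( cents );       // step 2, on system B
        } catch( RuntimeException ex ) {
            from.credit( cents );     // compensate: undo step 1
            throw ex;                 // surface for retry / reconciliation
        }
    }
}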

This pattern can be generalized: the more distributed and decentralized a system is, the less likely it is that we can use transactions that span the entire system. That is certainly true for the banking system, apparently also true for systems inside banks, and in many other places. ACID transactions were invented for the mainframe, the world’s most centralized computing construct. But computing, I’m afraid, is not “one mainframe” any more, as it was in the sixties.

Instead of trotting out transactions as the answer, what we need for NoSQL databases is the ability to get the same benefits in a distributed system (“nothing is lost”) without relying on transactions. That’s where all of our efforts should be.

In the InfoGrid graph database, we have a weaker form of transactions for individual MeshBases, but then synchronize with the rest of the world by passing XPRISO messages. I’d be surprised if the eventual “transactional” architecture for large-scale distributed and decentralized systems looked very different.
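For illustration, the local-transaction pattern looks roughly like this. This is a sketch based on my reading of the InfoGrid API; verify the exact method names against the code base:

// Sketch: a local transaction guarding changes to a single MeshBase.
// 'mb' is a MeshBase obtained elsewhere.
Transaction tx = null;
try {
    tx = mb.createTransactionNow();   // local to this MeshBase only

    MeshObject obj = mb.getMeshObjectLifecycleManager().createMeshObject();
    // ... further local changes ...

} finally {
    if( tx != null ) {
        tx.commitTransaction();       // afterwards, changes propagate to replicas as XPRISO messages
    }
}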

InfiniteGraph Implementation of FirstStep

InfiniteGraph, currently the youngest member of the GraphDB party, has now also implemented our FirstStep example. Todd Stavish’s code is here. It joins implementations of the same example from InfoGrid, Neo4j, Sones, and Filament.

[Update: see Todd's comment below. Apparently there is checking in the second step.] On cursory examination, I’m surprised that InfiniteGraph allows you (requires you?) to create edges without source and destination nodes; only after the edge has been created does one assign the nodes to it. It’s unclear to me why this would be an advantage in any particular scenario; however, there is a clear disadvantage for the application developer, who now has to check for null pointers that wouldn’t be necessary in most other graph databases (InfoGrid included).
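To illustrate the concern, here is the shape of the problem in Java; this is a hypothetical API shape for exposition, not InfiniteGraph’s actual classes:

// Hypothetical API shape, not InfiniteGraph's actual classes: if an edge can
// exist before its endpoints are assigned, every consumer must null-check.
Edge e = graph.createEdge();          // edge exists, but has no endpoints yet
// ... later, possibly in code far away ...
Node source = e.getSource();          // may still be null at this point
if( source != null ) {
    process( source );                // extra branch that a connected-by-construction API avoids
}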

Hope somebody more neutral than me will perform an API comparison using this example some day.

Welcome Infinite Graph

It always looked like it was only a matter of time until the object database companies would try and become graph databases. Perhaps that is what they should have been all along. I’m speaking as somebody who tried several products almost 20 years ago and decided that they were just too much hassle to be worth it: graphs are a much better abstraction level than programming-level constructs for a database.

Today, Objectivity announced “Infinite Graph”, a:

strategic business unit [that] is tasked with bringing [an] enterprise-ready and distributed graph database product to market

(I took the liberty of eliminating the “marketing” superlatives from the quote; the entire press release has a very generous sprinkling of them.)

Actually, they only announced a beta program, which I signed up for. InfiniteGraph.com says:

X:\> BETA IS NOW OPEN

But then, on the screen behind, they say:

Over the next several days, we’ll be preparing our installer and documentation for distribution to the InfiniteGraph community. Stay tuned, and feel free to participate in the discussion on our beta blog!

Well, well, the difficulties of a launch. So I don’t know yet what they created. But it’s good to see another player legitimizing graph databases as a category. So, welcome Objectivity!

Graph Database Scaling: InfoGrid’s Contrarian View

There’s an ongoing debate about how to scale graph databases horizontally, and I got “strongly encouraged” to present the InfoGrid point of view.

In a nutshell: to make it work, scale like the web, not like a database.

Let me explain:

If you start out with a single relational database server, and you want to scale it horizontally to thousands of servers, you have a problem: it doesn’t really work. SQL makes too many assumptions that one simply cannot make for a massively distributed system. Which is why instead of running Oracle, Google runs BigTable (and Amazon runs Dynamo etc.), which were designed from the ground up for thousands of servers.

If you start out with a single hypertext system, and you want to scale it horizontally to millions of servers, you have a problem, too: it doesn’t really work either. Which is why we got the world-wide-web, which has been inherently designed for a massively decentralized architecture from the get-go, instead of a horizontally-scaled version of Xanadu.

So here we are, and people are trying to scale graph databases from one to many servers. They are finding it hard, and so far I have not seen a single credible idea, never mind an implementation, even at an early stage. Guess what? I think massively scaling a graph database that’s been designed with a one-server philosophy will not work either, just as it didn’t work for relational databases or hypertext. We need a different approach.

With InfoGrid, we take such a fundamentally different approach. To be clear, its current re-implementation in the code base is early and is still missing important parts. But the architecture is stable, the core protocol is stable, and it has been proven to work. Turning it back on across the network is a matter of “mere” programming.

To explain it, let’s lay out our high-level assumptions. They are very similar to the original assumptions of the web itself, but unlike traditional database assumptions:

Traditional database assumptions, web assumptions, and InfoGrid assumptions, side by side:

  • Traditional database: All relevant data is stored in a centrally administered database.
    Web: Let everybody operate their own server with their own content and create hyperlinks to each other.
    InfoGrid: Let everybody operate their own InfoGrid server on their own data and share sub-graphs with each other as needed (see also the note on the XPRISO protocol below).

  • Traditional database: Data is “imported” and “exported” from/to the database from time to time, but not very often: it’s an unusual act. Only “my” data in “my” database really matters.
    Web: Data stays where it is; a web server makes it accessible from/to the web. Through hyperlinks, that server becomes just one in millions that together form the world-wide-web.
    InfoGrid: Data stays where it is, and a graph database server makes it look like a (seamless) sub-graph of a larger graph, which conceivably one day could be the entire internet.

  • Traditional database: Mashing up data from different databases is a “web services” problem and uses totally different APIs and abstractions than the database’s.
    Web: Mashing up content from different servers is as simple as an <a…, <img… or <script… tag.
    InfoGrid: Application developers should not have to worry which server currently has the subgraph they are interested in; it becomes local through automatic replication, synchronization and garbage collection.

This approach is supported by a protocol we came up with called XPRISO, which stands for eXtensible Protocol for the Replication, Integration and Synchronization of distributed Objects. (Perhaps “graphs” would have been a better name, but then the acronym wouldn’t sound as good.) There is some more info about XPRISO on the InfoGrid wiki.

Simplified, XPRISO allows one server to ask another server for a copy of a node or edge in the graph, so it can create a replica. When granted, the replica comes with an event stream reflecting changes made to it and related objects over time, so the receiving server can keep the replica in sync. When the replica is not needed any more, the replication relationship can be canceled and the replica garbage-collected. There are also features to move update rights around, etc.
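Summarized as a (hypothetical) Java interface, with all type and method names made up for exposition rather than taken from the actual XPRISO implementation:

// Illustrative summary of the XPRISO replication lifecycle. Type and method
// names are invented for exposition; they are not the InfoGrid classes.
interface XprisoPeer {
    // ask another server for a replica of a node or edge
    Replica requestReplica( String objectIdentifier );

    // incoming stream of change events that keeps a granted replica in sync
    void receiveChangeEvent( ChangeEvent event );

    // hand the right to update the object over to another server
    void transferUpdateRights( Replica replica, XprisoPeer newOwner );

    // end the replication relationship; the replica becomes garbage-collectable
    void cancelLease( Replica replica );
}

interface Replica { }
interface ChangeEvent { }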

For a (conceptual) graphical illustration, look at InfoGrid Core Ideas on Slideshare, in particular slides 15 and 16.

Code looks like this:

// create a local node:
MeshObject myLocalObject = mb.getMeshObjectLifecycleManager().createMeshObject();
// get a replica of a remote node. The identifier is essentially a URL saying where to find the original:
MeshObject myRemoteObject = mb.accessLocally( remote_identifier );

// now we can do stuff, like relate the two nodes:
myLocalObject.relate( myRemoteObject );

// or set a property on the replica, which is kept consistent with the original via XPRISO:
myRemoteObject.setProperty( property_type, property_value );

// or traverse to all neighbors of the replica: they automatically get replicated, too
MeshObjectSet neighbors = myRemoteObject.traverseToNeighbors();

Simple enough? [There are several versions of this scenario in the automated tests in SVN. Feel free to download and run.]

What does this contrarian approach to graphdb scaling give us? The most important result is that we can assemble truly gigantic graphs, one server at a time: each server serves a subgraph, and XPRISO makes sure that the overlaps between the subgraphs are kept consistent in the face of updates by any of the servers. Almost as important, it enables a bottom-up bootstrapping of large graphs: everybody who feels like participating sets up their own server, populates it with the data they wish to contribute (which doesn’t even need to be “copied”, by virtue of the InfoGrid Probe Framework), and links to others.

Now, if you think that makes InfoGrid more like a web system than a database, we sympathize. There’s a reason we call InfoGrid an “internet graph database”. Or it might look more like a P2P system. But other NoSQL projects use peer-to-peer protocols, too, like Cassandra’s gossip protocol. And I predict: the more we distribute databases, the more decentralized they become, the more the “database” will look like the web itself. That’s particularly so for graph databases: after all, the web is the largest graph ever assembled.

We do not claim that this approach addresses all possible use cases for scaling graph databases. For example, if you need to visit every single node in a gazillion-node graph in sequence, this approach is just as good or bad as any other: you can’t afford the memory to get the entire graph onto your local server, and so things will be slow.

However, the InfoGrid approach elegantly addresses a very common use case: adding somebody else’s data to the graph. “Simply” have them set up their own graph server and create the relationships to objects in other graph servers: XPRISO will keep them maintained. Leaving data in the place where its owners already have it is a very powerful feature; without it, the web arguably could not have emerged. The approach also addresses control and privacy/security issues much better than a more database’y approach, because people can remain in control over their subgraph: just like on the web, where nobody needs to trust a single “web server operator” with their content; you simply set up your own web server and then link.

Want to help? ;-)

InfoGrid Interview Published

Pere Urbón published his e-mail interview with me about InfoGrid.

It is the fourth in his series on graph databases.

It’s rather apparent that while these projects are all GraphDBs, they differ substantially in what they are trying to accomplish, and why, and therefore how they do it. This is a good resource for developers investigating GraphDBs and trying to understand their alternatives.

Graph Database Feature Comparison Table

Pere Urbón Bayes has put together an excellent table comparing features of notable graph databases: post, table (PDF): Neo4j, HyperGraphDB, DEX, InfoGrid, Sones and VertexDB.

One can quibble, as one always can with tables like this, but it is the best comparison of graph databases so far that I’m aware of. Check it out.

Big Data Workshop April 23, Mountain View, CA

I’m planning to be at Big Data Workshop, the first unconference on NoSQL and Big Data. If past events moderated by Kaliya Hamlin are any guide, it will be a great opportunity for everybody:

  • to explore together how the Big Data market will be coming together,
  • to understand how the key technologies and projects work,
  • to learn what interfaces and interoperability standards are emerging and/or needed, and
  • to figure out how we can grow the overall market and make it easier for everybody to adopt these technologies for interesting new projects.

Arguably, Internet Identity Workshop (also moderated by Kaliya) was the enabler of the stunning adoption of OpenID, OAuth and related technologies over the past five years (at last count, more than 1 billion enabled accounts). I hope history repeats itself here.

P.S. Feel free to corner me on InfoGrid, graph databases or any other subject. That’s the whole point of an unconference.