
Property Graph Vs. Hypergraph

Property graphs and hypergraphs are the two main graph models used in graph databases. Popular products for each type are listed below (see also my survey of graph databases):

Property Graph: Neo4J, DEX, InfoGrid, InfiniteGraph, AllegroGraph

Hypergraph: HypergraphDB, Trinity.

1. Property Graph

In a property graph, each node and edge can be associated with multiple key/value pairs. For example, in a social network where each node is a person, a key/value pair like "age/25" can be attached to a node, and a pair like "relationship/friend" can be attached to an edge. Since you can attach arbitrary properties to nodes and edges, this model is expressive enough to describe real-world applications. Figure 1 shows an example of a property graph.


Figure 1

2. Hypergraph

In a hypergraph, an edge (a hyperedge) can connect more than two nodes. In other words, you can view an edge as a node and let this edge connect to other nodes or edges. Obviously, an RDF graph is not a hypergraph, because an RDF triple cannot appear as an entity inside another triple. In computational geometry, for example, we can say that the nodes within a region are connected by a hyperedge. Figure 2 shows a hypergraph with 4 hyperedges, distinguished by different colors.


Figure 2

3. Discussion: which one is better?

I do not want to say “it depends” here. I want to make a decision.

In graph theory, the hypergraph is more generic than the property graph. However, my first observation is that every hypergraph can be represented by a property graph; for example, we can add an extra key/value pair that tags all the nodes connected by the same hyperedge. Second, property graphs are closer to the graph structures found in real applications: an edge as a link between two nodes is easy to understand, whereas a hyperedge is less intuitive and less common in applications. That is why I suggest taking the property graph as the default graph model in a graph database.
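As a minimal sketch in plain Java (a toy model, not any particular product's API), a hyperedge can be emulated in a property graph by giving every member node the same value under a reserved property key:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy property-graph node: an id plus a key/value property map.
class PGNode {
	final long id;
	final Map<String, Object> properties = new HashMap<String, Object>();
	PGNode(long id) { this.id = id; }
}

public class HyperedgeAsProperty {
	public static void main(String[] args) {
		List<PGNode> nodes = new ArrayList<PGNode>();
		for (long i = 0; i < 5; i++) {
			nodes.add(new PGNode(i));
		}

		// Emulate the hyperedge {0, 1, 3} by tagging each member node
		// with the same value under the reserved "hyperedge" key.
		for (long id : new long[] { 0, 1, 3 }) {
			nodes.get((int) id).properties.put("hyperedge", "h1");
		}

		// Recovering the hyperedge members is then a simple property lookup.
		for (PGNode n : nodes) {
			if ("h1".equals(n.properties.get("hyperedge"))) {
				System.out.println("node " + n.id + " belongs to hyperedge h1");
			}
		}
	}
}

A more faithful encoding would reify the hyperedge as an extra node linked to each member node, but the shared-property trick above is enough for the argument here.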

Get Started with DEX Graph Database

DEX claims to be a high-performance and scalable graph database, which is very attractive for NoSQL database applications. See here for an impressive comparison between DEX and peer products. I also wrote a post surveying graph databases that compares DEX with others, and DEX comes out among the best.

However, the examples in the Java API documentation for DEX version 4.3 have not been updated: they are still the examples from the old version, which are no longer compatible. A migration manual helps when you want to port code to the new version, but not always. This post shows how to create DEX applications with the new version 4.3.

  1. Download DEX here. The free version supports up to 1 million nodes, which is a constraint compared with Neo4J.
  2. Usage instructions can be found here, but the example in Appendix A is for the old version. To deploy, just unpack the download and add /lib/dexjava.jar to your Java project. Really neat.
  3. The following Java code creates new node types and edge types, selects the nodes of a specific node type, selects nodes by a property value, and gets the neighbors of a node. It is straightforward and should not be hard to read. It works under the new versions 4.2 and 4.3!

(To understand the Java code, you need to know that DEX is based on the property graph model.)

import java.io.FileNotFoundException;
import java.util.Date;
import com.sparsity.dex.gdb.AttributeKind;
import com.sparsity.dex.gdb.Condition;
import com.sparsity.dex.gdb.DataType;
import com.sparsity.dex.gdb.Database;
import com.sparsity.dex.gdb.Dex;
import com.sparsity.dex.gdb.DexConfig;
import com.sparsity.dex.gdb.EdgesDirection;
import com.sparsity.dex.gdb.Graph;
import com.sparsity.dex.gdb.Objects;
import com.sparsity.dex.gdb.ObjectsIterator;
import com.sparsity.dex.gdb.Session;
import com.sparsity.dex.gdb.Value;

public class example {

	public static void main(String[] args)
			throws FileNotFoundException {
		Dex dex = new Dex(new DexConfig());
		Database gpool = dex.create("example.dex",
				"DEXEXAMPLE");
		Session sess = gpool.newSession();

		// node types
		sess.begin();
		Graph dbg = sess.getGraph();
		int person = dbg.newNodeType("PERSON");
		int name = dbg.newAttribute(person, "NAME",
				DataType.String, AttributeKind.Indexed);
		int age = dbg.newAttribute(person, "AGE",
				DataType.Integer, AttributeKind.Basic);
		long p1 = dbg.newNode(person);
		dbg.setAttribute(p1, name,
				new Value().setString("JOHN"));
		dbg.setAttribute(p1, age,
				new Value().setInteger(18));
		long p2 = dbg.newNode(person);
		dbg.setAttribute(p2, name,
				new Value().setString("KELLY"));
		long p3 = dbg.newNode(person);
		dbg.setAttribute(p3, name,
				new Value().setString("MARY"));
		sess.commit();

		// edge types
		sess.begin();
		int phones = dbg.newEdgeType("PHONES", true, true);
		int when = dbg.newAttribute(phones, "WHEN",
				DataType.Timestamp, AttributeKind.Basic);
		long e4 = dbg.newEdge(phones, p1, p3);
		dbg.setAttribute(e4, when,
				new Value().setTimestamp(new Date()));
		long e5 = dbg.newEdge(phones, p1, p3);
		dbg.setAttribute(e5, when,
				new Value().setTimestamp(new Date()));
		long e6 = dbg.newEdge(phones, p3, p2);
		dbg.setAttribute(e6, when,
				new Value().setTimestamp(new Date()));
		sess.commit();

		// Select all objects from a specific node type
		sess.begin();
		Objects persons = dbg.select(person);
		ObjectsIterator it = persons.iterator();
		while (it.hasNext()) {
			long p = it.next();
			Value v = new Value();
			dbg.getAttribute(p, name, v);
			System.out.println(v.getString());
		}
		it.close();
		persons.close();
		sess.commit();

		sess.begin();
		// get nodes from a specific property
		persons = dbg.select(name, Condition.Equal,
				new Value().setString("JOHN"));
		it = persons.iterator();
		while (it.hasNext()) {
			long p = it.next();
			Value v = new Value();
			dbg.getAttribute(p, name, v);
			System.out.println(v.getString());
		}
		// close the result set before reusing the variables
		it.close();
		persons.close();

		// get neighbors
		persons = dbg.explode(p1, phones,
				EdgesDirection.Outgoing);
		it = persons.iterator();
		it.close();
		persons.close();
		sess.commit();

		sess.close();
		gpool.close();
		dex.close();
	}
}

Setup Hadoop on Ubuntu 11.04 64-bit

The Hadoop documentation page provides clear instructions for setting up Hadoop on Linux. However, in this entry I want to make the same process simpler and shorter, tailored to Ubuntu 11.04 64-bit.

1. Install Sun JDK

Sun JDK is not available in the official Ubuntu Software Center repository. What a shame! Let's resort to an external PPA (Personal Package Archive). Launch a terminal and run the following commands:

sudo add-apt-repository ppa:ferramroberto/java
sudo apt-get update
sudo apt-get install sun-java6-bin
sudo apt-get install sun-java6-jdk

Add JAVA_HOME variable:

sudo gedit /etc/environment

Append a new line in the file:

export JAVA_HOME="/usr/lib/jvm/java-6-sun-1.6.0.26"

Test the success of installation in Terminal:

java -version

2. Check SSH Setting

ssh localhost

If it says “connection refused”, you’d better reinstall SSH:

sudo apt-get install openssh-server openssh-client

If you cannot ssh to localhost without a passphrase, execute the following commands:

ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

3. Setup Hadoop

Download a recent stable release and unpack it. Edit conf/hadoop-env.sh to define JAVA_HOME as "/usr/lib/jvm/java-6-sun-1.6.0.26":

# The java implementation to use. Required.
export JAVA_HOME=/usr/lib/jvm/java-6-sun-1.6.0.26

Pseudo-Distributed Operation:

conf/core-site.xml:

<configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://localhost:9000</value>
     </property>
</configuration>

conf/hdfs-site.xml:

<configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
</configuration>

conf/mapred-site.xml:

<configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>localhost:9001</value>
     </property>
</configuration>

Switch to hadoop root directory and format a new distributed file system:

bin/hadoop namenode -format

You'll get output like "Storage directory /tmp/hadoop-jasper/dfs/name has been successfully formatted." Remember this path; it is the NameNode's storage directory, and I will reuse it as a path prefix in HDFS below.

Start and stop hadoop daemons:

bin/start-all.sh 
bin/stop-all.sh

Web interfaces for the NameNode and the JobTracker are available once the daemons are running (by default at http://localhost:50070/ and http://localhost:50030/, respectively).

4. Deploy An Example Map-Reduce Job

Let's run the WordCount example job, which is already bundled with the Hadoop release. Put some text files in a local directory, e.g. "/home/jasper/mapreduce/wordcount/". Then copy these files from the local directory to an HDFS directory and list them:

bin/hadoop dfs -copyFromLocal /home/jasper/mapreduce/wordcount /tmp/hadoop-jasper/dfs/name/wordcount

bin/hadoop dfs -ls /tmp/hadoop-jasper/dfs/name/wordcount

Run the job:

bin/hadoop jar hadoop*examples*.jar wordcount /tmp/hadoop-jasper/dfs/name/wordcount /tmp/hadoop-jasper/dfs/name/wordcount-output

If the job output looks fine, copy the output file from HDFS to the local directory:

bin/hadoop dfs -getmerge /tmp/hadoop-jasper/dfs/name/wordcount-output /home/jasper/mapreduce/wordcount/

Now you can open the output file in your local directory to view the results.
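For reference, the WordCount job bundled with the release is essentially the following (a condensed sketch against the org.apache.hadoop.mapreduce API of the 0.20 branch; the example class shipped with your release may differ in details):

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

	// Mapper: emit (word, 1) for every token in the input.
	public static class TokenizerMapper
			extends Mapper<Object, Text, Text, IntWritable> {
		private final static IntWritable ONE = new IntWritable(1);
		private final Text word = new Text();

		public void map(Object key, Text value, Context context)
				throws IOException, InterruptedException {
			StringTokenizer itr = new StringTokenizer(value.toString());
			while (itr.hasMoreTokens()) {
				word.set(itr.nextToken());
				context.write(word, ONE);
			}
		}
	}

	// Reducer (also used as combiner): sum the counts for each word.
	public static class IntSumReducer
			extends Reducer<Text, IntWritable, Text, IntWritable> {
		private final IntWritable result = new IntWritable();

		public void reduce(Text key, Iterable<IntWritable> values, Context context)
				throws IOException, InterruptedException {
			int sum = 0;
			for (IntWritable val : values) {
				sum += val.get();
			}
			result.set(sum);
			context.write(key, result);
		}
	}

	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();
		Job job = new Job(conf, "word count");
		job.setJarByClass(WordCount.class);
		job.setMapperClass(TokenizerMapper.class);
		job.setCombinerClass(IntSumReducer.class);
		job.setReducerClass(IntSumReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(IntWritable.class);
		FileInputFormat.addInputPath(job, new Path(args[0]));
		FileOutputFormat.setOutputPath(job, new Path(args[1]));
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}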

A Survey on Graph Databases

Graph databases were also discussed in my previous entry about NoSQL databases. Two other valuable surveys of graph databases are a post on ReadWriteWeb and a page on DBPedias. While those take a top-down view, from the conceptual and framework sides respectively, here I mainly take a bottom-up view by looking at manipulations and functions. In addition, this entry covers more products than they do.

In graph theory, a simple graph is a set of nodes and edges. While this definition is fundamental, graph databases usually add types and attributes to both nodes and edges to make themselves more descriptive and practical. At a minimum, graph databases are expected to support fast traversal; this is the reason we do not simply use tabular databases like HBase or Cassandra to store all the edges (join operations are expensive).

In the previous entry I said graph databases are one of the four major categories of NoSQL databases, and seven products were listed in the graph store category: Neo4J, Infinite Graph, DEX, InfoGrid, HyperGraphDB, Trinity and AllegroGraph. This entry discusses each of them in detail, mainly from the perspective of how to use them as a Java programmer.

1. Neo4J (Neo Technology)

Neo4J may be the most popular graph database. From the name we know Neo4J is developed particularly for Java applications, but it also supports Python. Neo4J is an open source project available in a GPLv3 Community edition, with Advanced and Enterprise editions available under both the AGPLv3 and a commercial license.

The graph model in Neo4J is shown in Figure 1. In simple words,

  • Properties (key/value pairs) can be added to both nodes and edges;
  • Only edges can be associated with a type, e.g., "KNOWS";
  • Edges can be specified as directed or undirected.

Figure 1

Given the name of a node, locating that node in the graph requires the help of an index. Neo4J uses the following index mechanism: a super referenceNode is connected to all the nodes by a special edge type "REFERENCE". This actually allows you to create multiple indexes if you distinguish them by different edge types. The index structure is illustrated in Figure 2.


Figure 2

Neo4J also provides functions such as getting the neighbors of a specific node or all the shortest paths between two nodes. Notice that for all of these "traverse" functions, Neo4J asks you to specify the edge types along the paths, which is handy.

There is no need to install Neo4J as separate software. We can simply import the JAR file to build an embedded graph database, which is persisted on disk as a directory. The documentation of Neo4J looks complete. There is no limit on the maximum number of nodes in the free version.
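As a minimal sketch, assuming the Neo4J 1.x embedded API of that era (EmbeddedGraphDatabase, the reference node, and DynamicRelationshipType), creating a person node and hanging it off the reference node as a crude index looks roughly like this:

import org.neo4j.graphdb.DynamicRelationshipType;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.Transaction;
import org.neo4j.kernel.EmbeddedGraphDatabase;

public class Neo4jSketch {
	public static void main(String[] args) {
		// The embedded database lives in a plain directory on disk.
		GraphDatabaseService graphDb = new EmbeddedGraphDatabase("var/neo4j-example");
		Transaction tx = graphDb.beginTx();
		try {
			Node john = graphDb.createNode();
			john.setProperty("name", "John");
			john.setProperty("age", 25);

			// "Index" the node by linking it to the reference node
			// with a dedicated edge type, as described above.
			graphDb.getReferenceNode().createRelationshipTo(
					john, DynamicRelationshipType.withName("REFERENCE"));

			tx.success();
		} finally {
			tx.finish();
		}
		graphDb.shutdown();
	}
}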

Weakness:

  • Although we can manually add a property with key "type" on nodes to annotate the node type, native support for node types in the API would make the graph model more general. Problems also arise when a node has multiple types.
  • The index mechanism, in which the user adds index edges manually, seems strange and inconvenient. It would be better to follow what relational databases do: the user says "create an index on this group of nodes" and is done.

Here is another entry about how to get started with Neo4J in Java: https://jasperpeilee.wordpress.com/2011/11/22/neo4j-the-first-cup-of-tea/

2. Infinite Graph (Objectivity Inc.)

InfiniteGraph is a graph database from Objectivity, the company behind the object database of the same name. The free license supports up to 1 million nodes and edges. InfiniteGraph needs to be installed as a service, so it behaves like a traditional DBMS such as MySQL. InfiniteGraph borrows object-oriented concepts from Objectivity/DB, so each node and edge in InfiniteGraph is an object. Specifically,

  • All node classes will extend the base class BaseVertex;
  • All edge classes will extend the base class BaseEdge.

In the example page at http://wiki.infinitegraph.com/w/index.php?title=Tutorial:_Hello_Graph!, Person is a node class and Meeting is an edge class. This is the code for adding an edge between two nodes:

Person john = new Person("John", "Hello ");
helloGraphDB.addVertex(john);
Person dana = new Person("Dana", "Database!");
helloGraphDB.addVertex(dana);
Meeting meeting1 = new Meeting("NY", "Graph");
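// (In the full Hello Graph tutorial, the two vertices are then connected
// by this Meeting edge through the graph database's add-edge call; see the
// linked example for the exact method signature.)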


Figure 3

InfiniteGraph also provides a visualization tool to view the data. The edge generated by the code above is visualized in Figure 3. Compared with the graph model of Neo4J in Figure 1, InfiniteGraph supports nodes with different types/classes. Note that the property key/value pairs in Neo4J correspond to member variables of the node and edge classes in InfiniteGraph.

Weakness:

  • It is fine to install it as a service, but the configuration should be made simpler.
  • Since nodes and edges can be user-customized objects, I suspect performance will suffer for huge graphs while we enjoy the flexibility. Remember that NoSQL databases should always keep performance high to stay compelling.

Note: My experience getting started with InfiniteGraph on Windows 7 64-bit was not smooth. The configuration described at http://wiki.infinitegraph.com/w/index.php?title=InfiniteGraph_Installation seems incomplete, and my Java programs kept throwing a "….dll: Can't find dependent libraries" error. I then checked the dependencies of that DLL with Dependency Walker; the error "Modules with different CPU types were found" suggests that InfiniteGraph probably does not support 64-bit Windows. Finally, I switched to 64-bit Ubuntu, only to find that InfiniteGraph provides builds only for Redhat/SUSE Linux.

3. DEX (Sparsity Technologies)

DEX is said to be a high-performance and scalable graph database, which is attractive for NoSQL applications. The personal evaluation version supports up to 1 million nodes. The current version is 4.2, which supports both Java and .NET programming. Note that the old version 4.1 only supports Java and is not compatible with the new version. As of today, Nov. 24, 2011, the documentation for the new version 4.2 is not yet complete, and it is very hard to find a starter example for the new version on the web. The migration file here is very helpful for writing programs based on old-version examples.


Figure 4

Figure 4 shows the architecture of DEX, which explains why DEX can achieve high performance: the native C++ DEX Core is the key. On its events page, the team showcases some exciting applications built on DEX.

DEX is also portable; you only need a JAR file to run it. Unlike Neo4J, a persisted DEX database is a single file. The DEX Java API is easy to use, and the Graph class provides nearly all the operations you need. To make DEX stronger, the following weak points should be addressed:

  • Raise the limit of the personal version to 1 billion nodes;
  • Provide more complete documentation with good examples;
  • Port the graph algorithms from the old version to the new version in the near future.

Here is a new entry about how to deploy your graph with DEX.

4. InfoGrid (Netmesh Inc.)

InfoGrid calls itself a "web graph database", so some of its functions are oriented toward web applications. Figure 5 shows the whole framework of InfoGrid, in which the graph DB does not seem to be the dominant component. InfoGrid has some applications in the OpenID project, which is supported by the same company. I suspect InfoGrid is only used internally at Netmesh, because of the following weaknesses:

  • The newest Java API, here, is incomplete and sometimes confusing;
  • The tutorial here is not written in a clear and formal way.


Figure 5

The first-step example at http://infogrid.org/wiki/Examples/FirstStep is not hard to read overall, but the enums such as TAGLIBRARY, TAG, TAG_LABEL and TAGLIBRARY_COLLECTS_TAG are genuinely confusing. These enums seem embedded in the model, and why is that? It looks like this example was taken from an internal Netmesh project serving some specific application, but who knows.

5. HyperGraphDB (Kobrix Inc.)

HyperGraphDB is an open source data storage mechanism implemented on top of the BerkeleyDB database. The graph model of HyperGraphDB is known as directed hypergraphs. In mathematics, a hypergraph allows an edge to connect more than two nodes. HyperGraphDB extends this further by allowing edges to point to other edges, so HyperGraphDB offers more generality than other graph databases. Figure 6 shows a hypergraph example with four edges, distinguished by different colors.


Figure 6

The tutorial of HyperGraphDB looks complete. Each node in HyperGraphDB is called an atom, and operations like indexing and traversals are supported.
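A tiny sketch of the API, assuming the classes shown in the HyperGraphDB tutorial (HGEnvironment, HyperGraph, HGPlainLink and the hg query helpers; written from memory, so double-check the names against the current documentation):

import java.util.List;

import org.hypergraphdb.HGEnvironment;
import org.hypergraphdb.HGHandle;
import org.hypergraphdb.HGPlainLink;
import org.hypergraphdb.HGQuery.hg;
import org.hypergraphdb.HyperGraph;

public class HyperGraphDBSketch {
	public static void main(String[] args) {
		// Open (or create) a database in the given directory.
		HyperGraph graph = HGEnvironment.get("/tmp/hgdb-example");
		try {
			// Every stored object is an atom identified by a handle.
			HGHandle alice = graph.add("Alice");
			HGHandle bob = graph.add("Bob");
			HGHandle carol = graph.add("Carol");

			// A link is itself an atom and may point to any number of atoms,
			// which is what makes the model a hypergraph.
			graph.add(new HGPlainLink(alice, bob, carol));

			// Query all String atoms back.
			List<Object> names = hg.getAll(graph, hg.type(String.class));
			for (Object n : names) {
				System.out.println(n);
			}
		} finally {
			graph.close();
		}
	}
}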

Note: Although the tutorial is written nicely, the same "….dll: Can't find dependent libraries" error occurs on Windows 7. After I switched to 64-bit Ubuntu, the sample program threw an "ELFCLASS32 (possible cause: architecture word width mismatch)" exception. That is probably because the bundled native library only supports 32-bit Linux.

6. Trinity (Microsoft)

Microsoft joined the competition only recently, and the first release V0.1 of Trinity only allows intranet access. According to the introduction, Trinity is a memory-based graph store with rich database features, including highly concurrent online query processing, ACI transaction support, etc. Trinity only provides C# APIs for graph processing.

Since the Trinity package is not available outside Microsoft, we cannot know many details about it. At least the key features of Trinity are listed below:

  • Uses a hypergraph as its data model;
  • Can be deployed in distributed mode.

The system architecture can be found here. Overall, it is hard to find any distinct advantage when comparing Trinity with the other, open source graph databases. However, since Trinity is still at the prototype stage, it is worth watching. In addition, Probase is an ongoing project that looks like an ontology/taxonomy knowledge base built on top of Trinity. Here is a link to a nice article about Probase and Trinity.

7. AllegroGraph (Franz Inc.)

AllegroGraph is a persistent graph database that purportedly scales to "billions of RDF triples while maintaining high performance". Although an RDF triple can be viewed as an edge, AllegroGraph is intended for building RDF-centric semantic web applications and supports SPARQL, RDFS++, and Prolog reasoning from client applications, including Java programs. The free version of AllegroGraph RDFStore supports up to 50 million triples.


Figure 7

Figure 7 shows an example of an RDF graph. AllegroGraph appends an additional slot called "named graph" to each triple, turning them into quads (still called triples for convenience). Here are some assertions from Figure 7:

subject  predicate   object   graph   
robbie   petOf       jans     jans's home page  
petOf    inverseOf   hasPet   english grammar  
Dog      subClassOf  Mammal   science 

To load many triples into an RDF graph, AllegroGraph has facilities to bulk load from both N-Triples and RDF/XML files. Overall, AllegroGraph is ideal for RDF storage, but not for general graphs. The documentation looks great: find the introduction here, and for the Java API tutorial, the Sesame version here and the Jena version here.

Overall comparison:

The overall comparison is shown in the table below. High performance and distributed deployment are supposed to be supported by all products. "< 1M" means the corresponding graph database supports up to 1 million nodes for free. RDF graphs can be viewed as a special kind of property graph. Since the hypergraph is the most generic form of graph, a graph database supporting hypergraphs should in theory also support property graphs.

 

                  Neo4J   InfiniteGraph   DEX     InfoGrid   HyperGraphDB   Trinity   AllegroGraph
Documentation?    Good    Good            Fair    Bad        Good           Bad       Good
Portable?         Y       N               Y       Y          Y              N         N
Java?             Y       Y               Y       Y          Y              N         Y
Free?             Y       < 1M            < 1M    Y          Y              N         < 50M
Property Graph?   Y       Y               Y       Y          Y              Y         RDF
Hypergraph?       N       N               N       N          Y              Y         N

Tentative Ranking:

Which one is the best? The answer is usually "it depends". Although it is always controversial to rank products with different characteristics, sometimes we need to make a hard decision. Here are some general rules based on my personal understanding:

  • If you need to store RDF triples, go with AllegroGraph;
  • For property graphs, treat Neo4J and DEX as first-class citizens;
  • For hypergraphs, go with HyperGraphDB.

Yet Another Guide to NoSQL Databases

Key Characteristics of NoSQL Databases:

  • Non-relational;
  • Distributed and highly scalable to huge data volumes;
  • Able to handle high traffic and streaming data;
  • No ACID guarantees.

The NoSQL Wikipedia page at http://en.wikipedia.org/wiki/NoSQL covers the history and basic concepts of NoSQL databases; please skim it first, because this article will not repeat the basics. Beyond Wikipedia, several pages on the web offer valuable overviews, most notably the list of NoSQL databases at nosql-database.org [1].

According to [1], there are currently 122+ known NoSQL databases. The development of NoSQL databases is usually motivated by practical needs in industry, so most NoSQL products start as internal projects inside companies and are then open-sourced to the rest of the world. Thus there is no common standard among these products, and nearly all NoSQL databases are task-specific: they are built to fulfill a specific aim and are not designed to fit every need in the non-relational world. A careful classification of current NoSQL databases therefore becomes valuable for generating an overview, which is the aim of this article.

All of these sources propose some categories for NoSQL databases. While they share some common ground, the problem is that they do not agree with one another. To obtain a better classification based on them, I follow several rules:

  1. Each category should be as disjoint from the others as possible;
  2. The number of products in each category should be roughly balanced;
  3. Each category should be of considerable importance in the NoSQL world.

The following table shows the result of my classification. The introductions are tailored from [1]. In the broad sense, NoSQL has seven categories; in the narrow sense, NoSQL generally means only the first four:

  1. Tabular Store;
  2. Document Store;
  3. Key-value Store;
  4. Graph Store.

That is because object databases, XML databases and multi-value databases belong to the old world, and the key characteristics listed at the beginning are not reflected very obviously in them.

Tabular Store

Hadoop/HBase

API: Java; Query Method: MapReduce Java/any exec; Replication: HDFS Replication; Written in: Java

Cassandra

API: many; Query Method: MapReduce; Written in: Java; Consistency: eventually consistent; initiated by Facebook

Hypertable

API: Apache Thrift (Java, PHP, Perl, Python, Ruby, etc.); Query Method: HQL, native Thrift API; Replication: HDFS Replication; Concurrency: MVCC; Consistency Model: Fully consistent

Cloudata

A Distributed Large scale Structured Data Storage, and an open source project implementing Google’s Bigtable.

Cloudera

Professional Software & Services for solving business problems based on Hadoop

SciDB

A Data Management and Analytics Software System, optimized for data management of big data and for big analytics

HPCC

HPCC (High Performance Computing Cluster) is a massive parallel-processing computing platform that solves Big Data problems

Stratosphere

Massive parallel & flexible execution, Map/Reduce generalization and extension, consists of the PACT Programming Model and the Nephele Execution Engine.

Document Store

MongoDB

API: BSON; Protocol: lots of langs; Query Method: dynamic object-based language & MapReduce; Replication: Master Slave & Auto-Sharding; Written in: C++; Concurrency: Update in Place.

CouchDB

API: JSON; Protocol: REST; Query Method: MapReduce of JavaScript Funcs; Replication: Master Master; Written in: Erlang; Concurrency: MVCC

Terrastore

API: Java & http; Protocol: http; Language: Java; Querying: Range queries; Predicates; Replication: Partitioned with consistent hashing; Consistency: Per-record strict consistency; Misc: Based on Terracotta

ThruDB

Uses Apache Thrift to integrate multiple backend databases such as BerkeleyDB and MySQL

OrientDB

Languages: Java; Schema: Has features of an Object-Database, DocumentDB, GraphDB or Key-Value DB; Written in: Java; Query Method: Native and SQL; Misc: really fast, lightweight, ACID with recovery

Key-value Store

Amazon Dynamo

Misc: not open source, eventually consistent

Voldemort

Open-source implementation of Amazon's Dynamo key-value store.

Dynomite

Open-source implementation of Amazon's Dynamo key-value store, written in Erlang.

KAI

Open source Amazon Dynamo implementation

Azure Table Storage

Collections of free form entities (row key, partition key, timestamp). Blob and Queue Storage available. Accessible via REST or ATOM.

MEMBASE

API: Memcached API, most languages; Protocol: Memcached REST interface for cluster; Written in: C/C++, Erlang (clustering); Replication: Peer to Peer; fully consistent

Riak

API: JSON; Protocol: REST; Query Method: MapReduce term matching; Scaling: Multiple Masters; Written in: Erlang, Concurrency: eventually consistent

Redis

API: many languages; Written in: C; Concurrency: in memory, saving to disk asynchronously after a defined time; Replication: Master / Slave

LevelDB

Fast & Batch updates, from Google

Chordless

API: Java; Query Method: Map/Reduce inside value objects; Scaling: every node is master for its slice of namespace; Written in: Java; Concurrency: serializable transaction isolation

Graph Store

Neo4J

API: lots of langs; Protocol: Java embedded / REST; Query Method: SparQL, nativeJavaAPI, JRuby; Replication: typical MySQL style master/slave; Written in: Java; Concurrency: non-block reads, writes locks involved nodes/relationships until commit; Misc: ACID possible

Infinite Graph

API: Java; Protocol: Direct Language Binding; Query Method: Graph Navigation API; Written in: Java (Core C++); Data Model: Labeled Directed Multi Graph; Concurrency: Update locking on subgraphs

DEX

API: Java;  Protocol: Java Embedded; Query Method: Java API; Written in: Java/C++; Data Model: Labeled Directed Attributed Multigraph; Concurrency: yes

InfoGrid

API: Java; http/REST; Protocol: API + XPRISO, OpenID, RSS, Atom, JSON, Java embedded; Query Method: Web user interface with html, RSS, Atom, JSON output, Java native; Replication: peer-to-peer; Written in: Java; Concurrency: concurrent reads; write lock within one MeshBase

HyperGraphDB

API: Java; Written in: Java;  Query Method: Java or P2P; Replication: P2P; Concurrency: STM; Misc: especially for AI and Semantic Web.

Trinity

API: C#; Protocol: C# Language Binding; Query Method: Graph Navigation API; Replication: P2P with Master Node; Written in: C#; Concurrency: Yes Misc: distributed in-memory storage; parallel graph computation

AllegroGraph

API: Java, Python, Ruby, C#, Perl, Lisp; Protocol: REST; Query Method: SPARQL and Prolog; Libraries: Social Networking Analytics & GeoSpatial; Written in: Common Lisp

Object Database

db4o

API: Java, C#; Query Method: QBE (by Example), Native Queries, LINQ (.NET); Replication: db4o2db4o; Written in: Java; Concurrency: ACID serialized; Misc: embedded lib

Versant

Languages/Protocol: Java, C#, C++, Python; Schema: language class model; Replication: synchronous fault tolerant and peer to peer asynchronous. Concurrency:  optimistic and object based locks. Scaling: can add physical nodes on fly for scale out/in and migrate objects between nodes without impact to application code. Misc: MapReduce via parallel SQL like query across logical database groupings

Objectivity

Languages: Java, C#, C++, Python, Smalltalk, SQL access through ODBC; Schema: native language class model; direct support for references; interoperable across all language bindings. Modes: always consistent (ACID);  Concurrency: locks at cluster of objects (container) level. Scaling: unique distributed architecture, dynamic addition/removal of clients & servers, cloud environment ready. Replication: synchronous with quorum fault tolerant across peer to peer partitions

Starcounter

API: C# (.NET languages); Schema: Native language class model; Query method: SQL; Concurrency: Fully ACID compliant; Storage: In-memory with transactions secured on disk; Reliability: Full checkpoint recovery

Perst

API: Java, Java ME, C#, Mono. Query method: OO via Perst collections, QBE, Native Queries, LINQ, native full-text search, JSQL. Replication: Async+sync (master-slave); Written in: Java, C#. Caching: Object cache (LRU; weak; strong), page pool, in-memory database. Index types: Many tree models & Time Series. Misc.: Embedded lib., encryption, automatic recovery, native full text search, on-line or off-line backup

ZODB

API: Python; Protocol: Internal ZEO; Query Method: Direct object access; Written in: Python, C; Concurrency: MVCC; License: Zope Public License; Misc: Used in production since 1998

XML Database

EMC xDB

API: Java, XQuery; Protocol: WebDAV, web services; Query method: XQuery, XPath, XPointer; Replication: lazy primary copy replication (master/replicas); Written in: Java; Concurrency: concurrent reads, writes with lock; Misc: Fully transactional persistent DOM, versioning. multiple index types, metadata and non-XML data support, unlimited horizontal scaling

eXist

API: XQuery, XML:DB API, DOM, SAX; Protocols: HTTP/REST, WebDAV, SOAP, XML-RPC, Atom; Query Method: XQuery; Written in: Java (open source), Concurrency: Concurrent reads, lock on write; Misc: Entire web applications can be written in XQuery, using XSLT, XHTML, CSS, and Javascript (for AJAX)

Sedna

ACID transactions, security, indices, hot backup. Flexible XML processing facilities include W3C XQuery implementation, tight integration of XQuery with full-text search facilities and a node-level update language.

BaseX

A fast, powerful, lightweight XML database system and XPath/XQuery processor with highly conformant support for the latest W3C Update and Full Text Recommendations. Client/Server architecture, ACID transaction support, user management, logging, open source, BSD license, written in Java.

Berkeley DB XML

API: Many languages; Written in: C++; Query Method: XQuery; Replication: Master / Slave; Concurrency: MVCC

Multivalue Databases

U2

Data Structure: MultiValued, Supports nested entities, Virtual Metadata; API: BASIC, InterCall, Socket, .NET and Java API’s; Scalability: automatic table space allocation; Protocol: Client Server, SOA,  Terminal Line, X-OFF/X-ON; Written in: C; Query Method: Native mvQuery and SQL; Replication: yes, Hot standby; Concurrency: Record and File Locking

OpenInsight

API:  Basic, .Net, COM, Socket, ODBC; Protocol: TCP/IP, Named Pipes, Telnet; Query Method: RList, SQL & XPath; Written in: Native 4GL, C, C++, Basic+, .Net, Java;  Replication: Hot Standby; Concurrency: table &/or row locking, optionally transaction based commit & rollback; Data structure: Relational &/or MultiValue, supports nested entities; Scalability: rows and tables size dynamically

OpenQM

Supports nested data. Fully automated table space allocation. Concurrency control via task locks, file locks & shareable/exclusive record locks. OO programming integrated into QMBasic. QMClient connectivity from Visual Basic, PowerBasic, Delphi, PureBasic, ASP, PHP, C and more. Extended multivalue query language.

References

[1] List of NoSQL Databases. http://nosql-database.org/


Hive – A SQL-like Wrapper over Hadoop

This is a summary and review of Hive [1].

1. Motivation

For companies like Facebook, the amount of data being collected and analyzed for business intelligence (BI) is growing rapidly, and traditional data warehouse solutions become prohibitively expensive at this scale. To solve this problem, Hadoop [2], a popular open-source map-reduce [4] implementation, is widely used to store and process extremely large data sets on commodity hardware. However, the war never ends: the map-reduce programming model is very low-level and requires developers to write custom programs that are hard to maintain and reuse. In other words, achieving business intelligence directly on top of Hadoop is hard. This background sets the stage for a new solution: Hive [1][3].

2. What is Hive

Hive is an open-source data warehousing solution built on top of Hadoop. On the front end, Hive supports a SQL-like query language called HiveQL; on the back end, HiveQL is compiled into map-reduce jobs and executed on Hadoop. In addition, HiveQL allows users to plug custom map-reduce scripts into queries. Like SQL, HiveQL supports tables containing primitive types (numbers, booleans, strings, etc.), collections (arrays, maps, etc.) and nested compositions.
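For Java programmers, one convenient way to issue HiveQL is through the JDBC driver that ships with Hive. The sketch below assumes the driver class org.apache.hadoop.hive.jdbc.HiveDriver and the jdbc:hive:// URL documented for the HiveServer of that era, and it queries a placeholder table; check both against your release:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveJdbcSketch {
	public static void main(String[] args) throws Exception {
		// Register the Hive JDBC driver (a HiveServer instance must be running).
		Class.forName("org.apache.hadoop.hive.jdbc.HiveDriver");
		Connection con = DriverManager.getConnection(
				"jdbc:hive://localhost:10000/default", "", "");
		Statement stmt = con.createStatement();

		// A simple HiveQL query; "status_updates" is a placeholder table name.
		ResultSet res = stmt.executeQuery(
				"SELECT userid, COUNT(1) FROM status_updates GROUP BY userid");
		while (res.next()) {
			System.out.println(res.getString(1) + "\t" + res.getString(2));
		}

		res.close();
		stmt.close();
		con.close();
	}
}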

3. Architecture

Here is the system architecture of Hive. As we can see, Hive is built on top of Hadoop and includes a Metastore that holds schemas and statistics, which are useful for data exploration and query optimization.

[Figure: Hive system architecture]

4. HiveQL

Basically, HiveQL comprises a subset of SQL plus some useful extensions. While HiveQL has great advantages for manipulating huge data sets, some limitations emerge for various reasons:

  • No inequality operators in joins. Join predicates only support equality, so say goodbye to '<' and '>'.
  • No "insert into". You cannot insert into an existing table or data partition. Only "insert overwrite" is supported, and an insert always overwrites the existing data in the whole table or partition, so be careful here!
  • No "update" or "delete". As the paper argues, the marginal gain of "update" and "delete" would be offset by the new complexity of dealing with reader/writer concurrency, and I agree on this point.

Here is an example of HiveQL query:

FROM (
  SELECT a.status, b.school, b.gender
  FROM status_updates a JOIN profiles b
    ON (a.userid = b.userid AND a.ds = '2009-03-20')
) subq1

INSERT OVERWRITE TABLE gender_summary PARTITION (ds = '2009-03-20')
  SELECT subq1.gender, COUNT(1) GROUP BY subq1.gender

INSERT OVERWRITE TABLE school_summary PARTITION (ds = '2009-03-20')
  SELECT subq1.school, COUNT(1) GROUP BY subq1.school

This query has a single join followed by two different aggregations. By writing the query as a multi-table insert, we make sure that the join is performed only once. The query plan, consisting of 3 map-reduce jobs, is shown in the following figure:

[Figure: query plan with 3 map-reduce jobs]

5. Comments

Hive provides a way to perform business intelligence over huge data sets on top of the mature Hadoop map-reduce platform. The SQL-like HiveQL flattens the learning curve compared with writing low-level map-reduce programs. As for the constraints, I can list the following:

  • While Hive brings convenience through a high-level SQL-like language, this hurts generality and expressive power. Imagine a task T that can be written as map-reduce programs but may be hard or impossible to express in HiveQL.
  • I agree with the removal of "delete" and "update", since they would degrade performance; there should be other convenient ways to update and delete, so HiveQL can do without them. But I am less comfortable with the lack of inequality operators, which are useful in many BI analyses.
  • Hive is not the end of BI solutions on Hadoop. Hive is definitely a huge step toward pushing map-reduce platforms into BI, but many advanced BI techniques, such as clustering, classification and prediction, still have a long way to go on huge data.

References

[1] Ashish Thusoo, Joydeep Sen Sarma, Namit Jain, Zheng Shao, Prasad Chakka, Ning Zhang, Suresh Anthony, Hao Liu, Raghotham Murthy: Hive – a petabyte scale data warehouse using Hadoop. ICDE 2010:996-1005

[2] Apache Hadoop. Available at http://wiki.apache.org/hadoop

[3] Hive wiki at http://www.apache.org/hadoop/hive

[4] Hadoop Map-Reduce Tutorial at http://hadoop.apache.org/common/docs/current/mapred_tutorial.html