Friday, June 27, 2014

Simple Comparisons on some In-Memory Data Grids

| Feature | Description | GemFire 7 | Coherence 3.7 | Infinispan 5 | GigaSpaces 7 | Notes |
|---|---|---|---|---|---|---|
| 1. topologies | peer-to-peer; client-server | High | High | High | High | |
| 2. cross-site / WAN replication | datacenter-to-datacenter; region-to-region | High | Low | Med | High | Coherence support via the incubator "Push Replication" project; Infinispan basic support in v5.2 |
| 3. read-mostly scalability | via replication | High | High | High | High | |
| 4. write-mostly scalability | via partitioning and replication between primary and backups | High | High | Med | High | |
| 5. high availability / HA | via replication and persistence to disk (DB) | High | High | High | High | |
| 6. asynchronous replication | useful for slow-changing data and WAN replication | High | Med | High | High | Coherence support via the incubator "Push Replication" project |
| 7. partition rehashing | consistent hashing to reduce data relocation | High | High | High | High | all seem to use consistent-hashing-like algorithms |
| 8. dynamic clustering | adds or loses nodes | High | High | High | Med | |
| 9. updating among partition backups | master-backup: less deadlock-prone and less IO; master-master / update-anywhere: more deadlock-prone and more IO | High | High | Low | High | Infinispan only has master-master, which incurs deadlocks on slow networks and large caches |
| 10. cache loader and writer | two application interfaces: the loader loads data from the DB and the writer saves cache data back into it. With a cache loader and writer, applications only need to interact with the cache! | High | High | Med | High | Infinispan writers only support JPA, not JDBC or Hibernate! |
| 11. read-through and prefetch | populates the cache from the DB in batch mode | Med | High | Med | High | no prefetch/batch support from GemFire and Infinispan |
| 12. write-through and write-behind | write-through persists cache entries back to wherever they were loaded from; write-behind persists data asynchronously | High | High | Med | High | |
| 13. event notification | the cache also works as a messaging and parallel-processing bus like JMS/MDB | Med | Med | Med | High | |
| 14. continuous querying | register content-based interests with the cache and receive updates | High | High | Low | High | |
| 15. cache querying | SQL-like queries: not only searches based on key matching | High | High | Low | High | |
| 16. locking and tx | JTA (global tx) and ACID properties | High | High | Med | High | |
| 17. off-heap | puts cache data off the Java heap so that GC pause time is reduced | Low | High | Low | Low | |
| 18. key affinity / colocation | colocates related cache objects on the same partition to reduce IO | High | High | High | High | |
| 19. customized | puts cache entries on a specific cluster node, bypassing the cache hashing algorithm | High | Low | Med | Low | |
| 20. synchronous requests like RFQ | synchronous requests should go through the same partition routing as asynchronous messages | Low | Low | Low | High | |
| 21. API | Map, RESTful, and other appropriate interfaces | Med | Med | Med | High | |
| 22. language bindings | Java, C++, etc. | Med | Med | Med | High | |
| 23. map-reduce / aggregation | submits aggregation tasks to run across multiple or all cluster nodes | High | High | High | High | |
| 24. monitoring | monitors the health of clusters | High | Med | Med | High | |
| 25. J2EE (JMS, Web, …) | how much J2EE to support? | Low | Low | Low | High | |
| 26. migration effort | how hard it is to migrate to a different cache product in a J2EE app server | High | High | High | Low | GigaSpaces is an app server |
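Row 7's "consistent hashing" can be illustrated with a minimal hash-ring sketch. This is a generic textbook version, not any of these products' actual algorithms: keys map to the first node clockwise on the ring, so adding a node relocates only the keys falling into that node's arcs. The class name and virtual-node scheme are my own for illustration.

```java
import java.util.*;

// Minimal consistent-hash ring sketch (illustrative only, not a product's
// real implementation). Each node is placed on the ring many times
// ("virtual nodes") so the key space is spread evenly; a key belongs to
// the first node at or after its hash position, wrapping around.
public class ConsistentHashRing {
    private final NavigableMap<Integer, String> ring = new TreeMap<>();
    private final int virtualNodes;

    public ConsistentHashRing(int virtualNodes) {
        this.virtualNodes = virtualNodes;
    }

    private int hash(String s) {
        return s.hashCode() & 0x7fffffff; // force non-negative
    }

    public void addNode(String node) {
        for (int i = 0; i < virtualNodes; i++)
            ring.put(hash(node + "#" + i), node);
    }

    public void removeNode(String node) {
        for (int i = 0; i < virtualNodes; i++)
            ring.remove(hash(node + "#" + i));
    }

    public String nodeFor(String key) {
        Map.Entry<Integer, String> e = ring.ceilingEntry(hash(key));
        return (e != null ? e : ring.firstEntry()).getValue(); // wrap around
    }

    public static void main(String[] args) {
        ConsistentHashRing r = new ConsistentHashRing(100);
        r.addNode("node-A"); r.addNode("node-B"); r.addNode("node-C");
        Map<String, String> before = new HashMap<>();
        for (int k = 0; k < 1000; k++) before.put("key" + k, r.nodeFor("key" + k));
        r.addNode("node-D"); // grow the cluster
        int moved = 0;
        for (int k = 0; k < 1000; k++)
            if (!r.nodeFor("key" + k).equals(before.get("key" + k))) moved++;
        // only a minority of keys relocate, instead of nearly all of them
        // as with naive hash(key) % nodeCount partitioning
        System.out.println("keys moved: " + moved + " of 1000");
    }
}
```

The point of the "partition rehashing" row is exactly the `moved` count: with a modulo scheme, growing from three to four nodes would reshuffle roughly three quarters of the data; with a ring, only the arcs claimed by the new node move.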
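Rows 10–12 (cache loader/writer, read-through, write-through) share one idea: the application talks only to the cache, and the grid calls the loader on a miss and the writer on an update. A minimal sketch of that pattern, using a hypothetical API of my own rather than GemFire's, Coherence's, or Infinispan's real loader/writer interfaces:

```java
import java.util.*;
import java.util.function.*;

// Sketch of a read-through / write-through cache front (hypothetical API).
// The loader stands in for a DB SELECT by key; the writer stands in for a
// DB UPSERT. A write-behind variant would queue the writer calls and flush
// them asynchronously instead of calling the writer inline.
public class ReadWriteThroughCache<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Function<K, V> loader;   // loads a missing entry from the store
    private final BiConsumer<K, V> writer; // persists an updated entry to the store

    public ReadWriteThroughCache(Function<K, V> loader, BiConsumer<K, V> writer) {
        this.loader = loader;
        this.writer = writer;
    }

    // read-through: a cache miss is loaded from the backing store and cached
    public V get(K key) {
        return cache.computeIfAbsent(key, loader);
    }

    // write-through: the backing store is updated synchronously with the cache
    public void put(K key, V value) {
        writer.accept(key, value);
        cache.put(key, value);
    }
}
```

Usage: with a plain `Map` standing in for the DB, `new ReadWriteThroughCache<>(db::get, db::put)` gives the application a single cache interface, which is the "applications only need to interact with the cache" claim in row 10.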
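Row 14's "continuous querying" means registering a content-based predicate and being called back whenever a later update matches it, which is also how row 13's cache-as-event-bus works. A toy sketch with an invented API (real products register the query with the cluster and push matches over the network):

```java
import java.util.*;
import java.util.function.*;

// Sketch of continuous querying (hypothetical API): listeners register a
// predicate over values; every put that matches a predicate triggers that
// listener's callback, so the cache doubles as a messaging bus.
public class ContinuousQueryCache<K, V> {
    private final Map<K, V> data = new HashMap<>();
    private final List<Map.Entry<Predicate<V>, BiConsumer<K, V>>> listeners =
            new ArrayList<>();

    public void registerQuery(Predicate<V> interest, BiConsumer<K, V> onMatch) {
        listeners.add(Map.entry(interest, onMatch));
    }

    public void put(K key, V value) {
        data.put(key, value);
        for (Map.Entry<Predicate<V>, BiConsumer<K, V>> l : listeners)
            if (l.getKey().test(value))
                l.getValue().accept(key, value); // notify interested listener
    }

    public V get(K key) {
        return data.get(key);
    }
}
```

A trading example in the spirit of row 20: register `order -> order.qty > 1000` once, then receive every large order as it is put into the grid, instead of polling with repeated queries.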
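Row 18's key affinity can be sketched by routing on a shared affinity key instead of each entry's own key, so related entries land on the same partition and joins or aggregations over them need no cross-node IO. The router class and the "prefix before `/` is the affinity key" convention are assumptions of this sketch, not any product's routing rule:

```java
// Sketch of key affinity / colocation (hypothetical helper). All entries
// that share an affinity key hash to the same partition; here the affinity
// key is taken to be the part of the entry key before the first '/'.
public class AffinityRouter {
    private final int partitions;

    public AffinityRouter(int partitions) {
        this.partitions = partitions;
    }

    // deterministic partition for a given affinity key
    public int partitionFor(String affinityKey) {
        return (affinityKey.hashCode() & 0x7fffffff) % partitions;
    }

    // entry keys like "order-42/fill-1" are routed by their "order-42" prefix,
    // so an order and all of its fills are colocated on one partition
    public int partitionForKey(String key) {
        int slash = key.indexOf('/');
        String affinity = slash >= 0 ? key.substring(0, slash) : key;
        return partitionFor(affinity);
    }
}
```

This is also why row 19 ("customized" placement) exists as a separate feature: affinity still goes through the hash function, whereas customized placement bypasses hashing entirely to pin data on a chosen node.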
