directmemory-dev mailing list archives

From Roman Levenstein <>
Subject FYI: BigMemory Go announcement from Terracotta
Date Sat, 29 Sep 2012 20:24:20 GMT

Terracotta announced that they are offering a feature-limited version of
their BigMemory product for free under the name "BigMemory Go".
The main limitations are that you can use at most 32GB of off-heap
memory per JVM instance, and that there is no support for replication
or clustering of caches between JVMs/nodes.

It looks like off-heap caching solutions are gaining more attention
these days. One reason for Terracotta's move could be a wish to
counteract offerings from competitors and open-source alternatives.

Since it is available now, it would be very interesting to see some
benchmarks comparing off-heap memory solutions from DirectMemory and
Terracotta. Is there anyone willing to give it a try? :-)
From their documentation it sounds like they use standard Java
serialization, which means that DirectMemory could be even faster than
BigMemory, because it uses more efficient serializers.
Also, their implementation of querying/indexing does not sound very
optimized. Maybe that is another area where DirectMemory could be
better. If DirectMemory showed comparable or better overall
performance, it would add a lot of credibility, IMHO.
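To make the comparison concrete, here is a minimal sketch of the pattern being discussed: serializing a value with plain java.io serialization and parking the resulting bytes in off-heap (direct) memory, out of the garbage collector's reach. This is only an illustration of the general technique, not DirectMemory's or BigMemory's actual API; the class and variable names are made up.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.nio.ByteBuffer;

public class OffHeapSketch {
    public static void main(String[] args) throws Exception {
        String value = "hello off-heap";

        // Serialize with standard Java serialization (what BigMemory
        // reportedly uses; a faster serializer could plug in here).
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
            oos.writeObject(value);
        }
        byte[] bytes = baos.toByteArray();

        // Store the bytes outside the Java heap, invisible to the GC.
        ByteBuffer offHeap = ByteBuffer.allocateDirect(bytes.length);
        offHeap.put(bytes);

        // Read back: copy the bytes out of off-heap memory and deserialize.
        offHeap.flip();
        byte[] copy = new byte[offHeap.remaining()];
        offHeap.get(copy);
        try (ObjectInputStream ois =
                 new ObjectInputStream(new ByteArrayInputStream(copy))) {
            System.out.println(ois.readObject());
        }
    }
}
```

Since serialization happens on every put and deserialization on every get, the serializer's speed dominates cache throughput, which is why a more efficient serializer could give DirectMemory an edge.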

Another thing that could be interesting is to look at their APIs and
see whether something could or should be modeled after them, so that
users can easily switch from their closed-source solutions to an
open-source one.

The third thing is: it could be interesting and useful to join forces
among open-source projects that have a similar goal of providing
efficient serialization and off-heap memory.
Right now we have something like this:
- (distributed) off-heap caches: DirectMemory, BigCache, and Hazelcast
Enterprise (off-heap support is closed-source at the moment)
- serialization: kryo, protostuff, lightning, protocol buffers and
many, many more implementations, which all look very similar
IMHO, too much effort is wasted re-implementing the same functionality
in every project. If those projects (especially the off-heap caching
implementations) worked together, it would lead to much better results
much faster. What do you think? Any plans to co-operate with any of
them?

