hbase-dev mailing list archives

From Ed Kohlwey <ekohl...@gmail.com>
Subject Re: Hypertable claiming upto >900% random-read throughput vs HBase
Date Thu, 16 Dec 2010 02:19:28 GMT
Along the lines of Terracotta BigMemory: apparently what they are actually
doing is just using the DirectByteBuffer class (see this forum post:
http://forums.terracotta.org/forums/posts/list/4304.page), which is basically
the same as using malloc: it gives you non-GC access to a large pool of
memory that you can allocate as you please.
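For reference, this needs no special library; `ByteBuffer.allocateDirect` is the standard entry point. A minimal sketch:

```java
import java.nio.ByteBuffer;

public class DirectBufferDemo {
    public static void main(String[] args) {
        // 64 MB outside the Java heap: the GC does not scan or move it,
        // and it is released only when the buffer object is collected.
        ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);

        buf.putLong(0, 42L);                 // absolute write at offset 0
        System.out.println(buf.getLong(0));  // prints 42
        System.out.println(buf.isDirect());  // prints true
    }
}
```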

Using DirectByteBuffer directly might be even better than using
BigMemory, since BigMemory appears to use Java object serialization to
translate between its "special" memory and regular Java objects, which is
probably just another unnecessary layer.
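To illustrate the point (a hypothetical record layout, not BigMemory's actual format): if you write primitives at fixed offsets in a direct buffer, there is no serialization step at all.

```java
import java.nio.ByteBuffer;

// Hypothetical fixed-layout store: one [long key][int value] pair per slot,
// accessed by offset arithmetic instead of object (de)serialization.
public class OffHeapSlots {
    private static final int SLOT = Long.BYTES + Integer.BYTES; // 12 bytes
    private final ByteBuffer buf;

    public OffHeapSlots(int slots) {
        buf = ByteBuffer.allocateDirect(slots * SLOT);
    }

    public void put(int slot, long key, int value) {
        buf.putLong(slot * SLOT, key);
        buf.putInt(slot * SLOT + Long.BYTES, value);
    }

    public long key(int slot)  { return buf.getLong(slot * SLOT); }
    public int value(int slot) { return buf.getInt(slot * SLOT + Long.BYTES); }
}
```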

On Wed, Dec 15, 2010 at 3:27 PM, Vladimir Rodionov wrote:
> Why don't you use off-heap memory for this purpose? If it's a block cache
> (all blocks are of equal size), the alloc/free algorithm is pretty
> simple - you do not have to re-implement malloc in Java.
> I think something like an open-source version of Terracotta BigMemory is a
> good candidate for an Apache project. I see at least several large Hadoop
> processes - HBase, HDFS DataNodes, TaskTrackers, and the NameNode - that
> suffer a lot from GC timeouts.
> Best regards,
> Vladimir Rodionov
> Principal Platform Engineer
> Carrier IQ, www.carrieriq.com
> e-mail: vrodionov@carrieriq.com
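The equal-size-block scheme described in the quote above can be sketched by carving one direct buffer into fixed-size slices and recycling them through a free list, so nothing like malloc is needed (class name and sizes are illustrative, not HBase code):

```java
import java.nio.ByteBuffer;
import java.util.ArrayDeque;
import java.util.Deque;

// One off-heap arena, pre-sliced into equal-size blocks. Since every block
// is the same size, allocate/release is just an O(1) free-list push/pop.
public class FixedBlockPool {
    private final Deque<ByteBuffer> free = new ArrayDeque<>();

    public FixedBlockPool(int blockSize, int blockCount) {
        ByteBuffer arena = ByteBuffer.allocateDirect(blockSize * blockCount);
        for (int i = 0; i < blockCount; i++) {
            arena.position(i * blockSize).limit((i + 1) * blockSize);
            free.push(arena.slice()); // each slice is one reusable cache block
            arena.clear();            // reset position/limit for the next slice
        }
    }

    public ByteBuffer allocate() { return free.pop(); }

    public void release(ByteBuffer block) {
        block.clear();
        free.push(block);
    }
}
```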
