hbase-user mailing list archives

From Amit Shah <amits...@gmail.com>
Subject Leveraging As Much Memory As Possible
Date Wed, 30 Mar 2016 14:59:47 GMT

I am trying to configure my HBase (version 1.0) and Phoenix (version 4.6)
cluster to utilize as much memory as possible on the server hardware. We
have an OLAP workload that lets users perform interactive analysis over
huge data sets. While reading about HBase configuration I came across two
options:

1. HBase bucket cache (off-heap), which looks like a good option to keep
block cache data out of the reach of garbage collection.
2. Hadoop pinned HDFS blocks (max locked memory), a feature that pins HDFS
blocks in the DataNode's memory. But given that HBase is configured with
short-circuit reads, I assume this setting would not help much; instead it
would be better to increase the region server heap. Is my understanding
right?

We use HBase with Phoenix.
Kindly let me know your thoughts, or suggest any other options I should
explore.

