hbase-user mailing list archives

From Ted Dunning <tdunn...@maprtech.com>
Subject Re: hbase and hadoop capacity and load measurement
Date Sat, 05 Feb 2011 08:25:32 GMT
Sounds like you have a problem with HBase being swapped out of memory.  It
might help (paradoxically) to decrease the memory available to HBase, since
it will then cache less and have fewer long-lived pages in its cache.  You
should certainly also consider decreasing the memory used by the map-reduce
processes.  One thing that might help is to use a Linux container to protect
the memory for the HBase process.  This could have dramatically negative
consequences as well, so tread carefully there.
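
Concretely, that tuning might look something like the sketch below.  The
heap sizes, cgroup name, mount point and limit are only assumptions for
illustration on an 8 GB node (cgroups v1 with the memory controller), not a
tested recipe for this particular cluster:

  # hbase-env.sh: shrink the regionserver heap so it leaves room for the
  # datanode, tasktracker, child tasks and the OS page cache (value in MB)
  export HBASE_HEAPSIZE=3000

  # mapred-site.xml: shrink the per-task child heap as well, e.g.
  #   mapred.child.java.opts = -Xmx512m
  # so 2 mappers + 1 reducer per node stay well inside the remaining RAM

  # the "linux container" idea: a memory cgroup that keeps map-reduce tasks
  # from pushing the regionserver into swap (mount point varies by distro)
  mkdir -p /sys/fs/cgroup/memory/mapred
  echo $((2 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/memory/mapred/memory.limit_in_bytes
  # TASKTRACKER_PID is the tasktracker's process id on this node
  echo $TASKTRACKER_PID > /sys/fs/cgroup/memory/mapred/tasks

The limit applies to the tasktracker and everything it forks, since child
processes start in their parent's cgroup.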

On Fri, Feb 4, 2011 at 5:09 PM, Jinsong Hu <jinsong_hu@hotmail.com> wrote:

> Hi there,
>  We have a hadoop/hbase cluster with 6 regionservers, which double as
> tasktrackers and datanodes.  Each has 8 GB of RAM and 4 x 0.5 TB disks.
> I am using the cdh3b2 distribution.
>
> I noticed that when the load is small, everything is happy.  However, when
> we push data to hbase continuously and run map-reduce jobs continuously,
> hbase becomes flaky and constantly gets into trouble.  Each day, on each
> regionserver, I see around 15-50 timeouts when the regionserver tries to
> save data into hdfs, and on average every 1-2 days there are enough
> NotServingRegionExceptions that hbase is no longer usable and I have to
> restart it to keep things working.  In the meantime, map-reduce and pushing
> data to the cluster keep working fine.  I checked the disk performance:
> there is significantly more writing than reading, and the average write
> rate is about 40-100 MB/sec per disk.  The disk %util is around 20-30% on
> average but occasionally goes to 100%.  The CPU iowait ranges from 50% to
> 100% and averages around 50% across all regionservers.
>
> I have tuned hbase GC, so that is not the problem.  The average write rate
> to hbase is about 2-3 MB/second, which is much smaller than the 10 MB/second
> benchmark I found reported on the internet.  The only difference is that
> this insertion is continuous and never stops.  On average there are around
> 10 map-reduce jobs running simultaneously, and each machine is configured
> with only 2 mappers and 1 reducer.
>
>  I am probably pushing the performance limit of the hadoop/hbase cluster.
> I wonder whether the hadoop/hbase community has any objective way to
> measure the capacity and load of a cluster; I googled around and didn't
> find one.  Without that, it is very hard to convince management to open
> the wallet and buy more machines, as they will ask what the capacity is,
> what our current load is, and so on.  I am working in a shared environment
> where almost everybody who has access to the system is allowed to submit
> jobs and insert data.  Storage and CPU are relatively easy to measure, but
> we are more interested in how much data can be inserted into hbase while
> map-reduce jobs are submitted continuously.
>
> Does anybody have an idea how to resolve this issue, and how to show
> management that more machines are needed, and furthermore how much memory,
> CPU, and disk are needed to support the ongoing load?
>
> Jimmy.
>
>
>
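
On the capacity/load question: one rough but objective starting point is to
log the same disk, iowait and swap figures quoted above on every node and
plot them against insert volume and running map-reduce jobs over a week or
two.  A minimal sampling loop might look like the following; it assumes the
sysstat tools are installed, and the interval and log path are arbitrary:

  # append a load sample every minute; the second report in each
  # iostat/vmstat pair covers the most recent 5-second window
  while true; do
    date '+%F %T'
    iostat -dxk 5 2      # per-disk write KB/s, await, %util
    mpstat 5 1           # CPU breakdown, including %iowait
    vmstat 5 2           # si/so columns show swap-in/swap-out activity
    sleep 60
  done >> /var/log/cluster-load.log

Sustained %util near 100, iowait pinned above 50% and non-zero swap activity
while inserts back up are the kind of numbers that make a hardware request
concrete.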
