hbase-user mailing list archives

From Dmitriy Lyfar <dly...@gmail.com>
Subject Re: Problems with write performance (25kb rows)
Date Wed, 13 Jan 2010 12:35:06 GMT
Hi Stack,

Thank you for your help. I set the xceivers limit in the HDFS XML config like this:

<property>
        <name>dfs.datanode.max.xcievers</name>
        <value>8192</value>
</property>

And ulimit is 32K for sure. I turned off the DEBUG logging level for HBase, and
here is the log for one of the regionservers after I inserted 200K records
(each row is 25KB).
The speed is still the same (about 1K rows per second).
Random ints now play the role of row keys (i.e. a uniform random distribution
on (0, 100 * 1000)).
What do you think: is 5GB for HBase and 2GB for HDFS enough?
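For context, the workload numbers above imply the following back-of-the-envelope figures (this is only arithmetic on the sizes stated in the message, assuming 1 KB = 1024 bytes):

```python
# Back-of-the-envelope numbers for the workload described above.
ROW_SIZE_BYTES = 25 * 1024    # each row is 25 KB
ROWS_INSERTED = 200_000       # 200K records inserted
ROWS_PER_SECOND = 1_000       # observed speed: ~1K rows/sec

total_gb = ROW_SIZE_BYTES * ROWS_INSERTED / 1024**3
throughput_mb_s = ROW_SIZE_BYTES * ROWS_PER_SECOND / 1024**2

print(f"total data: {total_gb:.1f} GB")                 # ~4.8 GB
print(f"write throughput: {throughput_mb_s:.1f} MB/s")  # ~24.4 MB/s
```

So the 5GB HBase heap is roughly the same size as the entire dataset being inserted, and the observed rate works out to about 24 MB/s of raw row data.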


> What are your tasktrackers doing?   Are they doing the hbase loading?  You
> might try turning down how many tasks run concurrently on each tasktracker.
> The running tasktracker may be sucking resources from hdfs (and thus, by
> association, from hbase): i.e. mapred.map.tasks and mapred.reduce.tasks.
> (Pardon me if this advice has been given previously and you've already acted
> on it.)


The tasktrackers are not in use now (I planned them for future use in
statistical analysis), so I turned them off for the last tests. The data
uploader is several clients which run simultaneously on the name node, each
inserting 100K records.
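The uploader structure described above can be sketched as follows. This is only an illustrative sketch: the real clients would use the HBase Java client API (HTable/Put), and `put_row` here is a hypothetical stand-in for that call, with the client count chosen arbitrarily.

```python
import random
from multiprocessing import Pool

ROWS_PER_CLIENT = 100_000    # each client inserts 100K records
KEY_RANGE = 100 * 1000       # row keys drawn uniformly from (0, 100000)
VALUE = b"x" * 25 * 1024     # 25 KB payload per row

def put_row(key, value):
    # Placeholder: a real client would issue an HBase put here.
    pass

def run_client(client_id):
    # Each client draws uniform random int row keys, as in the test setup.
    rng = random.Random(client_id)
    for _ in range(ROWS_PER_CLIENT):
        key = rng.randrange(KEY_RANGE)
        put_row(key, VALUE)
    return ROWS_PER_CLIENT

if __name__ == "__main__":
    num_clients = 4  # arbitrary; the message only says "several"
    with Pool(num_clients) as pool:
        inserted = sum(pool.map(run_client, range(num_clients)))
    print(f"inserted {inserted} rows")
```

Note that with keys drawn from a 100K range while inserting 200K+ rows total, many keys will repeat, so later puts overwrite earlier versions of the same row.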

-- 
Regards, Lyfar Dmitriy
