hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: HTable.put hangs on bulk loading
Date Fri, 08 Apr 2011 21:20:08 GMT
That exception means you are running out of threads on the whole
machine. I wonder how you were able to get that... is HBase running on
that machine too? I'd love to see your configuration, but what you
pasted is hbase-default.xml, which doesn't say anything since it's all
the default values.

The newer FC releases have an awfully small default setting for nproc.
If that's your case you might want to bump it; Google should tell you
how on your specific system.
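For reference, on most Linux systems the limit can be checked and raised roughly like this (the user name "hbase" and the value 32768 below are illustrative, not from this thread):

```shell
# Show the current per-user limit on processes/threads (nproc).
ulimit -u

# To raise it persistently, add lines like these to /etc/security/limits.conf
# (adjust the user name and value for your setup), then log in again:
#   hbase  soft  nproc  32768
#   hbase  hard  nproc  32768
```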

J-D

On Fri, Apr 8, 2011 at 12:21 PM, Ajay Govindarajan
<agovindarajan@yahoo.com> wrote:
> I used to call HTable.close after each put. I commented it out and now I get the exception
below (the program stops inserting at the exact same point, i.e. after 15876 rows):
>
> Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:640)
>         at java.util.concurrent.ThreadPoolExecutor.ensureQueuedTaskHandled(ThreadPoolExecutor.java:760)
>         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:655)
>         at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
>         at org.apache.hadoop.hbase.client.HConnectionManager$TableServers.processBatchOfPuts(HConnectionManager.java:1439)
>         at org.apache.hadoop.hbase.client.HTable.flushCommits(HTable.java:664)
>         at org.apache.hadoop.hbase.client.HTable.doPut(HTable.java:549)
>         at org.apache.hadoop.hbase.client.HTable.put(HTable.java:535)
>
> I am not sure why this happens because I commit after every put.
>
> Any help will be appreciated.
>
> thanks
> -ajay
>
> ________________________________
> From: Ajay Govindarajan <agovindarajan@yahoo.com>
> To: "user@hbase.apache.org" <user@hbase.apache.org>
> Sent: Thursday, April 7, 2011 5:35 PM
> Subject: Re: HTable.put hangs on bulk loading
>
> Thanks for pointing this out. I have uploaded the server config at:
> http://pastebin.com/U41QZGiq
>
> thanks
> -ajay
>
> ________________________________
> From: Jean-Daniel Cryans <jdcryans@apache.org>
> To: user@hbase.apache.org
> Sent: Thursday, April 7, 2011 10:29 AM
> Subject: Re: HTable.put hangs on bulk loading
>
> There's nothing of use in the pasted logs unfortunately, and the log
> didn't get attached to your mail (that happens often). Consider putting
> it on a web server or pastebin.
>
> Also, I see you are on an older version. Upgrading isn't going to fix
> your issue (which is probably related to your environment or
> configuration), but at least it's going to be easier for us to support
> you.
>
> J-D
>
> On Wed, Apr 6, 2011 at 7:10 PM, ajay.gov <agovindarajan@yahoo.com> wrote:
>>
>> I am doing a load test for which I need to load a table with many rows.  I
>> have a small Java program that has a for loop and calls HTable.put.  I am
>> inserting a map of 2 items into a table that has one column family. The
>> limit of the for loop is currently 20000, but after 15876 rows the call
>> to put hangs. I am using autoFlush on the HTable. Any ideas why this may
>> happen?
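>> A loop like the one described could be sketched as follows, with a single
>> shared HTable instance (this is a minimal sketch assuming the 0.90-era
>> client API visible in the stack trace; row keys, the qualifier, and the
>> values are illustrative):

```java
// Minimal sketch of the described loader, assuming the 0.90-era HBase
// client API. Create ONE HTable and reuse it for every put; creating or
// closing and recreating it per put leaks connection resources and threads.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class BulkLoader {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "TABLE2");
        try {
            table.setAutoFlush(false);      // buffer puts client-side
            for (int i = 0; i < 20000; i++) {
                Put put = new Put(Bytes.toBytes("row-" + i));
                put.add(Bytes.toBytes("TABLE2_CF1"), Bytes.toBytes("col"),
                        Bytes.toBytes("value-" + i));
                table.put(put);
            }
            table.flushCommits();           // push whatever is still buffered
        } finally {
            table.close();                  // close once, at the very end
        }
    }
}
```

>> (Running this requires a live HBase cluster and the client jars on the
>> classpath, so it is a sketch rather than a standalone test.)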
>>
>> The table configuration:
>> DESCRIPTION                                          ENABLED
>>  {NAME => 'TABLE2', FAMILIES => [{NAME => 'TABLE2_CF true
>>  1', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0'
>>  , COMPRESSION => 'NONE', VERSIONS => '3', TTL => '2
>>  147483647', BLOCKSIZE => '65536', IN_MEMORY => 'fal
>>  se', BLOCKCACHE => 'true'}]}
>>
>> The HBase config on the client is the one in the hbase-default.xml. Some
>> values:
>> hbase.client.write.buffer=2097152
>> hbase.client.pause=1000
>> hbase.client.retries.number=10
>>
>> If I use another client I am able to put items into the table. I am also able
>> to scan items from the table using the HBase shell.
>>
>> I have attached the server configuration.
>> I don't see anything unusual in the region server or master logs. I have them here.
>>
>> The master server log:
>> 2011-04-06 19:02:40,149 INFO org.apache.hadoop.hbase.master.BaseScanner:
>> RegionManager.rootScanner scanning meta region {server:
>> 184.106.69.238:60020, regionname: -ROOT-,,0.70236052, startKey: <>}
>> 2011-04-06 19:02:40,152 INFO org.apache.hadoop.hbase.master.BaseScanner:
>> RegionManager.rootScanner scan of 1 row(s) of meta region {server:
>> 184.106.69.238:60020, regionname: -ROOT-,,0.70236052, startKey: <>} complete
>> 2011-04-06 19:02:40,157 INFO org.apache.hadoop.hbase.master.ServerManager: 1
>> region servers, 0 dead, average load 42.0
>> 2011-04-06 19:03:15,252 INFO org.apache.hadoop.hbase.master.BaseScanner:
>> RegionManager.metaScanner scanning meta region {server:
>> 184.106.69.238:60020, regionname: .META.,,1.1028785192, startKey: <>}
>> 2011-04-06 19:03:15,265 INFO org.apache.hadoop.hbase.master.BaseScanner:
>> RegionManager.metaScanner scan of 40 row(s) of meta region {server:
>> 184.106.69.238:60020, regionname: .META.,,1.1028785192, startKey: <>}
>> complete
>> 2011-04-06 19:03:15,266 INFO org.apache.hadoop.hbase.master.BaseScanner: All
>> 1 .META. region(s) scanned
>>
>>
>> The region server logs:
>> 2011-04-06 19:02:21,294 DEBUG org.apache.hadoop.hbase.regionserver.HRegion:
>> Creating region TABLE2,,1302141740486.010a5ae704ed53f656cbddb8e489162a.
>> 2011-04-06 19:02:21,295 INFO org.apache.hadoop.hbase.regionserver.HRegion:
>> Onlined TABLE2,,1302141740486.010a5ae704ed53f656cbddb8e489162a.; next
>> sequenceid=1
>>
>> --
>> View this message in context: http://old.nabble.com/HTable.put-hangs-on-bulk-loading-tp31338874p31338874.html
>> Sent from the HBase User mailing list archive at Nabble.com.
>>
>>
