hbase-user mailing list archives

From Ryan Rawson <ryano...@gmail.com>
Subject Re: Data Loss During Bulk Load
Date Wed, 24 Mar 2010 18:12:33 GMT
You'll want this one:

<property>
<name>dfs.datanode.socket.write.timeout</name>
<value>0</value>
</property>

A classic standby from just over a year ago.  It should be in the
recommended config - it might not be anymore, but I am finding it
necessary now.
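
In case it is not obvious where that goes: below is a hedged sketch of how
the property might sit in hdfs-site.xml on the datanodes, together with the
xceiver limit Tuan mentions below.  The 4096 value is only a commonly
suggested starting point, not something from this thread, so treat it as
illustrative and restart the datanodes after changing it.

<!-- hdfs-site.xml (illustrative sketch, not verbatim from this thread) -->

<!-- Disable the datanode socket write timeout; 0 means no timeout. -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>0</value>
</property>

<!-- Raise the datanode xceiver limit above the default; 4096 is just a
     commonly used value for HBase clusters, tune it for your load. -->
<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>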

On Wed, Mar 24, 2010 at 11:06 AM, Rod Cope <rod.cope@openlogic.com> wrote:
> This describes my situation, too.  I never could get rid of the
> SocketTimeoutExceptions, even after dozens of hours of research and
> applying every tuning and configuration suggestion I could find.
>
> Rod
>
>
> On Wednesday, March 24, 2010 at 11:45 AM, "Tuan Nguyen"
> <tuan08@gmail.com> wrote:
>
>> Hi Nathan,
>>
>> We recently ran a performance test against hbase 0.20.3 and hadoop 0.20.2.
>> We have quite a similar problem to yours.  In the first scan test, we
>> noticed that we lost some data in certain columns of certain rows, and our
>> log had errors such as Error Recovery for block, Could not get the block,
>> IOException, SocketTimeoutException: 480000 millis timeout...  And the test
>> failed completely in the middle.  After various tuning of the GC, caching,
>> xcievers... we could finish the test without any data loss.  Our log had
>> only the SocketTimeoutException: 480000 millis timeout error left.
>>
>> Tuan Nguyen!
>
>
> --
>
> Rod Cope | CTO and Founder
> rod.cope@openlogic.com
> Follow me on Twitter @RodCope
>
> 720 240 4501    |  phone
> 720 240 4557    |  fax
> 1 888 OpenLogic    |  toll free
> www.openlogic.com
> Follow OpenLogic on Twitter @openlogic
