hbase-user mailing list archives

From anil gupta <anilgupt...@gmail.com>
Subject Re: org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
Date Mon, 15 Feb 2016 07:37:31 GMT
I figured out the problem. We have phoenix.upsert.batch.size set to 10 in
hbase-site.xml, but somehow that property is **not getting picked up in our
oozie workflow**.
When I explicitly set the phoenix.upsert.batch.size property in my oozie
workflow, the job ran successfully.

By default, phoenix.upsert.batch.size is 1000. Hence, the commits were
failing with a huge batch size of 1000.
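For anyone hitting the same thing: the fix was to pass the property through the workflow action's configuration block so it actually reaches the job conf. A minimal sketch of what that looks like — the action name and the surrounding elements are placeholders, not our actual workflow; only the phoenix.upsert.batch.size property is the real fix:

```xml
<!-- Hypothetical Oozie map-reduce action; everything except the
     phoenix.upsert.batch.size property is placeholder. -->
<action name="phoenix-mr-load">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <configuration>
      <property>
        <!-- Without this, Phoenix falls back to its default of 1000. -->
        <name>phoenix.upsert.batch.size</name>
        <value>10</value>
      </property>
    </configuration>
  </map-reduce>
  <ok to="end"/>
  <error to="fail"/>
</action>
```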

Thanks,
Anil Gupta


On Sun, Feb 14, 2016 at 8:03 PM, Heng Chen <heng.chen.1986@gmail.com> wrote:

> I am not sure whether the "upsert batch size" in Phoenix equals the HBase
> client batch-put size or not.
>
> But as the log shows, it seems there are 2000 actions sent to HBase at one time.
>
> 2016-02-15 11:38 GMT+08:00 anil gupta <anilgupta84@gmail.com>:
>
>> My phoenix upsert batch size is 50. You mean to say that 50 is also a lot?
>>
>> However, AsyncProcess is complaining about 2000 actions.
>>
>> I tried with an upsert batch size of 5 also, but it didn't help.
>>
>> On Sun, Feb 14, 2016 at 7:37 PM, anil gupta <anilgupta84@gmail.com>
>> wrote:
>>
>> > My phoenix upsert batch size is 50. You mean to say that 50 is also a
>> lot?
>> >
>> > However, AsyncProcess is complaining about 2000 actions.
>> >
>> > I tried with an upsert batch size of 5 also, but it didn't help.
>> >
>> >
>> > On Sun, Feb 14, 2016 at 6:43 PM, Heng Chen <heng.chen.1986@gmail.com>
>> > wrote:
>> >
>> >> 2016-02-14 12:34:23,593 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >>
>> >> It means your writes are too many; please decrease the batch size of
>> >> your puts, and balance your requests across each RS.
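For context on what that batch size controls: Phoenix buffers upserts and commits every N rows, so a smaller value just means smaller, more frequent flushes. A standalone sketch of the flush-every-N pattern — this uses a stand-in in-memory sink, not the Phoenix or HBase API, and the numbers are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlushSketch {
    // Stand-in for a committed-batch log; records the size of each flush.
    static List<Integer> flushedBatchSizes = new ArrayList<>();
    // Rows buffered since the last commit.
    static List<String> buffer = new ArrayList<>();

    // Buffer one row; flush once the buffer reaches batchSize.
    static void upsert(String row, int batchSize) {
        buffer.add(row);
        if (buffer.size() >= batchSize) {
            commit();
        }
    }

    // Flush whatever is buffered (also used for the final partial batch).
    static void commit() {
        if (!buffer.isEmpty()) {
            flushedBatchSizes.add(buffer.size());
            buffer.clear();
        }
    }

    public static void main(String[] args) {
        int batchSize = 50; // analogous to phoenix.upsert.batch.size
        for (int i = 0; i < 120; i++) {
            upsert("row-" + i, batchSize);
        }
        commit(); // flush the final partial batch
        System.out.println(flushedBatchSizes); // prints [50, 50, 20]
    }
}
```

With batchSize = 1000 (the Phoenix default), all 120 rows here would land in one big commit — which is why the default was producing the huge batches seen in the log.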
>> >>
>> >> 2016-02-15 4:53 GMT+08:00 anil gupta <anilgupta84@gmail.com>:
>> >>
>> >> > After a while we also get this error:
>> >> > 2016-02-14 12:45:10,515 WARN [main] org.apache.phoenix.execute.MutationState: Swallowing exception and retrying after clearing meta cache on connection.
>> >> > java.sql.SQLException: ERROR 2008 (INT10): Unable to find cached index metadata.  ERROR 2008 (INT10): ERROR 2008 (INT10): Unable to find cached index metadata.  key=-594230549321118802 region=BI.SALES,,1455470578449.44e39179789041b5a8c03316730260e7. Index update failed
>> >> >
>> >> > We have already set:
>> >> > <name>phoenix.coprocessor.maxServerCacheTimeToLiveMs</name><value>180000</value>
>> >> >
>> >> > Upsert batch size is 50. Writes are quite frequent, so the cache would
>> >> > not time out in 180000 ms.
>> >> >
>> >> >
>> >> > On Sun, Feb 14, 2016 at 12:44 PM, anil gupta <anilgupta84@gmail.com>
>> >> > wrote:
>> >> >
>> >> > > Hi,
>> >> > >
>> >> > > We are using Phoenix 4.4 and HBase 1.1 (HDP 2.3.4).
>> >> > > I have a MR job that is using PhoenixOutputFormat. My job keeps
>> >> > > failing due to the following error:
>> >> > >
>> >> > > 2016-02-14 12:29:43,182 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:29:53,197 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:30:03,212 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:30:13,225 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:30:23,239 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:30:33,253 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:30:43,266 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:30:53,279 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:31:03,293 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:31:13,305 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:31:23,318 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:31:33,331 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:31:43,345 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:31:53,358 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:32:03,371 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:32:13,385 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:32:23,399 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:32:33,412 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:32:43,428 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:32:53,443 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:33:03,457 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:33:13,472 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:33:23,486 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:33:33,524 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:33:43,538 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:33:53,551 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:34:03,565 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:34:03,953 INFO [hconnection-0xe82ca6e-shared--pool2-t16] org.apache.hadoop.hbase.client.AsyncProcess: #1, table=BI.SALES, attempt=10/35 failed=2000ops, last exception: null on hdp3.truecar.com,16020,1455326291512, tracking started null, retrying after=10086ms, replay=2000ops
>> >> > > 2016-02-14 12:34:13,578 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > > 2016-02-14 12:34:23,593 INFO [main] org.apache.hadoop.hbase.client.AsyncProcess: #1, waiting for 2000 actions to finish
>> >> > >
>> >> > >
>> >> > > I have never seen anything like this. Can anyone give me pointers
>> >> > > about this problem?
>> >> > >
>> >> > > --
>> >> > > Thanks & Regards,
>> >> > > Anil Gupta
>> >> > >
>> >> >
>> >> >
>> >> >
>> >> > --
>> >> > Thanks & Regards,
>> >> > Anil Gupta
>> >> >
>> >>
>> >
>> >
>> >
>> > --
>> > Thanks & Regards,
>> > Anil Gupta
>> >
>>
>>
>>
>> --
>> Thanks & Regards,
>> Anil Gupta
>>
>
>


-- 
Thanks & Regards,
Anil Gupta
