hbase-user mailing list archives

From Bing Jiang <jiangbinglo...@gmail.com>
Subject Re: regionserver died when using Put to insert data
Date Wed, 14 Aug 2013 14:12:45 GMT
Hi Jia,
If you want to load 77GB of data, you can consider the approach below:
1. Create the table with pre-split regions beforehand.
2. Write a MapReduce program that generates HFiles matching the table's
region splits (HFileOutputFormat; see the bulk load documentation).
3. Incrementally load the HFiles into the regions.
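A rough sketch of those three steps (table name, column family, and paths are placeholders, and the hex key space in `split_points` is an assumption — adapt it to your real row-key distribution; ImportTsv is just one stock way to produce HFiles, and a custom MR job using HFileOutputFormat works the same way):

```shell
# Step 1 helper: print n-1 evenly spaced one-byte hex split keys,
# assuming hex-prefixed row keys (placeholder key space).
split_points() {
  n=$1; i=1
  while [ "$i" -lt "$n" ]; do
    printf '%02x\n' $(( i * 256 / n ))
    i=$(( i + 1 ))
  done
}
split_points 8   # -> 20 40 60 80 a0 c0 e0

# In the hbase shell, create the table pre-split on those keys:
#   create 'mytable', 'cf', SPLITS => ['20','40','60','80','a0','c0','e0']

# Step 2: generate HFiles aligned with the table's regions, e.g. with the
# stock ImportTsv tool (HFileOutputFormat under the hood):
#   hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
#     -Dimporttsv.columns=HBASE_ROW_KEY,cf:q \
#     -Dimporttsv.bulk.output=/tmp/hfiles mytable /input/tsv

# Step 3: move the generated HFiles into the regions:
#   hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /tmp/hfiles mytable
```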
On Aug 14, 2013 6:50 PM, "Jean-Marc Spaggiari" <jean-marc@spaggiari.org>
wrote:

> Hi Jia,
>
> That's just how HBase works ;)
>
> When regions grow bigger than the configured maximum, HBase splits them.
> The default is 10GB, but you can configure it per table.
>
> So with 77GB you should end up with at least 8 regions. For performance,
> don't forget to pre-split before you load...
>
> JM
> On 2013-08-13 22:16, <tjuhenryli@gmail.com> wrote:
>
> > Hi Jean-Marc,
> >
> > HDFS is running all the time. I guess HBase performs splits while the
> > large Put load is in progress, and the original HFile is split into new
> > HFiles?
> >
> > Is that possible?
> >
> >
> > Hi Jia,
> >
> > How is your HDFS running?
> >
> > "Caused by: org.apache.hadoop.ipc.RemoteException(java.io.IOException): File
> > /apps/hbase/data/lbc_zte_1_imei_index/4469e6b0500bf3f5ed0ac1247d249537/.tmp/e7bb489662344b26bc6de1e72c122eec
> > could only be replicated to 0 nodes instead of minReplication (=1). There
> > are 3 datanode(s) running and no node(s) are excluded in this operation."
> >
> > Sounds like there is some issue on the datanode. Have you checked its
> > logs?
> >
>
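On the "could only be replicated to 0 nodes" error quoted above: a quick first check, assuming shell access to the cluster, is whether the three datanodes are actually live and have free space (the Hadoop 1.x command name is shown; the log path is an assumption and varies by distribution):

```shell
# Report live/dead datanodes and remaining capacity per node
hadoop dfsadmin -report

# Then tail the datanode log on each node for disk-full or
# xceiver-limit errors around the failure time (path is a guess)
tail -n 200 /var/log/hadoop/*datanode*.log
```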
