hbase-user mailing list archives

From ashish singhi <ashish.sin...@huawei.com>
Subject RE: hbase table creation
Date Thu, 16 Mar 2017 12:18:09 GMT
Was any data added to this table region? If not, you can skip this region directory when running completebulkload.

-----Original Message-----
From: Rajeshkumar J [mailto:rajeshkumarit8292@gmail.com] 
Sent: 16 March 2017 17:44
To: user@hbase.apache.org
Subject: Re: hbase table creation

Ashish,

    I have tried as you said, but I don't have any data in this folder

/hbase/tmp/t1/region1/d

So in the log

2017-03-16 13:12:40,120 WARN  [main] mapreduce.LoadIncrementalHFiles: Bulk load operation
did not find any files to load in directory /hbase/tmp/t1/region1.  Does it contain files
in subdirectories that correspond to column family names?

So is this data corrupted?



On Thu, Mar 16, 2017 at 5:14 PM, ashish singhi <ashish.singhi@huawei.com>
wrote:

> Hi,
>
> You can try the completebulkload tool to load the data into the table.
> Below is the command usage:
>
> hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles
>
> usage: completebulkload /path/to/hfileoutputformat-output tablename
>   -Dcreate.table=no - can be used to avoid creation of table by this tool
>   Note: if you set this to 'no', then the target table must already exist in HBase.
>
>
> For example:
> Suppose the table name is t1 and you have copied the data of t1 from
> cluster1 to the /hbase/tmp/t1 directory in cluster2.
> From each region directory of that table, delete the recovered.edits
> directory and any other directory except the column family directory
> (store dir). Suppose the table t1 has two regions and the listing of
> the table dir looks like below:
>
> ls /hbase/tmp/t1
>
> drwxr-xr-x    /hbase/tmp/t1/.tabledesc
> -rw-r--r--    /hbase/tmp/t1/.tabledesc/.tableinfo.0000000001
> drwxr-xr-x    /hbase/tmp/t1/.tmp
> drwxr-xr-x    /hbase/tmp/t1/region1
> -rw-r--r--    /hbase/tmp/t1/region1/.regioninfo
> drwxr-xr-x    /hbase/tmp/t1/region1/d
> -rwxrwxrwx    /hbase/tmp/t1/region1/d/0fcaf624cf124d7cab50ace0a6f0f9df_SeqId_4_
> drwxr-xr-x    /hbase/tmp/t1/region1/recovered.edits
> -rw-r--r--    /hbase/tmp/t1/region1/recovered.edits/2.seqid
> drwxr-xr-x    /hbase/tmp/t1/region2
> -rw-r--r--    /hbase/tmp/t1/region2/.regioninfo
> drwxr-xr-x    /hbase/tmp/t1/region2/d
> -rwxrwxrwx    /hbase/tmp/t1/region2/d/14925680d8a5457e9be1c05087f44df5_SeqId_4_
> drwxr-xr-x    /hbase/tmp/t1/region2/recovered.edits
> -rw-r--r--    /hbase/tmp/t1/region2/recovered.edits/2.seqid
>
> Delete the /hbase/tmp/t1/region1/recovered.edits and
> /hbase/tmp/t1/region2/recovered.edits directories.
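
The deletion step described above could be done with the HDFS CLI; a minimal sketch, using the paths from the listing above:

```shell
# Remove the recovered.edits directories so that only the column
# family directory (d) and .regioninfo remain under each region.
hadoop fs -rm -r /hbase/tmp/t1/region1/recovered.edits
hadoop fs -rm -r /hbase/tmp/t1/region2/recovered.edits
```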
>
> Now run completebulkload for each region, like below:
>
> 1) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /hbase/tmp/t1/region1 t1
> 2) hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /hbase/tmp/t1/region2 t1
>
> Note: the tool will create the table if it doesn't exist, but with only
> one region. If you want the same table properties as in cluster1, then
> you will have to create the table manually in cluster2.
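
For instance, pre-creating the table from the HBase shell might look like the sketch below; the column family name 'd' comes from the listing above, while the split key is a hypothetical placeholder:

```shell
# Create t1 with column family 'd' and one split point, so the two
# regions line up with the copied data ('<region2-start-key>' is a
# placeholder for the real start key of region2).
hbase shell <<'EOF'
create 't1', 'd', SPLITS => ['<region2-start-key>']
EOF
```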
>
> I hope this helps.
>
> Regards,
> Ashish
>
> -----Original Message-----
> From: Rajeshkumar J [mailto:rajeshkumarit8292@gmail.com]
> Sent: 16 March 2017 16:46
> To: user@hbase.apache.org
> Subject: Re: hbase table creation
>
> Karthi,
>
>    I have mentioned that, as of now, I don't have any data in that old
> cluster. I only have the copied files in the new cluster. I think I
> can't use this utility?
>
> On Thu, Mar 16, 2017 at 4:10 PM, karthi keyan 
> <karthi93.sankar@gmail.com>
> wrote:
>
> > Ted-
> >
> > Cool! I will keep that in mind hereafter.
> >
> > On Thu, Mar 16, 2017 at 4:06 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> >
> > > karthi:
> > > The link you posted was for 0.94
> > >
> > > We'd better use the up-to-date link from the refguide (see my previous reply).
> > >
> > > Cheers
> > >
> > > On Thu, Mar 16, 2017 at 3:26 AM, karthi keyan 
> > > <karthi93.sankar@gmail.com
> > >
> > > wrote:
> > >
> > > > Rajesh,
> > > >
> > > > Use HBase snapshots for backup: take a snapshot of the table under
> > > > "/hbase/default/data/testing", export it, and clone it on your
> > > > destination cluster.
> > > >
> > > > Snapshot ref link - http://hbase.apache.org/0.94/book/ops.snapshots.html
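
The snapshot workflow suggested here might be sketched as follows; the snapshot name and the destination NameNode address are illustrative assumptions:

```shell
# On the source cluster: take a snapshot of table 'testing'.
hbase shell <<'EOF'
snapshot 'testing', 'testing-snap'
EOF

# Export the snapshot to the destination cluster's HBase root dir
# (hdfs://cluster2-nn:8020 is an assumed NameNode address).
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
  -snapshot testing-snap \
  -copy-to hdfs://cluster2-nn:8020/hbase -mappers 4

# On the destination cluster: recreate the table from the snapshot.
hbase shell <<'EOF'
clone_snapshot 'testing-snap', 'testing'
EOF
```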
> > > >
> > > >
> > > >
> > > > On Thu, Mar 16, 2017 at 3:51 PM, sudhakara st 
> > > > <sudhakara.st@gmail.com>
> > > > wrote:
> > > >
> > > > > You have to use 'CopyTable'; here is more info:
> > > > > https://hbase.apache.org/book.html#copy.table
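
A CopyTable invocation along these lines could look like the sketch below; the peer ZooKeeper address is an assumed example, and the tool needs both clusters online:

```shell
# Run on the source cluster: copies table 'testing' to the cluster
# reachable via the given ZooKeeper quorum (address is hypothetical).
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=cluster2-zk:2181:/hbase \
  testing
```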
> > > > >
> > > > > On Thu, Mar 16, 2017 at 3:46 PM, Rajeshkumar J < 
> > > > > rajeshkumarit8292@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > I have copied the HBase data of a table from one cluster to
> > > > > > another. For instance, I have a table testing and its data will
> > > > > > be in the path /hbase/default/data/testing
> > > > > >
> > > > > > I have copied these files from the existing cluster to the new
> > > > > > cluster. Is there any possibility to create the table and load
> > > > > > data from these files in the new cluster?
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > >
> > > > > Regards,
> > > > > ...sudhakara
> > > > >
> > > >
> > >
> >
>