hbase-user mailing list archives

From Stuti Awasthi <stutiawas...@hcl.com>
Subject RE: Copying tables from one server to another
Date Thu, 15 Sep 2011 10:14:00 GMT
Hi Tom,
Issue is resolved :)

-----Original Message-----
From: Stuti Awasthi
Sent: Thursday, September 15, 2011 3:08 PM
To: user@hbase.apache.org
Subject: RE: Copying tables from one server to another

Hi Tom,

Could you please help me with this? I'm also facing issues while copying a table from one cluster
to another.
Please refer to my mail with subject <Issues in CopyTable to different cluster> for more details.


-----Original Message-----
From: Tom Goren [mailto:tom@tomgoren.com]
Sent: Thursday, September 08, 2011 6:40 PM
To: user@hbase.apache.org
Subject: Re: Copying tables from one server to another

OK got it to work.

It seems the problem was an unrelated network issue.

The CopyTable ran with the syntax I specified earlier.


On Thu, Sep 8, 2011 at 5:14 AM, Tom Goren <tom@tomgoren.com> wrote:

> J-D, thanks a lot.
> However I was not able to understand the correct usage from your
> explanation. I was only able to use the tool to copy a table from one
> server to itself (w/new.name or w/o) as I stated earlier.
> In addition, according to
> http://ofps.oreilly.com/titles/9781449396107/clusteradmin.html
>> Another supplied tool is *CopyTable*, which is primarily designed to
>> bootstrap cluster replication. You can use it to make a copy of an
>> existing table from the master cluster to the slave one
> Now, in my case, the two 'clusters' (pseudo-distributed) are entirely
> independent from one another.
> Could this be my problem perhaps?
> Again, I will state my goal: copying tables from a so-called
> production server to a so-called development server. As is implied,
> these machines are entirely separate, but reachable over the
> network in both directions.
> From what I gather, and as my experience has shown so far, the nearest
> option is manual or scripted creation of the tables, and then copying
> the data with the export tool via hdfs -> localfiles -> scp ->
> hdfs_on_new_server -> import tool (more or less as per these instructions:
> http://www.sethcall.com/blog/2010/04/10/how-to-export-and-import-an-hbase-table/)
> Surely there is something better?
> Thanks in advance, your help is greatly appreciated!
> Tom
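
The Export -> scp -> Import path Tom describes above can be sketched as follows. The hostnames and HDFS/local paths here are illustrative assumptions, not taken from the thread, and the target table must first be created on the destination cluster with the same column families:

```shell
# On the source cluster: dump the table to an export directory in HDFS
hbase org.apache.hadoop.hbase.mapreduce.Export my_table /tmp/my_table_export

# Pull the export files out of HDFS to the local filesystem
hadoop fs -get /tmp/my_table_export /tmp/my_table_export

# Ship them to the destination server (hostname is hypothetical)
scp -r /tmp/my_table_export devserver:/tmp/

# On the destination cluster: push the files into HDFS, then import
# into the pre-created table
hadoop fs -put /tmp/my_table_export /tmp/my_table_export
hbase org.apache.hadoop.hbase.mapreduce.Import my_table /tmp/my_table_export
```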
> On Wed, Sep 7, 2011 at 8:07 PM, Jean-Daniel Cryans <jdcryans@apache.org>wrote:
>> Inline.
>> J-D
>> On Wed, Sep 7, 2011 at 8:02 PM, Tom Goren <tom@tomgoren.com> wrote:
>> > It completed successfully on server A as destination and as source,
>> > however only after I created the table with all the correlating column
>> > families (specified by "--new.name=new_table_name"). Without that
>> > step being done manually it failed as well.
>> That's currently how it works, i.e. it doesn't create the table for you.
>> >
>> > When running:
>> >
>> > hbase org.apache.hadoop.hbase.mapreduce.CopyTable
>> > --peer.adr=serverB:2181:/hbase table_name
>> That's not the format, the last part of the peer address is
>> zookeeper.znode.parent which by default is /hbase (that's the root
>> znode where you can find the bootstrap information to contact hbase,
>> not a table name).
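
Putting J-D's explanation together: the third field of --peer.adr is the destination cluster's zookeeper.znode.parent (by default /hbase), and the table name is a separate trailing argument. A minimal sketch of the invocation, using serverB as the destination's ZooKeeper quorum as in the thread:

```shell
# Copy "table_name" from the local cluster to the cluster whose ZooKeeper
# quorum is serverB:2181 and whose root znode is /hbase. CopyTable does
# not create the table for you: it must already exist on the destination
# with the same column families.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=serverB:2181:/hbase \
  table_name
```

This is the same syntax Tom reported earlier; as he later confirmed, it worked once the unrelated network issue was resolved.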



