hbase-user mailing list archives

From lars hofhansl <lhofha...@yahoo.com>
Subject Re: export/import for backup
Date Wed, 22 Feb 2012 01:55:46 GMT
I filed HBASE-5440.
Although I am framing this more as an import-to-bulk-load. I.e. we run export as we do now, but
on import one can choose to create HFiles for bulk load, instead of updating the live cluster
through the API.
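
To make that concrete, a hypothetical invocation might look like the following (the
import.bulk.output property name is purely illustrative; the actual interface is what
HBASE-5440 will settle):

  # as today: replay the exported edits into the live table through the API
  hbase org.apache.hadoop.hbase.mapreduce.Import mytable /backup/mytable-export

  # proposed: write HFiles for completebulkload instead
  hbase org.apache.hadoop.hbase.mapreduce.Import \
      -Dimport.bulk.output=/tmp/mytable-hfiles mytable /backup/mytable-export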

-- Lars



________________________________
 From: lars hofhansl <lhofhansl@yahoo.com>
To: "user@hbase.apache.org" <user@hbase.apache.org> 
Sent: Tuesday, February 21, 2012 9:27 AM
Subject: Re: export/import for backup
 
It seems we could converge the import and importtsv tools. importtsv can write directly to
a (live) table or use HFileOutputFormat.
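
For comparison, importtsv already flips between the two modes with a single property. A
sketch of the two invocations (the table name, column spec, and paths are made up):

  # write Puts directly into the live table
  hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
      -Dimporttsv.columns=HBASE_ROW_KEY,f:c mytable /input/tsv

  # write HFiles for bulk load instead
  hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
      -Dimporttsv.columns=HBASE_ROW_KEY,f:c \
      -Dimporttsv.bulk.output=/tmp/mytable-hfiles mytable /input/tsv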

-- Lars



________________________________
From: Stack <stack@duboce.net>
To: user@hbase.apache.org 
Sent: Monday, February 20, 2012 9:19 PM
Subject: Re: export/import for backup

On Mon, Feb 20, 2012 at 1:58 PM, Paul Mackles <pmackles@adobe.com> wrote:
> Actually, an hbase export to "bulk load" facility sounds like a great idea. We have been
> using bulk loads to migrate data from an older data store, and they have worked great for
> us. It also doesn't seem like it would be that hard to implement. So what am I missing?
>

Little?

Check out Import.java in the mapreduce package.  See how it's pulling
from SequenceFiles in a map that writes to a TableOutputFormat.  Make
a new MR job that has the same input but that outputs to
HFileOutputFormat instead (you'll need the total order partitioner
and a reducer in the mix, which Import doesn't have).
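
A minimal sketch of that job, assuming the 0.92-era mapreduce APIs (the class and mapper
names here are hypothetical, not the eventual HBASE-5440 patch): the mapper reads the
(ImmutableBytesWritable, Result) records Export writes and emits KeyValues, and
HFileOutputFormat.configureIncrementalLoad() wires in the sort reducer and the
TotalOrderPartitioner for you.

  import java.io.IOException;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.Path;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.KeyValue;
  import org.apache.hadoop.hbase.client.HTable;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.Mapper;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
  import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
  import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

  public class ImportToHFiles {

    // Same input as Import's mapper (the SequenceFiles Export writes),
    // but emits KeyValues instead of Put-ing into a live table.
    static class KeyValueImporter
        extends Mapper<ImmutableBytesWritable, Result, ImmutableBytesWritable, KeyValue> {
      @Override
      public void map(ImmutableBytesWritable row, Result value, Context context)
          throws IOException, InterruptedException {
        for (KeyValue kv : value.raw()) {
          context.write(row, kv);
        }
      }
    }

    // Usage: ImportToHFiles <tablename> <export-input-dir> <hfile-output-dir>
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      Job job = new Job(conf, "import_to_hfiles_" + args[0]);
      job.setJarByClass(ImportToHFiles.class);

      job.setInputFormatClass(SequenceFileInputFormat.class);
      FileInputFormat.setInputPaths(job, new Path(args[1]));

      job.setMapperClass(KeyValueImporter.class);
      job.setMapOutputKeyClass(ImmutableBytesWritable.class);
      job.setMapOutputValueClass(KeyValue.class);

      // Wires in KeyValueSortReducer and the TotalOrderPartitioner,
      // sampling split points from the table's region boundaries;
      // these are the pieces Import doesn't have.
      HTable table = new HTable(conf, args[0]);
      HFileOutputFormat.configureIncrementalLoad(job, table);

      FileOutputFormat.setOutputPath(job, new Path(args[2]));
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }

When the job finishes, the HFiles under the output dir would be handed to
completebulkload (LoadIncrementalHFiles) as usual.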

St.Ack