hbase-user mailing list archives

From ramkrishna vasudevan <ramkrishna.s.vasude...@gmail.com>
Subject Re: Hbase import Tsv performance (slow import)
Date Wed, 24 Oct 2012 05:52:06 GMT
Anil,
When you do ImportTSV, only the data that is present in the TSV file
will be parsed and loaded into HBase.
How are you planning to generate the unique ID? Your use case sounds
like your data is in a TSV file but the unique ID that you need is not
part of the TSV, and now you need the rows to be loaded into HBase
through the WAL.

I would suggest that you first load the existing TSV file into one
HTable. Then, from that table, you can do a bulk load into another
table using your custom mapper. Here you can apply the logic of
generating a unique ID for every row that comes out of the loaded
table. That way the data is inserted into the new table through normal
Puts, which use the WAL and memstore.
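
A minimal sketch of such a mapper, assuming hypothetical table names
("staging_table", "final_table") and a UUID-based ID just for
illustration; the Puts are written out through TableOutputFormat, so
they take the normal write path:

import java.io.IOException;
import java.util.UUID;

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
import org.apache.hadoop.hbase.mapreduce.TableMapper;
import org.apache.hadoop.hbase.util.Bytes;

public class UniqueIdMapper extends TableMapper<ImmutableBytesWritable, Put> {
  @Override
  protected void map(ImmutableBytesWritable row, Result value, Context context)
      throws IOException, InterruptedException {
    // Generate a unique ID for this row (UUID used here only for
    // illustration) and prepend it to the original key.
    byte[] newKey = Bytes.add(Bytes.toBytes(UUID.randomUUID().toString()),
        row.get());
    Put put = new Put(newKey);
    // Copy every cell of the source row under the new key.
    for (KeyValue kv : value.raw()) {
      put.add(kv.getFamily(), kv.getQualifier(), kv.getValue());
    }
    // Written via TableOutputFormat, this Put goes through the region
    // server's normal write path (WAL + memstore).
    context.write(new ImmutableBytesWritable(newKey), put);
  }
}

The driver would wire it up with something like:

// Read from the staging table, write to the final table:
//   TableMapReduceUtil.initTableMapperJob("staging_table", new Scan(),
//       UniqueIdMapper.class, ImmutableBytesWritable.class, Put.class, job);
//   TableMapReduceUtil.initTableReducerJob("final_table", null, job);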

Regards
Ram

On Wed, Oct 24, 2012 at 10:58 AM, anil gupta <anilgupta84@gmail.com> wrote:

> That's a very interesting fact. You made it clear, but my custom Bulk
> Loader generates a unique ID for every row in the map phase, so not all
> of my data is in the CSV or text file. Is there a way that I can
> explicitly turn on the WAL for bulk loading?
>
> On Tue, Oct 23, 2012 at 10:14 PM, Anoop John <anoop.hbase@gmail.com> wrote:
>
> > Hi Anil
> >                 In case of bulk loading, it is not like data is put
> > into HBase one by one. The MR job will create output in HFile format:
> > it will create the KVs and write them to the file in the order an
> > HFile expects. Then the file is loaded into HBase. Only for this final
> > step is the HBase RS used, so there is no point in a WAL there. Am I
> > making it clear for you? The data is already present in the form of
> > raw data in some txt or csv file :)
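> >
> > That final load step is the completebulkload step; roughly like this
> > (the jar name and paths below are placeholders, not from this thread):
> >
> > $ hadoop jar $HBASE_HOME/hbase-<version>.jar completebulkload \
> >     /tmp/bulk-output mytable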
> >
> > -Anoop-
> >
> > On Wed, Oct 24, 2012 at 10:41 AM, Anoop John <anoop.hbase@gmail.com> wrote:
> >
> > > Hi Anil
> > >
> > >
> > >
> > > On Wed, Oct 24, 2012 at 10:39 AM, anil gupta <anilgupta84@gmail.com> wrote:
> > >
> > >> Hi Anoop,
> > >>
> > >> As per your last email, did you mean that the WAL is not used by the
> > >> HBase Bulk Loader? If yes, then how do we ensure "no data loss" in
> > >> case of a RegionServer failure?
> > >>
> > >> Thanks,
> > >> Anil Gupta
> > >>
> > >> On Tue, Oct 23, 2012 at 9:55 PM, ramkrishna vasudevan <ramkrishna.s.vasudevan@gmail.com> wrote:
> > >>
> > >> > As Kevin suggested, we can make use of the load that goes through
> > >> > the WAL and memstore. Or the second option is to use the output of
> > >> > the mappers to create HFiles directly.
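> > >> >
> > >> > A compact sketch of wiring that second option (driver only;
> > >> > MyTsvMapper is a hypothetical mapper emitting
> > >> > (ImmutableBytesWritable, Put) pairs, and table/path names are
> > >> > placeholders):
> > >> >
> > >> > import org.apache.hadoop.conf.Configuration;
> > >> > import org.apache.hadoop.fs.Path;
> > >> > import org.apache.hadoop.hbase.HBaseConfiguration;
> > >> > import org.apache.hadoop.hbase.client.HTable;
> > >> > import org.apache.hadoop.hbase.client.Put;
> > >> > import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
> > >> > import org.apache.hadoop.hbase.mapreduce.HFileOutputFormat;
> > >> > import org.apache.hadoop.mapreduce.Job;
> > >> > import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
> > >> > import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
> > >> >
> > >> > public class BulkLoadDriver {
> > >> >   public static void main(String[] args) throws Exception {
> > >> >     Configuration conf = HBaseConfiguration.create();
> > >> >     Job job = new Job(conf, "bulk-hfile-load");
> > >> >     job.setJarByClass(BulkLoadDriver.class);
> > >> >     job.setMapperClass(MyTsvMapper.class);  // hypothetical mapper
> > >> >     job.setMapOutputKeyClass(ImmutableBytesWritable.class);
> > >> >     job.setMapOutputValueClass(Put.class);
> > >> >     FileInputFormat.addInputPath(job, new Path("/user/data/input.tsv"));
> > >> >     HTable table = new HTable(conf, "mytable");
> > >> >     // Wires the sort reducer, total-order partitioner and HFile
> > >> >     // output format so the job emits region-aligned HFiles.
> > >> >     HFileOutputFormat.configureIncrementalLoad(job, table);
> > >> >     FileOutputFormat.setOutputPath(job, new Path("/tmp/hfiles"));
> > >> >     System.exit(job.waitForCompletion(true) ? 0 : 1);
> > >> >   }
> > >> > }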
> > >> >
> > >> > Regards
> > >> > Ram
> > >> >
> > >> > On Wed, Oct 24, 2012 at 8:59 AM, Anoop John <anoop.hbase@gmail.com> wrote:
> > >> >
> > >> > > Hi
> > >> > >     Using the ImportTSV tool, you are trying to bulk load your
> > >> > > data. Can you check how many mappers and reducers there were, and,
> > >> > > out of the total time, how much was taken by the mapper phase and
> > >> > > how much by the reducer phase? It seems like an MR-related issue
> > >> > > (maybe some conf issue). In this bulk load case most of the work is
> > >> > > done by the MR job. It will read the raw data, convert it into
> > >> > > Puts, and write HFiles. The MR output is HFiles itself. The next
> > >> > > part of ImportTSV will just put the HFiles under the table region
> > >> > > store. There won't be any WAL usage in this bulk load.
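> > >> > >
> > >> > > (A typical bulk invocation, with placeholder column spec and
> > >> > > paths, looks something like the sketch below; run without
> > >> > > -Dimporttsv.bulk.output, the same tool instead writes normal Puts
> > >> > > through the region servers, i.e. through the WAL and memstore:)
> > >> > >
> > >> > > $ hadoop jar $HBASE_HOME/hbase-<version>.jar importtsv \
> > >> > >     -Dimporttsv.columns=HBASE_ROW_KEY,f1:c1 \
> > >> > >     -Dimporttsv.bulk.output=/tmp/bulk-output \
> > >> > >     mytable /user/data/input.tsv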
> > >> > >
> > >> > > -Anoop-
> > >> > >
> > >> > > On Tue, Oct 23, 2012 at 9:18 PM, Nick maillard <nicolas.maillard@fifty-five.com> wrote:
> > >> > >
> > >> > > > Hi everyone
> > >> > > >
> > >> > > > I'm starting with HBase and testing for our needs. I have set up
> > >> > > > a Hadoop cluster of three machines and an HBase cluster on top of
> > >> > > > the same three machines, one master and two slaves.
> > >> > > >
> > >> > > > I am testing the import of a 5GB CSV file with the importTsv
> > >> > > > tool. I upload the file to HDFS and use the importTsv tool to
> > >> > > > import it into HBase.
> > >> > > >
> > >> > > > Right now it takes a little over an hour to complete, creating
> > >> > > > around 2 million entries in one table with a single family.
> > >> > > > If I use bulk uploading it goes down to 20 minutes.
> > >> > > >
> > >> > > > My Hadoop job has 21 map tasks, but they all seem to be taking a
> > >> > > > very long time to finish, and many tasks end up timing out.
> > >> > > >
> > >> > > > I am wondering what I have missed in my configuration. I have
> > >> > > > followed the different prerequisites in the documentation, but I
> > >> > > > am really unsure what is causing this slowdown. If I were to run
> > >> > > > the wordcount example on the same file it would take only minutes
> > >> > > > to complete, so I am guessing the issue lies in my HBase
> > >> > > > configuration.
> > >> > > >
> > >> > > > Any help or pointers would be appreciated.
> > >> > > >
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >>
> > >>
> > >> --
> > >> Thanks & Regards,
> > >> Anil Gupta
> > >>
> > >
> > >
> >
>
>
>
> --
> Thanks & Regards,
> Anil Gupta
>
