hbase-user mailing list archives

From "Murali Krishna. P" <muralikpb...@yahoo.com>
Subject Re: Issue with bulk loader tool
Date Thu, 05 Nov 2009 13:04:20 GMT
Hi Stack,
Sorry, could not look into this last week...

I hit the problem with the HTable interface as well: some records I cannot
retrieve via HTable either.
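
(For reference, this is roughly the kind of check I ran -- a minimal sketch assuming
the 0.20 client API; the table name and row key are just placeholders from the
example below:)

    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class GetCheck {
        public static void main(String[] args) throws Exception {
            // Point a Get at a row that the scan shows but the shell misses.
            HTable table = new HTable(new HBaseConfiguration(), "test1");
            Get get = new Get(Bytes.toBytes("000011d1bc8cd6fe"));
            Result result = table.get(get);
            System.out.println(result.isEmpty() ? "0 rows" : "found: " + result);
        }
    }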
I lost the old table, but reproduced the problem with a different table.

I cannot send the region since it is very large; I will try to give as much info as possible
here :)

There are 5 regions in total in that table:
Name                                  Encoded Name  Start Key         End Key
test1,,1257414794600                  106817540                       fffe9c7f87c8332a
test1,fffe9c7f87c8332a,1257414794616  1346846599    fffe9c7f87c8332a  fffebe279c0ac4d2
test1,fffebe279c0ac4d2,1257414794628  1835851728    fffebe279c0ac4d2  fffec418284d6fbc
test1,fffec418284d6fbc,1257414794637  1078205908    fffec418284d6fbc  fffef7a12ea22498
test1,fffef7a12ea22498,1257414794647  1515378663    fffef7a12ea22498
I am looking for a key, say 000011d1bc8cd6fe. This should be in the first region, right?
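
(Sanity check: region membership is just a lexicographic comparison of the row
key against the region boundaries, with the empty start key sorting before
everything. A quick sketch using the stock Bytes utility:)

    import org.apache.hadoop.hbase.util.Bytes;

    public class WhichRegion {
        public static void main(String[] args) {
            byte[] row = Bytes.toBytes("000011d1bc8cd6fe");
            byte[] firstRegionEnd = Bytes.toBytes("fffe9c7f87c8332a");
            // A row belongs to the region with startKey <= row < endKey;
            // the first region's start key is empty, so only its end key matters.
            System.out.println(Bytes.compareTo(row, firstRegionEnd) < 0);  // prints true
        }
    }

So yes, it should be in the first region.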

Using the HFile tool:

./bin/hbase org.apache.hadoop.hbase.io.hfile.HFile -k -f /hbase/test1/106817540/image/3828859735461759684 -v -m -p | grep 000011d1bc8cd6fe

The first region doesn't have it. Not sure what happened to that record.

For a working key, it prints the record properly, as below:
K: \x00\x100003bdd08ca88ee2\x05imagevalue\x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF\x04 V: \xFF...
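
(On the timestamp question Stack raises below: in that dump the eight bytes
before the trailing type byte are the timestamp, and \x7F\xFF\xFF\xFF\xFF\xFF\xFF\xFF
decodes to Long.MAX_VALUE, i.e. HConstants.LATEST_TIMESTAMP. A minimal sketch
to decode it from a raw key, assuming the standard KeyValue key layout -- the
class name is made up:)

    import org.apache.hadoop.hbase.util.Bytes;

    public class KeyTimestamp {
        // Key layout: 2-byte row length, row, 1-byte family length,
        // family, qualifier, 8-byte timestamp, 1-byte type.
        public static long timestampOf(byte[] key) {
            // The timestamp is the 8 bytes just before the trailing type byte.
            return Bytes.toLong(key, key.length - 9);
        }
    }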

Please let me know if you need more information.

 Thanks,
Murali Krishna

________________________________
From: stack <stack@duboce.net>
To: hbase-user@hadoop.apache.org
Sent: Mon, 2 November, 2009 11:05:43 PM
Subject: Re: Issue with bulk loader tool

Murali:

Any developments worth mentioning?

St.Ack


On Fri, Oct 30, 2009 at 10:14 AM, stack <stack@duboce.net> wrote:

> That is interesting.  It'd almost point to a shell issue.  Enable DEBUG so
> you can see what the client is doing, then rerun the shell.  Is it at least
> loading the right region?  (Do the region's start and end keys span the
> asked-for key?)  I took a look at your attached .META. scan.  All looks
> good there.  The region specifications look right.  If you want to bundle
> up the region that is failing -- the one that the failing key comes out of
> -- I can take a look here.  You could also try playing with the HFile tool:
> ./bin/hbase org.apache.hadoop.hbase.io.hfile.HFile.  Run it as-is and it'll
> output its usage.  You should be able to get it to dump the content of the
> region (you need to supply flags like -v to the HFile tool to see actual
> keys; else it just runs its check silently).  Check for your key.  Check
> things like the timestamp on it.  Maybe it's 100 years ahead of now or
> something?
>
> Yours,
> St.Ack
>
>
> On Fri, Oct 30, 2009 at 9:01 AM, Murali Krishna. P <muralikpbhat@yahoo.com
> > wrote:
>
>> Attached ".META."
>>
>> Interesting: I was able to get the row from HTable via Java code, but from
>> the shell I am still getting the following:
>>
>> hbase(main):004:0> get 'TestTable2', 'ffffef95bcbf2638'
>> 0 row(s) in 1.2250 seconds
>>
>> Thanks,
>> Murali Krishna
>>
>>
>> ------------------------------
>> *From:* stack <stack@duboce.net>
>> *To:* hbase-user@hadoop.apache.org
>> *Sent:* Fri, 30 October, 2009 8:39:46 PM
>> *Subject:* Re: Issue with bulk loader tool
>>
>> Can you send a listing of ".META."?
>>
>> hbase> scan ".META."
>>
>> Also, can you bring a region down from hdfs, tar and gzip it, and then put
>> it someplace I can pull so I can take a look?
>>
>> Thanks,
>> St.Ack
>>
>>
>> On Fri, Oct 30, 2009 at 3:31 AM, Murali Krishna. P
>> <muralikpbhat@yahoo.com> wrote:
>>
>> > Hi guys,
>> >  I created a table according to HBASE-48: a MapReduce job creates the
>> > HFiles, and the loadtable.rb script then creates the table.  Everything
>> > worked fine and I was able to scan the table.  But when I do a get for a
>> > key displayed in the scan output, it does not retrieve the row; the shell
>> > says 0 rows.
>> >
>> >  I tried using one reducer to ensure total ordering, but still the same
>> > issue.
>> >
>> >
>> > My mapper is like:
>> >
>> >   context.write(
>> >       new ImmutableBytesWritable(((Text) key).toString().getBytes()),
>> >       new KeyValue(((Text) key).toString().getBytes(), "family1".getBytes(),
>> >           "column1".getBytes(), getValueBytes()));
>> >
>> >
>> > Please help me investigate this.
>> >
>> > Thanks,
>> > Murali Krishna
>> >
>>
>
>
