Subject: Re: Error loading SHA-1 keys with load bulk
From: Jean-Daniel Cryans
To: user@hbase.apache.org
Date: Thu, 1 May 2014 08:01:15 -0700

Are you using HFileOutputFormat.configureIncrementalLoad() to set up the partitioner and the reducers? That will take care of ordering your keys.

J-D

On Thu, May 1, 2014 at 5:38 AM, Guillermo Ortiz wrote:
> I have been looking at the code in HBase, but I don't really understand
> why this error happens. Why can't I put those keys into HBase?
>
> 2014-04-30 17:57 GMT+02:00 Guillermo Ortiz:
> > I'm using HBase with MapReduce to load a lot of data, so I decided to
> > do it with bulk load.
> >
> > I parse my keys with SHA-1, but when I try to load them, I get this
> > exception.
> >
> > java.io.IOException: Added a key not lexically larger than previous
> > key=\x00(6e9e59f36a7ec2ac54635b2d353e53e677839046\x01l\x00\x00\x01E\xB3>\xC9\xC7\x0E,
> > lastkey=\x00(b313a9f1f57c8a07c81dc3221c6151cf3637506a\x01l\x00\x00\x01E\xAE\x18k\x87\x0E
> >     at org.apache.hadoop.hbase.io.hfile.AbstractHFileWriter.checkKey(AbstractHFileWriter.java:207)
> >     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:324)
> >     at org.apache.hadoop.hbase.io.hfile.HFileWriterV2.append(HFileWriterV2.java:289)
> >     at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.append(StoreFile.java:1206)
> >     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.write(HFileOutputFormat.java:168)
> >     at org.apache.hadoop.hbase.mapreduce.HFileOutputFormat$1.write(HFileOutputFormat.java:124)
> >     at org.apache.hadoop.mapred.ReduceTask$NewTrackingRecordWriter.write(ReduceTask.java:551)
> >     at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:85)
> >
> > I work with HBase 0.94.6. I have been looking into whether I should
> > define a reducer, since I have defined none. I have read something about
> > KeyValueSortReducer, but I don't know whether I need something that
> > extends TableReducer, or whether I'm looking at this the wrong way.
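The failure above comes from HFile's requirement that keys be appended in strictly ascending lexicographic (unsigned byte) order, which is exactly what configureIncrementalLoad()'s total-order partitioner and sort guarantee. A minimal standalone sketch of that ordering rule (the class and method names here are illustrative, not HBase's own API):

```java
import java.util.Arrays;

// Sketch of the ordering check that HFileWriterV2/checkKey enforces:
// every appended key must be lexically larger than the previous one.
public class KeyOrderCheck {

    // Unsigned lexicographic byte comparison, as HBase compares row keys.
    static int compareBytes(byte[] a, byte[] b) {
        int len = Math.min(a.length, b.length);
        for (int i = 0; i < len; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    }

    // Rejects out-of-order keys, mimicking checkKey()'s IOException.
    static void append(byte[] lastKey, byte[] key) {
        if (lastKey != null && compareBytes(key, lastKey) <= 0) {
            throw new IllegalArgumentException(
                "Added a key not lexically larger than previous key="
                    + Arrays.toString(key));
        }
    }

    public static void main(String[] args) {
        // Truncated SHA-1 hex strings for brevity; note "6e..." sorts
        // before "b3...", just like the two keys in the stack trace.
        byte[] k1 = "b313a9f1".getBytes();
        byte[] k2 = "6e9e59f3".getBytes();
        append(null, k1);
        boolean threw = false;
        try {
            append(k1, k2); // out of order: same failure mode as above
        } catch (IllegalArgumentException e) {
            threw = true;
        }
        System.out.println(threw); // true
    }
}
```

This is why hashing keys with SHA-1 is not the problem by itself: the writer only cares that whatever keys arrive are sorted, which the partitioner and reducer set up by configureIncrementalLoad() ensure.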