hbase-user mailing list archives

From Anoop John <anoop.hb...@gmail.com>
Subject Re: java.lang.NegativeArraySizeException: -1 in hbase
Date Mon, 09 Sep 2013 17:59:28 GMT
That sounds correct. Can we mention it somewhere in our doc? Would that be
good?

-Anoop-

On Mon, Sep 9, 2013 at 11:24 PM, lars hofhansl <larsh@apache.org> wrote:

> The 0.94.5 change (presumably HBASE-3996) is only forward compatible. M/R
> is a bit special in that the jars are shipped with the job.
>
> Here's a comment from Todd Lipcon on that issue:
> "The jar on the JT doesn't matter. Split computation and interpretation
> happens only in the user code – i.e on the client machine and inside the
>  tasks themselves. So you don't need HBase installed on the JT at all.
> As for the TTs, it's possible to configure the TTs to put an hbase jar
> on the classpath, but I usually recommend against it for the exact
> reason you're mentioning - if the jars differ in version, and they're
> not 100% API compatible, you can get nasty  errors. The recommended
> deployment is to not put hbase on the TT classpath, and instead ship the
> HBase dependencies as part of the MR job, using the provided
> utility function in TableMapReduceUtil."
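>
> For reference, a minimal sketch of that recommended setup (the table,
> job, and mapper names here are placeholders, and it assumes the
> 0.94-era org.apache.hadoop.hbase.mapreduce API):
>
>   import org.apache.hadoop.conf.Configuration;
>   import org.apache.hadoop.hbase.HBaseConfiguration;
>   import org.apache.hadoop.hbase.client.Result;
>   import org.apache.hadoop.hbase.client.Scan;
>   import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
>   import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
>   import org.apache.hadoop.mapreduce.Job;
>
>   public class ShipHBaseWithJob {
>     public static void main(String[] args) throws Exception {
>       Configuration conf = HBaseConfiguration.create();
>       Job job = new Job(conf, "copy-mytable");
>       job.setJarByClass(MyMapper.class);  // MyMapper is hypothetical
>
>       Scan scan = new Scan();
>       scan.setCaching(500);        // bigger scanner batches for MR
>       scan.setCacheBlocks(false);  // don't churn the block cache
>
>       // initTableMapperJob wires up TableInputFormat and by default
>       // also calls TableMapReduceUtil.addDependencyJars(job), so the
>       // HBase jars the client was built against ship with the job
>       // and the TT classpath needs no HBase at all.
>       TableMapReduceUtil.initTableMapperJob(
>           "mytable", scan, MyMapper.class,
>           ImmutableBytesWritable.class, Result.class, job);
>
>       System.exit(job.waitForCompletion(true) ? 0 : 1);
>     }
>   }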
>
> -- Lars
>
>
> ----- Original Message -----
> From: Jean-Marc Spaggiari <jean-marc@spaggiari.org>
> To: user <user@hbase.apache.org>
> Cc:
> Sent: Monday, September 9, 2013 6:08 AM
> Subject: Re: java.lang.NegativeArraySizeException: -1 in hbase
>
> So, after some internal discussions with Anoop, here is a summary of the
> situation.
>
> An hbase-0.94.0 jar file was included in the MR job client jar. This MR
> client jar was also stored in the Master lib directory, and only on the
> master and the RS hosted on the same host, not on any of the other RS
> nodes.
>
> Removing this file from the client, recompiling HBase 0.94.12-SNAPSHOT and
> redeploying everything fixed the issue.
>
> What does this mean?
>
> I think there is something between HBase 0.94.0 and HBase 0.94.12 which is
> not compatible. It's not related to the TableSplit class; that class has
> been like that since 0.94.5. It's most probably a more recent modification
> which breaks the compatibility between HBase 0.94.0 and the latest HBase
> 0.94 branch.
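>
> To illustrate the failure mode (a toy sketch, not the actual HBase
> code): when the reader's idea of the field layout is out of step with
> the writer's, a payload byte such as 0xFF gets decoded as the next
> vint length, i.e. -1, which Bytes.readByteArray rejects with exactly
> this exception:
>
>   import java.io.ByteArrayInputStream;
>   import java.io.ByteArrayOutputStream;
>   import java.io.DataInputStream;
>   import java.io.DataOutputStream;
>   import java.io.IOException;
>   import org.apache.hadoop.hbase.util.Bytes;
>
>   public class MisalignedWritable {
>     public static void main(String[] args) throws IOException {
>       // Writer side: one length-prefixed field whose payload starts
>       // with a 0xFF byte.
>       ByteArrayOutputStream buf = new ByteArrayOutputStream();
>       DataOutputStream out = new DataOutputStream(buf);
>       Bytes.writeByteArray(out, new byte[] { (byte) 0xFF, 0x01 });
>
>       // Reader side: one byte out of step, so it consumes the real
>       // length prefix and then reads the 0xFF payload byte as a
>       // vint length of -1.
>       DataInputStream in = new DataInputStream(
>           new ByteArrayInputStream(buf.toByteArray()));
>       in.readByte();            // stream is now misaligned
>       Bytes.readByteArray(in);  // NegativeArraySizeException: -1
>     }
>   }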
>
> The MR job on my server was running for months without any issue with this
> 0.94.0 jar included, which means the compatibility has been broken
> recently, somewhere between 0.94.10 and 0.94.12 (I guess).
>
> Now, even if 0.94.12 is not compatible with HBase versions < 0.94.5, is
> this something we want to investigate further? Or are versions before
> 0.94.5 already too old, so that if there is some compatibility break we
> can live with it?
>
> JM
>
>
> 2013/9/8 Jean-Marc Spaggiari <jean-marc@spaggiari.org>
>
> > FYI,
> >
> > I just faced the exact same exception with version 0.94.12-SNAPSHOT...
> > All tasks failed with the same exception.
> >
> > $ bin/hbase hbck
> > Version: 0.94.12-SNAPSHOT
> > ....
> > 0 inconsistencies detected.
> > Status: OK
> >
> > I will update, rebuild and retry tomorrow morning...
> >
> > java.lang.NegativeArraySizeException: -1
> >     at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:148)
> >     at org.apache.hadoop.hbase.mapreduce.TableSplit.readFields(TableSplit.java:133)
> >     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
> >     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
> >     at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
> >     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:728)
> >     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> >     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >     at java.security.AccessController.doPrivileged(Native Method)
> >     at javax.security.auth.Subject.doAs(Subject.java:415)
> >     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >     at org.apache.hadoop.mapred.Child.main(Child.java:249)
> >
> >
> >
> > 2013/9/4 Jean-Marc Spaggiari <jean-marc@spaggiari.org>
> >
> >> That's interesting. Can you please tell us a bit more about the context?
> >> What kind of table are you using for your job? Is it an empty one?
> >> Anything special? Have you run HBCK?
> >>
> >> Also, can you please double check your HBase version? I looked at the
> >> code for 0.94.9 and it doesn't seem to be in sync with the stack trace
> >> you have provided. readFields calls readByteArray many times, so we need
> >> to figure out which one exactly failed.
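> >>
> >> For context, readFields in that class is roughly of this shape (a
> >> paraphrase from memory, not the exact 0.94 source): several
> >> length-prefixed fields read back to back, and which of these calls
> >> sits on TableSplit.java:133 depends on the exact 0.94.x revision.
> >>
> >>   public void readFields(DataInput in) throws IOException {
> >>     tableName      = Bytes.readByteArray(in);
> >>     startRow       = Bytes.readByteArray(in);
> >>     endRow         = Bytes.readByteArray(in);
> >>     regionLocation = Bytes.toString(Bytes.readByteArray(in));
> >>   }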
> >>
> >> Thanks,
> >>
> >> JM
> >>
> >>
> >> 2013/9/4 Job Thomas <jobt@suntecgroup.com>
> >>
> >>> I am using HBase 0.94.9
> >>>
> >>>
> >>>
> >>> Best Regards,
> >>> Job M Thomas
> >>>
> >>> ________________________________
> >>>
> >>> From: Job Thomas [mailto:jobt@suntecgroup.com]
> >>> Sent: Wed 9/4/2013 11:08 AM
> >>> To: user@hbase.apache.org
> >>> Subject: java.lang.NegativeArraySizeException: -1 in hbase
> >>>
> >>>
> >>>
> >>>
> >>> Hi All,
> >>>
> >>> I am getting the following error while running a simple HBase MapReduce
> >>> job that reads data from one table and writes it back to another table.
> >>>
> >>> 13/09/04 10:24:03 INFO mapred.JobClient: map 0% reduce 0%
> >>>
> >>> 13/09/04 10:24:22 INFO mapred.JobClient: Task Id :
> >>> attempt_201309031846_0023_m_000000_0, Status : FAILED
> >>>
> >>> java.lang.NegativeArraySizeException: -1
> >>>     at org.apache.hadoop.hbase.util.Bytes.readByteArray(Bytes.java:147)
> >>>     at org.apache.hadoop.hbase.mapreduce.TableSplit.readFields(TableSplit.java:133)
> >>>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:67)
> >>>     at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:40)
> >>>     at org.apache.hadoop.mapred.MapTask.getSplitDetails(MapTask.java:396)
> >>>     at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:728)
> >>>     at org.apache.hadoop.mapred.MapTask.run(MapTask.java:370)
> >>>     at org.apache.hadoop.mapred.Child$4.run(Child.java:255)
> >>>     at java.security.AccessController.doPrivileged(Native Method)
> >>>     at javax.security.auth.Subject.doAs(Subject.java:416)
> >>>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1121)
> >>>     at org.apache.hadoop.mapred.Child.main(Child.java:249)
> >>>
> >>>
> >>>
> >>>
> >>> Best Regards,
> >>> Job M Thomas
> >>> Suntec Business Solution
> >>>
> >>>
> >>>
> >>
> >
>
>
