hadoop-common-dev mailing list archives

From "Hung-chih Yang (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-7651) Hadoop Record compiler generates Java files with erroneous byte-array lengths for fields trailing a 'ustring' field
Date Sat, 17 Sep 2011 09:43:08 GMT
Hadoop Record compiler generates Java files with erroneous byte-array lengths for fields trailing a 'ustring' field
-------------------------------------------------------------------------------------------------------------------

                 Key: HADOOP-7651
                 URL: https://issues.apache.org/jira/browse/HADOOP-7651
             Project: Hadoop Common
          Issue Type: Bug
          Components: record
    Affects Versions: 0.21.0, 0.20.203.0
            Reporter: Hung-chih Yang


The Hadoop Record compiler produces Java files from a DDL file. If the DDL defines a class that
contains a 'ustring' field, the generated 'compareRaw()' function for that record miscomputes
the number of bytes remaining in the two buffers after it has consumed the buffer segments for
the 'ustring' field.

Below is the offending line in a generated 'compareRaw()' function for a record class with a
'ustring' field:
          s1+=i1; s2+=i2; l1-=i1; l1-=i2;
The line should be corrected by changing the last 'l1' to 'l2':
          s1+=i1; s2+=i2; l1-=i1; l2-=i2;

To fix this bug, correct the 'genCompareBytes()' function in 'JString.java' (package
'org.apache.hadoop.record.compiler') by changing the first line below to the second; the two
lines differ by only a single character:

      cb.append("s1+=i1; s2+=i2; l1-=i1; l1-=i2;\n");

      cb.append("s1+=i1; s2+=i2; l1-=i1; l2-=i2;\n");

This bug is serious, as it will always crash when deserializing a record with a definition as
simple as the one below:
class PairStringDouble {
  ustring first;
  double  second;
}
Deserializing a record of this class throws an exception because, due to the erroneous length
computation for the remaining buffer, the 'second' field does not appear to have the 8 bytes
required for a double value.
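
To make the arithmetic concrete, the standalone snippet below traces the remaining-length
bookkeeping for a PairStringDouble record whose 'first' field is the five-byte string "hello",
assuming the usual vint-prefixed string layout (1 length byte + 5 string bytes + 8 double bytes
= 14 bytes per record). It uses no Hadoop classes and only mirrors the decrements performed by
the generated code:

    // standalone trace of the remaining-length bookkeeping (no Hadoop classes);
    // assumes first = "hello", so each serialized record is
    // 1 vint length byte + 5 string bytes + 8 double bytes = 14 bytes
    public class LengthBookkeepingTrace {
      public static void main(String[] args) {
        int l1 = 14, l2 = 14;     // bytes remaining in each buffer
        int z = 1;                // size of the vint length prefix
        int i1 = 5, i2 = 5;       // string lengths read from the two buffers

        l1 -= z; l2 -= z;         // length prefixes consumed: l1 = 13, l2 = 13

        l1 -= i1; l1 -= i2;       // buggy generated line: l1 = 3, l2 still 13
        // with the fix (l1 -= i1; l2 -= i2;) both would be 8 here

        // the next field is a double, which needs 8 bytes in *both* buffers
        System.out.println("l1 = " + l1 + ", l2 = " + l2);                        // l1 = 3, l2 = 13
        System.out.println("8 bytes left for 'second'? " + (l1 >= 8 && l2 >= 8)); // false
      }
    }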

Both Hadoop 0.20 and 0.21 have this bug.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
