hadoop-mapreduce-issues mailing list archives

From "Sarthak (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1487) io.DataInputBuffer.getLength() semantic wrong/confused
Date Sat, 25 Dec 2010 21:20:45 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12975074#action_12975074
] 

Sarthak commented on MAPREDUCE-1487:
------------------------------------

Will this issue be prioritized? What is the procedure for these kinds of requests? I do not
want to modify the source code and then hit the same issue when I upgrade to a newer version.


For now, I do not see a workaround in this case, but I will keep looking for one.

> io.DataInputBuffer.getLength() semantic wrong/confused
> ------------------------------------------------------
>
>                 Key: MAPREDUCE-1487
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1487
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>         Environment: linux
>            Reporter: Yang Yang
>
> I was trying Google Protocol Buffers as a value type on Hadoop.
> When I used it in a reducer, the parser always failed,
> while it worked fine with a plain input-stream reader or in a mapper.
> the reason is that the reducer interface in Task.java gave the parser a buffer larger
> than the actual encoded record, and the parser does not stop until it reaches the
> buffer end, so it parsed some junk bytes.
> the root cause is in hadoop.io.DataInputBuffer:
> in 0.20.1, DataInputBuffer.java, line 47:
>     public void reset(byte[] input, int start, int length) {
>       this.buf = input;
>       this.count = start+length;
>       this.mark = start;
>       this.pos = start;
>     }
>     public byte[] getData() { return buf; }
>     public int getPosition() { return pos; }
>     public int getLength() { return count; }
> we see that the above logic assumes that getLength() returns the total **capacity**
> (the end offset) of the buffer, not the actual content length, yet later code assumes
> the semantics that "length" is the actual content length, i.e. end - start:
>  /** Resets the data that the buffer reads. */
>   public void reset(byte[] input, int start, int length) {
>     buffer.reset(input, start, length);
>   }
> i.e. if you call reset(getData(), getPosition(), getLength()) on the same buffer again
> and again, the "length" grows without bound.
> this confusion in semantics shows up in many places, at least in IFile.java and
> Task.java, where it caused the original issue.
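The run-away length described above can be reproduced with a minimal standalone sketch. The class below only mimics the 0.20.1 reset() logic quoted earlier; the names are illustrative, not Hadoop's actual code:

```java
// Standalone mimic of the 0.20.1 DataInputBuffer reset() logic quoted above.
public class ResetDemo {
    static class Buf {
        byte[] buf;
        int pos, mark, count;
        void reset(byte[] input, int start, int length) {
            this.buf = input;
            this.count = start + length; // stores the END OFFSET, not the length
            this.mark = start;
            this.pos = start;
        }
        int getPosition() { return pos; }
        int getLength()   { return count; } // returns end offset, not content length
    }
    public static void main(String[] args) {
        Buf b = new Buf();
        byte[] data = new byte[100];
        b.reset(data, 10, 20);             // content is bytes [10, 30)
        System.out.println(b.getLength()); // prints 30, not 20
        // Feeding getPosition()/getLength() back in grows the end offset each time:
        b.reset(data, b.getPosition(), b.getLength());
        System.out.println(b.getLength()); // prints 40
        b.reset(data, b.getPosition(), b.getLength());
        System.out.println(b.getLength()); // prints 50
    }
}
```

Each round trip adds the previous end offset on top of the start position, which is the unbounded growth the report describes.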
> around line 980 of Task.java, we see
>    valueIn.reset(nextValueBytes.getData(), nextValueBytes.getPosition(), nextValueBytes.getLength())
> if the position above is nonzero, this actually sets a buffer that is too long,
> causing the reported issue.
> changing Task.java, as a hack, to
>       valueIn.reset(nextValueBytes.getData(), nextValueBytes.getPosition(),
>                     nextValueBytes.getLength() - nextValueBytes.getPosition());
> fixed the issue, but the semantics of DataInputBuffer should be fixed and streamlined
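For clarity, the hack above works because getLength() in 0.20.1 returns an end offset (start + length), so subtracting getPosition() recovers the true content length. A minimal arithmetic sketch, with no Hadoop dependency and made-up example values:

```java
// Arithmetic behind the Task.java workaround quoted above.
public class FixDemo {
    public static void main(String[] args) {
        // Suppose reset(data, 10, 20) was called: content occupies [10, 30).
        int start = 10, length = 20;
        int count = start + length;  // what 0.20.1 stores: the end offset, 30
        int position = start;        // 10
        // Buggy reuse: reset(data, position, count) would cover [10, 40) -- junk bytes.
        // Fixed reuse: pass count - position, recovering the real content length.
        System.out.println(count - position); // prints 20, the true content length
    }
}
```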

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

