hadoop-common-issues mailing list archives

From "Vinay (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-9505) Specifying checksum type to NULL can cause write failures with AIOBE
Date Tue, 12 Nov 2013 14:06:25 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinay updated HADOOP-9505:
--------------------------

    Resolution: Duplicate
        Status: Resolved  (was: Patch Available)

> Specifying checksum type to NULL can cause write failures with AIOBE
> --------------------------------------------------------------------
>
>                 Key: HADOOP-9505
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9505
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.1.0-beta
>            Reporter: Uma Maheswara Rao G
>            Assignee: Vinay
>            Priority: Minor
>         Attachments: HADOOP-9505.patch
>
>
> I created a file with the checksum disabled option and I am seeing an ArrayIndexOutOfBoundsException:
> {code}
> out = fs.create(fileName, FsPermission.getDefault(), flags, fs.getConf()
> 	  .getInt("io.file.buffer.size", 4096), replFactor, fs
> 	  .getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
> {code}
> See the trace here:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 0
> 	at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
> 	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
> 	at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
> 	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
> 	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
> 	at java.io.DataOutputStream.write(DataOutputStream.java:90)
> 	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
> 	at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
> {noformat}
> FSOutputSummer#int2byte does not check the length of the bytes array. With the
> NULL checksum type the per-chunk checksum buffer is zero-length, so bytes[0]
> throws the AIOBE above. Do you think we should check the length and call it
> only when checksum bytes are present, since in the CRC NULL case there will
> not be any checksum bytes?
> {code}
> static byte[] int2byte(int integer, byte[] bytes) {
>     bytes[0] = (byte)((integer >>> 24) & 0xFF);
>     bytes[1] = (byte)((integer >>> 16) & 0xFF);
>     bytes[2] = (byte)((integer >>>  8) & 0xFF);
>     bytes[3] = (byte)((integer >>>  0) & 0xFF);
>     return bytes;
>   }
> {code}
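> A minimal sketch of such a length guard (hypothetical; this issue was resolved
> as a duplicate, so the committed fix may differ):
> {code}
> // Hypothetical guard: with ChecksumOpt.createDisabled() the checksum type
> // is NULL and the per-chunk checksum buffer has zero length, so skip the
> // conversion instead of writing into an empty array.
> static byte[] int2byte(int integer, byte[] bytes) {
>     if (bytes.length >= 4) {
>       bytes[0] = (byte)((integer >>> 24) & 0xFF);
>       bytes[1] = (byte)((integer >>> 16) & 0xFF);
>       bytes[2] = (byte)((integer >>>  8) & 0xFF);
>       bytes[3] = (byte)((integer >>>  0) & 0xFF);
>     }
>     return bytes;
>   }
> {code}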



--
This message was sent by Atlassian JIRA
(v6.1#6144)
