Date: Mon, 12 Aug 2013 13:00:49 +0000 (UTC)
From: "Vinay (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] [Updated] (HADOOP-9505) Specifying checksum type to NULL can cause write failures with AIOBE
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

     [ https://issues.apache.org/jira/browse/HADOOP-9505?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vinay updated HADOOP-9505:
--------------------------
    Assignee: Vinay
      Status: Patch Available  (was: Open)

> Specifying checksum type to NULL can cause write failures with AIOBE
> --------------------------------------------------------------------
>
>                 Key: HADOOP-9505
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9505
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 2.1.0-beta
>            Reporter: Uma Maheswara Rao G
>            Assignee: Vinay
>            Priority: Minor
>         Attachments: HADOOP-9505.patch
>
>
> I created a file with the checksum-disabled option and got an ArrayIndexOutOfBoundsException:
> {code}
> out = fs.create(fileName, FsPermission.getDefault(), flags, fs.getConf()
>     .getInt("io.file.buffer.size", 4096), replFactor, fs
>     .getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
> {code}
> Stack trace:
> {noformat}
> java.lang.ArrayIndexOutOfBoundsException: 0
>         at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
>         at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
>         at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
>         at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
>         at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
> {noformat}
> FSOutputSummer#int2byte does not check the length of the bytes array. Should we check the length first, and call int2byte only when it is non-zero? In the CRC NULL case there are no checksum bytes at all.
> {code}
> static byte[] int2byte(int integer, byte[] bytes) {
>   bytes[0] = (byte)((integer >>> 24) & 0xFF);
>   bytes[1] = (byte)((integer >>> 16) & 0xFF);
>   bytes[2] = (byte)((integer >>> 8) & 0xFF);
>   bytes[3] = (byte)((integer >>> 0) & 0xFF);
>   return bytes;
> }
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
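
[Editor's note: the guard the reporter is asking about can be sketched as below. This is a hypothetical illustration only, not the contents of HADOOP-9505.patch; the class name `ChecksumGuardSketch` and the helper `writeChecksum` are invented for the example. The idea is that the caller checks the checksum buffer's length and skips int2byte entirely when the NULL checksum type yields a zero-length buffer.]

```java
// Hypothetical sketch of the proposed length check (not the actual patch).
public class ChecksumGuardSketch {

    // Same shape as FSOutputSummer.int2byte: writes the 4 big-endian bytes
    // of 'integer' into 'bytes'; implicitly assumes bytes.length >= 4,
    // which is what throws AIOBE when the buffer is empty.
    static byte[] int2byte(int integer, byte[] bytes) {
        bytes[0] = (byte) ((integer >>> 24) & 0xFF);
        bytes[1] = (byte) ((integer >>> 16) & 0xFF);
        bytes[2] = (byte) ((integer >>> 8) & 0xFF);
        bytes[3] = (byte) ((integer >>> 0) & 0xFF);
        return bytes;
    }

    // Invented call-site wrapper: serialize the checksum only when the
    // buffer actually has room for it. With ChecksumOpt.createDisabled()
    // the checksum buffer is zero-length, so int2byte is skipped.
    static byte[] writeChecksum(int crc, byte[] checksumBuf) {
        if (checksumBuf.length >= 4) {   // no-op for the CRC NULL case
            int2byte(crc, checksumBuf);
        }
        return checksumBuf;
    }

    public static void main(String[] args) {
        // Normal case: the 4-byte buffer receives the big-endian CRC.
        byte[] buf = writeChecksum(0x01020304, new byte[4]);
        System.out.println(buf[0] + "," + buf[1] + "," + buf[2] + "," + buf[3]);

        // NULL-checksum case: empty buffer, no exception is thrown.
        byte[] empty = writeChecksum(0x01020304, new byte[0]);
        System.out.println("len=" + empty.length);
    }
}
```

With this kind of guard, the test in the description (DFSTestUtil.createFile over a checksum-disabled stream) would no longer hit the ArrayIndexOutOfBoundsException at FSOutputSummer.java:178.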