Date: Thu, 25 Apr 2013 13:56:17 +0000 (UTC)
From: "Uma Maheswara Rao G (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] [Created] (HADOOP-9505) Specifying checksum type to NULL can cause write failures with AIOBE

Uma Maheswara Rao G created HADOOP-9505:
-------------------------------------------

             Summary: Specifying checksum type to NULL can cause write failures with AIOBE
                 Key: HADOOP-9505
                 URL: https://issues.apache.org/jira/browse/HADOOP-9505
             Project: Hadoop Common
          Issue Type: Bug
          Components: fs
    Affects Versions: 2.0.5-beta
            Reporter: Uma Maheswara Rao G
            Priority: Minor

I created a file with the checksum-disabled option and am seeing an ArrayIndexOutOfBoundsException:
{code}
out = fs.create(fileName, FsPermission.getDefault(), flags,
    fs.getConf().getInt("io.file.buffer.size", 4096), replFactor,
    fs.getDefaultBlockSize(fileName), null, ChecksumOpt.createDisabled());
{code}

See the trace here:

{noformat}
java.lang.ArrayIndexOutOfBoundsException: 0
	at org.apache.hadoop.fs.FSOutputSummer.int2byte(FSOutputSummer.java:178)
	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:162)
	at org.apache.hadoop.fs.FSOutputSummer.write1(FSOutputSummer.java:106)
	at org.apache.hadoop.fs.FSOutputSummer.write(FSOutputSummer.java:92)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:54)
	at java.io.DataOutputStream.write(DataOutputStream.java:90)
	at org.apache.hadoop.hdfs.DFSTestUtil.createFile(DFSTestUtil.java:261)
	at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:174)
{noformat}

Have I missed any other configs to set? FSOutputSummer#int2byte does not check the length of the bytes array, so should we check the length before calling it in the CRC NULL case, since there will not be any checksum bytes?

{code}
static byte[] int2byte(int integer, byte[] bytes) {
  bytes[0] = (byte)((integer >>> 24) & 0xFF);
  bytes[1] = (byte)((integer >>> 16) & 0xFF);
  bytes[2] = (byte)((integer >>>  8) & 0xFF);
  bytes[3] = (byte)((integer >>>  0) & 0xFF);
  return bytes;
}
{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
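One possible shape of the guard suggested above (a sketch only, not the committed fix, and the class/method names here are standalone stand-ins for the FSOutputSummer internals): skip the serialization entirely when the checksum array is empty, which is the situation when the checksum type is NULL and the per-chunk checksum size is 0.

{code}
// Sketch of a defensive int2byte, assuming the caller may hand in a
// zero-length array when the checksum type is NULL (checksum size 0).
// Mirrors FSOutputSummer.int2byte but returns early instead of
// indexing into an empty array.
public class Int2ByteSketch {
    static byte[] int2byte(int integer, byte[] bytes) {
        if (bytes.length >= 4) {  // guard: nothing to write for NULL checksums
            bytes[0] = (byte)((integer >>> 24) & 0xFF);
            bytes[1] = (byte)((integer >>> 16) & 0xFF);
            bytes[2] = (byte)((integer >>>  8) & 0xFF);
            bytes[3] = (byte)((integer >>>  0) & 0xFF);
        }
        return bytes;
    }

    public static void main(String[] args) {
        // Empty array: previously this path threw AIOBE: 0
        byte[] empty = int2byte(0xCAFEBABE, new byte[0]);
        System.out.println(empty.length);             // 0

        byte[] full = int2byte(0x01020304, new byte[4]);
        System.out.println(full[0] + " " + full[3]);  // 1 4
    }
}
{code}

An alternative with the same effect would be for writeChecksumChunk to check the checksum size before ever calling int2byte; either way the NULL-checksum path stops touching bytes[0].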