hadoop-common-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-9114) After defining dfs.checksum.type as NULL, writing a file and calling hflush will throw java.lang.ArrayIndexOutOfBoundsException
Date Tue, 12 Nov 2013 14:23:22 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13820121#comment-13820121 ]

Uma Maheswara Rao G commented on HADOOP-9114:
---------------------------------------------

Thanks Sathish and Vinay for working on this.
{code}
conf.set(DFSConfigKeys.DFS_CHECKSUM_TYPE_KEY, "NULL");
{code}

The patch seems to have an incorrect format. Could you please format the code correctly?
Once the formatting is addressed, I will commit the patch, Sathish.
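
For reference, a minimal sketch of the reported scenario (assuming a MiniDFSCluster test setup; the class name, path, and data below are illustrative, not taken from the attached patch):
{code}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DFSConfigKeys;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Hypothetical repro sketch, not part of the attached patch.
public class TestNullChecksumHflush {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // The configuration under test: checksum type NULL.
    conf.set(DFSConfigKeys.DFS_CHECKSUM_TYPE_KEY, "NULL");
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).numDataNodes(1).build();
    try {
      FileSystem fs = cluster.getFileSystem();
      // Illustrative path and data; any create + write + hflush exercises the issue.
      FSDataOutputStream out = fs.create(new Path("/testNullChecksum"));
      out.write("some data".getBytes());
      // Reported to throw java.lang.ArrayIndexOutOfBoundsException before the fix.
      out.hflush();
      out.close();
    } finally {
      cluster.shutdown();
    }
  }
}
{code}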

> After defining dfs.checksum.type as NULL, writing a file and calling hflush will throw java.lang.ArrayIndexOutOfBoundsException
> -------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-9114
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9114
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 2.0.1-alpha
>            Reporter: liuyang
>            Priority: Minor
>         Attachments: FSOutputSummer.java.patch, HADOOP-9114-001.patch
>
>
> While testing the dfs.checksum.type configuration parameter, whose value can be defined
> as NULL, CRC32C, or CRC32: writing works when the value is CRC32C or CRC32, but the client
> throws java.lang.ArrayIndexOutOfBoundsException when the value is configured as NULL.



--
This message was sent by Atlassian JIRA
(v6.1#6144)
