hadoop-common-dev mailing list archives

From "Konstantin Shvachko (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1443) TestFileCorruption fails with ArrayIndexOutOfBoundsException
Date Sat, 02 Jun 2007 06:04:16 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Konstantin Shvachko updated HADOOP-1443:
----------------------------------------

    Attachment: EmptyFile.patch

The bug is a corner case. An empty file in HDFS is represented by one block of size 0, which is replicated on the required number of nodes; each replica is in turn stored as an empty file on its datanode. This case was not handled correctly.
I fixed the bug and added test cases that verify two things (sketched below):
- opening and reading an empty file;
- reading beyond the end of the file.
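
A minimal JUnit sketch of those two cases, assuming a MiniDFSCluster-based setup like the one used by TestFileCorruption in the 0.13-era test suite; the class name TestEmptyFileRead and the paths /test/empty and /test/small are illustrative and are not taken from the attached EmptyFile.patch:

    // Illustrative sketch only -- not the attached EmptyFile.patch.
    import java.io.IOException;

    import junit.framework.TestCase;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.dfs.MiniDFSCluster;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class TestEmptyFileRead extends TestCase {

      public void testEmptyFileAndReadPastEof() throws IOException {
        Configuration conf = new Configuration();
        // Three datanodes, freshly formatted namenode, default racks.
        MiniDFSCluster cluster = new MiniDFSCluster(conf, 3, true, null);
        try {
          FileSystem fs = cluster.getFileSystem();

          // Case 1: empty file -- one block of size 0, replicated as empty files.
          Path empty = new Path("/test/empty");
          fs.create(empty).close();
          FSDataInputStream in = fs.open(empty);   // open must not throw
          assertEquals(-1, in.read());             // immediate EOF
          in.close();

          // Case 2: reading beyond the end of a non-empty file.
          Path small = new Path("/test/small");
          FSDataOutputStream out = fs.create(small);
          out.write(new byte[] {1, 2, 3});
          out.close();

          in = fs.open(small);
          assertEquals(1, in.read());
          assertEquals(2, in.read());
          assertEquals(3, in.read());
          assertEquals(-1, in.read());             // past the end: EOF, no exception
          in.close();
        } finally {
          cluster.shutdown();
        }
      }
    }

In both cases the expectation is a normal EOF (-1) from the read, rather than the ArrayIndexOutOfBoundsException surfacing from FSNamesystem.getBlockLocations shown in the stack trace below.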


> TestFileCorruption fails with ArrayIndexOutOfBoundsException
> ------------------------------------------------------------
>
>                 Key: HADOOP-1443
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1443
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Nigel Daley
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.13.0
>
>         Attachments: 1443.patch, EmptyFile.patch
>
>
> org.apache.hadoop.dfs.TestFileCorruption.testFileCorruption failed once on Windows with this exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 1
> 	at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:472)
> 	at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:436)
> 	at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:272)
> 	at org.apache.hadoop.dfs.NameNode.open(NameNode.java:259)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:341)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:567)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:471)
> 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:165)
> 	at org.apache.hadoop.dfs.$Proxy0.open(Unknown Source)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> 	at org.apache.hadoop.dfs.$Proxy0.open(Unknown Source)
> 	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:590)
> 	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.<init>(DFSClient.java:582)
> 	at org.apache.hadoop.dfs.DFSClient.open(DFSClient.java:273)
> 	at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.open(DistributedFileSystem.java:136)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.<init>(ChecksumFileSystem.java:114)
> 	at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:340)
> 	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:234)
> 	at org.apache.hadoop.dfs.DFSTestUtil.checkFiles(DFSTestUtil.java:132)
> 	at org.apache.hadoop.dfs.TestFileCorruption.testFileCorruption(TestFileCorruption.java:66)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

