hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1443) TestFileCorruption fails with ArrayIndexOutOfBoundsException
Date Sun, 03 Jun 2007 01:09:15 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1443?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500976 ]

dhruba borthakur commented on HADOOP-1443:
------------------------------------------

+1 code looks good.

Two minor comments:

1. There is a comment in the test saying "// create and write a file that contains three blocks of data". This might not be correct.

2. This patch checks for negative offsets and lengths. Is there a way to enhance the test so that it triggers negative offsets and lengths and verifies that they generate the expected exceptions? A rough sketch of what that could look like is below.
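To make the second comment concrete, here is a minimal sketch of such a test. It is not code from 1443.patch or EmptyFile.patch: the validateRange() helper is a hypothetical stand-in for whatever range check the patch adds on the server side, and the class name and messages are made up. The point is only the JUnit try/fail/catch pattern for verifying that negative offsets and negative lengths are rejected with an IOException.

import java.io.IOException;
import junit.framework.TestCase;

public class TestNegativeRange extends TestCase {

  // Hypothetical stand-in for the range check the patch presumably adds
  // around FSNamesystem.getBlockLocations(); not the actual Hadoop code.
  static void validateRange(long offset, long length) throws IOException {
    if (offset < 0) {
      throw new IOException("Negative offset is not supported: " + offset);
    }
    if (length < 0) {
      throw new IOException("Negative length is not supported: " + length);
    }
  }

  public void testNegativeOffset() throws Exception {
    try {
      validateRange(-1L, 10L);
      fail("Expected IOException for negative offset");
    } catch (IOException expected) {
      // negative offset was rejected as expected
    }
  }

  public void testNegativeLength() throws Exception {
    try {
      validateRange(0L, -10L);
      fail("Expected IOException for negative length");
    } catch (IOException expected) {
      // negative length was rejected as expected
    }
  }
}

An end-to-end version would presumably make the same assertions against a file opened through the DFS client on a MiniDFSCluster, so the bad arguments actually reach the namenode.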

> TestFileCorruption fails with ArrayIndexOutOfBoundsException
> ------------------------------------------------------------
>
>                 Key: HADOOP-1443
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1443
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Nigel Daley
>            Assignee: Konstantin Shvachko
>            Priority: Blocker
>             Fix For: 0.13.0
>
>         Attachments: 1443.patch, EmptyFile.patch
>
>
> org.apache.hadoop.dfs.TestFileCorruption.testFileCorruption failed once on Windows with this exception:
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: java.lang.ArrayIndexOutOfBoundsException: 1
> 	at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:472)
> 	at org.apache.hadoop.dfs.FSNamesystem.getBlockLocations(FSNamesystem.java:436)
> 	at org.apache.hadoop.dfs.NameNode.getBlockLocations(NameNode.java:272)
> 	at org.apache.hadoop.dfs.NameNode.open(NameNode.java:259)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:341)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:567)
> 	at org.apache.hadoop.ipc.Client.call(Client.java:471)
> 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:165)
> 	at org.apache.hadoop.dfs.$Proxy0.open(Unknown Source)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
> 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
> 	at org.apache.hadoop.dfs.$Proxy0.open(Unknown Source)
> 	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:590)
> 	at org.apache.hadoop.dfs.DFSClient$DFSInputStream.<init>(DFSClient.java:582)
> 	at org.apache.hadoop.dfs.DFSClient.open(DFSClient.java:273)
> 	at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.open(DistributedFileSystem.java:136)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.<init>(ChecksumFileSystem.java:114)
> 	at org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:340)
> 	at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:234)
> 	at org.apache.hadoop.dfs.DFSTestUtil.checkFiles(DFSTestUtil.java:132)
> 	at org.apache.hadoop.dfs.TestFileCorruption.testFileCorruption(TestFileCorruption.java:66)
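For anyone reading the trace: the ArrayIndexOutOfBoundsException: 1 comes out of FSNamesystem.getBlockLocations, which computes block-array indexes from the requested offset and length, so arguments that don't match the file's actual blocks (an empty file, which EmptyFile.patch suggests is involved, or a negative offset or length, which the patch now checks) can produce an index past the end of the array. The fragment below is only a hand-written illustration of that failure mode and of the kind of guard that turns it into a clean IOException; it is not the actual FSNamesystem code or either attached patch.

import java.io.IOException;

public class BlockIndexSketch {
  static final long BLOCK_SIZE = 64L * 1024 * 1024; // assumed block size

  // Hypothetical stand-in for the index computation in getBlockLocations().
  static long pickBlock(long[] blockIds, long offset) throws IOException {
    // Without these guards, a bad offset yields an invalid index and an
    // ArrayIndexOutOfBoundsException, which the RPC layer wraps in the
    // RemoteException shown in the stack trace above.
    if (offset < 0) {
      throw new IOException("Negative offset is not supported: " + offset);
    }
    if (blockIds.length == 0) {
      // Empty file: the real fix may instead return an empty block list.
      throw new IOException("File has no blocks");
    }
    int curBlk = (int) (offset / BLOCK_SIZE);
    if (curBlk >= blockIds.length) {
      throw new IOException("Offset " + offset + " is past the end of file");
    }
    return blockIds[curBlk];
  }

  public static void main(String[] args) throws IOException {
    long[] threeBlocks = {101L, 102L, 103L};
    System.out.println(pickBlock(threeBlocks, 0L));             // block 101
    System.out.println(pickBlock(threeBlocks, 2 * BLOCK_SIZE)); // block 103
    try {
      pickBlock(threeBlocks, -1L);
    } catch (IOException e) {
      System.out.println("rejected cleanly: " + e.getMessage());
    }
  }
}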

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

