hadoop-common-dev mailing list archives

From "Hairong Kuang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1123) LocalFileSystem gets a NullPointerException when tries to recover from ChecksumError
Date Thu, 15 Mar 2007 20:56:09 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12481312

Hairong Kuang commented on HADOOP-1123:

When a checksummed file system tries to recover from a ChecksumError, it first reports the
checksum error and then tries to read from a different replica.

For the local file system, reporting the checksum error closes the input stream. The read
retry then does a seek on the closed input stream, which causes the NPE.

One solution to this problem is not to retry the read for the local file system. We could
have reportCheckError return a flag indicating whether or not to retry.
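A minimal sketch of the proposed control flow: the report method returns whether a retry makes sense, and the read loop only seeks and retries when it does, otherwise it surfaces the ChecksumException instead of touching the closed stream. All class and method names here (LocalChecksumReader, reportChecksumError, readOnce) are hypothetical illustrations, not Hadoop's actual API; the simulated read always fails so the error path is exercised.

```java
import java.io.IOException;

public class RetryFlagSketch {
    // Stand-in for org.apache.hadoop.fs.ChecksumException, kept local
    // so the sketch is self-contained.
    static class ChecksumException extends IOException {
        ChecksumException(String msg) { super(msg); }
    }

    static class LocalChecksumReader {
        private boolean open = true;

        // Proposed change: reporting returns whether a retry makes sense.
        // The local file system closes its stream and has no other
        // replica, so it returns false.
        boolean reportChecksumError() {
            open = false;
            return false;
        }

        // Simulated low-level read that always hits a checksum error.
        private int readOnce(byte[] buf) throws IOException {
            if (!open) {
                // This is where the original code failed: operating on a
                // closed stream. Guarding on the retry flag avoids it.
                throw new NullPointerException("seek on closed stream");
            }
            throw new ChecksumException("bad checksum");
        }

        int read(byte[] buf) throws IOException {
            while (true) {
                try {
                    return readOnce(buf);
                } catch (ChecksumException e) {
                    if (!reportChecksumError()) {
                        // No retry possible: surface the checksum error
                        // rather than seeking on the closed stream.
                        throw e;
                    }
                    // Otherwise: seek back and retry from another replica.
                }
            }
        }
    }

    public static void main(String[] args) {
        LocalChecksumReader r = new LocalChecksumReader();
        try {
            r.read(new byte[16]);
        } catch (ChecksumException e) {
            System.out.println("ChecksumException surfaced, no NPE");
        } catch (IOException e) {
            System.out.println("unexpected: " + e);
        }
    }
}
```

With this shape, DFS-backed streams can keep returning true and retrying, while the local file system fails fast with the original ChecksumException.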

> LocalFileSystem gets a NullPointerException when tries to recover from ChecksumError
> ------------------------------------------------------------------------------------
>                 Key: HADOOP-1123
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1123
>             Project: Hadoop
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.12.0
>            Reporter: Hairong Kuang
>         Assigned To: Hairong Kuang
>             Fix For: 0.13.0
> NullPointerException occurs when running a large sort
> java.lang.NullPointerException
> 	at org.apache.hadoop.fs.FSDataInputStream$Buffer.seek(FSDataInputStream.java:74)
> 	at org.apache.hadoop.fs.FSDataInputStream.seek(FSDataInputStream.java:121)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.readBuffer(ChecksumFileSystem.java:221)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSInputChecker.read(ChecksumFileSystem.java:167)
> 	at org.apache.hadoop.fs.FSDataInputStream$PositionCache.read(FSDataInputStream.java:41)
> 	at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
> 	at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
> 	at org.apache.hadoop.fs.FSDataInputStream$Buffer.read(FSDataInputStream.java:93)
> 	at java.io.DataInputStream.readInt(DataInputStream.java:370)
> 	at org.apache.hadoop.io.SequenceFile$Reader.nextRawKey(SequenceFile.java:1616)
> 	at org.apache.hadoop.io.SequenceFile$Sorter$SegmentDescriptor.nextRawKey(SequenceFile.java:2567)
> 	at org.apache.hadoop.io.SequenceFile$Sorter$MergeQueue.next(SequenceFile.java:2353)
> 	at org.apache.hadoop.mapred.ReduceTask$ValuesIterator.getNext(ReduceTask.java:180)
> 	at org.apache.hadoop.mapred.ReduceTask$ValuesIterator.next(ReduceTask.java:149)
> 	at org.apache.hadoop.mapred.lib.IdentityReducer.reduce(IdentityReducer.java:41)
> 	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:313)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1445)

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
