hadoop-hdfs-dev mailing list archives

From "Yongjun Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-7070) TestWebHdfsFileSystemContract fails occasionally
Date Tue, 14 Oct 2014 23:46:33 GMT

     [ https://issues.apache.org/jira/browse/HDFS-7070?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yongjun Zhang resolved HDFS-7070.
---------------------------------
    Resolution: Cannot Reproduce

Haven't seen the reported tests fail for 3 weeks. The issue may have been addressed by some other fix. Closing it for now; please feel free to reopen if it happens again.


> TestWebHdfsFileSystemContract fails occasionally
> ------------------------------------------------
>
>                 Key: HDFS-7070
>                 URL: https://issues.apache.org/jira/browse/HDFS-7070
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: webhdfs
>    Affects Versions: 2.6.0
>            Reporter: Yongjun Zhang
>            Assignee: Yongjun Zhang
>
> org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testResponseCode
> and org.apache.hadoop.hdfs.web.TestWebHdfsFileSystemContract.testRenameDirToSelf
> failed recently.
> Need to determine whether this was introduced by a recent code change causing a file descriptor leak, or whether it is a similar issue to the one reported in HDFS-6694.
> E.g. https://builds.apache.org/job/PreCommit-HDFS-Build/8026/testReport/org.apache.hadoop.hdfs.web/TestWebHdfsFileSystemContract/testResponseCode/.
> {code}
> 2014-09-15 12:52:18,866 INFO  datanode.DataNode (DataXceiver.java:writeBlock(749)) - opWriteBlock BP-23833599-67.195.81.147-1410785517350:blk_1073741827_1461 received exception java.io.IOException: Cannot run program "stat": java.io.IOException: error=24, Too many open files
> 2014-09-15 12:52:18,867 ERROR datanode.DataNode (DataXceiver.java:run(243)) - 127.0.0.1:47221:DataXceiver error processing WRITE_BLOCK operation  src: /127.0.0.1:38112 dst: /127.0.0.1:47221
> java.io.IOException: Cannot run program "stat": java.io.IOException: error=24, Too many open files
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:470)
> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:485)
> 	at org.apache.hadoop.util.Shell.run(Shell.java:455)
> 	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:702)
> 	at org.apache.hadoop.fs.HardLink.getLinkCount(HardLink.java:495)
> 	at org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.unlinkBlock(ReplicaInfo.java:288)
> 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:702)
> 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:680)
> 	at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.append(FsDatasetImpl.java:101)
> 	at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:193)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:604)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:126)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:72)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:225)
> 	at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: java.io.IOException: error=24, Too many open files
> 	at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
> 	at java.lang.ProcessImpl.start(ProcessImpl.java:65)
> 	at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
> 	... 14 more
> 2014-09-15 12:52:18,867 INFO  hdfs.DFSClient (DFSOutputStream.java:createBlockOutputStream(1400)) - Exception in createBlockOutputStream
> java.io.EOFException: Premature EOF: no length prefix available
> 	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2101)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1368)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1210)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:530)
> 2014-09-15 12:52:18,870 WARN  hdfs.DFSClient (DFSOutputStream.java:run(883)) - DFSOutputStream ResponseProcessor exception  for block BP-23833599-67.195.81.147-1410785517350:blk_1073741827_1461
> java.lang.NullPointerException
> 	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2099)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer$ResponseProcessor.run(DFSOutputStream.java:798)
> 2014-09-15 12:52:18,870 WARN  hdfs.DFSClient (DFSOutputStream.java:run(627)) - DataStreamer Exception
> java.lang.NullPointerException
> 	at org.apache.hadoop.hdfs.DFSOutputStream$Packet.writeTo(DFSOutputStream.java:273)
> 	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:579)
> {code}
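The "error=24, Too many open files" failures above occur when the JVM hits its file descriptor limit, so even forking "stat" fails (ProcessBuilder.start needs descriptors for the child's pipes). If this recurs, one quick way to check whether the test JVM is leaking descriptors is to count the entries under /proc/self/fd at suspicious points. The helper below is a hypothetical diagnostic sketch, not part of Hadoop, and is Linux-only:

```java
import java.io.File;

// Hypothetical diagnostic helper (not part of Hadoop): counts the file
// descriptors currently open in this JVM by listing /proc/self/fd.
// Linux-only; on platforms without procfs, list() returns null and we
// report -1 instead of a count.
public class FdCount {
    public static int countOpenFds() {
        File fdDir = new File("/proc/self/fd");
        String[] entries = fdDir.list();
        return entries == null ? -1 : entries.length;
    }

    public static void main(String[] args) {
        // Logging this before and after a suspect test would show whether
        // the count grows monotonically, i.e. whether descriptors leak.
        System.out.println("open fds: " + countOpenFds());
    }
}
```

Comparing the count before and after each test method (or watching `ls /proc/<pid>/fd | wc -l` externally) would distinguish a leak in recent code from transient exhaustion on a shared build host, as seen in HDFS-6694.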



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
