hadoop-hdfs-issues mailing list archives

From "Josh Lospinoso (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-197) "du" fails on Cygwin
Date Mon, 19 Dec 2011 22:31:31 GMT

    [ https://issues.apache.org/jira/browse/HDFS-197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13172685#comment-13172685 ]

Josh Lospinoso commented on HDFS-197:
-------------------------------------

I'm also getting this error when using MiniDFSCluster in an acceptance test. Is it related to this issue?

17:24:40.288 [main] ERROR o.a.h.h.server.namenode.FSNamesystem - FSNamesystem initialization failed.
java.io.IOException: Expecting a line not the end of stream
	at org.apache.hadoop.fs.DF.parseExecResult(DF.java:117) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:237) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.util.Shell.run(Shell.java:182) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.fs.DF.getFilesystem(DF.java:63) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirsToCheck(NameNodeResourceChecker.java:93) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:73) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:348) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:327) ~[hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:271) [hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:465) [hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1239) [hadoop-core-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:278) [hadoop-test-0.20.2-cdh3u2.jar:na]
	at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:125) [hadoop-test-0.20.2-cdh3u2.jar:na]
	at com.redowlanalytics.reveal.test.integration.hadoop.HadoopIntegrationTest.setUp(HadoopIntegrationTest.java:48) [test-classes/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_01]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_01]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.7.0_01]
	at java.lang.reflect.Method.invoke(Method.java:601) ~[na:1.7.0_01]
	at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44) [junit-4.9.jar:na]
	at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15) [junit-4.9.jar:na]
	at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41) [junit-4.9.jar:na]
	at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:27) [junit-4.9.jar:na]
	at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:31) [junit-4.9.jar:na]
	at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:263) [junit-4.9.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:69) [junit-4.9.jar:na]
	at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:48) [junit-4.9.jar:na]
	at org.junit.runners.ParentRunner$3.run(ParentRunner.java:231) [junit-4.9.jar:na]
	at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:60) [junit-4.9.jar:na]
	at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:229) [junit-4.9.jar:na]
	at org.junit.runners.ParentRunner.access$000(ParentRunner.java:50) [junit-4.9.jar:na]
	at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:222) [junit-4.9.jar:na]
	at org.junit.runners.ParentRunner.run(ParentRunner.java:292) [junit-4.9.jar:na]
	at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50) [.cp/:na]
	at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) [.cp/:na]
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) [.cp/:na]
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) [.cp/:na]
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) [.cp/:na]
	at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197) [.cp/:na]
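
For context, the failing setUp is essentially the stock MiniDFSCluster bootstrap. A minimal sketch (the class below is hypothetical and stands in for our actual test; it assumes hadoop-core/hadoop-test 0.20.2-cdh3u2 and JUnit 4 on the classpath):

{noformat}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;

public class HadoopIntegrationTest {

    private MiniDFSCluster cluster;

    @Before
    public void setUp() throws IOException {
        Configuration conf = new Configuration();
        // On Windows the default name/data directories resolve to C:\... paths.
        // NameNodeResourceChecker checks them via DF (a shell call to df), and
        // under Cygwin that call yields no parsable output, giving the
        // "Expecting a line not the end of stream" failure above.
        cluster = new MiniDFSCluster(conf, 1, true, null);
    }

    @After
    public void tearDown() {
        if (cluster != null) {
            cluster.shutdown();
        }
    }
}
{noformat}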
                
> "du" fails on Cygwin
> --------------------
>
>                 Key: HDFS-197
>                 URL: https://issues.apache.org/jira/browse/HDFS-197
>             Project: Hadoop HDFS
>          Issue Type: Bug
>         Environment: Windows + Cygwin
>            Reporter: Kohsuke Kawaguchi
>         Attachments: HADOOP-5486
>
>
> When I try to run a datanode on Windows, I get the following exception:
> {noformat}
> java.io.IOException: Expecting a line not the end of stream
> 	at org.apache.hadoop.fs.DU.parseExecResult(DU.java:181)
> 	at org.apache.hadoop.util.Shell.runCommand(Shell.java:179)
> 	at org.apache.hadoop.util.Shell.run(Shell.java:134)
> 	at org.apache.hadoop.fs.DU.<init>(DU.java:53)
> 	at org.apache.hadoop.fs.DU.<init>(DU.java:63)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset$FSVolume.<init>(FSDataset.java:325)
> 	at org.apache.hadoop.hdfs.server.datanode.FSDataset.<init>(FSDataset.java:681)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:291)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:205)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1238)
> 	at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1193)
> {noformat}
> This is because Hadoop execs "du -sk C:\tmp\hadoop-SYSTEM\dfs\data" with a Windows path representation, which cygwin du doesn't understand.
> {noformat}
> C:\hudson>du -sk C:\tmp\hadoop-SYSTEM\dfs\data
> du -sk C:\tmp\hadoop-SYSTEM\dfs\data
> du: cannot access `C:\\tmp\\hadoop-SYSTEM\\dfs\\data': No such file or directory
> {noformat}
> For this to work correctly, Hadoop would have to run cygpath first to get a Unix path representation and then call DU.
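
A rough sketch of that cygpath conversion from the Java side (a hypothetical helper, not part of Hadoop; it assumes cygpath is on the PATH):

{noformat}
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

// Hypothetical helper: translate a Windows path into the POSIX form that
// cygwin's du/df understand, before Hadoop builds its shell command.
public final class CygwinPaths {

    public static String toPosix(String windowsPath) throws IOException, InterruptedException {
        Process p = new ProcessBuilder("cygpath", "-u", windowsPath).start();
        BufferedReader stdout = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String posixPath = stdout.readLine();   // e.g. /cygdrive/c/tmp/hadoop-SYSTEM/dfs/data
        if (p.waitFor() != 0 || posixPath == null) {
            throw new IOException("cygpath failed for " + windowsPath);
        }
        return posixPath;
    }
}
{noformat}

From a shell, the equivalent would be du -sk "$(cygpath -u 'C:\tmp\hadoop-SYSTEM\dfs\data')".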
> Also, I had to use the debugger to get this information. Shell.runCommand should catch the IOException from parseExecResult and add the buffered stderr to the message, to simplify error diagnostics.
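
A sketch of that error-reporting suggestion: wrap the parse step so a failure carries the child process's stderr with it. The names below only loosely mirror Shell.runCommand; this is not the actual Hadoop code.

{noformat}
import java.io.BufferedReader;
import java.io.IOException;

final class ShellErrorReportingSketch {

    interface ExecResultParser {
        void parseExecResult(BufferedReader stdout) throws IOException;
    }

    // errMsg stands in for the buffer that the stderr-draining thread fills.
    static void parseWithStderr(BufferedReader stdout, StringBuffer errMsg,
                                ExecResultParser parser) throws IOException {
        try {
            parser.parseExecResult(stdout);
        } catch (IOException ioe) {
            // Surface du/df's own complaint instead of a bare
            // "Expecting a line not the end of stream".
            throw new IOException(ioe.getMessage() + "; stderr: " + errMsg, ioe);
        }
    }
}
{noformat}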

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
