Message-ID: <523396952.1214310405552.JavaMail.jira@brutus>
Date: Tue, 24 Jun 2008 05:26:45 -0700 (PDT)
From: "Hudson (JIRA)"
To: core-dev@hadoop.apache.org
Reply-To: core-dev@hadoop.apache.org
Subject: [jira] Commented: (HADOOP-3559) test-libhdfs fails on linux
In-Reply-To: <1203888823.1213377944972.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-3559?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12607597#action_12607597 ]

Hudson commented on HADOOP-3559:
--------------------------------
Integrated in Hadoop-trunk #528 (See [http://hudson.zones.apache.org/hudson/job/Hadoop-trunk/528/])

> test-libhdfs fails on linux
> ---------------------------
>
>                 Key: HADOOP-3559
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3559
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: libhdfs
>    Affects Versions: 0.18.0
>         Environment: linux
>            Reporter: Mukund Madhugiri
>            Assignee: Lohit Vijayarenu
>            Priority: Blocker
>             Fix For: 0.18.0
>
>         Attachments: HADOOP-3559-1.patch
>
>
> test-libhdfs fails on linux
> test-libhdfs:
>     [mkdir] Created dir: /workspace/trunk/build/test/libhdfs
>     [mkdir] Created dir: /workspace/trunk/build/test/libhdfs/logs
>     [mkdir] Created dir: /workspace/trunk/build/test/libhdfs/dfs/name
>      [exec] ./tests/test-libhdfs.sh
>      [exec] 08/06/13 03:25:15 INFO dfs.NameNode: STARTUP_MSG:
>      [exec] /************************************************************
>      [exec] STARTUP_MSG: Starting NameNode
>      [exec] STARTUP_MSG:   host = hudson/NNN.NNN.NNN.NNN
>      [exec] STARTUP_MSG:   args = [-format]
>      [exec] STARTUP_MSG:   version = 0.18.0
>      [exec] STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/core/trunk -r 667040; compiled by 'hudsonqa' on Fri Jun 13 03:24:28 UTC 2008
>      [exec] ************************************************************/
>      [exec] Re-format filesystem in ../../../build/test/libhdfs/dfs/name ? (Y or N) 08/06/13 03:25:15 INFO fs.FSNamesystem: fsOwner=hudsonqa,users
>      [exec] 08/06/13 03:25:15 INFO fs.FSNamesystem: supergroup=supergroup
>      [exec] 08/06/13 03:25:15 INFO fs.FSNamesystem: isPermissionEnabled=true
>      [exec] 08/06/13 03:25:15 INFO dfs.FSNamesystemMetrics: Initializing FSNamesystemMetrics using context object:org.apache.hadoop.metrics.spi.NullContext
>      [exec] 08/06/13 03:25:16 INFO fs.FSNamesystem: Registered FSNamesystemStatusMBean
>      [exec] 08/06/13 03:25:16 INFO dfs.Storage: Image file of size 82 saved in 0 seconds.
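The `Re-format filesystem ... ? (Y or N)` line in the log above is an interactive prompt: when `hadoop namenode -format` is run from a script, something has to answer it on stdin or the format step can block. A minimal sketch of the piping pattern a test harness can use (the `confirm` function below is a stand-in for the real prompt, not Hadoop code):

```shell
# Stand-in for an interactive Y/N prompt such as the namenode's
# "Re-format filesystem ... ? (Y or N)" question.
confirm() {
  read -r answer
  [ "$answer" = "Y" ]
}

# Answer the prompt non-interactively by piping "Y" on stdin,
# the same pattern a harness can use with the real command, e.g.
#   echo Y | bin/hadoop namenode -format
echo Y | confirm && echo "formatted"   # prints "formatted"
```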
>      [exec] 08/06/13 03:25:16 INFO dfs.Storage: Storage directory ../../../build/test/libhdfs/dfs/name has been successfully formatted.
>      [exec] 08/06/13 03:25:16 INFO dfs.NameNode: SHUTDOWN_MSG:
>      [exec] /************************************************************
>      [exec] SHUTDOWN_MSG: Shutting down NameNode at hudson/NNN.NNN.NNN.NNN
>      [exec] ************************************************************/
>      [exec] starting namenode, logging to /workspace/trunk/build/test/libhdfs/logs/hadoop-hudsonqa-namenode-hudsonqa.out
>      [exec] starting datanode, logging to /workspace/trunk/build/test/libhdfs/logs/hadoop-hudsonqa-datanode-hudsonqa.out
>      [exec] CLASSPATH=/workspace/trunk/src/c++/libhdfs/tests/conf:/workspace/trunk/conf:/workspace/trunk/src/c++/libhdfs/tests/conf:/workspace/trunk/conf:/home/hudsonqa/tools/java/jdk1.5.0_11-32bit/lib/tools.jar:/workspace/trunk/build/classes:/workspace/trunk/build:/workspace/trunk/build/test/classes:/workspace/trunk/lib/commons-cli-2.0-SNAPSHOT.jar:/workspace/trunk/lib/commons-codec-1.3.jar:/workspace/trunk/lib/commons-httpclient-3.0.1.jar:/workspace/trunk/lib/commons-logging-1.0.4.jar:/workspace/trunk/lib/commons-logging-api-1.0.4.jar:/workspace/trunk/lib/commons-net-1.4.1.jar:/workspace/trunk/lib/jets3t-0.6.0.jar:/workspace/trunk/lib/jetty-5.1.4.jar:/workspace/trunk/lib/junit-3.8.1.jar:/workspace/trunk/lib/kfs-0.1.3.jar:/workspace/trunk/lib/log4j-1.2.13.jar:/workspace/trunk/lib/oro-2.0.8.jar:/workspace/trunk/lib/servlet-api.jar:/workspace/trunk/lib/slf4j-api-1.4.3.jar:/workspace/trunk/lib/slf4j-log4j12-1.4.3.jar:/workspace/trunk/lib/xmlenc-0.52.jar:/workspace/trunk/lib/jsp-2.0/*.jar LD_PRELOAD=/workspace/trunk/build/libhdfs/libhdfs.so /workspace/trunk/build/libhdfs/hdfs_test
>      [exec] 08/06/13 03:25:22 WARN fs.FileSystem: "localhost:23000" is a deprecated filesystem name. Use "hdfs://localhost:23000/" instead.
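The `fs.FileSystem` deprecation warning above refers to the old bare `host:port` spelling of the default filesystem name. A sketch of the corresponding client configuration using the full URI form the warning asks for (the port 23000 is taken from the log; `hadoop-site.xml` is the 0.18-era config file name):

```xml
<!-- hadoop-site.xml (0.18-era client configuration sketch) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- Full URI form; the bare "localhost:23000" spelling is deprecated -->
    <value>hdfs://localhost:23000/</value>
  </property>
</configuration>
```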
>      [exec] 08/06/13 03:25:24 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 0 time(s).
>      [exec] 08/06/13 03:25:25 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 1 time(s).
>      [exec] 08/06/13 03:25:26 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 2 time(s).
>      [exec] 08/06/13 03:25:27 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 3 time(s).
>      [exec] 08/06/13 03:25:28 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 4 time(s).
>      [exec] 08/06/13 03:25:29 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 5 time(s).
>      [exec] 08/06/13 03:25:30 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 6 time(s).
>      [exec] 08/06/13 03:25:31 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 7 time(s).
>      [exec] 08/06/13 03:25:32 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 8 time(s).
>      [exec] 08/06/13 03:25:33 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:23000. Already tried 9 time(s).
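The ten one-second retries above are the IPC client polling a namenode that never opened port 23000. A test harness can make this failure mode explicit by waiting for the port before launching the client; a minimal bash sketch (the `wait_for_port` helper and its arguments are hypothetical, and it relies on bash's `/dev/tcp` redirection, so it is not portable to plain sh):

```shell
# Hypothetical helper: poll until host:port accepts a TCP connection,
# giving up after max_tries attempts spaced one second apart.
wait_for_port() {
  host="$1"; port="$2"; max_tries="$3"
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    # bash-only: writing to /dev/tcp/<host>/<port> attempts a TCP connect
    if (echo > "/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage in a harness, before invoking hdfs_test:
#   wait_for_port localhost 23000 10 || { echo "namenode never came up"; exit 1; }
```

Failing fast with a clear message beats ten retry log lines followed by a stack trace when diagnosing build-bot output.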
>      [exec] Exception in thread "main" java.io.IOException: Call failed on local exception
>      [exec] 	at org.apache.hadoop.ipc.Client.call(Client.java:710)
>      [exec] 	at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>      [exec] 	at org.apache.hadoop.dfs.$Proxy0.getProtocolVersion(Unknown Source)
>      [exec] 	at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:319)
>      [exec] 	at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:103)
>      [exec] 	at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:173)
>      [exec] 	at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:67)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1335)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>      [exec] 	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1346)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:209)
>      [exec] 	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:114)
>      [exec] Caused by: java.net.ConnectException: Connection refused
>      [exec] 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
>      [exec] 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:527)
>      [exec] 	at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:100)
>      [exec] 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:300)
>      [exec] 	at org.apache.hadoop.ipc.Client$Connection.access$1700(Client.java:177)
>      [exec] 	at org.apache.hadoop.ipc.Client.getConnection(Client.java:781)
>      [exec] 	at org.apache.hadoop.ipc.Client.call(Client.java:696)
>      [exec] 	... 11 more
>      [exec] Call to org.apache.hadoop.fs.FileSystem::get failed!
>      [exec] Oops! Failed to connect to hdfs!
>      [exec] no datanode to stop
>      [exec] no namenode to stop
>      [exec] exiting with 255
>      [exec] make: *** [test] Error 255

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
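One note on the log above: the closing `no datanode to stop` / `no namenode to stop` lines suggest the daemons exited before writing pid files, so the `.out` and `.log` files under `build/test/libhdfs/logs` are the place to look next. A small sketch of that triage step (the `check_logs` helper is hypothetical, not part of the Hadoop scripts):

```shell
# Hypothetical helper: list daemon log files that contain an exception,
# e.g.  check_logs /workspace/trunk/build/test/libhdfs/logs
check_logs() {
  dir="$1"
  # .out files capture the forked daemon's stdout/stderr; .log files
  # capture log4j output. Either may hold the startup failure.
  grep -l "Exception" "$dir"/*.out "$dir"/*.log 2>/dev/null
}
```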