From: Apache Jenkins Server
To: hdfs-dev@hadoop.apache.org
Reply-To: hdfs-dev@hadoop.apache.org
Date: Fri, 16 Oct 2015 05:16:15 +0000 (UTC)
Message-ID: <293764615.2541.1444972575568.JavaMail.jenkins@crius>
In-Reply-To: <1394443148.2494.1444957152110.JavaMail.jenkins@crius>
Subject: Hadoop-Hdfs-trunk-Java8 - Build # 504 - Still Failing
Mailing-List: contact hdfs-dev-help@hadoop.apache.org; run by ezmlm
X-Jenkins-Job: Hadoop-Hdfs-trunk-Java8
X-Jenkins-Result: FAILURE

See https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/504/

###################################################################################
########################## LAST 60 LINES OF THE CONSOLE ###########################
[...truncated 10769 lines...]
[mkdir] Created dir: /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/target/test-dir
[INFO] Executed tasks
[INFO]
[INFO] --- maven-source-plugin:2.3:jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-source-plugin:2.3:test-jar-no-fork (hadoop-java-sources) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (dist-enforce) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-javadoc-plugin:2.8.1:jar (module-javadocs) @ hadoop-hdfs-project ---
[INFO] Not executing Javadoc as the project is not a Java classpath-capable package
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (depcheck) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ hadoop-hdfs-project ---
[INFO]
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-hdfs-project ---
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop HDFS Client ......................... SUCCESS [03:37 min]
[INFO] Apache Hadoop HDFS ................................ FAILURE [ 03:10 h]
[INFO] Apache Hadoop HDFS Native Client .................. SKIPPED
[INFO] Apache Hadoop HttpFS .............................. SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SKIPPED
[INFO] Apache Hadoop HDFS-NFS ............................ SKIPPED
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.057 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 03:14 h
[INFO] Finished at: 2015-10-16T05:16:45+00:00
[INFO] Final Memory: 55M/510M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.17:test (default-test) on project hadoop-hdfs: There are test failures.
[ERROR]
[ERROR] Please refer to /home/jenkins/jenkins-slave/workspace/Hadoop-Hdfs-trunk-Java8/hadoop-hdfs-project/hadoop-hdfs/target/surefire-reports for the individual test results.
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR] mvn -rf :hadoop-hdfs
Build step 'Execute shell' marked build as failure
Archiving artifacts
Recording test results
Updating HADOOP-12481
Updating YARN-4000
Updating HADOOP-12479
Updating HADOOP-12475
Sending e-mails to: hdfs-dev@hadoop.apache.org
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any

###################################################################################
############################## FAILED TESTS (if any) ##############################
73 tests failed.
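Most of the failures below share one shape: the JUnit fixture starts a MiniDFSCluster, a NameNode or DataNode cannot bind its RPC/HTTP listener on localhost, and the teardown step then hits a NullPointerException because the cluster field was never assigned. A minimal sketch of that fixture pattern, assuming JUnit 4 and the hadoop-hdfs test APIs (hypothetical class and method names, not the actual test sources):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.HdfsConfiguration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;

public class MiniClusterFixtureSketch {
  private MiniDFSCluster cluster;

  @Before
  public void startUpCluster() throws Exception {
    Configuration conf = new HdfsConfiguration();
    // The mini cluster requests ephemeral ports (localhost:0); the traces below
    // show this bind step failing with "Address already in use" on the slave.
    cluster = new MiniDFSCluster.Builder(conf)
        .numDataNodes(3)
        .build();
    cluster.waitActive();
  }

  @After
  public void shutDownCluster() {
    // If build() threw in @Before, 'cluster' stays null; an unguarded
    // shutdown() call here is the kind of thing that yields the secondary
    // NullPointerException failures reported below.
    if (cluster != null) {
      cluster.shutdown();
    }
  }
}

Guarding the shutdown as in this sketch would reduce each affected test to a single BindException failure instead of a BindException plus a follow-on NullPointerException, which makes the report easier to triage.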
FAILED: org.apache.hadoop.hdfs.TestClientReportBadBlock.testCorruptAllOfThreeReplicas Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestClientReportBadBlock.startUpCluster(TestClientReportBadBlock.java:77) FAILED: org.apache.hadoop.hdfs.TestClientReportBadBlock.testCorruptAllOfThreeReplicas Error Message: null Stack Trace: java.lang.NullPointerException: null at org.apache.hadoop.hdfs.TestClientReportBadBlock.shutDownCluster(TestClientReportBadBlock.java:85) FAILED: org.apache.hadoop.hdfs.TestClientReportBadBlock.testCorruptTwoOutOfThreeReplicas Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at 
org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestClientReportBadBlock.startUpCluster(TestClientReportBadBlock.java:77) FAILED: org.apache.hadoop.hdfs.TestClientReportBadBlock.testCorruptTwoOutOfThreeReplicas Error Message: null Stack Trace: java.lang.NullPointerException: null at org.apache.hadoop.hdfs.TestClientReportBadBlock.shutDownCluster(TestClientReportBadBlock.java:85) FAILED: org.apache.hadoop.hdfs.TestClientReportBadBlock.testOneBlockReplica Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at 
org.apache.hadoop.hdfs.TestClientReportBadBlock.startUpCluster(TestClientReportBadBlock.java:77) FAILED: org.apache.hadoop.hdfs.TestClientReportBadBlock.testOneBlockReplica Error Message: null Stack Trace: java.lang.NullPointerException: null at org.apache.hadoop.hdfs.TestClientReportBadBlock.shutDownCluster(TestClientReportBadBlock.java:85) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test0 Error Message: failed, dn=0, length=4611java.net.BindException: Port in use: localhost:0 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test0(TestDFSStripedOutputStreamWithFailure.java:492) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 
24 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=4611java.net.BindException: Port in use: localhost:0 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test0(TestDFSStripedOutputStreamWithFailure.java:492) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 
24 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test0(TestDFSStripedOutputStreamWithFailure.java:492) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test1 Error Message: failed, dn=0, length=-1java.net.BindException: Port in use: localhost:0 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test1(TestDFSStripedOutputStreamWithFailure.java:493) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in 
use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 24 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=-1java.net.BindException: Port in use: localhost:0 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test1(TestDFSStripedOutputStreamWithFailure.java:493) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 
24 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test1(TestDFSStripedOutputStreamWithFailure.java:493) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test2 Error Message: failed, dn=0, length=0java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test2(TestDFSStripedOutputStreamWithFailure.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 24 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=0java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test2(TestDFSStripedOutputStreamWithFailure.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 24 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test2(TestDFSStripedOutputStreamWithFailure.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test3 Error Message: failed, dn=0, length=1java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at 
org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:495) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
31 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=1java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:495) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 31 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:495) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test4 Error Message: failed, dn=0, length=65535java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test4(TestDFSStripedOutputStreamWithFailure.java:496) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 31 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=65535java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at 
org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test4(TestDFSStripedOutputStreamWithFailure.java:496) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
31 more
    at org.junit.Assert.fail(Assert.java:88)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test4(TestDFSStripedOutputStreamWithFailure.java:496)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test5

Error Message:
failed, dn=0, length=65536java.net.BindException: Port in use: localhost:0
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:771)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:625)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test5(TestDFSStripedOutputStreamWithFailure.java:497)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    ... 26 more

Stack Trace:
java.lang.AssertionError: failed, dn=0, length=65536java.net.BindException: Port in use: localhost:0
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:771)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:625)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test5(TestDFSStripedOutputStreamWithFailure.java:497)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
Caused by: java.net.BindException: Address already in use
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    ... 26 more
    at org.junit.Assert.fail(Assert.java:88)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test5(TestDFSStripedOutputStreamWithFailure.java:497)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test6

Error Message:
failed, dn=0, length=65537java.net.BindException: Port in use: localhost:0
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:771)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:625)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489)
    at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:498)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 26 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=65537java.net.BindException: Port in use: localhost:0 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:771) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:625) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:498) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at 
org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 26 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:498) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test7 Error Message: failed, dn=0, length=131071java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test7(TestDFSStripedOutputStreamWithFailure.java:499) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 31 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=131071java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test7(TestDFSStripedOutputStreamWithFailure.java:499) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 31 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test7(TestDFSStripedOutputStreamWithFailure.java:499) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test8 Error Message: failed, dn=0, length=131072java.net.BindException: Port in use: localhost:0 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:771) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:625) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at 
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test8(TestDFSStripedOutputStreamWithFailure.java:500) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 
26 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=131072java.net.BindException: Port in use: localhost:0 at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:888) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer.start(NameNodeHttpServer.java:142) at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:771) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:625) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test8(TestDFSStripedOutputStreamWithFailure.java:500) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) ... 
26 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test8(TestDFSStripedOutputStreamWithFailure.java:500) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure000.test9 Error Message: failed, dn=0, length=131073java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test9(TestDFSStripedOutputStreamWithFailure.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 31 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=131073java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test9(TestDFSStripedOutputStreamWithFailure.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 31 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test9(TestDFSStripedOutputStreamWithFailure.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test0 Error Message: failed, dn=0, length=196607java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test0(TestDFSStripedOutputStreamWithFailure.java:492) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=196607java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test0(TestDFSStripedOutputStreamWithFailure.java:492) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test0(TestDFSStripedOutputStreamWithFailure.java:492) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test1 Error Message: failed, dn=0, length=196608java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test1(TestDFSStripedOutputStreamWithFailure.java:493) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=196608java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test1(TestDFSStripedOutputStreamWithFailure.java:493) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test1(TestDFSStripedOutputStreamWithFailure.java:493) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test2 Error Message: failed, dn=0, length=196609java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at 
org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test2(TestDFSStripedOutputStreamWithFailure.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=196609java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test2(TestDFSStripedOutputStreamWithFailure.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test2(TestDFSStripedOutputStreamWithFailure.java:494) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test3 Error Message: failed, dn=0, length=262143java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:495) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=262143java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:495) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test3(TestDFSStripedOutputStreamWithFailure.java:495) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test4 Error Message: failed, dn=0, length=262144java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at 
org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test4(TestDFSStripedOutputStreamWithFailure.java:496) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=262144java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test4(TestDFSStripedOutputStreamWithFailure.java:496) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test4(TestDFSStripedOutputStreamWithFailure.java:496) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test5 Error Message: failed, dn=0, length=262145java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test5(TestDFSStripedOutputStreamWithFailure.java:497) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=262145java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test5(TestDFSStripedOutputStreamWithFailure.java:497) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test5(TestDFSStripedOutputStreamWithFailure.java:497) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test6 Error Message: failed, dn=0, length=327679java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at 
org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:498) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=327679java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:498) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test6(TestDFSStripedOutputStreamWithFailure.java:498) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test7 Error Message: failed, dn=0, length=327680java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test7(TestDFSStripedOutputStreamWithFailure.java:499) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=327680java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test7(TestDFSStripedOutputStreamWithFailure.java:499) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test7(TestDFSStripedOutputStreamWithFailure.java:499) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test8 Error Message: failed, dn=0, length=327681java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at 
org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test8(TestDFSStripedOutputStreamWithFailure.java:500) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=327681java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test8(TestDFSStripedOutputStreamWithFailure.java:500) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 
28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test8(TestDFSStripedOutputStreamWithFailure.java:500) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure010.test9 Error Message: failed, dn=0, length=393215java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test9(TestDFSStripedOutputStreamWithFailure.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more Stack Trace: java.lang.AssertionError: failed, dn=0, length=393215java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.ipc.Server.bind(Server.java:486) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.setup(TestDFSStripedOutputStreamWithFailure.java:146) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:292) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at 
org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test9(TestDFSStripedOutputStreamWithFailure.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) ... 28 more at org.junit.Assert.fail(Assert.java:88) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure.runTest(TestDFSStripedOutputStreamWithFailure.java:298) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.run(TestDFSStripedOutputStreamWithFailure.java:489) at org.apache.hadoop.hdfs.TestDFSStripedOutputStreamWithFailure$TestBase.test9(TestDFSStripedOutputStreamWithFailure.java:501) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:483) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) FAILED: org.apache.hadoop.hdfs.TestDFSUpgrade.testUpgrade Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:197) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:839) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at 
org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1431) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1679) at org.apache.hadoop.hdfs.TestDFSUpgrade.testUpgrade(TestDFSUpgrade.java:260) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:522) at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1196) at io.netty.channel.ChannelHandlerInvokerUtil.invokeBindNow(ChannelHandlerInvokerUtil.java:108) at io.netty.channel.DefaultChannelHandlerInvoker.invokeBind(DefaultChannelHandlerInvoker.java:214) at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:208) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:1003) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:216) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:357) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:322) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:356) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) at java.lang.Thread.run(Thread.java:744) FAILED: org.apache.hadoop.hdfs.TestDataTransferKeepalive.testSlowReader Error Message: Problem binding to [localhost:50067] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:50067] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.datanode.DataNode.initIpcServer(DataNode.java:880) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1214) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at 
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2422) at org.apache.hadoop.hdfs.MiniDFSCluster.restartDataNode(MiniDFSCluster.java:2197) at org.apache.hadoop.hdfs.TestDataTransferKeepalive.testSlowReader(TestDataTransferKeepalive.java:182) FAILED: org.apache.hadoop.hdfs.TestDatanodeDeath.testSimple0 Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:197) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:839) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDatanodeDeath.simpleTest(TestDatanodeDeath.java:350) at org.apache.hadoop.hdfs.TestDatanodeDeath.testSimple0(TestDatanodeDeath.java:410) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:522) at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1196) at io.netty.channel.ChannelHandlerInvokerUtil.invokeBindNow(ChannelHandlerInvokerUtil.java:108) at io.netty.channel.DefaultChannelHandlerInvoker.invokeBind(DefaultChannelHandlerInvoker.java:214) at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:208) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:1003) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:216) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:357) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:322) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:356) at 
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) at java.lang.Thread.run(Thread.java:744) FAILED: org.apache.hadoop.hdfs.TestDatanodeDeath.testSimple1 Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:197) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:839) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDatanodeDeath.simpleTest(TestDatanodeDeath.java:350) at org.apache.hadoop.hdfs.TestDatanodeDeath.testSimple1(TestDatanodeDeath.java:413) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:522) at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1196) at io.netty.channel.ChannelHandlerInvokerUtil.invokeBindNow(ChannelHandlerInvokerUtil.java:108) at io.netty.channel.DefaultChannelHandlerInvoker.invokeBind(DefaultChannelHandlerInvoker.java:214) at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:208) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:1003) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:216) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:357) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:322) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:356) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) at 
io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) at java.lang.Thread.run(Thread.java:744) FAILED: org.apache.hadoop.hdfs.TestDatanodeDeath.testSimple2 Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:408) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792) at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.start(DatanodeHttpServer.java:197) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:839) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDatanodeDeath.simpleTest(TestDatanodeDeath.java:350) at org.apache.hadoop.hdfs.TestDatanodeDeath.testSimple2(TestDatanodeDeath.java:416) Caused by: java.net.BindException: Address already in use at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at io.netty.channel.socket.nio.NioServerSocketChannel.doBind(NioServerSocketChannel.java:125) at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:522) at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1196) at io.netty.channel.ChannelHandlerInvokerUtil.invokeBindNow(ChannelHandlerInvokerUtil.java:108) at io.netty.channel.DefaultChannelHandlerInvoker.invokeBind(DefaultChannelHandlerInvoker.java:214) at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:208) at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:1003) at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:216) at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:357) at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:322) at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:356) at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:703) at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) at java.lang.Thread.run(Thread.java:744) FAILED: 
org.apache.hadoop.hdfs.TestDatanodeRegistration.testDNSLookups Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestDatanodeRegistration.testDNSLookups(TestDatanodeRegistration.java:75) FAILED: org.apache.hadoop.hdfs.TestDisableConnCache.testDisableCache Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.BlockReaderTestUtil.(BlockReaderTestUtil.java:84) at org.apache.hadoop.hdfs.TestDisableConnCache.testDisableCache(TestDisableConnCache.java:51) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testVersionAndSuiteNegotiation Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at 
sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testGetEZAsNonSuperUser Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testReadWriteUsingWebHdfs Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at 
org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testDelegationToken Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testEncryptionZonesOnRootPath Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testReadWrite Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testRenameFileSystem Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at 
org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testListEncryptionZonesAsNonSuperUser Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testSnapshotsOnEncryptionZones Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server$Listener.(Server.java:684) at org.apache.hadoop.ipc.Server.(Server.java:2439) at org.apache.hadoop.ipc.RPC$Server.(RPC.java:945) at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server.(ProtobufRpcEngine.java:535) at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510) at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787) at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.(NameNodeRpcServer.java:358) at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:833) at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:812) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247) at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016) at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testConcatFailsInEncryptionZones Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testOfflineImageViewerOnEncryptionZones Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at 
org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testRenameFileContext Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testFsckOnEncryptionZones Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at 
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testBasicOperations Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testCreateEZWithNoProvider Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at 
org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testStartFileRetry Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testEncryptionZonesOnRelativePath Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testEncryptionZonesWithSymlinks Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at 
sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZones.testIsEncryptedMethod Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testDelegationToken Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at 
sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testCreateEZPopulatesEDEKCache Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testVersionAndSuiteNegotiation Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at 
sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testGetEZAsNonSuperUser Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testReadWriteUsingWebHdfs Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at 
sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testEncryptionZonesOnRootPath Error Message: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException Stack Trace: java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.apache.hadoop.ipc.Server.bind(Server.java:469) at org.apache.hadoop.ipc.Server.bind(Server.java:458) at org.apache.hadoop.hdfs.net.TcpPeerServer.(TcpPeerServer.java:50) at org.apache.hadoop.hdfs.server.datanode.DataNode.initDataXceiver(DataNode.java:973) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1202) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testReadWrite Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at 
org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testRenameFileSystem Error Message: Port in use: localhost:0 Stack Trace: java.net.BindException: Port in use: localhost:0 at sun.nio.ch.Net.bind0(Native Method) at sun.nio.ch.Net.bind(Net.java:414) at sun.nio.ch.Net.bind(Net.java:406) at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214) at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74) at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216) at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882) at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824) at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.(DatanodeHttpServer.java:103) at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838) at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203) at org.apache.hadoop.hdfs.server.datanode.DataNode.(DataNode.java:468) at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487) at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375) at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592) at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841) at org.apache.hadoop.hdfs.MiniDFSCluster.(MiniDFSCluster.java:479) at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438) at org.apache.hadoop.hdfs.TestEncryptionZones.setup(TestEncryptionZones.java:147) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.setup(TestEncryptionZonesWithKMS.java:56) FAILED: org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testBasicOperations Error Message: Bad value for metric NumEncryptionZones expected:<8> but was:<0> Stack Trace: java.lang.AssertionError: Bad value for metric NumEncryptionZones expected:<8> but was:<0> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:151) at org.apache.hadoop.hdfs.TestEncryptionZones.testBasicOperations(TestEncryptionZones.java:366) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at 
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
    at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
    at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
    at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
    at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)

FAILED: org.apache.hadoop.hdfs.TestGetBlocks.testReadSelectNonStaleDatanode

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestGetBlocks.testReadSelectNonStaleDatanode(TestGetBlocks.java:94)

FAILED: org.apache.hadoop.hdfs.TestGetBlocks.testGetBlocks

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestGetBlocks.testGetBlocks(TestGetBlocks.java:183)

FAILED: org.apache.hadoop.hdfs.TestReplication.testReplication

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.runReplication(TestReplication.java:244)
    at org.apache.hadoop.hdfs.TestReplication.testReplication(TestReplication.java:292)

FAILED: org.apache.hadoop.hdfs.TestReplication.testPendingReplicationRetry

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.testPendingReplicationRetry(TestReplication.java:375)

FAILED: org.apache.hadoop.hdfs.TestReplication.testReplicationWhenBlockCorruption

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.testReplicationWhenBlockCorruption(TestReplication.java:531)

FAILED: org.apache.hadoop.hdfs.TestReplication.testNoExtraReplicationWhenBlockReceivedIsLate

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.testNoExtraReplicationWhenBlockReceivedIsLate(TestReplication.java:613)

FAILED: org.apache.hadoop.hdfs.TestReplication.testReplicationWhileUnderConstruction

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.testReplicationWhileUnderConstruction(TestReplication.java:695)

FAILED: org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransferMissingBlockFile

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:173)
    at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransferMissingBlockFile(TestReplication.java:230)

FAILED: org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:173)
    at org.apache.hadoop.hdfs.TestReplication.testBadBlockReportOnTransfer(TestReplication.java:221)

FAILED: org.apache.hadoop.hdfs.TestReplication.testReplicationSimulatedStorag

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.runReplication(TestReplication.java:244)
    at org.apache.hadoop.hdfs.TestReplication.testReplicationSimulatedStorag(TestReplication.java:286)

FAILED: org.apache.hadoop.hdfs.TestReplication.testReplicateLenMismatchedBlock

Error Message:
Port in use: localhost:0

Stack Trace:
java.net.BindException: Port in use: localhost:0
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.mortbay.jetty.nio.SelectChannelConnector.open(SelectChannelConnector.java:216)
    at org.apache.hadoop.http.HttpServer2.openListeners(HttpServer2.java:882)
    at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:824)
    at org.apache.hadoop.hdfs.server.datanode.web.DatanodeHttpServer.<init>(DatanodeHttpServer.java:103)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startInfoServer(DataNode.java:838)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1203)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:468)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2487)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2375)
    at org.apache.hadoop.hdfs.MiniDFSCluster.startDataNodes(MiniDFSCluster.java:1592)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:841)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestReplication.testReplicateLenMismatchedBlock(TestReplication.java:462)

FAILED: org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.testWriteConf

Error Message:
Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException

Stack Trace:
java.net.BindException: Problem binding to [localhost:0] java.net.BindException: Address already in use; For more details see: http://wiki.apache.org/hadoop/BindException
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:414)
    at sun.nio.ch.Net.bind(Net.java:406)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:214)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.apache.hadoop.ipc.Server.bind(Server.java:469)
    at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:684)
    at org.apache.hadoop.ipc.Server.<init>(Server.java:2439)
    at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:945)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:535)
    at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:510)
    at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:787)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:358)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:692)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:630)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:833)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1505)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1247)
    at org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1016)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:888)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:820)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:479)
    at org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:438)
    at org.apache.hadoop.hdfs.TestWriteConfigurationToDFS.testWriteConf(TestWriteConfigurationToDFS.java:39)