hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1640) TestDecommission fails on Windows
Date Mon, 23 Jul 2007 15:16:31 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley updated HADOOP-1640:
----------------------------------

    Status: Open  (was: Patch Available)

I'd like the wait time bounded to a minute or so, so that if the test is broken it doesn't hang the unit tests.
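The bounded wait suggested above could be sketched as a polling helper for the test. This is a hypothetical illustration, not Hadoop code: the Supplier-based condition and the helper name are assumptions, and the actual patch would poll the datanode's decommission state instead.

```java
import java.util.function.Supplier;

// Hypothetical sketch of a bounded wait for a test: poll a condition,
// but give up after a deadline instead of hanging the unit-test run.
public final class BoundedWait {
    public static boolean waitForState(Supplier<Boolean> reached,
                                       long timeoutMillis, long pollMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (reached.get()) {
                return true;          // state reached within the bound
            }
            Thread.sleep(pollMillis); // back off instead of busy-waiting
        }
        return false;                 // timed out; let the test fail fast
    }

    public static void main(String[] args) throws InterruptedException {
        // Toy usage: a condition that is already true, with a 60s bound.
        boolean ok = waitForState(() -> true, 60_000L, 500L);
        System.out.println(ok ? "state reached" : "timed out");
    }
}
```

With a one-minute bound, a broken decommission path makes the test fail quickly rather than spinning for the full 15-minute junit timeout seen in the log.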

> TestDecommission fails on Windows
> ---------------------------------
>
>                 Key: HADOOP-1640
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1640
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.14.0
>            Reporter: Nigel Daley
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.14.0
>
>         Attachments: testDecommission1640.patch
>
>
> In the snippet of test log below, the exception happens every ~15 milliseconds for 15 minutes until the test times out:
>     [junit] Created file decommission.dat with 2 replicas.
>     [junit] Block[0] : xxx xxx 
>     [junit] Block[1] : xxx xxx 
>     [junit] Decommissioning node: 127.0.0.1:50013
>     [junit] 2007-07-19 19:12:45,059 INFO  fs.FSNamesystem (FSNamesystem.java:startDecommission(2572)) - Start Decommissioning node 127.0.0.1:50013
>     [junit] Name: 127.0.0.1:50013
>     [junit] State          : Decommission in progress
>     [junit] Total raw bytes: 80030941184 (74.53 GB)
>     [junit] Used raw bytes: 33940945746 (31.60 GB)
>     [junit] % used: 42.40%
>     [junit] Last contact: Thu Jul 19 19:12:44 PDT 2007
>     [junit] Waiting for node 127.0.0.1:50013 to change state to DECOMMISSIONED
>     [junit] 2007-07-19 19:12:45,199 INFO  http.SocketListener (SocketListener.java:stop(212)) - Stopped SocketListener on 0.0.0.0:3147
>     [junit] 2007-07-19 19:12:45,199 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.servlet.WebApplicationHandler@1d98a
>     [junit] 2007-07-19 19:12:45,293 INFO  util.Container (Container.java:stop(156)) - Stopped WebApplicationContext[/,/]
>     [junit] 2007-07-19 19:12:45,402 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/logs,/logs]
>     [junit] 2007-07-19 19:12:45,481 INFO  util.Container (Container.java:stop(156)) - Stopped HttpContext[/static,/static]
>     [junit] 2007-07-19 19:12:45,481 INFO  util.Container (Container.java:stop(156)) - Stopped org.mortbay.jetty.Server@f1916f
>     [junit] 2007-07-19 19:12:45,496 INFO  dfs.DataNode (DataNode.java:run(692)) - Exiting DataXceiveServer due to java.net.SocketException: socket closed
>     [junit] 2007-07-19 19:12:45,496 WARN  dfs.DataNode (DataNode.java:offerService(568)) - java.io.IOException: java.lang.InterruptedException
>     [junit] 	at org.apache.hadoop.fs.DF.doDF(DF.java:71)
>     [junit] 	at org.apache.hadoop.fs.DF.getCapacity(DF.java:89)
>     [junit] 	at org.apache.hadoop.dfs.FSDataset$FSVolume.getCapacity(FSDataset.java:292)
>     [junit] 	at org.apache.hadoop.dfs.FSDataset$FSVolumeSet.getCapacity(FSDataset.java:379)
>     [junit] 	at org.apache.hadoop.dfs.FSDataset.getCapacity(FSDataset.java:466)
>     [junit] 	at org.apache.hadoop.dfs.DataNode.offerService(DataNode.java:493)
>     [junit] 	at org.apache.hadoop.dfs.DataNode.run(DataNode.java:1306)
>     [junit] 	at java.lang.Thread.run(Thread.java:595)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

