hadoop-hdfs-issues mailing list archives

From "Harsh J (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-112) ClusterTestDFS fails
Date Sun, 17 Jul 2011 17:38:59 GMT

     [ https://issues.apache.org/jira/browse/HDFS-112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved HDFS-112.
--------------------------

    Resolution: Not A Problem

This JIRA has grown stale over the years and needs to be closed. The test framework has changed considerably since '06.

With the current mini clusters, a hosts array can be supplied to run daemons under different hostnames, and judging by the tests that exercise it, it appears to work fine if you want to use it for such purposes.
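
As a rough illustration of the hosts-array approach mentioned above, here is a minimal sketch using the MiniDFSCluster.Builder API. The hostnames and node count are placeholders, and the exact builder methods should be verified against the Hadoop version in use.

    // Sketch: start a mini cluster whose DataNodes register under distinct
    // hostnames supplied via the builder's hosts array.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.HdfsConfiguration;
    import org.apache.hadoop.hdfs.MiniDFSCluster;

    public class MultiHostMiniClusterSketch {
      public static void main(String[] args) throws Exception {
        Configuration conf = new HdfsConfiguration();
        MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf)
            .numDataNodes(3)
            // Placeholder hostnames for illustration; depending on the setup
            // they may need to be resolvable locally (e.g. via the hosts file).
            .hosts(new String[] {"localhost0", "localhost1", "localhost2"})
            .build();
        try {
          cluster.waitActive();
          System.out.println("DataNodes running: " + cluster.getDataNodes().size());
        } finally {
          cluster.shutdown();
        }
      }
    }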

> ClusterTestDFS fails
> --------------------
>
>                 Key: HDFS-112
>                 URL: https://issues.apache.org/jira/browse/HDFS-112
>             Project: Hadoop HDFS
>          Issue Type: Bug
>         Environment: local workstation (windows) 
>            Reporter: alan wootton
>            Assignee: Sameer Paranjpye
>         Attachments: ClusterTestFixes.patch, fix_clustertestdfs.patch
>
>
> The dfs unit tests from the ant target 'cluster' have been failing (ClusterTestDFSNamespaceLogging, ClusterTestDFS). I don't know if anyone but me cares about these tests, but I do. I would like to write better tests for dns. I think we all need that.
> They have been partially broken since "test.dfs.same.host.targets.allowed" went away and replication ceased for these tests.
> They got really broken when the NameNode stopped automatically formatting itself.
> Since they seem to be ignored, I took the liberty of changing how they work.
> The main thing is, you must put this into your hosts file:
> 127.0.0.1       localhost0
> 127.0.0.1       localhost1
> 127.0.0.1       localhost2
> 127.0.0.1       localhost3
> 127.0.0.1       localhost4
> 127.0.0.1       localhost5
> 127.0.0.1       localhost6
> 127.0.0.1       localhost7
> 127.0.0.1       localhost8
> 127.0.0.1       localhost9
> 127.0.0.1       localhost10
> 127.0.0.1       localhost11
> 127.0.0.1       localhost12
> 127.0.0.1       localhost13
> 127.0.0.1       localhost14
> 127.0.0.1       localhost15
> This way you can start DataNodes and TaskTrackers (up to 16 of them) with unique hostnames.
> Also, I changed all the places that used to call InetAddress.getLocalHost().getHostName() to get it from a new method in Configuration (this issue is the same as http://issues.apache.org/jira/browse/HADOOP-197).
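
For context on the hostname-from-configuration change described above, here is a minimal, hypothetical sketch of that pattern. The configuration key below is illustrative only and not an actual Hadoop property.

    import java.net.InetAddress;
    import org.apache.hadoop.conf.Configuration;

    public class HostnameFromConf {
      // Resolve the local hostname from configuration when a test overrides it,
      // otherwise fall back to the usual InetAddress lookup.
      public static String getLocalHostName(Configuration conf) throws Exception {
        String configured = conf.get("test.local.hostname"); // hypothetical key
        if (configured != null && !configured.isEmpty()) {
          return configured; // e.g. "localhost3" from the hosts file above
        }
        return InetAddress.getLocalHost().getHostName();
      }
    }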

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
