hadoop-mapreduce-issues mailing list archives

From "Balaji Rajagopalan (JIRA)" <j...@apache.org>
Subject [jira] Commented: (MAPREDUCE-1890) Create automated test scenarios for decommissioning of task trackers
Date Fri, 25 Jun 2010 08:34:50 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-1890?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12882492#action_12882492

Balaji Rajagopalan commented on MAPREDUCE-1890:

+    //String confFile = "mapred-site.xml";
+    //Hashtable<String,Object> prop = new Hashtable<String,Object>();
+    //prop.put("mapred.hosts.exclude", "/tmp/mapred.exclude");
+    //cluster.restartClusterWithNewConfig(prop, confFile);

There is a bunch of commented-out code; it looks like you might need the above code to make
it work correctly.
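For context, the exclude file that the commented-out setup pushes to the cluster would correspond to a mapred-site.xml entry roughly like the following (the property name and the /tmp/mapred.exclude path are taken from the snippet above; treat this as a sketch, not the patch's actual config):

```xml
<!-- Sketch of the mapred-site.xml entry implied by the commented-out setup;
     the path /tmp/mapred.exclude comes from the snippet above. -->
<property>
  <name>mapred.hosts.exclude</name>
  <value>/tmp/mapred.exclude</value>
</property>
```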

Also the code in tear down is commented out. 

+    List<TTClient> ttClients = cluster.getTTClients();
+    //One slave is got
+    TTClient ttClient = (TTClient)ttClients.get(0);

I have a similar helper method, that I will check in AbstractDaemonCluster with health script,
it is better to use that helper, I will check in this code shortly. 

//The client shich needs tobe decommissioned is put in the exclude path.
Please fix the typos in the comment ("shich" → "which", "tobe" → "to be").

One question: you were telling me that the hadoopqa user is executing the mradmin refresh, which
requires mapred privilege, so how is this achieved with this code? If this is going to be a
deployment-specific change to mapred-site.xml, we should call out that dependency so it is
documented.

> Create automated test scenarios for decommissioning of task trackers
> --------------------------------------------------------------------
>                 Key: MAPREDUCE-1890
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-1890
>             Project: Hadoop Map/Reduce
>          Issue Type: Test
>          Components: test
>            Reporter: Iyappan Srinivasan
>         Attachments: TestDecomissioning.patch
> Test scenarios :
> 1) Put a healthy slave task tracker in the dfs.exclude file.
> 2) As a valid user, decommission a node in the cluster by issuing the command "hadoop mradmin -refreshNodes"
> 3) Make sure that the node is decommissioned.
> 4) Now take the task tracker out of the file.
> 5) As a valid user, again issue the command "hadoop mradmin -refreshNodes"
> 6) Make sure that the node is not in the decommission list.
> 7) Bring back that node.
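The scenario steps above can be modeled with a small, self-contained helper (hypothetical class and method names, not the Herriot test API or the patch's code): a tracker counts as decommissioned when its hostname appears in the exclude list that "hadoop mradmin -refreshNodes" picks up.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

// Hypothetical illustration of the decommission check in the scenario above:
// a tracker is "decommissioned" when its hostname appears in the exclude
// list referenced by mapred.hosts.exclude.
public class DecommissionCheck {

    // Returns the subset of trackers that would be decommissioned.
    public static List<String> decommissioned(List<String> trackers,
                                              Set<String> excludeList) {
        return trackers.stream()
                       .filter(excludeList::contains)
                       .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> trackers = Arrays.asList("tt1.example.com",
                                              "tt2.example.com",
                                              "tt3.example.com");

        // Steps 1-3: put one healthy tracker in the exclude file, refresh.
        Set<String> exclude = new HashSet<>(Arrays.asList("tt2.example.com"));
        System.out.println(decommissioned(trackers, exclude)); // [tt2.example.com]

        // Steps 4-6: take it back out of the file, refresh again.
        exclude.remove("tt2.example.com");
        System.out.println(decommissioned(trackers, exclude)); // []
    }
}
```

The real test would of course drive the cluster through TTClient and verify the tracker state after each refresh; this sketch only captures the expected membership logic.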

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
