hadoop-common-dev mailing list archives

From "Owen O'Malley (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-3568) Don't need to use toString() on strings (code cleanup)
Date Mon, 16 Jun 2008 17:03:45 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Owen O'Malley updated HADOOP-3568:
----------------------------------

    Description: 
Don't need to call toString on a String type.  This occurs in several places in the test code.
 Patches below:




  was:
Don't need to call toString on a String type.  This occurs in several places in the test code.
 Patches below:

patch to org.apache.hadoop.dfs.TestDFSStartupVersions

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -37,7 +37,7 @@
   private static final Log LOG = LogFactory.getLog(
                                                    "org.apache.hadoop.dfs.TestDFSStartupVersions");
   private static Path TEST_ROOT_DIR = new Path(
-                                               System.getProperty("test.build.data","/tmp").toString().replace(' ', '+'));
+                                               System.getProperty("test.build.data","/tmp").replace(' ', '+'));
   private MiniDFSCluster cluster = null;
   
   /**



patch to org.apache.hadoop.mapred.lib.aggregate.TestAggregates

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -109,7 +109,7 @@
     Path outPath = new Path(OUTPUT_DIR, "part-00000");
     String outdata = TestMiniMRWithDFS.readOutput(outPath,job);
     System.out.println("full out data:");
-    System.out.println(outdata.toString());
+    System.out.println(outdata);
     outdata = outdata.substring(0, expectedOutput.toString().length());
 
     assertEquals(expectedOutput.toString(),outdata);



patch to org.apache.hadoop.mapred.NotificationTestCase

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -174,8 +174,7 @@
 
     // Hack for local FS that does not have the concept of a 'mounting point'
     if (isLocalFS()) {
-      String localPathRoot = System.getProperty("test.build.data","/tmp")
-        .toString().replace(' ', '+');;
+      String localPathRoot = System.getProperty("test.build.data","/tmp").replace(' ', '+');;
       inDir = new Path(localPathRoot, inDir);
       outDir = new Path(localPathRoot, outDir);
     }



patch to org.apache.hadoop.streaming.TestMultipleArchiveFiles

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -84,7 +84,7 @@
     dos.write(inputFileString.getBytes("UTF-8"));
     dos.close();
     
-    DataOutputStream out = fileSys.create(new Path(CACHE_ARCHIVE_1.toString()));
+    DataOutputStream out = fileSys.create(new Path(CACHE_ARCHIVE_1));
     ZipOutputStream zos = new ZipOutputStream(out);
     ZipEntry ze = new ZipEntry(CACHE_FILE_1.toString());
     zos.putNextEntry(ze);
@@ -112,7 +112,7 @@
     }
 
     return new String[] {
-      "-input", INPUT_FILE.toString(),
+      "-input", INPUT_FILE,
       "-output", OUTPUT_DIR,
       "-mapper", "xargs cat", 
       "-reducer", "cat",




Please don't put patches in the comments, and especially not in the description, which is sent
out each time someone comments on the issue. Please generate a patch from HADOOP_HOME via
"svn diff" and upload it to this JIRA. Then make the patch available for review by pressing
"submit patch." Thanks!
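
For example, assuming HADOOP_HOME points at the top of the svn working copy (the patch file
name below is only a common convention, not a requirement):

    cd $HADOOP_HOME
    svn diff > HADOOP-3568.patch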

BTW, here is your removed patch
patch to org.apache.hadoop.dfs.TestDFSStartupVersions

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -37,7 +37,7 @@
   private static final Log LOG = LogFactory.getLog(
                                                    "org.apache.hadoop.dfs.TestDFSStartupVersions");
   private static Path TEST_ROOT_DIR = new Path(
-                                               System.getProperty("test.build.data","/tmp").toString().replace(' ', '+'));
+                                               System.getProperty("test.build.data","/tmp").replace(' ', '+'));
   private MiniDFSCluster cluster = null;
   
   /**



patch to org.apache.hadoop.mapred.lib.aggregate.TestAggregates

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -109,7 +109,7 @@
     Path outPath = new Path(OUTPUT_DIR, "part-00000");
     String outdata = TestMiniMRWithDFS.readOutput(outPath,job);
     System.out.println("full out data:");
-    System.out.println(outdata.toString());
+    System.out.println(outdata);
     outdata = outdata.substring(0, expectedOutput.toString().length());
 
     assertEquals(expectedOutput.toString(),outdata);



patch to org.apache.hadoop.mapred.NotificationTestCase

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -174,8 +174,7 @@
 
     // Hack for local FS that does not have the concept of a 'mounting point'
     if (isLocalFS()) {
-      String localPathRoot = System.getProperty("test.build.data","/tmp")
-        .toString().replace(' ', '+');;
+      String localPathRoot = System.getProperty("test.build.data","/tmp").replace(' ', '+');;
       inDir = new Path(localPathRoot, inDir);
       outDir = new Path(localPathRoot, outDir);
     }



patch to org.apache.hadoop.streaming.TestMultipleArchiveFiles

Index: .
===================================================================
--- .	(revision 8259)
+++ .	(working copy)
@@ -84,7 +84,7 @@
     dos.write(inputFileString.getBytes("UTF-8"));
     dos.close();
     
-    DataOutputStream out = fileSys.create(new Path(CACHE_ARCHIVE_1.toString()));
+    DataOutputStream out = fileSys.create(new Path(CACHE_ARCHIVE_1));
     ZipOutputStream zos = new ZipOutputStream(out);
     ZipEntry ze = new ZipEntry(CACHE_FILE_1.toString());
     zos.putNextEntry(ze);
@@ -112,7 +112,7 @@
     }
 
     return new String[] {
-      "-input", INPUT_FILE.toString(),
+      "-input", INPUT_FILE,
       "-output", OUTPUT_DIR,
       "-mapper", "xargs cat", 
       "-reducer", "cat",


> Don't need to use toString() on strings (code cleanup)
> ------------------------------------------------------
>
>                 Key: HADOOP-3568
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3568
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: test
>    Affects Versions: 0.17.0
>            Reporter: Tim Halloran
>            Priority: Minor
>
> Don't need to call toString on a String type.  This occurs in several places in the test code.  Patches below:

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

