hadoop-common-commits mailing list archives

From ma...@apache.org
Subject svn commit: r1195861 [2/2] - /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
Date Tue, 01 Nov 2011 07:52:57 GMT

Modified: hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html?rev=1195861&r1=1195860&r2=1195861&view=diff
--- hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html (original)
+++ hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/releasenotes.html Tue Nov  1 07:52:57 2011
@@ -1 +1,5408 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
+<META http-equiv="Content-Type" content="text/html; charset=UTF-8">
+<title>Hadoop 0.23.0 Release Notes</title>
+<STYLE type="text/css">
+		H1 {font-family: sans-serif}
+		H2 {font-family: sans-serif; margin-left: 7mm}
+		TABLE {margin-left: 7mm}
+	</STYLE>
+<h1>Hadoop 0.23.0 Release Notes</h1>
+		These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
+<a name="changes"/>
+<h2>Changes since Hadoop 0.22</h2>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7778">HADOOP-7778</a>.
+     Major bug reported by tomwhite and fixed by tomwhite <br>
+     <b>FindBugs warning in Token.getKind()</b><br>
+     <blockquote>From https://builds.apache.org/job/PreCommit-HADOOP-Build/330//artifact/trunk/hadoop-common-project/patchprocess/newPatchFindbugsWarningshadoop-common.html<br><br>bq. org.apache.hadoop.security.token.Token.getKind() is unsynchronized, org.apache.hadoop.security.token.Token.setKind(Text) is synchronized<br><br>Looks like this was introduced by MAPREDUCE-2764.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7772">HADOOP-7772</a>.
+     Trivial improvement reported by stevel@apache.org and fixed by stevel@apache.org <br>
+     <b>javadoc the topology classes</b><br>
+     <blockquote>To help people understand and make changes to the Topology classes, their javadocs could be rounded off.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7771">HADOOP-7771</a>.
+     Blocker bug reported by johnvijoe and fixed by johnvijoe <br>
+     <b>NPE when running hdfs dfs -copyToLocal, -get etc</b><br>
+     <blockquote>NPE when running hdfs dfs -copyToLocal if the destination directory does not exist. The behavior in branch-0.20-security is to create the directory and copy/get the contents from source.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7770">HADOOP-7770</a>.
+     Blocker bug reported by raviprak and fixed by raviprak (fs)<br>
+     <b>ViewFS getFileChecksum throws FileNotFoundException for files in /tmp and /user</b><br>
+     <blockquote>Thanks to Rohini Palaniswamy for discovering this bug. To quote<br>bq. When doing getFileChecksum for path /user/hadoopqa/somefile, it is trying to fetch checksum for /user/user/hadoopqa/somefile. If /tmp/file, it is trying /tmp/tmp/file. Works fine for other FS operations.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7766">HADOOP-7766</a>.
+     Major bug reported by jnp and fixed by jnp <br>
+     <b>The auth to local mappings are not being respected, with webhdfs and security enabled.</b><br>
+     <blockquote>KerberosAuthenticationHandler reloads the KerberosName statically and overrides the auth to local mappings. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7764">HADOOP-7764</a>.
+     Blocker bug reported by jeagles and fixed by jeagles <br>
+     <b>Allow both ACL list and global path spec filters to HttpServer</b><br>
+     <blockquote>HttpServer allows setting global path spec filters in one constructor and an ACL list in another. There is currently no way, via the public API or constructors, to set both.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7763">HADOOP-7763</a>.
+     Major improvement reported by tomwhite and fixed by tomwhite (documentation)<br>
+     <b>Add top-level navigation to APT docs</b><br>
+     <blockquote>We need navigation menus for the APT docs that have been written so far.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7753">HADOOP-7753</a>.
+     Major sub-task reported by tlipcon and fixed by tlipcon (io, native, performance)<br>
+     <b>Support fadvise and sync_data_range in NativeIO, add ReadaheadPool class</b><br>
+     <blockquote>This JIRA adds JNI wrappers for sync_data_range and posix_fadvise. It also implements a ReadaheadPool class for future use from HDFS and MapReduce.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7749">HADOOP-7749</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (util)<br>
+     <b>Add NetUtils call which provides more help in exception messages</b><br>
+     <blockquote>In setting up MR2, I accidentally had a bad configuration value specified for one of the IP configs. I was getting a NumberFormatException parsing this config, but no indication as to what config value was being fetched. This JIRA is to add an API to NetUtils.createSocketAddr which takes the configuration name, so that any exceptions thrown will point back to where the user needs to fix it.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7745">HADOOP-7745</a>.
+     Major bug reported by raviprak and fixed by raviprak <br>
+     <b>I switched variable names in HADOOP-7509</b><br>
+     <blockquote>As Aaron pointed out on https://issues.apache.org/jira/browse/HADOOP-7509?focusedCommentId=13126725&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13126725 I stupidly swapped CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION with CommonConfigurationKeys.HADOOP_SECURITY_AUTHORIZATION.<br><br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7744">HADOOP-7744</a>.
+     Major bug reported by jeagles and fixed by jeagles (test)<br>
+     <b>Incorrect exit code for hadoop-core-test tests when exception thrown</b><br>
+     <blockquote>Please see MAPREDUCE-3179 for a full description.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7743">HADOOP-7743</a>.
+     Major improvement reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>Add Maven profile to create a full source tarball</b><br>
+     <blockquote>Currently we are building binary distributions only.<br><br>We should also build a full source distribution from where Hadoop can be built.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7740">HADOOP-7740</a>.
+     Minor bug reported by arpitgupta and fixed by arpitgupta (conf)<br>
+     <b>security audit logger is not on by default, fix the log4j properties to enable the logger</b><br>
+     <blockquote>Fixed security audit logger configuration. (Arpit Gupta via Eric Yang)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7737">HADOOP-7737</a>.
+     Major improvement reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>normalize hadoop-mapreduce &amp; hadoop-dist dist/tar build with common/hdfs</b><br>
+     <blockquote>Normalize the build of hadoop-mapreduce and hadoop-dist with hadoop-common and hadoop-hdfs, making the -Pdist and -Dtar maven options consistent.<br><br>* -Pdist should create the layout<br>* -Dtar should create the TAR</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7728">HADOOP-7728</a>.
+     Major bug reported by rramya and fixed by rramya (conf)<br>
+     <b>hadoop-setup-conf.sh should be modified to enable task memory manager</b><br>
+     <blockquote>Enable task memory management to be configurable via hadoop config setup script.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7724">HADOOP-7724</a>.
+     Major bug reported by gkesavan and fixed by arpitgupta <br>
+     <b>hadoop-setup-conf.sh should put proxy user info into the core-site.xml </b><br>
+     <blockquote>Fixed hadoop-setup-conf.sh to put proxy user in core-site.xml. (Arpit Gupta via Eric Yang)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7721">HADOOP-7721</a>.
+     Major bug reported by arpitgupta and fixed by jnp <br>
+     <b>dfs.web.authentication.kerberos.principal expects the full hostname and does not replace _HOST with the hostname</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7720">HADOOP-7720</a>.
+     Major improvement reported by arpitgupta and fixed by arpitgupta (conf)<br>
+     <b>improve the hadoop-setup-conf.sh to read in the hbase user and setup the configs</b><br>
+     <blockquote>Added parameter for HBase user to setup config script. (Arpit Gupta via Eric Yang)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7709">HADOOP-7709</a>.
+     Major improvement reported by jeagles and fixed by jeagles <br>
+     <b>Running a set of methods in a Single Test Class</b><br>
+     <blockquote>Instead of running every test method in a class, limit runs to specific test methods as described in the link below.<br><br>http://maven.apache.org/plugins/maven-surefire-plugin/examples/single-test.html<br><br>Upgrade to the latest version of maven-surefire-plugin that has this feature.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7708">HADOOP-7708</a>.
+     Critical bug reported by arpitgupta and fixed by eyang (conf)<br>
+     <b>config generator does not update the properties file if one exists already</b><br>
+     <blockquote>Fixed hadoop-setup-conf.sh to handle config file consistently. (Eric Yang)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7707">HADOOP-7707</a>.
+     Major improvement reported by arpitgupta and fixed by arpitgupta (conf)<br>
+     <b>improve config generator to allow users to specify proxy user, turn append on or off, turn webhdfs on or off</b><br>
+     <blockquote>Added toggle for dfs.support.append, webhdfs and hadoop proxy user to setup config script. (Arpit Gupta via Eric Yang)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7705">HADOOP-7705</a>.
+     Minor new feature reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
+     <b>Add a log4j back end that can push out JSON data, one per line</b><br>
+     <blockquote>If we had a back end for Log4j that pushed out log events as single-line JSON content, we&apos;d have something that is fairly straightforward to machine parse, though it may be harder to do than expected. Once working, HADOOP-6244 could use it.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7691">HADOOP-7691</a>.
+     Major bug reported by gkesavan and fixed by eyang <br>
+     <b>hadoop deb pkg should take a diff group id</b><br>
+     <blockquote>Fixed conflict uid for install packages. (Eric Yang)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7684">HADOOP-7684</a>.
+     Major bug reported by eyang and fixed by eyang (scripts)<br>
+     <b>jobhistory server and secondarynamenode should have init.d script</b><br>
+     <blockquote>Added init.d script for jobhistory server and secondary namenode. (Eric Yang)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7681">HADOOP-7681</a>.
+     Minor bug reported by arpitgupta and fixed by arpitgupta (conf)<br>
+     <b>log4j.properties is missing properties for security audit and hdfs audit should be changed to info</b><br>
+     <blockquote>(Arpit Gupta via Eric Yang)<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7671">HADOOP-7671</a>.
+     Major bug reported by raviprak and fixed by raviprak <br>
+     <b>Add license headers to hadoop-common/src/main/packages/templates/conf/</b><br>
+     <blockquote>hadoop-common/src/main/packages/templates/conf/ is not in the exclude list for the apache-rat plugin. This causes 10 release audit warnings for missing license headers (in the properties and xml files like hdfs-site.xml).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7668">HADOOP-7668</a>.
+     Minor improvement reported by sureshms and fixed by stevel@apache.org (util)<br>
+     <b>Add a NetUtils method that can tell if an InetAddress belongs to local host</b><br>
+     <blockquote>closing again</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7664">HADOOP-7664</a>.
+     Minor improvement reported by raviprak and fixed by raviprak (conf)<br>
+     <b>o.a.h.conf.Configuration complains of overriding a final parameter even if the value with which it&apos;s attempting to override is the same.</b><br>
+     <blockquote>o.a.h.conf.Configuration complains of overriding a final parameter even if the value with which it&apos;s attempting to override is the same.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7663">HADOOP-7663</a>.
+     Major bug reported by mayank_bansal and fixed by mayank_bansal (test)<br>
+     <b>TestHDFSTrash failing on 22</b><br>
+     <blockquote>Seems to have started failing recently in many commit builds as well as the last two nightly builds of 22:<br>https://builds.apache.org/hudson/job/Hadoop-Hdfs-22-branch/51/testReport/org.apache.hadoop.hdfs/TestHDFSTrash/testTrashEmptier/<br><br>https://issues.apache.org/jira/browse/HDFS-1967</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7662">HADOOP-7662</a>.
+     Major bug reported by tgraves and fixed by tgraves <br>
+     <b>logs servlet should use pathspec of /*</b><br>
+     <blockquote>The logs servlet in HttpServer should use a pathspec of /* instead of /.<br>      logContext.addServlet(AdminAuthorizedServlet.class, &quot;/*&quot;);<br><br>In making the changes for the yarn webapps (MAPREDUCE-2999), I registered a webapp to use &quot;/&quot;. This blocked the /logs servlet from working, because both had a pathSpec of &quot;/&quot; and the guice filter seemed to take precedence. Changing the pathspec of the logs servlet to /* fixes the issue.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7658">HADOOP-7658</a>.
+     Major bug reported by gkesavan and fixed by eyang <br>
+     <b>to fix hadoop config template</b><br>
+     <blockquote>hadoop rpm config template by default sets HADOOP_SECURE_DN_USER, HADOOP_SECURE_DN_LOG_DIR &amp; HADOOP_SECURE_DN_PID_DIR.<br>The above values should only be set for secure deployments:<br># On secure datanodes, user to run the datanode as after dropping privileges<br>export HADOOP_SECURE_DN_USER=${HADOOP_HDFS_USER}<br><br># Where log files are stored.  $HADOOP_HOME/logs by default.<br>export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER<br><br># Where log files are stored in the secure data environment.<br>export HADOOP_SE...</blockquote></li>
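A guard along the lines suggested above could look like this (a hedged sketch; the directory paths are hypothetical placeholders, not values from the Hadoop template):

```shell
# Only export the secure-datanode settings when a secure deployment has
# actually defined the DN user; otherwise leave them unset.
# The log/pid directories below are hypothetical placeholders.
if [ -n "${HADOOP_SECURE_DN_USER}" ]; then
  export HADOOP_SECURE_DN_LOG_DIR=/var/log/hadoop/secure
  export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop/secure
fi
echo "${HADOOP_SECURE_DN_LOG_DIR:-unset}"
```

On an insecure host, where HADOOP_SECURE_DN_USER is not defined, this prints `unset` and the secure-only variables never leak into the environment.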
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7655">HADOOP-7655</a>.
+     Major improvement reported by arpitgupta and fixed by arpitgupta <br>
+     <b>provide a small validation script that smoke tests the installed cluster</b><br>
+     <blockquote>Committed to trunk and v23, since code reviewed by Eric.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7642">HADOOP-7642</a>.
+     Major improvement reported by tucu00 and fixed by tomwhite (build)<br>
+     <b>create hadoop-dist module where TAR stitching would happen</b><br>
+     <blockquote>Instead of having a post-build script that stitches common &amp; hdfs &amp; mr, this should be done as part of the build when running &apos;mvn package -Pdist -Dtar&apos;.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7639">HADOOP-7639</a>.
+     Major bug reported by tgraves and fixed by tgraves <br>
+     <b>yarn ui not properly filtered in HttpServer</b><br>
+     <blockquote>Currently HttpServer only treats &quot;.html&quot; and &quot;.jsp&quot; as user-facing urls when you add a filter. For the new web framework in yarn, the pages no longer have the *.html or *.jsp suffixes and thus they are not properly being filtered. The yarn ui just uses paths; for it the url would be server:port/yarn/*</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7637">HADOOP-7637</a>.
+     Major bug reported by eyang and fixed by eyang (build)<br>
+     <b>Fair scheduler configuration file is not bundled in RPM</b><br>
+     <blockquote>205 build of tar is fine, but rpm failed with:<br><br>{noformat}<br>      [rpm] Processing files: hadoop-<br>      [rpm] warning: File listed twice: /usr/libexec<br>      [rpm] warning: File listed twice: /usr/libexec/hadoop-config.sh<br>      [rpm] warning: File listed twice: /usr/libexec/jsvc.i386<br>      [rpm] Checking for unpackaged file(s): /usr/lib/rpm/check-files /tmp/hadoop_package_build_hortonfo/BUILD<br>      [rpm] error: Installed (but unpackaged) file(s) found:<br>      [rpm]    /etc/hadoop/fai...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7633">HADOOP-7633</a>.
+     Major bug reported by arpitgupta and fixed by eyang (conf)<br>
+     <b>log4j.properties should be added to the hadoop conf on deploy</b><br>
+     <blockquote>currently the log4j properties are not present in the hadoop conf dir. We should add them so that log rotation happens appropriately and also define other logs that hadoop can generate for example the audit and the auth logs as well as the mapred summary logs etc.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7631">HADOOP-7631</a>.
+     Major bug reported by rramya and fixed by eyang (conf)<br>
+     <b>In mapred-site.xml, stream.tmpdir is mapped to ${mapred.temp.dir} which is undeclared.</b><br>
+     <blockquote>Streaming jobs seem to fail with the following exception:<br><br>{noformat}<br>Exception in thread &quot;main&quot; java.io.IOException: No such file or directory<br>        at java.io.UnixFileSystem.createFileExclusively(Native Method)<br>        at java.io.File.checkAndCreate(File.java:1704)<br>        at java.io.File.createTempFile(File.java:1792)<br>        at org.apache.hadoop.streaming.StreamJob.packageJobJar(StreamJob.java:603)<br>        at org.apache.hadoop.streaming.StreamJob.setJobConf(StreamJob.java:798)<br>        a...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7630">HADOOP-7630</a>.
+     Major bug reported by arpitgupta and fixed by eyang (conf)<br>
+     <b>hadoop-metrics2.properties should have a property *.period set to a default value for metrics</b><br>
+     <blockquote>currently the hadoop-metrics2.properties file does not have a value set for *.period<br><br>This property determines how often metrics are refreshed. We should set it to a default of 60.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7629">HADOOP-7629</a>.
+     Major bug reported by phunt and fixed by tlipcon <br>
+     <b>regression with MAPREDUCE-2289 - setPermission passed immutable FsPermission (rpc failure)</b><br>
+     <blockquote>MAPREDUCE-2289 introduced the following change:<br><br>{noformat}<br>+        fs.setPermission(stagingArea, JOB_DIR_PERMISSION);<br>{noformat}<br><br>JOB_DIR_PERMISSION is an immutable FsPermission which cannot be used in RPC calls, it results in the following exception:<br><br>{noformat}<br>2011-09-08 16:31:45,187 WARN org.apache.hadoop.ipc.Server: Unable to read call parameters for client<br>java.lang.RuntimeException: java.lang.NoSuchMethodException: org.apache.hadoop.fs.permission.FsPermission$2.&lt;init&gt;()<br>   ...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7627">HADOOP-7627</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (metrics, test)<br>
+     <b>Improve MetricsAsserts to give more understandable output on failure</b><br>
+     <blockquote>In developing a test case that uses MetricsAsserts, I had two issues:<br>1) the error output in the case that an assertion failed does not currently give any information as to the _actual_ value of the metric<br>2) there is no way to retrieve the metric variable (eg to assert that the sum of a metric over all DNs is equal to some value)<br><br>This JIRA is to improve this test class to fix the above issues.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7626">HADOOP-7626</a>.
+     Major bug reported by eyang and fixed by eyang (scripts)<br>
+     <b>Allow overwrite of HADOOP_CLASSPATH and HADOOP_OPTS</b><br>
+     <blockquote>Quote email from Ashutosh Chauhan:<br><br>bq. There is a bug in hadoop-env.sh which prevents hcatalog server to start in secure settings. Instead of adding classpath, it overrides them. I was not able to verify where the bug belongs to, in HMS or in hadoop scripts. Looks like hadoop-env.sh is generated from hadoop-env.sh.template in installation process by HMS. Hand crafted patch follows:<br><br>bq. - export HADOOP_CLASSPATH=$f<br>bq. +export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:$f<br><br>bq. -export HADOOP_OPTS=...</blockquote></li>
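The corrected pattern quoted from the patch can be sketched as follows (a minimal illustration; the jar path assigned to `f` is a hypothetical stand-in):

```shell
# Append to HADOOP_CLASSPATH instead of overwriting it, as the quoted
# patch does; f stands for one extra jar path (hypothetical value here).
f=/tmp/extra.jar
export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:$f
echo "${HADOOP_CLASSPATH}"
```

With `HADOOP_CLASSPATH` previously empty this prints `:/tmp/extra.jar`; with it already set, the existing entries are preserved in front, which is what the overwriting version broke.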
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7612">HADOOP-7612</a>.
+     Major improvement reported by tomwhite and fixed by tomwhite (build)<br>
+     <b>Change test-patch to run tests for all nested modules</b><br>
+     <blockquote>HADOOP-7561 changed the behaviour of test-patch to run tests for changed modules, however this was assuming a flat structure. Given the nested maven hierarchy we should always run all the common tests for any common change, all the HDFS tests for any HDFS change, and all the MapReduce tests for any MapReduce change.<br><br>In addition, we should do a top-level build to test compilation after any change.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7610">HADOOP-7610</a>.
+     Major bug reported by eyang and fixed by eyang (scripts)<br>
+     <b>/etc/profile.d does not exist on Debian</b><br>
+     <blockquote>As part of post installation script, there is a symlink created in /etc/profile.d/hadoop-env.sh to source /etc/hadoop/hadoop-env.sh.  Therefore, users do not need to configure HADOOP_* environment.  Unfortunately, /etc/profile.d only exists in Ubuntu.  [Section 9.9 of the Debian Policy|http://www.debian.org/doc/debian-policy/ch-opersys.html#s9.9] states:<br><br>{quote}<br>A program must not depend on environment variables to get reasonable defaults. (That&apos;s because these environment variables would ha...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7608">HADOOP-7608</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (io)<br>
+     <b>SnappyCodec check for Hadoop native lib is wrong</b><br>
+     <blockquote>Currently SnappyCodec is doing:<br><br>{code}<br>  public static boolean isNativeSnappyLoaded(Configuration conf) {<br>    return LoadSnappy.isLoaded() &amp;&amp; conf.getBoolean(<br>        CommonConfigurationKeys.IO_NATIVE_LIB_AVAILABLE_KEY,<br>        CommonConfigurationKeys.IO_NATIVE_LIB_AVAILABLE_DEFAULT);<br>  }<br>{code}<br><br>But the conf check is wrong as it defaults to true. Instead it should use *NativeCodeLoader.isNativeCodeLoaded()*</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7606">HADOOP-7606</a>.
+     Major bug reported by atm and fixed by tucu00 (test)<br>
+     <b>Upgrade Jackson to version 1.7.1 to match the version required by Jersey</b><br>
+     <blockquote>As of 2 days ago, 13 tests started failing, all with errors in Avro-related tests.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7604">HADOOP-7604</a>.
+     Critical bug reported by mahadev and fixed by mahadev <br>
+     <b>Hadoop Auth examples pom in 0.23 point to 0.24 versions.</b><br>
+     <blockquote>hadoop-auth-examples/pom.xml has references to 0.24 in the 0.23 branch.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7603">HADOOP-7603</a>.
+     Major bug reported by eyang and fixed by eyang <br>
+     <b>Set default hdfs, mapred uid, and hadoop group gid for RPM packages</b><br>
+     <blockquote>Set hdfs uid, mapred uid, and hadoop gid to fixed numbers (201, 202, and 123, respectively).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7599">HADOOP-7599</a>.
+     Major bug reported by eyang and fixed by eyang (scripts)<br>
+     <b>Improve hadoop setup conf script to setup secure Hadoop cluster</b><br>
+     <blockquote>Setting up a secure Hadoop cluster requires a lot of manual setup. The motivation of this jira is to provide setup scripts that automate setting up a secure Hadoop cluster.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7598">HADOOP-7598</a>.
+     Major bug reported by revans2 and fixed by revans2 (build)<br>
+     <b>smart-apply-patch.sh does not handle patching from a sub directory correctly.</b><br>
+     <blockquote>smart-apply-patch.sh does not apply valid patches from trunk, or from git like it was designed to do in some situations.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7595">HADOOP-7595</a>.
+     Major improvement reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>Upgrade dependency to Avro 1.5.3</b><br>
+     <blockquote>Avro 1.5.3 depends on Snappy-Java 1.5.3 which enables the use of its SO file from the java.library.path</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7594">HADOOP-7594</a>.
+     Major new feature reported by szetszwo and fixed by szetszwo <br>
+     <b>Support HTTP REST in HttpServer</b><br>
+     <blockquote>Provide an API in HttpServer for supporting HTTP REST.<br><br>This is a part of HDFS-2284.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7593">HADOOP-7593</a>.
+     Major bug reported by szetszwo and fixed by umamaheswararao (test)<br>
+     <b>AssertionError in TestHttpServer.testMaxThreads()</b><br>
+     <blockquote>TestHttpServer passed but there were AssertionError in the output.<br>{noformat}<br>11/08/30 03:35:56 INFO http.TestHttpServer: HTTP server started: http://localhost:52974/<br>Exception in thread &quot;pool-1-thread-61&quot; java.lang.AssertionError: <br>	at org.junit.Assert.fail(Assert.java:91)<br>	at org.junit.Assert.assertTrue(Assert.java:43)<br>	at org.junit.Assert.assertTrue(Assert.java:54)<br>	at org.apache.hadoop.http.TestHttpServer$1.run(TestHttpServer.java:164)<br>	at java.util.concurrent.ThreadPoolExecutor$Worker.ru...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7589">HADOOP-7589</a>.
+     Major bug reported by revans2 and fixed by revans2 (build)<br>
+     <b>Prefer mvn test -DskipTests over mvn compile in test-patch.sh</b><br>
+     <blockquote>I got a failure running test-patch with a clean .m2 directory.<br><br>To quote Alejandro:<br>{quote}<br>The reason for this failure is because of how Maven reactor/dependency<br>resolution works (IMO a bug).<br><br>Maven reactor/dependency resolution is smart enough to create the classpath<br>using the classes from all modules being built.<br><br>However, this smartness falls short just a bit. The dependencies are<br>resolved using the deepest maven phase used by current mvn invocation. If<br>you are doing &apos;mvn compile&apos; you don...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7580">HADOOP-7580</a>.
+     Major bug reported by sseth and fixed by sseth <br>
+     <b>Add a version of getLocalPathForWrite to LocalDirAllocator which doesn&apos;t create dirs</b><br>
+     <blockquote>Required in MR where directories are created by ContainerExecutor (mrv2) / TaskController (0.20) as a specific user.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7579">HADOOP-7579</a>.
+     Major task reported by tucu00 and fixed by tucu00 (security)<br>
+     <b>Rename package names from alfredo to auth</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7578">HADOOP-7578</a>.
+     Major bug reported by mahadev and fixed by mahadev <br>
+     <b>Fix test-patch to be able to run on MR patches.</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7576">HADOOP-7576</a>.
+     Major bug reported by tomwhite and fixed by szetszwo (security)<br>
+     <b>Fix findbugs warnings in Hadoop Auth (Alfredo)</b><br>
+     <blockquote>Found in HADOOP-7567: https://builds.apache.org/job/PreCommit-HADOOP-Build/65//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-alfredo.html</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7575">HADOOP-7575</a>.
+     Minor bug reported by jeagles and fixed by jeagles (fs)<br>
+     <b>Support fully qualified paths as part of LocalDirAllocator</b><br>
+     <blockquote>Contexts with configuration path strings using fully qualified paths (e.g. file:///tmp instead of /tmp) mistakenly creates a directory named &apos;file:&apos; and sub-directories in the current local file system working directory.</blockquote></li>
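The failure mode described above can be reproduced outside Hadoop with any path API that treats the URI string verbatim (sketch only; run in a scratch directory):

```shell
# A fully qualified URI handed to a filesystem call as-is is a *relative*
# path whose first component is the literal directory "file:".
cd "$(mktemp -d)"
mkdir -p "file:///tmp/subdir"   # intended /tmp/subdir
ls -d "file:"
# → file:
```

The stray `file:` directory (with `tmp/subdir` nested under it) appears in the working directory, which is exactly the misbehavior the issue describes for LocalDirAllocator.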
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7568">HADOOP-7568</a>.
+     Major bug reported by shv and fixed by zero45 (io)<br>
+     <b>SequenceFile should not print into stdout</b><br>
+     <blockquote>The following line in {{SequenceFile.Reader.initialize()}} should be removed:<br>{code}<br>System.out.println(&quot;Setting end to &quot; + end);<br>{code}<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7566">HADOOP-7566</a>.
+     Major bug reported by mahadev and fixed by tucu00 <br>
+     <b>MR tests are failing  webapps/hdfs not found in CLASSPATH</b><br>
+     <blockquote>While running ant tests, the tests are failing with the following trace:<br><br>{noformat}<br><br>webapps/hdfs not found in CLASSPATH<br>java.io.FileNotFoundException: webapps/hdfs not found in CLASSPATH<br>        at org.apache.hadoop.http.HttpServer.getWebAppsPath(HttpServer.java:470)<br>        at org.apache.hadoop.http.HttpServer.&lt;init&gt;(HttpServer.java:186)<br>        at org.apache.hadoop.http.HttpServer.&lt;init&gt;(HttpServer.java:147)<br>        at org.apache.hadoop.hdfs.server.namenode.NameNodeHttpServer$1.run(NameNo...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7564">HADOOP-7564</a>.
+     Major sub-task reported by tomwhite and fixed by tomwhite <br>
+     <b>Remove test-patch SVN externals</b><br>
+     <blockquote>With the new top-level test-patch script in dev-support, the SVN externals for the old test-patch scripts are no longer needed.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7563">HADOOP-7563</a>.
+     Major bug reported by eyang and fixed by eyang (scripts)<br>
+     <b>hadoop-config.sh setup CLASSPATH, HADOOP_HDFS_HOME and HADOOP_MAPRED_HOME incorrectly</b><br>
+     <blockquote>HADOOP_HDFS_HOME and HADOOP_MAPRED_HOME were set to HADOOP_PREFIX/share/hadoop/hdfs and HADOOP_PREFIX/share/hadoop/mapreduce.  This setup confuses the location of the hdfs and mapred scripts.  Instead the script should look for the hdfs and mapred scripts in HADOOP_PREFIX/bin.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7561">HADOOP-7561</a>.
+     Major sub-task reported by tomwhite and fixed by tomwhite <br>
+     <b>Make test-patch only run tests for changed modules</b><br>
+     <blockquote>By running test-patch from trunk we can check that a change in one project (e.g. common) doesn&apos;t cause compile errors in other projects (e.g. HDFS). To get this to work we only need to run tests for the modules that are affected by the patch.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7560">HADOOP-7560</a>.
+     Major sub-task reported by tucu00 and fixed by tucu00 <br>
+     <b>Make hadoop-common a POM module with sub-modules (common &amp; alfredo)</b><br>
+     <blockquote>Currently hadoop-common is a JAR module, thus it cannot aggregate sub-modules.<br><br>Changing it to POM module it makes it an aggregator module, all the code under hadoop-common must be moved to a sub-module.<br><br>I.e.:<br><br>mkdir hadoop-common-project<br><br>mv hadoop-common hadoop-common-project<br><br>mv hadoop-alfredo hadoop-common-project<br><br>hadoop-common-project/pom.xml is a POM module that aggregates common &amp; alfredo<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7555">HADOOP-7555</a>.
+     Trivial improvement reported by atm and fixed by atm (build)<br>
+     <b>Add eclipse-generated files to .gitignore</b><br>
+     <blockquote>The .gitignore file in the hadoop-mapreduce directory specifically excludes .classpath, .settings, and .project files/dirs. We should move these excludes to the top level .gitignore so that Common and HDFS have these files excluded as well.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7552">HADOOP-7552</a>.
+     Minor improvement reported by eli and fixed by eli (fs)<br>
+     <b>FileUtil#fullyDelete doesn&apos;t throw IOE but lists it in the throws clause</b><br>
+     <blockquote>FileUtil#fullyDelete doesn&apos;t throw IOException so it shouldn&apos;t have IOException in its throws clause. Having it listed makes it easy to think you&apos;ll get an IOException, e.g. trying to delete a non-existent file or on an IO error accessing the local file, but you don&apos;t.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7547">HADOOP-7547</a>.
+     Minor bug reported by umamaheswararao and fixed by umamaheswararao (io)<br>
+     <b>Fix the warning in writable classes.[ WritableComparable is a raw type. References to generic type WritableComparable&lt;T&gt; should be parameterized  ]</b><br>
+     <blockquote>WritableComparable is a raw type. References to generic type WritableComparable&lt;T&gt; should be parameterized.<br><br>Also address the same in example implementation in WritableComparable interface&apos;s javadoc.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7545">HADOOP-7545</a>.
+     Critical bug reported by tlipcon and fixed by tlipcon (build, test)<br>
+     <b>common -tests jar should not include properties and configs</b><br>
+     <blockquote>This is the cause of HDFS-2242. The -tests jar generated from the common build should only include the test classes, and not the test resources.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7536">HADOOP-7536</a>.
+     Major bug reported by kihwal and fixed by tucu00 (build)<br>
+     <b>Correct the dependency version regressions introduced in HADOOP-6671</b><br>
+     <blockquote>I just noticed the versions specified for dependencies have gone backward with HADOOP-6671.<br>To name a few,<br>* commons-logging  was 1.1.1, now 1.0.4<br>* commons-logging-api  was 1.1, now 1.0.4<br>* slf4j was 1.5.11, now 1.5.8<br><br>There might be more.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7533">HADOOP-7533</a>.
+     Major sub-task reported by tomwhite and fixed by tomwhite <br>
+     <b>Allow test-patch to be run from any subproject directory </b><br>
+     <blockquote>Currently dev-support/test-patch.sh can only be run from the top-level (and only for hadoop-common).</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7531">HADOOP-7531</a>.
+     Major improvement reported by eli and fixed by eli (util)<br>
+     <b>Add servlet util methods for handling paths in requests </b><br>
+     <blockquote>Common side of HDFS-2235.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7529">HADOOP-7529</a>.
+     Critical bug reported by tlipcon and fixed by vicaya (metrics)<br>
+     <b>Possible deadlock in metrics2</b><br>
+     <blockquote>Lock cycle detected by jcarder between MetricsSystemImpl and DefaultMetricsSystem</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7528">HADOOP-7528</a>.
+     Major sub-task reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>Maven build fails in Windows</b><br>
+     <blockquote>Maven does not run on Windows for the following reasons:<br><br>* Enforcer plugin restricts build to Unix<br>* Ant run snippets to create TAR are not Cygwin friendly</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7526">HADOOP-7526</a>.
+     Minor test reported by eli and fixed by eli (fs)<br>
+     <b>Add TestPath tests for URI conversion and reserved characters  </b><br>
+     <blockquote>TestPath needs tests that cover URI conversion (eg places where Paths and URIs differ) and handling of URI reserved characters in paths. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7525">HADOOP-7525</a>.
+     Major sub-task reported by tomwhite and fixed by tomwhite (scripts)<br>
+     <b>Make arguments to test-patch optional</b><br>
+     <blockquote>Currently you have to specify all the arguments to test-patch.sh, which makes it cumbersome to use. We should make all arguments except the patch file optional. </blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7523">HADOOP-7523</a>.
+     Blocker bug reported by jlee@mindset-media.com and fixed by jlee@mindset-media.com (test)<br>
+     <b>Test org.apache.hadoop.fs.TestFilterFileSystem fails due to java.lang.NoSuchMethodException</b><br>
+     <blockquote>Test org.apache.hadoop.fs.TestFilterFileSystem fails due to java.lang.NoSuchMethodException. Here is the error message:<br><br>-------------------------------------------------------------------------------<br>Test set: org.apache.hadoop.fs.TestFilterFileSystem<br>-------------------------------------------------------------------------------<br>Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.232 sec &lt;&lt;&lt; FAILURE!<br>testFilterFileSystem(org.apache.hadoop.fs.TestFilterFileSystem)  Time elapsed...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7520">HADOOP-7520</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>hadoop-main fails to deploy</b><br>
+     <blockquote>Doing a Maven deployment hadoop-main (trunk/pom.xml) fails to deploy because it does not have the distribution management information.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7515">HADOOP-7515</a>.
+     Major sub-task reported by tomwhite and fixed by tomwhite (build)<br>
+     <b>test-patch reports the wrong number of javadoc warnings</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7512">HADOOP-7512</a>.
+     Trivial task reported by qwertymaniac and fixed by qwertymaniac (documentation)<br>
+     <b>Fix example mistake in WritableComparable javadocs</b><br>
+     <blockquote>From IRC, via uberj:<br><br>{code}<br>[9:58pm] uberj: http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/WritableComparable.html<br>[9:58pm] uberj: In the example it says &quot;int thatValue = ((IntWritable)o).value;&quot;<br>[9:59pm] uberj: should &apos;o&apos; be replaced with &apos;w&apos;?<br>[9:59pm] uberj: int thatValue = ((IntWritable)w).value;<br>{code}<br><br>Attaching patch for s/w/o.</blockquote></li>
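For reference, a minimal sketch of the corrected comparison from that javadoc example. The class name MyKey is hypothetical, and plain Comparable stands in for WritableComparable so the snippet compiles without Hadoop on the classpath:

```java
// Sketch of the corrected compareTo from the WritableComparable javadoc
// example. MyKey is a hypothetical stand-in; Comparable is used instead of
// WritableComparable so this compiles without Hadoop.
public class MyKey implements Comparable<MyKey> {
    private final int value;

    public MyKey(int value) { this.value = value; }

    @Override
    public int compareTo(MyKey w) {
        int thisValue = this.value;
        int thatValue = w.value;  // the javadoc mistakenly referenced 'o' here
        return (thisValue < thatValue ? -1 : (thisValue == thatValue ? 0 : 1));
    }

    public static void main(String[] args) {
        System.out.println(new MyKey(1).compareTo(new MyKey(2)));
    }
}
```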
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7509">HADOOP-7509</a>.
+     Trivial improvement reported by raviprak and fixed by raviprak <br>
+     <b>Improve message when Authentication is required</b><br>
+     <blockquote>Thanks Aaron and Suresh!<br><br>Marking as resolved fixed since changes have gone in.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7508">HADOOP-7508</a>.
+     Major sub-task reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>compiled nativelib is in wrong directory and it is not picked up by surefire setup</b><br>
+     <blockquote>The location of the compiled native libraries differs from the one the surefire plugin (which runs test cases) is configured to use.<br><br>This makes test cases that use native libs fail to load them.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7507">HADOOP-7507</a>.
+     Major bug reported by jwfbean and fixed by tucu00 (metrics)<br>
+     <b>jvm metrics all use the same namespace</b><br>
+     <blockquote>JVM metrics published to Ganglia now include the process name as part of the gmetric name.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7502">HADOOP-7502</a>.
+     Major sub-task reported by vicaya and fixed by vicaya <br>
+     <b>Use canonical (IDE friendly) generated-sources directory for generated sources</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7501">HADOOP-7501</a>.
+     Major sub-task reported by tucu00 and fixed by tomwhite (build)<br>
+     <b>publish Hadoop Common artifacts (post HADOOP-6671) to Apache SNAPSHOTs repo</b><br>
+     <blockquote>A *distributionManagement* section must be added to the hadoop-project POM with the SNAPSHOTs section, then &apos;mvn deploy&apos; will push the artifacts to it.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7499">HADOOP-7499</a>.
+     Major bug reported by naisbitt and fixed by naisbitt (util)<br>
+     <b>Add method for doing a sanity check on hostnames in NetUtils</b><br>
+     <blockquote>As part of MAPREDUCE-2489, we need a method in NetUtils to do a sanity check on hostnames</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7498">HADOOP-7498</a>.
+     Major sub-task reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>Remove legacy TAR layout creation</b><br>
+     <blockquote>Currently the build creates 2 different tarball layouts.<br><br>One is the legacy one, the layout used until 0.22 (ant tar &amp; mvn package -Ptar).<br><br>The other is the new one, the layout used in trunk that mimics the Unix layout (ant binary &amp; mvn package -Pbintar).<br><br>The legacy layout is of no use as all the scripts have been modified to work with the new layout only.<br><br>We should thus remove the legacy layout generation.<br><br>In addition we could rename the current &apos;bintar&apos; to just &apos;tar&apos;.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7496">HADOOP-7496</a>.
+     Major sub-task reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>break Maven TAR &amp; bintar profiles into just LAYOUT &amp; TAR proper</b><br>
+     <blockquote>Currently the tar &amp; bintar profile create the layout and create tarball.<br><br>For development it would be convenient to break them into layout and tar, thus not having to pay the overhead of TARing up.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7493">HADOOP-7493</a>.
+     Major new feature reported by umamaheswararao and fixed by umamaheswararao (io)<br>
+     <b>[HDFS-362] Provide ShortWritable class in hadoop.</b><br>
+     <blockquote>As part of HDFS-362, Provide the ShortWritable class.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7491">HADOOP-7491</a>.
+     Major improvement reported by eli and fixed by eli (scripts)<br>
+     <b>hadoop command should respect HADOOP_OPTS when given a class name </b><br>
+     <blockquote>When using the hadoop command, the HADOOP_OPTS and HADOOP_CLIENT_OPTS options are not passed through.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7474">HADOOP-7474</a>.
+     Major improvement reported by jnp and fixed by jnp <br>
+     <b>Refactor ClientCache out of WritableRpcEngine.</b><br>
+     <blockquote>This jira captures the changes in common corresponding to MAPREDUCE-2707.<br>Moving ClientCache out into its own class makes sense because it can be used by other RpcEngine implementations as well.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7472">HADOOP-7472</a>.
+     Minor improvement reported by kihwal and fixed by kihwal (ipc)<br>
+     <b>RPC client should deal with the IP address changes</b><br>
+     <blockquote>The current RPC client implementation and the client-side callers assume that the hostname-address mappings of servers never change. The resolved address is stored in an immutable InetSocketAddress object above/outside RPC, and the reconnect logic in the RPC Connection implementation also trusts the resolved address that was passed down.<br><br>If the NN suffers a failure that requires migration, it may be started on a different node with a different IP address. In this case, even if the name-addre...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7471">HADOOP-7471</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (build)<br>
+     <b>the saveVersion.sh script sometimes fails to extract SVN URL</b><br>
+     <blockquote>When using an SVN checkout of the source, sometimes the {{svn info}} command outputs a &apos;Copied from URL: ###&apos; line in addition to the &apos;URL: ###&apos; line.<br><br>This breaks the saveVersion.sh script, which assumes there is only one line in the output of {{svn info}} that contains the word URL.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7469">HADOOP-7469</a>.
+     Minor sub-task reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
+     <b>add a standard handler for socket connection problems which improves diagnostics</b><br>
+     <blockquote>connection refused, connection timed out, no route to host, etc, are classic IOExceptions that can be raised in a lot of parts of the code. The standard JDK exceptions are useless for debugging as they <br># don&apos;t include the destination (host, port) that can be used in diagnosing service dead/blocked problems<br># don&apos;t include any source hostname that can be used to handle routing issues<br># assume the reader understands the TCP stack.<br>It&apos;s obvious from the -user lists that a lot of people hit thes...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7465">HADOOP-7465</a>.
+     Trivial sub-task reported by xiexianshan and fixed by xiexianshan (fs, ipc)<br>
+     <b>A several tiny improvements for the LOG format</b><br>
+     <blockquote>There are several fields in the log where space characters are missing.<br>For instance:<br>src/java/org/apache/hadoop/ipc/Client.java(248): LOG.debug(&quot;The ping interval is&quot; + this.pingInterval + &quot;ms.&quot;);<br>src/java/org/apache/hadoop/fs/LocalDirAllocator.java(235):  LOG.warn( localDirs[i] + &quot;is not writable\n&quot;, de);<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7463">HADOOP-7463</a>.
+     Minor improvement reported by mahadev and fixed by mahadev <br>
+     <b>Adding a configuration parameter to SecurityInfo interface.</b><br>
+     <blockquote>HADOOP-6929 allowed implementations/providers of SecurityInfo to be configurable via service class loaders. For adding security to TunnelProtocols, configuration is needed to figure out which particular interface getKerberosInfo is called for. Just the class name is not enough since it is always TunnelProtocol for all the interfaces. I propose adding a config to getKerberosInfo, so that it is easy for TunnelProtocols to get the information they need.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7460">HADOOP-7460</a>.
+     Major improvement reported by dhruba and fixed by usmanm (fs)<br>
+     <b>Support for pluggable Trash policies</b><br>
+     <blockquote>It would be beneficial to make the Trash policy pluggable. One primary use-case for this is to archive files (in some remote store) when they get removed by Trash emptier.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7457">HADOOP-7457</a>.
+     Blocker improvement reported by jghoman and fixed by jghoman (documentation)<br>
+     <b>Remove out-of-date Chinese language documentation</b><br>
+     <blockquote>The Chinese language documentation hasn&apos;t been updated (other than copyright years and svn moves) since its original contribution several years ago.  Worse than no docs are out-of-date, wrong docs.  We should delete them from the source tree.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7451">HADOOP-7451</a>.
+     Major improvement reported by mattf and fixed by mattf <br>
+     <b>merge for MR-279: Generalize StringUtils#join</b><br>
+     <blockquote>Fix incomplete merge from yahoo-merge branch to trunk: <br>-r 1079167: Generalize StringUtils::join (Chris Douglas)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7449">HADOOP-7449</a>.
+     Major improvement reported by mattf and fixed by mattf <br>
+     <b>merge for MR-279: add Data(In,Out)putByteBuffer to work with ByteBuffer similar to Data(In,Out)putBuffer for byte[]</b><br>
+     <blockquote>Fix incomplete merge from yahoo-merge branch to trunk: <br>-r 1079163: Added Data(In,Out)putByteBuffer to work with ByteBuffer similar to Data(In,Out)putBuffer for byte[]. (Chris Douglas)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7448">HADOOP-7448</a>.
+     Major improvement reported by mattf and fixed by mattf <br>
+     <b>merge for MR-279: HttpServer /stacks servlet should use plain text content type</b><br>
+     <blockquote>Fix incomplete merge from yahoo-merge branch to trunk: <br>-r 1079157: Fix content type for /stacks servlet (Luke Lu)<br>-r 1079164: No need to escape plain text (Luke Lu)<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7446">HADOOP-7446</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon (native)<br>
+     <b>Implement CRC32C native code using SSE4.2 instructions</b><br>
+     <blockquote>Once HADOOP-7445 is implemented, we can get further performance improvements by implementing CRC32C using the hardware support available in SSE4.2. This support should be dynamically enabled based on CPU feature flags, and of course should be ifdeffed properly so that it doesn&apos;t break the build on architectures/platforms where it&apos;s not available.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7445">HADOOP-7445</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon (native, util)<br>
+     <b>Implement bulk checksum verification using efficient native code</b><br>
+     <blockquote>Once HADOOP-7444 is implemented (&quot;bulk&quot; API for checksums), good performance gains can be had by implementing bulk checksum operations using JNI. This JIRA is to add checksum support to the native libraries. Of course if native libs are not available, it will still fall back to the pure-Java implementations.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7444">HADOOP-7444</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon <br>
+     <b>Add Checksum API to verify and calculate checksums &quot;in bulk&quot;</b><br>
+     <blockquote>Currently, the various checksum types only provide the capability to calculate the checksum of a range of a byte array. For HDFS-2080, it&apos;s advantageous to provide an API that, given a buffer with some number of &quot;checksum chunks&quot;, can either calculate or verify the checksums of all of the chunks. For example, given a 4KB buffer and a 512-byte chunk size, it would calculate or verify 8 CRC32s in one call.<br><br>This allows efficient JNI-based checksum implementations since the cost of crossing the ...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7443">HADOOP-7443</a>.
+     Major new feature reported by tlipcon and fixed by tlipcon (io, util)<br>
+     <b>Add CRC32C as another DataChecksum implementation</b><br>
+     <blockquote>CRC32C is another checksum very similar to our existing CRC32, but with a different polynomial. The chief advantage of this other polynomial is that SSE4.2 includes hardware support for its calculation. HDFS-2080 is the umbrella JIRA which proposes using this new polynomial to save substantial amounts of CPU.</blockquote></li>
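To illustrate the polynomial difference, here is a small standalone comparison using the JDK's own CRC32 and CRC32C classes rather than Hadoop's DataChecksum (note: java.util.zip.CRC32C requires Java 9+, well after this release). The commented values are the standard published check values for the test string "123456789":

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.CRC32C;

// Standalone comparison of the two CRC polynomials using JDK classes only.
// java.util.zip.CRC32C needs Java 9+; Hadoop's DataChecksum is not used here.
public class CrcCompare {
    public static long crc32(byte[] data) {
        CRC32 c = new CRC32();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static long crc32c(byte[] data) {
        CRC32C c = new CRC32C();
        c.update(data, 0, data.length);
        return c.getValue();
    }

    public static void main(String[] args) {
        byte[] check = "123456789".getBytes(StandardCharsets.US_ASCII);
        // Standard check values: CRC-32 -> CBF43926, CRC-32C -> E3069283
        System.out.printf("CRC32  = %08X%n", crc32(check));
        System.out.printf("CRC32C = %08X%n", crc32c(check));
    }
}
```

The CRC-32C (Castagnoli) polynomial is what the SSE4.2 `crc32` instruction accelerates, which is the motivation for HDFS-2080.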
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7442">HADOOP-7442</a>.
+     Major bug reported by atm and fixed by atm (conf, documentation)<br>
+     <b>Docs in core-default.xml still reference deprecated config &quot;topology.script.file.name&quot;</b><br>
+     <blockquote>HADOOP-6233 renamed the config &quot;{{topology.script.file.name}}&quot; to &quot;{{net.topology.script.file.name}}&quot; but missed a few spots in the docs of core-default.xml.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7440">HADOOP-7440</a>.
+     Major bug reported by tlipcon and fixed by tlipcon <br>
+     <b>HttpServer.getParameterValues throws NPE for missing parameters</b><br>
+     <blockquote>If the requested parameter was not specified in the request, the raw request&apos;s getParameterValues function returns null. Thus, trying to access {{unquoteValue.length}} throws NPE.</blockquote></li>
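A hypothetical sketch of the shape of the fix (names are illustrative, and a Map stands in for the servlet request so the example is self-contained): guard the null return before touching `.length`.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the raw request returns null for a missing parameter,
// so guard before dereferencing. A Map stands in for HttpServletRequest here.
public class QuoteParams {
    public static String[] getParameterValues(Map<String, String[]> request, String name) {
        String[] unquoteValue = request.get(name);  // null when the parameter is absent
        if (unquoteValue == null) {
            return null;  // previously: unquoteValue.length threw the NPE
        }
        String[] result = new String[unquoteValue.length];
        for (int i = 0; i < unquoteValue.length; i++) {
            result[i] = unquoteValue[i];  // the real servlet HTML-quotes each value
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, String[]> req = new HashMap<>();
        req.put("log", new String[] {"INFO"});
        System.out.println(getParameterValues(req, "missing"));  // null, no NPE
    }
}
```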
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7438">HADOOP-7438</a>.
+     Major improvement reported by raviprak and fixed by raviprak <br>
+     <b>Using the hadoop-daemon.sh script to start nodes leads to a deprecated warning</b><br>
+     <blockquote>hadoop-daemon.sh calls common/bin/hadoop for hdfs/bin/hdfs tasks and so common/bin/hadoop complains its deprecated for those uses.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7437">HADOOP-7437</a>.
+     Major bug reported by umamaheswararao and fixed by umamaheswararao (io)<br>
+     <b>IOUtils.copyBytes will suppress stream closure exceptions.</b><br>
+     <blockquote>{code}<br><br>public static void copyBytes(InputStream in, OutputStream out, long count,<br>      boolean close) throws IOException {<br>    byte buf[] = new byte[4096];<br>    long bytesRemaining = count;<br>    int bytesRead;<br><br>    try {<br>      .............<br>      .............<br>    } finally {<br>      if (close) {<br>        closeStream(out);<br>        closeStream(in);<br>      }<br>    }<br>  }<br><br>{code}<br><br>Here if any exception in closing the stream, it will get suppressed here.<br><br>So, better to follow the stream closure pattern ...</blockquote></li>
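A sketch of the stream-closure pattern the report alludes to, not the actual Hadoop patch: closing inside the try lets close failures propagate on the success path, while the finally block falls back to quiet closes only when an earlier exception is already in flight.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Closeable;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Sketch (not the Hadoop patch): close in the try so close failures surface;
// the finally block only does best-effort cleanup on the failure path.
public class CopySketch {
    public static void copyBytes(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[4096];
        try {
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            out.close();  // a close failure here is reported, not suppressed
            in.close();
        } finally {
            closeQuietly(out);  // no-ops if already closed above
            closeQuietly(in);
        }
    }

    static void closeQuietly(Closeable c) {
        try {
            if (c != null) c.close();
        } catch (IOException ignored) {
            // suppressed only when an earlier exception is already propagating
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        copyBytes(new ByteArrayInputStream("hello".getBytes()), out);
        System.out.println(out.toString());
    }
}
```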
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7434">HADOOP-7434</a>.
+     Minor improvement reported by yanjinshuang and fixed by yanjinshuang <br>
+     <b>Display error when using &quot;daemonlog -setlevel&quot; with illegal level</b><br>
+     <blockquote>While using the command with a nonexistent &quot;level&quot; like &quot;nomsg&quot;, there is no error message displayed, and the level &quot;DEBUG&quot; is set by default.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7430">HADOOP-7430</a>.
+     Minor improvement reported by raviprak and fixed by raviprak (fs)<br>
+     <b>Improve error message when moving to trash fails due to quota issue</b><br>
+     <blockquote>-rm command doesn&apos;t suggest -skipTrash on failure.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7428">HADOOP-7428</a>.
+     Major bug reported by tlipcon and fixed by tlipcon (ipc)<br>
+     <b>IPC connection is orphaned with null &apos;out&apos; member</b><br>
+     <blockquote>We had a situation where a JT ended up in a state where a certain user could not submit a job, due to an NPE on the following line in {{sendParam}}:<br>{code}<br>synchronized (Connection.this.out) {<br>{code}<br>Looking at the code, my guess is that an RTE was thrown in setupIOstreams, which only catches IOE. This could leave the connection in a half-setup state which is never cleaned up and also cannot perform IPCs.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7419">HADOOP-7419</a>.
+     Major bug reported by tlipcon and fixed by bzheng <br>
+     <b>new hadoop-config.sh doesn&apos;t manage classpath for HADOOP_CONF_DIR correctly</b><br>
+     <blockquote>Since the introduction of the RPM packages, hadoop-config.sh incorrectly puts $HADOOP_HDFS_HOME/conf on the classpath regardless of whether HADOOP_CONF_DIR is already defined in the environment.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7402">HADOOP-7402</a>.
+     Trivial bug reported by atm and fixed by atm (test)<br>
+     <b>TestConfiguration doesn&apos;t clean up after itself</b><br>
+     <blockquote>{{testGetFile}} and {{testGetLocalPath}} both create directories a, b, and c in the working directory from where the tests are run. They should clean up after themselves.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7392">HADOOP-7392</a>.
+     Major improvement reported by tanping and fixed by tanping <br>
+     <b>Implement capability of querying individual property of a mbean using JMXProxyServlet </b><br>
+     <blockquote>HADOOP-7144 provides the capability to query all the properties of an mbean using JMXProxyServlet.  In addition to this, we add the capability to query an individual property of an mbean.  The client sends an HTTP request,<br><br>http://hostname/jmx?get=mbeanName::property<br><br>to query the server.<br></blockquote></li>
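Roughly what such a servlet does under the hood is read a single attribute of a single MBean from an MBeanServer. A self-contained illustration using the platform MBeanServer and a JDK built-in bean (not a Hadoop metrics bean):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Reads one attribute of one MBean, roughly the operation behind a
// jmx?get=bean::property query. The bean/attribute below are JDK built-ins.
public class JmxOneAttribute {
    public static Object read(String bean, String attribute) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        return server.getAttribute(new ObjectName(bean), attribute);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(read("java.lang:type=Runtime", "VmName"));
    }
}
```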
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7389">HADOOP-7389</a>.
+     Major bug reported by atm and fixed by atm (test)<br>
+     <b>Use of TestingGroups by tests causes subsequent tests to fail</b><br>
+     <blockquote>As mentioned in HADOOP-6671, {{UserGroupInformation.createUserForTesting(...)}} manipulates static state which can cause test cases which are run after a call to this function to fail.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7385">HADOOP-7385</a>.
+     Minor bug reported by bharathm and fixed by bharathm <br>
+     <b>Remove StringUtils.stringifyException(ie) in logger functions</b><br>
+     <blockquote>The Apache logger API has an overloaded function which can take both the message and the exception. I am proposing to clean up the logging code with this API, i.e.:<br>Change the code from LOG.warn(msg, StringUtils.stringifyException(exception)); to LOG.warn(msg, exception);<br></blockquote></li>
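The same idea shown with java.util.logging rather than Hadoop's commons-logging: pass the Throwable to the logger instead of stringifying its stack trace into the message, so handlers still receive the structured exception. A capturing handler is used here purely to demonstrate that the exception object survives:

```java
import java.io.IOException;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Demonstration with java.util.logging (not commons-logging): passing the
// Throwable keeps the structured exception available to handlers.
public class LogThrowable {
    static LogRecord last;  // captured by the demo handler below

    public static void main(String[] args) {
        Logger log = Logger.getLogger("demo");
        log.setUseParentHandlers(false);
        log.addHandler(new Handler() {
            @Override public void publish(LogRecord r) { last = r; }
            @Override public void flush() {}
            @Override public void close() {}
        });
        try {
            throw new IOException("disk full");
        } catch (IOException e) {
            log.log(Level.WARNING, "copy failed", e);  // preserves the exception object
        }
        System.out.println(last.getThrown().getMessage());
    }
}
```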
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7384">HADOOP-7384</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon <br>
+     <b>Allow test-patch to be more flexible about patch format</b><br>
+     <blockquote>Right now the test-patch process only accepts patches that are generated as &quot;-p0&quot; relative to common/, hdfs/, or mapreduce/. This has always been annoying for git users where the default patch format is -p1. It&apos;s also now annoying for SVN users who may generate a patch relative to trunk/ instead of the subproject subdirectory. We should auto-detect the correct patch level.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7383">HADOOP-7383</a>.
+     Blocker bug reported by tlipcon and fixed by tlipcon (build)<br>
+     <b>HDFS needs to export protobuf library dependency in pom</b><br>
+     <blockquote>MR builds are failing since the HDFS protobuf patch went in, since they aren&apos;t picking up protobuf as a transitive dependency. I think we just need to add it to the HDFS pom template.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7380">HADOOP-7380</a>.
+     Major sub-task reported by atm and fixed by atm (ipc)<br>
+     <b>Add client failover functionality to o.a.h.io.(ipc|retry)</b><br>
+     <blockquote>Implementing client failover will likely require changes to {{o.a.h.io.ipc}} and/or {{o.a.h.io.retry}}. This JIRA is to track those changes.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7379">HADOOP-7379</a>.
+     Major improvement reported by tlipcon and fixed by tlipcon (io, ipc)<br>
+     <b>Add ability to include Protobufs in ObjectWritable</b><br>
+     <blockquote>Protocol buffer-generated types may now be used as arguments or return values for Hadoop RPC.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7377">HADOOP-7377</a>.
+     Major bug reported by daryn and fixed by daryn (fs)<br>
+     <b>Fix command name handling affecting DFSAdmin</b><br>
+     <blockquote>When an error occurs in the get/set quota commands in DFSAdmin, they are displaying the following:<br>setQuota: failed to get SetQuotaCommand.NAME<br><br>The {{Command}} class expects the {{NAME}} field to be accessible, but for DFSAdmin, it&apos;s not.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7375">HADOOP-7375</a>.
+     Major improvement reported by sanjay.radia and fixed by sanjay.radia <br>
+     <b>Add resolvePath method to FileContext</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7374">HADOOP-7374</a>.
+     Major improvement reported by eli and fixed by eli (scripts)<br>
+     <b>Don&apos;t add tools.jar to the classpath when running Hadoop</b><br>
+     <blockquote>The scripts that run Hadoop no longer automatically add tools.jar from the JDK to the classpath (if it is present). If your job depends on tools.jar in the JDK you will need to add this dependency in your job.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7361">HADOOP-7361</a>.
+     Minor improvement reported by umamaheswararao and fixed by umamaheswararao (fs)<br>
+     <b>Provide overwrite option (-overwrite/-f) in put and copyFromLocal command line options</b><br>
+     <blockquote>FileSystem has the API<br><br>*public void copyFromLocalFile(boolean delSrc, boolean overwrite, Path[] srcs, Path dst)*<br><br>This API provides an overwrite option, but the corresponding command line does not. To maintain consistency and improve usability, the command line should also support an overwrite option to put files forcefully (put [-f] &lt;srcpath&gt; &lt;dstPath&gt;), and likewise for the copyFromLocal command line option.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7360">HADOOP-7360</a>.
+     Major improvement reported by daryn and fixed by kihwal (fs)<br>
+     <b>FsShell does not preserve relative paths with globs</b><br>
+     <blockquote>FsShell currently preserves relative paths that do not contain globs.  Unfortunately the method {{fs.globStatus()}} is fully qualifying all returned paths.  This is causing inconsistent display of paths.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7357">HADOOP-7357</a>.
+     Trivial bug reported by philip and fixed by philip (test)<br>
+     <b>hadoop.io.compress.TestCodec#main() should exit with non-zero exit code if test failed</b><br>
+     <blockquote>It&apos;s convenient to run something like<br>{noformat}<br>HADOOP_CLASSPATH=hadoop-test-0.20.2.jar bin/hadoop org.apache.hadoop.io.compress.TestCodec  -count 3 -codec fo<br>{noformat}<br>but the error code it returns isn&apos;t interesting.<br><br>1-line patch attached fixes that.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7356">HADOOP-7356</a>.
+     Blocker bug reported by eyang and fixed by eyang <br>
+     <b>RPM packages broke bin/hadoop script for hadoop 0.20.205</b><br>
+     <blockquote>hadoop-config.sh has been moved to libexec for the binary package, but developers prefer to have hadoop-config.sh in bin. Hadoop shell scripts should be modified to support both scenarios.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7353">HADOOP-7353</a>.
+     Major bug reported by daryn and fixed by daryn (fs)<br>
+     <b>Cleanup FsShell and prevent masking of RTE stacktraces</b><br>
+     <blockquote>{{FsShell}}&apos;s top level exception handler catches and displays exceptions.  Unfortunately it displays only the first line of an exception, which means an unexpected {{RuntimeException}} like {{NullPointerException}} displays only &quot;{{cmd: NullPointerException}}&quot;. The user then has no context to understand or accurately report the issue.<br><br>Found due to bugs such as {{HADOOP-7327}}.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7342">HADOOP-7342</a>.
+     Minor bug reported by bharathm and fixed by bharathm <br>
+     <b>Add a utility API in FileUtil for JDK File.list</b><br>
+     <blockquote>The Java File.list API can return null when the disk is bad or the path is not a directory. This utility API in FileUtil will throw an exception when this happens rather than returning null.</blockquote></li>
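The guard described above can be sketched in plain Java. This is an illustrative toy, not the actual FileUtil code; the method name and messages are hypothetical:

```java
import java.io.File;
import java.io.IOException;

public class ListDirExample {
    // Wraps File.list(): a null result (bad disk, missing path, or not a
    // directory) becomes a checked IOException at the point of failure,
    // instead of an NPE later in some unrelated caller.
    static String[] list(File dir) throws IOException {
        String[] entries = dir.list();
        if (entries == null) {
            throw new IOException("Invalid directory or I/O error listing " + dir);
        }
        return entries;
    }

    public static void main(String[] args) throws IOException {
        // A plain file is not a directory, so File.list() returns null
        // and the wrapper converts that into an IOException.
        File f = File.createTempFile("not-a-dir", ".tmp");
        try {
            list(f);
            System.out.println("unexpected: no exception");
        } catch (IOException expected) {
            System.out.println("caught IOException as intended");
        } finally {
            f.delete();
        }
    }
}
```

The same pattern applies to File.listFiles(), which HADOOP-7322 below wraps analogously.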
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7341">HADOOP-7341</a>.
+     Major bug reported by daryn and fixed by daryn (fs)<br>
+     <b>Fix option parsing in CommandFormat</b><br>
+     <blockquote>CommandFormat currently allows options in any location within the args.  This is not the intended behavior for FsShell commands.  Prior to the redesign, the commands used to expect option processing to stop at the first non-option.<br><br>CommandFormat was an existing class prior the redesign, but it was only used by &quot;count&quot; to find the -q flag.  All commands were converted to using this class, thus inherited the unintended behavior.</blockquote></li>
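The intended stop-at-first-non-option rule can be sketched as follows. This is a toy parser with illustrative names, not the real CommandFormat class:

```java
import java.util.LinkedList;
import java.util.List;

public class OptionParseExample {
    // Consumes leading "-x" style flags into opts and stops at the first
    // argument that is not an option, so later "-" arguments are treated
    // as paths, matching traditional shell-command behavior.
    static List<String> parse(List<String> args, List<String> opts) {
        LinkedList<String> rest = new LinkedList<>(args);
        while (!rest.isEmpty()
                && rest.peek().startsWith("-")
                && rest.peek().length() > 1) {
            opts.add(rest.remove().substring(1));
        }
        return rest;   // everything from the first non-option onward
    }
}
```

With this rule, `count -q path -x` parses `-q` as an option but keeps `-x` as a path argument rather than a flag.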
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7337">HADOOP-7337</a>.
+     Minor improvement reported by szetszwo and fixed by szetszwo (util)<br>
+     <b>Annotate PureJavaCrc32 as a public API</b><br>
+     <blockquote>The API of PureJavaCrc32 is stable.  It is incorrect to annotate it as private unstable.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7336">HADOOP-7336</a>.
+     Minor bug reported by jnp and fixed by jnp <br>
+     <b>TestFileContextResolveAfs will fail with default test.build.data property.</b><br>
+     <blockquote>In TestFileContextResolveAfs, if the test.build.data property is not set and the default is used, the test case will try to create the test data in the root directory and fail. /tmp should be used as the default, as in many other test cases. Normally test.build.data will be set, so this issue should not occur.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7333">HADOOP-7333</a>.
+     Minor improvement reported by ecaspole and fixed by ecaspole (util)<br>
+     <b>Performance improvement in PureJavaCrc32</b><br>
+     <blockquote>I would like to propose a small patch to <br><br>  org.apache.hadoop.util.PureJavaCrc32.update(byte[] b, int off, int len)<br><br>Currently the method stores the intermediate result back into the data member &quot;crc.&quot; I noticed this method gets<br>inlined into DataChecksum.update() and that method appears as one of the hotter methods in a simple hprof profile collected while running terasort and gridmix.<br><br>If the code is modified to save the temporary result into a local and just once store the final result bac...</blockquote></li>
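The proposed optimization, accumulating into a local variable and storing the field back once, looks roughly like the sketch below. This is an illustrative single-table CRC-32, not the actual PureJavaCrc32 source (which uses larger multi-table lookups):

```java
public class LocalAccumulatorExample {
    private int crc = 0xFFFFFFFF;

    // Standard reflected CRC-32 table (polynomial 0xEDB88320).
    private static final int[] TABLE = new int[256];
    static {
        for (int i = 0; i < 256; i++) {
            int c = i;
            for (int k = 0; k < 8; k++) {
                c = (c >>> 1) ^ ((c & 1) != 0 ? 0xEDB88320 : 0);
            }
            TABLE[i] = c;
        }
    }

    public void update(byte[] b, int off, int len) {
        int localCrc = crc;                 // read the field once
        for (int i = off; i < off + len; i++) {
            localCrc = (localCrc >>> 8) ^ TABLE[(localCrc ^ b[i]) & 0xFF];
        }
        crc = localCrc;                     // single store at the end
    }

    public int getValue() {
        return ~crc;
    }
}
```

Keeping the accumulator in a local lets the JIT hold it in a register across the loop instead of re-storing the instance field on every iteration.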
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7331">HADOOP-7331</a>.
+     Trivial improvement reported by tanping and fixed by tanping (scripts)<br>
+     <b>Make hadoop-daemon.sh return 1 if daemon processes did not get started</b><br>
+     <blockquote>                                              hadoop-daemon.sh now returns a non-zero exit code if it detects that the daemon was not still running after 3 seconds.<br><br>      <br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7329">HADOOP-7329</a>.
+     Minor improvement reported by xiexianshan and fixed by xiexianshan (fs)<br>
+     <b>Incomplete help message is displayed for the df -h option</b><br>
+     <blockquote>The help message for the command &quot;hdfs dfs -help df&quot; is displayed like this:<br>&quot;-df [&lt;path&gt; ...]:    Shows the capacity, free and used space of the filesystem.<br>        If the filesystem has multiple partitions, and no path to a<br>        particular partition is specified, then the status of the root<br>        partitions will be shown.&quot;<br>The information about the df -h option is missing, despite the fact that the -h option is implemented.<br><br>Therefore, the expected message should be displayed like this:<br>&quot;-...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7327">HADOOP-7327</a>.
+     Minor bug reported by mattf and fixed by mattf (fs)<br>
+     <b>FileSystem.listStatus() throws NullPointerException instead of IOException upon access permission failure</b><br>
+     <blockquote>Many processes that call listStatus() expect to handle IOException, but instead are getting runtime error NullPointerException, if the directory being scanned is visible but no-access to the running user id.  For example, if directory foo is drwxr-xr-x, and subdirectory foo/bar is drwx------, then trying to do listStatus(Path(foo/bar)) will cause a NullPointerException.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7324">HADOOP-7324</a>.
+     Blocker bug reported by vicaya and fixed by priyomustafi (metrics)<br>
+     <b>Ganglia plugins for metrics v2</b><br>
+     <blockquote>Although all metrics in metrics v2 are exposed via the standard JMX mechanisms, most users use Ganglia to collect metrics.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7322">HADOOP-7322</a>.
+     Minor bug reported by bharathm and fixed by bharathm <br>
+     <b>Adding a util method in FileUtil for JDK File.listFiles</b><br>
+     <blockquote>                                              Use of this new utility method avoids null result from File.listFiles(), and consequent NPEs.<br><br>      <br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7320">HADOOP-7320</a>.
+     Major improvement reported by daryn and fixed by daryn <br>
+     <b>Refactor FsShell&apos;s copy &amp; move commands</b><br>
+     <blockquote>Need to refactor the move and copy commands to conform to the FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7316">HADOOP-7316</a>.
+     Major improvement reported by jmhsieh and fixed by eli (documentation)<br>
+     <b>Add public javadocs to FSDataInputStream and FSDataOutputStream</b><br>
+     <blockquote>This is a method made public for testing.  In comments in HADOOP-7301 after commit, adding javadoc comments was requested.  This is a follow up jira to address it.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7314">HADOOP-7314</a>.
+     Major improvement reported by naisbitt and fixed by naisbitt <br>
+     <b>Add support for throwing UnknownHostException when a host doesn&apos;t resolve</b><br>
+     <blockquote>As part of MAPREDUCE-2489, we need support for having the resolve methods (for DNS mapping) throw UnknownHostExceptions.  (Currently, they hide the exception).  Since the existing &apos;resolve&apos; method is ultimately used by several other locations/components, I propose we add a new &apos;resolveValidHosts&apos; method.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7306">HADOOP-7306</a>.
+     Major improvement reported by vicaya and fixed by vicaya (metrics)<br>
+     <b>Start metrics system even if config files are missing</b><br>
+     <blockquote>Per experience and discussion with HDFS-1922, it seems preferable to treat missing metrics config file as empty/default config, which is more compatible with metrics v1 behavior (the MBeans are always registered.)</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7305">HADOOP-7305</a>.
+     Minor improvement reported by nielsbasjes and fixed by nielsbasjes (build)<br>
+     <b>Eclipse project files are incomplete</b><br>
+     <blockquote>                                              Added missing library during creation of the eclipse project files.<br><br>      <br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7301">HADOOP-7301</a>.
+     Major improvement reported by jmhsieh and fixed by jmhsieh <br>
+     <b>FSDataInputStream should expose a getWrappedStream method</b><br>
+     <blockquote>Ideally FSDataInputStream should expose a getWrappedStream method, similarly to how FSDataOutputStream exposes one.  Exposing this is useful for verifying correctness in test cases.  This FSDataInputStream type is the class that the o.a.h.fs.FileSystem.open call returns.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7298">HADOOP-7298</a>.
+     Major test reported by tlipcon and fixed by tlipcon (test)<br>
+     <b>Add test utility for writing multi-threaded tests</b><br>
+     <blockquote>A lot of our tests spawn off multiple threads in order to check various synchronization issues, etc. It&apos;s often tedious to write these kinds of tests because you have to manually propagate exceptions back to the main thread, etc.<br><br>In HBase we have developed a testing utility which makes writing these kinds of tests much easier. I&apos;d like to copy that utility into Hadoop so we can use it here as well.</blockquote></li>
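The idea behind such a utility can be sketched as a thread wrapper that captures failures and re-throws them on join. The names here are illustrative, not the actual HBase/Hadoop test API:

```java
public class TestThreadExample {
    static class CheckedThread extends Thread {
        private volatile Throwable failure;
        private final Runnable body;

        CheckedThread(Runnable body) {
            this.body = body;
        }

        @Override
        public void run() {
            // Capture anything the worker throws instead of letting it
            // die silently on the background thread.
            try {
                body.run();
            } catch (Throwable t) {
                failure = t;
            }
        }

        // Joins the thread and surfaces any captured failure on the
        // calling (main/test) thread.
        void joinAndRethrow() throws Exception {
            join();
            if (failure != null) {
                throw new AssertionError("worker thread failed", failure);
            }
        }
    }

    public static void main(String[] args) throws Exception {
        CheckedThread t = new CheckedThread(() -> {
            if (1 + 1 != 2) throw new IllegalStateException("impossible");
        });
        t.start();
        t.joinAndRethrow();   // a failing body would surface here
        System.out.println("worker completed cleanly");
    }
}
```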
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7292">HADOOP-7292</a>.
+     Minor bug reported by vicaya and fixed by vicaya (metrics)<br>
+     <b>Metrics 2 TestSinkQueue is racy</b><br>
+     <blockquote>The TestSinkQueue is racy (Thread.yield is not enough to guarantee that the other intended thread gets run), though it&apos;s the first time (from HADOOP-7289) I saw it manifested here.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7289">HADOOP-7289</a>.
+     Major improvement reported by szetszwo and fixed by eyang (build)<br>
+     <b>ivy: test conf should not extend common conf</b><br>
+     <blockquote>Otherwise, the same jars will appear in both {{build/ivy/lib/Hadoop-Common/common/}} and {{build/ivy/lib/Hadoop-Common/test/}}.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7287">HADOOP-7287</a>.
+     Blocker bug reported by tlipcon and fixed by atm (conf)<br>
+     <b>Configuration deprecation mechanism doesn&apos;t work properly for GenericOptionsParser/Tools</b><br>
+     <blockquote>For example, you can&apos;t use -D options on the &quot;hadoop fs&quot; command line in order to specify the deprecated names of configuration options. The issue is that the ordering is:<br>- JVM starts<br>- GenericOptionsParser creates a Configuration object and calls set() for each of the options specified on command line<br>- DistributedFileSystem or other class eventually instantiates HdfsConfiguration which adds the deprecations<br>- Some class calls conf.get(&quot;new key&quot;) and sees the default instead of the version ...</blockquote></li>
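The ordering problem above can be sketched with a toy configuration class (not the real Configuration API): resolving deprecations at read time tolerates deprecation tables that are registered after set() has already run.

```java
import java.util.HashMap;
import java.util.Map;

public class DeprecationExample {
    private final Map<String, String> props = new HashMap<>();
    private final Map<String, String> newToOld = new HashMap<>();

    void set(String key, String value) {
        props.put(key, value);
    }

    // Deprecations may be registered after values were already set,
    // e.g. when HdfsConfiguration loads after GenericOptionsParser ran.
    void addDeprecation(String oldKey, String newKey) {
        newToOld.put(newKey, oldKey);
    }

    // Consulting the mapping at read time means a value stored under the
    // old key is still visible through the new key.
    String get(String key) {
        if (props.containsKey(key)) {
            return props.get(key);
        }
        String oldKey = newToOld.get(key);
        return oldKey == null ? null : props.get(oldKey);
    }

    public static void main(String[] args) {
        DeprecationExample conf = new DeprecationExample();
        conf.set("dfs.old.key", "42");                       // from a -D option
        conf.addDeprecation("dfs.old.key", "dfs.new.key");   // registered later
        System.out.println(conf.get("dfs.new.key"));         // prints 42
    }
}
```

The key names here are made up; the point is only that a late-registered deprecation must not hide an earlier set().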
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7286">HADOOP-7286</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s du/dus/df</b><br>
+     <blockquote>                    The &quot;Found X items&quot; header on the output of the &quot;du&quot; command has been removed to more closely match unix. The displayed paths now correspond to the command line arguments instead of always being a fully qualified URI. For example, the output will have relative paths if the command line arguments are relative paths.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7285">HADOOP-7285</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s test</b><br>
+     <blockquote>Need to refactor to conform to FsCommand subclass.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7284">HADOOP-7284</a>.
+     Major bug reported by sanjay.radia and fixed by sanjay.radia <br>
+     <b>Trash and shell&apos;s rm does not work for viewfs</b><br>
+     <blockquote></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7282">HADOOP-7282</a>.
+     Major bug reported by johnvijoe and fixed by johnvijoe (ipc)<br>
+     <b>getRemoteIp could return null in cases where the call is ongoing but the ip went away.</b><br>
+     <blockquote>getRemoteIp gets the ip from socket instead of the stored ip in Connection object. Thus calls to this function could return null when a client disconnected, but the rpc call is still ongoing...</blockquote></li>
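The fix described amounts to capturing the remote address once at connection setup, rather than querying the socket on every call. This is an illustrative sketch, not the actual ipc.Server code:

```java
import java.net.InetAddress;
import java.net.Socket;

public class ConnectionExample {
    // Captured when the connection is accepted, so the value stays
    // available even if the client disconnects while a call is ongoing.
    private final InetAddress remoteAddr;

    ConnectionExample(Socket socket) {
        this.remoteAddr = socket.getInetAddress();
    }

    // Safe to call after the underlying socket has gone away.
    public InetAddress getRemoteIp() {
        return remoteAddr;
    }
}
```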
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7276">HADOOP-7276</a>.
+     Major bug reported by scurrilous and fixed by scurrilous (native)<br>
+     <b>Hadoop native builds fail on ARM due to -m32</b><br>
+     <blockquote>The native build fails on machine targets where gcc does not support -m32. This is any target other than x86, SPARC, RS/6000, or PowerPC, such as ARM.<br><br>$ ant -Dcompile.native=true<br>...<br>     [exec] make  all-am<br>     [exec] make[1]: Entering directory<br>`/home/trobinson/dev/hadoop-common/build/native/Linux-arm-32&apos;<br>     [exec] /bin/bash ./libtool  --tag=CC   --mode=compile gcc<br>-DHAVE_CONFIG_H -I. -I/home/trobinson/dev/hadoop-common/src/native<br>-I/usr/lib/jvm/java-6-openjdk/include<br>-I/usr/lib/jvm/jav...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7275">HADOOP-7275</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s stat</b><br>
+     <blockquote>Refactor to conform to the FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7272">HADOOP-7272</a>.
+     Major improvement reported by sureshms and fixed by sureshms (ipc, security)<br>
+     <b>Remove unnecessary security related info logs</b><br>
+     <blockquote>Two unnecessary info logs are printed when a connection to the RPC server is established. On a production cluster, these log lines made up close to 50% of the lines in the namenode log. I propose changing them into debug logs.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7271">HADOOP-7271</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Standardize error messages</b><br>
+     <blockquote>The FsShell commands have no standard format for the same error message.  For instance, here is a snippet of the variations of just one of many error messages:<br><br>cmd: $path: No such file or directory<br>cmd: cannot stat `$path&apos;: No such file or directory<br>cmd: Can not find listing for $path<br>cmd: Cannot access $path: No such file or directory.<br>cmd: No such file or directory `$path&apos;<br>cmd: File does not exist: $path<br>cmd: File $path does not exist<br>... etc ...<br><br>These need to be common.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7268">HADOOP-7268</a>.
+     Major bug reported by devaraj and fixed by jnp (fs, security)<br>
+     <b>FileContext.getLocalFSFileContext() behavior needs to be fixed w.r.t tokens</b><br>
+     <blockquote>FileContext.getLocalFSFileContext() instantiates a FileContext object upon the first call to it, and for all subsequent calls returns back that instance (a static localFsSingleton object). With security turned on, this causes some hard-to-debug situations when that fileContext is used for doing HDFS operations. This is because the UserGroupInformation is stored when a FileContext is instantiated. If the process in question wishes to use different UserGroupInformation objects for different fil...</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7267">HADOOP-7267</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s rm/rmr/expunge</b><br>
+     <blockquote>Refactor to conform to the FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7265">HADOOP-7265</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Keep track of relative paths</b><br>
+     <blockquote>As part of the effort to standardize the display of paths, the PathData tracks the exact string used to create a path.  When obtaining a directory&apos;s contents, the relative nature of the original path should be preserved.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7264">HADOOP-7264</a>.
+     Major improvement reported by vicaya and fixed by vicaya (io)<br>
+     <b>Bump avro version to at least 1.4.1</b><br>
+     <blockquote>Needed by mapreduce 2.0 avro support. Maybe we could jump to Avro 1.5. There are incompatible API changes from 1.3.x to 1.4.x (Utf8 to CharSequence in user-facing APIs); not sure about 1.5.x though.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7261">HADOOP-7261</a>.
+     Major bug reported by sureshms and fixed by sureshms (test)<br>
+     <b>Disable IPV6 for junit tests</b><br>
+     <blockquote>IPv6 addresses are not currently handled in the common library methods. IPv6 can return an address as &quot;0:0:0:0:0:0:port&quot;. Some utility methods, such as NetUtils#createSocketAddress(), NetUtils#normalizeHostName(), and NetUtils#getHostNameOfIp(), do not handle IPv6 addresses and expect the address to be of the format host:port.<br><br>Until IPv6 is formally supported, I propose disabling IPv6 for junit tests to avoid the problems seen in HDFS-1891.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7259">HADOOP-7259</a>.
+     Major bug reported by owen.omalley and fixed by owen.omalley (build)<br>
+     <b>contrib modules should include build.properties from parent.</b><br>
+     <blockquote>Current build.properties in the hadoop root directory is not included by the contrib modules.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7258">HADOOP-7258</a>.
+     Major bug reported by owen.omalley and fixed by owen.omalley <br>
+     <b>Gzip codec should not return null decompressors</b><br>
+     <blockquote>In HADOOP-6315, the gzip codec was changed to return a null codec with the intent to disallow pooling of the decompressors. Rather than break the interface, we can use an annotation to achieve the goal.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7257">HADOOP-7257</a>.
+     Major new feature reported by sanjay.radia and fixed by sanjay.radia <br>
+     <b>A client side mount table to give per-application/per-job file system view</b><br>
+     <blockquote>                                              viewfs - client-side mount table.<br><br>      <br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7251">HADOOP-7251</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s getmerge</b><br>
+     <blockquote>Need to refactor getmerge to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7250">HADOOP-7250</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s setrep</b><br>
+     <blockquote>Need to refactor setrep to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7249">HADOOP-7249</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s chmod/chown/chgrp</b><br>
+     <blockquote>Need to refactor permissions commands to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7241">HADOOP-7241</a>.
+     Minor improvement reported by weiyj and fixed by weiyj (fs, test)<br>
+     <b>fix typo of command &apos;hadoop fs -help tail&apos;</b><br>
+     <blockquote>Fix the typo of command &apos;hadoop fs -help tail&apos;.<br><br>$ hadoop fs -help tail<br>-tail [-f] &lt;file&gt;:  Show the last 1KB of the file. <br>		The -f option shows apended data as the file grows. <br><br>The &quot;apended data&quot; should be &quot;appended data&quot;.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7238">HADOOP-7238</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s cat &amp; text</b><br>
+     <blockquote>Need to refactor cat &amp; text to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7237">HADOOP-7237</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s touchz</b><br>
+     <blockquote>Need to refactor touchz to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7236">HADOOP-7236</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s mkdir</b><br>
+     <blockquote>Need to refactor mkdir to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7235">HADOOP-7235</a>.
+     Major improvement reported by daryn and fixed by daryn <br>
+     <b>Refactor FsShell&apos;s tail</b><br>
+     <blockquote>Need to refactor tail to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7233">HADOOP-7233</a>.
+     Major improvement reported by daryn and fixed by daryn (fs)<br>
+     <b>Refactor FsShell&apos;s ls</b><br>
+     <blockquote>Need to refactor ls to conform to new FsCommand class.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7231">HADOOP-7231</a>.
+     Major bug reported by daryn and fixed by daryn (util)<br>
+     <b>Fix synopsis for -count</b><br>
+     <blockquote>The synopsis for the count command is wrong.<br>1) missing a space in &quot;-count[-q]&quot;<br>2) missing ellipsis for multiple path args</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7230">HADOOP-7230</a>.
+     Major test reported by daryn and fixed by daryn (test)<br>
+     <b>Move -fs usage tests from hdfs into common</b><br>
+     <blockquote>The -fs usage tests are in hdfs, which forces an unnecessary synchronized change across common and hdfs whenever the text changes.  The usages have no ties to hdfs, so they should be moved into common.</blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7227">HADOOP-7227</a>.
+     Major improvement reported by jnp and fixed by jnp (ipc)<br>
+     <b>Remove protocol version check at proxy creation in Hadoop RPC.</b><br>
+     <blockquote>                    1. Protocol version check is removed from proxy creation, instead version check is performed at server in every rpc call.
<br/><br><br>2. This change is backward incompatible because format of the rpc messages is changed to include client version, client method hash and rpc version.
<br/><br><br>3. rpc version is introduced which should change when the format of rpc messages is changed.<br></blockquote></li>
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7223">HADOOP-7223</a>.
+     Major bug reported by sureshms and fixed by sureshms (fs)<br>
+     <b>FileContext createFlag combinations during create are not clearly defined</b><br>
+     <blockquote>During file creation with FileContext, the expected behavior is not clearly defined for combination of createFlag EnumSet.<br></blockquote></li>

[... 4505 lines stripped ...]
