hadoop-common-commits mailing list archives

From: ma...@apache.org
Subject: svn commit: r1454934 [2/2] - in /hadoop/common/branches/branch-1.2: build.xml src/docs/releasenotes.html
Date: Sun, 10 Mar 2013 22:26:15 GMT
Modified: hadoop/common/branches/branch-1.2/src/docs/releasenotes.html
URL: http://svn.apache.org/viewvc/hadoop/common/branches/branch-1.2/src/docs/releasenotes.html?rev=1454934&r1=1454933&r2=1454934&view=diff
==============================================================================
--- hadoop/common/branches/branch-1.2/src/docs/releasenotes.html (original)
+++ hadoop/common/branches/branch-1.2/src/docs/releasenotes.html Sun Mar 10 22:26:14 2013
@@ -2,7 +2,7 @@
 <html>
 <head>
 <META http-equiv="Content-Type" content="text/html; charset=UTF-8">
-<title>Hadoop 1.0.3 Release Notes</title>
+<title>Hadoop 1.2.0 Release Notes</title>
 <STYLE type="text/css">
 		H1 {font-family: sans-serif}
 		H2 {font-family: sans-serif; margin-left: 7mm}
@@ -10,11 +10,2031 @@
 	</STYLE>
 </head>
 <body>
-<h1>Hadoop 1.0.3 Release Notes</h1>
+<h1>Hadoop 1.2.0 Release Notes - Preliminary</h1>
 		These release notes include new developer and user-facing incompatibilities, features, and major improvements. 
 
 <a name="changes"/>
 
+<h2>Changes since Hadoop 1.1.2</h2>
+
+<h3>Jiras with Release Notes (describe major or incompatible changes)</h3>
+<ul>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-4572">HADOOP-4572</a>.
+     Major improvement reported by szetszwo and fixed by szetszwo <br>
+     <b>INode and its sub-classes should be package private</b><br>
+     <blockquote>                                          Moved org.apache.hadoop.hdfs.{CreateEditsLog, NNThroughputBenchmark} to org.apache.hadoop.hdfs.server.namenode.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7698">HADOOP-7698</a>.
+     Critical bug reported by daryn and fixed by daryn (build)<br>
+     <b>jsvc target fails on x86_64</b><br>
+     <blockquote>                                          The jsvc build target is now supported for Mac OSX and other platforms as well.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8164">HADOOP-8164</a>.
+     Major sub-task reported by sureshms and fixed by daryn (fs)<br>
+     <b>Handle paths using back slash as path separator for windows only</b><br>
+     <blockquote>                    This jira allows paths to use the back slash as a separator on Windows only. On *nix systems the back slash remains an escape character. Support for paths using the back slash as a path separator will be removed by <a href="/jira/browse/HADOOP-8139" title="Path does not allow metachars to be escaped">HADOOP-8139</a> in release 23.3.
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8971">HADOOP-8971</a>.
+     Major improvement reported by gopalv and fixed by gopalv (util)<br>
+     <b>Backport: hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data (HADOOP-8926)</b><br>
+     <blockquote>                                          Backport cache-aware improvements for PureJavaCrc32 from trunk (<a href="/jira/browse/HADOOP-8926" title="hadoop.util.PureJavaCrc32 cache hit-ratio is low for static data"><strike>HADOOP-8926</strike></a>)
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-385">HDFS-385</a>.
+     Major improvement reported by dhruba and fixed by dhruba <br>
+     <b>Design a pluggable interface to place replicas of blocks in HDFS</b><br>
+     <blockquote>                                          New experimental API BlockPlacementPolicy allows investigating alternate rules for locating block replicas.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3697">HDFS-3697</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (datanode, performance)<br>
+     <b>Enable fadvise readahead by default</b><br>
+     <blockquote>                    The datanode now performs 4MB readahead by default when reading data from its disks, if the native libraries are present. This has been shown to improve performance in many workloads. The feature may be disabled by setting dfs.datanode.readahead.bytes to &quot;0&quot; (see the configuration sketch after this list).
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4071">HDFS-4071</a>.
+     Minor sub-task reported by jingzhao and fixed by jingzhao (datanode, namenode)<br>
+     <b>Add number of stale DataNodes to metrics for Branch-1</b><br>
+     <blockquote>                    This jira adds a new metric named &quot;StaleDataNodes&quot;, of type Gauge, under the metrics context &quot;dfs&quot;. It tracks the number of DataNodes marked as stale. A DataNode is marked stale when its heartbeat message has not been received within the configured time &quot;dfs.namenode.stale.datanode.interval&quot;.<br/>
+<br/>
+Please see the hdfs-default.xml documentation for &quot;dfs.namenode.stale.datanode.interval&quot; for details on how to configure this feature. When the feature is not configured, this metric returns zero.
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4122">HDFS-4122</a>.
+     Major bug reported by sureshms and fixed by sureshms (datanode, hdfs-client, namenode)<br>
+     <b>Cleanup HDFS logs and reduce the size of logged messages</b><br>
+     <blockquote>                    This jira changes the content of some log messages. No log messages are removed; their content is only shortened. If you have a tool that depends on the exact content of a log message, please review the patch and update the tool accordingly.
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4320">HDFS-4320</a>.
+     Major improvement reported by mostafae and fixed by mostafae (datanode, namenode)<br>
+     <b>Add a separate configuration for namenode rpc address instead of only using fs.default.name</b><br>
+     <blockquote>                    The namenode RPC address is currently derived from the configuration &quot;fs.default.name&quot;. In setups where the default filesystem is something other than HDFS, &quot;fs.default.name&quot; cannot be used to obtain the namenode address. When such a setup co-exists with HDFS, this change lets the namenode be identified via the separate configuration parameter &quot;dfs.namenode.rpc-address&quot;.<br/>
+<br/>
+&quot;dfs.namenode.rpc-address&quot;, when configured, overrides fs.default.name for identifying the namenode RPC address.<br/>
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4337">HDFS-4337</a>.
+     Major bug reported by djp and fixed by mgong@vmware.com (namenode)<br>
+     <b>Backport HDFS-4240 to branch-1: Make sure nodes are avoided to place replica if some replica are already under the same nodegroup.</b><br>
+     <blockquote>                                          Backport <a href="/jira/browse/HDFS-4240" title="In nodegroup-aware case, make sure nodes are avoided to place replica if some replica are already under the same nodegroup"><strike>HDFS-4240</strike></a> to branch-1
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4350">HDFS-4350</a>.
+     Major bug reported by andrew.wang and fixed by andrew.wang <br>
+     <b>Make enabling of stale marking on read and write paths independent</b><br>
+     <blockquote>                    This patch makes an incompatible configuration change, as described below:<br/>
+<br/>
+In release 1.1.0 and the other 1.1.x point releases, the configuration parameter &quot;dfs.namenode.check.stale.datanode&quot; could be used to turn on checking for stale nodes. This configuration is no longer supported from release 1.2.0 onwards; it has been renamed to &quot;dfs.namenode.avoid.read.stale.datanode&quot;.<br/>
+<br/>
+How the feature works and how to configure it (see also the configuration sketch after this list):<br/>
+<br/>
+As described in the <a href="/jira/browse/HDFS-3703" title="Decrease the datanode failure detection time"><strike>HDFS-3703</strike></a> release notes, the datanode stale period can be configured via the parameter &quot;dfs.namenode.stale.datanode.interval&quot; in seconds (default value 30 seconds). The NameNode can be configured to use this staleness information for reads via &quot;dfs.namenode.avoid.read.stale.datanode&quot;. When this parameter is set to true, the namenode picks a stale datanode as the last target to read from when returning block locations for reads. Using staleness information for writes is described in the release notes of <a href="/jira/browse/HDFS-3912" title="Detecting and avoiding stale datanodes for writing"><strike>HDFS-3912</strike></a>.<br/>
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4519">HDFS-4519</a>.
+     Major bug reported by cnauroth and fixed by cnauroth (datanode, scripts)<br>
+     <b>Support override of jsvc binary and log file locations when launching secure datanode.</b><br>
+     <blockquote>                    With this improvement the following options are available in release 1.2.0 and later on the 1.x release stream:<br/>
+<br/>
+1. The jsvc location can be overridden by setting the environment variable JSVC_HOME. Defaults to the jsvc binary packaged within the Hadoop distro.<br/>
+<br/>
+2. jsvc log output is directed to the file defined by JSVC_OUTFILE. Defaults to $HADOOP_LOG_DIR/jsvc.out.<br/>
+<br/>
+3. jsvc error output is directed to the file defined by JSVC_ERRFILE. Defaults to $HADOOP_LOG_DIR/jsvc.err.<br/>
+<br/>
+With this improvement the following options are available in release 2.0.4 and later on the 2.x release stream:<br/>
+<br/>
+1. jsvc log output is directed to the file defined by JSVC_OUTFILE. Defaults to $HADOOP_LOG_DIR/jsvc.out.<br/>
+<br/>
+2. jsvc error output is directed to the file defined by JSVC_ERRFILE. Defaults to $HADOOP_LOG_DIR/jsvc.err.<br/>
+<br/>
+For overriding the jsvc location on 2.x releases, here are the release notes from <a href="/jira/browse/HDFS-2303" title="Unbundle jsvc"><strike>HDFS-2303</strike></a>:<br/>
+<br/>
+To run secure Datanodes, users must install jsvc for their platform and set JSVC_HOME to point to the location of jsvc in their environment.<br/>
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3678">MAPREDUCE-3678</a>.
+     Major new feature reported by bejoyks and fixed by qwertymaniac (mrv1, mrv2)<br>
+     <b>The Map tasks logs should have the value of input split it processed</b><br>
+     <blockquote>                                          A map task&#39;s syslog now carries basic info on the InputSplit it processed.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4415">MAPREDUCE-4415</a>.
+     Major improvement reported by qwertymaniac and fixed by qwertymaniac (mrv1)<br>
+     <b>Backport the Job.getInstance methods from MAPREDUCE-1505 to branch-1</b><br>
+     <blockquote>                                          Backported the new APIs for obtaining a Job object from 2.0.0 to 1.2.0. The static methods Job.getInstance(), Job.getInstance(Configuration) and Job.getInstance(Configuration, jobName) are now available across both releases to avoid porting pain (see the usage sketch after this list).
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4451">MAPREDUCE-4451</a>.
+     Major bug reported by erik.fang and fixed by erik.fang (contrib/fair-share)<br>
+     <b>fairscheduler fail to init job with kerberos authentication configured</b><br>
+     <blockquote>                                          When FairScheduler is used with security configured, job initialization fails. The problem is that threads in JobInitializer run as the RPC user instead of the jobtracker user; pre-starting all the threads fixes this bug.
+
+      
+</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4565">MAPREDUCE-4565</a>.
+     Major improvement reported by kkambatl and fixed by kkambatl <br>
+     <b>Backport MR-2855 to branch-1: ResourceBundle lookup during counter name resolution takes a lot of time</b><br>
+     <blockquote>                                          Passes a cached class-loader to the ResourceBundle creator to minimize counter-name lookup time.
+
+      
+</blockquote></li>
+
+</ul>
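
[Editor's note, not part of the commit: the configuration-centric notes above (HDFS-3697, HDFS-4071, HDFS-4320, HDFS-4350) name their keys but show no wiring. Below is a minimal Java sketch using the stock org.apache.hadoop.conf.Configuration API; the values and the host:port are illustrative assumptions, not recommendations.]

{code}
import org.apache.hadoop.conf.Configuration;

public class Branch12ConfigSketch {
  public static Configuration configure() {
    Configuration conf = new Configuration();
    // HDFS-4350: use staleness information on the read path (1.2.0 key name).
    conf.setBoolean("dfs.namenode.avoid.read.stale.datanode", true);
    // HDFS-4071/HDFS-4350: interval after which a DataNode with no heartbeat
    // is marked stale; see hdfs-default.xml for the authoritative unit.
    conf.set("dfs.namenode.stale.datanode.interval", "30");
    // HDFS-3697: a value of 0 disables the datanode's default 4MB readahead.
    conf.set("dfs.datanode.readahead.bytes", "0");
    // HDFS-4320: explicit namenode RPC address; overrides fs.default.name.
    // The host:port below is hypothetical.
    conf.set("dfs.namenode.rpc-address", "namenode.example.com:8020");
    return conf;
  }
}
{code}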
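[Editor's note: a minimal usage sketch of the Job.getInstance factory methods backported by MAPREDUCE-4415. The driver class and job name are hypothetical; the calls are the ones listed in the note above and work unchanged on 1.2.0 and 2.0.0.]

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobInstanceSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Factory method instead of constructing Job directly:
    Job job = Job.getInstance(conf, "example-job");
    job.setJarByClass(JobInstanceSketch.class);
    System.out.println("Created job: " + job.getJobName());
  }
}
{code}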
+
+
+<h3>Other Jiras (describe bug fixes and minor changes)</h3>
+<ul>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-6496">HADOOP-6496</a>.
+     Minor bug reported by lars_francke and fixed by ivanmi <br>
+     <b>HttpServer sends wrong content-type for CSS files (and others)</b><br>
+     <blockquote>CSS files are sent as text/html, causing problems if the HTML page is rendered in standards mode. The HDFS interface, for example, still works because it is rendered in quirks mode; the HBase interface doesn&apos;t work because it is rendered in standards mode. See HBASE-2110 for more details.<br><br>I&apos;ve had a quick look at HttpServer but I&apos;m too unfamiliar with it to see the problem. I think this started happening with HADOOP-6441, which would lead me to believe that the filter is called for every request...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7096">HADOOP-7096</a>.
+     Major improvement reported by ahmed.radwan and fixed by ahmed.radwan <br>
+     <b>Allow setting of end-of-record delimiter for TextInputFormat</b><br>
+     <blockquote>The patch for https://issues.apache.org/jira/browse/MAPREDUCE-2254 required minor changes to the LineReader class to allow extensions (see attached 2.patch). Description copied below:<br><br>It will be useful to allow setting the end-of-record delimiter for TextInputFormat (see the hedged configuration sketch at the end of this section). The current implementation hardcodes &apos;\n&apos;, &apos;\r&apos; or &apos;\r\n&apos; as the only possible record delimiters. This is a problem if users have embedded newlines in their data fields (which is pretty common). This is also a problem for other ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7688">HADOOP-7688</a>.
+     Major improvement reported by szetszwo and fixed by umamaheswararao <br>
+     <b>When a servlet filter throws an exception in init(..), the Jetty server failed silently. </b><br>
+     <blockquote>When a servlet filter throws a ServletException in init(..), the exception is logged by Jetty but not re-thrown to the caller. As a result, the Jetty server fails silently.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7754">HADOOP-7754</a>.
+     Major sub-task reported by tlipcon and fixed by tlipcon (native, performance)<br>
+     <b>Expose file descriptors from Hadoop-wrapped local FileSystems</b><br>
+     <blockquote>In HADOOP-7714, we determined that using fadvise inside of the MapReduce shuffle can yield very good performance improvements. But many parts of the shuffle are FileSystem-agnostic and thus operate on FSDataInputStreams and RawLocalFileSystems. This JIRA is to figure out how to allow RawLocalFileSystem to expose its FileDescriptor object without unnecessarily polluting the public APIs.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7827">HADOOP-7827</a>.
+     Trivial bug reported by davevr and fixed by davevr <br>
+     <b>jsp pages missing DOCTYPE</b><br>
+     <blockquote>The various jsp pages in the UI are all missing a DOCTYPE declaration. This causes the pages to render incorrectly on some browsers, such as IE9. Every UI page should have a valid tag, such as &lt;!DOCTYPE HTML&gt;, as its first line. There are 31 files that need to be changed, all in the core\src\webapps tree.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7836">HADOOP-7836</a>.
+     Minor bug reported by eli and fixed by daryn (ipc, test)<br>
+     <b>TestSaslRPC#testDigestAuthMethodHostBasedToken fails with hostname localhost.localdomain</b><br>
+     <blockquote>TestSaslRPC#testDigestAuthMethodHostBasedToken fails on branch-1 on some hosts.<br><br>null expected:&lt;localhost[]&gt; but was:&lt;localhost[.localdomain]&gt;<br>junit.framework.ComparisonFailure: null expected:&lt;localhost[]&gt; but was:&lt;localhost[.localdomain]&gt;<br><br>null expected:&lt;[localhost]&gt; but was:&lt;[eli-thinkpad]&gt;<br>junit.framework.ComparisonFailure: null expected:&lt;[localhost]&gt; but was:&lt;[eli-thinkpad]&gt;<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-7868">HADOOP-7868</a>.
+     Major bug reported by javacruft and fixed by scurrilous (native)<br>
+     <b>Hadoop native fails to compile when default linker option is -Wl,--as-needed</b><br>
+     <blockquote>Recent releases of Ubuntu and Debian have switched to using --as-needed as default when linking binaries.<br><br>As a result the AC_COMPUTE_NEEDED_DSO fails to find the required DSO names during execution of configure resulting in a build failure.<br><br>Explicitly using &quot;-Wl,--no-as-needed&quot; in this macro when required resolves this issue.<br><br>See http://wiki.debian.org/ToolChain/DSOLinking for a few more details</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8023">HADOOP-8023</a>.
+     Critical new feature reported by tucu00 and fixed by tucu00 (conf)<br>
+     <b>Add unset() method to Configuration</b><br>
+     <blockquote>HADOOP-7001 introduced the *Configuration.unset(String)* method.<br><br>MAPREDUCE-3727 requires that method in order to be back-ported.<br><br>This is required to fix an issue manifested when running MR/Hive/Sqoop jobs from Oozie, details are in MAPREDUCE-3727.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8249">HADOOP-8249</a>.
+     Major bug reported by bcwalrus and fixed by tucu00 (security)<br>
+     <b>invalid hadoop-auth cookies should trigger authentication if info is avail before returning HTTP 401</b><br>
+     <blockquote>WebHdfs gives out cookies. But when the client passes them back, it&apos;d sometimes reject them and return an HTTP 401 instead. (&quot;Sometimes&quot; as in after a restart.) The interesting thing is that if the client doesn&apos;t pass the cookie back, WebHdfs will be totally happy.<br><br>The correct behaviour should be to ignore the cookie if it looks invalid, and attempt to proceed with the request handling.<br><br>I haven&apos;t tried HttpFs to see whether it handles restart better.<br><br>Reproducing it with curl:<br>{noformat}<br>###...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8355">HADOOP-8355</a>.
+     Minor bug reported by tucu00 and fixed by tucu00 (security)<br>
+     <b>SPNEGO filter throws/logs exception when authentication fails</b><br>
+     <blockquote>If the auth-token is NULL, it means the authenticator has not authenticated the request and has already issued an UNAUTHORIZED response; there is no need to throw an exception and then immediately catch and log it. The &apos;else throw&apos; can be removed.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8386">HADOOP-8386</a>.
+     Major bug reported by cberner and fixed by cberner (scripts)<br>
+     <b>hadoop script doesn&apos;t work if &apos;cd&apos; prints to stdout (default behavior in Ubuntu)</b><br>
+     <blockquote>if the &apos;hadoop&apos; script is run as &apos;bin/hadoop&apos; on a distro where the &apos;cd&apos; command prints to stdout, the script will fail due to this line: &apos;bin=`cd &quot;$bin&quot;; pwd`&apos;<br><br>Workaround: execute from the bin/ directory as &apos;./hadoop&apos;<br><br>Fix: change that line to &apos;bin=`cd &quot;$bin&quot; &gt; /dev/null; pwd`&apos;</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8423">HADOOP-8423</a>.
+     Major bug reported by jason98 and fixed by tlipcon (io)<br>
+     <b>MapFile.Reader.get() crashes jvm or throws EOFException on Snappy or LZO block-compressed data</b><br>
+     <blockquote>I am using Cloudera distribution cdh3u1.<br><br>When trying to check native codecs for better decompression performance, such as Snappy or LZO, I ran into issues with random access using the MapFile.Reader.get(key, value) method. The first call of MapFile.Reader.get() works but a second call fails.<br><br>Also I am getting different exceptions depending on the number of entries in a map file. With LzoCodec and a 10 record file, the jvm gets aborted.<br><br>At the same time the DefaultCodec works fine for all cases, as well as r...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8460">HADOOP-8460</a>.
+     Major bug reported by revans2 and fixed by revans2 (documentation)<br>
+     <b>Document proper setting of HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR</b><br>
+     <blockquote>We should document that in a properly setup cluster HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR should not point to /tmp, but should point to a directory that normal users do not have access to.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8512">HADOOP-8512</a>.
+     Minor bug reported by tucu00 and fixed by tucu00 (security)<br>
+     <b>AuthenticatedURL should reset the Token when the server returns other than OK on authentication</b><br>
+     <blockquote>Currently the token is not being reset, and when using AuthenticatedURL it will keep sending the invalid token as a Cookie. There is no security concern with this; the main inconvenience is the logging generated on the server side.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8580">HADOOP-8580</a>.
+     Major bug reported by ekoontz and fixed by  <br>
+     <b>ant compile-native fails with automake version 1.11.3</b><br>
+     <blockquote>The following:<br><br>{code}<br>ant -d -v -DskipTests -Dcompile.native=true clean compile-native<br>{code}<br><br>works with GNU automake version 1.11.1, but fails with automake version 1.11.3. <br><br>Relevant lines of failure seem to be these:<br><br>{code}<br>[exec] make[1]: Leaving directory `/tmp/hadoop-common/build/native/Linux-amd64-64&apos;<br>     [exec] Current OS is Linux<br>     [exec] Executing &apos;sh&apos; with arguments:<br>     [exec] &apos;/tmp/hadoop-common/build/native/Linux-amd64-64/libtool&apos;<br>     [exec] &apos;--mode=install&apos;<br>     [exec]...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8586">HADOOP-8586</a>.
+     Major bug reported by eli and fixed by eli <br>
+     <b>Fixup a bunch of SPNEGO misspellings</b><br>
+     <blockquote>SPNEGO is misspelled as &quot;SPENGO&quot; a bunch of places.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8587">HADOOP-8587</a>.
+     Minor bug reported by eli and fixed by eli (fs)<br>
+     <b>HarFileSystem access of harMetaCache isn&apos;t threadsafe</b><br>
+     <blockquote>HarFileSystem&apos;s use of the static harMetaCache map is not threadsafe. Credit to Todd for pointing this out.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8606">HADOOP-8606</a>.
+     Major bug reported by daryn and fixed by daryn (fs)<br>
+     <b>FileSystem.get may return the wrong filesystem</b><br>
+     <blockquote>{{FileSystem.get(URI, conf)}} will return the default fs if the scheme is null, regardless of whether the authority is null too.  This causes URIs of &quot;//authority/path&quot; to _always_ refer to &quot;/path&quot; on the default fs.  To the user, this appears to &quot;work&quot; if the authority in the null-scheme URI matches the authority of the default fs.  When the authorities don&apos;t match, the user is very surprised that the default fs is used (an illustrative sketch appears at the end of this section).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8611">HADOOP-8611</a>.
+     Major bug reported by kihwal and fixed by robsparker (security)<br>
+     <b>Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails</b><br>
+     <blockquote>When the JNI-based users-group mapping is enabled, the process/command will fail if the native library, libhadoop.so, cannot be found. This mostly happens on the client side, where users may use hadoop programmatically. Instead of failing, falling back to the shell-based implementation is desirable. Depending on how the cluster is configured, use of the native netgroup mapping cannot be substituted by the shell-based default. For this reason, this behavior must be configurable with the default bein...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8612">HADOOP-8612</a>.
+     Major bug reported by mattf and fixed by eli (fs)<br>
+     <b>Backport HADOOP-8599 to branch-1 (Non empty response when read beyond eof)</b><br>
+     <blockquote>When FileSystem.getFileBlockLocations(file,start,len) is called with &quot;start&quot; argument equal to the file size, the response is not empty. See HADOOP-8599 for details and tiny patch.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8613">HADOOP-8613</a>.
+     Critical bug reported by daryn and fixed by daryn <br>
+     <b>AbstractDelegationTokenIdentifier#getUser() should set token auth type</b><br>
+     <blockquote>{{AbstractDelegationTokenIdentifier#getUser()}} returns the UGI associated with a token.  The UGI&apos;s auth type will either be SIMPLE for non-proxy tokens, or PROXY (effective user) and SIMPLE (real user).  Instead of SIMPLE, it needs to be TOKEN.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8767">HADOOP-8767</a>.
+     Minor bug reported by surfercrs4 and fixed by surfercrs4 (bin)<br>
+     <b>secondary namenode on slave machines</b><br>
+     <blockquote>When the default value for HADOOP_SLAVES is changed in hadoop-env.sh, starting HDFS (with start-dfs.sh) creates secondary namenodes on all the machines listed in conf/slaves instead of conf/masters.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8781">HADOOP-8781</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (scripts)<br>
+     <b>hadoop-config.sh should add JAVA_LIBRARY_PATH to LD_LIBRARY_PATH</b><br>
+     <blockquote>The Snappy shared object fails to load properly if LD_LIBRARY_PATH does not include the path where the snappy SO is. This is observed in setups that don&apos;t have an independent snappy installation (i.e., one not installed by Hadoop).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8786">HADOOP-8786</a>.
+     Major bug reported by tlipcon and fixed by tlipcon <br>
+     <b>HttpServer continues to start even if AuthenticationFilter fails to init</b><br>
+     <blockquote>As seen in HDFS-3904, if the AuthenticationFilter fails to initialize, the web server will continue to start up. We need to check for context initialization errors after starting the server.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8791">HADOOP-8791</a>.
+     Major bug reported by bdechoux and fixed by jingzhao (documentation)<br>
+     <b>rm &quot;Only deletes non empty directory and files.&quot;</b><br>
+     <blockquote>The documentation (1.0.3) describes the opposite of what rm does.<br>It should be &quot;Only deletes files and empty directories.&quot;<br><br>With regard to files, the size of the file should not matter, should it?<br><br>Or I am totally misunderstanding the semantics of this command, and I am not the only one.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8819">HADOOP-8819</a>.
+     Major bug reported by brandonli and fixed by brandonli (fs)<br>
+     <b>Should use &amp;&amp; instead of  &amp; in a few places in FTPFileSystem,FTPInputStream,S3InputStream,ViewFileSystem,ViewFs</b><br>
+     <blockquote>Should use &amp;&amp; instead of  &amp; in a few places in FTPFileSystem,FTPInputStream,S3InputStream,ViewFileSystem,ViewFs.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8820">HADOOP-8820</a>.
+     Major new feature reported by djp and fixed by djp (net)<br>
+     <b>Backport HADOOP-8469 and HADOOP-8470: add &quot;NodeGroup&quot; layer in new NetworkTopology (also known as NetworkTopologyWithNodeGroup)</b><br>
+     <blockquote>This patch backports HADOOP-8469 and HADOOP-8470 to branch-1 and includes:<br>1. Making the NetworkTopology class pluggable for extension.<br>2. Implementing a 4-layer NetworkTopology class (named NetworkTopologyWithNodeGroup) for use in virtualized environments (or other situations with an additional layer between host and rack).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8832">HADOOP-8832</a>.
+     Major bug reported by brandonli and fixed by brandonli <br>
+     <b>backport serviceplugin to branch-1</b><br>
+     <blockquote>The original patch was only partially backported to branch-1. This JIRA is to backport the rest of it.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8861">HADOOP-8861</a>.
+     Major bug reported by amareshwari and fixed by amareshwari (fs)<br>
+     <b>FSDataOutputStream.sync should call flush() if the underlying wrapped stream is not Syncable</b><br>
+     <blockquote>Currently FSDataOutputStream.sync is a no-op if the wrapped stream is not Syncable. Instead it should call flush() if the wrapped stream is not syncable.<br><br>This behavior is already present in trunk, but branch-1 does not have this.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8900">HADOOP-8900</a>.
+     Major bug reported by slavik_krassovsky and fixed by adi2 <br>
+     <b>BuiltInGzipDecompressor throws IOException - stored gzip size doesn&apos;t match decompressed size</b><br>
+     <blockquote>Encountered failure when processing a large GZIP file.<br>Gz: Failed in 1hrs, 13mins, 57sec with the error:<br>java.io.IOException: IO error in map input file hdfs://localhost:9000/Halo4/json_m/gz/NewFileCat.txt.gz<br> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:242)<br> at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:216)<br> at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)<br> at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.j...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8917">HADOOP-8917</a>.
+     Major bug reported by arpitgupta and fixed by arpitgupta <br>
+     <b>add LOCALE.US to toLowerCase in SecurityUtil.replacePattern</b><br>
+     <blockquote>WebHdfs and fsck use Locale.US in toLowerCase when getting the Kerberos principal. We should do the same in replacePattern, as this method is used when service principals log in.<br><br>See https://issues.apache.org/jira/browse/HADOOP-8878?focusedCommentId=13472245&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13472245 for more details.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8931">HADOOP-8931</a>.
+     Trivial improvement reported by eli and fixed by eli <br>
+     <b>Add Java version to startup message</b><br>
+     <blockquote>I often look at logs and have to track down the java version they were run with; it would be useful if we logged this as part of the startup message.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8951">HADOOP-8951</a>.
+     Minor improvement reported by stevel@apache.org and fixed by stevel@apache.org (util)<br>
+     <b>RunJar to fail with user-comprehensible error message if jar missing</b><br>
+     <blockquote>When the RunJar JAR is missing or not a file, exit with a meaningful message.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8963">HADOOP-8963</a>.
+     Trivial bug reported by billie.rinaldi and fixed by arpitgupta <br>
+     <b>CopyFromLocal doesn&apos;t always create user directory</b><br>
+     <blockquote>When you use the command &quot;hadoop fs -copyFromLocal filename .&quot; before the /user/username directory has been created, the file is created with name /user/username instead of a directory being created with file /user/username/filename.  The command &quot;hadoop fs -copyFromLocal filename filename&quot; works as expected, creating /user/username and /user/username/filename, and &quot;hadoop fs -copyFromLocal filename .&quot; works as expected if the /user/username directory already exists.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8968">HADOOP-8968</a>.
+     Major improvement reported by tucu00 and fixed by tucu00 <br>
+     <b>Add a flag to completely disable the worker version check</b><br>
+     <blockquote>The current logic in the TaskTracker and the DataNode to allow a relaxed version check with the JobTracker and NameNode works only if the versions of Hadoop are exactly the same.<br><br>We should add a switch to disable version checking completely, to enable rolling upgrades between compatible versions (typically patch versions).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-8988">HADOOP-8988</a>.
+     Major new feature reported by jingzhao and fixed by jingzhao (conf)<br>
+     <b>Backport HADOOP-8343 to branch-1</b><br>
+     <blockquote>Backport HADOOP-8343 to branch-1 so as to specifically control the authorization requirements for accessing /jmx, /metrics, and /conf in branch-1.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9036">HADOOP-9036</a>.
+     Major bug reported by ivanmi and fixed by sureshms <br>
+     <b>TestSinkQueue.testConcurrentConsumers fails intermittently (Backports HADOOP-7292)</b><br>
+     <blockquote>org.apache.hadoop.metrics2.impl.TestSinkQueue.testConcurrentConsumers<br> <br><br>Error Message<br><br>should&apos;ve thrown<br>Stacktrace<br><br>junit.framework.AssertionFailedError: should&apos;ve thrown<br>	at org.apache.hadoop.metrics2.impl.TestSinkQueue.shouldThrowCME(TestSinkQueue.java:229)<br>	at org.apache.hadoop.metrics2.impl.TestSinkQueue.testConcurrentConsumers(TestSinkQueue.java:195)<br>Standard Output<br><br>2012-10-03 16:51:31,694 INFO  impl.TestSinkQueue (TestSinkQueue.java:consume(243)) - sleeping<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9090">HADOOP-9090</a>.
+     Minor new feature reported by mostafae and fixed by mostafae (metrics)<br>
+     <b>Support on-demand publish of metrics</b><br>
+     <blockquote>Updated description based on feedback:<br><br>We have a need to publish metrics out of some short-lived processes, which is not really well suited to the current metrics system implementation, which periodically publishes metrics asynchronously (a behavior that works great for long-living processes). Of course I could write my own metrics system, but it seems like such a waste to rewrite all the awesome code currently in MetricsSystemImpl and supporting classes.<br>The way this JIRA solves this pr...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9095">HADOOP-9095</a>.
+     Minor bug reported by szetszwo and fixed by jingzhao (net)<br>
+     <b>TestNNThroughputBenchmark fails in branch-1</b><br>
+     <blockquote>{noformat}<br>java.lang.StringIndexOutOfBoundsException: String index out of range: 0<br>    at java.lang.String.charAt(String.java:686)<br>    at org.apache.hadoop.net.NetUtils.normalizeHostName(NetUtils.java:539)<br>    at org.apache.hadoop.net.NetUtils.normalizeHostNames(NetUtils.java:562)<br>    at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:88)<br>    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getBlockLocations(FSNamesystem.java:1047)<br>    ...<br>    at org...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9098">HADOOP-9098</a>.
+     Blocker bug reported by tomwhite and fixed by arpitagarwal (build)<br>
+     <b>Add missing license headers</b><br>
+     <blockquote>There are missing license headers in some source files (e.g. TestUnderReplicatedBlocks.java is one) according to the RAT report.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9099">HADOOP-9099</a>.
+     Minor bug reported by ivanmi and fixed by ivanmi (test)<br>
+     <b>NetUtils.normalizeHostName fails on domains where UnknownHost resolves to an IP address</b><br>
+     <blockquote>I just hit this failure. We should use a more distinctive string for &quot;UnknownHost&quot;:<br><br>Testcase: testNormalizeHostName took 0.007 sec<br>	FAILED<br>expected:&lt;[65.53.5.181]&gt; but was:&lt;[UnknownHost]&gt;<br>junit.framework.AssertionFailedError: expected:&lt;[65.53.5.181]&gt; but was:&lt;[UnknownHost]&gt;<br>	at org.apache.hadoop.net.TestNetUtils.testNormalizeHostName(TestNetUtils.java:347)<br><br>Will post a patch in a bit.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9124">HADOOP-9124</a>.
+     Minor bug reported by phunt and fixed by snihalani (io)<br>
+     <b>SortedMapWritable violates contract of Map interface for equals() and hashCode()</b><br>
+     <blockquote>This issue is similar to HADOOP-7153. It was found when using MRUnit - see MRUNIT-158, specifically https://issues.apache.org/jira/browse/MRUNIT-158?focusedCommentId=13501985&amp;page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13501985<br><br>--<br>o.a.h.io.SortedMapWritable implements the java.util.Map interface, however it does not define an implementation of the equals() or hashCode() methods; instead the default implementations in java.lang.Object are used.<br><br>This violates...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9154">HADOOP-9154</a>.
+     Major bug reported by kkambatl and fixed by kkambatl (io)<br>
+     <b>SortedMapWritable#putAll() doesn&apos;t add key/value classes to the map</b><br>
+     <blockquote>In the following code from {{SortedMapWritable}}, #putAll() doesn&apos;t add key/value classes to the class-id maps.<br><br>{code}<br><br>  @Override<br>  public Writable put(WritableComparable key, Writable value) {<br>    addToMap(key.getClass());<br>    addToMap(value.getClass());<br>    return instance.put(key, value);<br>  }<br><br>  @Override<br>  public void putAll(Map&lt;? extends WritableComparable, ? extends Writable&gt; t){<br>    for (Map.Entry&lt;? extends WritableComparable, ? extends Writable&gt; e:<br>      t.entrySet()) {<br>      <br>    ... (A corrected sketch appears at the end of this section.)</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9174">HADOOP-9174</a>.
+     Major test reported by arpitagarwal and fixed by arpitagarwal (test)<br>
+     <b>TestSecurityUtil fails on Open JDK 7</b><br>
+     <blockquote>TestSecurityUtil.TestBuildTokenServiceSockAddr fails due to an implicit dependency on the test case execution order.<br><br>Testcase: testBuildTokenServiceSockAddr took 0.003 sec<br>	Caused an ERROR<br>expected:&lt;[127.0.0.1]:123&gt; but was:&lt;[localhost]:123&gt;<br>	at org.apache.hadoop.security.TestSecurityUtil.testBuildTokenServiceSockAddr(TestSecurityUtil.java:133)<br><br><br>A similar bug exists in TestSecurityUtil.testBuildDTServiceName.<br><br>The root cause is that a helper routine (verifyAddress) used by some test cases has a ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9175">HADOOP-9175</a>.
+     Major test reported by arpitagarwal and fixed by arpitagarwal (test)<br>
+     <b>TestWritableName fails with Open JDK 7</b><br>
+     <blockquote>TestWritableName.testAddName fails due to a test order execution dependency on testSetName.<br><br>java.io.IOException: WritableName can&apos;t load class: mystring<br>at org.apache.hadoop.io.WritableName.getClass(WritableName.java:73)<br>at org.apache.hadoop.io.TestWritableName.testAddName(TestWritableName.java:92)<br>Caused by: java.lang.ClassNotFoundException: mystring<br>at java.net.URLClassLoader$1.run(URLClassLoader.java:366)<br>at java.net.URLClassLoader$1.run(URLClassLoader.java:355)<br>at java.security.AccessCon...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9179">HADOOP-9179</a>.
+     Major bug reported by brandonli and fixed by brandonli <br>
+     <b>TestFileSystem fails with open JDK7</b><br>
+     <blockquote>This is a test order-dependency bug as pointed out in HADOOP-8390. This JIRA is to track the fix in branch-1.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9191">HADOOP-9191</a>.
+     Major bug reported by arpitagarwal and fixed by arpitagarwal (test)<br>
+     <b>TestAccessControlList and TestJobHistoryConfig fail with JDK7</b><br>
+     <blockquote>Individual test cases have dependencies on a specific order of execution and fail when the order is changed.<br><br>TestAccessControlList.testNetGroups relies on Groups being initialized with a hard-coded test class that subsequent test cases depend on.<br><br>TestJobHistoryConfig.testJobHistoryLogging fails to shutdown the MiniDFSCluster on exit.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9253">HADOOP-9253</a>.
+     Major improvement reported by arpitgupta and fixed by arpitgupta <br>
+     <b>Capture ulimit info in the logs at service start time</b><br>
+     <blockquote>The output of ulimit -a is helpful when debugging issues on the system.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9349">HADOOP-9349</a>.
+     Major bug reported by sandyr and fixed by sandyr (tools)<br>
+     <b>Confusing output when running hadoop version from one hadoop installation when HADOOP_HOME points to another</b><br>
+     <blockquote>Hadoop version X is downloaded to ~/hadoop-x, and Hadoop version Y is downloaded to ~/hadoop-y.  HADOOP_HOME is set to hadoop-x.  A user running hadoop-y/bin/hadoop might expect to be running the hadoop-y jars, but, because of HADOOP_HOME, will actually be running hadoop-x jars.<br><br>&quot;hadoop version&quot; could help clear this up a little by reporting the current HADOOP_HOME.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9369">HADOOP-9369</a>.
+     Major bug reported by kkambatl and fixed by kkambatl (net)<br>
+     <b>DNS#reverseDns() can return hostname with . appended at the end</b><br>
+     <blockquote>DNS#reverseDns uses javax.naming.InitialDirContext to do a reverse DNS lookup. This can sometimes return hostnames with a . at the end.<br><br>Saw this happen on hadoop-1: two nodes with tasktracker.dns.interface set to eth0</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HADOOP-9375">HADOOP-9375</a>.
+     Trivial bug reported by teledriver and fixed by sureshms (test)<br>
+     <b>Port HADOOP-7290 to branch-1 to fix TestUserGroupInformation failure</b><br>
+     <blockquote>Unit test failure in TestUserGroupInformation.testGetServerSideGroups. Port HADOOP-7290 to branch-1.1.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-1957">HDFS-1957</a>.
+     Minor improvement reported by asrabkin and fixed by asrabkin (documentation)<br>
+     <b>Documentation for HFTP</b><br>
+     <blockquote>There should be some documentation for HFTP.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2533">HDFS-2533</a>.
+     Minor improvement reported by tlipcon and fixed by tlipcon (datanode, performance)<br>
+     <b>Remove needless synchronization on FSDataSet.getBlockFile</b><br>
+     <blockquote>HDFS-1148 discusses lock contention issues in FSDataset. It provides a more comprehensive fix, converting it all to RWLocks, etc. This JIRA is for one very specific fix which gives a decent performance improvement for TestParallelRead: getBlockFile() currently holds the lock which is completely unnecessary.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2751">HDFS-2751</a>.
+     Major bug reported by tlipcon and fixed by tlipcon (datanode)<br>
+     <b>Datanode drops OS cache behind reads even for short reads</b><br>
+     <blockquote>HDFS-2465 has some code which attempts to disable the &quot;drop cache behind reads&quot; functionality when the reads are &lt;256KB (eg HBase random access). But this check was missing in the {{close()}} function, so it always drops cache behind reads regardless of the size of the read. This hurts HBase random read performance when this patch is enabled.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2757">HDFS-2757</a>.
+     Major bug reported by jdcryans and fixed by jdcryans <br>
+     <b>Cannot read a local block that&apos;s being written to when using the local read short circuit</b><br>
+     <blockquote>When testing the tail&apos;ing of a local file with the read short circuit on, I get:<br><br>{noformat}<br>2012-01-06 00:17:31,598 WARN org.apache.hadoop.hdfs.DFSClient: BlockReaderLocal requested with incorrect offset:  Offset 0 and length 8230400 don&apos;t match block blk_-2842916025951313698_454072 ( blockLen 124 )<br>2012-01-06 00:17:31,598 WARN org.apache.hadoop.hdfs.DFSClient: BlockReaderLocal: Removing blk_-2842916025951313698_454072 from cache because local file /export4/jdcryans/dfs/data/blocksBeingWritt...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-2827">HDFS-2827</a>.
+     Major bug reported by umamaheswararao and fixed by umamaheswararao (namenode)<br>
+     <b>Cannot save namespace after renaming a directory above a file with an open lease</b><br>
+     <blockquote>When I execute the following operations and wait for the checkpoint to complete:<br><br>fs.mkdirs(new Path(&quot;/test1&quot;));<br>FSDataOutputStream create = fs.create(new Path(&quot;/test/abc.txt&quot;)); //dont close<br>fs.rename(new Path(&quot;/test/&quot;), new Path(&quot;/test1/&quot;));<br><br>check-pointing fails with the following exception:<br><br>2012-01-23 15:03:14,204 ERROR namenode.FSImage (FSImage.java:run(795)) - Unable to save image for E:\HDFS-1623\hadoop-hdfs-project\hadoop-hdfs\build\test\data\dfs\name3<br>java.io.IOException: saveLease...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3062">HDFS-3062</a>.
+     Critical bug reported by mingjielai and fixed by mingjielai (ha, security)<br>
+     <b>Fail to submit mapred job on a secured-HA-HDFS: the logical URI cannot be picked up by job submission.</b><br>
+     <blockquote>When testing the combination of NN HA + security + yarn, I found that the mapred job submission cannot pick up the logical URI of a nameservice. <br><br>I have the logical URI configured in core-site.xml:<br>{code}<br>&lt;property&gt;<br> &lt;name&gt;fs.defaultFS&lt;/name&gt;<br> &lt;value&gt;hdfs://ns1&lt;/value&gt;<br>&lt;/property&gt;<br>{code}<br><br>The HDFS client can work with the HA deployment/configs:<br>{code}<br>[root@nn1 hadoop]# hdfs dfs -ls /<br>Found 6 items<br>drwxr-xr-x   - hbase  hadoop          0 2012-03-07 20:42 /hbase<br>drwxrwxrwx   - yarn   hadoop          0 201...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3163">HDFS-3163</a>.
+     Trivial improvement reported by brandonli and fixed by brandonli (test)<br>
+     <b>TestHDFSCLI.testAll fails if the user name is not all lowercase</b><br>
+     <blockquote>In the test resource file testHDFSConf.xml, the test comparators expect user name to be all lowercase. <br>If the user issuing the test has an uppercase in the username (e.g., Brandon instead of brandon), many RegexpComarator tests will fail. The following is one example:<br>{noformat} <br>        &lt;comparator&gt;<br>          &lt;type&gt;RegexpComparator&lt;/type&gt;<br>          &lt;expected-output&gt;^-rw-r--r--( )*1( )*[a-z]*( )*supergroup( )*0( )*[0-9]{4,}-[0-9]{2,}-[0-9]{2,} [0-9]{2,}:[0-9]{2,}( )*/file1&lt;/expected-output&gt;<br>...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3402">HDFS-3402</a>.
+     Minor bug reported by benoyantony and fixed by benoyantony (scripts, security)<br>
+     <b>Fix hdfs scripts for secure datanodes</b><br>
+     <blockquote>Starting secure datanode gives out the following error :<br><br>Error thrown :<br>09/04/2012 12:09:30 2524 jsvc error: Invalid option -server<br>09/04/2012 12:09:30 2524 jsvc error: Cannot parse command line arguments</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3479">HDFS-3479</a>.
+     Major improvement reported by cmccabe and fixed by cmccabe <br>
+     <b>backport HDFS-3335 (check for edit log corruption at the end of the log) to branch-1</b><br>
+     <blockquote>backport HDFS-3335 (check for edit log corruption at the end of the log) to branch-1</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3515">HDFS-3515</a>.
+     Major new feature reported by eli2 and fixed by eli (namenode)<br>
+     <b>Port HDFS-1457 to branch-1</b><br>
+     <blockquote>Let&apos;s port HDFS-1457 (configuration option to enable limiting the transfer rate used when sending the image and edits for checkpointing) to branch-1.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3521">HDFS-3521</a>.
+     Major improvement reported by szetszwo and fixed by szetszwo (namenode)<br>
+     <b>Allow namenode to tolerate edit log corruption</b><br>
+     <blockquote>HDFS-3479 adds checking for edit log corruption. It uses a fixed UNCHECKED_REGION_LENGTH (=PREALLOCATION_LENGTH) so that the bytes at the end within that length are not checked. Instead of not checking those bytes, we should check everything and allow toleration.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3540">HDFS-3540</a>.
+     Major bug reported by szetszwo and fixed by szetszwo (namenode)<br>
+     <b>Further improvement on recovery mode and edit log toleration in branch-1</b><br>
+     <blockquote>*Recovery Mode*: HDFS-3479 backported HDFS-3335 to branch-1. However, the recovery mode feature in branch-1 is dramatically different from the recovery mode in trunk, since the edit log implementations in these two branches are different. For example, there is UNCHECKED_REGION_LENGTH in branch-1 but not in trunk.<br><br>*Edit Log Toleration*: HDFS-3521 added this feature to branch-1 to remedy UNCHECKED_REGION_LENGTH and to tolerate edit log corruption.<br><br>There are overlaps between these two features....</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3595">HDFS-3595</a>.
+     Major bug reported by cmccabe and fixed by cmccabe (namenode)<br>
+     <b>TestEditLogLoading fails in branch-1</b><br>
+     <blockquote>TestEditLogLoading currently fails in branch-1, with this error message:<br>{code}<br>Testcase: testDisplayRecentEditLogOpCodes took 1.965 sec<br>    FAILED<br>error message contains opcodes message<br>junit.framework.AssertionFailedError: error message contains opcodes message<br>    at org.apache.hadoop.hdfs.server.namenode.TestEditLogLoading.testDisplayRecentEditLogOpCodes(TestEditLogLoading.java:75)<br>{code}</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3596">HDFS-3596</a>.
+     Minor improvement reported by cmccabe and fixed by cmccabe <br>
+     <b>Improve FSEditLog pre-allocation in branch-1</b><br>
+     <blockquote>Implement HDFS-3510 in branch-1.  This will improve FSEditLog preallocation to decrease the incidence of corrupted logs after disk full conditions.  (See HDFS-3510 for a longer description.)</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3604">HDFS-3604</a>.
+     Minor improvement reported by eli and fixed by eli <br>
+     <b>Add dfs.webhdfs.enabled to hdfs-default.xml</b><br>
+     <blockquote>Let&apos;s add {{dfs.webhdfs.enabled}} to hdfs-default.xml.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3617">HDFS-3617</a>.
+     Major improvement reported by mattf and fixed by qwertymaniac <br>
+     <b>Port HDFS-96 to branch-1 (support blocks greater than 2GB)</b><br>
+     <blockquote>Please see HDFS-96.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3628">HDFS-3628</a>.
+     Blocker bug reported by qwertymaniac and fixed by qwertymaniac (datanode, namenode)<br>
+     <b>The dfsadmin -setBalancerBandwidth command on branch-1 does not check for superuser privileges</b><br>
+     <blockquote>The changes from HDFS-2202 for 0.20.x/1.x failed to add a checkSuperuserPrivilege() call, and hence any user (not admins alone) can reset the balancer bandwidth across the cluster if they wish to.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3647">HDFS-3647</a>.
+     Major improvement reported by hoffman60613 and fixed by qwertymaniac (datanode)<br>
+     <b>Backport HDFS-2868 (Add number of active transfer threads to the DataNode status) to branch-1</b><br>
+     <blockquote>Not sure if this is in a newer version of Hadoop, but in CDH3u3 it isn&apos;t there.<br><br>There is a lot of mystery surrounding how large to set dfs.datanode.max.xcievers.  Most people say to just up it to 4096, but given that exceeding this will cause an HBase RegionServer shutdown (see Lars&apos; blog post here: http://www.larsgeorge.com/2012/03/hadoop-hbase-and-xceivers.html), it would be nice if we could expose the current count via the built-in metrics framework (most likely under dfs).  In this way w...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3679">HDFS-3679</a>.
+     Minor bug reported by cmeyerisi and fixed by cmeyerisi (fuse-dfs)<br>
+     <b>fuse_dfs notrash option sets usetrash</b><br>
+     <blockquote>fuse_dfs sets usetrash option when the &quot;notrash&quot; flag is given. This is the exact opposite of the desired behavior. The &quot;usetrash&quot; flag sets usetrash as well, but this is correct. Here are the relevant lines from fuse_options.c, in latest HDFS HEAD[0]:<br><br>123	  case KEY_USETRASH:<br>124	    options.usetrash = 1;<br>125	    break;<br>126	  case KEY_NOTRASH:<br>127	    options.usetrash = 1;<br>128	    break;<br><br>This is a pretty trivial bug to fix. I&apos;m not familiar with the process here, but I can attach a patch i...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3698">HDFS-3698</a>.
+     Major bug reported by atm and fixed by atm (security)<br>
+     <b>TestHftpFileSystem is failing in branch-1 due to changed default secure port</b><br>
+     <blockquote>This test is failing since the default secure port changed to the HTTP port upon the commit of HDFS-2617.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3754">HDFS-3754</a>.
+     Major bug reported by eli and fixed by eli (datanode)<br>
+     <b>BlockSender doesn&apos;t shutdown ReadaheadPool threads</b><br>
+     <blockquote>The BlockSender doesn&apos;t shutdown the ReadaheadPool threads so when tests are run with native libraries some tests fail (time out) because shutdown hangs waiting for the outstanding threads to exit.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3819">HDFS-3819</a>.
+     Minor improvement reported by jingzhao and fixed by jingzhao <br>
+     <b>Should check whether invalidate work percentage default value is not greater than 1.0f</b><br>
+     <blockquote>In DFSUtil#getInvalidateWorkPctPerIteration we should also check that the configured value is not greater than 1.0f.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3838">HDFS-3838</a>.
+     Trivial improvement reported by brandonli and fixed by brandonli (namenode)<br>
+     <b>fix the typo in FSEditLog.java:  isToterationEnabled should be isTolerationEnabled</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3912">HDFS-3912</a>.
+     Major sub-task reported by jingzhao and fixed by jingzhao <br>
+     <b>Detecting and avoiding stale datanodes for writing</b><br>
+     <blockquote>1. Make stale timeout adaptive to the number of nodes marked stale in the cluster.<br>2. Consider having a separate configuration for write skipping the stale nodes.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3941">HDFS-3941</a>.
+     Major new feature reported by djp and fixed by djp (namenode)<br>
+     <b>Backport HDFS-3498 and HDFS3601: update replica placement policy for new added &quot;NodeGroup&quot; layer topology</b><br>
+     <blockquote>With the additional &quot;NodeGroup&quot; layer enabled, the replica placement policy used in BlockPlacementPolicyWithNodeGroup is updated to the following rules:<br>0. No more than one replica is placed within a NodeGroup (*)<br>1. The first replica is placed on the local node.<br>2. The second and third replicas are placed within the same rack, which is remote from the rack of the first replica.<br>3. Other replicas are placed on random nodes with the restriction that no more than two replicas are placed in the same rack, if there are enough racks.<br><br>Also, this patch abstract...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3942">HDFS-3942</a>.
+     Major new feature reported by djp and fixed by djp (balancer)<br>
+     <b>Backport HDFS-3495: Update balancer policy for Network Topology with additional &apos;NodeGroup&apos; layer</b><br>
+     <blockquote>This is the backport work for HDFS-3495 and HDFS-4234.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-3961">HDFS-3961</a>.
+     Major bug reported by jingzhao and fixed by jingzhao <br>
+     <b>FSEditLog preallocate() needs to reset the position of PREALLOCATE_BUFFER when more than 1MB size is needed</b><br>
+     <blockquote>In the new preallocate() function, when the required size is larger than 1MB, we need to reset the position of PREALLOCATION_BUFFER every time we have allocated 1MB. Otherwise it seems only 1MB can be allocated even if the needed size is larger than 1MB.</blockquote></li>
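+
+<p>A minimal sketch (hypothetical names, not the actual FSEditLog code) of the pitfall behind this fix: FileChannel.write() advances the buffer position, so unless the position is reset before each 1MB chunk, every write after the first sees zero remaining bytes and nothing beyond 1MB ever gets preallocated.</p>
+<pre>
+import java.io.IOException;
+import java.nio.ByteBuffer;
+import java.nio.channels.FileChannel;
+
+public class PreallocateSketch {
+  private static final int MB = 1024 * 1024;
+  private static final ByteBuffer FILL = ByteBuffer.allocateDirect(MB);
+
+  static void preallocate(FileChannel fc, long offset, long need)
+      throws IOException {
+    long written = 0;
+    while (written &lt; need) {
+      FILL.position(0);  // the fix: reset the position before every chunk
+      written += fc.write(FILL, offset + written);
+    }
+  }
+}
+</pre>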
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4057">HDFS-4057</a>.
+     Minor improvement reported by brandonli and fixed by brandonli (namenode)<br>
+     <b>NameNode.namesystem should be private. Use getNamesystem() instead.</b><br>
+     <blockquote>NameNode.namesystem should be private. One should use NameNode.getNamesystem() to get it instead.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4072">HDFS-4072</a>.
+     Minor bug reported by jingzhao and fixed by jingzhao (namenode)<br>
+     <b>On file deletion remove corresponding blocks pending replication</b><br>
+     <blockquote>Currently, when deleting a file, blockManager does not remove the records corresponding to the file&apos;s blocks from pendingReplications. These records can only be removed after a timeout (5~10 min).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4168">HDFS-4168</a>.
+     Major bug reported by szetszwo and fixed by jingzhao (namenode)<br>
+     <b>TestDFSUpgradeFromImage fails in branch-1</b><br>
+     <blockquote>{noformat}<br>java.lang.NullPointerException<br>	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removeBlocks(FSNamesystem.java:2212)<br>	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.removePathAndBlocks(FSNamesystem.java:2225)<br>	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.unprotectedDelete(FSDirectory.java:645)<br>	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.loadFSEdits(FSEditLog.java:833)<br>	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSEdits(FSImage.java:1024)<br>...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4180">HDFS-4180</a>.
+     Minor bug reported by szetszwo and fixed by jingzhao (test)<br>
+     <b>TestFileCreation fails in branch-1 but not branch-1.1</b><br>
+     <blockquote>{noformat}<br>Testcase: testFileCreation took 3.419 sec<br>	Caused an ERROR<br>java.io.IOException: Cannot create /test_dir; already exists as a directory<br>	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1374)<br>	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:1334)<br>	...<br>	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)<br><br>org.apache.hadoop.ipc.RemoteException: java.io.IOException: Cannot create /test_dir; already e...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4207">HDFS-4207</a>.
+     Minor bug reported by stevel@apache.org and fixed by jingzhao (hdfs-client)<br>
+     <b>All hadoop fs operations fail if the default fs is down even if a different file system is specified in the command</b><br>
+     <blockquote>You can&apos;t run any {{hadoop fs}} commands against any Hadoop filesystem (e.g. s3://, a remote hdfs://, webhdfs://) if the default FS of the client is offline. Only operations that need the local fs should be expected to fail in this situation.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4219">HDFS-4219</a>.
+     Major new feature reported by arpitgupta and fixed by arpitgupta <br>
+     <b>Port slive to branch-1</b><br>
+     <blockquote>Originally it was committed in HDFS-708 and MAPREDUCE-1804</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4222">HDFS-4222</a>.
+     Minor bug reported by teledriver and fixed by teledriver (namenode)<br>
+     <b>NN is unresponsive and loses heartbeats of DNs when Hadoop is configured to use LDAP and LDAP has issues</b><br>
+     <blockquote>For Hadoop clusters configured to access directory information via LDAP, the FSNamesystem calls made on behalf of DFS clients might hang due to LDAP issues (including LDAP access issues caused by networking problems) while holding the single FSNamesystem lock. That will leave the NN unresponsive and cause it to lose heartbeats from DNs.<br><br>The place LDAP gets accessed by FSNamesystem calls is the instantiation of FSPermissionChecker, which could be moved out of the lock scope since the instantiation...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4256">HDFS-4256</a>.
+     Major test reported by sureshms and fixed by sanjay.radia (namenode)<br>
+     <b>Backport concatenation of files into a single file to branch-1</b><br>
+     <blockquote>HDFS-222 added support for concatenation of multiple files in a directory into a single file. This helps several use cases where writes can be parallelized, and several folks have expressed interest in this functionality.<br><br>This jira intends to port the equivalent changes from HDFS-222 to branch-1 so they are available in release 1.2.0.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4351">HDFS-4351</a>.
+     Major bug reported by andrew.wang and fixed by andrew.wang (namenode)<br>
+     <b>Fix BlockPlacementPolicyDefault#chooseTarget when avoiding stale nodes</b><br>
+     <blockquote>There&apos;s a bug in {{BlockPlacementPolicyDefault#chooseTarget}} with stale node avoidance enabled (HDFS-3912). If a NotEnoughReplicasException is thrown in the call to {{chooseRandom()}}, {{numOfReplicas}} is not updated together with the partial result in {{result}} since it is passed by value. The retry call to {{chooseTarget}} then uses this incorrect value.<br><br>This can be seen if you enable stale node detection for {{TestReplicationPolicy#testChooseTargetWithMoreThanAvaiableNodes()}}.</blockquote></li>
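+
+<p>A minimal illustration (stand-in code, not the committed patch) of the pass-by-value pitfall described above: Java copies primitive arguments, so progress recorded by decrementing the parameter inside the callee is invisible to the retry logic in the caller.</p>
+<pre>
+public class PassByValueSketch {
+  static void chooseRandom(int numOfReplicas) throws Exception {
+    numOfReplicas -= 2;  // two replicas were placed before the failure
+    throw new Exception(&quot;stand-in for NotEnoughReplicasException&quot;);
+  }
+
+  public static void main(String[] args) {
+    int numOfReplicas = 3;
+    try {
+      chooseRandom(numOfReplicas);
+    } catch (Exception e) {
+      // Bug shape: the caller still sees 3 and retries for too many
+      // replicas; the fix recomputes the count from the partial result.
+      System.out.println(&quot;retrying with numOfReplicas = &quot; + numOfReplicas);
+    }
+  }
+}
+</pre>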
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4355">HDFS-4355</a>.
+     Major bug reported by brandonli and fixed by brandonli (test)<br>
+     <b>TestNameNodeMetrics.testCorruptBlock fails with open JDK7</b><br>
+     <blockquote>Argument(s) are different! Wanted:<br>metricsRecordBuilder.addGauge(<br>&quot;CorruptBlocks&quot;,<br>&lt;any&gt;,<br>1<br>);<br>-&gt; at org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsserts.java:96)<br>Actual invocation has different arguments:<br>metricsRecordBuilder.addGauge(<br>&quot;FilesTotal&quot;,<br>&quot;&quot;,<br>4<br>);<br>-&gt; at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getMetrics(FSNamesystem.java:5818)<br><br>at java.lang.reflect.Constructor.newInstance(Constructor.java:525)<br>at org.apache.hadoop.test.MetricsAsserts.assertGauge(MetricsAsse...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4358">HDFS-4358</a>.
+     Major bug reported by arpitagarwal and fixed by arpitagarwal (test)<br>
+     <b>TestCheckpoint failure with JDK7</b><br>
+     <blockquote>testMultipleSecondaryNameNodes doesn&apos;t shut down the SecondaryNameNode, which causes testCheckpoint to fail.<br><br>Testcase: testCheckpoint took 2.736 sec<br>	Caused an ERROR<br>Cannot lock storage C:\hdp1-2\build\test\data\dfs\namesecondary1. The directory is already locked.<br>java.io.IOException: Cannot lock storage C:\hdp1-2\build\test\data\dfs\namesecondary1. The directory is already locked.<br>	at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:602)<br>	at org.apache.hadoop.hd...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4444">HDFS-4444</a>.
+     Trivial bug reported by schu and fixed by schu <br>
+     <b>Add space between total transaction time and number of transactions in FSEditLog#printStatistics</b><br>
+     <blockquote>Currently, when we log statistics, we see something like<br>{code}<br>13/01/25 23:16:59 INFO namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0<br>{code}<br><br>Notice how the value for total transaction time and the &quot;Number of transactions batched in Syncs&quot; label need a space to separate them.<br><br>FSEditLog#printStatistics:<br>{code}<br>  private void printStatistics(boolean force) {<br>    long now = now();<br>    if (...</blockquote></li>
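+
+<p>A tiny runnable sketch (illustrative names only) of the fix: the literal that follows a numeric value needs a leading space, otherwise the value and the next label run together as &quot;0Number&quot;.</p>
+<pre>
+public class StatisticsFormatSketch {
+  public static void main(String[] args) {
+    long totalTimeTransactions = 0;
+    long numTransactionsBatchedInSync = 0;
+    StringBuilder buf = new StringBuilder();
+    buf.append(&quot;Total time for transactions(ms): &quot;)
+       .append(totalTimeTransactions)
+       .append(&quot; Number of transactions batched in Syncs: &quot;) // leading space added
+       .append(numTransactionsBatchedInSync);
+    System.out.println(buf); // ...(ms): 0 Number of transactions batched in Syncs: 0
+  }
+}
+</pre>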
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4466">HDFS-4466</a>.
+     Major bug reported by brandonli and fixed by brandonli (namenode, security)<br>
+     <b>Remove the deadlock from AbstractDelegationTokenSecretManager</b><br>
+     <blockquote>In HDFS-3374, new synchronization in AbstractDelegationTokenSecretManager.ExpiredTokenRemover was added to make sure the ExpiredTokenRemover thread can be interrupted in time. Otherwise TestDelegation fails intermittently because the MiniDFSCluster thread could be shut down before the tokenRemover thread. <br>However, as Todd pointed out in HDFS-3374, a potential deadlock was introduced by its patch:<br>{quote}<br>   * FSNamesystem.saveNamespace (holding FSN lock) calls DTSM.saveSecretManagerState (which ...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4479">HDFS-4479</a>.
+     Major bug reported by jingzhao and fixed by jingzhao <br>
+     <b>logSync() with the FSNamesystem lock held in commitBlockSynchronization</b><br>
+     <blockquote>In FSNamesystem#commitBlockSynchronization of branch-1, logSync() may be called while the FSNamesystem lock is held. Similar to HDFS-4186, this may cause a performance issue.<br><br>Since logSync is called right after the synchronized section anyway, we can simply remove the logSync call.</blockquote></li>
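+
+<p>A hedged sketch (stub methods, not the FSNamesystem code) of the general pattern behind this class of fix: record the edit while holding the namesystem lock, but sync the edit log to disk only after the lock is released, so other handlers are not serialized behind a disk flush.</p>
+<pre>
+public class LogSyncSketch {
+  private final Object fsLock = new Object();
+
+  void commitBlockSynchronization() {
+    synchronized (fsLock) {
+      logEdit();   // record the change while holding the lock
+    }
+    logSync();     // flush to disk after releasing the lock
+  }
+
+  private void logEdit() { /* append to the in-memory edit buffer (stub) */ }
+  private void logSync() { /* flush the edit buffer to disk (stub) */ }
+}
+</pre>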
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4518">HDFS-4518</a>.
+     Major bug reported by arpitagarwal and fixed by arpitagarwal <br>
+     <b>Finer grained metrics for HDFS capacity</b><br>
+     <blockquote>Namenode should export disk usage metrics in bytes via FSNamesystemMetrics.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4544">HDFS-4544</a>.
+     Major bug reported by amareshwari and fixed by arpitagarwal <br>
+     <b>Error in deleting blocks should not do check disk, for all types of errors</b><br>
+     <blockquote>The following code in Datanode.java <br><br>{noformat}<br>      try {<br>        if (blockScanner != null) {<br>          blockScanner.deleteBlocks(toDelete);<br>        }<br>        data.invalidate(toDelete);<br>      } catch(IOException e) {<br>        checkDiskError();<br>        throw e;<br>      }<br>{noformat}<br><br>causes a disk check to happen for any kind of error during invalidate.<br><br>We have seen errors like:<br><br>2013-03-02 00:08:28,849 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Unexpected error trying to delete bloc...</blockquote></li>
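+
+<p>A hedged sketch of the direction such a fix can take (looksLikeDiskError() is a hypothetical classifier, not a Hadoop API): run the expensive disk check only for errors that plausibly indicate a failed volume, not for every IOException raised while invalidating blocks.</p>
+<pre>
+import java.io.IOException;
+
+public class InvalidateSketch {
+  interface Dataset { void invalidate(String[] blocks) throws IOException; }
+
+  private final Dataset data;
+  InvalidateSketch(Dataset data) { this.data = data; }
+
+  void deleteBlocks(String[] toDelete) throws IOException {
+    try {
+      data.invalidate(toDelete);
+    } catch (IOException e) {
+      if (looksLikeDiskError(e)) {  // hypothetical classifier
+        checkDiskError();
+      }
+      throw e;
+    }
+  }
+
+  private boolean looksLikeDiskError(IOException e) {
+    String m = String.valueOf(e.getMessage());
+    return m.contains(&quot;Input/output error&quot;) || m.contains(&quot;Read-only file system&quot;);
+  }
+
+  private void checkDiskError() { /* scan volumes for failures (stub) */ }
+}
+</pre>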
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4551">HDFS-4551</a>.
+     Major improvement reported by mwagner and fixed by mwagner (webhdfs)<br>
+     <b>Change WebHDFS buffersize behavior to improve default performance</b><br>
+     <blockquote>Currently on 1.X branch, the buffer size used to copy bytes to network defaults to io.file.buffer.size. This causes performance problems if that buffersize is large.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/HDFS-4558">HDFS-4558</a>.
+     Critical bug reported by gujilangzi and fixed by djp (balancer)<br>
+     <b>start balancer failed with NPE</b><br>
+     <blockquote>Starting the balancer failed with an NPE.<br> Filing this issue so QE and dev can take a look.<br><br>balancer.log:<br> 2013-03-06 00:19:55,174 ERROR org.apache.hadoop.hdfs.server.balancer.Balancer: java.lang.NullPointerException<br> at org.apache.hadoop.hdfs.server.namenode.BlockPlacementPolicy.getInstance(BlockPlacementPolicy.java:165)<br> at org.apache.hadoop.hdfs.server.balancer.Balancer.checkReplicationPolicyCompatibility(Balancer.java:799)<br> at org.apache.hadoop.hdfs.server.balancer.Balancer.&lt;init&gt;(Balancer.java:...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-461">MAPREDUCE-461</a>.
+     Minor new feature reported by fhedberg and fixed by fhedberg <br>
+     <b>Enable ServicePlugins for the JobTracker</b><br>
+     <blockquote>Allow ServicePlugins (see HADOOP-5257) for the JobTracker.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-987">MAPREDUCE-987</a>.
+     Minor new feature reported by philip and fixed by ahmed.radwan (build, test)<br>
+     <b>Exposing MiniDFS and MiniMR clusters as a single process command-line</b><br>
+     <blockquote>It&apos;s hard to test non-Java programs that rely on significant mapreduce functionality.  The patch I&apos;m proposing shortly will let you just type &quot;bin/hadoop jar hadoop-hdfs-hdfswithmr-test.jar minicluster&quot; to start a cluster (internally, it&apos;s using Mini{MR,HDFS}Cluster) with a specified number of daemons, etc.  A test that checks how some external process interacts with Hadoop might start minicluster as a subprocess, run through its thing, and then simply kill the java subprocess.<br><br>I&apos;ve been usi...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1684">MAPREDUCE-1684</a>.
+     Major bug reported by amareshwari and fixed by knoguchi (capacity-sched)<br>
+     <b>ClusterStatus can be cached in CapacityTaskScheduler.assignTasks()</b><br>
+     <blockquote>Currently,  CapacityTaskScheduler.assignTasks() calls getClusterStatus() thrice: once in assignTasks(), once in MapTaskScheduler and once in ReduceTaskScheduler. It can be cached in assignTasks() and re-used.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-1806">MAPREDUCE-1806</a>.
+     Major bug reported by pauly and fixed by jira.shegalov (harchive)<br>
+     <b>CombineFileInputFormat does not work with paths not on default FS</b><br>
+     <blockquote>In generating the splits in CombineFileInputFormat, the scheme and authority are stripped out. This creates problems when trying to access the files while generating the splits, as without the har:/, the file won&apos;t be accessed through the HarFileSystem.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2217">MAPREDUCE-2217</a>.
+     Major bug reported by schen and fixed by kkambatl (jobtracker)<br>
+     <b>The expire launching task should cover the UNASSIGNED task</b><br>
+     <blockquote>The ExpireLaunchingTask thread kills tasks that are scheduled but have not responded.<br>Currently, if a task is scheduled on a tasktracker and for some reason the tasktracker cannot move it to RUNNING, the task will just hang in the UNASSIGNED status and the JobTracker will keep waiting for it.<br><br>JobTracker.ExpireLaunchingTask should be able to kill this task.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2264">MAPREDUCE-2264</a>.
+     Major bug reported by akramer and fixed by devaraj.k (jobtracker)<br>
+     <b>Job status exceeds 100% in some cases </b><br>
+     <blockquote>I&apos;m looking now at my jobtracker&apos;s list of running reduce tasks. One of them is 120.05% complete, the other is 107.28% complete.<br><br>I understand that these numbers are estimates, but there is no case in which an estimate of 100% for a non-complete task is better than an estimate of 99.99%, nor is there any case in which an estimate greater than 100% is valid.<br><br>I suggest that whatever logic is computing these set 99.99% as a hard maximum.</blockquote></li>
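+
+<p>A minimal sketch of the hard cap the reporter suggests: an incomplete task&apos;s progress estimate is clamped below 100%, while a finished task may still report exactly 100%.</p>
+<pre>
+public class ProgressClampSketch {
+  static float clamp(float estimate, boolean complete) {
+    return complete ? 1.0f : Math.min(estimate, 0.9999f);
+  }
+
+  public static void main(String[] args) {
+    System.out.println(clamp(1.2005f, false)); // 0.9999 rather than 120.05%
+    System.out.println(clamp(1.0f, true));     // 1.0 is fine once complete
+  }
+}
+</pre>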
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2289">MAPREDUCE-2289</a>.
+     Major bug reported by tlipcon and fixed by ahmed.radwan (job submission)<br>
+     <b>Permissions race can make getStagingDir fail on local filesystem</b><br>
+     <blockquote>I&apos;ve observed the following race condition in TestFairSchedulerSystem which uses a MiniMRCluster on top of RawLocalFileSystem:<br>- two threads call getStagingDir at the same time<br>- Thread A checks fs.exists(stagingArea) and sees false<br>-- Calls mkdirs(stagingArea, JOB_DIR_PERMISSIONS)<br>--- mkdirs calls the Java mkdir API which makes the file with umask-based permissions<br>- Thread B runs, checks fs.exists(stagingArea) and sees true<br>-- checks permissions, sees the default permissions, and throws IOE...</blockquote></li>
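+
+<p>A hedged sketch (not the committed patch) of a race-tolerant pattern for this situation: create the directory unconditionally, then verify and repair its permissions, rather than guarding mkdirs() with a separate exists() check that another thread can invalidate.</p>
+<pre>
+import java.io.IOException;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.permission.FsPermission;
+
+public class StagingDirSketch {
+  static Path ensureStagingDir(FileSystem fs, Path stagingArea,
+      FsPermission perm) throws IOException {
+    fs.mkdirs(stagingArea, perm); // idempotent; no separate exists() check
+    if (!fs.getFileStatus(stagingArea).getPermission().equals(perm)) {
+      fs.setPermission(stagingArea, perm); // repair umask-based permissions
+    }
+    return stagingArea;
+  }
+}
+</pre>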
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2770">MAPREDUCE-2770</a>.
+     Trivial improvement reported by eli and fixed by sandyr (documentation)<br>
+     <b>Improve hadoop.job.history.location doc in mapred-default.xml</b><br>
+     <blockquote>The documentation for hadoop.job.history.location in mapred-default.xml should indicate that this parameter can be a URI on any file system that Hadoop supports (e.g. hdfs and file).</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-2931">MAPREDUCE-2931</a>.
+     Major improvement reported by forest520 and fixed by sandyr <br>
+     <b>CLONE - LocalJobRunner should support parallel mapper execution</b><br>
+     <blockquote>The LocalJobRunner currently supports only a single execution thread. Given the prevalence of multi-core CPUs, it makes sense to allow users to run multiple tasks in parallel for improved performance on small (local-only) jobs.<br><br>It is necessary to backport MAPREDUCE-1367 into the Hadoop 0.20.X line. Also, MAPREDUCE-434 should be submitted together.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3727">MAPREDUCE-3727</a>.
+     Critical bug reported by tucu00 and fixed by tucu00 (security)<br>
+     <b>jobtoken location property in jobconf refers to wrong jobtoken file</b><br>
+     <blockquote>Oozie launcher job (for MR/Pig/Hive/Sqoop action) reads the location of the jobtoken file from the *HADOOP_TOKEN_FILE_LOCATION* ENV var and seeds it as the *mapreduce.job.credentials.binary* property in the jobconf that will be used to launch the real (MR/Pig/Hive/Sqoop) job.<br><br>The MR/Pig/Hive/Sqoop submission code (via Hadoop job submission) uses correctly the injected *mapreduce.job.credentials.binary* property to load the credentials and submit their MR jobs.<br><br>The problem is that the *mapre...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-3993">MAPREDUCE-3993</a>.
+     Major bug reported by tlipcon and fixed by kkambatl (mrv1, mrv2)<br>
+     <b>Graceful handling of codec errors during decompression</b><br>
+     <blockquote>When using a compression codec for intermediate compression, some cases of corrupt data can cause the codec to throw exceptions other than IOException (eg java.lang.InternalError). This will currently cause the whole reduce task to fail, instead of simply treating it like another case of a failed fetch.</blockquote></li>
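+
+<p>A hedged sketch (illustrative helper, not the actual shuffle code) of what graceful handling means here: a Throwable such as java.lang.InternalError thrown by a codec is converted into an IOException, so the fetch is retried like any other failed fetch instead of killing the whole reduce task.</p>
+<pre>
+import java.io.IOException;
+import java.io.InputStream;
+
+public class CodecGuardSketch {
+  static byte[] readMapOutput(InputStream decompressed, int len) throws IOException {
+    try {
+      byte[] buf = new byte[len];
+      int off = 0;
+      while (off &lt; len) {
+        int n = decompressed.read(buf, off, len - off);
+        if (n &lt; 0) throw new IOException(&quot;premature EOF&quot;);
+        off += n;
+      }
+      return buf;
+    } catch (IOException ioe) {
+      throw ioe;            // already handled as a failed fetch
+    } catch (Throwable t) { // e.g. InternalError from corrupt input
+      throw new IOException(&quot;codec error, treating as failed fetch&quot;, t);
+    }
+  }
+}
+</pre>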
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4036">MAPREDUCE-4036</a>.
+     Major bug reported by tucu00 and fixed by tucu00 (test)<br>
+     <b>Streaming TestUlimit fails on CentOS 6</b><br>
+     <blockquote>CentOS 6 seems to have higher memory requirements than other distros, and together with the new MALLOC library this makes TestUlimit fail with exit status 134.<br></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4195">MAPREDUCE-4195</a>.
+     Critical bug reported by jira.shegalov and fixed by  (jobtracker)<br>
+     <b>With invalid queueName request param, jobqueue_details.jsp shows NPE</b><br>
+     <blockquote>When you access /jobqueue_details.jsp manually, instead of via a link, queueName is set to null internally, and that null is used for the lookup into the scheduling info maps as well.<br><br>As a result, if using FairScheduler, a Pool with String name = null gets created and this brings the scheduler down. I have not tested what happens with the CapacityScheduler, but ideally if no queueName is set in that jsp, it should fall back to &apos;default&apos;. Otherwise, this brings down the JobTracker completely.<br><br>FairSch...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4278">MAPREDUCE-4278</a>.
+     Major bug reported by araceli and fixed by sandyr <br>
+     <b>cannot run two local jobs in parallel from the same gateway.</b><br>
+     <blockquote>I cannot run two local-mode jobs from Pig in parallel from the same gateway, which is a typical use case. If I re-run the tests sequentially, then the tests pass. This seems to be a problem in Hadoop.<br><br>Additionally, the Pig harness expects to be able to run Pig-version-undertest against Pig-version-stable from the same gateway.<br><br>To replicate the error:<br><br>I have two clusters running from the same gateway.<br>If I run the Pig regression suite nightly.conf in local mode in parallel - once on each...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4315">MAPREDUCE-4315</a>.
+     Major bug reported by alo.alt and fixed by sandyr (jobhistoryserver)<br>
+     <b>jobhistory.jsp throws 500 when a .txt file is found in /done</b><br>
+     <blockquote>If a .txt file is located in /done, the parser throws a 500 error.<br>Trace:<br>java.lang.ArrayIndexOutOfBoundsException: 1<br>        at org.apache.hadoop.mapred.jobhistory_jsp$2.compare(jobhistory_jsp.java:295)<br>        at org.apache.hadoop.mapred.jobhistory_jsp$2.compare(jobhistory_jsp.java:279)<br>        at java.util.Arrays.mergeSort(Arrays.java:1270)<br>        at java.util.Arrays.mergeSort(Arrays.java:1282)<br>        at java.util.Arrays.mergeSort(Arrays.java:1282)<br>        at java.util.Arrays.mergeSort(Arra...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4317">MAPREDUCE-4317</a>.
+     Major bug reported by qwertymaniac and fixed by kkambatl (mrv1)<br>
+     <b>Job view ACL checks are too permissive</b><br>
+     <blockquote>The class that does view-based checks, JSPUtil.JobWithViewAccessCheck, has the following internal member:<br><br>{code}private boolean isViewAllowed = true;{code}<br><br>Note that it defaults to true.<br><br>Now, the method that sets the proper view-allowed rights has:<br><br>{code}<br>if (user != null &amp;&amp; job != null &amp;&amp; jt.areACLsEnabled()) {<br>      final UserGroupInformation ugi =<br>        UserGroupInformation.createRemoteUser(user);<br>      try {<br>        ugi.doAs(new PrivilegedExceptionAction&lt;Void&gt;() {<br>          public Void run() t...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4355">MAPREDUCE-4355</a>.
+     Major new feature reported by kkambatl and fixed by kkambatl (mrv1, mrv2)<br>
+     <b>Add RunningJob.getJobStatus()</b><br>
+     <blockquote>Use case: read the start/end time of a particular job.<br><br>Currently, one has to fetch JobClient.getAllJobStatuses() and iterate through the results. JobClient.getJob(JobID) returns RunningJob, which doesn&apos;t hold the job&apos;s start time.<br><br>Adding RunningJob.getJobStatus() solves the issue.</blockquote></li>
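+
+<p>A short usage sketch of the added accessor (the branch-1 mapred API is assumed; RunningJob.getJobStatus() is the method this jira introduces):</p>
+<pre>
+import java.io.IOException;
+import org.apache.hadoop.mapred.JobClient;
+import org.apache.hadoop.mapred.JobID;
+import org.apache.hadoop.mapred.JobStatus;
+import org.apache.hadoop.mapred.RunningJob;
+
+public class JobStartTimeSketch {
+  static long startTimeOf(JobClient client, JobID id) throws IOException {
+    RunningJob job = client.getJob(id);
+    JobStatus status = job.getJobStatus(); // new in MAPREDUCE-4355
+    return status.getStartTime();
+  }
+}
+</pre>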
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4359">MAPREDUCE-4359</a>.
+     Major bug reported by tlipcon and fixed by tomwhite <br>
+     <b>Potential deadlock in Counters</b><br>
+     <blockquote>jcarder identified this deadlock in branch-1 (though it may also be present in trunk):<br>- Counters.size() is synchronized and locks Counters before Group<br>- Counters.Group.getCounterForName() is synchronized and calls through to Counters.size()<br><br>This creates a potential cycle which could cause a deadlock (though probably quite rare in practice)</blockquote></li>
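+
+<p>A minimal model (plain monitors, not the Counters code) of the cycle jcarder flagged: two threads acquiring the same pair of locks in opposite orders is exactly the shape that can deadlock.</p>
+<pre>
+public class LockOrderSketch {
+  static final Object COUNTERS = new Object();
+  static final Object GROUP = new Object();
+
+  public static void main(String[] args) {
+    new Thread(new Runnable() { // models Counters.size(): Counters, then Group
+      public void run() {
+        synchronized (COUNTERS) { synchronized (GROUP) { } }
+      }
+    }).start();
+    new Thread(new Runnable() { // models Group.getCounterForName(): Group, then Counters
+      public void run() {
+        synchronized (GROUP) { synchronized (COUNTERS) { } }
+      }
+    }).start();
+  }
+}
+</pre>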
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4385">MAPREDUCE-4385</a>.
+     Major bug reported by kkambatl and fixed by kkambatl <br>
+     <b>FairScheduler.maxTasksToAssign() should check for fairscheduler.assignmultiple.maps &lt; TaskTracker.availableSlots</b><br>
+     <blockquote>FairScheduler.maxTasksToAssign() can potentially return a value greater than the available slots. Currently, we rely on canAssignMaps()/canAssignReduces() to reject such requests.<br><br>These additional calls can be avoided by checking against the available slots in maxTasksToAssign().</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4408">MAPREDUCE-4408</a>.
+     Major improvement reported by tucu00 and fixed by rkanter (mrv1, mrv2)<br>
+     <b>allow jobs to set a JAR that is in the distributed cache</b><br>
+     <blockquote>Setting a job JAR with JobConf.setJar(String) and Job.setJar(String) assumes that the JAR is local to the client submitting the job, thus it triggers copying the JAR to HDFS and injecting it into the distributed cache.<br><br>AFAIK, this is the only way to use uber JARs (JARs with JARs inside) in MR jobs.<br><br>For jobs launched by Oozie, all JARs are already in HDFS. In order for Oozie to support uber JARs (OOZIE-654), there should be a way to specify as the job JAR a JAR that is already in HDFS.<br><br></blockquote></li>
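+
+<p>A hedged usage sketch: assuming the change lets setJar() accept a fully qualified HDFS path (the path below is illustrative), a client such as Oozie can point the job at a JAR that already lives in HDFS instead of a client-local file.</p>
+<pre>
+import org.apache.hadoop.mapred.JobConf;
+
+public class HdfsJarSketch {
+  static void configure(JobConf conf) {
+    // Previously setJar() assumed a client-local file and re-uploaded it;
+    // per this change, a JAR already in HDFS can be referenced directly.
+    conf.setJar(&quot;hdfs://namenode:8020/apps/oozie/my-uber-job.jar&quot;);
+  }
+}
+</pre>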
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4434">MAPREDUCE-4434</a>.
+     Major bug reported by kkambatl and fixed by kkambatl (mrv1)<br>
+     <b>Backport MR-2779 (JobSplitWriter.java can&apos;t handle large job.split file) to branch-1</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4464">MAPREDUCE-4464</a>.
+     Minor improvement reported by heathcd and fixed by heathcd (task)<br>
+     <b>Reduce tasks failing with NullPointerException in ConcurrentHashMap.get()</b><br>
+     <blockquote>If DNS does not resolve hostnames properly, reduce tasks can fail with a very misleading exception.<br><br>as per my peer Ahmed&apos;s diagnosis:<br><br>In ReduceTask, it seems that event.getTaskTrackerHttp() returns a malformed URI, and so host from:<br>{code}<br>String host = u.getHost();<br>{code}<br>is evaluated to null and the NullPointerException is thrown afterwards in the ConcurrentHashMap.<br><br>I have written a patch to check for a null hostname condition when getHost is called in the getMapCompletionEvents method a...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4499">MAPREDUCE-4499</a>.
+     Major improvement reported by nroberts and fixed by knoguchi (mrv1, performance)<br>
+     <b>Looking for speculative tasks is very expensive in 1.x</b><br>
+     <blockquote>When there are lots of jobs and tasks active in a cluster, the process of figuring out whether or not to launch a speculative task becomes very expensive. <br><br>I could be missing something but it certainly looks like on every heartbeat we could be scanning 10&apos;s of thousands of tasks looking for something which might need to be speculatively executed. In most cases, nothing gets chosen so we completely trashed our data cache and didn&apos;t even find a task to schedule, just to do it all over again on...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4556">MAPREDUCE-4556</a>.
+     Minor improvement reported by kkambatl and fixed by kkambatl (contrib/fair-share)<br>
+     <b>FairScheduler: PoolSchedulable#updateDemand() has potential redundant computation</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4576">MAPREDUCE-4576</a>.
+     Major bug reported by revans2 and fixed by revans2 <br>
+     <b>Large dist cache can block tasktracker heartbeat</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4595">MAPREDUCE-4595</a>.
+     Critical bug reported by kkambatl and fixed by kkambatl <br>
+     <b>TestLostTracker failing - possibly due to a race in JobHistory.JobHistoryFilesManager#run()</b><br>
+     <blockquote>The source of the occasional failure of TestLostTracker seems to be the following:<br><br>On job completion, JobHistoryFilesManager#run() spawns another thread to move history files to the done folder. TestLostTracker waits for job completion before checking the file format of the history file. However, the history-file move might still be in progress or might not have started in the first place.<br><br>The attachment (force-TestLostTracker-failure.patch) helps reproduce the error locally, by increasing the cha...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4629">MAPREDUCE-4629</a>.
+     Major bug reported by kkambatl and fixed by kkambatl <br>
+     <b>Remove JobHistory.DEBUG_MODE</b><br>
+     <blockquote>Remove JobHistory.DEBUG_MODE for the following reasons:<br><br>1. No one seems to be using it - the config parameter corresponding to enabling it does not even exist in mapred-default.xml<br>2. The logging being done in DEBUG_MODE needs to move to LOG.debug() and LOG.trace()<br>3. Buggy handling of helper methods in DEBUG_MODE; e.g. directoryTime() and timestampDirectoryComponent().</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4643">MAPREDUCE-4643</a>.
+     Major bug reported by kkambatl and fixed by sandyr (jobhistoryserver)<br>
+     <b>Make job-history cleanup-period configurable</b><br>
+     <blockquote>Job history cleanup should be made configurable. Currently, it is set to 1 month by default. The DEBUG_MODE (to be removed, see MAPREDUCE-4629) sets it to 20 minutes, but it should be configurable.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4652">MAPREDUCE-4652</a>.
+     Major bug reported by ahmed.radwan and fixed by ahmed.radwan (examples, mrv1)<br>
+     <b>ValueAggregatorJob sets the wrong job jar</b><br>
+     <blockquote>Using branch-1 tarball, if the user tries to submit an example aggregatewordcount, the job fails with the following error:<br><br>{code}<br>ahmed@ubuntu:~/demo/deploy/hadoop-1.2.0-SNAPSHOT$ bin/hadoop jar hadoop-examples-1.2.0-SNAPSHOT.jar aggregatewordcount input examples-output/aggregatewordcount 2 textinputformat<br>12/09/12 17:09:46 INFO mapred.JobClient: originalJarPath: /home/ahmed/demo/deploy/hadoop-1.2.0-SNAPSHOT/hadoop-core-1.2.0-SNAPSHOT.jar<br>12/09/12 17:09:48 INFO mapred.JobClient: submitJarFil...</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4660">MAPREDUCE-4660</a>.
+     Major new feature reported by djp and fixed by djp (jobtracker, mrv1, scheduler)<br>
+     <b>Update task placement policy for NetworkTopology with &apos;NodeGroup&apos; layer</b><br>
+     <blockquote></blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4662">MAPREDUCE-4662</a>.
+     Major bug reported by tgraves and fixed by kihwal (jobhistoryserver)<br>
+     <b>JobHistoryFilesManager thread pool never expands</b><br>
+     <blockquote>The job history file manager creates a threadpool with core size 1 thread, max pool size 3. It never goes beyond 1 thread, though, because it&apos;s using a LinkedBlockingQueue, which doesn&apos;t have a max size. <br><br>    void start() {<br>      executor = new ThreadPoolExecutor(1, 3, 1,<br>          TimeUnit.HOURS, new LinkedBlockingQueue&lt;Runnable&gt;());<br>    }<br><br>According to the ThreadPoolExecutor javadoc page, it only increases the number of threads when the queue is full. Since the queue we are using has no max ...</blockquote></li>
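+
+<p>A small runnable sketch of the ThreadPoolExecutor behavior described above: the pool only grows past its core size when the work queue rejects an offer, so one possible fix is a bounded queue, as below, which lets the pool actually reach its maximum of 3 threads under load.</p>
+<pre>
+import java.util.concurrent.ArrayBlockingQueue;
+import java.util.concurrent.ThreadPoolExecutor;
+import java.util.concurrent.TimeUnit;
+
+public class HistoryPoolSketch {
+  public static void main(String[] args) {
+    // With a bounded queue, extra threads (up to the max of 3) are created
+    // once 100 tasks are waiting, instead of queueing forever on 1 thread.
+    ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 3, 1,
+        TimeUnit.HOURS, new ArrayBlockingQueue&lt;Runnable&gt;(100));
+    executor.execute(new Runnable() {
+      public void run() { System.out.println(&quot;move history file&quot;); }
+    });
+    executor.shutdown();
+  }
+}
+</pre>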
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4703">MAPREDUCE-4703</a>.
+     Major improvement reported by ahmed.radwan and fixed by ahmed.radwan (mrv1, mrv2, test)<br>
+     <b>Add the ability to start the MiniMRClientCluster using the configurations used before it was stopped.</b><br>
+     <blockquote>The objective here is to enable starting the cluster back up, after it has been stopped, using the same configurations/port numbers used before stopping.</blockquote></li>
+
+<li> <a href="https://issues.apache.org/jira/browse/MAPREDUCE-4706">MAPREDUCE-4706</a>.
+     Critical bug reported by kkambatl and fixed by kkambatl (contrib/fair-share)<br>

[... 1222 lines stripped ...]

