accumulo-commits mailing list archives

From: build...@apache.org
Subject: svn commit: r951093 - in /websites/staging/accumulo/trunk/content: ./ release_notes/1.7.0.html
Date: Tue, 12 May 2015 22:29:46 GMT
Author: buildbot
Date: Tue May 12 22:29:46 2015
New Revision: 951093

Log:
Staging update by buildbot for accumulo

Modified:
    websites/staging/accumulo/trunk/content/   (props changed)
    websites/staging/accumulo/trunk/content/release_notes/1.7.0.html

Propchange: websites/staging/accumulo/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Tue May 12 22:29:46 2015
@@ -1 +1 @@
-1679112
+1679115

Modified: websites/staging/accumulo/trunk/content/release_notes/1.7.0.html
==============================================================================
--- websites/staging/accumulo/trunk/content/release_notes/1.7.0.html (original)
+++ websites/staging/accumulo/trunk/content/release_notes/1.7.0.html Tue May 12 22:29:46 2015
@@ -206,111 +206,39 @@ Latest 1.5 release: <strong>1.5.2</stron
 
     <h1 class="title">Apache Accumulo 1.6.2 Release Notes</h1>
 
-    <p>Apache Accumulo 1.6.2 is a maintenance release on the 1.6 version branch.
-This release contains changes from over 150 issues, comprising bug fixes, performance
-improvements and better test cases. Apache Accumulo 1.6.2 is the first release since the
-community has adopted <a href="http://semver.org">Semantic Versioning</a>, which means that all changes to the <a href="https://github.com/apache/accumulo#api">public API</a>
-are guaranteed to be made without adding to or removing from the public API. This ensures
-that client code that runs against 1.6.1 is guaranteed to run against 1.6.2 and vice versa.</p>
-<p>Users of 1.6.0 or 1.6.1 are strongly encouraged to update as soon as possible to benefit from
-the improvements with very little concern about changes in underlying functionality. Users of 1.4 or 1.5
-who are seeking to upgrade to 1.6 should consider 1.6.2 the starting point over 1.6.0 or 1.6.1. For
-information about improvements since Accumulo 1.5, see the <a href="http://accumulo.apache.org/release_notes/1.6.0.html">1.6.0</a> and <a href="http://accumulo.apache.org/release_notes/1.6.1.html">1.6.1</a> release notes.</p>
+    <p>Apache Accumulo 1.7.0 is a release that needs to be described.</p>
 <h2 id="notable-bug-fixes">Notable Bug Fixes</h2>
-<h3 id="only-first-zookeeper-server-is-used">Only first ZooKeeper server is used</h3>
-<p>In constructing a <code>ZooKeeperInstance</code>, the user provides a comma-separated list of addresses for ZooKeeper
-servers. 1.6.0 and 1.6.1 incorrectly truncated the provided list of ZooKeeper servers to just the first entry. This
-would cause clients to fail when the first ZooKeeper server in the list became unavailable and not properly
-load balance requests to all available servers in the quorum. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3218">ACCUMULO-3218</a> fixes the parsing of
-the ZooKeeper quorum list to use all servers, not just the first.</p>
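For reference, a minimal sketch of the client-side call this fix affects; the instance name, host names, and credentials below are hypothetical:

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.Instance;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;

    public class QuorumExample {
      public static void main(String[] args) throws Exception {
        // The full comma-separated quorum should be usable by the client,
        // not just the first host (the behavior fixed by ACCUMULO-3218).
        Instance instance = new ZooKeeperInstance("myInstance",
            "zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181");
        Connector connector = instance.getConnector("user", new PasswordToken("secret"));
        System.out.println(connector.getInstance().getInstanceName());
      }
    }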
-<h3 id="incorrectly-handled-zookeeper-exception">Incorrectly handled ZooKeeper exception</h3>
-<p>Use of ZooKeeper's API requires very careful exception handling, as some exceptions thrown by the ZooKeeper
-API are considered "normal" and must be retried by the client. In 1.6.1, Accumulo improved its handling of
-these "expected failures" to better insulate calls to ZooKeeper; however, the wrapper which sets data on a ZNode
-did not handle all cases correctly. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3448">ACCUMULO-3448</a> fixed the implementation of <code>ZooUtil.putData(...)</code> to handle
-the expected error conditions correctly.</p>
-<h3 id="scanid-is-not-set-in-activescan"><code>scanId</code> is not set in <code>ActiveScan</code></h3>
-<p>The <code>ActiveScan</code> class is the object returned by <code>InstanceOperations.listScans</code>. This class represents a
-"scan" running on Accumulo servers, either from a <code>Scanner</code> or <code>BatchScanner</code>. The <code>ActiveScan</code> class
-is meant to contain all of the information describing the scan, which can be useful to administrators
-or DevOps-types to observe and act on scans which are running for excessive periods of time. <a href="https://issues.apache.org/jira/browse/ACCUMULO-2641">ACCUMULO-2641</a>
-fixes <code>ActiveScan</code> to ensure that the internal identifier <code>scanId</code> is properly set.</p>
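A rough sketch of inspecting active scans from client code; it assumes the listing call is exposed as getActiveScans(tserver) on InstanceOperations, and connector setup plus the tablet server address ("host:port") are left to the caller:

    import java.util.List;
    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.admin.ActiveScan;

    public class ListScansExample {
      // Prints the scans currently running on one tablet server.
      static void printActiveScans(Connector connector, String tserver) throws Exception {
        List<ActiveScan> scans = connector.instanceOperations().getActiveScans(tserver);
        for (ActiveScan scan : scans) {
          // With ACCUMULO-2641, server-side fields (such as the scan id) are expected
          // to be populated on the returned objects.
          System.out.println(scan.getTable() + " " + scan.getState() + " " + scan.getAge() + "ms");
        }
      }
    }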
-<h3 id="table-state-change-doesnt-wait-when-requested">Table state change doesn't wait when requested</h3>
-<p>An Accumulo table has two states: <code>ONLINE</code> and <code>OFFLINE</code>. An offline table in Accumulo consumes no TabletServer
-resources, only HDFS resources, which makes it useful for storing infrequently used data. The Accumulo methods provided
-to transition a table from <code>ONLINE</code> to <code>OFFLINE</code> and vice versa did not respect the <code>wait=true</code> parameter
-when set. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3301">ACCUMULO-3301</a> fixes the underlying implementation to ensure that when <code>wait=true</code> is provided,
-the method will not return until the table's state transition has fully completed.</p>
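A minimal sketch of the blocking form of these calls (the table name and connector setup are assumed):

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.admin.TableOperations;

    public class TableStateExample {
      // Takes a table offline and brings it back online, blocking on each
      // transition (the wait=true behavior fixed by ACCUMULO-3301).
      static void cycleTable(Connector connector, String tableName) throws Exception {
        TableOperations ops = connector.tableOperations();
        ops.offline(tableName, true); // returns only once the table is fully OFFLINE
        ops.online(tableName, true);  // returns only once the table is fully ONLINE
      }
    }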
-<h3 id="keyvalue-doesnt-implement-hashcode-or-equals">KeyValue doesn't implement <code>hashCode()</code> or <code>equals()</code></h3>
-<p>The <code>KeyValue</code> class is an implementation of <code>Entry&lt;Key,Value&gt;</code> which is returned by classes like
-<code>Scanner</code> and <code>BatchScanner</code>. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3217">ACCUMULO-3217</a> adds these methods, which ensure that the returned <code>Entry&lt;Key,Value&gt;</code>
-operates as expected with <code>HashMaps</code> and <code>HashSets</code>.</p>
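For illustration, a small sketch of the kind of client code that depends on those methods; the scanner is assumed to already be configured:

    import java.util.HashSet;
    import java.util.Map.Entry;
    import java.util.Set;
    import org.apache.accumulo.core.client.Scanner;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;

    public class EntrySetExample {
      // Collects scan results into a HashSet; de-duplication only works correctly
      // when the returned entries implement equals() and hashCode() (ACCUMULO-3217).
      static Set<Entry<Key,Value>> collect(Scanner scanner) {
        Set<Entry<Key,Value>> results = new HashSet<Entry<Key,Value>>();
        for (Entry<Key,Value> entry : scanner) {
          results.add(entry);
        }
        return results;
      }
    }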
-<h3 id="potential-deadlock-in-tabletserver">Potential deadlock in TabletServer</h3>
-<p>Internal to the TabletServer, there are methods to construct instances of configuration objects for tables
-and namespaces. The locking on these methods was not correctly implemented, which created the possibility for
-concurrent requests to a TabletServer to deadlock. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3372">ACCUMULO-3372</a> found this problem while performing
-bulk imports of RFiles into Accumulo. Additional synchronization was added server-side to prevent this deadlock
-from happening in the future.</p>
-<h3 id="the-datelexicoder-incorrectly-serialized-dates-prior-1970">The <code>DateLexicoder</code> incorrectly serialized <code>Dates</code> prior to 1970</h3>
-<p>The <code>DateLexicoder</code>, part of the <code>Lexicoders</code> classes which implement methods to convert common primitive types
-into lexicographically sortable Strings/bytes, incorrectly converted <code>Date</code> objects for dates prior to 1970.
-<a href="https://issues.apache.org/jira/browse/ACCUMULO-3385">ACCUMULO-3385</a> fixed the <code>DateLexicoder</code> to correctly (de)serialize <code>Date</code> objects. For users with
-data stored in Accumulo using the broken implementation, the following can be performed to read the old data.</p>
-<div class="codehilite"><pre>  <span class="n">Lexicoder</span> <span class="n">lex</span> <span class="p">=</span> <span class="n">new</span> <span class="n">ULongLexicoder</span><span class="p">();</span>
-  <span class="k">for</span> <span class="p">(</span><span class="n">Entry</span><span class="o">&lt;</span><span class="n">Key</span><span class="p">,</span> <span class="n">Value</span><span class="o">&gt;</span> <span class="n">e</span> <span class="p">:</span> <span class="n">scanner</span><span class="p">)</span> <span class="p">{</span>
-    <span class="n">Date</span> <span class="n">d</span> <span class="p">=</span> <span class="n">new</span> <span class="n">Date</span><span class="p">(</span><span class="n">lex</span><span class="p">.</span><span class="n">decode</span><span class="p">(</span><span class="n">TextUtil</span><span class="p">.</span><span class="n">getBytes</span><span class="p">(</span><span class="n">e</span><span class="p">.</span><span class="n">getKey</span><span class="p">().</span><span class="n">getRow</span><span class="p">())));</span>
-    <span class="o">//</span> <span class="p">...</span>
-  <span class="p">}</span>
-</pre></div>
-
-
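Rendered as plain Java, the snippet above is roughly the following; imports are added here and the scanner is assumed to already be configured over the affected table:

    import java.util.Date;
    import java.util.Map.Entry;
    import org.apache.accumulo.core.client.Scanner;
    import org.apache.accumulo.core.client.lexicoder.ULongLexicoder;
    import org.apache.accumulo.core.data.Key;
    import org.apache.accumulo.core.data.Value;
    import org.apache.accumulo.core.util.TextUtil;

    public class ReadOldDatesExample {
      // Reads rows written with the broken DateLexicoder by decoding the raw
      // unsigned-long row encoding directly, as shown in the release notes.
      static void readOldDates(Scanner scanner) {
        ULongLexicoder lex = new ULongLexicoder();
        for (Entry<Key,Value> e : scanner) {
          Date d = new Date(lex.decode(TextUtil.getBytes(e.getKey().getRow())));
          // ... use the recovered Date ...
        }
      }
    }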
-<h3 id="reduce-miniaccumulocluster-failures-due-to-random-port-allocations">Reduce MiniAccumuloCluster failures due to random port allocations</h3>
-<p><code>MiniAccumuloCluster</code> has had issues where it fails to properly start due to the way it attempts to choose
-a random, unbound port on the local machine to start the ZooKeeper and Accumulo processes. Improvements have
-been made, including retry logic, to withstand a few failed port choices. The changes made by <a href="https://issues.apache.org/jira/browse/ACCUMULO-3233">ACCUMULO-3233</a>
-and the related issues should eliminate the sporadic failures that users of <code>MiniAccumuloCluster</code> might have observed.</p>
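A minimal sketch of standing up a MiniAccumuloCluster, where the root password is hypothetical and the ports are chosen by the cluster itself:

    import java.io.File;
    import java.nio.file.Files;
    import org.apache.accumulo.minicluster.MiniAccumuloCluster;
    import org.apache.accumulo.minicluster.MiniAccumuloConfig;

    public class MiniClusterExample {
      public static void main(String[] args) throws Exception {
        // Spins up a throwaway local cluster in a fresh temp directory; the ports
        // for ZooKeeper and the Accumulo processes are allocated automatically.
        File dir = Files.createTempDirectory("mini-accumulo-demo").toFile();
        MiniAccumuloConfig config = new MiniAccumuloConfig(dir, "rootPassword");
        MiniAccumuloCluster cluster = new MiniAccumuloCluster(config);
        cluster.start();
        try {
          System.out.println("Instance: " + cluster.getInstanceName()
              + ", ZooKeepers: " + cluster.getZooKeepers());
        } finally {
          cluster.stop();
        }
      }
    }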
-<h3 id="tracer-doesnt-handle-trace-table-state-transition">Tracer doesn't handle trace table state transition</h3>
-<p>The Tracer is an optional Accumulo server process that serializes Spans, elements of a distributed trace,
-to the trace table for later inspection and correlation with other Spans. By default, the Tracer writes
-to a "trace" table. In earlier versions of Accumulo, if this table was put offline, the Tracer would fail
-to write new Spans to the table when it came back online. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3351">ACCUMULO-3351</a> ensures that the Tracer process
-will resume writing Spans to the trace table when it transitions to online after being offline.</p>
-<h3 id="tablet-not-major-compacting">Tablet not major compacting</h3>
-<p>It was noticed that, on a system performing many bulk imports, there was a tablet with hundreds of files which
-was not major compacting, nor was it scheduled to be major compacted. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3462">ACCUMULO-3462</a> identified a server-side fix
-which will prevent this from happening in the future.</p>
-<h3 id="yarn-job-submission-fails-with-hadoop-260">YARN job submission fails with Hadoop 2.6.0</h3>
-<p>Hadoop 2.6.0 introduced a new component, the TimelineServer, which is a centralized metrics service designed
-for other Hadoop components to leverage. MapReduce jobs submitted via <code>accumulo</code> and <code>tool.sh</code> failed to
-run because the job attempted to contact the TimelineServer and Accumulo was missing a dependency on
-the classpath needed to communicate with it. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3230">ACCUMULO-3230</a> updates the classpath in the example
-configuration files to include the necessary dependencies for the TimelineServer, ensuring that YARN job submission operates as before.</p>
+<h3 id="bug-fix-1">Bug Fix 1</h3>
+<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
+Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
+ dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat
+ non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p>
+<h3 id="bug-fix-2">Bug Fix 2</h3>
+<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
+Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
+ dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat
+ non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p>
 <h2 id="performance-improvements">Performance Improvements</h2>
-<h3 id="user-scans-can-block-root-and-metadata-table-scans">User scans can block root and metadata table scans</h3>
-<p>The TabletServer provides a feature to limit the number of open files as a resource management configuration.
-To perform a scan against a normal table, the metadata and root tables, when not cached, need to be consulted first.
-With a sufficient number of concurrent scans against normal tables adding to the open file count,
-scans against the metadata and root tables could be blocked from running because no more files could be opened.
-This prevents other system operations from happening as expected. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3297">ACCUMULO-3297</a> fixes the internal semaphore
-used to implement this resource management to ensure that root and metadata table scans can proceed.</p>
+<h3 id="performance-improvement-1">Performance Improvement 1</h3>
+<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
+Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
+ dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat
+ non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p>
 <h2 id="other-improvements">Other improvements</h2>
-<h3 id="limit-available-ciphers-for-ssltls">Limit available ciphers for SSL/TLS</h3>
-<p>Since the releases of Apache Accumulo 1.5.2 and 1.6.1, the <a href="http://en.wikipedia.org/wiki/POODLE">POODLE</a> man-in-the-middle attack was disclosed; it exploits a client's
-ability to fall back to the SSLv3.0 protocol. The main mitigation strategy is to prevent the use of old ciphers/protocols
-when using SSL connectors. In Accumulo, both the Apache Thrift RPC servers and the Jetty server for the Accumulo
-monitor have the ability to enable SSL. <a href="https://issues.apache.org/jira/browse/ACCUMULO-3316">ACCUMULO-3316</a> is the parent issue which provides new configuration
-properties in accumulo-site.xml which can limit the accepted ciphers/protocols. By default, insecure or outdated
-protocols have been removed from the accepted set in order to protect users.</p>
+<h3 id="improvement-1">Improvement 1</h3>
+<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
+Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
+ dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat
+ non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p>
 <h2 id="documentation">Documentation</h2>
-<p>Documentation was added to the Administration chapter for moving from a non-HA NameNode setup to an HA NameNode setup.
-New chapters were added for the configuration of SSL and for summaries of Implementation Details (initially describing
-FATE operations). A section was added to the Configuration chapter describing how to arrive at optimal settings
-for configuring an instance with native maps.</p>
+<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua.
+Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure
+ dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat
+ non proident, sunt in culpa qui officia deserunt mollit anim id est laborum</p>
 <h2 id="testing">Testing</h2>
 <p>Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run
-on any number of nodes. <em>Agitation</em> refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
-and, in HDFS High-Availability instances, forcing NameNode failover.
+on any number of nodes. <em>Agitation</em> refers to randomly restarting Accumulo processes and Hadoop DataNode processes,
+and, in HDFS High-Availability instances, forcing NameNode failover.</p>
 <table id="release_notes_testing">
   <tr>
     <th>OS</th>
@@ -329,30 +257,22 @@ and, in HDFS High-Availability instances
     <td>Unit and Integration Tests</td>
   </tr>
   <tr>
-    <td>Mac OSX</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
-    <td>No</td>
-    <td>Unit and Integration Tests</td>
-  </tr>
-  <tr>
-    <td>Fedora 21</td>
-    <td>N/A</td>
-    <td>1</td>
-    <td>N/A</td>
+    <td>Gentoo</td>
+    <td>2.6.0</td>
+    <td>1 (2 TServers)</td>
+    <td>3.4.5</td>
     <td>No</td>
-    <td>Unit and Integration Tests</td>
+    <td>24hr CI w/ agitation and verification, 24hr RW w/o agitation.</td>
   </tr>
   <tr>
-    <td>CentOS 6</td>
-    <td>2.6</td>
-    <td>20</td>
-    <td>3.4.5</td>
+    <td>CentOS 6.6</td>
+    <td>2.6.0</td>
+    <td>3</td>
+    <td>3.4.6</td>
     <td>No</td>
-    <td>ContinuousIngest w/ verification w/ and w/o agitation (31B and 21B entries, respectively)</td>
+    <td>24hr RW w/ agitation, 72hr CI w/o agitation</td>
   </tr>
-</table></p>
+</table>
   </div>
 
   <div id="footer">


