accumulo-commits mailing list archives

Subject svn commit: r1627870 - /accumulo/site/trunk/content/release_notes/1.6.1.mdtext
Date Fri, 26 Sep 2014 20:07:22 GMT
Author: elserj
Date: Fri Sep 26 20:07:22 2014
New Revision: 1627870

Initial version of 1.6.1 release notes


Modified: accumulo/site/trunk/content/release_notes/1.6.1.mdtext
--- accumulo/site/trunk/content/release_notes/1.6.1.mdtext (original)
+++ accumulo/site/trunk/content/release_notes/1.6.1.mdtext Fri Sep 26 20:07:22 2014
@@ -16,24 +16,18 @@ Notice:    Licensed to the Apache Softwa
            specific language governing permissions and limitations
            under the License.
-Apache Accumulo 1.5.2 is a maintenance release on the 1.5 version branch.
-This release contains changes from over 100 issues, comprised of bug fixes
-(client side and server side), new test cases, and updated Hadoop support
-contributed by over 30 different contributors and committers.
-As this is a maintenance release, Apache Accumulo 1.5.2 has no client API 
-incompatibilities over Apache Accumulo 1.5.0 and 1.5.1 and requires no manual upgrade 
-process. Users of 1.5.0 or 1.5.1 are strongly encouraged to update as soon as possible 
-to benefit from the improvements.
-Users who are new to Accumulo are encouraged to use a 1.6 release as opposed
-to the 1.5 line as development has already shifted towards the 1.6 line. For those
-who cannot or do not want to upgrade to 1.6, 1.5.2 is still an excellent choice
-over earlier versions in the 1.5 line.
+Apache Accumulo 1.6.1 is a maintenance release on the 1.6 version branch.
+This release contains changes from over 175 issues, comprised of bug fixes, performance
+improvements, and better test cases. As this is a maintenance release, Apache Accumulo
+1.6.1 has no client API incompatibilities over Apache Accumulo 1.6.0. Users of 1.6.0
+are strongly encouraged to update as soon as possible to benefit from the improvements.
+New users are encouraged to use this release over 1.6.0 or any older release.
 ## Performance Improvements
-Apache Accumulo 1.5.2 includes a number of performance-related fixes over previous versions.
+Apache Accumulo 1.6.1 includes a number of performance-related fixes over previous versions.
+Many of these improvements were also included in the recently released Apache Accumulo 1.5.2.
 ### Write-Ahead Log sync performance
@@ -78,26 +72,20 @@ of *hsync* may result in about a 30% inc
 For users upgrading from Hadoop-1 or Hadoop-0.20 releases, *hflush* is the equivalent of how
 sync was implemented in these older versions of Hadoop and should give comparable performance.
-### Server-side mutation queue size
+## Other improvements
-When users desire writes to be as durable as possible, using *hsync*, the ingest performance
-of the system can be improved by increasing the tserver.mutation.queue.max property. The
-of this change is that it will cause TabletServers to use additional memory per writer. In
-the value of this parameter defaulted to a conservative 256K, which resulted in sub-par ingest
+### Use of Hadoop CredentialProviders
-1.5.2 and [ACCUMULO-3018][13] increases this buffer to 1M which has a noticeable positive impact on
-ingest performance with a minimal increase in TabletServer memory usage.
+Apache Hadoop 2.6.0 introduced a new API aimed at providing ways to separate sensitive values
+from being stored in plaintext as a part of [HADOOP-10607][28]. Accumulo has had two sensitive
+configuration properties stored in *accumulo-site.xml* for every standard installation: instance.secret
+and If either of these properties are compromised, it could
lead to
+unwanted access of Accumulo. [ACCUMULO-2464][29] modifies Accumulo so that it can stored
any sensitive
+configuration properties in a Hadoop CredentialProvider. With sensitive values removed from
+it can be shared without concern and security can be focused solely on the CredentialProvider.
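As a sketch of how this can be wired up (the keystore path below is only an example, and the exact property name should be checked against your Accumulo version's documentation), a sensitive value can be moved out of *accumulo-site.xml* roughly as follows:

```xml
<!-- Sketch only; the keystore path is an example. First create the keystore
     with Hadoop's credential CLI, e.g.:
       hadoop credential create instance.secret \
           -provider jceks://file/etc/accumulo/conf/accumulo.jceks
     then point Accumulo at the keystore in accumulo-site.xml and remove the
     plaintext value of the sensitive property: -->
<property>
  <name>general.security.credential.provider.paths</name>
  <value>jceks://file/etc/accumulo/conf/accumulo.jceks</value>
</property>
```

With this in place, security effort can be focused on protecting the keystore file rather than the site configuration.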
 ## Notable Bug Fixes
-### Fixes MapReduce package name change
-1.5.1 inadvertently included a change to RangeInputSplit which created an incompatibility
-with 1.5.0. The original class has been restored to ensure that users accessing
-the RangeInputSplit class do not have to alter their client code. See [ACCUMULO-2586][1] for
-more information.
 ### Add configurable maximum frame size to Thrift proxy
 The Thrift proxy server was subject to memory exhaustion, typically
@@ -107,7 +95,7 @@ parameter, like [ACCUMULO-2360][3], to p
 ### Offline tables can prevent tablet balancing
-Before 1.5.2, when a table with many tablets was created, ingested into, and
+Before 1.6.1, when a table with many tablets was created, ingested into, and
 taken offline, tablet balancing may have stopped. This would happen if there
 were tablet migrations for the table, because the migrations couldn't occur.
 The balancer will not run when there are outstanding migrations; therefore, a
@@ -134,22 +122,42 @@ loop due to a constraint violation. This
 but will also hang compactions. [ACCUMULO-3096][14] fixes the issue so that the
 constraint no longer hangs the entire system.
+### Unable to upgrade cleanly from 1.5
+When upgrading a table from 1.5.1 to 1.6.0, a user experienced an error where the table
+never came online. [ACCUMULO-2974][27] fixes an issue from the change of file references
+stored as absolute paths instead of relative paths in the Accumulo metadata table.
+### Guava dependency changed
+[ACCUMULO-3100][30] lowered the dependency on Guava from 15.0.1 to 14.0. This dependency
+now matches what Hadoop depends on for the 2.x.y version line. Depending on a newer
+version of Guava introduces many issues stemming from deprecated classes that Hadoop
+still uses but newer Guava releases have removed. While installations of Accumulo will likely work as expected with
+newer versions of Guava on the classpath (because the Hadoop processes will have their own
+classpath), use of MiniDfsClusters with the new Guava version will result in errors.
+Users can attempt to use a newer version of Guava on the Accumulo server classpath; however,
+success depends on the Hadoop client libraries not internally using Guava methods that were removed.
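For client applications built with Maven, one way to avoid the version mismatch is to pin Guava to the Hadoop-compatible version in the application's own pom.xml (a sketch; adjust to your build):

```xml
<!-- Sketch: pin Guava to the version matching Hadoop 2.x and Accumulo 1.6.1,
     so that transitive dependencies cannot pull in a newer, incompatible release -->
<dependency>
  <groupId>com.google.guava</groupId>
  <artifactId>guava</artifactId>
  <version>14.0</version>
</dependency>
```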
+### Scanners eat InterruptedException
+Scanners previously consumed InterruptedExceptions and did not exit afterward. In multi-threaded
+environments, this is very problematic as there is no means to stop the Scanner from reading.
+[ACCUMULO-3030][31] fixes the Scanner so that interrupts are observed and the Scanner exits as expected.
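The class of bug fixed here can be illustrated with a plain-Java sketch (no Accumulo APIs; `ScannerLoop` and its methods are hypothetical stand-ins for a client read loop, with `Thread.sleep` standing in for a blocking read):

```java
// Sketch of the bug class: a read loop that swallows InterruptedException
// never observes the interrupt and keeps running; the fixed loop restores
// the interrupt flag and exits.
public class ScannerLoop {
    // Buggy pattern: catch and ignore the interrupt, clearing the flag.
    static int readSwallowing(int items) {
        int read = 0;
        for (int i = 0; i < items; i++) {
            try {
                Thread.sleep(1); // stands in for a blocking read/RPC
            } catch (InterruptedException e) {
                // swallowed: interrupt status is now cleared, loop continues
            }
            read++;
        }
        return read;
    }

    // Fixed pattern: restore the interrupt flag and stop reading.
    static int readObserving(int items) {
        int read = 0;
        for (int i = 0; i < items; i++) {
            try {
                Thread.sleep(1);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve interrupt status
                break; // exit as expected
            }
            read++;
        }
        return read;
    }

    public static void main(String[] args) throws Exception {
        // Interrupting the reader thread stops the observing loop early.
        Thread t = new Thread(() -> System.out.println("read " + readObserving(1000)));
        t.start();
        t.interrupt();
        t.join();
    }
}
```

The swallowing variant leaves a caller with no way to cancel the loop, which is exactly why consuming InterruptedException without re-asserting the flag is problematic in multi-threaded code.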
 ## Documentation
 The following documentation updates were made: 
- * [ACCUMULO-2540][15]
- * [ACCUMULO-2767][16]
- * [ACCUMULO-2796][17]
- * [ACCUMULO-2443][18]
- * [ACCUMULO-3008][19]
- * [ACCUMULO-2919][20]
- * [ACCUMULO-2874][21]
- * [ACCUMULO-2653][22]
- * [ACCUMULO-2437][23]
- * [ACCUMULO-3097][24]
- * [ACCUMULO-2499][25]
- * [ACCUMULO-1669][26]
+ * [ACCUMULO-2767][15]
+ * [ACCUMULO-2796][16]
+ * [ACCUMULO-2919][17]
+ * [ACCUMULO-3008][18]
+ * [ACCUMULO-2874][19]
+ * [ACCUMULO-2821][20]
+ * [ACCUMULO-3097][21]
+ * [ACCUMULO-3097][22]
 ## Testing
@@ -198,15 +206,16 @@ and, in HDFS High-Availability instances
\ No newline at end of file
\ No newline at end of file
