accumulo-commits mailing list archives

From mwa...@apache.org
Subject [04/36] accumulo git commit: ACCUMULO-4518 Use Jekyll posts for releases
Date Thu, 10 Nov 2016 21:38:10 GMT
http://git-wip-us.apache.org/repos/asf/accumulo/blob/9a50bd13/release_notes/1.5.1.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.1.md b/release_notes/1.5.1.md
deleted file mode 100644
index a13de7c..0000000
--- a/release_notes/1.5.1.md
+++ /dev/null
@@ -1,204 +0,0 @@
----
-title: Apache Accumulo 1.5.1 Release Notes
----
-
-Apache Accumulo 1.5.1 is a maintenance release on the 1.5 version branch.
-This release contains changes from over 200 issues, comprising bug fixes
-(client-side and server-side), new test cases, and updated Hadoop support,
-contributed by over 30 different contributors and committers.
-As this is a maintenance release, Apache Accumulo 1.5.1 has no client API 
-incompatibilities over Apache Accumulo 1.5.0 and requires no manual upgrade 
-process. Users of 1.5.0 are strongly encouraged to update as soon as possible 
-to benefit from the improvements.
-
-
-## Notable Improvements
-
-While new features are typically not added in a bug-fix release such as 1.5.1, the
-community does make a variety of API-compatible improvements. Some of the more notable
-improvements are described here.
-
-### PermGen Leak from Client API
-
-Accumulo's client code creates background threads that users presently cannot 
-stop through the API. This quickly causes problems when invoking the Accumulo
-API in application containers such as Apache Tomcat or JBoss and repeatedly 
-redeploying an application. [ACCUMULO-2128][3] introduces a static utility, 
-org.apache.accumulo.core.util.CleanUp, that users can invoke as part of a 
-teardown hook in their container to stop these threads and avoid 
-the eventual OutOfMemoryError "PermGen space".
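-
-A minimal sketch of such a teardown hook for a servlet container is shown below; the listener
-class name is illustrative, and the static `shutdownNow()` method is the entry point assumed
-on the CleanUp utility.
-
-    import javax.servlet.ServletContextEvent;
-    import javax.servlet.ServletContextListener;
-
-    import org.apache.accumulo.core.util.CleanUp;
-
-    public class AccumuloCleanupListener implements ServletContextListener {
-
-      @Override
-      public void contextInitialized(ServletContextEvent sce) {
-        // nothing to do at startup
-      }
-
-      @Override
-      public void contextDestroyed(ServletContextEvent sce) {
-        // stop the Accumulo client background threads before the webapp is undeployed
-        CleanUp.shutdownNow();
-      }
-    }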
-
-### Prefer IPv4 when starting Accumulo processes
-
-While Hadoop [does not support IPv6 networks][28], attempting to run on a 
-system that does not have IPv6 completely disabled can cause strange failures.
-[ACCUMULO-2262][4] invokes the JVM-provided configuration parameter at process
-startup to prefer IPv4 over IPv6.
-
-### Memory units in configuration
-
-In previous versions, units of memory had to be provided as upper-case (e.g. '2G', not '2g').
-Additionally, a non-intuitive error was printed when a lower-case unit was provided.
-[ACCUMULO-1933][7] allows lower-case memory units in all Accumulo configurations.
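-
-For example, a memory-sized setting in accumulo-site.xml can now use a lower-case unit (the
-property below is just one of the settings this applies to):
-
-    <property>
-      <name>tserver.memory.maps.max</name>
-      <value>1g</value>
-    </property>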
-
-### Apache Thrift maximum frame size
-
-Apache Thrift is used as the internal RPC service. [ACCUMULO-2360][14] allows 
-users to configure the maximum frame size an Accumulo server will read. This 
-prevents non-Accumulo clients from connecting and causing memory exhaustion.
-
-### MultiTableBatchWriter concurrency
-
-The MultiTableBatchWriter is a class which allows multiple tables to be written to
-from a single object that maintains a single buffer for caching Mutations across all tables. This is desirable
-because it greatly reduces the JVM heap needed for caching Mutations across
-many tables. Sadly, in Apache Accumulo 1.5.0, concurrent access to a single MultiTableBatchWriter
-heavily suffered from synchronization issues. [ACCUMULO-1833][35] introduces a fix
-which alleviates the blocking and idle-wait that previously occurred when multiple threads accessed
-a single MultiTableBatchWriter instance concurrently.
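-
-As a rough sketch (`connector` is an existing Connector; imports and exception handling are
-omitted), concurrent threads share one MultiTableBatchWriter and fetch per-table writers from it:
-
-    MultiTableBatchWriter mtbw = connector.createMultiTableBatchWriter(new BatchWriterConfig());
-
-    // each thread can safely request a writer for the table it needs
-    BatchWriter bw1 = mtbw.getBatchWriter("table1");
-    BatchWriter bw2 = mtbw.getBatchWriter("table2");
-
-    Mutation m = new Mutation(new Text("row1"));
-    m.put(new Text("cf"), new Text("cq"), new Value("value".getBytes()));
-    bw1.addMutation(m);
-    bw2.addMutation(m);
-
-    // closing the MultiTableBatchWriter flushes buffered Mutations for all tables
-    mtbw.close();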
-
-### Hadoop Versions
-
-Since Apache Accumulo 1.5.0 was released, Apache Hadoop 2.2.0 was also released
-as the first generally available (GA) Hadoop 2 release. This was a very exciting release
-for a number of reasons, but this also caused additional effort on Accumulo's part to
-ensure that Apache Accumulo continues to work across multiple Hadoop versions. Apache Accumulo 1.5.1
-should function with any recent Hadoop 1 or Hadoop 2 without any special steps, tricks or instructions
-required.
-
-
-## Notable Bug Fixes
-
-As with any Apache Accumulo release, numerous bugs have been fixed. Most
-are very subtle and won't affect the common user; however, some notable and rather common
-bugs were resolved as a part of 1.5.1.
-
-### Failure of ZooKeeper server in quorum kills connected Accumulo services
-
-Apache ZooKeeper provides a number of wonderful features that Accumulo uses to accomplish
-a variety of tasks, most notably a distributed locking service. Typically, multiple ZooKeeper
-servers are run to provide resilience against a certain number of node failures. [ACCUMULO-1572][13]
-resolves an issue where Accumulo processes would kill themselves when the ZooKeeper server they
-were communicating with died instead of failing over to another ZooKeeper server in the quorum.
-
-### Monitor table state isn't updated
-
-The Accumulo Monitor contains a column for the state of each table in the Accumulo instance.
-[ACCUMULO-1920][25] resolves an issue where the Monitor would not see updates from ZooKeeper
-and thus displayed stale table states; previously, the only resolution was to restart the Monitor process.
-
-### Two locations for the same extent
-
-The !METADATA table is the brains behind the data storage for each table, tracking information
-like which files comprise a Tablet, and which TabletServers are hosting which Tablets. [ACCUMULO-2057][9]
-fixes an issue where the !METADATA table contained multiple locations (hosting server) for
-a single Tablet.
-
-### Deadlock on !METADATA tablet unload
-
-Tablets are unloaded, typically, when a shutdown request is issued. [ACCUMULO-1143][27] resolves
-a potential deadlock issue when a merging-minor compaction is issued to flush in-memory data
-to disk before unloading a Tablet.
-
-### Other notable fixes
-
- * [ACCUMULO-1800][5] Fixed deletes made via the Proxy.
- * [ACCUMULO-1994][6] Fixed ranges in the Proxy.
- * [ACCUMULO-2234][8] Fixed offline map reduce over non default HDFS location.
- * [ACCUMULO-1615][15] Fixed `service accumulo-tserver stop`.
- * [ACCUMULO-1876][16] Fixed issues depending on Accumulo using Apache Ivy.
- * [ACCUMULO-2261][10] Duplicate locations for a Tablet.
- * [ACCUMULO-2037][11] Tablets assigned to previous location.
- * [ACCUMULO-1821][12] Avoid recovery on recovering Tablets.
- * [ACCUMULO-2078][20] Incorrectly computed ACCUMULO_LOG_HOST in example configurations.
- * [ACCUMULO-1985][21] Configuration to bind Monitor on all network interfaces.
- * [ACCUMULO-1999][22] Allow '0' to signify random port for the Master.
- * [ACCUMULO-1630][24] Fixed GC to interpret any IP/hostname.
-
-
-## Known Issues
-
-When using Accumulo 1.5 and Hadoop 2, Accumulo will call hsync() on HDFS.
-Calling hsync improves durability by ensuring data is on disk (where other older 
-Hadoop versions might lose data in the face of power failure); however, calling
-hsync frequently does noticeably slow writes. A simple workaround is to increase 
-the value of the tserver.mutation.queue.max configuration parameter via accumulo-site.xml.
-
-A value of "4M" is a better recommendation than the default, but memory consumption will increase
-with the number of concurrent writers to that TabletServer. For example, a value of 4M with
-50 concurrent writers would equate to approximately 200M of Java heap being used for
-mutation queues.
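-
-For reference, setting this property in accumulo-site.xml looks like the following:
-
-    <property>
-      <name>tserver.mutation.queue.max</name>
-      <value>4M</value>
-    </property>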
-
-For more information, see [ACCUMULO-1950][2] and [this comment][1].
-
-## Documentation
-
-The following documentation updates were made: 
-
- * [ACCUMULO-1956][18]
- * [ACCUMULO-1428][19]
- * [ACCUMULO-1687][29]
- * [ACCUMULO-2141][30]
- * [ACCUMULO-1946][31]
- * [ACCUMULO-2223][32]
- * [ACCUMULO-2226][33]
- * [ACCUMULO-1470][34]
-
-## Testing
-
-Below is a list of all platforms that 1.5.1 was tested against by developers. Each Apache Accumulo release
-has a set of tests that must be run before the candidate is capable of becoming an official release. That list includes the following:
-
- 1. Successfully run all unit tests
- 2. Successfully run all functional tests (test/system/auto)
- 3. Successfully complete two 24-hour RandomWalk tests (LongClean module), with and without "agitation"
- 4. Successfully complete two 24-hour Continuous Ingest tests, with and without "agitation", with data verification
- 5. Successfully complete two 72-hour Continuous Ingest tests, with and without "agitation"
-
-Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
-on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
-and, in HDFS High-Availability instances, forcing NameNode failover.
-
-{: #release_notes_testing .table }
-| OS         | Hadoop                     | Nodes | ZooKeeper                  | HDFS High-Availability | Tests                                             |
-|------------|----------------------------|-------|----------------------------|------------------------|---------------------------------------------------|
-| CentOS 6.5 | HDP 2.0 (Apache 2.2.0)     | 6     | HDP 2.0 (Apache 3.4.5)     | Yes (QJM)              | All required tests                                |
-| CentOS 6.4 | CDH 4.5.0 (2.0.0+cdh4.5.0) | 7     | CDH 4.5.0 (3.4.5+cdh4.5.0) | Yes (QJM)              | Unit, functional and 24hr Randomwalk w/ agitation |
-| CentOS 6.4 | CDH 4.5.0 (2.0.0+cdh4.5.0) | 7     | CDH 4.5.0 (3.4.5+cdh4.5.0) | Yes (QJM)              | 2x 24/hr continuous ingest w/ verification        |
-| CentOS 6.3 | Apache 1.0.4               | 1     | Apache 3.3.5               | No                     | Local testing, unit and functional tests          |
-| RHEL 6.4   | Apache 2.2.0               | 10    | Apache 3.4.5               | No                     | Functional tests                                  |
-
-[1]: https://issues.apache.org/jira/browse/ACCUMULO-1905?focusedCommentId=13915208&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13915208
-[2]: https://issues.apache.org/jira/browse/ACCUMULO-1950
-[3]: https://issues.apache.org/jira/browse/ACCUMULO-2128 
-[4]: https://issues.apache.org/jira/browse/ACCUMULO-2262
-[5]: https://issues.apache.org/jira/browse/ACCUMULO-1800
-[6]: https://issues.apache.org/jira/browse/ACCUMULO-1994
-[7]: https://issues.apache.org/jira/browse/ACCUMULO-1933
-[8]: https://issues.apache.org/jira/browse/ACCUMULO-2234
-[9]: https://issues.apache.org/jira/browse/ACCUMULO-2057
-[10]: https://issues.apache.org/jira/browse/ACCUMULO-2261
-[11]: https://issues.apache.org/jira/browse/ACCUMULO-2037
-[12]: https://issues.apache.org/jira/browse/ACCUMULO-1821
-[13]: https://issues.apache.org/jira/browse/ACCUMULO-1572
-[14]: https://issues.apache.org/jira/browse/ACCUMULO-2360
-[15]: https://issues.apache.org/jira/browse/ACCUMULO-1615
-[16]: https://issues.apache.org/jira/browse/ACCUMULO-1876
-[17]: https://issues.apache.org/jira/browse/ACCUMULO-2127
-[18]: https://issues.apache.org/jira/browse/ACCUMULO-1956
-[19]: https://issues.apache.org/jira/browse/ACCUMULO-1428
-[20]: https://issues.apache.org/jira/browse/ACCUMULO-2078
-[21]: https://issues.apache.org/jira/browse/ACCUMULO-1985
-[22]: https://issues.apache.org/jira/browse/ACCUMULO-1999
-[23]: https://issues.apache.org/jira/browse/ACCUMULO-2065
-[24]: https://issues.apache.org/jira/browse/ACCUMULO-1630
-[25]: https://issues.apache.org/jira/browse/ACCUMULO-1920
-[27]: https://issues.apache.org/jira/browse/ACCUMULO-1143
-[28]: https://wiki.apache.org/hadoop/HadoopIPv6
-[29]: https://issues.apache.org/jira/browse/ACCUMULO-1687
-[30]: https://issues.apache.org/jira/browse/ACCUMULO-2141
-[31]: https://issues.apache.org/jira/browse/ACCUMULO-1946
-[32]: https://issues.apache.org/jira/browse/ACCUMULO-2223
-[33]: https://issues.apache.org/jira/browse/ACCUMULO-2226
-[34]: https://issues.apache.org/jira/browse/ACCUMULO-1470
-[35]: https://issues.apache.org/jira/browse/ACCUMULO-1833

http://git-wip-us.apache.org/repos/asf/accumulo/blob/9a50bd13/release_notes/1.5.2.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.2.md b/release_notes/1.5.2.md
deleted file mode 100644
index e1a47ae..0000000
--- a/release_notes/1.5.2.md
+++ /dev/null
@@ -1,178 +0,0 @@
----
-title: Apache Accumulo 1.5.2 Release Notes
----
-
-Apache Accumulo 1.5.2 is a maintenance release on the 1.5 version branch.
-This release contains changes from over 100 issues, comprising bug fixes
-(client-side and server-side), new test cases, and updated Hadoop support,
-contributed by over 30 different contributors and committers.
-As this is a maintenance release, Apache Accumulo 1.5.2 has no client API 
-incompatibilities over Apache Accumulo 1.5.0 and 1.5.1 and requires no manual upgrade 
-process. Users of 1.5.0 or 1.5.1 are strongly encouraged to update as soon as possible 
-to benefit from the improvements.
-
-Users who are new to Accumulo are encouraged to use a 1.6 release as opposed
-to the 1.5 line as development has already shifted towards the 1.6 line. For those
-who cannot or do not want to upgrade to 1.6, 1.5.2 is still an excellent choice
-over earlier versions in the 1.5 line.
-
-
-## Performance Improvements
-
-Apache Accumulo 1.5.2 includes a number of performance-related fixes over previous versions.
-
-
-### Write-Ahead Log sync performance
-
-The Write-Ahead Log (WAL) files are used to ensure durability of updates made to Accumulo.
-A sync is called on the file in HDFS to make sure that the changes to the WAL are persisted
-to disk, which allows Accumulo to recover in the case of failure. [ACCUMULO-2766][9] fixed
-an issue where an operation against a WAL would unnecessarily wait for multiple syncs, slowing
-down the ingest on the system.
-
-### Minor-Compactions not aggressive enough
-
-On a system with ample memory provided to Accumulo, long hold-times were observed which
-blocked the ingest of new updates. Running minor compactions more frequently to free more
-server-side memory increased the overall throughput on the node. These changes
-were made in [ACCUMULO-2905][10].
-
-### HeapIterator optimization
-
-Iterators, a notable feature of Accumulo, are provided to users as a server-side programming
-construct, but are also used internally for numerous server operations. One of these system iterators 
-is the HeapIterator, which implements a PriorityQueue of other Iterators. One way this iterator is
-used is to merge multiple files in HDFS to present a single, sorted stream of Key-Value pairs. [ACCUMULO-2827][11]
-introduces a performance optimization to the HeapIterator which can improve the speed of the
-HeapIterator in common cases.
-
-### Write-Ahead log sync implementation
-
-In Hadoop-2, two implementations of sync are provided: hflush and hsync. Both of these
-methods provide a way to request that the datanodes write the data to the underlying
-medium and not just hold it in memory (the *fsync* syscall). While both of these methods
-inform the Datanodes to sync the relevant block(s), *hflush* does not wait for acknowledgement
-from the Datanodes that the sync finished, whereas *hsync* does. To provide the most reliable system
-"out of the box", Accumulo defaults to *hsync* so that your data is as secure as possible in 
-a variety of situations (notably, unexpected power outages).
-
-The downside is that performance tends to suffer because waiting for a sync to disk is a very
-expensive operation. [ACCUMULO-2842][12] introduces a new system property, tserver.wal.sync.method,
-that lets users change the HDFS sync implementation from *hsync* to *hflush*. Using *hflush* instead
-of *hsync* may result in about a 30% increase in ingest performance.
-
-For users upgrading from Hadoop-1 or Hadoop-0.20 releases, *hflush* is the equivalent of how
-sync was implemented in these older versions of Hadoop and should give comparable performance.
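-
-As an illustration, opting into *hflush* is a single property in accumulo-site.xml:
-
-    <property>
-      <name>tserver.wal.sync.method</name>
-      <value>hflush</value>
-    </property>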
-
-### Server-side mutation queue size
-
-When users desire writes to be as durable as possible, using *hsync*, the ingest performance
-of the system can be improved by increasing the tserver.mutation.queue.max property. The cost
-of this change is that it will cause TabletServers to use additional memory per writer. In 1.5.1,
-the value of this parameter defaulted to a conservative 256K, which resulted in sub-par ingest
-performance.
-
-1.5.2 and [ACCUMULO-3018][13] increase this buffer to 1M, which has a noticeable positive impact on
-ingest performance with a minimal increase in TabletServer memory usage.
-
-## Notable Bug Fixes
-
-### Fixes MapReduce package name change
-
-1.5.1 inadvertently included a change to RangeInputSplit which created an incompatibility
-with 1.5.0. The original class has been restored to ensure that users accessing
-the RangeInputSplit class do not have to alter their client code. See [ACCUMULO-2586][1] for
-more information.
-
-### Add configurable maximum frame size to Apache Thrift proxy
-
-The Thrift proxy server was subject to memory exhaustion, typically
-due to bad input, where the server would attempt to allocate a very large
-buffer and die in the process. [ACCUMULO-2658][2] introduces a configuration
-parameter, like [ACCUMULO-2360][3], to prevent this error.
-
-### Offline tables can prevent tablet balancing
-
-Before 1.5.2, when a table with many tablets was created, ingested into, and
-taken offline, tablet balancing may have stopped. This would happen if there
-were outstanding tablet migrations for the table, because those migrations could not occur.
-The balancer will not run when there are outstanding migrations; therefore, a
-system could become unbalanced. [ACCUMULO-2694][4] introduces a fix to ensure
-that offline tables do not block balancing and improves the server-side
-logging.
-
-### MiniAccumuloCluster process management
-
-MiniAccumuloCluster had a few issues which could cause deadlocks or methods that
-never return. Most of these are related to management of the Accumulo processes
-([ACCUMULO-2764][5], [ACCUMULO-2985][6], and [ACCUMULO-3055][7]).
-
-### IteratorSettings not correctly serialized in RangeInputSplit
-
-The Writable interface methods on the RangeInputSplit class accidentally omitted
-calls to serialize the IteratorSettings configured for the Job. [ACCUMULO-2962][8]
-fixes the serialization and adds some additional tests.
-
-### Constraint violation causes hung scans
-
-A failed bulk import transaction could create an infinitely retrying
-loop due to a constraint violation. This directly prevented scans from completing,
-and it also hung compactions. [ACCUMULO-3096][14] fixes the issue so that the
-constraint no longer hangs the entire system.
-
-## Documentation
-
-The following documentation updates were made: 
-
- * [ACCUMULO-2540][15]
- * [ACCUMULO-2767][16]
- * [ACCUMULO-2796][17]
- * [ACCUMULO-2443][18]
- * [ACCUMULO-3008][19]
- * [ACCUMULO-2919][20]
- * [ACCUMULO-2874][21]
- * [ACCUMULO-2653][22]
- * [ACCUMULO-2437][23]
- * [ACCUMULO-3097][24]
- * [ACCUMULO-2499][25]
- * [ACCUMULO-1669][26]
-
-## Testing
-
-Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
-on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
-and, in HDFS High-Availability instances, forcing NameNode failover.
-
-{: #release_notes_testing .table }
-| OS       | Hadoop                | Nodes | ZooKeeper    | HDFS High-Availability | Tests                                                                                               |
-|----------|-----------------------|-------|--------------|------------------------|-----------------------------------------------------------------------------------------------------|
-| Gentoo   | Apache 2.6.0-SNAPSHOT | 1     | Apache 3.4.5 | No                     | Unit and Functional Tests, ContinuousIngest w/ verification (1B entries)                            |
-| CentOS 6 | Apache 2.3.0          | 20    | Apache 3.4.5 | No                     | 24/hr RandomWalk, 24/hr ContinuousIngest w/ verification w/ and w/o agitation (30B and 23B entries) |
-
-
-[1]: https://issues.apache.org/jira/browse/ACCUMULO-2586
-[2]: https://issues.apache.org/jira/browse/ACCUMULO-2658
-[3]: https://issues.apache.org/jira/browse/ACCUMULO-2360
-[4]: https://issues.apache.org/jira/browse/ACCUMULO-2694
-[5]: https://issues.apache.org/jira/browse/ACCUMULO-2764
-[6]: https://issues.apache.org/jira/browse/ACCUMULO-2985
-[7]: https://issues.apache.org/jira/browse/ACCUMULO-3055
-[8]: https://issues.apache.org/jira/browse/ACCUMULO-2962
-[9]: https://issues.apache.org/jira/browse/ACCUMULO-2766
-[10]: https://issues.apache.org/jira/browse/ACCUMULO-2905
-[11]: https://issues.apache.org/jira/browse/ACCUMULO-2827
-[12]: https://issues.apache.org/jira/browse/ACCUMULO-2842
-[13]: https://issues.apache.org/jira/browse/ACCUMULO-3018
-[14]: https://issues.apache.org/jira/browse/ACCUMULO-3096
-[15]: https://issues.apache.org/jira/browse/ACCUMULO-2540
-[16]: https://issues.apache.org/jira/browse/ACCUMULO-2767
-[17]: https://issues.apache.org/jira/browse/ACCUMULO-2796
-[18]: https://issues.apache.org/jira/browse/ACCUMULO-2443
-[19]: https://issues.apache.org/jira/browse/ACCUMULO-3008
-[20]: https://issues.apache.org/jira/browse/ACCUMULO-2919
-[21]: https://issues.apache.org/jira/browse/ACCUMULO-2874
-[22]: https://issues.apache.org/jira/browse/ACCUMULO-2653
-[23]: https://issues.apache.org/jira/browse/ACCUMULO-2437
-[24]: https://issues.apache.org/jira/browse/ACCUMULO-3097
-[25]: https://issues.apache.org/jira/browse/ACCUMULO-2499
-[26]: https://issues.apache.org/jira/browse/ACCUMULO-1669

http://git-wip-us.apache.org/repos/asf/accumulo/blob/9a50bd13/release_notes/1.5.3.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.3.md b/release_notes/1.5.3.md
deleted file mode 100644
index 9d4956e..0000000
--- a/release_notes/1.5.3.md
+++ /dev/null
@@ -1,112 +0,0 @@
----
-title: Apache Accumulo 1.5.3 Release Notes
----
-
-Apache Accumulo 1.5.3 is a bug-fix release for the 1.5 series. It is likely to be the last
-1.5 release, with development shifting towards newer release lines. We recommend upgrading
-to a newer version to continue to get bug fixes and new features.
-
-In the context of Accumulo's [Semantic Versioning][semver] [guidelines][api],
-this is a "patch version". This means that there should be no public API changes. Any
-changes which were made were done in a backwards-compatible manner. Code that
-runs against 1.5.2 should run against 1.5.3.
-
-We'd like to thank all of the committers and contributors who had a part in
-making this release, from code contributions to testing. Everyone's efforts are
-greatly appreciated.
-
-## Security Changes
-
-### [SSLv3 disabled (POODLE)][ACCUMULO-3316]
-
-Many Accumulo services were capable of enabling wire encryption using
-SSL connectors. To be safe, [ACCUMULO-3316] disables the problematic SSLv3 protocol by default, which was
-potentially susceptible to the POODLE man-in-the-middle attack. [ACCUMULO-3317] also disables SSLv3 in the monitor,
-so it will not accept SSLv3 client connections when running with HTTPS.
-
-## Notable Bug Fixes
-
-### [SourceSwitchingIterator Deadlock][ACCUMULO-3745]
-
-An instance of SourceSwitchingIterator, the Accumulo iterator which transparently manages
-whether data for a tablet is read from memory (the in-memory map) or disk (HDFS after a minor
-compaction), was found deadlocked in a production system.
-
-This deadlock prevented the scan and the minor compaction from ever successfully completing
-without restarting the tablet server. [ACCUMULO-3745] fixes the inconsistent synchronization
-inside of the SourceSwitchingIterator to prevent this deadlock from happening in the future.
-
-The only mitigation of this bug was to restart the tablet server that was deadlocked.
-
-### [Table flush blocked indefinitely][ACCUMULO-3597]
-
-While running the Accumulo RandomWalk distributed test, it was observed that all activity in
-Accumulo had stopped and there was an offline Accumulo metadata table tablet. The system first
-tried to flush a user tablet, but the metadata table was not online (likely due to the agitation
-process which stops and starts Accumulo processes during the test). After this call, a call to
-load the metadata tablet was queued but could not complete until the previous flush call finished. Thus,
-a deadlock occurred.
-
-This deadlock happened because the synchronous flush call could not complete before the load
-tablet call completed, while the load tablet call couldn't run because of the connection caching we
-perform in Accumulo's RPC layer to reduce the number of sockets we need to create to send data.
-[ACCUMULO-3597] prevents this deadlock by forcing the use of a non-cached connection for the RPC
-message requesting a metadata tablet to be loaded.
-
-While this fix does consume additional network resources, the concern is minimal
-because the number of metadata tablets is typically very small with respect to the total number of
-tablets in the system.
-
-The only mitigation of this bug was to restart the tablet server that was hung.
-
-### [RPC Connections not cached][ACCUMULO-3574]
-
-It was observed that the underlying connections used for invoking RPCs were not actually being cached,
-despite caching being requested. While this did not result in a noticeable
-performance impact, it was a deficiency. [ACCUMULO-3574] ensures that connections are cached when
-it is requested that they be.
-
-### [Deletes on Apache Thrift Proxy API ignored][ACCUMULO-3474]
-
-A user noted that when trying to specify a delete using the Accumulo Thrift Proxy, the delete
-was treated as an update. [ACCUMULO-3474] fixes the Proxy server such that deletes are properly
-respected as specified by the client.
-
-## Other Changes
-
-Other changes for this version can be found [in JIRA][CHANGES].
-
-## Testing
-
-Each unit and functional test only runs on a single node, while the RandomWalk
-and Continuous Ingest tests run on any number of nodes. *Agitation* refers to
-randomly restarting Accumulo processes and Hadoop DataNode processes, and, in
-HDFS High-Availability instances, forcing NameNode fail-over.
-
-During testing, multiple Accumulo developers noticed some stability issues
-with HDFS using Apache Hadoop 2.6.0 when restarting Accumulo processes and
-HDFS datanodes. The developers investigated these issues as a part of the
-normal release testing procedures, but were unable to find a definitive cause
-of these failures. Users are encouraged to follow
-[ACCUMULO-2388][ACCUMULO-2388] if they wish to follow any future developments.
-One possible workaround is to increase the `general.rpc.timeout` in the
-Accumulo configuration from `120s` to `240s`.
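-
-For example, the timeout can be raised in accumulo-site.xml as follows:
-
-    <property>
-      <name>general.rpc.timeout</name>
-      <value>240s</value>
-    </property>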
-
-{: #release_notes_testing .table }
-| OS         | Hadoop | Nodes | ZooKeeper | HDFS High-Availability | Tests                        |
-|------------|--------|-------|-----------|------------------------|------------------------------|
-| Gentoo     | 2.6.0  | 1     | 3.4.5     | No                     | Unit and Integration Tests   |
-| Centos 6.5 | 2.7.1  | 6     | 3.4.5     | No                     | Continuous Ingest and Verify |
-
-[ACCUMULO-3316]: https://issues.apache.org/jira/browse/ACCUMULO-3316
-[ACCUMULO-3317]: https://issues.apache.org/jira/browse/ACCUMULO-3317
-[ACCUMULO-2388]: https://issues.apache.org/jira/browse/ACCUMULO-2388
-[ACCUMULO-3474]: https://issues.apache.org/jira/browse/ACCUMULO-3474
-[ACCUMULO-3574]: https://issues.apache.org/jira/browse/ACCUMULO-3574
-[ACCUMULO-3597]: https://issues.apache.org/jira/browse/ACCUMULO-3597
-[ACCUMULO-3745]: https://issues.apache.org/jira/browse/ACCUMULO-3745
-[api]: https://github.com/apache/accumulo/blob/1.7.0/README.md#api
-[readme]: https://github.com/apache/accumulo/blob/1.5.3/README.md
-[semver]: http://semver.org
-[CHANGES]: https://issues.apache.org/jira/browse/ACCUMULO/fixforversion/12328662
-[REL_152]: {{ site.baseurl }}/release_notes/1.5.2

http://git-wip-us.apache.org/repos/asf/accumulo/blob/9a50bd13/release_notes/1.5.4.md
----------------------------------------------------------------------
diff --git a/release_notes/1.5.4.md b/release_notes/1.5.4.md
deleted file mode 100644
index 6981e51..0000000
--- a/release_notes/1.5.4.md
+++ /dev/null
@@ -1,67 +0,0 @@
----
-title: Apache Accumulo 1.5.4 Release Notes
----
-
-Apache Accumulo 1.5.4 is one more bug-fix release for the 1.5 series. Like 1.5.3 before it, this release contains a
-very small changeset when considering the normal size of changes in a release.
-
-This release contains no changes to the [public API][api]. As such, there are no concerns
-for the compatibility of user code running against 1.5.3. All users are encouraged to upgrade
-immediately without concern for stability or compatibility.
-
-A full list of changes is available via [CHANGES][CHANGES].
-
-We'd like to thank all of the committers and contributors who had a part in
-making this release, from code contributions to testing. Everyone's efforts are
-greatly appreciated.
-
-## Correctness Bugs
-
-### Silent data-loss via bulk imported files
-
-A user recently reported that a simple bulk-import application would occasionally lose some records. Through investigation,
-it was found that when bulk imports into a table failed the initial assignment, the logic that automatically retries
-the imports was incorrectly choosing the tablets to import the files into. [ACCUMULO-3967][ACCUMULO-3967] contains
-more information on the cause and identification of the bug. The data-loss condition would only affect entire files.
-If records from a file exist in Accumulo, it is still guaranteed that all records within that imported file were
-successfully imported.
-
-As such, users who have bulk import applications using previous versions of Accumulo should verify that all of their
-data was correctly ingested into Accumulo and immediately update to Accumulo 1.5.4.
-
-Thanks to Edward Seidl for reporting this bug to us!
-
-## Server-side auditing changes
-
-Thanks to James Mello for reporting and providing the fixes to the following server-side auditing issues.
-
-### Incorrect audit initialization
-
-It was observed that the implementation used to audit user API requests on Accumulo server processes
-was not being correctly initialized, which caused audit messages to never be generated. This was rectified
-in [ACCUMULO-3939][ACCUMULO-3939].
-
-### Missing audit implementations
-
-It was also observed that some server-side API implementations did not include audit messages, which resulted
-in an incomplete historical picture of what operations a user might have invoked. The missing audits (and those
-that were added) are described in [ACCUMULO-3946][ACCUMULO-3946].
-
-## Testing
-
-Each unit and functional test only runs on a single node, while the RandomWalk
-and Continuous Ingest tests run on any number of nodes. *Agitation* refers to
-randomly restarting Accumulo processes and Hadoop DataNode processes, and, in
-HDFS High-Availability instances, forcing NameNode fail-over.
-
-{: #release_notes_testing .table }
-| OS         | Hadoop | Nodes | ZooKeeper | HDFS High-Availability | Tests                                                          |
-|------------|--------|-------|-----------|------------------------|----------------------------------------------------------------|
-| OSX        | 2.6.0  | 1     | 3.4.5     | No                     | Unit and Functional Tests                                      |
-| Centos 6.5 | 2.7.1  | 6     | 3.4.5     | No                     | Continuous Ingest and Verify (10B entries), Randomwalk (24hrs) |
-
-[ACCUMULO-3967]: https://issues.apache.org/jira/browse/ACCUMULO-3967
-[ACCUMULO-3939]: https://issues.apache.org/jira/browse/ACCUMULO-3939
-[ACCUMULO-3946]: https://issues.apache.org/jira/browse/ACCUMULO-3946
-[api]: https://github.com/apache/accumulo/blob/1.7.0/README.md#api
-[CHANGES]: https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12312121&version=12333106

http://git-wip-us.apache.org/repos/asf/accumulo/blob/9a50bd13/release_notes/1.6.0.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.0.md b/release_notes/1.6.0.md
deleted file mode 100644
index 1e5d311..0000000
--- a/release_notes/1.6.0.md
+++ /dev/null
@@ -1,349 +0,0 @@
----
-title: Apache Accumulo 1.6.0 Release Notes
----
-
-Apache Accumulo 1.6.0 adds some major new features and fixes many bugs.  This release contains changes from 609 issues contributed by 36 contributors and committers.  
-
-Accumulo 1.6.0 runs on Hadoop 1; however, Hadoop 2 with an HA NameNode is recommended for production systems.  In addition to HA, Hadoop 2 also offers better data durability guarantees than Hadoop 1 in the case when nodes lose power.
-
-## Notable Improvements
-
-### Multiple volume support
-
-[BigTable's][1] design allows for its internal metadata to automatically spread across multiple nodes.  Accumulo has followed this design and scales very well as a result.  There is one impediment to scaling though, and this is the HDFS namenode.  There are two problems with the namenode when it comes to scaling.  First, the namenode stores all of its filesystem metadata in memory on a single machine.  This introduces an upper bound on the number of files Accumulo can have.  Second, there is an upper bound on the number of file operations per second that a single namenode can support.  For example, a namenode can only support a few thousand delete or create file requests per second.  
-
-To overcome this bottleneck, support for multiple namenodes was added under [ACCUMULO-118][ACCUMULO-118].  This change allows Accumulo to store its files across multiple namenodes.  To use this feature, place a comma-separated list of namenode URIs in the new *instance.volumes* configuration property in accumulo-site.xml.  When upgrading to 1.6.0 and multiple namenode support is desired, modify this setting **only** after a successful upgrade.
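-
-As a sketch, with two namenodes the property might look like the following in accumulo-site.xml
-(the hostnames, ports, and paths are illustrative):
-
-      <property>
-        <name>instance.volumes</name>
-        <value>hdfs://namenode1:8020/accumulo,hdfs://namenode2:8020/accumulo</value>
-      </property>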
-
-### Table namespaces
-
-Administering an Accumulo instance with many tables is cumbersome.  To ease this, [ACCUMULO-802][ACCUMULO-802] introduced table namespaces, which allow tables to be grouped into logical collections.  This allows configuration and permission changes to be made to a namespace, which will apply to all of its tables.
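-
-A brief sketch of the client API (assuming an existing Connector named `conn`; the namespace,
-table, and property names are illustrative, and exception handling is omitted):
-
-      // create a namespace and a table within it; namespaced tables use the "namespace.table" naming scheme
-      conn.namespaceOperations().create("analytics");
-      conn.tableOperations().create("analytics.pageviews");
-
-      // a property set on the namespace applies to all of its tables
-      conn.namespaceOperations().setProperty("analytics", "table.file.replication", "2");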
-
-### Conditional Mutations
-
-Accumulo now offers a way to make atomic read-modify-write row changes from the client side.  Atomic test-and-set row operations make this possible.  [ACCUMULO-1000][ACCUMULO-1000] added conditional mutations and a conditional writer.  A conditional mutation has tests on columns that must pass before any changes are made.  These tests are executed in server processes while a row lock is held.  Below is a simple example of making atomic row changes using conditional mutations.
-
- 1. Read columns X,Y,SEQ into a,b,s from row R1 using an isolated scanner.
- 2. For row R1 write conditional mutation X=f(a),Y=g(b),SEQ=s+1 if SEQ==s.
- 3. If conditional mutation failed, then goto step 1.
-
-The only built-in tests that conditional mutations support are equality and isNull.  However, iterators can be configured on a conditional mutation to run before these tests.  This makes it possible to implement any number of tests, such as less than, greater than, contains, etc.
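-
-A rough sketch of this API in code (assuming an existing Connector named `conn`; the row, column,
-and value names are illustrative, and exception handling is omitted):
-
-      ConditionalWriter cw = conn.createConditionalWriter("table1", new ConditionalWriterConfig());
-
-      // only apply the updates if column meta:SEQ currently holds the value "5"
-      ConditionalMutation cm = new ConditionalMutation("R1");
-      cm.addCondition(new Condition("meta", "SEQ").setValue("5"));
-      cm.put("data", "X", "new-x-value");
-      cm.put("data", "Y", "new-y-value");
-      cm.put("meta", "SEQ", "6");
-
-      ConditionalWriter.Result result = cw.write(cm);
-      if (result.getStatus() != ConditionalWriter.Status.ACCEPTED) {
-        // the condition failed or the outcome is unknown; re-read the row and retry
-      }
-
-      cw.close();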
-
-### Encryption
-
-Encryption is still an experimental feature, but much progress has been made since 1.5.0.  Support for encrypting rfiles and write ahead logs was added in [ACCUMULO-958][ACCUMULO-958] and [ACCUMULO-980][ACCUMULO-980].  Support for encrypting data over the wire using SSL was added in [ACCUMULO-1009][ACCUMULO-1009].
- 
-When a tablet server fails, its write ahead logs are sorted and stored in HDFS.  In 1.6.0, encrypting these sorted write ahead logs is not supported.  [ACCUMULO-981][ACCUMULO-981] is open to address this issue.  
-
-### Pluggable compaction strategies
-
-One of the key elements of the [BigTable][1] design is use of the [Log Structured Merge Tree][2].  This entails sorting data in memory, writing out sorted files, and then later merging multiple sorted files into a single file.  These automatic merges happen in the background and Accumulo decides when to merge files based on comparing the relative sizes of files to a compaction ratio.  Before 1.6.0, adjusting the compaction ratio was the only way a user could control this process.  [ACCUMULO-1451][ACCUMULO-1451] introduces pluggable compaction strategies which allow users to choose when and what files to compact.  [ACCUMULO-1808][ACCUMULO-1808] adds a compaction strategy that prevents compaction of files over a configurable size.
-
-### Lexicoders
-
-Accumulo only sorts data lexicographically.  Getting something like a pair of (*String*,*Integer*) to sort correctly in Accumulo is tricky.  It's tricky because you only want to compare the integers if the strings are equal.  It's possible to make this sort properly in Accumulo if the data is encoded properly, but it can be difficult.  To make this easier, [ACCUMULO-1336][ACCUMULO-1336] added Lexicoders to the Accumulo API.  Lexicoders provide an easy way to serialize data so that it sorts properly lexicographically.  Below is a simple example.
-
-       PairLexicoder plex = new PairLexicoder(new StringLexicoder(), new IntegerLexicoder());
-       byte[] ba1 = plex.encode(new ComparablePair<String, Integer>("b",1));
-       byte[] ba2 = plex.encode(new ComparablePair<String, Integer>("aa",1));
-       byte[] ba3 = plex.encode(new ComparablePair<String, Integer>("a",2));
-       byte[] ba4 = plex.encode(new ComparablePair<String, Integer>("a",1)); 
-       byte[] ba5 = plex.encode(new ComparablePair<String, Integer>("aa",-3));
-
-       //sorting ba1,ba2,ba3,ba4, and ba5 lexicographically will result in the same order as sorting the ComparablePairs
-
-### Locality groups in memory
-
-In cases where a very small amount of data is stored in a locality group, one would expect fast scans over that locality group.  However, this was not always the case because recently written data stored in memory was not partitioned by locality group.  Therefore, if a table had 100GB of data in memory and 1MB of that was in locality group A, then scanning A would have required reading all 100GB.  [ACCUMULO-112][ACCUMULO-112] changes this and partitions data by locality group as it is written.
-
-### Service IP addresses
-
-Previous versions of Accumulo always used IP addresses internally.  This could be problematic in virtual machine environments where IP addresses change.  In [ACCUMULO-1585][ACCUMULO-1585] this was changed; Accumulo now uses the exact hostnames from its config files for internal addressing.
-
-All Accumulo processes running on a cluster are locatable via zookeeper.  Therefore, using well-known ports is not really required.  [ACCUMULO-1664][ACCUMULO-1664] makes it possible for all Accumulo processes to use random ports.  This makes it easier to run multiple Accumulo instances on a single node.
-
-While Hadoop [does not support IPv6 networks][3], attempting to run on a system that does not have IPv6 completely disabled can cause strange failures. [ACCUMULO-2262][ACCUMULO-2262] invokes the JVM-provided configuration parameter at process startup to prefer IPv4 over IPv6.
-
-### ViewFS
-
-Multiple bug-fixes were made to support running Accumulo over multiple HDFS instances using ViewFS. [ACCUMULO-2047][ACCUMULO-2047] is the parent
-ticket that contains numerous fixes to enable this support.
-
-### Maven Plugin
-
-This version of Accumulo is accompanied by a new maven plugin for testing client apps ([ACCUMULO-1030][ACCUMULO-1030]). You can execute the accumulo-maven-plugin inside your project by adding the following to your pom.xml's build plugins section:
-
-      <plugin>
-        <groupId>org.apache.accumulo</groupId>
-        <artifactId>accumulo-maven-plugin</artifactId>
-        <version>1.6.0</version>
-        <configuration>
-          <instanceName>plugin-it-instance</instanceName>
-          <rootPassword>ITSecret</rootPassword>
-        </configuration>
-        <executions>
-          <execution>
-            <id>run-plugin</id>
-            <goals>
-              <goal>start</goal>
-              <goal>stop</goal>
-            </goals>
-          </execution>
-        </executions>
-      </plugin>
-
-This plugin is designed to work in conjunction with the maven-failsafe-plugin. A small test instance of Accumulo will run during the pre-integration-test phase of the Maven build lifecycle, and will be stopped in the post-integration-test phase. Your integration tests, executed by maven-failsafe-plugin, can access this instance with a MiniAccumuloInstance connector (the plugin uses MiniAccumuloInstance internally), as in the following example:
-
-      private static Connector conn;
-      
-      @BeforeClass
-      public static void setUp() throws Exception {
-        String instanceName = "plugin-it-instance";
-        Instance instance = new MiniAccumuloInstance(instanceName, new File("target/accumulo-maven-plugin/" + instanceName));
-        conn = instance.getConnector("root", new PasswordToken("ITSecret"));
-      }
-
-This plugin is quite limited, currently only supporting an instance name and a root user password as configuration parameters. Improvements are expected in future releases, so feedback is welcome and appreciated (file bugs/requests under the "maven-plugin" component in the Accumulo JIRA).
-
-### Packaging
-
-One notable change that was made to the binary tarball is the purposeful omission of a pre-built copy of the Accumulo "native map" library.
-This shared library is used at ingest time to implement an off-JVM-heap sorted map that greatly increases ingest throughput while side-stepping
-issues such as JVM garbage collection pauses. In earlier releases, a pre-built copy of this shared library was included in the binary tarball; however, the decision was made to omit this due to the potential variance in toolchains on the target system.
-
-It is recommended that users invoke the provided build\_native\_library.sh before running Accumulo:
-
-       $ACCUMULO_HOME/bin/build_native_library.sh
-
-Be aware that you will need a C++ compiler/toolchain installed to build this library. Check your GNU/Linux distribution documentation for the package manager command.
-
-### Size-Based Constraint on New Tables
-
-A Constraint is an interface that can determine if a Mutation should be applied or rejected server-side. After [ACCUMULO-466][ACCUMULO-466], new tables that are created in 1.6.0 will automatically have the `DefaultKeySizeConstraint` set.
-As performance can suffer when large Keys are inserted into a table, this Constraint will reject any Key that is larger than 1MB. If this constraint is undesired, it can be removed using the `constraint` shell
-command. See the help message on the command for more information.
-
-### Other notable changes
-
- * [ACCUMULO-842][ACCUMULO-842] Added FATE administration to shell
- * [ACCUMULO-1042][ACCUMULO-1042] CTRL-C no longer kills shell
- * [ACCUMULO-1345][ACCUMULO-1345] Stuck compactions now log a warning with a stack trace, tablet id, and filename.
- * [ACCUMULO-1442][ACCUMULO-1442] JLine2 support was added to the shell.  This adds features like history search and other nice things GNU Readline has. 
- * [ACCUMULO-1481][ACCUMULO-1481] The root tablet is now the root table.
- * [ACCUMULO-1537][ACCUMULO-1537] Python functional tests were converted to Maven integration tests that use MAC
- * [ACCUMULO-1566][ACCUMULO-1566] The point at which read-ahead starts in the scanner is now configurable.
- * [ACCUMULO-1650][ACCUMULO-1650] Made common admin commands easier to run, try `bin/accumulo admin --help`
- * [ACCUMULO-1667][ACCUMULO-1667] Added a synchronous version of online and offline table
- * [ACCUMULO-1706][ACCUMULO-1706] Admin utilities now respect EPIPE
- * [ACCUMULO-1833][ACCUMULO-1833] Multitable batch writer is faster now when used by multiple threads
- * [ACCUMULO-1933][ACCUMULO-1933] Lower case can be given for memory units now.
- * [ACCUMULO-1985][ACCUMULO-1985] Configuration to bind Monitor on all network interfaces.
- * [ACCUMULO-2128][ACCUMULO-2128] Provide resource cleanup via static utility
- * [ACCUMULO-2360][ACCUMULO-2360] Allow configuration of the maximum thrift message size a server will read.
-
-## Notable Bug Fixes
-
- * [ACCUMULO-324][ACCUMULO-324] System/site constraints and iterators should NOT affect the METADATA table
- * [ACCUMULO-335][ACCUMULO-335] Can't batchscan over the !METADATA table
- * [ACCUMULO-391][ACCUMULO-391] Added support for reading from multiple tables in a Map Reduce job.
- * [ACCUMULO-1018][ACCUMULO-1018] Client does not give informative message when user can not read table
- * [ACCUMULO-1492][ACCUMULO-1492] bin/accumulo should follow symbolic links
- * [ACCUMULO-1572][ACCUMULO-1572] Single node zookeeper failure kills connected Accumulo servers
- * [ACCUMULO-1661][ACCUMULO-1661] AccumuloInputFormat cannot fetch empty column family
- * [ACCUMULO-1696][ACCUMULO-1696] Deep copy in the compaction scope iterators can throw off the stats
- * [ACCUMULO-1698][ACCUMULO-1698] stop-here doesn't consider system hostname
- * [ACCUMULO-1901][ACCUMULO-1901] start-here.sh starts only one GC process even if more are defined
- * [ACCUMULO-1920][ACCUMULO-1920] Monitor was not seeing zookeeper updates for tables
- * [ACCUMULO-1994][ACCUMULO-1994] Proxy does not handle Key timestamps correctly
- * [ACCUMULO-2037][ACCUMULO-2037] Tablets are now assigned to the last location 
- * [ACCUMULO-2174][ACCUMULO-2174] VFS Classloader has potential to collide localized resources
- * [ACCUMULO-2225][ACCUMULO-2225] Need to better handle DNS failure propagation from Hadoop
- * [ACCUMULO-2234][ACCUMULO-2234] Cannot run offline mapreduce over non-default instance.dfs.dir value
- * [ACCUMULO-2261][ACCUMULO-2261] Duplicate locations for a Tablet.
- * [ACCUMULO-2334][ACCUMULO-2334] Lacking fallback when ACCUMULO_LOG_HOST isn't set
- * [ACCUMULO-2408][ACCUMULO-2408] metadata table not assigned after root table is loaded
- * [ACCUMULO-2519][ACCUMULO-2519] FATE operation failed across upgrade
-
-## Known Issues
-
-### Slower writes than previous Accumulo versions
-
-When using Accumulo 1.6 and Hadoop 2, Accumulo will call hsync() on HDFS.
-Calling hsync improves durability by ensuring data is on disk (where other older 
-Hadoop versions might lose data in the face of power failure); however, calling
-hsync frequently does noticeably slow writes. A simple workaround is to increase 
-the value of the tserver.mutation.queue.max configuration parameter via accumulo-site.xml.
-
-A value of "4M" is a better recommendation than the default, but memory consumption will increase
-with the number of concurrent writers to that TabletServer. For example, a value of 4M with
-50 concurrent writers would equate to approximately 200M of Java heap being used for
-mutation queues.
-
-For more information, see [ACCUMULO-1950][ACCUMULO-1950] and [this comment][ACCUMULO-1905-comment].
-
-Another possible cause of slower writes is the change in write ahead log replication 
-between 1.4 and 1.5.  Accumulo 1.4 defaulted to two logger servers.  Accumulo 1.5 and 1.6 store 
-write ahead logs in HDFS and default to using three datanodes.
-
-### BatchWriter hold time error
-
-If a `BatchWriter` fails with `MutationsRejectedException` and the message contains
-`"# server errors 1"` then it may be [ACCUMULO-2388][ACCUMULO-2388].  To confirm this, look in the tablet server logs 
-for `org.apache.accumulo.tserver.HoldTimeoutException` around the time the `BatchWriter` failed.
-If this is happening often, a possible workaround is to set `general.rpc.timeout` to `240s`.
-
-### Other known issues
-
- * [ACCUMULO-981][ACCUMULO-981] Sorted write ahead logs are not encrypted.
- * [ACCUMULO-1507][ACCUMULO-1507] Dynamic Classloader still can't keep proper track of jars
- * [ACCUMULO-1588][ACCUMULO-1588] Monitor XML and JSON differ
- * [ACCUMULO-1628][ACCUMULO-1628] NPE on deep copied dumped memory iterator
- * [ACCUMULO-1708][ACCUMULO-1708] [ACCUMULO-2495][ACCUMULO-2495] Out of memory errors do not always kill tservers leading to unexpected behavior
- * [ACCUMULO-2008][ACCUMULO-2008] Block cache reserves section for in-memory blocks
- * [ACCUMULO-2059][ACCUMULO-2059] Namespace constraints easily get clobbered by table constraints
- * [ACCUMULO-2677][ACCUMULO-2677] Tserver failure during map reduce reading from table can cause sub-optimal performance
-
-## Documentation updates
-
- * [ACCUMULO-1218][ACCUMULO-1218] document the recovery from a failed zookeeper
- * [ACCUMULO-1375][ACCUMULO-1375] Update README files in proxy module.
- * [ACCUMULO-1407][ACCUMULO-1407] Fix documentation for deleterows
- * [ACCUMULO-1428][ACCUMULO-1428] Document native maps
- * [ACCUMULO-1946][ACCUMULO-1946] Include dfs.datanode.synconclose in hdfs configuration documentation
 * [ACCUMULO-1956][ACCUMULO-1956] Add section on decommissioning or adding nodes to an Accumulo cluster
- * [ACCUMULO-2441][ACCUMULO-2441] Document internal state stored in RFile names
- * [ACCUMULO-2590][ACCUMULO-2590] Update public API in readme to clarify what's included
-
-## API Changes
-
-The following deprecated methods were removed in [ACCUMULO-1533][ACCUMULO-1533]
-
- * Many map reduce methods deprecated in [ACCUMULO-769][ACCUMULO-769] were removed 
- * `SecurityErrorCode o.a.a.core.client.AccumuloSecurityException.getErrorCode()` *deprecated in [ACCUMULO-970][ACCUMULO-970]*
- * `Connector o.a.a.core.client.Instance.getConnector(AuthInfo)` *deprecated in [ACCUMULO-1024][ACCUMULO-1024]*
- * `Connector o.a.a.core.client.ZooKeeperInstance.getConnector(AuthInfo)` *deprecated in [ACCUMULO-1024][ACCUMULO-1024]*
- * `static String o.a.a.core.client.ZooKeeperInstance.getInstanceIDFromHdfs(Path)` *deprecated in [ACCUMULO-1][ACCUMULO-1]*
 * `static String ZooKeeperInstance.lookupInstanceName(ZooCache,UUID)` *deprecated in [ACCUMULO-765][ACCUMULO-765]*
- * `void o.a.a.core.client.ColumnUpdate.setSystemTimestamp(long)`  *deprecated in [ACCUMULO-786][ACCUMULO-786]*
-
-## Testing
-
-Below is a list of all platforms that 1.6.0 was tested against by developers. Each Apache Accumulo release
-has a set of tests that must be run before the candidate is capable of becoming an official release. That list includes the following:
-
- 1. Successfully run all unit tests
- 2. Successfully run all functional tests (test/system/auto)
- 3. Successfully complete two 24-hour RandomWalk tests (LongClean module), with and without "agitation"
- 4. Successfully complete two 24-hour Continuous Ingest tests, with and without "agitation", with data verification
- 5. Successfully complete two 72-hour Continuous Ingest tests, with and without "agitation"
-
-Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
-on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
-and, in HDFS High-Availability instances, forcing NameNode failover.
-
-The following acronyms are used in the testing table.
-
- * CI : Continuous Ingest
- * HA : High-Availability
- * IT : Integration test, run w/ `mvn verify`
- * RW : Random Walk
-
-{: #release_notes_testing .table }
-| OS         | Java                       | Hadoop                            | Nodes        | ZooKeeper    | HDFS HA | Version/Commit hash              | Tests                                                              |
-|------------|----------------------------|-----------------------------------|--------------|--------------|---------|----------------------------------|--------------------------------------------------------------------|
-| CentOS 6.5 | CentOS OpenJDK 1.7         | Apache 2.2.0                      | 20 EC2 nodes | Apache 3.4.5 | No      | 1.6.0 RC1 + ACCUMULO\_2668 patch | 24-hour CI w/o agitation. Verified.                                |
-| CentOS 6.5 | CentOS OpenJDK 1.7         | Apache 2.2.0                      | 20 EC2 nodes | Apache 3.4.5 | No      | 1.6.0 RC2                        | 24-hour RW (Conditional.xml module) w/o agitation                  |
-| CentOS 6.5 | CentOS OpenJDK 1.7         | Apache 2.2.0                      | 20 EC2 nodes | Apache 3.4.5 | No      | 1.6.0 RC5                        | 24-hour CI w/ agitation. Verified.                                 |
-| CentOS 6.5 | CentOS OpenJDK 1.6 and 1.7 | Apache 1.2.1, 2.2.0               | Single       | Apache 3.3.6 | No      | 1.6.0 RC5                        | All unit and ITs w/  `-Dhadoop.profile=2` and `-Dhadoop.profile=1` |
-| Gentoo     | Sun JDK 1.6.0\_45          | Apache 1.2.1, 2.2.0, 2.3.0, 2.4.0 | Single       | Apache 3.4.5 | No      | 1.6.0 RC5                        | All unit and ITs. 2B entries ingested/verified with CI             |
-| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 1.6.0 RC4 and RC5                | 24-hour RW (LongClean) with and without agitation                  |
-| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 3a1b38                           | 72-hour CI with and without agitation. Verified.                   |
-| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 1.6.0 RC2                        | 24-hour CI without agitation. Verified.                            |
-| CentOS 6.4 | Sun JDK 1.6.0\_31          | CDH 4.5.0                         | 7            | CDH 4.5.0    | Yes     | 1.6.0 RC3                        | 24-hour CI with agitation. Verified.                               |
-
-[ACCUMULO-1]: https://issues.apache.org/jira/browse/ACCUMULO-1
-[ACCUMULO-112]: https://issues.apache.org/jira/browse/ACCUMULO-112 "Partition data in memory by locality group"
-[ACCUMULO-118]: https://issues.apache.org/jira/browse/ACCUMULO-118 "Multiple namenode support"
-[ACCUMULO-324]: https://issues.apache.org/jira/browse/ACCUMULO-324 "System/site constraints and iterators should NOT affect the METADATA table"
-[ACCUMULO-335]: https://issues.apache.org/jira/browse/ACCUMULO-335 "Batch scanning over the !METADATA table can cause issues"
-[ACCUMULO-391]: https://issues.apache.org/jira/browse/ACCUMULO-391 "Multi-table input format"
-[ACCUMULO-466]: https://issues.apache.org/jira/browse/ACCUMULO-466
-[ACCUMULO-765]: https://issues.apache.org/jira/browse/ACCUMULO-765
-[ACCUMULO-769]: https://issues.apache.org/jira/browse/ACCUMULO-769
-[ACCUMULO-786]: https://issues.apache.org/jira/browse/ACCUMULO-786
-[ACCUMULO-802]: https://issues.apache.org/jira/browse/ACCUMULO-802 "Table namespaces"
-[ACCUMULO-842]: https://issues.apache.org/jira/browse/ACCUMULO-842 "Add FATE administration to shell"
-[ACCUMULO-958]: https://issues.apache.org/jira/browse/ACCUMULO-958 "Support pluggable encryption in walogs"
-[ACCUMULO-970]: https://issues.apache.org/jira/browse/ACCUMULO-970
-[ACCUMULO-980]: https://issues.apache.org/jira/browse/ACCUMULO-980 "Support pluggable codecs for RFile"
-[ACCUMULO-981]: https://issues.apache.org/jira/browse/ACCUMULO-981 "support pluggable encryption when recovering write-ahead logs"
-[ACCUMULO-1000]: https://issues.apache.org/jira/browse/ACCUMULO-1000 "Conditional Mutations"
-[ACCUMULO-1009]: https://issues.apache.org/jira/browse/ACCUMULO-1009 "Support encryption over the wire"
-[ACCUMULO-1018]: https://issues.apache.org/jira/browse/ACCUMULO-1018 "Client does not give informative message when user can not read table"
-[ACCUMULO-1024]: https://issues.apache.org/jira/browse/ACCUMULO-1024
-[ACCUMULO-1030]: https://issues.apache.org/jira/browse/ACCUMULO-1030 "Create a Maven plugin to run MiniAccumuloCluster for integration testing"
-[ACCUMULO-1042]: https://issues.apache.org/jira/browse/ACCUMULO-1042 "Ctrl-C in shell terminates the process"
-[ACCUMULO-1218]: https://issues.apache.org/jira/browse/ACCUMULO-1218 "document the recovery from a failed zookeeper"
-[ACCUMULO-1336]: https://issues.apache.org/jira/browse/ACCUMULO-1336 "Add lexicoders from Typo to Accumulo"
-[ACCUMULO-1345]: https://issues.apache.org/jira/browse/ACCUMULO-1345 "Provide feedback that a compaction is 'stuck'"
-[ACCUMULO-1375]: https://issues.apache.org/jira/browse/ACCUMULO-1375 "Update README files in proxy module."
-[ACCUMULO-1407]: https://issues.apache.org/jira/browse/ACCUMULO-1407 "Fix documentation for deleterows"
-[ACCUMULO-1428]: https://issues.apache.org/jira/browse/ACCUMULO-1428 "Document native maps"
-[ACCUMULO-1442]: https://issues.apache.org/jira/browse/ACCUMULO-1442 "Replace JLine with JLine2"
-[ACCUMULO-1451]: https://issues.apache.org/jira/browse/ACCUMULO-1451 "Make Compaction triggers extensible"
-[ACCUMULO-1481]: https://issues.apache.org/jira/browse/ACCUMULO-1481 "Root tablet in its own table"
-[ACCUMULO-1492]: https://issues.apache.org/jira/browse/ACCUMULO-1492 "bin/accumulo should follow symbolic links"
-[ACCUMULO-1507]: https://issues.apache.org/jira/browse/ACCUMULO-1507 "Dynamic Classloader still can't keep proper track of jars"
-[ACCUMULO-1533]: https://issues.apache.org/jira/browse/ACCUMULO-1533
-[ACCUMULO-1537]: https://issues.apache.org/jira/browse/ACCUMULO-1537 "convert auto tests to integration tests, where possible for continuous integration"
-[ACCUMULO-1562]: https://issues.apache.org/jira/browse/ACCUMULO-1562 "add a troubleshooting section to the user guide"
-[ACCUMULO-1566]: https://issues.apache.org/jira/browse/ACCUMULO-1566 "Add ability for client to start Scanner readahead immediately"
-[ACCUMULO-1572]: https://issues.apache.org/jira/browse/ACCUMULO-1572 "Single node zookeeper failure kills connected accumulo servers"
-[ACCUMULO-1585]: https://issues.apache.org/jira/browse/ACCUMULO-1585 "Use FQDN/verbatim data from config files"
-[ACCUMULO-1588]: https://issues.apache.org/jira/browse/ACCUMULO-1588 "Monitor XML and JSON differ"
-[ACCUMULO-1628]: https://issues.apache.org/jira/browse/ACCUMULO-1628 "NPE on deep copied dumped memory iterator"
-[ACCUMULO-1650]: https://issues.apache.org/jira/browse/ACCUMULO-1650 "Make it easier to find and run admin commands"
-[ACCUMULO-1661]: https://issues.apache.org/jira/browse/ACCUMULO-1661 "AccumuloInputFormat cannot fetch empty column family"
-[ACCUMULO-1664]: https://issues.apache.org/jira/browse/ACCUMULO-1664 "Make all processes able to use random ports"
-[ACCUMULO-1667]: https://issues.apache.org/jira/browse/ACCUMULO-1667 "Allow On/Offline Command To Execute Synchronously"
-[ACCUMULO-1696]: https://issues.apache.org/jira/browse/ACCUMULO-1696 "Deep copy in the compaction scope iterators can throw off the stats"
-[ACCUMULO-1698]: https://issues.apache.org/jira/browse/ACCUMULO-1698 "stop-here doesn't consider system hostname"
-[ACCUMULO-1704]: https://issues.apache.org/jira/browse/ACCUMULO-1704 "IteratorSetting missing (int,String,Class,Map) constructor"
-[ACCUMULO-1706]: https://issues.apache.org/jira/browse/ACCUMULO-1706 "Admin Utilities Should Respect EPIPE"
-[ACCUMULO-1708]: https://issues.apache.org/jira/browse/ACCUMULO-1708 "Error during minor compaction left tserver in bad state"
-[ACCUMULO-1808]: https://issues.apache.org/jira/browse/ACCUMULO-1808 "Create compaction strategy that has size limit"
-[ACCUMULO-1833]: https://issues.apache.org/jira/browse/ACCUMULO-1833 "MultiTableBatchWriterImpl.getBatchWriter() is not performant for multiple threads"
-[ACCUMULO-1901]: https://issues.apache.org/jira/browse/ACCUMULO-1901 "start-here.sh starts only one GC process even if more are defined"
-[ACCUMULO-1905-comment]: https://issues.apache.org/jira/browse/ACCUMULO-1905?focusedCommentId=13915208&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13915208
-[ACCUMULO-1920]: https://issues.apache.org/jira/browse/ACCUMULO-1920 "monitor not seeing zookeeper updates"
-[ACCUMULO-1933]: https://issues.apache.org/jira/browse/ACCUMULO-1933 "Make unit on memory parameters case-insensitive"
-[ACCUMULO-1946]: https://issues.apache.org/jira/browse/ACCUMULO-1946 "Include dfs.datanode.synconclose in hdfs configuration documentation"
-[ACCUMULO-1950]: https://issues.apache.org/jira/browse/ACCUMULO-1950 "Reduce the number of calls to hsync"
-[ACCUMULO-1956]: https://issues.apache.org/jira/browse/ACCUMULO-1956 "Add section on decomissioning or adding nodes to an Accumulo cluster"
-[ACCUMULO-1958]: https://issues.apache.org/jira/browse/ACCUMULO-1958 "Range constructor lacks key checks, should be non-public"
-[ACCUMULO-1985]: https://issues.apache.org/jira/browse/ACCUMULO-1985 "Cannot bind monitor on remote host to all interfaces"
-[ACCUMULO-1994]: https://issues.apache.org/jira/browse/ACCUMULO-1994 "Proxy does not handle Key timestamps correctly"
-[ACCUMULO-2008]: https://issues.apache.org/jira/browse/ACCUMULO-2008 "Block cache reserves section for in-memory blocks"
-[ACCUMULO-2037]: https://issues.apache.org/jira/browse/ACCUMULO-2037 "Tablets not assigned to last location"
-[ACCUMULO-2047]: https://issues.apache.org/jira/browse/ACCUMULO-2047 "Failures using viewfs with multiple namenodes"
-[ACCUMULO-2059]: https://issues.apache.org/jira/browse/ACCUMULO-2059 "Namespace constraints easily get clobbered by table constraints"
-[ACCUMULO-2128]: https://issues.apache.org/jira/browse/ACCUMULO-2128 "Provide resource cleanup via static utility rather than Instance.close"
-[ACCUMULO-2174]: https://issues.apache.org/jira/browse/ACCUMULO-2174 "VFS Classloader has potential to collide localized resources"
-[ACCUMULO-2225]: https://issues.apache.org/jira/browse/ACCUMULO-2225 "Need to better handle DNS failure propagation from Hadoop"
-[ACCUMULO-2234]: https://issues.apache.org/jira/browse/ACCUMULO-2234 "Cannot run offline mapreduce over non-default instance.dfs.dir value"
-[ACCUMULO-2261]: https://issues.apache.org/jira/browse/ACCUMULO-2261 "duplicate locations"
-[ACCUMULO-2262]: https://issues.apache.org/jira/browse/ACCUMULO-2262 "Include java.net.preferIPv4Stack=true in process startup"
-[ACCUMULO-2334]: https://issues.apache.org/jira/browse/ACCUMULO-2334 "Lacking fallback when ACCUMULO_LOG_HOST isn't set"
-[ACCUMULO-2360]: https://issues.apache.org/jira/browse/ACCUMULO-2360 "Need a way to configure TNonblockingServer.maxReadBufferBytes to prevent OOMs"
-[ACCUMULO-2388]: https://issues.apache.org/jira/browse/ACCUMULO-2388
-[ACCUMULO-2408]: https://issues.apache.org/jira/browse/ACCUMULO-2408 "metadata table not assigned after root table is loaded"
-[ACCUMULO-2441]: https://issues.apache.org/jira/browse/ACCUMULO-2441 "Document internal state stored in RFile names"
-[ACCUMULO-2495]: https://issues.apache.org/jira/browse/ACCUMULO-2495 "OOM exception didn't bring down tserver"
-[ACCUMULO-2519]: https://issues.apache.org/jira/browse/ACCUMULO-2519 "FATE operation failed across upgrade"
-[ACCUMULO-2590]: https://issues.apache.org/jira/browse/ACCUMULO-2590 "Update public API in readme to clarify what's included"
-[ACCUMULO-2659]: https://issues.apache.org/jira/browse/ACCUMULO-2659
-[ACCUMULO-2677]: https://issues.apache.org/jira/browse/ACCUMULO-2677 "Single node bottle neck during map reduce"
-
-[1]: https://research.google.com/archive/bigtable.html
-[2]: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.44.2782&rep=rep1&type=pdf
-[3]: https://wiki.apache.org/hadoop/HadoopIPv6

http://git-wip-us.apache.org/repos/asf/accumulo/blob/9a50bd13/release_notes/1.6.1.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.1.md b/release_notes/1.6.1.md
deleted file mode 100644
index 3949bd8..0000000
--- a/release_notes/1.6.1.md
+++ /dev/null
@@ -1,188 +0,0 @@
----
-title: Apache Accumulo 1.6.1 Release Notes
----
-
-Apache Accumulo 1.6.1 is a maintenance release on the 1.6 version branch.
-This release contains changes from over 175 issues, comprised of bug fixes, performance
-improvements, and better test cases. As this is a maintenance release, Apache Accumulo
-1.6.1 has no client API incompatibilities with Apache Accumulo 1.6.0. Users of 1.6.0
-are strongly encouraged to update as soon as possible to benefit from the improvements.
-
-New users are encouraged to use this release over 1.6.0 or any other older releases. For
-information about improvements since Accumulo 1.5, see the [1.6.0 release notes][32].
-
-## Performance Improvements
-
-Apache Accumulo 1.6.1 includes a number of performance-related fixes over previous versions.
-Many of these improvements were also included in the recently released Apache Accumulo 1.5.2.
-
-
-### Write-Ahead Log sync performance
-
-The Write-Ahead Log (WAL) files are used to ensure durability of updates made to Accumulo.
-A sync is called on the file in HDFS to make sure that the changes to the WAL are persisted
-to disk, which allows Accumulo to recover in the case of failure. [ACCUMULO-2766][9] fixed
-an issue where an operation against a WAL would unnecessarily wait for multiple syncs, slowing
-down the ingest on the system.
-
-### Minor-Compactions not aggressive enough
-
-On a system with ample memory provided to Accumulo, long hold-times were observed, which
-blocked the ingest of new updates. Running minor compactions more frequently to free
-server-side memory sooner increased the overall throughput on the node. These changes
-were made in [ACCUMULO-2905][10].
-
-### HeapIterator optimization
-
-Iterators, a notable feature of Accumulo, are provided to users as a server-side programming
-construct, but are also used internally for numerous server operations. One of these system iterators
-is the HeapIterator, which implements a PriorityQueue of other Iterators. One way this iterator is
-used is to merge multiple files in HDFS to present a single, sorted stream of Key-Value pairs. [ACCUMULO-2827][11]
-introduces a performance optimization to the HeapIterator which can improve the speed of the
-HeapIterator in common cases.
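-
-To illustrate the underlying idea only (a generic sketch, not Accumulo's internal implementation;
-the class and values below are made up), a heap-based merge keeps each sorted source in a
-priority queue keyed by that source's next element:
-
-      import java.util.ArrayDeque;
-      import java.util.Arrays;
-      import java.util.Comparator;
-      import java.util.Deque;
-      import java.util.PriorityQueue;
-
-      // Generic k-way merge: each source is already sorted, and the heap is ordered
-      // by each source's next element, so poll() yields the globally smallest value.
-      public class MergeSketch {
-        public static void main(String[] args) {
-          Deque<Integer> a = new ArrayDeque<Integer>(Arrays.asList(1, 4, 7));
-          Deque<Integer> b = new ArrayDeque<Integer>(Arrays.asList(2, 5, 8));
-          Deque<Integer> c = new ArrayDeque<Integer>(Arrays.asList(3, 6, 9));
-
-          PriorityQueue<Deque<Integer>> heap =
-              new PriorityQueue<Deque<Integer>>(3, new Comparator<Deque<Integer>>() {
-                public int compare(Deque<Integer> x, Deque<Integer> y) {
-                  return x.peekFirst().compareTo(y.peekFirst());
-                }
-              });
-          heap.addAll(Arrays.asList(a, b, c));
-
-          while (!heap.isEmpty()) {
-            Deque<Integer> smallest = heap.poll();
-            System.out.print(smallest.pollFirst() + " "); // prints 1 2 3 ... 9
-            if (!smallest.isEmpty()) {
-              heap.add(smallest); // reinsert so its new head is considered again
-            }
-          }
-        }
-      }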
-
-### Write-Ahead log sync implementation
-
-In Hadoop-2, two implementations of sync are provided: hflush and hsync. Both of these
-methods provide a way to request that the datanodes write the data to the underlying
-medium and not just hold it in memory (the *fsync* syscall). While both of these methods
-inform the Datanodes to sync the relevant block(s), *hflush* does not wait for acknowledgement
-from the Datanodes that the sync finished, whereas *hsync* does. To provide the most reliable system
-"out of the box", Accumulo defaults to *hsync* so that your data is as secure as possible in 
-a variety of situations (notably, unexpected power outages).
-
-The downside is that performance tends to suffer because waiting for a sync to disk is a very
-expensive operation. [ACCUMULO-2842][12] introduces a new system property, tserver.wal.sync.method,
-that lets users change the HDFS sync implementation from *hsync* to *hflush*. Using *hflush* instead
-of *hsync* may result in about a 30% increase in ingest performance.
-
-For users upgrading from Hadoop-1 or Hadoop-0.20 releases, *hflush* is the equivalent of how
-sync was implemented in these older versions of Hadoop and should give comparable performance.
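-
-The difference between the two calls can be seen directly on an HDFS stream. The sketch below is
-illustrative only (the file path is made up, and Accumulo's actual WAL code is more involved); it
-shows the two Hadoop-2 methods that *tserver.wal.sync.method* chooses between:
-
-      import org.apache.hadoop.conf.Configuration;
-      import org.apache.hadoop.fs.FSDataOutputStream;
-      import org.apache.hadoop.fs.FileSystem;
-      import org.apache.hadoop.fs.Path;
-
-      public class SyncSketch {
-        public static void main(String[] args) throws Exception {
-          FileSystem fs = FileSystem.get(new Configuration());
-          FSDataOutputStream out = fs.create(new Path("/tmp/wal-sketch"));
-          out.write("example entry".getBytes("UTF-8"));
-          out.hflush(); // pushes data to the datanodes without waiting for it to hit disk
-          out.hsync();  // also waits for the datanodes to persist the data to disk
-          out.close();
-        }
-      }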
-
-## Other improvements
-
-### Use of Hadoop CredentialProviders
-
-Apache Hadoop 2.6.0 introduced a new API aimed at providing ways to separate sensitive values
-from being stored in plaintext as a part of [HADOOP-10607][28]. Accumulo has had two sensitive
-configuration properties stored in *accumulo-site.xml* for every standard installation: instance.secret
-and trace.token.property.password. If either of these properties is compromised, it could lead to
-unwanted access to Accumulo. [ACCUMULO-2464][29] modifies Accumulo so that it can store any sensitive
-configuration properties in a Hadoop CredentialProvider. With sensitive values removed from accumulo-site.xml,
-it can be shared without concern and security can be focused solely on the CredentialProvider.
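-
-As a rough sketch of the Hadoop side of this (the keystore path is hypothetical and the exact
-provisioning steps are omitted), a client resolves a sensitive property through the provider
-rather than through a plaintext value:
-
-      import org.apache.hadoop.conf.Configuration;
-
-      public class CredentialSketch {
-        public static void main(String[] args) throws Exception {
-          Configuration conf = new Configuration();
-          // Point Hadoop at a keystore-backed CredentialProvider (hypothetical path).
-          conf.set("hadoop.security.credential.provider.path",
-              "jceks://file/etc/accumulo/conf/accumulo.jceks");
-          // getPassword() consults the provider first and falls back to the config value.
-          char[] secret = conf.getPassword("instance.secret");
-          System.out.println(secret == null ? "not found" : "resolved instance.secret");
-        }
-      }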
-
-## Notable Bug Fixes
-
-### Add configurable maximum frame size to Apache Thrift proxy
-
-The Thrift proxy server was subject to memory exhaustion, typically
-due to bad input, where the server would attempt to allocate a very large
-buffer and die in the process. [ACCUMULO-2658][2] introduces a configuration
-parameter, like [ACCUMULO-2360][3], to prevent this error.
-
-### Offline tables can prevent tablet balancing
-
-Before 1.6.1, when a table with many tablets was created, ingested into, and
-taken offline, tablet balancing may have stopped. This would happen if there
-were outstanding tablet migrations for the table, because those migrations could not occur.
-The balancer will not run when there are outstanding migrations; therefore, a
-system could become unbalanced. [ACCUMULO-2694][4] introduces a fix to ensure
-that offline tables do not block balancing and improves the server-side
-logging.
-
-### MiniAccumuloCluster process management
-
-MiniAccumuloCluster had a few issues which could cause deadlock or a method that
-never returns. Most of these are related to management of the Accumulo processes
-([ACCUMULO-2764][5], [ACCUMULO-2985][6], and [ACCUMULO-3055][7]).
-
-### IteratorSettings not correctly serialized in RangeInputSplit
-
-The Writable interface methods on the RangeInputSplit class accidentally omitted
-calls to serialize the IteratorSettings configured for the Job. [ACCUMULO-2962][8]
-fixes the serialization and adds some additional tests.
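-
-For context, iterators are attached to a job roughly as sketched below (the priority, name, and
-filter chosen here are arbitrary examples); these are the settings that the fixed serialization
-now carries into each `RangeInputSplit`:
-
-      import org.apache.accumulo.core.client.IteratorSetting;
-      import org.apache.accumulo.core.client.mapreduce.AccumuloInputFormat;
-      import org.apache.accumulo.core.iterators.user.RegExFilter;
-      import org.apache.hadoop.mapreduce.Job;
-
-      public class JobIteratorSketch {
-        static void configureIterators(Job job) {
-          // Arbitrary example: filter rows server-side before they reach the mappers.
-          IteratorSetting regex = new IteratorSetting(50, "rowFilter", RegExFilter.class);
-          RegExFilter.setRegexs(regex, "row.*", null, null, null, false);
-          AccumuloInputFormat.addIterator(job, regex);
-        }
-      }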
-
-### Constraint violation causes hung scans
-
-A failed bulk import transaction had the ability to create an infinitely retrying
-loop due to a constraint violation. This directly prevented scans from completing
-and would also hang compactions. [ACCUMULO-3096][14] fixes the issue so that the
-constraint no longer hangs the entire system.
-
-### Unable to upgrade cleanly from 1.5
-
-When upgrading a table from 1.5.1 to 1.6.0, a user experienced an error where the table
-never came online. [ACCUMULO-2974][27] fixes an issue stemming from the change to storing file
-references as absolute paths instead of relative paths in the Accumulo metadata table.
-
-### Guava dependency changed
-
-[ACCUMULO-3100][30] lowered the dependency on Guava from 15.0 to 14.0.1. This dependency
-now matches what Hadoop depends on for the 2.x.y version line. Depending on a newer
-version of Guava introduces many issues stemming from deprecated classes that Hadoop still uses
-but that newer Guava releases have removed. While installations of Accumulo will likely work as expected with
-newer versions of Guava on the classpath (because the Hadoop processes will have their own
-classpath), use of MiniDfsClusters with the new Guava version will result in errors.
-
-Users can attempt to use a newer version of Guava on the Accumulo server classpath; however,
-success depends on the Hadoop client libraries not internally using Guava methods that are missing from the newer version.
-
-### Scanners eat InterruptedException
-
-Scanners previously consumed InterruptedExceptions and did not exit afterwards. In multi-threaded
-environments, this was very problematic, as there was no way to stop a Scanner from reading data.
-[ACCUMULO-3030][31] fixes the Scanner so that interrupts are observed and the Scanner exits as expected.
-
-## Documentation
-
-The following documentation updates were made: 
-
- * [ACCUMULO-2767][15]
- * [ACCUMULO-2796][16]
- * [ACCUMULO-2919][17]
- * [ACCUMULO-3008][18]
- * [ACCUMULO-2874][19]
- * [ACCUMULO-2821][20]
- * [ACCUMULO-3097][21]
-
-## Testing
-
-Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
-on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
-and, in HDFS High-Availability instances, forcing NameNode failover.
-
-{: #release_notes_testing .table }
-| OS         | Hadoop                | Nodes | ZooKeeper    | HDFS HA | Tests                                                                                                       |
-|------------|-----------------------|-------|--------------|---------|-------------------------------------------------------------------------------------------------------------|
-| Gentoo     | Apache 2.6.0-SNAPSHOT | 2     | Apache 3.4.5 | No      | Unit and Functional Tests, ContinuousIngest w/ verification (2B entries)                                    |
-| CentOS 6   | Apache 2.3.0          | 20    | Apache 3.4.5 | No      | 24/hr RandomWalk, ContinuousIngest w/ verification w/ and w/o agitation (17B entries), 24hr Randomwalk test |
-
-[1]: https://issues.apache.org/jira/browse/ACCUMULO-2586
-[2]: https://issues.apache.org/jira/browse/ACCUMULO-2658
-[3]: https://issues.apache.org/jira/browse/ACCUMULO-2360
-[4]: https://issues.apache.org/jira/browse/ACCUMULO-2694
-[5]: https://issues.apache.org/jira/browse/ACCUMULO-2764
-[6]: https://issues.apache.org/jira/browse/ACCUMULO-2985
-[7]: https://issues.apache.org/jira/browse/ACCUMULO-3055
-[8]: https://issues.apache.org/jira/browse/ACCUMULO-2962
-[9]: https://issues.apache.org/jira/browse/ACCUMULO-2766
-[10]: https://issues.apache.org/jira/browse/ACCUMULO-2905
-[11]: https://issues.apache.org/jira/browse/ACCUMULO-2827
-[12]: https://issues.apache.org/jira/browse/ACCUMULO-2842
-[13]: https://issues.apache.org/jira/browse/ACCUMULO-3018
-[14]: https://issues.apache.org/jira/browse/ACCUMULO-3096
-[15]: https://issues.apache.org/jira/browse/ACCUMULO-2767
-[16]: https://issues.apache.org/jira/browse/ACCUMULO-2796
-[17]: https://issues.apache.org/jira/browse/ACCUMULO-2919
-[18]: https://issues.apache.org/jira/browse/ACCUMULO-3008
-[19]: https://issues.apache.org/jira/browse/ACCUMULO-2874
-[20]: https://issues.apache.org/jira/browse/ACCUMULO-2821
-[21]: https://issues.apache.org/jira/browse/ACCUMULO-3097
-[27]: https://issues.apache.org/jira/browse/ACCUMULO-2974
-[28]: https://issues.apache.org/jira/browse/HADOOP-10607
-[29]: https://issues.apache.org/jira/browse/ACCUMULO-2464
-[30]: https://issues.apache.org/jira/browse/ACCUMULO-3100
-[31]: https://issues.apache.org/jira/browse/ACCUMULO-3030
-[32]: {{ site.baseurl }}/release_notes/1.6.0

http://git-wip-us.apache.org/repos/asf/accumulo/blob/9a50bd13/release_notes/1.6.2.md
----------------------------------------------------------------------
diff --git a/release_notes/1.6.2.md b/release_notes/1.6.2.md
deleted file mode 100644
index f45b052..0000000
--- a/release_notes/1.6.2.md
+++ /dev/null
@@ -1,171 +0,0 @@
----
-title: Apache Accumulo 1.6.2 Release Notes
----
-
-Apache Accumulo 1.6.2 is a maintenance release on the 1.6 version branch.
-This release contains changes from over 150 issues, comprised of bug fixes, performance
-improvements, and better test cases. Apache Accumulo 1.6.2 is the first release made since the
-community adopted [Semantic Versioning][1], which guarantees that a patch release such as this
-one neither adds to nor removes from the [public API][2]. This ensures that client code which
-runs against 1.6.1 will also run against 1.6.2 and vice versa.
-
-Users of 1.6.0 or 1.6.1 are strongly encouraged to update as soon as possible to benefit from
-the improvements with very little concern about changes in underlying functionality. Users of 1.4 or 1.5
-who are seeking to upgrade to 1.6 should consider 1.6.2 the starting point over 1.6.0 or 1.6.1. For
-information about improvements since Accumulo 1.5, see the [1.6.0][3] and [1.6.1][4] release notes.
-
-## Notable Bug Fixes
-
-### Only first ZooKeeper server is used
-
-In constructing a `ZooKeeperInstance`, the user provides a comma-separated list of addresses for ZooKeeper
-servers. 1.6.0 and 1.6.1 incorrectly truncated the provided list of ZooKeeper servers to just the first entry. This
-would cause clients to fail when that first ZooKeeper server became unavailable, and prevented requests from
-being load balanced across all available servers in the quorum. [ACCUMULO-3218][5] fixes the parsing of
-the ZooKeeper quorum list to use all servers, not just the first.
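-
-For reference, the quorum is supplied when constructing the instance. In this sketch the instance
-name, hosts, and credentials are placeholders:
-
-      import org.apache.accumulo.core.client.Connector;
-      import org.apache.accumulo.core.client.Instance;
-      import org.apache.accumulo.core.client.ZooKeeperInstance;
-      import org.apache.accumulo.core.client.security.tokens.PasswordToken;
-
-      public class QuorumSketch {
-        static Connector connect() throws Exception {
-          // With ACCUMULO-3218, every host in this list is used, not only zk1.
-          Instance instance =
-              new ZooKeeperInstance("myInstance", "zk1:2181,zk2:2181,zk3:2181");
-          return instance.getConnector("user", new PasswordToken("secret"));
-        }
-      }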
-
-### Incorrectly handled ZooKeeper exception
-
-Use of ZooKeeper's API requires very careful exception handling as some thrown exceptions from the ZooKeeper
-API are considered "normal" and must be retried by the client. In 1.6.1, Accumulo improved its handling of
-these "expected failures" to better insulate calls to ZooKeeper; however, the wrapper which sets data on a ZNode
-did not handle all cases correctly. [ACCUMULO-3448][6] fixed the implementation of `ZooUtil.putData(...)` to handle
-the expected error conditions correctly.
-
-### `scanId` is not set in `ActiveScan`
-
-The `ActiveScan` class is the object returned by `InstanceOperations.listScans`. This class represents a
-"scan" running on Accumulo servers, whether from a `Scanner` or a `BatchScanner`. The `ActiveScan` class
-is meant to capture all of the information about the scan and can be useful to administrators
-or DevOps-types for observing and acting on scans which run for excessive periods of time. [ACCUMULO-2641][7]
-fixes `ActiveScan` to ensure that the internal identifier `scanId` is properly set.
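-
-A minimal sketch of inspecting active scans follows; the tablet server address is a placeholder,
-and the accessor names used here (`getActiveScans`, `getScanid`) are recalled from the client API
-and may differ slightly between versions:
-
-      import java.util.List;
-
-      import org.apache.accumulo.core.client.Connector;
-      import org.apache.accumulo.core.client.admin.ActiveScan;
-
-      public class ListScansSketch {
-        static void printScans(Connector conn, String tserver) throws Exception {
-          List<ActiveScan> scans = conn.instanceOperations().getActiveScans(tserver);
-          for (ActiveScan scan : scans) {
-            // scan.getScanid() is the identifier that ACCUMULO-2641 now populates
-            System.out.println(scan.getScanid() + " " + scan.getTable() + " " + scan.getUser());
-          }
-        }
-      }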
-
-### Table state change doesn't wait when requested
-
-An Accumulo table has two states: `ONLINE` and `OFFLINE`. An offline table in Accumulo consumes no TabletServer
-resources, only HDFS resources, which makes it useful for storing infrequently used data. The Accumulo methods provided
-to transition a state from `ONLINE` to `OFFLINE` and vice versa did not respect the `wait=true` parameter
-when set. [ACCUMULO-3301][8] fixes the underlying implementation to ensure that when `wait=true` is provided,
-the method will not return until the table's state transition has fully completed.
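-
-A brief sketch of the now-honored flag (the table name is a placeholder):
-
-      import org.apache.accumulo.core.client.Connector;
-
-      public class TableStateSketch {
-        static void cycle(Connector conn, String table) throws Exception {
-          conn.tableOperations().offline(table, true); // blocks until fully OFFLINE
-          conn.tableOperations().online(table, true);  // blocks until fully ONLINE
-        }
-      }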
-
-### KeyValue doesn't implement `hashCode()` or `equals()`
-
-The `KeyValue` class is an implementation of `Entry<Key,Value>` returned by classes like
-`Scanner` and `BatchScanner`. [ACCUMULO-3217][9] adds these methods, which ensure that the returned `Entry<Key,Value>`
-operates as expected with `HashMaps` and `HashSets`. 
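-
-For example (a sketch in which the `Scanner` is assumed to be already configured), scan results
-can now be collected into hash-based collections:
-
-      import java.util.HashSet;
-      import java.util.Map.Entry;
-      import java.util.Set;
-
-      import org.apache.accumulo.core.client.Scanner;
-      import org.apache.accumulo.core.data.Key;
-      import org.apache.accumulo.core.data.Value;
-
-      public class CollectSketch {
-        static Set<Entry<Key,Value>> collect(Scanner scanner) {
-          Set<Entry<Key,Value>> results = new HashSet<Entry<Key,Value>>();
-          for (Entry<Key,Value> entry : scanner) {
-            results.add(entry); // relies on the new KeyValue equals()/hashCode()
-          }
-          return results;
-        }
-      }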
-
-### Potential deadlock in TabletServer
-
-Internal to the TabletServer, there are methods to construct instances of configuration objects for tables
-and namespaces. The locking on these methods was not correctly implemented, which created the possibility
-for concurrent requests to a TabletServer to deadlock. [ACCUMULO-3372][10] found this problem while performing
-bulk imports of RFiles into Accumulo. Additional synchronization was added server-side to prevent this deadlock
-from happening in the future.
-
-### The `DateLexicoder` incorrectly serialized `Dates` prior to 1970
-
-The `DateLexicoder`, one of the `Lexicoder` classes which implement methods to convert common primitive types
-into lexicographically sorted Strings/bytes, incorrectly converted `Date` objects for dates prior to 1970.
-[ACCUMULO-3385][11] fixed the `DateLexicoder` to correctly (de)serialize `Date` objects. For users with
-data stored in Accumulo using the broken implementation, the following can be performed to read the old data.
-
-      // Rows written by the broken implementation can be decoded with a
-      // ULongLexicoder, which matches the encoding the old DateLexicoder used.
-      ULongLexicoder lex = new ULongLexicoder();
-      for (Entry<Key, Value> e : scanner) {
-        Date d = new Date(lex.decode(TextUtil.getBytes(e.getKey().getRow())));
-        // ...
-      }
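-
-Going forward, newly written data can use the fixed `DateLexicoder` directly; a brief sketch:
-
-      import java.util.Date;
-
-      import org.apache.accumulo.core.client.lexicoder.DateLexicoder;
-
-      public class DateLexSketch {
-        public static void main(String[] args) {
-          DateLexicoder dateLex = new DateLexicoder();
-          // After ACCUMULO-3385, pre-1970 dates round-trip correctly.
-          byte[] encoded = dateLex.encode(new Date(-86400000L)); // Dec 31, 1969 UTC
-          Date decoded = dateLex.decode(encoded);
-          System.out.println(decoded.getTime() == -86400000L);   // true
-        }
-      }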
-
-### Reduce MiniAccumuloCluster failures due to random port allocations
-
-`MiniAccumuloCluster` has had issues where it fails to properly start due to the way it attempts to choose
-a random, unbound port on the local machine to start the ZooKeeper and Accumulo processes. Improvements have
-been made, including retry logic, to withstand a few failed port choices. The changes made by [ACCUMULO-3233][12]
-and the related issues should eliminate the sporadic failures that users of `MiniAccumuloCluster` might have observed.
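-
-Typical test usage, which benefits from the more robust port selection, looks roughly like the
-following sketch (directory handling and error handling are simplified):
-
-      import java.io.File;
-      import java.nio.file.Files;
-
-      import org.apache.accumulo.minicluster.MiniAccumuloCluster;
-
-      public class MiniClusterSketch {
-        public static void main(String[] args) throws Exception {
-          File dir = Files.createTempDirectory("mac-sketch").toFile();
-          MiniAccumuloCluster mac = new MiniAccumuloCluster(dir, "rootPassword");
-          mac.start(); // launches ZooKeeper and Accumulo processes on free local ports
-          try {
-            System.out.println("instance " + mac.getInstanceName()
-                + " at " + mac.getZooKeepers());
-          } finally {
-            mac.stop();
-          }
-        }
-      }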
-
-### Tracer doesn't handle trace table state transition
-
-The Tracer is an optional Accumulo server process that serializes Spans, elements of a distributed trace,
-to the trace table for later inspection and correlation with other Spans. By default, the Tracer writes
-to a "trace" table. In earlier versions of Accumulo, if this table was put offline, the Tracer would fail
-to write new Spans to the table when it came back online. [ACCUMULO-3351][13] ensures that the Tracer process
-will resume writing Spans to the trace table when it transitions to online after being offline.
-
-### Tablet not major compacting
-
-On a system performing many bulk imports, it was noticed that a tablet with hundreds of files was
-neither major compacting nor scheduled to be major compacted. [ACCUMULO-3462][14] identified and fixed
-the server-side issue to prevent this from happening in the future.
-
-### YARN job submission fails with Hadoop-2.6.0
-
-Hadoop 2.6.0 introduced a new component, the TimelineServer, which is a centralized metrics service designed
-for other Hadoop components to leverage. MapReduce jobs submitted via `accumulo` and `tool.sh` failed
-because the job attempted to contact the TimelineServer and the dependencies needed to communicate
-with it were missing from the classpath. [ACCUMULO-3230][15] updates the classpath in the example
-configuration files to include the necessary TimelineServer dependencies, ensuring that YARN job
-submission operates as it did previously.
-
-## Performance Improvements
-
-### User scans can block root and metadata table scans
-
-The TabletServer provides a feature to limit the number of open files as a resource management configuration.
-To perform a scan against a normal table, the metadata and root table, when not cached, need to be consulted
-first. With a sufficient number of concurrent scans against normal tables, adding to the open file count,
-scans against the metadata and root tables could be blocked from running because no more files could be opened.
-This prevents other system operations from happening as expected. [ACCUMULO-3297][16] fixes the internal semaphore
-used to implement this resource management to ensure that root and metadata table scans can proceed.
-
-
-## Other improvements
-
-### Limit available ciphers for SSL/TLS
-
-Since the release of Apache Accumulo 1.5.2 and 1.6.1, the [POODLE][17] man-in-the-middle attack was
-disclosed; it exploits a client's ability to fall back to the SSLv3.0 protocol. The main mitigation
-strategy is to prevent the use of old ciphers/protocols when using SSL connectors. In Accumulo, both
-the Apache Thrift RPC servers and the Jetty server for the Accumulo monitor have the ability to enable
-SSL. [ACCUMULO-3316][18] is the parent issue which provides new configuration properties in
-accumulo-site.xml that can limit the accepted ciphers/protocols. Insecure or outdated protocols have
-been removed from the default set in order to protect users out of the box.
-
-
-## Documentation
-
-Documentation was added to the Administration chapter for moving from a Non-HA Namenode setup to an HA Namenode setup. 
-New chapters were added for the configuration of SSL and for summaries of Implementation Details (initially describing 
-FATE operations). A section was added to the Configuration chapter for describing how to arrive at optimal settings
-for configuring an instance with native maps.
-
-
-## Testing
-
-Each unit and functional test only runs on a single node, while the RandomWalk and Continuous Ingest tests run 
-on any number of nodes. *Agitation* refers to randomly restarting Accumulo processes and Hadoop Datanode processes,
-and, in HDFS High-Availability instances, forcing NameNode failover.
-
-{: #release_notes_testing .table }
-| OS        | Hadoop | Nodes | ZooKeeper | HDFS HA | Tests                                                                                     |
-|-----------|--------|-------|-----------|---------|-------------------------------------------------------------------------------------------|
-| Gentoo    | N/A    | 1     | N/A       | No      | Unit and Integration Tests                                                                |
-| Mac OSX   | N/A    | 1     | N/A       | No      | Unit and Integration Tests                                                                |
-| Fedora 21 | N/A    | 1     | N/A       | No      | Unit and Integration Tests                                                                |
-| CentOS 6  | 2.6    | 20    | 3.4.5     | No      | ContinuousIngest w/ verification w/ and w/o agitation (31B and 21B entries, respectively) |
-
-[1]: https://semver.org
-[2]: https://github.com/apache/accumulo#api
-[3]: {{ site.baseurl }}/release_notes/1.6.0
-[4]: {{ site.baseurl }}/release_notes/1.6.1
-[5]: https://issues.apache.org/jira/browse/ACCUMULO-3218
-[6]: https://issues.apache.org/jira/browse/ACCUMULO-3448
-[7]: https://issues.apache.org/jira/browse/ACCUMULO-2641
-[8]: https://issues.apache.org/jira/browse/ACCUMULO-3301
-[9]: https://issues.apache.org/jira/browse/ACCUMULO-3217
-[10]: https://issues.apache.org/jira/browse/ACCUMULO-3372
-[11]: https://issues.apache.org/jira/browse/ACCUMULO-3385
-[12]: https://issues.apache.org/jira/browse/ACCUMULO-3233
-[13]: https://issues.apache.org/jira/browse/ACCUMULO-3351
-[14]: https://issues.apache.org/jira/browse/ACCUMULO-3462
-[15]: https://issues.apache.org/jira/browse/ACCUMULO-3230
-[16]: https://issues.apache.org/jira/browse/ACCUMULO-3297
-[17]: https://en.wikipedia.org/wiki/POODLE
-[18]: https://issues.apache.org/jira/browse/ACCUMULO-3316

