kudu-commits mailing list archives

From danburk...@apache.org
Subject incubator-kudu git commit: Kudu 0.9.0 release notes edit
Date Wed, 01 Jun 2016 00:01:51 GMT
Repository: incubator-kudu
Updated Branches:
  refs/heads/branch-0.9.x feb83b8fc -> 42d54787b


Kudu 0.9.0 release notes edit

Change-Id: I6242089b099a7e220ce4094f3ba0377859338b97
Reviewed-on: http://gerrit.cloudera.org:8080/3176
Reviewed-by: Misty Stanley-Jones <misty@apache.org>
Tested-by: Misty Stanley-Jones <misty@apache.org>
(cherry picked from commit 5805fb71cf07634edcad1096d1616a8a729268bc)
Reviewed-on: http://gerrit.cloudera.org:8080/3251
Reviewed-by: Jean-Daniel Cryans
Tested-by: Jean-Daniel Cryans


Project: http://git-wip-us.apache.org/repos/asf/incubator-kudu/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-kudu/commit/42d54787
Tree: http://git-wip-us.apache.org/repos/asf/incubator-kudu/tree/42d54787
Diff: http://git-wip-us.apache.org/repos/asf/incubator-kudu/diff/42d54787

Branch: refs/heads/branch-0.9.x
Commit: 42d54787ba918c63db1d760a912684d2505bd411
Parents: feb83b8
Author: Misty Stanley-Jones <misty@apache.org>
Authored: Mon May 23 12:30:12 2016 -0700
Committer: Jean-Daniel Cryans <jdcryans@gerrit.cloudera.org>
Committed: Tue May 31 22:40:06 2016 +0000

----------------------------------------------------------------------
 docs/installation.adoc  | 12 ++++++-----
 docs/release_notes.adoc | 51 ++++++++++++++++++++++++++++++--------------
 2 files changed, 42 insertions(+), 21 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-kudu/blob/42d54787/docs/installation.adoc
----------------------------------------------------------------------
diff --git a/docs/installation.adoc b/docs/installation.adoc
index f70cded..a827a5d 100644
--- a/docs/installation.adoc
+++ b/docs/installation.adoc
@@ -623,14 +623,14 @@ in `java/kudu-client/target/apidocs/index.html`.
 // end::view_api[]
 
 [[upgrade]]
-== Upgrade from 0.7.1 to 0.8.0
+== Upgrade from 0.8.0 to 0.9.0
 
 Before upgrading, see <<client_compatibility>> and <<api_compatibility>>.
-To upgrade from Kudu 0.7.1 to 0.8.0, perform the following high-level steps, which
+To upgrade from Kudu 0.8.0 to 0.9.0, perform the following high-level steps, which
 are detailed in <<upgrade_procedure>>:
 
 . Shut down all Kudu services.
-. Install the new Kudu packages or parcels, or install Kudu 0.8.0 from source.
+. Install the new Kudu packages or parcels, or install Kudu 0.9.0 from source.
 . Restart all Kudu services.
 
 It is technically possible to upgrade Kudu using rolling restarts, but it has not
@@ -644,14 +644,16 @@ from the previous latest version to the newest.
 
 Masters and tablet servers should be upgraded before clients are upgraded. For specific
 information about client compatibility, see the
-link:release_notes.html#rn_0.8.0_incompatible_changes[Incompatible Changes] section
+link:release_notes.html#rn_0.9.0_incompatible_changes[Incompatible Changes] section
 of the release notes.
 
 [[api_compatibility]]
 
 === API Compatibility
 
-The Kudu 0.8.0 client API is compatible with Kudu 0.7.1.
+In Kudu 0.9 and higher, you must set partitioning options explicitly when
+creating a new table. If you do not specify partitioning options, the table
+creation will fail. This behavior change does not affect existing tables.
 
 [[upgrade_procedure]]
 === Upgrade procedure
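
For illustration (not part of this commit), a minimal sketch of creating a table with the explicit partitioning that Kudu 0.9 now requires, assuming the pre-1.0 `org.kududb` Java client packages and a placeholder master address and table name:

[source,java]
----
import java.util.Arrays;

import org.kududb.ColumnSchema;
import org.kududb.Schema;
import org.kududb.Type;
import org.kududb.client.CreateTableOptions;
import org.kududb.client.KuduClient;

public class CreatePartitionedTable {
  public static void main(String[] args) throws Exception {
    // "master-host:7051" is a placeholder master address.
    KuduClient client = new KuduClient.KuduClientBuilder("master-host:7051").build();
    try {
      Schema schema = new Schema(Arrays.asList(
          new ColumnSchema.ColumnSchemaBuilder("id", Type.INT32).key(true).build(),
          new ColumnSchema.ColumnSchemaBuilder("value", Type.STRING).build()));

      // Starting with Kudu 0.9, partitioning must be specified explicitly;
      // creating a table without partitioning options fails.
      CreateTableOptions options = new CreateTableOptions()
          .addHashPartitions(Arrays.asList("id"), 4);  // 4 hash buckets on the key column

      client.createTable("example_table", schema, options);
    } finally {
      client.shutdown();
    }
  }
}
----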

http://git-wip-us.apache.org/repos/asf/incubator-kudu/blob/42d54787/docs/release_notes.adoc
----------------------------------------------------------------------
diff --git a/docs/release_notes.adoc b/docs/release_notes.adoc
index 74c0da0..25df2bd 100644
--- a/docs/release_notes.adoc
+++ b/docs/release_notes.adoc
@@ -56,15 +56,25 @@ Hadoop storage technologies.
 [[rn_0.9.0]]
 === Release notes specific to 0.9.0
 
+Kudu 0.9.0 delivers incremental features, improvements, and bug fixes over the previous versions.
+
+See also +++<a href="https://issues.apache.org/jira/issues/?jql=project%20%3D%20KUDU%20AND%20status%20%3D%20Resolved
+%20AND%20fixVersion%20%3D%200.9.0">JIRAs resolved
+for Kudu 0.9.0</a>+++ and +++<a href="https://github.com/apache/incubator-kudu/compare/0.8.0...0.9.0">Git
+changes between 0.8.0 and 0.9.0</a>+++.
+
+To upgrade to Kudu 0.9.0, see link:installation.html#upgrade[Upgrade from 0.8.0 to 0.9.0].
+
 [[rn_0.9.0_incompatible_changes]]
 ==== Incompatible changes
 
-- The KuduTableInputFormat has changed how it handles scan predicates, including
-  how it serializes predicates to the job configuration object. The new
-  configuration key is "kudu.mapreduce.encoded.predicate". Clients using the
-  TableInputFormatConfigurator should not be affected.
+- The `KuduTableInputFormat` class has changed the way in which it handles
+  scan predicates, including how it serializes predicates to the job configuration
+  object. The new configuration key is `kudu.mapreduce.encoded.predicate`. Clients
+  using the `TableInputFormatConfigurator` are not affected.
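
As an illustration only (the job setup shown is hypothetical and not part of this commit), a Hadoop job configured through the Kudu MapReduce integration now carries its serialized scan predicates under the new configuration key:

[source,java]
----
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class InspectKuduPredicateKey {
  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "kudu-scan-job");

    // ... the Kudu MapReduce helpers (e.g. the TableInputFormatConfigurator)
    // would populate the job configuration here ...

    // In Kudu 0.9, serialized scan predicates are stored under this key.
    String encodedPredicates =
        job.getConfiguration().get("kudu.mapreduce.encoded.predicate");
    System.out.println("Encoded predicates present: " + (encodedPredicates != null));
  }
}
----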
 
-- The kudu-spark subproject was been renamed to follow naming conventions for scala kudu-spark_2.10
+- The `kudu-spark` sub-project has been renamed to follow naming conventions for
+  Scala. The new name is `kudu-spark_2.10`.
 
 - Default table partitioning has been removed. All tables must now be created
   with explicit partitioning. Existing tables are unaffected. See the
@@ -75,23 +85,32 @@ Hadoop storage technologies.
 ==== New features
 
 - link:https://issues.apache.org/jira/browse/KUDU-1306[KUDU-1306] Scan token API
-  for creating partition-aware scan descriptors. Can be used by clients and
-  query engines to more easily execute parallel scans.
+  for creating partition-aware scan descriptors. This API simplifies executing
+  parallel scans for clients and query engines.
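
A sketch of how the scan token API might be used from the Java client; the class and method names below reflect my understanding of the 0.9 Java client (`org.kududb` packages), and the master address and table name are placeholders:

[source,java]
----
import java.util.List;

import org.kududb.client.KuduClient;
import org.kududb.client.KuduScanToken;
import org.kududb.client.KuduScanner;
import org.kududb.client.KuduTable;
import org.kududb.client.RowResultIterator;

public class ScanTokenExample {
  public static void main(String[] args) throws Exception {
    KuduClient client = new KuduClient.KuduClientBuilder("master-host:7051").build();
    try {
      KuduTable table = client.openTable("example_table");

      // Build one token per scan unit; tokens can be serialized and shipped
      // to remote workers for parallel, partition-aware scans.
      List<KuduScanToken> tokens = client.newScanTokenBuilder(table).build();

      for (KuduScanToken token : tokens) {
        byte[] serialized = token.serialize();
        // On the worker side, the serialized token is turned back into a scanner.
        KuduScanner scanner = KuduScanToken.deserializeIntoScanner(serialized, client);
        while (scanner.hasMoreRows()) {
          RowResultIterator rows = scanner.nextRows();
          System.out.println("Scanned " + rows.getNumRows() + " rows");
        }
      }
    } finally {
      client.shutdown();
    }
  }
}
----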
 
-- link:http://gerrit.cloudera.org:8080/#/c/2848/[Gerrit 2848] Added a kudu datasource for spark which uses the kudu client directly instead of
-  using mapreduce api. Includes predicate pushdowns for spark-sql and spark filters.
-  Parallel retrieval for multiple tablets and column projections. link:developing.html#_kudu_integration_with_spark[Kudu integration with Spark Example]
+- link:http://gerrit.cloudera.org:8080/#/c/2848/[Gerrit 2848] Added a Kudu datasource
+  for Spark. This datasource uses the Kudu client directly instead of
+  using the MapReduce API. Predicate pushdowns for `spark-sql` and Spark filters are
+  included, as well as parallel retrieval for multiple tablets and column projections.
+  See an example of link:developing.html#_kudu_integration_with_spark[Kudu integration with Spark].
 
-- link:http://gerrit.cloudera.org:8080/#/c/2992/ Added ability to update and insert from spark using kudu datasource
+- link:http://gerrit.cloudera.org:8080/#/c/2992/[Gerrit 2992] Added the ability
+  to update and insert from Spark using a Kudu datasource.
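
A rough sketch of reading a Kudu table through the new Spark datasource; the provider name (`org.kududb.spark.kudu`), the option keys (`kudu.master`, `kudu.table`), and the Spark 1.6-era `DataFrame` API used here are assumptions about the `kudu-spark_2.10` artifact, not confirmed by this commit:

[source,java]
----
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

public class KuduSparkReadExample {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("kudu-datasource-example");
    JavaSparkContext sc = new JavaSparkContext(conf);
    SQLContext sqlContext = new SQLContext(sc.sc());

    // Provider and option names are assumptions for the 0.9-era kudu-spark_2.10 artifact.
    DataFrame df = sqlContext.read()
        .format("org.kududb.spark.kudu")
        .option("kudu.master", "master-host:7051")  // placeholder master address
        .option("kudu.table", "example_table")      // placeholder table name
        .load();

    // Simple filters may be pushed down to Kudu where the datasource supports it.
    df.filter("id > 100").show();

    sc.stop();
  }
}
----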
 
 [[rn_0.9.0_changes]]
 ==== Other noteworthy changes
 
-- The clients have longer default timeouts. For Java, the default operation timeout and the default
-  admin operation timeout are now set to 30 seconds instead of 10. The default socket read timeout
-  is now 10 seconds instead of 5. For the C++ client, the default admin timeout is now 30 seconds
-  instead of 10, the default RPC timeout is now 10 seconds instead of 5, and the default scan
-  timeout is now 30 seconds instead of 15.
+All Kudu clients have longer default timeout values, as listed below.
+
+.Java
+- The default operation timeout and the default admin operation timeout
+  are now set to 30 seconds instead of 10.
+- The default socket read timeout is now 10 seconds instead of 5.
+
+.C++
+- The default admin timeout is now 30 seconds instead of 10.
+- The default RPC timeout is now 10 seconds instead of 5.
+- The default scan timeout is now 30 seconds instead of 15.
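
For Java applications that need different values than the new defaults, the timeouts can still be set explicitly on the client builder. A minimal sketch, assuming the pre-1.0 `org.kududb.client` packages and a placeholder master address:

[source,java]
----
import org.kududb.client.KuduClient;

public class ClientTimeoutExample {
  public static void main(String[] args) throws Exception {
    // The values below simply restate the new 0.9 defaults; pass other
    // values here to override them.
    KuduClient client = new KuduClient.KuduClientBuilder("master-host:7051")
        .defaultOperationTimeoutMs(30000)        // was 10 seconds before 0.9
        .defaultAdminOperationTimeoutMs(30000)   // was 10 seconds before 0.9
        .defaultSocketReadTimeoutMs(10000)       // was 5 seconds before 0.9
        .build();
    try {
      System.out.println("Kudu client created with explicit timeout settings");
    } finally {
      client.shutdown();
    }
  }
}
----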
 
 [[rn_0.8.0]]
 === Release notes specific to 0.8.0

