trafodion-commits mailing list archives

From sure...@apache.org
Subject [1/2] incubator-trafodion git commit: [TRAFODION-2481] Update provisioning doc
Date Thu, 16 Feb 2017 16:48:40 GMT
Repository: incubator-trafodion
Updated Branches:
  refs/heads/master d78723dea -> df7cd6d00


[TRAFODION-2481] Update provisioning doc

Add information on Ambari integration.

Delete manual (recipe) installation. This info was incomplete and likely
outdated.

Tweak pyinstaller config to agree with currently tested distros.

This does not update all the command-line install info for the new
pyinstaller. That is to be done in TRAFODION-2482.


Project: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/commit/51abeb4d
Tree: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/tree/51abeb4d
Diff: http://git-wip-us.apache.org/repos/asf/incubator-trafodion/diff/51abeb4d

Branch: refs/heads/master
Commit: 51abeb4d407c8ca52afd4b8d5dd3b6b683a24db3
Parents: 3d7a612
Author: Steve Varnau <svarnau@apache.org>
Authored: Wed Feb 15 23:37:19 2017 +0000
Committer: Steve Varnau <svarnau@apache.org>
Committed: Wed Feb 15 23:37:19 2017 +0000

----------------------------------------------------------------------
 .../src/asciidoc/_chapters/about.adoc           |   2 +-
 .../src/asciidoc/_chapters/ambari_install.adoc  |  92 +++++++++++
 .../src/asciidoc/_chapters/introduction.adoc    |  24 ++-
 .../src/asciidoc/_chapters/prepare.adoc         | 133 ++--------------
 .../src/asciidoc/_chapters/quickstart.adoc      |   3 +-
 .../src/asciidoc/_chapters/recipe_install.adoc  |  29 ----
 .../src/asciidoc/_chapters/recipe_upgrade.adoc  |  30 ----
 .../src/asciidoc/_chapters/requirements.adoc    | 159 +------------------
 docs/provisioning_guide/src/asciidoc/index.adoc |   4 +-
 install/python-installer/configs/version.json   |   2 +-
 10 files changed, 125 insertions(+), 353 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/about.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/about.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/about.adoc
index 57f8f0f..ddce5bf 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/about.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/about.adoc
@@ -53,7 +53,7 @@ Unless specifically qualified (bare-metal node, virtual-machine node, or cloud-n
 regardless of platform type.
 
 == New and Changed Information
-This guide has been updated to include provisioning for LDAP and Kerberos.
+This guide has been updated to include Ambari installation.
 
 <<<
 == Notation Conventions

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/ambari_install.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/ambari_install.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/ambari_install.adoc
new file mode 100644
index 0000000..c247124
--- /dev/null
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/ambari_install.adoc
@@ -0,0 +1,92 @@
+////
+/**
+* @@@ START COPYRIGHT @@@
+*
+* Licensed to the Apache Software Foundation (ASF) under one
+* or more contributor license agreements.  See the NOTICE file
+* distributed with this work for additional information
+* regarding copyright ownership.  The ASF licenses this file
+* to you under the Apache License, Version 2.0 (the
+* "License"); you may not use this file except in compliance
+* with the License.  You may obtain a copy of the License at
+*
+*   http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing,
+* software distributed under the License is distributed on an
+* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+* KIND, either express or implied.  See the License for the
+* specific language governing permissions and limitations
+* under the License.
+*
+* @@@ END COPYRIGHT @@@
+  */
+////
+
+[[install-ambari]]
+= Install with Ambari
+
+This method of installation uses RPM packages rather than tar files. There are two packages:
+
+* traf_ambari - Ambari management pack (plug-in) that is installed on the Ambari Server node
+* apache-trafodion_server - Trafodion package that is installed on every data node
+
+You can either set up a local yum repository (requires a web server) or install the RPMs
+manually on each node.
+
+== Local Repository
+
+On your web server host, make sure the *createrepo* package is installed.
+Copy the two RPM files into a directory served by the web server and run the createrepo command.
+
 $ createrepo -d .
+
+Re-run this command to update the repo meta-data any time RPMs are added or replaced.
+
+Note the Trafodion repository URL for later use.
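The repository URL noted here is later referenced from a yum `.repo` file on the cluster nodes. A minimal sketch of writing such a file, assuming placeholder values (the `baseurl` host and path below are examples, not project-mandated; in real use the file belongs in `/etc/yum.repos.d/`):

```shell
# Sketch only: writes the repo definition to a temp dir for illustration.
# The baseurl host/path are placeholders -- substitute the URL of your
# local repository noted above.
REPO_FILE="$(mktemp -d)/trafodion.repo"
cat > "$REPO_FILE" <<'EOF'
[trafodion]
name=Trafodion local repository
baseurl=http://my.webserver.example/trafodion/
enabled=1
gpgcheck=0
EOF
cat "$REPO_FILE"
```

With `gpgcheck=0`, yum skips signature verification; enable it and configure a GPG key if you sign the RPMs.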
+
+== Install Ambari Management Pack for Trafodion
+
+On your Ambari server host:
+
+. If Ambari Server is not already installed, be sure to download a yum repo file for Ambari.
+For example: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/download_the_ambari_repo_lnx6.html[Ambari-2.4.2 repo].
+
+. Add a yum repo file with the URL of your local repository, or copy the traf_ambari RPM locally.
+
+. Install the Trafodion Ambari management pack RPM. Ambari server will be installed as a dependency, if not already installed.
+
+ $ sudo yum install traf_ambari
+
+. Set up Ambari
+.. If Ambari server was previously running, restart it.
+
+ $ sudo ambari-server restart
+
+.. If Ambari server was not previously running, initialize and start it.
+
+ $ sudo ambari-server setup
+ ...
+ $ sudo ambari-server start
+
+== Install Trafodion
+
+Unlike with the command-line installer, Trafodion can be provisioned at the time a new cluster is created.
+
+=== Initial Cluster Creation
+
+If you are creating a new cluster and you have the Trafodion server RPM hosted on a local yum repository, then
+create the cluster as normal, and select Trafodion on the service selection screen.
+When Ambari prompts for the repository URLs, be sure to update the Trafodion URL
+to the URL for your local repository.
+
+If you plan to install the server RPM manually, do not select the Trafodion service. Instead, first create a cluster
+without the Trafodion service and then follow the instructions for an existing cluster.
+
+=== Existing Cluster
+
+If you are not using a local yum repository, manually copy the apache-trafodion_server RPM to each data node and
+install it using yum install.
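The per-node copy-and-install step above might look like the following sketch. The node names and RPM filename are hypothetical, and the loop only prints the commands rather than running them:

```shell
# Hypothetical data-node list and RPM filename -- adjust for your cluster.
NODES="node01 node02 node03"
RPM="apache-trafodion_server-2.1.0-1.x86_64.rpm"
for n in $NODES; do
  # Print the commands; drop the echo to execute them for real.
  echo "scp $RPM $n:/tmp/"
  echo "ssh $n sudo yum -y localinstall /tmp/$RPM"
done
```

If the `pdsh` package (already a Trafodion prerequisite) is installed, the same could be done in parallel across the node list.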
+
+Using Ambari, select the cluster and then choose "Add a Service" and select Trafodion.
+

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
index bf34fb9..0351129 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/introduction.adoc
@@ -92,28 +92,24 @@ such as LDAP search.  Refer to
 [[introduction-provisioning-options]]
 == Provisioning Options
 
-{project-name} ships with a set of scripts that takes care of many of the installation and upgrade
-tasks associated with the {project-name} software and its requirements. The main features include:
+{project-name} includes two options for installation: a plug-in integration with Apache Ambari and command-line installation scripts.
-* *{project-name} installer*: Performs installation and upgrade for {project-name}
-* *{project-name} uninstaller*: Uninstalls {project-name}
-* *{project-name} security installer*: Enables security features including Kerberos and LDAP for an existing {project-name} installation
+The Ambari integration supports Hortonworks Hadoop distributions, while the command-line {project-name} Installer
+supports Cloudera and Hortonworks Hadoop distributions, as well as select vanilla Hadoop installations.
 
-Currently, the {project-name} Installer is able to install {project-name} on select Cloudera and Hortonworks Hadoop distributions, and for select vanilla Hadoop installations.
-The {project-name} Installer limitations are noted as they apply in the different chapters below. For example, the {project-name} Installer
-is less capable on SUSE than it is on RedHat/CentOS; you have to install the prerequisite software packages outside the {project-name} Installer.
+The {project-name} Installer supports the SUSE and RedHat/CentOS Linux distributions, with some differences between them.
+Prerequisite software packages are not installed automatically on SUSE.
 
-The {project-name} Installer automates many of the tasks required to install/upgrade {project-name}, spanning from downloading and
-installing required software packages and making required changes to your Hadoop environment via creating
+The {project-name} Installer automates many of the tasks required to install/upgrade {project-name}, from downloading and
+installing required software packages and making required configuration changes to your Hadoop environment via creating
 the {project-name} runtime user ID to installing and starting {project-name}. It is, therefore, highly recommended that
 you use the {project-name} Installer for initial installation and upgrades of {project-name}. These steps are referred to as
 "Script-Based Provisioning" in this guide. Refer to <<introduction-trafodion-installer,{project-name} Installer>>, which provides
 usage information.
 
-If, for any reason, you choose not to use the {project-name} Installer, then separate chapters provide
-step-by-step recipes for the tasks required to install/upgrade {project-name}. These steps are referred to as
-*Recipe-Based Provisioning* in this guide. It is assumed that you are well-versed in Linux and Hadoop
-administrative tasks if using Recipe-Based Provisioning.
+For the 2.1.0 release, the command-line installer has been rewritten in Python, replacing the legacy bash-script installer.
+The bash command-line installer is deprecated as of 2.1.0 but is still provided in case you experience any problems with
+the new installer. If so, please report those problems to the project team, since the legacy installer will soon be removed.
 
 [[introduction-provisioning-activities]]
 == Provisioning Activities

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
index cc5f4d6..6bb0f11 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/prepare.adoc
@@ -35,7 +35,6 @@ You need to prepare your Hadoop environment before installing {project-name}.
 6. <<prepare-configure-ldap-identity-store,Configure LDAP Identity Store>>
 7. <<prepare-gather-configuration-information,Gather Configuration Information>>
 8. <<prepare-install-required-software-packages,Install Required Software Packages>>
-9. <<prepare-perform-recipe-based-provisioning-tasks,Perform Recipe-Based Provisioning
Tasks>>
 
 [[prepare-install-optional-workstation-software]]
 == Install Optional Workstation Software
@@ -51,6 +50,7 @@ We recommended that you pre-install the software before continuing with the {pro
 [[configure-installation-user-id]]
 == Configure Installation User ID
 
+If using the command-line Installer,
 {project-name} installation requires a user ID with these attributes:
 
 * `sudo` access per the requirements documented in <<requirements-linux-installation-user,Linux Installation User>>.
@@ -84,7 +84,8 @@ This adds the node to the `$HOME/.ssh/known_hosts` file completing the passwordl
 
 [[prepare-disable-requiretty]]
 == Disable requiretty
-You need to disable `requiretty` in `/etc/sudoers` on all nodes in the cluster
+If using the command-line Installer,
+you need to disable `requiretty` in `/etc/sudoers` on all nodes in the cluster
 to ensure that `sudo` commands can be run from inside the installation scripts.
 
 Comment out the `Defaults requiretty` setting in the `/etc/sudoers` file to
@@ -120,8 +121,7 @@ If you wish to manually set up the authentication configuration file and enable
 [[prepare-gather-configuration-information]]
 == Gather Configuration Information
 
-You need to gather/decide information about your environment to aid installation {project-name}, both for the {project-name} Installer
-and for recipe-based provisioning. (Listed in alphabetical order to make it easier to find information when referenced in the install and upgrade instructions.)
+You need to gather/decide information about your environment to aid installation of {project-name} with the {project-name} Installer. (Listed in alphabetical order to make it easier to find information when referenced in the install and upgrade instructions.)
 
 [cols="25%l,25%,15%l,35%",options="header"]
 |===
@@ -153,7 +153,7 @@ configuration changes under this user.
  +
 If the home directory of the `trafodion` user is
 `/opt/home/trafodion`, then specify the root directory as `/opt/home`. 
-| INIT_TRAFODION     | Whether to automatically initialize the {project-name} database.    | N                             | Does not apply to Recipe-Based Provisioning. Applies if $START=Y only.
+| INIT_TRAFODION     | Whether to automatically initialize the {project-name} database.    | N                             | Applies if $START=Y only.
 | INTERFACE          | Interface type used for $FLOATING_IP.                          | None                          | Not needed if $ENABLE_HA = N. 
 | JAVA_HOME          | Location of Java 1.7.0_65 or higher (JDK).                     | $JAVA_HOME setting            | Fully qualified path of the JDK. For example: `/usr/java/jdk1.7.0_67-cloudera`
@@ -178,7 +178,7 @@ distribution manager's REST API.
 | REST_BUILD         | Tar file containing the REST component.                        | None                          | Not needed if using a {project-name} package installation tar file.
 | SECURE_HADOOP^2^   | Indicates whether Hadoop has enabled Kerberos                  | Y only if Kerberos enabled    | Based on whether Kerberos is enabled for your Hadoop installation
 | TRAF_HOME          | Target directory for the {project-name} software.              | $HOME_DIR/trafodion           | {project-name} is installed in this directory on all nodes in `$NODE_LIST`.
-| START              | Whether to start {project-name} after install/upgrade.         | N                             | Does not apply to Recipe-Based Provisioning.
+| START              | Whether to start {project-name} after install/upgrade.         | N                             | 
 | SUSE_LINUX         | Whether you're installing {project-name} on SUSE Linux.        | false                         | Auto-detected by the {project-name} Installer.
 | TRAF_KEYTAB^2^     | Name to use when specifying the {project-name} keytab          | based on distribution         | Required if Kerberos is enabled.
 | TRAF_KEYTAB_DIR^2^ | Location of the {project-name} keytab                          | based on distribution         | Required if Kerberos is enabled.
@@ -206,7 +206,6 @@ for more information.
 This step is required if you're:
 
 * Installing {project-name} on SUSE.
-* Using Recipe-Based Provisioning.
 * Unable to download the required software packages from the Internet.
 
 If none of these situations exist, then we highly recommend that you use the {project-name} Installer.
@@ -222,120 +221,16 @@ Install the packages listed in <<requirements-software-packages,Software Package
 You download the {project-name} binaries from the {project-name} {download-url}[Download] page. Download the following packages:
 
-* {project-name} Installer (if planning to use the {project-name} Installer)
-* {project-name} Server
+Command-line Installation
 
-NOTE: You can download and install the {project-name} Clients once you've installed and activated {project-name}. Refer to the
-{docs-url}/client_install/index.html[{project-name} Client Install Guide] for instructions.
-
-*Example*
-
-```
-$ mkdir $HOME/trafodion-download
-$ cd $HOME/trafodion-download
-$ # Download the Trafodion Installer binaries
-$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-1.3.0.incubating/apache-trafodion-installer-1.3.0-incubating-bin.tar.gz
-Resolving http://apache.cs.utah.edu... 192.168.1.56
-Connecting to http://apache.cs.utah.edu|192.168.1.56|:80... connected.
-HTTP request sent, awaiting response... 200 OK
-Length: 68813 (67K) [application/x-gzip]
-Saving to: "apache-trafodion-installer-1.3.0-incubating-bin.tar.gz"
-
-100%[=====================================================================================================================>] 68,813       124K/s   in 0.5s
-
-2016-02-14 04:19:42 (124 KB/s) - "apache-trafodion-installer-1.3.0-incubating-bin.tar.gz" saved [68813/68813]
-```
-
-<<<
+* {project-name} Installer
+* {project-name} Server tar file
 
-```
-$ # Download the Trafodion Server binaries
-$ wget http://apache.cs.utah.edu/incubator/trafodion/trafodion-1.3.0.incubating/apache-trafodion-1.3.0-incubating-bin.tar.gz
-Resolving http://apache.cs.utah.edu... 192.168.1.56
-Connecting to http://apache.cs.utah.edu|192.168.1.56|:80... connected.
-HTTP request sent, awaiting response... 200 OK
-Length: 214508243 (205M) [application/x-gzip]
-Saving to: "apache-trafodion-1.3.0-incubating-bin.tar.gz"
-
-100%[=====================================================================================================================>] 214,508,243 3.90M/s   in 55s
-
-2016-02-14 04:22:14 (3.72 MB/s) - "apache-trafodion-1.3.0-incubating-bin.tar.gz" saved [214508243/214508243]
-
-$ ls -l
-total 209552
--rw-rw-r-- 1 centos centos 214508243 Jan 12 20:10 apache-trafodion-1.3.0-incubating-bin.tar.gz
--rw-rw-r-- 1 centos centos     68813 Jan 12 20:10 apache-trafodion-installer-1.3.0-incubating-bin.tar.gz
-$
-```
-
-[[prepare-preparation-for-recipe-based-provisioning]]
-== Preparation for Recipe-Based Provisioning 
-
-NOTE: This step should be skipped if you plan to use the {project-name} Installer
+Ambari Installation
 
-[[prepare-modify-os-settings]]
-=== Modify OS Settings
+* {project-name} Ambari RPM
+* {project-name} Server RPM
 
-Ensure that the `/etc/security/limits.d/trafodion.conf` on each node contains the limits settings required by {project-name}.
-Refer to <<requirements-operating-system-changes,Operating System Changes>> for the required settings.
-
-[[prepare-modify-zookeeper-configuration]]
-=== Modify ZooKeeper Configuration
-
-Do the following:
-
-1. Modify the ZooKeeper configuration as follows:
-+
-[cols="40%l,60%l",options="header"]
-|===
-| Attribute                  | Setting
-| maxClientCnxns             | 0
-|===
-
-2. Restart ZooKeeper to activate the new configuration setting.
-
-[[prepare-modify-hdfs-configuration]]
-=== Modify HDFS Configuration
-
-Do the following:
-
-1. Modify the HDFS configuration as follows:
-+
-[cols="40%l,60%l",options="header"]
-|===
-| Attribute                 | Setting
-| dfs.namenode.acls.enabled | true
-|===
-
-2. Restart HDFS to activate the new configuration setting.
-
-[[prepare-modify-hbase-configuration]]
-=== Modify HBase Configuration
-
-Do the following:
-
-1. Modify the HBase configuration as follows:
-+
-[cols="40%l,60%l",options="header"]
-|===
-| Attribute                                    | Setting
-| hbase.coprocessor.region.classes^b^          | org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionObserver,org.apache.hadoop.hbase.coprocessor.transactional.TrxRegionEndpoint,
-org.apache.hadoop.hbase.coprocessor.AggregateImplementation 
-| hbase.hregion.impl                           | org.apache.hadoop.hbase.regionserver.transactional.TransactionalRegion
-| hbase.regionserver.region.split.policy       | org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy

-| hbase.snapshot.enabled                       | true 
-| hbase.bulkload.staging.dir                   | hbase-staging
-| hbase.regionserver.region.transactional.tlog | true 
-| hbase.snapshot.master.timeoutMillis          | 600000
-| hbase.snapshot.region.timeout                | 600000
-| hbase.client.scanner.timeout.period          | 600000
-| hbase.regionserver.lease.period              | 600000
-| hbase.namenode.java.heapsize^a^              | 1073741824
-| hbase.secondary.namenode.java.heapsize^a^    | 1073741824
-|===
-+
-a) Applies to Cloudera distributions only.
-+
-b) Do not overwrite any coprocessors that may already exist.
+NOTE: You can download and install the {project-name} Clients once you've installed and activated {project-name}. Refer to the
+{docs-url}/client_install/index.html[{project-name} Client Install Guide] for instructions.
 
-2. Restart HBase to activate the new configuration setting.

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
index 84abfc0..8d00d90 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/quickstart.adoc
@@ -25,7 +25,8 @@
 [[quickstart]]
 = Quick Start
 
-This chapter provides a quick start for how to use the {project-name} Installer to install {project-name}. 
+This chapter provides a quick start for how to use the command-line {project-name} Installer to install {project-name}. 
+*If you prefer to install on an HDP distribution using Ambari, refer to the <<install-ambari,Ambari Install>> section.*
 
 You need the following before using the information herein:
 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/recipe_install.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/recipe_install.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/recipe_install.adoc
deleted file mode 100644
index 7939ec1..0000000
--- a/docs/provisioning_guide/src/asciidoc/_chapters/recipe_install.adoc
+++ /dev/null
@@ -1,29 +0,0 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-  */
-////
-
-[[install-recipe]]
-= Install Recipe
-
-To be written.

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/recipe_upgrade.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/recipe_upgrade.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/recipe_upgrade.adoc
deleted file mode 100644
index 3efffb2..0000000
--- a/docs/provisioning_guide/src/asciidoc/_chapters/recipe_upgrade.adoc
+++ /dev/null
@@ -1,30 +0,0 @@
-////
-/**
-* @@@ START COPYRIGHT @@@
-*
-* Licensed to the Apache Software Foundation (ASF) under one
-* or more contributor license agreements.  See the NOTICE file
-* distributed with this work for additional information
-* regarding copyright ownership.  The ASF licenses this file
-* to you under the Apache License, Version 2.0 (the
-* "License"); you may not use this file except in compliance
-* with the License.  You may obtain a copy of the License at
-*
-*   http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing,
-* software distributed under the License is distributed on an
-* "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-* KIND, either express or implied.  See the License for the
-* specific language governing permissions and limitations
-* under the License.
-*
-* @@@ END COPYRIGHT @@@
-  */
-////
-
-[[upgrade-recipe]]
-= Upgrade Recipe
-
-To be written.
-

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc b/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
index 62ebbf4..e33a15e 100644
--- a/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
+++ b/docs/provisioning_guide/src/asciidoc/_chapters/requirements.adoc
@@ -30,9 +30,9 @@
 
 The current release of {project-name} has been tested with:
 
-* 64-bit Red Hat Enterprise Linux (RHEL) or CentOS 6.5, 6.6, and 6.7
-* Cloudera CDH 5.4
-* Hortonworks HDP 2.3
+* 64-bit Red Hat Enterprise Linux (RHEL) or CentOS 6.5 - 6.8
+* Cloudera CDH 5.4 - 5.7
+* Hortonworks HDP 2.3 - 2.4
 
 Other OS releases may work, too. The {project-name} project is currently working on better support for more distribution and non-distribution versions of Hadoop.
 
@@ -198,7 +198,7 @@ tools that are not typically packaged as part of the core Linux distribution.
 
 NOTE: For RedHat/CentOS, the {project-name} Installer automatically attempts to get a subset of these packages over the Internet.
 If the cluster's access to the Internet is disabled, then you need to manually download the packages and make them available
-for installation. You need to build and install `log4c&#43;&#43;` manually.
+for installation.
 
 [cols="20%,45%,35%l",options="header"]
 |===
@@ -206,7 +206,6 @@ for installation. You need to build and install `log4c&#43;&#43;` manually.
 | EPEL                 | Add-on packages to complete the Linux distribution.                              | Download
 http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch[Fedora RPM]
 | pdsh                 | Parallelize shell commands during install and {project-name} runtime utilities.  | yum install pdsh
-| log4cxx              | Message logging.                                                                 | Manual process^1^
 | sqlite               | Internal configuration information managed by the {project-name} Foundation component. | yum install sqlite
 | expect               | Not used?                                                                        | yum install expect
 | perl-DBD-SQLite      | Allows Perl scripts to connect to SQLite.                                        | yum install perl-DBD-SQLite
@@ -240,9 +239,6 @@ The `trafodion:trafodion` user ID is created as part of the installation process
 {project-name} requires that either HDFS ACL support or Kerberos is enabled. The {project-name} Installer will enable HDFS ACL and Kerberos support. Refer to <<enable-security-kerberos,Kerberos>> for more information about the requirements and usage of Kerberos in Trafodion.
 Refer to https://hbase.apache.org/book.html#security[Apache HBase(TM) Reference Guide] for security in HBase. 
 
-Also, {project-name} requires `sudo` access to `ip` and `arping` so that floating or elastic IP addresses can be moved from one node to
-another in case of node failures.
-
 NOTE: Do *not* create the `trafodion:trafodion` user ID in advance. The {project-name} Installer
uses the presence of this user ID to determine
 whether you're doing an installation or upgrade.
 
@@ -321,153 +317,6 @@ The Kerberos administrator. Required to create Trafodion principals and keytabs o
 * Kerberos Administrator admin name including the realm.
 * Kerberos Administrator password
 
-[[requirements-required-configuration-changes]]
-== Required Configuration Changes
-
-{project-name} requires changes to a number of different areas of your system configuration: operating system, HDFS, and HBase.
-
-NOTE: These changes are performed by the {project-name} Installer, if used.
-
-[[requirements-operating-system-changes]]
-=== Operating System Changes
-
-`/etc/security/limits.d/trafodion.conf` on each node in the cluster must contain the following settings:
-
-```
-# Trafodion settings
-trafodion  soft core    unlimited
-trafodion  hard core    unlimited
-trafodion  soft memlock unlimited
-trafodion  hard memlock unlimited
-trafodion  soft nofile  32768
-trafodion  hard nofile  65536
-trafodion  soft nproc   100000
-trafodion  hard nproc   100000
-```
-
-<<<
-[[requirements-zookeeper-changes]]
-=== ZooKeeper Changes
-
-NOTE: These changes require a restart of ZooKeeper on all nodes in the cluster.
-
-{project-name} requires the following changes to `zoo.cfg`:
-
-[cols="30%l,40%l,30%a",options="header"]
-|===
-| Setting        | New Value | Purpose
-| maxClientCnxns | 0         | Tell ZooKeeper to impose no limit to the number of connections to enable better {project-name} concurrency.
-|===
-
-NOTE: If Kerberos is enabled, it is not possible to secure the {project-name} data in ZooKeeper at this time. 
-
-[[requirements-hdfs-changes]]
-=== HDFS Changes
-
-NOTE: These changes require a restart of HDFS on all nodes in the cluster.
-
-{project-name} requires the following changes to the HDFS environment:
-
-[cols="60%a,40%a",options="header"]
-|===
-| Action  | Purpose 
-| &#8226; Create `/hbase-staging` directory.  +
-  &#8226; Change owner to HBase Administrator. |
-| &#8226; Create `/bulkload` directory.  +
-  &#8226; Change owner to `trafodion`. | Used to stage data when processing the {project-name}
-{docs-url}/sql_reference/index.html#load_statement[LOAD INTO table]
-statement and as a temporary directory to create links to actual HFile for snapshot scanning.
-| &#8226; Create `/lobs` directory.  +
-  &#8226; Change owner to `trafodion`. |
-| &#8226; Create `/apps/hbase/data/archive`^1^.  +
-  &#8226; Change owner to: `hbase:hbase` (Cloudera) or `hbase:hdfs` (Hortonworks) +
-  &#8226; Give the `trafodion` user RWX access to `/apps/hbase/data/archive` +
-  &#8226; Set default user of `/apps/hbase/data/archive` to `trafodion` +
-  &#8226; Recursively change `setafcl` of `/apps/hbase/data/archive` to RWX | 
-|===
-
-1. These steps are performed *after* HDFS ACLs have been enabled.
-
-The following changes are required in `hdfs-site.xml`:
-
-[cols="30%l,40%l,30%a",options="header"]
-|===
-| Setting | New Value | Purpose
-| dfs.namenode.acls.enabled | true | Enable HDFS  POSIX Access Control Lists (ACLs).
-|===
-
-[[requirements-hbase-changes]]
-=== HBase Changes
-
-NOTE: These changes require a restart of ZooKeeper and HBase on all nodes in the cluster.
-
-{project-name} requires that the following changes to the HBase environment:
-
-[cols="25%a,40%a,35%a",options="header"]
-|===
-| Action | Affected Directories | Purpose
-| Install/replace {project-name}'s version of `hbase-trx` | &#8226; `/usr/lib/hbase/lib/` +
-&#8226; `/usr/share/cmf/lib/plugins/` (Cloudera) +
-&#8226; `/usr/hdp/current/hbase-regionserver/lib/` (Hortonworks) |
-{project-name} transaction management relies on an enhanced version of `hbase-trx`.
-| Install/Replace {project-name} utility jar file. | &#8226; `/usr/lib/hbase/lib/` +
-&#8226; `/usr/share/cmf/lib/plugins/` (Cloudera) +
-&#8226; `/usr/hdp/current/hbase-regionserver/lib` (Hortonworks) |
-TODO: Add purpose here.
-|For Kerberos-enabled clusters, grant the `trafodion` user privileges | not applicable | privileges: create, read, write, and execute access |
-|===
-
-The following changes are required in `hbase-site.xml`. Please refer to the 
-https://hbase.apache.org/book.html[Apache HBase(TM) Reference Guide] for additional descriptions of these settings.
-
-[cols="30%l,40%l,30%a",options="header"]
-|===
-| Setting | New Value | Purpose
-| hbase.master.
-distributed.log.splitting | false | Do not use the HBase Split Log Manager. Instead, the HMaster controls all log-splitting activities.
-| hbase.coprocessor.
-region.classes | 
-org.apache.hadoop.
-hbase.coprocessor.
-transactional.TrxRegionObserver,
-org.apache.hadoop.
-hbase.coprocessor.
-transactional.TrxRegionEndpoint,
-org.apache.hadoop.
-hbase.coprocessor.
-AggregateImplementation | Install {project-name} coprocessor classes.
-| hbase.hregion.impl | org.apache.hadoop.
-hbase.regionserver.
-transactional.TransactionalRegion | {project-name} needs to be able to read the Write Ahead Log from a coprocessor using the getScanner method. This method
-is protected in standard HBase. This change overloads the getScanner method to be public, thereby allowing coprocessor code to use it.
-| hbase.regionserver.
-region.split.policy | org.apache.hadoop.
-hbase.regionserver.
-ConstantSizeRegionSplitPolicy | Tell HBase to use the ConstantSizeRegionSplitPolicy for region splitting.
-This setting causes region splitting to occur only when the maximum file size is reached.

-| hbase.snapshot.
-enabled | true | Enable the HBase Snapshot feature. Used for {project-name} backup and restore.
-| hbase.bulkload.
-staging.dir | /hbase-staging | Use `/hbase-staging` as the bulk load staging directory.
-| hbase.regionserver.region.
-transactional.tlog | true | The HBase Region requests that the Transaction Manager re-drive in-doubt transactions.
-| hbase.snapshot.
-master.timeoutMillis | 600000 | HMaster timeout when waiting for RegionServers involved in the snapshot operation.
-| hbase.snapshot.
-region.timeout | 600000 | RegionServer timeout when waiting for snapshot to be created.
-| hbase.client.
-scanner.timeout.period | 600000 | Time limit to perform a scan request. 
-| hbase.regionserver.
-lease.period | 600000 | Clients must report within this time limit or they are considered dead by HBase.
-| hbase.namenode.
-java.heapsize^1^ | 1073741824 (1GB) | Java Heap Size for the HDFS NameNode.
-| hbase.secondary.namenode.
-java.heapsize^1^ | 1073741824 (1GB) | Java Heap Size for the HDFS Secondary NameNode.
-|===
-
-1. Applies to Cloudera distributions only.
-
-[[requirements-recommended-configuration-changes]]
 == Recommended Configuration Changes
 The following configuration changes are recommended but not required.
 

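For reference, the single-valued `hbase-site.xml` settings removed from requirements.adoc above can be sketched as a generated configuration fragment. This is an illustrative sketch only: the property names and values are copied from the deleted table, but the `render_hbase_site` helper is hypothetical and is not part of the Trafodion installer.

```python
import xml.etree.ElementTree as ET

# A subset of the settings from the deleted requirements.adoc table
# (real hbase-site.xml property names; the script itself is illustrative).
HBASE_SETTINGS = {
    "hbase.master.distributed.log.splitting": "false",
    "hbase.snapshot.enabled": "true",
    "hbase.bulkload.staging.dir": "/hbase-staging",
    "hbase.snapshot.master.timeoutMillis": "600000",
    "hbase.snapshot.region.timeout": "600000",
    "hbase.client.scanner.timeout.period": "600000",
    "hbase.regionserver.lease.period": "600000",
}

def render_hbase_site(settings):
    """Render a dict of settings as an hbase-site.xml <configuration> fragment."""
    root = ET.Element("configuration")
    for name, value in sorted(settings.items()):
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(render_hbase_site(HBASE_SETTINGS))
```

In practice these values are applied through the cluster manager (Cloudera Manager or Ambari) rather than by editing the file directly.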
http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/docs/provisioning_guide/src/asciidoc/index.adoc
----------------------------------------------------------------------
diff --git a/docs/provisioning_guide/src/asciidoc/index.adoc b/docs/provisioning_guide/src/asciidoc/index.adoc
index 1dab870..9ba0634 100644
--- a/docs/provisioning_guide/src/asciidoc/index.adoc
+++ b/docs/provisioning_guide/src/asciidoc/index.adoc
@@ -51,13 +51,11 @@ include::asciidoc/_chapters/quickstart.adoc[]
 include::asciidoc/_chapters/introduction.adoc[]
 include::asciidoc/_chapters/requirements.adoc[]
 include::asciidoc/_chapters/prepare.adoc[]
+include::asciidoc/_chapters/ambari_install.adoc[]
 include::asciidoc/_chapters/script_install.adoc[]
 include::asciidoc/_chapters/script_upgrade.adoc[]
 include::asciidoc/_chapters/activate.adoc[]
 include::asciidoc/_chapters/script_remove.adoc[]
 include::asciidoc/_chapters/enable_security.adoc[]
-include::asciidoc/_chapters/recipe_install.adoc[]
-include::asciidoc/_chapters/recipe_upgrade.adoc[]
-
 
 

http://git-wip-us.apache.org/repos/asf/incubator-trafodion/blob/51abeb4d/install/python-installer/configs/version.json
----------------------------------------------------------------------
diff --git a/install/python-installer/configs/version.json b/install/python-installer/configs/version.json
index b0064d5..0d13da3 100644
--- a/install/python-installer/configs/version.json
+++ b/install/python-installer/configs/version.json
@@ -4,7 +4,7 @@
     "java":   ["1.7", "1.8"],
     "centos": ["6"],
     "redhat": ["6"],
-    "cdh":    ["5.4", "5.5", "5.6"],
+    "cdh":    ["5.4", "5.5", "5.6", "5.7"],
     "hdp":    ["2.3", "2.4"],
     "hbase":  ["1.0", "1.1"]
 }
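The version.json change above adds CDH 5.7 to the installer's support matrix. A minimal sketch of how an installer might validate a detected distro version against such a matrix follows; the `check_supported` helper is hypothetical, and the matrix below is reconstructed from the diff hunk (the real file may contain additional keys).

```python
import json

# Support matrix as updated by this commit
# (install/python-installer/configs/version.json, reconstructed from the diff).
VERSION_JSON = """
{
    "java":   ["1.7", "1.8"],
    "centos": ["6"],
    "redhat": ["6"],
    "cdh":    ["5.4", "5.5", "5.6", "5.7"],
    "hdp":    ["2.3", "2.4"],
    "hbase":  ["1.0", "1.1"]
}
"""

def check_supported(matrix, component, version):
    """Return True if the major.minor prefix of `version` is listed for `component`."""
    supported = matrix.get(component, [])
    major_minor = ".".join(version.split(".")[:2])
    return major_minor in supported

matrix = json.loads(VERSION_JSON)
print(check_supported(matrix, "cdh", "5.7.1"))  # CDH 5.7 is now accepted
print(check_supported(matrix, "cdh", "5.8.0"))  # 5.8 is not in the matrix
```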

