accumulo-commits mailing list archives

From mwa...@apache.org
Subject [11/11] accumulo git commit: ACCUMULO-4490 Code review updates
Date Tue, 08 Nov 2016 21:13:45 GMT
ACCUMULO-4490 Code review updates

* Moved code in config.sh to accumulo script
* Moved non-API scripts to contrib
* Moved setting master goal state to service.sh
* Stopped using the ifconfig command as it's deprecated


Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/ab0d6fc3
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/ab0d6fc3
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/ab0d6fc3

Branch: refs/heads/master
Commit: ab0d6fc3fed83308364129cd9a721be066d4c843
Parents: 2e3b62a
Author: Mike Walch <mwalch@apache.org>
Authored: Thu Nov 3 13:55:16 2016 -0400
Committer: Mike Walch <mwalch@apache.org>
Committed: Tue Nov 8 15:37:05 2016 -0500

----------------------------------------------------------------------
 INSTALL.md                                      |   2 +-
 README.md                                       |  10 +-
 assemble/bin/accumulo                           | 395 +++++++++++++++++-
 assemble/bin/accumulo-cluster                   |   2 +-
 assemble/bin/accumulo-service                   |   8 +-
 assemble/contrib/bootstrap-hdfs.sh              |  91 +++++
 assemble/contrib/check-tservers                 | 199 +++++++++
 assemble/contrib/gen-monitor-cert.sh            |  85 ++++
 assemble/contrib/tool.sh                        |  93 +++++
 assemble/libexec/bootstrap-hdfs.sh              |  90 ----
 assemble/libexec/check-tservers                 | 199 ---------
 assemble/libexec/cluster.sh                     |  13 +-
 assemble/libexec/config.sh                      | 408 -------------------
 assemble/libexec/gen-monitor-cert.sh            |  84 ----
 assemble/libexec/load-env.sh                    |   2 -
 assemble/libexec/service.sh                     |   6 +-
 assemble/libexec/tool.sh                        |  92 -----
 assemble/src/main/assemblies/component.xml      |  30 +-
 .../main/scripts/generate-example-configs.sh    |   3 +-
 .../main/asciidoc/chapters/administration.txt   |   2 +-
 docs/src/main/asciidoc/chapters/clients.txt     |   6 +-
 .../main/resources/examples/README.bulkIngest   |   2 +-
 .../main/resources/examples/README.classpath    |   2 +-
 .../src/main/resources/examples/README.filedata |   2 +-
 docs/src/main/resources/examples/README.mapred  |   6 +-
 docs/src/main/resources/examples/README.regex   |   2 +-
 docs/src/main/resources/examples/README.rowhash |   2 +-
 .../main/resources/examples/README.tabletofile  |   2 +-
 .../src/main/resources/examples/README.terasort |   2 +-
 .../StandaloneClusterControlTest.java           |   2 +-
 proxy/README                                    |   2 +-
 test/system/continuous/run-moru.sh              |   2 +-
 test/system/continuous/run-verify.sh            |   2 +-
 test/system/upgrade_test.sh                     |   8 +-
 34 files changed, 918 insertions(+), 938 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/INSTALL.md
----------------------------------------------------------------------
diff --git a/INSTALL.md b/INSTALL.md
index 02ad3ac..b614991 100644
--- a/INSTALL.md
+++ b/INSTALL.md
@@ -114,7 +114,7 @@ There are several methods for running Accumulo:
    start Accumulo.
 
 2. Run an Accumulo cluster on one or more nodes using `accumulo-cluster` (which
-   uses `accumulo-service` to run servcies). Useful for local development and
+   uses `accumulo-service` to run services). Useful for local development and
    testing or if you are not using a cluster management tool in production.
 
 Each method above has instructions below.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 4cca80d..d3b8b5d 100644
--- a/README.md
+++ b/README.md
@@ -39,15 +39,15 @@ Accumulo has the following documentation which is viewable on the [Accumulo webs
 using the links below:
 
 * [User Manual][man-web] - In-depth developer and administrator documentation.
-* [Examples][ex-web] - Code with corresponding README files that give step by step
+* [Accumulo Examples][ex-web] - Code with corresponding README files that give step by step
 instructions for running the example.
 
 This documentation can also be found in Accumulo distributions:
 
-* **Binary distribution** - The User Manual can be found in the `docs` directory.  The
-Examples Readmes can be found in `docs/examples`. While the source for the Examples is
-not included, the distribution has a jar with the compiled examples. This makes it easy
-to run them after following the [install] instructions.
+* **Binary distribution**
+  - User manual is located at `docs/accumulo_user_manual.html`.
+  - Accumulo Examples: READMEs and source are in `docs/examples`. The distribution also has a jar with
+    the compiled examples. This makes it easy to run them after following the [install] instructions.
 
 * **Source distribution** - The [Example Source][ex-src], [Example Readmes][rm-src], and
 [User Manual Source][man-src] are available.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/bin/accumulo
----------------------------------------------------------------------
diff --git a/assemble/bin/accumulo b/assemble/bin/accumulo
index 21ce5a1..210350d 100755
--- a/assemble/bin/accumulo
+++ b/assemble/bin/accumulo
@@ -16,6 +16,10 @@
 # limitations under the License.
 
 function build_native() {
+  if [[ -f "$basedir"/conf/accumulo-env.sh ]]; then
+    source "$basedir"/libexec/load-env.sh
+  fi
+
   native_tarball="$basedir/lib/accumulo-native.tar.gz"
   final_native_target="$basedir/lib/native"
 
@@ -44,11 +48,7 @@ function build_native() {
 
   # Make the native library
   export USERFLAGS="$*"
-  if ! make
-  then
-      echo "Make failed!"
-      exit 1
-  fi
+  make || { echo 'Make failed!'; exit 1; }
 
   # "install" the artifact
   cp libaccumulo.* "${final_native_target}" || exit 1
@@ -59,6 +59,389 @@ function build_native() {
   echo "Successfully installed native library"
 }
 
+function create_config_usage() {
+  cat <<EOF
+Usage: accumulo create-config [-options]
+where options include (long options not available on all platforms):
+    -d, --dir        Alternate directory to set up config files
+    -s, --size       Supported sizes: '1GB' '2GB' '3GB' '512MB'
+    -n, --native     Configure to use native libraries
+    -j, --jvm        Configure to use the jvm
+    -o, --overwrite  Overwrite the default config directory
+    -v, --version    Specify the Apache Hadoop version (supported: '2', 'HDP2', 'HDP2.2')
+    -k, --kerberos   Configure for use with Kerberos
+    -h, --help       Print this help message
+EOF
+}
+
+function create_config() {
+  TEMPLATE_CONF_DIR="$basedir/libexec/templates"
+  CONF_DIR="${ACCUMULO_CONF_DIR:-$basedir/conf}"
+  ACCUMULO_SITE=accumulo-site.xml
+  ACCUMULO_ENV=accumulo-env.sh
+
+  SIZE=
+  TYPE=
+  HADOOP_VERSION=
+  OVERWRITE="0"
+  BASE_DIR=
+  KERBEROS=
+
+  #Execute getopt
+  if [[ $(uname -s) == "Linux" ]]; then
+    args=$(getopt -o "b:d:s:njokv:h" -l "basedir:,dir:,size:,native,jvm,overwrite,kerberos,version:,help" -q -- "$@")
+  else # Darwin, BSD
+    args=$(getopt b:d:s:njokv:h "$@")
+  fi
+
+  #Bad arguments
+  if [[ $? != 0 ]]; then
+    create_config_usage 1>&2
+    exit 1
+  fi
+  eval set -- "${args[@]}"
+
+  for i
+  do
+    case "$i" in
+      -b|--basedir) #Hidden option used to set general.maven.project.basedir for developers
+        BASE_DIR=$2; shift
+        shift;;
+      -d|--dir)
+        CONF_DIR=$2; shift
+        shift;;
+      -s|--size)
+        SIZE=$2; shift
+        shift;;
+      -n|--native)
+        TYPE=native
+        shift;;
+      -j|--jvm)
+        TYPE=jvm
+        shift;;
+      -o|--overwrite)
+        OVERWRITE=1
+        shift;;
+      -v|--version)
+        HADOOP_VERSION=$2; shift
+        shift;;
+      -k|--kerberos)
+        KERBEROS="true"
+        shift;;
+      -h|--help)
+        create_config_usage
+        exit 0
+        shift;;
+      --)
+        shift
+        break;;
+    esac
+  done
+
+  while [[ "${OVERWRITE}" = "0" ]]; do
+    if [[ -e "${CONF_DIR}/${ACCUMULO_ENV}" || -e "${CONF_DIR}/${ACCUMULO_SITE}" ]]; then
+      echo "Warning: your current config files in ${CONF_DIR} will be overwritten!"
+      echo
+      echo "How would you like to proceed?:"
+      select CHOICE in 'Continue with overwrite' 'Specify new conf dir'; do
+        if [[ "${CHOICE}" = 'Specify new conf dir' ]]; then
+          echo -n "Please specify new conf directory: "
+          read CONF_DIR
+        elif [[ "${CHOICE}" = 'Continue with overwrite' ]]; then
+          OVERWRITE=1
+        fi
+        break
+      done
+    else
+      OVERWRITE=1
+    fi
+  done
+  echo "Copying configuration files to: ${CONF_DIR}"
+
+  #Native 1GB
+  native_1GB_tServer="-Xmx128m -Xms128m"
+  _1GB_master="-Xmx128m -Xms128m"
+  _1GB_monitor="-Xmx64m -Xms64m"
+  _1GB_gc="-Xmx64m -Xms64m"
+  _1GB_other="-Xmx128m -Xms64m"
+  _1GB_shell="${_1GB_other}"
+
+  _1GB_memoryMapMax="256M"
+  native_1GB_nativeEnabled="true"
+  _1GB_cacheDataSize="15M"
+  _1GB_cacheIndexSize="40M"
+  _1GB_sortBufferSize="50M"
+  _1GB_waLogMaxSize="256M"
+
+  #Native 2GB
+  native_2GB_tServer="-Xmx256m -Xms256m"
+  _2GB_master="-Xmx256m -Xms256m"
+  _2GB_monitor="-Xmx128m -Xms64m"
+  _2GB_gc="-Xmx128m -Xms128m"
+  _2GB_other="-Xmx256m -Xms64m"
+  _2GB_shell="${_2GB_other}"
+
+  _2GB_memoryMapMax="512M"
+  native_2GB_nativeEnabled="true"
+  _2GB_cacheDataSize="30M"
+  _2GB_cacheIndexSize="80M"
+  _2GB_sortBufferSize="50M"
+  _2GB_waLogMaxSize="512M"
+
+  #Native 3GB
+  native_3GB_tServer="-Xmx1g -Xms1g -XX:NewSize=500m -XX:MaxNewSize=500m"
+  _3GB_master="-Xmx1g -Xms1g"
+  _3GB_monitor="-Xmx1g -Xms256m"
+  _3GB_gc="-Xmx256m -Xms256m"
+  _3GB_other="-Xmx1g -Xms256m"
+  _3GB_shell="${_3GB_other}"
+
+  _3GB_memoryMapMax="1G"
+  native_3GB_nativeEnabled="true"
+  _3GB_cacheDataSize="128M"
+  _3GB_cacheIndexSize="128M"
+  _3GB_sortBufferSize="200M"
+  _3GB_waLogMaxSize="1G"
+
+  #Native 512MB
+  native_512MB_tServer="-Xmx48m -Xms48m"
+  _512MB_master="-Xmx128m -Xms128m"
+  _512MB_monitor="-Xmx64m -Xms64m"
+  _512MB_gc="-Xmx64m -Xms64m"
+  _512MB_other="-Xmx128m -Xms64m"
+  _512MB_shell="${_512MB_other}"
+
+  _512MB_memoryMapMax="80M"
+  native_512MB_nativeEnabled="true"
+  _512MB_cacheDataSize="7M"
+  _512MB_cacheIndexSize="20M"
+  _512MB_sortBufferSize="50M"
+  _512MB_waLogMaxSize="100M"
+
+  #JVM 1GB
+  jvm_1GB_tServer="-Xmx384m -Xms384m"
+
+  jvm_1GB_nativeEnabled="false"
+
+  #JVM 2GB
+  jvm_2GB_tServer="-Xmx768m -Xms768m"
+
+  jvm_2GB_nativeEnabled="false"
+
+  #JVM 3GB
+  jvm_3GB_tServer="-Xmx2g -Xms2g -XX:NewSize=1G -XX:MaxNewSize=1G"
+
+  jvm_3GB_nativeEnabled="false"
+
+  #JVM 512MB
+  jvm_512MB_tServer="-Xmx128m -Xms128m"
+
+  jvm_512MB_nativeEnabled="false"
+
+
+  if [[ -z "${SIZE}" ]]; then
+    echo "Choose the heap configuration:"
+    select DIRNAME in 1GB 2GB 3GB 512MB; do
+      echo "Using '${DIRNAME}' configuration"
+      SIZE=${DIRNAME}
+      break
+    done
+  elif [[ "${SIZE}" != "1GB" && "${SIZE}" != "2GB"  && "${SIZE}" != "3GB" && "${SIZE}" != "512MB" ]]; then
+    echo "Invalid memory size"
+    echo "Supported sizes: '1GB' '2GB' '3GB' '512MB'"
+    exit 1
+  fi
+
+  if [[ -z "${TYPE}" ]]; then
+    echo
+    echo "Choose the Accumulo memory-map type:"
+    select TYPENAME in Java Native; do
+      if [[ "${TYPENAME}" == "Native" ]]; then
+        TYPE="native"
+        echo "Don't forget to build the native libraries using the command 'bin/accumulo build-native'"
+      elif [[ "${TYPENAME}" == "Java" ]]; then
+        TYPE="jvm"
+      fi
+      echo "Using '${TYPE}' configuration"
+      echo
+      break
+    done
+  fi
+
+  if [[ -z "${HADOOP_VERSION}" ]]; then
+    echo
+    echo "Choose the Apache Hadoop version:"
+    select HADOOP in 'Hadoop 2' 'HDP 2.0/2.1' 'HDP 2.2' 'IOP 4.1'; do
+      if [ "${HADOOP}" == "Hadoop 2" ]; then
+        HADOOP_VERSION="2"
+      elif [ "${HADOOP}" == "HDP 2.0/2.1" ]; then
+        HADOOP_VERSION="HDP2"
+      elif [ "${HADOOP}" == "HDP 2.2" ]; then
+        HADOOP_VERSION="HDP2.2"
+      elif [ "${HADOOP}" == "IOP 4.1" ]; then
+        HADOOP_VERSION="IOP4.1"
+      fi
+      echo "Using Hadoop version '${HADOOP_VERSION}' configuration"
+      echo
+      break
+    done
+  elif [[ "${HADOOP_VERSION}" != "2" && "${HADOOP_VERSION}" != "HDP2" && "${HADOOP_VERSION}" != "HDP2.2" ]]; then
+    echo "Invalid Hadoop version"
+    echo "Supported Hadoop versions: '2', 'HDP2', 'HDP2.2'"
+    exit 1
+  fi
+
+  TRACE_USER="root"
+
+  if [[ ! -z "${KERBEROS}" ]]; then
+    echo
+    read -p "Enter server's Kerberos principal: " PRINCIPAL
+    read -p "Enter server's Kerberos keytab: " KEYTAB
+    TRACE_USER="${PRINCIPAL}"
+  fi
+
+  for var in SIZE TYPE HADOOP_VERSION; do
+    if [[ -z ${!var} ]]; then
+      echo "Invalid $var configuration"
+      exit 1
+    fi
+  done
+
+  TSERVER="${TYPE}_${SIZE}_tServer"
+  MASTER="_${SIZE}_master"
+  MONITOR="_${SIZE}_monitor"
+  GC="_${SIZE}_gc"
+  SHELL="_${SIZE}_shell"
+  OTHER="_${SIZE}_other"
+
+  MEMORY_MAP_MAX="_${SIZE}_memoryMapMax"
+  NATIVE="${TYPE}_${SIZE}_nativeEnabled"
+  CACHE_DATA_SIZE="_${SIZE}_cacheDataSize"
+  CACHE_INDEX_SIZE="_${SIZE}_cacheIndexSize"
+  SORT_BUFFER_SIZE="_${SIZE}_sortBufferSize"
+  WAL_MAX_SIZE="_${SIZE}_waLogMaxSize"
+
+  MAVEN_PROJ_BASEDIR=""
+
+  if [[ ! -z "${BASE_DIR}" ]]; then
+    MAVEN_PROJ_BASEDIR="\n  <property>\n    <name>general.maven.project.basedir</name>\n    <value>${BASE_DIR}</value>\n  </property>\n"
+  fi
+
+  mkdir -p "${CONF_DIR}" && cp "${TEMPLATE_CONF_DIR}"/* "${CONF_DIR}"/
+
+  if [[ -f "${CONF_DIR}/examples/client.conf" ]]; then
+    cp "${CONF_DIR}"/examples/client.conf "${CONF_DIR}"/
+  fi
+
+  #Configure accumulo-env.sh
+  sed -e "s/\${tServerHigh_tServerLow}/${!TSERVER}/" \
+    -e "s/\${masterHigh_masterLow}/${!MASTER}/" \
+    -e "s/\${monitorHigh_monitorLow}/${!MONITOR}/" \
+    -e "s/\${gcHigh_gcLow}/${!GC}/" \
+    -e "s/\${shellHigh_shellLow}/${!SHELL}/" \
+    -e "s/\${otherHigh_otherLow}/${!OTHER}/" \
+    "${TEMPLATE_CONF_DIR}/$ACCUMULO_ENV" > "${CONF_DIR}/$ACCUMULO_ENV"
+
+  #Configure accumulo-site.xml
+  sed -e "s/\${memMapMax}/${!MEMORY_MAP_MAX}/" \
+    -e "s/\${nativeEnabled}/${!NATIVE}/" \
+    -e "s/\${cacheDataSize}/${!CACHE_DATA_SIZE}/" \
+    -e "s/\${cacheIndexSize}/${!CACHE_INDEX_SIZE}/" \
+    -e "s/\${sortBufferSize}/${!SORT_BUFFER_SIZE}/" \
+    -e "s/\${waLogMaxSize}/${!WAL_MAX_SIZE}/" \
+    -e "s=\${traceUser}=${TRACE_USER}=" \
+    -e "s=\${mvnProjBaseDir}=${MAVEN_PROJ_BASEDIR}=" "${TEMPLATE_CONF_DIR}/$ACCUMULO_SITE" > "${CONF_DIR}/$ACCUMULO_SITE"
+
+  # If we're not using kerberos, filter out the krb properties
+  if [[ -z "${KERBEROS}" ]]; then
+    sed -e 's/<!-- Kerberos requirements -->/<!-- Kerberos requirements --><!--/' \
+      -e 's/<!-- End Kerberos requirements -->/--><!-- End Kerberos requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+  else
+    # Make the substitutions
+    sed -e "s!\${keytab}!${KEYTAB}!" \
+      -e "s!\${principal}!${PRINCIPAL}!" \
+      "${CONF_DIR}/${ACCUMULO_SITE}" > temp
+    mv temp "${CONF_DIR}/${ACCUMULO_SITE}"
+  fi
+
+  # Configure hadoop version
+  if [[ "${HADOOP_VERSION}" == "2" ]]; then
+    sed -e 's/<!-- HDP 2.0 requirements -->/<!-- HDP 2.0 requirements --><!--/' \
+      -e 's/<!-- End HDP 2.0 requirements -->/--><!-- End HDP 2.0 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- HDP 2.2 requirements -->/<!-- HDP 2.2 requirements --><!--/' \
+      -e 's/<!-- End HDP 2.2 requirements -->/--><!-- End HDP 2.2 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- IOP 4.1 requirements -->/<!-- IOP 4.1 requirements --><!--/' \
+      -e 's/<!-- End IOP 4.1 requirements -->/--><!-- End IOP 4.1 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+  elif [[ "${HADOOP_VERSION}" == "HDP2" ]]; then
+    sed -e 's/<!-- Hadoop 2 requirements -->/<!-- Hadoop 2 requirements --><!--/' \
+      -e 's/<!-- End Hadoop 2 requirements -->/--><!-- End Hadoop 2 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- HDP 2.2 requirements -->/<!-- HDP 2.2 requirements --><!--/' \
+      -e 's/<!-- End HDP 2.2 requirements -->/--><!-- End HDP 2.2 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- IOP 4.1 requirements -->/<!-- IOP 4.1 requirements --><!--/' \
+      -e 's/<!-- End IOP 4.1 requirements -->/--><!-- End IOP 4.1 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+  elif [[ "${HADOOP_VERSION}" == "HDP2.2" ]]; then
+    sed -e 's/<!-- Hadoop 2 requirements -->/<!-- Hadoop 2 requirements --><!--/' \
+      -e 's/<!-- End Hadoop 2 requirements -->/--><!-- End Hadoop 2 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- HDP 2.0 requirements -->/<!-- HDP 2.0 requirements --><!--/' \
+      -e 's/<!-- End HDP 2.0 requirements -->/--><!-- End HDP 2.0 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- IOP 4.1 requirements -->/<!-- IOP 4.1 requirements --><!--/' \
+      -e 's/<!-- End IOP 4.1 requirements -->/--><!-- End IOP 4.1 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+  elif [[ "${HADOOP_VERSION}" == "IOP4.1" ]]; then
+    sed -e 's/<!-- Hadoop 2 requirements -->/<!-- Hadoop 2 requirements --><!--/' \
+      -e 's/<!-- End Hadoop 2 requirements -->/--><!-- End Hadoop 2 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- HDP 2.0 requirements -->/<!-- HDP 2.0 requirements --><!--/' \
+      -e 's/<!-- End HDP 2.0 requirements -->/--><!-- End HDP 2.0 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+    sed -e 's/<!-- HDP 2.2 requirements -->/<!-- HDP 2.2 requirements --><!--/' \
+      -e 's/<!-- End HDP 2.2 requirements -->/--><!-- End HDP 2.2 requirements -->/' \
+      "${CONF_DIR}/$ACCUMULO_SITE" > temp
+    mv temp "${CONF_DIR}/$ACCUMULO_SITE"
+  fi
+
+  #Additional setup steps for native configuration.
+  if [[ ${TYPE} == native ]]; then
+    if [[ $(uname) == Linux ]]; then
+      if [[ -z $HADOOP_PREFIX ]]; then
+        echo "WARNING: HADOOP_PREFIX not set, cannot automatically configure LD_LIBRARY_PATH to include Hadoop native libraries"
+      else
+        NATIVE_LIB=$(readlink -ef "$(dirname "$(for x in $(find "$HADOOP_PREFIX" -name libhadoop.so); do ld "$x" 2>/dev/null && echo "$x" && break; done)" 2>>/dev/null)" 2>>/dev/null)
+        if [[ -z $NATIVE_LIB ]]; then
+          echo -e "WARNING: The Hadoop native libraries could not be found for your system in: $HADOOP_PREFIX"
+        else
+          sed "/# Should the monitor/ i export LD_LIBRARY_PATH=${NATIVE_LIB}:\${LD_LIBRARY_PATH}" "${CONF_DIR}/$ACCUMULO_ENV" > temp
+          mv temp "${CONF_DIR}/$ACCUMULO_ENV"
+          echo -e "Added ${NATIVE_LIB} to the LD_LIBRARY_PATH"
+        fi
+      fi
+    fi
+    echo -e "Please remember to compile the Accumulo native libraries using the command 'bin/accumulo build-native' and to set the LD_LIBRARY_PATH variable in the ${CONF_DIR}/accumulo-env.sh script if needed."
+  fi
+  echo "Setup complete"
+}
+
 function main() {
 
   # Start: Resolve Script Directory
@@ -73,7 +456,7 @@ function main() {
   # Stop: Resolve Script Directory
 
   if [[ "$1" == "create-config" ]]; then
-    "$basedir/libexec/config.sh" "${@:2}"
+    create_config "${@:2}"
     exit 0
   elif [[ "$1" == "build-native" ]]; then
     build_native "${@:2}"
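The create_config function folded into this script selects per-size JVM settings by composing a variable name from TYPE and SIZE and dereferencing it with bash indirect expansion (`${!name}`). A minimal sketch of that pattern, using two of the settings defined above:

```shell
#!/usr/bin/env bash
# Per-(type, size) settings, one variable per combination, as in create_config.
native_1GB_tServer="-Xmx128m -Xms128m"
jvm_1GB_tServer="-Xmx384m -Xms384m"

TYPE="jvm"
SIZE="1GB"

# Compose the variable name, then dereference it indirectly with ${!...}.
TSERVER="${TYPE}_${SIZE}_tServer"
echo "${!TSERVER}"   # -> -Xmx384m -Xms384m
```

The same indirection drives the sed substitutions into accumulo-env.sh and accumulo-site.xml, so one template serves every size/type combination.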

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/bin/accumulo-cluster
----------------------------------------------------------------------
diff --git a/assemble/bin/accumulo-cluster b/assemble/bin/accumulo-cluster
index f7f44ac..a8c0362 100755
--- a/assemble/bin/accumulo-cluster
+++ b/assemble/bin/accumulo-cluster
@@ -30,7 +30,7 @@ EOF
 
 function invalid_args {
   echo -e "Invalid arguments: $1\n"
-  print_usage
+  print_usage 1>&2
   exit 1
 }
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/bin/accumulo-service
----------------------------------------------------------------------
diff --git a/assemble/bin/accumulo-service b/assemble/bin/accumulo-service
index 90a310e..a80f4ef 100755
--- a/assemble/bin/accumulo-service
+++ b/assemble/bin/accumulo-service
@@ -37,18 +37,16 @@ EOF
 
 function invalid_args {
   echo -e "Invalid arguments: $1\n"
-  print_usage
+  print_usage 1>&2
   exit 1
 }
 
 function get_host {
   host="$(hostname -s)"
   if [[ -z "$host" ]]; then
-    netcmd=/sbin/ifconfig
-    [[ ! -x $netcmd ]] && netcmd='/bin/netstat -ie'
-    host=$($netcmd 2>/dev/null| grep "inet[^6]" | awk '{print $2}' | sed 's/addr://' | grep -v 0.0.0.0 | grep -v 127.0.0.1 | head -n 1)
+    host=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1  -d'/')
     if [[ $? != 0 ]]; then
-      host=$(python -c 'import socket as s; print s.gethostbyname(s.getfqdn())')
+      host=$(getent ahosts "$(hostname -f)" | grep DGRAM | cut -f 1 -d ' ')
     fi
   fi 
   echo "$host"
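The rewritten get_host above replaces ifconfig with `ip` and `getent`. A self-contained sketch of the same fallback chain follows; the `command -v` availability guards are an addition here, since `ip` and `getent` are not present on every platform:

```shell
#!/usr/bin/env bash
# Resolve a usable host identifier, trying progressively lower-level tools.
get_host() {
  local host
  host="$(hostname -s 2>/dev/null)"
  if [[ -z "$host" ]] && command -v ip >/dev/null 2>&1; then
    # First address on an interface that is UP (same pipeline as the script).
    host=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1 -d'/')
  fi
  if [[ -z "$host" ]] && command -v getent >/dev/null 2>&1; then
    host=$(getent ahosts "$(hostname -f)" | grep DGRAM | cut -f1 -d' ')
  fi
  echo "$host"
}

get_host
```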

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/contrib/bootstrap-hdfs.sh
----------------------------------------------------------------------
diff --git a/assemble/contrib/bootstrap-hdfs.sh b/assemble/contrib/bootstrap-hdfs.sh
new file mode 100755
index 0000000..26f94f4
--- /dev/null
+++ b/assemble/contrib/bootstrap-hdfs.sh
@@ -0,0 +1,91 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Start: Resolve Script Directory
+SOURCE="${BASH_SOURCE[0]}"
+while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
+  contrib=$( cd -P "$( dirname "$SOURCE" )" && pwd )
+  SOURCE=$(readlink "$SOURCE")
+  [[ $SOURCE != /* ]] && SOURCE="$contrib/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
+done
+contrib=$( cd -P "$( dirname "$SOURCE" )" && pwd )
+basedir=$( cd -P "${contrib}"/.. && pwd )
+# Stop: Resolve Script Directory
+
+source "$basedir"/libexec/load-env.sh
+
+#
+# Find the system context directory in HDFS
+#
+SYSTEM_CONTEXT_HDFS_DIR=$(grep -A1 "general.vfs.classpaths" "$ACCUMULO_CONF_DIR/accumulo-site.xml" | tail -1 | perl -pe 's/\s+<value>//; s/<\/value>//; s/,.+$//; s|[^/]+$||; print $ARGV[1]')
+
+if [ -z "$SYSTEM_CONTEXT_HDFS_DIR" ]
+then
+  echo "Your accumulo-site.xml file is not set up for the HDFS Classloader. Please add the following to your accumulo-site.xml file where ##CLASSPATH## is one of the following formats:"
+  echo "A single directory: hdfs://host:port/directory/"
+  echo "A single directory with a regex: hdfs://host:port/directory/.*.jar"
+  echo "Multiple directories: hdfs://host:port/directory/.*.jar,hdfs://host:port/directory2/"
+  echo ""
+  echo "<property>"
+  echo "   <name>general.vfs.classpaths</name>"
+  echo "   <value>##CLASSPATH##</value>"
+  echo "   <description>location of the jars for the default (system) context</description>"
+  echo "</property>"
+  exit 1
+fi
+
+#
+# Create the system context directory in HDFS if it does not exist
+#
+"$HADOOP_PREFIX/bin/hadoop" fs -ls "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
+if [[ $? != 0 ]]; then
+  "$HADOOP_PREFIX/bin/hadoop" fs -mkdir "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
+  if [[ $? != 0 ]]; then
+    echo "Unable to create classpath directory at $SYSTEM_CONTEXT_HDFS_DIR"
+    exit 1
+  fi
+fi
+
+#
+# Replicate to all tservers to avoid network contention on startup
+#
+TSERVERS=$ACCUMULO_CONF_DIR/tservers
+NUM_TSERVERS=$(egrep -v '(^#|^\s*$)' "$TSERVERS" | wc -l)
+
+# Let each datanode service around 50 clients
+REP=$(( NUM_TSERVERS / 50 ))
+(( REP < 3 )) && REP=3
+
+#
+# Copy all jars in lib to the system context directory
+#
+"$HADOOP_PREFIX/bin/hadoop" fs -moveFromLocal "$ACCUMULO_LIB_DIR"/*.jar "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
+"$HADOOP_PREFIX/bin/hadoop" fs -setrep -R $REP "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
+
+#
+# We need some of the jars in lib, copy them back out and remove them from the system context dir
+#
+"$HADOOP_PREFIX/bin/hadoop" fs -copyToLocal "$SYSTEM_CONTEXT_HDFS_DIR/commons-vfs2.jar" "$ACCUMULO_LIB_DIR/."  > /dev/null
+"$HADOOP_PREFIX/bin/hadoop" fs -rm "$SYSTEM_CONTEXT_HDFS_DIR/commons-vfs2.jar"  > /dev/null
+"$HADOOP_PREFIX/bin/hadoop" fs -copyToLocal "$SYSTEM_CONTEXT_HDFS_DIR/accumulo-start.jar" "$ACCUMULO_LIB_DIR/."  > /dev/null
+"$HADOOP_PREFIX/bin/hadoop" fs -rm "$SYSTEM_CONTEXT_HDFS_DIR/accumulo-start.jar"  > /dev/null
+"$HADOOP_PREFIX/bin/hadoop" fs -copyToLocal "$SYSTEM_CONTEXT_HDFS_DIR/slf4j*.jar" "$ACCUMULO_LIB_DIR/."  > /dev/null
+"$HADOOP_PREFIX/bin/hadoop" fs -rm "$SYSTEM_CONTEXT_HDFS_DIR/slf4j*.jar"  > /dev/null
+for f in $(grep -v '^#' "$ACCUMULO_CONF_DIR/tservers")
+do
+  rsync -ra --delete "$ACCUMULO_HOME" "$f:$(dirname "$ACCUMULO_HOME")"
+done
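The replication sizing in bootstrap-hdfs.sh above (one replica per ~50 tservers, never fewer than 3) can be isolated into a small function; this standalone sketch mirrors the script's arithmetic:

```shell
#!/usr/bin/env bash
# Replication factor for the system context dir: NUM_TSERVERS / 50, floor of 3.
rep_for() {
  local num_tservers=$1
  local rep=$(( num_tservers / 50 ))
  (( rep < 3 )) && rep=3
  echo "$rep"
}

rep_for 10    # -> 3 (integer division gives 0, the floor of 3 applies)
rep_for 400   # -> 8
```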

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/contrib/check-tservers
----------------------------------------------------------------------
diff --git a/assemble/contrib/check-tservers b/assemble/contrib/check-tservers
new file mode 100755
index 0000000..7f9850e
--- /dev/null
+++ b/assemble/contrib/check-tservers
@@ -0,0 +1,199 @@
+#! /usr/bin/env python
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# This script will check the configuration and uniformity of all the nodes in a cluster.
+# Checks
+#   each node is reachable via ssh
+#   login identity is the same
+#   the physical memory is the same
+#   the mounts are the same on each machine
+#   a set of writable locations (typically different disks) are in fact writable
+# 
+# In order to check for writable partitions, you must configure the WRITABLE variable below.
+#
+
+import subprocess
+import time
+import select
+import os
+import sys
+import fcntl
+import signal
+if not sys.platform.startswith('linux'):
+   sys.stderr.write('This script only works on linux, sorry.\n')
+   sys.exit(1)
+
+TIMEOUT = 5
+WRITABLE = []
+#WRITABLE = ['/srv/hdfs1', '/srv/hdfs2', '/srv/hdfs3']
+
+def ssh(tserver, *args):
+    'execute a command on a remote tserver and return the Popen handle'
+    handle = subprocess.Popen( ('ssh', '-o', 'StrictHostKeyChecking=no', '-q', '-A', '-n', tserver) + args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    handle.tserver = tserver
+    handle.finished = False
+    handle.out = ''
+    return handle
+
+def wait(handles, seconds):
+    '''wait for lots of handles simultaneously, and kill anything that doesn't return in seconds time.
+    Note that stdout will be stored on the handle as the "out" field and "finished" will be set to True'''
+    handles = handles[:]
+    stop = time.time() + seconds
+    for h in handles:
+       fcntl.fcntl(h.stdout, fcntl.F_SETFL, os.O_NONBLOCK)
+    while handles and time.time() < stop:
+       wait = max(0, stop - time.time())
+       handleMap = dict( [(h.stdout, h) for h in handles] )
+       rd, wr, err = select.select(handleMap.keys(), [], [], wait)
+       for r in rd:
+           handle = handleMap[r]
+           while 1:
+               more = handle.stdout.read(1024)
+               if more == '':
+                   handles.remove(handle)
+                   handle.poll()
+                   handle.wait()
+                   handle.finished = True
+               handle.out += more
+               if len(more) < 1024:
+                   break
+    for handle in handles:
+       os.kill(handle.pid, signal.SIGKILL)
+       handle.poll()
+
+def runAll(tservers, *cmd):
+    'Run the given command on all the tservers, returns Popen handles'
+    handles = []
+    for tserver in tservers:
+        handles.append(ssh(tserver, *cmd))
+    wait(handles, TIMEOUT)
+    return handles
+
+def checkIdentity(tservers):
+    'Ensure the login identity is consistent across the tservers'
+    handles = runAll(tservers, 'id', '-u', '-n')
+    bad = set()
+    myIdentity = os.popen('id -u -n').read().strip()
+    for h in handles:
+        if not h.finished or h.returncode != 0:
+            print '#', 'cannot look at identity on', h.tserver
+            bad.add(h.tserver)
+        else:
+            identity = h.out.strip()
+            if identity != myIdentity:
+                print '#', h.tserver, 'inconsistent identity', identity
+                bad.add(h.tserver)
+    return bad
+
+def checkMemory(tservers):
+    'Run free on all tservers and look for weird results'
+    handles = runAll(tservers, 'free')
+    bad = set()
+    mem = {}
+    swap = {}
+    for h in handles:
+        if not h.finished or h.returncode != 0:
+            print '#', 'cannot look at memory on', h.tserver
+            bad.add(h.tserver)
+        else:
+            if h.out.find('Swap:') < 0:
+               print '#',h.tserver,'has no swap'
+               bad.add(h.tserver)
+               continue
+            lines = h.out.split('\n')
+            for line in lines:
+               if line.startswith('Mem:'):
+                  mem.setdefault(line.split()[1],set()).add(h.tserver)
+               if line.startswith('Swap:'):
+                  swap.setdefault(line.split()[1],set()).add(h.tserver)
+    # order memory sizes by most common
+    mems = sorted([(len(v), k, v) for k, v in mem.items()], reverse=True)
+    mostCommon = float(mems[0][1])
+    for _, size, tservers in mems[1:]:
+        fract = abs(mostCommon - float(size)) / mostCommon
+        if fract > 0.05:
+            print '#',', '.join(tservers), ': unusual memory size', size
+            bad.update(tservers)
+    swaps = sorted([(len(v), k, v) for k, v in swap.items()], reverse=True)
+    mostCommon = float(swaps[0][1])
+    for _, size, tservers in swaps[1:]:
+        fract = abs(mostCommon - float(size)) / mostCommon
+        if fract > 0.05:
+            print '#',', '.join(tservers), ': unusual swap size', size
+            bad.update(tservers)
+    return bad
+
+def checkWritable(tservers):
+    'Touch all the directories that should be writable by this user; return any nodes that fail'
+    if not WRITABLE:
+       print '# WRITABLE value not configured, not checking partitions'
+       return []
+    handles = runAll(tservers, 'touch', *WRITABLE)
+    bad = set()
+    for h in handles:
+        if not h.finished or h.returncode != 0:
+           bad.add(h.tserver)
+           print '#', h.tserver, 'some drives are not writable'
+    return bad
+
+def checkMounts(tservers):
+    'Check the file systems that are mounted and report any that are unusual'
+    handles = runAll(tservers, 'mount')
+    mounts = {}
+    finished = set()
+    bad = set()
+    for handle in handles:
+        if handle.finished and handle.returncode == 0:
+            for line in handle.out.split('\n'):
+                words = line.split()
+                if len(words) < 5: continue
+                if words[4] == 'nfs': continue
+                if words[0].find(':/') >= 0: continue
+                mount = words[2]
+                mounts.setdefault(mount, set()).add(handle.tserver)
+            finished.add(handle.tserver)
+        else:
+            bad.add(handle.tserver)
+            print '#', handle.tserver, 'did not finish'
+    for m in sorted(mounts.keys()):
+        diff = finished - mounts[m]
+        if diff:
+            bad.update(diff)
+            print '#', m, 'not mounted on', ', '.join(diff)
+    return bad
+
+def main(argv):
+    if len(argv) < 1:
+        sys.stderr.write('Usage: check-tservers tservers\n')
+        sys.exit(1)
+    sys.stdin.close()
+    tservers = set()
+    for tserver in open(argv[0]):
+        hashPos = tserver.find('#')
+        if hashPos >= 0:
+           tserver = tserver[:hashPos]
+        tserver = tserver.strip()
+        if not tserver: continue
+        tservers.add(tserver)
+    bad = set()
+    for test in checkIdentity, checkMemory, checkMounts, checkWritable:
+        bad.update(test(tservers - bad))
+    for tserver in sorted(tservers - bad):
+        print tserver
+
+main(sys.argv[1:])
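As an editorial aside: the outlier check in checkMemory above groups hosts by the size they report, takes the most common size as the baseline, and flags anything more than 5% off. The same idea can be sketched in modern Python; this is an illustration only, not part of the patch, and the names (`find_outliers`, `sizes_by_host`) are invented here:

```python
def find_outliers(sizes_by_host, tolerance=0.05):
    """Group hosts by reported size, treat the most common size as the
    baseline, and flag hosts whose size deviates by more than tolerance."""
    groups = {}
    for host, size in sizes_by_host.items():
        groups.setdefault(size, set()).add(host)
    # rank sizes by how many hosts report them, most common first
    ranked = sorted(((len(hosts), size) for size, hosts in groups.items()),
                    reverse=True)
    baseline = float(ranked[0][1])
    bad = set()
    for _, size in ranked[1:]:
        if abs(baseline - float(size)) / baseline > tolerance:
            bad.update(groups[size])
    return bad

mem_kb = {'host1': 16384, 'host2': 16384, 'host3': 16380, 'host4': 8192}
print(sorted(find_outliers(mem_kb)))  # prints ['host4']
```

host3 is within 5% of the 16384 baseline and passes; host4 is ~50% off and gets flagged, mirroring the "unusual memory size" report in the script.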

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/contrib/gen-monitor-cert.sh
----------------------------------------------------------------------
diff --git a/assemble/contrib/gen-monitor-cert.sh b/assemble/contrib/gen-monitor-cert.sh
new file mode 100755
index 0000000..e7f313e
--- /dev/null
+++ b/assemble/contrib/gen-monitor-cert.sh
@@ -0,0 +1,85 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Start: Resolve Script Directory
+SOURCE="${BASH_SOURCE[0]}"
+while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
+   contrib=$( cd -P "$( dirname "$SOURCE" )" && pwd )
+   SOURCE=$(readlink "$SOURCE")
+   [[ $SOURCE != /* ]] && SOURCE="$contrib/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
+done
+contrib=$( cd -P "$( dirname "$SOURCE" )" && pwd )
+basedir=$( cd -P "${contrib}"/.. && pwd )
+# Stop: Resolve Script Directory
+
+source "$basedir"/libexec/load-env.sh
+
+ALIAS="default"
+KEYPASS=$(LC_CTYPE=C tr -dc '#-~' < /dev/urandom | tr -d '<>&' | head -c 20)
+STOREPASS=$(LC_CTYPE=C tr -dc '#-~' < /dev/urandom | tr -d '<>&' | head -c 20)
+KEYSTOREPATH="$ACCUMULO_CONF_DIR/keystore.jks"
+TRUSTSTOREPATH="$ACCUMULO_CONF_DIR/cacerts.jks"
+CERTPATH="$ACCUMULO_CONF_DIR/server.cer"
+
+if [[ -e "$KEYSTOREPATH" ]]; then
+   rm -i "$KEYSTOREPATH"
+   if [[ -e "$KEYSTOREPATH" ]]; then
+      echo "KeyStore already exists, exiting"
+      exit 1
+   fi
+fi
+
+if [[ -e "$TRUSTSTOREPATH" ]]; then
+   rm -i "$TRUSTSTOREPATH"
+   if [[ -e "$TRUSTSTOREPATH" ]]; then
+      echo "TrustStore already exists, exiting"
+      exit 2
+   fi
+fi
+
+if [[ -e "$CERTPATH" ]]; then
+   rm -i "$CERTPATH"
+   if [[ -e "$CERTPATH" ]]; then
+      echo "Certificate already exists, exiting"
+      exit 3
+   fi
+fi
+
+"${JAVA_HOME}/bin/keytool" -genkey -alias "$ALIAS" -keyalg RSA -keypass "$KEYPASS" -storepass "$KEYPASS" -keystore "$KEYSTOREPATH"
+"${JAVA_HOME}/bin/keytool" -export -alias "$ALIAS" -storepass "$KEYPASS" -file "$CERTPATH" -keystore "$KEYSTOREPATH"
+"${JAVA_HOME}/bin/keytool" -import -v -trustcacerts -alias "$ALIAS" -file "$CERTPATH" -keystore "$TRUSTSTOREPATH" -storepass "$STOREPASS" <<< "yes"
+
+echo
+echo "Keystore and truststore generated. Now add the following to accumulo-site.xml:"
+echo
+echo "    <property>"
+echo "      <name>monitor.ssl.keyStore</name>"
+echo "      <value>$KEYSTOREPATH</value>"
+echo "    </property>"
+echo "    <property>"
+echo "      <name>monitor.ssl.keyStorePassword</name>"
+echo "      <value>$KEYPASS</value>"
+echo "    </property>"
+echo "    <property>"
+echo "      <name>monitor.ssl.trustStore</name>"
+echo "      <value>$TRUSTSTOREPATH</value>"
+echo "    </property>"
+echo "    <property>"
+echo "      <name>monitor.ssl.trustStorePassword</name>"
+echo "      <value>$STOREPASS</value>"
+echo "    </property>"
+echo
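The KEYPASS/STOREPASS generation above uses a tr pipeline over /dev/urandom to build a 20-character password from printable ASCII ('#' through '~'), dropping <, > and & so the value can be pasted into accumulo-site.xml without XML escaping. A rough Python equivalent, purely for illustration (`gen_store_password` is a made-up name, not part of the patch):

```python
import secrets

def gen_store_password(length=20):
    """Mimic the tr pipeline: printable ASCII from '#' to '~',
    minus the XML-unfriendly characters <, > and &."""
    allowed = [c for c in (chr(i) for i in range(ord('#'), ord('~') + 1))
               if c not in '<>&']
    return ''.join(secrets.choice(allowed) for _ in range(length))

pw = gen_store_password()
print(len(pw))  # prints 20
```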

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/contrib/tool.sh
----------------------------------------------------------------------
diff --git a/assemble/contrib/tool.sh b/assemble/contrib/tool.sh
new file mode 100755
index 0000000..cb8cedc
--- /dev/null
+++ b/assemble/contrib/tool.sh
@@ -0,0 +1,93 @@
+#! /usr/bin/env bash
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Start: Resolve Script Directory
+SOURCE="${BASH_SOURCE[0]}"
+while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
+   contrib=$( cd -P "$( dirname "$SOURCE" )" && pwd )
+   SOURCE=$(readlink "$SOURCE")
+   [[ $SOURCE != /* ]] && SOURCE="$contrib/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
+done
+contrib=$( cd -P "$( dirname "$SOURCE" )" && pwd )
+basedir=$( cd -P "${contrib}"/.. && pwd )
+# Stop: Resolve Script Directory
+
+source "$basedir"/libexec/load-env.sh
+
+if [[ -z "$HADOOP_PREFIX" ]] ; then
+   echo "HADOOP_PREFIX is not set.  Please make sure it's set globally or in conf/accumulo-env.sh"
+   exit 1
+fi
+if [[ -z "$ZOOKEEPER_HOME" ]] ; then
+   echo "ZOOKEEPER_HOME is not set.  Please make sure it's set globally or in conf/accumulo-env.sh"
+   exit 1
+fi
+
+ZOOKEEPER_CMD="ls -1 $ZOOKEEPER_HOME/zookeeper-[0-9]*[^csn].jar "
+if [[ $(eval "$ZOOKEEPER_CMD" | wc -l) -ne 1 ]] ; then
+   echo "Not exactly one zookeeper jar in $ZOOKEEPER_HOME"
+   exit 1
+fi
+ZOOKEEPER_LIB=$(eval "$ZOOKEEPER_CMD")
+
+LIB="$ACCUMULO_LIB_DIR"
+CORE_LIB="$LIB/accumulo-core.jar"
+FATE_LIB="$LIB/accumulo-fate.jar"
+THRIFT_LIB="$LIB/libthrift.jar"
+JCOMMANDER_LIB="$LIB/jcommander.jar"
+COMMONS_VFS_LIB="$LIB/commons-vfs2.jar"
+GUAVA_LIB="$LIB/guava.jar"
+HTRACE_LIB="$LIB/htrace-core.jar"
+
+USERJARS=" "  # non-empty placeholder; cleared by -libjars so jars are only read after it
+for arg in "$@"; do
+   if [ "$arg" != "-libjars" -a -z "$TOOLJAR" ]; then
+      TOOLJAR="$arg"
+      shift
+   elif [ "$arg" != "-libjars" -a -z "$CLASSNAME" ]; then
+      CLASSNAME="$arg"
+      shift
+   elif [ -z "$USERJARS" ]; then
+      USERJARS=$(echo "$arg" | tr "," " ")
+      shift
+   elif [ "$arg" = "-libjars" ]; then
+      USERJARS=""
+      shift
+   else
+      break
+   fi
+done
+
+LIB_JARS="$THRIFT_LIB,$CORE_LIB,$FATE_LIB,$ZOOKEEPER_LIB,$JCOMMANDER_LIB,$COMMONS_VFS_LIB,$GUAVA_LIB,$HTRACE_LIB"
+H_JARS="$THRIFT_LIB:$CORE_LIB:$FATE_LIB:$ZOOKEEPER_LIB:$JCOMMANDER_LIB:$COMMONS_VFS_LIB:$GUAVA_LIB:$HTRACE_LIB"
+
+for jar in $USERJARS; do
+   LIB_JARS="$LIB_JARS,$jar"
+   H_JARS="$H_JARS:$jar"
+done
+export HADOOP_CLASSPATH="$H_JARS:$HADOOP_CLASSPATH"
+
+if [[ -z "$CLASSNAME" || -z "$TOOLJAR" ]]; then
+   echo "Usage: tool.sh path/to/myTool.jar my.tool.class.Name [-libjars my1.jar,my2.jar]" 1>&2
+   exit 1
+fi
+
+#echo USERJARS=$USERJARS
+#echo CLASSNAME=$CLASSNAME
+#echo HADOOP_CLASSPATH=$HADOOP_CLASSPATH
+#echo exec "$HADOOP_PREFIX/bin/hadoop" jar "$TOOLJAR" "$CLASSNAME" -libjars \"$LIB_JARS\" $ARGS
+exec "$HADOOP_PREFIX/bin/hadoop" jar "$TOOLJAR" "$CLASSNAME" -libjars "$LIB_JARS" "$@"
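tool.sh's argument loop above pulls out the tool jar, the main class, and an optional comma-separated -libjars list before handing everything else to hadoop jar. A simplified Python sketch of that parsing, for illustration only (`parse_tool_args` is a hypothetical name, and this deliberately ignores some quirks of the shell loop's ordering rules):

```python
def parse_tool_args(args):
    """Sketch of tool.sh's argument handling: the first two positional
    arguments are the tool jar and main class; -libjars takes a
    comma-separated jar list; remaining args are passed through."""
    tooljar = classname = None
    userjars = []
    rest = []
    it = iter(args)
    for arg in it:
        if arg == '-libjars':
            # the shell script splits the next argument on commas
            userjars = next(it, '').split(',')
        elif tooljar is None:
            tooljar = arg
        elif classname is None:
            classname = arg
        else:
            rest.append(arg)
    return tooljar, classname, userjars, rest
```

For example, `parse_tool_args(['myTool.jar', 'my.tool.Main', '-libjars', 'a.jar,b.jar', '--verbose'])` yields the jar, the class, the two user jars, and `['--verbose']` as pass-through arguments.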

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/bootstrap-hdfs.sh
----------------------------------------------------------------------
diff --git a/assemble/libexec/bootstrap-hdfs.sh b/assemble/libexec/bootstrap-hdfs.sh
deleted file mode 100755
index 7748604..0000000
--- a/assemble/libexec/bootstrap-hdfs.sh
+++ /dev/null
@@ -1,90 +0,0 @@
-#! /usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Start: Resolve Script Directory
-SOURCE="${BASH_SOURCE[0]}"
-while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
-  libexec=$( cd -P "$( dirname "$SOURCE" )" && pwd )
-  SOURCE=$(readlink "$SOURCE")
-  [[ $SOURCE != /* ]] && SOURCE="$libexec/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
-done
-libexec=$( cd -P "$( dirname "$SOURCE" )" && pwd )
-# Stop: Resolve Script Directory
-
-source "$libexec"/load-env.sh
-
-#
-# Find the system context directory in HDFS
-#
-SYSTEM_CONTEXT_HDFS_DIR=$(grep -A1 "general.vfs.classpaths" "$ACCUMULO_CONF_DIR/accumulo-site.xml" | tail -1 | perl -pe 's/\s+<value>//; s/<\/value>//; s/,.+$//; s|[^/]+$||; print $ARGV[1]')
-
-if [ -z "$SYSTEM_CONTEXT_HDFS_DIR" ]
-then
-  echo "Your accumulo-site.xml file is not set up for the HDFS Classloader. Please add the following to your accumulo-site.xml file where ##CLASSPATH## is one of the following formats:"
-  echo "A single directory: hdfs://host:port/directory/"
-  echo "A single directory with a regex: hdfs://host:port/directory/.*.jar"
-  echo "Multiple directories: hdfs://host:port/directory/.*.jar,hdfs://host:port/directory2/"
-  echo ""
-  echo "<property>"
-  echo "   <name>general.vfs.classpaths</name>"
-  echo "   <value>##CLASSPATH##</value>"
-  echo "   <description>location of the jars for the default (system) context</description>"
-  echo "</property>"
-  exit 1
-fi
-
-#
-# Create the system context directy in HDFS if it does not exist
-#
-"$HADOOP_PREFIX/bin/hadoop" fs -ls "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
-if [[ $? != 0 ]]; then
-  "$HADOOP_PREFIX/bin/hadoop" fs -mkdir "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
-  if [[ $? != 0 ]]; then
-    echo "Unable to create classpath directory at $SYSTEM_CONTEXT_HDFS_DIR"
-    exit 1
-  fi
-fi
-
-#
-# Replicate to all tservers to avoid network contention on startup
-#
-TSERVERS=$ACCUMULO_CONF_DIR/tservers
-NUM_TSERVERS=$(egrep -v '(^#|^\s*$)' "$TSERVERS" | wc -l)
-
-#let each datanode service around 50 clients
-REP=$(( NUM_TSERVERS / 50 ))
-(( REP < 3 )) && REP=3
-
-#
-# Copy all jars in lib to the system context directory
-#
-"$HADOOP_PREFIX/bin/hadoop" fs -moveFromLocal "$ACCUMULO_LIB_DIR"/*.jar "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
-"$HADOOP_PREFIX/bin/hadoop" fs -setrep -R $REP "$SYSTEM_CONTEXT_HDFS_DIR"  > /dev/null
-
-#
-# We need some of the jars in lib, copy them back out and remove them from the system context dir
-#
-"$HADOOP_PREFIX/bin/hadoop" fs -copyToLocal "$SYSTEM_CONTEXT_HDFS_DIR/commons-vfs2.jar" "$ACCUMULO_LIB_DIR/."  > /dev/null
-"$HADOOP_PREFIX/bin/hadoop" fs -rm "$SYSTEM_CONTEXT_HDFS_DIR/commons-vfs2.jar"  > /dev/null
-"$HADOOP_PREFIX/bin/hadoop" fs -copyToLocal "$SYSTEM_CONTEXT_HDFS_DIR/accumulo-start.jar" "$ACCUMULO_LIB_DIR/."  > /dev/null
-"$HADOOP_PREFIX/bin/hadoop" fs -rm "$SYSTEM_CONTEXT_HDFS_DIR/accumulo-start.jar"  > /dev/null
-"$HADOOP_PREFIX/bin/hadoop" fs -copyToLocal "$SYSTEM_CONTEXT_HDFS_DIR/slf4j*.jar" "$ACCUMULO_LIB_DIR/."  > /dev/null
-"$HADOOP_PREFIX/bin/hadoop" fs -rm "$SYSTEM_CONTEXT_HDFS_DIR/slf4j*.jar"  > /dev/null
-for f in $(grep -v '^#' "$ACCUMULO_CONF_DIR/tservers")
-do
-  rsync -ra --delete "$ACCUMULO_HOME" "$(dirname "$ACCUMULO_HOME")"
-done

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/check-tservers
----------------------------------------------------------------------
diff --git a/assemble/libexec/check-tservers b/assemble/libexec/check-tservers
deleted file mode 100755
index 7f9850e..0000000
--- a/assemble/libexec/check-tservers
+++ /dev/null
@@ -1,199 +0,0 @@
-#! /usr/bin/env python
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# This script will check the configuration and uniformity of all the nodes in a cluster.
-# Checks
-#   each node is reachable via ssh
-#   login identity is the same
-#   the physical memory is the same
-#   the mounts are the same on each machine
-#   a set of writable locations (typically different disks) are in fact writable
-# 
-# In order to check for writable partitions, you must configure the WRITABLE variable below.
-#
-
-import subprocess
-import time
-import select
-import os
-import sys
-import fcntl
-import signal
-if not sys.platform.startswith('linux'):
-   sys.stderr.write('This script only works on linux, sorry.\n')
-   sys.exit(1)
-
-TIMEOUT = 5
-WRITABLE = []
-#WRITABLE = ['/srv/hdfs1', '/srv/hdfs2', '/srv/hdfs3']
-
-def ssh(tserver, *args):
-    'execute a command on a remote tserver and return the Popen handle'
-    handle = subprocess.Popen( ('ssh', '-o', 'StrictHostKeyChecking=no', '-q', '-A', '-n', tserver) + args, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
-    handle.tserver = tserver
-    handle.finished = False
-    handle.out = ''
-    return handle
-
-def wait(handles, seconds):
-    'wait for lots of handles simultaneously, and kill anything that doesn\'t return in seconds time\n'
-    'Note that stdout will be stored on the handle as the "out" field and "finished" will be set to True'
-    handles = handles[:]
-    stop = time.time() + seconds
-    for h in handles:
-       fcntl.fcntl(h.stdout, fcntl.F_SETFL, os.O_NONBLOCK)
-    while handles and time.time() < stop:
-       wait = min(0, stop - time.time())
-       handleMap = dict( [(h.stdout, h) for h in handles] )
-       rd, wr, err = select.select(handleMap.keys(), [], [], wait)
-       for r in rd:
-           handle = handleMap[r]
-           while 1:
-               more = handle.stdout.read(1024)
-               if more == '':
-                   handles.remove(handle)
-                   handle.poll()
-                   handle.wait()
-                   handle.finished = True
-               handle.out += more
-               if len(more) < 1024:
-                   break
-    for handle in handles:
-       os.kill(handle.pid, signal.SIGKILL)
-       handle.poll()
-
-def runAll(tservers, *cmd):
-    'Run the given command on all the tservers, returns Popen handles'
-    handles = []
-    for tserver in tservers:
-        handles.append(ssh(tserver, *cmd))
-    wait(handles, TIMEOUT)
-    return handles
-
-def checkIdentity(tservers):
-    'Ensure the login identity is consistent across the tservers'
-    handles = runAll(tservers, 'id', '-u', '-n')
-    bad = set()
-    myIdentity = os.popen('id -u -n').read().strip()
-    for h in handles:
-        if not h.finished or h.returncode != 0:
-            print '#', 'cannot look at identity on', h.tserver
-            bad.add(h.tserver)
-        else:
-            identity = h.out.strip()
-            if identity != myIdentity:
-                print '#', h.tserver, 'inconsistent identity', identity
-                bad.add(h.tserver)
-    return bad
-
-def checkMemory(tservers):
-    'Run free on all tservers and look for weird results'
-    handles = runAll(tservers, 'free')
-    bad = set()
-    mem = {}
-    swap = {}
-    for h in handles:
-        if not h.finished or h.returncode != 0:
-            print '#', 'cannot look at memory on', h.tserver
-            bad.add(h.tserver)
-        else:
-            if h.out.find('Swap:') < 0:
-               print '#',h.tserver,'has no swap'
-               bad.add(h.tserver)
-               continue
-            lines = h.out.split('\n')
-            for line in lines:
-               if line.startswith('Mem:'):
-                  mem.setdefault(line.split()[1],set()).add(h.tserver)
-               if line.startswith('Swap:'):
-                  swap.setdefault(line.split()[1],set()).add(h.tserver)
-    # order memory sizes by most common
-    mems = sorted([(len(v), k, v) for k, v in mem.items()], reverse=True)
-    mostCommon = float(mems[0][1])
-    for _, size, tservers in mems[1:]:
-        fract = abs(mostCommon - float(size)) / mostCommon
-        if fract > 0.05:
-            print '#',', '.join(tservers), ': unusual memory size', size
-            bad.update(tservers)
-    swaps = sorted([(len(v), k, v) for k, v in swap.items()], reverse=True)
-    mostCommon = float(mems[0][1])
-    for _, size, tservers in swaps[1:]:
-        fract = abs(mostCommon - float(size) / mostCommon)
-        if fract > 0.05:
-            print '#',', '.join(tservers), ': unusual swap size', size
-            bad.update(tservers)
-    return bad
-
-def checkWritable(tservers):
-    'Touch all the directories that should be writable by this user return any nodes that fail'
-    if not WRITABLE:
-       print '# WRITABLE value not configured, not checking partitions'
-       return []
-    handles = runAll(tservers, 'touch', *WRITABLE)
-    bad = set()
-    for h in handles:
-        if not h.finished or h.returncode != 0:
-           bad.add(h.tserver)
-           print '#', h.tserver, 'some drives are not writable'
-    return bad
-
-def checkMounts(tservers):
-    'Check the file systems that are mounted and report any that are unusual'
-    handles = runAll(tservers, 'mount')
-    mounts = {}
-    finished = set()
-    bad = set()
-    for handle in handles:
-        if handle.finished and handle.returncode == 0:
-            for line in handle.out.split('\n'):
-                words = line.split()
-                if len(words) < 5: continue
-                if words[4] == 'nfs': continue
-                if words[0].find(':/') >= 0: continue
-                mount = words[2]
-                mounts.setdefault(mount, set()).add(handle.tserver)
-            finished.add(handle.tserver)
-        else:
-            bad.add(handle.tserver)
-            print '#', handle.tserver, 'did not finish'
-    for m in sorted(mounts.keys()):
-        diff = finished - mounts[m]
-        if diff:
-            bad.update(diff)
-            print '#', m, 'not mounted on', ', '.join(diff)
-    return bad
-
-def main(argv):
-    if len(argv) < 1:
-        sys.stderr.write('Usage: check_tservers tservers\n')
-        sys.exit(1)
-    sys.stdin.close()
-    tservers = set()
-    for tserver in open(argv[0]):
-        hashPos = tserver.find('#')
-        if hashPos >= 0:
-           tserver = tserver[:hashPos]
-        tserver = tserver.strip()
-        if not tserver: continue
-        tservers.add(tserver)
-    bad = set()
-    for test in checkIdentity, checkMemory, checkMounts, checkWritable:
-        bad.update(test(tservers - bad))
-    for tserver in sorted(tservers - bad):
-        print tserver
-
-main(sys.argv[1:])

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/cluster.sh
----------------------------------------------------------------------
diff --git a/assemble/libexec/cluster.sh b/assemble/libexec/cluster.sh
index 4f74138..3ee42c4 100755
--- a/assemble/libexec/cluster.sh
+++ b/assemble/libexec/cluster.sh
@@ -34,17 +34,14 @@ EOF
 
 function invalid_args {
   echo -e "Invalid arguments: $1\n"
-  print_usage
+  print_usage 1>&2
   exit 1
 }
 
 function get_ip() {
-  net_cmd=/sbin/ifconfig
-  [[ ! -x $net_cmd ]] && net_cmd='/bin/netstat -ie'
-
-  ip_addr=$($net_cmd 2>/dev/null| grep "inet[^6]" | awk '{print $2}' | sed 's/addr://' | grep -v 0.0.0.0 | grep -v 127.0.0.1 | head -n 1)
-  if [[ $? != 0 ]] ; then
-    ip_addr=$(python -c 'import socket as s; print s.gethostbyname(s.getfqdn())')
+  ip_addr=$(ip addr | grep 'state UP' -A2 | tail -n1 | awk '{print $2}' | cut -f1  -d'/')
+  if [[ $? != 0 ]]; then
+    ip_addr=$(getent ahosts "$(hostname -f)" | grep DGRAM | cut -f 1 -d ' ')
   fi
   echo "$ip_addr"
 }
@@ -84,7 +81,6 @@ function start_all() {
     start_tservers
   fi
 
-  ${accumulo_cmd} org.apache.accumulo.master.state.SetGoalState NORMAL
   for master in $(egrep -v '(^#|^\s*$)' "$ACCUMULO_CONF_DIR/masters"); do
     start_service "$master" master
   done
@@ -110,7 +106,6 @@ function start_here() {
 
   for host in $local_hosts; do
     if grep -q "^${host}\$" "$ACCUMULO_CONF_DIR/masters"; then
-      ${accumulo_cmd} org.apache.accumulo.master.state.SetGoalState NORMAL
       start_service "$host" master
       break
     fi

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/config.sh
----------------------------------------------------------------------
diff --git a/assemble/libexec/config.sh b/assemble/libexec/config.sh
deleted file mode 100755
index 87f51e0..0000000
--- a/assemble/libexec/config.sh
+++ /dev/null
@@ -1,408 +0,0 @@
-#! /usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-function usage {
-  cat <<EOF
-Usage: config.sh [-options]
-where options include (long options not available on all platforms):
-    -d, --dir        Alternate directory to setup config files
-    -s, --size       Supported sizes: '1GB' '2GB' '3GB' '512MB'
-    -n, --native     Configure to use native libraries
-    -j, --jvm        Configure to use the jvm
-    -o, --overwrite  Overwrite the default config directory
-    -v, --version    Specify the Apache Hadoop version supported versions: '1' '2'
-    -k, --kerberos   Configure for use with Kerberos
-    -h, --help       Print this help message
-EOF
-}
-
-# Start: Resolve Script Directory
-SOURCE="${BASH_SOURCE[0]}"
-while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
-  libexec="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
-  SOURCE="$(readlink "$SOURCE")"
-  [[ $SOURCE != /* ]] && SOURCE="$libexec/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
-done
-libexec="$( cd -P "$( dirname "$SOURCE" )" && pwd )"
-basedir=$( cd -P "${libexec}"/.. && pwd )
-# Stop: Resolve Script Directory
-
-TEMPLATE_CONF_DIR="${libexec}/templates"
-CONF_DIR="${ACCUMULO_CONF_DIR:-$basedir/conf}"
-ACCUMULO_SITE=accumulo-site.xml
-ACCUMULO_ENV=accumulo-env.sh
-
-SIZE=
-TYPE=
-HADOOP_VERSION=
-OVERWRITE="0"
-BASE_DIR=
-KERBEROS=
-
-#Execute getopt
-if [[ $(uname -s) == "Linux" ]]; then
-  args=$(getopt -o "b:d:s:njokv:h" -l "basedir:,dir:,size:,native,jvm,overwrite,kerberos,version:,help" -q -- "$@")
-else # Darwin, BSD
-  args=$(getopt b:d:s:njokv:h "$@")
-fi
-
-#Bad arguments
-if [[ $? != 0 ]]; then
-  usage 1>&2
-  exit 1
-fi
-eval set -- "${args[@]}"
-
-for i
-do
-  case "$i" in
-    -b|--basedir) #Hidden option used to set general.maven.project.basedir for developers
-      BASE_DIR=$2; shift
-      shift;;
-    -d|--dir)
-      CONF_DIR=$2; shift
-      shift;;
-    -s|--size)
-      SIZE=$2; shift
-      shift;;
-    -n|--native)
-      TYPE=native
-      shift;;
-    -j|--jvm)
-      TYPE=jvm
-      shift;;
-    -o|--overwrite)
-      OVERWRITE=1
-      shift;;
-    -v|--version)
-      HADOOP_VERSION=$2; shift
-      shift;;
-    -k|--kerberos)
-      KERBEROS="true"
-      shift;;
-    -h|--help)
-      usage
-      exit 0
-      shift;;
-    --)
-      shift
-      break;;
-  esac
-done
-
-while [[ "${OVERWRITE}" = "0" ]]; do
-  if [[ -e "${CONF_DIR}/${ACCUMULO_ENV}" || -e "${CONF_DIR}/${ACCUMULO_SITE}" ]]; then
-    echo "Warning your current config files in ${CONF_DIR} will be overwritten!"
-    echo
-    echo "How would you like to proceed?:"
-    select CHOICE in 'Continue with overwrite' 'Specify new conf dir'; do
-      if [[ "${CHOICE}" = 'Specify new conf dir' ]]; then
-        echo -n "Please specifiy new conf directory: "
-        read CONF_DIR
-      elif [[ "${CHOICE}" = 'Continue with overwrite' ]]; then
-        OVERWRITE=1
-      fi
-      break
-    done
-  else
-    OVERWRITE=1
-  fi
-done
-echo "Copying configuration files to: ${CONF_DIR}"
-
-#Native 1GB
-native_1GB_tServer="-Xmx128m -Xms128m"
-_1GB_master="-Xmx128m -Xms128m"
-_1GB_monitor="-Xmx64m -Xms64m"
-_1GB_gc="-Xmx64m -Xms64m"
-_1GB_other="-Xmx128m -Xms64m"
-_1GB_shell="${_1GB_other}"
-
-_1GB_memoryMapMax="256M"
-native_1GB_nativeEnabled="true"
-_1GB_cacheDataSize="15M"
-_1GB_cacheIndexSize="40M"
-_1GB_sortBufferSize="50M"
-_1GB_waLogMaxSize="256M"
-
-#Native 2GB
-native_2GB_tServer="-Xmx256m -Xms256m"
-_2GB_master="-Xmx256m -Xms256m"
-_2GB_monitor="-Xmx128m -Xms64m"
-_2GB_gc="-Xmx128m -Xms128m"
-_2GB_other="-Xmx256m -Xms64m"
-_2GB_shell="${_2GB_other}"
-
-_2GB_memoryMapMax="512M"
-native_2GB_nativeEnabled="true"
-_2GB_cacheDataSize="30M"
-_2GB_cacheIndexSize="80M"
-_2GB_sortBufferSize="50M"
-_2GB_waLogMaxSize="512M"
-
-#Native 3GB
-native_3GB_tServer="-Xmx1g -Xms1g -XX:NewSize=500m -XX:MaxNewSize=500m"
-_3GB_master="-Xmx1g -Xms1g"
-_3GB_monitor="-Xmx1g -Xms256m"
-_3GB_gc="-Xmx256m -Xms256m"
-_3GB_other="-Xmx1g -Xms256m"
-_3GB_shell="${_3GB_other}"
-
-_3GB_memoryMapMax="1G"
-native_3GB_nativeEnabled="true"
-_3GB_cacheDataSize="128M"
-_3GB_cacheIndexSize="128M"
-_3GB_sortBufferSize="200M"
-_3GB_waLogMaxSize="1G"
-
-#Native 512MB
-native_512MB_tServer="-Xmx48m -Xms48m"
-_512MB_master="-Xmx128m -Xms128m"
-_512MB_monitor="-Xmx64m -Xms64m"
-_512MB_gc="-Xmx64m -Xms64m"
-_512MB_other="-Xmx128m -Xms64m"
-_512MB_shell="${_512MB_other}"
-
-_512MB_memoryMapMax="80M"
-native_512MB_nativeEnabled="true"
-_512MB_cacheDataSize="7M"
-_512MB_cacheIndexSize="20M"
-_512MB_sortBufferSize="50M"
-_512MB_waLogMaxSize="100M"
-
-#JVM 1GB
-jvm_1GB_tServer="-Xmx384m -Xms384m"
-
-jvm_1GB_nativeEnabled="false"
-
-#JVM 2GB
-jvm_2GB_tServer="-Xmx768m -Xms768m"
-
-jvm_2GB_nativeEnabled="false"
-
-#JVM 3GB
-jvm_3GB_tServer="-Xmx2g -Xms2g -XX:NewSize=1G -XX:MaxNewSize=1G"
-
-jvm_3GB_nativeEnabled="false"
-
-#JVM 512MB
-jvm_512MB_tServer="-Xmx128m -Xms128m"
-
-jvm_512MB_nativeEnabled="false"
-
-
-if [[ -z "${SIZE}" ]]; then
-  echo "Choose the heap configuration:"
-  select DIRNAME in 1GB 2GB 3GB 512MB; do
-    echo "Using '${DIRNAME}' configuration"
-    SIZE=${DIRNAME}
-    break
-  done
-elif [[ "${SIZE}" != "1GB" && "${SIZE}" != "2GB"  && "${SIZE}" != "3GB" && "${SIZE}" != "512MB" ]]; then
-  echo "Invalid memory size"
-  echo "Supported sizes: '1GB' '2GB' '3GB' '512MB'"
-  exit 1
-fi
-
-if [[ -z "${TYPE}" ]]; then
-  echo
-  echo "Choose the Accumulo memory-map type:"
-  select TYPENAME in Java Native; do
-    if [[ "${TYPENAME}" == "Native" ]]; then
-      TYPE="native"
-      echo "Don't forget to build the native libraries using the bin/build_native_library.sh script"
-    elif [[ "${TYPENAME}" == "Java" ]]; then
-      TYPE="jvm"
-    fi
-    echo "Using '${TYPE}' configuration"
-    echo
-    break
-  done
-fi
-
-if [[ -z "${HADOOP_VERSION}" ]]; then
-  echo
-  echo "Choose the Apache Hadoop version:"
-  select HADOOP in 'Hadoop 2' 'HDP 2.0/2.1' 'HDP 2.2' 'IOP 4.1'; do
-    if [ "${HADOOP}" == "Hadoop 2" ]; then
-      HADOOP_VERSION="2"
-    elif [ "${HADOOP}" == "HDP 2.0/2.1" ]; then
-      HADOOP_VERSION="HDP2"
-    elif [ "${HADOOP}" == "HDP 2.2" ]; then
-      HADOOP_VERSION="HDP2.2"
-    elif [ "${HADOOP}" == "IOP 4.1" ]; then
-      HADOOP_VERSION="IOP4.1"
-    fi
-    echo "Using Hadoop version '${HADOOP_VERSION}' configuration"
-    echo
-    break
-  done
-elif [[ "${HADOOP_VERSION}" != "2" && "${HADOOP_VERSION}" != "HDP2" && "${HADOOP_VERSION}" != "HDP2.2" ]]; then
-  echo "Invalid Hadoop version"
-  echo "Supported Hadoop versions: '2', 'HDP2', 'HDP2.2'"
-  exit 1
-fi
-
-TRACE_USER="root"
-
-if [[ ! -z "${KERBEROS}" ]]; then
-  echo
-  read -p "Enter server's Kerberos principal: " PRINCIPAL
-  read -p "Enter server's Kerberos keytab: " KEYTAB
-  TRACE_USER="${PRINCIPAL}"
-fi
-
-for var in SIZE TYPE HADOOP_VERSION; do
-  if [[ -z ${!var} ]]; then
-    echo "Invalid $var configuration"
-    exit 1
-  fi
-done
-
-TSERVER="${TYPE}_${SIZE}_tServer"
-MASTER="_${SIZE}_master"
-MONITOR="_${SIZE}_monitor"
-GC="_${SIZE}_gc"
-SHELL="_${SIZE}_shell"
-OTHER="_${SIZE}_other"
-
-MEMORY_MAP_MAX="_${SIZE}_memoryMapMax"
-NATIVE="${TYPE}_${SIZE}_nativeEnabled"
-CACHE_DATA_SIZE="_${SIZE}_cacheDataSize"
-CACHE_INDEX_SIZE="_${SIZE}_cacheIndexSize"
-SORT_BUFFER_SIZE="_${SIZE}_sortBufferSize"
-WAL_MAX_SIZE="_${SIZE}_waLogMaxSize"
-
-MAVEN_PROJ_BASEDIR=""
-
-if [[ ! -z "${BASE_DIR}" ]]; then
-  MAVEN_PROJ_BASEDIR="\n  <property>\n    <name>general.maven.project.basedir</name>\n    <value>${BASE_DIR}</value>\n  </property>\n"
-fi
-
-mkdir -p "${CONF_DIR}" && cp "${TEMPLATE_CONF_DIR}"/* "${CONF_DIR}"/
-
-if [[ -f "${CONF_DIR}/examples/client.conf" ]]; then
-  cp "${CONF_DIR}"/examples/client.conf "${CONF_DIR}"/
-fi
-
-#Configure accumulo-env.sh
-sed -e "s/\${tServerHigh_tServerLow}/${!TSERVER}/" \
-  -e "s/\${masterHigh_masterLow}/${!MASTER}/" \
-  -e "s/\${monitorHigh_monitorLow}/${!MONITOR}/" \
-  -e "s/\${gcHigh_gcLow}/${!GC}/" \
-  -e "s/\${shellHigh_shellLow}/${!SHELL}/" \
-  -e "s/\${otherHigh_otherLow}/${!OTHER}/" \
-  "${TEMPLATE_CONF_DIR}/$ACCUMULO_ENV" > "${CONF_DIR}/$ACCUMULO_ENV"
-
-#Configure accumulo-site.xml
-sed -e "s/\${memMapMax}/${!MEMORY_MAP_MAX}/" \
-  -e "s/\${nativeEnabled}/${!NATIVE}/" \
-  -e "s/\${cacheDataSize}/${!CACHE_DATA_SIZE}/" \
-  -e "s/\${cacheIndexSize}/${!CACHE_INDEX_SIZE}/" \
-  -e "s/\${sortBufferSize}/${!SORT_BUFFER_SIZE}/" \
-  -e "s/\${waLogMaxSize}/${!WAL_MAX_SIZE}/" \
-  -e "s=\${traceUser}=${TRACE_USER}=" \
-  -e "s=\${mvnProjBaseDir}=${MAVEN_PROJ_BASEDIR}=" "${TEMPLATE_CONF_DIR}/$ACCUMULO_SITE" > "${CONF_DIR}/$ACCUMULO_SITE"
-
-# If we're not using kerberos, filter out the krb properties
-if [[ -z "${KERBEROS}" ]]; then
-  sed -e 's/<!-- Kerberos requirements -->/<!-- Kerberos requirements --><!--/' \
-    -e 's/<!-- End Kerberos requirements -->/--><!-- End Kerberos requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-else
-  # Make the substitutions
-  sed -e "s!\${keytab}!${KEYTAB}!" \
-    -e "s!\${principal}!${PRINCIPAL}!" \
-    "${CONF_DIR}/${ACCUMULO_SITE}" > temp
-  mv temp "${CONF_DIR}/${ACCUMULO_SITE}"
-fi
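The Kerberos toggle above works by wrapping a marked XML section in a comment: sed opens a comment right after the start marker and closes it just before the end marker. The same trick can be wrapped as a reusable sketch (hypothetical `comment_out_section` helper, not in the repo):

```shell
# Disable an XML section delimited by "<!-- NAME requirements -->" and
# "<!-- End NAME requirements -->" marker comments, by turning the markers
# into the open/close of one enclosing XML comment.
comment_out_section() {
  local name="$1" file="$2"
  sed -e "s/<!-- ${name} requirements -->/<!-- ${name} requirements --><!--/" \
      -e "s/<!-- End ${name} requirements -->/--><!-- End ${name} requirements -->/" \
      "$file" > "${file}.tmp" && mv "${file}.tmp" "$file"
}
```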
-
-# Configure hadoop version
-if [[ "${HADOOP_VERSION}" == "2" ]]; then
-  sed -e 's/<!-- HDP 2.0 requirements -->/<!-- HDP 2.0 requirements --><!--/' \
-    -e 's/<!-- End HDP 2.0 requirements -->/--><!-- End HDP 2.0 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- HDP 2.2 requirements -->/<!-- HDP 2.2 requirements --><!--/' \
-    -e 's/<!-- End HDP 2.2 requirements -->/--><!-- End HDP 2.2 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- IOP 4.1 requirements -->/<!-- IOP 4.1 requirements --><!--/' \
-    -e 's/<!-- End IOP 4.1 requirements -->/--><!-- End IOP 4.1 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-elif [[ "${HADOOP_VERSION}" == "HDP2" ]]; then
-  sed -e 's/<!-- Hadoop 2 requirements -->/<!-- Hadoop 2 requirements --><!--/' \
-    -e 's/<!-- End Hadoop 2 requirements -->/--><!-- End Hadoop 2 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- HDP 2.2 requirements -->/<!-- HDP 2.2 requirements --><!--/' \
-    -e 's/<!-- End HDP 2.2 requirements -->/--><!-- End HDP 2.2 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- IOP 4.1 requirements -->/<!-- IOP 4.1 requirements --><!--/' \
-    -e 's/<!-- End IOP 4.1 requirements -->/--><!-- End IOP 4.1 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-elif [[ "${HADOOP_VERSION}" == "HDP2.2" ]]; then
-  sed -e 's/<!-- Hadoop 2 requirements -->/<!-- Hadoop 2 requirements --><!--/' \
-    -e 's/<!-- End Hadoop 2 requirements -->/--><!-- End Hadoop 2 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- HDP 2.0 requirements -->/<!-- HDP 2.0 requirements --><!--/' \
-    -e 's/<!-- End HDP 2.0 requirements -->/--><!-- End HDP 2.0 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- IOP 4.1 requirements -->/<!-- IOP 4.1 requirements --><!--/' \
-    -e 's/<!-- End IOP 4.1 requirements -->/--><!-- End IOP 4.1 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-elif [[ "${HADOOP_VERSION}" == "IOP4.1" ]]; then
-  sed -e 's/<!-- Hadoop 2 requirements -->/<!-- Hadoop 2 requirements --><!--/' \
-    -e 's/<!-- End Hadoop 2 requirements -->/--><!-- End Hadoop 2 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- HDP 2.0 requirements -->/<!-- HDP 2.0 requirements --><!--/' \
-    -e 's/<!-- End HDP 2.0 requirements -->/--><!-- End HDP 2.0 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-  sed -e 's/<!-- HDP 2.2 requirements -->/<!-- HDP 2.2 requirements --><!--/' \
-    -e 's/<!-- End HDP 2.2 requirements -->/--><!-- End HDP 2.2 requirements -->/' \
-    "${CONF_DIR}/$ACCUMULO_SITE" > temp
-  mv temp "${CONF_DIR}/$ACCUMULO_SITE"
-fi
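The four branches above differ only in which vendor section survives. A hedged sketch of a table-driven equivalent (`select_hadoop_section` is an illustrative name; the markers are the ones from the template shown above):

```shell
# Comment out every vendor requirements section except the one that matches
# the selected Hadoop version.
select_hadoop_section() {
  local version="$1" site_file="$2"
  local keep section
  case "$version" in
    2)      keep="Hadoop 2" ;;
    HDP2)   keep="HDP 2.0"  ;;
    HDP2.2) keep="HDP 2.2"  ;;
    IOP4.1) keep="IOP 4.1"  ;;
    *)      return 1 ;;
  esac
  for section in "Hadoop 2" "HDP 2.0" "HDP 2.2" "IOP 4.1"; do
    [ "$section" = "$keep" ] && continue
    sed -e "s/<!-- ${section} requirements -->/<!-- ${section} requirements --><!--/" \
        -e "s/<!-- End ${section} requirements -->/--><!-- End ${section} requirements -->/" \
        "$site_file" > "${site_file}.tmp" && mv "${site_file}.tmp" "$site_file"
  done
}
```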
-
-#Additional setup steps for native configuration.
-if [[ ${TYPE} == native ]]; then
-  if [[ $(uname) == Linux ]]; then
-    if [[ -z $HADOOP_PREFIX ]]; then
-      echo "WARNING: HADOOP_PREFIX not set, cannot automatically configure LD_LIBRARY_PATH to include Hadoop native libraries"
-    else
-      NATIVE_LIB=$(readlink -ef "$(dirname "$(for x in $(find "$HADOOP_PREFIX" -name libhadoop.so); do ld "$x" 2>/dev/null && echo "$x" && break; done)" 2>>/dev/null)" 2>>/dev/null)
-      if [[ -z $NATIVE_LIB ]]; then
-        echo -e "WARNING: The Hadoop native libraries could not be found for your system in: $HADOOP_PREFIX"
-      else
-        sed "/# Should the monitor/ i export LD_LIBRARY_PATH=${NATIVE_LIB}:\${LD_LIBRARY_PATH}" "${CONF_DIR}/$ACCUMULO_ENV" > temp
-        mv temp "${CONF_DIR}/$ACCUMULO_ENV"
-        echo -e "Added ${NATIVE_LIB} to the LD_LIBRARY_PATH"
-      fi
-    fi
-  fi
-  echo -e "Please remember to compile the Accumulo native libraries using the bin/build_native_library.sh script and to set the LD_LIBRARY_PATH variable in the ${CONF_DIR}/accumulo-env.sh script if needed."
-fi
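The libhadoop.so discovery above chains find, ld, and readlink into one line. A simpler hedged sketch that only locates the directory (hypothetical `find_native_lib` helper; it deliberately skips the ld link-check the original performs):

```shell
# Return the directory of the first libhadoop.so found under the given root,
# following symlinks; print nothing and return non-zero if none is found.
find_native_lib() {
  local hit
  hit=$(find -L "$1" -name 'libhadoop.so' 2>/dev/null | head -n 1)
  [ -n "$hit" ] && dirname "$hit"
}
```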
-echo "Setup complete"

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/gen-monitor-cert.sh
----------------------------------------------------------------------
diff --git a/assemble/libexec/gen-monitor-cert.sh b/assemble/libexec/gen-monitor-cert.sh
deleted file mode 100755
index 46263ce..0000000
--- a/assemble/libexec/gen-monitor-cert.sh
+++ /dev/null
@@ -1,84 +0,0 @@
-#! /usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Start: Resolve Script Directory
-SOURCE="${BASH_SOURCE[0]}"
-while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
-   libexec=$( cd -P "$( dirname "$SOURCE" )" && pwd )
-   SOURCE=$(readlink "$SOURCE")
-   [[ $SOURCE != /* ]] && SOURCE="$libexec/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
-done
-libexec=$( cd -P "$( dirname "$SOURCE" )" && pwd )
-# Stop: Resolve Script Directory
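The symlink-resolution preamble above recurs in several of these scripts. Wrapped as a function, the same portable logic looks like this (a sketch; `resolve_dir` is an illustrative name):

```shell
# Resolve the directory containing a path, following any chain of symlinks;
# relative link targets are resolved against the directory of the link.
resolve_dir() {
  local src="$1"
  while [ -h "$src" ]; do
    local dir
    dir=$(cd -P "$(dirname "$src")" && pwd)
    src=$(readlink "$src")
    [ "${src#/}" = "$src" ] && src="$dir/$src"  # relative target
  done
  cd -P "$(dirname "$src")" && pwd
}
```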
-
-source "$libexec"/load-env.sh
-
-ALIAS="default"
-KEYPASS=$(LC_CTYPE=C tr -dc '#-~' < /dev/urandom | tr -d '<>&' | head -c 20)
-STOREPASS=$(LC_CTYPE=C tr -dc '#-~' < /dev/urandom | tr -d '<>&' | head -c 20)
-KEYSTOREPATH="$ACCUMULO_CONF_DIR/keystore.jks"
-TRUSTSTOREPATH="$ACCUMULO_CONF_DIR/conf/cacerts.jks"
-CERTPATH="$ACCUMULO_CONF_DIR/server.cer"
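The two password assignments above share one pipeline: printable ASCII from '#' through '~' drawn from /dev/urandom, minus the XML-unsafe characters `<`, `>`, and `&`, truncated to 20 bytes. As a reusable sketch (hypothetical `gen_password` helper, not in the script):

```shell
# Generate a random password of the given length (default 20) that is safe
# to embed in accumulo-site.xml: printable ASCII with <, >, and & removed.
gen_password() {
  LC_CTYPE=C tr -dc '#-~' < /dev/urandom | tr -d '<>&' | head -c "${1:-20}"
}
```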
-
-if [[ -e "$KEYSTOREPATH" ]]; then
-   rm -i "$KEYSTOREPATH"
-   if [[ -e "$KEYSTOREPATH" ]]; then
-      echo "KeyStore already exists, exiting"
-      exit 1
-   fi
-fi
-
-if [[ -e "$TRUSTSTOREPATH" ]]; then
-   rm -i "$TRUSTSTOREPATH"
-   if [[ -e "$TRUSTSTOREPATH" ]]; then
-      echo "TrustStore already exists, exiting"
-      exit 2
-   fi
-fi
-
-if [[ -e "$CERTPATH" ]]; then
-   rm -i "$CERTPATH"
-   if [[ -e "$CERTPATH" ]]; then
-      echo "Certificate already exists, exiting"
-      exit 3
-  fi
-fi
-
-"${JAVA_HOME}/bin/keytool" -genkey -alias "$ALIAS" -keyalg RSA -keypass "$KEYPASS" -storepass "$KEYPASS" -keystore "$KEYSTOREPATH"
-"${JAVA_HOME}/bin/keytool" -export -alias "$ALIAS" -storepass "$KEYPASS" -file "$CERTPATH" -keystore "$KEYSTOREPATH"
-"${JAVA_HOME}/bin/keytool" -import -v -trustcacerts -alias "$ALIAS" -file "$CERTPATH" -keystore "$TRUSTSTOREPATH" -storepass "$STOREPASS" <<< "yes"
-
-echo
-echo "keystore and truststore generated.  now add the following to accumulo-site.xml:"
-echo
-echo "    <property>"
-echo "      <name>monitor.ssl.keyStore</name>"
-echo "      <value>$KEYSTOREPATH</value>"
-echo "    </property>"
-echo "    <property>"
-echo "      <name>monitor.ssl.keyStorePassword</name>"
-echo "      <value>$KEYPASS</value>"
-echo "    </property>"
-echo "    <property>"
-echo "      <name>monitor.ssl.trustStore</name>"
-echo "      <value>$TRUSTSTOREPATH</value>"
-echo "    </property>"
-echo "    <property>"
-echo "      <name>monitor.ssl.trustStorePassword</name>"
-echo "      <value>$STOREPASS</value>"
-echo "    </property>"
-echo

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/load-env.sh
----------------------------------------------------------------------
diff --git a/assemble/libexec/load-env.sh b/assemble/libexec/load-env.sh
index 3d7d9a1..2cc431e 100755
--- a/assemble/libexec/load-env.sh
+++ b/assemble/libexec/load-env.sh
@@ -100,7 +100,6 @@ export ACCUMULO_LIB_DIR="${ACCUMULO_LIB_DIR:-$basedir/lib}"
 export ACCUMULO_LIBEXEC_DIR="${ACCUMULO_LIBEXEC_DIR:-$basedir/libexec}"
 export ACCUMULO_LOG_DIR="${ACCUMULO_LOG_DIR:-$basedir/logs}"
 export ACCUMULO_PID_DIR="${ACCUMULO_PID_DIR:-$basedir/run}"
-export ACCUMULO_OPT_DIR="${ACCUMULO_OPT_DIR:-$basedir/opt}"
 
 # Make directories that may not exist
 mkdir -p "${ACCUMULO_LOG_DIR}" 2>/dev/null
@@ -118,7 +117,6 @@ verify_env_dir "ACCUMULO_LIB_DIR" "${ACCUMULO_LIB_DIR}"
 verify_env_dir "ACCUMULO_LIBEXEC_DIR" "${ACCUMULO_LIBEXEC_DIR}"
 verify_env_dir "ACCUMULO_LOG_DIR" "${ACCUMULO_LOG_DIR}"
 verify_env_dir "ACCUMULO_PID_DIR" "${ACCUMULO_PID_DIR}"
-verify_env_dir "ACCUMULO_OPT_DIR" "${ACCUMULO_OPT_DIR}"
 
 ## Verify Zookeeper installation
 ZOOKEEPER_VERSION=$(find -L "$ZOOKEEPER_HOME" -maxdepth 1 -name "zookeeper-[0-9]*.jar" | head -1)

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/service.sh
----------------------------------------------------------------------
diff --git a/assemble/libexec/service.sh b/assemble/libexec/service.sh
index f422d15..9b47382 100755
--- a/assemble/libexec/service.sh
+++ b/assemble/libexec/service.sh
@@ -29,7 +29,7 @@ EOF
 
 function invalid_args {
   echo -e "Invalid arguments: $1\n"
-  print_usage
+  print_usage 1>&2
   exit 1
 }
 
@@ -73,6 +73,10 @@ function start_service() {
   if [[ ${service} == "monitor" && ${ACCUMULO_MONITOR_BIND_ALL} == "true" ]]; then
     address="0.0.0.0"
   fi
+  
+  if [[ $service == "master" ]]; then
+    "$ACCUMULO_BIN_DIR/accumulo" org.apache.accumulo.master.state.SetGoalState NORMAL
+  fi
 
   COMMAND="${ACCUMULO_BIN_DIR}/accumulo"
   if [ "${ACCUMULO_WATCHER}" = "true" ]; then

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/libexec/tool.sh
----------------------------------------------------------------------
diff --git a/assemble/libexec/tool.sh b/assemble/libexec/tool.sh
deleted file mode 100755
index fe482be..0000000
--- a/assemble/libexec/tool.sh
+++ /dev/null
@@ -1,92 +0,0 @@
-#! /usr/bin/env bash
-
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Start: Resolve Script Directory
-SOURCE="${BASH_SOURCE[0]}"
-while [ -h "$SOURCE" ]; do # resolve $SOURCE until the file is no longer a symlink
-   libexec=$( cd -P "$( dirname "$SOURCE" )" && pwd )
-   SOURCE=$(readlink "$SOURCE")
-   [[ $SOURCE != /* ]] && SOURCE="$libexec/$SOURCE" # if $SOURCE was a relative symlink, we need to resolve it relative to the path where the symlink file was located
-done
-libexec=$( cd -P "$( dirname "$SOURCE" )" && pwd )
-# Stop: Resolve Script Directory
-
-source "$libexec"/load-env.sh
-
-if [[ -z "$HADOOP_PREFIX" ]] ; then
-   echo "HADOOP_PREFIX is not set.  Please make sure it's set globally or in conf/accumulo-env.sh"
-   exit 1
-fi
-if [[ -z "$ZOOKEEPER_HOME" ]] ; then
-   echo "ZOOKEEPER_HOME is not set.  Please make sure it's set globally or in conf/accumulo-env.sh"
-   exit 1
-fi
-
-ZOOKEEPER_CMD="ls -1 $ZOOKEEPER_HOME/zookeeper-[0-9]*[^csn].jar "
-if [[ $(eval "$ZOOKEEPER_CMD" | wc -l) -ne 1 ]] ; then
-   echo "Not exactly one zookeeper jar in $ZOOKEEPER_HOME"
-   exit 1
-fi
-ZOOKEEPER_LIB=$(eval "$ZOOKEEPER_CMD")
-
-LIB="$ACCUMULO_LIB_DIR"
-CORE_LIB="$LIB/accumulo-core.jar"
-FATE_LIB="$LIB/accumulo-fate.jar"
-THRIFT_LIB="$LIB/libthrift.jar"
-JCOMMANDER_LIB="$LIB/jcommander.jar"
-COMMONS_VFS_LIB="$LIB/commons-vfs2.jar"
-GUAVA_LIB="$LIB/guava.jar"
-HTRACE_LIB="$LIB/htrace-core.jar"
-
-USERJARS=" "
-for arg in "$@"; do
-    if [ "$arg" != "-libjars" -a -z "$TOOLJAR" ]; then
-      TOOLJAR="$arg"
-      shift
-   elif [ "$arg" != "-libjars" -a -z "$CLASSNAME" ]; then
-      CLASSNAME="$arg"
-      shift
-   elif [ -z "$USERJARS" ]; then
-      USERJARS=$(echo "$arg" | tr "," " ")
-      shift
-   elif [ "$arg" = "-libjars" ]; then
-      USERJARS=""
-      shift
-   else
-      break
-   fi
-done
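The parsing loop above fills TOOLJAR, then CLASSNAME, and treats `-libjars` specially (note that USERJARS starts as a single space, so its `-z` branch only fires after `-libjars` clears it). A hypothetical, more explicit rework of the same contract, with any remaining arguments left to the caller as in the original:

```shell
# Parse "myTool.jar my.class.Name [-libjars a.jar,b.jar]" into globals.
parse_tool_args() {
  TOOLJAR="" CLASSNAME="" USERJARS=""
  while [ $# -gt 0 ]; do
    if [ "$1" = "-libjars" ] && [ $# -ge 2 ]; then
      USERJARS=$(echo "$2" | tr ',' ' ')   # comma list -> space list
      shift 2
    elif [ -z "$TOOLJAR" ]; then
      TOOLJAR="$1"; shift
    elif [ -z "$CLASSNAME" ]; then
      CLASSNAME="$1"; shift
    else
      break   # remaining args pass through to the job
    fi
  done
}
```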
-
-LIB_JARS="$THRIFT_LIB,$CORE_LIB,$FATE_LIB,$ZOOKEEPER_LIB,$JCOMMANDER_LIB,$COMMONS_VFS_LIB,$GUAVA_LIB,$HTRACE_LIB"
-H_JARS="$THRIFT_LIB:$CORE_LIB:$FATE_LIB:$ZOOKEEPER_LIB:$JCOMMANDER_LIB:$COMMONS_VFS_LIB:$GUAVA_LIB:$HTRACE_LIB"
-
-for jar in $USERJARS; do
-   LIB_JARS="$LIB_JARS,$jar"
-   H_JARS="$H_JARS:$jar"
-done
-export HADOOP_CLASSPATH="$H_JARS:$HADOOP_CLASSPATH"
-
-if [[ -z "$CLASSNAME" || -z "$TOOLJAR" ]]; then
-   echo "Usage: tool.sh path/to/myTool.jar my.tool.class.Name [-libjars my1.jar,my2.jar]" 1>&2
-   exit 1
-fi
-
-#echo USERJARS=$USERJARS
-#echo CLASSNAME=$CLASSNAME
-#echo HADOOP_CLASSPATH=$HADOOP_CLASSPATH
-#echo exec "$HADOOP_PREFIX/bin/hadoop" jar "$TOOLJAR" "$CLASSNAME" -libjars \"$LIB_JARS\" $ARGS
-exec "$HADOOP_PREFIX/bin/hadoop" jar "$TOOLJAR" "$CLASSNAME" -libjars "$LIB_JARS" "$@"

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/src/main/assemblies/component.xml
----------------------------------------------------------------------
diff --git a/assemble/src/main/assemblies/component.xml b/assemble/src/main/assemblies/component.xml
index ac71fce..c384e14 100644
--- a/assemble/src/main/assemblies/component.xml
+++ b/assemble/src/main/assemblies/component.xml
@@ -73,7 +73,7 @@
       <fileMode>0755</fileMode>
     </fileSet>
     <fileSet>
-      <directory>libexec</directory>
+      <directory>contrib</directory>
       <directoryMode>0755</directoryMode>
       <fileMode>0755</fileMode>
       <includes>
@@ -82,6 +82,14 @@
       </includes>
     </fileSet>
     <fileSet>
+      <directory>libexec</directory>
+      <directoryMode>0755</directoryMode>
+      <fileMode>0755</fileMode>
+      <includes>
+        <include>**/*.sh</include>
+      </includes>
+    </fileSet>
+    <fileSet>
       <directory>libexec/templates</directory>
       <outputDirectory>libexec/templates</outputDirectory>
       <directoryMode>0755</directoryMode>
@@ -91,12 +99,12 @@
       </includes>  
     </fileSet>
     <fileSet>
-      <directory>../examples/simple</directory>
-      <outputDirectory>opt/examples/simple</outputDirectory>
+      <directory>../examples/simple/src/main/java/org/apache/accumulo/examples/simple/</directory>
+      <outputDirectory>docs/examples/src</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0644</fileMode>
       <includes>
-        <include>src/main/**</include>
+        <include>*/**</include>
       </includes>
     </fileSet>
     <fileSet>
@@ -171,7 +179,7 @@
     </fileSet>
     <fileSet>
       <directory>../test</directory>
-      <outputDirectory>opt/test</outputDirectory>
+      <outputDirectory>test</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0755</fileMode>
       <includes>
@@ -187,7 +195,7 @@
     </fileSet>
     <fileSet>
       <directory>../test</directory>
-      <outputDirectory>opt/test</outputDirectory>
+      <outputDirectory>test</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0644</fileMode>
       <excludes>
@@ -210,7 +218,7 @@
     <!-- Lift generated thrift proxy code into its own directory -->
     <fileSet>
       <directory>../proxy/target</directory>
-      <outputDirectory>opt/proxy/thrift</outputDirectory>
+      <outputDirectory>proxy/thrift</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0644</fileMode>
       <includes>
@@ -221,7 +229,7 @@
     </fileSet>
     <fileSet>
       <directory>../proxy</directory>
-      <outputDirectory>opt/proxy</outputDirectory>
+      <outputDirectory>proxy</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0644</fileMode>
       <includes>
@@ -231,7 +239,7 @@
     </fileSet>
     <fileSet>
       <directory>../proxy/examples</directory>
-      <outputDirectory>opt/proxy/examples</outputDirectory>
+      <outputDirectory>proxy/examples</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0755</fileMode>
       <includes>
@@ -241,7 +249,7 @@
     </fileSet>
     <fileSet>
       <directory>../proxy/examples</directory>
-      <outputDirectory>opt/proxy/examples</outputDirectory>
+      <outputDirectory>proxy/examples</outputDirectory>
       <directoryMode>0755</directoryMode>
       <fileMode>0644</fileMode>
       <excludes>
@@ -253,7 +261,7 @@
       <directory>../proxy/src/main/thrift</directory>
       <directoryMode>0755</directoryMode>
       <fileMode>0644</fileMode>
-      <outputDirectory>/opt/proxy/thrift</outputDirectory>
+      <outputDirectory>proxy/thrift</outputDirectory>
       <includes>
         <include>*.thrift</include>
       </includes>

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/assemble/src/main/scripts/generate-example-configs.sh
----------------------------------------------------------------------
diff --git a/assemble/src/main/scripts/generate-example-configs.sh b/assemble/src/main/scripts/generate-example-configs.sh
index c9073dd..6a8e0a1 100755
--- a/assemble/src/main/scripts/generate-example-configs.sh
+++ b/assemble/src/main/scripts/generate-example-configs.sh
@@ -20,5 +20,4 @@
 out=target/config.out
 
 echo 'Generating example scripts...' > $out
-libexec/config.sh -o -d target/example-configs -s 2GB -j -v 2 >> $out 2>&1
-
+bin/accumulo create-config -o -d target/example-configs -s 2GB -j -v 2 >> $out 2>&1

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/docs/src/main/asciidoc/chapters/administration.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/administration.txt b/docs/src/main/asciidoc/chapters/administration.txt
index 5e7fb24..e0d2f48 100644
--- a/docs/src/main/asciidoc/chapters/administration.txt
+++ b/docs/src/main/asciidoc/chapters/administration.txt
@@ -567,7 +567,7 @@ SSL may be enabled for the monitor page by setting the following properties in t
   monitor.ssl.trustStorePassword
 
 If the Accumulo conf directory has been configured (in particular the +accumulo-env.sh+ file must be set up), the +gen-monitor-cert.sh+
-script in the Accumulo +lib/scripts+ directory can be used to create the keystore and truststore files with random passwords. The script
+script in the Accumulo +contrib+ directory can be used to create the keystore and truststore files with random passwords. The script
 will print out the properties that need to be added to the +accumulo-site.xml+ file. The stores can also be generated manually with the
 Java +keytool+ command, whose usage can be seen in the +gen-monitor-cert.sh+ script.
 

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/docs/src/main/asciidoc/chapters/clients.txt
----------------------------------------------------------------------
diff --git a/docs/src/main/asciidoc/chapters/clients.txt b/docs/src/main/asciidoc/chapters/clients.txt
index 3ac0d54..958ab11 100644
--- a/docs/src/main/asciidoc/chapters/clients.txt
+++ b/docs/src/main/asciidoc/chapters/clients.txt
@@ -41,7 +41,7 @@ class +com.foo.Client+ and placed that in +lib/ext+, then you could use the comm
 +accumulo com.foo.Client+ to execute your code.
 
 If you are writing a map reduce job that accesses Accumulo, then you can use the
-+libexec/tool.sh+ script to run those jobs. See the map reduce example.
++contrib/tool.sh+ script to run those jobs. See the map reduce example.
 
 === Connecting
 
@@ -293,7 +293,7 @@ the very least, you need to supply the following properties:
   instance=test
   zookeepers=localhost:2181
 
-You can find a sample configuration file in your distribution at +opt/proxy/proxy.properties+.
+You can find a sample configuration file in your distribution at +proxy/proxy.properties+.
 
 This sample configuration file further demonstrates an ability to back the proxy server
 by MockAccumulo or the MiniAccumuloCluster.
@@ -313,7 +313,7 @@ for Thrift installed to generate client code in that language. Typically, your o
 system's package manager will be able to automatically install these for you in an expected
 location such as +/usr/lib/python/site-packages/thrift+.
 
-You can find the thrift file for generating the client at +opt/proxy/proxy.thrift+.
+You can find the thrift file for generating the client at +proxy/proxy.thrift+.
 
 After a client is generated, the port specified in the configuration properties above will be
 used to connect to the server.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/docs/src/main/resources/examples/README.bulkIngest
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.bulkIngest b/docs/src/main/resources/examples/README.bulkIngest
index 20e0c4d..bc9f913 100644
--- a/docs/src/main/resources/examples/README.bulkIngest
+++ b/docs/src/main/resources/examples/README.bulkIngest
@@ -27,7 +27,7 @@ accumulo. Then we verify the 1000 rows are in accumulo.
     $ ARGS="-i instance -z zookeepers -u username -p password"
     $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666
     $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt
-    $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
+    $ ./contrib/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork
     $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000
 
 For a high level discussion of bulk ingest, see the docs dir.

http://git-wip-us.apache.org/repos/asf/accumulo/blob/ab0d6fc3/docs/src/main/resources/examples/README.classpath
----------------------------------------------------------------------
diff --git a/docs/src/main/resources/examples/README.classpath b/docs/src/main/resources/examples/README.classpath
index 37b2aac..7497014 100644
--- a/docs/src/main/resources/examples/README.classpath
+++ b/docs/src/main/resources/examples/README.classpath
@@ -25,7 +25,7 @@ table reference that jar.
 
 Execute the following command in the shell.
 
-    $ hadoop fs -copyFromLocal /path/to/accumulo/opt/test/src/test/resources/FooFilter.jar /user1/lib
+    $ hadoop fs -copyFromLocal /path/to/accumulo/test/src/test/resources/FooFilter.jar /user1/lib
 
 Execute following in Accumulo shell to setup classpath context
 

