Return-Path: X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183]) by cust-asf2.ponee.io (Postfix) with ESMTP id 4B1B3200BC8 for ; Tue, 8 Nov 2016 22:13:38 +0100 (CET) Received: by cust-asf.ponee.io (Postfix) id 49798160B12; Tue, 8 Nov 2016 21:13:38 +0000 (UTC) Delivered-To: archive-asf-public@cust-asf.ponee.io Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by cust-asf.ponee.io (Postfix) with SMTP id 27E1C160AD0 for ; Tue, 8 Nov 2016 22:13:36 +0100 (CET) Received: (qmail 91007 invoked by uid 500); 8 Nov 2016 21:13:35 -0000 Mailing-List: contact commits-help@accumulo.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: dev@accumulo.apache.org Delivered-To: mailing list commits@accumulo.apache.org Received: (qmail 90994 invoked by uid 99); 8 Nov 2016 21:13:35 -0000 Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 08 Nov 2016 21:13:35 +0000 Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id 31234E09D3; Tue, 8 Nov 2016 21:13:35 +0000 (UTC) Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: mwalch@apache.org To: commits@accumulo.apache.org Date: Tue, 08 Nov 2016 21:13:35 -0000 Message-Id: X-Mailer: ASF-Git Admin Mailer Subject: [01/11] accumulo git commit: ACCUMULO-4490 Simplify Accumulo scripts and config archived-at: Tue, 08 Nov 2016 21:13:38 -0000 Repository: accumulo Updated Branches: refs/heads/master 8aede75ae -> ab0d6fc3f http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/asciidoc/chapters/troubleshooting.txt ---------------------------------------------------------------------- diff --git a/docs/src/main/asciidoc/chapters/troubleshooting.txt b/docs/src/main/asciidoc/chapters/troubleshooting.txt index d993767..a88dfa2 100644 --- a/docs/src/main/asciidoc/chapters/troubleshooting.txt +++ b/docs/src/main/asciidoc/chapters/troubleshooting.txt @@ -23,7 +23,7 @@ Accumulo is a distributed system. It is supposed to run on remote equipment, across hundreds of computers. Each program that runs on these remote computers writes down events as they occur, into a local file. By default, this is defined in -+$ACCUMULO_HOME/conf/accumule-env.sh+ as +ACCUMULO_LOG_DIR+. ++$ACCUMULO_CONF_DIR/accumule-env.sh+ as +ACCUMULO_LOG_DIR+. *A*: Look in the +$ACCUMULO_LOG_DIR/tserver*.log+ file. Specifically, check the end of the file. @@ -125,7 +125,7 @@ It is important to see the word +CONNECTED+! If you only see +CONNECTING+ you will need to diagnose zookeeper errors. *A*: Check to make sure that zookeeper is up, and that -+$ACCUMULO_HOME/conf/accumulo-site.xml+ has been pointed to ++$ACCUMULO_CONF_DIR/accumulo-site.xml+ has been pointed to your zookeeper server(s). *Q*: Zookeeper is running, but it does not say +CONNECTED+ @@ -294,7 +294,7 @@ There's a class that will examine an accumulo storage file and print out basic metadata. 
---- -$ ./bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo /accumulo/tables/1/default_tablet/A000000n.rf +$ accumulo org.apache.accumulo.core.file.rfile.PrintInfo /accumulo/tables/1/default_tablet/A000000n.rf 2013-07-16 08:17:14,778 [util.NativeCodeLoader] INFO : Loaded the native-hadoop library Locality group : Start block : 0 @@ -322,7 +322,7 @@ Meta block : RFile.index When trying to diagnose problems related to key size, the +PrintInfo+ tool can provide a histogram of the individual key sizes: - $ ./bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo --histogram /accumulo/tables/1/default_tablet/A000000n.rf + $ accumulo org.apache.accumulo.core.file.rfile.PrintInfo --histogram /accumulo/tables/1/default_tablet/A000000n.rf ... Up to size count %-age 10 : 222 28.23% @@ -338,7 +338,7 @@ When trying to diagnose problems related to key size, the +PrintInfo+ tool can p Likewise, +PrintInfo+ will dump the key-value pairs and show you the contents of the RFile: - $ ./bin/accumulo org.apache.accumulo.core.file.rfile.PrintInfo --dump /accumulo/tables/1/default_tablet/A000000n.rf + $ accumulo org.apache.accumulo.core.file.rfile.PrintInfo --dump /accumulo/tables/1/default_tablet/A000000n.rf row columnFamily:columnQualifier [visibility] timestamp deleteFlag -> Value ... @@ -356,21 +356,21 @@ does not provide the normal access controls in Accumulo. If you would like to backup, or otherwise examine the contents of Zookeeper, there are commands to dump and load to/from XML. - $ ./bin/accumulo org.apache.accumulo.server.util.DumpZookeeper --root /accumulo >dump.xml - $ ./bin/accumulo org.apache.accumulo.server.util.RestoreZookeeper --overwrite < dump.xml + $ accumulo org.apache.accumulo.server.util.DumpZookeeper --root /accumulo >dump.xml + $ accumulo org.apache.accumulo.server.util.RestoreZookeeper --overwrite < dump.xml *Q*: How can I get the information in the monitor page for my cluster monitoring system? *A*: Use GetMasterStats: - $ ./bin/accumulo org.apache.accumulo.test.GetMasterStats | grep Load + $ accumulo org.apache.accumulo.test.GetMasterStats | grep Load OS Load Average: 0.27 *Q*: The monitor page is showing an offline tablet. How can I find out which tablet it is? *A*: Use FindOfflineTablets: - $ ./bin/accumulo org.apache.accumulo.server.util.FindOfflineTablets + $ accumulo org.apache.accumulo.server.util.FindOfflineTablets 2<<@(null,null,localhost:9997) is UNASSIGNED #walogs:2 Here's what the output means: @@ -397,7 +397,7 @@ logs to be sorted for efficient recovery. *A*: +CheckForMetadataProblems+ will verify the start/end of every tablet matches, and the start and stop for the table is empty: - $ ./bin/accumulo org.apache.accumulo.server.util.CheckForMetadataProblems -u root --password + $ accumulo org.apache.accumulo.server.util.CheckForMetadataProblems -u root --password Enter the connection password: All is well for table !0 All is well for table 1 @@ -408,7 +408,7 @@ every tablet matches, and the start and stop for the table is empty: that the file exists in HDFS. 
Optionally, it will remove the reference: - $ ./bin/accumulo org.apache.accumulo.server.util.RemoveEntriesForMissingFiles -u root --password + $ accumulo org.apache.accumulo.server.util.RemoveEntriesForMissingFiles -u root --password Enter the connection password: 2013-07-16 13:10:57,293 [util.RemoveEntriesForMissingFiles] INFO : File /accumulo/tables/2/default_tablet/F0000005.rf is missing @@ -418,7 +418,7 @@ reference: *A*: Use CleanZookeeper: - $ ./bin/accumulo org.apache.accumulo.server.util.CleanZookeeper + $ accumulo org.apache.accumulo.server.util.CleanZookeeper This command will not delete the instance pointed to by the local +conf/accumulo-site.xml+ file. @@ -426,23 +426,23 @@ This command will not delete the instance pointed to by the local +conf/accumulo *A*: Use the admin command: - $ ./bin/accumulo admin stop hostname:9997 + $ accumulo admin stop hostname:9997 2013-07-16 13:15:38,403 [util.Admin] INFO : Stopping server 12.34.56.78:9997 *Q*: I cannot login to a tablet server host, and the tablet server will not shut down. How can I kill the server? *A*: Sometimes you can kill a "stuck" tablet server by deleting its lock in zookeeper: - $ ./bin/accumulo org.apache.accumulo.server.util.TabletServerLocks --list + $ accumulo org.apache.accumulo.server.util.TabletServerLocks --list 127.0.0.1:9997 TSERV_CLIENT=127.0.0.1:9997 - $ ./bin/accumulo org.apache.accumulo.server.util.TabletServerLocks -delete 127.0.0.1:9997 - $ ./bin/accumulo org.apache.accumulo.server.util.TabletServerLocks -list + $ accumulo org.apache.accumulo.server.util.TabletServerLocks -delete 127.0.0.1:9997 + $ accumulo org.apache.accumulo.server.util.TabletServerLocks -list 127.0.0.1:9997 null You can find the master and instance id for any accumulo instances using the same zookeeper instance: ---- -$ ./bin/accumulo org.apache.accumulo.server.util.ListInstances +$ accumulo org.apache.accumulo.server.util.ListInstances INFO : Using ZooKeepers localhost:2181 Instance Name | Instance ID | Master @@ -565,26 +565,16 @@ Besides these columns, you may see: *Q*: One of my Accumulo processes died. How do I bring it back? -The easiest way to bring all services online for an Accumulo instance is to run the +start-all.sh+ script. +The easiest way to bring all services online for an Accumulo instance is to run the +accumulo-cluster+ script. - $ bin/start-all.sh + $ accumulo-cluster start This process will check the process listing, using +jps+ on each host before attempting to restart a service on the given host. Typically, this check is sufficient except in the face of a hung/zombie process. For large clusters, it may be -undesirable to ssh to every node in the cluster to ensure that all hosts are running the appropriate processes and +start-here.sh+ may be of use. +undesirable to ssh to every node in the cluster to ensure that all hosts are running the appropriate processes and +accumulo-service+ may be of use. $ ssh host_with_dead_process - $ bin/start-here.sh - -+start-here.sh+ should be invoked on the host which is missing a given process. Like start-all.sh, it will start all -necessary processes that are not currently running, but only on the current host and not cluster-wide. Tools such as +pssh+ or -+pdsh+ can be used to automate this process. 
- -+start-server.sh+ can also be used to start a process on a given host; however, it is not generally recommended for -users to issue this directly as the +start-all.sh+ and +start-here.sh+ scripts provide the same functionality with -more automation and are less prone to user error. - -*A*: Use +start-all.sh+ or +start-here.sh+. + $ accumulo-service tserver start *Q*: My process died again. Should I restart it via +cron+ or tools like +supervisord+? @@ -593,7 +583,6 @@ misconfiguration of Accumulo or over-saturation of resources. Blind automation o is generally an undesirable situation as it is indicative of a problem that is being masked and ignored. Accumulo processes should be stable on the order of months and not require frequent restart. - ### Advanced System Recovery #### HDFS Failure @@ -614,7 +603,7 @@ lost. *A*: Use +accumulo admin checkTablets+ - $ bin/accumulo admin checkTablets + $ accumulo admin checkTablets *Q*: I lost three data nodes, and I'm missing blocks in a WAL. I don't care about data loss, how can I get those tablets online? @@ -784,7 +773,7 @@ omission of new data. will default to using the directory specified by +logger.dir.walog+ in your configuration, or can be overriden by using the +--local-wal-directories+ option on the tool. It can be invoked as follows: - $ACCUMULO_HOME/bin/accumulo org.apache.accumulo.tserver.log.LocalWALRecovery + accumulo org.apache.accumulo.tserver.log.LocalWALRecovery ### File Naming Conventions http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/administration.html ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/administration.html b/docs/src/main/resources/administration.html index d64d6c3..39a46ff 100644 --- a/docs/src/main/resources/administration.html +++ b/docs/src/main/resources/administration.html @@ -23,7 +23,8 @@

 Apache Accumulo Documentation : Administration

-Starting accumulo for the first time
+
+Configure Accumulo

 For the most part, accumulo is ready to go out of the box. To start it, first you must distribute and install the accumulo software to each machine in the cloud that you wish to run on. The software should be installed
@@ -36,32 +37,49 @@ to create these files, the startup scripts will assume you are trying to run on
 It is probably a good idea to back up these files, or distribute them to the other nodes as well, so that you can easily boot up accumulo from another machine, if necessary. You can also make create a conf/accumulo-env.sh file if you want to configure any custom environment variables.

-Once properly configured, you can initialize or prepare an instance of accumulo by running: bin/accumulo init
+
+Initializing Accumulo
+
+Once properly configured, you can initialize or prepare an instance of Accumulo by running the following command:
+
+accumulo init
+
+Follow the prompts and you are ready to go. This step only prepares accumulo to run, it does not start up accumulo.

 Starting accumulo

-Once you have configured accumulo to your liking, and distributed the appropriate configuration to each machine, you can start accumulo with
-bin/start-all.sh. If at any time, you wish to bring accumulo servers online after one or more have been shutdown, you can run bin/start-all.sh again.
-This step will only start services that are not already running. Be aware that if you run this command on more than one machine, you may unintentionally
-start an extra copy of the garbage collector service and the monitoring service, since each of these will run on the server on which you run this script.
+Once you have configured Accumulo to your liking, and distributed the appropriate configuration to each machine, you can start accumulo with the following command:
+
+accumulo-cluster start
+
+If at any time, you wish to bring accumulo servers online after one or more have been shutdown, you can run the command again. This step will only start services
+that are not already running. Be aware that if you run this command on more than one machine, you may unintentionally start an extra copy of the garbage collector
+service and the monitoring service, since each of these will run on the server on which you run this script.

 Stopping accumulo

-Similar to the start-all.sh script, we provide a bin/stop-all.sh script to shut down accumulo. This will prompt for the root password so that it can
-ask the master to shut down the tablet servers gracefully. If the tablet servers do not respond, or the master takes too long, you can force a shutdown by hitting Ctrl-C
-at the password prompt, and waiting 15 seconds for the script to force a shutdown. Normally, once the shutdown happens gracefully, unresponsive tablet servers are
-forcibly shut down after 5 seconds.
+When you want to stop Accumulo, run the following command:
+
+accumulo-cluster stop
+
+This will prompt for the root password so that it can ask the master to shut down the tablet servers gracefully. If the tablet servers do not respond, or the master
+takes too long, you can force a shutdown by hitting Ctrl-C at the password prompt, and waiting 15 seconds for the script to force a shutdown. Normally, once the shutdown
+happens gracefully, unresponsive tablet servers are forcibly shut down after 5 seconds.

 Adding a Node

-Update your $ACCUMULO_HOME/conf/tservers (or $ACCUMULO_CONF_DIR/tservers) file to account for the addition; at a minimum this needs to be on the host(s) being added, but in practice it's good to ensure consistent configuration across all nodes.
+Update your conf/tservers file to account for the addition; at a minimum this needs to be on the host(s) being added, but in practice it's good to ensure consistent configuration across all nodes.

-$ACCUMULO_HOME/bin/accumulo admin start <host(s)> {<host> ...}
+accumulo admin start <host(s)> {<host> ...}

-Alternatively, you can ssh to each of the hosts you want to add and run $ACCUMULO_HOME/bin/start-here.sh.
+Alternatively, you can ssh to each of the hosts you want to add and run accumulo-service tserver start.

 Make sure the host in question has the new configuration, or else the tablet server won't start.

@@ -70,12 +88,12 @@
 If you need to take a node out of operation, you can trigger a graceful shutdown of a tablet server. Accumulo will automatically rebalance the tablets across the available tablet servers.

-$ACCUMULO_HOME/bin/accumulo admin stop <host(s)> {<host> ...}
+bin/accumulo admin stop <host(s)> {<host> ...}

-Alternatively, you can ssh to each of the hosts you want to remove and run $ACCUMULO_HOME/bin/stop-here.sh.
+Alternatively, you can ssh to each of the hosts you want to remove and run bin/accumulo-service tserver stop.

-Be sure to update your $ACCUMULO_HOME/conf/tservers (or $ACCUMULO_CONF_DIR/tservers) file to account for the removal of these hosts. Bear in mind that the monitor will not re-read the tservers file automatically, so it will report the decomissioned servers as down; it's recommended that you restart the monitor so that the node list is up to date.
+Be sure to update your conf/tservers file to account for the removal of these hosts. Bear in mind that the monitor will not re-read the tservers file automatically, so it will report the decomissioned servers as down; it's recommended that you restart the monitor so that the node list is up to date.

 Configuration

Accumulo configuration information is stored in a xml file and ZooKeeper. System wide http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README b/docs/src/main/resources/examples/README index 2da5dc1..1c88b56 100644 --- a/docs/src/main/resources/examples/README +++ b/docs/src/main/resources/examples/README @@ -18,7 +18,7 @@ Notice: Licensed to the Apache Software Foundation (ASF) under one Before running any of the examples, the following steps must be performed. -1. Install and run Accumulo via the instructions found in $ACCUMULO_HOME/README. +1. Install and run Accumulo via the instructions found in INSTALL.md. Remember the instance name. It will be referred to as "instance" throughout the examples. A comma-separated list of zookeeper servers will be referred to as "zookeepers". @@ -31,7 +31,7 @@ In all commands, you will need to replace "instance", "zookeepers", "username", and "password" with the values you set for your Accumulo instance. Commands intended to be run in bash are prefixed by '$'. These are always -assumed to be run from the $ACCUMULO_HOME directory. +assumed to be run the from the root of your Accumulo installation. Commands intended to be run in the Accumulo shell are prefixed by '>'. http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.bulkIngest ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.bulkIngest b/docs/src/main/resources/examples/README.bulkIngest index e07dc9b..20e0c4d 100644 --- a/docs/src/main/resources/examples/README.bulkIngest +++ b/docs/src/main/resources/examples/README.bulkIngest @@ -27,7 +27,7 @@ accumulo. Then we verify the 1000 rows are in accumulo. $ ARGS="-i instance -z zookeepers -u username -p password" $ ./bin/accumulo $PKG.SetupTable $ARGS -t test_bulk row_00000333 row_00000666 $ ./bin/accumulo $PKG.GenerateTestData --start-row 0 --count 1000 --output bulk/test_1.txt - $ ./bin/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar $PKG.BulkIngestExample $ARGS -t test_bulk --inputDir bulk --workDir tmp/bulkWork $ ./bin/accumulo $PKG.VerifyIngest $ARGS -t test_bulk --start-row 0 --count 1000 For a high level discussion of bulk ingest, see the docs dir. http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.classpath ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.classpath b/docs/src/main/resources/examples/README.classpath index 710560f..37b2aac 100644 --- a/docs/src/main/resources/examples/README.classpath +++ b/docs/src/main/resources/examples/README.classpath @@ -25,7 +25,7 @@ table reference that jar. Execute the following command in the shell. 
- $ hadoop fs -copyFromLocal $ACCUMULO_HOME/test/src/test/resources/FooFilter.jar /user1/lib + $ hadoop fs -copyFromLocal /path/to/accumulo/opt/test/src/test/resources/FooFilter.jar /user1/lib Execute following in Accumulo shell to setup classpath context http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.filedata ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.filedata b/docs/src/main/resources/examples/README.filedata index 26a6c1e..cfb41ba 100644 --- a/docs/src/main/resources/examples/README.filedata +++ b/docs/src/main/resources/examples/README.filedata @@ -32,7 +32,7 @@ This example is coupled with the dirlist example. See README.dirlist for instruc If you haven't already run the README.dirlist example, ingest a file with FileDataIngest. - $ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 $ACCUMULO_HOME/README + $ ./bin/accumulo org.apache.accumulo.examples.simple.filedata.FileDataIngest -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --chunk 1000 /path/to/accumulo/README.md Open the accumulo shell and look at the data. The row is the MD5 hash of the file, which you can verify by running a command such as 'md5sum' on the file. @@ -40,7 +40,7 @@ Open the accumulo shell and look at the data. The row is the MD5 hash of the fil Run the CharacterHistogram MapReduce to add some information about the file. - $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.filedata.CharacterHistogram -i instance -z zookeepers -u username -p password -t dataTable --auths exampleVis --vis exampleVis Scan again to see the histogram stored in the 'info' column family. http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.mapred ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.mapred b/docs/src/main/resources/examples/README.mapred index 9e9b17f..eccf598 100644 --- a/docs/src/main/resources/examples/README.mapred +++ b/docs/src/main/resources/examples/README.mapred @@ -23,7 +23,7 @@ accumulo table with combiners. To run this example you will need a directory in HDFS containing text files. The accumulo readme will be used to show how to run this example. - $ hadoop fs -copyFromLocal $ACCUMULO_HOME/README /user/username/wc/Accumulo.README + $ hadoop fs -copyFromLocal /path/to/accumulo/README.md /user/username/wc/Accumulo.README $ hadoop fs -ls /user/username/wc Found 1 items -rw-r--r-- 2 username supergroup 9359 2009-07-15 17:54 /user/username/wc/Accumulo.README @@ -50,7 +50,7 @@ for the column family count. After creating the table, run the word count map reduce job. 
- $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers --input /user/username/wc -t wordCount -u username -p password + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers --input /user/username/wc -t wordCount -u username -p password 11/02/07 18:20:11 INFO input.FileInputFormat: Total input paths to process : 1 11/02/07 18:20:12 INFO mapred.JobClient: Running job: job_201102071740_0003 @@ -134,14 +134,14 @@ Because the basic WordCount example uses Opts to parse its arguments the basic WordCount example by calling the same command as explained above except replacing the password with the token file (rather than -p, use -tf). - $ ./bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers --input /user/username/wc -t wordCount -u username -tf tokenfile + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount -i instance -z zookeepers --input /user/username/wc -t wordCount -u username -tf tokenfile In the above examples, username was 'root' and tokenfile was 'root.pw' However, if you don't want to use the Opts class to parse arguments, the TokenFileWordCount is an example of using the token file manually. - $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TokenFileWordCount instance zookeepers username tokenfile /user/username/wc wordCount The results should be the same as the WordCount example except that the authentication token was not stored in the configuration. It was instead http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.regex ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.regex b/docs/src/main/resources/examples/README.regex index ea9f208..1fe9af9 100644 --- a/docs/src/main/resources/examples/README.regex +++ b/docs/src/main/resources/examples/README.regex @@ -41,7 +41,7 @@ in parallel and will store the results in files in hdfs. The following will search for any rows in the input table that starts with "dog": - $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RegexExample -u user -p passwd -i instance -t input --rowRegex 'dog.*' --output /tmp/output $ hadoop fs -ls /tmp/output Found 3 items http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.rowhash ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.rowhash b/docs/src/main/resources/examples/README.rowhash index 43782c9..4c84ca5 100644 --- a/docs/src/main/resources/examples/README.rowhash +++ b/docs/src/main/resources/examples/README.rowhash @@ -38,7 +38,7 @@ put a trivial amount of data into accumulo using the accumulo shell: The RowHash class will insert a hash for each row in the database if it contains a specified colum. 
Here's how you run the map/reduce job - $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.RowHash -u user -p passwd -i instance -t input --column cf:cq Now we can scan the table and see the hashes: http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.shard ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.shard b/docs/src/main/resources/examples/README.shard index d08658a..b656927 100644 --- a/docs/src/main/resources/examples/README.shard +++ b/docs/src/main/resources/examples/README.shard @@ -36,7 +36,6 @@ After creating the tables, index some files. The following command indexes all o The following command queries the index to find all files containing 'foo' and 'bar'. - $ cd $ACCUMULO_HOME $ ./bin/accumulo org.apache.accumulo.examples.simple.shard.Query -i instance -z zookeepers -t shard -u username -p password foo bar /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/security/ColumnVisibilityTest.java /local/username/workspace/accumulo/src/core/src/test/java/accumulo/core/client/mock/MockConnectorTest.java http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.tabletofile ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.tabletofile b/docs/src/main/resources/examples/README.tabletofile index 08b7cc9..f3d49e8 100644 --- a/docs/src/main/resources/examples/README.tabletofile +++ b/docs/src/main/resources/examples/README.tabletofile @@ -40,7 +40,7 @@ write the key/value pairs to a file in HDFS. The following will extract the rows containing the column "cf:cq": - $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TableToFile -u user -p passwd -i instance -t input --columns cf:cq --output /tmp/output $ hadoop fs -ls /tmp/output -rw-r--r-- 1 username supergroup 0 2013-01-10 14:44 /tmp/output/_SUCCESS http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/docs/src/main/resources/examples/README.terasort ---------------------------------------------------------------------- diff --git a/docs/src/main/resources/examples/README.terasort b/docs/src/main/resources/examples/README.terasort index 409c1d1..4db6ce4 100644 --- a/docs/src/main/resources/examples/README.terasort +++ b/docs/src/main/resources/examples/README.terasort @@ -22,7 +22,7 @@ hadoop terasort benchmark. 
To run this example you run it with arguments describing the amount of data: - $ bin/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \ + $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.TeraSortIngest \ -i instance -z zookeepers -u user -p password \ --count 10 \ --minKeySize 10 \ http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniClusterExecutable.java ---------------------------------------------------------------------- diff --git a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniClusterExecutable.java b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniClusterExecutable.java index ecd5988..fde8d36 100644 --- a/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniClusterExecutable.java +++ b/minicluster/src/main/java/org/apache/accumulo/minicluster/impl/MiniClusterExecutable.java @@ -30,6 +30,11 @@ public class MiniClusterExecutable implements KeywordExecutable { } @Override + public String description() { + return "Start Accumulo minicluster"; + } + + @Override public void execute(final String[] args) throws Exception { MiniAccumuloRunner.main(args); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/minicluster/src/test/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControlTest.java ---------------------------------------------------------------------- diff --git a/minicluster/src/test/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControlTest.java b/minicluster/src/test/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControlTest.java index 3045ddf..7badef9 100644 --- a/minicluster/src/test/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControlTest.java +++ b/minicluster/src/test/java/org/apache/accumulo/cluster/standalone/StandaloneClusterControlTest.java @@ -46,7 +46,7 @@ public class StandaloneClusterControlTest { @Test public void mapreduceLaunchesLocally() throws Exception { - final String toolPath = "/usr/lib/accumulo/bin/tool.sh"; + final String toolPath = "/usr/lib/accumulo/lib/scripts/tool.sh"; final String jar = "/home/user/my_project.jar"; final Class clz = Object.class; final String myClass = clz.getName(); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/proxy/README ---------------------------------------------------------------------- diff --git a/proxy/README b/proxy/README index 1f44a9c..a25acec 100644 --- a/proxy/README +++ b/proxy/README @@ -46,7 +46,7 @@ Accumulo 1.5 instance, or when run standalone in the Mock configuration. Run the following command. - ${ACCUMULO_HOME}/bin/accumulo proxy -p ${ACCUMULO_HOME}/proxy/proxy.properties + ./bin/accumulo proxy -p ./opt/proxy/proxy.properties 5. 
Clients http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java ---------------------------------------------------------------------- diff --git a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java index a3d185d..5bfa808 100644 --- a/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java +++ b/proxy/src/main/java/org/apache/accumulo/proxy/Proxy.java @@ -113,6 +113,11 @@ public class Proxy implements KeywordExecutable { } @Override + public String description() { + return "Start Accumulo proxy"; + } + + @Override public void execute(final String[] args) throws Exception { Opts opts = new Opts(); opts.parseArgs(Proxy.class.getName(), args); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java b/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java index 3cfd759..fa936c4 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java +++ b/server/base/src/main/java/org/apache/accumulo/server/Accumulo.java @@ -124,8 +124,10 @@ public class Accumulo { return explicitConfigFile; } String[] configFiles = {String.format("%s/%s_logger.xml", confDir, application), String.format("%s/%s_logger.properties", confDir, application), - String.format("%s/generic_logger.xml", confDir), String.format("%s/generic_logger.properties", confDir)}; - String defaultConfigFile = configFiles[2]; // generic_logger.xml + String.format("%s/examples/%s_logger.xml", confDir, application), String.format("%s/examples/%s_logger.properties", confDir, application), + String.format("%s/generic_logger.xml", confDir), String.format("%s/generic_logger.properties", confDir), + String.format("%s/examples/generic_logger.xml", confDir), String.format("%s/examples/generic_logger.properties", confDir),}; + String defaultConfigFile = String.format("%s/examples/generic_logger.xml", confDir); for (String f : configFiles) { if (new File(f).exists()) { return f; @@ -137,10 +139,9 @@ public class Accumulo { public static void setupLogging(String application) throws UnknownHostException { System.setProperty("org.apache.accumulo.core.application", application); - if (System.getenv("ACCUMULO_LOG_DIR") != null) + if (System.getenv("ACCUMULO_LOG_DIR") != null) { System.setProperty("org.apache.accumulo.core.dir.log", System.getenv("ACCUMULO_LOG_DIR")); - else - System.setProperty("org.apache.accumulo.core.dir.log", System.getenv("ACCUMULO_HOME") + "/logs/"); + } String localhost = InetAddress.getLocalHost().getHostName(); System.setProperty("org.apache.accumulo.core.ip.localhost.hostname", localhost); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java b/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java index 942cabf..cd01998 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java +++ b/server/base/src/main/java/org/apache/accumulo/server/conf/ConfigSanityCheck.java @@ -34,6 +34,11 @@ public class ConfigSanityCheck implements KeywordExecutable { } 
@Override + public String description() { + return "Checks server config"; + } + + @Override public void execute(String[] args) throws Exception { main(args); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java index 75ded48..3a3a804 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java +++ b/server/base/src/main/java/org/apache/accumulo/server/init/Initialize.java @@ -751,6 +751,11 @@ public class Initialize implements KeywordExecutable { } @Override + public String description() { + return "Initializes Accumulo"; + } + + @Override public void execute(final String[] args) { Opts opts = new Opts(); opts.parseArgs(Initialize.class.getName(), args); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/metrics/MetricsConfiguration.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/metrics/MetricsConfiguration.java b/server/base/src/main/java/org/apache/accumulo/server/metrics/MetricsConfiguration.java index d772048..796d197 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/metrics/MetricsConfiguration.java +++ b/server/base/src/main/java/org/apache/accumulo/server/metrics/MetricsConfiguration.java @@ -148,11 +148,10 @@ public class MetricsConfiguration { } private void loadConfiguration() { - // Check to see if ACCUMULO_HOME environment variable is set. 
- String ACUHOME = getEnvironmentConfiguration().getString("ACCUMULO_CONF_DIR"); - if (null != ACUHOME) { + String accumuloConfDir = getEnvironmentConfiguration().getString("ACCUMULO_CONF_DIR"); + if (null != accumuloConfDir) { // Try to load the metrics properties file - File mFile = new File(ACUHOME, metricsFileName); + File mFile = new File(accumuloConfDir, metricsFileName); if (mFile.exists()) { if (log.isDebugEnabled()) log.debug("Loading config file: " + mFile.getAbsolutePath()); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java b/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java index 4aaa18c..d7b5c08 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java +++ b/server/base/src/main/java/org/apache/accumulo/server/util/Admin.java @@ -159,6 +159,11 @@ public class Admin implements KeywordExecutable { } @Override + public String description() { + return "Execute administrative commands"; + } + + @Override public void execute(final String[] args) { boolean everything; http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/util/Info.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/Info.java b/server/base/src/main/java/org/apache/accumulo/server/util/Info.java index d5440a6..e391a12 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/util/Info.java +++ b/server/base/src/main/java/org/apache/accumulo/server/util/Info.java @@ -33,6 +33,11 @@ public class Info implements KeywordExecutable { } @Override + public String description() { + return "Print Accumulo cluster info"; + } + + @Override public void execute(final String[] args) throws KeeperException, InterruptedException { Instance instance = HdfsZooInstance.getInstance(); System.out.println("monitor: " + MonitorUtil.getLocation(instance)); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java b/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java index 7c102e6..f7687da 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java +++ b/server/base/src/main/java/org/apache/accumulo/server/util/LoginProperties.java @@ -41,6 +41,11 @@ public class LoginProperties implements KeywordExecutable { } @Override + public String description() { + return "Print Accumulo login info"; + } + + @Override public void execute(String[] args) throws Exception { AccumuloConfiguration config = new ServerConfigurationFactory(HdfsZooInstance.getInstance()).getConfiguration(); Authenticator authenticator = AccumuloVFSClassLoader.getClassLoader().loadClass(config.get(Property.INSTANCE_SECURITY_AUTHENTICATOR)) http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java ---------------------------------------------------------------------- diff --git a/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java 
b/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java index c8ef6d5..6abbea8 100644 --- a/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java +++ b/server/base/src/main/java/org/apache/accumulo/server/util/ZooKeeperMain.java @@ -50,6 +50,11 @@ public class ZooKeeperMain implements KeywordExecutable { } @Override + public String description() { + return "Start Apache Zookeeper instance"; + } + + @Override public void execute(final String[] args) throws Exception { Opts opts = new Opts(); opts.parseArgs(ZooKeeperMain.class.getName(), args); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/base/src/test/java/org/apache/accumulo/server/AccumuloTest.java ---------------------------------------------------------------------- diff --git a/server/base/src/test/java/org/apache/accumulo/server/AccumuloTest.java b/server/base/src/test/java/org/apache/accumulo/server/AccumuloTest.java index 19b0a9b..955ee9a 100644 --- a/server/base/src/test/java/org/apache/accumulo/server/AccumuloTest.java +++ b/server/base/src/test/java/org/apache/accumulo/server/AccumuloTest.java @@ -115,7 +115,7 @@ public class AccumuloTest { String confDirName = confDir.getAbsolutePath(); assertTrue("Failed to make test configuration directory", confDir.mkdir()); try { - String genericXmlName = String.format("%s/generic_logger.xml", confDirName); + String genericXmlName = String.format("%s/examples/generic_logger.xml", confDirName); assertEquals(genericXmlName, Accumulo.locateLogConfig(confDirName, "flogger")); } finally { http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/gc/src/main/java/org/apache/accumulo/gc/GCExecutable.java ---------------------------------------------------------------------- diff --git a/server/gc/src/main/java/org/apache/accumulo/gc/GCExecutable.java b/server/gc/src/main/java/org/apache/accumulo/gc/GCExecutable.java index b3d490f..9836927 100644 --- a/server/gc/src/main/java/org/apache/accumulo/gc/GCExecutable.java +++ b/server/gc/src/main/java/org/apache/accumulo/gc/GCExecutable.java @@ -30,6 +30,11 @@ public class GCExecutable implements KeywordExecutable { } @Override + public String description() { + return "Start Accumulo garbage collector"; + } + + @Override public void execute(final String[] args) throws IOException { SimpleGarbageCollector.main(args); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/master/src/main/java/org/apache/accumulo/master/MasterExecutable.java ---------------------------------------------------------------------- diff --git a/server/master/src/main/java/org/apache/accumulo/master/MasterExecutable.java b/server/master/src/main/java/org/apache/accumulo/master/MasterExecutable.java index aabfa6d..fe2cb2f 100644 --- a/server/master/src/main/java/org/apache/accumulo/master/MasterExecutable.java +++ b/server/master/src/main/java/org/apache/accumulo/master/MasterExecutable.java @@ -29,6 +29,11 @@ public class MasterExecutable implements KeywordExecutable { } @Override + public String description() { + return "Start Accumulo master"; + } + + @Override public void execute(final String[] args) throws Exception { Master.main(args); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/monitor/src/main/java/org/apache/accumulo/monitor/MonitorExecutable.java ---------------------------------------------------------------------- diff --git a/server/monitor/src/main/java/org/apache/accumulo/monitor/MonitorExecutable.java 
b/server/monitor/src/main/java/org/apache/accumulo/monitor/MonitorExecutable.java index 9da7519..dd5efa6 100644 --- a/server/monitor/src/main/java/org/apache/accumulo/monitor/MonitorExecutable.java +++ b/server/monitor/src/main/java/org/apache/accumulo/monitor/MonitorExecutable.java @@ -29,6 +29,11 @@ public class MonitorExecutable implements KeywordExecutable { } @Override + public String description() { + return "Start Accumulo monitor"; + } + + @Override public void execute(final String[] args) throws Exception { Monitor.main(args); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/tracer/src/main/java/org/apache/accumulo/tracer/TracerExecutable.java ---------------------------------------------------------------------- diff --git a/server/tracer/src/main/java/org/apache/accumulo/tracer/TracerExecutable.java b/server/tracer/src/main/java/org/apache/accumulo/tracer/TracerExecutable.java index 3995924..fbfcb51 100644 --- a/server/tracer/src/main/java/org/apache/accumulo/tracer/TracerExecutable.java +++ b/server/tracer/src/main/java/org/apache/accumulo/tracer/TracerExecutable.java @@ -29,6 +29,11 @@ public class TracerExecutable implements KeywordExecutable { } @Override + public String description() { + return "Start Accumulo tracer"; + } + + @Override public void execute(final String[] args) throws Exception { TraceServer.main(args); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java ---------------------------------------------------------------------- diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java index 00f6ba8..4e3bf4d 100644 --- a/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java +++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/NativeMap.java @@ -71,10 +71,10 @@ public class NativeMap implements Iterable> { // Check standard directories List directories = new ArrayList<>(Arrays.asList(new File[] {new File("/usr/lib64"), new File("/usr/lib")})); // Check in ACCUMULO_HOME location, too - String envAccumuloHome = System.getenv("ACCUMULO_HOME"); - if (envAccumuloHome != null) { - directories.add(new File(envAccumuloHome + "/lib/native")); - directories.add(new File(envAccumuloHome + "/lib/native/map")); // old location, just in case somebody puts it here + String accumuloHome = System.getenv("ACCUMULO_HOME"); + if (accumuloHome != null) { + directories.add(new File(accumuloHome + "/lib/native")); + directories.add(new File(accumuloHome + "/lib/native/map")); // old location, just in case somebody puts it here } // Attempt to load from these directories, using standard names loadNativeLib(directories); http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/server/tserver/src/main/java/org/apache/accumulo/tserver/TServerExecutable.java ---------------------------------------------------------------------- diff --git a/server/tserver/src/main/java/org/apache/accumulo/tserver/TServerExecutable.java b/server/tserver/src/main/java/org/apache/accumulo/tserver/TServerExecutable.java index 4c197ca..f56a4ed 100644 --- a/server/tserver/src/main/java/org/apache/accumulo/tserver/TServerExecutable.java +++ b/server/tserver/src/main/java/org/apache/accumulo/tserver/TServerExecutable.java @@ -29,6 +29,11 @@ public class TServerExecutable implements KeywordExecutable { } @Override + public String description() { + return "Start Accumulo tablet 
server"; + } + + @Override public void execute(final String[] args) throws Exception { TabletServer.main(args); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/shell/src/main/java/org/apache/accumulo/shell/Shell.java ---------------------------------------------------------------------- diff --git a/shell/src/main/java/org/apache/accumulo/shell/Shell.java b/shell/src/main/java/org/apache/accumulo/shell/Shell.java index 0a64111..829f93c 100644 --- a/shell/src/main/java/org/apache/accumulo/shell/Shell.java +++ b/shell/src/main/java/org/apache/accumulo/shell/Shell.java @@ -584,6 +584,11 @@ public class Shell extends ShellOptions implements KeywordExecutable { } @Override + public String description() { + return "Run Accumulo shell"; + } + + @Override public void execute(final String[] args) throws IOException { try { if (!config(args)) { http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/start/src/main/java/org/apache/accumulo/start/Main.java ---------------------------------------------------------------------- diff --git a/start/src/main/java/org/apache/accumulo/start/Main.java b/start/src/main/java/org/apache/accumulo/start/Main.java index 414394a..212a8c9 100644 --- a/start/src/main/java/org/apache/accumulo/start/Main.java +++ b/start/src/main/java/org/apache/accumulo/start/Main.java @@ -20,6 +20,7 @@ import java.io.IOException; import java.lang.reflect.InvocationTargetException; import java.lang.reflect.Method; import java.lang.reflect.Modifier; +import java.util.Arrays; import java.util.Collections; import java.util.Map; import java.util.ServiceLoader; @@ -72,6 +73,10 @@ public class Main { printUsage(); System.exit(1); } + if (args[0].equals("-h") || args[0].equals("-help") || args[0].equals("--help")) { + printUsage(); + System.exit(1); + } // determine whether a keyword was used or a class name, and execute it with the remaining args String keywordOrClassName = args[0]; @@ -131,7 +136,8 @@ public class Main { try { classWithMain = getClassLoader().loadClass(className); } catch (ClassNotFoundException cnfe) { - System.out.println("Classname " + className + " not found. Please make sure you use the wholly qualified package name."); + System.out.println("Invalid argument: Java

'" + className + "' was not found. Please use the wholly qualified package name."); + printUsage(); System.exit(1); } execMainClass(classWithMain, args); @@ -194,20 +200,29 @@ public class Main { System.exit(1); } - public static void printUsage() { - TreeSet keywords = new TreeSet<>(getExecutables(getClassLoader()).keySet()); + public static void printCommand(KeywordExecutable ke) { + System.out.printf(" %-30s %s\n", ke.usage(), ke.description()); + } - // jar is a special case, because it has arguments - keywords.remove("jar"); - keywords.add("jar [
] args"); + public static void printUsage() { + Map executableMap = new TreeMap<>(getExecutables(getClassLoader())); - String prefix = ""; - String kwString = ""; - for (String kw : keywords) { - kwString += prefix + kw; - prefix = " | "; + System.out.println("\nUsage: accumulo ( ...)\n\nCore Commands:"); + System.out.println(" create-config Create Accumulo configuration"); + System.out.println(" build-native Build Accumulo native libraries"); + for (String cmd : Arrays.asList("init", "shell", "classpath", "version", "admin", "info", "help", "jar")) { + printCommand(executableMap.remove(cmd)); + } + System.out.println("
args Run Java
located on Accumulo classpath"); + System.out.println("\nProcess Commands:"); + for (String cmd : Arrays.asList("gc", "master", "monitor", "minicluster", "proxy", "tserver", "tracer", "zookeeper")) { + printCommand(executableMap.remove(cmd)); + } + System.out.println("\nAdvanced Commands:"); + for (Map.Entry entry : executableMap.entrySet()) { + printCommand(entry.getValue()); } - System.out.println("accumulo " + kwString + " | args"); + System.out.println(); } public static synchronized Map getExecutables(final ClassLoader cl) { http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java ---------------------------------------------------------------------- diff --git a/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java b/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java index 991e89e..d773fb3 100644 --- a/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java +++ b/start/src/main/java/org/apache/accumulo/start/classloader/AccumuloClassLoader.java @@ -93,9 +93,6 @@ public class AccumuloClassLoader { if (System.getenv("ACCUMULO_CONF_DIR") != null) { // accumulo conf dir should be set SITE_CONF = System.getenv("ACCUMULO_CONF_DIR") + "/" + configFile; - } else if (System.getenv("ACCUMULO_HOME") != null) { - // if no accumulo conf dir, try accumulo home default - SITE_CONF = System.getenv("ACCUMULO_HOME") + "/conf/" + configFile; } else { SITE_CONF = null; } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java ---------------------------------------------------------------------- diff --git a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java index db070ec..f287364 100644 --- a/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java +++ b/start/src/main/java/org/apache/accumulo/start/classloader/vfs/AccumuloVFSClassLoader.java @@ -52,7 +52,7 @@ import org.slf4j.LoggerFactory; * SystemClassLoader that loads JVM classes * ^ * | - * AccumuloClassLoader loads jars from locations in general.classpaths. Usually the URLs for HADOOP_HOME, ZOOKEEPER_HOME, ACCUMULO_HOME and their associated directories + * AccumuloClassLoader loads jars from locations in general.classpaths. Usually the URLs for HADOOP_HOME, ZOOKEEPER_HOME, ACCUMULO_HOME/lib and their associated directories * ^ * | * VFSClassLoader that loads jars from locations in general.vfs.classpaths. Can be used to load accumulo jar from HDFS http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/start/src/main/java/org/apache/accumulo/start/spi/KeywordExecutable.java ---------------------------------------------------------------------- diff --git a/start/src/main/java/org/apache/accumulo/start/spi/KeywordExecutable.java b/start/src/main/java/org/apache/accumulo/start/spi/KeywordExecutable.java index 9c8dbeb..30ebdda 100644 --- a/start/src/main/java/org/apache/accumulo/start/spi/KeywordExecutable.java +++ b/start/src/main/java/org/apache/accumulo/start/spi/KeywordExecutable.java @@ -37,13 +37,23 @@ import java.util.ServiceLoader; public interface KeywordExecutable { /** - * Provides access to the service's keyword. 
- * - * @return the keyword which identifies this service + * @return Keyword which identifies this service */ String keyword(); /** + * @return Usage for service + */ + default String usage() { + return keyword(); + } + + /** + * @return Description of service + */ + String description(); + + /** * Execute the item with the given arguments. * * @param args http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java ---------------------------------------------------------------------- diff --git a/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java b/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java index e1578b3..b32382c 100644 --- a/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java +++ b/test/src/main/java/org/apache/accumulo/test/start/KeywordStartIT.java @@ -199,6 +199,11 @@ public class KeywordStartIT { } @Override + public String description() { + return kw; + } + + @Override public void execute(String[] args) throws Exception {} } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/test/system/continuous/master-agitator.pl ---------------------------------------------------------------------- diff --git a/test/system/continuous/master-agitator.pl b/test/system/continuous/master-agitator.pl index aef6f1f..d87f17e 100755 --- a/test/system/continuous/master-agitator.pl +++ b/test/system/continuous/master-agitator.pl @@ -20,7 +20,7 @@ use POSIX qw(strftime); use Cwd qw(); if(scalar(@ARGV) != 2){ - print "Usage : master-agitator.pl \n"; + print "Usage : master-agitator.pl \n"; exit(1); } @@ -84,7 +84,7 @@ while(1){ $t = strftime "%Y%m%d %H:%M:%S", localtime; print STDERR "$t Running start-all\n"; - $cmd = "$ACCUMULO_HOME/bin/start-all.sh --notTservers"; + $cmd = "pssh -h $ACCUMULO_CONF_DIR/masters \"$ACCUMULO_HOME/bin/accumulo-service master start\" < /dev/null"; print "$t $cmd\n"; system($cmd); } http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/test/system/continuous/run-moru.sh ---------------------------------------------------------------------- diff --git a/test/system/continuous/run-moru.sh b/test/system/continuous/run-moru.sh index eb4403d..b3f3a75 100755 --- a/test/system/continuous/run-moru.sh +++ b/test/system/continuous/run-moru.sh @@ -33,5 +33,5 @@ CONTINUOUS_CONF_DIR=${CONTINUOUS_CONF_DIR:-${bin}} SERVER_LIBJAR="$ACCUMULO_HOME/lib/accumulo-test.jar" -"$ACCUMULO_HOME/bin/tool.sh" "$SERVER_LIBJAR" org.apache.accumulo.test.continuous.ContinuousMoru -libjars "$SERVER_LIBJAR" -i "$INSTANCE_NAME" -z "$ZOO_KEEPERS" -u "$USER" -p "$PASS" --table "$TABLE" --min "$MIN" --max "$MAX" --maxColF "$MAX_CF" --maxColQ "$MAX_CQ" --batchMemory "$MAX_MEM" --batchLatency "$MAX_LATENCY" --batchThreads "$NUM_THREADS" --maxMappers "$VERIFY_MAX_MAPS" +"$ACCUMULO_HOME/lib/scripts/tool.sh" "$SERVER_LIBJAR" org.apache.accumulo.test.continuous.ContinuousMoru -libjars "$SERVER_LIBJAR" -i "$INSTANCE_NAME" -z "$ZOO_KEEPERS" -u "$USER" -p "$PASS" --table "$TABLE" --min "$MIN" --max "$MAX" --maxColF "$MAX_CF" --maxColQ "$MAX_CQ" --batchMemory "$MAX_MEM" --batchLatency "$MAX_LATENCY" --batchThreads "$NUM_THREADS" --maxMappers "$VERIFY_MAX_MAPS" http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/test/system/continuous/run-verify.sh ---------------------------------------------------------------------- diff --git a/test/system/continuous/run-verify.sh b/test/system/continuous/run-verify.sh index b163663..6d2d048 100755 --- 
a/test/system/continuous/run-verify.sh +++ b/test/system/continuous/run-verify.sh @@ -39,4 +39,4 @@ AUTH_OPT=""; SCAN_OPT=--offline [[ $SCAN_OFFLINE == false ]] && SCAN_OPT= -"$ACCUMULO_HOME/bin/tool.sh" "$SERVER_LIBJAR" org.apache.accumulo.test.continuous.ContinuousVerify -Dmapreduce.job.reduce.slowstart.completedmaps=0.95 -libjars "$SERVER_LIBJAR" "$AUTH_OPT" -i "$INSTANCE_NAME" -z "$ZOO_KEEPERS" -u "$USER" -p "$PASS" --table "$TABLE" --output "$VERIFY_OUT" --maxMappers "$VERIFY_MAX_MAPS" --reducers "$VERIFY_REDUCERS" "$SCAN_OPT" +"$ACCUMULO_HOME/lib/scripts/tool.sh" "$SERVER_LIBJAR" org.apache.accumulo.test.continuous.ContinuousVerify -Dmapreduce.job.reduce.slowstart.completedmaps=0.95 -libjars "$SERVER_LIBJAR" "$AUTH_OPT" -i "$INSTANCE_NAME" -z "$ZOO_KEEPERS" -u "$USER" -p "$PASS" --table "$TABLE" --output "$VERIFY_OUT" --maxMappers "$VERIFY_MAX_MAPS" --reducers "$VERIFY_REDUCERS" "$SCAN_OPT" http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/test/system/merkle-replication/README ---------------------------------------------------------------------- diff --git a/test/system/merkle-replication/README b/test/system/merkle-replication/README index 18100de..b892491 100644 --- a/test/system/merkle-replication/README +++ b/test/system/merkle-replication/README @@ -49,7 +49,7 @@ data using a Merkle tree. Ingests the configured amount of random data into the source table. -4. Run stop-all.sh && start-all.sh on the source instance +4. Run 'accumulo-cluster stop' && 'accumulo-cluster start' on the source instance A tabletserver in the source instance is likely to still be referencing a WAL for a presently online tablet which will prevent that http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/test/system/scalability/run.py ---------------------------------------------------------------------- diff --git a/test/system/scalability/run.py b/test/system/scalability/run.py index 68836e5..2b21a12 100755 --- a/test/system/scalability/run.py +++ b/test/system/scalability/run.py @@ -49,7 +49,7 @@ def file_len(fname): def runTest(testName, siteConfig, testDir, numNodes, fdata): log('Stopping accumulo') - syscall('$ACCUMULO_HOME/bin/stop-all.sh') + syscall('$ACCUMULO_HOME/bin/accumulo-cluster stop') log('Creating tservers file for this test') tserversPath = siteConfig.get('TSERVERS') @@ -68,7 +68,7 @@ def runTest(testName, siteConfig, testDir, numNodes, fdata): syscall('printf "%s\nY\n%s\n%s\n" | $ACCUMULO_HOME/bin/accumulo init' % (instance, passwd, passwd)) log('Starting new Accumulo instance') - syscall('$ACCUMULO_HOME/bin/start-all.sh') + syscall('$ACCUMULO_HOME/bin/accumulo-cluster start') sleepTime = 30 if numNodes > 120: @@ -177,7 +177,7 @@ def main(): raise 'ACCUMULO_HOME needs to be set!' if not os.getenv('ACCUMULO_CONF_DIR'): - os.environ['ACCUMULO_CONF_DIR'] = os.path.join(os.getenv('ACCUMULO_HOME'), 'conf') + raise 'ACCUMULO_CONF_DIR needs to be set!' if not os.getenv('HADOOP_HOME'): raise 'HADOOP_HOME needs to be set!' 
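The scalability harness above now requires ACCUMULO_CONF_DIR to be set explicitly rather than falling back to $ACCUMULO_HOME/conf. A minimal sketch of the environment these test scripts expect before they are run; the install paths are placeholders, not part of the patch:

    $ export ACCUMULO_HOME=/opt/accumulo              # placeholder install root
    $ export ACCUMULO_CONF_DIR="$ACCUMULO_HOME/conf"  # now required; run.py no longer defaults it
    $ export HADOOP_HOME=/opt/hadoop                  # placeholder
    $ "$ACCUMULO_HOME/bin/accumulo-cluster" start     # cluster start command invoked by run.py
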
http://git-wip-us.apache.org/repos/asf/accumulo/blob/158cf16d/test/system/upgrade_test.sh ---------------------------------------------------------------------- diff --git a/test/system/upgrade_test.sh b/test/system/upgrade_test.sh index 651755d..0a258a4 100755 --- a/test/system/upgrade_test.sh +++ b/test/system/upgrade_test.sh @@ -54,7 +54,7 @@ fi echo "==== Starting Current ===" -"$CURR/bin/start-all.sh" +"$CURR/bin/accumulo-cluster start" "$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 1 --random 56 --rows 400000 --start 0 --cols 1 -i $INSTANCE -u root -p secret echo "compact -t test_ingest -w" | $CURR/bin/accumulo shell -u root -p secret "$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 1 --random 56 --rows 400000 --start 0 --cols 1 -i $INSTANCE -u root -p secret @@ -65,13 +65,13 @@ echo "compact -t test_ingest -w" | $CURR/bin/accumulo shell -u root -p secret echo "compact -t test_ingest -w" | $CURR/bin/accumulo shell -u root -p secret "$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret -"$CURR/bin/stop-all.sh" -"$CURR/bin/start-all.sh" +"$CURR/bin/accumulo-cluster stop" +"$CURR/bin/accumulo-cluster start" "$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret pkill -9 -f accumulo.start -"$CURR/bin/start-all.sh" +"$CURR/bin/accumulo-cluster start" "$CURR/bin/accumulo" org.apache.accumulo.test.VerifyIngest --size 50 --timestamp 2 --random 57 --rows 500000 --start 0 --cols 1 -i $INSTANCE -u root -p secret
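
Taken together, the scripts touched by this patch converge on a small set of entry points. Roughly how they are intended to be invoked, as a sketch only; host names, jar paths, and the example main class are placeholders:

    $ accumulo-cluster start                  # cluster-wide start, replacing bin/start-all.sh
    $ accumulo-cluster stop                   # cluster-wide stop, replacing bin/stop-all.sh
    $ accumulo-service tserver start          # start one service on the local host, replacing bin/start-here.sh
    $ accumulo admin stop <host>:9997         # gracefully stop a tablet server on one host
    $ ./lib/scripts/tool.sh lib/accumulo-examples-simple.jar org.apache.accumulo.examples.simple.mapreduce.WordCount <args>   # MapReduce examples, replacing bin/tool.sh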