From: ctubbsii@apache.org
To: commits@accumulo.apache.org
Date: Fri, 16 May 2014 19:38:14 -0000
Subject: [3/3] git commit: Merge branch '1.6.1-SNAPSHOT'

Merge branch '1.6.1-SNAPSHOT'

Conflicts:
	docs/src/main/asciidoc/chapters/administration.txt

Project: http://git-wip-us.apache.org/repos/asf/accumulo/repo
Commit: http://git-wip-us.apache.org/repos/asf/accumulo/commit/6b36d53a
Tree: http://git-wip-us.apache.org/repos/asf/accumulo/tree/6b36d53a
Diff: http://git-wip-us.apache.org/repos/asf/accumulo/diff/6b36d53a

Branch: refs/heads/master
Commit: 6b36d53a90f817af6c53d4324f0d731ef7537450
Parents: c312fa6 f61abc0
Author: Christopher Tubbs <ctubbsii@apache.org>
Authored: Fri May 16 15:37:08 2014 -0400
Committer: Christopher Tubbs <ctubbsii@apache.org>
Committed: Fri May 16 15:37:08 2014 -0400

----------------------------------------------------------------------
 README                                          |  7 +++++++
 .../main/asciidoc/chapters/administration.txt   | 19 +++++++++++++------
 2 files changed, 20 insertions(+), 6 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6b36d53a/README
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/accumulo/blob/6b36d53a/docs/src/main/asciidoc/chapters/administration.txt
----------------------------------------------------------------------
diff --cc docs/src/main/asciidoc/chapters/administration.txt
index 4dbcd1b,0000000..b4c1a71
mode 100644,000000..100644
--- a/docs/src/main/asciidoc/chapters/administration.txt
+++ b/docs/src/main/asciidoc/chapters/administration.txt
@@@ -1,396 -1,0 +1,403 @@@
+// Licensed to the Apache Software Foundation (ASF) under one or more
+// contributor license agreements. See the NOTICE file distributed with
+// this work for additional information regarding copyright ownership.
+// The ASF licenses this file to You under the Apache License, Version 2.0
+// (the "License"); you may not use this file except in compliance with
+// the License. You may obtain a copy of the License at
+//
+//     http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+== Administration
+
+=== Hardware
+
+Because we are essentially running two or three systems simultaneously layered
+across the cluster (HDFS, Accumulo, and MapReduce), it is typical for hardware to
+consist of 4 to 8 cores and 8 to 32 GB of RAM, so that each running process can have
+at least one core and 2 to 4 GB of memory.
+
+One core running HDFS can typically keep 2 to 4 disks busy, so each machine may
+have as little as 2 x 300GB disks and as much as 4 x 1TB or 2TB disks.
+
+It is possible to get by with less than this, such as 1U servers with 2 cores and 4GB
+each, but in this case it is recommended to run only up to two processes per
+machine -- i.e. DataNode and TabletServer or DataNode and MapReduce worker, but
+not all three. The constraint here is having enough available heap space for all the
+processes on a machine.
+
+=== Network
+
+Accumulo communicates via remote procedure calls over TCP/IP for both passing
+data and control messages. In addition, Accumulo uses HDFS clients to
+communicate with HDFS. To achieve good ingest and query performance, sufficient
+network bandwidth must be available between any two machines.
+
+In addition to needing access to ports associated with HDFS and ZooKeeper, Accumulo
+uses the following default ports. Please make sure that they are open, or change
+their value in +conf/accumulo-site.xml+.
+
+.Accumulo default ports
+[width="75%",cols=">,^2,^2"]
+[options="header"]
+|====
+|Port | Description | Property Name
+|4445 | Shutdown Port (Accumulo MiniCluster) | n/a
+|4560 | Accumulo monitor (for centralized log display) | monitor.port.log4j
+|9997 | Tablet Server | tserver.port.client
+|9999 | Master Server | master.port.client
+|12234 | Accumulo Tracer | trace.port.client
+|42424 | Accumulo Proxy Server | n/a
+|50091 | Accumulo GC | gc.port.client
+|50095 | Accumulo HTTP monitor | monitor.port.client
+|====
+
+In addition, the user can provide +0+ and an ephemeral port will be chosen instead. This
+ephemeral port is likely to be unique and not already bound. Thus, configuring ports to
+use +0+ instead of an explicit value should, in most cases, work around any issues of
+running multiple distinct Accumulo instances (or any other process which tries to use the
+same default ports) on the same hardware.
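+For example, to let the tablet server bind an ephemeral port instead of 9997, a
+property like the following could be added to +conf/accumulo-site.xml+ (a minimal
+sketch; the property name comes from the table above):
+
+[source,xml]
+<property>
+    <name>tserver.port.client</name>
+    <value>0</value>
+    <description>use an ephemeral port for the tablet server</description>
+</property>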
+
+=== Installation
+
+Choose a directory for the Accumulo installation. This directory will be referenced
+by the environment variable +$ACCUMULO_HOME+. Run the following:
+
+  $ tar xzf accumulo-1.6.0-bin.tar.gz    # unpack to subdirectory
+  $ mv accumulo-1.6.0 $ACCUMULO_HOME     # move to desired location
+
+Repeat this step on each machine in the cluster. Usually all machines have the
+same +$ACCUMULO_HOME+.
+
+=== Dependencies
+
+Accumulo requires HDFS and ZooKeeper to be configured and running
+before starting. Password-less SSH should be configured between at least the
+Accumulo master and TabletServer machines. It is also a good idea to run Network
+Time Protocol (NTP) within the cluster to ensure that nodes' clocks don't get too far
+out of sync, which can cause problems with automatically timestamped data.
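+As a quick sanity check before starting Accumulo, commands like the following can
+verify the dependencies above (a minimal sketch; the hostnames +master+ and
++slave1+ are placeholders for your own machines):
+
+  $ ssh slave1 true               # should return without prompting for a password
+  $ echo ruok | nc master 2181    # a healthy ZooKeeper answers "imok"
+  $ hdfs dfsadmin -report         # confirms HDFS is up and DataNodes are live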
+
+=== Configuration
+
+Accumulo is configured by editing several Shell and XML files found in
++$ACCUMULO_HOME/conf+. The structure closely resembles Hadoop's configuration
+files.
+
+==== Edit conf/accumulo-env.sh
+
+Accumulo needs to know where to find the software it depends on. Edit accumulo-env.sh
+and specify the following:
+
+. Enter the location of the installation directory of Accumulo for +$ACCUMULO_HOME+
+. Enter your system's Java home for +$JAVA_HOME+
+. Enter the location of Hadoop for +$HADOOP_PREFIX+
+. Choose a location for Accumulo logs and enter it for +$ACCUMULO_LOG_DIR+
+. Enter the location of ZooKeeper for +$ZOOKEEPER_HOME+
+
+By default Accumulo TabletServers are set to use 1GB of memory. You may change
+this by altering the value of +$ACCUMULO_TSERVER_OPTS+. Note that the syntax is that
+of Java JVM command-line options. This value should be less than the physical
+memory of the machines running TabletServers.
+
+There are similar options for the master's memory usage and the garbage collector
+process. Reduce these if they exceed the physical RAM of your hardware, and
+increase them, within the bounds of the physical RAM, if a process fails because of
+insufficient memory.
+
+Note that you will be specifying the Java heap space in accumulo-env.sh. You should
+make sure that the total heap space used for the Accumulo tserver, the Hadoop
+DataNode, and the TaskTracker is less than the available memory on each slave node in
+the cluster. On large clusters, it is recommended that the Accumulo master, Hadoop
+NameNode, secondary NameNode, and Hadoop JobTracker all be run on separate
+machines to allow them to use more heap space. If you are running these on the
+same machine on a small cluster, likewise make sure their heap space settings fit
+within the available memory.
+
+==== Native Map
+
+The tablet server uses a data structure called a MemTable to store sorted key/value
+pairs in memory when they are first received from the client. When a minor compaction
+occurs, this data structure is written to HDFS. The MemTable defaults to using
+memory in the JVM, but a JNI version, called the native map, can be used to significantly
+speed up performance by utilizing the memory space of the native operating system. The
+native map also sidesteps the performance implications of garbage collection
+in the JVM, causing it to pause much less frequently.
+
- 32-bit and 64-bit Linux versions of the native map ship with the Accumulo dist package.
- For other operating systems, the native map can be built from the codebase in two ways-
- from maven or from the Makefile.
++32-bit and 64-bit Linux and Mac OS X versions of the native map can be built
++from the Accumulo bin package by executing
+++$ACCUMULO_HOME/bin/build_native_library.sh+. If your system's
++default compiler options are insufficient, you can add additional compiler
++options to the command line, such as options for the architecture. These will be
++passed to the Makefile in the environment variable +USERFLAGS+.
+
- . Build from maven using the following command: +mvn clean package -Pnative+.
- . Build from the c++ source by running +make+ in the `$ACCUMULO_HOME/server/src/main/c++` directory.
++Examples:
++
++. +$ACCUMULO_HOME/bin/build_native_library.sh+
++. +$ACCUMULO_HOME/bin/build_native_library.sh -m32+
+
+After building the native map from the source, you will find the artifact in
++$ACCUMULO_HOME/lib/native+. Upon starting up, the tablet server will look
+in this directory for the map library. If the file is renamed or moved from its
- target directory, the tablet server may not be able to find it.
++target directory, the tablet server may not be able to find it. The system can
++also locate the native maps shared library by setting +LD_LIBRARY_PATH+
++(or +DYLD_LIBRARY_PATH+ on Mac OS X) in +$ACCUMULO_HOME/conf/accumulo-env.sh+.
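+As an illustration, the memory and native-map settings above might look like the
+following in +accumulo-env.sh+ (a minimal sketch with made-up heap sizes; tune
+them to your own hardware):
+
+  export ACCUMULO_TSERVER_OPTS="-Xmx4g -Xms4g"
+  export LD_LIBRARY_PATH="$ACCUMULO_HOME/lib/native:$LD_LIBRARY_PATH"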
+
+==== Cluster Specification
+
+On the machine that will serve as the Accumulo master:
+
+. Write the IP address or domain name of the Accumulo Master to the +$ACCUMULO_HOME/conf/masters+ file.
+. Write the IP addresses or domain names of the machines that will be TabletServers to +$ACCUMULO_HOME/conf/slaves+, one per line.
+
+Note that if using domain names rather than IP addresses, DNS must be configured
+properly for all machines participating in the cluster. DNS can be a confusing source
+of errors.
+
+==== Accumulo Settings
+
+Specify appropriate values for the following settings in
++$ACCUMULO_HOME/conf/accumulo-site.xml+:
+
+[source,xml]
+<property>
+    <name>instance.zookeeper.host</name>
+    <value>zooserver-one:2181,zooserver-two:2181</value>
+    <description>list of zookeeper servers</description>
+</property>
+
+This enables Accumulo to find ZooKeeper. Accumulo uses ZooKeeper to coordinate
+settings between processes and to help finalize TabletServer failure.
+
+[source,xml]
+<property>
+    <name>instance.secret</name>
+    <value>DEFAULT</value>
+</property>
+
+The instance needs a secret to enable secure communication between servers. Configure your
+secret and make sure that the +accumulo-site.xml+ file is not readable by other users.
+
+Some settings can be modified via the Accumulo shell and take effect immediately, but
+some settings require a process restart to take effect. See the configuration documentation
+(available in the docs directory of the tarball and in <<configuration>>) for details.
+
+==== Deploy Configuration
+
+Copy the masters, slaves, accumulo-env.sh, and, if necessary, accumulo-site.xml
+files from the +$ACCUMULO_HOME/conf/+ directory on the master to all the machines
+specified in the slaves file.
+
+=== Initialization
+
+Accumulo must be initialized to create the structures it uses internally to locate
+data across the cluster. HDFS must be configured and running before
+Accumulo can be initialized.
+
+Once HDFS is started, initialization can be performed by executing
++$ACCUMULO_HOME/bin/accumulo init+. This script will prompt for a name
+for this instance of Accumulo. The instance name is used to identify a set of tables
+and instance-specific settings. The script will then write some information into
+HDFS so Accumulo can start properly.
+
+The initialization script will prompt you to set a root password. Once Accumulo is
+initialized, it can be started.
+
+=== Running
+
+==== Starting Accumulo
+
+Make sure Hadoop is configured on all of the machines in the cluster, including
+access to a shared HDFS instance. Make sure HDFS is running, and make sure
+ZooKeeper is configured and running on at least one machine in the cluster.
+Then start Accumulo using the +bin/start-all.sh+ script.
+
+To verify that Accumulo is running, check the Status page as described under
+_Monitoring_. In addition, the Shell can provide some information about the status of
+tables by reading the metadata tables.
+
+==== Stopping Accumulo
+
+To shut down cleanly, run +bin/stop-all.sh+ and the master will orchestrate the
+shutdown of all the tablet servers. Shutdown waits for all minor compactions to finish, so it may
+take some time for particular configurations.
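+Putting the above together, a first launch might look like this (a minimal sketch;
+the init prompts are paraphrased rather than quoted verbatim):
+
+  $ $ACCUMULO_HOME/bin/accumulo init     # prompts for an instance name and a root password
+  $ $ACCUMULO_HOME/bin/start-all.sh      # starts Accumulo across the cluster
+  $ $ACCUMULO_HOME/bin/stop-all.sh       # later: clean, master-orchestrated shutdown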
+
+==== Adding a Node
+
+Update your +$ACCUMULO_HOME/conf/slaves+ (or +$ACCUMULO_CONF_DIR/slaves+) file to account for the addition, then run:
+
+  $ACCUMULO_HOME/bin/accumulo admin start <host> {<host> ...}
+
+Alternatively, you can ssh to each of the hosts you want to add and run:
+
+  $ACCUMULO_HOME/bin/start-here.sh
+
+Make sure the host in question has the new configuration, or else the tablet
+server won't start; at a minimum the updated configuration needs to be on the host(s)
+being added, but in practice it's good to ensure consistent configuration across all nodes.
+
+==== Decommissioning a Node
+
+If you need to take a node out of operation, you can trigger a graceful shutdown of a tablet
+server. Accumulo will automatically rebalance the tablets across the available tablet servers.
+
+  $ACCUMULO_HOME/bin/accumulo admin stop <host> {<host> ...}
+
+Alternatively, you can ssh to each of the hosts you want to remove and run:
+
+  $ACCUMULO_HOME/bin/stop-here.sh
+
+Be sure to update your +$ACCUMULO_HOME/conf/slaves+ (or +$ACCUMULO_CONF_DIR/slaves+) file to
+account for the removal of these hosts. Bear in mind that the monitor will not re-read the
+slaves file automatically, so it will report the decommissioned servers as down; it is
+recommended that you restart the monitor so that the node list is up to date.
+
+=== Monitoring
+
+The Accumulo Master provides an interface for monitoring the status and health of
+Accumulo components. This interface can be accessed by pointing a web browser to
++http://accumulomaster:50095/status+.
+
+=== Tracing
+
+It can be difficult to determine why some operations are taking longer
+than expected. For example, you may be looking up items with very low
+latency, but sometimes the lookups take much longer. Determining the
+cause of the delay is difficult because the system is distributed, and
+the typical lookup is fast.
+
+Accumulo has been instrumented to record the time that various
+operations take when tracing is turned on. Once tracing is enabled, that
+state follows all requests made on behalf of the user throughout
+the distributed infrastructure of Accumulo, and across all threads of
+execution.
+
+These time spans will be inserted into the +trace+ table in
+Accumulo. You can browse recent traces from the Accumulo monitor
+page. You can also read the +trace+ table directly, like any
+other table.
+
+The design of Accumulo's distributed tracing follows that of
+http://research.google.com/pubs/pub36356.html[Google's Dapper].
+
+==== Tracers
+
+To collect traces, Accumulo needs at least one server listed in
++$ACCUMULO_HOME/conf/tracers+. The server collects traces
+from clients and writes them to the +trace+ table. The Accumulo
+user that the tracer connects to Accumulo with can be configured with
+the following properties:
+
+  trace.user
+  trace.token.property.password
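+For example, to have the tracer connect as a dedicated +tracer+ user, properties
+like these could be added to +accumulo-site.xml+ (a minimal sketch; the username
+and password are placeholders):
+
+[source,xml]
+<property>
+    <name>trace.user</name>
+    <value>tracer</value>
+</property>
+<property>
+    <name>trace.token.property.password</name>
+    <value>tracerPassword</value>
+</property>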
+
+==== Instrumenting a Client
+
+Tracing can be used to measure a client operation, such as a scan, as
+the operation traverses the distributed system. To enable tracing for
+your application, call
+
+[source,java]
+DistributedTrace.enable(instance, new ZooReader(instance), hostname, "myApplication");
+
+Once tracing has been enabled, a client can wrap an operation in a trace.
+
+[source,java]
+Trace.on("Client Scan");
+BatchScanner scanner = conn.createBatchScanner(...);
+// Configure your scanner
+for (Entry<Key,Value> entry : scanner) {
+}
+Trace.off();
+
+Additionally, the user can create additional Spans within a Trace.
+
+[source,java]
+Trace.on("Client Update");
+...
+Span readSpan = Trace.start("Read");
+...
+readSpan.stop();
+...
+Span writeSpan = Trace.start("Write");
+...
+writeSpan.stop();
+Trace.off();
+
+Like Dapper, Accumulo tracing supports user-defined annotations to associate additional data with a Trace.
+
+[source,java]
+...
+int numberOfEntriesRead = 0;
+Span readSpan = Trace.start("Read");
+// Do the read, update the counter
+...
+readSpan.data("Number of Entries Read", String.valueOf(numberOfEntriesRead));
+
+Some client operations may have a high volume within your
+application. As such, you may wish to sample only a percentage of
+operations for tracing. As seen below, the CountSampler can be used to
+enable tracing for 1-in-1000 operations:
+
+[source,java]
+Sampler sampler = new CountSampler(1000);
+...
+if (sampler.next()) {
+  Trace.on("Read");
+}
+...
+Trace.offNoFlush();
+
+Note that it is safe to turn off tracing even if it
+isn't currently active. +Trace.offNoFlush()+ should be used if the
+user does not wish to have +Trace.off()+ block while flushing trace
+data.
+
+==== Viewing Collected Traces
+
+To view collected traces, use the "Recent Traces" link on the Monitor
+UI. You can also programmatically access and print traces using the
++TraceDump+ class.
+
+==== Tracing from the Shell
+
+You can enable tracing for operations run from the shell by using the
++trace on+ and +trace off+ commands.
+
+----
+root@test test> trace on
+
+root@test test> scan
+a b:c []    d
+
+root@test test> trace off
+Waiting for trace information
+Waiting for trace information
+Trace started at 2013/08/26 13:24:08.332
+Time  Start  Service@Location       Name
+ 3628+0      shell@localhost shell:root
+    8+1690   shell@localhost scan
+    7+1691   shell@localhost scan:location
+    6+1692   tserver@localhost startScan
+    5+1692   tserver@localhost tablet read ahead 6
+----
+
+=== Logging
+
+Each Accumulo process writes to a set of log files. By default these are found under
++$ACCUMULO_HOME/logs/+.
+
+=== Recovery
+
+In the event of TabletServer failure, or an error on shutting Accumulo down, some
+mutations may not have been minor compacted to HDFS properly. In this case,
+Accumulo will automatically reapply such mutations from the write-ahead log,
+either when the tablets from the failed server are reassigned by the Master (in the
+case of a single TabletServer failure) or the next time Accumulo starts (in the event of
+failure during shutdown).
+
+Recovery is performed by asking a tablet server to sort the logs so that tablets can easily find their missing
+updates. The sort status of each file is displayed on the
+Accumulo monitor status page. Once the recovery is complete, any
+tablets involved should return to an ``online'' state. Until then, those tablets will be
+unavailable to clients.
+
+The Accumulo client library is configured to retry failed mutations, and in many
+cases clients will be able to continue processing after the recovery process without
+throwing an exception.