accumulo-commits mailing list archives

From mwa...@apache.org
Subject [accumulo-website] branch master updated: Updated docs to reflect change to accumulo.properties (#106)
Date Thu, 06 Sep 2018 16:43:50 GMT
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
     new d4847f5  Updated docs to reflect change to accumulo.properties (#106)
d4847f5 is described below

commit d4847f5b446c4612af3dcd43f34f65668d63025b
Author: Mike Walch <mwalch@apache.org>
AuthorDate: Thu Sep 6 12:43:47 2018 -0400

    Updated docs to reflect change to accumulo.properties (#106)
    
    * Migrated changes in quick-install.md & properties.md from
      Accumulo master
---
 .../administration/configuration-management.md     | 12 +--
 _docs-2-0/administration/in-depth-install.md       | 86 ++++++++--------------
 _docs-2-0/administration/kerberos.md               | 38 +++-------
 _docs-2-0/administration/monitoring-metrics.md     |  8 +-
 _docs-2-0/administration/multivolume.md            | 12 +--
 _docs-2-0/administration/properties.md             | 11 ++-
 _docs-2-0/administration/replication.md            | 33 +++------
 _docs-2-0/administration/ssl.md                    |  6 +-
 _docs-2-0/administration/tracing.md                | 23 ++----
 _docs-2-0/getting-started/quick-install.md         | 32 +++++---
 _docs-2-0/troubleshooting/basic.md                 |  2 +-
 _docs-2-0/troubleshooting/tools.md                 |  2 +-
 12 files changed, 108 insertions(+), 157 deletions(-)

diff --git a/_docs-2-0/administration/configuration-management.md b/_docs-2-0/administration/configuration-management.md
index a3bc0fc..6e71304 100644
--- a/_docs-2-0/administration/configuration-management.md
+++ b/_docs-2-0/administration/configuration-management.md
@@ -17,8 +17,8 @@ Accumulo services (i.e master, tablet server, monitor, etc) are configured using
 set in the following locations (with increasing precedence):
 
 1. Default values
-2. accumulo-site.xml (overrides defaults)
-3. Zookeeper (overrides accumulo-site.xml & defaults)
+2. accumulo.properties (overrides defaults)
+3. Zookeeper (overrides accumulo.properties & defaults)
 
 If a property is set in multiple locations, the value in the location with the highest precedence is used.
 
@@ -29,11 +29,11 @@ The configuration locations above are described in detail below.
 All [server properties][props] have a default value that is listed for each property on the [properties][props] page. Default values are set in the source code.
 While default values have the lowest precedence, they are usually optimal.  However, there are cases where a change can increase query and ingest performance.
 
-### accumulo-site.xml
+### accumulo.properties
 
-Setting [server properties][props] in accumulo-site.xml will override their default value. If you are running Accumulo on a cluster, any updates to accumulo-site.xml must
-be synced across the cluster. Accumulo processes (master, tserver, etc) read their local accumulo-site.xml on start up so processes must be restarted to apply changes.
-Certain properties can only be set in accumulo-site.xml. These properties have **zk mutable: no** in their description. Setting properties in accumulo-site.xml allows you
+Setting [server properties][props] in accumulo.properties will override their default value. If you are running Accumulo on a cluster, any updates to accumulo.properties must
+be synced across the cluster. Accumulo processes (master, tserver, etc) read their local accumulo.properties on start up so processes must be restarted to apply changes.
+Certain properties can only be set in accumulo.properties. These properties have **zk mutable: no** in their description. Setting properties in accumulo.properties allows you
 to configure tablet servers with different settings.
 
 ### Zookeeper
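
As an aside (not part of the patch): the three-layer precedence order described in the hunk above can be sketched as successive dict overlays. This is an illustrative Python sketch, not Accumulo source code; the names `DEFAULTS`, `effective_config`, and the sample property values are hypothetical.

```python
# Illustrative sketch (not Accumulo code) of configuration precedence:
# defaults < accumulo.properties < ZooKeeper. Later layers win.

DEFAULTS = {"tserver.port.search": "false", "gc.cycle.delay": "5m"}

def effective_config(file_props, zk_props):
    """Merge layers in increasing precedence order."""
    merged = dict(DEFAULTS)
    merged.update(file_props)   # accumulo.properties overrides defaults
    merged.update(zk_props)     # ZooKeeper overrides both
    return merged

cfg = effective_config({"gc.cycle.delay": "2m"}, {"tserver.port.search": "true"})
print(cfg["gc.cycle.delay"], cfg["tserver.port.search"])  # 2m true
```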
diff --git a/_docs-2-0/administration/in-depth-install.md b/_docs-2-0/administration/in-depth-install.md
index 8e6f560..1cb4501 100644
--- a/_docs-2-0/administration/in-depth-install.md
+++ b/_docs-2-0/administration/in-depth-install.md
@@ -32,7 +32,7 @@ network bandwidth must be available between any two machines.
 
 In addition to needing access to ports associated with HDFS and ZooKeeper, Accumulo will
 use the following default ports. Please make sure that they are open, or change
-their value in accumulo-site.xml.
+their value in accumulo.properties.
 
 |Port | Description | Property Name
 |-----|-------------|--------------
@@ -91,7 +91,7 @@ The Accumulo tarball contains a `conf/` directory where Accumulo looks for confi
 installed Accumulo using downstream packaging, the `conf/` could be something else like
 `/etc/accumulo/`.
 
-Before starting Accumulo, the configuration files `accumulo-env.sh` and `accumulo-site.xml` must
+Before starting Accumulo, the configuration files `accumulo-env.sh` and `accumulo.properties` must
 exist in `conf/` and be properly configured. If you are using `accumulo-cluster` to launch
 a cluster, the `conf/` directory must also contain hosts file for Accumulo services (i.e `gc`,
 `masters`, `monitor`, `tservers`, `tracers`). You can either create these files manually or run
@@ -208,14 +208,14 @@ Note that if using domain names rather than IP addresses, DNS must be configured
 properly for all machines participating in the cluster. DNS can be a confusing source
 of errors.
 
-### Configure accumulo-site.xml
+### Configure accumulo.properties
 
-Specify appropriate values for the following properties in `accumulo-site.xml`:
+Specify appropriate values for the following properties in `accumulo.properties`:
 
 * [instance.zookeeper.host] - Enables Accumulo to find ZooKeeper. Accumulo uses ZooKeeper
   to coordinate settings between processes and helps finalize TabletServer failure.
 * [instance.secret] - The instance needs a secret to enable secure communication between servers.
-  Configure your secret and make sure that the `accumulo-site.xml` file is not readable to other users.
+  Configure your secret and make sure that the `accumulo.properties` file is not readable to other users.
   For alternatives to storing the [instance.secret] in plaintext, please read the
   [Sensitive Configuration Values](#sensitive-configuration-values) section.
 
@@ -227,7 +227,7 @@ documentation for details.
 
 Accumulo has a number of configuration files which can contain references to other hosts in your
 network. All of the "host" configuration files for Accumulo (`gc`, `masters`, `tservers`, `monitor`,
-`tracers`) as well as `instance.volumes` in accumulo-site.xml must contain some host reference.
+`tracers`) as well as `instance.volumes` in accumulo.properties must contain some host reference.
 
 While IP address, short hostnames, or fully qualified domain names (FQDN) are all technically valid, it
 is good practice to always use FQDNs for both Accumulo and other processes in your Hadoop cluster.
@@ -242,13 +242,13 @@ Accumulo identifies `localhost:8020` as a different HDFS instance than `127.0.0.
 
 ### Deploy Configuration
 
-Copy accumulo-env.sh and accumulo-site.xml from the `conf/` directory on the master to all Accumulo
+Copy accumulo-env.sh and accumulo.properties from the `conf/` directory on the master to all Accumulo
 tablet servers.  The "host" configuration files for `accumulo-cluster` only need to be on servers
 where that command is run.
 
 ### Sensitive Configuration Values
 
-Accumulo has a number of properties that can be specified via the accumulo-site.xml
+Accumulo has a number of properties that can be specified via the accumulo.properties
 file which are sensitive in nature, [instance.secret] and `trace.token.property.password`
 are two common examples. Both of these properties, if compromised, have the ability
 to result in data being leaked to users who should not have access to that data.
@@ -261,7 +261,7 @@ these classes, the feature will just be unavailable for use.
 
 A comma separated list of CredentialProviders can be configured using the Accumulo Property
 [general.security.credential.provider.paths]. Each configured URL will be consulted
-when the Configuration object for accumulo-site.xml is accessed.
+when the Configuration object for accumulo.properties is accessed.
 
 ### Using a JavaKeyStoreCredentialProvider for storage
 
@@ -275,13 +275,10 @@ The command will then prompt you to enter the secret to use and create a keystor
 
     /path/to/accumulo/conf/accumulo.jceks
 
-Then, accumulo-site.xml must be configured to use this KeyStore as a CredentialProvider:
+Then, `accumulo.properties` must be configured to use this KeyStore as a CredentialProvider:
 
-```xml
-<property>
-    <name>general.security.credential.provider.paths</name>
-    <value>jceks://file/path/to/accumulo/conf/accumulo.jceks</value>
-</property>
+```
+general.security.credential.provider.paths=jceks://file/path/to/accumulo/conf/accumulo.jceks
 ```
 
 This configuration will then transparently extract the [instance.secret] from
@@ -330,27 +327,20 @@ The Accumulo classpath can be viewed in human readable format by running `accumu
 ##### ClassLoader Contexts
 
 With the addition of the VFS based classloader, we introduced the notion of classloader contexts. A context is identified
-by a name and references a set of locations from which to load classes and can be specified in the accumulo-site.xml file or added
-using the `config` command in the shell. Below is an example for specify the app1 context in the accumulo-site.xml file:
-
-```xml
-<property>
-  <name>general.vfs.context.classpath.app1</name>
-  <value>hdfs://localhost:8020/applicationA/classpath/.*.jar,file:///opt/applicationA/lib/.*.jar</value>
-  <description>Application A classpath, loads jars from HDFS and local file system</description>
-</property>
+by a name and references a set of locations from which to load classes and can be specified in the accumulo.properties file or added
+using the `config` command in the shell. Below is an example of specifying the app1 context in the accumulo.properties file:
+
+```
+# Application A classpath, loads jars from HDFS and local file system
+general.vfs.context.classpath.app1=hdfs://localhost:8020/applicationA/classpath/.*.jar,file:///opt/applicationA/lib/.*.jar
 ```
 
 The default behavior follows the Java ClassLoader contract in that classes, if they exist, are loaded from the parent classloader first.
 You can override this behavior by delegating to the parent classloader after looking in this classloader first. An example of this
 configuration is:
 
-```xml
-<property>
-  <name>general.vfs.context.classpath.app1.delegation=post</name>
-  <value>hdfs://localhost:8020/applicationA/classpath/.*.jar,file:///opt/applicationA/lib/.*.jar</value>
-  <description>Application A classpath, loads jars from HDFS and local file system</description>
-</property>
+```
+general.vfs.context.classpath.app1.delegation=post
 ```
 
 To use contexts in your application you can set the `table.classpath.context` on your tables or use the `setClassLoaderContext()` method on Scanner
@@ -454,17 +444,11 @@ to be able to scale to using 10's of GB of RAM and 10's of CPU cores.
 Accumulo TabletServers bind certain ports on the host to accommodate remote procedure calls to/from
 other nodes. Running more than one TabletServer on a host requires that you set the environment variable
 `ACCUMULO_SERVICE_INSTANCE` to an instance number (i.e 1, 2) for each instance that is started. Also, set
-these properties in `accumulo-site.xml`:
-
-```xml
-  <property>
-    <name>tserver.port.search</name>
-    <value>true</value>
-  </property>
-  <property>
-    <name>replication.receipt.service.port</name>
-    <value>0</value>
-  </property>
+these properties in `accumulo.properties`:
+
+```
+tserver.port.search=true
+replication.receipt.service.port=0
 ```
 
 ## Logging
@@ -513,22 +497,16 @@ that the only volume displayed is the volume from the current namenode's HDFS UR
 
 After verifying the current volume is correct, shut down the cluster and transition HDFS to the HA nameservice.
 
-Edit `accumulo-site.xml` to notify accumulo that a volume is being replaced. First,
+Edit `accumulo.properties` to notify accumulo that a volume is being replaced. First,
 add the new nameservice volume to the `instance.volumes` property. Next, add the
 `instance.volumes.replacements` property in the form of `old new`. It's important to not include
 the volume that's being replaced in `instance.volumes`, otherwise it's possible accumulo could continue
 to write to the volume.
 
-```xml
-<!-- instance.dfs.uri and instance.dfs.dir should not be set-->
-<property>
-  <name>instance.volumes</name>
-  <value>hdfs://nameservice1/accumulo</value>
-</property>
-<property>
-  <name>instance.volumes.replacements</name>
-  <value>hdfs://namenode.example.com:8020/accumulo hdfs://nameservice1/accumulo</value>
-</property>
+```
+# instance.dfs.uri and instance.dfs.dir should not be set
+instance.volumes=hdfs://nameservice1/accumulo
+instance.volumes.replacements=hdfs://namenode.example.com:8020/accumulo hdfs://nameservice1/accumulo
 ```
 
 Run `accumulo init --add-volumes` and start up the accumulo cluster. Verify that the
@@ -605,14 +583,14 @@ process kills do not show up in Accumulo or Hadoop logs.
 
 To calculate the max memory usage of all java virtual machine (JVM) processes
 add the maximum heap size (often limited by a -Xmx... argument, such as in
-accumulo-site.xml) and the off-heap memory usage. Off-heap memory usage
+accumulo.properties) and the off-heap memory usage. Off-heap memory usage
 includes the following:
 
 * "Permanent Space", where the JVM stores Classes, Methods, and other code elements. This can be limited by a JVM flag such as `-XX:MaxPermSize:100m`, and is typically tens of megabytes.
 * Code generation space, where the JVM stores just-in-time compiled code. This is typically small enough to ignore
 * Socket buffers, where the JVM stores send and receive buffers for each socket.
 * Thread stacks, where the JVM allocates memory to manage each thread.
-* Direct memory space and JNI code, where applications can allocate memory outside of the JVM-managed space. For Accumulo, this includes the native in-memory maps that are allocated with the memory.maps.max parameter in accumulo-site.xml.
+* Direct memory space and JNI code, where applications can allocate memory outside of the JVM-managed space. For Accumulo, this includes the native in-memory maps that are allocated with the memory.maps.max parameter in accumulo.properties.
 * Garbage collection space, where the JVM stores information used for garbage collection.
 
 You can assume that each Hadoop and Accumulo process will use ~100-150MB for
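
As an aside (not part of the patch): the max-memory accounting described in the hunk above can be sketched as simple arithmetic. This is an illustrative Python sketch; the function name and all figures (thread count, stack size, overhead) are hypothetical assumptions, not Accumulo defaults.

```python
# Hypothetical back-of-the-envelope helper for JVM memory accounting:
# max usage ≈ heap (-Xmx) + native in-memory maps (memory.maps.max)
# + thread stacks + a rough per-process overhead (code cache, socket
# buffers, GC bookkeeping). All numbers are illustrative assumptions.

def estimate_max_memory_mb(heap_mb, native_maps_mb=0, threads=100,
                           stack_kb_per_thread=512, overhead_mb=150):
    stacks_mb = threads * stack_kb_per_thread / 1024
    return heap_mb + native_maps_mb + stacks_mb + overhead_mb

# e.g. a tserver with a 4 GB heap and 1 GB of native maps:
print(estimate_max_memory_mb(4096, native_maps_mb=1024))  # 5320.0
```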
diff --git a/_docs-2-0/administration/kerberos.md b/_docs-2-0/administration/kerberos.md
index 91e04cd..f0759a2 100644
--- a/_docs-2-0/administration/kerberos.md
+++ b/_docs-2-0/administration/kerberos.md
@@ -137,7 +137,7 @@ all Accumulo servers must share the same instance and realm principal components
 #### Server Configuration
 
 A number of properties need to be changed to properly configure servers
-in `accumulo-site.xml`.
+in `accumulo.properties`.
 
 |Key | Default Value | Description
 |----|---------------|-------------
@@ -200,11 +200,11 @@ also be given by the `-u` or `--user` options.
 If you are enabling Kerberos on an existing cluster, you will need to reinitialize the security system in
 order to replace the existing "root" user with one that can be used with Kerberos. These steps should be
 completed after you have done the previously described configuration changes and will require access to
-a complete `accumulo-site.xml`, including the instance secret. Note that this process will delete all
+a complete `accumulo.properties`, including the instance secret. Note that this process will delete all
 existing users in the system; you will need to reassign user permissions based on Kerberos principals.
 
 1. Ensure Accumulo is not running.
-2. Given the path to a `accumulo-site.xml` with the instance secret, run the security reset tool. If you are
+2. Given the path to an `accumulo.properties` with the instance secret, run the security reset tool. If you are
 prompted for a password you can just hit return, since it won't be used.
 3. Start the Accumulo cluster
 
@@ -242,21 +242,14 @@ access to the secret key material in order to make a secure connection to Accumu
 it can only connect to Accumulo as itself. Impersonation, in this context, refers to the ability
 of the proxy to authenticate to Accumulo as itself, but act on behalf of an Accumulo user.
 
-Accumulo supports basic impersonation of end-users by a third party via static rules in Accumulo's
-site configuration file. These two properties are semi-colon separated properties which are aligned
+Accumulo supports basic impersonation of end-users by a third party via static rules in
+`accumulo.properties`. These two properties are semi-colon separated properties which are aligned
 by index. The first element in the user impersonation property value matches the first element
 in the host impersonation property value, etc.
 
-```xml
-<property>
-  <name>instance.rpc.sasl.allowed.user.impersonation</name>
-  <value>$PROXY_USER:*</value>
-</property>
-
-<property>
-  <name>instance.rpc.sasl.allowed.host.impersonation</name>
-  <value>*</value>
-</property>
+```
+instance.rpc.sasl.allowed.user.impersonation=$PROXY_USER:*
+instance.rpc.sasl.allowed.host.impersonation=*
 ```
 
 Here, `$PROXY_USER` can impersonate any user from any host.
@@ -264,16 +257,9 @@ Here, `$PROXY_USER` can impersonate any user from any host.
 The following is an example of specifying a subset of users `$PROXY_USER` can impersonate and also
 limiting the hosts from which `$PROXY_USER` can initiate requests.
 
-```xml
-<property>
-  <name>instance.rpc.sasl.allowed.user.impersonation</name>
-  <value>$PROXY_USER:user1,user2;$PROXY_USER2:user2,user4</value>
-</property>
-
-<property>
-  <name>instance.rpc.sasl.allowed.host.impersonation</name>
-  <value>host1.domain.com,host2.domain.com;*</value>
-</property>
+```
+instance.rpc.sasl.allowed.user.impersonation=$PROXY_USER:user1,user2;$PROXY_USER2:user2,user4
+instance.rpc.sasl.allowed.host.impersonation=host1.domain.com,host2.domain.com;*
 ```
 
 Here, `$PROXY_USER` can impersonate user1 and user2 only from host1.domain.com or host2.domain.com.
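
As an aside (not part of the patch): the index alignment between the two semicolon-separated properties shown above can be sketched in a few lines. This is an illustrative Python sketch of the pairing rule, not Accumulo's actual parser; the function name is hypothetical.

```python
# Illustrative sketch (not Accumulo code): pair the i-th entry of the
# user impersonation property with the i-th entry of the host property.

def parse_impersonation(user_prop, host_prop):
    rules = {}
    for user_entry, hosts in zip(user_prop.split(";"), host_prop.split(";")):
        proxy, _, users = user_entry.partition(":")
        rules[proxy] = {"users": users.split(","), "hosts": hosts.split(",")}
    return rules

rules = parse_impersonation(
    "$PROXY_USER:user1,user2;$PROXY_USER2:user2,user4",
    "host1.domain.com,host2.domain.com;*",
)
print(rules["$PROXY_USER"]["hosts"])  # ['host1.domain.com', 'host2.domain.com']
```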
@@ -600,7 +586,7 @@ java.lang.AssertionError: AuthenticationToken should not be null
 
 **A**: This indicates that the Monitor has not been able to successfully log in a client-side user to read from the `trace` table. Accumulo allows the TraceServer to rely on the property `general.kerberos.keytab` as a fallback when logging in the trace user if the `trace.token.property.keytab` property isn't defined. Some earlier versions of Accumulo did not do this same fallback for the Monitor's use of the trace user. The end result is that if you configure `general.kerberos.keytab` an [...]
 
-Ensure you have set `trace.token.property.keytab` to point to a keytab for the principal defined in `trace.user` in the `accumulo-site.xml` file for the Monitor, since that should work in all versions of Accumulo.
+Ensure you have set `trace.token.property.keytab` to point to a keytab for the principal defined in `trace.user` in the `accumulo.properties` file for the Monitor, since that should work in all versions of Accumulo.
 
 [sasl.enabled]: {% purl -c sasl.enabled %}
 [sasl.qop]: {% purl -c sasl.qop %}
diff --git a/_docs-2-0/administration/monitoring-metrics.md b/_docs-2-0/administration/monitoring-metrics.md
index add252c..6be385d 100644
--- a/_docs-2-0/administration/monitoring-metrics.md
+++ b/_docs-2-0/administration/monitoring-metrics.md
@@ -41,7 +41,7 @@ is strongly recommended that the Monitor is not exposed to any publicly-accessib
 
 ### SSL
 
-SSL may be enabled for the monitor page by setting the following properties in the `accumulo-site.xml` file:
+SSL may be enabled for the monitor page by setting the following properties in the `accumulo.properties` file:
 
  * {% plink monitor.ssl.keyStore %}
  * {% plink monitor.ssl.keyStorePassword %}
@@ -50,10 +50,10 @@ SSL may be enabled for the monitor page by setting the following properties in t
 
 If the Accumulo conf directory has been configured (in particular the `accumulo-env.sh` file must be set up), the
 `accumulo-util gen-monitor-cert` command can be used to create the keystore and truststore files with random passwords. The command
-will print out the properties that need to be added to the `accumulo-site.xml` file. The stores can also be generated manually with the
+will print out the properties that need to be added to the `accumulo.properties` file. The stores can also be generated manually with the
 Java `keytool` command, whose usage can be seen in the `accumulo-util` script.
 
-If desired, the SSL ciphers allowed for connections can be controlled via the following properties in `accumulo-site.xml`:
+If desired, the SSL ciphers allowed for connections can be controlled via the following properties in `accumulo.properties`:
 
  * {% plink monitor.ssl.include.ciphers %}
  * {% plink monitor.ssl.exclude.ciphers %}
@@ -70,7 +70,7 @@ Accumulo can expose metrics through a legacy metrics library and using the Hadoo
 ### Legacy Metrics
 
 Accumulo has a legacy metrics library that can expose metrics using JMX endpoints or file-based logging. These metrics can
-be enabled by setting {% plink general.legacy.metrics %} to `true` in `accumulo-site.xml` and placing the `accumulo-metrics.xml`
+be enabled by setting {% plink general.legacy.metrics %} to `true` in `accumulo.properties` and placing the `accumulo-metrics.xml`
 configuration file on the classpath (which is typically done by placing the file in the `conf/` directory). A template for
 `accumulo-metrics.xml` can be found in `conf/templates` of the Accumulo tarball.
 
diff --git a/_docs-2-0/administration/multivolume.md b/_docs-2-0/administration/multivolume.md
index 6d418a8..4707554 100644
--- a/_docs-2-0/administration/multivolume.md
+++ b/_docs-2-0/administration/multivolume.md
@@ -28,11 +28,8 @@ servers.  The configuration [instance.volumes] should be set to a
 comma-separated list, using full URI references to different NameNode
 servers:
 
-```xml
-<property>
-    <name>instance.volumes</name>
-    <value>hdfs://ns1:9001,hdfs://ns2:9001</value>
-</property>
+```
+instance.volumes=hdfs://ns1:9001,hdfs://ns2:9001
 ```
 
 The introduction of multiple volume support in 1.6 changed the way Accumulo
@@ -52,10 +49,7 @@ ns2 with nsB in Accumulo metadata. For this property to take affect, Accumulo
wi
 need to be restarted.
 
 ```
-<property>
-    <name>instance.volumes.replacements</name>
-    <value>hdfs://ns1:9001 hdfs://nsA:9001, hdfs://ns2:9001 hdfs://nsB:9001</value>
-</property>
+instance.volumes.replacements=hdfs://ns1:9001 hdfs://nsA:9001, hdfs://ns2:9001 hdfs://nsB:9001
 ```
 
 Using viewfs or HA namenode, introduced in Hadoop 2, offers another option for
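
As an aside (not part of the patch): the `instance.volumes.replacements` format shown in the hunk above (comma-separated pairs, each pair `old new` separated by a space) can be sketched as a small parser. This is an illustrative Python sketch, not Accumulo code; the function name is hypothetical.

```python
# Illustrative sketch (not Accumulo code) of reading the
# instance.volumes.replacements value: comma-separated "old new" pairs.

def parse_replacements(value):
    mapping = {}
    for pair in value.split(","):
        old, new = pair.split()   # whitespace-separated "old new"
        mapping[old] = new
    return mapping

repl = parse_replacements(
    "hdfs://ns1:9001 hdfs://nsA:9001, hdfs://ns2:9001 hdfs://nsB:9001")
print(repl["hdfs://ns2:9001"])  # hdfs://nsB:9001
```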
diff --git a/_docs-2-0/administration/properties.md b/_docs-2-0/administration/properties.md
index f4b177e..a236a1f 100644
--- a/_docs-2-0/administration/properties.md
+++ b/_docs-2-0/administration/properties.md
@@ -6,14 +6,13 @@ order: 3
 
 <!-- WARNING: Do not edit this file. It is a generated file that is copied from Accumulo build (from core/target/generated-docs) -->
 
-Below are properties set in `accumulo-site.xml` or the Accumulo shell that configure Accumulo servers (i.e tablet server, master, etc):
+Below are properties set in `accumulo.properties` or the Accumulo shell that configure Accumulo servers (i.e tablet server, master, etc):
 
 | Property | Description |
 |--------------|-------------|
 | <a name="gc_prefix" class="prop"></a> **gc.*** | Properties in this category affect the behavior of the accumulo garbage collector. |
 | <a name="gc_cycle_delay" class="prop"></a> gc.cycle.delay | Time between garbage collection cycles. In each cycle, old RFiles or write-ahead logs no longer in use are removed from the filesystem.<br>**type:** TIMEDURATION, **zk mutable:** yes, **default value:** `5m` |
 | <a name="gc_cycle_start" class="prop"></a> gc.cycle.start | Time to wait before attempting to garbage collect any old RFiles or write-ahead logs.<br>**type:** TIMEDURATION, **zk mutable:** yes, **default value:** `30s` |
-| <a name="gc_file_archive" class="prop"></a> gc.file.archive | Archive any files/directories instead of moving to the HDFS trash or deleting.<br>**type:** BOOLEAN, **zk mutable:** yes, **default value:** `false` |
 | <a name="gc_port_client" class="prop"></a> gc.port.client | The listening port for the garbage collector's monitor service<br>**type:** PORT, **zk mutable:** yes but requires restart of the gc, **default value:** `9998` |
 | <a name="gc_threads_delete" class="prop"></a> gc.threads.delete | The number of threads used to delete RFiles and write-ahead logs<br>**type:** COUNT, **zk mutable:** yes, **default value:** `16` |
 | <a name="gc_trace_percent" class="prop"></a> gc.trace.percent | Percent of gc cycles to trace<br>**type:** FRACTION, **zk mutable:** yes, **default value:** `0.01` |
@@ -35,7 +34,7 @@ Below are properties set in `accumulo-site.xml` or the Accumulo shell that confi
 | <a name="general_server_simpletimer_threadpool_size" class="prop"></a> general.server.simpletimer.threadpool.size | The number of threads to use for server-internal scheduled tasks<br>**type:** COUNT, **zk mutable:** no, **default value:** `1` |
 | <a name="general_vfs_cache_dir" class="prop"></a> general.vfs.cache.dir | Directory to use for the vfs cache. The cache will keep a soft reference to all of the classes loaded in the VM. This should be on local disk on each node with sufficient space. It defaults to ${java.io.tmpdir}/accumulo-vfs-cache-${user.name}<br>**type:** ABSOLUTEPATH, **zk mutable:** no, **default value:** `${java.io.tmpdir}/accumulo-vfs-cache-${user.name}` |
 | <a name="general_vfs_classpaths" class="prop"></a> general.vfs.classpaths | Configuration for a system level vfs classloader. Accumulo jar can be configured here and loaded out of HDFS.<br>**type:** STRING, **zk mutable:** no, **default value:** empty |
-| <a name="general_vfs_context_classpath_prefix" class="prop"></a> **general.vfs.context.classpath.*** | Properties in this category are define a classpath. These properties start  with the category prefix, followed by a context name. The value is a comma seperated list of URIs. Supports full regex on filename alone. For example, general.vfs.context.classpath.cx1=hdfs://nn1:9902/mylibdir/*.jar. You can enable post delegation for a context, which will load classes from the context first i [...]
+| <a name="general_vfs_context_classpath_prefix" class="prop"></a> **general.vfs.context.classpath.*** | Properties in this category define a classpath. These properties start with the category prefix, followed by a context name. The value is a comma separated list of URIs. Supports full regex on filename alone. For example, general.vfs.context.classpath.cx1=hdfs://nn1:9902/mylibdir/*.jar. You can enable post delegation for a context, which will load classes from the context first i [...]
 | <a name="instance_prefix" class="prop"></a> **instance.*** | Properties in this category must be consistent throughout a cloud. This is enforced and servers won't be able to communicate if these differ. |
 | <a name="instance_dfs_dir" class="prop"></a> instance.dfs.dir | **Deprecated.** ~~HDFS directory in which accumulo instance will run. Do not change after accumulo is initialized.~~<br>~~**type:** ABSOLUTEPATH~~, ~~**zk mutable:** no~~, ~~**default value:** `/accumulo`~~ |
 | <a name="instance_dfs_uri" class="prop"></a> instance.dfs.uri | **Deprecated.** ~~A url accumulo should use to connect to DFS. If this is empty, accumulo will obtain this information from the hadoop configuration. This property will only be used when creating new files if instance.volumes is empty. After an upgrade to 1.6.0 Accumulo will start using absolute paths to reference files. Files created before a 1.6.0 upgrade are referenced via relative paths. Relative paths will always be r [...]
@@ -44,12 +43,12 @@ Below are properties set in `accumulo-site.xml` or the Accumulo shell that confi
 | <a name="instance_rpc_sasl_enabled" class="prop"></a> instance.rpc.sasl.enabled | Configures Thrift RPCs to require SASL with GSSAPI which supports Kerberos authentication. Mutually exclusive with SSL RPC configuration.<br>**type:** BOOLEAN, **zk mutable:** no, **default value:** `false` |
 | <a name="instance_rpc_ssl_clientAuth" class="prop"></a> instance.rpc.ssl.clientAuth | Require clients to present certs signed by a trusted root<br>**type:** BOOLEAN, **zk mutable:** no, **default value:** `false` |
 | <a name="instance_rpc_ssl_enabled" class="prop"></a> instance.rpc.ssl.enabled | Use SSL for socket connections from clients and among accumulo services. Mutually exclusive with SASL RPC configuration.<br>**type:** BOOLEAN, **zk mutable:** no, **default value:** `false` |
-| <a name="instance_secret" class="prop"></a> instance.secret | A secret unique to a given instance that all servers must know in order to communicate with one another. It should be changed prior to the initialization of Accumulo. To change it after Accumulo has been initialized, use the ChangeSecret tool and then update accumulo-site.xml everywhere. Before using the ChangeSecret tool, make sure Accumulo is not running and you are logged in as the user that controls Accumulo files in HDF [...]
+| <a name="instance_secret" class="prop"></a> instance.secret | A secret unique to a given instance that all servers must know in order to communicate with one another. It should be changed prior to the initialization of Accumulo. To change it after Accumulo has been initialized, use the ChangeSecret tool and then update accumulo.properties everywhere. Before using the ChangeSecret tool, make sure Accumulo is not running and you are logged in as the user that controls Accumulo files in H [...]
 | <a name="instance_security_authenticator" class="prop"></a> instance.security.authenticator | The authenticator class that accumulo will use to determine if a user has privilege to perform an action<br>**type:** CLASSNAME, **zk mutable:** no, **default value:** {% jlink -f org.apache.accumulo.server.security.handler.ZKAuthenticator %} |
 | <a name="instance_security_authorizor" class="prop"></a> instance.security.authorizor | The authorizor class that accumulo will use to determine what labels a user has privilege to see<br>**type:** CLASSNAME, **zk mutable:** no, **default value:** {% jlink -f org.apache.accumulo.server.security.handler.ZKAuthorizor %} |
 | <a name="instance_security_permissionHandler" class="prop"></a> instance.security.permissionHandler | The permission handler class that accumulo will use to determine if a user has privilege to perform an action<br>**type:** CLASSNAME, **zk mutable:** no, **default value:** {% jlink -f org.apache.accumulo.server.security.handler.ZKPermHandler %} |
-| <a name="instance_volumes" class="prop"></a> instance.volumes | A comma seperated list of dfs uris to use. Files will be stored across these filesystems. If this is empty, then instance.dfs.uri will be used. After adding uris to this list, run 'accumulo init --add-volume' and then restart tservers. If entries are removed from this list then tservers will need to be restarted. After a uri is removed from the list Accumulo will not create new files in that location, however Accumulo can  [...]
-| <a name="instance_volumes_replacements" class="prop"></a> instance.volumes.replacements | Since accumulo stores absolute URIs changing the location of a namenode could prevent Accumulo from starting. The property helps deal with that situation. Provide a comma separated list of uri replacement pairs here if a namenode location changes. Each pair shold be separated with a space. For example, if hdfs://nn1 was replaced with hdfs://nnA and hdfs://nn2 was replaced with hdfs://nnB, then set [...]
+| <a name="instance_volumes" class="prop"></a> instance.volumes | A comma separated list of dfs uris to use. Files will be stored across these filesystems. If this is empty, then instance.dfs.uri will be used. After adding uris to this list, run 'accumulo init --add-volume' and then restart tservers. If entries are removed from this list then tservers will need to be restarted. After a uri is removed from the list Accumulo will not create new files in that location, however Accumulo can  [...]
+| <a name="instance_volumes_replacements" class="prop"></a> instance.volumes.replacements | Since accumulo stores absolute URIs changing the location of a namenode could prevent Accumulo from starting. The property helps deal with that situation. Provide a comma separated list of uri replacement pairs here if a namenode location changes. Each pair should be separated with a space. For example, if hdfs://nn1 was replaced with hdfs://nnA and hdfs://nn2 was replaced with hdfs://nnB, then se [...]
 | <a name="instance_zookeeper_host" class="prop"></a> instance.zookeeper.host | Comma separated list of zookeeper servers<br>**type:** HOSTLIST, **zk mutable:** no, **default value:** `localhost:2181` |
 | <a name="instance_zookeeper_timeout" class="prop"></a> instance.zookeeper.timeout | Zookeeper session timeout; max value when represented as milliseconds should be no larger than 2147483647<br>**type:** TIMEDURATION, **zk mutable:** no, **default value:** `30s` |
 | <a name="master_prefix" class="prop"></a> **master.*** | Properties in this category affect the behavior of the master server |
diff --git a/_docs-2-0/administration/replication.md b/_docs-2-0/administration/replication.md
index ee5e7aa..8fd9f49 100644
--- a/_docs-2-0/administration/replication.md
+++ b/_docs-2-0/administration/replication.md
@@ -51,14 +51,11 @@ into the following sections.
 
 Each system involved in replication (even the primary) needs a name that uniquely
 identifies it across all peers in the replication graph. This should be considered
-fixed for an instance, and set using {% plink replication.name %} in `accumulo-site.xml`.
+fixed for an instance, and set using {% plink replication.name %} in `accumulo.properties`.
 
-```xml
-<property>
-    <name>replication.name</name>
-    <value>primary</value>
-    <description>Unique name for this system used by replication</description>
-</property>
+```
+# Unique name for this system used by replication
+replication.name=primary
 ```
 
 ### Instance Configuration
@@ -69,7 +66,7 @@ to connect to this remote peer. In the case of Accumulo, this additional
data
 is the Accumulo instance name and ZooKeeper quorum; however, this varies on the
 replication implementation for the peer.
 
-These can be set in the site configuration to ease deployments; however, as they may
+These can be set in `accumulo.properties` to ease deployments; however, as they may
 change, it can be useful to set this information using the Accumulo shell.
 
 To configure a peer with the name `peer1` which is an Accumulo system with an instance name
of `accumulo_peer`
@@ -132,7 +129,7 @@ On this page, information is broken down into the following sections:
 
 Depending on the schema of a table, different implementations of the [WorkAssigner]
 used could be configured. The implementation is controlled via the property {% plink replication.work.assigner
%}
-and the full class name for the implementation. This can be configured via the shell or `accumulo-site.xml`.
+and the full class name for the implementation. This can be configured via the shell or `accumulo.properties`.
 
 Two implementations of [WorkAssigner] are provided:
 
@@ -197,27 +194,21 @@ with that name as well (primary:2181 and peer:2181).
 
 We want to configure these systems so that `my_table` on **primary** replicates to `my_table`
on **peer**.
 
-### accumulo-site.xml
+### accumulo.properties
 
 We can assign the "unique" name that identifies this Accumulo instance among all others that
might participate
 in replication together. In this example, we will use the names provided in the description.
 
 #### Primary
 
-```xml
-<property>
-  <name>replication.name</name>
-  <value>primary</value>
-</property>
+```
+replication.name=primary
 ```
 
 #### Peer
 
-```xml
-<property>
-  <name>replication.name</name>
-  <value>peer</value>
-</property>
+```
+replication.name=peer
 ```
 
 ### masters and tservers files
@@ -227,7 +218,7 @@ a local node talking to another local node.
 
 ### Start both instances
 
-The rest of the configuration is dynamic and is best configured on the fly (in ZooKeeper)
than in accumulo-site.xml.
+The rest of the configuration is dynamic and is best configured on the fly (in ZooKeeper)
than in accumulo.properties.
 
 ### Peer
 
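The dynamic peer definition described in the replication hunks above lives in ZooKeeper rather than in `accumulo.properties`. As a sketch only (the `replication.peer.*` property name and the `AccumuloReplicaSystem` option string follow the convention in recent Accumulo releases and should be checked against your version), registering the peer from the primary's shell might look like:

```
root@primary> config -s replication.peer.peer1=org.apache.accumulo.tserver.replication.AccumuloReplicaSystem,accumulo_peer,peer:2181
```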
diff --git a/_docs-2-0/administration/ssl.md b/_docs-2-0/administration/ssl.md
index 07db4fe..20dbf12 100644
--- a/_docs-2-0/administration/ssl.md
+++ b/_docs-2-0/administration/ssl.md
@@ -30,7 +30,7 @@ included in a section below. Accumulo servers require a certificate and
keystore
 in the form of Java KeyStores, to enable SSL. The following configuration assumes
 these files already exist.
 
-In `accumulo-site.xml`, the following properties are required:
+In `accumulo.properties`, the following properties are required:
 
 * {% plink rpc.javax.net.ssl.keyStore %}  = _The path on the local filesystem to the keystore
containing the server's certificate_
 * {% plink rpc.javax.net.ssl.keyStorePassword %} = _The password for the keystore containing
the server's certificate_
@@ -39,7 +39,7 @@ In `accumulo-site.xml`, the following properties are required:
 * {% plink instance.rpc.ssl.enabled %} = _true_
 
 Optionally, SSL client-authentication (two-way SSL) can also be enabled by setting
-{% plink instance.rpc.ssl.clientAuth %} `true` in `accumulo-site.xml`.
+{% plink instance.rpc.ssl.clientAuth %} `true` in `accumulo.properties`.
This requires that each client has access to a valid certificate to set up a secure connection
 to the servers. By default, Accumulo uses one-way SSL which does not require clients to have
 their own certificate.
@@ -55,7 +55,7 @@ the properties to connect to an Accumulo instance using SSL:
 * {% plink -c ssl.truststore.path %}
 * {% plink -c ssl.truststore.password %}
 
-If two-way SSL is enabled for the Accumulo instance (by setting [instance.rpc.ssl.clientAuth]
to `true` in `accumulo-site.xml`),
+If two-way SSL is enabled for the Accumulo instance (by setting [instance.rpc.ssl.clientAuth]
to `true` in `accumulo.properties`),
 Accumulo clients must also define their own certificate by setting the following properties:
 
 * {% plink -c ssl.keystore.path %}
diff --git a/_docs-2-0/administration/tracing.md b/_docs-2-0/administration/tracing.md
index eb7cfb1..5ef0a8c 100644
--- a/_docs-2-0/administration/tracing.md
+++ b/_docs-2-0/administration/tracing.md
@@ -68,7 +68,7 @@ trace.span.receiver. when set in the Accumulo configuration.
     tracer.span.min.ms - minimum span length to store (in ms, default 1)
 
 To configure an Accumulo client for tracing, set {% plink -c trace.span.receivers %} and
{% plink -c trace.zookeeper.path %}
-in `accumulo-client.properties`. Also, any [trace.span.receiver.*] properties set in `accumulo-site.xml`
should be set in
+in `accumulo-client.properties`. Also, any [trace.span.receiver.*] properties set in `accumulo.properties`
should be set in
 `accumulo-client.properties`.
 
 Hadoop can also be configured to send traces to Accumulo, as of
@@ -116,12 +116,9 @@ for adding any SpanReceiver to Accumulo:
 `lib/` and NOT in `lib/ext/` so that the new SpanReceiver class
 is visible to the same class loader of htrace-core.
 
-2. Add the following to `accumulo-site.xml`:
+2. Add the following to `accumulo.properties`:
 
-        <property>
-          <name>trace.span.receivers</name>
-          <value>org.apache.accumulo.tracer.ZooTraceClient,org.apache.htrace.impl.ZipkinSpanReceiver</value>
-        </property>
+        trace.span.receivers=org.apache.accumulo.tracer.ZooTraceClient,org.apache.htrace.impl.ZipkinSpanReceiver
 
 3. Restart your Accumulo tablet servers.
 
@@ -144,18 +141,12 @@ this is easily done by adding to your client's pom.xml (taking care
to specify a
 3. Instrument your client as in the next section.
 
 Your SpanReceiver may require additional properties, and if so these should likewise
-be placed in `accumulo-client.properties` (if applicable) and Accumulo's `accumulo-site.xml`.
+be placed in `accumulo-client.properties` (if applicable) and Accumulo's `accumulo.properties`.
 Two such properties for ZipkinSpanReceiver, listed with their default values, are
 
-```xml
-<property>
-  <name>trace.span.receiver.zipkin.collector-hostname</name>
-  <value>localhost</value>
-</property>
-<property>
-  <name>trace.span.receiver.zipkin.collector-port</name>
-  <value>9410</value>
-</property>
+```
+trace.span.receiver.zipkin.collector-hostname=localhost
+trace.span.receiver.zipkin.collector-port=9410
 ```
 
 ### Instrumenting a Client
diff --git a/_docs-2-0/getting-started/quick-install.md b/_docs-2-0/getting-started/quick-install.md
index 037725b..0307ed8 100644
--- a/_docs-2-0/getting-started/quick-install.md
+++ b/_docs-2-0/getting-started/quick-install.md
@@ -31,26 +31,27 @@ For convenience, consider adding `accumulo-{{ page.latest_release }}/bin/`
to yo
 Accumulo requires running [Zookeeper] and [HDFS] instances which should be set up
 before configuring Accumulo.
 
-The primary configuration files for Accumulo are `accumulo-env.sh` and `accumulo-site.xml`
-which are located in the `conf/` directory.
+The primary configuration files for Accumulo are `accumulo.properties`, `accumulo-env.sh`,
+and `accumulo-client.properties` which are located in the `conf/` directory.
 
-Follow the steps below to configure `accumulo-site.xml`:
+The `accumulo.properties` file configures Accumulo server processes (i.e. tablet server, master,
+monitor, etc). Follow these steps to set it up:
 
 1. Run `accumulo-util build-native` to build native code.  If this command fails, disable
-   native maps by setting `tserver.memory.maps.native.enabled` to `false`.
+   native maps by setting {% plink tserver.memory.maps.native.enabled %} to `false`.
 
-2. Set `instance.volumes` to HDFS location where Accumulo will store data. If your namenode
+2. Set {% plink instance.volumes %} to the HDFS location where Accumulo will store data. If your
namenode
    is running at 192.168.1.9:8020 and you want to store data in `/accumulo` in HDFS, then
set
-   `instance.volumes` to `hdfs://192.168.1.9:8020/accumulo`.
+   {% plink instance.volumes %} to `hdfs://192.168.1.9:8020/accumulo`.
 
-3. Set `instance.zookeeper.host` to the location of your Zookeepers
+3. Set {% plink instance.zookeeper.host %} to the location of your Zookeepers
 
-4. (Optional) Change `instance.secret` (which is used by Accumulo processes to communicate)
+4. (Optional) Change {% plink instance.secret %} (which is used by Accumulo processes to
communicate)
    from the default. This value should match on all servers.
 
-Follow the steps below to configure `accumulo-env.sh`:
+The `accumulo-env.sh` file sets up environment variables needed by Accumulo:
 
-1. Set `HADOOP_PREFIX` and `ZOOKEEPER_HOME` to the location of your Hadoop and Zookeeper
+1. Set `HADOOP_HOME` and `ZOOKEEPER_HOME` to the location of your Hadoop and Zookeeper
    installations. Accumulo will use these locations to find Hadoop and Zookeeper jars and
add
   them to your `CLASSPATH` variable. If you are running a vendor-specific release of
    Hadoop or Zookeeper, you may need to modify how the `CLASSPATH` variable is built in
@@ -70,6 +71,17 @@ Follow the steps below to configure `accumulo-env.sh`:
 3. (Optional) Review the memory settings for the Accumulo master, garbage collector, and
monitor
    in the `JAVA_OPTS` section of `accumulo-env.sh`.
 
+The `accumulo-client.properties` file is used by the Accumulo shell and can be passed to
Accumulo
+clients to simplify connecting to Accumulo. Below are steps to configure it.
+
+1. Set {% plink -c instance.name %} and {% plink -c instance.zookeepers %} to the Accumulo
instance and zookeeper connection
+   string of your instance.
+
+2. Pick an authentication type and set {% plink -c auth.type %} accordingly.  The most common
`auth.type`
+   is `password`, which requires {% plink -c auth.principal %} to be set and {% plink -c auth.token
%} to be set to the password
+   of `auth.principal`. For the Accumulo shell, `auth.token` can be commented out and the
shell will
+   prompt you for the password of `auth.principal` at login.
+
 ## Initialization
 
 Accumulo needs to initialize the locations where it stores data in Zookeeper
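Taken together, the client-configuration steps above amount to a small properties file. A sketch of `conf/accumulo-client.properties`, where the instance name, ZooKeeper quorum, and credentials are placeholders to be replaced with your own values:

```
# Placeholder values; substitute your instance's details.
instance.name=myinstance
instance.zookeepers=localhost:2181
auth.type=password
auth.principal=root
# Comment out auth.token to have the shell prompt for the password at login.
auth.token=mypassword
```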
diff --git a/_docs-2-0/troubleshooting/basic.md b/_docs-2-0/troubleshooting/basic.md
index 131e288..06ef2ff 100644
--- a/_docs-2-0/troubleshooting/basic.md
+++ b/_docs-2-0/troubleshooting/basic.md
@@ -195,7 +195,7 @@ It is important to see the word `CONNECTED`!  If you only see
 `CONNECTING` you will need to diagnose zookeeper errors.
 
 Check to make sure that zookeeper is up, and that
-`accumulo-site.xml` has been pointed to
+`accumulo.properties` has been pointed to
 your zookeeper server(s).
 
 **Zookeeper is running, but it does not say CONNECTED**
diff --git a/_docs-2-0/troubleshooting/tools.md b/_docs-2-0/troubleshooting/tools.md
index 32ab247..5be6d9c 100644
--- a/_docs-2-0/troubleshooting/tools.md
+++ b/_docs-2-0/troubleshooting/tools.md
@@ -123,7 +123,7 @@ If you have entries in zookeeper for old instances that you no longer
need, remo
 
     $ accumulo org.apache.accumulo.server.util.CleanZookeeper
 
-This command will not delete the instance pointed to by the local `accumulo-site.xml` file.
+This command will not delete the instance pointed to by the local `accumulo.properties` file.
 
 ## DumpZookeeper & RestoreZookeeper
 


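Every doc change in this commit follows the same mechanical pattern: an old `accumulo-site.xml` block of the form `<property><name>k</name><value>v</value></property>` becomes a single `k=v` line in `accumulo.properties`. A minimal, hypothetical conversion sketch (not an official Accumulo tool; it only handles the flat `<property>` layout shown in the diffs above):

```python
import xml.etree.ElementTree as ET

def site_xml_to_properties(xml_text: str) -> str:
    """Convert accumulo-site.xml <property> blocks to key=value lines."""
    root = ET.fromstring(xml_text)
    lines = []
    for prop in root.iter("property"):
        name = prop.findtext("name", "").strip()
        value = prop.findtext("value", "").strip()
        desc = prop.findtext("description")
        if desc:  # keep the old <description> as a comment line
            lines.append(f"# {desc.strip()}")
        lines.append(f"{name}={value}")
    return "\n".join(lines)
```

Run against the `replication.name` block shown earlier in this diff, this would emit the two-line `# Unique name for this system used by replication` / `replication.name=primary` form that the updated docs use.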