User Manual: Accumulo Design


Accumulo Design


Data Model


Accumulo provides a richer data model than simple key-value stores, but it is not a fully relational database. Data is represented as key-value pairs, where the key and value consist of the following elements:


+--------+---------------+------------------+-------------------+-----------+-------+
|                                    Key                                    | Value |
+--------+---------------+------------------+-------------------+-----------+       |
| Row ID | Column Family | Column Qualifier | Column Visibility | Timestamp |       |
+--------+---------------+------------------+-------------------+-----------+-------+


All elements of the Key and the Value are represented as byte arrays except for Timestamp, which is a Long. Accumulo sorts keys element by element, lexicographically in ascending order. Timestamps are sorted in descending order, so that later versions of the same Key appear first in a sequential scan. Tables consist of a set of sorted key-value pairs.
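The descending timestamp order can be seen directly with the Key class from the client API. The following is a minimal sketch, not part of the original manual (the five-element constructor and compareTo are from the Accumulo core client library):

import org.apache.accumulo.core.data.Key;
import org.apache.hadoop.io.Text;

public class KeyOrderSketch {
    public static void main(String[] args) {
        // Same row and column, two different timestamps.
        Key newer = new Key(new Text("row1"), new Text("colf"),
                new Text("colq"), new Text(""), 20L);
        Key older = new Key(new Text("row1"), new Text("colf"),
                new Text("colq"), new Text(""), 10L);

        // Rows and columns sort ascending, but timestamps sort descending,
        // so the newer version compares as smaller and is returned first.
        System.out.println(newer.compareTo(older) < 0); // prints: true
    }
}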


Architecture


Accumulo is a distributed data storage and retrieval system and as such consists of several architectural components, some of which run on many individual servers. Much of the work Accumulo does involves maintaining certain properties of the data, such as organization, availability, and integrity, across many commodity-class machines.


Components


An instance of Accumulo includes many TabletServers, write-ahead Logger servers, one Garbage Collector process, one Master server and many Clients.


Tablet Server


The TabletServer manages some subset of all the tablets (partitions of tables). This includes receiving writes from clients, persisting writes to a write-ahead log, sorting new key-value pairs in memory, periodically flushing sorted key-value pairs to new files in HDFS, and responding to reads from clients, forming a merge-sorted view of all keys and values from all the files it has created and the sorted in-memory store.


TabletServers also perform recovery of a tablet that was previously on a server that failed, reapplying any writes found in the write-ahead log to the tablet.


Loggers


The Loggers accept updates sent to TabletServers and write them to local on-disk storage. Each TabletServer writes its updates to multiple Loggers to preserve the data in case of hardware failure.


Garbage Collector


Accumulo processes will share files stored in HDFS. Periodically, the Garbage Collector will identify files that are no longer needed by any process, and delete them.


Master


The Accumulo Master is responsible for detecting and responding to TabletServer failure. It tries to balance the load across TabletServers by assigning tablets carefully and instructing TabletServers to migrate tablets when necessary. The Master ensures all tablets are assigned to one TabletServer each, and handles table creation, alteration, and deletion requests from clients. The Master also coordinates startup, graceful shutdown, and the recovery of changes in write-ahead logs when TabletServers fail.


Client


Accumulo includes a client library that is linked to every application. The client library contains logic for finding servers managing a particular tablet, and communicating with TabletServers to write and retrieve key-value pairs.
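The client API is covered in depth in a later chapter, but a minimal sketch of the 1.3-era API illustrates both roles: locating servers through ZooKeeper, then writing and retrieving key-value pairs. The instance name, ZooKeeper servers, and credentials below are placeholders:

import org.apache.accumulo.core.client.BatchWriter;
import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.client.ZooKeeperInstance;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Mutation;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;
import org.apache.hadoop.io.Text;

import java.util.Map.Entry;

public class ClientSketch {
    public static void main(String[] args) throws Exception {
        // The client library locates TabletServers through ZooKeeper.
        ZooKeeperInstance instance =
                new ZooKeeperInstance("myinstance", "zooserver-one,zooserver-two");
        Connector conn = instance.getConnector("user", "passwd".getBytes());

        // Writes are buffered and sent to the TabletServers hosting each tablet.
        BatchWriter writer = conn.createBatchWriter("mytable",
                1000000L /* buffer bytes */, 60000L /* max latency ms */, 2 /* threads */);
        Mutation m = new Mutation(new Text("row1"));
        m.put(new Text("colf"), new Text("colq"), new Value("value1".getBytes()));
        writer.addMutation(m);
        writer.close();

        // Reads stream sorted key-value pairs back from the TabletServers.
        Scanner scanner = conn.createScanner("mytable", new Authorizations());
        for (Entry<Key,Value> entry : scanner) {
            System.out.println(entry.getKey() + " -> " + entry.getValue());
        }
    }
}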


Data Management


Accumulo stores data in tables, which are partitioned into tablets. Tablets are partitioned on row boundaries so that all of the columns and values for a particular row are found together within the same tablet. The Master assigns Tablets to one TabletServer at a time. This enables row-level transactions to take place without using distributed locking or some other complicated synchronization mechanism. As clients insert and query data, and as machines are added and removed from the cluster, the Master migrates tablets to ensure they remain available and that the ingest and query load is balanced across the cluster.


[Image: data_distribution]


Tablet Service


When a write arrives at a TabletServer it is written to a Write-Ahead Log and then inserted into a sorted data structure in memory called a MemTable. When the MemTable reaches a certain size, the TabletServer writes out the sorted key-value pairs to a file in HDFS called an Indexed Sequential Access Method (ISAM) file. This process is called a minor compaction. A new MemTable is then created, and the fact of the compaction is recorded in the Write-Ahead Log.


When a request to read data arrives at a TabletServer, the TabletServer does a binary search across the MemTable as well as the in-memory indexes associated with each ISAM file to find the relevant values. If clients are performing a scan, several key-value pairs are returned to the client in order from the MemTable and the set of ISAM files by performing a merge-sort as they are read.


Compactions


To manage the number of files per tablet, the TabletServer periodically performs Major Compactions of files within a tablet, in which some set of ISAM files is combined into a single file. The previous files will eventually be removed by the Garbage Collector. This also provides an opportunity to permanently remove deleted key-value pairs, by omitting key-value pairs suppressed by a delete entry when the new file is created.


Fault-Tolerance


If a TabletServer fails, the Master detects it and automatically reassigns the tablets from the failed server to other servers. Any key-value pairs that were in memory at the time the TabletServer failed are automatically reapplied from the Write-Ahead Log to prevent any loss of data.


The Master will coordinate the copying of write-ahead logs to HDFS so the logs are available to all TabletServers. To make recovery efficient, the updates within a log are grouped by tablet. The sorting process can be performed by Hadoop's MapReduce or by the Logger server. TabletServers can quickly apply the mutations from the sorted logs that are destined for the tablets they have now been assigned.


TabletServer failures are noted on the Master's monitor page, accessible via http://master-address:50095/monitor.


[Image: failure_handling]

User Manual: Accumulo Shell


Accumulo Shell


Accumulo provides a simple shell that can be used to examine the contents and configuration settings of tables, apply individual mutations, and change configuration settings.


The shell can be started by the following command:

$ACCUMULO_HOME/bin/accumulo shell -u [username]

The shell will prompt for the password corresponding to the specified username, and then display the following prompt:

Shell - Apache Accumulo Interactive Shell
-
- version 1.3
- instance name: myinstance
- instance id: 00000000-0000-0000-0000-000000000000
-
- type 'help' for a list of available commands
-

Basic Administration


The Accumulo shell can be used to create and delete tables, as well as to configure table and instance specific options.

root@myinstance> tables
!METADATA

root@myinstance> createtable mytable

root@myinstance mytable>

root@myinstance mytable> tables
!METADATA
mytable

root@myinstance mytable> createtable testtable

root@myinstance testtable>

root@myinstance testtable> deletetable testtable

root@myinstance>

The Shell can also be used to insert updates and scan tables. This is useful for inspecting tables.

root@myinstance mytable> scan

root@myinstance mytable> insert row1 colf colq value1
insert successful

root@myinstance mytable> scan
row1 colf:colq [] value1

Table Maintenance


The compact command instructs Accumulo to schedule a compaction of the table during which files are consolidated and deleted entries are removed.

root@myinstance mytable> compact -t mytable
07 16:13:53,201 [shell.Shell] INFO : Compaction of table mytable
scheduled for 20100707161353EDT

The flush command instructs Accumulo to write all entries currently in memory for a given table to disk.

root@myinstance mytable> flush -t mytable
07 16:14:19,351 [shell.Shell] INFO : Flush of table mytable
initiated...

User Administration


The Shell can be used to add and remove users, as well as to grant privileges to them.

root@myinstance mytable> createuser bob
Enter new password for 'bob': *********
Please confirm new password for 'bob': *********

root@myinstance mytable> authenticate bob
Enter current password for 'bob': *********
Valid

root@myinstance mytable> grant System.CREATE_TABLE -s -u bob

root@myinstance mytable> user bob
Enter current password for 'bob': *********

bob@myinstance mytable> userpermissions
System permissions: System.CREATE_TABLE
Table permissions (!METADATA): Table.READ
Table permissions (mytable): NONE

bob@myinstance mytable> createtable bobstable
bob@myinstance bobstable>

bob@myinstance bobstable> user root
Enter current password for 'root': *********

root@myinstance bobstable> revoke System.CREATE_TABLE -s -u bob
User Manual: Administration


Administration


Hardware


Because we are essentially running two or three systems simultaneously layered across the cluster (HDFS, Accumulo, and MapReduce), it is typical for hardware to consist of 4 to 8 cores and 8 to 32 GB of RAM, so that each running process can have at least one core and 2 to 4 GB of memory.


One core running HDFS can typically keep 2 to 4 disks busy, so each machine may have as few as two 300 GB disks or as many as four 1 TB or 2 TB disks.


It is possible to get by with less, such as 1U servers with 2 cores and 4 GB of RAM each, but in that case it is recommended to run at most two processes per machine (i.e. a DataNode and a TabletServer, or a DataNode and a MapReduce worker, but not all three). The constraint here is having enough available heap space for all the processes on a machine.


Network


Accumulo communicates via remote procedure calls over TCP/IP for both passing data and control messages. In addition, Accumulo uses HDFS clients to communicate with HDFS. To achieve good ingest and query performance, sufficient network bandwidth must be available between any two machines.


Installation


Choose a directory for the Accumulo installation. This directory will be referenced by the environment variable $ACCUMULO_HOME. Run the following:

$ tar xzf $ACCUMULO_HOME/accumulo.tar.gz

Repeat this step at each machine within the cluster. Usually all machines have the same $ACCUMULO_HOME.


Dependencies


Accumulo requires HDFS and ZooKeeper to be configured and running before starting. Password-less SSH should be configured between at least the Accumulo master and TabletServer machines. It is also a good idea to run Network Time Protocol (NTP) within the cluster to ensure nodes’ clocks don’t get too out of sync, which can cause problems with automatically timestamped data. Accumulo will remove from the set of TabletServers those machines whose times differ too much from the master’s.


Configuration


Accumulo is configured by editing several Shell and XML files found in $ACCUMULO_HOME/conf. The structure closely resembles Hadoop’s configuration files.


Edit conf/accumulo-env.sh


Accumulo needs to know where to find the software it depends on. Edit accumulo-env.sh and specify the following (an illustrative sketch of the resulting settings appears after the list):

1. Enter the location of the installation directory of Accumulo for $ACCUMULO_HOME
2. Enter your system's Java home for $JAVA_HOME
3. Enter the location of Hadoop for $HADOOP_HOME
4. Choose a location for Accumulo logs and enter it for $ACCUMULO_LOG_DIR
5. Enter the location of ZooKeeper for $ZOOKEEPER_HOME
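For example, the resulting lines might look like this sketch (all paths here are illustrative placeholders, not defaults shipped with the release):

# Illustrative values only; adjust the paths for your systems.
export ACCUMULO_HOME=/opt/accumulo
export JAVA_HOME=/usr/lib/jvm/java
export HADOOP_HOME=/opt/hadoop
export ACCUMULO_LOG_DIR=$ACCUMULO_HOME/logs
export ZOOKEEPER_HOME=/opt/zookeeper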

By default Accumulo TabletServers are set to use 1GB of memory. You may change this by altering the value of $ACCUMULO_TSERVER_OPTS. Note the syntax is that of the Java JVM command line options. This value should be less than the physical memory of the machines running TabletServers.


There are similar options for the master’s memory usage and the garbage collector process. Reduce these if they exceed the physical RAM of your hardware and increase them, within the bounds of the physical RAM, if a process fails because of insufficient memory.


Note that you will be specifying the Java heap space in accumulo-env.sh. You should make sure that the total heap space used for the Accumulo tserver and the Hadoop DataNode and TaskTracker is less than the available memory on each slave node in the cluster. On large clusters, it is recommended that the Accumulo master, Hadoop NameNode, secondary NameNode, and Hadoop JobTracker all be run on separate machines to allow them to use more heap space. If you are running these on the same machine on a small cluster, likewise make sure their heap space settings fit within the available memory.


Cluster Specification


On the machine that will serve as the Accumulo master:

1. Write the IP address or domain name of the Accumulo Master to the $ACCUMULO_HOME/conf/masters file.
2. Write the IP addresses or domain names of the machines that will be TabletServers in $ACCUMULO_HOME/conf/slaves, one per line.

Note that if using domain names rather than IP addresses, DNS must be configured properly for all machines participating in the cluster. DNS can be a confusing source of errors.


Accumulo Settings


Specify appropriate values for the following settings in $ACCUMULO_HOME/conf/accumulo-site.xml:

<property>
    <name>zookeeper</name>
    <value>zooserver-one:2181,zooserver-two:2181</value>
    <description>list of zookeeper servers</description>
</property>
<property>
    <name>walog</name>
    <value>/var/accumulo/walogs</value>
    <description>local directory for write ahead logs</description>
</property>

This enables Accumulo to find ZooKeeper. Accumulo uses ZooKeeper to coordinate settings between processes, and ZooKeeper helps finalize TabletServer failure.


Accumulo records all changes to tables in a write-ahead log before committing them to the table. The 'walog' setting specifies the local directory on each machine to which write-ahead logs are written. This directory should exist on all machines acting as TabletServers.


Some settings can be modified via the Accumulo shell and take effect immediately. However, any settings that should be persisted across system restarts must be recorded in the accumulo-site.xml file.
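For example, a table-scoped setting can be changed from a shell session with the config command; a sketch (the property name and value here are illustrative):

root@myinstance> config -t mytable -s table.scan.max.memory=512K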


Deploy Configuration


Copy the masters, slaves, accumulo-env.sh, and if necessary, accumulo-site.xml from the $ACCUMULO_HOME/conf/ directory on the master to all the machines specified in the slaves file.


Initialization


Accumulo must be initialized to create the structures it uses internally to locate data across the cluster. HDFS is required to be configured and running before Accumulo can be initialized.


Once HDFS is started, initialization can be performed by executing $ACCUMULO_HOME/bin/accumulo init. This script will prompt for a name for this instance of Accumulo. The instance name is used to identify a set of tables and instance-specific settings. The script will then write some information into HDFS so Accumulo can start properly.


The initialization script will prompt you to set a root password. Once Accumulo is initialized it can be started.
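The interaction looks roughly like the following sketch (prompt wording paraphrased rather than captured from a real run):

$ $ACCUMULO_HOME/bin/accumulo init
Instance name : myinstance
Enter initial password for root: ******
Confirm initial password for root: ******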


Running


Starting Accumulo


Make sure Hadoop is configured on all of the machines in the cluster, including access to a shared HDFS instance, and that HDFS is running. Make sure ZooKeeper is configured and running on at least one machine in the cluster. Start Accumulo using the bin/start-all.sh script.


To verify that Accumulo is running, check the Status page as described under Monitoring. In addition, the Shell can provide some information about the status of tables by reading the !METADATA table.
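For example, a sketch of inspecting the !METADATA table from the shell (output omitted here):

root@myinstance> table !METADATA
root@myinstance !METADATA> scan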


Stopping Accumulo


To shut down cleanly, run bin/stop-all.sh and the Master will orchestrate the shutdown of all the TabletServers. Shutdown waits for all minor compactions to finish, so it may take some time for particular configurations.


Monitoring


The Accumulo Master provides an interface for monitoring the status and health of Accumulo components. This interface can be accessed by pointing a web browser to http://accumulomaster:50095/status


Logging


Each Accumulo process writes to a set of log files. By default these are found under $ACCUMULO_HOME/logs/.


Recovery


In the event of TabletServer failure, or of an error while shutting Accumulo down, some mutations may not have been minor compacted to HDFS properly. In this case, Accumulo will automatically reapply such mutations from the write-ahead log, either when the tablets from the failed server are reassigned by the Master (in the case of a single TabletServer failure) or the next time Accumulo starts (in the event of a failure during shutdown).


Recovery is performed by asking the Loggers to copy their write-ahead logs into HDFS. As the logs are copied, they are also sorted, so that tablets can easily find their missing updates. The copy/sort status of each file is displayed on the Accumulo monitor status page. Once recovery is complete, any tablets involved should return to an "online" state; until then, those tablets will be unavailable to clients.


The Accumulo client library is configured to retry failed mutations, and in many cases clients will be able to continue processing after the recovery process without throwing an exception.


Note that because Accumulo uses timestamps to order mutations, any mutations that are applied as part of the recovery process should appear to have been applied when they originally arrived at the TabletServer that failed. This makes the ordering of mutations consistent in the presence of failure.

User Manual: Analytics


Analytics


Accumulo supports more advanced data processing than simply keeping keys sorted and performing efficient lookups. Analytics can be developed by using MapReduce and Iterators in conjunction with Accumulo tables.


MapReduce


Accumulo tables can be used as the source and destination of MapReduce jobs. To use an Accumulo table with a MapReduce job (specifically with the new Hadoop API as of version 0.20), configure the job parameters to use the AccumuloInputFormat and AccumuloOutputFormat. Accumulo specific parameters can be set via these two format classes to do the following:

• Authenticate and provide user credentials for the input
• Restrict the scan to a range of rows
• Restrict the input to a subset of available columns

Mapper and Reducer classes


To read from an Accumulo table, create a Mapper with the following class parameterization and be sure to configure the AccumuloInputFormat.

class MyMapper extends Mapper<Key,Value,WritableComparable,Writable> {
    public void map(Key k, Value v, Context c) {
        // transform key and value data here
    }
}

To write to an Accumulo table, create a Reducer with the following class parameterization and be sure to configure the AccumuloOutputFormat. The key emitted from the Reducer identifies the table to which the mutation is sent. This allows a single Reducer to write to more than one table if desired. A default table can be configured using the AccumuloOutputFormat, in which case the output table name does not have to be passed to the Context object within the Reducer.

class MyReducer extends Reducer<WritableComparable, Writable, Text, Mutation> {

    public void reduce(WritableComparable key, Iterable<Writable> values, Context c)
            throws IOException, InterruptedException {

        // create the mutation based on the input key and values
        Mutation m = new Mutation(new Text(key.toString()));

        c.write(new Text("output-table"), m);
    }
}

The Text object passed as the output should contain the name of the table to which this mutation should be applied. The Text can be null in which case the mutation will be applied to the default table name specified in the AccumuloOutputFormat options.
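Both cases can be sketched as follows (assuming the Reducer's Context and Mutation from above):

// write to an explicitly named table
c.write(new Text("output-table"), m);
// or pass null to use the default table configured on AccumuloOutputFormat
c.write(null, m);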


AccumuloInputFormat options

Job job = new Job(getConf());
AccumuloInputFormat.setInputInfo(job,
        "user",
        "passwd".getBytes(),
        "table",
        new Authorizations());

AccumuloInputFormat.setZooKeeperInstance(job, "myinstance",
        "zooserver-one,zooserver-two");

Optional settings:


To restrict Accumulo to a set of row ranges:

ArrayList<Range> ranges = new ArrayList<Range>();
// populate array list of row ranges ...
AccumuloInputFormat.setRanges(job, ranges);

To restrict Accumulo to a list of columns:

ArrayList<Pair<Text,Text>> columns = new ArrayList<Pair<Text,Text>>();
// populate list of columns
AccumuloInputFormat.fetchColumns(job, columns);

To use a regular expression to match row IDs:

AccumuloInputFormat.setRegex(job, RegexType.ROW, "^.*");

AccumuloOutputFormat options

boolean createTables = true;
String defaultTable = "mytable";

AccumuloOutputFormat.setOutputInfo(job,
        "user",
        "passwd".getBytes(),
        createTables,
        defaultTable);

AccumuloOutputFormat.setZooKeeperInstance(job, "myinstance",
        "zooserver-one,zooserver-two");

Optional settings:

AccumuloOutputFormat.setMaxLatency(job, 300); // milliseconds
AccumuloOutputFormat.setMaxMutationBufferSize(job, 5000000); // bytes

An example of using MapReduce with Accumulo can be found at accumulo/docs/examples/README.mapred


Aggregating Iterators


Many applications can benefit from the ability to aggregate values across common keys. This can be done via aggregating iterators and is similar to the Reduce step in MapReduce. This provides the ability to define online, incrementally updated analytics without the overhead or latency associated with batch-oriented MapReduce jobs.


All that is needed to aggregate values of a table is to identify the fields over which values will be grouped, insert mutations with those fields as the key, and configure the table with an aggregating iterator that supports the summarization operation desired.
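For example, a sketch of wiring up an aggregator at table-creation time from the shell (the -a option and the StringSummation aggregator class are from the 1.3-era API; values inserted under the day family are summed per key):

root@myinstance> createtable perDayCounts -a day=org.apache.accumulo.core.iterators.aggregation.StringSummation

root@myinstance perDayCounts> insert foo day 20080101 1
root@myinstance perDayCounts> insert foo day 20080101 1
root@myinstance perDayCounts> insert bar day 20080101 1

root@myinstance perDayCounts> scan
bar day:20080101 []    1
foo day:20080101 []    2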


The only restriction on an aggregating iterator is that the aggregator developer should not assume that all values for a given key have been seen, since new mutations can be inserted at any time. This precludes, for example, relying on the total number of values seen in the aggregation, as when calculating an average.


Feature Vectors


An interesting use of aggregating iterators within an Accumulo table is to store feature vectors for use in machine learning algorithms. For example, many algorithms such as k-means clustering, support vector machines, anomaly detection, etc. use the concept of a feature vector and the calculation of distance metrics to learn a particular model. The columns in an Accumulo table can be used to efficiently store sparse features and their weights to be incrementally updated via the use of an aggregating iterator.


Statistical Modeling


Statistical models that need to be updated by many machines in parallel could be similarly stored within an Accumulo table. For example, a MapReduce job that is iteratively updating a global statistical model could have each map or reduce worker reference the parts of the model to be read and updated through an embedded Accumulo client.


Using Accumulo this way enables efficient and fast lookups and updates of small pieces of information in a random access pattern, which is complementary to MapReduce’s sequential access model.
