accumulo-commits mailing list archives

From mwa...@apache.org
Subject [accumulo] branch 1.8 updated: ACCUMULO-4528 Add import/export table info to docs (#350)
Date Tue, 02 Jan 2018 19:00:55 GMT
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch 1.8
in repository https://gitbox.apache.org/repos/asf/accumulo.git


The following commit(s) were added to refs/heads/1.8 by this push:
     new 7fc61d4  ACCUMULO-4528 Add import/export table info to docs (#350)
7fc61d4 is described below

commit 7fc61d438986232ef8c26c8260fd7e0610a1b22c
Author: Mark Owens <jmarkowe@gmail.com>
AuthorDate: Tue Jan 2 14:00:53 2018 -0500

    ACCUMULO-4528 Add import/export table info to docs (#350)
    
    Updated 1.8 and 2.0 user manual documentation to include the import/export example within the
    documentation directly rather than requiring a user to click away from the manual pages to a
    different project (i.e., the accumulo-examples project) for that information.
---
 .../main/asciidoc/chapters/table_configuration.txt | 89 +++++++++++++++++++++-
 1 file changed, 85 insertions(+), 4 deletions(-)

diff --git a/docs/src/main/asciidoc/chapters/table_configuration.txt b/docs/src/main/asciidoc/chapters/table_configuration.txt
index 5c62ccf..ca1eb88 100644
--- a/docs/src/main/asciidoc/chapters/table_configuration.txt
+++ b/docs/src/main/asciidoc/chapters/table_configuration.txt
@@ -637,7 +637,88 @@ root@a14 cic>
 Accumulo supports exporting tables for the purpose of copying tables to another
 cluster. Exporting and importing tables preserves the tables configuration,
 splits, and logical time. Tables are exported and then copied via the hadoop
-distcp command. To export a table, it must be offline and stay offline while
-discp runs. The reason it needs to stay offline is to prevent files from being
-deleted. A table can be cloned and the clone taken offline inorder to avoid
-losing access to the table. See +docs/examples/README.export+ for an example.
+`distcp` command. To export a table, it must be offline and stay offline while
+`distcp` runs. Staying offline prevents files from being deleted during the process.
+An easy way to take a table offline without interrupting access is to clone it
+and take the clone offline.
+
+==== Table Import/Export Example
+
+The following example demonstrates Accumulo's mechanism for exporting and
+importing tables.
+
+The shell session below illustrates creating a table, inserting data, and
+exporting the table.
+
+
+----
+    root@test15> createtable table1
+    root@test15 table1> insert a cf1 cq1 v1
+    root@test15 table1> insert h cf1 cq1 v2
+    root@test15 table1> insert z cf1 cq1 v3
+    root@test15 table1> insert z cf1 cq2 v4
+    root@test15 table1> addsplits -t table1 b r
+    root@test15 table1> scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15> config -t table1 -s table.split.threshold=100M
+    root@test15 table1> clonetable table1 table1_exp
+    root@test15 table1> offline table1_exp
+    root@test15 table1> exporttable -t table1_exp /tmp/table1_export
+    root@test15 table1> quit
+----
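
The same export steps can also be driven from the Java client API rather than the
shell. The block below is a minimal sketch, assuming an already-authenticated
`Connector` named `conn`; the class and method names are illustrative only.

----
import java.util.Collections;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.admin.TableOperations;

public class ExportSketch {
  // 'conn' is assumed to be an existing, authenticated Connector
  static void exportClone(Connector conn) throws Exception {
    TableOperations ops = conn.tableOperations();

    // clone the table so the original can stay online while the copy is exported
    ops.clone("table1", "table1_exp", true /* flush */,
        Collections.<String,String> emptyMap(), Collections.<String> emptySet());

    // the clone must be offline, and stay offline, while it is exported and copied
    ops.offline("table1_exp", true /* wait */);

    // writes exportMetadata.zip and distcp.txt into the export directory
    ops.exportTable("table1_exp", "/tmp/table1_export");
  }
}
----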
+
+After executing the export command, a few files are created in the HDFS export directory.
+One of them, `distcp.txt`, lists the files that need to be copied with `distcp`, as shown below.
+
+----
+    $ hadoop fs -ls /tmp/table1_export
+    Found 2 items
+    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+    $ hadoop fs -cat /tmp/table1_export/distcp.txt
+    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
+----
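
If the file list needs to be inspected programmatically rather than with
`hadoop fs -cat`, it can be read with the standard Hadoop `FileSystem` API.
A small sketch, using the paths from the example above:

----
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class PrintDistcpList {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());
    // distcp.txt lists one source file per line, including exportMetadata.zip itself
    Path list = new Path("/tmp/table1_export/distcp.txt");
    try (BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(list)))) {
      String line;
      while ((line = in.readLine()) != null) {
        System.out.println(line);
      }
    }
  }
}
----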
+
+Before the table can be imported, it must be copied using `distcp`. After the
+`distcp` completes, the cloned table may be deleted.
+
+----
+    $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+----
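
Once `distcp` has finished, the offline clone can be dropped, either with
`deletetable table1_exp` in the shell or through the API; a one-line sketch under
the same `Connector` assumption as above:

----
// the offline clone is no longer needed once its files have been copied
conn.tableOperations().delete("table1_exp");
----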
+
+The Accumulo shell session below shows importing the table and inspecting it.
+The data, splits, config, and logical time information for the table were
+preserved.
+
+----
+    root@test15> importtable table1_copy /tmp/table1_export_dest
+    root@test15> table table1_copy
+    root@test15 table1_copy> scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1_copy> getsplits -t table1_copy
+    b
+    r
+    root@test15> config -t table1_copy -f split
+    ---------+--------------------------+-------------------------------------------
+    SCOPE    | NAME                     | VALUE
+    ---------+--------------------------+-------------------------------------------
+    default  | table.split.threshold .. | 1G
+    table    |    @override ........... | 100M
+    ---------+--------------------------+-------------------------------------------
+    root@test15> tables -l
+    accumulo.metadata    =>        !0
+    accumulo.root        =>        +r
+    table1_copy          =>         5
+    trace                =>         1
+    root@test15 table1_copy> scan -t accumulo.metadata -b 5 -c srv:time
+    5;b srv:time []    M1343224500467
+    5;r srv:time []    M1343224500467
+    5< srv:time []    M1343224500467
+----
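
The import side can likewise be scripted against the Java client API. A minimal
sketch, again assuming an authenticated `Connector` named `conn` and the
illustrative destination path from above:

----
import java.util.Map.Entry;

import org.apache.accumulo.core.client.Connector;
import org.apache.accumulo.core.client.Scanner;
import org.apache.accumulo.core.data.Key;
import org.apache.accumulo.core.data.Value;
import org.apache.accumulo.core.security.Authorizations;

public class ImportSketch {
  static void importAndVerify(Connector conn) throws Exception {
    // create table1_copy from the files placed in the destination directory by distcp
    conn.tableOperations().importTable("table1_copy", "/tmp/table1_export_dest");

    // the splits (b and r in the example) are restored along with the data
    System.out.println(conn.tableOperations().listSplits("table1_copy"));

    // scan the imported table to confirm the entries came across
    Scanner scanner = conn.createScanner("table1_copy", Authorizations.EMPTY);
    for (Entry<Key,Value> e : scanner) {
      System.out.println(e.getKey() + " -> " + e.getValue());
    }
  }
}
----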

-- 
To stop receiving notification emails like this one, please contact
['"commits@accumulo.apache.org" <commits@accumulo.apache.org>'].
