accumulo-commits mailing list archives

From mwa...@apache.org
Subject [accumulo-website] branch master updated: ACCUMULO-4528 Add import/export table info to docs (#54)
Date Fri, 05 Jan 2018 16:54:30 GMT
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/master by this push:
     new 4a0806d  ACCUMULO-4528 Add import/export table info to docs (#54)
4a0806d is described below

commit 4a0806d1e9ebf33971cc10154c7f1a7bdabfbdbd
Author: Mark Owens <jmarkowe@gmail.com>
AuthorDate: Fri Jan 5 11:54:28 2018 -0500

    ACCUMULO-4528 Add import/export table info to docs (#54)
    
    Updated 1.8 and 2.0 user manual documentation to include the import/export example within the
    documentation directly rather than requiring a user to click away from the manual pages to a
    different project (i.e., the accumulo-examples project) for that information.
---
 1.8/accumulo_user_manual.html                    | 99 +++++++++++++++++++++++-
 _docs-2-0/getting-started/table_configuration.md | 90 +++++++++++++++++++--
 2 files changed, 180 insertions(+), 9 deletions(-)

diff --git a/1.8/accumulo_user_manual.html b/1.8/accumulo_user_manual.html
index 9f1b457..cbd7193 100644
--- a/1.8/accumulo_user_manual.html
+++ b/1.8/accumulo_user_manual.html
@@ -519,6 +519,7 @@ body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-b
 <li><a href="#_delete_range">6.9. Delete Range</a></li>
 <li><a href="#_cloning_tables">6.10. Cloning Tables</a></li>
 <li><a href="#_exporting_tables">6.11. Exporting Tables</a></li>
+<li><a href="#_export_example">6.11.1. Table Import/Export Example</a></li>
 </ul>
 </li>
 <li><a href="#_iterator_design">7. Iterator Design</a>
@@ -3000,10 +3001,100 @@ root@a14 cic&gt;</pre>
 <p>Accumulo supports exporting tables for the purpose of copying tables to another
 cluster. Exporting and importing tables preserves the table's configuration,
 splits, and logical time. Tables are exported and then copied via the Hadoop
-distcp command. To export a table, it must be offline and stay offline while
-discp runs. The reason it needs to stay offline is to prevent files from being
-deleted. A table can be cloned and the clone taken offline inorder to avoid
-losing access to the table. See <code>docs/examples/README.export</code> for an example.</p>
+<code>distcp</code> command. To export a table, it must be offline and stay offline while
+<code>distcp</code> runs. Staying offline prevents files from being deleted. An easy
+way to take a table offline without interrupting access to it is to clone it and take
+the clone offline.</p>
+</div>
+<div class="sect3">
+<h3 id="_export_example">6.11.1. Table Import/Export Example</h3>
+<div class="paragraph">
+<p>The following example demonstrates Accumulo's mechanism for exporting and
+importing tables.</p>
+</div>
+<div class="paragraph">
+<p>The shell session below illustrates creating a table, inserting data, and exporting
+the table.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>root@test15&gt; createtable table1
+    root@test15 table1&gt; insert a cf1 cq1 v1
+    root@test15 table1&gt; insert h cf1 cq1 v2
+    root@test15 table1&gt; insert z cf1 cq1 v3
+    root@test15 table1&gt; insert z cf1 cq2 v4
+    root@test15 table1&gt; addsplits -t table1 b r
+    root@test15 table1&gt; scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1&gt; config -t table1 -s table.split.threshold=100M
+    root@test15 table1&gt; clonetable table1 table1_exp
+    root@test15 table1&gt; offline table1_exp
+    root@test15 table1&gt; exporttable -t table1_exp /tmp/table1_export
+    root@test15 table1&gt; quit</pre>
+</div>
+</div>
+<div class="paragraph">
+<p>After executing the export command, a few files are created in the HDFS export
+directory. One of the files is a list of files to <code>distcp</code>, as shown below.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>$ hadoop fs -ls /tmp/table1_export
+    Found 2 items
+    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+    $ hadoop fs -cat /tmp/table1_export/distcp.txt
+    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip</pre>
+</div>
+</div>
+<div class="paragraph">
+<p>Before the table can be imported, it must be copied using <code>distcp</code>. After the
+<code>distcp</code> completes, the cloned table may be deleted.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest</pre>
+</div>
+</div>
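+<div class="paragraph">
+<p>Once the copy has finished, the offline clone is no longer needed. A possible cleanup
+step, using the table name from this example (<code>-f</code> skips the confirmation
+prompt):</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>root@test15&gt; deletetable -f table1_exp</pre>
+</div>
+</div>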
+<div class="paragraph">
+<p>The Accumulo shell session below shows importing the table and inspecting it.
+The data, splits, config, and logical time information for the table were
+preserved.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>root@test15&gt; importtable table1_copy /tmp/table1_export_dest
+    root@test15&gt; table table1_copy
+    root@test15 table1_copy&gt; scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1_copy&gt; getsplits -t table1_copy
+    b
+    r
+    root@test15 table1_copy&gt; config -t table1_copy -f split
+    ---------+--------------------------+-------------------------------------------
+    SCOPE    | NAME                     | VALUE
+    ---------+--------------------------+-------------------------------------------
+    default  | table.split.threshold .. | 1G
+    table    |    @override ........... | 100M
+    ---------+--------------------------+-------------------------------------------
+    root@test15 table1_copy&gt; tables -l
+    accumulo.metadata    =>        !0
+    accumulo.root        =>        +r
+    table1_copy          =>         5
+    trace                =>         1
+    root@test15 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
+    5;b srv:time []    M1343224500467
+    5;r srv:time []    M1343224500467
+    5< srv:time []    M1343224500467</pre>
+</div>
+</div>
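+<div class="paragraph">
+<p>Note that the original <code>table1</code> was never taken offline during this process;
+only its clone was. The imported copy comes online as part of <code>importtable</code>, so
+it accepts new reads and writes immediately. An illustrative check (output omitted):</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>root@test15&gt; table table1_copy
+    root@test15 table1_copy&gt; insert b cf1 cq1 v5
+    root@test15 table1_copy&gt; scan</pre>
+</div>
+</div>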
 </div>
 </div>
 </div>
diff --git a/_docs-2-0/getting-started/table_configuration.md b/_docs-2-0/getting-started/table_configuration.md
index 6b9fc66..488a58e 100644
--- a/_docs-2-0/getting-started/table_configuration.md
+++ b/_docs-2-0/getting-started/table_configuration.md
@@ -619,11 +619,91 @@ root@a14 cic>
 Accumulo supports exporting tables for the purpose of copying tables to another
 cluster. Exporting and importing tables preserves the table's configuration,
 splits, and logical time. Tables are exported and then copied via the Hadoop
-distcp command. To export a table, it must be offline and stay offline while
-discp runs. The reason it needs to stay offline is to prevent files from being
-deleted. A table can be cloned and the clone taken offline inorder to avoid
-losing access to the table. See the [export example](https://github.com/apache/accumulo-examples/blob/master/docs/export.md)
-for example code.
+`distcp` command. To export a table, it must be offline and stay offline while
+`distcp` runs. Staying offline prevents files from being deleted during the process.
+An easy way to take a table offline without interrupting access is to clone it
+and take the clone offline.
+
+### Table Import/Export Example
+
+The following example demonstrates Accumulo's mechanism for exporting and
+importing tables.
+
+The shell session below illustrates creating a table, inserting data, and
+exporting the table.
+
+```
+    root@test15> createtable table1
+    root@test15 table1> insert a cf1 cq1 v1
+    root@test15 table1> insert h cf1 cq1 v2
+    root@test15 table1> insert z cf1 cq1 v3
+    root@test15 table1> insert z cf1 cq2 v4
+    root@test15 table1> addsplits -t table1 b r
+    root@test15 table1> scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1> config -t table1 -s table.split.threshold=100M
+    root@test15 table1> clonetable table1 table1_exp
+    root@test15 table1> offline table1_exp
+    root@test15 table1> exporttable -t table1_exp /tmp/table1_export
+    root@test15 table1> quit
+```
+
+After executing the export command, a few files are created in the HDFS export directory.
+One of the files is a list of files to `distcp`, as shown below.
+
+```
+    $ hadoop fs -ls /tmp/table1_export
+    Found 2 items
+    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+    $ hadoop fs -cat /tmp/table1_export/distcp.txt
+    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
+```
+
+Before the table can be imported, it must be copied using `distcp`. After the
+`distcp` completes, the cloned table may be deleted.
+
+```
+    $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+```
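+
+Once the copy has finished, the offline clone on the source cluster is no longer
+needed. A possible cleanup step, using the table name from this example (`-f`
+skips the confirmation prompt):
+
+```
+    root@test15> deletetable -f table1_exp
+```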
+
+The Accumulo shell session below shows importing the table and inspecting it.
+The data, splits, config, and logical time information for the table were
+preserved.
+
+```
+    root@test15> importtable table1_copy /tmp/table1_export_dest
+    root@test15> table table1_copy
+    root@test15 table1_copy> scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1_copy> getsplits -t table1_copy
+    b
+    r
+    root@test15 table1_copy> config -t table1_copy -f split
+    ---------+--------------------------+-------------------------------------------
+    SCOPE    | NAME                     | VALUE
+    ---------+--------------------------+-------------------------------------------
+    default  | table.split.threshold .. | 1G
+    table    |    @override ........... | 100M
+    ---------+--------------------------+-------------------------------------------
+    root@test15 table1_copy> tables -l
+    accumulo.metadata    =>        !0
+    accumulo.root        =>        +r
+    table1_copy          =>         5
+    trace                =>         1
+    root@test15 table1_copy> scan -t accumulo.metadata -b 5 -c srv:time
+    5;b srv:time []    M1343224500467
+    5;r srv:time []    M1343224500467
+    5< srv:time []    M1343224500467
+```
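+
+Note that the original `table1` was never taken offline during this process; only
+its clone was. The imported copy comes online as part of `importtable`, so it
+accepts new reads and writes immediately. An illustrative check (output omitted):
+
+```
+    root@test15> table table1_copy
+    root@test15 table1_copy> insert b cf1 cq1 v5
+    root@test15 table1_copy> scan
+```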
 
 [bloom-filter-example]: https://github.com/apache/accumulo-examples/blob/master/docs/bloom.md
 [constraint]: {{ page.javadoc_core }}/org/apache/accumulo/core/constraints/Constraint.html
