accumulo-commits mailing list archives

From mwa...@apache.org
Subject [accumulo-website] branch asf-site updated: Jekyll build from master:4a0806d
Date Fri, 05 Jan 2018 18:01:17 GMT
This is an automated email from the ASF dual-hosted git repository.

mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new d96d9e8  Jekyll build from master:4a0806d
d96d9e8 is described below

commit d96d9e81660b975c8fa89b2677074a315066939f
Author: Mike Walch <mwalch@apache.org>
AuthorDate: Fri Jan 5 13:00:21 2018 -0500

    Jekyll build from master:4a0806d
    
    ACCUMULO-4528 Add import/export table info to docs (#54)
    
    Updated 1.8 and 2.0 user manual documentation to include the
    import/export example within the documentation directly rather than
    requiring a user to click away from the manual pages to a different
    project (i.e., the accumulo-examples project) for that information.
---
 1.8/accumulo_user_manual.html                     | 99 ++++++++++++++++++++++-
 docs/2.0/getting-started/table_configuration.html | 85 +++++++++++++++++--
 feed.xml                                          |  4 +-
 3 files changed, 177 insertions(+), 11 deletions(-)

diff --git a/1.8/accumulo_user_manual.html b/1.8/accumulo_user_manual.html
index 9f1b457..cbd7193 100644
--- a/1.8/accumulo_user_manual.html
+++ b/1.8/accumulo_user_manual.html
@@ -519,6 +519,7 @@ body.book #toc,body.book #preamble,body.book h1.sect0,body.book .sect1>h2{page-b
 <li><a href="#_delete_range">6.9. Delete Range</a></li>
 <li><a href="#_cloning_tables">6.10. Cloning Tables</a></li>
 <li><a href="#_exporting_tables">6.11. Exporting Tables</a></li>
+<li><a href="#_export_example">6.11.1. Table Import/Export Example</a></li>
 </ul>
 </li>
 <li><a href="#_iterator_design">7. Iterator Design</a>
@@ -3000,10 +3001,100 @@ root@a14 cic&gt;</pre>
 <p>Accumulo supports exporting tables for the purpose of copying tables to another
 cluster. Exporting and importing tables preserves the table's configuration,
 splits, and logical time. Tables are exported and then copied via the Hadoop
-distcp command. To export a table, it must be offline and stay offline while
-discp runs. The reason it needs to stay offline is to prevent files from being
-deleted. A table can be cloned and the clone taken offline inorder to avoid
-losing access to the table. See <code>docs/examples/README.export</code> for an example.</p>
+<code>distcp</code> command. To export a table, it must be offline and stay offline
while
+<code>distcp</code> runs. Staying offline prevents files from being deleted.
An easy
+way to take a table offline without interrupting access to it is to clone it and take
+the clone offline.</p>
+</div>
+<div class="sect3">
+<h3 id="_export_example">6.11.1. Table Import/Export Example</h3>
+<div class="paragraph">
+<p>The following example demonstrates Accumulo's mechanism for exporting and
+importing tables.</p>
+</div>
+<div class="paragraph">
+<p>The shell session below illustrates creating a table, inserting data, and exporting
+the table.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>root@test15&gt; createtable table1
+    root@test15 table1&gt; insert a cf1 cq1 v1
+    root@test15 table1&gt; insert h cf1 cq1 v2
+    root@test15 table1&gt; insert z cf1 cq1 v3
+    root@test15 table1&gt; insert z cf1 cq2 v4
+    root@test15 table1&gt; addsplits -t table1 b r
+    root@test15 table1&gt; scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15&gt; config -t table1 -s table.split.threshold=100M
+    root@test15 table1&gt; clonetable table1 table1_exp
+    root@test15 table1&gt; offline table1_exp
+    root@test15 table1&gt; exporttable -t table1_exp /tmp/table1_export
+    root@test15 table1&gt; quit</pre>
+</div>
+</div>
+<div class="paragraph">
+<p>After executing the export command, a few files are created in the hdfs dir.
+One of the files is a list of files to <code>distcp</code> as shown below.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>$ hadoop fs -ls /tmp/table1_export
+    Found 2 items
+    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+    $ hadoop fs -cat /tmp/table1_export/distcp.txt
+    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip</pre>
+</div>
+</div>
+<div class="paragraph">
+<p>Before the table can be imported, it must be copied using <code>distcp</code>.
After the
+<code>distcp</code> completes, the cloned table may be deleted.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>$ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest</pre>
+</div>
+</div>
+<div class="paragraph">
+<p>The Accumulo shell session below shows importing the table and inspecting it.
+The data, splits, config, and logical time information for the table were
+preserved.</p>
+</div>
+<div class="listingblock">
+<div class="content">
+    <pre>root@test15&gt; importtable table1_copy /tmp/table1_export_dest
+    root@test15&gt; table table1_copy
+    root@test15 table1_copy&gt; scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1_copy&gt; getsplits -t table1_copy
+    b
+    r
+    root@test15&gt; config -t table1_copy -f split
+    ---------+--------------------------+-------------------------------------------
+    SCOPE    | NAME                     | VALUE
+    ---------+--------------------------+-------------------------------------------
+    default  | table.split.threshold .. | 1G
+    table    |    @override ........... | 100M
+    ---------+--------------------------+-------------------------------------------
+    root@test15&gt; tables -l
+    accumulo.metadata    =&gt;        !0
+    accumulo.root        =&gt;        +r
+    table1_copy          =&gt;         5
+    trace                =&gt;         1
+    root@test15 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
+    5;b srv:time []    M1343224500467
+    5;r srv:time []    M1343224500467
+    5< srv:time []    M1343224500467</pre>
+</div>
+</div>
 </div>
 </div>
 </div>
diff --git a/docs/2.0/getting-started/table_configuration.html b/docs/2.0/getting-started/table_configuration.html
index b00f669..feb4d24 100644
--- a/docs/2.0/getting-started/table_configuration.html
+++ b/docs/2.0/getting-started/table_configuration.html
@@ -957,11 +957,86 @@ root@a14 cic&gt;
 <p>Accumulo supports exporting tables for the purpose of copying tables to another
 cluster. Exporting and importing tables preserves the table's configuration,
 splits, and logical time. Tables are exported and then copied via the Hadoop
-distcp command. To export a table, it must be offline and stay offline while
-discp runs. The reason it needs to stay offline is to prevent files from being
-deleted. A table can be cloned and the clone taken offline inorder to avoid
-losing access to the table. See the <a href="https://github.com/apache/accumulo-examples/blob/master/docs/export.md">export example</a>
-for example code.</p>
+<code class="highlighter-rouge">distcp</code> command. To export a table, it
must be offline and stay offline while
+<code class="highlighter-rouge">distcp</code> runs. Staying offline prevents
files from being deleted during the process.
+An easy way to take a table offline without interrupting access is to clone it
+and take the clone offline.</p>
+
+<h3 id="table-importexport-example">Table Import/Export Example</h3>
+
+<p>The following example demonstrates Accumulo’s mechanism for exporting and
+importing tables.</p>
+
+<p>The shell session below illustrates creating a table, inserting data, and
+exporting the table.</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
   root@test15&gt; createtable table1
+    root@test15 table1&gt; insert a cf1 cq1 v1
+    root@test15 table1&gt; insert h cf1 cq1 v2
+    root@test15 table1&gt; insert z cf1 cq1 v3
+    root@test15 table1&gt; insert z cf1 cq2 v4
+    root@test15 table1&gt; addsplits -t table1 b r
+    root@test15 table1&gt; scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15&gt; config -t table1 -s table.split.threshold=100M
+    root@test15 table1&gt; clonetable table1 table1_exp
+    root@test15 table1&gt; offline table1_exp
+    root@test15 table1&gt; exporttable -t table1_exp /tmp/table1_export
+    root@test15 table1&gt; quit
+</code></pre></div></div>
+
+<p>After executing the export command, a few files are created in the hdfs dir.
+One of the files is a list of files to distcp as shown below.</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
   $ hadoop fs -ls /tmp/table1_export
+    Found 2 items
+    -rw-r--r--   3 user supergroup        162 2012-07-25 09:56 /tmp/table1_export/distcp.txt
+    -rw-r--r--   3 user supergroup        821 2012-07-25 09:56 /tmp/table1_export/exportMetadata.zip
+    $ hadoop fs -cat /tmp/table1_export/distcp.txt
+    hdfs://n1.example.com:6093/accumulo/tables/3/default_tablet/F0000000.rf
+    hdfs://n1.example.com:6093/tmp/table1_export/exportMetadata.zip
+</code></pre></div></div>
+
+<p>Before the table can be imported, it must be copied using <code class="highlighter-rouge">distcp</code>.
After the
+<code class="highlighter-rouge">distcp</code> completea, the cloned table may
be deleted.</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
   $ hadoop distcp -f /tmp/table1_export/distcp.txt /tmp/table1_export_dest
+</code></pre></div></div>
+
+<p>The Accumulo shell session below shows importing the table and inspecting it.
+The data, splits, config, and logical time information for the table were
+preserved.</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
   root@test15&gt; importtable table1_copy /tmp/table1_export_dest
+    root@test15&gt; table table1_copy
+    root@test15 table1_copy&gt; scan
+    a cf1:cq1 []    v1
+    h cf1:cq1 []    v2
+    z cf1:cq1 []    v3
+    z cf1:cq2 []    v4
+    root@test15 table1_copy&gt; getsplits -t table1_copy
+    b
+    r
+    root@test15&gt; config -t table1_copy -f split
+    ---------+--------------------------+-------------------------------------------
+    SCOPE    | NAME                     | VALUE
+    ---------+--------------------------+-------------------------------------------
+    default  | table.split.threshold .. | 1G
+    table    |    @override ........... | 100M
+    ---------+--------------------------+-------------------------------------------
+    root@test15&gt; tables -l
+    accumulo.metadata    =&gt;        !0
+    accumulo.root        =&gt;        +r
+    table1_copy          =&gt;         5
+    trace                =&gt;         1
+    root@test15 table1_copy&gt; scan -t accumulo.metadata -b 5 -c srv:time
+    5;b srv:time []    M1343224500467
+    5;r srv:time []    M1343224500467
+    5&lt; srv:time []    M1343224500467
+</code></pre></div></div>
 
 
 
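The import half of the example likewise has a programmatic equivalent. Again,
a rough sketch with the same placeholder connection assumptions as above:

    import org.apache.accumulo.core.client.Connector;
    import org.apache.accumulo.core.client.ZooKeeperInstance;
    import org.apache.accumulo.core.client.security.tokens.PasswordToken;
    import org.apache.hadoop.io.Text;

    public class ImportTableSketch {
      public static void main(String[] args) throws Exception {
        // Placeholder connection details -- substitute your own.
        Connector conn = new ZooKeeperInstance("test15", "zk1:2181")
            .getConnector("root", new PasswordToken("secret"));

        // Create table1_copy from the directory distcp populated; the
        // table's splits, configuration, and logical time come with it.
        conn.tableOperations().importTable("table1_copy", "/tmp/table1_export_dest");

        // Confirm the splits (b and r in the example) survived the round trip.
        for (Text split : conn.tableOperations().listSplits("table1_copy")) {
          System.out.println(split);
        }
      }
    }
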
diff --git a/feed.xml b/feed.xml
index 999ea19..5eedeac 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
 </description>
     <link>https://accumulo.apache.org/</link>
     <atom:link href="https://accumulo.apache.org/feed.xml" rel="self" type="application/rss+xml"/>
-    <pubDate>Thu, 04 Jan 2018 17:49:13 -0500</pubDate>
-    <lastBuildDate>Thu, 04 Jan 2018 17:49:13 -0500</lastBuildDate>
+    <pubDate>Fri, 05 Jan 2018 13:00:09 -0500</pubDate>
+    <lastBuildDate>Fri, 05 Jan 2018 13:00:09 -0500</lastBuildDate>
     <generator>Jekyll v3.6.2</generator>
     
     

