From commits-return-22641-archive-asf-public=cust-asf.ponee.io@accumulo.apache.org Mon Feb 25 22:22:31 2019
Return-Path:
X-Original-To: archive-asf-public@cust-asf.ponee.io
Delivered-To: archive-asf-public@cust-asf.ponee.io
Received: from mail.apache.org (hermes.apache.org [140.211.11.3])
    by mx-eu-01.ponee.io (Postfix) with SMTP id C51FE180626
    for ; Mon, 25 Feb 2019 23:22:30 +0100 (CET)
Received: (qmail 63904 invoked by uid 500); 25 Feb 2019 22:22:29 -0000
Mailing-List: contact commits-help@accumulo.apache.org; run by ezmlm
Precedence: bulk
List-Help:
List-Unsubscribe:
List-Post:
List-Id:
Reply-To: dev@accumulo.apache.org
Delivered-To: mailing list commits@accumulo.apache.org
Received: (qmail 63895 invoked by uid 99); 25 Feb 2019 22:22:29 -0000
Received: from ec2-52-202-80-70.compute-1.amazonaws.com (HELO gitbox.apache.org) (52.202.80.70)
    by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 25 Feb 2019 22:22:29 +0000
Received: by gitbox.apache.org (ASF Mail Server at gitbox.apache.org, from userid 33)
    id 52F9E82E8A; Mon, 25 Feb 2019 22:22:29 +0000 (UTC)
Date: Mon, 25 Feb 2019 22:22:29 +0000
To: "commits@accumulo.apache.org"
Subject: [accumulo-website] branch asf-site updated: Jekyll build from master:f665f04
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Message-ID: <155113334920.24832.15568020469612649130@gitbox.apache.org>
From: mwalch@apache.org
X-Git-Host: gitbox.apache.org
X-Git-Repo: accumulo-website
X-Git-Refname: refs/heads/asf-site
X-Git-Reftype: branch
X-Git-Oldrev: 9dee9cd4427a16c40db10048963c4db708dc281f
X-Git-Newrev: 140b7c60580307f410a3f0e54d9d39b0d12719f4
X-Git-Rev: 140b7c60580307f410a3f0e54d9d39b0d12719f4
X-Git-NotificationType: ref_changed_plus_diff
X-Git-Multimail-Version: 1.5.dev
Auto-Submitted: auto-generated

This is an automated email from the ASF dual-hosted git repository.
mwalch pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/accumulo-website.git

The following commit(s) were added to refs/heads/asf-site by this push:
     new 140b7c6  Jekyll build from master:f665f04
140b7c6 is described below

commit 140b7c60580307f410a3f0e54d9d39b0d12719f4
Author: Mike Walch
AuthorDate: Mon Feb 25 17:21:47 2019 -0500

    Jekyll build from master:f665f04

    Updated client docs with 2.0 API changes (#160)

    * Limited use of Text
    * Used new 2.0 API where possible
---
 docs/2.x/development/mapreduce.html        |  2 +-
 docs/2.x/getting-started/clients.html      |  2 +-
 docs/2.x/getting-started/table_design.html | 51 +++++++++++++++---------------
 feed.xml                                   |  4 +--
 search_data.json                           |  6 ++--
 5 files changed, 32 insertions(+), 33 deletions(-)

diff --git a/docs/2.x/development/mapreduce.html b/docs/2.x/development/mapreduce.html
index c9ef96e..e824422 100644
--- a/docs/2.x/development/mapreduce.html
+++ b/docs/2.x/development/mapreduce.html
@@ -504,7 +504,7 @@ your job with yarn command.

AccumuloInputFormat has optional settings.

 List<Range> ranges = new ArrayList<Range>();
- List<Pair<Text,Text>> columns = new ArrayList<Pair<Text,Text>>();
+ Collection<IteratorSetting.Column> columns = new ArrayList<IteratorSetting.Column>();
  // populate ranges & columns
  IteratorSetting is = new IteratorSetting(30, RegExFilter.class);
  RegExFilter.setRegexs(is, ".*suffix", null, null, null, true);
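The hunk above attaches a RegExFilter with the value pattern `.*suffix` to the input format. As a plain-JDK illustration of what that pattern keeps (no Accumulo dependency; the class and method names below are invented for this sketch, not Accumulo API):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class SuffixFilterSketch {
    // Hypothetical stand-in for a server-side regex filter: keep only
    // values whose entire text matches the given pattern.
    static List<String> filterByRegex(List<String> values, String regex) {
        Pattern p = Pattern.compile(regex);
        return values.stream()
                     .filter(v -> p.matcher(v).matches())
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> values = List.of("abc-suffix", "plain", "xyzsuffix");
        // ".*suffix" matches any value ending in the literal "suffix"
        System.out.println(filterByRegex(values, ".*suffix")); // [abc-suffix, xyzsuffix]
    }
}
```

In the real job, this filtering happens server-side inside the scan iterator stack rather than in client code.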
diff --git a/docs/2.x/getting-started/clients.html b/docs/2.x/getting-started/clients.html
index 6e2f7ff..7cdff39 100644
--- a/docs/2.x/getting-started/clients.html
+++ b/docs/2.x/getting-started/clients.html
@@ -681,7 +681,7 @@ to return a subset of the columns available.

 try (Scanner scan = client.createScanner("table", auths)) {
   scan.setRange(new Range("harry","john"));
-  scan.fetchColumnFamily(new Text("attributes"));
+  scan.fetchColumnFamily("attributes");
   for (Entry<Key,Value> entry : scan) {
     Text row = entry.getKey().getRow();

diff --git a/docs/2.x/getting-started/table_design.html b/docs/2.x/getting-started/table_design.html
index fb2cab6..9151de9 100644
--- a/docs/2.x/getting-started/table_design.html
+++ b/docs/2.x/getting-started/table_design.html
@@ -435,11 +435,9 @@ if we have the following data in a comma-separated file:

name in the column family, and a blank column qualifier:

Mutation m = new Mutation(userid);
-final String column_qualifier = "";
-m.put("age", column_qualifier, age);
-m.put("address", column_qualifier, address);
-m.put("balance", column_qualifier, account_balance);
-
+m.at().family("age").put(age);
+m.at().family("address").put(address);
+m.at().family("balance").put(account_balance);
 writer.add(m);
 
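The hunk above replaces the three-argument `put` calls with the fluent 2.0 style `m.at().family(...).put(...)`. A toy sketch of that builder shape (invented class names, cells kept in a plain list rather than Accumulo's internal encoding) shows why the blank qualifier no longer needs to be spelled out:

```java
import java.util.ArrayList;
import java.util.List;

// Toy fluent-mutation sketch, NOT the Accumulo implementation: each
// at().family(f).put(v) call appends one (family, qualifier, value)
// cell, with the qualifier defaulting to the empty string.
public class FluentMutationSketch {
    final String row;
    final List<String[]> cells = new ArrayList<>();

    FluentMutationSketch(String row) { this.row = row; }

    CellBuilder at() { return new CellBuilder(); }

    class CellBuilder {
        String family = "", qualifier = "";
        CellBuilder family(String f) { this.family = f; return this; }
        CellBuilder qualifier(String q) { this.qualifier = q; return this; }
        void put(String value) { cells.add(new String[]{family, qualifier, value}); }
    }

    public static void main(String[] args) {
        FluentMutationSketch m = new FluentMutationSketch("userid");
        m.at().family("age").put("30");
        m.at().family("balance").put("100");
        System.out.println(m.cells.size() + " cells"); // 2 cells
    }
}
```

The design win is that optional parts (qualifier, visibility, timestamp) become named steps in the chain instead of positional arguments.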
@@ -451,7 +449,7 @@ userid as the range of a scanner and fetching specific columns:

Range r = new Range(userid, userid); // single row
Scanner s = client.createScanner("userdata", auths);
s.setRange(r);
-s.fetchColumnFamily(new Text("age"));
+s.fetchColumnFamily("age");
for (Entry<Key,Value> entry : s) {
    System.out.println(entry.getValue().toString());

@@ -517,7 +515,7 @@ of a lexicoder that encodes a java Date object so that it sorts lexicographically

// encode the rowId so that it is sorted lexicographically
Mutation mutation = new Mutation(dateEncoder.encode(hour));
-mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+mutation.at().family("colf").qualifier("colq").put(new byte[]{});

If we want to return the most recent date first, we can reverse the sort order

@@ -533,7 +531,7 @@ with the reverse lexicoder:

// encode the rowId so that it sorts in reverse lexicographic order
Mutation mutation = new Mutation(reverseEncoder.encode(hour));
-mutation.put(new Text("colf"), new Text("colq"), new Value(new byte[]{}));
+mutation.at().family("colf").qualifier("colq").put(new byte[]{});

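The two hunks above encode row IDs with a lexicoder and a reverse lexicoder. The idea can be sketched with plain strings: zero-padded timestamps already sort lexicographically in time order, and complementing each character reverses that order. The digit-complement encoding below is illustrative only, not Accumulo's actual lexicoder byte format:

```java
import java.util.Arrays;

public class SortOrderSketch {
    // Illustrative "reverse lexicoder": map each digit d to 9 - d, so
    // lexicographic order of the encoded strings is the reverse of the
    // order of the originals. Applying it twice decodes.
    static String reverseEncode(String padded) {
        StringBuilder sb = new StringBuilder();
        for (char c : padded.toCharArray()) sb.append((char) ('9' - c + '0'));
        return sb.toString();
    }

    public static void main(String[] args) {
        String[] hours = {"2019022510", "2019022517", "2019022501"};

        String[] fwd = hours.clone();
        Arrays.sort(fwd); // plain lexicographic sort: oldest first
        System.out.println(fwd[0]); // 2019022501

        String[] rev = Arrays.stream(hours).map(SortOrderSketch::reverseEncode)
                             .sorted().toArray(String[]::new);
        // first encoded entry decodes to the most recent hour
        System.out.println(reverseEncode(rev[0])); // 2019022517
    }
}
```

A scanner reading a table keyed this way sees the newest rows first without any client-side sorting.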
Indexing

@@ -581,26 +579,26 @@ BatchScanner, which performs the lookups in multiple threads to multiple servers and returns an Iterator over all the rows retrieved. The rows returned are NOT in sorted order, as is the case with the basic Scanner interface.

-// first we scan the index for IDs of rows matching our query
-Text term = new Text("mySearchTerm");
-
-HashSet<Range> matchingRows = new HashSet<Range>();
+HashSet<Range> matchingRows = new HashSet<Range>();
 
-Scanner indexScanner = createScanner("index", auths);
-indexScanner.setRange(new Range(term, term));
+// first we scan the index for IDs of rows matching our query
+try (Scanner indexScanner = client.createScanner("index", auths)) {
+  indexScanner.setRange(Range.exact("mySearchTerm"));
 
-// we retrieve the matching rowIDs and create a set of ranges
-for (Entry<Key,Value> entry : indexScanner) {
+  // we retrieve the matching rowIDs and create a set of ranges
+  for (Entry<Key,Value> entry : indexScanner) {
     matchingRows.add(new Range(entry.getKey().getColumnQualifier()));
+  }
 }
 
 // now we pass the set of rowIDs to the batch scanner to retrieve them
-BatchScanner bscan = client.createBatchScanner("table", auths, 10);
-bscan.setRanges(matchingRows);
-bscan.fetchColumnFamily(new Text("attributes"));
+try (BatchScanner bscan = client.createBatchScanner("table", auths, 10)) {
+  bscan.setRanges(matchingRows);
+  bscan.fetchColumnFamily("attributes");
 
-for (Entry<Key,Value> entry : bscan) {
+  for (Entry<Key,Value> entry : bscan) {
     System.out.println(entry.getValue());
+  }
 }
 
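The hunk above performs a two-phase query: scan the index table for the row IDs matching a term, then batch-scan the main table for those rows. A pure-JDK sketch of the same flow, with maps standing in for the Accumulo scanners (all names and data here are invented for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

public class IndexLookupSketch {
    // Phase 1: "scan" the index for row IDs containing the term.
    // Phase 2: "batch scan" the main table for those rows.
    static List<String> lookup(Map<String, Set<String>> index,
                               Map<String, String> table, String term) {
        Set<String> matchingRows = index.getOrDefault(term, Set.of());
        List<String> results = new ArrayList<>();
        // TreeSet gives deterministic (sorted) order; a real BatchScanner
        // returns rows in no particular order.
        for (String row : new TreeSet<>(matchingRows)) results.add(table.get(row));
        return results;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> index = Map.of("mySearchTerm", Set.of("row2", "row5"));
        Map<String, String> table = Map.of("row1", "a", "row2", "b", "row5", "c");
        System.out.println(lookup(index, table, "mySearchTerm")); // [b, c]
    }
}
```

The pattern trades a second round trip for the ability to answer queries on attributes other than the row ID.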
@@ -856,16 +854,17 @@ BatchScanner within user query code as follows:

 Text[] terms = {new Text("the"), new Text("white"), new Text("house")};

-BatchScanner bscan = client.createBatchScanner(table, auths, 20);
+try (BatchScanner bscan = client.createBatchScanner(table, auths, 20)) {
 
-IteratorSetting iter = new IteratorSetting(20, "ii", IntersectingIterator.class);
-IntersectingIterator.setColumnFamilies(iter, terms);
+  IteratorSetting iter = new IteratorSetting(20, "ii", IntersectingIterator.class);
+  IntersectingIterator.setColumnFamilies(iter, terms);
 
-bscan.addScanIterator(iter);
-bscan.setRanges(Collections.singleton(new Range()));
+  bscan.addScanIterator(iter);
+  bscan.setRanges(Collections.singleton(new Range()));
 
-for (Entry<Key,Value> entry : bscan) {
+  for (Entry<Key,Value> entry : bscan) {
     System.out.println(" " + entry.getKey().getColumnQualifier());
+  }
 }
 
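The IntersectingIterator example above returns only documents containing every query term. Setting the server-side mechanics aside, the core computation is an intersection of posting lists, which can be sketched in plain Java (data and names invented for illustration):

```java
import java.util.List;
import java.util.Set;
import java.util.TreeSet;

public class IntersectionSketch {
    // Given one posting list (set of document IDs) per query term,
    // keep only the documents present in every list.
    static Set<String> intersect(List<Set<String>> postings) {
        Set<String> result = new TreeSet<>(postings.get(0));
        for (Set<String> p : postings.subList(1, postings.size())) result.retainAll(p);
        return result;
    }

    public static void main(String[] args) {
        List<Set<String>> postings = List.of(
            Set.of("doc1", "doc2", "doc3"),   // documents containing term 1
            Set.of("doc2", "doc3"),           // documents containing term 2
            Set.of("doc3", "doc4"));          // documents containing term 3
        System.out.println(intersect(postings)); // [doc3]
    }
}
```

In Accumulo this intersection runs inside each tablet server over the document-partitioned index, so only the final matches cross the network.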
diff --git a/feed.xml b/feed.xml
index d13fd10..c68e987 100644
--- a/feed.xml
+++ b/feed.xml
@@ -6,8 +6,8 @@
     https://accumulo.apache.org/
-    Mon, 25 Feb 2019 10:59:35 -0500
-    Mon, 25 Feb 2019 10:59:35 -0500
+    Mon, 25 Feb 2019 17:21:39 -0500
+    Mon, 25 Feb 2019 17:21:39 -0500
     Jekyll v3.7.3

diff --git a/search_data.json b/search_data.json
index 0385f83..4354ad4 100644
--- a/search_data.json
+++ b/search_data.json
@@ -107,7 +107,7 @@
   "docs-2-x-development-mapreduce": {
     "title": "MapReduce",
-    "content" : "Accumulo tables can be used as the source and destination of MapReduce jobs.General MapReduce configurationAdd Accumulo’s MapReduce API to your dependenciesIf you are using Maven, add the following dependency to your pom.xml to use Accumulo’s MapReduce API:&lt;dependency&gt; &lt;groupId&gt;org.apache.accumulo&lt;/groupId&gt; &lt;artifactId&gt;accumulo-hadoop-mapreduce&lt;/artifactId&gt; &lt;version&gt;2.0.0-alpha-2&am [...]
+    "content" : "Accumulo tables can be used as the source and destination of MapReduce jobs.General MapReduce configurationAdd Accumulo’s MapReduce API to your dependenciesIf you are using Maven, add the following dependency to your pom.xml to use Accumulo’s MapReduce API:&lt;dependency&gt; &lt;groupId&gt;org.apache.accumulo&lt;/groupId&gt; &lt;artifactId&gt;accumulo-hadoop-mapreduce&lt;/artifactId&gt; &lt;version&gt;2.0.0-alpha-2&am [...]
     "url": " /docs/2.x/development/mapreduce",
     "categories": "development"
   },
@@ -135,7 +135,7 @@
   "docs-2-x-getting-started-clients": {
     "title": "Accumulo Clients",
-    "content" : "Creating Client CodeIf you are using Maven to create Accumulo client code, add the following dependency to your pom:&lt;dependency&gt; &lt;groupId&gt;org.apache.accumulo&lt;/groupId&gt; &lt;artifactId&gt;accumulo-core&lt;/artifactId&gt; &lt;version&gt;2.0.0-alpha-2&lt;/version&gt;&lt;/dependency&gt;When writing code that uses Accumulo, only use the Accumulo Public API.The accumulo-core artifact include [...]
+    "content" : "Creating Client CodeIf you are using Maven to create Accumulo client code, add the following dependency to your pom:&lt;dependency&gt; &lt;groupId&gt;org.apache.accumulo&lt;/groupId&gt; &lt;artifactId&gt;accumulo-core&lt;/artifactId&gt; &lt;version&gt;2.0.0-alpha-2&lt;/version&gt;&lt;/dependency&gt;When writing code that uses Accumulo, only use the Accumulo Public API.The accumulo-core artifact include [...]
     "url": " /docs/2.x/getting-started/clients",
     "categories": "getting-started"
   },
@@ -184,7 +184,7 @@
   "docs-2-x-getting-started-table-design": {
     "title": "Table Design",
-    "content" : "Basic TableSince Accumulo tables are sorted by row ID, each table can be thought of as being indexed by the row ID. Lookups performed by row ID can be executed quickly, by doing a binary search, first across the tablets, and then within a tablet. Clients should choose a row ID carefully in order to support their desired application. A simple rule is to select a unique identifier as the row ID for each entity to be stored and assign all the other attributes to be tracked to [...]
+    "content" : "Basic TableSince Accumulo tables are sorted by row ID, each table can be thought of as being indexed by the row ID. Lookups performed by row ID can be executed quickly, by doing a binary search, first across the tablets, and then within a tablet. Clients should choose a row ID carefully in order to support their desired application. A simple rule is to select a unique identifier as the row ID for each entity to be stored and assign all the other attributes to be tracked to [...]
     "url": " /docs/2.x/getting-started/table_design",
     "categories": "getting-started"
   },