Return-Path: 
X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io
Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io
Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183])
    by cust-asf2.ponee.io (Postfix) with ESMTP id 10E4D200BB7
    for ; Wed, 9 Nov 2016 15:33:12 +0100 (CET)
Received: by cust-asf.ponee.io (Postfix)
    id 0F8CE160AF7; Wed, 9 Nov 2016 14:33:12 +0000 (UTC)
Delivered-To: archive-asf-public@cust-asf.ponee.io
Received: from mail.apache.org (hermes.apache.org [140.211.11.3])
    by cust-asf.ponee.io (Postfix) with SMTP id 3303D160AEE
    for ; Wed, 9 Nov 2016 15:33:10 +0100 (CET)
Received: (qmail 14887 invoked by uid 500); 9 Nov 2016 14:33:09 -0000
Mailing-List: contact commits-help@kudu.apache.org; run by ezmlm
Precedence: bulk
List-Help: 
List-Unsubscribe: 
List-Post: 
List-Id: 
Reply-To: dev@kudu.apache.org
Delivered-To: mailing list commits@kudu.apache.org
Received: (qmail 14878 invoked by uid 99); 9 Nov 2016 14:33:09 -0000
Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23)
    by apache.org (qpsmtpd/0.29) with ESMTP; Wed, 09 Nov 2016 14:33:09 +0000
Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33)
    id 4AE7EE07EF; Wed, 9 Nov 2016 14:33:09 +0000 (UTC)
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
From: jtbirdsell@apache.org
To: commits@kudu.apache.org
Message-Id: 
X-Mailer: ASF-Git Admin Mailer
Subject: kudu-site git commit: Publish commit(s) from site source repo: 5be560d [site] - Update committer page
Date: Wed, 9 Nov 2016 14:33:09 +0000 (UTC)
archived-at: Wed, 09 Nov 2016 14:33:12 -0000

Repository: kudu-site
Updated Branches:
  refs/heads/asf-site b420996e1 -> 4d030156a


Publish commit(s) from site source repo: 5be560d [site] - Update committer page


Site-Repo-Commit: 5be560d8bd7030c1f45ef48450ea475a0cf39999
Project: http://git-wip-us.apache.org/repos/asf/kudu-site/repo
Commit: 
http://git-wip-us.apache.org/repos/asf/kudu-site/commit/4d030156
Tree: http://git-wip-us.apache.org/repos/asf/kudu-site/tree/4d030156
Diff: http://git-wip-us.apache.org/repos/asf/kudu-site/diff/4d030156

Branch: refs/heads/asf-site
Commit: 4d030156ac3c06c2d5b0d1a0a02cfeea7961a70e
Parents: b420996
Author: Jordan Birdsell 
Authored: Wed Nov 9 09:31:54 2016 -0500
Committer: Jordan Birdsell 
Committed: Wed Nov 9 09:31:54 2016 -0500

----------------------------------------------------------------------
 blog/index.html        |  8 +++---
 blog/page/2/index.html | 40 +++++++++++++-------------
 blog/page/4/index.html |  2 +-
 blog/page/5/index.html |  2 +-
 blog/page/6/index.html |  2 +-
 blog/page/8/index.html | 12 ++++----
 committers.html        |  5 ++++
 feed.xml               | 68 ++++++++++++++++++++++-----------------------
 8 files changed, 72 insertions(+), 67 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/blog/index.html
----------------------------------------------------------------------
diff --git a/blog/index.html b/blog/index.html
index 532ab1d..3d36552 100644
--- a/blog/index.html
+++ b/blog/index.html
@@ -160,11 +160,11 @@ covers ongoing development and news in the Apache Kudu project.

 Welcome to the twenty-first edition of the Kudu Weekly Update. Astute
 readers will notice that the weekly blog posts have been not-so-weekly
-of late – in fact, it has been nearly two months since the previous post
+of late – in fact, it has been nearly two months since the previous post
 as I and others have focused on releases, conferences, etc.

 So, rather than covering just this past week, this post will cover highlights
-of the progress since the 1.0 release in mid-September. If you’re interested
+of the progress since the 1.0 release in mid-September. If you’re interested
 in learning about progress prior to that release, check the release notes.

@@ -186,8 +186,8 @@ in learning about progress prior to that release, check the
-This week in New York, O’Reilly and Cloudera will be hosting Strata+Hadoop World
-2016. If you’re interested in Kudu, there will be several opportunities to
+This week in New York, O’Reilly and Cloudera will be hosting Strata+Hadoop World
+2016. If you’re interested in Kudu, there will be several opportunities to
 learn more, both from the open source development team as well as some companies
 who are already adopting Kudu for their use cases.

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/blog/page/2/index.html
----------------------------------------------------------------------
diff --git a/blog/page/2/index.html b/blog/page/2/index.html
index 2f1efb7..77afe9d 100644
--- a/blog/page/2/index.html
+++ b/blog/page/2/index.html
@@ -138,12 +138,12 @@ scan path to speed up queries.

-This post discusses the Kudu Flume Sink. First, I’ll give some background on why we considered
+This post discusses the Kudu Flume Sink. First, I’ll give some background on why we considered
 using Kudu, what Flume does for us, and how Flume fits with Kudu in our project.

Why Kudu

-Traditionally in the Hadoop ecosystem we’ve dealt with various batch processing technologies such
+Traditionally in the Hadoop ecosystem we’ve dealt with various batch processing technologies such
 as MapReduce and the many libraries and tools built on top of it in various languages (Apache Pig,
 Apache Hive, Apache Oozie and many others). The main problem with this approach is that it needs to
 process the whole data set in batches, again and again, as soon as new data gets added. Things get

@@ -183,14 +183,14 @@ queries and processes need to be carefully planned and implemented.

  • flexible and expressive, thanks to SQL support via Apache Impala (incubating)
  • a table-oriented, mutable data store that feels like a traditional relational database
-  • very easy to program, you can even pretend it’s good old MySQL
+  • very easy to program, you can even pretend it’s good old MySQL
  • low-latency and relatively high throughput, both for ingest and query
-At Argyle Data, we’re dealing with complex fraud detection scenarios. We need to ingest massive
+At Argyle Data, we’re dealing with complex fraud detection scenarios. We need to ingest massive
 amounts of data, run machine learning algorithms and generate reports. When we created our current
 architecture two years ago we decided to opt for a database as the backbone of our system. That
-database is Apache Accumulo. It’s a key-value based database which runs on top of Hadoop HDFS,
+database is Apache Accumulo. It’s a key-value based database which runs on top of Hadoop HDFS,
 quite similar to HBase but with some important improvements such as cell level security and ease
 of deployment and management. To enable querying of this data for quite complex reporting and
 analytics, we used Presto, a distributed query engine with a pluggable architecture open-sourced

@@ -203,12 +203,12 @@ architecture has served us well, but there were a few problems:

  • we need to support ad-hoc queries, plus long-term data warehouse functionality
-So, we’ve started gradually moving the core machine-learning pipeline to a streaming based
+So, we’ve started gradually moving the core machine-learning pipeline to a streaming based
 solution. This way we can ingest and process larger data-sets faster in the real-time. But then
 how would we take care of ad-hoc queries and long-term persistence? This is where Kudu comes in.
 While the machine learning pipeline ingests and processes real-time data, we store a copy of the
 same ingested data in Kudu for long-term access and ad-hoc queries. Kudu is our data warehouse. By
-using Kudu and Impala, we can retire our in-house Presto connector and rely on Impala’s
+using Kudu and Impala, we can retire our in-house Presto connector and rely on Impala’s
 super-fast query engine.

 But how would we make sure data is reliably ingested into the streaming pipeline and the
@@ -216,10 +216,10 @@ Kudu-based data warehouse? This is where Apache Flume comes in.

    Why Flume

-According to their website “Flume is a distributed, reliable, and
+According to their website “Flume is a distributed, reliable, and
 available service for efficiently collecting, aggregating, and moving large amounts of log data.
 It has a simple and flexible architecture based on streaming data flows. It is robust and fault
-tolerant with tunable reliability mechanisms and many failover and recovery mechanisms.” As you
+tolerant with tunable reliability mechanisms and many failover and recovery mechanisms.” As you
 can see, nowhere is Hadoop mentioned but Flume is typically used for ingesting data to Hadoop
 clusters.

    @@ -237,7 +237,7 @@ File-based channels are also provided. As for the sources, Avro, JMS, Thrift, sp source are some of the built-in ones. Flume also ships with many sinks, including sinks for writing data to HDFS, HBase, Hive, Kafka, as well as to other Flume agents.

-In the rest of this post I’ll go over the Kudu Flume sink and show you how to configure Flume to
+In the rest of this post I’ll go over the Kudu Flume sink and show you how to configure Flume to
 write ingested data to a Kudu table. The sink has been part of the Kudu distribution since the 0.8
 release and the source code can be found here.

@@ -269,8 +269,8 @@ agent1.sinks.sink1.producer = org.apache.kudu.flume.sink.SimpleKuduEventProducer
 virtual memory statistics for the machine and queue events into an in-memory channel1 channel,
 which in turn is used for writing these events to a Kudu table called stats. We are using
 org.apache.kudu.flume.sink.SimpleKuduEventProducer as the producer. SimpleKuduEventProducer is
-the built-in and default producer, but it’s implemented as a showcase for how to write Flume
-events into Kudu tables. For any serious functionality we’d have to write a custom producer. We
+the built-in and default producer, but it’s implemented as a showcase for how to write Flume
+events into Kudu tables. For any serious functionality we’d have to write a custom producer. We
 need to make this producer and the KuduSink class available to Flume. We can do that by simply
 copying the kudu-flume-sink-<VERSION>.jar jar file from the Kudu distribution to the
 $FLUME_HOME/plugins.d/kudu-sink/lib directory in the Flume installation. The jar file contains
@@ -278,7 +278,7 @@ copying the kudu-flume-sink-<VERSION>.jar jar file from the K

 At a minimum, the Kudu Flume Sink needs to know where the Kudu masters are
 (agent1.sinks.sink1.masterAddresses = localhost) and which Kudu table should be used for writing
-Flume events to (agent1.sinks.sink1.tableName = stats). The Kudu Flume Sink doesn’t create this
+Flume events to (agent1.sinks.sink1.tableName = stats). The Kudu Flume Sink doesn’t create this
 table, it has to be created before the Kudu Flume Sink is started.
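[Editor's sketch] Pulling together the settings the quoted post names (masterAddresses, tableName, batchSize, producer), a complete Flume agent configuration might look roughly like the following. The source command, channel sizing, and batchSize value are illustrative assumptions, not taken from the post:

```properties
# Hypothetical Flume agent wiring for the Kudu sink; names are illustrative.
agent1.sources  = source1
agent1.channels = channel1
agent1.sinks    = sink1

# An exec source tailing vmstat output, as in the post's example scenario.
agent1.sources.source1.type = exec
agent1.sources.source1.command = vmstat 1
agent1.sources.source1.channels = channel1

# Simple in-memory channel between source and sink.
agent1.channels.channel1.type = memory

agent1.sinks.sink1.type = org.apache.kudu.flume.sink.KuduSink
agent1.sinks.sink1.channel = channel1
# Required: the Kudu masters and the (pre-created) target table.
agent1.sinks.sink1.masterAddresses = localhost
agent1.sinks.sink1.tableName = stats
# Optional: events batched per flush (value here is an assumption).
agent1.sinks.sink1.batchSize = 50
agent1.sinks.sink1.producer = org.apache.kudu.flume.sink.SimpleKuduEventProducer
```

As the post notes, the stats table must exist before the sink starts; the sink will not create it.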

 You may also notice the batchSize parameter. Batch size is used for batching up to that many
@@ -299,7 +299,7 @@ impact on ingest performance of the Kudu cluster.

 masterAddresses
 N/A
-Comma-separated list of “host:port” pairs of the masters (port optional)
+Comma-separated list of “host:port” pairs of the masters (port optional)

 tableName
@@ -329,7 +329,7 @@ impact on ingest performance of the Kudu cluster.

-Let’s take a look at the source code for the built-in producer class:
+Let’s take a look at the source code for the built-in producer class:

    public class SimpleKuduEventProducer implements KuduEventProducer {
       private byte[] payload;
    @@ -400,8 +400,8 @@ which itself looks like this:

 public void configure(Context context) is called when an instance of our producer is instantiated
-by the KuduSink. SimpleKuduEventProducer’s implementation looks for a producer parameter named
-payloadColumn and uses its value (“payload” if not overridden in Flume configuration file) as the
+by the KuduSink. SimpleKuduEventProducer’s implementation looks for a producer parameter named
+payloadColumn and uses its value (“payload” if not overridden in Flume configuration file) as the
 column which will hold the value of the Flume event payload. If you recall from above, we had
 configured the KuduSink to listen for events generated from the vmstat command. Each output row
 from that command will be stored as a new row containing a payload column in the stats table.
@@ -410,9 +410,9 @@ define them by prefixing it with producer. (agent1.sinks.sink example).

 The main producer logic resides in the public List<Operation> getOperations() method. In
-SimpleKuduEventProducer’s implementation we simply insert the binary body of the Flume event into
-the Kudu table. Here we call Kudu’s newInsert() to initiate an insert, but could have used
-Upsert if updating an existing row was also an option, in fact there’s another producer
+SimpleKuduEventProducer’s implementation we simply insert the binary body of the Flume event into
+the Kudu table. Here we call Kudu’s newInsert() to initiate an insert, but could have used
+Upsert if updating an existing row was also an option, in fact there’s another producer
 implementation available for doing just that: SimpleKeyedKuduEventProducer. Most probably you
 will need to write your own custom producer in the real world, but you can base your
 implementation on the built-in ones.
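[Editor's sketch] Assuming only the interface shape the quoted post shows (configure(Context) plus getOperations() returning List<Operation>, with a payload field and a newInsert() call), a custom producer might be structured roughly like this. The initialize signature, class name, and column handling are assumptions for illustration, not compilable drop-in code; check the kudu-flume-sink sources for the exact interface:

```java
// Hypothetical custom producer modeled on the post's SimpleKuduEventProducer.
public class LineKuduEventProducer implements KuduEventProducer {
  private byte[] payload;               // Flume event body, as in the built-in producer
  private KuduTable table;
  private String payloadColumn = "payload"; // default column name described in the post

  @Override
  public void configure(Context context) {
    // Producer parameters arrive prefixed with "producer." in the Flume config.
    payloadColumn = context.getString("payloadColumn", "payload");
  }

  // Assumed hook for receiving the event and target table (signature not
  // confirmed by the post).
  public void initialize(Event event, KuduTable table) {
    this.payload = event.getBody();
    this.table = table;
  }

  @Override
  public List<Operation> getOperations() {
    // newInsert() starts an insert; an upsert-style producer (like
    // SimpleKeyedKuduEventProducer) would build a different operation here.
    Insert insert = table.newInsert();
    insert.getRow().addBinary(payloadColumn, payload);
    return Collections.singletonList(insert);
  }
}
```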

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/blog/page/4/index.html
----------------------------------------------------------------------
diff --git a/blog/page/4/index.html b/blog/page/4/index.html
index 23e1fb5..1f9a328 100644
--- a/blog/page/4/index.html
+++ b/blog/page/4/index.html
@@ -167,7 +167,7 @@ covers ongoing development and news in the Apache Kudu (incubating) project.

 This blog post describes how the 1.0 release of Apache Kudu (incubating) will
-support fault tolerance for the Kudu master, finally eliminating Kudu’s last
+support fault tolerance for the Kudu master, finally eliminating Kudu’s last
 single point of failure.

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/blog/page/5/index.html
----------------------------------------------------------------------
diff --git a/blog/page/5/index.html b/blog/page/5/index.html
index 3432824..3c3fa70 100644
--- a/blog/page/5/index.html
+++ b/blog/page/5/index.html
@@ -141,7 +141,7 @@ covers ongoing development and news in the Apache Kudu (incubating) project.

    0.9.0!

 This latest version adds basic UPSERT functionality and an improved Apache Spark Data Source
-that doesn’t rely on the MapReduce I/O formats. It also improves Tablet Server
+that doesn’t rely on the MapReduce I/O formats. It also improves Tablet Server
 restart time as well as write performance under high load. Finally, Kudu now enforces
 the specification of a partitioning scheme for new tables.

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/blog/page/6/index.html
----------------------------------------------------------------------
diff --git a/blog/page/6/index.html b/blog/page/6/index.html
index 56fcc20..936b253 100644
--- a/blog/page/6/index.html
+++ b/blog/page/6/index.html
@@ -200,7 +200,7 @@ covers ongoing development and news in the Apache Kudu (incubating) project.

-Recently, I wanted to stress-test and benchmark some changes to the Kudu RPC server, and decided to use YCSB as a way to generate reasonable load. While running YCSB, I noticed interesting results, and what started as an unrelated testing exercise eventually yielded some new insights into Kudu’s behavior. These insights will motivate changes to default Kudu settings and code in upcoming versions. This post details the benchmark setup, analysis, and conclusions.
+Recently, I wanted to stress-test and benchmark some changes to the Kudu RPC server, and decided to use YCSB as a way to generate reasonable load. While running YCSB, I noticed interesting results, and what started as an unrelated testing exercise eventually yielded some new insights into Kudu’s behavior. These insights will motivate changes to default Kudu settings and code in upcoming versions. This post details the benchmark setup, analysis, and conclusions.

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/blog/page/8/index.html
----------------------------------------------------------------------
diff --git a/blog/page/8/index.html b/blog/page/8/index.html
index 189c790..4ee3683 100644
--- a/blog/page/8/index.html
+++ b/blog/page/8/index.html
@@ -158,8 +158,8 @@ covers ongoing development and news in the Apache Kudu (incubating) project.

-Welcome to the second edition of the Kudu Weekly Update. As with last week’s
-inaugural post, we’ll cover ongoing development and news in the Apache Kudu
+Welcome to the second edition of the Kudu Weekly Update. As with last week’s
+inaugural post, we’ll cover ongoing development and news in the Apache Kudu
 project on a weekly basis.

    @@ -180,13 +180,13 @@ project on a weekly basis.

-Kudu is a fast-moving young open source project, and we’ve heard from a few
-members of the community that it can be difficult to keep track of what’s
+Kudu is a fast-moving young open source project, and we’ve heard from a few
+members of the community that it can be difficult to keep track of what’s
 going on day-to-day. A typical month comprises 80-100 individual patches
 committed and hundreds of code review and discussion emails. So, inspired
 by similar weekly newsletters like
-LLVM Weekly and LWN’s weekly kernel coverage
-we’re going to experiment with our own weekly newsletter covering
+LLVM Weekly and LWN’s weekly kernel coverage
+we’re going to experiment with our own weekly newsletter covering
 recent development and Kudu-related news.

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/committers.html
----------------------------------------------------------------------
diff --git a/committers.html b/committers.html
index c095a37..bde9ff3 100644
--- a/committers.html
+++ b/committers.html
@@ -152,6 +152,11 @@ PMC

+ jtbirdsell
+ Jordan Birdsell
+ PMC
+
 julien
 Julien Le Dem
 PMC

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/4d030156/feed.xml
----------------------------------------------------------------------
diff --git a/feed.xml b/feed.xml
index eddde44..28bee12 100644
--- a/feed.xml
+++ b/feed.xml
@@ -1,4 +1,4 @@
-Jekyll2016-11-01T23:21:55-07:00/Apache Kudu Weekly Update November 1st, 20162016-11-01T00:00:00-07:002016-11-01T00:00:00-07:00/2016/11/01/weekly-update<p>Welcome to the twenty-third edition of the Kudu Weekly Update. This weekly blog post
+Jekyll2016-11-09T09:31:51-05:00/Apache Kudu Weekly Update November 1st, 20162016-11-01T00:00:00-04:002016-11-01T00:00:00-04:00/2016/11/01/weekly-update<p>Welcome to the twenty-third edition of the Kudu Weekly Update. This weekly blog post
 covers ongoing development and news in the Apache Kudu project.</p>

<!--more-->

@@ -84,7 +84,7 @@ David’s patch series fixes this.</p>
 tweet at <a href="https://twitter.com/ApacheKudu">@ApacheKudu</a>. Similarly, if you’re
 aware of some Kudu news we missed, let us know so we can cover it in a future
 post.</p>Todd LipconWelcome to the twenty-third edition of the Kudu Weekly Update. This weekly blog post
-covers ongoing development and news in the Apache Kudu project.Apache Kudu Weekly Update October 20th, 20162016-10-20T00:00:00-07:002016-10-20T00:00:00-07:00/2016/10/20/weekly-update<p>Welcome to the twenty-second edition of the Kudu Weekly Update.
 This weekly blog post
+covers ongoing development and news in the Apache Kudu project.Apache Kudu Weekly Update October 20th, 20162016-10-20T00:00:00-04:002016-10-20T00:00:00-04:00/2016/10/20/weekly-update<p>Welcome to the twenty-second edition of the Kudu Weekly Update. This weekly blog post
 covers ongoing development and news in the Apache Kudu project.</p>

<!--more-->

@@ -172,7 +172,7 @@ clients as well as a way to mutually authenticate tablet servers with the master
 tweet at <a href="https://twitter.com/ApacheKudu">@ApacheKudu</a>. Similarly, if you’re
 aware of some Kudu news we missed, let us know so we can cover it in a future
 post.</p>Todd LipconWelcome to the twenty-second edition of the Kudu Weekly Update. This weekly blog post
-covers ongoing development and news in the Apache Kudu project.Apache Kudu Weekly Update October 11th, 20162016-10-11T00:00:00-07:002016-10-11T00:00:00-07:00/2016/10/11/weekly-update<p>Welcome to the twenty-first edition of the Kudu Weekly Update. Astute
+covers ongoing development and news in the Apache Kudu project.Apache Kudu Weekly Update October 11th, 20162016-10-11T00:00:00-04:002016-10-11T00:00:00-04:00/2016/10/11/weekly-update<p>Welcome to the twenty-first edition of the Kudu Weekly Update. Astute
 readers will notice that the weekly blog posts have been not-so-weekly
 of late – in fact, it has been nearly two months since the previous post
 as I and others have focused on releases, conferences, etc.</p>

@@ -332,13 +332,13 @@ tweet at <a href="https://twitter.com/ApacheKudu">@ApacheKudu<
 aware of some Kudu news we missed, let us know so we can cover it in a future
 post.</p>Todd LipconWelcome to the twenty-first edition of the Kudu Weekly Update.
 Astute readers will notice that the weekly blog posts have been not-so-weekly
-of late &#8211; in fact, it has been nearly two months since the previous post
+of late – in fact, it has been nearly two months since the previous post
 as I and others have focused on releases, conferences, etc.

 So, rather than covering just this past week, this post will cover highlights
-of the progress since the 1.0 release in mid-September. If you&#8217;re interested
+of the progress since the 1.0 release in mid-September. If you’re interested
 in learning about progress prior to that release, check the
-release notes.Apache Kudu at Strata+Hadoop World NYC 20162016-09-26T00:00:00-07:002016-09-26T00:00:00-07:00/2016/09/26/strata-nyc-kudu-talks<p>This week in New York, O’Reilly and Cloudera will be hosting Strata+Hadoop World
+release notes.Apache Kudu at Strata+Hadoop World NYC 20162016-09-26T00:00:00-04:002016-09-26T00:00:00-04:00/2016/09/26/strata-nyc-kudu-talks<p>This week in New York, O’Reilly and Cloudera will be hosting Strata+Hadoop World
 2016. If you’re interested in Kudu, there will be several opportunities to
 learn more, both from the open source development team as well as some companies
 who are already adopting Kudu for their use cases.

@@ -392,10 +392,10 @@ featuring Apache Kudu at the Cloudera and ZoomData vendor booths.</p>

<p>If you’re not attending the conference, but still based in NYC, all hope
is not lost. Michael Crutcher from Cloudera will be presenting an introduction
to Apache Kudu at the <a href="http://www.meetup.com/mysqlnyc/events/233599664/">SQL NYC Meetup</a>.
-Be sure to RSVP as spots are filling up fast.</p>Todd LipconThis week in New York, O&#8217;Reilly and Cloudera will be hosting Strata+Hadoop World
-2016. If you&#8217;re interested in Kudu, there will be several opportunities to
+Be sure to RSVP as spots are filling up fast.</p>Todd LipconThis week in New York, O’Reilly and Cloudera will be hosting Strata+Hadoop World
+2016.
If you’re interested in Kudu, there will be several opportunities to learn more, both from the open source development team as well as some companies -who are already adopting Kudu for their use cases.Apache Kudu 1.0.0 released2016-09-20T00:00:00-07:002016-09-20T00:00:00-07:00/2016/09/20/apache-kudu-1-0-0-released<p>The Apache Kudu team is happy to announce the release of Kudu 1.0.0!</p> +who are already adopting Kudu for their use cases.Apache Kudu 1.0.0 released2016-09-20T00:00:00-04:002016-09-20T00:00:00-04:00/2016/09/20/apache-kudu-1-0-0-released<p>The Apache Kudu team is happy to announce the release of Kudu 1.0.0!</p> <p>This latest version adds several new features, including:</p> @@ -432,7 +432,7 @@ integrations (eg Spark, Flume) are also now available via the ASF Maven repository.</li> </ul>Todd LipconThe Apache Kudu team is happy to announce the release of Kudu 1.0.0! -This latest version adds several new features, including:Pushing Down Predicate Evaluation in Apache Kudu2016-09-16T00:00:00-07:002016-09-16T00:00:00-07:00/2016/09/16/predicate-pushdown<p>I had the pleasure of interning with the Apache Kudu team at Cloudera this +This latest version adds several new features, including:Pushing Down Predicate Evaluation in Apache Kudu2016-09-16T00:00:00-04:002016-09-16T00:00:00-04:00/2016/09/16/predicate-pushdown<p>I had the pleasure of interning with the Apache Kudu team at Cloudera this summer. This project was my summer contribution to Kudu: a restructuring of the scan path to speed up queries.</p> @@ -574,7 +574,7 @@ incubating to a Top Level Apache project. I can’t express enough how grateful am for the amount of support I got from the Kudu team, from the intern coordinators, and from the Cloudera community as a whole.</p>Andrew WongI had the pleasure of interning with the Apache Kudu team at Cloudera this summer. 
This project was my summer contribution to Kudu: a restructuring of the -scan path to speed up queries.An Introduction to the Flume Kudu Sink2016-08-31T00:00:00-07:002016-08-31T00:00:00-07:00/2016/08/31/intro-flume-kudu-sink<p>This post discusses the Kudu Flume Sink. First, I’ll give some background on why we considered +scan path to speed up queries.An Introduction to the Flume Kudu Sink2016-08-31T00:00:00-04:002016-08-31T00:00:00-04:00/2016/08/31/intro-flume-kudu-sink<p>This post discusses the Kudu Flume Sink. First, I’ll give some background on why we considered using Kudu, what Flume does for us, and how Flume fits with Kudu in our project.</p> <h2 id="why-kudu">Why Kudu</h2> @@ -868,12 +868,12 @@ disparate sources.</p> <p><em>Ara Abrahamian is a software engineer at Argyle Data building fraud detection systems using sophisticated machine learning methods. Ara is the original author of the Flume Kudu Sink that is included in the Kudu distribution. You can follow him on Twitter at -<a href="https://twitter.com/ara_e">@ara_e</a>.</em></p>Ara AbrahamianThis post discusses the Kudu Flume Sink. First, I&#8217;ll give some background on why we considered +<a href="https://twitter.com/ara_e">@ara_e</a>.</em></p>Ara AbrahamianThis post discusses the Kudu Flume Sink. First, I’ll give some background on why we considered using Kudu, what Flume does for us, and how Flume fits with Kudu in our project. Why Kudu -Traditionally in the Hadoop ecosystem we&#8217;ve dealt with various batch processing technologies such +Traditionally in the Hadoop ecosystem we’ve dealt with various batch processing technologies such as MapReduce and the many libraries and tools built on top of it in various languages (Apache Pig, Apache Hive, Apache Oozie and many others). The main problem with this approach is that it needs to process the whole data set in batches, again and again, as soon as new data gets added. 
Things get @@ -913,14 +913,14 @@ And a Kudu-based near real-time approach is: flexible and expressive, thanks to SQL support via Apache Impala (incubating) a table-oriented, mutable data store that feels like a traditional relational database - very easy to program, you can even pretend it&#8217;s good old MySQL + very easy to program, you can even pretend it’s good old MySQL low-latency and relatively high throughput, both for ingest and query -At Argyle Data, we&#8217;re dealing with complex fraud detection scenarios. We need to ingest massive +At Argyle Data, we’re dealing with complex fraud detection scenarios. We need to ingest massive amounts of data, run machine learning algorithms and generate reports. When we created our current architecture two years ago we decided to opt for a database as the backbone of our system. That -database is Apache Accumulo. It&#8217;s a key-value based database which runs on top of Hadoop HDFS, +database is Apache Accumulo. It’s a key-value based database which runs on top of Hadoop HDFS, quite similar to HBase but with some important improvements such as cell level security and ease of deployment and management. To enable querying of this data for quite complex reporting and analytics, we used Presto, a distributed query engine with a pluggable architecture open-sourced @@ -933,12 +933,12 @@ architecture has served us well, but there were a few problems: we need to support ad-hoc queries, plus long-term data warehouse functionality -So, we&#8217;ve started gradually moving the core machine-learning pipeline to a streaming based +So, we’ve started gradually moving the core machine-learning pipeline to a streaming based solution. This way we can ingest and process larger data-sets faster in the real-time. But then how would we take care of ad-hoc queries and long-term persistence? This is where Kudu comes in. 
While the machine learning pipeline ingests and processes real-time data, we store a copy of the same ingested data in Kudu for long-term access and ad-hoc queries. Kudu is our data warehouse. By -using Kudu and Impala, we can retire our in-house Presto connector and rely on Impala&#8217;s +using Kudu and Impala, we can retire our in-house Presto connector and rely on Impala’s super-fast query engine. But how would we make sure data is reliably ingested into the streaming pipeline and the @@ -946,10 +946,10 @@ Kudu-based data warehouse? This is where Apache Flume comes in. Why Flume -According to their website &#8220;Flume is a distributed, reliable, and +According to their website “Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault -tolerant with tunable reliability mechanisms and many failover and recovery mechanisms.&#8221; As you +tolerant with tunable reliability mechanisms and many failover and recovery mechanisms.” As you can see, nowhere is Hadoop mentioned but Flume is typically used for ingesting data to Hadoop clusters. @@ -967,7 +967,7 @@ File-based channels are also provided. As for the sources, Avro, JMS, Thrift, sp source are some of the built-in ones. Flume also ships with many sinks, including sinks for writing data to HDFS, HBase, Hive, Kafka, as well as to other Flume agents. -In the rest of this post I&#8217;ll go over the Kudu Flume sink and show you how to configure Flume to +In the rest of this post I’ll go over the Kudu Flume sink and show you how to configure Flume to write ingested data to a Kudu table. The sink has been part of the Kudu distribution since the 0.8 release and the source code can be found here. 
@@ -999,8 +999,8 @@ We define a source called source1 which simply executes a vmstat command to cont
 virtual memory statistics for the machine and queue events into an in-memory channel1 channel,
 which in turn is used for writing these events to a Kudu table called stats. We are using
 org.apache.kudu.flume.sink.SimpleKuduEventProducer as the producer. SimpleKuduEventProducer is
-the built-in and default producer, but it&#8217;s implemented as a showcase for how to write Flume
-events into Kudu tables. For any serious functionality we&#8217;d have to write a custom producer. We
+the built-in and default producer, but it’s implemented as a showcase for how to write Flume
+events into Kudu tables. For any serious functionality we’d have to write a custom producer. We
 need to make this producer and the KuduSink class available to Flume. We can do that by simply
 copying the kudu-flume-sink-&lt;VERSION&gt;.jar jar file from the Kudu distribution to the
 $FLUME_HOME/plugins.d/kudu-sink/lib directory in the Flume installation. The jar file contains
@@ -1008,7 +1008,7 @@ KuduSink and all of its dependencies (including Kudu java client classes).

 At a minimum, the Kudu Flume Sink needs to know where the Kudu masters are
 (agent1.sinks.sink1.masterAddresses = localhost) and which Kudu table should be used for writing
-Flume events to (agent1.sinks.sink1.tableName = stats). The Kudu Flume Sink doesn&#8217;t create this
+Flume events to (agent1.sinks.sink1.tableName = stats). The Kudu Flume Sink doesn’t create this
 table, it has to be created before the Kudu Flume Sink is started. You may also notice the
 batchSize parameter. Batch size is used for batching up to that many
@@ -1029,7 +1029,7 @@ Here is a complete list of KuduSink parameters:

 masterAddresses
 N/A
- Comma-separated list of &#8220;host:port&#8221; pairs of the masters (port optional)
+ Comma-separated list of “host:port” pairs of the masters (port optional)

 tableName
@@ -1059,7 +1059,7 @@ Here is a complete list of KuduSink parameters:

-Let&#8217;s take a look at the source code for the built-in producer class:
+Let’s take a look at the source code for the built-in producer class:

 public class SimpleKuduEventProducer implements KuduEventProducer {
   private byte[] payload;
@@ -1130,8 +1130,8 @@ public interface KuduEventProducer extends Configurable, ConfigurableComponent {

 public void configure(Context context) is called when an instance of our producer is instantiated
-by the KuduSink. SimpleKuduEventProducer&#8217;s implementation looks for a producer parameter named
-payloadColumn and uses its value (&#8220;payload&#8221; if not overridden in Flume configuration file) as the
+by the KuduSink. SimpleKuduEventProducer’s implementation looks for a producer parameter named
+payloadColumn and uses its value (“payload” if not overridden in Flume configuration file) as the
 column which will hold the value of the Flume event payload. If you recall from above, we had
 configured the KuduSink to listen for events generated from the vmstat command. Each output row
 from that command will be stored as a new row containing a payload column in the stats table.
@@ -1140,9 +1140,9 @@ define them by prefixing it with producer. (agent1.sinks.sink1.producer.paramete
 example). The main producer logic resides in the public List&lt;Operation&gt; getOperations() method. In
-SimpleKuduEventProducer&#8217;s implementation we simply insert the binary body of the Flume event into
-the Kudu table. Here we call Kudu&#8217;s newInsert() to initiate an insert, but could have used
-Upsert if updating an existing row was also an option, in fact there&#8217;s another producer
+SimpleKuduEventProducer’s implementation we simply insert the binary body of the Flume event into
+the Kudu table. Here we call Kudu’s newInsert() to initiate an insert, but could have used
+Upsert if updating an existing row was also an option, in fact there’s another producer
 implementation available for doing just that: SimpleKeyedKuduEventProducer. Most probably you will
 need to write your own custom producer in the real world, but you can base your implementation on
 the built-in ones.

@@ -1162,7 +1162,7 @@ disparate sources.

 Ara Abrahamian is a software engineer at Argyle Data building fraud detection systems using
 sophisticated machine learning methods. Ara is the original author of the Flume Kudu Sink that is
 included in the Kudu distribution. You can follow him on Twitter at
-@ara_e.New Range Partitioning Features in Kudu 0.102016-08-23T00:00:00-07:002016-08-23T00:00:00-07:00/2016/08/23/new-range-partitioning-features<p>Kudu 0.10 is shipping with a few important new features for range partitioning.
+@ara_e.New Range Partitioning Features in Kudu 0.102016-08-23T00:00:00-04:002016-08-23T00:00:00-04:00/2016/08/23/new-range-partitioning-features<p>Kudu 0.10 is shipping with a few important new features for range partitioning.
 These features are designed to make Kudu easier to scale for certain workloads, like time series.
 This post will introduce these features, and discuss how to use
 them to effectively design tables for scalability and performance.</p>

@@ -1257,7 +1257,7 @@ dropped and replacements added, but it requires the servers and all clients to
 be updated to 0.10.</p>Dan BurkertKudu 0.10 is shipping with a few important new features for range partitioning.
 These features are designed to make Kudu easier to scale for certain workloads, like time series.
 This post will introduce these features, and discuss how to use
-them to effectively design tables for scalability and performance.Apache Kudu 0.10.0 released2016-08-23T00:00:00-07:002016-08-23T00:00:00-07:00/2016/08/23/apache-kudu-0-10-0-released<p>The Apache Kudu team is happy to announce the release of Kudu 0.10.0!</p>
+them to effectively design tables for scalability and performance.Apache Kudu 0.10.0 released2016-08-23T00:00:00-04:002016-08-23T00:00:00-04:00/2016/08/23/apache-kudu-0-10-0-released<p>The Apache Kudu team is happy to announce the release of Kudu 0.10.0!</p>

 <p>This latest version adds several new features, including:
 <!--more--></p>

@@ -1291,7 +1291,7 @@ the release notes below.</p>

 <li>Download the <a href="http://kudu.apache.org/releases/0.9.0/">Kudu 0.10.0 source release</a></li>
 </ul>Todd LipconThe Apache Kudu team is happy to announce the release of Kudu 0.10.0!

-This latest version adds several new features, including:Apache Kudu Weekly Update August 16th, 20162016-08-16T00:00:00-07:002016-08-16T00:00:00-07:00/2016/08/16/weekly-update<p>Welcome to the twentieth edition of the Kudu Weekly Update. This weekly blog post
+This latest version adds several new features, including:Apache Kudu Weekly Update August 16th, 20162016-08-16T00:00:00-04:002016-08-16T00:00:00-04:00/2016/08/16/weekly-update<p>Welcome to the twentieth edition of the Kudu Weekly Update. This weekly blog post
 covers ongoing development and news in the Apache Kudu project.</p>

<!--more-->
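Tying back to the Flume-sink post quoted in the hunks above: the custom producer it recommends writing might look like the sketch below. This is not from the commit and is illustrative only; the class name, package, and uppercasing behavior are invented for the example, and the method shapes (configure(Context), initialize(Event, KuduTable), getOperations()) follow the SimpleKuduEventProducer described in the post, so the exact interface in a given Kudu release may differ.

```java
// Hypothetical custom producer, modeled on SimpleKuduEventProducer as the
// post describes it: read config in configure(), capture the event and table
// in initialize(), and build Kudu operations in getOperations().
package org.example.flume;

import java.util.Collections;
import java.util.List;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.conf.ComponentConfiguration;
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.Operation;
import org.apache.kudu.client.PartialRow;
import org.apache.kudu.flume.sink.KuduEventProducer;

public class UppercasingKuduEventProducer implements KuduEventProducer {
  private byte[] payload;
  private KuduTable table;
  private String payloadColumn;

  @Override
  public void configure(Context context) {
    // Like the built-in producer, read the target column name from the
    // producer.* namespace, defaulting to "payload".
    payloadColumn = context.getString("payloadColumn", "payload");
  }

  @Override
  public void configure(ComponentConfiguration conf) {
    // No-op; required by the ConfigurableComponent side of the interface.
  }

  @Override
  public void initialize(Event event, KuduTable table) {
    this.payload = event.getBody();
    this.table = table;
  }

  @Override
  public List<Operation> getOperations() {
    // Where SimpleKuduEventProducer inserts the raw binary body, this
    // variant assumes a string column and uppercases the payload first.
    Insert insert = table.newInsert();
    PartialRow row = insert.getRow();
    row.addString(payloadColumn, new String(payload).toUpperCase());
    return Collections.singletonList(insert);
  }
}
```

As the post notes, an Upsert via the table's newUpsert() would be the natural swap-in if rewriting existing rows were acceptable, as SimpleKeyedKuduEventProducer does.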