From: abukor@apache.org
To: commits@kudu.apache.org
Date: Fri, 26 Oct 2018 19:04:20 -0000
Subject: [50/51] [partial] kudu-site git commit: Publish commit(s) from site source repo:
  a05466438 [blog] Add post about 1.8.0 release
  1fefa84c7 Updating web site for Kudu 1.8.0 release
  637a50027 [site] Add http to https redirect
  40f26d899 gh-pages: Make

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/.htaccess
----------------------------------------------------------------------
diff --git a/.htaccess b/.htaccess
index 4ace0af..ce7dc59 100644
--- a/.htaccess
+++ b/.htaccess
@@ -25,4 +25,8 @@
 # Server should support REQUEST_SCHEME and should be running http or https.
 RewriteCond "%{REQUEST_SCHEME}" "^http"
 RewriteRule ^/?(.*)$ %{REQUEST_SCHEME}://kudu.apache.org/$1 [R=301,L]
+
+# Redirect http to https
+RewriteCond %{HTTPS} off
+RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/02/26/apache-kudu-0-7-0-released.html
----------------------------------------------------------------------
diff --git a/2016/02/26/apache-kudu-0-7-0-released.html b/2016/02/26/apache-kudu-0-7-0-released.html
index 1f1d7b5..b068f40 100644
@@ -126,7 +126,7 @@ part of the ASF Incubator, version 0.7.0!

Experimental setup

The single-node Kudu cluster was configured, started, and stopped by a Python script, run_experiments.py, which cycled through several different configurations, completely removing all data between iterations. For each Kudu configuration, YCSB was used to load 100M rows of data (each approximately 1KB). YCSB was configured with 16 client threads on the same node. For each configuration, the YCSB log as well as periodic dumps of Tablet Server metrics were captured for later analysis.

Note that in many cases, the 16 client threads were not enough to max out the full performance of the machine. These experiments should not be taken to determine the maximum throughput of Kudu – instead, we are looking at comparing the relative performance of different configuration options.

Benchmarking Synchronous Insert Operations

The first set of experiments runs the YCSB load with the sync_ops=true configuration option. This option means that each client thread will insert one row at a time and synchronously wait for the response before inserting the next row. The lack of batching makes this a good stress test for Kudu’s RPC performance and other fixed per-request costs.

The fact that the requests are synchronous also makes it easy to measure the latency of the write requests. With request batching enabled, latency would be irrelevant.
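
For readers who have not used the Kudu Java client, a synchronous insert loop of the kind this mode exercises looks roughly like the sketch below. The master address, table name, and column names are invented for illustration; the actual YCSB Kudu binding differs in its details.

import org.apache.kudu.client.*;

public class SyncInsertSketch {
  public static void main(String[] args) throws KuduException {
    KuduClient client = new KuduClient.KuduClientBuilder("kudu-master:7051").build();
    try {
      KuduTable table = client.openTable("usertable");
      KuduSession session = client.newSession();
      // One operation per round trip: apply() does not return until the
      // tablet server has responded, mirroring the sync_ops=true behavior.
      session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_SYNC);
      for (long i = 0; i < 1000; i++) {
        Insert insert = table.newInsert();
        insert.getRow().addLong("key", i);
        insert.getRow().addString("field0", "roughly-1KB-of-payload-would-go-here");
        session.apply(insert);
      }
      session.close();
    } finally {
      client.close();
    }
  }
}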

(plot)

Average throughput: 31163 ops/sec

The results here are interesting: the throughput starts out around 70K rows/second, but then collapses to nearly zero. After staying near zero for a while, it shoots back up to the original performance, and the pattern repeats many times.

Also note that the 99th percentile latency seems to alternate between close to zero and a value near 500ms. This bimodal distribution led me to grep in the Java source for the magic number 500. Sure enough, I found:

public static final int SLEEP_TIME = 500;

Used in this backoff calculation method (slightly paraphrased here):

long getSleepTimeForRpc(KuduRpc<?> rpc) {
  // TODO backoffs? Sleep in increments of 500 ms, plus some random time up to 50
  return (attemptCount * SLEEP_TIME) + sleepRandomizer.nextInt(50);
}
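
To see how that formula produces the ~500ms latency steps observed above, here is a tiny standalone snippet (not the client code itself) that simply evaluates it for the first few retry attempts:

import java.util.Random;

public class BackoffMath {
  static final int SLEEP_TIME = 500;

  public static void main(String[] args) {
    Random sleepRandomizer = new Random();
    for (int attemptCount = 1; attemptCount <= 4; attemptCount++) {
      long sleepMs = (attemptCount * SLEEP_TIME) + sleepRandomizer.nextInt(50);
      // Attempt 1 sleeps ~500-550ms, attempt 2 ~1000-1050ms, and so on --
      // matching the ~500ms jumps seen in the 99th percentile latency.
      System.out.println("attempt " + attemptCount + " -> sleep " + sleepMs + " ms");
    }
  }
}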

One reason that a client will back off and retry is a SERVER_TOO_BUSY response from the server. This response is used in a number of overload situations. In a write-mostly workload, the most likely situation is that the server is low on memory and thus asking clients to back off while it flushes. Sure enough, when we graph the heap usage over time, as well as the rate of writes rejected due to low-memory, we see that this is the case:

plot_ts_metric(data['default'], "heap_allocated", "Heap usage (GB)", 1024*1024*1024)
 plot_ts_metric(data['default'], "mem_rejections", "Rejected writes\nper sec")
@@ -205,24 +205,24 @@

png

I then re-ran the workload while watching iostat -dxm 1 to see the write rates across all of the disks. I could see that each of the disks was busy in turn, rather than busy in parallel.

This reminded me that the default way in which Kudu flushes data is as follows:

for each column:
  open a new block on disk to write that column, round-robining across disks
iterate over data:
  append data to the already-open blocks
for each column:
  fsync() the block of data
  close the block

Because Kudu uses buffered writes, the actual appending of data to the open blocks does not generate immediate IO. Instead, it only dirties pages in the Linux page cache. The actual IO is performed with the fsync call at the end. Because Kudu defaults to fsyncing each file in turn from a single thread, this was causing the slow performance identified above.
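
As a rough illustration of why that hurts (a toy Java analogue, not Kudu's actual C++ flush path): the buffered appends to several files return almost immediately, and all of the real IO is deferred to the per-file sync calls, which here run one after another on a single thread:

import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

public class SerialFsyncSketch {
  public static void main(String[] args) throws IOException {
    byte[] chunk = new byte[1 << 20];  // 1MB of zeroes per append
    FileOutputStream[] outs = new FileOutputStream[4];
    for (int i = 0; i < outs.length; i++) {
      File f = File.createTempFile("column-block-" + i + "-", ".data");
      f.deleteOnExit();
      outs[i] = new FileOutputStream(f);
    }
    // "Appending data to the already-open blocks": this mostly just dirties
    // the page cache, so it completes quickly with little device IO.
    for (int iter = 0; iter < 64; iter++) {
      for (FileOutputStream out : outs) {
        out.write(chunk);
      }
    }
    // The expensive part: syncing each block in turn from one thread. Each
    // sync() waits for a single file's writeback to finish before the next
    // one starts, so the disks are busy serially rather than in parallel.
    for (FileOutputStream out : outs) {
      out.getFD().sync();
      out.close();
    }
  }
}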

At this point, I consulted with Adar Dembo, who designed much of this code path. He reminded me that we actually have a configuration flag cfile_do_on_finish=flush which changes the code to something resembling the following:

for each column:
  open a new block on disk to write that column, round-robining across disks
iterate over data:
  append data to the already-open blocks
for each column:
  call sync_file_range() to begin asynchronous writeback of the block
for each column:
  fsync the block
  close the block

The sync_file_range call here asynchronously enqueues the dirty pages to be written back to the disks, and then the following fsync actually waits for the writeback to be complete. I ran the benchmark for a new configuration with this flag enabled, and plotted the results:

plot_throughput_latency(data['finish=flush'])

(plot)

Average throughput: 52457 ops/sec

This is already a substantial improvement from the default settings. The overall throughput has increased from 31K ops/second to 52K ops/second (67%), and we no longer see any dramatic drops in performance or increases in 99th percentile. In fact, the 99th percentile stays comfortably below 1ms for the entire test.


Performing many small flushes rather than a small number of large ones means that the on-disk data is not as well sorted in the optimized workload. An individual write may need to consult up to 20 bloom filters corresponding to previously flushed pieces of data in order to ensure that it is not an insert with a duplicate primary key.

So, how can we address this issue? It turns out that the flush threshold is actually configurable with the flush_threshold_mb flag. I re-ran the workload yet another time with the flush threshold set to 20GB.

plot_throughput_latency(data['finish=flush+20GB-threshold'])

(plot)

Average throughput: 67123 ops/sec

This gets us another 28% improvement from 52K ops/second up to 67K ops/second (+116% from the default), and we no longer see the troubling downward slope on the throughput graph. Let’s check on the memory and bloom filter metrics again.


Tests with Batched Writes

The above tests were done with the sync_ops=true YCSB configuration option. However, we expect that for many heavy write situations, the writers would batch many rows together into larger write operations for better throughput.

I wanted to ensure that the recommended configuration changes above also improved performance for this workload. So, I re-ran the same experiments, but with YCSB configured to send batches of 100 insert operations to the tablet server using the Kudu client’s AUTO_FLUSH_BACKGROUND write mode.
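
Client-side, the only difference from the synchronous sketch shown earlier is the session's flush mode. Again a hedged sketch rather than the YCSB binding's actual code:

KuduSession session = client.newSession();
// Rows are buffered in the client and sent to the tablet server in larger
// batches in the background, rather than one RPC round trip per row.
session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND);
// ... apply() Insert operations exactly as before; they now return quickly ...
session.flush();   // before closing, wait for anything still buffered
session.close();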

This time, I compared four configurations:

- the Kudu default settings
- the defaults, but configured with cfile_do_on_finish=flush to increase flush IO performance
- the above, but with the flush thresholds configured to 1GB and 10GB

For these experiments, we don’t plot latencies, since write latencies are meaningless with batching enabled.

(plots)

Average throughput: 33319 ops/sec

(plots)

Average throughput: 80068 ops/sec

(plots)

Average throughput: 78040 ops/sec

(plots)

Average throughput: 82005 ops/sec

(plots)

We will likely make these changes in the next Kudu release. In the meantime, users can experiment by adding the following flags to their tablet server configuration:

  • --cfile_do_on_finish=flush
  • --flush_threshold_mb=10000

Note that, even if the server hosts many tablets or has less memory than the one used in this test, flushes will still be triggered if the overall memory consumption of the process crosses the configured soft limit. So, configuring a 10GB threshold does not increase the risk of out-of-memory errors.

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/05/03/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/05/03/weekly-update.html b/2016/05/03/weekly-update.html
index 9f5540a..2cf3c91 100644
@@ -190,6 +190,8 @@ list of conference sessions and meetups near you.

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/05/09/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/05/09/weekly-update.html b/2016/05/09/weekly-update.html
index 77d2fb5..9d77178 100644
@@ -180,6 +180,8 @@ on May 10.

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/05/16/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/05/16/weekly-update.html b/2016/05/16/weekly-update.html
index 7fe1a1f..322a6f0 100644
@@ -215,6 +215,8 @@ meetup.

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/05/23/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/05/23/weekly-update.html b/2016/05/23/weekly-update.html
index d4259e9..fadcfe3 100644
@@ -149,9 +149,9 @@ first release candidate.

  • Since Kudu’s initial release, one of the most commonly requested features has been support for the UPSERT operation. UPSERT is known in some other databases as INSERT ... ON DUPLICATE KEY UPDATE. This operation has the semantics of an INSERT if no key already exists with the provided primary key. Otherwise, it replaces the existing row with the new values.

    This week, several developers collaborated to add support for this operation.
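
As a rough illustration of how this surfaces in the Java client (the table name, column names, and the surrounding client/session setup are invented here), an upsert looks much like an insert:

KuduTable table = client.openTable("my-table");
KuduSession session = client.newSession();
Upsert upsert = table.newUpsert();
upsert.getRow().addString("key_column", "row-001");
upsert.getRow().addString("value_column", "new value");
// Behaves as an INSERT if "row-001" does not exist yet; otherwise the
// existing row is replaced with the new values.
session.apply(upsert);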

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/01/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/06/01/weekly-update.html b/2016/06/01/weekly-update.html
index 40fbd88..8238755 100644
@@ -170,6 +170,8 @@ hadoop-common test jar. This solved build issues while also removing a nasty dep

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/02/no-default-partitioning.html
----------------------------------------------------------------------
diff --git a/2016/06/02/no-default-partitioning.html b/2016/06/02/no-default-partitioning.html
index 2e3a9c9..fd9790e 100644
@@ -161,7 +161,7 @@ advanced configurations.

    C++ Client

    With the C++ client, creating a new table with hash partitions is as simple as calling KuduTableCreator::add_hash_partitions with the columns to hash and the number of buckets to use:

    unique_ptr<KuduTableCreator> table_creator(my_client->NewTableCreator());
    @@ -182,14 +182,14 @@

    myClient.createTable("my-table", my_schema, options);

    In the examples above, if the hash partition configuration is omitted the create table operation will fail with the error Table partitioning must be specified using setRangePartitionColumns or addHashPartitions. In the Java client this manifests as a thrown IllegalArgumentException, while in the C++ client it is returned as a Status::InvalidArgument.
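
The lines that build my_schema and options fall outside the hunk shown above. Purely as an illustration of the Java API being described (column names invented), the options object might be built like this; omitting the addHashPartitions (or setRangePartitionColumns) call is exactly what triggers the IllegalArgumentException described above:

CreateTableOptions options = new CreateTableOptions()
    .addHashPartitions(Arrays.asList("key_column_a", "key_column_b"), 16);  // 16 buckets
myClient.createTable("my-table", my_schema, options);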


    Impala

    When creating Kudu tables with Impala, the formerly optional DISTRIBUTE BY clause is now required:

    CREATE TABLE my_table (key_column_a STRING, key_column_b STRING, other_column STRING)
    @@ -211,6 +211,8 @@

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/06/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/06/06/weekly-update.html b/2016/06/06/weekly-update.html
index 34105e3..0f5cf61 100644
@@ -165,6 +165,8 @@ patches in for the Replay Cache

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/10/apache-kudu-0-9-0-released.html
----------------------------------------------------------------------
diff --git a/2016/06/10/apache-kudu-0-9-0-released.html b/2016/06/10/apache-kudu-0-9-0-released.html
index 00bd3b1..2cda340 100644
@@ -140,6 +140,8 @@ the specification of a partitioning scheme for new tables.

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/13/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/06/13/weekly-update.html b/2016/06/13/weekly-update.html
index b051ffa..3612f74 100644
@@ -173,6 +173,8 @@ removal happening in this patch

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/17/raft-consensus-single-node.html
----------------------------------------------------------------------
diff --git a/2016/06/17/raft-consensus-single-node.html b/2016/06/17/raft-consensus-single-node.html
index 6c532c7..cc2b83c 100644
@@ -143,13 +143,13 @@ implementation was complete.

    The Consensus API has the following main responsibilities:

  1. Support acting as a Raft LEADER and replicate writes to a local write-ahead log (WAL) as well as followers in the Raft configuration. For each operation written to the leader, a Raft implementation must keep track of how many nodes have written a copy of the operation being replicated, and whether or not that constitutes a majority. Once a majority of the nodes have written a copy of the data, it is considered committed.
  2. Support acting as a Raft FOLLOWER by accepting writes from the leader and preparing them to be eventually committed.
  3. Support voting in and initiating leader elections.
  4. Support participating in and initiating configuration changes (such as going …

@@ -215,6 +215,8 @@ dissertation, which you can find linked from the above web site.
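
The responsibilities listed above map naturally onto a small interface. The following Java-flavored sketch is purely illustrative: Kudu's real Consensus API is C++, and all of the names here are invented.

// Illustrative only -- method and type names are invented.
interface Consensus {
  // LEADER: replicate an operation to the local write-ahead log and to the
  // followers, invoking the callback once a majority has durably written it.
  void replicateAsLeader(byte[] operation, Runnable onMajorityCommitted);

  // FOLLOWER: accept an operation from the leader and prepare it so it can
  // be committed once the leader reports it as committed.
  void acceptFromLeader(byte[] operation);

  // Voting in and initiating leader elections.
  void startElection();
  boolean grantVote(String candidateId, long term);

  // Participating in and initiating configuration changes, such as changing
  // the set of voting replicas.
  void changeConfiguration(java.util.List<String> newVoterUuids);
}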

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/21/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/06/21/weekly-update.html b/2016/06/21/weekly-update.html
index e152b57..0b6b40b 100644
@@ -134,7 +134,7 @@ leveraging the tablets cache.

  • In the context of making multi-master reliable in 1.0, Adar Dembo posted a design document on how to handle permanent master failures. Currently the master’s code is missing some features, like remote bootstrap, which makes it possible for a new replica to download a snapshot of the data from the leader replica.

@@ -166,6 +166,8 @@ a future post.

Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/24/multi-master-1-0-0.html
----------------------------------------------------------------------
diff --git a/2016/06/24/multi-master-1-0-0.html b/2016/06/24/multi-master-1-0-0.html
index 3891256..7286d33 100644
@@ -144,9 +144,9 @@ can be safely enabled in production clusters.

To use replicated masters, a Kudu operator must deploy some number of Kudu masters, providing the hostname and port number of each master in the group via the --master_address command line option. For example, each master in a three-node deployment should be started with --master_address=<host1:port1>,<host2:port2>,<host3:port3>. In Raft parlance, this group of masters is known as a Raft configuration.

At startup, a Raft configuration of masters will hold a leader election and …

… clients are also configured with the locations of all masters. Unlike tablet servers, they always communicate with the leader master as follower masters will reject client requests. To do this, clients must determine which master is the leader before sending the first request as well as whenever any request fails with a NOT_THE_LEADER error.
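
On the client side this simply means listing every master; for example, with the Java client (host names invented):

// All masters in the Raft configuration, comma-separated; the client then
// determines which one is the leader and retries against the new leader
// whenever a request fails because it reached a follower.
KuduClient client = new KuduClient.KuduClientBuilder(
    "master-1:7051,master-2:7051,master-3:7051").build();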

      Remaining work for Kudu 1.0

      @@ -228,6 +228,8 @@ nothing has been implemented yet. Stay tuned!

      Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/06/27/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/06/27/weekly-update.html b/2016/06/27/weekly-update.html
index 6f7b17c..4cbc866 100644
@@ -230,6 +230,8 @@ a future post.

      Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/07/01/apache-kudu-0-9-1-released.html
----------------------------------------------------------------------
diff --git a/2016/07/01/apache-kudu-0-9-1-released.html b/2016/07/01/apache-kudu-0-9-1-released.html
index 101148a..f468513 100644
@@ -138,6 +138,8 @@ of 0.9.0 are encouraged to update to the new version at their earliest convenience.

      Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/07/11/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/07/11/weekly-update.html b/2016/07/11/weekly-update.html
index 09e991e..a377087 100644
@@ -196,6 +196,8 @@ a future post.

      Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/07/18/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/07/18/weekly-update.html b/2016/07/18/weekly-update.html
index 217519b..8fdfe22 100644
@@ -188,6 +188,8 @@ a future post.

      Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/07/25/asf-graduation.html
----------------------------------------------------------------------
diff --git a/2016/07/25/asf-graduation.html b/2016/07/25/asf-graduation.html
index e73d7c3..038d214 100644
@@ -164,6 +164,8 @@ Established in 1999, the all-volunteer Foundation oversees more than 350 leading

      Recent posts

http://git-wip-us.apache.org/repos/asf/kudu-site/blob/854be1d3/2016/07/26/weekly-update.html
----------------------------------------------------------------------
diff --git a/2016/07/26/weekly-update.html b/2016/07/26/weekly-update.html
index 2194cb1..c9b34e0 100644
@@ -136,8 +136,8 @@ new name and status.