hudi-commits mailing list archives

From bhavanisu...@apache.org
Subject [incubator-hudi] branch asf-site updated: [DOCS] Update site based on latest content for 0.5.1 release (#1294)
Date Fri, 31 Jan 2020 08:12:01 GMT
This is an automated email from the ASF dual-hosted git repository.

bhavanisudha pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-hudi.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 456b990  [DOCS] Update site based on latest content for 0.5.1 release (#1294)
456b990 is described below

commit 456b990ce7e588a88ace7df9eabbb3ac8be6e814
Author: Bhavani Sudha Saktheeswaran <bhasudha@uber.com>
AuthorDate: Fri Jan 31 00:11:54 2020 -0800

    [DOCS] Update site based on latest content for 0.5.1 release (#1294)
---
 content/assets/css/main.css                        |   2 +-
 content/assets/images/hudi-lake.png                | Bin 44413 -> 133775 bytes
 content/assets/js/lunr/lunr-store.js               |  30 +-
 content/cn/community.html                          |   2 +-
 content/cn/contributing.html                       |   2 +-
 content/cn/docs/0.5.0-admin_guide.html             |   2 +-
 content/cn/docs/0.5.0-concepts.html                |   2 +-
 content/cn/docs/0.5.0-configurations.html          |   2 +-
 content/cn/docs/0.5.0-docker_demo.html             |   2 +-
 content/cn/docs/0.5.0-docs-versions.html           |   6 +-
 content/cn/docs/0.5.0-querying_data.html           |   2 +-
 content/cn/docs/0.5.0-quick-start-guide.html       |   2 +-
 content/cn/docs/0.5.0-use_cases.html               |   2 +-
 content/cn/docs/0.5.0-writing_data.html            |   2 +-
 content/cn/docs/comparison.html                    |   2 +-
 content/cn/docs/concepts.html                      |   4 +-
 content/cn/docs/configurations.html                |   4 +-
 .../{0.5.0-admin_guide.html => deployment.html}    |  32 +-
 content/cn/docs/docker_demo.html                   |   4 +-
 content/cn/docs/docs-versions.html                 |   9 +-
 content/cn/docs/gcs_hoodie.html                    |   2 +-
 content/cn/docs/migration_guide.html               |   2 +-
 content/cn/docs/performance.html                   |   2 +-
 content/cn/docs/powered_by.html                    |   2 +-
 content/cn/docs/privacy.html                       |   2 +-
 content/cn/docs/querying_data.html                 |   4 +-
 content/cn/docs/quick-start-guide.html             |   4 +-
 content/cn/docs/s3_hoodie.html                     |   2 +-
 content/cn/docs/use_cases.html                     |   4 +-
 content/cn/docs/writing_data.html                  |   4 +-
 content/cn/releases.html                           |   2 +-
 content/community.html                             |   2 +-
 content/contributing.html                          |   2 +-
 content/docs/0.5.0-admin_guide.html                |   2 +-
 content/docs/0.5.0-concepts.html                   |   2 +-
 content/docs/0.5.0-configurations.html             |   2 +-
 content/docs/0.5.0-docker_demo.html                |   2 +-
 content/docs/0.5.0-docs-versions.html              |   6 +-
 content/docs/0.5.0-querying_data.html              |   2 +-
 content/docs/0.5.0-quick-start-guide.html          |   2 +-
 content/docs/0.5.0-use_cases.html                  |   2 +-
 content/docs/0.5.0-writing_data.html               |   2 +-
 content/docs/comparison.html                       |   4 +-
 content/docs/concepts.html                         | 140 +++++----
 content/docs/configurations.html                   |  46 +--
 .../{0.5.0-admin_guide.html => deployment.html}    | 346 +++++++++++++++------
 content/docs/docker_demo.html                      | 246 ++++++++-------
 content/docs/docs-versions.html                    |   8 +-
 content/docs/gcs_hoodie.html                       |   2 +-
 content/docs/migration_guide.html                  |  49 ++-
 content/docs/performance.html                      |  12 +-
 content/docs/powered_by.html                       |   2 +-
 content/docs/privacy.html                          |   2 +-
 content/docs/querying_data.html                    | 102 +++---
 content/docs/quick-start-guide.html                | 122 +++++---
 content/docs/s3_hoodie.html                        |   2 +-
 content/docs/structure.html                        |  18 +-
 content/docs/use_cases.html                        |  16 +-
 content/docs/writing_data.html                     | 161 ++++++----
 content/releases.html                              |  72 ++++-
 content/sitemap.xml                                |   4 +-
 61 files changed, 918 insertions(+), 605 deletions(-)

diff --git a/content/assets/css/main.css b/content/assets/css/main.css
index 1437b95..b4ed304 100644
--- a/content/assets/css/main.css
+++ b/content/assets/css/main.css
@@ -1 +1 @@
-table{border-color:#1ab7ea !important}.page a{color:#3b9cba !important}.page__content{font-size:17px}.page__content.releases{font-size:17px}.page__footer{font-size:15px !important}.page__footer a{color:#3b9cba !important}.page__content .notice,.page__content .notice--primary,.page__content .notice--info,.page__content .notice--warning,.page__content .notice--success,.page__content .notice--danger{font-size:0.8em !important}.page__content table{font-size:0.8em !important}.page__content ta [...]
+table{border-color:#1ab7ea !important}.page a{color:#3b9cba !important}.page__content{font-size:17px}.page__content.releases{font-size:17px}.page__footer{font-size:15px !important}.page__footer a{color:#3b9cba !important}.page__content .notice,.page__content .notice--primary,.page__content .notice--info,.page__content .notice--warning,.page__content .notice--success,.page__content .notice--danger{font-size:0.8em !important}.page__content table{font-size:0.8em !important}.page__content ta [...]
diff --git a/content/assets/images/hudi-lake.png b/content/assets/images/hudi-lake.png
index 7890498..e5e8238 100644
Binary files a/content/assets/images/hudi-lake.png and b/content/assets/images/hudi-lake.png differ
diff --git a/content/assets/js/lunr/lunr-store.js b/content/assets/js/lunr/lunr-store.js
index 0e2208b..606d4d1 100644
--- a/content/assets/js/lunr/lunr-store.js
+++ b/content/assets/js/lunr/lunr-store.js
@@ -190,7 +190,7 @@ var store = [{
         "url": "http://0.0.0.0:4000/cn/docs/migration_guide.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "Migration Guide",
-        "excerpt":"Hudi maintains metadata such as commit timeline and indexes to manage a dataset. The commit timelines helps to understand the actions happening on a dataset as well as the current state of a dataset. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently...","categories": [],
+        "excerpt":"Hudi maintains metadata such as commit timeline and indexes to manage a table. The commit timelines helps to understand the actions happening on a table as well as the current state of a table. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/migration_guide.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
@@ -210,12 +210,12 @@ var store = [{
         "url": "http://0.0.0.0:4000/cn/docs/quick-start-guide.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "Quick-Start Guide",
-        "excerpt":"This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi dataset of default storage type: Copy on Write. After each write operation we will also show how to read the data...","categories": [],
+        "excerpt":"This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/quick-start-guide.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "Structure",
-        "excerpt":"Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical datasets over DFS (HDFS or cloud stores) and provides three logical views for query access. Read Optimized View - Provides excellent query performance on pure columnar storage, much like plain Parquet tables. Incremental View - Provides a change stream out...","categories": [],
+        "excerpt":"Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical tables over DFS (HDFS or cloud stores) and provides three types of queries. Read Optimized query - Provides excellent query performance on pure columnar storage, much like plain Parquet tables. Incremental query - Provides a change stream out of the...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/structure.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
@@ -255,7 +255,7 @@ var store = [{
         "url": "http://0.0.0.0:4000/cn/docs/concepts.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "Concepts",
-        "excerpt":"Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over datasets on DFS Upsert (how do I change the dataset?) Incremental pull (how do I fetch data that changed?) In this section, we will discuss key concepts &amp; terminologies that are important to understand, to be able to effectively use...","categories": [],
+        "excerpt":"Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over hadoop compatible storages Update/Delete Records (how do I change records in a table?) Change Streams (how do I fetch records that changed?) In this section, we will discuss key concepts &amp; terminologies that are important to understand, to be able...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/concepts.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
@@ -264,8 +264,8 @@ var store = [{
         "tags": [],
         "url": "http://0.0.0.0:4000/cn/docs/writing_data.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
-        "title": "Writing Hudi Datasets",
-        "excerpt":"In this section, we will cover ways to ingest new changes from external sources or even other Hudi datasets using the DeltaStreamer tool, as well as speeding up large Spark jobs via upserts using the Hudi datasource. Such datasets can then be queried using various query engines. Write Operations Before...","categories": [],
+        "title": "Writing Hudi Tables",
+        "excerpt":"In this section, we will cover ways to ingest new changes from external sources or even other Hudi tables using the DeltaStreamer tool, as well as speeding up large Spark jobs via upserts using the Hudi datasource. Such tables can then be queried using various query engines. Write Operations Before...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/writing_data.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
@@ -274,8 +274,8 @@ var store = [{
         "tags": [],
         "url": "http://0.0.0.0:4000/cn/docs/querying_data.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
-        "title": "Querying Hudi Datasets",
-        "excerpt":"Conceptually, Hudi stores data physically once on DFS, while providing 3 logical views on top, as explained before. Once the dataset is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudi bundle has been provided, the dataset can be queried...","categories": [],
+        "title": "Querying Hudi Tables",
+        "excerpt":"Conceptually, Hudi stores data physically once on DFS, while providing 3 different ways of querying, as explained before. Once the table is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudi bundle has been provided, the table can be queried...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/querying_data.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
@@ -285,7 +285,7 @@ var store = [{
         "url": "http://0.0.0.0:4000/cn/docs/configurations.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "Configurations",
-        "excerpt":"This page covers the different ways of configuring your job to write/read Hudi datasets. At a high level, you can control behaviour at few levels. Spark Datasource Configs : These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, pick out the write operation, specify how to merge...","categories": [],
+        "excerpt":"This page covers the different ways of configuring your job to write/read Hudi tables. At a high level, you can control behaviour at few levels. Spark Datasource Configs : These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, pick out the write operation, specify how to merge...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/configurations.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
@@ -295,19 +295,19 @@ var store = [{
         "url": "http://0.0.0.0:4000/cn/docs/performance.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "Performance",
-        "excerpt":"In this section, we go over some real world performance numbers for Hudi upserts, incremental pull and compare them against the conventional alternatives for achieving these tasks. Upserts Following shows the speed up obtained for NoSQL database ingestion, from incrementally upserting on a Hudi dataset on the copy-on-write storage, on...","categories": [],
+        "excerpt":"In this section, we go over some real world performance numbers for Hudi upserts, incremental pull and compare them against the conventional alternatives for achieving these tasks. Upserts Following shows the speed up obtained for NoSQL database ingestion, from incrementally upserting on a Hudi table on the copy-on-write storage, on...","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/docs/performance.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "管理 Hudi Pipelines",
         "excerpt":"管理员/运维人员可以通过以下方式了解Hudi数据集/管道 通过Admin CLI进行管理 Graphite指标 Hudi应用程序的Spark UI 本节简要介绍了每一种方法,并提供了有关故障排除的一些常规指南 Admin CLI 一旦构建了hudi,就可以通过cd hudi-cli &amp;&amp; ./hudi-cli.sh启动shell。 一个hudi数据集位于DFS上的basePath位置,我们需要该位置才能连接到Hudi数据集。 Hudi库使用.hoodie子文件夹跟踪所有元数据,从而有效地在内部管理该数据集。 初始化hudi表,可使用如下命令。 18/09/06 15:56:52 INFO annotation.AutowiredAnnotationBeanPostProcessor: JSR-330 'javax.inject.Inject' annotation found and supported for autowiring ========================================== [...]
         "tags": [],
-        "url": "http://0.0.0.0:4000/cn/docs/admin_guide.html",
+        "url": "http://0.0.0.0:4000/cn/docs/deployment.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
-        "title": "Administering Hudi Pipelines",
-        "excerpt":"Admins/ops can gain visibility into Hudi datasets/pipelines in the following ways Administering via the Admin CLI Graphite metrics Spark UI of the Hudi Application This section provides a glimpse into each of these, with some general guidance on troubleshooting Admin CLI Once hudi has been built, the shell can be...","categories": [],
+        "title": "Deployment Guide",
+        "excerpt":"This section provides all the help you need to deploy and operate Hudi tables at scale. Specifically, we will cover the following aspects. Deployment Model : How various Hudi components are deployed and managed. Upgrading Versions : Picking up new releases of Hudi, guidelines and general best-practices. Migrating to Hudi...","categories": [],
         "tags": [],
-        "url": "http://0.0.0.0:4000/docs/admin_guide.html",
+        "url": "http://0.0.0.0:4000/docs/deployment.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "Privacy Policy",
         "excerpt":"Information about your use of this website is collected using server access logs and a tracking cookie. The collected information consists of the following: The IP address from which you access the website; The type of browser and operating system you use to access our site; The date and time...","categories": [],
@@ -320,7 +320,7 @@ var store = [{
         "url": "http://0.0.0.0:4000/docs/privacy.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
         "title": "文档版本",
-        "excerpt":"                                  Latest             英文版             中文版                                      0.5.0             英文版             中文版                       ","categories": [],
+        "excerpt":"                                  Latest             英文版             中文版                                      0.5.0             英文版             中文版                        ","categories": [],
         "tags": [],
         "url": "http://0.0.0.0:4000/cn/docs/docs-versions.html",
         "teaser":"http://0.0.0.0:4000/assets/images/500x300.png"},{
diff --git a/content/cn/community.html b/content/cn/community.html
index f831cea..e95e726 100644
--- a/content/cn/community.html
+++ b/content/cn/community.html
@@ -179,7 +179,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#engage-with-us">Engage with us</a></li>
   <li><a href="#contributing">Contributing</a></li>
diff --git a/content/cn/contributing.html b/content/cn/contributing.html
index 03b8446..c72e171 100644
--- a/content/cn/contributing.html
+++ b/content/cn/contributing.html
@@ -179,7 +179,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#pre-requisites">Pre-requisites</a></li>
   <li><a href="#ide-setup">IDE Setup</a></li>
diff --git a/content/cn/docs/0.5.0-admin_guide.html b/content/cn/docs/0.5.0-admin_guide.html
index fd3897a..413abd6 100644
--- a/content/cn/docs/0.5.0-admin_guide.html
+++ b/content/cn/docs/0.5.0-admin_guide.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#admin-cli">Admin CLI</a>
     <ul>
diff --git a/content/cn/docs/0.5.0-concepts.html b/content/cn/docs/0.5.0-concepts.html
index fe1b59e..68094df 100644
--- a/content/cn/docs/0.5.0-concepts.html
+++ b/content/cn/docs/0.5.0-concepts.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#时间轴">时间轴</a></li>
   <li><a href="#文件组织">文件组织</a></li>
diff --git a/content/cn/docs/0.5.0-configurations.html b/content/cn/docs/0.5.0-configurations.html
index 66150e1..c06c94a 100644
--- a/content/cn/docs/0.5.0-configurations.html
+++ b/content/cn/docs/0.5.0-configurations.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#与云存储连接">与云存储连接</a></li>
   <li><a href="#spark-datasource">Spark数据源配置</a>
diff --git a/content/cn/docs/0.5.0-docker_demo.html b/content/cn/docs/0.5.0-docker_demo.html
index 45de8fc..2ebfe81 100644
--- a/content/cn/docs/0.5.0-docker_demo.html
+++ b/content/cn/docs/0.5.0-docker_demo.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#a-demo-using-docker-containers">A Demo using docker containers</a>
     <ul>
diff --git a/content/cn/docs/0.5.0-docs-versions.html b/content/cn/docs/0.5.0-docs-versions.html
index 8611aac..da9849f 100644
--- a/content/cn/docs/0.5.0-docs-versions.html
+++ b/content/cn/docs/0.5.0-docs-versions.html
@@ -337,17 +337,17 @@
             }
           </style>
         
-        <table>
+        <table class="docversions">
     <tbody>
       
         <tr>
-            <th class="docversions">Latest</th>
+            <th>Latest</th>
             <td><a href="/docs/quick-start-guide.html">英文版</a></td>
             <td><a href="/cn/docs/quick-start-guide.html">中文版</a></td>
         </tr>
       
         <tr>
-            <th class="docversions">0.5.0</th>
+            <th>0.5.0</th>
             <td><a href="/docs/0.5.0-quick-start-guide.html">英文版</a></td>
             <td><a href="/cn/docs/0.5.0-quick-start-guide.html">中文版</a></td>
         </tr>
diff --git a/content/cn/docs/0.5.0-querying_data.html b/content/cn/docs/0.5.0-querying_data.html
index 1a15ecb..5a26bc8 100644
--- a/content/cn/docs/0.5.0-querying_data.html
+++ b/content/cn/docs/0.5.0-querying_data.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#hive">Hive</a>
     <ul>
diff --git a/content/cn/docs/0.5.0-quick-start-guide.html b/content/cn/docs/0.5.0-quick-start-guide.html
index cf8bac8..6fcf409 100644
--- a/content/cn/docs/0.5.0-quick-start-guide.html
+++ b/content/cn/docs/0.5.0-quick-start-guide.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#设置spark-shell">设置spark-shell</a></li>
   <li><a href="#inserts">插入数据</a></li>
diff --git a/content/cn/docs/0.5.0-use_cases.html b/content/cn/docs/0.5.0-use_cases.html
index 5714fce..b50433c 100644
--- a/content/cn/docs/0.5.0-use_cases.html
+++ b/content/cn/docs/0.5.0-use_cases.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#近实时摄取">近实时摄取</a></li>
   <li><a href="#近实时分析">近实时分析</a></li>
diff --git a/content/cn/docs/0.5.0-writing_data.html b/content/cn/docs/0.5.0-writing_data.html
index 3d0a537..2af9453 100644
--- a/content/cn/docs/0.5.0-writing_data.html
+++ b/content/cn/docs/0.5.0-writing_data.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#写操作">写操作</a></li>
   <li><a href="#deltastreamer">DeltaStreamer</a></li>
diff --git a/content/cn/docs/comparison.html b/content/cn/docs/comparison.html
index aadf606..c1c727b 100644
--- a/content/cn/docs/comparison.html
+++ b/content/cn/docs/comparison.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
diff --git a/content/cn/docs/concepts.html b/content/cn/docs/concepts.html
index f534633..d062aa9 100644
--- a/content/cn/docs/concepts.html
+++ b/content/cn/docs/concepts.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#时间轴">时间轴</a></li>
   <li><a href="#文件组织">文件组织</a></li>
diff --git a/content/cn/docs/configurations.html b/content/cn/docs/configurations.html
index f44b979..aa6fc93 100644
--- a/content/cn/docs/configurations.html
+++ b/content/cn/docs/configurations.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#与云存储连接">与云存储连接</a></li>
   <li><a href="#spark-datasource">Spark数据源配置</a>
diff --git a/content/cn/docs/0.5.0-admin_guide.html b/content/cn/docs/deployment.html
similarity index 98%
copy from content/cn/docs/0.5.0-admin_guide.html
copy to content/cn/docs/deployment.html
index fd3897a..56bf385 100644
--- a/content/cn/docs/0.5.0-admin_guide.html
+++ b/content/cn/docs/deployment.html
@@ -10,7 +10,7 @@
 <meta property="og:locale" content="en_US">
 <meta property="og:site_name" content="">
 <meta property="og:title" content="管理 Hudi Pipelines">
-<meta property="og:url" content="https://hudi.apache.org/cn/docs/0.5.0-admin_guide.html">
+<meta property="og:url" content="https://hudi.apache.org/cn/docs/deployment.html">
 
 
   <meta property="og:description" content="管理员/运维人员可以通过以下方式了解Hudi数据集/管道">
@@ -147,7 +147,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-quick-start-guide.html" class="">快速开始</a></li>
+              <li><a href="/cn/docs/quick-start-guide.html" class="">快速开始</a></li>
             
 
           
@@ -158,7 +158,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-use_cases.html" class="">使用案例</a></li>
+              <li><a href="/cn/docs/use_cases.html" class="">使用案例</a></li>
             
 
           
@@ -169,7 +169,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-powered_by.html" class="">演讲 & hudi 用户</a></li>
+              <li><a href="/cn/docs/powered_by.html" class="">演讲 & hudi 用户</a></li>
             
 
           
@@ -180,7 +180,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-comparison.html" class="">对比</a></li>
+              <li><a href="/cn/docs/comparison.html" class="">对比</a></li>
             
 
           
@@ -191,7 +191,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-docker_demo.html" class="">Docker 示例</a></li>
+              <li><a href="/cn/docs/docker_demo.html" class="">Docker 示例</a></li>
             
 
           
@@ -214,7 +214,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-concepts.html" class="">概念</a></li>
+              <li><a href="/cn/docs/concepts.html" class="">概念</a></li>
             
 
           
@@ -225,7 +225,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-writing_data.html" class="">写入数据</a></li>
+              <li><a href="/cn/docs/writing_data.html" class="">写入数据</a></li>
             
 
           
@@ -236,7 +236,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-querying_data.html" class="">查询数据</a></li>
+              <li><a href="/cn/docs/querying_data.html" class="">查询数据</a></li>
             
 
           
@@ -247,7 +247,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-configurations.html" class="">配置</a></li>
+              <li><a href="/cn/docs/configurations.html" class="">配置</a></li>
             
 
           
@@ -258,7 +258,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-performance.html" class="">性能</a></li>
+              <li><a href="/cn/docs/performance.html" class="">性能</a></li>
             
 
           
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-admin_guide.html" class="active">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="active">管理</a></li>
             
 
           
@@ -292,7 +292,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-docs-versions.html" class="">文档版本</a></li>
+              <li><a href="/cn/docs/docs-versions.html" class="">文档版本</a></li>
             
 
           
@@ -303,7 +303,7 @@
             
 
             
-              <li><a href="/cn/docs/0.5.0-privacy.html" class="">版权信息</a></li>
+              <li><a href="/cn/docs/privacy.html" class="">版权信息</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#admin-cli">Admin CLI</a>
     <ul>
@@ -729,7 +729,7 @@ Hudi库使用.hoodie子文件夹跟踪所有元数据,从而有效地在内部
 
 <h3 id="重复">重复</h3>
 
-<p>首先,请确保访问Hudi数据集的查询是<a href="/docs/0.5.0-querying_data.html">没有问题的</a>,并之后确认的确有重复。</p>
+<p>首先,请确保访问Hudi数据集的查询是<a href="sql_queries.html">没有问题的</a>,并之后确认的确有重复。</p>
 
 <ul>
   <li>如果确认,请使用上面的元数据字段来标识包含记录的物理文件和分区文件。</li>
diff --git a/content/cn/docs/docker_demo.html b/content/cn/docs/docker_demo.html
index 48b5e7f..d14a5ab 100644
--- a/content/cn/docs/docker_demo.html
+++ b/content/cn/docs/docker_demo.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#a-demo-using-docker-containers">A Demo using docker containers</a>
     <ul>
diff --git a/content/cn/docs/docs-versions.html b/content/cn/docs/docs-versions.html
index 020c817..37414e0 100644
--- a/content/cn/docs/docs-versions.html
+++ b/content/cn/docs/docs-versions.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -337,17 +337,17 @@
             }
           </style>
         
-        <table>
+        <table class="docversions">
     <tbody>
       
         <tr>
-            <th class="docversions">Latest</th>
+            <th>Latest</th>
             <td><a href="/docs/quick-start-guide.html">英文版</a></td>
             <td><a href="/cn/docs/quick-start-guide.html">中文版</a></td>
         </tr>
       
         <tr>
-            <th class="docversions">0.5.0</th>
+            <th>0.5.0</th>
             <td><a href="/docs/0.5.0-quick-start-guide.html">英文版</a></td>
             <td><a href="/cn/docs/0.5.0-quick-start-guide.html">中文版</a></td>
         </tr>
@@ -355,6 +355,7 @@
     </tbody>
 </table>
 
+
       </section>
 
       <a href="#masthead__inner-wrap" class="back-to-top">Back to top &uarr;</a>
diff --git a/content/cn/docs/gcs_hoodie.html b/content/cn/docs/gcs_hoodie.html
index 7a4d03f..2e32101 100644
--- a/content/cn/docs/gcs_hoodie.html
+++ b/content/cn/docs/gcs_hoodie.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
diff --git a/content/cn/docs/migration_guide.html b/content/cn/docs/migration_guide.html
index e5f51b7..ca21c3e 100644
--- a/content/cn/docs/migration_guide.html
+++ b/content/cn/docs/migration_guide.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
diff --git a/content/cn/docs/performance.html b/content/cn/docs/performance.html
index aae759f..5471bb4 100644
--- a/content/cn/docs/performance.html
+++ b/content/cn/docs/performance.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
diff --git a/content/cn/docs/powered_by.html b/content/cn/docs/powered_by.html
index faadb61..b695608 100644
--- a/content/cn/docs/powered_by.html
+++ b/content/cn/docs/powered_by.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
diff --git a/content/cn/docs/privacy.html b/content/cn/docs/privacy.html
index 355b6ff..e7b08a4 100644
--- a/content/cn/docs/privacy.html
+++ b/content/cn/docs/privacy.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
diff --git a/content/cn/docs/querying_data.html b/content/cn/docs/querying_data.html
index 1f6978d..b9b9147 100644
--- a/content/cn/docs/querying_data.html
+++ b/content/cn/docs/querying_data.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#hive">Hive</a>
     <ul>
diff --git a/content/cn/docs/quick-start-guide.html b/content/cn/docs/quick-start-guide.html
index 90a1f5f..02c11d2 100644
--- a/content/cn/docs/quick-start-guide.html
+++ b/content/cn/docs/quick-start-guide.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#设置spark-shell">设置spark-shell</a></li>
   <li><a href="#inserts">插入数据</a></li>
diff --git a/content/cn/docs/s3_hoodie.html b/content/cn/docs/s3_hoodie.html
index 96112cf..ce68ede 100644
--- a/content/cn/docs/s3_hoodie.html
+++ b/content/cn/docs/s3_hoodie.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
diff --git a/content/cn/docs/use_cases.html b/content/cn/docs/use_cases.html
index cd945c4..5bf4f1b 100644
--- a/content/cn/docs/use_cases.html
+++ b/content/cn/docs/use_cases.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#近实时摄取">近实时摄取</a></li>
   <li><a href="#近实时分析">近实时分析</a></li>
diff --git a/content/cn/docs/writing_data.html b/content/cn/docs/writing_data.html
index d0243c7..24ed3d0 100644
--- a/content/cn/docs/writing_data.html
+++ b/content/cn/docs/writing_data.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/cn/docs/admin_guide.html" class="">管理</a></li>
+              <li><a href="/cn/docs/deployment.html" class="">管理</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#写操作">写操作</a></li>
   <li><a href="#deltastreamer">DeltaStreamer</a></li>
diff --git a/content/cn/releases.html b/content/cn/releases.html
index 0258c6d..140f11b 100644
--- a/content/cn/releases.html
+++ b/content/cn/releases.html
@@ -179,7 +179,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#release-050-incubating">Release 0.5.0-incubating</a>
     <ul>
diff --git a/content/community.html b/content/community.html
index bf0f0a6..728cfce 100644
--- a/content/community.html
+++ b/content/community.html
@@ -179,7 +179,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#engage-with-us">Engage with us</a></li>
   <li><a href="#contributing">Contributing</a></li>
diff --git a/content/contributing.html b/content/contributing.html
index 57e5d10..6b9c264 100644
--- a/content/contributing.html
+++ b/content/contributing.html
@@ -179,7 +179,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#pre-requisites">Pre-requisites</a></li>
   <li><a href="#ide-setup">IDE Setup</a></li>
diff --git a/content/docs/0.5.0-admin_guide.html b/content/docs/0.5.0-admin_guide.html
index 68b2b01..a2ea6a1 100644
--- a/content/docs/0.5.0-admin_guide.html
+++ b/content/docs/0.5.0-admin_guide.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#admin-cli">Admin CLI</a>
     <ul>
diff --git a/content/docs/0.5.0-concepts.html b/content/docs/0.5.0-concepts.html
index 3b79aa1..4dab754 100644
--- a/content/docs/0.5.0-concepts.html
+++ b/content/docs/0.5.0-concepts.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#timeline">Timeline</a></li>
   <li><a href="#file-management">File management</a></li>
diff --git a/content/docs/0.5.0-configurations.html b/content/docs/0.5.0-configurations.html
index 13f5fc4..5d5bc9e 100644
--- a/content/docs/0.5.0-configurations.html
+++ b/content/docs/0.5.0-configurations.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#talking-to-cloud-storage">Talking to Cloud Storage</a></li>
   <li><a href="#spark-datasource">Spark Datasource Configs</a>
diff --git a/content/docs/0.5.0-docker_demo.html b/content/docs/0.5.0-docker_demo.html
index 6f82d61..926ac54 100644
--- a/content/docs/0.5.0-docker_demo.html
+++ b/content/docs/0.5.0-docker_demo.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#a-demo-using-docker-containers">A Demo using docker containers</a>
     <ul>
diff --git a/content/docs/0.5.0-docs-versions.html b/content/docs/0.5.0-docs-versions.html
index 5fe9c30..00b3f11 100644
--- a/content/docs/0.5.0-docs-versions.html
+++ b/content/docs/0.5.0-docs-versions.html
@@ -337,17 +337,17 @@
             }
           </style>
         
-        <table>
+        <table class="docversions">
     <tbody>
       
         <tr>
-            <th class="docversions">Latest</th>
+            <th>Latest</th>
             <td><a href="/docs/quick-start-guide.html">English Version</a></td>
             <td><a href="/cn/docs/quick-start-guide.html">Chinese Version</a></td>
         </tr>
       
         <tr>
-            <th class="docversions">0.5.0</th>
+            <th>0.5.0</th>
             <td><a href="/docs/0.5.0-quick-start-guide.html">English Version</a></td>
             <td><a href="/cn/docs/0.5.0-quick-start-guide.html">Chinese Version</a></td>
         </tr>
diff --git a/content/docs/0.5.0-querying_data.html b/content/docs/0.5.0-querying_data.html
index 5094084..2214bb7 100644
--- a/content/docs/0.5.0-querying_data.html
+++ b/content/docs/0.5.0-querying_data.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#hive">Hive</a>
     <ul>
diff --git a/content/docs/0.5.0-quick-start-guide.html b/content/docs/0.5.0-quick-start-guide.html
index 0dc26a8..e0d3de5 100644
--- a/content/docs/0.5.0-quick-start-guide.html
+++ b/content/docs/0.5.0-quick-start-guide.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#setup-spark-shell">Setup spark-shell</a></li>
   <li><a href="#insert-data">Insert data</a></li>
diff --git a/content/docs/0.5.0-use_cases.html b/content/docs/0.5.0-use_cases.html
index 09508fc..a13386d 100644
--- a/content/docs/0.5.0-use_cases.html
+++ b/content/docs/0.5.0-use_cases.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#near-real-time-ingestion">Near Real-Time Ingestion</a></li>
   <li><a href="#near-real-time-analytics">Near Real-time Analytics</a></li>
diff --git a/content/docs/0.5.0-writing_data.html b/content/docs/0.5.0-writing_data.html
index 8fb4d7c..bb5289a 100644
--- a/content/docs/0.5.0-writing_data.html
+++ b/content/docs/0.5.0-writing_data.html
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#write-operations">Write Operations</a></li>
   <li><a href="#deltastreamer">DeltaStreamer</a></li>
diff --git a/content/docs/comparison.html b/content/docs/comparison.html
index 3dc7213..3919cbd 100644
--- a/content/docs/comparison.html
+++ b/content/docs/comparison.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -374,7 +374,7 @@ just for analytics. Finally, HBase does not support incremental processing primi
 <h2 id="stream-processing">Stream Processing</h2>
 
 <p>A popular question, we get is : “How does Hudi relate to stream processing systems?”, which we will try to answer here. Simply put, Hudi can integrate with
-batch (<code class="highlighter-rouge">copy-on-write storage</code>) and streaming (<code class="highlighter-rouge">merge-on-read storage</code>) jobs of today, to store the computed results in Hadoop. For Spark apps, this can happen via direct
+batch (<code class="highlighter-rouge">copy-on-write table</code>) and streaming (<code class="highlighter-rouge">merge-on-read table</code>) jobs of today, to store the computed results in Hadoop. For Spark apps, this can happen via direct
 integration of Hudi library with Spark/Spark streaming DAGs. In case of Non-Spark processing systems (eg: Flink, Hive), the processing can be done in the respective systems
 and later sent into a Hudi table via a Kafka topic/DFS intermediate file. In more conceptual level, data processing
 pipelines just consist of three components : <code class="highlighter-rouge">source</code>, <code class="highlighter-rouge">processing</code>, <code class="highlighter-rouge">sink</code>, with users ultimately running queries against the sink to use the results of the pipeline.
diff --git a/content/docs/concepts.html b/content/docs/concepts.html
index b29a8bc..5ff0c17 100644
--- a/content/docs/concepts.html
+++ b/content/docs/concepts.html
@@ -4,7 +4,7 @@
     <meta charset="utf-8">
 
 <!-- begin _includes/seo.html --><title>Concepts - Apache Hudi</title>
-<meta name="description" content="Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over datasets on DFS">
+<meta name="description" content="Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over Hadoop-compatible storage">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
@@ -13,7 +13,7 @@
 <meta property="og:url" content="https://hudi.apache.org/docs/concepts.html">
 
 
-  <meta property="og:description" content="Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over datasets on DFS">
+  <meta property="og:description" content="Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over Hadoop-compatible storage">
 
 
 
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -333,37 +333,38 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#timeline">Timeline</a></li>
   <li><a href="#file-management">File management</a></li>
-  <li><a href="#storage-types--views">Storage Types &amp; Views</a>
+  <li><a href="#index">Index</a></li>
+  <li><a href="#table-types--queries">Table Types &amp; Queries</a>
     <ul>
-      <li><a href="#storage-types">Storage Types</a></li>
-      <li><a href="#views">Views</a></li>
+      <li><a href="#table-types">Table Types</a></li>
+      <li><a href="#query-types">Query types</a></li>
     </ul>
   </li>
-  <li><a href="#copy-on-write-storage">Copy On Write Storage</a></li>
-  <li><a href="#merge-on-read-storage">Merge On Read Storage</a></li>
+  <li><a href="#copy-on-write-table">Copy On Write Table</a></li>
+  <li><a href="#merge-on-read-table">Merge On Read Table</a></li>
 </ul>
           </nav>
         </aside>
         
-        <p>Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over datasets on DFS</p>
+        <p>Apache Hudi (pronounced “Hudi”) provides the following streaming primitives over Hadoop-compatible storage</p>
 
 <ul>
-  <li>Upsert                     (how do I change the dataset?)</li>
-  <li>Incremental pull           (how do I fetch data that changed?)</li>
+  <li>Update/Delete Records      (how do I change records in a table?)</li>
+  <li>Change Streams             (how do I fetch records that changed?)</li>
 </ul>
 
 <p>In this section, we will discuss key concepts &amp; terminologies that are important to understand, to be able to effectively use these primitives.</p>
 
 <h2 id="timeline">Timeline</h2>
-<p>At its core, Hudi maintains a <code class="highlighter-rouge">timeline</code> of all actions performed on the dataset at different <code class="highlighter-rouge">instants</code> of time that helps provide instantaneous views of the dataset,
+<p>At its core, Hudi maintains a <code class="highlighter-rouge">timeline</code> of all actions performed on the table at different <code class="highlighter-rouge">instants</code> of time that helps provide instantaneous views of the table,
 while also efficiently supporting retrieval of data in the order of arrival. A Hudi instant consists of the following components</p>
 
 <ul>
-  <li><code class="highlighter-rouge">Action type</code> : Type of action performed on the dataset</li>
+  <li><code class="highlighter-rouge">Instant action</code> : Type of action performed on the table</li>
   <li><code class="highlighter-rouge">Instant time</code> : Instant time is typically a timestamp (e.g: 20190117010349), which monotonically increases in the order of action’s begin time.</li>
   <li><code class="highlighter-rouge">state</code> : current state of the instant</li>
 </ul>
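The instant structure added in this hunk (action, time, state) can be sketched as a toy data model. This is an illustrative sketch in plain Python, not Hudi's actual Java classes; the field values are hypothetical examples.

```python
from dataclasses import dataclass

# Toy model of a Hudi timeline instant: an action type, a monotonically
# increasing instant time (ordered by the action's begin time), and a state.
@dataclass(frozen=True)
class Instant:
    action: str        # e.g. "commit", "clean", "compaction", "rollback"
    instant_time: str  # e.g. "20190117010349" (yyyyMMddHHmmss)
    state: str         # "requested" | "inflight" | "completed"

timeline = [
    Instant("commit", "20190117010349", "completed"),
    Instant("compaction", "20190117011500", "inflight"),
]

# Sorting by instant time yields the arrival-ordered view of the table.
ordered = sorted(timeline, key=lambda i: i.instant_time)
assert ordered[0].action == "commit"
```

Because instant times are timestamp strings with fixed-width fields, plain lexicographic ordering matches chronological ordering.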
@@ -373,12 +374,12 @@ while also efficiently supporting retrieval of data in the order of arrival. A H
 <p>Key actions performed include</p>
 
 <ul>
-  <li><code class="highlighter-rouge">COMMITS</code> - A commit denotes an <strong>atomic write</strong> of a batch of records into a dataset.</li>
-  <li><code class="highlighter-rouge">CLEANS</code> - Background activity that gets rid of older versions of files in the dataset, that are no longer needed.</li>
-  <li><code class="highlighter-rouge">DELTA_COMMIT</code> - A delta commit refers to an <strong>atomic write</strong> of a batch of records into a  MergeOnRead storage type of dataset, where some/all of the data could be just written to delta logs.</li>
+  <li><code class="highlighter-rouge">COMMITS</code> - A commit denotes an <strong>atomic write</strong> of a batch of records into a table.</li>
+  <li><code class="highlighter-rouge">CLEANS</code> - Background activity that gets rid of older versions of files in the table, that are no longer needed.</li>
+  <li><code class="highlighter-rouge">DELTA_COMMIT</code> - A delta commit refers to an <strong>atomic write</strong> of a batch of records into a  MergeOnRead type table, where some/all of the data could be just written to delta logs.</li>
   <li><code class="highlighter-rouge">COMPACTION</code> - Background activity to reconcile differential data structures within Hudi e.g: moving updates from row based log files to columnar formats. Internally, compaction manifests as a special commit on the timeline</li>
   <li><code class="highlighter-rouge">ROLLBACK</code> - Indicates that a commit/delta commit was unsuccessful &amp; rolled back, removing any partial files produced during such a write</li>
-  <li><code class="highlighter-rouge">SAVEPOINT</code> - Marks certain file groups as “saved”, such that cleaner will not delete them. It helps restore the dataset to a point on the timeline, in case of disaster/data recovery scenarios.</li>
+  <li><code class="highlighter-rouge">SAVEPOINT</code> - Marks certain file groups as “saved”, such that cleaner will not delete them. It helps restore the table to a point on the timeline, in case of disaster/data recovery scenarios.</li>
 </ul>
 
 <p>Any given instant can be 
@@ -394,7 +395,7 @@ in one of the following states</p>
     <img class="docimage" src="/assets/images/hudi_timeline.png" alt="hudi_timeline.png" />
 </figure>
 
-<p>Example above shows upserts happenings between 10:00 and 10:20 on a Hudi dataset, roughly every 5 mins, leaving commit metadata on the Hudi timeline, along
+<p>The example above shows upserts happening between 10:00 and 10:20 on a Hudi table, roughly every 5 mins, leaving commit metadata on the Hudi timeline, along
 with other background cleaning/compactions. One key observation to make is that the commit time indicates the <code class="highlighter-rouge">arrival time</code> of the data (10:20AM), while the actual data
 organization reflects the actual time or <code class="highlighter-rouge">event time</code>, the data was intended for (hourly buckets from 07:00). These are two key concepts when reasoning about tradeoffs between latency and completeness of data.</p>
 
@@ -403,51 +404,52 @@ With the help of the timeline, an incremental query attempting to get all new da
 only the changed files without say scanning all the time buckets &gt; 07:00.</p>
 
 <h2 id="file-management">File management</h2>
-<p>Hudi organizes a datasets into a directory structure under a <code class="highlighter-rouge">basepath</code> on DFS. Dataset is broken up into partitions, which are folders containing data files for that partition,
+<p>Hudi organizes a table into a directory structure under a <code class="highlighter-rouge">basepath</code> on DFS. The table is broken up into partitions, which are folders containing data files for that partition,
 very similar to Hive tables. Each partition is uniquely identified by its <code class="highlighter-rouge">partitionpath</code>, which is relative to the basepath.</p>
 
 <p>Within each partition, files are organized into <code class="highlighter-rouge">file groups</code>, uniquely identified by a <code class="highlighter-rouge">file id</code>. Each file group contains several
-<code class="highlighter-rouge">file slices</code>, where each slice contains a base columnar file (<code class="highlighter-rouge">*.parquet</code>) produced at a certain commit/compaction instant time,
+<code class="highlighter-rouge">file slices</code>, where each slice contains a base file (<code class="highlighter-rouge">*.parquet</code>) produced at a certain commit/compaction instant time,
  along with set of log files (<code class="highlighter-rouge">*.log.*</code>) that contain inserts/updates to the base file since the base file was produced. 
 Hudi adopts a MVCC design, where compaction action merges logs and base files to produce new file slices and cleaning action gets rid of 
 unused/older file slices to reclaim space on DFS.</p>
 
-<p>Hudi provides efficient upserts, by mapping a given hoodie key (record key + partition path) consistently to a file group, via an indexing mechanism. 
+<h2 id="index">Index</h2>
+<p>Hudi provides efficient upserts, by mapping a given hoodie key (record key + partition path) consistently to a file id, via an indexing mechanism. 
 This mapping between record key and file group/file id, never changes once the first version of a record has been written to a file. In short, the 
 mapped file group contains all versions of a group of records.</p>
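The fixed key-to-file-group mapping described in this hunk can be sketched with a dictionary. This is a hypothetical toy sketch (names like `tag_location` are illustrative, not Hudi's API): the first write assigns a file id to a hoodie key, and every later write for that key resolves to the same file id.

```python
# Toy index: once a hoodie key (record key + partition path) is assigned to
# a file id, that mapping never changes for subsequent updates.
index = {}

def tag_location(record_key: str, partition_path: str, candidate_file_id: str) -> str:
    """Return the file id for a hoodie key, assigning one only on first write."""
    hoodie_key = (record_key, partition_path)
    return index.setdefault(hoodie_key, candidate_file_id)

first = tag_location("uuid-1", "2020/01/31", "fg-001")   # insert: assigns fg-001
again = tag_location("uuid-1", "2020/01/31", "fg-999")   # update: sticks to fg-001
assert first == again == "fg-001"
```

This is why the mapped file group contains all versions of a group of records: updates are routed back to the same file group instead of scattering across files.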
 
-<h2 id="storage-types--views">Storage Types &amp; Views</h2>
-<p>Hudi storage types define how data is indexed &amp; laid out on the DFS and how the above primitives and timeline activities are implemented on top of such organization (i.e how data is written). 
-In turn, <code class="highlighter-rouge">views</code> define how the underlying data is exposed to the queries (i.e how data is read).</p>
+<h2 id="table-types--queries">Table Types &amp; Queries</h2>
+<p>Hudi table types define how data is indexed &amp; laid out on the DFS and how the above primitives and timeline activities are implemented on top of such organization (i.e how data is written). 
+In turn, <code class="highlighter-rouge">query types</code> define how the underlying data is exposed to the queries (i.e how data is read).</p>
 
 <table>
   <thead>
     <tr>
-      <th>Storage Type</th>
-      <th>Supported Views</th>
+      <th>Table Type</th>
+      <th>Supported Query types</th>
     </tr>
   </thead>
   <tbody>
     <tr>
       <td>Copy On Write</td>
-      <td>Read Optimized + Incremental</td>
+      <td>Snapshot Queries + Incremental Queries</td>
     </tr>
     <tr>
       <td>Merge On Read</td>
-      <td>Read Optimized + Incremental + Near Real-time</td>
+      <td>Snapshot Queries + Incremental Queries + Read Optimized Queries</td>
     </tr>
   </tbody>
 </table>
 
-<h3 id="storage-types">Storage Types</h3>
-<p>Hudi supports the following storage types.</p>
+<h3 id="table-types">Table Types</h3>
+<p>Hudi supports the following table types.</p>
 
 <ul>
-  <li><a href="#copy-on-write-storage">Copy On Write</a> : Stores data using exclusively columnar file formats (e.g parquet). Updates simply version &amp; rewrite the files by performing a synchronous merge during write.</li>
-  <li><a href="#merge-on-read-storage">Merge On Read</a> : Stores data using a combination of columnar (e.g parquet) + row based (e.g avro) file formats. Updates are logged to delta files &amp; later compacted to produce new versions of columnar files synchronously or asynchronously.</li>
+  <li><a href="#copy-on-write-table">Copy On Write</a> : Stores data using exclusively columnar file formats (e.g parquet). Updates simply version &amp; rewrite the files by performing a synchronous merge during write.</li>
+  <li><a href="#merge-on-read-table">Merge On Read</a> : Stores data using a combination of columnar (e.g parquet) + row based (e.g avro) file formats. Updates are logged to delta files &amp; later compacted to produce new versions of columnar files synchronously or asynchronously.</li>
 </ul>
 
-<p>Following table summarizes the trade-offs between these two storage types</p>
+<p>The following table summarizes the trade-offs between these two table types</p>
 
 <table>
   <thead>
@@ -481,49 +483,49 @@ In turn, <code class="highlighter-rouge">views</code> define how the underlying
   </tbody>
 </table>
 
-<h3 id="views">Views</h3>
-<p>Hudi supports the following views of stored data</p>
+<h3 id="query-types">Query types</h3>
+<p>Hudi supports the following query types</p>
 
 <ul>
-  <li><strong>Read Optimized View</strong> : Queries on this view see the latest snapshot of the dataset as of a given commit or compaction action. 
- This view exposes only the base/columnar files in latest file slices to the queries and guarantees the same columnar query performance compared to a non-hudi columnar dataset.</li>
-  <li><strong>Incremental View</strong> : Queries on this view only see new data written to the dataset, since a given commit/compaction. This view effectively provides change streams to enable incremental data pipelines.</li>
-  <li><strong>Realtime View</strong> : Queries on this view see the latest snapshot of dataset as of a given delta commit action. This view provides near-real time datasets (few mins)
-  by merging the base and delta files of the latest file slice on-the-fly.</li>
+  <li><strong>Snapshot Queries</strong> : Queries see the latest snapshot of the table as of a given commit or compaction action. In case of a merge-on-read table, it exposes near-real time data (few mins) by merging 
+ the base and delta files of the latest file slice on-the-fly. For a copy-on-write table, it provides a drop-in replacement for existing parquet tables, while providing upsert/delete and other write side features.</li>
+  <li><strong>Incremental Queries</strong> : Queries only see new data written to the table, since a given commit/compaction. This effectively provides change streams to enable incremental data pipelines.</li>
+  <li><strong>Read Optimized Queries</strong> : Queries see the latest snapshot of the table as of a given commit/compaction action. They expose only the base/columnar files in the latest file slices and guarantee the 
+ same columnar query performance as a non-hudi columnar table.</li>
 </ul>
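The three query types can be sketched with a toy model — plain Python dictionaries standing in for base files, delta logs and commit times. This is illustrative only, not Hudi code or API:

```python
# Toy model of Hudi query types; dicts stand in for files, strings for commit times.
base = {1: "a@t1", 2: "b@t1", 3: "c@t1"}          # base/columnar file (as of commit t1)
delta_log = {2: "b@t2", 4: "d@t3"}                # row-based delta log written after t1
commit_of = {1: "t1", 2: "t2", 3: "t1", 4: "t3"}  # latest commit per record key

def read_optimized_query():
    # Base/columnar files only: raw columnar performance, misses post-compaction data.
    return dict(base)

def snapshot_query():
    # Merge base + delta log on the fly: freshest data, at extra merge cost.
    return {**base, **delta_log}

def incremental_query(since_commit):
    # Only records written after a given commit: an incremental change stream.
    return {k: v for k, v in snapshot_query().items() if commit_of[k] > since_commit}
```

Note how the read optimized result still shows key 2's older value and lacks key 4 entirely, while the incremental query returns exactly the records changed since `t1`.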
 
-<p>Following table summarizes the trade-offs between the different views.</p>
+<p>The following table summarizes the trade-offs between the different query types.</p>
 
 <table>
   <thead>
     <tr>
       <th>Trade-off</th>
-      <th>ReadOptimized</th>
-      <th>RealTime</th>
+      <th>Snapshot</th>
+      <th>Read Optimized</th>
     </tr>
   </thead>
   <tbody>
     <tr>
       <td>Data Latency</td>
-      <td>Higher</td>
       <td>Lower</td>
+      <td>Higher</td>
     </tr>
     <tr>
       <td>Query Latency</td>
-      <td>Lower (raw columnar performance)</td>
-      <td>Higher (merge columnar + row based delta)</td>
+      <td>Higher (merge base / columnar file + row based delta / log files)</td>
+      <td>Lower (raw base / columnar file performance)</td>
     </tr>
   </tbody>
 </table>
 
-<h2 id="copy-on-write-storage">Copy On Write Storage</h2>
+<h2 id="copy-on-write-table">Copy On Write Table</h2>
 
-<p>File slices in Copy-On-Write storage only contain the base/columnar file and each commit produces new versions of base files. 
+<p>File slices in a Copy-On-Write table contain only the base/columnar file, and each commit produces new versions of base files. 
 In other words, we implicitly compact on every commit, such that only columnar data exists. As a result, the write amplification 
(number of bytes written for 1 byte of incoming data) is much higher, while read amplification is zero. 
This is a much desired property for analytical workloads, which are predominantly read-heavy.</p>
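A back-of-envelope sketch of that trade-off, with purely hypothetical file sizes:

```python
# Hypothetical sizes: a 1 MB update forces a rewrite of its 100 MB base file.
base_file_mb = 100
incoming_update_mb = 1

cow_bytes_written = base_file_mb                              # whole base file rewritten
write_amplification = cow_bytes_written / incoming_update_mb  # bytes written per byte in
read_amplification = 0                                        # queries read base files as-is
```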
 
-<p>Following illustrates how this works conceptually, when  data written into copy-on-write storage  and two queries running on top of it.</p>
+<p>The following illustrates how this works conceptually, when data is written into a copy-on-write table and two queries run on top of it.</p>
 
 <figure>
     <img class="docimage" src="/assets/images/hudi_cow.png" alt="hudi_cow.png" />
@@ -531,27 +533,27 @@ This is a much desired property for analytical workloads, which is predominantly
 
 <p>As data gets written, updates to existing file groups produce a new slice for that file group stamped with the commit instant time, 
 while inserts allocate a new file group and write its first slice for that file group. These file slices and their commit instant times are color coded above.
-SQL queries running against such a dataset (eg: <code class="highlighter-rouge">select count(*)</code> counting the total records in that partition), first checks the timeline for the latest commit
+SQL queries running against such a table (eg: <code class="highlighter-rouge">select count(*)</code> counting the total records in that partition) first check the timeline for the latest commit
and filter out all but the latest file slices of each file group. As you can see, an old query does not see the current inflight commit’s files color coded in pink,
but a new query starting after the commit picks up the new data. Thus queries are immune to any write failures/partial writes and only run on committed data.</p>
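That filtering step — keep only the latest file slice per file group, considering only completed commits — can be sketched as a toy model (illustrative, not Hudi's internals):

```python
# Toy timeline: only file slices from completed commits are visible to queries.
timeline = {"c1": "completed", "c2": "completed", "c3": "inflight"}

# (file_group, commit) -> file slice name
slices = {("fg1", "c1"): "fg1@c1",
          ("fg1", "c3"): "fg1@c3",   # written by the inflight commit: invisible
          ("fg2", "c2"): "fg2@c2"}

def latest_committed_slices():
    committed = {c for c, state in timeline.items() if state == "completed"}
    latest = {}
    for (fg, commit), f in slices.items():
        # keep the newest slice per file group among committed instants
        if commit in committed and (fg not in latest or commit > latest[fg][0]):
            latest[fg] = (commit, f)
    return {fg: f for fg, (_, f) in latest.items()}
```

Until `c3` completes, queries keep seeing `fg1@c1`; once the commit finishes, the same function returns `fg1@c3` — which is exactly why partial writes never leak into query results.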
 
-<p>The intention of copy on write storage, is to fundamentally improve how datasets are managed today through</p>
+<p>The intention of the copy on write table is to fundamentally improve how tables are managed today through</p>
 
 <ul>
   <li>First class support for atomically updating data at file-level, instead of rewriting whole tables/partitions</li>
  <li>Ability to incrementally consume changes, as opposed to wasteful scans or fumbling with heuristics</li>
-  <li>Tight control file sizes to keep query performance excellent (small files hurt query performance considerably).</li>
+  <li>Tight control of file sizes to keep query performance excellent (small files hurt query performance considerably).</li>
 </ul>
 
-<h2 id="merge-on-read-storage">Merge On Read Storage</h2>
+<h2 id="merge-on-read-table">Merge On Read Table</h2>
 
-<p>Merge on read storage is a superset of copy on write, in the sense it still provides a read optimized view of the dataset via the Read Optmized table.
-Additionally, it stores incoming upserts for each file group, onto a row based delta log, that enables providing near real-time data to the queries
- by applying the delta log, onto the latest version of each file id on-the-fly during query time. Thus, this storage type attempts to balance read and write amplication intelligently, to provide near real-time queries.
-The most significant change here, would be to the compactor, which now carefully chooses which delta logs need to be compacted onto
-their columnar base file, to keep the query performance in check (larger delta logs would incur longer merge times with merge data on query side)</p>
+<p>Merge on read table is a superset of copy on write, in the sense that it still supports read optimized queries of the table by exposing only the base/columnar files in the latest file slices.
+Additionally, it stores incoming upserts for each file group in a row based delta log, to support snapshot queries by applying the delta log 
+onto the latest version of each file id on-the-fly at query time. Thus, this table type attempts to balance read and write amplification intelligently, to provide near real-time data.
+The most significant change here is to the compactor, which now carefully chooses which delta log files need to be compacted onto
+their columnar base file, to keep query performance in check (larger delta log files would incur longer merge times on the query side).</p>
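The compactor's selection problem can be sketched with a toy heuristic — compact the file groups whose delta logs have grown past a size threshold, largest first, so the query-side merge cost stays bounded. This is purely illustrative and not Hudi's actual compaction strategy:

```python
# Toy compaction-candidate selection (hypothetical threshold and sizes).
def choose_compaction_candidates(delta_log_sizes_mb, threshold_mb=64):
    # keep only file groups whose delta log exceeds the threshold...
    over = [(fg, size) for fg, size in delta_log_sizes_mb.items() if size >= threshold_mb]
    # ...and compact the largest offenders first
    return [fg for fg, _ in sorted(over, key=lambda item: -item[1])]
```

With logs of 10, 80 and 200 MB, only the two large ones are picked, biggest first; the small one keeps absorbing updates cheaply.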
 
-<p>Following illustrates how the storage works, and shows queries on both near-real time table and read optimized table.</p>
+<p>The following illustrates how the table works, and shows the two types of queries - snapshot query and read optimized query.</p>
 
 <figure>
     <img class="docimage" src="/assets/images/hudi_mor.png" alt="hudi_mor.png" style="max-width: 100%" />
@@ -560,22 +562,22 @@ their columnar base file, to keep the query performance in check (larger delta l
<p>There are a lot of interesting things happening in this example, which bring out the subtleties in the approach.</p>
 
 <ul>
-  <li>We now have commits every 1 minute or so, something we could not do in the other storage type.</li>
-  <li>Within each file id group, now there is an delta log, which holds incoming updates to records in the base columnar files. In the example, the delta logs hold
+  <li>We now have commits every 1 minute or so, something we could not do in the other table type.</li>
+  <li>Within each file id group, there is now a delta log file, which holds incoming updates to records in the base columnar files. In the example, the delta log files hold
  all the data from 10:05 to 10:10. The base columnar files are still versioned with the commit, as before.
- Thus, if one were to simply look at base files alone, then the storage layout looks exactly like a copy on write table.</li>
+ Thus, if one were to simply look at base files alone, then the table layout looks exactly like a copy on write table.</li>
   <li>A periodic compaction process reconciles these changes from the delta log and produces a new version of base file, just like what happened at 10:05 in the example.</li>
-  <li>There are two ways of querying the same underlying storage: ReadOptimized (RO) Table and Near-Realtime (RT) table, depending on whether we chose query performance or freshness of data.</li>
-  <li>The semantics around when data from a commit is available to a query changes in a subtle way for the RO table. Note, that such a query
- running at 10:10, wont see data after 10:05 above, while a query on the RT table always sees the freshest data.</li>
+  <li>There are two ways of querying the same underlying table: Read Optimized query and Snapshot query, depending on whether we choose query performance or freshness of data.</li>
+  <li>The semantics around when data from a commit is available to a query changes in a subtle way for a read optimized query. Note, that such a query
+ running at 10:10, won’t see data after 10:05 above, while a snapshot query always sees the freshest data.</li>
  <li>When we trigger compaction &amp; what it decides to compact hold the key to solving these hard problems. By implementing a compaction
- strategy, where we aggressively compact the latest partitions compared to older partitions, we could ensure the RO Table sees data
+ strategy, where we aggressively compact the latest partitions compared to older partitions, we could ensure the read optimized queries see data
  published within X minutes in a consistent fashion.</li>
 </ul>
 
-<p>The intention of merge on read storage is to enable near real-time processing directly on top of DFS, as opposed to copying
+<p>The intention of merge on read table is to enable near real-time processing directly on top of DFS, as opposed to copying
 data out to specialized systems, which may not be able to handle the data volume. There are also a few secondary side benefits to 
-this storage such as reduced write amplification by avoiding synchronous merge of data, i.e, the amount of data written per 1 bytes of data in a batch</p>
+this table, such as reduced write amplification by avoiding synchronous merge of data, i.e., the amount of data written per byte of data in a batch</p>
 
 
       </section>
diff --git a/content/docs/configurations.html b/content/docs/configurations.html
index c6d3aff..6cf54f9 100644
--- a/content/docs/configurations.html
+++ b/content/docs/configurations.html
@@ -4,7 +4,7 @@
     <meta charset="utf-8">
 
 <!-- begin _includes/seo.html --><title>Configurations - Apache Hudi</title>
-<meta name="description" content="This page covers the different ways of configuring your job to write/read Hudi datasets. At a high level, you can control behaviour at few levels.">
+<meta name="description" content="This page covers the different ways of configuring your job to write/read Hudi tables. At a high level, you can control behaviour at few levels.">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
@@ -13,7 +13,7 @@
 <meta property="og:url" content="https://hudi.apache.org/docs/configurations.html">
 
 
-  <meta property="og:description" content="This page covers the different ways of configuring your job to write/read Hudi datasets. At a high level, you can control behaviour at few levels.">
+  <meta property="og:description" content="This page covers the different ways of configuring your job to write/read Hudi tables. At a high level, you can control behaviour at few levels.">
 
 
 
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#talking-to-cloud-storage">Talking to Cloud Storage</a></li>
   <li><a href="#spark-datasource">Spark Datasource Configs</a>
@@ -355,15 +355,15 @@
           </nav>
         </aside>
         
-        <p>This page covers the different ways of configuring your job to write/read Hudi datasets. 
+        <p>This page covers the different ways of configuring your job to write/read Hudi tables. 
At a high level, you can control behaviour at a few levels.</p>
 
 <ul>
-  <li><strong><a href="#spark-datasource">Spark Datasource Configs</a></strong> : These configs control the Hudi Spark Datasource, providing ability to define keys/partitioning, pick out the write operation, specify how to merge records or choosing view type to read.</li>
+  <li><strong><a href="#spark-datasource">Spark Datasource Configs</a></strong> : These configs control the Hudi Spark Datasource, providing the ability to define keys/partitioning, pick the write operation, specify how to merge records or choose the query type to read.</li>
  <li><strong><a href="#writeclient-configs">WriteClient Configs</a></strong> : Internally, the Hudi datasource uses an RDD based <code class="highlighter-rouge">HoodieWriteClient</code> api to actually perform writes to storage. These configs provide deep control over lower level aspects like 
 file sizing, compression, parallelism, compaction, write schema, cleaning etc. Although Hudi provides sane defaults, from time to time these configs may need to be tweaked to optimize for specific workloads.</li>
   <li><strong><a href="#PAYLOAD_CLASS_OPT_KEY">RecordPayload Config</a></strong> : This is the lowest level of customization offered by Hudi. Record payloads define how to produce new values to upsert based on incoming new record and 
- stored old record. Hudi provides default implementations such as <code class="highlighter-rouge">OverwriteWithLatestAvroPayload</code> which simply update storage with the latest/last-written record. 
+ stored old record. Hudi provides default implementations such as <code class="highlighter-rouge">OverwriteWithLatestAvroPayload</code> which simply update the table with the latest/last-written record. 
  This can be overridden to a custom class extending <code class="highlighter-rouge">HoodieRecordPayload</code> class, on both datasource and WriteClient levels.</li>
 </ul>
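To make the lowest level concrete, here is a toy sketch (in Python, not the Java `HoodieRecordPayload` API) of what an overwrite-with-latest style payload decides, assuming a precombine field named `ts`:

```python
# Toy payload: given the stored record and the incoming one, keep whichever is
# newer according to the precombine field (assumed here to be 'ts').
def overwrite_with_latest(stored, incoming, precombine_field="ts"):
    if stored is None:                       # first write for this key: just insert
        return incoming
    if incoming[precombine_field] >= stored[precombine_field]:
        return incoming                      # incoming is newer: overwrite
    return stored                            # stored is newer: keep it
```

A custom payload would replace this decision with anything else — partial field merges, deletes on a tombstone flag, etc. — which is exactly the extension point the `HoodieRecordPayload` interface offers.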
 
@@ -399,20 +399,20 @@ The actual datasource level configs are listed below.</p>
 <span class="o">.</span><span class="na">save</span><span class="o">(</span><span class="n">basePath</span><span class="o">);</span>
 </code></pre></div></div>
 
-<p>Options useful for writing datasets via <code class="highlighter-rouge">write.format.option(...)</code></p>
+<p>Options useful for writing tables via <code class="highlighter-rouge">write.format.option(...)</code></p>
 
 <h4 id="TABLE_NAME_OPT_KEY">TABLE_NAME_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.write.table.name</code> [Required]<br />
-  <span style="color:grey">Hive table name, to register the dataset into.</span></p>
+  <span style="color:grey">Hive table name to register the table under.</span></p>
 
 <h4 id="OPERATION_OPT_KEY">OPERATION_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.write.operation</code>, Default: <code class="highlighter-rouge">upsert</code><br />
  <span style="color:grey">whether to do upsert, insert or bulkinsert for the write operation. Use <code class="highlighter-rouge">bulkinsert</code> to load new data into a table, and thereafter use <code class="highlighter-rouge">upsert</code>/<code class="highlighter-rouge">insert</code>. 
  bulk insert uses a disk based write path to scale to load large inputs without the need to cache it.</span></p>
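The upsert semantics — update records whose key already exists, insert the rest — can be shown in miniature (toy code; a keyed dictionary stands in for Hudi's indexed table, and this skips the index lookup that `insert`/`bulkinsert` avoid):

```python
# Toy upsert: keys already in the table are updated, new keys are inserted.
def upsert(table, batch):
    merged = dict(table)   # existing records
    merged.update(batch)   # incoming batch wins on key collisions
    return merged
```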
 
-<h4 id="STORAGE_TYPE_OPT_KEY">STORAGE_TYPE_OPT_KEY</h4>
-<p>Property: <code class="highlighter-rouge">hoodie.datasource.write.storage.type</code>, Default: <code class="highlighter-rouge">COPY_ON_WRITE</code> <br />
-  <span style="color:grey">The storage type for the underlying data, for this write. This can’t change between writes.</span></p>
+<h4 id="TABLE_TYPE_OPT_KEY">TABLE_TYPE_OPT_KEY</h4>
+<p>Property: <code class="highlighter-rouge">hoodie.datasource.write.table.type</code>, Default: <code class="highlighter-rouge">COPY_ON_WRITE</code> <br />
+  <span style="color:grey">The table type for the underlying data, for this write. This can’t change between writes.</span></p>
 
 <h4 id="PRECOMBINE_FIELD_OPT_KEY">PRECOMBINE_FIELD_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.write.precombine.field</code>, Default: <code class="highlighter-rouge">ts</code> <br />
@@ -450,7 +450,7 @@ This is useful to store checkpointing information, in a consistent way with the
 
 <h4 id="HIVE_SYNC_ENABLED_OPT_KEY">HIVE_SYNC_ENABLED_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.hive_sync.enable</code>, Default: <code class="highlighter-rouge">false</code> <br />
-  <span style="color:grey">When set to true, register/sync the dataset to Apache Hive metastore</span></p>
+  <span style="color:grey">When set to true, register/sync the table to Apache Hive metastore</span></p>
 
 <h4 id="HIVE_DATABASE_OPT_KEY">HIVE_DATABASE_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.hive_sync.database</code>, Default: <code class="highlighter-rouge">default</code> <br />
@@ -474,7 +474,7 @@ This is useful to store checkpointing information, in a consistent way with the
 
 <h4 id="HIVE_PARTITION_FIELDS_OPT_KEY">HIVE_PARTITION_FIELDS_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.hive_sync.partition_fields</code>, Default: ` ` <br />
-  <span style="color:grey">field in the dataset to use for determining hive partition columns.</span></p>
+  <span style="color:grey">field in the table to use for determining hive partition columns.</span></p>
 
 <h4 id="HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY">HIVE_PARTITION_EXTRACTOR_CLASS_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.hive_sync.partition_extractor_class</code>, Default: <code class="highlighter-rouge">org.apache.hudi.hive.SlashEncodedDayPartitionValueExtractor</code> <br />
@@ -486,13 +486,13 @@ This is useful to store checkpointing information, in a consistent way with the
 
 <h3 id="read-options">Read Options</h3>
 
-<p>Options useful for reading datasets via <code class="highlighter-rouge">read.format.option(...)</code></p>
+<p>Options useful for reading tables via <code class="highlighter-rouge">read.format.option(...)</code></p>
 
-<h4 id="VIEW_TYPE_OPT_KEY">VIEW_TYPE_OPT_KEY</h4>
-<p>Property: <code class="highlighter-rouge">hoodie.datasource.view.type</code>, Default: <code class="highlighter-rouge">read_optimized</code> <br />
+<h4 id="QUERY_TYPE_OPT_KEY">QUERY_TYPE_OPT_KEY</h4>
+<p>Property: <code class="highlighter-rouge">hoodie.datasource.query.type</code>, Default: <code class="highlighter-rouge">snapshot</code> <br />
 <span style="color:grey">Whether data needs to be read, in incremental mode (new data since an instantTime)
 (or) Read Optimized mode (obtain latest view, based on columnar data)
-(or) Real time mode (obtain latest view, based on row &amp; columnar data)</span></p>
+(or) Snapshot mode (obtain latest view, based on row &amp; columnar data)</span></p>
 
 <h4 id="BEGIN_INSTANTTIME_OPT_KEY">BEGIN_INSTANTTIME_OPT_KEY</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.datasource.read.begin.instanttime</code>, [Required in incremental mode] <br />
@@ -530,15 +530,15 @@ HoodieWriteConfig can be built using a builder pattern as below.</p>
 
 <h4 id="withSchema">withSchema(schema_str)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.avro.schema</code> [Required]<br />
-<span style="color:grey">This is the current reader avro schema for the dataset. This is a string of the entire schema. HoodieWriteClient uses this schema to pass on to implementations of HoodieRecordPayload to convert from the source format to avro record. This is also used when re-writing records during an update. </span></p>
+<span style="color:grey">This is the current reader avro schema for the table. This is a string of the entire schema. HoodieWriteClient uses this schema to pass on to implementations of HoodieRecordPayload to convert from the source format to avro record. This is also used when re-writing records during an update. </span></p>
 
 <h4 id="forTable">forTable(table_name)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.table.name</code> [Required] <br />
- <span style="color:grey">Table name for the dataset, will be used for registering with Hive. Needs to be same across runs.</span></p>
+ <span style="color:grey">Table name that will be used for registering with Hive. Needs to be same across runs.</span></p>
 
 <h4 id="withBulkInsertParallelism">withBulkInsertParallelism(bulk_insert_parallelism = 1500)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.bulkinsert.shuffle.parallelism</code><br />
-<span style="color:grey">Bulk insert is meant to be used for large initial imports and this parallelism determines the initial number of files in your dataset. Tune this to achieve a desired optimal size during initial import.</span></p>
+<span style="color:grey">Bulk insert is meant to be used for large initial imports and this parallelism determines the initial number of files in your table. Tune this to achieve a desired optimal size during initial import.</span></p>
 
 <h4 id="withParallelism">withParallelism(insert_shuffle_parallelism = 1500, upsert_shuffle_parallelism = 1500)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.insert.shuffle.parallelism</code>, <code class="highlighter-rouge">hoodie.upsert.shuffle.parallelism</code><br />
@@ -657,7 +657,7 @@ HoodieWriteConfig can be built using a builder pattern as below.</p>
 
 <h4 id="logFileToParquetCompressionRatio">logFileToParquetCompressionRatio(logFileToParquetCompressionRatio = 0.35)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.logfile.to.parquet.compression.ratio</code> <br />
-<span style="color:grey">Expected additional compression as records move from log files to parquet. Used for merge_on_read storage to send inserts into log files &amp; control the size of compacted parquet file.</span></p>
+<span style="color:grey">Expected additional compression as records move from log files to parquet. Used for merge_on_read tables to send inserts into log files &amp; control the size of the compacted parquet file.</span></p>
 
 <h4 id="parquetCompressionCodec">parquetCompressionCodec(parquetCompressionCodec = gzip)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.parquet.compression.codec</code> <br />
@@ -673,7 +673,7 @@ HoodieWriteConfig can be built using a builder pattern as below.</p>
 
 <h4 id="retainCommits">retainCommits(no_of_commits_to_retain = 24)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.cleaner.commits.retained</code> <br />
-<span style="color:grey">Number of commits to retain. So data will be retained for num_of_commits * time_between_commits (scheduled). This also directly translates into how much you can incrementally pull on this dataset</span></p>
+<span style="color:grey">Number of commits to retain. So data will be retained for num_of_commits * time_between_commits (scheduled). This also directly translates into how much you can incrementally pull on this table.</span></p>
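For example, with hypothetical numbers — 24 retained commits and a commit landing every 30 minutes — the incremental-pull window works out as:

```python
# Hypothetical schedule: 24 retained commits, one commit every 30 minutes.
commits_retained = 24
minutes_between_commits = 30

# num_of_commits * time_between_commits = how far back incremental pulls can reach
retention_minutes = commits_retained * minutes_between_commits  # 720 minutes
retention_hours = retention_minutes / 60                        # 12 hours
```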
 
 <h4 id="archiveCommitsWith">archiveCommitsWith(minCommits = 96, maxCommits = 128)</h4>
 <p>Property: <code class="highlighter-rouge">hoodie.keep.min.commits</code>, <code class="highlighter-rouge">hoodie.keep.max.commits</code> <br />
diff --git a/content/docs/0.5.0-admin_guide.html b/content/docs/deployment.html
similarity index 55%
copy from content/docs/0.5.0-admin_guide.html
copy to content/docs/deployment.html
index 68b2b01..1a25f23 100644
--- a/content/docs/0.5.0-admin_guide.html
+++ b/content/docs/deployment.html
@@ -3,17 +3,17 @@
   <head>
     <meta charset="utf-8">
 
-<!-- begin _includes/seo.html --><title>Administering Hudi Pipelines - Apache Hudi</title>
-<meta name="description" content="Admins/ops can gain visibility into Hudi datasets/pipelines in the following ways">
+<!-- begin _includes/seo.html --><title>Deployment Guide - Apache Hudi</title>
+<meta name="description" content="This section provides all the help you need to deploy and operate Hudi tables at scale. Specifically, we will cover the following aspects.">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
 <meta property="og:site_name" content="">
-<meta property="og:title" content="Administering Hudi Pipelines">
-<meta property="og:url" content="https://hudi.apache.org/docs/0.5.0-admin_guide.html">
+<meta property="og:title" content="Deployment Guide">
+<meta property="og:url" content="https://hudi.apache.org/docs/deployment.html">
 
 
-  <meta property="og:description" content="Admins/ops can gain visibility into Hudi datasets/pipelines in the following ways">
+  <meta property="og:description" content="This section provides all the help you need to deploy and operate Hudi tables at scale. Specifically, we will cover the following aspects.">
 
 
 
@@ -147,7 +147,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-quick-start-guide.html" class="">Quick Start</a></li>
+              <li><a href="/docs/quick-start-guide.html" class="">Quick Start</a></li>
             
 
           
@@ -158,7 +158,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-use_cases.html" class="">Use Cases</a></li>
+              <li><a href="/docs/use_cases.html" class="">Use Cases</a></li>
             
 
           
@@ -169,7 +169,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-powered_by.html" class="">Talks & Powered By</a></li>
+              <li><a href="/docs/powered_by.html" class="">Talks & Powered By</a></li>
             
 
           
@@ -180,7 +180,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-comparison.html" class="">Comparison</a></li>
+              <li><a href="/docs/comparison.html" class="">Comparison</a></li>
             
 
           
@@ -191,7 +191,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-docker_demo.html" class="">Docker Demo</a></li>
+              <li><a href="/docs/docker_demo.html" class="">Docker Demo</a></li>
             
 
           
@@ -214,7 +214,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-concepts.html" class="">Concepts</a></li>
+              <li><a href="/docs/concepts.html" class="">Concepts</a></li>
             
 
           
@@ -225,7 +225,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-writing_data.html" class="">Writing Data</a></li>
+              <li><a href="/docs/writing_data.html" class="">Writing Data</a></li>
             
 
           
@@ -236,7 +236,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-querying_data.html" class="">Querying Data</a></li>
+              <li><a href="/docs/querying_data.html" class="">Querying Data</a></li>
             
 
           
@@ -247,7 +247,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-configurations.html" class="">Configuration</a></li>
+              <li><a href="/docs/configurations.html" class="">Configuration</a></li>
             
 
           
@@ -258,7 +258,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-performance.html" class="">Performance</a></li>
+              <li><a href="/docs/performance.html" class="">Performance</a></li>
             
 
           
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-admin_guide.html" class="active">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="active">Deployment</a></li>
             
 
           
@@ -292,7 +292,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-docs-versions.html" class="">Docs Versions</a></li>
+              <li><a href="/docs/docs-versions.html" class="">Docs Versions</a></li>
             
 
           
@@ -303,7 +303,7 @@
             
 
             
-              <li><a href="/docs/0.5.0-privacy.html" class="">Privacy Policy</a></li>
+              <li><a href="/docs/privacy.html" class="">Privacy Policy</a></li>
             
 
           
@@ -324,7 +324,7 @@
     <div class="page__inner-wrap">
       
         <header>
-          <h1 id="page-title" class="page__title" itemprop="headline">Administering Hudi Pipelines
+          <h1 id="page-title" class="page__title" itemprop="headline">Deployment Guide
 </h1>
         </header>
       
@@ -333,9 +333,17 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
-  <li><a href="#admin-cli">Admin CLI</a>
+  <li><a href="#deploying">Deploying</a>
+    <ul>
+      <li><a href="#deltastreamer">DeltaStreamer</a></li>
+      <li><a href="#spark-datasource-writer-jobs">Spark Datasource Writer Jobs</a></li>
+    </ul>
+  </li>
+  <li><a href="#upgrading">Upgrading</a></li>
+  <li><a href="#migrating">Migrating</a></li>
+  <li><a href="#cli">CLI</a>
     <ul>
       <li><a href="#inspecting-commits">Inspecting Commits</a></li>
       <li><a href="#drilling-down-to-a-specific-commit">Drilling Down to a specific Commit</a></li>
@@ -344,12 +352,12 @@
       <li><a href="#archived-commits">Archived Commits</a></li>
       <li><a href="#compactions">Compactions</a></li>
       <li><a href="#validate-compaction">Validate Compaction</a></li>
-      <li><a href="#unscheduling-compaction">UnScheduling Compaction</a></li>
+      <li><a href="#unscheduling-compaction">Unscheduling Compaction</a></li>
       <li><a href="#repair-compaction">Repair Compaction</a></li>
     </ul>
   </li>
-  <li><a href="#metrics">Metrics</a></li>
-  <li><a href="#troubleshooting">Troubleshooting Failures</a>
+  <li><a href="#monitoring">Monitoring</a></li>
+  <li><a href="#troubleshooting">Troubleshooting</a>
     <ul>
       <li><a href="#missing-records">Missing records</a></li>
       <li><a href="#duplicates">Duplicates</a></li>
@@ -360,45 +368,208 @@
           </nav>
         </aside>
         
-        <p>Admins/ops can gain visibility into Hudi datasets/pipelines in the following ways</p>
+        <p>This section provides all the help you need to deploy and operate Hudi tables at scale. 
+Specifically, we will cover the following aspects.</p>
 
 <ul>
-  <li><a href="#admin-cli">Administering via the Admin CLI</a></li>
-  <li><a href="#metrics">Graphite metrics</a></li>
-  <li><a href="#spark-ui">Spark UI of the Hudi Application</a></li>
+  <li><a href="#deploying">Deployment Model</a> : How various Hudi components are deployed and managed.</li>
+  <li><a href="#upgrading">Upgrading Versions</a> : Picking up new releases of Hudi, guidelines and general best-practices.</li>
+  <li><a href="#migrating">Migrating to Hudi</a> : How to migrate your existing tables to Apache Hudi.</li>
+  <li><a href="#cli">Interacting via CLI</a> : Using the CLI to perform maintenance or deeper introspection.</li>
+  <li><a href="#monitoring">Monitoring</a> : Tracking metrics from your Hudi tables using popular tools.</li>
+  <li><a href="#troubleshooting">Troubleshooting</a> : Uncovering, triaging and resolving issues in production.</li>
 </ul>
 
-<p>This section provides a glimpse into each of these, with some general guidance on <a href="#troubleshooting">troubleshooting</a></p>
+<h2 id="deploying">Deploying</h2>
+
+<p>All in all, Hudi deploys with no long-running servers or additional infrastructure cost to your data lake. In fact, Hudi pioneered this model of building a transactional distributed storage layer
+using existing infrastructure, and it’s heartening to see other systems adopting similar approaches as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs), deployed per standard Apache Spark <a href="https://spark.apache.org/docs/latest/cluster-overview.html">recommendations</a>.
+Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto and hence no additional infrastructure is necessary.</p>
+
+<p>A typical Hudi data ingestion can be achieved in 2 modes. In single run mode, Hudi ingestion reads the next batch of data, ingests it to the Hudi table and exits. In continuous mode, Hudi ingestion runs as a long-running service executing ingestion in a loop.</p>
+
+<p>With Merge-On-Read tables, Hudi ingestion also needs to take care of compacting delta files. Compaction can be performed either asynchronously, running concurrently with ingestion, or serially, with one following the other.</p>
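+<p>As a minimal sketch, compaction behavior on a Merge-On-Read table can be tuned through write properties such as the following (the values shown are illustrative, not recommendations):</p>
+
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code># run compaction serially within the writer, after ingestion
+hoodie.compact.inline=true
+# trigger a compaction once every N delta commits (example value)
+hoodie.compact.inline.max.delta.commits=5
+</code></pre></div></div>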
+
+<h3 id="deltastreamer">DeltaStreamer</h3>
+
+<p><a href="/docs/writing_data.html#deltastreamer">DeltaStreamer</a> is the standalone utility to incrementally pull upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingest them into Hudi tables. It runs as a Spark application in 2 modes.</p>
+
+<ul>
+  <li><strong>Run Once Mode</strong> : In this mode, Deltastreamer performs one ingestion round which includes incrementally pulling events from upstream sources and ingesting them to hudi table. Background operations like cleaning old file versions and archiving hoodie timeline are automatically executed as part of the run. For Merge-On-Read tables, Compaction is also run inline as part of ingestion unless disabled by passing the flag “–disable-compaction”. By default, Compaction is run [...]
+</ul>
+
+<p>Here is an example invocation for reading from a Kafka topic in single-run mode and writing to a Merge-On-Read table on a YARN cluster.</p>
+
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span><span class="n">hoodie</span><span class="o">]</span><span class="err">$</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="n">packages</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">:</s [...]
+ <span class="o">--</span><span class="n">master</span> <span class="n">yarn</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">deploy</span><span class="o">-</span><span class="n">mode</span> <span class="n">cluster</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">num</span><span class="o">-</span><span class="n">executors</span> <span class="mi">10</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">executor</span><span class="o">-</span><span class="n">memory</span> <span class="mi">3</span><span class="n">g</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="n">memory</span> <span class="mi">6</span><span class="n">g</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">driver</span><span class="o">.</span><span class="na">extraJavaOptions</span><span class="o">=</span><span class="s">"-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/varadarb_ds_driver.hprof"</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">extraJavaOptions</span><span class="o">=</span><span class="s">"-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/varadarb_ds_executor.hprof"</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">queue</span> <span class="n">hadoop</span><span class="o">-</span><span class="n">platform</span><span class="o">-</span><span class="n">queue</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">scheduler</span><span class="o">.</span><span class="na">mode</span><span class="o">=</span><span class="no">FAIR</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">yarn</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">memoryOverhead</span><span class="o">=</span><span class="mi">1072</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">yarn</span><span class="o">.</span><span class="na">driver</span><span class="o">.</span><span class="na">memoryOverhead</span><span class="o">=</span><span class="mi">2048</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">task</span><span class="o">.</span><span class="na">cpus</span><span class="o">=</span><span class="mi">1</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">cores</span><span class="o">=</span><span class="mi">1</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">task</span><span class="o">.</span><span class="na">maxFailures</span><span class="o">=</span><span class="mi">10</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">memory</span><span class="o">.</span><span class="na">fraction</span><span class="o">=</span><span class="mf">0.4</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">rdd</span><span class="o">.</span><span class="na">compress</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">kryoserializer</span><span class="o">.</span><span class="na">buffer</span><span class="o">.</span><span class="na">max</span><span class="o">=</span><span class="mi">200</span><span class="n">m</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">serializer</span><span class="o">=</span><span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">spark</span><span class="o">.</span><span class="na">serializer</span><span class="o">.</span><span class="na">KryoSerializer</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">memory</span><span class="o">.</span><span class="na">storageFraction</span><span class="o">=</span><span class="mf">0.1</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">shuffle</span><span class="o">.</span><span class="na">service</span><span class="o">.</span><span class="na">enabled</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">hive</span><span class="o">.</span><span class="na">convertMetastoreParquet</span><span class="o">=</span><span class="kc">false</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">ui</span><span class="o">.</span><span class="na">port</span><span class="o">=</span><span class="mi">5555</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">driver</span><span class="o">.</span><span class="na">maxResultSize</span><span class="o">=</span><span class="mi">3</span><span class="n">g</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">heartbeatInterval</span><span class="o">=</span><span class="mi">120</span><span class="n">s</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">network</span><span class="o">.</span><span class="na">timeout</span><span class="o">=</span><span class="mi">600</span><span class="n">s</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">eventLog</span><span class="o">.</span><span class="na">overwrite</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">eventLog</span><span class="o">.</span><span class="na">enabled</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">eventLog</span><span class="o">.</span><span class="na">dir</span><span class="o">=</span><span class="nl">hdfs:</span><span class="c1">///user/spark/applicationHistory \</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">yarn</span><span class="o">.</span><span class="na">max</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">failures</span><span class="o">=</span><span class="mi">10</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">catalogImplementation</span><span class="o">=</span><span class="n">hive</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">shuffle</span><span class="o">.</span><span class="na">partitions</span><span class="o">=</span><span class="mi">100</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class</span><span class="err">-</span><span class="nc">path</span> <span class="n">$HADOOP_CONF_DIR</span> <span class="err">\</span>
+ <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">table</span><span class="o">-</span><span class="n">type</span> <span class="no">MERGE_ON_READ</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">sources</span><span class="o">.</span><span class="na">JsonKafkaSource</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="n">ordering</span><span class="o">-</span><span class="n">field</span> <span class="n">ts</span>  <span class="err">\</span>
+ <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">base</span><span class="o">-</span><span class="n">path</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hive</span><span class="o">/</span><span class="n">warehouse</span><span class="o">/</span><span class="n">stock_ticks_mor</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">table</span> <span class="n">stock_ticks_mor</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">props</span> <span class="o">/</span><span class="kt">var</span><span class="o">/</span><span class="n">demo</span><span class="o">/</span><span class="n">config</span><span class="o">/</span><span class="n">kafka</span><span class="o">-</span><span class="n">source</span><span class="o">.</span><span class="na">properties</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">schemaprovider</span><span class="o">-</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">schema</span><span class="o">.</span><span class="na">FilebasedSchemaProvider</span>
+</code></pre></div></div>
+
+<ul>
+  <li><strong>Continuous Mode</strong> :  Here, deltastreamer runs an infinite loop with each round performing one ingestion round as described in <strong>Run Once Mode</strong>. The frequency of data ingestion can be controlled by the configuration “–min-sync-interval-seconds”. For Merge-On-Read tables, Compaction is run in asynchronous fashion concurrently with ingestion unless disabled by passing the flag “–disable-compaction”. Every ingestion run triggers a compaction request asynchr [...]
+</ul>
+
+<p>Here is an example invocation for reading from a Kafka topic in continuous mode and writing to a Merge-On-Read table on a YARN cluster.</p>
+
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span><span class="n">hoodie</span><span class="o">]</span><span class="err">$</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="n">packages</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">:</s [...]
+ <span class="o">--</span><span class="n">master</span> <span class="n">yarn</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">deploy</span><span class="o">-</span><span class="n">mode</span> <span class="n">cluster</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">num</span><span class="o">-</span><span class="n">executors</span> <span class="mi">10</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">executor</span><span class="o">-</span><span class="n">memory</span> <span class="mi">3</span><span class="n">g</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="n">memory</span> <span class="mi">6</span><span class="n">g</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">driver</span><span class="o">.</span><span class="na">extraJavaOptions</span><span class="o">=</span><span class="s">"-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/varadarb_ds_driver.hprof"</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">extraJavaOptions</span><span class="o">=</span><span class="s">"-XX:+PrintGCApplicationStoppedTime -XX:+PrintGCApplicationConcurrentTime -XX:+PrintGCTimeStamps -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp/varadarb_ds_executor.hprof"</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">queue</span> <span class="n">hadoop</span><span class="o">-</span><span class="n">platform</span><span class="o">-</span><span class="n">queue</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">scheduler</span><span class="o">.</span><span class="na">mode</span><span class="o">=</span><span class="no">FAIR</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">yarn</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">memoryOverhead</span><span class="o">=</span><span class="mi">1072</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">yarn</span><span class="o">.</span><span class="na">driver</span><span class="o">.</span><span class="na">memoryOverhead</span><span class="o">=</span><span class="mi">2048</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">task</span><span class="o">.</span><span class="na">cpus</span><span class="o">=</span><span class="mi">1</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">cores</span><span class="o">=</span><span class="mi">1</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">task</span><span class="o">.</span><span class="na">maxFailures</span><span class="o">=</span><span class="mi">10</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">memory</span><span class="o">.</span><span class="na">fraction</span><span class="o">=</span><span class="mf">0.4</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">rdd</span><span class="o">.</span><span class="na">compress</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">kryoserializer</span><span class="o">.</span><span class="na">buffer</span><span class="o">.</span><span class="na">max</span><span class="o">=</span><span class="mi">200</span><span class="n">m</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">serializer</span><span class="o">=</span><span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">spark</span><span class="o">.</span><span class="na">serializer</span><span class="o">.</span><span class="na">KryoSerializer</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">memory</span><span class="o">.</span><span class="na">storageFraction</span><span class="o">=</span><span class="mf">0.1</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">shuffle</span><span class="o">.</span><span class="na">service</span><span class="o">.</span><span class="na">enabled</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">hive</span><span class="o">.</span><span class="na">convertMetastoreParquet</span><span class="o">=</span><span class="kc">false</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">ui</span><span class="o">.</span><span class="na">port</span><span class="o">=</span><span class="mi">5555</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">driver</span><span class="o">.</span><span class="na">maxResultSize</span><span class="o">=</span><span class="mi">3</span><span class="n">g</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">heartbeatInterval</span><span class="o">=</span><span class="mi">120</span><span class="n">s</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">network</span><span class="o">.</span><span class="na">timeout</span><span class="o">=</span><span class="mi">600</span><span class="n">s</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">eventLog</span><span class="o">.</span><span class="na">overwrite</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">eventLog</span><span class="o">.</span><span class="na">enabled</span><span class="o">=</span><span class="kc">true</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">eventLog</span><span class="o">.</span><span class="na">dir</span><span class="o">=</span><span class="nl">hdfs:</span><span class="c1">///user/spark/applicationHistory \</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">yarn</span><span class="o">.</span><span class="na">max</span><span class="o">.</span><span class="na">executor</span><span class="o">.</span><span class="na">failures</span><span class="o">=</span><span class="mi">10</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">catalogImplementation</span><span class="o">=</span><span class="n">hive</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">conf</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">.</span><span class="na">shuffle</span><span class="o">.</span><span class="na">partitions</span><span class="o">=</span><span class="mi">100</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class</span><span class="err">-</span><span class="nc">path</span> <span class="n">$HADOOP_CONF_DIR</span> <span class="err">\</span>
+ <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">table</span><span class="o">-</span><span class="n">type</span> <span class="no">MERGE_ON_READ</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">sources</span><span class="o">.</span><span class="na">JsonKafkaSource</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="n">ordering</span><span class="o">-</span><span class="n">field</span> <span class="n">ts</span>  <span class="err">\</span>
+ <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">base</span><span class="o">-</span><span class="n">path</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hive</span><span class="o">/</span><span class="n">warehouse</span><span class="o">/</span><span class="n">stock_ticks_mor</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">table</span> <span class="n">stock_ticks_mor</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">props</span> <span class="o">/</span><span class="kt">var</span><span class="o">/</span><span class="n">demo</span><span class="o">/</span><span class="n">config</span><span class="o">/</span><span class="n">kafka</span><span class="o">-</span><span class="n">source</span><span class="o">.</span><span class="na">properties</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">schemaprovider</span><span class="o">-</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">schema</span><span class="o">.</span><span class="na">FilebasedSchemaProvider</span> <span class="err">\</span>
+ <span class="o">--</span><span class="n">continuous</span>
+</code></pre></div></div>
+
+<h3 id="spark-datasource-writer-jobs">Spark Datasource Writer Jobs</h3>
 
-<h2 id="admin-cli">Admin CLI</h2>
+<p>As described in <a href="/docs/writing_data.html#datasource-writer">Writing Data</a>, you can use spark datasource to ingest to hudi table. This mechanism allows you to ingest any spark dataframe in Hudi format. Hudi Spark DataSource also supports spark streaming to ingest a streaming source to Hudi table. For Merge On Read table types, inline compaction is turned on by default which runs after every ingestion run. The compaction frequency can be changed by setting the property “hoodi [...]
 
-<p>Once hudi has been built, the shell can be fired by via  <code class="highlighter-rouge">cd hudi-cli &amp;&amp; ./hudi-cli.sh</code>.
-A hudi dataset resides on DFS, in a location referred to as the <strong>basePath</strong> and we would need this location in order to connect to a Hudi dataset.
-Hudi library effectively manages this dataset internally, using .hoodie subfolder to track all metadata</p>
+<p>Here is an example invocation using the Spark datasource:</p>
+
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">inputDF</span><span class="o">.</span><span class="na">write</span><span class="o">()</span>
+       <span class="o">.</span><span class="na">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">options</span><span class="o">(</span><span class="n">clientOpts</span><span class="o">)</span> <span class="c1">// any of the Hudi client opts can be passed in as well</span>
+       <span class="o">.</span><span class="na">option</span><span class="o">(</span><span class="nc">DataSourceWriteOptions</span><span class="o">.</span><span class="na">RECORDKEY_FIELD_OPT_KEY</span><span class="o">(),</span> <span class="s">"_row_key"</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">option</span><span class="o">(</span><span class="nc">DataSourceWriteOptions</span><span class="o">.</span><span class="na">PARTITIONPATH_FIELD_OPT_KEY</span><span class="o">(),</span> <span class="s">"partition"</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">option</span><span class="o">(</span><span class="nc">DataSourceWriteOptions</span><span class="o">.</span><span class="na">PRECOMBINE_FIELD_OPT_KEY</span><span class="o">(),</span> <span class="s">"timestamp"</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">option</span><span class="o">(</span><span class="nc">HoodieWriteConfig</span><span class="o">.</span><span class="na">TABLE_NAME</span><span class="o">,</span> <span class="n">tableName</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">mode</span><span class="o">(</span><span class="nc">SaveMode</span><span class="o">.</span><span class="na">Append</span><span class="o">)</span>
+       <span class="o">.</span><span class="na">save</span><span class="o">(</span><span class="n">basePath</span><span class="o">);</span>
+</code></pre></div></div>
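+<p>The batch datasource write above can be adapted for streaming ingestion as well. Here is a minimal sketch using standard Spark structured streaming, assuming a streaming dataframe <code class="highlighter-rouge">inputStreamDF</code>, the same Hudi write options <code class="highlighter-rouge">clientOpts</code> as the batch example, and a checkpoint path of your choice:</p>
+
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code>inputStreamDF.writeStream()
+       .format("org.apache.hudi")
+       .options(clientOpts)                           // same Hudi write options as the batch example
+       .option("checkpointLocation", checkpointPath)  // Spark streaming checkpoint dir (assumed path)
+       .outputMode(OutputMode.Append())
+       .start(basePath);
+</code></pre></div></div>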
+
+<h2 id="upgrading">Upgrading</h2>
+
+<p>New Hudi releases are listed on the <a href="/releases">releases page</a>, with detailed notes that list all the changes and highlights in each release. 
+At the end of the day, Hudi is a storage system, and with that comes a lot of responsibility, which we take seriously.</p>
+
+<p>As general guidelines,</p>
+
+<ul>
+  <li>We strive to keep all changes backwards compatible (i.e. new code can read old data/timeline files), and when we cannot, we will provide upgrade/downgrade tools via the CLI.</li>
+  <li>We cannot always guarantee forward compatibility (i.e. old code being able to read data/timeline files written by a greater version). This is generally the norm, since no new features could be built otherwise.
+However, any such large changes will be turned off by default, for a smooth transition to the newer release. After a few releases, once enough users deem the feature stable in production, we will flip the defaults in a subsequent release.</li>
+  <li>Always upgrade the query bundles (mr-bundle, presto-bundle, spark-bundle) first and then upgrade the writers (deltastreamer, spark jobs using datasource). This often provides the best experience, and it's easy to fix
+any issues by rolling forward/back the writer code (which you typically have more control over).</li>
+  <li>With large, feature-rich releases we recommend migrating slowly, by first testing in staging environments and running your own tests. Upgrading Hudi is no different than upgrading any database system.</li>
+</ul>
+
+<p>Note that release notes can override this information with specific instructions, applicable on a case-by-case basis.</p>
+
+<h2 id="migrating">Migrating</h2>
+
+<p>Currently, migrating to Hudi can be done using two approaches:</p>
+
+<ul>
+  <li><strong>Convert newer partitions to Hudi</strong> : This model is suitable for large event tables (e.g: click streams, ad impressions), which also typically receive writes for the last few days alone. You can convert the last 
+ N partitions to Hudi and proceed writing as if it were a Hudi table to begin with. The Hudi query side code is able to correctly handle both hudi and non-hudi data partitions.</li>
+  <li><strong>Full conversion to Hudi</strong> : This model is suitable if you are currently bulk/full loading the table a few times a day (e.g. database ingestion). Full conversion to Hudi is simply a one-time step (akin to 1 run of your existing job),
+ which moves all of the data into the Hudi format and provides the ability to perform incremental updates for future writes.</li>
+</ul>
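<p>The first approach hinges on picking the latest N partitions. As a minimal sketch (plain Python, not part of Hudi; the day-partitioned <code class="highlighter-rouge">YYYY/MM/DD</code> path format is an assumption based on the examples later on this page), date-style partition paths sort lexicographically in date order, so selecting the newest ones is a simple sort:</p>

```python
def newest_partitions(partition_paths, n):
    # Paths of the form YYYY/MM/DD sort lexicographically in date order,
    # so the last n entries after sorting are the newest partitions.
    return sorted(partition_paths)[-n:]

paths = ["2018/08/29", "2018/08/31", "2018/08/30", "2018/07/15"]
print(newest_partitions(paths, 2))  # ['2018/08/30', '2018/08/31']
```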
+
+<p>For more details, refer to the detailed <a href="/docs/migration_guide.html">migration guide</a>. In the future, we will be supporting seamless zero-copy bootstrap of existing tables with all the upsert/incremental query capabilities fully supported.</p>
+
+<h2 id="cli">CLI</h2>
+
+<p>Once hudi has been built, the shell can be fired up via <code class="highlighter-rouge">cd hudi-cli &amp;&amp; ./hudi-cli.sh</code>. A hudi table resides on DFS, in a location referred to as the <code class="highlighter-rouge">basePath</code>, and 
+we need this location in order to connect to a Hudi table. The Hudi library effectively manages this table internally, using the <code class="highlighter-rouge">.hoodie</code> subfolder to track all metadata.</p>
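<p>As a minimal illustration of that layout convention (plain Python, not part of the Hudi CLI; the only assumption is the <code class="highlighter-rouge">.hoodie</code> subfolder described above), a path is a candidate Hudi <code class="highlighter-rouge">basePath</code> if it carries the metadata subfolder:</p>

```python
import os
import tempfile

def looks_like_hudi_table(base_path: str) -> bool:
    # Hudi keeps all table metadata under <basePath>/.hoodie
    return os.path.isdir(os.path.join(base_path, ".hoodie"))

# Build a hypothetical base path with a .hoodie subfolder.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, ".hoodie"))

print(looks_like_hudi_table(base))                 # True
print(looks_like_hudi_table(tempfile.mkdtemp()))   # False
```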
 
 <p>To initialize a hudi table, use the following command.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mo">06</span> <span class="mi">15</span><span class="o">:</span><span class="mi">56</span><span class="o">:</span><span class="mi">52</span> <span class="no">INFO</span> <span class="n">annotation</span><span class="o">.</span><span class="na">AutowiredAnnotationBeanPostProcessor</ [...]
-<span class="o">============================================</span>
-<span class="o">*</span>                                          <span class="o">*</span>
-<span class="o">*</span>     <span class="n">_</span>    <span class="n">_</span>           <span class="n">_</span>   <span class="n">_</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span> <span class="o">|</span>  <span class="o">|</span> <span class="o">|</span>         <span class="o">|</span> <span class="o">|</span> <span class="o">(</span><span class="n">_</span><span class="o">)</span>              <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span> <span class="o">|</span><span class="n">__</span><span class="o">|</span> <span class="o">|</span>       <span class="n">__</span><span class="o">|</span> <span class="o">|</span>  <span class="o">-</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span>  <span class="n">__</span>  <span class="o">||</span>   <span class="o">|</span> <span class="o">/</span> <span class="n">_</span><span class="err">`</span> <span class="o">|</span> <span class="o">||</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span> <span class="o">|</span>  <span class="o">|</span> <span class="o">||</span>   <span class="o">||</span> <span class="o">(</span><span class="n">_</span><span class="o">|</span> <span class="o">|</span> <span class="o">||</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span><span class="n">_</span><span class="o">|</span>  <span class="o">|</span><span class="n">_</span><span class="o">|</span><span class="err">\</span><span class="n">___</span><span class="o">/</span> <span class="err">\</span><span class="n">____</span><span class="o">/</span> <span class="o">||</span>               <span class="o">*</span>
-<span class="o">*</span>                                          <span class="o">*</span>
-<span class="o">============================================</span>
-
-<span class="nc">Welcome</span> <span class="n">to</span> <span class="nc">Hoodie</span> <span class="no">CLI</span><span class="o">.</span> <span class="nc">Please</span> <span class="n">type</span> <span class="n">help</span> <span class="k">if</span> <span class="n">you</span> <span class="n">are</span> <span class="n">looking</span> <span class="k">for</span> <span class="n">help</span><span class="o">.</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">===================================================================</span>
+<span class="o">*</span>         <span class="n">___</span>                          <span class="n">___</span>                        <span class="o">*</span>
+<span class="o">*</span>        <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>          <span class="n">___</span>           <span class="o">/</span><span class="err">\</span>  <span class="err">\</span>           <span class="n">___</span>         <span class="o">*</span>
+<span class="o">*</span>       <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>         <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>         <span class="o">/</span>  <span class="err">\</span>  <span class="err">\</span>         <span class="o">/</span><span class="err">\</span>  <span class="err">\</span>        <span class="o">*</span>
+<span class="o">*</span>      <span class="o">/</span> <span class="o">/</span><span class="n">__</span><span class="o">/</span>         <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>        <span class="o">/</span> <span class="o">/</span><span class="err">\</span> <span class="err">\</span>  <span class="err">\</span>        <span class="err">\</span> <span class="err">\</span>  <span class="err">\</span>       <span class="o">*</span>
+<span class="o">*</span>     <span class="o">/</span>  <span class="err">\</span>  <span class="err">\</span> <span class="n">___</span>    <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>        <span class="o">/</span> <span class="o">/</span>  <span class="err">\</span> <span class="err">\</span><span class="n">__</span><span class="err">\</span>       <span class="o">/</span>  <span class="err">\</span><span class="n">__</span><span class="err">\</span>     [...]
+<span class="o">*</span>    <span class="o">/</span> <span class="o">/</span><span class="err">\</span> <span class="err">\</span>  <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>  <span class="o">/</span> <span class="o">/</span><span class="n">__</span><span class="o">/</span>  <span class="n">___</span>   <span class="o">/</span> <span class="o">/</span><span class="n">__</span><span class="o">/</span> <span class="err">\</span> <s [...]
+<span class="o">*</span>    <span class="err">\</span><span class="o">/</span>  <span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>  <span class="err">\</span> <span class="err">\</span>  <span class="err">\</span> <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>  <span class="err">\</span> <span class="err">\</span>  <span class="err">\</span> <span class="o" [...]
+<span class="o">*</span>         <span class="err">\</span>  <span class="o">/</span>  <span class="o">/</span>    <span class="err">\</span> <span class="err">\</span>  <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>   <span class="err">\</span> <span class="err">\</span>  <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>   <span class="err">\</span>  <span class="o">/</span><span class="n">__</span><span class="o">/</span>           [...]
+<span class="o">*</span>         <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>      <span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>     <span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>     <span class="err">\</span> <span class="err">\</span><span class="n">__</span><span class="err">\</span>         [...]
+<span class="o">*</span>        <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>        <span class="err">\</span>  <span class="o">/</span>  <span class="o">/</span>       <span class="err">\</span>  <span class="o">/</span>  <span class="o">/</span>       <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>          <span class="o">*</span>
+<span class="o">*</span>        <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>          <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>         <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>    <span class="nc">Apache</span> <span class="nc">Hudi</span> <span class="no">CLI</span>    <span class="o">*</span>
+<span class="o">*</span>                                                                 <span class="o">*</span>
+<span class="o">===================================================================</span>
+
 <span class="n">hudi</span><span class="o">-&gt;</span><span class="n">create</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hive</span><span class="o">/</span><span class="n">warehouse</span><span class="o">/</span><span class="n">table1</span> <span class="o">--</span><span class="n">tableName</span> <span class="n">hoodie_table_1</span> <span class="o">--</span><span class="n">table [...]
 <span class="o">.....</span>
-<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mo">06</span> <span class="mi">15</span><span class="o">:</span><span class="mi">57</span><span class="o">:</span><span class="mi">15</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Finished</span> <span class="nc">Loading</span> <span class="nc">Table [...]
 </code></pre></div></div>
 
 <p>To see the description of a hudi table, use the command:</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">hoodie_table_1</span><span class="o">-&gt;</span><span class="n">desc</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">hoodie_table_1</span><span class="o">-&gt;</span><span class="n">desc</span>
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mo">06</span> <span class="mi">15</span><span class="o">:</span><span class="mi">57</span><span class="o">:</span><span class="mi">19</span> <span class="no">INFO</span> <span class="n">timeline</span><span class="o">.</span><span class="na">HoodieActiveTimeline</span><span class="o">:</span> <span class="nc">Loaded</span> <span class="n">instants</span> <span class="o">[]</span>
     <span class="n">_________________________________________________________</span>
     <span class="o">|</span> <span class="nc">Property</span>                <span class="o">|</span> <span class="nc">Value</span>                        <span class="o">|</span>
@@ -411,35 +582,32 @@ Hudi library effectively manages this dataset internally, using .hoodie subfolde
     <span class="o">|</span> <span class="n">hoodie</span><span class="o">.</span><span class="na">archivelog</span><span class="o">.</span><span class="na">folder</span><span class="o">|</span>                              <span class="o">|</span>
 </code></pre></div></div>
 
-<p>Following is a sample command to connect to a Hudi dataset contains uber trips.</p>
+<p>Following is a sample command to connect to a Hudi table that contains uber trips.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">connect</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">uber</span><span class="o">/</span><span class="n">trips</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">connect</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">app</span><span class="o">/</span><span class="n">uber</span><span class="o">/</span><span class="n">trips</span>
 
-<span class="mi">16</span><span class="o">/</span><span class="mi">10</span><span class="o">/</span><span class="mo">05</span> <span class="mi">23</span><span class="o">:</span><span class="mi">20</span><span class="o">:</span><span class="mi">37</span> <span class="no">INFO</span> <span class="n">model</span><span class="o">.</span><span class="na">HoodieTableMetadata</span><span class="o">:</span> <span class="nc">Attempting</span> <span class="n">to</span> <span class="n">load</span>  [...]
-<span class="mi">16</span><span class="o">/</span><span class="mi">10</span><span class="o">/</span><span class="mo">05</span> <span class="mi">23</span><span class="o">:</span><span class="mi">20</span><span class="o">:</span><span class="mi">37</span> <span class="no">INFO</span> <span class="n">model</span><span class="o">.</span><span class="na">HoodieTableMetadata</span><span class="o">:</span> <span class="nc">Attempting</span> <span class="n">to</span> <span class="n">load</span>  [...]
 <span class="mi">16</span><span class="o">/</span><span class="mi">10</span><span class="o">/</span><span class="mo">05</span> <span class="mi">23</span><span class="o">:</span><span class="mi">20</span><span class="o">:</span><span class="mi">37</span> <span class="no">INFO</span> <span class="n">model</span><span class="o">.</span><span class="na">HoodieTableMetadata</span><span class="o">:</span> <span class="nc">All</span> <span class="n">commits</span> <span class="o">:</span><span  [...]
 <span class="nc">Metadata</span> <span class="k">for</span> <span class="n">table</span> <span class="n">trips</span> <span class="n">loaded</span>
-<span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span>
 </code></pre></div></div>
 
-<p>Once connected to the dataset, a lot of other commands become available. The shell has contextual autocomplete help (press TAB) and below is a list of all commands, few of which are reviewed in this section
+<p>Once connected to the table, many other commands become available. The shell has contextual autocomplete help (press TAB) and below is a list of all commands, a few of which
 are reviewed</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">help</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">help</span>
 <span class="o">*</span> <span class="o">!</span> <span class="o">-</span> <span class="nc">Allows</span> <span class="n">execution</span> <span class="n">of</span> <span class="n">operating</span> <span class="nf">system</span> <span class="o">(</span><span class="no">OS</span><span class="o">)</span> <span class="n">commands</span>
 <span class="o">*</span> <span class="c1">// - Inline comment markers (start of line only)</span>
 <span class="o">*</span> <span class="o">;</span> <span class="o">-</span> <span class="nc">Inline</span> <span class="n">comment</span> <span class="nf">markers</span> <span class="o">(</span><span class="n">start</span> <span class="n">of</span> <span class="n">line</span> <span class="n">only</span><span class="o">)</span>
-<span class="o">*</span> <span class="n">addpartitionmeta</span> <span class="o">-</span> <span class="nc">Add</span> <span class="n">partition</span> <span class="n">metadata</span> <span class="n">to</span> <span class="n">a</span> <span class="n">dataset</span><span class="o">,</span> <span class="k">if</span> <span class="n">not</span> <span class="n">present</span>
+<span class="o">*</span> <span class="n">addpartitionmeta</span> <span class="o">-</span> <span class="nc">Add</span> <span class="n">partition</span> <span class="n">metadata</span> <span class="n">to</span> <span class="n">a</span> <span class="n">table</span><span class="o">,</span> <span class="k">if</span> <span class="n">not</span> <span class="n">present</span>
 <span class="o">*</span> <span class="n">clear</span> <span class="o">-</span> <span class="nc">Clears</span> <span class="n">the</span> <span class="n">console</span>
 <span class="o">*</span> <span class="n">cls</span> <span class="o">-</span> <span class="nc">Clears</span> <span class="n">the</span> <span class="n">console</span>
 <span class="o">*</span> <span class="n">commit</span> <span class="n">rollback</span> <span class="o">-</span> <span class="nc">Rollback</span> <span class="n">a</span> <span class="n">commit</span>
-<span class="o">*</span> <span class="n">commits</span> <span class="n">compare</span> <span class="o">-</span> <span class="nc">Compare</span> <span class="n">commits</span> <span class="n">with</span> <span class="n">another</span> <span class="nc">Hoodie</span> <span class="n">dataset</span>
+<span class="o">*</span> <span class="n">commits</span> <span class="n">compare</span> <span class="o">-</span> <span class="nc">Compare</span> <span class="n">commits</span> <span class="n">with</span> <span class="n">another</span> <span class="nc">Hoodie</span> <span class="n">table</span>
 <span class="o">*</span> <span class="n">commit</span> <span class="n">showfiles</span> <span class="o">-</span> <span class="nc">Show</span> <span class="n">file</span> <span class="n">level</span> <span class="n">details</span> <span class="n">of</span> <span class="n">a</span> <span class="n">commit</span>
 <span class="o">*</span> <span class="n">commit</span> <span class="n">showpartitions</span> <span class="o">-</span> <span class="nc">Show</span> <span class="n">partition</span> <span class="n">level</span> <span class="n">details</span> <span class="n">of</span> <span class="n">a</span> <span class="n">commit</span>
 <span class="o">*</span> <span class="n">commits</span> <span class="n">refresh</span> <span class="o">-</span> <span class="nc">Refresh</span> <span class="n">the</span> <span class="n">commits</span>
 <span class="o">*</span> <span class="n">commits</span> <span class="n">show</span> <span class="o">-</span> <span class="nc">Show</span> <span class="n">the</span> <span class="n">commits</span>
-<span class="o">*</span> <span class="n">commits</span> <span class="n">sync</span> <span class="o">-</span> <span class="nc">Compare</span> <span class="n">commits</span> <span class="n">with</span> <span class="n">another</span> <span class="nc">Hoodie</span> <span class="n">dataset</span>
-<span class="o">*</span> <span class="n">connect</span> <span class="o">-</span> <span class="nc">Connect</span> <span class="n">to</span> <span class="n">a</span> <span class="n">hoodie</span> <span class="n">dataset</span>
+<span class="o">*</span> <span class="n">commits</span> <span class="n">sync</span> <span class="o">-</span> <span class="nc">Compare</span> <span class="n">commits</span> <span class="n">with</span> <span class="n">another</span> <span class="nc">Hoodie</span> <span class="n">table</span>
+<span class="o">*</span> <span class="n">connect</span> <span class="o">-</span> <span class="nc">Connect</span> <span class="n">to</span> <span class="n">a</span> <span class="n">hoodie</span> <span class="n">table</span>
 <span class="o">*</span> <span class="n">date</span> <span class="o">-</span> <span class="nc">Displays</span> <span class="n">the</span> <span class="n">local</span> <span class="n">date</span> <span class="n">and</span> <span class="n">time</span>
 <span class="o">*</span> <span class="n">exit</span> <span class="o">-</span> <span class="nc">Exits</span> <span class="n">the</span> <span class="n">shell</span>
 <span class="o">*</span> <span class="n">help</span> <span class="o">-</span> <span class="nc">List</span> <span class="n">all</span> <span class="n">commands</span> <span class="n">usage</span>
@@ -453,24 +621,23 @@ are reviewed</p>
 <span class="o">*</span> <span class="n">utils</span> <span class="n">loadClass</span> <span class="o">-</span> <span class="nc">Load</span> <span class="n">a</span> <span class="kd">class</span>
 <span class="err">*</span> <span class="nc">version</span> <span class="o">-</span> <span class="nc">Displays</span> <span class="n">shell</span> <span class="n">version</span>
 
-<span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span>
+<span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span>
 </code></pre></div></div>
 
 <h3 id="inspecting-commits">Inspecting Commits</h3>
 
-<p>The task of upserting or inserting a batch of incoming records is known as a <strong>commit</strong> in Hudi. A commit provides basic atomicity guarantees such that only commited data is available for querying.
+<p>The task of upserting or inserting a batch of incoming records is known as a <strong>commit</strong> in Hudi. A commit provides basic atomicity guarantees such that only committed data is available for querying.
 Each commit has a monotonically increasing string/number called the <strong>commit number</strong>. Typically, this is the time at which we started the commit.</p>
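<p>A sketch of that convention (plain Python; the <code class="highlighter-rouge">yyyyMMddHHmmss</code> format is inferred from commit times shown on this page, e.g. <code class="highlighter-rouge">20161005165855</code>): because the timestamp string is fixed-width, later commits compare greater as plain strings, giving the monotonic ordering:</p>

```python
from datetime import datetime, timedelta

def commit_time(dt: datetime) -> str:
    # Fixed-width yyyyMMddHHmmss strings order the same way as the times.
    return dt.strftime("%Y%m%d%H%M%S")

t0 = datetime(2016, 10, 5, 16, 58, 55)
c1 = commit_time(t0)
c2 = commit_time(t0 + timedelta(minutes=1))
print(c1)       # 20161005165855
print(c1 < c2)  # True: the later commit sorts after the earlier one
```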
 
 <p>To view some basic information about the last 10 commits,</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">commits</span> <span class="n">show</span> <span class="o">--</span><span class="n">sortBy</span> <span class="s">"Total Bytes Written"</span> <span class="o">--</span><span class="n">desc</span> <span class="kc">true</span> <span class="o">--</span><span class="n">limit</span> <span class=" [...]
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">commits</span> <span class="n">show</span> <span class="o">--</span><span class="n">sortBy</span> <span class="s">"Total Bytes Written"</span> <span class="o">--</span><span class="n">desc</span> <span class="kc">true</span> <span class="o">--</span><span class="n">limit</span> <span class="mi [...]
     <span class="n">________________________________________________________________________________________________________________________________________________________________________</span>
     <span class="o">|</span> <span class="nc">CommitTime</span>    <span class="o">|</span> <span class="nc">Total</span> <span class="nc">Bytes</span> <span class="nc">Written</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Files</span> <span class="nc">Added</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Files</span> <span class="nc">Updated</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Partiti [...]
     <span class="o">|=======================================================================================================================================================================|</span>
     <span class="o">....</span>
     <span class="o">....</span>
     <span class="o">....</span>
-<span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span>
 </code></pre></div></div>
 
 <p>At the start of each write, Hudi also writes a .inflight commit to the .hoodie folder. You can use the timestamp there to estimate how long the commit has been inflight.</p>
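<p>For instance, assuming the inflight file is named after the commit time (a hypothetical file name for illustration; the exact layout may differ by version), the elapsed time is just the difference between the current time and the parsed timestamp:</p>

```python
from datetime import datetime

def inflight_seconds(inflight_file: str, now: datetime) -> float:
    # e.g. "20161005165855.inflight" -> started at 2016-10-05 16:58:55
    stamp = inflight_file.split(".")[0]
    started = datetime.strptime(stamp, "%Y%m%d%H%M%S")
    return (now - started).total_seconds()

print(inflight_seconds("20161005165855.inflight",
                       datetime(2016, 10, 5, 17, 0, 55)))  # 120.0
```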
@@ -483,7 +650,7 @@ Each commit has a monotonically increasing string/number called the <strong>comm
 
 <p>To understand how the writes spread across specific partitions,</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">commit</span> <span class="n">showpartitions</span> <span class="o">--</span><span class="n">commit</span> <span class="mi">20161005165855</span> <span class="o">--</span><span class="n">sortBy</span> <span class="s">"Total Bytes Written"</span> <span class="o">--</span><span class="n">desc< [...]
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">commit</span> <span class="n">showpartitions</span> <span class="o">--</span><span class="n">commit</span> <span class="mi">20161005165855</span> <span class="o">--</span><span class="n">sortBy</span> <span class="s">"Total Bytes Written"</span> <span class="o">--</span><span class="n">desc</s [...]
     <span class="n">__________________________________________________________________________________________________________________________________________</span>
     <span class="o">|</span> <span class="nc">Partition</span> <span class="nc">Path</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Files</span> <span class="nc">Added</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Files</span> <span class="nc">Updated</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Records</span> <span class="nc">Inserted</span><span class="o">|</span> <span class="nc">Total</spa [...]
     <span class="o">|=========================================================================================================================================|</span>
@@ -493,7 +660,7 @@ Each commit has a monotonically increasing string/number called the <strong>comm
 
 <p>If you need file-level granularity, we can do the following</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">commit</span> <span class="n">showfiles</span> <span class="o">--</span><span class="n">commit</span> <span class="mi">20161005165855</span> <span class="o">--</span><span class="n">sortBy</span> <span class="s">"Partition Path"</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">commit</span> <span class="n">showfiles</span> <span class="o">--</span><span class="n">commit</span> <span class="mi">20161005165855</span> <span class="o">--</span><span class="n">sortBy</span> <span class="s">"Partition Path"</span>
     <span class="n">________________________________________________________________________________________________________________________________________________________</span>
     <span class="o">|</span> <span class="nc">Partition</span> <span class="nc">Path</span><span class="o">|</span> <span class="nc">File</span> <span class="no">ID</span>                             <span class="o">|</span> <span class="nc">Previous</span> <span class="nc">Commit</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Records</span> <span class="nc">Updated</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Records</span> [...]
     <span class="o">|=======================================================================================================================================================|</span>
@@ -503,10 +670,10 @@ Each commit has a monotonically increasing string/number called the <strong>comm
 
 <h3 id="filesystem-view">FileSystem View</h3>
 
-<p>Hudi views each partition as a collection of file-groups with each file-group containing a list of file-slices in commit
-order (See Concepts). The below commands allow users to view the file-slices for a data-set.</p>
+<p>Hudi views each partition as a collection of file-groups, with each file-group containing a list of file-slices in commit order (See <a href="">concepts</a>). 
+The below commands allow users to view the file-slices for a table.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nl">hoodie:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">show</span> <span class="n">fsview</span> <span class="n">all</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">show</span> <span class="n">fsview</span> <span class="n">all</span>
  <span class="o">....</span>
   <span class="n">_______________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________</span>
  <span class="o">|</span> <span class="nc">Partition</span> <span class="o">|</span> <span class="nc">FileId</span> <span class="o">|</span> <span class="nc">Base</span><span class="o">-</span><span class="nc">Instant</span> <span class="o">|</span> <span class="nc">Data</span><span class="o">-</span><span class="nc">File</span> <span class="o">|</span> <span class="nc">Data</span><span class="o">-</span><span class="nc">File</span> <span class="nc">Size</span><span class="o">|</span> <s [...]
@@ -515,21 +682,20 @@ order (See Concepts). The below commands allow users to view the file-slices for
 
 
 
- <span class="nl">hoodie:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">show</span> <span class="n">fsview</span> <span class="n">latest</span> <span class="o">--</span><span class="n">partitionPath</span> <span class="s">"2018/08/31"</span>
+<span class="nl">hudi:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">show</span> <span class="n">fsview</span> <span class="n">latest</span> <span class="o">--</span><span class="n">partitionPath</span> <span class="s">"2018/08/31"</span>
  <span class="o">......</span>
  <span class="n">___________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________ [...]
  <span class="o">|</span> <span class="nc">Partition</span> <span class="o">|</span> <span class="nc">FileId</span> <span class="o">|</span> <span class="nc">Base</span><span class="o">-</span><span class="nc">Instant</span> <span class="o">|</span> <span class="nc">Data</span><span class="o">-</span><span class="nc">File</span> <span class="o">|</span> <span class="nc">Data</span><span class="o">-</span><span class="nc">File</span> <span class="nc">Size</span><span class="o">|</span> <s [...]
  <span class="o">|========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== [...]
  <span class="o">|</span> <span class="mi">2018</span><span class="o">/</span><span class="mi">08</span><span class="o">/</span><span class="mi">31</span><span class="o">|</span> <span class="mi">111415</span><span class="n">c3</span><span class="o">-</span><span class="n">f26d</span><span class="o">-</span><span class="mi">4639</span><span class="o">-</span><span class="mi">86</span><span class="n">c8</span><span class="o">-</span><span class="n">f9956f245ac3</span><span class="o">|</sp [...]
 
- <span class="nl">hoodie:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span>
 </code></pre></div></div>
 
 <h3 id="statistics">Statistics</h3>
 
-<p>Since Hudi directly manages file sizes for DFS dataset, it might be good to get an overall picture</p>
+<p>Since Hudi directly manages file sizes for the DFS table, it might be good to get an overall picture.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">stats</span> <span class="n">filesizes</span> <span class="o">--</span><span class="n">partitionPath</span> <span class="mi">2016</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mo">01</span> <span class="o">--</span><span class="n">sortBy</span>  [...]
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">stats</span> <span class="n">filesizes</span> <span class="o">--</span><span class="n">partitionPath</span> <span class="mi">2016</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mo">01</span> <span class="o">--</span><span class="n">sortBy</span> <s [...]
     <span class="n">________________________________________________________________________________________________</span>
     <span class="o">|</span> <span class="nc">CommitTime</span>    <span class="o">|</span> <span class="nc">Min</span>     <span class="o">|</span> <span class="mi">10</span><span class="n">th</span>    <span class="o">|</span> <span class="mi">50</span><span class="n">th</span>    <span class="o">|</span> <span class="n">avg</span>     <span class="o">|</span> <span class="mi">95</span><span class="n">th</span>    <span class="o">|</span> <span class="nc">Max</span>     <span class="o" [...]
     <span class="o">|===============================================================================================|</span>
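The `stats filesizes` table above reports per-commit size percentiles. As a minimal illustrative sketch (not Hudi's implementation; the sizes below are hypothetical), the same summary can be computed from a list of data-file sizes using a nearest-rank percentile:

```python
# Sketch of the per-commit file-size summary that `stats filesizes` reports.
# Sizes are hypothetical byte counts; percentile uses a simple nearest-rank rule.
def file_size_stats(sizes):
    ordered = sorted(sizes)
    def pct(p):
        # nearest-rank percentile over the sorted sizes
        idx = min(len(ordered) - 1, int(round(p / 100.0 * (len(ordered) - 1))))
        return ordered[idx]
    return {
        "min": ordered[0],
        "10th": pct(10),
        "50th": pct(50),
        "avg": sum(ordered) / len(ordered),
        "95th": pct(95),
        "max": ordered[-1],
    }

stats = file_size_stats([120, 80, 100, 200, 150])
print(stats)
```

Sorting by a percentile column (as `--sortBy` does in the CLI) then surfaces the commits that produced undersized files.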
@@ -540,7 +706,7 @@ order (See Concepts). The below commands allow users to view the file-slices for
 
 <p>If Hudi writes start taking much longer, it might be good to check the write amplification for any sudden increases.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">stats</span> <span class="n">wa</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">stats</span> <span class="n">wa</span>
     <span class="n">__________________________________________________________________________</span>
    <span class="o">|</span> <span class="nc">CommitTime</span>    <span class="o">|</span> <span class="nc">Total</span> <span class="nc">Upserted</span><span class="o">|</span> <span class="nc">Total</span> <span class="nc">Written</span><span class="o">|</span> <span class="nc">Write</span> <span class="nc">Amplification</span> <span class="nc">Factor</span><span class="o">|</span>
     <span class="o">|=========================================================================|</span>
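The write amplification factor shown by `stats wa` relates records written to records upserted per commit. A minimal sketch of that ratio (the commit times and counts here are hypothetical, and this is not Hudi's implementation):

```python
# Sketch: write amplification per commit, as the `stats wa` columns suggest --
# total records written divided by total records upserted.
commits = [
    {"commit": "c1", "upserted": 1000, "written": 4000},
    {"commit": "c2", "upserted": 500,  "written": 500},
]
for c in commits:
    # a factor well above 1.0 signals each upsert is rewriting many records
    c["wa_factor"] = c["written"] / c["upserted"]

print([(c["commit"], c["wa_factor"]) for c in commits])
```

A sudden jump in this factor for recent commits is the signal to look for in slow-write investigations.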
@@ -558,7 +724,7 @@ This is a sequence file that contains a mapping from commitNumber =&gt; json wit
 <p>To get an idea of the lag between compaction and writer applications, use the below command to list all
 pending compactions.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compactions</span> <span class="n">show</span> <span class="n">all</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compactions</span> <span class="n">show</span> <span class="n">all</span>
      <span class="n">___________________________________________________________________</span>
     <span class="o">|</span> <span class="nc">Compaction</span> <span class="nc">Instant</span> <span class="nc">Time</span><span class="o">|</span> <span class="nc">State</span>    <span class="o">|</span> <span class="nc">Total</span> <span class="nc">FileIds</span> <span class="n">to</span> <span class="n">be</span> <span class="nc">Compacted</span><span class="o">|</span>
     <span class="o">|==================================================================|</span>
@@ -568,7 +734,7 @@ pending compactions.</p>
 
 <p>To inspect a specific compaction plan, use</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">show</span> <span class="o">--</span><span class="n">instant</span> <span class="o">&lt;</span><span class="no">INSTANT_1</span><span class="o">&gt;</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">show</span> <span class="o">--</span><span class="n">instant</span> <span class="o">&lt;</span><span class="no">INSTANT_1</span><span class="o">&gt;</span>
     <span class="n">_________________________________________________________________________________________________________________________________________________________________________________________________________________________________________________</span>
     <span class="o">|</span> <span class="nc">Partition</span> <span class="nc">Path</span><span class="o">|</span> <span class="nc">File</span> <span class="nc">Id</span> <span class="o">|</span> <span class="nc">Base</span> <span class="nc">Instant</span>  <span class="o">|</span> <span class="nc">Data</span> <span class="nc">File</span> <span class="nc">Path</span>                                    <span class="o">|</span> <span class="nc">Total</span> <span class="nc">Delta</span> < [...]
     <span class="o">|================================================================================================================================================================================================================================================</span>
@@ -579,9 +745,9 @@ pending compactions.</p>
 <p>To manually schedule or run a compaction, use the below command. This command uses the Spark launcher to perform compaction
 operations.</p>
 
-<p class="notice--info"><strong>NOTE:</strong> Make sure no other application is scheduling compaction for this dataset concurrently</p>
+<p class="notice--info"><strong>NOTE:</strong> Make sure no other application is scheduling compaction for this table concurrently.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">help</span> <span class="n">compaction</span> <span class="n">schedule</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">help</span> <span class="n">compaction</span> <span class="n">schedule</span>
 <span class="nl">Keyword:</span>                   <span class="n">compaction</span> <span class="n">schedule</span>
 <span class="nl">Description:</span>               <span class="nc">Schedule</span> <span class="nc">Compaction</span>
  <span class="nl">Keyword:</span>                  <span class="n">sparkMemory</span>
@@ -593,7 +759,7 @@ operations.</p>
 <span class="o">*</span> <span class="n">compaction</span> <span class="n">schedule</span> <span class="o">-</span> <span class="nc">Schedule</span> <span class="nc">Compaction</span>
 </code></pre></div></div>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">help</span> <span class="n">compaction</span> <span class="n">run</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">help</span> <span class="n">compaction</span> <span class="n">run</span>
 <span class="nl">Keyword:</span>                   <span class="n">compaction</span> <span class="n">run</span>
 <span class="nl">Description:</span>               <span class="nc">Run</span> <span class="nc">Compaction</span> <span class="k">for</span> <span class="n">given</span> <span class="n">instant</span> <span class="n">time</span>
  <span class="nl">Keyword:</span>                  <span class="n">tableName</span>
@@ -627,7 +793,7 @@ operations.</p>
    <span class="nc">Default</span> <span class="k">if</span> <span class="nl">unspecified:</span> <span class="err">'</span><span class="n">__NULL__</span><span class="err">'</span>
 
  <span class="nl">Keyword:</span>                  <span class="n">compactionInstant</span>
-   <span class="nl">Help:</span>                   <span class="nc">Base</span> <span class="n">path</span> <span class="k">for</span> <span class="n">the</span> <span class="n">target</span> <span class="n">hoodie</span> <span class="n">dataset</span>
+   <span class="nl">Help:</span>                   <span class="nc">Base</span> <span class="n">path</span> <span class="k">for</span> <span class="n">the</span> <span class="n">target</span> <span class="n">hoodie</span> <span class="n">table</span>
    <span class="nl">Mandatory:</span>              <span class="kc">true</span>
    <span class="nc">Default</span> <span class="k">if</span> <span class="nl">specified:</span>   <span class="err">'</span><span class="n">__NULL__</span><span class="err">'</span>
    <span class="nc">Default</span> <span class="k">if</span> <span class="nl">unspecified:</span> <span class="err">'</span><span class="n">__NULL__</span><span class="err">'</span>
@@ -639,7 +805,7 @@ operations.</p>
 
 <p>Validating a compaction plan: check if all the files necessary for compaction are present and valid.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">validate</span> <span class="o">--</span><span class="n">instant</span> <span class="mi">20181005222611</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">validate</span> <span class="o">--</span><span class="n">instant</span> <span class="mi">20181005222611</span>
 <span class="o">...</span>
 
    <span class="no">COMPACTION</span> <span class="no">PLAN</span> <span class="no">VALID</span>
@@ -651,7 +817,7 @@ operations.</p>
 
 
 
-<span class="nl">hoodie:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">validate</span> <span class="o">--</span><span class="n">instant</span> <span class="mi">20181005222601</span>
+<span class="nl">hudi:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">validate</span> <span class="o">--</span><span class="n">instant</span> <span class="mi">20181005222601</span>
 
    <span class="no">COMPACTION</span> <span class="no">PLAN</span> <span class="no">INVALID</span>
 
@@ -667,16 +833,16 @@ operations.</p>
 operation. Any new log-files that happened on this file after the compaction got scheduled will be safely renamed
 so that they are preserved. Hudi provides the following CLI to support it.</p>
 
-<h3 id="unscheduling-compaction">UnScheduling Compaction</h3>
+<h3 id="unscheduling-compaction">Unscheduling Compaction</h3>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">unscheduleFileId</span> <span class="o">--</span><span class="n">fileId</span> <span class="o">&lt;</span><span class="nc">FileUUID</span><span class="o">&gt;</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">unscheduleFileId</span> <span class="o">--</span><span class="n">fileId</span> <span class="o">&lt;</span><span class="nc">FileUUID</span><span class="o">&gt;</span>
 <span class="o">....</span>
 <span class="nc">No</span> <span class="nc">File</span> <span class="n">renames</span> <span class="n">needed</span> <span class="n">to</span> <span class="n">unschedule</span> <span class="n">file</span> <span class="n">from</span> <span class="n">pending</span> <span class="n">compaction</span><span class="o">.</span> <span class="nc">Operation</span> <span class="n">successful</span><span class="o">.</span>
 </code></pre></div></div>
 
 <p>In other cases, an entire compaction plan needs to be reverted. This is supported by the following CLI</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">unschedule</span> <span class="o">--</span><span class="n">compactionInstant</span> <span class="o">&lt;</span><span class="n">compactionInstant</span><span class="o">&gt;</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">trips</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">unschedule</span> <span class="o">--</span><span class="n">compactionInstant</span> <span class="o">&lt;</span><span class="n">compactionInstant</span><span class="o">&gt;</span>
 <span class="o">.....</span>
 <span class="nc">No</span> <span class="nc">File</span> <span class="n">renames</span> <span class="n">needed</span> <span class="n">to</span> <span class="n">unschedule</span> <span class="n">pending</span> <span class="n">compaction</span><span class="o">.</span> <span class="nc">Operation</span> <span class="n">successful</span><span class="o">.</span>
 </code></pre></div></div>
@@ -689,15 +855,15 @@ partial failures, the compaction operation could become inconsistent with the st
 command comes to the rescue; it will rearrange the file-slices so that there is no loss and the file-slices are
 consistent with the compaction plan.</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hoodie:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">repair</span> <span class="o">--</span><span class="n">instant</span> <span class="mi">20181005222611</span>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nl">hudi:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">compaction</span> <span class="n">repair</span> <span class="o">--</span><span class="n">instant</span> <span class="mi">20181005222611</span>
 <span class="o">......</span>
 <span class="nc">Compaction</span> <span class="n">successfully</span> <span class="n">repaired</span>
 <span class="o">.....</span>
 </code></pre></div></div>
 
-<h2 id="metrics">Metrics</h2>
+<h2 id="monitoring">Monitoring</h2>
 
-<p>Once the Hudi Client is configured with the right datasetname and environment for metrics, it produces the following graphite metrics, that aid in debugging hudi datasets</p>
+<p>Once the Hudi writer is configured with the right table and environment for metrics, it produces the following Graphite metrics that aid in debugging Hudi tables:</p>
 
 <ul>
   <li><strong>Commit Duration</strong> - This is the amount of time it took to successfully commit a batch of records</li>
@@ -713,7 +879,7 @@ consistent with the compaction plan</p>
     <img class="docimage" src="/assets/images/hudi_commit_duration.png" alt="hudi_commit_duration.png" style="max-width: 100%" />
 </figure>
 
-<h2 id="troubleshooting">Troubleshooting Failures</h2>
+<h2 id="troubleshooting">Troubleshooting</h2>
 
 <p>The section below aids in debugging Hudi failures. Off the bat, the following metadata is added to every record to help triage issues easily using standard Hadoop SQL engines (Hive/Presto/Spark).</p>
 
@@ -724,7 +890,7 @@ consistent with the compaction plan</p>
   <li><strong>_hoodie_partition_path</strong> - Path from basePath that identifies the partition containing this record</li>
 </ul>
 
-<p class="notice--info"><strong>NOTE:</strong> As of now, Hudi assumes the application passes in the same deterministic partitionpath for a given recordKey. i.e the uniqueness of record key is only enforced within each partition.</p>
+<p>For performance-related issues, please refer to the <a href="https://cwiki.apache.org/confluence/display/HUDI/Tuning+Guide">tuning guide</a>.</p>
 
 <h3 id="missing-records">Missing records</h3>
 
@@ -733,7 +899,7 @@ If you do find errors, then the record was not actually written by Hudi, but han
 
 <h3 id="duplicates">Duplicates</h3>
 
-<p>First of all, please confirm if you do indeed have duplicates <strong>AFTER</strong> ensuring the query is accessing the Hudi datasets <a href="/docs/0.5.0-querying_data.html">properly</a> .</p>
+<p>First of all, please confirm that you do indeed have duplicates <strong>AFTER</strong> ensuring the query is accessing the Hudi table <a href="/docs/sql_queries.html">properly</a>.</p>
 
 <ul>
   <li>If confirmed, please use the metadata fields above to identify the physical files &amp; partition files containing the records.</li>
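The metadata fields above make duplicate-hunting mechanical: group rows by record key and partition path and keep the groups with more than one row. A minimal sketch over hypothetical rows (in practice you would run the equivalent GROUP BY in Hive/Presto/Spark SQL against the table):

```python
# Sketch: locating duplicates via the _hoodie_* metadata columns.
# Row values are hypothetical examples, not from a real table.
from collections import Counter

rows = [
    {"_hoodie_record_key": "key1", "_hoodie_partition_path": "2018/08/31", "_hoodie_file_name": "f1.parquet"},
    {"_hoodie_record_key": "key1", "_hoodie_partition_path": "2018/08/31", "_hoodie_file_name": "f2.parquet"},
    {"_hoodie_record_key": "key2", "_hoodie_partition_path": "2018/08/31", "_hoodie_file_name": "f1.parquet"},
]

# count occurrences of each (record key, partition path) pair
counts = Counter((r["_hoodie_record_key"], r["_hoodie_partition_path"]) for r in rows)
dupes = [k for k, n in counts.items() if n > 1]
print(dupes)
```

The `_hoodie_file_name` of each duplicate row then points at the physical files to inspect.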
diff --git a/content/docs/docker_demo.html b/content/docs/docker_demo.html
index 74f5abb..75f9068 100644
--- a/content/docs/docker_demo.html
+++ b/content/docs/docker_demo.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#a-demo-using-docker-containers">A Demo using docker containers</a>
     <ul>
@@ -360,10 +360,10 @@
       <li><a href="#step-6c-run-presto-queries">Step 6(c): Run Presto Queries</a></li>
       <li><a href="#step-7--incremental-query-for-copy-on-write-table">Step 7 : Incremental Query for COPY-ON-WRITE Table</a></li>
       <li><a href="#incremental-query-with-spark-sql">Incremental Query with Spark SQL:</a></li>
-      <li><a href="#step-8-schedule-and-run-compaction-for-merge-on-read-dataset">Step 8: Schedule and Run Compaction for Merge-On-Read dataset</a></li>
+      <li><a href="#step-8-schedule-and-run-compaction-for-merge-on-read-table">Step 8: Schedule and Run Compaction for Merge-On-Read table</a></li>
       <li><a href="#step-9-run-hive-queries-including-incremental-queries">Step 9: Run Hive Queries including incremental queries</a></li>
-      <li><a href="#step-10-read-optimized-and-realtime-views-for-mor-with-spark-sql-after-compaction">Step 10: Read Optimized and Realtime Views for MOR with Spark-SQL after compaction</a></li>
-      <li><a href="#step-11--presto-queries-over-read-optimized-view-on-mor-dataset-after-compaction">Step 11:  Presto queries over Read Optimized View on MOR dataset after compaction</a></li>
+      <li><a href="#step-10-read-optimized-and-snapshot-queries-for-mor-with-spark-sql-after-compaction">Step 10: Read Optimized and Snapshot queries for MOR with Spark-SQL after compaction</a></li>
+      <li><a href="#step-11--presto-read-optimized-queries-on-mor-table-after-compaction">Step 11:  Presto Read Optimized queries on MOR table after compaction</a></li>
     </ul>
   </li>
   <li><a href="#testing-hudi-in-local-docker-environment">Testing Hudi in Local Docker environment</a>
@@ -407,7 +407,7 @@ data infrastructure is brought up in a local docker cluster within your computer
 
 <h3 id="build-hudi">Build Hudi</h3>
 
-<p>The first step is to build hudi</p>
+<p>The first step is to build Hudi. <strong>Note:</strong> This step builds Hudi against the default supported Scala version, 2.11.</p>
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">cd</span> <span class="o">&lt;</span><span class="no">HUDI_WORKSPACE</span><span class="o">&gt;</span>
 <span class="n">mvn</span> <span class="kn">package</span> <span class="o">-</span><span class="nc">DskipTests</span>
 </code></pre></div></div>
@@ -428,7 +428,10 @@ This should pull the docker images from docker hub and setup docker cluster.</p>
 <span class="nc">Stopping</span> <span class="n">historyserver</span>             <span class="o">...</span> <span class="n">done</span>
 <span class="o">.......</span>
 <span class="o">......</span>
-<span class="nc">Creating</span> <span class="n">network</span> <span class="s">"hudi_demo"</span> <span class="n">with</span> <span class="n">the</span> <span class="k">default</span> <span class="n">driver</span>
+<span class="nc">Creating</span> <span class="n">network</span> <span class="s">"compose_default"</span> <span class="n">with</span> <span class="n">the</span> <span class="k">default</span> <span class="n">driver</span>
+<span class="nc">Creating</span> <span class="n">volume</span> <span class="s">"compose_namenode"</span> <span class="n">with</span> <span class="k">default</span> <span class="n">driver</span>
+<span class="nc">Creating</span> <span class="n">volume</span> <span class="s">"compose_historyserver"</span> <span class="n">with</span> <span class="k">default</span> <span class="n">driver</span>
+<span class="nc">Creating</span> <span class="n">volume</span> <span class="s">"compose_hive-metastore-postgresql"</span> <span class="n">with</span> <span class="k">default</span> <span class="n">driver</span>
 <span class="nc">Creating</span> <span class="n">hive</span><span class="o">-</span><span class="n">metastore</span><span class="o">-</span><span class="n">postgresql</span> <span class="o">...</span> <span class="n">done</span>
 <span class="nc">Creating</span> <span class="n">namenode</span>                  <span class="o">...</span> <span class="n">done</span>
 <span class="nc">Creating</span> <span class="n">zookeeper</span>                 <span class="o">...</span> <span class="n">done</span>
@@ -461,12 +464,12 @@ This should pull the docker images from docker hub and setup docker cluster.</p>
 
 <h2 id="demo">Demo</h2>
 
-<p>Stock Tracker data will be used to showcase both different Hudi Views and the effects of Compaction.</p>
+<p>Stock Tracker data will be used to showcase different Hudi query types and the effects of Compaction.</p>
 
 <p>Take a look at the directory <code class="highlighter-rouge">docker/demo/data</code>. There are 2 batches of stock data, each at 1-minute granularity.
 The first batch contains stock tracker data for some stock symbols during the first hour of the trading window
 (9:30 a.m. to 10:30 a.m.). The second batch contains tracker data for the next 30 minutes (10:30 to 11 a.m.). Hudi will
-be used to ingest these batches to a dataset which will contain the latest stock tracker data at hour level granularity.
+be used to ingest these batches to a table which will contain the latest stock tracker data at hour level granularity.
 The batches are windowed intentionally so that the second batch contains updates to some of the rows in the first batch.</p>
 
 <h3 id="step-1--publish-the-first-batch-to-kafka">Step 1 : Publish the first batch to Kafka</h3>
@@ -517,18 +520,18 @@ The batches are windowed intentionally so that the second batch contains updates
 <h3 id="step-2-incrementally-ingest-data-from-kafka-topic">Step 2: Incrementally ingest data from Kafka topic</h3>
 
 <p>Hudi comes with a tool named DeltaStreamer. This tool can connect to a variety of data sources (including Kafka) to
-pull changes and apply to Hudi dataset using upsert/insert primitives. Here, we will use the tool to download
+pull changes and apply them to a Hudi table using upsert/insert primitives. Here, we will use the tool to download
 json data from the kafka topic and ingest it to both the COW and MOR tables we initialized in the previous step. This tool
-automatically initializes the datasets in the file-system if they do not exist yet.</p>
+automatically initializes the tables in the file-system if they do not exist yet.</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">2</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
-<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
+<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
+<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
 
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
-<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
+<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
+<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
 
 
 <span class="err">#</span> <span class="nc">As</span> <span class="n">part</span> <span class="n">of</span> <span class="n">the</span> <span class="nf">setup</span> <span class="o">(</span><span class="nc">Look</span> <span class="n">at</span> <span class="n">setup_demo</span><span class="o">.</span><span class="na">sh</span><span class="o">),</span> <span class="n">the</span> <span class="n">configs</span> <span class="n">needed</span> <span class="k">for</span> <span class="nc">DeltaSt [...]
@@ -537,49 +540,49 @@ automatically initializes the datasets in the file-system if they do not exist y
 <span class="n">exit</span>
 </code></pre></div></div>
 
-<p>You can use HDFS web-browser to look at the datasets
+<p>You can use the HDFS web UI to look at the tables
 <code class="highlighter-rouge">http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_cow</code>.</p>
 
-<p>You can explore the new partition folder created in the dataset along with a “deltacommit”
+<p>You can explore the new partition folder created in the table along with a “deltacommit”
 file under .hoodie which signals a successful commit.</p>
 
-<p>There will be a similar setup when you browse the MOR dataset
+<p>There will be a similar setup when you browse the MOR table
 <code class="highlighter-rouge">http://namenode:50070/explorer.html#/user/hive/warehouse/stock_ticks_mor</code></p>
 
 <h3 id="step-3-sync-with-hive">Step 3: Sync with Hive</h3>
 
-<p>At this step, the datasets are available in HDFS. We need to sync with Hive to create new Hive tables and add partitions
-inorder to run Hive queries against those datasets.</p>
+<p>At this step, the tables are available in HDFS. We need to sync with Hive to create new Hive tables and add partitions
+in order to run Hive queries against those tables.</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">2</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
 
-<span class="err">#</span> <span class="nc">THis</span> <span class="n">command</span> <span class="n">takes</span> <span class="n">in</span> <span class="nc">HIveServer</span> <span class="no">URL</span> <span class="n">and</span> <span class="no">COW</span> <span class="nc">Hudi</span> <span class="nc">Dataset</span> <span class="n">location</span> <span class="n">in</span> <span class="no">HDFS</span> <span class="n">and</span> <span class="n">sync</span> <span class="n">the</span> <s [...]
+<span class="err">#</span> <span class="nc">This</span> <span class="n">command</span> <span class="n">takes</span> <span class="n">in</span> <span class="nc">HiveServer</span> <span class="no">URL</span> <span class="n">and</span> <span class="no">COW</span> <span class="nc">Hudi</span> <span class="n">table</span> <span class="n">location</span> <span class="n">in</span> <span class="no">HDFS</span> <span class="n">and</span> <span class="n">sync</span> <span class="n">the</span> <span [...]
 <span class="o">/</span><span class="kt">var</span><span class="o">/</span><span class="n">hoodie</span><span class="o">/</span><span class="n">ws</span><span class="o">/</span><span class="n">hudi</span><span class="o">-</span><span class="n">hive</span><span class="o">/</span><span class="n">run_sync_tool</span><span class="o">.</span><span class="na">sh</span>  <span class="o">--</span><span class="n">jdbc</span><span class="o">-</span><span class="n">url</span> <span class="nl">jdbc: [...]
 <span class="o">.....</span>
-<span class="mi">2018</span><span class="o">-</span><span class="mi">09</span><span class="o">-</span><span class="mi">24</span> <span class="mi">22</span><span class="o">:</span><span class="mi">22</span><span class="o">:</span><span class="mi">45</span><span class="o">,</span><span class="mi">568</span> <span class="no">INFO</span>  <span class="o">[</span><span class="n">main</span><span class="o">]</span> <span class="n">hive</span><span class="o">.</span><span class="na">HiveSyncToo [...]
+<span class="mi">2020</span><span class="o">-</span><span class="mo">01</span><span class="o">-</span><span class="mi">25</span> <span class="mi">19</span><span class="o">:</span><span class="mi">51</span><span class="o">:</span><span class="mi">28</span><span class="o">,</span><span class="mi">953</span> <span class="no">INFO</span>  <span class="o">[</span><span class="n">main</span><span class="o">]</span> <span class="n">hive</span><span class="o">.</span><span class="na">HiveSyncToo [...]
 <span class="o">.....</span>
 
-<span class="err">#</span> <span class="nc">Now</span> <span class="n">run</span> <span class="n">hive</span><span class="o">-</span><span class="n">sync</span> <span class="k">for</span> <span class="n">the</span> <span class="n">second</span> <span class="n">data</span><span class="o">-</span><span class="n">set</span> <span class="n">in</span> <span class="no">HDFS</span> <span class="n">using</span> <span class="nc">Merge</span><span class="o">-</span><span class="nc">On</span><span  [...]
+<span class="err">#</span> <span class="nc">Now</span> <span class="n">run</span> <span class="n">hive</span><span class="o">-</span><span class="n">sync</span> <span class="k">for</span> <span class="n">the</span> <span class="n">second</span> <span class="n">data</span><span class="o">-</span><span class="n">set</span> <span class="n">in</span> <span class="no">HDFS</span> <span class="n">using</span> <span class="nc">Merge</span><span class="o">-</span><span class="nc">On</span><span  [...]
 <span class="o">/</span><span class="kt">var</span><span class="o">/</span><span class="n">hoodie</span><span class="o">/</span><span class="n">ws</span><span class="o">/</span><span class="n">hudi</span><span class="o">-</span><span class="n">hive</span><span class="o">/</span><span class="n">run_sync_tool</span><span class="o">.</span><span class="na">sh</span>  <span class="o">--</span><span class="n">jdbc</span><span class="o">-</span><span class="n">url</span> <span class="nl">jdbc: [...]
 <span class="o">...</span>
-<span class="mi">2018</span><span class="o">-</span><span class="mi">09</span><span class="o">-</span><span class="mi">24</span> <span class="mi">22</span><span class="o">:</span><span class="mi">23</span><span class="o">:</span><span class="mi">09</span><span class="o">,</span><span class="mi">171</span> <span class="no">INFO</span>  <span class="o">[</span><span class="n">main</span><span class="o">]</span> <span class="n">hive</span><span class="o">.</span><span class="na">HiveSyncToo [...]
+<span class="mi">2020</span><span class="o">-</span><span class="mo">01</span><span class="o">-</span><span class="mi">25</span> <span class="mi">19</span><span class="o">:</span><span class="mi">51</span><span class="o">:</span><span class="mi">51</span><span class="o">,</span><span class="mo">066</span> <span class="no">INFO</span>  <span class="o">[</span><span class="n">main</span><span class="o">]</span> <span class="n">hive</span><span class="o">.</span><span class="na">HiveSyncToo [...]
 <span class="o">...</span>
-<span class="mi">2018</span><span class="o">-</span><span class="mi">09</span><span class="o">-</span><span class="mi">24</span> <span class="mi">22</span><span class="o">:</span><span class="mi">23</span><span class="o">:</span><span class="mi">09</span><span class="o">,</span><span class="mi">559</span> <span class="no">INFO</span>  <span class="o">[</span><span class="n">main</span><span class="o">]</span> <span class="n">hive</span><span class="o">.</span><span class="na">HiveSyncToo [...]
+<span class="mi">2020</span><span class="o">-</span><span class="mo">01</span><span class="o">-</span><span class="mi">25</span> <span class="mi">19</span><span class="o">:</span><span class="mi">51</span><span class="o">:</span><span class="mi">51</span><span class="o">,</span><span class="mi">569</span> <span class="no">INFO</span>  <span class="o">[</span><span class="n">main</span><span class="o">]</span> <span class="n">hive</span><span class="o">.</span><span class="na">HiveSyncToo [...]
 <span class="o">....</span>
 <span class="n">exit</span>
 </code></pre></div></div>
 <p>After executing the above command, you will notice</p>
 
 <ol>
-  <li>A hive table named <code class="highlighter-rouge">stock_ticks_cow</code> created which provides Read-Optimized view for the Copy On Write dataset.</li>
-  <li>Two new tables <code class="highlighter-rouge">stock_ticks_mor</code> and <code class="highlighter-rouge">stock_ticks_mor_rt</code> created for the Merge On Read dataset. The former
-provides the ReadOptimized view for the Hudi dataset and the later provides the realtime-view for the dataset.</li>
+  <li>A Hive table named <code class="highlighter-rouge">stock_ticks_cow</code> is created, which supports Snapshot and Incremental queries on the Copy On Write table.</li>
+  <li>Two new tables <code class="highlighter-rouge">stock_ticks_mor_rt</code> and <code class="highlighter-rouge">stock_ticks_mor_ro</code> are created for the Merge On Read table. The former
+supports Snapshot and Incremental queries (providing near-real-time data) while the latter supports ReadOptimized queries.</li>
 </ol>
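The table-to-query-type mapping above can be sketched as follows. This is an illustrative Python snippet, not part of the demo; the `QUERY_SUPPORT` dict and `tables_supporting` helper are hypothetical names introduced here, grounded only in the list of registered tables just described.

```python
# Illustrative sketch (not Hudi code): which Hive table registered by
# hive-sync serves which Hudi query type, per the list above.
QUERY_SUPPORT = {
    "stock_ticks_cow":    ["snapshot", "incremental"],  # Copy On Write table
    "stock_ticks_mor_ro": ["read-optimized"],           # MoR, base files only
    "stock_ticks_mor_rt": ["snapshot", "incremental"],  # MoR, near-real-time
}

def tables_supporting(query_type):
    """Return the Hive tables that can serve the given query type."""
    return sorted(t for t, kinds in QUERY_SUPPORT.items() if query_type in kinds)

print(tables_supporting("snapshot"))
# → ['stock_ticks_cow', 'stock_ticks_mor_rt']
```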
 
 <h3 id="step-4-a-run-hive-queries">Step 4 (a): Run Hive Queries</h3>
 
-<p>Run a hive query to find the latest timestamp ingested for stock symbol ‘GOOG’. You will notice that both read-optimized
-(for both COW and MOR dataset)and realtime views (for MOR dataset)give the same value “10:29 a.m” as Hudi create a
+<p>Run a Hive query to find the latest timestamp ingested for stock symbol ‘GOOG’. You will notice that both snapshot queries
+(for the COW table and the MOR _rt table) and read-optimized queries (for the MOR _ro table) give the same value “10:29 a.m.” as Hudi creates a
 parquet file for the first batch of data.</p>
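Why both query types agree at this point can be sketched as below. This is a deliberately simplified illustration (not Hudi's implementation): a MoR file slice holds a parquet base file plus delta log records; a read-optimized query reads only the base file, while a snapshot query merges log records on top. With no log records yet, both return the same value.

```python
# Simplified sketch (assumed model, not Hudi code) of MoR read paths
# after the first batch: base file written as parquet, no delta log yet.
base_file = {"GOOG": "2018-08-31 10:29:00"}  # first ingested batch
log_records = {}                             # no updates applied yet

def read_optimized(symbol):
    # Read-optimized query: base file only.
    return base_file[symbol]

def snapshot(symbol):
    # Snapshot query: log record wins if present, else fall back to base.
    return log_records.get(symbol, base_file[symbol])

# With an empty log, both paths return the same timestamp.
assert read_optimized("GOOG") == snapshot("GOOG") == "2018-08-31 10:29:00"
```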
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">2</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
@@ -590,10 +593,10 @@ parquet file for the first batch of data.</p>
 <span class="o">|</span>      <span class="n">tab_name</span>       <span class="o">|</span>
 <span class="o">+---------------------+--+</span>
 <span class="o">|</span> <span class="n">stock_ticks_cow</span>     <span class="o">|</span>
-<span class="o">|</span> <span class="n">stock_ticks_mor</span>     <span class="o">|</span>
+<span class="o">|</span> <span class="n">stock_ticks_mor_ro</span>  <span class="o">|</span>
 <span class="o">|</span> <span class="n">stock_ticks_mor_rt</span>  <span class="o">|</span>
 <span class="o">+---------------------+--+</span>
-<span class="mi">2</span> <span class="n">rows</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">0.801</span> <span class="n">seconds</span><span class="o">)</span>
+<span class="mi">3</span> <span class="n">rows</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">1.199</span> <span class="n">seconds</span><span class="o">)</span>
 <span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt;</span>
 
 
@@ -632,11 +635,11 @@ parquet file for the first batch of data.</p>
 <span class="err">#</span> <span class="nc">Merge</span><span class="o">-</span><span class="nc">On</span><span class="o">-</span><span class="nc">Read</span> <span class="nl">Queries:</span>
 <span class="o">==========================</span>
 
-<span class="nc">Lets</span> <span class="n">run</span> <span class="n">similar</span> <span class="n">queries</span> <span class="n">against</span> <span class="no">M</span><span class="o">-</span><span class="no">O</span><span class="o">-</span><span class="no">R</span> <span class="n">dataset</span><span class="o">.</span> <span class="nc">Lets</span> <span class="n">look</span> <span class="n">at</span> <span class="n">both</span>
-<span class="nc">ReadOptimized</span> <span class="n">and</span> <span class="nc">Realtime</span> <span class="n">views</span> <span class="n">supported</span> <span class="n">by</span> <span class="no">M</span><span class="o">-</span><span class="no">O</span><span class="o">-</span><span class="no">R</span> <span class="n">dataset</span>
+<span class="nc">Let's</span> <span class="n">run</span> <span class="n">similar</span> <span class="n">queries</span> <span class="n">against</span> <span class="no">M</span><span class="o">-</span><span class="no">O</span><span class="o">-</span><span class="no">R</span> <span class="n">table</span><span class="o">.</span> <span class="nc">Let's</span> <span class="n">look</span> <span class="n">at</span> <span class="n">both</span>
+<span class="nc">ReadOptimized</span> <span class="n">and</span> <span class="nf">Snapshot</span><span class="o">(</span><span class="n">realtime</span> <span class="n">data</span><span class="o">)</span> <span class="n">queries</span> <span class="n">supported</span> <span class="n">by</span> <span class="no">M</span><span class="o">-</span><span class="no">O</span><span class="o">-</span><span class="no">R</span> <span class="n">table</span>
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">against</span> <span class="nc">ReadOptimized</span> <span class="nc">View</span><span class="o">.</span> <span class="nc">Notice</span> <span class="n">that</span> <span class="n">the</span> <span class="n">latest</span> <span class="n">timestamp</span> <span class="n">is</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span>
-<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor group by symbol HAVING symbol = 'GOOG';</span>
+<span class="err">#</span> <span class="nc">Run</span> <span class="nc">ReadOptimized</span> <span class="nc">Query</span><span class="o">.</span> <span class="nc">Notice</span> <span class="n">that</span> <span class="n">the</span> <span class="n">latest</span> <span class="n">timestamp</span> <span class="n">is</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span>
+<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';</span>
 <span class="nl">WARNING:</span> <span class="nc">Hive</span><span class="o">-</span><span class="n">on</span><span class="o">-</span><span class="no">MR</span> <span class="n">is</span> <span class="n">deprecated</span> <span class="n">in</span> <span class="nc">Hive</span> <span class="mi">2</span> <span class="n">and</span> <span class="n">may</span> <span class="n">not</span> <span class="n">be</span> <span class="n">available</span> <span class="n">in</span> <span class="n">the</spa [...]
 <span class="o">+---------+----------------------+--+</span>
 <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>         <span class="n">_c1</span>          <span class="o">|</span>
@@ -646,7 +649,7 @@ parquet file for the first batch of data.</p>
 <span class="mi">1</span> <span class="n">row</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">6.326</span> <span class="n">seconds</span><span class="o">)</span>
 
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">against</span> <span class="nc">Realtime</span> <span class="nc">View</span><span class="o">.</span> <span class="nc">Notice</span> <span class="n">that</span> <span class="n">the</span> <span class="n">latest</span> <span class="n">timestamp</span> <span class="n">is</span> <span class="n">again</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span>
+<span class="err">#</span> <span class="nc">Run</span> <span class="nc">Snapshot</span> <span class="nc">Query</span><span class="o">.</span> <span class="nc">Notice</span> <span class="n">that</span> <span class="n">the</span> <span class="n">latest</span> <span class="n">timestamp</span> <span class="n">is</span> <span class="n">again</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span>
 
 <span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG';</span>
 <span class="nl">WARNING:</span> <span class="nc">Hive</span><span class="o">-</span><span class="n">on</span><span class="o">-</span><span class="no">MR</span> <span class="n">is</span> <span class="n">deprecated</span> <span class="n">in</span> <span class="nc">Hive</span> <span class="mi">2</span> <span class="n">and</span> <span class="n">may</span> <span class="n">not</span> <span class="n">be</span> <span class="n">available</span> <span class="n">in</span> <span class="n">the</spa [...]
@@ -658,9 +661,9 @@ parquet file for the first batch of data.</p>
 <span class="mi">1</span> <span class="n">row</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">1.606</span> <span class="n">seconds</span><span class="o">)</span>
 
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">projection</span> <span class="n">query</span> <span class="n">against</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="n">and</span> <span class="nc">Realtime</span> <span class="n">tables</span>
+<span class="err">#</span> <span class="nc">Run</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="n">and</span> <span class="nc">Snapshot</span> <span class="n">projection</span> <span class="n">queries</span>
 
-<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor where  symbol = 'GOOG';</span>
+<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor_ro where  symbol = 'GOOG';</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 <span class="o">|</span> <span class="n">_hoodie_commit_time</span>  <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>          <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span>  <span class="o">|</span>    <span class="n">open</span>    <span class="o">|</span>   <span class="n">close</span>   <span class="o">|</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
@@ -685,17 +688,17 @@ parquet file for the first batch of data.</p>
 running in spark-sql</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">1</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
-<span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">master</span> <span class="n">local</span><span class="o">[</span><span class="mi">2</span><span class="o">]</span> <span class="o">--</span><span class="n">driver< [...]
+<span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">master</span> <span class="n">local</span><span class="o">[</span><span class="mi">2</span><span class="o">]</span> <span class="o">--</span><span class="n">driver< [...]
 <span class="o">...</span>
 
 <span class="nc">Welcome</span> <span class="n">to</span>
       <span class="n">____</span>              <span class="n">__</span>
      <span class="o">/</span> <span class="n">__</span><span class="o">/</span><span class="n">__</span>  <span class="n">___</span> <span class="n">_____</span><span class="o">/</span> <span class="o">/</span><span class="n">__</span>
     <span class="n">_</span><span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="n">_</span> <span class="err">\</span><span class="o">/</span> <span class="n">_</span> <span class="err">`</span><span class="o">/</span> <span class="n">__</span><span class="o">/</span>  <span class="err">'</span><span class="n">_</span><span class="o">/</span>
-   <span class="o">/</span><span class="n">___</span><span class="o">/</span> <span class="o">.</span><span class="na">__</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="o">,</span><span class="n">_</span><span class="o">/</span><span class="n">_</span><span class="o">/</span> <span class="o">/</span><span class="n">_</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="err">\</span>   <span class="n">ve [...]
+   <span class="o">/</span><span class="n">___</span><span class="o">/</span> <span class="o">.</span><span class="na">__</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="o">,</span><span class="n">_</span><span class="o">/</span><span class="n">_</span><span class="o">/</span> <span class="o">/</span><span class="n">_</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="err">\</span>   <span class="n">ve [...]
       <span class="o">/</span><span class="n">_</span><span class="o">/</span>
 
-<span class="nc">Using</span> <span class="nc">Scala</span> <span class="n">version</span> <span class="mf">2.11</span><span class="o">.</span><span class="mi">8</span> <span class="o">(</span><span class="nc">Java</span> <span class="nf">HotSpot</span><span class="o">(</span><span class="no">TM</span><span class="o">)</span> <span class="mi">64</span><span class="o">-</span><span class="nc">Bit</span> <span class="nc">Server</span> <span class="no">VM</span><span class="o">,</span> <spa [...]
+<span class="nc">Using</span> <span class="nc">Scala</span> <span class="n">version</span> <span class="mf">2.11</span><span class="o">.</span><span class="mi">12</span> <span class="o">(</span><span class="nc">OpenJDK</span> <span class="mi">64</span><span class="o">-</span><span class="nc">Bit</span> <span class="nc">Server</span> <span class="no">VM</span><span class="o">,</span> <span class="nc">Java</span> <span class="mf">1.8</span><span class="o">.</span><span class="mi">0_212</sp [...]
 <span class="nc">Type</span> <span class="n">in</span> <span class="n">expressions</span> <span class="n">to</span> <span class="n">have</span> <span class="n">them</span> <span class="n">evaluated</span><span class="o">.</span>
 <span class="nc">Type</span> <span class="o">:</span><span class="n">help</span> <span class="k">for</span> <span class="n">more</span> <span class="n">information</span><span class="o">.</span>
 
@@ -705,7 +708,7 @@ running in spark-sql</p>
 <span class="o">|</span><span class="n">database</span><span class="o">|</span><span class="n">tableName</span>         <span class="o">|</span><span class="n">isTemporary</span><span class="o">|</span>
 <span class="o">+--------+------------------+-----------+</span>
 <span class="o">|</span><span class="k">default</span> <span class="o">|</span><span class="n">stock_ticks_cow</span>   <span class="o">|</span><span class="kc">false</span>      <span class="o">|</span>
-<span class="o">|</span><span class="k">default</span> <span class="o">|</span><span class="n">stock_ticks_mor</span>   <span class="o">|</span><span class="kc">false</span>      <span class="o">|</span>
+<span class="o">|</span><span class="k">default</span> <span class="o">|</span><span class="n">stock_ticks_mor_ro</span><span class="o">|</span><span class="kc">false</span>      <span class="o">|</span>
 <span class="o">|</span><span class="k">default</span> <span class="o">|</span><span class="n">stock_ticks_mor_rt</span><span class="o">|</span><span class="kc">false</span>      <span class="o">|</span>
 <span class="o">+--------+------------------+-----------+</span>
 
@@ -736,11 +739,11 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</s
 # Merge-On-Read Queries:
 ==========================
 
-Lets run similar queries against M-O-R dataset. Lets look at both
-ReadOptimized and Realtime views supported by M-O-R dataset
+Let's run similar queries against the M-O-R table. Let's look at both
+ReadOptimized and Snapshot queries supported by M-O-R table
 
-# Run against ReadOptimized View. Notice that the latest timestamp is 10:29
-scala&gt; spark.sql("</span><span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <span class="o">=</span> <span class="err">'</span><span class="no" [...]
+# Run ReadOptimized Query. Notice that the latest timestamp is 10:29
+scala&gt; spark.sql("</span><span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor_ro</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <span class="o">=</span> <span class="err">'</span><span class=" [...]
 +------+-------------------+
 |symbol|max(ts)            |
 +------+-------------------+
@@ -748,7 +751,7 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="n">symbol
 +------+-------------------+
 
 
-# Run against Realtime View. Notice that the latest timestamp is again 10:29
+# Run Snapshot Query. Notice that the latest timestamp is again 10:29
 
 scala&gt; spark.sql("</span><span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor_rt</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <span class="o">=</span> <span class="err">'</span><span class=" [...]
 +------+-------------------+
@@ -757,9 +760,9 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="n">symbol
 |GOOG  |2018-08-31 10:29:00|
 +------+-------------------+
 
-# Run projection query against Read Optimized and Realtime tables
+# Run Read Optimized and Snapshot projection queries
 
-scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</span><span class="n">_hoodie_commit_time</span><span class="err">`</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor</span> <span cl [...]
+scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</span><span class="n">_hoodie_commit_time</span><span class="err">`</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor_ro</span> <span [...]
 +-------------------+------+-------------------+------+---------+--------+
 |_hoodie_commit_time|symbol|ts                 |volume|open     |close   |
 +-------------------+------+-------------------+------+---------+--------+
@@ -779,7 +782,7 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</s
 
 <h3 id="step-4-c-run-presto-queries">Step 4 (c): Run Presto Queries</h3>
 
-<p>Here are the Presto queries for similar Hive and Spark queries. Currently, Hudi does not support Presto queries on realtime views.</p>
+<p>Here are Presto queries similar to the Hive and Spark queries above. Currently, Presto does not support snapshot or incremental queries on Hudi tables.</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">presto</span><span class="o">-</span><span class="n">worker</span><span class="o">-</span><span class="mi">1</span> <span class="n">presto</span> <span class="o">--</span><span class="n">server</span> <span class="n">presto</span><span class="o">-</span><span class="n">c [...]
 <span class="n">presto</span><span class="o">&gt;</span> <span class="n">show</span> <span class="n">catalogs</span><span class="o">;</span>
@@ -801,7 +804,7 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</s
        <span class="nc">Table</span>
 <span class="o">--------------------</span>
  <span class="n">stock_ticks_cow</span>
- <span class="n">stock_ticks_mor</span>
+ <span class="n">stock_ticks_mor_ro</span>
  <span class="nf">stock_ticks_mor_rt</span>
 <span class="o">(</span><span class="mi">3</span> <span class="n">rows</span><span class="o">)</span>
 
@@ -839,10 +842,10 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</s
 <span class="err">#</span> <span class="nc">Merge</span><span class="o">-</span><span class="nc">On</span><span class="o">-</span><span class="nc">Read</span> <span class="nl">Queries:</span>
 <span class="o">==========================</span>
 
-<span class="nc">Lets</span> <span class="n">run</span> <span class="n">similar</span> <span class="n">queries</span> <span class="n">against</span> <span class="no">M</span><span class="o">-</span><span class="no">O</span><span class="o">-</span><span class="no">R</span> <span class="n">dataset</span><span class="o">.</span> 
+<span class="nc">Let's</span> <span class="n">run</span> <span class="n">similar</span> <span class="n">queries</span> <span class="n">against</span> <span class="no">M</span><span class="o">-</span><span class="no">O</span><span class="o">-</span><span class="no">R</span> <span class="n">table</span><span class="o">.</span>
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">against</span> <span class="nc">ReadOptimized</span> <span class="nc">View</span><span class="o">.</span> <span class="nc">Notice</span> <span class="n">that</span> <span class="n">the</span> <span class="n">latest</span> <span class="n">timestamp</span> <span class="n">is</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span>
-<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <span  [...]
+<span class="err">#</span> <span class="nc">Run</span> <span class="nc">ReadOptimized</span> <span class="nc">Query</span><span class="o">.</span> <span class="nc">Notice</span> <span class="n">that</span> <span class="n">the</span> <span class="n">latest</span> <span class="n">timestamp</span> <span class="n">is</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span>
+    <span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor_ro</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> [...]
  <span class="n">symbol</span> <span class="o">|</span>        <span class="n">_col1</span>
 <span class="o">--------+---------------------</span>
  <span class="no">GOOG</span>   <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span><span class="o">:</span><span class="mo">00</span>
@@ -853,7 +856,7 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</s
 <span class="mi">0</span><span class="o">:</span><span class="mo">02</span> <span class="o">[</span><span class="mi">197</span> <span class="n">rows</span><span class="o">,</span> <span class="mi">613</span><span class="no">B</span><span class="o">]</span> <span class="o">[</span><span class="mi">110</span> <span class="n">rows</span><span class="o">/</span><span class="n">s</span><span class="o">,</span> <span class="mi">343</span><span class="no">B</span><span class="o">/</span><span c [...]
 
 
-<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span>  <span class="n">select</span> <span class="s">"_hoodie_commit_time"</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor</sp [...]
+<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span>  <span class="n">select</span> <span class="s">"_hoodie_commit_time"</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor_ro< [...]
  <span class="n">_hoodie_commit_time</span> <span class="o">|</span> <span class="n">symbol</span> <span class="o">|</span>         <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span> <span class="o">|</span>   <span class="n">open</span>    <span class="o">|</span>  <span class="n">close</span>
 <span class="o">---------------------+--------+---------------------+--------+-----------+----------</span>
  <span class="mi">20190822180250</span>      <span class="o">|</span> <span class="no">GOOG</span>   <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">09</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mo">00</span> <span class="o">|</span>   <span class="mi">6330</span> <span class="o">|</span>    <span class="mf">1230.5</s [...]
@@ -877,12 +880,12 @@ partitions, there is no need to run hive-sync</p>
 <span class="err">#</span> <span class="nc">Within</span> <span class="nc">Docker</span> <span class="n">container</span><span class="o">,</span> <span class="n">run</span> <span class="n">the</span> <span class="n">ingestion</span> <span class="n">command</span>
 <span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">2</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
-<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
+<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
+<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
 
 
-<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
-<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
+<span class="err">#</span> <span class="nc">Run</span> <span class="n">the</span> <span class="n">following</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="n">command</span> <span class="n">to</span> <span class="n">execute</span> <span class="n">the</span> <span class="n">delta</span><span class="o">-</span><span class="n">streamer</span> <span class="n">and</span> <span class="n">ingest</span> <span class="n">to</span> <span class=" [...]
+<span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">deltastreamer</span><span class="o">.</span><span class="na">HoodieDeltaStreamer</span> <span class="n">$HUDI_UTILITIES_BUND [...]
 
 <span class="n">exit</span>
 </code></pre></div></div>
@@ -895,12 +898,12 @@ Take a look at the HDFS filesystem to get an idea: <code class="highlighter-roug
 
 <h3 id="step-6a-run-hive-queries">Step 6(a): Run Hive Queries</h3>
 
-<p>With Copy-On-Write table, the read-optimized view immediately sees the changes as part of second batch once the batch
+<p>With a Copy-On-Write table, the Snapshot query immediately sees the changes from the second batch once the batch
 got committed as each ingestion creates newer versions of parquet files.</p>
 
 <p>With Merge-On-Read table, the second ingestion merely appended the batch to an unmerged delta (log) file.
-This is the time, when ReadOptimized and Realtime views will provide different results. ReadOptimized view will still
-return “10:29 am” as it will only read from the Parquet file. Realtime View will do on-the-fly merge and return
+This is when the ReadOptimized and Snapshot queries provide different results. The ReadOptimized query will still
+return “10:29 am”, as it only reads from the Parquet file. The Snapshot query will do an on-the-fly merge and return
 the latest committed data, which is “10:59 am”.</p>
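The distinction above can be illustrated with a small model (a deliberately simplified toy, not Hudi's actual file formats or APIs): a ReadOptimized query reads only the compacted base (parquet) rows, while a Snapshot query merges the unmerged delta log into the base on the fly, keyed by record key. The record values below are hypothetical stand-ins for the stock-ticks data in the demo.

```python
# Toy model of Merge-On-Read query semantics (illustrative only; not
# Hudi's real storage layout). The base file holds rows as of the last
# compaction; the delta log holds later upserts not yet compacted.
base_file = {"GOOG": {"symbol": "GOOG", "ts": "2018-08-31 10:29:00"}}
delta_log = [{"symbol": "GOOG", "ts": "2018-08-31 10:59:00"}]

def read_optimized(base):
    # Reads only the columnar base files -> sees data as of last compaction.
    return dict(base)

def snapshot(base, log):
    # Merges log records over the base on the fly -> sees latest commits.
    merged = dict(base)
    for rec in log:
        merged[rec["symbol"]] = rec  # later record wins for the same key
    return merged

print(read_optimized(base_file)["GOOG"]["ts"])       # -> 2018-08-31 10:29:00
print(snapshot(base_file, delta_log)["GOOG"]["ts"])  # -> 2018-08-31 10:59:00
```

This mirrors why the `_ro` table still answers "10:29" after the second ingestion while the `_rt` table answers "10:59": only the snapshot path looks at the delta log.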
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">2</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
@@ -930,8 +933,8 @@ latest committed data which is “10:59 a.m”.</p>
 
 <span class="err">#</span> <span class="nc">Merge</span> <span class="nc">On</span> <span class="nc">Read</span> <span class="nl">Table:</span>
 
-<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">View</span>
-<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor group by symbol HAVING symbol = 'GOOG';</span>
+<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">Query</span>
+<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';</span>
 <span class="nl">WARNING:</span> <span class="nc">Hive</span><span class="o">-</span><span class="n">on</span><span class="o">-</span><span class="no">MR</span> <span class="n">is</span> <span class="n">deprecated</span> <span class="n">in</span> <span class="nc">Hive</span> <span class="mi">2</span> <span class="n">and</span> <span class="n">may</span> <span class="n">not</span> <span class="n">be</span> <span class="n">available</span> <span class="n">in</span> <span class="n">the</spa [...]
 <span class="o">+---------+----------------------+--+</span>
 <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>         <span class="n">_c1</span>          <span class="o">|</span>
@@ -940,7 +943,7 @@ latest committed data which is “10:59 a.m”.</p>
 <span class="o">+---------+----------------------+--+</span>
 <span class="mi">1</span> <span class="n">row</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">1.6</span> <span class="n">seconds</span><span class="o">)</span>
 
-<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor where  symbol = 'GOOG';</span>
+<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor_ro where  symbol = 'GOOG';</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 <span class="o">|</span> <span class="n">_hoodie_commit_time</span>  <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>          <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span>  <span class="o">|</span>    <span class="n">open</span>    <span class="o">|</span>   <span class="n">close</span>   <span class="o">|</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
@@ -948,7 +951,7 @@ latest committed data which is “10:59 a.m”.</p>
 <span class="o">|</span> <span class="mi">20180924222155</span>       <span class="o">|</span> <span class="no">GOOG</span>    <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span><span class="o">:</span><span class="mo">00</span>  <span class="o">|</span> <span class="mi">3391</span>    <span class="o">|</span> < [...]
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 
-<span class="err">#</span> <span class="nc">Realtime</span> <span class="nc">View</span>
+<span class="err">#</span> <span class="nc">Snapshot</span> <span class="nc">Query</span>
 <span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG';</span>
 <span class="nl">WARNING:</span> <span class="nc">Hive</span><span class="o">-</span><span class="n">on</span><span class="o">-</span><span class="no">MR</span> <span class="n">is</span> <span class="n">deprecated</span> <span class="n">in</span> <span class="nc">Hive</span> <span class="mi">2</span> <span class="n">and</span> <span class="n">may</span> <span class="n">not</span> <span class="n">be</span> <span class="n">available</span> <span class="n">in</span> <span class="n">the</spa [...]
 <span class="o">+---------+----------------------+--+</span>
@@ -974,7 +977,7 @@ latest committed data which is “10:59 a.m”.</p>
 <p>Running the same queries in Spark-SQL:</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">1</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
-<span class="n">bash</span><span class="o">-</span><span class="mf">4.4</span><span class="err">#</span> <span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class [...]
+<span class="n">bash</span><span class="o">-</span><span class="mf">4.4</span><span class="err">#</span> <span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class [...]
 
 <span class="err">#</span> <span class="nc">Copy</span> <span class="nc">On</span> <span class="nc">Write</span> <span class="nl">Table:</span>
 
@@ -999,8 +1002,8 @@ latest committed data which is “10:59 a.m”.</p>
 
 <span class="err">#</span> <span class="nc">Merge</span> <span class="nc">On</span> <span class="nc">Read</span> <span class="nl">Table:</span>
 
-<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">View</span>
-<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select symbol, max(ts) from stock_ticks_mor group by symbol HAVING symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
+<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">Query</span>
+<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
 <span class="o">+---------+----------------------+--+</span>
 <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>         <span class="n">_c1</span>          <span class="o">|</span>
 <span class="o">+---------+----------------------+--+</span>
@@ -1008,7 +1011,7 @@ latest committed data which is “10:59 a.m”.</p>
 <span class="o">+---------+----------------------+--+</span>
 <span class="mi">1</span> <span class="n">row</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">1.6</span> <span class="n">seconds</span><span class="o">)</span>
 
-<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor where  symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
+<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor_ro where  symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 <span class="o">|</span> <span class="n">_hoodie_commit_time</span>  <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>          <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span>  <span class="o">|</span>    <span class="n">open</span>    <span class="o">|</span>   <span class="n">close</span>   <span class="o">|</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
@@ -1016,7 +1019,7 @@ latest committed data which is “10:59 a.m”.</p>
 <span class="o">|</span> <span class="mi">20180924222155</span>       <span class="o">|</span> <span class="no">GOOG</span>    <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span><span class="o">:</span><span class="mo">00</span>  <span class="o">|</span> <span class="mi">3391</span>    <span class="o">|</span> < [...]
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 
-<span class="err">#</span> <span class="nc">Realtime</span> <span class="nc">View</span>
+<span class="err">#</span> <span class="nc">Snapshot</span> <span class="nc">Query</span>
 <span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
 <span class="o">+---------+----------------------+--+</span>
 <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>         <span class="n">_c1</span>          <span class="o">|</span>
@@ -1038,7 +1041,7 @@ latest committed data which is “10:59 a.m”.</p>
 
 <h3 id="step-6c-run-presto-queries">Step 6(c): Run Presto Queries</h3>
 
-<p>Running the same queries on Presto for ReadOptimized views.</p>
+<p>Running the same ReadOptimized queries on Presto.</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">presto</span><span class="o">-</span><span class="n">worker</span><span class="o">-</span><span class="mi">1</span> <span class="n">presto</span> <span class="o">--</span><span class="n">server</span> <span class="n">presto</span><span class="o">-</span><span class="n">c [...]
 <span class="n">presto</span><span class="o">&gt;</span> <span class="n">use</span> <span class="n">hive</span><span class="o">.</span><span class="na">default</span><span class="o">;</span>
@@ -1072,8 +1075,8 @@ latest committed data which is “10:59 a.m”.</p>
 
 <span class="err">#</span> <span class="nc">Merge</span> <span class="nc">On</span> <span class="nc">Read</span> <span class="nl">Table:</span>
 
-<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">View</span>
-<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <span  [...]
+<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">Query</span>
+<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor_ro</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <sp [...]
  <span class="n">symbol</span> <span class="o">|</span>        <span class="n">_col1</span>
 <span class="o">--------+---------------------</span>
  <span class="no">GOOG</span>   <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">29</span><span class="o">:</span><span class="mo">00</span>
@@ -1083,7 +1086,7 @@ latest committed data which is “10:59 a.m”.</p>
 <span class="nl">Splits:</span> <span class="mi">49</span> <span class="n">total</span><span class="o">,</span> <span class="mi">49</span> <span class="n">done</span> <span class="o">(</span><span class="mf">100.00</span><span class="o">%)</span>
 <span class="mi">0</span><span class="o">:</span><span class="mo">01</span> <span class="o">[</span><span class="mi">197</span> <span class="n">rows</span><span class="o">,</span> <span class="mi">613</span><span class="no">B</span><span class="o">]</span> <span class="o">[</span><span class="mi">139</span> <span class="n">rows</span><span class="o">/</span><span class="n">s</span><span class="o">,</span> <span class="mi">435</span><span class="no">B</span><span class="o">/</span><span c [...]
 
-<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span><span class="n">select</span> <span class="s">"_hoodie_commit_time"</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor</span [...]
+<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span><span class="n">select</span> <span class="s">"_hoodie_commit_time"</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor_ro</s [...]
  <span class="n">_hoodie_commit_time</span> <span class="o">|</span> <span class="n">symbol</span> <span class="o">|</span>         <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span> <span class="o">|</span>   <span class="n">open</span>    <span class="o">|</span>  <span class="n">close</span>
 <span class="o">---------------------+--------+---------------------+--------+-----------+----------</span>
  <span class="mi">20190822180250</span>      <span class="o">|</span> <span class="no">GOOG</span>   <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">09</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mo">00</span> <span class="o">|</span>   <span class="mi">6330</span> <span class="o">|</span>    <span class="mf">1230.5</s [...]
@@ -1099,7 +1102,7 @@ latest committed data which is “10:59 a.m”.</p>
 
 <h3 id="step-7--incremental-query-for-copy-on-write-table">Step 7 : Incremental Query for COPY-ON-WRITE Table</h3>
 
-<p>With 2 batches of data ingested, lets showcase the support for incremental queries in Hudi Copy-On-Write datasets</p>
+<p>With 2 batches of data ingested, let's showcase the support for incremental queries in Hudi Copy-On-Write tables.</p>
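Conceptually, an incremental query filters on the commit timestamp that Hudi stores in each record's `_hoodie_commit_time` metadata column, returning only records committed after a chosen begin instant. A minimal sketch, with plain Python standing in for the Spark datasource and made-up commit times:

```python
# Sketch of incremental-query semantics (toy model, not the Hudi API):
# return only records committed strictly after a given begin instant.
rows = [
    {"_hoodie_commit_time": "20180924064621", "symbol": "GOOG", "close": 1230.02},
    {"_hoodie_commit_time": "20180924065039", "symbol": "GOOG", "close": 1227.21},
]

def incremental_query(records, begin_instant):
    # Strictly-greater-than filter: the begin instant itself is excluded,
    # so only changes from commits after it are returned.
    return [r for r in records if r["_hoodie_commit_time"] > begin_instant]

changed = incremental_query(rows, "20180924064621")
print(len(changed))  # -> 1 (only the second commit's record)
```

This is why passing the first commit's timestamp as the begin instant surfaces only the rows written by the second batch.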
 
 <p>Let's take the same projection query example.</p>
 
@@ -1151,15 +1154,15 @@ Here is the incremental query :</p>
 
 <h3 id="incremental-query-with-spark-sql">Incremental Query with Spark SQL:</h3>
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">1</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
-<span class="n">bash</span><span class="o">-</span><span class="mf">4.4</span><span class="err">#</span> <span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class [...]
+<span class="n">bash</span><span class="o">-</span><span class="mf">4.4</span><span class="err">#</span> <span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class [...]
 <span class="nc">Welcome</span> <span class="n">to</span>
       <span class="n">____</span>              <span class="n">__</span>
      <span class="o">/</span> <span class="n">__</span><span class="o">/</span><span class="n">__</span>  <span class="n">___</span> <span class="n">_____</span><span class="o">/</span> <span class="o">/</span><span class="n">__</span>
     <span class="n">_</span><span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="n">_</span> <span class="err">\</span><span class="o">/</span> <span class="n">_</span> <span class="err">`</span><span class="o">/</span> <span class="n">__</span><span class="o">/</span>  <span class="err">'</span><span class="n">_</span><span class="o">/</span>
-   <span class="o">/</span><span class="n">___</span><span class="o">/</span> <span class="o">.</span><span class="na">__</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="o">,</span><span class="n">_</span><span class="o">/</span><span class="n">_</span><span class="o">/</span> <span class="o">/</span><span class="n">_</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="err">\</span>   <span class="n">ve [...]
+   <span class="o">/</span><span class="n">___</span><span class="o">/</span> <span class="o">.</span><span class="na">__</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="o">,</span><span class="n">_</span><span class="o">/</span><span class="n">_</span><span class="o">/</span> <span class="o">/</span><span class="n">_</span><span class="o">/</span><span class="err">\</span><span class="n">_</span><span class="err">\</span>   <span class="n">ve [...]
       <span class="o">/</span><span class="n">_</span><span class="o">/</span>
 
-<span class="nc">Using</span> <span class="nc">Scala</span> <span class="n">version</span> <span class="mf">2.11</span><span class="o">.</span><span class="mi">8</span> <span class="o">(</span><span class="nc">Java</span> <span class="nf">HotSpot</span><span class="o">(</span><span class="no">TM</span><span class="o">)</span> <span class="mi">64</span><span class="o">-</span><span class="nc">Bit</span> <span class="nc">Server</span> <span class="no">VM</span><span class="o">,</span> <spa [...]
+<span class="nc">Using</span> <span class="nc">Scala</span> <span class="n">version</span> <span class="mf">2.11</span><span class="o">.</span><span class="mi">12</span> <span class="o">(</span><span class="nc">OpenJDK</span> <span class="mi">64</span><span class="o">-</span><span class="nc">Bit</span> <span class="nc">Server</span> <span class="no">VM</span><span class="o">,</span> <span class="nc">Java</span> <span class="mf">1.8</span><span class="o">.</span><span class="mi">0_212</sp [...]
 <span class="nc">Type</span> <span class="n">in</span> <span class="n">expressions</span> <span class="n">to</span> <span class="n">have</span> <span class="n">them</span> <span class="n">evaluated</span><span class="o">.</span>
 <span class="nc">Type</span> <span class="o">:</span><span class="n">help</span> <span class="k">for</span> <span class="n">more</span> <span class="n">information</span><span class="o">.</span>
 
@@ -1167,7 +1170,7 @@ Here is the incremental query :</p>
 <span class="kn">import</span> <span class="nn">org.apache.hudi.DataSourceReadOptions</span>
 
 <span class="err">#</span> <span class="nc">In</span> <span class="n">the</span> <span class="n">below</span> <span class="n">query</span><span class="o">,</span> <span class="mi">20180925045257</span> <span class="n">is</span> <span class="n">the</span> <span class="n">first</span> <span class="n">commit</span><span class="err">'</span><span class="n">s</span> <span class="n">timestamp</span>
-<span class="n">scala</span><span class="o">&gt;</span> <span class="n">val</span> <span class="n">hoodieIncViewDF</span> <span class="o">=</span>  <span class="n">spark</span><span class="o">.</span><span class="na">read</span><span class="o">.</span><span class="na">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span><span class="na">option</span><span class="o">(</span><span class="nc">DataSourceReadOptions</span><span class="o">.</spa [...]
+<span class="n">scala</span><span class="o">&gt;</span> <span class="n">val</span> <span class="n">hoodieIncViewDF</span> <span class="o">=</span>  <span class="n">spark</span><span class="o">.</span><span class="na">read</span><span class="o">.</span><span class="na">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span><span class="na">option</span><span class="o">(</span><span class="nc">DataSourceReadOptions</span><span class="o">.</spa [...]
 <span class="nl">SLF4J:</span> <span class="nc">Failed</span> <span class="n">to</span> <span class="n">load</span> <span class="kd">class</span> <span class="err">"</span><span class="nc">org</span><span class="o">.</span><span class="na">slf4j</span><span class="o">.</span><span class="na">impl</span><span class="o">.</span><span class="na">StaticLoggerBinder</span><span class="s">".
 SLF4J: Defaulting to no-operation (NOP) logger implementation
 SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
@@ -1185,31 +1188,38 @@ scala&gt; spark.sql("</span><span class="n">select</span> <span class="err">`</s
 
 </code></pre></div></div>
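The incremental query above pulls only records whose `_hoodie_commit_time` is strictly greater than the supplied begin instant. A minimal plain-Scala sketch of that filter semantics (no Spark or Hudi involved; `Record`, `incrementalPull`, and the field names are hypothetical stand-ins):

```scala
// Sketch only: models the incremental-query contract, where a pull with a
// begin instant returns just the records committed strictly after it.
// Hudi commit times (yyyyMMddHHmmss) compare correctly as strings.
case class Record(commitTime: String, symbol: String, close: Double)

def incrementalPull(rows: Seq[Record], beginInstant: String): Seq[Record] =
  rows.filter(_.commitTime > beginInstant) // lexicographic order == time order

val rows = Seq(
  Record("20180924064636", "GOOG", 1230.5), // first commit
  Record("20180924070031", "GOOG", 1227.2)  // second commit
)

// Pulling with the first commit's timestamp yields only the second commit's row.
val changed = incrementalPull(rows, "20180924064636")
assert(changed.map(_.commitTime) == Seq("20180924070031"))
```

This is why re-running the pull with the latest consumed instant as the new begin instant yields an empty result until fresh commits arrive.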
 
-<h3 id="step-8-schedule-and-run-compaction-for-merge-on-read-dataset">Step 8: Schedule and Run Compaction for Merge-On-Read dataset</h3>
+<h3 id="step-8-schedule-and-run-compaction-for-merge-on-read-table">Step 8: Schedule and Run Compaction for Merge-On-Read table</h3>
 
 <p>Let's schedule and run a compaction to create a new version of the columnar file so that read-optimized readers will see fresher data.
 Again, you can use the Hudi CLI to manually schedule and run compaction.</p>
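Conceptually, compacting a Merge-On-Read table merges the accumulated delta-log updates into the columnar base file, keeping the latest write per record key. A minimal plain-Scala sketch under that assumption (`Row` and `compact` are hypothetical illustrations, not Hudi internals):

```scala
// Sketch only: compaction as "base file + log updates -> new base file",
// resolving each record key to its most recently committed value.
case class Row(key: String, commitTime: String, close: Double)

def compact(baseFile: Seq[Row], logUpdates: Seq[Row]): Seq[Row] =
  (baseFile ++ logUpdates)
    .groupBy(_.key)                 // gather all versions of each record
    .values
    .map(_.maxBy(_.commitTime))     // latest commit wins per key
    .toSeq
    .sortBy(_.key)

val base = Seq(Row("GOOG", "20180924064636", 1230.5)) // old columnar file
val log  = Seq(Row("GOOG", "20180924070031", 1227.2)) // delta-log update
val newBase = compact(base, log)
assert(newBase == Seq(Row("GOOG", "20180924070031", 1227.2)))
```

After such a rewrite, read-optimized readers see the updated value without having to merge logs at query time.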
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">1</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
 <span class="n">root</span><span class="nd">@adhoc</span><span class="o">-</span><span class="mi">1</span><span class="o">:/</span><span class="n">opt</span><span class="err">#</span>   <span class="o">/</span><span class="kt">var</span><span class="o">/</span><span class="n">hoodie</span><span class="o">/</span><span class="n">ws</span><span class="o">/</span><span class="n">hudi</span><span class="o">-</span><span class="n">cli</span><span class="o">/</span><span class="n">hudi</span>< [...]
-<span class="o">============================================</span>
-<span class="o">*</span>                                          <span class="o">*</span>
-<span class="o">*</span>     <span class="n">_</span>    <span class="n">_</span>           <span class="n">_</span>   <span class="n">_</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span> <span class="o">|</span>  <span class="o">|</span> <span class="o">|</span>         <span class="o">|</span> <span class="o">|</span> <span class="o">(</span><span class="n">_</span><span class="o">)</span>              <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span> <span class="o">|</span><span class="n">__</span><span class="o">|</span> <span class="o">|</span>       <span class="n">__</span><span class="o">|</span> <span class="o">|</span>  <span class="o">-</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span>  <span class="n">__</span>  <span class="o">||</span>   <span class="o">|</span> <span class="o">/</span> <span class="n">_</span><span class="err">`</span> <span class="o">|</span> <span class="o">||</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span> <span class="o">|</span>  <span class="o">|</span> <span class="o">||</span>   <span class="o">||</span> <span class="o">(</span><span class="n">_</span><span class="o">|</span> <span class="o">|</span> <span class="o">||</span>               <span class="o">*</span>
-<span class="o">*</span>    <span class="o">|</span><span class="n">_</span><span class="o">|</span>  <span class="o">|</span><span class="n">_</span><span class="o">|</span><span class="err">\</span><span class="n">___</span><span class="o">/</span> <span class="err">\</span><span class="n">____</span><span class="o">/</span> <span class="o">||</span>               <span class="o">*</span>
-<span class="o">*</span>                                          <span class="o">*</span>
-<span class="o">============================================</span>
-
-<span class="nc">Welcome</span> <span class="n">to</span> <span class="nc">Hoodie</span> <span class="no">CLI</span><span class="o">.</span> <span class="nc">Please</span> <span class="n">type</span> <span class="n">help</span> <span class="k">if</span> <span class="n">you</span> <span class="n">are</span> <span class="n">looking</span> <span class="k">for</span> <span class="n">help</span><span class="o">.</span>
+<span class="o">...</span>
+<span class="nc">Table</span> <span class="n">command</span> <span class="n">getting</span> <span class="n">loaded</span>
+<span class="nc">HoodieSplashScreen</span> <span class="n">loaded</span>
+<span class="o">===================================================================</span>
+<span class="o">*</span>         <span class="n">___</span>                          <span class="n">___</span>                        <span class="o">*</span>
+<span class="o">*</span>        <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>          <span class="n">___</span>           <span class="o">/</span><span class="err">\</span>  <span class="err">\</span>           <span class="n">___</span>         <span class="o">*</span>
+<span class="o">*</span>       <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>         <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>         <span class="o">/</span>  <span class="err">\</span>  <span class="err">\</span>         <span class="o">/</span><span class="err">\</span>  <span class="err">\</span>        <span class="o">*</span>
+<span class="o">*</span>      <span class="o">/</span> <span class="o">/</span><span class="n">__</span><span class="o">/</span>         <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>        <span class="o">/</span> <span class="o">/</span><span class="err">\</span> <span class="err">\</span>  <span class="err">\</span>        <span class="err">\</span> <span class="err">\</span>  <span class="err">\</span>       <span class="o">*</span>
+<span class="o">*</span>     <span class="o">/</span>  <span class="err">\</span>  <span class="err">\</span> <span class="n">___</span>    <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>        <span class="o">/</span> <span class="o">/</span>  <span class="err">\</span> <span class="err">\</span><span class="n">__</span><span class="err">\</span>       <span class="o">/</span>  <span class="err">\</span><span class="n">__</span><span class="err">\</span>     [...]
+<span class="o">*</span>    <span class="o">/</span> <span class="o">/</span><span class="err">\</span> <span class="err">\</span>  <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>  <span class="o">/</span> <span class="o">/</span><span class="n">__</span><span class="o">/</span>  <span class="n">___</span>   <span class="o">/</span> <span class="o">/</span><span class="n">__</span><span class="o">/</span> <span class="err">\</span> <s [...]
+<span class="o">*</span>    <span class="err">\</span><span class="o">/</span>  <span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>  <span class="err">\</span> <span class="err">\</span>  <span class="err">\</span> <span class="o">/</span><span class="err">\</span><span class="n">__</span><span class="err">\</span>  <span class="err">\</span> <span class="err">\</span>  <span class="err">\</span> <span class="o" [...]
+<span class="o">*</span>         <span class="err">\</span>  <span class="o">/</span>  <span class="o">/</span>    <span class="err">\</span> <span class="err">\</span>  <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>   <span class="err">\</span> <span class="err">\</span>  <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>   <span class="err">\</span>  <span class="o">/</span><span class="n">__</span><span class="o">/</span>           [...]
+<span class="o">*</span>         <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>      <span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>     <span class="err">\</span> <span class="err">\</span><span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>     <span class="err">\</span> <span class="err">\</span><span class="n">__</span><span class="err">\</span>         [...]
+<span class="o">*</span>        <span class="o">/</span> <span class="o">/</span>  <span class="o">/</span>        <span class="err">\</span>  <span class="o">/</span>  <span class="o">/</span>       <span class="err">\</span>  <span class="o">/</span>  <span class="o">/</span>       <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>          <span class="o">*</span>
+<span class="o">*</span>        <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>          <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>         <span class="err">\</span><span class="o">/</span><span class="n">__</span><span class="o">/</span>    <span class="nc">Apache</span> <span class="nc">Hudi</span> <span class="no">CLI</span>    <span class="o">*</span>
+<span class="o">*</span>                                                                 <span class="o">*</span>
+<span class="o">===================================================================</span>
+
+<span class="nc">Welcome</span> <span class="n">to</span> <span class="nc">Apache</span> <span class="nc">Hudi</span> <span class="no">CLI</span><span class="o">.</span> <span class="nc">Please</span> <span class="n">type</span> <span class="n">help</span> <span class="k">if</span> <span class="n">you</span> <span class="n">are</span> <span class="n">looking</span> <span class="k">for</span> <span class="n">help</span><span class="o">.</span>
 <span class="n">hudi</span><span class="o">-&gt;</span><span class="n">connect</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hive</span><span class="o">/</span><span class="n">warehouse</span><span class="o">/</span><span class="n">stock_ticks_mor</span>
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">06</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mi">34</span> <span class="no">WARN</span> <span class="n">util</span><span class="o">.</span><span class="na">NativeCodeLoader</span><span class="o">:</span> <span class="nc">Unable</span> <span class="n">to</span> <span class="n">load</span> <span cl [...]
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">06</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mi">35</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Loading</span> <span class="nc">HoodieTableMetaClient</span> <span cla [...]
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">06</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mi">35</span> <span class="no">INFO</span> <span class="n">util</span><span class="o">.</span><span class="na">FSUtils</span><span class="o">:</span> <span class="nc">Hadoop</span> <span class="nl">Configuration:</span> <span class="n">fs</span><span c [...]
-<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">06</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mi">35</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableConfig</span><span class="o">:</span> <span class="nc">Loading</span> <span class="n">dataset</span> <span class="n">properties</ [...]
-<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">06</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mi">36</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Finished</span> <span class="nc">Loading</span> <span class="nc">Table [...]
+<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">06</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mi">35</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableConfig</span><span class="o">:</span> <span class="nc">Loading</span> <span class="n">table</span> <span class="n">properties</sp [...]
+<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">06</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mi">36</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Finished</span> <span class="nc">Loading</span> <span class="nc">Table [...]
 <span class="nc">Metadata</span> <span class="k">for</span> <span class="n">table</span> <span class="n">stock_ticks_mor</span> <span class="n">loaded</span>
 
 <span class="err">#</span> <span class="nc">Ensure</span> <span class="n">no</span> <span class="n">compactions</span> <span class="n">are</span> <span class="n">present</span>
@@ -1233,8 +1243,8 @@ Again, You can use Hudi CLI to manually schedule and run compaction</p>
 <span class="nl">hoodie:</span><span class="n">stock_ticks</span><span class="o">-&gt;</span><span class="n">connect</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hive</span><span class="o">/</span><span class="n">warehouse</span><span class="o">/</span><span class="n">stock_ticks_mor</span>
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">01</span><span class="o">:</span><span class="mi">16</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Loading</span> <span class="nc">HoodieTableMetaClient</span> <span cla [...]
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">01</span><span class="o">:</span><span class="mi">16</span> <span class="no">INFO</span> <span class="n">util</span><span class="o">.</span><span class="na">FSUtils</span><span class="o">:</span> <span class="nc">Hadoop</span> <span class="nl">Configuration:</span> <span class="n">fs</span><span c [...]
-<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">01</span><span class="o">:</span><span class="mi">16</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableConfig</span><span class="o">:</span> <span class="nc">Loading</span> <span class="n">dataset</span> <span class="n">properties</ [...]
-<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">01</span><span class="o">:</span><span class="mi">16</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Finished</span> <span class="nc">Loading</span> <span class="nc">Table [...]
+<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">01</span><span class="o">:</span><span class="mi">16</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableConfig</span><span class="o">:</span> <span class="nc">Loading</span> <span class="n">table</span> <span class="n">properties</sp [...]
+<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">01</span><span class="o">:</span><span class="mi">16</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Finished</span> <span class="nc">Loading</span> <span class="nc">Table [...]
 <span class="nc">Metadata</span> <span class="k">for</span> <span class="n">table</span> <span class="n">stock_ticks_mor</span> <span class="n">loaded</span>
 
 
@@ -1260,8 +1270,8 @@ Again, You can use Hudi CLI to manually schedule and run compaction</p>
 <span class="nl">hoodie:</span><span class="n">stock_ticks_mor</span><span class="o">-&gt;</span><span class="n">connect</span> <span class="o">--</span><span class="n">path</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hive</span><span class="o">/</span><span class="n">warehouse</span><span class="o">/</span><span class="n">stock_ticks_mor</span>
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">03</span><span class="o">:</span><span class="mo">00</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Loading</span> <span class="nc">HoodieTableMetaClient</span> <span cla [...]
 <span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">03</span><span class="o">:</span><span class="mo">00</span> <span class="no">INFO</span> <span class="n">util</span><span class="o">.</span><span class="na">FSUtils</span><span class="o">:</span> <span class="nc">Hadoop</span> <span class="nl">Configuration:</span> <span class="n">fs</span><span c [...]
-<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">03</span><span class="o">:</span><span class="mo">00</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableConfig</span><span class="o">:</span> <span class="nc">Loading</span> <span class="n">dataset</span> <span class="n">properties</ [...]
-<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">03</span><span class="o">:</span><span class="mo">00</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Finished</span> <span class="nc">Loading</span> <span class="nc">Table [...]
+<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">03</span><span class="o">:</span><span class="mo">00</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableConfig</span><span class="o">:</span> <span class="nc">Loading</span> <span class="n">table</span> <span class="n">properties</sp [...]
+<span class="mi">18</span><span class="o">/</span><span class="mi">09</span><span class="o">/</span><span class="mi">24</span> <span class="mo">07</span><span class="o">:</span><span class="mo">03</span><span class="o">:</span><span class="mo">00</span> <span class="no">INFO</span> <span class="n">table</span><span class="o">.</span><span class="na">HoodieTableMetaClient</span><span class="o">:</span> <span class="nc">Finished</span> <span class="nc">Loading</span> <span class="nc">Table [...]
 <span class="nc">Metadata</span> <span class="k">for</span> <span class="n">table</span> <span class="n">stock_ticks_mor</span> <span class="n">loaded</span>
 
 
@@ -1277,7 +1287,7 @@ Again, You can use Hudi CLI to manually schedule and run compaction</p>
 
 <h3 id="step-9-run-hive-queries-including-incremental-queries">Step 9: Run Hive Queries including incremental queries</h3>
 
-<p>You will see that both ReadOptimized and Realtime Views will show the latest committed data.
+<p>You will see that both ReadOptimized and Snapshot queries will show the latest committed data.
 Let's also run the incremental query for the MOR table.
 From looking at the query output below, it will be clear that the first commit time for the MOR table is 20180924064636
 and the second commit time is 20180924070031</p>
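Until compaction catches up, the `_ro` and `_rt` tables queried in this step can return different results: a Read Optimized query serves only the columnar base file, while a Snapshot (realtime) query merges the delta log at read time. A minimal plain-Scala sketch of that distinction (hypothetical types, not Hudi internals):

```scala
// Sketch only: why _ro can lag _rt before compaction runs.
case class Row(key: String, commitTime: String, ts: String)

val baseFile = Seq(Row("GOOG", "20180924064636", "2018-08-31 10:29:00"))
val deltaLog = Seq(Row("GOOG", "20180924070031", "2018-08-31 10:59:00"))

// Read Optimized: base file only; stale until the log is compacted in.
def readOptimized(base: Seq[Row]): Seq[Row] = base

// Snapshot: merge log into base at read time, latest commit per key wins.
def snapshot(base: Seq[Row], log: Seq[Row]): Seq[Row] =
  (base ++ log).groupBy(_.key).values.map(_.maxBy(_.commitTime)).toSeq

assert(readOptimized(baseFile).head.commitTime == "20180924064636")
assert(snapshot(baseFile, deltaLog).head.commitTime == "20180924070031")
```

Once compaction has rewritten the base file (as in the previous step), both query types return the same latest committed data, which is what this step's Hive output demonstrates.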
@@ -1285,8 +1295,8 @@ and the second commit time is 20180924070031</p>
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">2</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
 <span class="n">beeline</span> <span class="o">-</span><span class="n">u</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000 --hiveconf hive.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat --hiveconf hive.stats.autogather=false</span>
 
-<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">View</span>
-<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor group by symbol HAVING symbol = 'GOOG';</span>
+<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">Query</span>
+<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG';</span>
 <span class="nl">WARNING:</span> <span class="nc">Hive</span><span class="o">-</span><span class="n">on</span><span class="o">-</span><span class="no">MR</span> <span class="n">is</span> <span class="n">deprecated</span> <span class="n">in</span> <span class="nc">Hive</span> <span class="mi">2</span> <span class="n">and</span> <span class="n">may</span> <span class="n">not</span> <span class="n">be</span> <span class="n">available</span> <span class="n">in</span> <span class="n">the</spa [...]
 <span class="o">+---------+----------------------+--+</span>
 <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>         <span class="n">_c1</span>          <span class="o">|</span>
@@ -1295,7 +1305,7 @@ and the second commit time is 20180924070031</p>
 <span class="o">+---------+----------------------+--+</span>
 <span class="mi">1</span> <span class="n">row</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">1.6</span> <span class="n">seconds</span><span class="o">)</span>
 
-<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor where  symbol = 'GOOG';</span>
+<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor_ro where  symbol = 'GOOG';</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 <span class="o">|</span> <span class="n">_hoodie_commit_time</span>  <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>          <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span>  <span class="o">|</span>    <span class="n">open</span>    <span class="o">|</span>   <span class="n">close</span>   <span class="o">|</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
@@ -1303,7 +1313,7 @@ and the second commit time is 20180924070031</p>
 <span class="o">|</span> <span class="mi">20180924070031</span>       <span class="o">|</span> <span class="no">GOOG</span>    <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mo">00</span>  <span class="o">|</span> <span class="mi">9021</span>    <span class="o">|</span> < [...]
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 
-<span class="err">#</span> <span class="nc">Realtime</span> <span class="nc">View</span>
+<span class="err">#</span> <span class="nc">Snapshot</span> <span class="nc">Query</span>
 <span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG';</span>
 <span class="nl">WARNING:</span> <span class="nc">Hive</span><span class="o">-</span><span class="n">on</span><span class="o">-</span><span class="no">MR</span> <span class="n">is</span> <span class="n">deprecated</span> <span class="n">in</span> <span class="nc">Hive</span> <span class="mi">2</span> <span class="n">and</span> <span class="n">may</span> <span class="n">not</span> <span class="n">be</span> <span class="n">available</span> <span class="n">in</span> <span class="n">the</spa [...]
 <span class="o">+---------+----------------------+--+</span>
@@ -1320,7 +1330,7 @@ and the second commit time is 20180924070031</p>
 <span class="o">|</span> <span class="mi">20180924070031</span>       <span class="o">|</span> <span class="no">GOOG</span>    <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mo">00</span>  <span class="o">|</span> <span class="mi">9021</span>    <span class="o">|</span> < [...]
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 
-<span class="err">#</span> <span class="nc">Incremental</span> <span class="nl">View:</span>
+<span class="err">#</span> <span class="nc">Incremental</span> <span class="nl">Query:</span>
 
 <span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; set hoodie.stock_ticks_mor.consume.mode=INCREMENTAL;</span>
 <span class="nc">No</span> <span class="n">rows</span> <span class="nf">affected</span> <span class="o">(</span><span class="mf">0.008</span> <span class="n">seconds</span><span class="o">)</span>
@@ -1330,7 +1340,7 @@ and the second commit time is 20180924070031</p>
 <span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; set hoodie.stock_ticks_mor.consume.start.timestamp=20180924064636;</span>
 <span class="nc">No</span> <span class="n">rows</span> <span class="nf">affected</span> <span class="o">(</span><span class="mf">0.013</span> <span class="n">seconds</span><span class="o">)</span>
 <span class="err">#</span> <span class="nl">Query:</span>
-<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor where  symbol = 'GOOG' and `_hoodie_commit_time` &gt; '20180924064636';</span>
+<span class="mi">0</span><span class="o">:</span> <span class="nl">jdbc:hive2:</span><span class="c1">//hiveserver:10000&gt; select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor_ro where  symbol = 'GOOG' and `_hoodie_commit_time` &gt; '20180924064636';</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 <span class="o">|</span> <span class="n">_hoodie_commit_time</span>  <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>          <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span>  <span class="o">|</span>    <span class="n">open</span>    <span class="o">|</span>   <span class="n">close</span>   <span class="o">|</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
@@ -1340,13 +1350,13 @@ and the second commit time is 20180924070031</p>
 <span class="n">exit</span>
 </code></pre></div></div>
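The incremental query above is driven by per-table Hive session properties of the form `hoodie.<table>.consume.*`. As a hedged illustration (this helper is hypothetical and not part of the Hudi API), the property keys can be derived from the table name like so:

```python
# Illustrative sketch only: builds the Hive session-property keys used to
# drive a Hudi incremental query, matching the `set` statements above, e.g.
#   set hoodie.stock_ticks_mor.consume.mode=INCREMENTAL;
#   set hoodie.stock_ticks_mor.consume.start.timestamp=20180924064636;
# This helper is not part of Hudi; it just shows the naming convention.
def incremental_query_settings(table: str, start_commit: str) -> dict:
    prefix = f"hoodie.{table}.consume"
    return {
        f"{prefix}.mode": "INCREMENTAL",
        # cap on commits consumed per query; -1 is conventionally "no limit"
        f"{prefix}.max.commits": "-1",
        f"{prefix}.start.timestamp": start_commit,
    }

settings = incremental_query_settings("stock_ticks_mor", "20180924064636")
for key, value in settings.items():
    print(f"set {key}={value};")
```

Each generated `set` statement would be issued in the Beeline session before running the incremental `select`, as shown in the demo above.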
 
-<h3 id="step-10-read-optimized-and-realtime-views-for-mor-with-spark-sql-after-compaction">Step 10: Read Optimized and Realtime Views for MOR with Spark-SQL after compaction</h3>
+<h3 id="step-10-read-optimized-and-snapshot-queries-for-mor-with-spark-sql-after-compaction">Step 10: Read Optimized and Snapshot queries for MOR with Spark-SQL after compaction</h3>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">adhoc</span><span class="o">-</span><span class="mi">1</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">bash</span>
-<span class="n">bash</span><span class="o">-</span><span class="mf">4.4</span><span class="err">#</span> <span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class [...]
+<span class="n">bash</span><span class="o">-</span><span class="mf">4.4</span><span class="err">#</span> <span class="n">$SPARK_INSTALL</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">$HUDI_SPARK_BUNDLE</span> <span class="o">--</span><span class="n">driver</span><span class="o">-</span><span class="kd">class [...]
 
-<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">View</span>
-<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select symbol, max(ts) from stock_ticks_mor group by symbol HAVING symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
+<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">Query</span>
+<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select symbol, max(ts) from stock_ticks_mor_ro group by symbol HAVING symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
 <span class="o">+---------+----------------------+--+</span>
 <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>         <span class="n">_c1</span>          <span class="o">|</span>
 <span class="o">+---------+----------------------+--+</span>
@@ -1354,7 +1364,7 @@ and the second commit time is 20180924070031</p>
 <span class="o">+---------+----------------------+--+</span>
 <span class="mi">1</span> <span class="n">row</span> <span class="nf">selected</span> <span class="o">(</span><span class="mf">1.6</span> <span class="n">seconds</span><span class="o">)</span>
 
-<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor where  symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
+<span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, symbol, ts, volume, open, close  from stock_ticks_mor_ro where  symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 <span class="o">|</span> <span class="n">_hoodie_commit_time</span>  <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>          <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span>  <span class="o">|</span>    <span class="n">open</span>    <span class="o">|</span>   <span class="n">close</span>   <span class="o">|</span>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
@@ -1362,7 +1372,7 @@ and the second commit time is 20180924070031</p>
 <span class="o">|</span> <span class="mi">20180924070031</span>       <span class="o">|</span> <span class="no">GOOG</span>    <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mo">00</span>  <span class="o">|</span> <span class="mi">9021</span>    <span class="o">|</span> < [...]
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 
-<span class="err">#</span> <span class="nc">Realtime</span> <span class="nc">View</span>
+<span class="err">#</span> <span class="nc">Snapshot</span> <span class="nc">Query</span>
 <span class="n">scala</span><span class="o">&gt;</span> <span class="n">spark</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select symbol, max(ts) from stock_ticks_mor_rt group by symbol HAVING symbol = 'GOOG'"</span><span class="o">).</span><span class="na">show</span><span class="o">(</span><span class="mi">100</span><span class="o">,</span> <span class="kc">false</span><span class="o">)</span>
 <span class="o">+---------+----------------------+--+</span>
 <span class="o">|</span> <span class="n">symbol</span>  <span class="o">|</span>         <span class="n">_c1</span>          <span class="o">|</span>
@@ -1379,14 +1389,14 @@ and the second commit time is 20180924070031</p>
 <span class="o">+----------------------+---------+----------------------+---------+------------+-----------+--+</span>
 </code></pre></div></div>
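As the queries above show, the MOR table `stock_ticks_mor` is exposed through two Hive tables: `stock_ticks_mor_ro` for Read Optimized queries and `stock_ticks_mor_rt` for Snapshot queries. A small sketch of that naming convention (the helper itself is hypothetical, not a Hudi API):

```python
# Illustrative only: maps a MOR table name plus query type to the Hive
# table name registered by Hudi's Hive sync -- "_ro" serves Read Optimized
# queries against compacted base files, "_rt" serves Snapshot (real-time)
# queries that merge base files with pending delta logs.
def mor_hive_table(base_name: str, query_type: str) -> str:
    suffixes = {"read_optimized": "_ro", "snapshot": "_rt"}
    try:
        return base_name + suffixes[query_type]
    except KeyError:
        raise ValueError(f"unknown query type: {query_type}") from None

print(mor_hive_table("stock_ticks_mor", "read_optimized"))  # stock_ticks_mor_ro
print(mor_hive_table("stock_ticks_mor", "snapshot"))        # stock_ticks_mor_rt
```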
 
-<h3 id="step-11--presto-queries-over-read-optimized-view-on-mor-dataset-after-compaction">Step 11:  Presto queries over Read Optimized View on MOR dataset after compaction</h3>
+<h3 id="step-11--presto-read-optimized-queries-on-mor-table-after-compaction">Step 11:  Presto Read Optimized queries on MOR table after compaction</h3>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">docker</span> <span class="n">exec</span> <span class="o">-</span><span class="n">it</span> <span class="n">presto</span><span class="o">-</span><span class="n">worker</span><span class="o">-</span><span class="mi">1</span> <span class="n">presto</span> <span class="o">--</span><span class="n">server</span> <span class="n">presto</span><span class="o">-</span><span class="n">c [...]
 <span class="n">presto</span><span class="o">&gt;</span> <span class="n">use</span> <span class="n">hive</span><span class="o">.</span><span class="na">default</span><span class="o">;</span>
 <span class="no">USE</span>
 
-<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">View</span>
-<span class="nl">resto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <span c [...]
+<span class="err">#</span> <span class="nc">Read</span> <span class="nc">Optimized</span> <span class="nc">Query</span>
+<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">max</span><span class="o">(</span><span class="n">ts</span><span class="o">)</span> <span class="n">from</span> <span class="n">stock_ticks_mor_ro</span> <span class="n">group</span> <span class="n">by</span> <span class="n">symbol</span> <span class="no">HAVING</span> <span class="n">symbol</span> <spa [...]
   <span class="n">symbol</span> <span class="o">|</span>        <span class="n">_col1</span>
 <span class="o">--------+---------------------</span>
  <span class="no">GOOG</span>   <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">10</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mo">00</span>
@@ -1396,7 +1406,7 @@ and the second commit time is 20180924070031</p>
 <span class="nl">Splits:</span> <span class="mi">49</span> <span class="n">total</span><span class="o">,</span> <span class="mi">49</span> <span class="n">done</span> <span class="o">(</span><span class="mf">100.00</span><span class="o">%)</span>
 <span class="mi">0</span><span class="o">:</span><span class="mo">01</span> <span class="o">[</span><span class="mi">197</span> <span class="n">rows</span><span class="o">,</span> <span class="mi">613</span><span class="no">B</span><span class="o">]</span> <span class="o">[</span><span class="mi">133</span> <span class="n">rows</span><span class="o">/</span><span class="n">s</span><span class="o">,</span> <span class="mi">414</span><span class="no">B</span><span class="o">/</span><span c [...]
 
-<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="s">"_hoodie_commit_time"</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor</spa [...]
+<span class="nl">presto:</span><span class="k">default</span><span class="o">&gt;</span> <span class="n">select</span> <span class="s">"_hoodie_commit_time"</span><span class="o">,</span> <span class="n">symbol</span><span class="o">,</span> <span class="n">ts</span><span class="o">,</span> <span class="n">volume</span><span class="o">,</span> <span class="n">open</span><span class="o">,</span> <span class="n">close</span>  <span class="n">from</span> <span class="n">stock_ticks_mor_ro</ [...]
  <span class="n">_hoodie_commit_time</span> <span class="o">|</span> <span class="n">symbol</span> <span class="o">|</span>         <span class="n">ts</span>          <span class="o">|</span> <span class="n">volume</span> <span class="o">|</span>   <span class="n">open</span>    <span class="o">|</span>  <span class="n">close</span>
 <span class="o">---------------------+--------+---------------------+--------+-----------+----------</span>
  <span class="mi">20190822180250</span>      <span class="o">|</span> <span class="no">GOOG</span>   <span class="o">|</span> <span class="mi">2018</span><span class="o">-</span><span class="mi">08</span><span class="o">-</span><span class="mi">31</span> <span class="mi">09</span><span class="o">:</span><span class="mi">59</span><span class="o">:</span><span class="mo">00</span> <span class="o">|</span>   <span class="mi">6330</span> <span class="o">|</span>    <span class="mf">1230.5</s [...]
@@ -1420,7 +1430,7 @@ and the second commit time is 20180924070031</p>
 </code></pre></div></div>
<p>The above command builds docker images for all the services with
the current Hudi source installed at /var/hoodie/ws and also brings up the services using a compose file. We
-currently use Hadoop (v2.8.4), Hive (v2.3.3) and Spark (v2.3.1) in docker images.</p>
+currently use Hadoop (v2.8.4), Hive (v2.3.3) and Spark (v2.4.4) in docker images.</p>
 
 <p>To bring down the containers</p>
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">$</span> <span class="n">cd</span> <span class="n">hudi</span><span class="o">-</span><span class="n">integ</span><span class="o">-</span><span class="n">test</span>
diff --git a/content/docs/docs-versions.html b/content/docs/docs-versions.html
index 723b819..678b275 100644
--- a/content/docs/docs-versions.html
+++ b/content/docs/docs-versions.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -337,17 +337,17 @@
             }
           </style>
         
-        <table>
+        <table class="docversions">
     <tbody>
       
         <tr>
-            <th class="docversions">Latest</th>
+            <th>Latest</th>
             <td><a href="/docs/quick-start-guide.html">English Version</a></td>
             <td><a href="/cn/docs/quick-start-guide.html">Chinese Version</a></td>
         </tr>
       
         <tr>
-            <th class="docversions">0.5.0</th>
+            <th>0.5.0</th>
             <td><a href="/docs/0.5.0-quick-start-guide.html">English Version</a></td>
             <td><a href="/cn/docs/0.5.0-quick-start-guide.html">Chinese Version</a></td>
         </tr>
diff --git a/content/docs/gcs_hoodie.html b/content/docs/gcs_hoodie.html
index 3c1aa93..c41e250 100644
--- a/content/docs/gcs_hoodie.html
+++ b/content/docs/gcs_hoodie.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
diff --git a/content/docs/migration_guide.html b/content/docs/migration_guide.html
index b73dd65..f0dcf31 100644
--- a/content/docs/migration_guide.html
+++ b/content/docs/migration_guide.html
@@ -4,7 +4,7 @@
     <meta charset="utf-8">
 
 <!-- begin _includes/seo.html --><title>Migration Guide - Apache Hudi</title>
-<meta name="description" content="Hudi maintains metadata such as commit timeline and indexes to manage a dataset. The commit timelines helps to understand the actions happening on a dataset as well as the current state of a dataset. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently locate a record. At the moment, Hudi supports writing only parquet columnar formats.To be able to start using Hudi for your existing dataset, you will need to migrate your ex [...]
+<meta name="description" content="Hudi maintains metadata such as commit timeline and indexes to manage a table. The commit timeline helps to understand the actions happening on a table as well as the current state of a table. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently locate a record. At the moment, Hudi supports writing only parquet columnar formats. To be able to start using Hudi for your existing table, you will need to migrate your existing t [...]
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
@@ -13,7 +13,7 @@
 <meta property="og:url" content="https://hudi.apache.org/docs/migration_guide.html">
 
 
-  <meta property="og:description" content="Hudi maintains metadata such as commit timeline and indexes to manage a dataset. The commit timelines helps to understand the actions happening on a dataset as well as the current state of a dataset. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently locate a record. At the moment, Hudi supports writing only parquet columnar formats.To be able to start using Hudi for your existing dataset, you will need to migrat [...]
+  <meta property="og:description" content="Hudi maintains metadata such as commit timeline and indexes to manage a table. The commit timeline helps to understand the actions happening on a table as well as the current state of a table. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently locate a record. At the moment, Hudi supports writing only parquet columnar formats. To be able to start using Hudi for your existing table, you will need to migrate your e [...]
 
 
 
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -337,55 +337,54 @@
             }
           </style>
         
-        <p>Hudi maintains metadata such as commit timeline and indexes to manage a dataset. The commit timelines helps to understand the actions happening on a dataset as well as the current state of a dataset. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently locate a record. At the moment, Hudi supports writing only parquet columnar formats.
-To be able to start using Hudi for your existing dataset, you will need to migrate your existing dataset into a Hudi managed dataset. There are a couple of ways to achieve this.</p>
+        <p>Hudi maintains metadata such as the commit timeline and indexes to manage a table. The commit timeline helps to understand the actions happening on a table as well as its current state. Indexes are used by Hudi to maintain a record key to file id mapping to efficiently locate a record. At the moment, Hudi supports writing only the parquet columnar format.
+To start using Hudi for an existing table, you will need to migrate that table into a Hudi managed table. There are a couple of ways to achieve this.</p>
 
 <h2 id="approaches">Approaches</h2>
 
 <h3 id="use-hudi-for-new-partitions-alone">Use Hudi for new partitions alone</h3>
 
-<p>Hudi can be used to manage an existing dataset without affecting/altering the historical data already present in the
-dataset. Hudi has been implemented to be compatible with such a mixed dataset with a caveat that either the complete
-Hive partition is Hudi managed or not. Thus the lowest granularity at which Hudi manages a dataset is a Hive
-partition. Start using the datasource API or the WriteClient to write to the dataset and make sure you start writing
+<p>Hudi can be used to manage an existing table without affecting/altering the historical data already present in the
+table. Hudi is designed to be compatible with such a mixed table, with the caveat that each complete
+Hive partition is either Hudi managed or not. Thus the lowest granularity at which Hudi manages a table is a Hive
+partition. Start using the datasource API or the WriteClient to write to the table and make sure you start writing
 to a new partition or convert your last N partitions into Hudi instead of the entire table. Note, since the historical
- partitions are not managed by HUDI, none of the primitives provided by HUDI work on the data in those partitions. More concretely, one cannot perform upserts or incremental pull on such older partitions not managed by the HUDI dataset.
-Take this approach if your dataset is an append only type of dataset and you do not expect to perform any updates to existing (or non Hudi managed) partitions.</p>
+ partitions are not managed by Hudi, none of the primitives provided by Hudi work on the data in those partitions. More concretely, one cannot perform upserts or incremental pulls on such older partitions that Hudi does not manage.
+Take this approach if your table is append-only and you do not expect to perform any updates to existing (or non Hudi managed) partitions.</p>
 
-<h3 id="convert-existing-dataset-to-hudi">Convert existing dataset to Hudi</h3>
+<h3 id="convert-existing-table-to-hudi">Convert existing table to Hudi</h3>
 
-<p>Import your existing dataset into a Hudi managed dataset. Since all the data is Hudi managed, none of the limitations
- of Approach 1 apply here. Updates spanning any partitions can be applied to this dataset and Hudi will efficiently
- make the update available to queries. Note that not only do you get to use all Hudi primitives on this dataset,
- there are other additional advantages of doing this. Hudi automatically manages file sizes of a Hudi managed dataset
- . You can define the desired file size when converting this dataset and Hudi will ensure it writes out files
+<p>Import your existing table into a Hudi managed table. Since all the data is Hudi managed, none of the limitations
+ of Approach 1 apply here. Updates spanning any partitions can be applied to this table and Hudi will efficiently
+ make the update available to queries. Note that not only do you get to use all Hudi primitives on this table,
+ there are additional advantages of doing this. Hudi automatically manages file sizes of a Hudi managed table.
+ You can define the desired file size when converting this table and Hudi will ensure it writes out files
  adhering to the config. It will also ensure that smaller files later get corrected by routing some new inserts into
  small files rather than writing new small ones thus maintaining the health of your cluster.</p>
 
 <p>There are a few options when choosing this approach.</p>
 
 <p><strong>Option 1</strong>
-Use the HDFSParquetImporter tool. As the name suggests, this only works if your existing dataset is in parquet file format.
-This tool essentially starts a Spark Job to read the existing parquet dataset and converts it into a HUDI managed dataset by re-writing all the data.</p>
+Use the HDFSParquetImporter tool. As the name suggests, this only works if your existing table is in parquet file format.
+This tool essentially starts a Spark job to read the existing parquet table and convert it into a Hudi managed table by re-writing all the data.
 
 <p><strong>Option 2</strong>
-For huge datasets, this could be as simple as :</p>
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span> <span class="n">partition</span> <span class="n">in</span> <span class="o">[</span><span class="n">list</span> <span class="n">of</span> <span class="n">partitions</span> <span class="n">in</span> <span class="n">source</span> <span class="n">dataset</span><span class="o">]</span> <span class="o">{</span>
+For huge tables, this could be as simple as:</p>
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">for</span> <span class="n">partition</span> <span class="n">in</span> <span class="o">[</span><span class="n">list</span> <span class="n">of</span> <span class="n">partitions</span> <span class="n">in</span> <span class="n">source</span> <span class="n">table</span><span class="o">]</span> <span class="o">{</span>
         <span class="n">val</span> <span class="n">inputDF</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="na">read</span><span class="o">.</span><span class="na">format</span><span class="o">(</span><span class="s">"any_input_format"</span><span class="o">).</span><span class="na">load</span><span class="o">(</span><span class="s">"partition_path"</span><span class="o">)</span>
         <span class="n">inputDF</span><span class="o">.</span><span class="na">write</span><span class="o">.</span><span class="na">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span><span class="na">option</span><span class="o">()....</span><span class="na">save</span><span class="o">(</span><span class="s">"basePath"</span><span class="o">)</span>
 <span class="o">}</span>
 </code></pre></div></div>
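The loop above is pseudocode; the sketch below shows its shape in plain Python, with `read_partition` and `write_hudi` standing in for the Spark calls (`spark.read.format("any_input_format").load(partition_path)` and `inputDF.write.format("org.apache.hudi").option(...).save(basePath)`). Both names are hypothetical stand-ins, not Hudi or Spark APIs:

```python
from typing import Callable, List

def convert_partitions(partitions: List[str],
                       read_partition: Callable[[str], object],
                       write_hudi: Callable[[object, str], None],
                       base_path: str) -> int:
    """Read each source partition and write it into the Hudi table rooted
    at base_path; returns how many partitions were converted."""
    converted = 0
    for partition_path in partitions:
        # stand-in for spark.read.format("any_input_format").load(partition_path)
        df = read_partition(partition_path)
        # stand-in for df.write.format("org.apache.hudi").option(...).save(base_path)
        write_hudi(df, base_path)
        converted += 1
    return converted
```

In a real Spark job each iteration would submit a read of one source partition and an upsert-style write into the same Hudi base path, so the table is converted partition by partition rather than in one bulk pass.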
 
 <p><strong>Option 3</strong>
-Write your own custom logic of how to load an existing dataset into a Hudi managed one. Please read about the RDD API
+Write your own custom logic for loading an existing table into a Hudi managed one. Please read about the RDD API
 <a href="/docs/quick-start-guide.html">here</a>. To use the HDFSParquetImporter tool: once Hudi has been built via <code class="highlighter-rouge">mvn clean install -DskipTests</code>, the shell can be
fired up via <code class="highlighter-rouge">cd hudi-cli &amp;&amp; ./hudi-cli.sh</code>.</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">hudi</span><span class="o">-&gt;</span><span class="n">hdfsparquetimport</span>
         <span class="o">--</span><span class="n">upsert</span> <span class="kc">false</span>
-        <span class="o">--</span><span class="n">srcPath</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">parquet</span><span class="o">/</span><span class="n">dataset</span><span class="o">/</span><span class="n">basepath</span>
-        <span class="o">--</span><span class="n">targetPath</span>
-        <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hoodie</span><span class="o">/</span><span class="n">dataset</span><span class="o">/</span><span class="n">basepath</span>
+        <span class="o">--</span><span class="n">srcPath</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">parquet</span><span class="o">/</span><span class="n">table</span><span class="o">/</span><span class="n">basepath</span>
+        <span class="o">--</span><span class="n">targetPath</span> <span class="o">/</span><span class="n">user</span><span class="o">/</span><span class="n">hoodie</span><span class="o">/</span><span class="n">table</span><span class="o">/</span><span class="n">basepath</span>
         <span class="o">--</span><span class="n">tableName</span> <span class="n">hoodie_table</span>
         <span class="o">--</span><span class="n">tableType</span> <span class="no">COPY_ON_WRITE</span>
         <span class="o">--</span><span class="n">rowKeyField</span> <span class="n">_row_key</span>
diff --git a/content/docs/performance.html b/content/docs/performance.html
index be46982..57daa9a 100644
--- a/content/docs/performance.html
+++ b/content/docs/performance.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -342,14 +342,14 @@ the conventional alternatives for achieving these tasks.</p>
 
 <h2 id="upserts">Upserts</h2>
 
-<p>Following shows the speed up obtained for NoSQL database ingestion, from incrementally upserting on a Hudi dataset on the copy-on-write storage,
+<p>The following shows the speedup obtained for NoSQL database ingestion, from incrementally upserting on a Hudi table on copy-on-write storage,
 on 5 tables ranging from small to huge (as opposed to bulk loading the tables)</p>
 
 <figure>
     <img class="docimage" src="/assets/images/hudi_upsert_perf1.png" alt="hudi_upsert_perf1.png" style="max-width: 1000px" />
 </figure>
 
-<p>Given Hudi can build the dataset incrementally, it opens doors for also scheduling ingesting more frequently thus reducing latency, with
+<p>Since Hudi can build the table incrementally, it also opens the door to scheduling ingestion more frequently, thus reducing latency, with
 significant savings on the overall compute cost.</p>
 
 <figure>
@@ -372,10 +372,10 @@ For e.g , with 100M timestamp prefixed keys (5% updates, 95% inserts) on a event
<strong>~7X (2880 secs vs 440 secs) speedup</strong> over a vanilla Spark join. Even for a challenging workload like a ‘100% update’ database ingestion workload spanning 
3.25B UUID keys/30 partitions/6180 files using 300 cores, Hudi indexing offers an <strong>80-100% speedup</strong>.</p>
 
-<h2 id="read-optimized-queries">Read Optimized Queries</h2>
+<h2 id="snapshot-queries">Snapshot Queries</h2>
 
-<p>The major design goal for read optimized view is to achieve the latency reduction &amp; efficiency gains in previous section,
-with no impact on queries. Following charts compare the Hudi vs non-Hudi datasets across Hive/Presto/Spark queries and demonstrate this.</p>
+<p>The major design goal for snapshot queries is to achieve the latency reduction &amp; efficiency gains in the previous section,
+with no impact on queries. The following charts compare Hudi vs non-Hudi tables across Hive/Presto/Spark queries and demonstrate this.</p>
 
 <p><strong>Hive</strong></p>
 
diff --git a/content/docs/powered_by.html b/content/docs/powered_by.html
index 693a97f..b1dda5a 100644
--- a/content/docs/powered_by.html
+++ b/content/docs/powered_by.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
diff --git a/content/docs/privacy.html b/content/docs/privacy.html
index 7f554d8..cc6462f 100644
--- a/content/docs/privacy.html
+++ b/content/docs/privacy.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
diff --git a/content/docs/querying_data.html b/content/docs/querying_data.html
index c39e5ce..37e16ea 100644
--- a/content/docs/querying_data.html
+++ b/content/docs/querying_data.html
@@ -3,17 +3,17 @@
   <head>
     <meta charset="utf-8">
 
-<!-- begin _includes/seo.html --><title>Querying Hudi Datasets - Apache Hudi</title>
-<meta name="description" content="Conceptually, Hudi stores data physically once on DFS, while providing 3 logical views on top, as explained before. Once the dataset is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudibundle has been provided, the dataset can be queried by popular query engines like Hive, Spark and Presto.">
+<!-- begin _includes/seo.html --><title>Querying Hudi Tables - Apache Hudi</title>
+<meta name="description" content="Conceptually, Hudi stores data physically once on DFS, while providing 3 different ways of querying, as explained before. Once the table is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudi bundle has been provided, the table can be queried by popular query engines like Hive, Spark and Presto.">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
 <meta property="og:site_name" content="">
-<meta property="og:title" content="Querying Hudi Datasets">
+<meta property="og:title" content="Querying Hudi Tables">
 <meta property="og:url" content="https://hudi.apache.org/docs/querying_data.html">
 
 
-  <meta property="og:description" content="Conceptually, Hudi stores data physically once on DFS, while providing 3 logical views on top, as explained before. Once the dataset is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudibundle has been provided, the dataset can be queried by popular query engines like Hive, Spark and Presto.">
+  <meta property="og:description" content="Conceptually, Hudi stores data physically once on DFS, while providing 3 different ways of querying, as explained before. Once the table is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudibundle has been provided, the table can be queried by popular query engines like Hive, Spark and Presto.">
 
 
 
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -324,7 +324,7 @@
     <div class="page__inner-wrap">
       
         <header>
-          <h1 id="page-title" class="page__title" itemprop="headline">Querying Hudi Datasets
+          <h1 id="page-title" class="page__title" itemprop="headline">Querying Hudi Tables
 </h1>
         </header>
       
@@ -333,20 +333,20 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#hive">Hive</a>
     <ul>
-      <li><a href="#read-optimized-table">Read Optimized table</a></li>
-      <li><a href="#real-time-table">Real time table</a></li>
-      <li><a href="#incremental-pulling">Incremental Pulling</a></li>
+      <li><a href="#read-optimized-query">Read optimized query</a></li>
+      <li><a href="#snapshot-query">Snapshot query</a></li>
+      <li><a href="#incremental-query">Incremental query</a></li>
     </ul>
   </li>
   <li><a href="#spark">Spark</a>
     <ul>
-      <li><a href="#read-optimized-table-1">Read Optimized table</a></li>
-      <li><a href="#spark-rt-view">Real time table</a></li>
-      <li><a href="#spark-incr-pull">Incremental Pulling</a></li>
+      <li><a href="#read-optimized-query-1">Read optimized query</a></li>
+      <li><a href="#spark-snapshot-query">Snapshot query</a></li>
+      <li><a href="#spark-incr-pull">Incremental pulling</a></li>
     </ul>
   </li>
   <li><a href="#presto">Presto</a></li>
@@ -354,43 +354,48 @@
           </nav>
         </aside>
         
-        <p>Conceptually, Hudi stores data physically once on DFS, while providing 3 logical views on top, as explained <a href="/docs/concepts.html#views">before</a>. 
-Once the dataset is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudi
-bundle has been provided, the dataset can be queried by popular query engines like Hive, Spark and Presto.</p>
+        <p>Conceptually, Hudi stores data physically once on DFS, while providing 3 different ways of querying, as explained <a href="/docs/concepts.html#query-types">before</a>. 
+Once the table is synced to the Hive metastore, it provides external Hive tables backed by Hudi’s custom inputformats. Once the proper hudi
+bundle has been provided, the table can be queried by popular query engines like Hive, Spark and Presto.</p>
 
-<p>Specifically, there are two Hive tables named off <a href="/docs/configurations.html#TABLE_NAME_OPT_KEY">table name</a> passed during write. 
-For e.g, if <code class="highlighter-rouge">table name = hudi_tbl</code>, then we get</p>
+<p>Specifically, the following Hive tables are registered based on the <a href="/docs/configurations.html#TABLE_NAME_OPT_KEY">table name</a> 
+and <a href="/docs/configurations.html#TABLE_TYPE_OPT_KEY">table type</a> passed during write.</p>
 
+<p>If <code class="highlighter-rouge">table name = hudi_trips</code> and <code class="highlighter-rouge">table type = COPY_ON_WRITE</code>, then we get:</p>
 <ul>
-  <li><code class="highlighter-rouge">hudi_tbl</code> realizes the read optimized view of the dataset backed by <code class="highlighter-rouge">HoodieParquetInputFormat</code>, exposing purely columnar data.</li>
-  <li><code class="highlighter-rouge">hudi_tbl_rt</code> realizes the real time view of the dataset  backed by <code class="highlighter-rouge">HoodieParquetRealtimeInputFormat</code>, exposing merged view of base and log data.</li>
+  <li><code class="highlighter-rouge">hudi_trips</code> supports snapshot query and incremental query on the table backed by <code class="highlighter-rouge">HoodieParquetInputFormat</code>, exposing purely columnar data.</li>
+</ul>
+
+<p>If <code class="highlighter-rouge">table name = hudi_trips</code> and <code class="highlighter-rouge">table type = MERGE_ON_READ</code>, then we get:</p>
+<ul>
+  <li><code class="highlighter-rouge">hudi_trips_rt</code> supports snapshot query and incremental query (providing near-real time data) on the table backed by <code class="highlighter-rouge">HoodieParquetRealtimeInputFormat</code>, exposing a merged view of base and log data.</li>
+  <li><code class="highlighter-rouge">hudi_trips_ro</code> supports read optimized query on the table backed by <code class="highlighter-rouge">HoodieParquetInputFormat</code>, exposing purely columnar data.</li>
 </ul>
 
 <p>As discussed in the concepts section, the one key primitive needed for <a href="https://www.oreilly.com/ideas/ubers-case-for-incremental-processing-on-hadoop">incrementally processing</a>,
-is <code class="highlighter-rouge">incremental pulls</code> (to obtain a change stream/log from a dataset). Hudi datasets can be pulled incrementally, which means you can get ALL and ONLY the updated &amp; new rows 
+is <code class="highlighter-rouge">incremental pulls</code> (to obtain a change stream/log from a table). Hudi tables can be pulled incrementally, which means you can get ALL and ONLY the updated &amp; new rows 
 since a specified instant time. This, together with upserts, is particularly useful for building data pipelines where 1 or more source Hudi tables are incrementally pulled (streams/facts),
-joined with other tables (datasets/dimensions), to <a href="/docs/writing_data.html">write out deltas</a> to a target Hudi dataset. Incremental view is realized by querying one of the tables above, 
-with special configurations that indicates to query planning that only incremental data needs to be fetched out of the dataset.</p>
+joined with other tables (dimension tables), to <a href="/docs/writing_data.html">write out deltas</a> to a target Hudi table. The incremental view is realized by querying one of the tables above, 
+with special configurations that indicate to query planning that only incremental data needs to be fetched out of the table.</p>
 
-<p>In sections, below we will discuss in detail how to access all the 3 views on each query engine.</p>
+<p>In the sections below, we will discuss how to access these query types from different query engines.</p>
 
 <h2 id="hive">Hive</h2>
 
-<p>In order for Hive to recognize Hudi datasets and query correctly, the HiveServer2 needs to be provided with the <code class="highlighter-rouge">hudi-hadoop-mr-bundle-x.y.z-SNAPSHOT.jar</code> 
+<p>In order for Hive to recognize Hudi tables and query them correctly, the HiveServer2 needs to be provided with the <code class="highlighter-rouge">hudi-hadoop-mr-bundle-x.y.z-SNAPSHOT.jar</code> 
 in its <a href="https://www.cloudera.com/documentation/enterprise/5-6-x/topics/cm_mc_hive_udf.html#concept_nc3_mms_lr">aux jars path</a>. This will ensure the input format 
 classes with their dependencies are available for query planning &amp; execution.</p>
 
-<h3 id="read-optimized-table">Read Optimized table</h3>
+<h3 id="read-optimized-query">Read optimized query</h3>
 <p>In addition to the setup above, for beeline cli access, the <code class="highlighter-rouge">hive.input.format</code> variable needs to be set to the fully qualified path name of the 
 inputformat <code class="highlighter-rouge">org.apache.hudi.hadoop.HoodieParquetInputFormat</code>. For Tez, additionally the <code class="highlighter-rouge">hive.tez.input.format</code> needs to be set 
 to <code class="highlighter-rouge">org.apache.hadoop.hive.ql.io.HiveInputFormat</code>.</p>
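
Put together, a read optimized query session might look like the command-line sketch below. This is only a sketch: the JDBC URL, the `hudi_trips_ro` table name and the `datestr` partition column are illustrative placeholders, not part of the setup above.

```sh
# beeline sketch; assumes hudi-hadoop-mr-bundle is already on HiveServer2's aux jars path
# (server URL, table name and partition column below are placeholders)
beeline -u jdbc:hive2://hiveserver:10000 \
  --hiveconf hive.input.format=org.apache.hudi.hadoop.HoodieParquetInputFormat \
  --hiveconf hive.tez.input.format=org.apache.hadoop.hive.ql.io.HiveInputFormat \
  -e "select count(*) from hudi_trips_ro where datestr = '2016-10-02'"
```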
 
-<h3 id="real-time-table">Real time table</h3>
+<h3 id="snapshot-query">Snapshot query</h3>
 <p>In addition to installing the hive bundle jar on the HiveServer2, it needs to be put on the hadoop/hive installation across the cluster, so that
 queries can pick up the custom RecordReader as well.</p>
 
-<h3 id="incremental-pulling">Incremental Pulling</h3>
-
+<h3 id="incremental-query">Incremental query</h3>
 <p><code class="highlighter-rouge">HiveIncrementalPuller</code> allows incrementally extracting changes from large fact/dimension tables via HiveQL, combining the benefits of Hive (reliably processing complex SQL queries) and 
 incremental primitives (speeding up queries by pulling tables incrementally instead of scanning fully). The tool uses Hive JDBC to run the hive query and saves its results in a temp table
 that can later be upserted. The upsert utility (<code class="highlighter-rouge">HoodieDeltaStreamer</code>) has all the state it needs from the directory structure to know what should be the commit time on the target table.
@@ -480,12 +485,12 @@ e.g: <code class="highlighter-rouge">/app/incremental-hql/intermediate/{source_t
   </tbody>
 </table>
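
For orientation, a hypothetical invocation wiring these options together might look like the following. All values below are placeholders (jar path, JDBC URL, tables), and the option names should be confirmed against the tool's help output for your Hudi version:

```sh
# hypothetical HiveIncrementalPuller invocation; every value here is a placeholder
java -cp hudi-utilities-bundle.jar org.apache.hudi.utilities.HiveIncrementalPuller \
  --hiveUrl jdbc:hive2://hiveserver:10000 \
  --extractSqlFile /path/to/incremental_pull.sql \
  --sourceTable source_db.fact_table \
  --targetTable target_db.hudi_table \
  --fromCommitTime 0 \
  --maxCommits -1
```

Note that `--fromCommitTime 0 --maxCommits -1` corresponds to the backfill behavior described below.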
 
-<p>Setting fromCommitTime=0 and maxCommits=-1 will pull in the entire source dataset and can be used to initiate backfills. If the target dataset is a Hudi dataset,
-then the utility can determine if the target dataset has no commits or is behind more than 24 hour (this is configurable),
+<p>Setting fromCommitTime=0 and maxCommits=-1 will pull in the entire source table and can be used to initiate backfills. If the target table is a Hudi table,
+then the utility can determine whether the target table has no commits or is behind by more than 24 hours (this is configurable);
 it will automatically use the backfill configuration, since applying the last 24 hours incrementally could take more time than doing a backfill. The current limitation of the tool
-is the lack of support for self-joining the same table in mixed mode (normal and incremental modes).</p>
+is the lack of support for self-joining the same table in mixed mode (snapshot and incremental modes).</p>
 
-<p><strong>NOTE on Hive queries that are executed using Fetch task:</strong>
+<p><strong>NOTE on Hive incremental queries that are executed using Fetch task:</strong>
 Since Fetch tasks invoke InputFormat.listStatus() per partition, Hoodie metadata can be listed in
 every such listStatus() call. In order to avoid this, it might be useful to disable fetch tasks
 using the hive session property for incremental queries: <code class="highlighter-rouge">set hive.fetch.task.conversion=none;</code> This
@@ -494,18 +499,18 @@ separated) and calls InputFormat.listStatus() only once with all those partition
 
 <h2 id="spark">Spark</h2>
 
-<p>Spark provides much easier deployment &amp; management of Hudi jars and bundles into jobs/notebooks. At a high level, there are two ways to access Hudi datasets in Spark.</p>
+<p>Spark provides much easier deployment &amp; management of Hudi jars and bundles into jobs/notebooks. At a high level, there are two ways to access Hudi tables in Spark.</p>
 
 <ul>
   <li><strong>Hudi DataSource</strong> : Supports Read Optimized, Incremental Pulls similar to how standard datasources (e.g: <code class="highlighter-rouge">spark.read.parquet</code>) work.</li>
-  <li><strong>Read as Hive tables</strong> : Supports all three views, including the real time view, relying on the custom Hudi input formats again like Hive.</li>
+  <li><strong>Read as Hive tables</strong> : Supports all three query types, including snapshot queries, relying on the custom Hudi input formats as in Hive.</li>
 </ul>
 
-<p>In general, your spark job needs a dependency to <code class="highlighter-rouge">hudi-spark</code> or <code class="highlighter-rouge">hudi-spark-bundle-x.y.z.jar</code> needs to be on the class path of driver &amp; executors (hint: use <code class="highlighter-rouge">--jars</code> argument)</p>
+<p>In general, your spark job needs a dependency on <code class="highlighter-rouge">hudi-spark</code>, or the <code class="highlighter-rouge">hudi-spark-bundle_2.*-x.y.z.jar</code> needs to be on the class path of driver &amp; executors (hint: use <code class="highlighter-rouge">--jars</code> argument)</p>
 
-<h3 id="read-optimized-table-1">Read Optimized table</h3>
+<h3 id="read-optimized-query-1">Read optimized query</h3>
 
-<p>To read RO table as a Hive table using SparkSQL, simply push a path filter into sparkContext as follows. 
+<p>Pushing a path filter into sparkContext as follows allows for read optimized querying of a Hudi Hive table using SparkSQL. 
 This method retains Spark built-in optimizations for reading Parquet files like vectorized reading on Hudi tables.</p>
 
 <div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">spark</span><span class="o">.</span><span class="py">sparkContext</span><span class="o">.</span><span class="py">hadoopConfiguration</span><span class="o">.</span><span class="py">setClass</span><span class="o">(</span><span class="s">"mapreduce.input.pathFilter.class"</span><span class="o">,</span> <span class="n">classOf</span><span class="o">[</span><span class="kt">org.a [...]
@@ -514,21 +519,22 @@ This method retains Spark built-in optimizations for reading Parquet files like
 <p>If you prefer to glob paths on DFS via the datasource, you can simply do something like below to get a Spark dataframe to work with.</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nc">Dataset</span><span class="o">&lt;</span><span class="nc">Row</span><span class="o">&gt;</span> <span class="n">hoodieROViewDF</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="na">read</span><span class="o">().</span><span class="na">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">)</span>
-<span class="c1">// pass any path glob, can include hudi &amp; non-hudi datasets</span>
+<span class="c1">// pass any path glob, can include hudi &amp; non-hudi tables</span>
 <span class="o">.</span><span class="na">load</span><span class="o">(</span><span class="s">"/glob/path/pattern"</span><span class="o">);</span>
 </code></pre></div></div>
 
-<h3 id="spark-rt-view">Real time table</h3>
-<p>Currently, real time table can only be queried as a Hive table in Spark. In order to do this, set <code class="highlighter-rouge">spark.sql.hive.convertMetastoreParquet=false</code>, forcing Spark to fallback 
+<h3 id="spark-snapshot-query">Snapshot query</h3>
+<p>Currently, near-real time data can only be queried as a Hive table in Spark using snapshot query mode. In order to do this, set <code class="highlighter-rouge">spark.sql.hive.convertMetastoreParquet=false</code>, forcing Spark to fall back 
 to using the Hive Serde to read the data (planning/execution is still Spark).</p>
 
-<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">$</span> <span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">hudi</span><span class="o">-</span><span class="n">spark</span><span class="o">-</span><span class="n">bundle</span><span class="o">-</span><span class="n">x</span><span class="o">.</span><span class="na">y</span><span [...]
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="err">$</span> <span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">jars</span> <span class="n">hudi</span><span class="o">-</span><span class="n">spark</span><span class="o">-</span><span class="n">bundle_2</span><span class="o">.</span><span class="mi">11</span><span class="o">-</span><span class="n">x</span><s [...]
 
-<span class="n">scala</span><span class="o">&gt;</span> <span class="n">sqlContext</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select count(*) from hudi_rt where datestr = '2016-10-02'"</span><span class="o">).</span><span class="na">show</span><span class="o">()</span>
+<span class="n">scala</span><span class="o">&gt;</span> <span class="n">sqlContext</span><span class="o">.</span><span class="na">sql</span><span class="o">(</span><span class="s">"select count(*) from hudi_trips_rt where datestr = '2016-10-02'"</span><span class="o">).</span><span class="na">show</span><span class="o">()</span>
 </code></pre></div></div>
 
-<h3 id="spark-incr-pull">Incremental Pulling</h3>
-<p>The <code class="highlighter-rouge">hudi-spark</code> module offers the DataSource API, a more elegant way to pull data from Hudi dataset and process it via Spark.
+<h3 id="spark-incr-pull">Incremental pulling</h3>
+<p>The <code class="highlighter-rouge">hudi-spark</code> module offers the DataSource API, a more elegant way to pull data from a Hudi table and process it via Spark.
 A sample incremental pull that obtains all records written since <code class="highlighter-rouge">beginInstantTime</code> looks like below.</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nc">Dataset</span><span class="o">&lt;</span><span class="nc">Row</span><span class="o">&gt;</span> <span class="n">hoodieIncViewDF</span> <span class="o">=</span> <span class="n">spark</span><span class="o">.</span><span class="na">read</span><span class="o">()</span>
@@ -537,7 +543,7 @@ A sample incremental pull, that will obtain all records written since <code clas
              <span class="nc">DataSourceReadOptions</span><span class="o">.</span><span class="na">VIEW_TYPE_INCREMENTAL_OPT_VAL</span><span class="o">())</span>
      <span class="o">.</span><span class="na">option</span><span class="o">(</span><span class="nc">DataSourceReadOptions</span><span class="o">.</span><span class="na">BEGIN_INSTANTTIME_OPT_KEY</span><span class="o">(),</span>
             <span class="o">&lt;</span><span class="n">beginInstantTime</span><span class="o">&gt;)</span>
-     <span class="o">.</span><span class="na">load</span><span class="o">(</span><span class="n">tablePath</span><span class="o">);</span> <span class="c1">// For incremental view, pass in the root/base path of dataset</span>
+     <span class="o">.</span><span class="na">load</span><span class="o">(</span><span class="n">tablePath</span><span class="o">);</span> <span class="c1">// For incremental view, pass in the root/base path of table</span>
 </code></pre></div></div>
 
 <p>Please refer to <a href="/docs/configurations.html#spark-datasource">configurations</a> section, to view all datasource options.</p>
@@ -562,14 +568,14 @@ A sample incremental pull, that will obtain all records written since <code clas
     </tr>
     <tr>
       <td>checkExists(keys)</td>
-      <td>Check if the provided keys exist in a Hudi dataset</td>
+      <td>Check if the provided keys exist in a Hudi table</td>
     </tr>
   </tbody>
 </table>
 
 <h2 id="presto">Presto</h2>
 
-<p>Presto is a popular query engine, providing interactive query performance. Hudi RO tables can be queries seamlessly in Presto. 
+<p>Presto is a popular query engine, providing interactive query performance. Presto currently supports only read optimized queries on Hudi tables. 
 This requires the <code class="highlighter-rouge">hudi-presto-bundle</code> jar to be placed into <code class="highlighter-rouge">&lt;presto_install&gt;/plugin/hive-hadoop2/</code>, across the installation.</p>
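
Once the bundle jar is in place across the installation, a Hive-synced Hudi table can be queried from the Presto CLI like any other Hive table. A sketch follows; the server address, `hive` catalog, `default` schema and table name are all assumptions for illustration:

```sh
# presto CLI sketch; server, catalog, schema and table name are placeholders
presto --server presto-coordinator:8080 --catalog hive --schema default \
  --execute "select count(*) from hudi_trips_ro where datestr = '2016-10-02'"
```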
 
       </section>
diff --git a/content/docs/quick-start-guide.html b/content/docs/quick-start-guide.html
index d79d1bc..6d8816a 100644
--- a/content/docs/quick-start-guide.html
+++ b/content/docs/quick-start-guide.html
@@ -4,7 +4,7 @@
     <meta charset="utf-8">
 
 <!-- begin _includes/seo.html --><title>Quick-Start Guide - Apache Hudi</title>
-<meta name="description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi dataset of default storage type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.">
+<meta name="description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allow you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
@@ -13,7 +13,7 @@
 <meta property="og:url" content="https://hudi.apache.org/docs/quick-start-guide.html">
 
 
-  <meta property="og:description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allows you to insert and update a Hudi dataset of default storage type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.">
+  <meta property="og:description" content="This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through code snippets that allow you to insert and update a Hudi table of default table type: Copy on Write. After each write operation we will also show how to read the data both snapshot and incrementally.">
 
 
 
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#setup-spark-shell">Setup spark-shell</a></li>
   <li><a href="#insert-data">Insert data</a></li>
@@ -341,14 +341,15 @@
   <li><a href="#update-data">Update data</a></li>
   <li><a href="#incremental-query">Incremental query</a></li>
   <li><a href="#point-in-time-query">Point in time query</a></li>
+  <li><a href="#deletes">Delete data</a></li>
   <li><a href="#where-to-go-from-here">Where to go from here?</a></li>
 </ul>
           </nav>
         </aside>
         
         <p>This guide provides a quick peek at Hudi’s capabilities using spark-shell. Using Spark datasources, we will walk through 
-code snippets that allows you to insert and update a Hudi dataset of default storage type: 
-<a href="/docs/concepts.html#copy-on-write-storage">Copy on Write</a>. 
+code snippets that allow you to insert and update a Hudi table of default table type: 
+<a href="/docs/concepts.html#copy-on-write-table">Copy on Write</a>. 
 After each write operation we will also show how to read the data both snapshot and incrementally.</p>
 
 <h2 id="setup-spark-shell">Setup spark-shell</h2>
@@ -356,10 +357,20 @@ After each write operation we will also show how to read the data both snapshot
 <p>Hudi works with Spark-2.x versions. You can follow instructions <a href="https://spark.apache.org/downloads.html">here</a> for setting up spark. 
 From the extracted directory run spark-shell with Hudi as:</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class="o">-</span><span class="n">shell</span> <span class="o">--</span><span class="n">packages</span> <span class="nv">org</span><span class="o">.</span><span class="py">apache</span><span class="o">.</span><span class="py">hudi</span><span class="k">:</span><span class="kt">hudi-spark-bundle:</span><span c [...]
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">spark</span><span class="o">-</span><span class="mf">2.4</span><span class="o">.</span><span class="mi">4</span><span class="o">-</span><span class="n">bin</span><span class="o">-</span><span class="n">hadoop2</span><span class="o">.</span><span class="mi">7</span><span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">spark</span><span class [...]
     <span class="kt">--conf</span> <span class="kt">'spark.serializer</span><span class="o">=</span><span class="nv">org</span><span class="o">.</span><span class="py">apache</span><span class="o">.</span><span class="py">spark</span><span class="o">.</span><span class="py">serializer</span><span class="o">.</span><span class="py">KryoSerializer</span><span class="o">'</span>
 </code></pre></div></div>
 
+<div class="notice--info">
+  <h4>Please note the following: </h4>
+<ul>
+  <li>spark-avro module needs to be specified in --packages as it is not included with spark-shell by default</li>
+  <li>spark-avro and spark versions must match (we have used 2.4.4 for both above)</li>
+  <li>we have used hudi-spark-bundle built for scala 2.11 since the spark-avro module used also depends on 2.11. 
+         If spark-avro_2.12 is used, correspondingly hudi-spark-bundle_2.12 needs to be used. </li>
+</ul>
+</div>
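
Following those notes, a launch command satisfying all three constraints could look roughly like this (artifact versions are illustrative and must be matched to your actual Spark and Scala installation):

```sh
# illustrative: hudi-spark-bundle and spark-avro must share the same Scala version
# (2.11 here), and spark-avro must match the Spark version in use (2.4.4 here)
bin/spark-shell \
  --packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating,org.apache.spark:spark-avro_2.11:2.4.4 \
  --conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer'
```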
+
 <p>Setup table name, base path and a data generator to generate records for this guide.</p>
 
 <div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">import</span> <span class="nn">org.apache.hudi.QuickstartUtils._</span>
@@ -369,8 +380,8 @@ From the extracted directory run spark-shell with Hudi as:</p>
 <span class="k">import</span> <span class="nn">org.apache.hudi.DataSourceWriteOptions._</span>
 <span class="k">import</span> <span class="nn">org.apache.hudi.config.HoodieWriteConfig._</span>
 
-<span class="k">val</span> <span class="nv">tableName</span> <span class="k">=</span> <span class="s">"hudi_cow_table"</span>
-<span class="k">val</span> <span class="nv">basePath</span> <span class="k">=</span> <span class="s">"file:///tmp/hudi_cow_table"</span>
+<span class="k">val</span> <span class="nv">tableName</span> <span class="k">=</span> <span class="s">"hudi_trips_cow"</span>
+<span class="k">val</span> <span class="nv">basePath</span> <span class="k">=</span> <span class="s">"file:///tmp/hudi_trips_cow"</span>
 <span class="k">val</span> <span class="nv">dataGen</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">DataGenerator</span>
 </code></pre></div></div>
 
@@ -379,7 +390,7 @@ can generate sample inserts and updates based on the sample trip schema <a h
 
 <h2 id="insert-data">Insert data</h2>
 
-<p>Generate some new trips, load them into a DataFrame and write the DataFrame into the Hudi dataset as below.</p>
+<p>Generate some new trips, load them into a DataFrame and write the DataFrame into the Hudi table as below.</p>
 
 <div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">inserts</span> <span class="k">=</span> <span class="nf">convertToStringList</span><span class="o">(</span><span class="nv">dataGen</span><span class="o">.</span><span class="py">generateInserts</span><span class="o">(</span><span class="mi">10</span><span class="o">))</span>
 <span class="k">val</span> <span class="nv">df</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">read</span><span class="o">.</span><span class="py">json</span><span class="o">(</span><span class="nv">spark</span><span class="o">.</span><span class="py">sparkContext</span><span class="o">.</span><span class="py">parallelize</span><span class="o">(</span><span class="n">inserts</span><span class="o">,</span> <span class="mi">2</span><spa [...]
@@ -393,12 +404,12 @@ can generate sample inserts and updates based on the the sample trip schema <a h
     <span class="nf">save</span><span class="o">(</span><span class="n">basePath</span><span class="o">);</span>
 </code></pre></div></div>
 
-<p class="notice--info"><code class="highlighter-rouge">mode(Overwrite)</code> overwrites and recreates the dataset if it already exists.
-You can check the data generated under <code class="highlighter-rouge">/tmp/hudi_cow_table/&lt;region&gt;/&lt;country&gt;/&lt;city&gt;/</code>. We provided a record key 
-(<code class="highlighter-rouge">uuid</code> in <a href="#sample-schema">schema</a>), partition field (<code class="highlighter-rouge">region/county/city</code>) and combine logic (<code class="highlighter-rouge">ts</code> in 
-<a href="#sample-schema">schema</a>) to ensure trip records are unique within each partition. For more info, refer to 
+<p class="notice--info"><code class="highlighter-rouge">mode(Overwrite)</code> overwrites and recreates the table if it already exists.
+You can check the data generated under <code class="highlighter-rouge">/tmp/hudi_trips_cow/&lt;region&gt;/&lt;country&gt;/&lt;city&gt;/</code>. We provided a record key 
+(<code class="highlighter-rouge">uuid</code> in <a href="https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L58">schema</a>), partition field (<code class="highlighter-rouge">region/country/city</code>) and combine logic (<code class="highlighter-rouge">ts</code> in 
+<a href="https://github.com/apache/incubator-hudi/blob/master/hudi-spark/src/main/java/org/apache/hudi/QuickstartUtils.java#L58">schema</a>) to ensure trip records are unique within each partition. For more info, refer to 
 <a href="https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=113709185#FAQ-HowdoImodelthedatastoredinHudi">Modeling data stored in Hudi</a>
-and for info on ways to ingest data into Hudi, refer to <a href="/docs/writing_data.html">Writing Hudi Datasets</a>.
+and for info on ways to ingest data into Hudi, refer to <a href="/docs/writing_data.html">Writing Hudi Tables</a>.
 Here we are using the default write operation: <code class="highlighter-rouge">upsert</code>. If you have a workload without updates, you can also issue 
 <code class="highlighter-rouge">insert</code> or <code class="highlighter-rouge">bulk_insert</code> operations which could be faster. To know more, refer to <a href="/docs/writing_data#write-operations">Write operations</a></p>
 
@@ -406,23 +417,23 @@ Here we are using the default write operation : <code class="highlighter-rouge">
 
 <p>Load the data files into a DataFrame.</p>
 
-<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">roViewDF</span> <span class="k">=</span> <span class="n">spark</span><span class="o">.</span>
+<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">tripsSnapshotDF</span> <span class="k">=</span> <span class="n">spark</span><span class="o">.</span>
     <span class="n">read</span><span class="o">.</span>
     <span class="nf">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span>
     <span class="nf">load</span><span class="o">(</span><span class="n">basePath</span> <span class="o">+</span> <span class="s">"/*/*/*/*"</span><span class="o">)</span>
-<span class="nv">roViewDF</span><span class="o">.</span><span class="py">createOrReplaceTempView</span><span class="o">(</span><span class="s">"hudi_ro_table"</span><span class="o">)</span>
-<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select fare, begin_lon, begin_lat, ts from  hudi_ro_table where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
-<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from  hudi_ro_table"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
+<span class="nv">tripsSnapshotDF</span><span class="o">.</span><span class="py">createOrReplaceTempView</span><span class="o">(</span><span class="s">"hudi_trips_snapshot"</span><span class="o">)</span>
+<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select fare, begin_lon, begin_lat, ts from  hudi_trips_snapshot where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
+<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from  hudi_trips_snapshot"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
 </code></pre></div></div>
 
-<p class="notice--info">This query provides a read optimized view of the ingested data. Since our partition path (<code class="highlighter-rouge">region/country/city</code>) is 3 levels nested 
+<p class="notice--info">This query provides a snapshot of the ingested data. Since our partition path (<code class="highlighter-rouge">region/country/city</code>) is 3 levels nested 
from base path we've used <code class="highlighter-rouge">load(basePath + "/*/*/*/*")</code>. 
-Refer to <a href="/docs/concepts#storage-types--views">Storage Types and Views</a> for more info on all storage types and views supported.</p>
+Refer to <a href="/docs/concepts#table-types--queries">Table types and queries</a> for more info on all table types and query types supported.</p>
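<p class="notice--info">The glob depth follows directly from the partition layout: one <code class="highlighter-rouge">/*</code> per partition level, plus one more for the data files inside the leaf directories. A small sketch of that rule (plain Scala; the helper is illustrative, not a Hudi API):</p>

```scala
// Illustrative helper (not a Hudi API): build the load() glob for a
// table partitioned N levels deep -- N wildcards for the partition
// directories plus one for the data files inside them.
def snapshotGlob(basePath: String, partitionLevels: Int): String =
  basePath + "/*" * (partitionLevels + 1)

// region/country/city is 3 levels deep, giving basePath + "/*/*/*/*"
```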
 
 <h2 id="update-data">Update data</h2>
 
 <p>This is similar to inserting new data. Generate updates to existing trips using the data generator, load into a DataFrame 
-and write DataFrame into the hudi dataset.</p>
+and write DataFrame into the hudi table.</p>
 
 <div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">val</span> <span class="nv">updates</span> <span class="k">=</span> <span class="nf">convertToStringList</span><span class="o">(</span><span class="nv">dataGen</span><span class="o">.</span><span class="py">generateUpdates</span><span class="o">(</span><span class="mi">10</span><span class="o">))</span>
 <span class="k">val</span> <span class="nv">df</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">read</span><span class="o">.</span><span class="py">json</span><span class="o">(</span><span class="nv">spark</span><span class="o">.</span><span class="py">sparkContext</span><span class="o">.</span><span class="py">parallelize</span><span class="o">(</span><span class="n">updates</span><span class="o">,</span> <span class="mi">2</span><spa [...]
@@ -436,14 +447,14 @@ and write DataFrame into the hudi dataset.</p>
     <span class="nf">save</span><span class="o">(</span><span class="n">basePath</span><span class="o">);</span>
 </code></pre></div></div>
 
-<p class="notice--info">Notice that the save mode is now <code class="highlighter-rouge">Append</code>. In general, always use append mode unless you are trying to create the dataset for the first time.
-<a href="#query-data">Querying</a> the data again will now show updated trips. Each write operation generates a new <a href="http://hudi.incubator.apache.org/concepts.html">commit</a> 
+<p class="notice--info">Notice that the save mode is now <code class="highlighter-rouge">Append</code>. In general, always use append mode unless you are trying to create the table for the first time.
+<a href="#query-data">Querying</a> the data again will now show updated trips. Each write operation generates a new <a href="http://hudi.incubator.apache.org/docs/concepts.html">commit</a> 
 denoted by the timestamp. Look for changes in the <code class="highlighter-rouge">_hoodie_commit_time</code>, <code class="highlighter-rouge">rider</code>, and <code class="highlighter-rouge">driver</code> fields for the same <code class="highlighter-rouge">_hoodie_record_key</code>s in the previous commit.</p>
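<p class="notice--info">The effect of an upsert on the snapshot can be pictured as "latest value per record key wins", ordered by commit time. A toy model of that rule (plain Scala, purely illustrative; real Hudi resolves this via the record key, commit timeline, and precombine field):</p>

```scala
// Toy model of upsert read semantics: for each record key, the snapshot
// exposes the value written by the latest commit. Rows are
// (commitTime, recordKey, value); commit times sort lexicographically.
def latestPerKey(rows: Seq[(String, String, String)]): Map[String, String] =
  rows.groupBy(_._2).map { case (key, versions) =>
    key -> versions.maxBy(_._1)._3
  }
```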
 
 <h2 id="incremental-query">Incremental query</h2>
 
 <p>Hudi also provides the capability to obtain a stream of records that changed since a given commit timestamp. 
-This can be achieved using Hudi’s incremental view and providing a begin time from which changes need to be streamed. 
+This can be achieved using Hudi’s incremental querying and providing a begin time from which changes need to be streamed. 
We do not need to specify endTime if we want all changes after the given commit (as is the common case).</p>
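<p class="notice--info">In the snippet below, the begin time is simply the second-to-last entry of the distinct commit times sorted ascending, so the incremental read returns only the changes introduced by the most recent commit. That selection can be isolated as (plain Scala; helper name hypothetical):</p>

```scala
// Hypothetical helper: given commit times sorted ascending, pick the
// second-to-last one, so an incremental read starting there returns
// only the changes from the latest commit.
def incrementalBeginTime(sortedCommits: Seq[String]): String = {
  require(sortedCommits.length >= 2, "need at least two commits")
  sortedCommits(sortedCommits.length - 2)
}
```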
 
 <div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c1">// reload data
@@ -451,20 +462,20 @@ We do not need to specify endTime, if we want all changes after the given commit
     <span class="n">read</span><span class="o">.</span>
     <span class="nf">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span>
     <span class="nf">load</span><span class="o">(</span><span class="n">basePath</span> <span class="o">+</span> <span class="s">"/*/*/*/*"</span><span class="o">).</span>
-    <span class="nf">createOrReplaceTempView</span><span class="o">(</span><span class="s">"hudi_ro_table"</span><span class="o">)</span>
+    <span class="nf">createOrReplaceTempView</span><span class="o">(</span><span class="s">"hudi_trips_snapshot"</span><span class="o">)</span>
 
-<span class="k">val</span> <span class="nv">commits</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select distinct(_hoodie_commit_time) as commitTime from  hudi_ro_table order by commitTime"</span><span class="o">).</span><span class="py">map</span><span class="o">(</span><span class="n">k</span> <span class="k">=&gt;</span> <span class="nv">k</span><span class="o">.</span><span clas [...]
+<span class="k">val</span> <span class="nv">commits</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select distinct(_hoodie_commit_time) as commitTime from  hudi_trips_snapshot order by commitTime"</span><span class="o">).</span><span class="py">map</span><span class="o">(</span><span class="n">k</span> <span class="k">=&gt;</span> <span class="nv">k</span><span class="o">.</span><spa [...]
 <span class="k">val</span> <span class="nv">beginTime</span> <span class="k">=</span> <span class="nf">commits</span><span class="o">(</span><span class="nv">commits</span><span class="o">.</span><span class="py">length</span> <span class="o">-</span> <span class="mi">2</span><span class="o">)</span> <span class="c1">// commit time we are interested in
 </span>
 <span class="c1">// incrementally query data
-</span><span class="k">val</span> <span class="nv">incViewDF</span> <span class="k">=</span> <span class="n">spark</span><span class="o">.</span>
+</span><span class="k">val</span> <span class="nv">tripsIncrementalDF</span> <span class="k">=</span> <span class="n">spark</span><span class="o">.</span>
     <span class="n">read</span><span class="o">.</span>
     <span class="nf">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span>
-    <span class="nf">option</span><span class="o">(</span><span class="nc">VIEW_TYPE_OPT_KEY</span><span class="o">,</span> <span class="nc">VIEW_TYPE_INCREMENTAL_OPT_VAL</span><span class="o">).</span>
+    <span class="nf">option</span><span class="o">(</span><span class="nc">QUERY_TYPE_OPT_KEY</span><span class="o">,</span> <span class="nc">QUERY_TYPE_INCREMENTAL_OPT_VAL</span><span class="o">).</span>
     <span class="nf">option</span><span class="o">(</span><span class="nc">BEGIN_INSTANTTIME_OPT_KEY</span><span class="o">,</span> <span class="n">beginTime</span><span class="o">).</span>
     <span class="nf">load</span><span class="o">(</span><span class="n">basePath</span><span class="o">);</span>
-<span class="nv">incViewDF</span><span class="o">.</span><span class="py">registerTempTable</span><span class="o">(</span><span class="s">"hudi_incr_table"</span><span class="o">)</span>
-<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from  hudi_incr_table where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
+<span class="nv">tripsIncrementalDF</span><span class="o">.</span><span class="py">createOrReplaceTempView</span><span class="o">(</span><span class="s">"hudi_trips_incremental"</span><span class="o">)</span>
+<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from  hudi_trips_incremental where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
 </code></pre></div></div>
 
 <p class="notice--info">This will give all changes that happened after the beginTime commit with the filter of fare &gt; 20.0. The unique thing about this
@@ -479,23 +490,56 @@ specific commit time and beginTime to “000” (denoting earliest possible comm
 </span><span class="k">val</span> <span class="nv">endTime</span> <span class="k">=</span> <span class="nf">commits</span><span class="o">(</span><span class="nv">commits</span><span class="o">.</span><span class="py">length</span> <span class="o">-</span> <span class="mi">2</span><span class="o">)</span> <span class="c1">// commit time we are interested in
 </span>
 <span class="c1">//incrementally query data
-</span><span class="k">val</span> <span class="nv">incViewDF</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">read</span><span class="o">.</span><span class="py">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span>
-    <span class="nf">option</span><span class="o">(</span><span class="nc">VIEW_TYPE_OPT_KEY</span><span class="o">,</span> <span class="nc">VIEW_TYPE_INCREMENTAL_OPT_VAL</span><span class="o">).</span>
+</span><span class="k">val</span> <span class="nv">tripsPointInTimeDF</span> <span class="k">=</span> <span class="nv">spark</span><span class="o">.</span><span class="py">read</span><span class="o">.</span><span class="py">format</span><span class="o">(</span><span class="s">"org.apache.hudi"</span><span class="o">).</span>
+    <span class="nf">option</span><span class="o">(</span><span class="nc">QUERY_TYPE_OPT_KEY</span><span class="o">,</span> <span class="nc">QUERY_TYPE_INCREMENTAL_OPT_VAL</span><span class="o">).</span>
     <span class="nf">option</span><span class="o">(</span><span class="nc">BEGIN_INSTANTTIME_OPT_KEY</span><span class="o">,</span> <span class="n">beginTime</span><span class="o">).</span>
     <span class="nf">option</span><span class="o">(</span><span class="nc">END_INSTANTTIME_OPT_KEY</span><span class="o">,</span> <span class="n">endTime</span><span class="o">).</span>
     <span class="nf">load</span><span class="o">(</span><span class="n">basePath</span><span class="o">);</span>
-<span class="nv">incViewDF</span><span class="o">.</span><span class="py">registerTempTable</span><span class="o">(</span><span class="s">"hudi_incr_table"</span><span class="o">)</span>
-<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from  hudi_incr_table where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
+<span class="nv">tripsPointInTimeDF</span><span class="o">.</span><span class="py">createOrReplaceTempView</span><span class="o">(</span><span class="s">"hudi_trips_point_in_time"</span><span class="o">)</span>
+<span class="nv">spark</span><span class="o">.</span><span class="py">sql</span><span class="o">(</span><span class="s">"select `_hoodie_commit_time`, fare, begin_lon, begin_lat, ts from  hudi_trips_point_in_time where fare &gt; 20.0"</span><span class="o">).</span><span class="py">show</span><span class="o">()</span>
+</code></pre></div></div>
+
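<p class="notice--info">Conceptually, the begin/end options select a window over the commit timeline: changes strictly after <code class="highlighter-rouge">beginTime</code> (with <code class="highlighter-rouge">"000"</code> meaning "from the very start"), up to and including <code class="highlighter-rouge">endTime</code>. A sketch of that predicate over commit-time strings (plain Scala; this mirrors the semantics as described here, not Hudi internals, and the inclusive end bound is an assumption):</p>

```scala
// Sketch of the assumed point-in-time window semantics: strictly after
// beginTime, no later than endTime. Fixed-width commit timestamps sort
// correctly under plain lexicographic string comparison.
def inWindow(commitTime: String, beginTime: String, endTime: String): Boolean =
  commitTime > beginTime && commitTime <= endTime
```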
+<h2 id="deletes">Delete data</h2>
+<p>Delete records for the HoodieKeys passed in.</p>
+
+<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// fetch total records count
+spark.sql("select uuid, partitionpath from hudi_trips_snapshot").count()
+// fetch two records to be deleted
+val ds = spark.sql("select uuid, partitionpath from hudi_trips_snapshot").limit(2)
+
+// issue deletes
+val deletes = dataGen.generateDeletes(ds.collectAsList())
+val df = spark.read.json(spark.sparkContext.parallelize(deletes, 2))
+df.write.format("org.apache.hudi").
+  options(getQuickstartWriteConfigs).
+  option(OPERATION_OPT_KEY, "delete").
+  option(PRECOMBINE_FIELD_OPT_KEY, "ts").
+  option(RECORDKEY_FIELD_OPT_KEY, "uuid").
+  option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
+  option(TABLE_NAME, tableName).
+  mode(Append).
+  save(basePath)
+
+// run the same read query as above
+val tripsAfterDeleteDF = spark.
+    read.
+    format("org.apache.hudi").
+    load(basePath + "/*/*/*/*")
+tripsAfterDeleteDF.createOrReplaceTempView("hudi_trips_snapshot")
+// fetch should return (total - 2) records
+spark.sql("select uuid, partitionpath from hudi_trips_snapshot").count()
 </code></pre></div></div>
+<p>Note: Only <code class="highlighter-rouge">Append</code> mode is supported for the delete operation.</p>
 
 <h2 id="where-to-go-from-here">Where to go from here?</h2>
 
 <p>You can also do the quickstart by <a href="https://github.com/apache/incubator-hudi#building-apache-hudi-from-source">building hudi yourself</a>, 
-and using <code class="highlighter-rouge">--jars &lt;path to hudi_code&gt;/packaging/hudi-spark-bundle/target/hudi-spark-bundle-*.*.*-SNAPSHOT.jar</code> in the spark-shell command above
-instead of <code class="highlighter-rouge">--packages org.apache.hudi:hudi-spark-bundle:0.5.0-incubating</code></p>
+and using <code class="highlighter-rouge">--jars &lt;path to hudi_code&gt;/packaging/hudi-spark-bundle/target/hudi-spark-bundle_2.11-*.*.*-SNAPSHOT.jar</code> in the spark-shell command above
+instead of <code class="highlighter-rouge">--packages org.apache.hudi:hudi-spark-bundle_2.11:0.5.1-incubating</code>. Hudi also supports Scala 2.12. Refer to <a href="https://github.com/apache/incubator-hudi#build-with-scala-212">build with Scala 2.12</a>
+for more info.</p>
 
-<p>Also, we used Spark here to show case the capabilities of Hudi. However, Hudi can support multiple storage types/views and 
-Hudi datasets can be queried from query engines like Hive, Spark, Presto and much more. We have put together a 
+<p>Also, we used Spark here to showcase the capabilities of Hudi. However, Hudi supports multiple table types/query types and 
+Hudi tables can be queried from query engines like Hive, Spark, Presto and more. We have put together a 
 <a href="https://www.youtube.com/watch?v=VhNgUsxdrD0">demo video</a> that showcases all of this on a Docker-based setup with all 
 dependent systems running locally. We recommend you replicate the same setup and run the demo yourself, by following 
 steps <a href="/docs/docker_demo.html">here</a> to get a taste for it. Also, if you are looking for ways to migrate your existing data 
diff --git a/content/docs/s3_hoodie.html b/content/docs/s3_hoodie.html
index 3663655..61d6244 100644
--- a/content/docs/s3_hoodie.html
+++ b/content/docs/s3_hoodie.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
diff --git a/content/docs/structure.html b/content/docs/structure.html
index 63b7071..266f2a1 100644
--- a/content/docs/structure.html
+++ b/content/docs/structure.html
@@ -4,7 +4,7 @@
     <meta charset="utf-8">
 
 <!-- begin _includes/seo.html --><title>Structure - Apache Hudi</title>
-<meta name="description" content="Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical datasets over DFS (HDFS or cloud stores) and provides three logical views for query access.">
+<meta name="description" content="Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical tables over DFS (HDFS or cloud stores) and provides three types of queries.">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
@@ -13,7 +13,7 @@
 <meta property="og:url" content="https://hudi.apache.org/docs/structure.html">
 
 
-  <meta property="og:description" content="Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical datasets over DFS (HDFS or cloud stores) and provides three logical views for query access.">
+  <meta property="og:description" content="Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical tables over DFS (HDFS or cloud stores) and provides three types of queries.">
 
 
 
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -337,21 +337,21 @@
             }
           </style>
         
-        <p>Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical datasets over DFS (<a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">HDFS</a> or cloud stores) and provides three logical views for query access.</p>
+        <p>Hudi (pronounced “Hoodie”) ingests &amp; manages storage of large analytical tables over DFS (<a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html">HDFS</a> or cloud stores) and provides three types of queries.</p>
 
 <ul>
-  <li><strong>Read Optimized View</strong> - Provides excellent query performance on pure columnar storage, much like plain <a href="https://parquet.apache.org/">Parquet</a> tables.</li>
-  <li><strong>Incremental View</strong> - Provides a change stream out of the dataset to feed downstream jobs/ETLs.</li>
-  <li><strong>Near-Real time Table</strong> - Provides queries on real-time data, using a combination of columnar &amp; row based storage (e.g Parquet + <a href="http://avro.apache.org/docs/current/mr.html">Avro</a>)</li>
+  <li><strong>Read Optimized query</strong> - Provides excellent query performance on pure columnar storage, much like plain <a href="https://parquet.apache.org/">Parquet</a> tables.</li>
+  <li><strong>Incremental query</strong> - Provides a change stream out of the table to feed downstream jobs/ETLs.</li>
+  <li><strong>Snapshot query</strong> - Provides queries on real-time data, using a combination of columnar &amp; row-based storage (e.g. Parquet + <a href="http://avro.apache.org/docs/current/mr.html">Avro</a>)</li>
 </ul>
 
 <figure>
     <img class="docimage" src="/assets/images/hudi_intro_1.png" alt="hudi_intro_1.png" />
 </figure>
 
-<p>By carefully managing how data is laid out in storage &amp; how it’s exposed to queries, Hudi is able to power a rich data ecosystem where external sources can be ingested in near real-time and made available for interactive SQL Engines like <a href="https://prestodb.io">Presto</a> &amp; <a href="https://spark.apache.org/sql/">Spark</a>, while at the same time capable of being consumed incrementally from processing/ETL frameworks like <a href="https://hive.apache.org/">Hive</a> &amp;  [...]
+<p>By carefully managing how data is laid out in storage &amp; how it’s exposed to queries, Hudi is able to power a rich data ecosystem where external sources can be ingested in near real-time and made available for interactive SQL Engines like <a href="https://prestodb.io">Presto</a> &amp; <a href="https://spark.apache.org/sql/">Spark</a>, while at the same time capable of being consumed incrementally from processing/ETL frameworks like <a href="https://hive.apache.org/">Hive</a> &amp;  [...]
 
-<p>Hudi broadly consists of a self contained Spark library to build datasets and integrations with existing query engines for data access. See <a href="/docs/quick-start-guide">quickstart</a> for a demo.</p>
+<p>Hudi broadly consists of a self-contained Spark library to build tables and integrations with existing query engines for data access. See <a href="/docs/quick-start-guide">quickstart</a> for a demo.</p>
 
       </section>
 
diff --git a/content/docs/use_cases.html b/content/docs/use_cases.html
index 099a640..640c785 100644
--- a/content/docs/use_cases.html
+++ b/content/docs/use_cases.html
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -333,7 +333,7 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#near-real-time-ingestion">Near Real-Time Ingestion</a></li>
   <li><a href="#near-real-time-analytics">Near Real-time Analytics</a></li>
@@ -356,7 +356,7 @@ or <a href="http://hortonworks.com/blog/four-step-strategy-incremental-updates-h
 <p>For NoSQL datastores like <a href="http://cassandra.apache.org/">Cassandra</a> / <a href="http://www.project-voldemort.com/voldemort/">Voldemort</a> / <a href="https://hbase.apache.org/">HBase</a>, even moderately big installations store billions of rows.
 It goes without saying that <strong>full bulk loads are simply infeasible</strong> and more efficient approaches are needed if ingestion is to keep up with the typically high update volumes.</p>
 
-<p>Even for immutable data sources like <a href="kafka.apache.org">Kafka</a> , Hudi helps <strong>enforces a minimum file size on HDFS</strong>, which improves NameNode health by solving one of the <a href="https://blog.cloudera.com/blog/2009/02/the-small-files-problem/">age old problems in Hadoop land</a> in a holistic way. This is all the more important for event streams, since typically its higher volume (eg: click streams) and if not managed well, can cause serious damage to your Had [...]
+<p>Even for immutable data sources like <a href="https://kafka.apache.org">Kafka</a>, Hudi helps <strong>enforce a minimum file size on HDFS</strong>, which improves NameNode health by solving one of the <a href="https://blog.cloudera.com/blog/2009/02/the-small-files-problem/">age-old problems in Hadoop land</a> in a holistic way. This is all the more important for event streams, since they are typically higher volume (e.g. click streams) and, if not managed well, can cause serious damage to  [...]
 
 <p>Across all sources, Hudi adds the much-needed ability to atomically publish new data to consumers via the notion of commits, shielding them from partial ingestion failures.</p>
 
@@ -367,12 +367,12 @@ This is absolutely perfect for lower scale (<a href="https://blog.twitter.com/20
But these systems typically end up getting abused for less interactive queries as well, since data on Hadoop is intolerably stale. This leads to under-utilization &amp; wasteful hardware/license costs.</p>
 
 <p>On the other hand, interactive SQL solutions on Hadoop such as Presto &amp; SparkSQL excel in <strong>queries that finish within few seconds</strong>.
-By bringing <strong>data freshness to a few minutes</strong>, Hudi can provide a much efficient alternative, as well unlock real-time analytics on <strong>several magnitudes larger datasets</strong> stored in DFS.
+By bringing <strong>data freshness to a few minutes</strong>, Hudi can provide a much more efficient alternative, as well as unlock real-time analytics on <strong>tables several orders of magnitude larger</strong> stored in DFS.
 Also, Hudi has no external dependencies (like a dedicated HBase cluster, purely used for real-time analytics) and thus enables faster analytics on much fresher data, without increasing the operational overhead.</p>
 
 <h2 id="incremental-processing-pipelines">Incremental Processing Pipelines</h2>
 
-<p>One fundamental ability Hadoop provides is to build a chain of datasets derived from each other via DAGs expressed as workflows.
+<p>One fundamental ability Hadoop provides is to build a chain of tables derived from each other via DAGs expressed as workflows.
 Workflows often depend on new data being output by multiple upstream workflows and traditionally, availability of new data is indicated by a new DFS Folder/Hive Partition.
 Let’s take a concrete example to illustrate this. An upstream workflow <code class="highlighter-rouge">U</code> can create a Hive partition for every hour, with data for that hour (event_time) at the end of each hour (processing_time), providing effective freshness of 1 hour.
 Then, a downstream workflow <code class="highlighter-rouge">D</code>, kicks off immediately after <code class="highlighter-rouge">U</code> finishes, and does its own processing for the next hour, increasing the effective latency to 2 hours.</p>
@@ -382,8 +382,8 @@ Unfortunately, in today’s post-mobile &amp; pre-IoT world, <strong>late data f
 In such cases, the only remedy to guarantee correctness is to <a href="https://falcon.apache.org/FalconDocumentation.html#Handling_late_input_data">reprocess the last few hours</a> worth of data,
 over and over again each hour, which can significantly hurt the efficiency across the entire ecosystem. For e.g; imagine reprocessing TBs worth of data every hour across hundreds of workflows.</p>
 
-<p>Hudi comes to the rescue again, by providing a way to consume new data (including late data) from an upsteam Hudi dataset <code class="highlighter-rouge">HU</code> at a record granularity (not folders/partitions),
-apply the processing logic, and efficiently update/reconcile late data with a downstream Hudi dataset <code class="highlighter-rouge">HD</code>. Here, <code class="highlighter-rouge">HU</code> and <code class="highlighter-rouge">HD</code> can be continuously scheduled at a much more frequent schedule
+<p>Hudi comes to the rescue again, by providing a way to consume new data (including late data) from an upstream Hudi table <code class="highlighter-rouge">HU</code> at a record granularity (not folders/partitions),
+apply the processing logic, and efficiently update/reconcile late data with a downstream Hudi table <code class="highlighter-rouge">HD</code>. Here, <code class="highlighter-rouge">HU</code> and <code class="highlighter-rouge">HD</code> can be scheduled continuously at a much more frequent interval,
 like 15 mins, providing an end-to-end latency of 30 mins at <code class="highlighter-rouge">HD</code>.</p>
 
 <p>To achieve this, Hudi has embraced similar concepts from stream processing frameworks like <a href="https://spark.apache.org/docs/latest/streaming-programming-guide.html#join-operations">Spark Streaming</a> , Pub/Sub systems like <a href="http://kafka.apache.org/documentation/#theconsumer">Kafka</a>
@@ -397,7 +397,7 @@ For e.g, a Spark Pipeline can <a href="https://eng.uber.com/telematics/">determi
 A popular choice for this queue is Kafka, and this model often results in <strong>redundant storage of the same data on DFS (for offline analysis on computed results) and Kafka (for dispersal)</strong>.</p>
 
 <p>Once again Hudi can efficiently solve this problem, by having the Spark Pipeline upsert output from
-each run into a Hudi dataset, which can then be incrementally tailed (just like a Kafka topic) for new data &amp; written into the serving store.</p>
+each run into a Hudi table, which can then be incrementally tailed (just like a Kafka topic) for new data &amp; written into the serving store.</p>
 
       </section>
 
diff --git a/content/docs/writing_data.html b/content/docs/writing_data.html
index 1f90b8f..7207c23 100644
--- a/content/docs/writing_data.html
+++ b/content/docs/writing_data.html
@@ -3,17 +3,17 @@
   <head>
     <meta charset="utf-8">
 
-<!-- begin _includes/seo.html --><title>Writing Hudi Datasets - Apache Hudi</title>
-<meta name="description" content="In this section, we will cover ways to ingest new changes from external sources or even other Hudi datasets using the DeltaStreamer tool, as well as speeding up large Spark jobs via upserts using the Hudi datasource. Such datasets can then be queried using various query engines.">
+<!-- begin _includes/seo.html --><title>Writing Hudi Tables - Apache Hudi</title>
+<meta name="description" content="In this section, we will cover ways to ingest new changes from external sources or even other Hudi tables using the DeltaStreamer tool, as well as speeding up large Spark jobs via upserts using the Hudi datasource. Such tables can then be queried using various query engines.">
 
 <meta property="og:type" content="article">
 <meta property="og:locale" content="en_US">
 <meta property="og:site_name" content="">
-<meta property="og:title" content="Writing Hudi Datasets">
+<meta property="og:title" content="Writing Hudi Tables">
 <meta property="og:url" content="https://hudi.apache.org/docs/writing_data.html">
 
 
-  <meta property="og:description" content="In this section, we will cover ways to ingest new changes from external sources or even other Hudi datasets using the DeltaStreamer tool, as well as speeding up large Spark jobs via upserts using the Hudi datasource. Such datasets can then be queried using various query engines.">
+  <meta property="og:description" content="In this section, we will cover ways to ingest new changes from external sources or even other Hudi tables using the DeltaStreamer tool, as well as speeding up large Spark jobs via upserts using the Hudi datasource. Such tables can then be queried using various query engines.">
 
 
 
@@ -269,7 +269,7 @@
             
 
             
-              <li><a href="/docs/admin_guide.html" class="">Administering</a></li>
+              <li><a href="/docs/deployment.html" class="">Deployment</a></li>
             
 
           
@@ -324,7 +324,7 @@
     <div class="page__inner-wrap">
       
         <header>
-          <h1 id="page-title" class="page__title" itemprop="headline">Writing Hudi Datasets
+          <h1 id="page-title" class="page__title" itemprop="headline">Writing Hudi Tables
 </h1>
         </header>
       
@@ -333,35 +333,35 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
   <li><a href="#write-operations">Write Operations</a></li>
   <li><a href="#deltastreamer">DeltaStreamer</a></li>
   <li><a href="#datasource-writer">Datasource Writer</a></li>
   <li><a href="#syncing-to-hive">Syncing to Hive</a></li>
   <li><a href="#deletes">Deletes</a></li>
-  <li><a href="#storage-management">Storage Management</a></li>
+  <li><a href="#optimized-dfs-access">Optimized DFS Access</a></li>
 </ul>
           </nav>
         </aside>
         
-        <p>In this section, we will cover ways to ingest new changes from external sources or even other Hudi datasets using the <a href="#deltastreamer">DeltaStreamer</a> tool, as well as 
-speeding up large Spark jobs via upserts using the <a href="#datasource-writer">Hudi datasource</a>. Such datasets can then be <a href="querying_data.html">queried</a> using various query engines.</p>
+        <p>In this section, we will cover ways to ingest new changes from external sources or even other Hudi tables using the <a href="#deltastreamer">DeltaStreamer</a> tool, as well as 
+speeding up large Spark jobs via upserts using the <a href="#datasource-writer">Hudi datasource</a>. Such tables can then be <a href="/docs/querying_data.html">queried</a> using various query engines.</p>
 
 <h2 id="write-operations">Write Operations</h2>
 
 <p>Before that, it may be helpful to understand the 3 different write operations provided by Hudi datasource or the delta streamer tool and how best to leverage them. These operations
-can be chosen/changed across each commit/deltacommit issued against the dataset.</p>
+can be chosen/changed across each commit/deltacommit issued against the table.</p>
 
 <ul>
   <li><strong>UPSERT</strong> : This is the default operation where the input records are first tagged as inserts or updates by looking up the index and 
  the records are ultimately written after heuristics are run to determine how best to pack them on storage to optimize for things like file sizing. 
  This operation is recommended for use-cases like database change capture where the input almost certainly contains updates.</li>
   <li><strong>INSERT</strong> : This operation is very similar to upsert in terms of heuristics/file sizing but completely skips the index lookup step. Thus, it can be a lot faster than upserts 
- for use-cases like log de-duplication (in conjunction with options to filter duplicates mentioned below). This is also suitable for use-cases where the dataset can tolerate duplicates, but just 
+ for use-cases like log de-duplication (in conjunction with options to filter duplicates mentioned below). This is also suitable for use-cases where the table can tolerate duplicates, but just 
  need the transactional writes/incremental pull/storage management capabilities of Hudi.</li>
  <li><strong>BULK_INSERT</strong> : Both upsert and insert operations keep input records in memory to speed up storage heuristics computations (among other things) and thus can be cumbersome for 
- initial loading/bootstrapping a Hudi dataset at first. Bulk insert provides the same semantics as insert, while implementing a sort-based data writing algorithm, which can scale very well for several hundred TBs 
+ initial loading/bootstrapping a Hudi table at first. Bulk insert provides the same semantics as insert, while implementing a sort-based data writing algorithm, which can scale very well for several hundred TBs 
  of initial load. However, this just does a best-effort job at sizing files vs guaranteeing file sizes like inserts/upserts do.</li>
 </ul>
 
@@ -381,23 +381,56 @@ can be chosen/changed across each commit/deltacommit issued against the dataset.
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">[</span><span class="n">hoodie</span><span class="o">]</span><span class="err">$</span> <span class="n">spark</span><span class="o">-</span><span class="n">submit</span> <span class="o">--</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</sp [...]
 <span class="nl">Usage:</span> <span class="o">&lt;</span><span class="n">main</span> <span class="kd">class</span><span class="err">&gt;</span> <span class="err">[</span><span class="nc">options</span><span class="o">]</span>
-  <span class="nl">Options:</span>
+<span class="nl">Options:</span>
+    <span class="o">--</span><span class="n">checkpoint</span>
+      <span class="nc">Resume</span> <span class="nc">Delta</span> <span class="nc">Streamer</span> <span class="n">from</span> <span class="k">this</span> <span class="n">checkpoint</span><span class="o">.</span>
     <span class="o">--</span><span class="n">commit</span><span class="o">-</span><span class="n">on</span><span class="o">-</span><span class="n">errors</span>
-        <span class="nc">Commit</span> <span class="n">even</span> <span class="n">when</span> <span class="n">some</span> <span class="n">records</span> <span class="n">failed</span> <span class="n">to</span> <span class="n">be</span> <span class="n">written</span>
+      <span class="nc">Commit</span> <span class="n">even</span> <span class="n">when</span> <span class="n">some</span> <span class="n">records</span> <span class="n">failed</span> <span class="n">to</span> <span class="n">be</span> <span class="n">written</span>
+      <span class="nl">Default:</span> <span class="kc">false</span>
+    <span class="o">--</span><span class="n">compact</span><span class="o">-</span><span class="n">scheduling</span><span class="o">-</span><span class="n">minshare</span>
+      <span class="nc">Minshare</span> <span class="k">for</span> <span class="n">compaction</span> <span class="n">as</span> <span class="n">defined</span> <span class="n">in</span>
+      <span class="nl">https:</span><span class="c1">//spark.apache.org/docs/latest/job-scheduling.html</span>
+      <span class="nl">Default:</span> <span class="mi">0</span>
+    <span class="o">--</span><span class="n">compact</span><span class="o">-</span><span class="n">scheduling</span><span class="o">-</span><span class="n">weight</span>
+      <span class="nc">Scheduling</span> <span class="n">weight</span> <span class="k">for</span> <span class="n">compaction</span> <span class="n">as</span> <span class="n">defined</span> <span class="n">in</span>
+      <span class="nl">https:</span><span class="c1">//spark.apache.org/docs/latest/job-scheduling.html</span>
+      <span class="nl">Default:</span> <span class="mi">1</span>
+    <span class="o">--</span><span class="n">continuous</span>
+      <span class="nc">Delta</span> <span class="nc">Streamer</span> <span class="n">runs</span> <span class="n">in</span> <span class="n">continuous</span> <span class="n">mode</span> <span class="n">running</span> <span class="n">source</span><span class="o">-</span><span class="n">fetch</span> <span class="o">-&gt;</span> <span class="nc">Transform</span>
+      <span class="o">-&gt;</span> <span class="nc">Hudi</span> <span class="nc">Write</span> <span class="n">in</span> <span class="n">loop</span>
+      <span class="nl">Default:</span> <span class="kc">false</span>
+    <span class="o">--</span><span class="n">delta</span><span class="o">-</span><span class="n">sync</span><span class="o">-</span><span class="n">scheduling</span><span class="o">-</span><span class="n">minshare</span>
+      <span class="nc">Minshare</span> <span class="k">for</span> <span class="n">delta</span> <span class="n">sync</span> <span class="n">as</span> <span class="n">defined</span> <span class="n">in</span>
+      <span class="nl">https:</span><span class="c1">//spark.apache.org/docs/latest/job-scheduling.html</span>
+      <span class="nl">Default:</span> <span class="mi">0</span>
+    <span class="o">--</span><span class="n">delta</span><span class="o">-</span><span class="n">sync</span><span class="o">-</span><span class="n">scheduling</span><span class="o">-</span><span class="n">weight</span>
+      <span class="nc">Scheduling</span> <span class="n">weight</span> <span class="k">for</span> <span class="n">delta</span> <span class="n">sync</span> <span class="n">as</span> <span class="n">defined</span> <span class="n">in</span>
+      <span class="nl">https:</span><span class="c1">//spark.apache.org/docs/latest/job-scheduling.html</span>
+      <span class="nl">Default:</span> <span class="mi">1</span>
+    <span class="o">--</span><span class="n">disable</span><span class="o">-</span><span class="n">compaction</span>
+      <span class="nc">Compaction</span> <span class="n">is</span> <span class="n">enabled</span> <span class="k">for</span> <span class="nc">MoR</span> <span class="n">table</span> <span class="n">by</span> <span class="k">default</span><span class="o">.</span> <span class="nc">This</span> <span class="n">flag</span> <span class="n">disables</span> <span class="n">it</span>
       <span class="nl">Default:</span> <span class="kc">false</span>
     <span class="o">--</span><span class="n">enable</span><span class="o">-</span><span class="n">hive</span><span class="o">-</span><span class="n">sync</span>
-          <span class="nc">Enable</span> <span class="n">syncing</span> <span class="n">to</span> <span class="n">hive</span>
-       <span class="nl">Default:</span> <span class="kc">false</span>
+      <span class="nc">Enable</span> <span class="n">syncing</span> <span class="n">to</span> <span class="n">hive</span>
+      <span class="nl">Default:</span> <span class="kc">false</span>
     <span class="o">--</span><span class="n">filter</span><span class="o">-</span><span class="n">dupes</span>
-          <span class="nc">Should</span> <span class="n">duplicate</span> <span class="n">records</span> <span class="n">from</span> <span class="n">source</span> <span class="n">be</span> <span class="n">dropped</span><span class="o">/</span><span class="n">filtered</span> <span class="n">outbefore</span> 
-          <span class="n">insert</span><span class="o">/</span><span class="n">bulk</span><span class="o">-</span><span class="n">insert</span> 
+      <span class="nc">Should</span> <span class="n">duplicate</span> <span class="n">records</span> <span class="n">from</span> <span class="n">source</span> <span class="n">be</span> <span class="n">dropped</span><span class="o">/</span><span class="n">filtered</span> <span class="n">out</span> <span class="n">before</span>
+      <span class="n">insert</span><span class="o">/</span><span class="n">bulk</span><span class="o">-</span><span class="n">insert</span>
       <span class="nl">Default:</span> <span class="kc">false</span>
     <span class="o">--</span><span class="n">help</span><span class="o">,</span> <span class="o">-</span><span class="n">h</span>
-    <span class="o">--</span><span class="n">hudi</span><span class="o">-</span><span class="n">conf</span>
-          <span class="nc">Any</span> <span class="n">configuration</span> <span class="n">that</span> <span class="n">can</span> <span class="n">be</span> <span class="n">set</span> <span class="n">in</span> <span class="n">the</span> <span class="n">properties</span> <span class="nf">file</span> <span class="o">(</span><span class="n">using</span> <span class="n">the</span> <span class="no">CLI</span> 
-          <span class="n">parameter</span> <span class="s">"--propsFilePath"</span><span class="o">)</span> <span class="n">can</span> <span class="n">also</span> <span class="n">be</span> <span class="n">passed</span> <span class="n">command</span> <span class="n">line</span> <span class="n">using</span> <span class="k">this</span> 
-          <span class="n">parameter</span> 
-          <span class="nl">Default:</span> <span class="o">[]</span>
+
+    <span class="o">--</span><span class="n">hoodie</span><span class="o">-</span><span class="n">conf</span>
+      <span class="nc">Any</span> <span class="n">configuration</span> <span class="n">that</span> <span class="n">can</span> <span class="n">be</span> <span class="n">set</span> <span class="n">in</span> <span class="n">the</span> <span class="n">properties</span> <span class="nf">file</span> <span class="o">(</span><span class="n">using</span> <span class="n">the</span> <span class="no">CLI</span>
+      <span class="n">parameter</span> <span class="s">"--propsFilePath"</span><span class="o">)</span> <span class="n">can</span> <span class="n">also</span> <span class="n">be</span> <span class="n">passed</span> <span class="n">command</span> <span class="n">line</span> <span class="n">using</span> <span class="k">this</span>
+      <span class="n">parameter</span>
+      <span class="nl">Default:</span> <span class="o">[]</span>
+    <span class="o">--</span><span class="n">max</span><span class="o">-</span><span class="n">pending</span><span class="o">-</span><span class="n">compactions</span>
+      <span class="nc">Maximum</span> <span class="n">number</span> <span class="n">of</span> <span class="n">outstanding</span> <span class="n">inflight</span><span class="o">/</span><span class="n">requested</span> <span class="n">compactions</span><span class="o">.</span> <span class="nc">Delta</span> <span class="nc">Sync</span>
+      <span class="n">will</span> <span class="n">not</span> <span class="n">happen</span> <span class="n">unless</span> <span class="n">outstanding</span> <span class="n">compactions</span> <span class="n">is</span> <span class="n">less</span> <span class="n">than</span> <span class="k">this</span> <span class="n">number</span>
+      <span class="nl">Default:</span> <span class="mi">5</span>
+    <span class="o">--</span><span class="n">min</span><span class="o">-</span><span class="n">sync</span><span class="o">-</span><span class="n">interval</span><span class="o">-</span><span class="n">seconds</span>
+      <span class="n">the</span> <span class="n">min</span> <span class="n">sync</span> <span class="n">interval</span> <span class="n">of</span> <span class="n">each</span> <span class="n">sync</span> <span class="n">in</span> <span class="n">continuous</span> <span class="n">mode</span>
+      <span class="nl">Default:</span> <span class="mi">0</span>
     <span class="o">--</span><span class="n">op</span>
       <span class="nc">Takes</span> <span class="n">one</span> <span class="n">of</span> <span class="n">these</span> <span class="n">values</span> <span class="o">:</span> <span class="no">UPSERT</span> <span class="o">(</span><span class="k">default</span><span class="o">),</span> <span class="no">INSERT</span> <span class="o">(</span><span class="n">use</span> <span class="n">when</span> <span class="n">input</span> <span class="n">is</span>
       <span class="n">purely</span> <span class="k">new</span> <span class="n">data</span><span class="o">/</span><span class="n">inserts</span> <span class="n">to</span> <span class="n">gain</span> <span class="n">speed</span><span class="o">)</span>
@@ -407,19 +440,22 @@ can be chosen/changed across each commit/deltacommit issued against the dataset.
       <span class="nc">subclass</span> <span class="n">of</span> <span class="nc">HoodieRecordPayload</span><span class="o">,</span> <span class="n">that</span> <span class="n">works</span> <span class="n">off</span> <span class="n">a</span> <span class="nc">GenericRecord</span><span class="o">.</span>
       <span class="nc">Implement</span> <span class="n">your</span> <span class="n">own</span><span class="o">,</span> <span class="k">if</span> <span class="n">you</span> <span class="n">want</span> <span class="n">to</span> <span class="k">do</span> <span class="n">something</span> <span class="n">other</span> <span class="n">than</span> <span class="n">overwriting</span>
       <span class="n">existing</span> <span class="n">value</span>
-      <span class="nl">Default:</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">OverwriteWithLatestAvroPayload</span>
+      <span class="nl">Default:</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">common</span><span class="o">.</span><span class="na">model</span><span class="o">.</span><span class="na">OverwriteWithLatestAvroPayload</span>
     <span class="o">--</span><span class="n">props</span>
       <span class="n">path</span> <span class="n">to</span> <span class="n">properties</span> <span class="n">file</span> <span class="n">on</span> <span class="n">localfs</span> <span class="n">or</span> <span class="n">dfs</span><span class="o">,</span> <span class="n">with</span> <span class="n">configurations</span> <span class="k">for</span>
-      <span class="nc">Hudi</span> <span class="n">client</span><span class="o">,</span> <span class="n">schema</span> <span class="n">provider</span><span class="o">,</span> <span class="n">key</span> <span class="n">generator</span> <span class="n">and</span> <span class="n">data</span> <span class="n">source</span><span class="o">.</span> <span class="nc">For</span>
-      <span class="nc">Hudi</span> <span class="n">client</span> <span class="n">props</span><span class="o">,</span> <span class="n">sane</span> <span class="n">defaults</span> <span class="n">are</span> <span class="n">used</span><span class="o">,</span> <span class="n">but</span> <span class="n">recommend</span> <span class="n">use</span> <span class="n">to</span>
+      <span class="n">hoodie</span> <span class="n">client</span><span class="o">,</span> <span class="n">schema</span> <span class="n">provider</span><span class="o">,</span> <span class="n">key</span> <span class="n">generator</span> <span class="n">and</span> <span class="n">data</span> <span class="n">source</span><span class="o">.</span> <span class="nc">For</span>
+      <span class="n">hoodie</span> <span class="n">client</span> <span class="n">props</span><span class="o">,</span> <span class="n">sane</span> <span class="n">defaults</span> <span class="n">are</span> <span class="n">used</span><span class="o">,</span> <span class="n">but</span> <span class="n">recommend</span> <span class="n">use</span> <span class="n">to</span>
       <span class="n">provide</span> <span class="n">basic</span> <span class="n">things</span> <span class="n">like</span> <span class="n">metrics</span> <span class="n">endpoints</span><span class="o">,</span> <span class="n">hive</span> <span class="n">configs</span> <span class="n">etc</span><span class="o">.</span> <span class="nc">For</span>
      <span class="n">sources</span><span class="o">,</span> <span class="n">refer</span> <span class="n">to</span> <span class="n">individual</span> <span class="n">classes</span><span class="o">,</span> <span class="k">for</span> <span class="n">supported</span> <span class="n">properties</span><span class="o">.</span>
       <span class="nl">Default:</span> <span class="nl">file:</span><span class="c1">///Users/vinoth/bin/hoodie/src/test/resources/delta-streamer-config/dfs-source.properties</span>
     <span class="o">--</span><span class="n">schemaprovider</span><span class="o">-</span><span class="kd">class</span>
       <span class="nc">subclass</span> <span class="n">of</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">schema</span><span class="o">.</span><span class="na">SchemaProvider</span> <span class="n">to</span> <span class="n">attach</span>
       <span class="n">schemas</span> <span class="n">to</span> <span class="n">input</span> <span class="o">&amp;</span> <span class="n">target</span> <span class="n">table</span> <span class="n">data</span><span class="o">,</span> <span class="n">built</span> <span class="n">in</span> <span class="nl">options:</span>
-      <span class="nc">FilebasedSchemaProvider</span>
-      <span class="nl">Default:</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">schema</span><span class="o">.</span><span class="na">FilebasedSchemaProvider</span>
+      <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">schema</span><span class="o">.</span><span class="na">FilebasedSchemaProvider</span><span class="o">.</span><span class="na">Source</span> <span class="o">(</span><span class="nc">See</span>
+      <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">sources</span><span class="o">.</span><span class="na">Source</span><span class="o">)</span> <span class="n">implementation</span> <span class="n">can</span> <span class="n">implement</span>
+      <span class="n">their</span> <span class="n">own</span> <span class="nc">SchemaProvider</span><span class="o">.</span> <span class="nc">For</span> <span class="nc">Sources</span> <span class="n">that</span> <span class="k">return</span> <span class="nc">Dataset</span><span class="o">&lt;</span><span class="nc">Row</span><span class="o">&gt;,</span> <span class="n">the</span>
+      <span class="n">schema</span> <span class="n">is</span> <span class="n">obtained</span> <span class="n">implicitly</span><span class="o">.</span> <span class="nc">However</span><span class="o">,</span> <span class="k">this</span> <span class="no">CLI</span> <span class="n">option</span> <span class="n">allows</span>
+      <span class="n">overriding</span> <span class="n">the</span> <span class="n">schemaprovider</span> <span class="n">returned</span> <span class="n">by</span> <span class="nc">Source</span><span class="o">.</span>
     <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="kd">class</span>
       <span class="nc">Subclass</span> <span class="n">of</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">sources</span> <span class="n">to</span> <span class="n">read</span> <span class="n">data</span><span class="o">.</span> <span class="nc">Built</span><span class="o">-</span><span class="n">in</span>
       <span class="nl">options:</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">sources</span><span class="o">.{</span><span class="nc">JsonDFSSource</span> <span class="o">(</span><span class="k">default</span><span class="o">),</span>
@@ -427,7 +463,7 @@ can be chosen/changed across each commit/deltacommit issued against the dataset.
       <span class="nl">Default:</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">sources</span><span class="o">.</span><span class="na">JsonDFSSource</span>
     <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="n">limit</span>
       <span class="nc">Maximum</span> <span class="n">amount</span> <span class="n">of</span> <span class="n">data</span> <span class="n">to</span> <span class="n">read</span> <span class="n">from</span> <span class="n">source</span><span class="o">.</span> <span class="nl">Default:</span> <span class="nc">No</span> <span class="n">limit</span> <span class="nc">For</span> <span class="n">e</span><span class="o">.</span><span class="na">g</span><span class="o">:</span>
-      <span class="nc">DFSSource</span> <span class="o">=&gt;</span> <span class="n">max</span> <span class="n">bytes</span> <span class="n">to</span> <span class="n">read</span><span class="o">,</span> <span class="nc">KafkaSource</span> <span class="o">=&gt;</span> <span class="n">max</span> <span class="n">events</span> <span class="n">to</span> <span class="n">read</span>
+      <span class="no">DFS</span><span class="o">-</span><span class="nc">Source</span> <span class="o">=&gt;</span> <span class="n">max</span> <span class="n">bytes</span> <span class="n">to</span> <span class="n">read</span><span class="o">,</span> <span class="nc">Kafka</span><span class="o">-</span><span class="nc">Source</span> <span class="o">=&gt;</span> <span class="n">max</span> <span class="n">events</span> <span class="n">to</span> <span class="n">read</span>
       <span class="nl">Default:</span> <span class="mi">9223372036854775807</span>
     <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="n">ordering</span><span class="o">-</span><span class="n">field</span>
       <span class="nc">Field</span> <span class="n">within</span> <span class="n">source</span> <span class="n">record</span> <span class="n">to</span> <span class="n">decide</span> <span class="n">how</span> <span class="n">to</span> <span class="k">break</span> <span class="n">ties</span> <span class="n">between</span> <span class="n">records</span>
@@ -437,17 +473,19 @@ can be chosen/changed across each commit/deltacommit issued against the dataset.
     <span class="o">--</span><span class="n">spark</span><span class="o">-</span><span class="n">master</span>
       <span class="n">spark</span> <span class="n">master</span> <span class="n">to</span> <span class="n">use</span><span class="o">.</span>
       <span class="nl">Default:</span> <span class="n">local</span><span class="o">[</span><span class="mi">2</span><span class="o">]</span>
+  <span class="o">*</span> <span class="o">--</span><span class="n">table</span><span class="o">-</span><span class="n">type</span>
+      <span class="nc">Type</span> <span class="n">of</span> <span class="n">table</span><span class="o">.</span> <span class="nf">COPY_ON_WRITE</span> <span class="o">(</span><span class="n">or</span><span class="o">)</span> <span class="no">MERGE_ON_READ</span>
   <span class="o">*</span> <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">base</span><span class="o">-</span><span class="n">path</span>
-      <span class="n">base</span> <span class="n">path</span> <span class="k">for</span> <span class="n">the</span> <span class="n">target</span> <span class="nc">Hudi</span> <span class="n">dataset</span><span class="o">.</span> <span class="o">(</span><span class="nc">Will</span> <span class="n">be</span> <span class="n">created</span> <span class="k">if</span> <span class="n">did</span> <span class="n">not</span>
-      <span class="n">exist</span> <span class="n">first</span> <span class="n">time</span> <span class="n">around</span><span class="o">.</span> <span class="nc">If</span> <span class="n">exists</span><span class="o">,</span> <span class="n">expected</span> <span class="n">to</span> <span class="n">be</span> <span class="n">a</span> <span class="nc">Hudi</span> <span class="n">dataset</span><span class="o">)</span>
+      <span class="n">base</span> <span class="n">path</span> <span class="k">for</span> <span class="n">the</span> <span class="n">target</span> <span class="n">hoodie</span> <span class="n">table</span><span class="o">.</span> <span class="o">(</span><span class="nc">Will</span> <span class="n">be</span> <span class="n">created</span> <span class="k">if</span> <span class="n">did</span> <span class="n">not</span> <span class="n">exist</span>
+      <span class="n">first</span> <span class="n">time</span> <span class="n">around</span><span class="o">.</span> <span class="nc">If</span> <span class="n">exists</span><span class="o">,</span> <span class="n">expected</span> <span class="n">to</span> <span class="n">be</span> <span class="n">a</span> <span class="n">hoodie</span> <span class="n">table</span><span class="o">)</span>
   <span class="o">*</span> <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">table</span>
       <span class="n">name</span> <span class="n">of</span> <span class="n">the</span> <span class="n">target</span> <span class="n">table</span> <span class="n">in</span> <span class="nc">Hive</span>
     <span class="o">--</span><span class="n">transformer</span><span class="o">-</span><span class="kd">class</span>
-      <span class="nc">subclass</span> <span class="n">of</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">transform</span><span class="o">.</span><span class="na">Transformer</span><span class="o">.</span> <span class="no">UDF</span> <span class="n">to</span>
-      <span class="n">transform</span> <span class="n">raw</span> <span class="n">source</span> <span class="n">dataset</span> <span class="n">to</span> <span class="n">a</span> <span class="n">target</span> <span class="nf">dataset</span> <span class="o">(</span><span class="n">conforming</span> <span class="n">to</span> <span class="n">target</span>
-      <span class="n">schema</span><span class="o">)</span> <span class="n">before</span> <span class="n">writing</span><span class="o">.</span> <span class="nc">Default</span> <span class="o">:</span> <span class="nc">Not</span> <span class="n">set</span><span class="o">.</span> <span class="nl">E:</span><span class="n">g</span> <span class="o">-</span>
+      <span class="nc">subclass</span> <span class="n">of</span> <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">transform</span><span class="o">.</span><span class="na">Transformer</span><span class="o">.</span> <span class="nc">Allows</span>
+      <span class="n">transforming</span> <span class="n">raw</span> <span class="n">source</span> <span class="nc">Dataset</span> <span class="n">to</span> <span class="n">a</span> <span class="n">target</span> <span class="nf">Dataset</span> <span class="o">(</span><span class="n">conforming</span> <span class="n">to</span>
+      <span class="n">target</span> <span class="n">schema</span><span class="o">)</span> <span class="n">before</span> <span class="n">writing</span><span class="o">.</span> <span class="nc">Default</span> <span class="o">:</span> <span class="nc">Not</span> <span class="n">set</span><span class="o">.</span> <span class="nl">E:</span><span class="n">g</span> <span class="o">-</span>
       <span class="n">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">transform</span><span class="o">.</span><span class="na">SqlQueryBasedTransformer</span> <span class="o">(</span><span class="n">which</span>
-      <span class="n">allows</span> <span class="n">a</span> <span class="no">SQL</span> <span class="n">query</span> <span class="n">template</span> <span class="n">to</span> <span class="n">be</span> <span class="n">passed</span> <span class="n">as</span> <span class="n">a</span> <span class="n">transformation</span> <span class="n">function</span><span class="o">)</span>
+      <span class="n">allows</span> <span class="n">a</span> <span class="no">SQL</span> <span class="n">query</span> <span class="n">template</span> <span class="n">to</span> <span class="n">be</span> <span class="n">passed</span> <span class="n">as</span> <span class="n">a</span> <span class="n">transformation</span> <span class="n">function</span><span class="o">)</span>
 </code></pre></div></div>
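The Transformer contract described above takes the raw source Dataset and returns a Dataset conforming to the target schema before the write. The following is a minimal, hedged sketch of that idea in plain Python; the record shapes, `TARGET_SCHEMA`, and `sql_like_transform` are illustrative stand-ins only, since the real contract is a subclass of `org.apache.hudi.utilities.transform.Transformer` operating on Spark Datasets:

```python
# Hedged sketch: what a Transformer-style hook does conceptually. The real
# contract is a subclass of org.apache.hudi.utilities.transform.Transformer
# operating on Spark Datasets; TARGET_SCHEMA and sql_like_transform here
# are illustrative stand-ins only.

TARGET_SCHEMA = ["_row_key", "partition", "timestamp", "rider"]

def sql_like_transform(raw_records):
    """Project each raw record onto the target schema, dropping extra
    columns and null-filling missing ones (roughly what a SQL query
    template passed to SqlQueryBasedTransformer could express)."""
    return [{col: rec.get(col) for col in TARGET_SCHEMA} for rec in raw_records]

raw = [{"_row_key": "k1", "partition": "2020/01/31", "timestamp": 1, "debug_blob": "x"}]
transformed = sql_like_transform(raw)  # conforms to TARGET_SCHEMA, drops debug_blob
```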
 
<p>The tool takes a hierarchically composed property file and has pluggable interfaces for extracting data, key generation and providing schema. Sample configs for ingesting from Kafka and DFS are
@@ -465,15 +503,16 @@ provided under <code class="highlighter-rouge">hudi-utilities/src/test/resources
   <span class="o">--</span><span class="n">schemaprovider</span><span class="o">-</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">schema</span><span class="o">.</span><span class="na">SchemaRegistryProvider</span> <span class="err">\</span>
   <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="kd">class</span> <span class="nc">org</span><span class="o">.</span><span class="na">apache</span><span class="o">.</span><span class="na">hudi</span><span class="o">.</span><span class="na">utilities</span><span class="o">.</span><span class="na">sources</span><span class="o">.</span><span class="na">AvroKafkaSource</span> <span class="err">\</span>
   <span class="o">--</span><span class="n">source</span><span class="o">-</span><span class="n">ordering</span><span class="o">-</span><span class="n">field</span> <span class="n">impresssiontime</span> <span class="err">\</span>
-  <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">base</span><span class="o">-</span><span class="n">path</span> <span class="nl">file:</span><span class="c1">///tmp/hudi-deltastreamer-op --target-table uber.impressions \</span>
+  <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">base</span><span class="o">-</span><span class="n">path</span> <span class="nl">file:</span><span class="err">\</span><span class="o">/</span><span class="err">\</span><span class="o">/</span><span class="err">\</span><span class="o">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">hudi</span><span class="o">-</span><span class="n">deltastreamer</span><span class="o">- [...]
+  <span class="o">--</span><span class="n">target</span><span class="o">-</span><span class="n">table</span> <span class="n">uber</span><span class="o">.</span><span class="na">impressions</span> <span class="err">\</span>
   <span class="o">--</span><span class="n">op</span> <span class="no">BULK_INSERT</span>
 </code></pre></div></div>
 
-<p>In some cases, you may want to migrate your existing dataset into Hudi beforehand. Please refer to <a href="/docs/migration_guide.html">migration guide</a>.</p>
+<p>In some cases, you may want to migrate your existing table into Hudi beforehand. Please refer to <a href="/docs/migration_guide.html">migration guide</a>.</p>
 
 <h2 id="datasource-writer">Datasource Writer</h2>
 
-<p>The <code class="highlighter-rouge">hudi-spark</code> module offers the DataSource API to write (and also read) any data frame into a Hudi dataset.
+<p>The <code class="highlighter-rouge">hudi-spark</code> module offers the DataSource API to write (and also read) any data frame into a Hudi table.
 Following is how we can upsert a dataframe, while specifying the field names that need to be used
 for <code class="highlighter-rouge">recordKey =&gt; _row_key</code>, <code class="highlighter-rouge">partitionPath =&gt; partition</code> and <code class="highlighter-rouge">precombineKey =&gt; timestamp</code></p>
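The upsert semantics above can be sketched in plain Python (this is an illustration of the key and precombine behavior, not the Spark DataSource API): `recordKey` identifies a record within a `partitionPath`, and the precombine field decides which version wins when the same key is written more than once.

```python
# Hedged sketch of upsert semantics: (partitionPath, recordKey) identifies a
# record, and the precombine field (here `timestamp`) picks the winner when
# the same key arrives again. Plain Python, not the Hudi DataSource writer.

def upsert(table, batch, record_key="_row_key", partition_path="partition",
           precombine="timestamp"):
    index = {(r[partition_path], r[record_key]): r for r in table}
    for rec in batch:
        key = (rec[partition_path], rec[record_key])
        existing = index.get(key)
        if existing is None or rec[precombine] >= existing[precombine]:
            index[key] = rec  # first or newer version wins
    return list(index.values())

table = [{"_row_key": "k1", "partition": "p1", "timestamp": 1, "fare": 10}]
batch = [{"_row_key": "k1", "partition": "p1", "timestamp": 2, "fare": 12},
         {"_row_key": "k2", "partition": "p1", "timestamp": 1, "fare": 7}]
table = upsert(table, batch)  # k1 updated to the newer version, k2 inserted
```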
 
@@ -490,41 +529,31 @@ for <code class="highlighter-rouge">recordKey =&gt; _row_key</code>, <code class
 
 <h2 id="syncing-to-hive">Syncing to Hive</h2>
 
-<p>Both tools above support syncing of the dataset’s latest schema to Hive metastore, such that queries can pick up new columns and partitions.
+<p>Both tools above support syncing of the table’s latest schema to Hive metastore, such that queries can pick up new columns and partitions.
In case it's preferable to run this from the command line or in an independent JVM, Hudi provides a <code class="highlighter-rouge">HiveSyncTool</code>, which can be invoked as below, 
-once you have built the hudi-hive module.</p>
+once you have built the hudi-hive module. Following is how we sync the table written above via the Datasource Writer to the Hive metastore.</p>
+
+<div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">cd</span> <span class="n">hudi</span><span class="o">-</span><span class="n">hive</span>
+<span class="o">./</span><span class="n">run_sync_tool</span><span class="o">.</span><span class="na">sh</span>  <span class="o">--</span><span class="n">jdbc</span><span class="o">-</span><span class="n">url</span> <span class="nl">jdbc:hive2:</span><span class="err">\</span><span class="o">/</span><span class="err">\</span><span class="o">/</span><span class="nl">hiveserver:</span><span class="mi">10000</span> <span class="o">--</span><span class="n">user</span> <span class="n">hive</s [...]
+</code></pre></div></div>
+
+<p>Starting with Hudi 0.5.1, the read-optimized view of merge-on-read tables is suffixed ‘_ro’ by default. For backwards compatibility with older Hudi versions, 
+an optional HiveSyncConfig, <code class="highlighter-rouge">--skip-ro-suffix</code>, has been provided to turn off ‘_ro’ suffixing if desired. Explore other hive sync options using the following command:</p>
 
 <div class="language-java highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="n">cd</span> <span class="n">hudi</span><span class="o">-</span><span class="n">hive</span>
 <span class="o">./</span><span class="n">run_sync_tool</span><span class="o">.</span><span class="na">sh</span>
  <span class="o">[</span><span class="n">hudi</span><span class="o">-</span><span class="n">hive</span><span class="o">]</span><span class="err">$</span> <span class="o">./</span><span class="n">run_sync_tool</span><span class="o">.</span><span class="na">sh</span> <span class="o">--</span><span class="n">help</span>
-<span class="nl">Usage:</span> <span class="o">&lt;</span><span class="n">main</span> <span class="kd">class</span><span class="err">&gt;</span> <span class="err">[</span><span class="nc">options</span><span class="o">]</span>
-  <span class="nl">Options:</span>
-  <span class="o">*</span> <span class="o">--</span><span class="n">base</span><span class="o">-</span><span class="n">path</span>
-       <span class="nc">Basepath</span> <span class="n">of</span> <span class="nc">Hudi</span> <span class="n">dataset</span> <span class="n">to</span> <span class="n">sync</span>
-  <span class="o">*</span> <span class="o">--</span><span class="n">database</span>
-       <span class="n">name</span> <span class="n">of</span> <span class="n">the</span> <span class="n">target</span> <span class="n">database</span> <span class="n">in</span> <span class="nc">Hive</span>
-    <span class="o">--</span><span class="n">help</span><span class="o">,</span> <span class="o">-</span><span class="n">h</span>
-       <span class="nl">Default:</span> <span class="kc">false</span>
-  <span class="o">*</span> <span class="o">--</span><span class="n">jdbc</span><span class="o">-</span><span class="n">url</span>
-       <span class="nc">Hive</span> <span class="n">jdbc</span> <span class="n">connect</span> <span class="n">url</span>
-  <span class="o">*</span> <span class="o">--</span><span class="n">use</span><span class="o">-</span><span class="n">jdbc</span>
-       <span class="nc">Whether</span> <span class="n">to</span> <span class="n">use</span> <span class="n">jdbc</span> <span class="n">connection</span> <span class="n">or</span> <span class="n">hive</span> <span class="nf">metastore</span> <span class="o">(</span><span class="n">via</span> <span class="n">thrift</span><span class="o">)</span>
-  <span class="o">*</span> <span class="o">--</span><span class="n">pass</span>
-       <span class="nc">Hive</span> <span class="n">password</span>
-  <span class="o">*</span> <span class="o">--</span><span class="n">table</span>
-       <span class="n">name</span> <span class="n">of</span> <span class="n">the</span> <span class="n">target</span> <span class="n">table</span> <span class="n">in</span> <span class="nc">Hive</span>
-  <span class="o">*</span> <span class="o">--</span><span class="n">user</span>
-       <span class="nc">Hive</span> <span class="n">username</span>
 </code></pre></div></div>
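The ‘_ro’/‘_rt’ naming behavior described above can be sketched as follows; this is an illustration of the naming convention only, not `HiveSyncTool` code, and the table-type strings are illustrative:

```python
# Hedged sketch of the 0.5.1 Hive table naming described above: merge-on-read
# tables sync a read-optimized view (suffixed `_ro` unless suffixing is
# skipped for backwards compatibility) plus a realtime view (`_rt`).
# Illustrative only; not HiveSyncTool code.

def synced_table_names(table, table_type, skip_ro_suffix=False):
    if table_type == "COPY_ON_WRITE":
        return [table]  # copy-on-write syncs a single table
    ro = table if skip_ro_suffix else table + "_ro"
    return [ro, table + "_rt"]
```

So a merge-on-read table named `impressions` would surface in Hive as `impressions_ro` and `impressions_rt` by default.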
 
 <h2 id="deletes">Deletes</h2>
 
-<p>Hudi supports implementing two types of deletes on data stored in Hudi datasets, by enabling the user to specify a different record payload implementation.</p>
+<p>Hudi supports implementing two types of deletes on data stored in Hudi tables, by enabling the user to specify a different record payload implementation. 
+For more info refer to <a href="https://cwiki.apache.org/confluence/x/6IqvC">Delete support in Hudi</a>.</p>
 
 <ul>
  <li><strong>Soft Deletes</strong> : With soft deletes, the user wants to retain the key but just null out the values for all other fields. 
- This can be simply achieved by ensuring the appropriate fields are nullable in the dataset schema and simply upserting the dataset after setting these fields to null.</li>
-  <li><strong>Hard Deletes</strong> : A stronger form of delete is to physically remove any trace of the record from the dataset. This can be achieved by issuing an upsert with a custom payload implementation
+ This can be achieved simply by ensuring the appropriate fields are nullable in the table schema, and then upserting the table after setting these fields to null.</li>
+  <li><strong>Hard Deletes</strong> : A stronger form of delete is to physically remove any trace of the record from the table. This can be achieved by issuing an upsert with a custom payload implementation
  via either DataSource or DeltaStreamer which always returns Optional.Empty as the combined value. Hudi ships with a built-in <code class="highlighter-rouge">org.apache.hudi.EmptyHoodieRecordPayload</code> class that does exactly this.</li>
 </ul>
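The two delete styles above can be sketched in plain Python; the payload and store shapes are illustrative stand-ins, with `EMPTY` playing the role that `org.apache.hudi.EmptyHoodieRecordPayload` plays in Hudi:

```python
# Hedged sketch of the two delete styles. Soft delete keeps the key fields
# and nulls everything else; hard delete models a payload whose combined
# value is empty, so the record physically vanishes from the store.

EMPTY = None  # stand-in for a payload combining to Optional.empty()

def soft_delete(record, key_fields=("_row_key", "partition")):
    return {f: (v if f in key_fields else None) for f, v in record.items()}

def apply_upsert(store, key, payload):
    if payload is EMPTY:
        store.pop(key, None)  # hard delete: no trace of the record remains
    else:
        store[key] = payload
    return store

rec = {"_row_key": "k1", "partition": "p1", "fare": 10}
softened = soft_delete(rec)               # key kept, other fields nulled
store = {("p1", "k1"): rec}
apply_upsert(store, ("p1", "k1"), EMPTY)  # record removed entirely
```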
 
@@ -536,14 +565,14 @@ once you have built the hudi-hive module.</p>
  
 </code></pre></div></div>
 
-<h2 id="storage-management">Storage Management</h2>
+<h2 id="optimized-dfs-access">Optimized DFS Access</h2>
 
-<p>Hudi also performs several key storage management functions on the data stored in a Hudi dataset. A key aspect of storing data on DFS is managing file sizes and counts
+<p>Hudi also performs several key storage management functions on the data stored in a Hudi table. A key aspect of storing data on DFS is managing file sizes and counts
and reclaiming storage space. For example, HDFS is infamous for its handling of small files, which exerts memory/RPC pressure on the Name Node and can potentially destabilize
the entire cluster. In general, query engines provide much better performance on adequately sized columnar files, since they can effectively amortize the cost of obtaining 
column statistics etc. Even on some cloud data stores, there is often a cost to listing directories with a large number of small files.</p>
 
-<p>Here are some ways to efficiently manage the storage of your Hudi datasets.</p>
+<p>Here are some ways to efficiently manage the storage of your Hudi tables.</p>
 
 <ul>
   <li>The <a href="/docs/configurations.html#compactionSmallFileSize">small file handling feature</a> in Hudi, profiles incoming workload 
@@ -553,7 +582,7 @@ and distributes inserts to existing file groups instead of creating new file gro
 such that sufficient number of inserts are grouped into the same file group, resulting in well sized base files ultimately.</li>
  <li>Intelligently tuning the <a href="/docs/configurations.html#withBulkInsertParallelism">bulk insert parallelism</a> can again result in nicely sized initial file groups. It is in fact critical to get this right, since the file groups
 once created cannot be deleted, but simply expanded as explained before.</li>
-  <li>For workloads with heavy updates, the <a href="/docs/concepts.html#merge-on-read-storage">merge-on-read storage</a> provides a nice mechanism for ingesting quickly into smaller files and then later merging them into larger base files via compaction.</li>
+  <li>For workloads with heavy updates, the <a href="/docs/concepts.html#merge-on-read-table">merge-on-read table</a> provides a nice mechanism for ingesting quickly into smaller files and then later merging them into larger base files via compaction.</li>
 </ul>
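The small-file handling idea in the first bullet above can be sketched as a simple assignment plan: route incoming inserts into existing under-sized file groups until they reach a target size, and only then spill into new file groups. The sizes and the `assign_inserts` helper below are illustrative; the real logic lives in Hudi's write partitioner and is driven by `compactionSmallFileSize` and record-size estimates.

```python
# Hedged sketch of small-file handling: fill existing small file groups
# toward a target size before creating new file groups. Illustrative only.

def assign_inserts(file_group_sizes, num_inserts, record_size, target_size):
    """Return (group_index_or_None, record_count) assignments;
    None means a brand-new file group is created."""
    plan, remaining = [], num_inserts
    for i, size in enumerate(file_group_sizes):
        if remaining <= 0:
            break
        room = max(0, (target_size - size) // record_size)  # records that fit
        take = min(room, remaining)
        if take:
            plan.append((i, take))
            remaining -= take
    if remaining:
        plan.append((None, remaining))  # overflow goes to a new file group
    return plan
```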
 
       </section>
diff --git a/content/releases.html b/content/releases.html
index e356499..8ef2aa7 100644
--- a/content/releases.html
+++ b/content/releases.html
@@ -179,19 +179,26 @@
         
         <aside class="sidebar__right sticky">
           <nav class="toc">
-            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> In this page</h4></header>
+            <header><h4 class="nav__title"><i class="fas fa-file-alt"></i> IN THIS PAGE</h4></header>
             <ul class="toc__menu">
-  <li><a href="#release-050-incubating-docs">Release 0.5.0-incubating (docs)</a>
+  <li><a href="#release-051-incubating">[Release 0.5.1-incubating]</a>
     <ul>
       <li><a href="#download-information">Download Information</a></li>
       <li><a href="#release-highlights">Release Highlights</a></li>
-      <li><a href="#migration-guide-for-this-release">Migration Guide for this release</a></li>
       <li><a href="#raw-release-notes">Raw Release Notes</a></li>
     </ul>
   </li>
-  <li><a href="#release-047">Release 0.4.7</a>
+  <li><a href="#release-050-incubating-docs">Release 0.5.0-incubating (docs)</a>
     <ul>
+      <li><a href="#download-information-1">Download Information</a></li>
       <li><a href="#release-highlights-1">Release Highlights</a></li>
+      <li><a href="#migration-guide-for-this-release">Migration Guide for this release</a></li>
+      <li><a href="#raw-release-notes-1">Raw Release Notes</a></li>
+    </ul>
+  </li>
+  <li><a href="#release-047">Release 0.4.7</a>
+    <ul>
+      <li><a href="#release-highlights-2">Release Highlights</a></li>
       <li><a href="#pr-list">PR LIST</a></li>
     </ul>
   </li>
@@ -199,16 +206,65 @@
           </nav>
         </aside>
         
-        <h2 id="release-050-incubating-docs"><a href="https://github.com/apache/incubator-hudi/releases/tag/release-0.5.0-incubating">Release 0.5.0-incubating</a> (<a href="/docs/0.5.0-quick-start-guide.html">docs</a>)</h2>
+        <h2 id="release-051-incubating">[Release 0.5.1-incubating]</h2>
 
 <h3 id="download-information">Download Information</h3>
 <ul>
-  <li>Source Release : <a href="https://www.apache.org/dist/incubator/hudi/0.5.0-incubating/hudi-0.5.0-incubating.src.tgz">Apache Hudi(incubating) 0.5.0-incubating Source Release</a> (<a href="https://www.apache.org/dist/incubator/hudi/0.5.0-incubating/hudi-0.5.0-incubating.src.tgz.asc">asc</a>, <a href="https://www.apache.org/dist/incubator/hudi/0.5.0-incubating/hudi-0.5.0-incubating.src.tgz.sha512">sha512</a>)</li>
+  <li>Source Release : <a href="https://www.apache.org/dist/incubator/hudi/0.5.1-incubating/hudi-0.5.1-incubating.src.tgz">Apache Hudi(incubating) 0.5.1-incubating Source Release</a> (<a href="https://www.apache.org/dist/incubator/hudi/0.5.1-incubating/hudi-0.5.1-incubating.src.tgz.asc">asc</a>, <a href="https://www.apache.org/dist/incubator/hudi/0.5.1-incubating/hudi-0.5.1-incubating.src.tgz.sha512">sha512</a>)</li>
  <li>Apache Hudi (incubating) jars corresponding to this release are available <a href="https://repository.apache.org/#nexus-search;quick~hudi">here</a></li>
 </ul>
 
 <h3 id="release-highlights">Release Highlights</h3>
 <ul>
+  <li>Dependency Version Upgrades
+    <ul>
+      <li>Upgrade from Spark 2.1.0 to Spark 2.4.4</li>
+      <li>Upgrade from Avro 1.7.7 to Avro 1.8.2</li>
+      <li>Upgrade from Parquet 1.8.1 to Parquet 1.10.1</li>
+      <li>Upgrade from Kafka 0.8.2.1 to Kafka 2.0.0 as a result of updating spark-streaming-kafka artifact from 0.8_2.11/2.12 to 0.10_2.11/2.12.</li>
+    </ul>
+  </li>
+  <li><strong>IMPORTANT</strong> This version requires your runtime spark version to be upgraded to 2.4+.</li>
+  <li>Hudi now supports both Scala 2.11 and Scala 2.12, please refer to <a href="https://github.com/apache/incubator-hudi#build-with-scala-212">Build with Scala 2.12</a> to build with Scala 2.12.
Also, the packages hudi-spark, hudi-utilities, hudi-spark-bundle and hudi-utilities-bundle are changed correspondingly to hudi-spark_{scala_version}, hudi-utilities_{scala_version}, hudi-spark-bundle_{scala_version} and hudi-utilities-bundle_{scala_version}.
+Note that scala_version here is one of (2.11, 2.12).</li>
+  <li>With 0.5.1, we added functionality to stop using renames for Hudi timeline metadata operations. This feature is automatically enabled for newly created Hudi tables. For existing tables, this feature is turned off by default. Please read this <a href="https://hudi.apache.org/docs/deployment.html#upgrading">section</a> before enabling this feature for existing hudi tables.
+To enable the new hudi timeline layout which avoids renames, use the write config “hoodie.timeline.layout.version=1”. Alternatively, you can use “repair overwrite-hoodie-props” to append the line “hoodie.timeline.layout.version=1” to hoodie.properties. Note that in any case, you should upgrade Hudi readers (query engines) to the 0.5.1-incubating release before upgrading the writer.</li>
+  <li>CLI supports <code class="highlighter-rouge">repair overwrite-hoodie-props</code> to overwrite the table’s hoodie.properties with specified file, for one-time updates to table name or even enabling the new timeline layout above. Note that few queries may temporarily fail while the overwrite happens (few milliseconds).</li>
+  <li>The DeltaStreamer CLI parameter for capturing the table type is changed from --storage-type to --table-type. Refer to the <a href="https://cwiki.apache.org/confluence/display/HUDI/Design+And+Architecture">wiki</a> for the latest terminology.</li>
+  <li>Configuration value change for Kafka reset offset strategies: enum values are changed from LARGEST to LATEST and from SMALLEST to EARLIEST for configuring the Kafka reset offset strategy (auto.offset.reset) in DeltaStreamer.</li>
+  <li>When using spark-shell to give a quick peek at Hudi, please provide <code class="highlighter-rouge">--packages org.apache.spark:spark-avro_2.11:2.4.4</code>; for more details, refer to the <a href="https://hudi.apache.org/docs/quick-start-guide.html">latest quickstart docs</a></li>
+  <li>Key generator moved to separate package under org.apache.hudi.keygen. If you are using overridden key generator classes (configuration (“hoodie.datasource.write.keygenerator.class”)) that comes with hudi package, please ensure the fully qualified class name is changed accordingly.</li>
+  <li>The Hive Sync tool will register RO tables for MOR with a _ro suffix, so query them with the _ro suffix. Use <code class="highlighter-rouge">--skip-ro-suffix</code> in the sync config to retain the old naming without the _ro suffix.</li>
+  <li>With 0.5.1, hudi-hadoop-mr-bundle which is used by query engines such as presto and hive includes shaded avro package to support hudi real time queries through these engines. Hudi supports pluggable logic for merging of records. Users provide their own implementation of <a href="https://github.com/apache/incubator-hudi/blob/master/hudi-common/src/main/java/org/apache/hudi/common/model/HoodieRecordPayload.java">HoodieRecordPayload</a>.
+If you are using this feature, you need to relocate the avro dependencies in your custom record payload class to be consistent with internal hudi shading. You need to add the following relocation when shading the package containing the record payload implementation.</li>
+</ul>
+
+<div class="language-xml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt">&lt;relocation&gt;</span>
+    <span class="nt">&lt;pattern&gt;</span>org.apache.avro.<span class="nt">&lt;/pattern&gt;</span>
+    <span class="nt">&lt;shadedPattern&gt;</span>org.apache.hudi.org.apache.avro.<span class="nt">&lt;/shadedPattern&gt;</span>
+<span class="nt">&lt;/relocation&gt;</span>
+</code></pre></div></div>
+
+<ul>
+  <li>Better delete support in DeltaStreamer, please refer to <a href="https://cwiki.apache.org/confluence/display/HUDI/2020/01/15/Delete+support+in+Hudi">blog</a> for more info.</li>
+  <li>Support for AWS Database Migration Service(DMS) in DeltaStreamer, please refer to <a href="https://cwiki.apache.org/confluence/display/HUDI/2020/01/20/Change+Capture+Using+AWS+Database+Migration+Service+and+Hudi">blog</a> for more info.</li>
+  <li>Support for DynamicBloomFilter. This is turned off by default, to enable the DynamicBloomFilter, please use the index config “hoodie.bloom.index.filter.type=DYNAMIC_V0”.</li>
+</ul>
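The one-time hoodie.properties edit mentioned in the timeline-layout note above (which `repair overwrite-hoodie-props` achieves via the CLI) can be sketched as an idempotent append; the file handling below is illustrative only, not Hudi code:

```python
# Hedged sketch: append `hoodie.timeline.layout.version=1` to a
# hoodie.properties file only if it is not already present. Illustrative
# stand-in for what the CLI's repair command accomplishes.
import os
import tempfile

def enable_timeline_layout_v1(props_path):
    line = "hoodie.timeline.layout.version=1"
    with open(props_path) as f:
        existing = f.read().splitlines()
    if line not in existing:
        with open(props_path, "a") as f:
            f.write(line + "\n")

# demo on a throwaway properties file
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("hoodie.table.name=impressions\n")
enable_timeline_layout_v1(path)
enable_timeline_layout_v1(path)  # idempotent: second call adds nothing
with open(path) as f:
    contents = f.read()
os.remove(path)
```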
+
+<h3 id="raw-release-notes">Raw Release Notes</h3>
+<p>The raw release notes are available <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&amp;version=12346183">here</a></p>
+
+<h2 id="release-050-incubating-docs"><a href="https://github.com/apache/incubator-hudi/releases/tag/release-0.5.0-incubating">Release 0.5.0-incubating</a> (<a href="/docs/0.5.0-quick-start-guide.html">docs</a>)</h2>
+
+<h3 id="download-information-1">Download Information</h3>
+<ul>
+  <li>Source Release : <a href="https://www.apache.org/dist/incubator/hudi/0.5.0-incubating/hudi-0.5.0-incubating.src.tgz">Apache Hudi(incubating) 0.5.0-incubating Source Release</a> (<a href="https://www.apache.org/dist/incubator/hudi/0.5.0-incubating/hudi-0.5.0-incubating.src.tgz.asc">asc</a>, <a href="https://www.apache.org/dist/incubator/hudi/0.5.0-incubating/hudi-0.5.0-incubating.src.tgz.sha512">sha512</a>)</li>
+  <li>Apache Hudi (incubating) jars corresponding to this release are available <a href="https://repository.apache.org/#nexus-search;quick~hudi">here</a></li>
+</ul>
+
+<h3 id="release-highlights-1">Release Highlights</h3>
+<ul>
   <li>Package and format renaming from com.uber.hoodie to org.apache.hudi (See migration guide section below)</li>
   <li>Major redo of Hudi bundles to address class and jar version mismatches in different environments</li>
   <li>Upgrade from Hive 1.x to Hive 2.x for compile time dependencies - Hive 1.x runtime integration still works with a patch : See <a href="https://lists.apache.org/thread.html/48b3f0553f47c576fd7072f56bb0d8a24fb47d4003880d179c7f88a3@%3Cdev.hudi.apache.org%3E">the discussion thread</a></li>
@@ -220,12 +276,12 @@
 <h3 id="migration-guide-for-this-release">Migration Guide for this release</h3>
 <p>This is the first Apache release for Hudi (incubating). Prior to this release, Hudi Jars were published using “com.uber.hoodie” maven co-ordinates. We have a <a href="https://cwiki.apache.org/confluence/display/HUDI/Migration+Guide+From+com.uber.hoodie+to+org.apache.hudi">migration guide</a></p>
 
-<h3 id="raw-release-notes">Raw Release Notes</h3>
+<h3 id="raw-release-notes-1">Raw Release Notes</h3>
 <p>The raw release notes are available <a href="https://jira.apache.org/jira/secure/ReleaseNote.jspa?projectId=12322822&amp;version=12346087">here</a></p>
 
 <h2 id="release-047"><a href="https://github.com/apache/incubator-hudi/releases/tag/hoodie-0.4.7">Release 0.4.7</a></h2>
 
-<h3 id="release-highlights-1">Release Highlights</h3>
+<h3 id="release-highlights-2">Release Highlights</h3>
 
 <ul>
   <li>Major releases with fundamental changes to filesystem listing &amp; write failure handling</li>
diff --git a/content/sitemap.xml b/content/sitemap.xml
index a2252e9..98bedaa 100644
--- a/content/sitemap.xml
+++ b/content/sitemap.xml
@@ -241,11 +241,11 @@
 <lastmod>2019-12-30T14:59:57-05:00</lastmod>
 </url>
 <url>
-<loc>http://0.0.0.0:4000/cn/docs/admin_guide.html</loc>
+<loc>http://0.0.0.0:4000/cn/docs/deployment.html</loc>
 <lastmod>2019-12-30T14:59:57-05:00</lastmod>
 </url>
 <url>
-<loc>http://0.0.0.0:4000/docs/admin_guide.html</loc>
+<loc>http://0.0.0.0:4000/docs/deployment.html</loc>
 <lastmod>2019-12-30T14:59:57-05:00</lastmod>
 </url>
 <url>

