carbondata-commits mailing list archives

From chenliang...@apache.org
Subject [05/20] carbondata-site git commit: modified for 1.5.0 version
Date Wed, 17 Oct 2018 10:14:53 GMT
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/css/style.css
----------------------------------------------------------------------
diff --git a/src/main/webapp/css/style.css b/src/main/webapp/css/style.css
index 88fd05f..719f72d 100644
--- a/src/main/webapp/css/style.css
+++ b/src/main/webapp/css/style.css
@@ -1307,6 +1307,11 @@ width:80%;
 padding-left:30px;
 }
 
+@media screen and (min-width: 1690px) {
+  .verticalnavbar {
+    width: 11% !important;
+  }
+}
+
 .verticalnavbar {
     float: left;
     text-transform: uppercase;

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/datamap-developer-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/datamap-developer-guide.html b/src/main/webapp/datamap-developer-guide.html
index 4b9aa4b..50ac30f 100644
--- a/src/main/webapp/datamap-developer-guide.html
+++ b/src/main/webapp/datamap-developer-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -220,7 +228,7 @@ Currently, there are two types of DataMap supported:</p>
 <li>MVDataMap: DataMap that leverages Materialized View to accelerate OLAP-style queries, such as SPJG queries (select, predicate, join, groupby)</li>
 </ol>
 <h3>
-<a id="datamap-provider" class="anchor" href="#datamap-provider" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>DataMap provider</h3>
+<a id="datamap-provider" class="anchor" href="#datamap-provider" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>DataMap Provider</h3>
 <p>When user issues <code>CREATE DATAMAP dm ON TABLE main USING 'provider'</code>, the corresponding DataMapProvider implementation will be created and initialized.
 Currently, the provider string can be:</p>
 <ol>
@@ -229,7 +237,7 @@ Currently, the provider string can be:</p>
 <li>class name of an IndexDataMapFactory implementation: developers can implement a new type of IndexDataMap by extending IndexDataMapFactory</li>
 </ol>
 <p>When user issues <code>DROP DATAMAP dm ON TABLE main</code>, the corresponding DataMapProvider interface will be called.</p>
-<p>Details about <a href="./datamap-management.html#datamap-management">DataMap Management</a> and supported <a href="./datamap-management.html#overview">DSL</a> are documented <a href="./datamap-management.html">here</a>.</p>
+<p>Click for more details about <a href="./datamap-management.html#datamap-management">DataMap Management</a> and supported <a href="./datamap-management.html#overview">DSL</a>.</p>
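As a sketch of the provider lifecycle described above (the `'preaggregate'` provider string and the table/datamap names are illustrative assumptions, not taken from this page):

```sql
-- CREATE DATAMAP instantiates and initializes the matching DataMapProvider
-- ('preaggregate' is assumed here as one of the short provider names)
CREATE DATAMAP dm ON TABLE main
USING 'preaggregate'
AS SELECT country, sum(quantity) FROM main GROUP BY country;

-- DROP DATAMAP calls back into the same provider to clean up
DROP DATAMAP dm ON TABLE main;
```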
 <script>
 $(function() {
   // Show selected style on nav item

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/datamap-management.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/datamap-management.html b/src/main/webapp/datamap-management.html
index e2e89f3..ad22bb8 100644
--- a/src/main/webapp/datamap-management.html
+++ b/src/main/webapp/datamap-management.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -294,8 +302,8 @@ If user create MV datamap without specifying <code>WITH DEFERRED REBUILD</code>,
 <h3>
 <a id="automatic-refresh" class="anchor" href="#automatic-refresh" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Automatic Refresh</h3>
 <p>When user creates a datamap on the main table without using <code>WITH DEFERRED REBUILD</code> syntax, the datamap will be managed by system automatically.
-For every data load to the main table, system will immediately triger a load to the datamap automatically. These two data loading (to main table and datamap) is executed in a transactional manner, meaning that it will be either both success or neither success.</p>
-<p>The data loading to datamap is incremental based on Segment concept, avoiding a expesive total rebuild.</p>
+For every data load to the main table, the system will immediately trigger a load to the datamap automatically. These two data loads (to the main table and to the datamap) are executed in a transactional manner, meaning that either both succeed or neither does.</p>
+<p>The data loading to the datamap is incremental, based on the Segment concept, avoiding an expensive total rebuild.</p>
 <p>If the user performs any of the following commands on the main table, the system will return failure (reject the operation):</p>
 <ol>
 <li>Data management command: <code>UPDATE/DELETE/DELETE SEGMENT</code>.</li>
@@ -310,7 +318,7 @@ not, the operation is allowed, otherwise operation will be rejected by throwing
 <p>We recommend using this management for index datamaps.</p>
 <h3>
 <a id="manual-refresh" class="anchor" href="#manual-refresh" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Manual Refresh</h3>
-<p>When user creates a datamap specifying maunal refresh semantic, the datamap is created with status <em>disabled</em> and query will NOT use this datamap until user can issue REBUILD DATAMAP command to build the datamap. For every REBUILD DATAMAP command, system will trigger a full rebuild of the datamap. After rebuild is done, system will change datamap status to <em>enabled</em>, so that it can be used in query rewrite.</p>
+<p>When the user creates a datamap specifying manual refresh semantics, the datamap is created with status <em>disabled</em> and queries will NOT use this datamap until the user issues the REBUILD DATAMAP command to build it. For every REBUILD DATAMAP command, the system will trigger a full rebuild of the datamap. After the rebuild is done, the system will change the datamap status to <em>enabled</em>, so that it can be used in query rewrite.</p>
 <p>For every new data loading, data update, delete, the related datamap will be made <em>disabled</em>,
 which means that the following queries will not benefit from the datamap before it becomes <em>enabled</em> again.</p>
 <p>If the main table is dropped by user, the related datamap will be dropped immediately.</p>
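A minimal sketch of the manual-refresh flow above, assuming an MV datamap (table, datamap, and column names are illustrative):

```sql
-- Created with status *disabled*; queries will not use it yet
CREATE DATAMAP dm ON TABLE main
USING 'mv'
WITH DEFERRED REBUILD
AS SELECT a, sum(b) FROM main GROUP BY a;

-- Triggers a full rebuild; on success the status changes to *enabled*
REBUILD DATAMAP dm;
```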
@@ -336,7 +344,7 @@ Manual refresh on this datamap will have no impact.</li>
 <h3>
 <a id="explain" class="anchor" href="#explain" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Explain</h3>
 <p>How can a user know whether a datamap is used in the query?</p>
-<p>User can use EXPLAIN command to know, it will print out something like</p>
+<p>User can set <code>enable.query.statistics = true</code> and use the EXPLAIN command to find out; it will print out something like</p>
 <pre lang="text"><code>== CarbonData Profiler ==
 Hit mv DataMap: datamap1
 Scan Table: default.datamap1_table

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/ddl-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/ddl-of-carbondata.html b/src/main/webapp/ddl-of-carbondata.html
index 2582f4d..635d835 100644
--- a/src/main/webapp/ddl-of-carbondata.html
+++ b/src/main/webapp/ddl-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -229,6 +237,8 @@
 <li><a href="#caching-at-block-or-blocklet-level">Caching Level</a></li>
 <li><a href="#support-flat-folder-same-as-hiveparquet">Hive/Parquet folder Structure</a></li>
 <li><a href="#string-longer-than-32000-characters">Extra Long String columns</a></li>
+<li><a href="#compression-for-table">Compression for Table</a></li>
+<li><a href="#bad-records-path">Bad Records Path</a></li>
 </ul>
 </li>
 <li><a href="#create-table-as-select">CREATE TABLE AS SELECT</a></li>
@@ -326,6 +336,10 @@ STORED AS carbondata
 <td>Size of blocks to write onto hdfs</td>
 </tr>
 <tr>
+<td><a href="#table-blocklet-size-configuration">TABLE_BLOCKLET_SIZE</a></td>
+<td>Size of blocklet to write in the file</td>
+</tr>
+<tr>
 <td><a href="#table-compaction-configuration">MAJOR_COMPACTION_SIZE</a></td>
 <td>Size up to which the segments can be combined into one</td>
 </tr>
@@ -346,7 +360,7 @@ STORED AS carbondata
 <td>Segments generated within the configured time limit in days will be compacted, skipping others</td>
 </tr>
 <tr>
-<td><a href="#streaming">streaming</a></td>
+<td><a href="#streaming">STREAMING</a></td>
 <td>Whether the table is a streaming table</td>
 </tr>
 <tr>
@@ -359,11 +373,11 @@ STORED AS carbondata
 </tr>
 <tr>
 <td><a href="#local-dictionary-configuration">LOCAL_DICTIONARY_INCLUDE</a></td>
-<td>Columns for which local dictionary needs to be generated.Useful when local dictionary need not be generated for all string/varchar/char columns</td>
+<td>Columns for which local dictionary needs to be generated. Useful when local dictionary need not be generated for all string/varchar/char columns</td>
 </tr>
 <tr>
 <td><a href="#local-dictionary-configuration">LOCAL_DICTIONARY_EXCLUDE</a></td>
-<td>Columns for which local dictionary generation should be skipped.Useful when local dictionary need not be generated for few string/varchar/char columns</td>
+<td>Columns for which local dictionary generation should be skipped. Useful when local dictionary need not be generated for few string/varchar/char columns</td>
 </tr>
 <tr>
 <td><a href="#caching-minmax-value-for-required-columns">COLUMN_META_CACHE</a></td>
@@ -371,10 +385,10 @@ STORED AS carbondata
 </tr>
 <tr>
 <td><a href="#caching-at-block-or-blocklet-level">CACHE_LEVEL</a></td>
-<td>Column metadata caching level.Whether to cache column metadata of block or blocklet</td>
+<td>Column metadata caching level. Whether to cache column metadata of block or blocklet</td>
 </tr>
 <tr>
-<td><a href="#support-flat-folder-same-as-hiveparquet">flat_folder</a></td>
+<td><a href="#support-flat-folder-same-as-hiveparquet">FLAT_FOLDER</a></td>
 <td>Whether to write all the carbondata files in a single folder. Segment folders are not written during incremental load</td>
 </tr>
 <tr>
@@ -400,12 +414,8 @@ STORED AS carbondata
 Suggested use cases: use dictionary encoding for low-cardinality columns; it might help to improve data compression ratio and performance.</p>
 <pre><code>TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
 </code></pre>
+<p><strong>NOTE</strong>: Dictionary Include/Exclude for complex child columns is not supported.</p>
 </li>
-</ul>
-<pre><code>```
- NOTE: Dictionary Include/Exclude for complex child columns is not supported.
-</code></pre>
-<ul>
 <li>
 <h5>
 <a id="inverted-index-configuration" class="anchor" href="#inverted-index-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Inverted Index Configuration</h5>
@@ -421,14 +431,14 @@ Suggested use cases : For high cardinality columns, you can disable the inverted
 <ul>
 <li>If users don't specify the "SORT_COLUMNS" property, by default the MDK index will be built using all dimension columns except complex data type columns.</li>
 <li>If this property is specified but with empty argument, then the table will be loaded without sort.</li>
-<li>This supports only string, date, timestamp, short, int, long, and boolean data types.
+<li>This supports only string, date, timestamp, short, int, long, byte and boolean data types.
 Suggested use cases: only build the MDK index for required columns; it might help to improve the data loading performance.</li>
 </ul>
 <pre><code>TBLPROPERTIES ('SORT_COLUMNS'='column1, column3')
 OR
 TBLPROPERTIES ('SORT_COLUMNS'='')
 </code></pre>
-<p>NOTE: Sort_Columns for Complex datatype columns is not supported.</p>
+<p><strong>NOTE</strong>: Sort_Columns for Complex datatype columns is not supported.</p>
 </li>
 <li>
 <h5>
@@ -444,32 +454,43 @@ And if you care about loading resources isolation strictly, because the system u
 </li>
 </ul>
 <pre><code>### Example:
-</code></pre>
-<pre><code> CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-   productNumber INT,
-   productName STRING,
-   storeCity STRING,
-   storeProvince STRING,
-   productCategory STRING,
-   productBatch STRING,
-   saleQuantity INT,
-   revenue INT)
- STORED AS carbondata
- TBLPROPERTIES ('SORT_COLUMNS'='productName,storeCity',
-                'SORT_SCOPE'='NO_SORT')
+
+```
+CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+  productNumber INT,
+  productName STRING,
+  storeCity STRING,
+  storeProvince STRING,
+  productCategory STRING,
+  productBatch STRING,
+  saleQuantity INT,
+  revenue INT)
+STORED AS carbondata
+TBLPROPERTIES ('SORT_COLUMNS'='productName,storeCity',
+               'SORT_SCOPE'='NO_SORT')
+```
 </code></pre>
 <p><strong>NOTE:</strong> CarbonData also supports "using carbondata". Find example code at <a href="https://github.com/apache/carbondata/blob/master/examples/spark2/src/main/scala/org/apache/carbondata/examples/SparkSessionExample.scala" target=_blank>SparkSessionExample</a> in the CarbonData repo.</p>
 <ul>
 <li>
 <h5>
 <a id="table-block-size-configuration" class="anchor" href="#table-block-size-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Table Block Size Configuration</h5>
-<p>This command is for setting block size of this table, the default value is 1024 MB and supports a range of 1 MB to 2048 MB.</p>
+<p>This property is for setting block size of this table, the default value is 1024 MB and supports a range of 1 MB to 2048 MB.</p>
 <pre><code>TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
 </code></pre>
 <p><strong>NOTE:</strong> 512 or 512M both are accepted.</p>
 </li>
 <li>
 <h5>
+<a id="table-blocklet-size-configuration" class="anchor" href="#table-blocklet-size-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Table Blocklet Size Configuration</h5>
+<p>This property is for setting the blocklet size in the carbondata file; the default value is 64 MB.
+A blocklet is the minimum IO read unit; in case of point queries, reducing the blocklet size might improve query performance.</p>
+<p>Example usage:</p>
+<pre><code>TBLPROPERTIES ('TABLE_BLOCKLET_SIZE'='8')
+</code></pre>
+</li>
+<li>
+<h5>
 <a id="table-compaction-configuration" class="anchor" href="#table-compaction-configuration" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Table Compaction Configuration</h5>
 <p>These properties are table-level compaction configurations; if not specified, system-level configurations in carbon.properties will be used.
 Following are the 5 configurations:</p>
@@ -490,7 +511,7 @@ Following are 5 configurations:</p>
 <li>
 <h5>
 <a id="streaming" class="anchor" href="#streaming" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Streaming</h5>
-<p>CarbonData supports streaming ingestion for real-time data. You can create the ?streaming? table using the following table properties.</p>
+<p>CarbonData supports streaming ingestion for real-time data. You can create the 'streaming' table using the following table properties.</p>
 <pre><code>TBLPROPERTIES ('streaming'='true')
 </code></pre>
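For example, a streaming table might be declared as follows (table and column names are illustrative):

```sql
CREATE TABLE sales_stream (
  id INT,
  amount INT)
STORED AS carbondata
TBLPROPERTIES ('streaming'='true');
```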
 </li>
@@ -534,7 +555,28 @@ Following are 5 configurations:</p>
 <p>In case of multi-level complex dataType columns, primitive string/varchar/char columns are considered for local dictionary generation.</p>
 </li>
 </ul>
-<p>Local dictionary will have to be enabled explicitly during create table or by enabling the <strong>system property</strong> <em><strong>carbon.local.dictionary.enable</strong></em>. By default, Local Dictionary will be disabled for the carbondata table.</p>
+<p>System Level Properties for Local Dictionary:</p>
+<table>
+<thead>
+<tr>
+<th>Properties</th>
+<th>Default value</th>
+<th>Description</th>
+</tr>
+</thead>
+<tbody>
+<tr>
+<td>carbon.local.dictionary.enable</td>
+<td>false</td>
+<td>By default, Local Dictionary will be disabled for the carbondata table.</td>
+</tr>
+<tr>
+<td>carbon.local.dictionary.decoder.fallback</td>
+<td>true</td>
+<td>Page Level data will not be maintained for the blocklet. During fallback, actual data will be retrieved from the encoded page data using local dictionary. <strong>NOTE:</strong> Memory footprint decreases significantly as compared to when this property is set to false</td>
+</tr>
+</tbody>
+</table>
 <p>Local Dictionary can be configured using the following properties during create table command:</p>
 <table>
 <thead>
@@ -553,35 +595,37 @@ Following are 5 configurations:</p>
 <tr>
 <td>LOCAL_DICTIONARY_THRESHOLD</td>
 <td>10000</td>
-<td>The maximum cardinality of a column upto which carbondata can try to generate local dictionary (maximum - 100000)</td>
+<td>The maximum cardinality of a column up to which carbondata can try to generate a local dictionary (maximum - 100000). <strong>NOTE:</strong> When LOCAL_DICTIONARY_THRESHOLD is defined for Complex columns, the count of distinct records of all child columns is summed up.</td>
 </tr>
 <tr>
 <td>LOCAL_DICTIONARY_INCLUDE</td>
 <td>string/varchar/char columns</td>
-<td>Columns for which Local Dictionary has to be generated.<strong>NOTE:</strong> Those string/varchar/char columns which are added into DICTIONARY_INCLUDE option will not be considered for local dictionary generation.This property needs to be configured only when local dictionary needs to be generated for few columns, skipping others.This property takes effect only when <strong>LOCAL_DICTIONARY_ENABLE</strong> is true or <strong>carbon.local.dictionary.enable</strong> is true</td>
+<td>Columns for which Local Dictionary has to be generated. <strong>NOTE:</strong> Those string/varchar/char columns which are added into the DICTIONARY_INCLUDE option will not be considered for local dictionary generation. This property needs to be configured only when local dictionary needs to be generated for few columns, skipping others. This property takes effect only when <strong>LOCAL_DICTIONARY_ENABLE</strong> is true or <strong>carbon.local.dictionary.enable</strong> is true</td>
 </tr>
 <tr>
 <td>LOCAL_DICTIONARY_EXCLUDE</td>
 <td>none</td>
-<td>Columns for which Local Dictionary need not be generated.This property needs to be configured only when local dictionary needs to be skipped for few columns, generating for others.This property takes effect only when <strong>LOCAL_DICTIONARY_ENABLE</strong> is true or <strong>carbon.local.dictionary.enable</strong> is true</td>
+<td>Columns for which Local Dictionary need not be generated. This property needs to be configured only when local dictionary needs to be skipped for few columns, generating for others. This property takes effect only when <strong>LOCAL_DICTIONARY_ENABLE</strong> is true or <strong>carbon.local.dictionary.enable</strong> is true</td>
 </tr>
 </tbody>
 </table>
 <p><strong>Fallback behavior:</strong></p>
 <ul>
-<li>When the cardinality of a column exceeds the threshold, it triggers a fallback and the generated dictionary will be reverted and data loading will be continued without dictionary encoding.</li>
+<li>
+<p>When the cardinality of a column exceeds the threshold, it triggers a fallback and the generated dictionary will be reverted and data loading will be continued without dictionary encoding.</p>
+</li>
+<li>
+<p>In case of complex columns, fallback is triggered when the summation value of all child columns' distinct records exceeds the defined LOCAL_DICTIONARY_THRESHOLD value.</p>
+</li>
 </ul>
 <p><strong>NOTE:</strong> When fallback is triggered, the data loading performance will decrease as encoded data will be discarded and the actual data is written to the temporary sort files.</p>
 <p><strong>Points to be noted:</strong></p>
-<ol>
+<ul>
 <li>
 <p>Reduce Block size:</p>
 <p>Number of Blocks generated is less in case of Local Dictionary as compression ratio is high. This may reduce the number of tasks launched during query, resulting in degradation of query performance if the pruned blocks are less compared to the number of parallel tasks which can be run. So it is recommended to configure smaller block size which in turn generates more number of blocks.</p>
 </li>
-<li>
-<p>All the page-level data for a blocklet needs to be maintained in memory until all the pages encoded for local dictionary is processed in order to handle fallback. Hence the memory required for local dictionary based table is more and this memory increase is proportional to number of columns.</p>
-</li>
-</ol>
+</ul>
 <h3>
 <a id="example" class="anchor" href="#example" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Example:</h3>
 <pre><code>CREATE TABLE carbontable(
@@ -611,32 +655,32 @@ Following are 5 configurations:</p>
 <ul>
 <li>If you want no column min/max values to be cached in the driver.</li>
 </ul>
-<pre><code>COLUMN_META_CACHE=??
+<pre><code>COLUMN_META_CACHE=''
 </code></pre>
 <ul>
 <li>If you want only col1 min/max values to be cached in the driver.</li>
 </ul>
-<pre><code>COLUMN_META_CACHE=?col1?
+<pre><code>COLUMN_META_CACHE='col1'
 </code></pre>
 <ul>
 <li>If you want min/max values to be cached in driver for all the specified columns.</li>
 </ul>
-<pre><code>COLUMN_META_CACHE=?col1,col2,col3,??
+<pre><code>COLUMN_META_CACHE='col1,col2,col3,...'
 </code></pre>
 <p>Columns to be cached can be specified either while creating the table or after the table has been created.
 During the create table operation, specify the columns to be cached in the table properties.</p>
 <p>Syntax:</p>
-<pre><code>CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 int,?) STORED BY ?carbondata? TBLPROPERTIES (?COLUMN_META_CACHE?=?col1,col2,??)
+<pre><code>CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 int,...) STORED BY 'carbondata' TBLPROPERTIES ('COLUMN_META_CACHE'='col1,col2,...')
 </code></pre>
 <p>Example:</p>
-<pre><code>CREATE TABLE employee (name String, city String, id int) STORED BY ?carbondata? TBLPROPERTIES (?COLUMN_META_CACHE?=?name?)
+<pre><code>CREATE TABLE employee (name String, city String, id int) STORED BY 'carbondata' TBLPROPERTIES ('COLUMN_META_CACHE'='name')
 </code></pre>
 <p>After creation of table or on already created tables use the alter table command to configure the columns to be cached.</p>
 <p>Syntax:</p>
-<pre><code>ALTER TABLE [dbName].tableName SET TBLPROPERTIES (?COLUMN_META_CACHE?=?col1,col2,??)
+<pre><code>ALTER TABLE [dbName].tableName SET TBLPROPERTIES ('COLUMN_META_CACHE'='col1,col2,...')
 </code></pre>
 <p>Example:</p>
-<pre><code>ALTER TABLE employee SET TBLPROPERTIES (?COLUMN_META_CACHE?=?city?)
+<pre><code>ALTER TABLE employee SET TBLPROPERTIES ('COLUMN_META_CACHE'='city')
 </code></pre>
 </li>
 <li>
@@ -645,36 +689,36 @@ During create table operation; specify the columns to be cached in table propert
 <p>This feature allows you to maintain the cache at Block level, resulting in optimized usage of the memory. The memory consumption is high if Blocklet-level caching is maintained, as a Block can have multiple Blocklets.</p>
 <p>Following are the valid values for CACHE_LEVEL:</p>
 <p><em>Configuration for caching in driver at Block level (default value).</em></p>
-<pre><code>CACHE_LEVEL= ?BLOCK?
+<pre><code>CACHE_LEVEL= 'BLOCK'
 </code></pre>
 <p><em>Configuration for caching in driver at Blocklet level.</em></p>
-<pre><code>CACHE_LEVEL= ?BLOCKLET?
+<pre><code>CACHE_LEVEL= 'BLOCKLET'
 </code></pre>
 <p>Cache level can be specified either while creating table or after creation of the table.
 During create table operation specify the cache level in table properties.</p>
 <p>Syntax:</p>
-<pre><code>CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 int,?) STORED BY ?carbondata? TBLPROPERTIES (?CACHE_LEVEL?=?Blocklet?)
+<pre><code>CREATE TABLE [dbName].tableName (col1 String, col2 String, col3 int,...) STORED BY 'carbondata' TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
 </code></pre>
 <p>Example:</p>
-<pre><code>CREATE TABLE employee (name String, city String, id int) STORED BY ?carbondata? TBLPROPERTIES (?CACHE_LEVEL?=?Blocklet?)
+<pre><code>CREATE TABLE employee (name String, city String, id int) STORED BY 'carbondata' TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
 </code></pre>
 <p>After creation of table or on already created tables use the alter table command to configure the cache level.</p>
 <p>Syntax:</p>
-<pre><code>ALTER TABLE [dbName].tableName SET TBLPROPERTIES (?CACHE_LEVEL?=?Blocklet?)
+<pre><code>ALTER TABLE [dbName].tableName SET TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
 </code></pre>
 <p>Example:</p>
-<pre><code>ALTER TABLE employee SET TBLPROPERTIES (?CACHE_LEVEL?=?Blocklet?)
+<pre><code>ALTER TABLE employee SET TBLPROPERTIES ('CACHE_LEVEL'='Blocklet')
 </code></pre>
 </li>
 <li>
 <h5>
 <a id="support-flat-folder-same-as-hiveparquet" class="anchor" href="#support-flat-folder-same-as-hiveparquet" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Support Flat folder same as Hive/Parquet</h5>
-<p>This feature allows all carbondata and index files to keep directy under tablepath. Currently all carbondata/carbonindex files written under tablepath/Fact/Part0/Segment_NUM folder and it is not same as hive/parquet folder structure. This feature makes all files written will be directly under tablepath, it does not maintain any segment folder structure.This is useful for interoperability between the execution engines and plugin with other execution engines like hive or presto becomes easier.</p>
+<p>This feature allows all carbondata and index files to be kept directly under the tablepath. Currently, all carbondata/carbonindex files are written under the tablepath/Fact/Part0/Segment_NUM folder, which is not the same as the hive/parquet folder structure. With this feature, all files are written directly under the tablepath and no segment folder structure is maintained. This is useful for interoperability between execution engines, and plugging in other execution engines like hive or presto becomes easier.</p>
 <p>Following table property enables this feature and default value is false.</p>
 <pre><code> 'flat_folder'='true'
 </code></pre>
 <p>Example:</p>
-<pre><code>CREATE TABLE employee (name String, city String, id int) STORED BY ?carbondata? TBLPROPERTIES ('flat_folder'='true')
+<pre><code>CREATE TABLE employee (name String, city String, id int) STORED BY 'carbondata' TBLPROPERTIES ('flat_folder'='true')
 </code></pre>
 </li>
 <li>
@@ -698,6 +742,37 @@ TBLPROPERTIES ('LONG_STRING_COLUMNS'='col1,col2')
 You can refer to SDKwriterTestCase for an example.</p>
 <p><strong>NOTE:</strong> The LONG_STRING_COLUMNS can only be string/char/varchar columns and cannot be dictionary_include/sort_columns/complex columns.</p>
 </li>
+<li>
+<h5>
+<a id="compression-for-table" class="anchor" href="#compression-for-table" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Compression for table</h5>
+<p>Data compression is also supported by CarbonData.
+By default, Snappy is used to compress the data. CarbonData also supports the ZSTD compressor.
+User can specify the compressor in the table property:</p>
+<pre><code>TBLPROPERTIES('carbon.column.compressor'='snappy')
+</code></pre>
+<p>or</p>
+<pre><code>TBLPROPERTIES('carbon.column.compressor'='zstd')
+</code></pre>
+<p>If the compressor is configured, all data loading and compaction will use that compressor.
+If the compressor is not configured, data loading and compaction will use the compressor from the current system property.
+In this scenario, the compressor for each load may differ if the system property is changed between loads. This is helpful if you want to change the compressor for a table over time.
+The corresponding system property is configured in the carbon.properties file as below:</p>
+<pre><code>carbon.column.compressor=snappy
+</code></pre>
+<p>or</p>
+<pre><code>carbon.column.compressor=zstd
+</code></pre>
+</li>
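As a sketch of how the two levels combine, the table-level compressor (which takes priority over the carbon.properties default) can be declared at creation time; the table and column names below are illustrative, not from the original:

```sql
-- Hypothetical example: the table-level 'carbon.column.compressor'
-- property overrides the system-wide setting from carbon.properties.
CREATE TABLE sales_zstd (id INT, city STRING)
STORED BY 'carbondata'
TBLPROPERTIES ('carbon.column.compressor'='zstd')
```

Loads and compactions on this table would then always use ZSTD, regardless of later changes to the system property.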
+<li>
+<h5>
+<a id="bad-records-path" class="anchor" href="#bad-records-path" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Bad Records Path</h5>
+<p>This property is used to specify the location where bad records are written.
+Since the table path remains the same after a rename, the user can use this property to
+specify the bad records path for the table at the time of creation, so that the same path can
+later be viewed in the table description for reference.</p>
+<pre><code>  TBLPROPERTIES('BAD_RECORD_PATH'='/opt/badrecords')
+</code></pre>
+</li>
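A minimal sketch combining this property with table creation (the table name and path are illustrative, not from the original):

```sql
-- Hypothetical example: bad records produced while loading this table
-- are written under the configured path, which can later be seen in
-- the table description.
CREATE TABLE employee (name STRING, city STRING, id INT)
STORED BY 'carbondata'
TBLPROPERTIES ('BAD_RECORD_PATH'='/opt/badrecords')
```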
 </ul>
 <h2>
 <a id="create-table-as-select" class="anchor" href="#create-table-as-select" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CREATE TABLE AS SELECT</h2>
@@ -735,7 +810,7 @@ carbon.sql("SELECT * FROM target_table").show
 <a id="create-external-table" class="anchor" href="#create-external-table" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CREATE EXTERNAL TABLE</h2>
 <p>This function allows the user to create an external table by specifying its location.</p>
 <pre><code>CREATE EXTERNAL TABLE [IF NOT EXISTS] [db_name.]table_name 
-STORED AS carbondata LOCATION ?$FilesPath?
+STORED AS carbondata LOCATION '$FilesPath'
 </code></pre>
 <h3>
 <a id="create-external-table-on-managed-table-data-location" class="anchor" href="#create-external-table-on-managed-table-data-location" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Create external table on managed table data location.</h3>
@@ -781,7 +856,7 @@ suggest to drop the external table and create again to register table with new s
 </code></pre>
 <h3>
 <a id="example-1" class="anchor" href="#example-1" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Example</h3>
-<pre><code>CREATE DATABASE carbon LOCATION ?hdfs://name_cluster/dir1/carbonstore?;
+<pre><code>CREATE DATABASE carbon LOCATION 'hdfs://name_cluster/dir1/carbonstore';
 </code></pre>
 <h2>
 <a id="table-management" class="anchor" href="#table-management" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>TABLE MANAGEMENT</h2>
@@ -826,12 +901,11 @@ TBLPROPERTIES('DICTIONARY_INCLUDE'='col_name,...',
 </code></pre>
 <pre><code>ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DEFAULT.VALUE.a1'='10')
 </code></pre>
-<p>NOTE: Add Complex datatype columns is not supported.</p>
+<p><strong>NOTE:</strong> Adding complex datatype columns is not supported.</p>
 </li>
 </ul>
-<p>Users can specify which columns to include and exclude for local dictionary generation after adding new columns. These will be appended with the already existing local dictionary include and exclude columns of main table respectively.</p>
-<pre><code>   ALTER TABLE carbon ADD COLUMNS (a1 STRING, b1 STRING) TBLPROPERTIES('LOCAL_DICTIONARY_INCLUDE'='a1','LOCAL_DICTIONARY_EXCLUDE'='b1')
-</code></pre>
+<p>Users can specify which columns to include in and exclude from local dictionary generation after adding new columns. These will be appended to the existing local dictionary include and exclude columns of the main table respectively.
+<code>ALTER TABLE carbon ADD COLUMNS (a1 STRING, b1 STRING) TBLPROPERTIES('LOCAL_DICTIONARY_INCLUDE'='a1','LOCAL_DICTIONARY_EXCLUDE'='b1')</code></p>
 <ul>
 <li>
 <h5>
@@ -846,7 +920,7 @@ ALTER TABLE test_db.carbon DROP COLUMNS (b1)
 
 ALTER TABLE carbon DROP COLUMNS (c1,d1)
 </code></pre>
-<p>NOTE: Drop Complex child column is not supported.</p>
+<p><strong>NOTE:</strong> Dropping a complex child column is not supported.</p>
 </li>
 <li>
 <h5>
@@ -876,12 +950,14 @@ Change of decimal data type from lower precision to higher precision will only b
 <pre><code> ALTER TABLE [db_name.]table_name COMPACT 'SEGMENT_INDEX'
 </code></pre>
 <pre><code>Examples:
-```
-ALTER TABLE test_db.carbon COMPACT 'SEGMENT_INDEX'
-```
-**NOTE:**
+</code></pre>
+<pre><code> ALTER TABLE test_db.carbon COMPACT 'SEGMENT_INDEX'
+</code></pre>
+<p><strong>NOTE:</strong></p>
+<ul>
+<li>Merge index is not supported on streaming table.</li>
+</ul>
 
-* Merge index is not supported on streaming table.
-</code></pre>
 </li>
 <li>
@@ -935,7 +1011,7 @@ STORED AS carbondata
 <p>Example:</p>
 <pre><code>CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                               productNumber Int COMMENT 'unique serial number for product')
-COMMENT ?This is table comment?
+COMMENT 'This is table comment'
  STORED AS carbondata
  TBLPROPERTIES ('DICTIONARY_INCLUDE'='productNumber')
 </code></pre>
@@ -972,7 +1048,7 @@ COMMENT ?This is table comment?
 PARTITIONED BY (productCategory STRING, productBatch STRING)
 STORED AS carbondata
 </code></pre>
-<p>NOTE: Hive partition is not supported on complex datatype columns.</p>
+<p><strong>NOTE:</strong> Hive partition is not supported on complex datatype columns.</p>
 <h4>
 <a id="show-partitions" class="anchor" href="#show-partitions" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Show Partitions</h4>
 <p>This command gets the Hive partition information of the table</p>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/dml-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/dml-of-carbondata.html b/src/main/webapp/dml-of-carbondata.html
index ac41f7c..e658a68 100644
--- a/src/main/webapp/dml-of-carbondata.html
+++ b/src/main/webapp/dml-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -250,7 +258,7 @@ OPTIONS(property_name=property_value, ...)
 </tr>
 <tr>
 <td><a href="#commentchar">COMMENTCHAR</a></td>
-<td>Character used to comment the rows in the input csv file.Those rows will be skipped from processing</td>
+<td>Character used to comment the rows in the input csv file. Those rows will be skipped from processing</td>
 </tr>
 <tr>
 <td><a href="#header">HEADER</a></td>
@@ -289,11 +297,11 @@ OPTIONS(property_name=property_value, ...)
 <td>Path to read the dictionary data from for particular column</td>
 </tr>
 <tr>
-<td><a href="#dateformat">DATEFORMAT</a></td>
+<td><a href="#dateformattimestampformat">DATEFORMAT</a></td>
 <td>Format of date in the input csv file</td>
 </tr>
 <tr>
-<td><a href="#timestampformat">TIMESTAMPFORMAT</a></td>
+<td><a href="#dateformattimestampformat">TIMESTAMPFORMAT</a></td>
 <td>Format of timestamp in the input csv file</td>
 </tr>
 <tr>
@@ -310,7 +318,7 @@ OPTIONS(property_name=property_value, ...)
 </tr>
 <tr>
 <td><a href="#bad-records-handling">BAD_RECORD_PATH</a></td>
-<td>Bad records logging path.Useful when bad record logging is enabled</td>
+<td>Bad records logging path. Useful when bad record logging is enabled</td>
 </tr>
 <tr>
 <td><a href="#bad-records-handling">BAD_RECORDS_ACTION</a></td>
@@ -428,7 +436,7 @@ true: CSV file is with file header.</p>
 <h5>
 <a id="sort-column-bounds" class="anchor" href="#sort-column-bounds" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>SORT COLUMN BOUNDS:</h5>
 <p>Range bounds for sort columns.</p>
-<p>Suppose the table is created with 'SORT_COLUMNS'='name,id' and the range for name is aaa<del>zzz, the value range for id is 0</del>1000. Then during data loading, we can specify the following option to enhance data loading performance.</p>
+<p>Suppose the table is created with 'SORT_COLUMNS'='name,id' and the range for name is aaa to zzz, the value range for id is 0 to 1000. Then during data loading, we can specify the following option to enhance data loading performance.</p>
 <pre><code>OPTIONS('SORT_COLUMN_BOUNDS'='f,250;l,500;r,750')
 </code></pre>
 <p>Each bound is separated by ';' and each field value in bound is separated by ','. In the example above, we provide 3 bounds to distribute records to 4 partitions. The values 'f','l','r' can evenly distribute the records. Inside carbondata, for a record we compare the value of sort columns with that of the bounds and decide which partition the record will be forwarded to.</p>
@@ -437,7 +445,7 @@ true: CSV file is with file header.</p>
 <li>SORT_COLUMN_BOUNDS will be used only when the SORT_SCOPE is 'local_sort'.</li>
 <li>Carbondata will use these bounds as ranges to process data concurrently during the final sort procedure. The records will be sorted and written out inside each partition. Since each partition is sorted, all records will be sorted.</li>
 <li>Since the actual order and literal order of the dictionary column are not necessarily the same, we do not recommend you to use this feature if the first sort column is 'dictionary_include'.</li>
-<li>The option works better if your CPU usage during loading is low. If your system is already CPU tense, better not to use this option. Besides, it depends on the user to specify the bounds. If user does not know the exactly bounds to make the data distributed evenly among the bounds, loading performance will still be better than before or at least the same as before.</li>
+<li>The option works better if your CPU usage during loading is low. If your system's CPU usage is already high, it is better not to use this option. Besides, it is up to the user to specify the bounds. Even if the user does not know the exact bounds that would distribute the data evenly among them, loading performance will still be better than before or at least the same as before.</li>
 <li>Users can find more information about this option in the description of PR1953.</li>
 </ul>
 </li>
@@ -492,10 +500,6 @@ projectjoindate,projectenddate,attendance,utilization,salary',
 <li>The Bad Records Path can be specified in create, load and carbon properties.
 The value specified in load has the highest priority, and the value specified in carbon properties has the least priority.</li>
 </ul>
-<p><strong>Bad Records Path:</strong>
-This property is used to specify the location where bad records would be written.</p>
-<pre><code>TBLPROPERTIES('BAD_RECORDS_PATH'='/opt/badrecords'')
-</code></pre>
 <p>Example:</p>
 <pre><code>LOAD DATA INPATH 'filepath.csv' INTO TABLE tablename
 OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true','BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon',

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/documentation.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/documentation.html b/src/main/webapp/documentation.html
index 982becf..c920945 100644
--- a/src/main/webapp/documentation.html
+++ b/src/main/webapp/documentation.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -215,7 +223,7 @@
 <p>Apache CarbonData is a new big data file format for faster interactive queries, using advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps speed up queries by an order of magnitude over petabytes of data.</p>
 <h2>
 <a id="getting-started" class="anchor" href="#getting-started" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Getting Started</h2>
-<p><strong>File Format Concepts:</strong> Start with the basics of understanding the <a href="./file-structure-of-carbondata.html#carbondata-file-format">CarbonData file format</a> and its <a href="./file-structure-of-carbondata.html">storage structure</a>.This will help to understand other parts of the documentation, including deployment, programming and usage guides.</p>
+<p><strong>File Format Concepts:</strong> Start with the basics of understanding the <a href="./file-structure-of-carbondata.html#carbondata-file-format">CarbonData file format</a> and its <a href="./file-structure-of-carbondata.html">storage structure</a>. This will help to understand other parts of the documentation, including deployment, programming and usage guides.</p>
 <p><strong>Quick Start:</strong> <a href="./quick-start-guide.html#installing-and-configuring-carbondata-to-run-locally-with-spark-shell">Run an example program</a> on your local machine or <a href="https://github.com/apache/carbondata/tree/master/examples/spark2/src/main/scala/org/apache/carbondata/examples" target=_blank>study some examples</a>.</p>
 <p><strong>CarbonData SQL Language Reference:</strong> CarbonData extends the Spark SQL language and adds several <a href="./ddl-of-carbondata.html">DDL</a> and <a href="./dml-of-carbondata.html">DML</a> statements to support operations on it. Refer to the <a href="./language-manual.html">Reference Manual</a> to understand the supported features and functions.</p>
 <p><strong>Programming Guides:</strong> You can read our guides about <a href="./sdk-guide.html">APIs supported</a> to learn how to integrate CarbonData with your applications.</p>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/faq.html b/src/main/webapp/faq.html
index c37284f..aac986c 100644
--- a/src/main/webapp/faq.html
+++ b/src/main/webapp/faq.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -224,6 +232,7 @@
 <li><a href="#why-aggregate-query-is-not-fetching-data-from-aggregate-table">Why aggregate query is not fetching data from aggregate table?</a></li>
 <li><a href="#why-all-executors-are-showing-success-in-spark-ui-even-after-dataload-command-failed-at-driver-side">Why all executors are showing success in Spark UI even after Dataload command failed at Driver side?</a></li>
 <li><a href="#why-different-time-zone-result-for-select-query-output-when-query-sdk-writer-output">Why different time zone result for select query output when query SDK writer output?</a></li>
+<li><a href="#how-to-check-lru-cache-memory-footprint">How to check LRU cache memory footprint?</a></li>
 </ul>
 <h1>
 <a id="troubleshooting" class="anchor" href="#troubleshooting" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>TroubleShooting</h1>
@@ -252,12 +261,12 @@ By default <strong>carbon.badRecords.location</strong> specifies the following l
 <a id="how-to-enable-bad-record-logging" class="anchor" href="#how-to-enable-bad-record-logging" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How to enable Bad Record Logging?</h2>
 <p>While loading data we can specify the approach to handle Bad Records. In order to analyse the cause of the Bad Records, the parameter <code>BAD_RECORDS_LOGGER_ENABLE</code> must be set to the value <code>TRUE</code>. There are multiple approaches to handle Bad Records, which can be specified by the parameter <code>BAD_RECORDS_ACTION</code>.</p>
 <ul>
-<li>To pad the incorrect values of the csv rows with NULL value and load the data in CarbonData, set the following in the query :</li>
+<li>To pad the incorrect values of the csv rows with NULL and load the data into CarbonData, set the following in the query:</li>
 </ul>
 <pre><code>'BAD_RECORDS_ACTION'='FORCE'
 </code></pre>
 <ul>
-<li>To write the Bad Records without padding incorrect values with NULL in the raw csv (set in the parameter <strong>carbon.badRecords.location</strong>), set the following in the query :</li>
+<li>To write the Bad Records without padding incorrect values with NULL in the raw csv (set in the parameter <strong>carbon.badRecords.location</strong>), set the following in the query:</li>
 </ul>
 <pre><code>'BAD_RECORDS_ACTION'='REDIRECT'
 </code></pre>
@@ -367,7 +376,7 @@ select cntry,sum(gdp) from gdp21,pop1 where cntry=ctry group by cntry;
 </code></pre>
 <h2>
 <a id="why-all-executors-are-showing-success-in-spark-ui-even-after-dataload-command-failed-at-driver-side" class="anchor" href="#why-all-executors-are-showing-success-in-spark-ui-even-after-dataload-command-failed-at-driver-side" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Why all executors are showing success in Spark UI even after Dataload command failed at Driver side?</h2>
-<p>Spark executor shows task as failed after the maximum number of retry attempts, but loading the data having bad records and BAD_RECORDS_ACTION (carbon.bad.records.action) is set as ?FAIL? will attempt only once but will send the signal to driver as failed instead of throwing the exception to retry, as there is no point to retry if bad record found and BAD_RECORDS_ACTION is set to fail. Hence the Spark executor displays this one attempt as successful but the command has actually failed to execute. Task attempts or executor logs can be checked to observe the failure reason.</p>
+<p>Spark executor shows a task as failed after the maximum number of retry attempts. But when loading data that has bad records and BAD_RECORDS_ACTION (carbon.bad.records.action) is set to "FAIL", the task will attempt only once and will send a failed signal to the driver instead of throwing an exception to retry, as there is no point in retrying if a bad record is found and BAD_RECORDS_ACTION is set to fail. Hence the Spark executor displays this one attempt as successful even though the command has actually failed to execute. Task attempts or executor logs can be checked to observe the failure reason.</p>
 <h2>
 <a id="why-different-time-zone-result-for-select-query-output-when-query-sdk-writer-output" class="anchor" href="#why-different-time-zone-result-for-select-query-output-when-query-sdk-writer-output" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Why different time zone result for select query output when query SDK writer output?</h2>
 <p>SDK writer is an independent entity, hence SDK writer can generate carbondata files from a non-cluster machine that has a different time zone. But when those files are read at the cluster, the cluster time-zone is always used. Hence, the values of timestamp and date datatype fields are not the original values.
@@ -379,6 +388,20 @@ If wanted to control timezone of data while writing, then set cluster's time-zon
 TimeZone.setDefault(TimeZone.getTimeZone("Asia/Shanghai"))
 </code></pre>
 <h2>
+<a id="how-to-check-lru-cache-memory-footprint" class="anchor" href="#how-to-check-lru-cache-memory-footprint" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>How to check LRU cache memory footprint?</h2>
+<p>To observe the LRU cache memory footprint in the logs, configure the properties below in the log4j.properties file.</p>
+<pre><code>log4j.logger.org.apache.carbondata.core.memory.UnsafeMemoryManager = DEBUG
+log4j.logger.org.apache.carbondata.core.cache.CarbonLRUCache = DEBUG
+</code></pre>
+<p>These properties enable DEBUG logs for CarbonLRUCache and UnsafeMemoryManager, which print information about the memory consumed, based on which the LRU cache size can be decided. <strong>Note:</strong> Enabling the DEBUG log will degrade query performance.</p>
+<p><strong>Example:</strong></p>
+<pre><code>18/09/26 15:05:28 DEBUG UnsafeMemoryManager: pool-44-thread-1 Memory block (org.apache.carbondata.core.memory.MemoryBlock@21312095) is created with size 10. Total memory used 413Bytes, left 536870499Bytes
+18/09/26 15:05:29 DEBUG CarbonLRUCache: main Required size for entry /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge :: 181 Current cache size :: 0
+18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 105available memory:  536870836
+18/09/26 15:05:30 DEBUG UnsafeMemoryManager: main Freeing memory of size: 76available memory:  536870912
+18/09/26 15:05:30 INFO CarbonLRUCache: main Removed entry from InMemory lru cache :: /home/target/store/default/stored_as_carbondata_table/Fact/Part0/Segment_0/0_1537954529044.carbonindexmerge
+</code></pre>
+<h2>
 <a id="getting-tablestatuslock-issues-when-loading-data" class="anchor" href="#getting-tablestatuslock-issues-when-loading-data" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Getting tablestatus.lock issues When loading data</h2>
 <p><strong>Symptom</strong></p>
 <pre><code>17/11/11 16:48:13 ERROR LocalFileLock: main hdfs:/localhost:9000/carbon/store/default/hdfstable/tablestatus.lock (No such file or directory)

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/file-structure-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/file-structure-of-carbondata.html b/src/main/webapp/file-structure-of-carbondata.html
index c14ea6d..bd2be65 100644
--- a/src/main/webapp/file-structure-of-carbondata.html
+++ b/src/main/webapp/file-structure-of-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -249,7 +257,7 @@
 <p>The file directory structure is as below:</p>
 <p><a href="../docs/images/2-1_1.png?raw=true" target="_blank" rel="noopener noreferrer"><img src="https://github.com/apache/carbondata/blob/master/docs/images/2-1_1.png?raw=true" alt="File Directory Structure" style="max-width:100%;"></a></p>
 <ol>
-<li>ModifiedTime.htmlt records the timestamp of the metadata with the modification time attribute of the file. When the drop table and create table are used, the modification time of the file is updated.This is common to all databases and hence is kept in parallel to databases</li>
+<li>ModifiedTime.htmlt records the timestamp of the metadata with the modification time attribute of the file. When the drop table and create table are used, the modification time of the file is updated. This is common to all databases and hence is kept in parallel to databases</li>
 <li>The <strong>default</strong> is the database name and contains the user tables. "default" is used when the user doesn't specify any database name; otherwise the user-configured database name will be the directory name. user_table is the table name.</li>
 <li>The Metadata directory stores schema files, tablestatus and dictionary files (including .dict, .dictmeta and .sortindex). There are three types of metadata information files.</li>
 <li>Data and index files are stored under a directory named <strong>Fact</strong>. The Fact directory has a Part0 partition directory, where 0 is the partition number.</li>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/how-to-contribute-to-apache-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/how-to-contribute-to-apache-carbondata.html b/src/main/webapp/how-to-contribute-to-apache-carbondata.html
index 122b763..392814c 100644
--- a/src/main/webapp/how-to-contribute-to-apache-carbondata.html
+++ b/src/main/webapp/how-to-contribute-to-apache-carbondata.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -238,7 +246,7 @@ create it. Please discuss your proposal with a committer or the component lead i
 alternatively, on the developer mailing list(<a href="mailto:dev@carbondata.apache.org">dev@carbondata.apache.org</a>).</p>
 <p>If there's an existing JIRA issue for your intended contribution, please comment about your
 intended work. Once the work is understood, a committer will assign the issue to you.
-(If you don?t have a JIRA role yet, you?ll be added to the ?contributor? role.) If an issue is
+(If you don't have a JIRA role yet, you'll be added to the "contributor" role.) If an issue is
 currently assigned, please check with the current assignee before reassigning.</p>
 <p>For moderate or large contributions, you should not start coding or writing a design doc unless
 there is a corresponding JIRA issue assigned to you for that work. Simple changes,
@@ -334,7 +342,7 @@ When you make a revision, always push it in a new commit.</p>
 Please make sure those tests pass; the contribution cannot be merged otherwise.</p>
 <h4>
 <a id="lgtm" class="anchor" href="#lgtm" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>LGTM</h4>
-<p>Once the reviewer is happy with the change, they’ll respond with an LGTM (“looks good to me!”).
+<p>Once the reviewer is happy with the change, they’ll respond with an LGTM ("looks good to me!").
 At this point, the committer will take over, possibly make some additional touch ups,
 and merge your changes into the codebase.</p>
 <p>In the case both the author and the reviewer are committers, either can merge the pull request.

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/introduction.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/introduction.html b/src/main/webapp/introduction.html
index 068d711..8f18870 100644
--- a/src/main/webapp/introduction.html
+++ b/src/main/webapp/introduction.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -229,17 +237,15 @@
 </ul>
 <h2>
 <a id="carbondata-features--functions" class="anchor" href="#carbondata-features--functions" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonData Features &amp; Functions</h2>
-<p>CarbonData has rich set of featues to support various use cases in Big Data analytics.The below table lists the major features supported by CarbonData.</p>
+<p>CarbonData has rich set of features to support various use cases in Big Data analytics. The below table lists the major features supported by CarbonData.</p>
 <h3>
 <a id="table-management" class="anchor" href="#table-management" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Table Management</h3>
 <ul>
 <li>
 <h5>
 <a id="ddl-create-alterdropctas" class="anchor" href="#ddl-create-alterdropctas" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>DDL (Create, Alter,Drop,CTAS)</h5>
+<p>CarbonData provides its own DDL to create and manage carbondata tables. These DDL conform to Hive and Spark SQL formats and support additional properties and configurations to take advantage of CarbonData functionalities.</p>
 </li>
-</ul>
-<p>•	CarbonData provides its own DDL to create and manage carbondata tables.These DDL conform to 			Hive,Spark SQL format and support additional properties and configuration to take advantages of CarbonData functionalities.</p>
-<ul>
 <li>
 <h5>
 <a id="dmlloadinsert" class="anchor" href="#dmlloadinsert" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>DML(Load,Insert)</h5>
@@ -263,7 +269,7 @@
 <li>
 <h5>
 <a id="compaction" class="anchor" href="#compaction" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Compaction</h5>
-<p>CarbonData manages incremental loads as segments.Compaction help to compact the growing number of segments and also to improve query filter pruning.</p>
+<p>CarbonData manages incremental loads as segments. Compaction helps to compact the growing number of segments and also to improve query filter pruning.</p>
 </li>
 <li>
 <h5>
@@ -277,12 +283,12 @@
 <li>
 <h5>
 <a id="pre-aggregate" class="anchor" href="#pre-aggregate" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Pre-Aggregate</h5>
-<p>CarbonData has concept of datamaps to assist in pruning of data while querying so that performance is faster.Pre Aggregate tables are kind of datamaps which can improve the query performance by order of magnitude.CarbonData will automatically pre-aggregae the incremental data and re-write the query to automatically fetch from the most appropriate pre-aggregate table to serve the query faster.</p>
+<p>CarbonData has the concept of datamaps to assist in pruning of data while querying so that performance is faster. Pre-aggregate tables are a kind of datamap which can improve query performance by an order of magnitude. CarbonData will automatically pre-aggregate the incremental data and re-write the query to automatically fetch from the most appropriate pre-aggregate table to serve the query faster.</p>
 </li>
 <li>
 <h5>
 <a id="time-series" class="anchor" href="#time-series" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Time Series</h5>
-<p>CarbonData has built in understanding of time order(Year, month,day,hour, minute,second).Time series is a pre-aggregate table which can automatically roll-up the data to the desired level during incremental load and serve the query from the most appropriate pre-aggregate table.</p>
+<p>CarbonData has built-in understanding of time order (year, month, day, hour, minute, second). Time series is a pre-aggregate table which can automatically roll-up the data to the desired level during incremental load and serve the query from the most appropriate pre-aggregate table.</p>
 </li>
 <li>
 <h5>
@@ -297,7 +303,7 @@
 <li>
 <h5>
 <a id="mv-materialized-views" class="anchor" href="#mv-materialized-views" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>MV (Materialized Views)</h5>
-<p>MVs are kind of pre-aggregate tables which can support efficent query re-write and processing.CarbonData provides MV which can rewrite query to fetch from any table(including non-carbondata tables).Typical usecase is to store the aggregated data of a non-carbondata fact table into carbondata and use mv to rewrite the query to fetch from carbondata.</p>
+<p>MVs are a kind of pre-aggregate table which can support efficient query re-write and processing. CarbonData provides MV which can rewrite a query to fetch from any table (including non-carbondata tables). A typical usecase is to store the aggregated data of a non-carbondata fact table into carbondata and use MV to rewrite the query to fetch from carbondata.</p>
 </li>
 </ul>
 <h3>
@@ -315,12 +321,12 @@
 <li>
 <h5>
 <a id="carbondata-writer" class="anchor" href="#carbondata-writer" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonData writer</h5>
-<p>CarbonData supports writing data from non-spark application using SDK.Users can use SDK to generate carbondata files from custom applications.Typical usecase is to write the streaming application plugged in to kafka and use carbondata as sink(target) table for storing.</p>
+<p>CarbonData supports writing data from non-spark applications using the SDK. Users can use the SDK to generate carbondata files from custom applications. A typical usecase is a streaming application plugged in to Kafka that uses carbondata as the sink (target) table for storing.</p>
 </li>
 <li>
 <h5>
 <a id="carbondata-reader" class="anchor" href="#carbondata-reader" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>CarbonData reader</h5>
-<p>CarbonData supports reading of data from non-spark application using SDK.Users can use the SDK to read the carbondata files from their application and do custom processing.</p>
+<p>CarbonData supports reading of data from non-spark applications using the SDK. Users can use the SDK to read the carbondata files from their application and do custom processing.</p>
 </li>
 </ul>
 <h3>
@@ -329,7 +335,7 @@
 <li>
 <h5>
 <a id="s3" class="anchor" href="#s3" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>S3</h5>
-<p>CarbonData can write to S3, OBS or any cloud storage confirming to S3 protocol.CarbonData uses the HDFS api to write to cloud object stores.</p>
+<p>CarbonData can write to S3, OBS or any cloud storage conforming to the S3 protocol. CarbonData uses the HDFS API to write to cloud object stores.</p>
 </li>
 <li>
 <h5>

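The time-series feature described above rolls incremental data up to a coarser time level at load time, so that queries at that level are served from a small aggregate instead of the raw rows. A minimal sketch of that idea in plain Python (independent of CarbonData; the function name and sample data are hypothetical):

```python
from collections import defaultdict
from datetime import datetime

def rollup(rows, granularity):
    """Roll (timestamp, value) rows up to a coarser time bucket.

    granularity is a strftime pattern, e.g. '%Y-%m-%d %H' for hourly.
    This mirrors the idea of a time-series pre-aggregate table: values are
    summed per bucket at load time, so a query at that level reads the
    small aggregate instead of scanning the raw rows.
    """
    agg = defaultdict(int)
    for ts, value in rows:
        agg[ts.strftime(granularity)] += value
    return dict(agg)

rows = [
    (datetime(2018, 10, 17, 10, 14), 3),
    (datetime(2018, 10, 17, 10, 59), 4),
    (datetime(2018, 10, 17, 11, 5), 5),
]
hourly = rollup(rows, '%Y-%m-%d %H')
# hourly == {'2018-10-17 10': 7, '2018-10-17 11': 5}
```

A daily roll-up would simply pass '%Y-%m-%d'; CarbonData's query rewrite then picks the coarsest pre-aggregate level that can still answer the query.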
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/language-manual.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/language-manual.html b/src/main/webapp/language-manual.html
index a0ea674..74c18f4 100644
--- a/src/main/webapp/language-manual.html
+++ b/src/main/webapp/language-manual.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -236,11 +244,12 @@
 <li>Data Manipulation Statements
 <ul>
 <li>
-<a href="./dml-of-carbondata.html">DML:</a> <a href="./dml-of-carbondata.html#load-data">Load</a>, <a href="./ddl-of-carbondata.html#insert-overwrite">Insert</a>, <a href="./dml-of-carbondata.html#update">Update</a>, <a href="./dml-of-carbondata.html#delete">Delete</a>
+<a href="./dml-of-carbondata.html">DML:</a> <a href="./dml-of-carbondata.html#load-data">Load</a>, <a href="./dml-of-carbondata.html#insert-data-into-carbondata-table">Insert</a>, <a href="./dml-of-carbondata.html#update">Update</a>, <a href="./dml-of-carbondata.html#delete">Delete</a>
 </li>
 <li><a href="./segment-management-on-carbondata.html">Segment Management</a></li>
 </ul>
 </li>
+<li><a href="./carbon-as-spark-datasource-guide.html">CarbonData as Spark's Datasource</a></li>
 <li><a href="./configuration-parameters.html">Configuration Properties</a></li>
 </ul>
 <script>

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/lucene-datamap-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/lucene-datamap-guide.html b/src/main/webapp/lucene-datamap-guide.html
index b8164a2..357286a 100644
--- a/src/main/webapp/lucene-datamap-guide.html
+++ b/src/main/webapp/lucene-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -239,7 +247,7 @@ ON TABLE main_table
 <h2>
 <a id="lucene-datamap-introduction" class="anchor" href="#lucene-datamap-introduction" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Lucene DataMap Introduction</h2>
 <p>Lucene is a high performance, full featured text search engine. Lucene is integrated to carbon as
-an index datamap and managed along with main tables by CarbonData.User can create lucene datamap
+an index datamap and managed along with main tables by CarbonData. User can create lucene datamap
 to improve query performance on string columns which have longer content. So, user can
 search for a tokenized word or a pattern of it using a lucene query on the text content.</p>
 <p>For instance, main table called <strong>datamap_test</strong> which is defined as:</p>
@@ -281,7 +289,7 @@ value is compression, the index file size will be compressed.</p>
 Queries are to be made on the main table. When a query with TEXT_MATCH('name:c10') or
 TEXT_MATCH_WITH_LIMIT('name:n10',10) [the second parameter represents the number of results to be
 returned; if user does not specify this value, all results will be returned without any limit] is
-fired, two jobs are fired.The first job writes the temporary files in folder created at table level
+fired, two jobs are fired. The first job writes the temporary files in a folder created at table level
 which contains lucene's search results and these files will be read in the second job to give faster
 results. These temporary files will be cleared once the query finishes.</p>
 <p>User can verify whether a query can leverage Lucene datamap or not by executing <code>EXPLAIN</code>

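The Lucene datamap's benefit comes from an inverted index: a TEXT_MATCH-style lookup finds the token in the index and returns only the matching row ids, instead of scanning every row of a long string column. A crude, self-contained Python stand-in (the function names and sample rows are hypothetical, not the CarbonData implementation):

```python
import re
from collections import defaultdict

def build_index(docs):
    """Build a tiny inverted index: token -> set of row ids."""
    index = defaultdict(set)
    for row_id, text in docs.items():
        for token in re.findall(r"\w+", text.lower()):
            index[token].add(row_id)
    return index

def text_match(index, token):
    """Return the sorted row ids whose text contains the token,
    analogous to a TEXT_MATCH('col:token') filter pruning to matches."""
    return sorted(index.get(token.lower(), set()))

docs = {
    1: "error while loading segment",
    2: "query served from cache",
    3: "segment compaction done",
}
idx = build_index(docs)
# text_match(idx, "segment") -> [1, 3]
```

Lucene adds tokenization, scoring, and on-disk index files on top of this, but the pruning principle is the same.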
http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/performance-tuning.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/performance-tuning.html b/src/main/webapp/performance-tuning.html
index 480911c..c63462f 100644
--- a/src/main/webapp/performance-tuning.html
+++ b/src/main/webapp/performance-tuning.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -381,7 +389,7 @@ You can configure CarbonData by tuning following properties in carbon.properties
 <tbody>
 <tr>
 <td>carbon.number.of.cores.while.loading</td>
-<td>Default: 2.This value should be &gt;= 2</td>
+<td>Default: 2. This value should be &gt;= 2</td>
 <td>Specifies the number of cores used for data processing during data loading in CarbonData.</td>
 </tr>
 <tr>
@@ -447,7 +455,7 @@ scenarios. After the completion of POC, some of the configurations impacting the
 <td>spark/carbonlib/carbon.properties</td>
 <td>Data loading and Querying</td>
 <td>For minor compaction, specifies the number of segments to be merged in stage 1 and number of compacted segments to be merged in stage 2.</td>
-<td>Each CarbonData load will create one segment, if every load is small in size it will generate many small file over a period of time impacting the query performance. Configuring this parameter will merge the small segment to one big segment which will sort the data and improve the performance. For Example in one telecommunication scenario, the performance improves about 2 times after minor compaction.</td>
+<td>Each CarbonData load will create one segment; if every load is small in size, it will generate many small files over a period of time, impacting the query performance. Configuring this parameter will merge the small segments into one big segment which will sort the data and improve the performance. For example, in one telecommunication scenario, the performance improves about 2 times after minor compaction.</td>
 </tr>
 <tr>
 <td>spark.sql.shuffle.partitions</td>
@@ -489,21 +497,21 @@ scenarios. After the completion of POC, some of the configurations impacting the
 <td>spark/carbonlib/carbon.properties</td>
 <td>Data loading</td>
 <td>Specify the name of compressor to compress the intermediate sort temporary files during sort procedure in data loading.</td>
-<td>The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD' and empty. By default, empty means that Carbondata will not compress the sort temp files. This parameter will be useful if you encounter disk bottleneck.</td>
+<td>The optional values are 'SNAPPY','GZIP','BZIP2','LZ4','ZSTD', and empty. By default, empty means that Carbondata will not compress the sort temp files. This parameter will be useful if you encounter disk bottleneck.</td>
 </tr>
 <tr>
 <td>carbon.load.skewedDataOptimization.enabled</td>
 <td>spark/carbonlib/carbon.properties</td>
 <td>Data loading</td>
 <td>Whether to enable size based block allocation strategy for data loading.</td>
-<td>When loading, carbondata will use file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data -- It's useful if the size of your input data files varies widely, say 1MB~1GB.</td>
+<td>When loading, carbondata will use file size based block allocation strategy for task distribution. It will make sure that all the executors process the same size of data -- It's useful if the size of your input data files varies widely, say 1MB to 1GB.</td>
 </tr>
 <tr>
 <td>carbon.load.min.size.enabled</td>
 <td>spark/carbonlib/carbon.properties</td>
 <td>Data loading</td>
 <td>Whether to enable node minimum input data size allocation strategy for data loading.</td>
-<td>When loading, carbondata will use node minumun input data size allocation strategy for task distribution. It will make sure the node load the minimum amount of data -- It's useful if the size of your input data files very small, say 1MB~256MB,Avoid generating a large number of small files.</td>
+<td>When loading, carbondata will use node minimum input data size allocation strategy for task distribution. It will make sure the nodes load the minimum amount of data -- it's useful if the size of your input data files is very small, say 1MB to 256MB, to avoid generating a large number of small files.</td>
 </tr>
 </tbody>
 </table>
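The size-based block allocation described in the table (carbon.load.skewedDataOptimization.enabled) balances bytes per executor rather than file counts. A small greedy sketch of that strategy in Python (hypothetical names; the real implementation lives in CarbonData's load planning, not here):

```python
def allocate_by_size(file_sizes, num_executors):
    """Greedy size-based allocation: give each file (largest first) to the
    executor with the fewest bytes so far, so skewed inputs still end up
    with roughly equal bytes per executor."""
    loads = [0] * num_executors
    groups = [[] for _ in range(num_executors)]
    for size in sorted(file_sizes, reverse=True):
        target = loads.index(min(loads))  # least-loaded executor
        loads[target] += size
        groups[target].append(size)
    return loads, groups

# Input files from 1 to 1000 units: count-based splitting would be skewed,
# size-based allocation keeps the two executors nearly equal.
loads, groups = allocate_by_size([1000, 1, 2, 3, 500, 499], 2)
# loads == [1003, 1002]
```

With a naive "three files each" split, one executor could get 1999 units and the other 6; the greedy-by-size variant keeps them within one unit of each other.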

http://git-wip-us.apache.org/repos/asf/carbondata-site/blob/6ad7599a/src/main/webapp/preaggregate-datamap-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/preaggregate-datamap-guide.html b/src/main/webapp/preaggregate-datamap-guide.html
index 6b0783e..c3d4a85 100644
--- a/src/main/webapp/preaggregate-datamap-guide.html
+++ b/src/main/webapp/preaggregate-datamap-guide.html
@@ -52,6 +52,9 @@
                            aria-expanded="false"> Download <span class="caret"></span></a>
                         <ul class="dropdown-menu">
                             <li>
+                                <a href="https://dist.apache.org/repos/dist/release/carbondata/1.5.0/"
+                                   target="_blank">Apache CarbonData 1.5.0</a></li>
+                            <li>
                                 <a href="https://dist.apache.org/repos/dist/release/carbondata/1.4.1/"
                                    target="_blank">Apache CarbonData 1.4.1</a></li>
 							<li>
@@ -179,7 +182,12 @@
                                 <a class="nav__item nav__sub__item" href="./timeseries-datamap-guide.html">Time Series</a>
                             </div>
 
-                            <a class="b-nav__api nav__item" href="./sdk-guide.html">API</a>
+                            <div class="nav__item nav__item__with__subs">
+                                <a class="b-nav__api nav__item nav__sub__anchor" href="./sdk-guide.html">API</a>
+                                <a class="nav__item nav__sub__item" href="./sdk-guide.html">Java SDK</a>
+                                <a class="nav__item nav__sub__item" href="./CSDK-guide.html">C++ SDK</a>
+                            </div>
+
                             <a class="b-nav__perf nav__item" href="./performance-tuning.html">Performance Tuning</a>
                             <a class="b-nav__s3 nav__item" href="./s3-guide.html">S3 Storage</a>
                             <a class="b-nav__faq nav__item" href="./faq.html">FAQ</a>
@@ -444,7 +452,7 @@ pre-aggregate tables. To further improve the query performance, compaction on pr
 can be triggered to merge the segments and files in the pre-aggregate tables.</p>
 <h2>
 <a id="data-management-with-pre-aggregate-tables" class="anchor" href="#data-management-with-pre-aggregate-tables" aria-hidden="true"><span aria-hidden="true" class="octicon octicon-link"></span></a>Data Management with pre-aggregate tables</h2>
-<p>In current implementation, data consistence need to be maintained for both main table and pre-aggregate
+<p>In current implementation, data consistency needs to be maintained for both main table and pre-aggregate
 tables. Once there is a pre-aggregate table created on the main table, the following command on the main
 table
 is not supported:</p>
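The consistency requirement above can be illustrated with a toy model (hypothetical Python sketch, not the CarbonData implementation): every load into the main table also updates the pre-aggregate, which is why commands that bypass this path are disallowed once a pre-aggregate table exists -- they would leave the two tables out of sync.

```python
class MainTable:
    """Toy main table with one pre-aggregate table kept in sync."""

    def __init__(self):
        self.rows = []    # raw (country, amount) rows
        self.preagg = {}  # country -> SUM(amount), the pre-aggregate table

    def load(self, batch):
        # The load path maintains both tables together, preserving consistency.
        self.rows.extend(batch)
        for country, amount in batch:
            self.preagg[country] = self.preagg.get(country, 0) + amount

    def sum_by_country(self, country):
        # Query rewrite: serve the aggregate from the pre-aggregate table
        # instead of scanning the raw rows.
        return self.preagg.get(country, 0)

t = MainTable()
t.load([("cn", 10), ("us", 5)])
t.load([("cn", 7)])
# t.sum_by_country("cn") -> 17
```

An UPDATE applied only to `t.rows` would make `preagg` stale, which is the failure mode the restriction guards against.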

