carbondata-commits mailing list archives

From chenliang...@apache.org
Subject [1/3] incubator-carbondata-site git commit: update pmc and committer link
Date Mon, 20 Feb 2017 23:51:56 GMT
Repository: incubator-carbondata-site
Updated Branches:
  refs/heads/asf-site b0c921fae -> 9ebca1554


http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/dml-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/dml-operation-on-carbondata.html b/src/main/webapp/docs/latest_htmls/dml-operation-on-carbondata.html
new file mode 100644
index 0000000..159bc31
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/dml-operation-on-carbondata.html
@@ -0,0 +1,361 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>DML Operations on CarbonData</h1><p>This tutorial guides you through the data manipulation language support provided by CarbonData.</p><h2>Overview</h2><p>The following DML operations are supported in CarbonData:</p>
+<ul>
+  <li><a href="#load-data">LOAD DATA</a></li>
+  <li><a href="#insert-data-into-a-carbondata-table">INSERT DATA INTO A CARBONDATA TABLE</a></li>
+  <li><a href="#show-segments">SHOW SEGMENTS</a></li>
+  <li><a href="#delete-segment-by-id">DELETE SEGMENT BY ID</a></li>
+  <li><a href="#delete-segment-by-date">DELETE SEGMENT BY DATE</a></li>
+  <li><a href="#update-carbondata-table">UPDATE CARBONDATA TABLE</a></li>
+  <li><a href="#delete-records-from-carbondata-table">DELETE RECORDS FROM CARBONDATA TABLE</a></li>
+</ul><h2>LOAD DATA</h2><p>This command loads the user data in raw format into the CarbonData-specific data format store. This allows CarbonData to provide good performance while querying the data. Please visit <a href="data-management.md">Data Management</a> for more details on LOAD.</p><h3>Syntax</h3><p><code>
+LOAD DATA [LOCAL] INPATH &#39;folder_path&#39; 
+INTO TABLE [db_name.]table_name 
+OPTIONS(property_name=property_value, ...)
+</code></p><p>OPTIONS is not mandatory for the data loading process. Inside OPTIONS the user can provide any of the options such as DELIMITER, QUOTECHAR, ESCAPECHAR, and MULTILINE as per requirement.</p><p>NOTE: The path must be a canonical path.</p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>folder_path </td>
+      <td>Path of raw csv data folder or file. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>OPTIONS </td>
+      <td>Extra options provided to Load </td>
+      <td>YES </td>
+    </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3><p>You can use the following options to load data:</p>
+<ul>
+  <li><p><strong>DELIMITER:</strong> Delimiters can be provided in the load command.</p><p><code>
+OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;)
+</code></p></li>
+  <li><p><strong>QUOTECHAR:</strong> Quote Characters can be provided in the load command.</p><p><code>
+OPTIONS(&#39;QUOTECHAR&#39;=&#39;&quot;&#39;)
+</code></p></li>
+  <li><p><strong>COMMENTCHAR:</strong> Comment Characters can be provided in the load command if the user wants to comment out lines.</p><p><code>
+OPTIONS(&#39;COMMENTCHAR&#39;=&#39;#&#39;)
+</code></p></li>
+  <li><p><strong>FILEHEADER:</strong> Headers can be provided in the LOAD DATA command if headers are missing in the source files.</p><p><code>
+OPTIONS(&#39;FILEHEADER&#39;=&#39;column1,column2&#39;) 
+</code></p></li>
+  <li><p><strong>MULTILINE:</strong> CSV with new line character in quotes.</p><p><code>
+OPTIONS(&#39;MULTILINE&#39;=&#39;true&#39;) 
+</code></p></li>
+  <li><p><strong>ESCAPECHAR:</strong> Escape char can be provided if the user wants strict validation of the escape character on CSV.</p><p><code>
+OPTIONS(&#39;ESCAPECHAR&#39;=&#39;\&#39;) 
+</code></p></li>
+  <li><p><strong>COMPLEX_DELIMITER_LEVEL_1:</strong> Split the complex type data column in a row (e.g., a$b$c --&gt; Array = {a,b,c}).</p><p><code>
+OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;) 
+</code></p></li>
+  <li><p><strong>COMPLEX_DELIMITER_LEVEL_2:</strong> Split the complex type nested data column in a row. Applies the level_1 delimiter &amp; applies level_2 based on the complex data type (e.g., a:b$c:d --&gt; Array&lt;Array&gt; = {{a,b},{c,d}}).</p><p><code>
+OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;) 
+</code></p></li>
+  <li><p><strong>ALL_DICTIONARY_PATH:</strong> Path of all dictionary files.</p><p><code>
+OPTIONS(&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;)
+</code></p></li>
+  <li><p><strong>COLUMNDICT:</strong> Dictionary file path for specified column.</p><p><code>
+OPTIONS(&#39;COLUMNDICT&#39;=&#39;column1:dictionaryFilePath1,
+column2:dictionaryFilePath2&#39;)
+</code></p><p>NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can not be used together.</p></li>
+  <li><p><strong>DATEFORMAT:</strong> Date format for specified column.</p><p><code>
+OPTIONS(&#39;DATEFORMAT&#39;=&#39;column1:dateFormat1, column2:dateFormat2&#39;)
+</code></p><p>NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are same as in JAVA. Refer to <a href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html">SimpleDateFormat</a>.</p></li>
+  <li><p><strong>USE_KETTLE:</strong> This option is used to specify whether to use kettle for loading data or not. By default kettle is not used for data loading.</p><p><code>
+OPTIONS(&#39;USE_KETTLE&#39;=&#39;FALSE&#39;)
+</code></p></li>
+</ul><p>NOTE: It is recommended to set the value of this option to FALSE.</p>
+<ul>
+  <li><strong>SINGLE_PASS:</strong> Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance  in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.</li>
+</ul><p>This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE.</p>
+<pre><code>OPTIONS(&#39;SINGLE_PASS&#39;=&#39;TRUE&#39;)
+</code></pre><p>Note:</p>
+<ul>
+  <li><p>If this option is set to TRUE then data loading will take less time.</p></li>
+  <li><p>If this option is set to some invalid value other than TRUE or FALSE then it uses the default value.</p></li>
+</ul><h3>Example:</h3><p><code>
+LOAD DATA local inpath &#39;/opt/rawdata/data.csv&#39; INTO table carbontable
+options(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;QUOTECHAR&#39;=&#39;&quot;&#39;,&#39;COMMENTCHAR&#39;=&#39;#&#39;,
+&#39;FILEHEADER&#39;=&#39;empno,empname,designation,doj,workgroupcategory,
+ workgroupcategoryname,deptno,deptname,projectcode,
+ projectjoindate,projectenddate,attendance,utilization,salary&#39;,
+&#39;MULTILINE&#39;=&#39;true&#39;,&#39;ESCAPECHAR&#39;=&#39;\&#39;,&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;, 
+&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;,
+&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;,
+&#39;USE_KETTLE&#39;=&#39;FALSE&#39;,
+&#39;SINGLE_PASS&#39;=&#39;TRUE&#39;
+)
+</code></p><h2>INSERT DATA INTO A CARBONDATA TABLE</h2><p>This command inserts data into a CarbonData table. It is a combination of two queries, Insert and Select. It inserts records from a source table into a target CarbonData table. The source table can be a Hive table, a Parquet table or a CarbonData table itself. It also provides the functionality to aggregate the records of a table by performing a Select query on the source table and loading the resultant records into a CarbonData table.</p><p><strong>NOTE</strong>: The client node where the INSERT command is executed must be part of the cluster.</p><h3>Syntax</h3><p><code>
+INSERT INTO TABLE &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName 
+[ WHERE { &lt;filter_condition&gt; } ];
+</code></p><p>You can also omit the <code>table</code> keyword and write your query as:</p><p><code>
+INSERT INTO &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName 
+[ WHERE { &lt;filter_condition&gt; } ];
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CARBON TABLE </td>
+      <td>The name of the Carbon table in which you want to perform the insert operation. </td>
+    </tr>
+    <tr>
+      <td>sourceTableName </td>
+      <td>The table from which the records are read and inserted into destination CarbonData table. </td>
+    </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a successful insert operation:</p>
+<ul>
+  <li>The source table and the CarbonData table must have the same table schema.</li>
+  <li>The table must be created.</li>
+  <li>Overwrite is not supported for CarbonData table.</li>
+  <li>The data types of the source and destination table columns should be the same, else the data from the source table will be treated as bad records and the INSERT command fails.</li>
+  <li>The INSERT INTO command does not support partial success; if bad records are found, it fails.</li>
+  <li>Data cannot be loaded or updated in the source table while an insert from the source table to the target table is in progress.</li>
+</ul><p>To enable data load or update during insert operation, configure the following property to true.</p><p><code>
+carbon.insert.persist.enable=true
+</code></p><p>By default, the above configuration is set to false.</p><p><strong>NOTE</strong>: Enabling this property will reduce the performance.</p><h3>Examples</h3><p><code>
+INSERT INTO table1 SELECT item1, sum(item2 + 1000) as result FROM
+table2 group by item1;
+</code></p><p><code>
+INSERT INTO table1 SELECT item1, item2, item3 FROM table2 
+where item2=&#39;xyz&#39;;
+</code></p><p><code>
+INSERT INTO table1 SELECT * FROM table2 
+where exists (select * from table3 
+where table2.item1 = table3.item1);
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log.</strong></p><h2>SHOW SEGMENTS</h2><p>This command is used to get the segments of a CarbonData table.</p><p><code>
+SHOW SEGMENTS FOR TABLE [db_name.]table_name 
+LIMIT number_of_segments;
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>number_of_segments </td>
+      <td>Limit the output to this number. </td>
+      <td>YES </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4;
+</code></p><h2>DELETE SEGMENT BY ID</h2><p>This command is used to delete a segment by using its segment ID. Each segment has a unique segment ID associated with it. Using this segment ID, you can remove the segment.</p><p>The following command lists the segment IDs.</p><p><code>
+SHOW SEGMENTS FOR Table dbname.tablename LIMIT number_of_segments
+</code></p><p>After you retrieve the segment ID of the segment that you want to delete, execute the following command to delete the selected segment.</p><p><code>
+DELETE SEGMENT segment_sequence_id1, segment_sequence_id2, .... 
+FROM TABLE tableName
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>segment_id </td>
+      <td>Segment Id of the load. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+DELETE SEGMENT 0 FROM TABLE CarbonDatabase.CarbonTable;
+DELETE SEGMENT 0.1,5,8 FROM TABLE CarbonDatabase.CarbonTable;
+</code>  NOTE: Here 0.1 is the compacted segment sequence id.</p><h2>DELETE SEGMENT BY DATE</h2><p>This command allows deleting CarbonData segment(s) from the store based on the date provided by the user in the DML command. The segments created before the specified date will be removed from the store.</p><p><code>
+DELETE FROM TABLE [schema_name.]table_name 
+WHERE [DATE_FIELD] BEFORE [DATE_VALUE]
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>DATE_VALUE </td>
+      <td>Valid segment load start time value. All the segments before this specified date will be deleted. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+ DELETE SEGMENTS FROM TABLE CarbonDatabase.CarbonTable 
+ WHERE STARTTIME BEFORE &#39;2017-06-01 12:05:06&#39;;  
+</code></p><h2>Update CarbonData Table</h2><p>This command allows updating the CarbonData table based on the column expression and optional filter conditions.</p><h3>Syntax</h3><p><code>
+ UPDATE &lt;table_name&gt;
+ SET (column_name1, column_name2, ... column_name n) =
+ (column1_expression, column2_expression . .. column n_expression )
+ [ WHERE { &lt;filter_condition&gt; } ];
+</code></p><p>Alternatively, the following command can also be used for updating the CarbonData table:</p><p><code>
+UPDATE &lt;table_name&gt;
+SET (column_name1, column_name2,) =
+(select sourceColumn1, sourceColumn2 from sourceTable
+[ WHERE { &lt;filter_condition&gt; } ] )
+[ WHERE { &lt;filter_condition&gt; } ];
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the Carbon table in which you want to perform the update operation. </td>
+    </tr>
+    <tr>
+      <td>column_name </td>
+      <td>The destination columns to be updated. </td>
+    </tr>
+    <tr>
+      <td>sourceColumn </td>
+      <td>The source table column values to be updated in destination table. </td>
+    </tr>
+    <tr>
+      <td>sourceTable </td>
+      <td>The table from which the records are updated into destination Carbon table. </td>
+    </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a successful update operation:</p>
+<ul>
+  <li>The update command fails if multiple input rows in the source table are matched with a single row in the destination table.</li>
+  <li>If the source table generates empty records, the update operation will complete successfully without updating the table.</li>
+  <li>If a source table row does not correspond to any of the existing rows in a destination table, the update operation will complete successfully without updating the table.</li>
+  <li>If the source table of the sub-query and the target table are the same, then the update operation fails.</li>
+  <li>If the sub-query used in the UPDATE statement contains an aggregate function or a GROUP BY clause, then the UPDATE operation fails.</li>
+</ul><h3>Examples</h3><p>Update is not supported for queries that contain an aggregate function or a GROUP BY clause.</p><p><code>
+ UPDATE t_carbn01 a
+ SET (a.item_type_code, a.profit) = ( SELECT b.item_type_cd,
+ sum(b.profit) from t_carbn01b b
+ WHERE item_type_cd =2 group by item_type_code);
+</code></p><p>Here the update operation fails as the query contains the aggregate function sum(b.profit) and a GROUP BY clause in the sub-query.</p><p><code>
+UPDATE carbonTable1 d
+SET(d.column3,d.column5 ) = (SELECT s.c33, s.c55
+FROM sourceTable1 s WHERE d.column1 = s.c11)
+WHERE d.column1 = &#39;china&#39; AND EXISTS( SELECT * from table3 o where o.c2 &gt; 1);
+</code></p><p><code>
+UPDATE carbonTable1 d SET (c3) = (SELECT s.c33 from sourceTable1 s
+WHERE d.column1 = s.c11)
+WHERE exists( select * from iud.other o where o.c2 &gt; 1);
+</code></p><p><code>
+UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5, &quot;y&quot; ));
+</code></p><p><code>
+UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
+WHERE d.column1 = &#39;india&#39;;
+</code></p><p><code>
+UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
+WHERE d.column1 = &#39;india&#39;
+and EXISTS( SELECT * FROM table3 o WHERE o.column2 &gt; 1);
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the client.</strong></p><h2>Delete Records from CarbonData Table</h2><p>This command allows us to delete records from a CarbonData table.</p><h3>Syntax</h3><p><code>
+DELETE FROM table_name [WHERE expression];
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the Carbon table in which you want to perform the delete. </td>
+    </tr>
+  </tbody>
+</table><h3>Examples</h3><p><code>
+DELETE FROM columncarbonTable1 d WHERE d.column1  = &#39;china&#39;;
+</code></p><p><code>
+DELETE FROM dest WHERE column1 IN (&#39;china&#39;, &#39;USA&#39;);
+</code></p><p><code>
+DELETE FROM columncarbonTable1
+WHERE column1 IN (SELECT column11 FROM sourceTable2);
+</code></p><p><code>
+DELETE FROM columncarbonTable1
+WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE
+column1 = &#39;USA&#39;);
+</code></p><p><code>
+DELETE FROM columncarbonTable1 WHERE column2 &gt;= 4
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the client.</strong></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/faq.html b/src/main/webapp/docs/latest_htmls/faq.html
new file mode 100644
index 0000000..fc17e28
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/faq.html
@@ -0,0 +1,45 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>FAQs</h1>
+<ul>
+  <li><a href="#can-we-preserve-segments-from-compaction">Can we preserve Segments from Compaction?</a></li>
+  <li><a href="#can-we-disable-horizontal-compaction">Can we disable horizontal compaction?</a></li>
+  <li><a href="#what-is-horizontal-compaction">What is horizontal compaction?</a></li>
+  <li><a href="#how-to-enable-compaction-while-data-loading">How to enable Compaction while data loading?</a></li>
+  <li><a href="#where-are-bad-records-stored-in-carbondata">Where are Bad Records Stored in CarbonData?</a></li>
+  <li><a href="#what-are-bad-records">What are Bad Records?</a></li>
+  <li><a href="#can-we-use-carbondata-on-standalone-spark-cluster">Can we use CarbonData on Standalone Spark Cluster?</a></li>
+  <li><a href="#what-versions-of-apache-spark-are-compatible-with-carbondata">What versions of Apache Spark are Compatible with CarbonData?</a></li>
+  <li><a href="#can-we-load-data-from-excel">Can we Load Data from excel?</a></li>
+  <li><a href="#how-to-enable-single-pass-data-loading">How to enable Single Pass Data Loading?</a></li>
+  <li><a href="#what-is-single-pass-data-loading">What is Single Pass Data Loading?</a></li>
+  <li><a href="#how-to-specify-the-data-loading-format-for-carbondata">How to specify the data loading format for CarbonData ?</a></li>
+  <li><a href="#how-to-resolve-store-location-can-not-be-found">How to resolve store location can?t be found?</a></li>
+  <li><a href="">What is carbon.lock.type?</a></li>
+  <li><a href="#how-to-enable-auto-compaction">How to enable Auto Compaction?</a></li>
+  <li><a href="#how-to-resolve-abstract-method-error">How to resolve Abstract Method Error?</a></li>
+  <li><a href="#getting-exception-on-creating-a-view">Getting Exception on Creating a View</a></li>
+  <li><a href="#is-carbondata-supported-for-windows">Is CarbonData supported for Windows?</a></li>
+</ul><h2>Can we preserve Segments from Compaction?</h2><p>If you want to preserve a number of segments from being compacted, then you can set the property <strong>carbon.numberof.preserve.segments</strong> equal to the <strong>number of segments to be preserved</strong>.</p><p>Note: <em>No segments are preserved by default.</em></p><h2>Can we disable horizontal compaction?</h2><p>Yes, to disable horizontal compaction, set <strong>carbon.horizontal.compaction.enable</strong> to <code>FALSE</code> in the carbon.properties file.</p><h2>What is horizontal compaction?</h2><p>Compaction performed after Update and Delete operations is referred to as Horizontal Compaction. After every DELETE and UPDATE operation, horizontal compaction may occur in case the delta (DELETE/UPDATE) files become more than the specified threshold.</p><p>By default the parameter <strong>carbon.horizontal.compaction.enable</strong>, which enables horizontal compaction, is set to <code>TRUE</code>.</p><h2>How to enable Compaction while data loading?</h2><p>To enable compaction while data loading, set <strong>carbon.enable.auto.load.merge</strong> to <code>TRUE</code> in the carbon.properties file.</p><h2>Where are Bad Records Stored in CarbonData?</h2><p>The bad records are stored at the location set in carbon.badRecords.location in the carbon.properties file. By default <strong>carbon.badRecords.location</strong> specifies the following location: <code>/opt/Carbon/Spark/badrecords</code>.</p><h2>What are Bad Records?</h2><p>Records that fail to get loaded into CarbonData due to data type incompatibility are classified as Bad Records.</p><h2>Can we use CarbonData on Standalone Spark Cluster?</h2><p>Yes, CarbonData can be used on a standalone Spark cluster. But using a standalone cluster has the following limitations:</p><ul><li>A single node cluster cannot be scaled up.</li><li>The maximum memory and the CPU computation power have a fixed limit.</li><li>The number of processors is limited in a single node cluster.</li></ul><p>To harness the actual speed of execution of CarbonData on petabytes of data, it is suggested to use a multinode cluster.</p><h2>What versions of Apache Spark are Compatible with CarbonData?</h2><p>Currently <strong>Spark 1.6.2</strong> and <strong>Spark 2.1</strong> are compatible with CarbonData.</p><h2>Can we Load Data from excel?</h2><p>Yes, the data can be loaded from Excel provided the data is in CSV format.</p><h2>How to enable Single Pass Data Loading?</h2><p>You need to set <strong>SINGLE_PASS</strong> to <code>TRUE</code> and append it to the <code>OPTIONS</code> section of the query, as demonstrated in the load query below: <code>
+LOAD DATA local inpath &#39;/opt/rawdata/data.csv&#39; INTO table carbontable
+OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;QUOTECHAR&#39;=&#39;&quot;&#39;,&#39;FILEHEADER&#39;=&#39;empno,empname,designation&#39;,&#39;USE_KETTLE&#39;=&#39;FALSE&#39;)
+</code> Refer to <a href="https://github.com/PallaviSingh1992/incubator-carbondata/blob/6b4dd5f3dea8c93839a94c2d2c80ab7a799cf209/docs/dml-operation-on-carbondata.md">DML-operations-in-CarbonData</a> for more details and examples.</p><h2>What is Single Pass Data Loading?</h2><p>Single Pass Loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after the initial load involves fewer incremental updates on the dictionary. This option specifies whether to use single pass for loading data or not. By default this option is set to <code>FALSE</code>.</p><h2>How to specify the data loading format for CarbonData?</h2><p>Edit the carbon.properties file. Modify the value of the parameter <strong>carbon.data.file.version</strong>. Setting the parameter <strong>carbon.data.file.version</strong> to <code>1</code> will support data loading in the <code>old format (0.x version)</code>, and setting <strong>carbon.data.file.version</strong> to <code>2</code> will support data loading in the <code>new format (1.x onwards)</code> only. By default the data loading is supported using the new format.</p><h2>How to resolve store location can not be found?</h2><p>Try creating <code>carbonsession</code> with <code>storepath</code> specified in the following manner: <code>
+val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&lt;store_path&gt;)
+</code> Example: <code>
+val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(&quot;hdfs://localhost:9000/carbon/store &quot;)
+</code></p><h2>What is carbon.lock.type?</h2><p>This property configuration specifies the type of lock to be acquired during concurrent operations on a table. This property can be set with the following values:</p><ul><li><strong>LOCALLOCK</strong>: This lock is created on the local file system as a file. This lock is useful when only one Spark driver (thrift server) runs on a machine and no other CarbonData Spark application is launched concurrently.</li><li><strong>HDFSLOCK</strong>: This lock is created on the HDFS file system as a file. This lock is useful when multiple CarbonData Spark applications are launched, no ZooKeeper is running on the cluster, and HDFS supports file based locking.</li></ul><h2>How to enable Auto Compaction?</h2><p>To enable compaction, set <strong>carbon.enable.auto.load.merge</strong> to <code>TRUE</code> in the carbon.properties file.</p><h2>How to resolve Abstract Method Error?</h2><p>You need to specify the <code>spark version</code> while using Maven to build the project.</p><h2>Getting Exception on Creating a View</h2><p>Views are not supported in CarbonData.</p><h2>Is CarbonData supported for Windows?</h2><p>We may provide support for Windows in the future. You are welcome to contribute if you want to add the support :)</p>
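+<p>For reference, the following is an illustrative carbon.properties sketch consolidating the properties discussed in this FAQ. The values shown are examples only (some are the stated defaults, others are assumptions for demonstration); adjust them to your deployment before use.</p><p><code>
+carbon.numberof.preserve.segments=2
+carbon.horizontal.compaction.enable=TRUE
+carbon.enable.auto.load.merge=TRUE
+carbon.badRecords.location=/opt/Carbon/Spark/badrecords
+carbon.data.file.version=2
+carbon.lock.type=HDFSLOCK
+</code></p>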
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/file-structure-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/file-structure-of-carbondata.html b/src/main/webapp/docs/latest_htmls/file-structure-of-carbondata.html
new file mode 100644
index 0000000..300a4c4
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/file-structure-of-carbondata.html
@@ -0,0 +1,6 @@
+<h1>CarbonData File Structure</h1><p>CarbonData files contain groups of data called blocklets, along with all required information such as schema, offsets and indices, in a file footer, co-located in HDFS.</p><p>The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries.</p><p>Each blocklet in the file is further divided into chunks of data called data chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in a file contain the same number and type of data chunks.</p><p><img src="../../../src/site/markdown/images/carbon_data_file_structure_new.png?raw=true" alt="CarbonData File Structure" /></p><p>Each data chunk contains multiple groups of data called pages. There are three types of pages.</p>
+<ul>
+  <li>Data Page: Contains the encoded data of a column/group of columns.</li>
+  <li>Row ID Page (optional): Contains the row ID mappings used when the data page is stored as an inverted index.</li>
+  <li>RLE Page (optional): Contains additional metadata used when the data page is RLE coded.</li>
+</ul><p><img src="../../../src/site/markdown/images/carbon_data_format_new.png?raw=true" alt="CarbonData File Format" /></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/installation-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/installation-guide.html b/src/main/webapp/docs/latest_htmls/installation-guide.html
new file mode 100644
index 0000000..104ed07
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/installation-guide.html
@@ -0,0 +1,245 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Installation Guide</h1><p>This tutorial guides you through the installation and configuration of CarbonData in the following two modes :</p>
+<ul>
+  <li><a href="#installing-and-configuring-carbondata-on-standalone-spark-cluster">Installing and Configuring CarbonData on Standalone Spark Cluster</a></li>
+  <li><a href="#installing-and-configuring-carbondata-on-spark-on-yarn-cluster">Installing and Configuring CarbonData on ?Spark on YARN? Cluster</a></li>
+</ul><p>followed by :</p>
+<ul>
+  <li><a href="#query-execution-using-carbondata-thrift-server">Query Execution using CarbonData Thrift Server</a></li>
+</ul><h2>Installing and Configuring CarbonData on Standalone Spark Cluster</h2><h3>Prerequisites</h3>
+<ul>
+  <li><p>Hadoop HDFS and Yarn should be installed and running.</p></li>
+  <li><p>Spark should be installed and running on all the cluster nodes.</p></li>
+  <li><p>CarbonData user should have permission to access HDFS.</p></li>
+</ul><h3>Procedure</h3>
+<ul>
+  <li><p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project and get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar" and put in the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder.</p><p>NOTE: Create the carbonlib folder if it does not exists inside <code>&quot;&lt;SPARK_HOME&gt;&quot;</code> path.</p></li>
+  <li><p>Add the carbonlib folder path in the Spark classpath. (Edit <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-env.sh&quot;</code> file and modify the value of SPARK_CLASSPATH by appending <code>&quot;&lt;SPARK_HOME&gt;/carbonlib/*&quot;</code> to the existing value)</p></li>
+  <li><p>Copy the carbon.properties.template file from the "./conf/" folder of the CarbonData repository to <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>.</p></li>
+  <li><p>Copy the "carbonplugins" folder to <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder from "./processing/" folder of CarbonData repository.</p><p>NOTE: carbonplugins will contain .kettle folder.</p></li>
+  <li><p>In Spark node, configure the properties mentioned in the following table in <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code> file.</p></li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Value </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>$SPARK_HOME /carbonlib/carbonplugins </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraJavaOptions </td>
+      <td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties </td>
+      <td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraJavaOptions </td>
+      <td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties </td>
+      <td>A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space. </td>
+    </tr>
+  </tbody>
+</table>
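+<p>For illustration, a minimal sketch of how these entries could look in <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code> (the /usr/local/spark prefix below is only an assumed SPARK_HOME; substitute your own installation path):</p><p><code>
+carbon.kettle.home /usr/local/spark/carbonlib/carbonplugins
+spark.driver.extraJavaOptions -Dcarbon.properties.filepath=/usr/local/spark/conf/carbon.properties
+spark.executor.extraJavaOptions -Dcarbon.properties.filepath=/usr/local/spark/conf/carbon.properties
+</code></p>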
+<ul>
+  <li>Add the following properties in <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>:</li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Required </th>
+      <th>Description </th>
+      <th>Example </th>
+      <th>Remark </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.storelocation </td>
+      <td>NO </td>
+      <td>Location where CarbonData will create the store and write the data in its own format. </td>
+      <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore </td>
+      <td>Propose to set HDFS directory </td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>YES </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td>$SPARK_HOME/carbonlib/carbonplugins </td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table>
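+<p>For example, the corresponding lines in carbon.properties could look like the following sketch, reusing the example values from the table above (replace HOSTNAME and PORT with the NameNode host and port of your HDFS cluster):</p><p><code>
+carbon.storelocation=hdfs://HOSTNAME:PORT/Opt/CarbonStore
+carbon.kettle.home=$SPARK_HOME/carbonlib/carbonplugins
+</code></p>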
+<ul>
+  <li>Verify the installation. For example:</li>
+</ul><p><code>
+   ./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 2
+   --executor-memory 2G
+</code></p><p>NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.</p><p>To get started with CarbonData : <a href="quick-start-guide.md">Quick Start</a>, <a href="ddl-operation-on-carbondata.md">DDL Operations on CarbonData</a></p><h2>Installing and Configuring CarbonData on "Spark on YARN" Cluster</h2><p>This section provides the procedure to install CarbonData on "Spark on YARN" cluster.</p><h3>Prerequisites</h3>
+<ul>
+  <li>Hadoop HDFS and Yarn should be installed and running.</li>
+  <li>Spark should be installed and running in all the clients.</li>
+  <li>CarbonData user should have permission to access HDFS.</li>
+</ul><h3>Procedure</h3><p>The following steps are only for Driver Nodes. (Driver nodes are the one which starts the spark context.)</p>
+<ul>
+  <li><p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project and get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar" and put in the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder.</p><p>NOTE: Create the carbonlib folder if it does not exists inside <code>&quot;&lt;SPARK_HOME&gt;&quot;</code> path.</p></li>
+  <li><p>Copy "carbonplugins" folder to <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder from "./processing/" folder of CarbonData repository.  carbonplugins will contain .kettle folder.</p></li>
+  <li><p>Copy the "carbon.properties.template" to <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code> folder from conf folder of CarbonData repository.</p></li>
+  <li>Modify the parameters in "spark-defaults.conf" located in the <code>&quot;&lt;SPARK_HOME&gt;/conf&quot;</code> folder.</li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Description </th>
+      <th>Value </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>spark.master </td>
+      <td>Set this value to run the Spark in yarn cluster mode. </td>
+      <td>Set "yarn-client" to run the Spark in yarn cluster mode. </td>
+    </tr>
+    <tr>
+      <td>spark.yarn.dist.files </td>
+      <td>Comma-separated list of files to be placed in the working directory of each executor. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>spark.yarn.dist.archives </td>
+      <td>Comma-separated list of archives to be extracted into the working directory of each executor. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to executors. For instance NOTE: You can enter multiple values separated by space. </td>
+      <td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraClassPath </td>
+      <td>Extra classpath entries to prepend to the classpath of executors. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the values in below parameter spark.driver.extraClassPath </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraClassPath </td>
+      <td>Extra classpath entries to prepend to the classpath of the driver. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the value in below parameter spark.driver.extraClassPath. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. </td>
+      <td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbonplugins</code> </td>
+    </tr>
+  </tbody>
+</table>
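+<p>Putting the table above together, a sketch of the resulting "spark-defaults.conf" entries might look as follows (the /usr/local/spark prefix stands in for &lt;YOUR_SPARK_HOME_PATH&gt; and the jar name keeps the placeholder used elsewhere in this guide):</p><p><code>
+spark.master yarn-client
+spark.yarn.dist.files /usr/local/spark/conf/carbon.properties
+spark.yarn.dist.archives /usr/local/spark/carbonlib/carbondata_xxx.jar
+spark.executor.extraJavaOptions -Dcarbon.properties.filepath=/usr/local/spark/conf/carbon.properties
+spark.executor.extraClassPath /usr/local/spark/carbonlib/carbondata_xxx.jar
+spark.driver.extraClassPath /usr/local/spark/carbonlib/carbondata_xxx.jar
+spark.driver.extraJavaOptions -Dcarbon.properties.filepath=/usr/local/spark/conf/carbon.properties
+carbon.kettle.home /usr/local/spark/carbonlib/carbonplugins
+</code></p>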
+<ul>
+  <li>Add the following properties in <code>&lt;SPARK_HOME&gt;/conf/carbon.properties</code>:</li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Required </th>
+      <th>Description </th>
+      <th>Example </th>
+      <th>Default Value </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.storelocation </td>
+      <td>NO </td>
+      <td>Location where CarbonData will create the store and write the data in its own format. </td>
+      <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore </td>
+      <td>Propose to set HDFS directory</td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>YES </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td>$SPARK_HOME/carbonlib/carbonplugins </td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table>
+<ul>
+  <li>Verify the installation.</li>
+</ul><p><code>
+     ./bin/spark-shell --master yarn-client --driver-memory 1g 
+     --executor-cores 2 --executor-memory 2G
+</code>  NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.</p><p>Getting started with CarbonData : <a href="quick-start-guide.md">Quick Start</a>, <a href="ddl-operation-on-carbondata.md">DDL Operations on CarbonData</a></p><h2>Query Execution Using CarbonData Thrift Server</h2><h3>Starting CarbonData Thrift Server</h3><p>a. cd <code>&lt;SPARK_HOME&gt;</code></p><p>b. Run the following command to start the CarbonData thrift server.</p><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
+$SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR &lt;carbon_store_path&gt;
+</code></p>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Example </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CARBON_ASSEMBLY_JAR </td>
+      <td>CarbonData assembly jar name present in the <code>&quot;&lt;SPARK_HOME&gt;&quot;/carbonlib/</code> folder. </td>
+      <td>carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar </td>
+    </tr>
+    <tr>
+      <td>carbon_store_path </td>
+      <td>This is a parameter to the CarbonThriftServer class. This is an HDFS path where CarbonData files will be kept. It is strongly recommended to set it to the same value as the carbon.storelocation parameter of carbon.properties. </td>
+      <td><code>hdfs://&lt;host_name&gt;:54310/user/hive/warehouse/carbon.store</code> </td>
+    </tr>
+  </tbody>
+</table><h3>Examples</h3>
+<ul>
+  <li>Start with default memory and executors</li>
+</ul><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
+$SPARK_HOME/carbonlib
+/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
+hdfs://hacluster/user/hive/warehouse/carbon.store
+</code></p>
+<ul>
+  <li>Start with Fixed executors and resources</li>
+</ul><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
+--num-executors 3 --driver-memory 20g --executor-memory 250g 
+--executor-cores 32 
+/srv/OSCON/BigData/HACluster/install/spark/sparkJdbc/lib
+/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
+hdfs://hacluster/user/hive/warehouse/carbon.store
+</code></p><h3>Connecting to CarbonData Thrift Server Using Beeline</h3><p><code>
+cd &lt;SPARK_HOME&gt;
+./bin/beeline jdbc:hive2://&lt;thriftserver_host&gt;:port
+</code></p><p>Example:</p>
+<pre><code>./bin/beeline jdbc:hive2://10.10.10.10:10000
+</code></pre>
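+<p>Once connected, standard SQL can be issued from the Beeline prompt. For example (an illustrative query, assuming a CarbonData table named test_table already exists):</p>
+<pre><code>SELECT COUNT(*) FROM test_table;
+</code></pre>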
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/overview-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/overview-of-carbondata.html b/src/main/webapp/docs/latest_htmls/overview-of-carbondata.html
new file mode 100644
index 0000000..f330570
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/overview-of-carbondata.html
@@ -0,0 +1,44 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Overview</h1><p>This tutorial provides a detailed overview about :</p>
+<ul>
+  <li><a href="#introduction">Introduction</a></li>
+  <li><a href="#features">Features</a></li>
+</ul><h2>Introduction</h2><p>CarbonData is a fully indexed columnar and Hadoop native data-store for processing heavy analytical workloads and detailed queries on big data. CarbonData allows faster interactive queries using advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps speed up queries by an order of magnitude over petabytes of data.</p><p>In customer benchmarks, CarbonData has proven to manage petabytes of data running on extraordinarily low-cost hardware and answers queries around 10 times faster than the current open source solutions (column-oriented SQL on Hadoop data-stores).</p><p>Some of the salient features of CarbonData are:</p>
+<ul>
+  <li>Low-Latency for various types of data access patterns like Sequential, Random and OLAP.</li>
+  <li>Fast query on fast data.</li>
+  <li>Space efficiency.</li>
+  <li>General format available on Hadoop-ecosystem.</li>
+</ul><h2>Features</h2><p>The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as being splittable, compression schemes, complex data types, etc. In addition, CarbonData has the following unique features:</p>
+<ul>
+  <li><p>Unique Data Organization: Though CarbonData stores data in columnar format, it differs from traditional columnar formats as the columns in each row group (Data Block) are sorted independently of the other columns. Though this arrangement requires CarbonData to store the row-number mapping against each column value, it makes it possible to use binary search for faster filtering, and since the values are sorted, same/similar values come together, which yields better compression and offsets the storage overhead required by the row number mapping.</p></li>
+  <li><p>Advanced Push Down Optimizations: CarbonData pushes as much of query processing as possible close to the data to minimize the amount of data being read, processed, converted and transmitted/shuffled. Using projections and filters, it reads only the required columns from the store and also reads only the rows that match the filter conditions provided in the query.</p></li>
+  <li><p>Multi Level Indexing: CarbonData uses multiple indices at various levels to enable faster search and speed up query processing.</p></li>
+  <li><p>Dictionary Encoding: Most databases and big data SQL data stores employ columnar encoding to achieve data compression by storing small integer numbers (surrogate values) instead of full string values. However, almost all existing databases and data stores divide the data into row groups containing anywhere from a few thousand to a million rows and employ dictionary encoding only within each row group. Hence, the same column value can have different surrogate values in different row groups. So, while reading the data, conversion from surrogate value to actual value needs to be done immediately after the data is read from the disk. But CarbonData employs a global surrogate key, which means that a common dictionary is maintained for the full store on one machine/node. So CarbonData can perform all the query processing work such as grouping/aggregation, sorting, etc. on lightweight surrogate values. The conversion from surrogate to actual values needs to be done only on the final result. This procedure improves performance in two aspects. Conversion from surrogate values to actual values is done only for the final result rows, which are far fewer than the actual rows read from the store. All query processing and computation such as grouping/aggregation, sorting, and so on is done on lightweight surrogate values, which require less memory and CPU time compared to actual values.</p></li>
+  <li><p>Deep Spark Integration: It has built-in spark integration for Spark 1.6.2, 2.1 and interfaces for Spark SQL, DataFrame API and query optimization. It supports bulk data ingestion and allows saving of spark dataframes as CarbonData files.</p></li>
+  <li><p>Update Delete Support: It supports batch updates, like daily update scenarios for OLAP, using a Base+Delta file based design.</p></li>
+  <li><p>Bucketing: It is a technique that is used for uniform distribution of data across files in CarbonData. It enhances the performance of join queries. While loading the data, records are placed into buckets based on a hashing algorithm. During the execution of join queries the records can be fetched from buckets without the need of shuffling. This feature is used to distribute/organize the table/partition data into multiple files, placing similar records in the same file (see the sketch after this list).</p></li>
+  <li><p>Global Multi Dimensional Keys(MDK) based B+Tree Index for all non- measure columns: Aids in quickly locating the row groups(Data Blocks) that contain the data matching search/filter criteria.</p></li>
+  <li><p>Min-Max Index for all columns: Aids in quickly locating the row groups(Data Blocks) that contain the data matching search/filter criteria.</p></li>
+  <li><p>Data Block level Inverted Index for all columns: Aids in quickly locating the rows that contain the data matching search/filter criteria within a row group(Data Blocks).</p></li>
+  <li><p>Store data along with index: Significantly accelerates query performance and reduces the I/O scans and CPU resources, when there are filters in the query. CarbonData index consists of multiple levels of indices. A processing framework can leverage this index to reduce the task it needs to schedule and process. It can also do skip scan in more finer grain units (called blocklet) in task side scanning instead of scanning the whole file.</p></li>
+  <li><p>Operable encoded data: It supports efficient compression and global encoding schemes and can query on compressed/encoded data. The data can be converted just before returning the results to the users, which is "late materialized".</p></li>
+  <li><p>Column group: Allows multiple columns to form a column group that would be stored as row format. This reduces the row reconstruction cost at query time.</p></li>
+  <li><p>Support for various use cases with one single Data format: Examples are interactive OLAP-style query, Sequential Access (big scan) and Random Access (narrow scan).</p></li>
+</ul>
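+<p>To make the Bucketing feature above concrete, the following is a minimal CREATE TABLE sketch. The table and column names are illustrative, and the TBLPROPERTIES keys (BUCKETNUMBER, BUCKETCOLUMNS) are assumptions to be verified against the DDL documentation of your CarbonData version:</p><p><code>
+CREATE TABLE IF NOT EXISTS sales_bucketed (order_id INT, product_name STRING, amount DOUBLE)
+STORED BY &#39;carbondata&#39;
+TBLPROPERTIES (&#39;BUCKETNUMBER&#39;=&#39;4&#39;, &#39;BUCKETCOLUMNS&#39;=&#39;product_name&#39;)
+</code></p>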
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/quick-start-guide.html b/src/main/webapp/docs/latest_htmls/quick-start-guide.html
new file mode 100644
index 0000000..a96c716
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/quick-start-guide.html
@@ -0,0 +1,77 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Quick Start</h1><p>This tutorial provides a quick introduction to using CarbonData.</p><h2>Prerequisites</h2>
+<ul>
+  <li><a href="https://github.com/apache/incubator-carbondata/blob/master/build">Installation and building CarbonData</a>.</li>
+  <li>Create a sample.csv file using the following commands. The CSV file is required for loading data into CarbonData.</li>
+</ul><p><code>
+cd carbondata
+cat &gt; sample.csv &lt;&lt; EOF
+id,name,city,age
+1,david,shenzhen,31
+2,eason,shenzhen,27
+3,jarry,wuhan,35
+EOF
+</code></p><h2>Interactive Analysis with Spark Shell Version 2.1</h2><p>Apache Spark Shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. Please visit <a href="http://spark.apache.org/docs/latest/">Apache Spark Documentation</a> for more details on Spark shell.</p><h4>Basics</h4><p>Start Spark shell by running the following command in the Spark directory:</p><p><code>
+./bin/spark-shell --jars &lt;carbondata assembly jar path&gt;
+</code></p><p>In this shell, SparkSession is readily available as 'spark' and Spark context is readily available as 'sc'.</p><p>In order to create a CarbonSession we will have to configure it explicitly in the following manner :</p>
+<ul>
+  <li>Import the following :</li>
+</ul><p><code>
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.CarbonSession._
+</code></p>
+<ul>
+  <li>Create a CarbonSession :</li>
+</ul><p><code>
+val carbon = SparkSession
+            .builder()
+            .config(sc.getConf)
+            .getOrCreateCarbonSession()
+</code></p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+scala&gt;carbon.sql(&quot;CREATE TABLE IF NOT EXISTS test_table
+     (id string, name string, city string, age Int)
+     STORED BY &#39;carbondata&#39;&quot;)
+</code></p><h5>Loading Data to a Table</h5><p><code>
+scala&gt;carbon.sql(&quot;LOAD DATA INPATH &#39;sample.csv file path&#39; INTO TABLE test_table&quot;)
+</code> NOTE: Please provide the real file path of sample.csv for the above script.</p><h5>Query Data from a Table</h5><p><code>
+scala&gt;carbon.sql(&quot;SELECT * FROM test_table&quot;).show()
+scala&gt;carbon.sql(&quot;SELECT city, avg(age), sum(age) FROM test_table GROUP BY city&quot;).show()
+</code></p><h2>Interactive Analysis with Spark Shell Version 1.6</h2><h4>Basics</h4><p>Start Spark shell by running the following command in the Spark directory:</p><p><code>
+./bin/spark-shell --jars &lt;carbondata assembly jar path&gt;
+</code></p><p>NOTE: In this shell, SparkContext is readily available as sc.</p>
+<ul>
+  <li>In order to execute the Queries we need to import CarbonContext:</li>
+</ul><p><code>
+import org.apache.spark.sql.CarbonContext
+</code></p>
+<ul>
+  <li>Create an instance of CarbonContext in the following manner :</li>
+</ul><p><code>
+val cc = new CarbonContext(sc)
+</code></p><p>NOTE: By default the store location is pointed to "../carbon.store"; the user can provide their own store location to CarbonContext like new CarbonContext(sc, storeLocation).</p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+scala&gt;cc.sql(&quot;CREATE TABLE IF NOT EXISTS test_table
+     (id string, name string, city string, age Int)
+     STORED BY &#39;carbondata&#39;&quot;)
+</code> To see the table created :</p><p><code>
+scala&gt;cc.sql(&quot;SHOW TABLES&quot;).show()
+</code></p><h5>Loading Data to a Table</h5><p><code>
+scala&gt;cc.sql(&quot;LOAD DATA INPATH &#39;sample.csv file path&#39;
+      INTO TABLE test_table&quot;)
+</code> NOTE: Please provide the real file path of sample.csv for the above script.</p><h5>Query Data from a Table</h5><p><code>
+scala&gt;cc.sql(&quot;SELECT * FROM test_table&quot;).show()
+scala&gt;cc.sql(&quot;SELECT city, avg(age), sum(age)
+      FROM test_table GROUP BY city&quot;).show()
+</code></p>
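+<p>When you are done experimenting, the sample table can be removed with a standard DROP TABLE statement (supported by CarbonData DDL; shown here for the 1.6 CarbonContext, and analogous with carbon.sql in the 2.1 session):</p><p><code>
+scala&gt;cc.sql(&quot;DROP TABLE IF EXISTS test_table&quot;)
+</code></p>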
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/supported-data-types-in-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/supported-data-types-in-carbondata.html b/src/main/webapp/docs/latest_htmls/supported-data-types-in-carbondata.html
new file mode 100644
index 0000000..a099fbb
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/supported-data-types-in-carbondata.html
@@ -0,0 +1,18 @@
+<h1>Data Types</h1><h4>CarbonData supports the following data types:</h4>
+<ul>
+  <li><p>Numeric Types</p>
+  <ul>
+    <li>SMALLINT</li>
+    <li>INT/INTEGER</li>
+    <li>BIGINT</li>
+    <li>DOUBLE</li>
+    <li>DECIMAL</li>
+  </ul></li>
+  <li><p>Date/Time Types</p>
+  <ul>
+    <li>TIMESTAMP</li>
+  </ul></li>
+  <li><p>String Types</p>
+  <ul>
+    <li>STRING</li>
+  </ul></li>
+  <li><p>Complex Types</p>
+  <ul>
+    <li>arrays: ARRAY<code>&lt;data_type&gt;</code></li>
+    <li>structs: STRUCT<code>&lt;col_name : data_type COMMENT col_comment, ...&gt;</code></li>
+  </ul></li>
+</ul>
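+<p>As a rough illustration (the table and column names below are hypothetical), a table combining several of these types might be declared with the usual CREATE TABLE ... STORED BY &#39;carbondata&#39; syntax, for example:</p><p><code>
+CREATE TABLE IF NOT EXISTS sample_types_table (
+  id BIGINT,
+  age SMALLINT,
+  salary DOUBLE,
+  event_time TIMESTAMP,
+  name STRING,
+  phone_numbers ARRAY&lt;STRING&gt;,
+  address STRUCT&lt;city: STRING, pincode: INT&gt;
+)
+STORED BY &#39;carbondata&#39;
+</code></p>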
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/troubleshooting.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/troubleshooting.html b/src/main/webapp/docs/latest_htmls/troubleshooting.html
new file mode 100644
index 0000000..2453df3
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/troubleshooting.html
@@ -0,0 +1,108 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Troubleshooting</h1><p>This tutorial provides troubleshooting guidance for end users and developers who are building, deploying, and using CarbonData.</p>
+<ul>
+  <li><a href="#failed-to-load-thrift-libraries">Failed to load thrift libraries</a></li>
+  <li><a href="#failed-to-launch-the-spark-shell">Failed to launch the Spark Shell</a></li>
+  <li><a href="#query-failure-with-generic-error-on-the-beeline">Query Failure with Generic Error on the Beeline</a></li>
+  <li><a href="#failed-to-execute-load-query-on-cluster">Failed to execute load query on cluster</a></li>
+  <li><a href="#failed-to-execute-insert-query-on-cluster">Failed to execute insert query on cluster</a></li>
+  <li><a href="#failed-to-connect-to-hiveuser-with-thrift">Failed to connect to hiveuser with thrift</a></li>
+  <li><a href="#failure-to-read-the-metastore-db-during-table-creation">Failure to read the metastore db during table creation</a></li>
+  <li><a href="#failed-to-load-data-on-the-cluster">Failed to load data on the cluster</a></li>
+  <li><a href="#failed-to-insert-data-on-the-cluster">Failed to insert data on the cluster</a></li>
+  <li><a href="#failed-to-execute-concurrent-operations">Failed to execute Concurrent Operations</a></li>
+  <li><a href="#failed-to-create-a-table-with-a-single-numeric-column">Failed to create a table with a single numeric column</a></li>
+  <li><a href="#data-failure-because-of-bad-records">Data Failure because of Bad Records</a></li>
+</ul><h2>Failed to load thrift libraries</h2><p><strong>Symptom</strong></p><p>Thrift throws the following exception:</p><p><code>
+  thrift: error while loading shared libraries:
+  libthriftc.so.0: cannot open shared object file: No such file or directory
+</code></p><p><strong>Possible Cause</strong></p><p>The complete path to the directory containing the libraries is not configured correctly.</p><p><strong>Procedure</strong></p><p>Follow the steps below to ensure the libraries are loaded properly:</p>
+<ol>
+  <li><p>On Ubuntu, add a custom .conf file to /etc/ld.so.conf.d. For example,</p><p><code>
+ sudo gedit /etc/ld.so.conf.d/randomLibs.conf
+</code></p><p>Inside this file, configure the complete path to the directory that contains all the libraries you wish to add to the system, for example /home/ubuntu/localLibs.</p></li>
+  <li><p>To verify the library location, check for the existence of libthrift.so in that directory.</p></li>
+  <li><p>Save the file and run the following command to update the system with these libraries.</p><p><code>
+  sudo ldconfig
+</code></p><p>Note: Add only the path to the directory, not the full path of a file; all the libraries inside that directory will be indexed automatically.</p></li>
+</ol><h2>Failed to launch the Spark Shell</h2><p><strong>Symptom</strong></p><p>The shell displays the following error:</p><p><code>
+  org.apache.spark.sql.CarbonContext$$anon$$apache$spark$sql$catalyst$analysis
+  $OverrideCatalog$_setter_$org$apache$spark$sql$catalyst$analysis
+  $OverrideCatalog$$overrides_$e
+</code></p><p><strong>Possible Cause</strong></p><p>The Spark Version and the selected Spark Profile do not match.</p><p><strong>Procedure</strong></p>
+<ol>
+  <li><p>Ensure your Spark version and the selected Spark profile are correct.</p></li>
+  <li><p>Use the following command:</p><p><code>
+ mvn -Pspark-2.1 -Dspark.version={yourSparkVersion} clean package
+</code></p><p>Note: Refrain from using "mvn clean package" without specifying the profile.</p></li>
+</ol><h2>Query Failure with Generic Error on the Beeline</h2><p><strong>Symptom</strong></p><p>The query fails on the executor side and a generic error message is printed on the Beeline console.</p><p><img src="../../../src/site/markdown/images/query_failure_beeline.png?raw=true" alt="Query Failure Beeline" /></p><p><strong>Possible Causes</strong></p>
+<ul>
+  <li>In the query flow, the table B-Tree is loaded into memory on the driver side and the filter condition is validated against the min-max of each block to identify false positives. Once the blocks are selected, they are distributed to the executor nodes based on the number of available executors, as shown in the driver logs snapshot below.</li>
+</ul><p><img src="../../../src/site/markdown/images/query_failure_logs.png?raw=true" alt="Query Failure Logs" /></p>
+<ul>
+  <li><p>When the error occurs on the driver side during B-Tree loading or block distribution, a detailed error message is printed on the Beeline console and the error trace is written to the driver logs.</p></li>
+  <li><p>When the error occurs on the executor side, a generic error message is printed as shown in the symptom above.</p></li>
+</ul><p><img src="../../../src/site/markdown/images/query_failure_job_details.png?raw=true" alt="Query Failure Job Details" /></p>
+<ul>
+  <li>Details of the failed stages can be seen in the Spark application UI by clicking on the failed stages of the failed job, as shown in the previous snapshot.</li>
+</ul><p><img src="../../../src/site/markdown/images/query_failure_spark_ui.png?raw=true" alt="Query Failure Spark UI" /></p><p><strong>Procedure</strong></p><p>Details of the error can be analyzed in detail using the executor logs available in stdout.</p><p><img src="../../../src/site/markdown/images/query_failure_procedure.png?raw=true" alt="Query Failure Procedure" /></p><p>The snapshot below shows executor logs with the error message for a query failure, which can help locate the error.</p><p><img src="../../../src/site/markdown/images/query_failure_issue.png?raw=true" alt="Query Failure Issue" /></p><h2>Failed to execute load query on cluster</h2><p><strong>Symptom</strong></p><p>Load query failed with the following exception:</p><p><code>
+  Dictionary file is locked for updation.
+</code></p><p><strong>Possible Cause</strong></p><p>The carbon.properties file is not identical in all the nodes of the cluster.</p><p><strong>Procedure</strong></p><p>Follow the steps to ensure the carbon.properties file is consistent across all the nodes:</p>
+<ol>
+  <li><p>Copy the carbon.properties file from the master node to all the other nodes in the cluster.  For example, you can use scp to copy this file to all the nodes.</p></li>
+  <li><p>For the changes to take effect, restart the Spark cluster.</p></li>
+</ol><h2>Failed to execute insert query on cluster</h2><p><strong>Symptom</strong></p><p>Insert query failed with the following exception:</p><p><code>
+  Dictionary file is locked for updation.
+</code></p><p><strong>Possible Cause</strong></p><p>The carbon.properties file is not identical in all the nodes of the cluster.</p><p><strong>Procedure</strong></p><p>Follow the steps to ensure the carbon.properties file is consistent across all the nodes:</p>
+<ol>
+  <li><p>Copy the carbon.properties file from the master node to all the other nodes in the cluster.  For example, you can use scp to copy this file to all the nodes.</p></li>
+  <li><p>For the changes to take effect, restart the Spark cluster.</p></li>
+</ol><h2>Failed to connect to hiveuser with thrift</h2><p><strong>Symptom</strong></p><p>We get the following exception :</p><p><code>
+  Cannot connect to hiveuser.
+</code></p><p><strong>Possible Cause</strong></p><p>The external process does not have the required access permission.</p><p><strong>Procedure</strong></p><p>Ensure that the hiveuser in MySQL allows access from the external processes.</p><h2>Failure to read the metastore db during table creation</h2><p><strong>Symptom</strong></p><p>We get the following exception when trying to connect:</p><p><code>
+  Cannot read the metastore db
+</code></p><p><strong>Possible Cause</strong></p><p>The metastore db is not functional.</p><p><strong>Procedure</strong></p><p>Remove the metastore db from the carbon.metastore path in the Spark directory.</p><h2>Failed to load data on the cluster</h2><p><strong>Symptom</strong></p><p>Data loading fails with the following exception:</p><p><code>
+   Data Load failure exception
+</code></p><p><strong>Possible Cause</strong></p><p>The following issues can cause the failure:</p>
+<ol>
+  <li><p>The core-site.xml, hive-site.xml, yarn-site.xml and carbon.properties files are not consistent across all nodes of the cluster.</p></li>
+  <li><p>Path to hdfs ddl is not configured correctly in the carbon.properties.</p></li>
+</ol><p><strong>Procedure</strong></p><p>Follow the steps to ensure the following configuration files are consistent across all the nodes:</p>
+<ol>
+  <li><p>Copy the core-site.xml, hive-site.xml, yarn-site.xml, and carbon.properties files from the master node to all the other nodes in the cluster.  For example, you can use scp to copy these files to all the nodes.</p><p>Note: Set the path to hdfs ddl in carbon.properties on the master node.</p></li>
+  <li><p>For the changes to take effect, restart the Spark cluster.</p></li>
+</ol><h2>Failed to insert data on the cluster</h2><p><strong>Symptom</strong></p><p>Insertion fails with the following exception :</p><p><code>
+   Data Load failure exception
+</code></p><p><strong>Possible Cause</strong></p><p>The following issues can cause the failure:</p>
+<ol>
+  <li><p>The core-site.xml, hive-site.xml, yarn-site.xml and carbon.properties files are not consistent across all nodes of the cluster.</p></li>
+  <li><p>Path to hdfs ddl is not configured correctly in the carbon.properties.</p></li>
+</ol><p><strong>Procedure</strong></p><p>Follow the steps to ensure the following configuration files are consistent across all the nodes:</p>
+<ol>
+  <li><p>Copy the core-site.xml, hive-site.xml, yarn-site.xml, and carbon.properties files from the master node to all the other nodes in the cluster.  For example, you can use scp to copy these files to all the nodes.</p><p>Note: Set the path to hdfs ddl in carbon.properties on the master node.</p></li>
+  <li><p>For the changes to take effect, restart the Spark cluster.</p></li>
+</ol><h2>Failed to execute Concurrent Operations</h2><p><strong>Symptom</strong></p><p>Execution of concurrent operations (Load, Insert, Update) on a table by multiple workers fails with the following exception:</p><p><code>
+   Table is locked for updation.
+</code></p><p><strong>Possible Cause</strong></p><p>Concurrent operations are not supported.</p><p><strong>Procedure</strong></p><p>A worker must wait for the running query to complete and the table to release its lock before another query can succeed.</p><h2>Failed to create a table with a single numeric column</h2><p><strong>Symptom</strong></p><p>Execution fails with the following exception:</p><p><code>
+   Table creation fails.
+</code></p><p><strong>Possible Cause</strong></p><p>This behavior is not supported.</p><p><strong>Procedure</strong></p><p>At least one column that can be treated as a dimension is mandatory for table creation.</p><h2>Data Failure because of Bad Records</h2><p><strong>Symptom</strong></p><p>Data loading fails with the following exception:</p><p><code>
+   Error: java.lang.Exception: Data load failed due to Bad record
+</code></p><p><strong>Possible Causes</strong></p><p>The parameter BAD_RECORDS_ACTION has not been specified in the query.</p><p><strong>Procedure</strong></p><p>Set the following parameter in the load command OPTIONS as shown below:</p><p>'BAD_RECORDS_ACTION'='FORCE'</p><p><em>Example:</em></p><p><code>
+   LOAD DATA INPATH &#39;hdfs://hacluster/user/loader/moredata01.csv&#39; INTO TABLE flow_carbon_256b OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;BAD_RECORDS_ACTION&#39;=&#39;FORCE&#39;);
+</code></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/useful-tips-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/useful-tips-on-carbondata.html b/src/main/webapp/docs/latest_htmls/useful-tips-on-carbondata.html
new file mode 100644
index 0000000..51375ca
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/useful-tips-on-carbondata.html
@@ -0,0 +1,208 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Useful Tips</h1><p>This tutorial guides you in creating CarbonData tables and optimizing performance. The following sections elaborate on these topics:</p>
+<ul>
+  <li><a href="#suggestions-to-create-carbondata-table">Suggestions to create CarbonData Table</a></li>
+  <li><a href="#configurations-for-optimizing-carbondata-performance">Configurations For Optimizing CarbonData Performance</a></li>
+</ul><h2>Suggestions to Create CarbonData Table</h2><p>Recently CarbonData was used to analyze performance in the telecommunication field. The analysis covered tables with row counts ranging from 10 thousand to 10 billion and 100 to 300 columns; the results are summarized below.</p><p>The following table describes some of the columns from the table used.</p><p><strong>Table Column Description</strong></p>
+<table>
+  <thead>
+    <tr>
+      <th>Column Name </th>
+      <th>Data Type </th>
+      <th>Cardinality </th>
+      <th>Attribution </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>msisdn </td>
+      <td>String </td>
+      <td>30 million </td>
+      <td>Dimension </td>
+    </tr>
+    <tr>
+      <td>BEGIN_TIME </td>
+      <td>BigInt </td>
+      <td>10 Thousand </td>
+      <td>Dimension </td>
+    </tr>
+    <tr>
+      <td>HOST </td>
+      <td>String </td>
+      <td>1 million </td>
+      <td>Dimension </td>
+    </tr>
+    <tr>
+      <td>Dime_1 </td>
+      <td>String </td>
+      <td>1 Thousand </td>
+      <td>Dimension </td>
+    </tr>
+    <tr>
+      <td>counter_1 </td>
+      <td>Numeric(20,0) </td>
+      <td>NA </td>
+      <td>Measure </td>
+    </tr>
+    <tr>
+      <td>... </td>
+      <td>... </td>
+      <td>NA </td>
+      <td>Measure </td>
+    </tr>
+    <tr>
+      <td>counter_100 </td>
+      <td>Numeric(20,0) </td>
+      <td>NA </td>
+      <td>Measure </td>
+    </tr>
+  </tbody>
+</table><p>CarbonData has more than 50 test cases; on the basis of these, we have the following suggestions to enhance query performance:</p>
+<ul>
+  <li><strong>Put the frequently-used filter column at the beginning</strong></li>
+</ul><p>For example, if the MSISDN filter is used in most of the queries, then MSISDN should be put in the first column. The create table command can be modified as suggested below:</p><p><code>
+  create table carbondata_table(
+  msisdn String,
+  ...
+  )STORED BY &#39;org.apache.carbondata.format&#39; 
+  TBLPROPERTIES ( &#39;DICTIONARY_EXCLUDE&#39;=&#39;MSISDN,..&#39;,
+  &#39;DICTIONARY_INCLUDE&#39;=&#39;...&#39;);
+</code></p><p>Now the query with MSISDN in the filter will be more efficient.</p>
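+<p>As a hedged illustration (the filter value and the selected measure are hypothetical), a query of this shape benefits from putting MSISDN first:</p><p><code>
+  SELECT msisdn, sum(counter_1)
+  FROM carbondata_table
+  WHERE msisdn = &#39;1234567890&#39;
+  GROUP BY msisdn;
+</code></p>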
+<ul>
+  <li><strong>Put the frequently-used columns in the order of low to high cardinality</strong></li>
+</ul><p>If the queried table has multiple columns which are frequently used to filter the results, it is suggested to put the columns in order of low to high cardinality. This ordering of frequently used columns improves the compression ratio and enhances the performance of queries with filters on these columns.</p><p>For example, if MSISDN, HOST and Dime_1 are frequently-used columns, then the suggested column order of the table is Dime_1&gt;HOST&gt;MSISDN, as Dime_1 has the lowest cardinality. The create table command can be modified as suggested below:</p><p><code>
+  create table carbondata_table(
+  Dime_1 String,
+  HOST String,
+  MSISDN String,
+  ...
+  )STORED BY &#39;org.apache.carbondata.format&#39; 
+  TBLPROPERTIES ( &#39;DICTIONARY_EXCLUDE&#39;=&#39;MSISDN,HOST..&#39;,
+  &#39;DICTIONARY_INCLUDE&#39;=&#39;Dime_1..&#39;);
+</code></p>
+<ul>
+  <li><strong>Put the Dimension type columns in order of low to high cardinality</strong></li>
+</ul><p>If the filter columns are not frequently used, then it is suggested to order all the dimension-type columns from low to high cardinality. The create table command can be modified as below:</p><p><code>
+  create table carbondata_table(
+  Dime_1 String,
+  BEGIN_TIME bigint,
+  HOST String,
+  MSISDN String,
+  ...
+  )STORED BY &#39;org.apache.carbondata.format&#39; 
+  TBLPROPERTIES ( &#39;DICTIONARY_EXCLUDE&#39;=&#39;MSISDN,HOST,IMSI..&#39;,
+  &#39;DICTIONARY_INCLUDE&#39;=&#39;Dime_1,END_TIME,BEGIN_TIME..&#39;);
+</code></p>
+<ul>
+  <li><strong>For measure type columns that do not require high accuracy, replace the Numeric(20,0) data type with the Double data type</strong></li>
+</ul><p>For measure-type columns that do not require high accuracy, it is suggested to replace the Numeric data type with Double to enhance query performance. The create table command can be modified as below:</p><p><code>
+  create table carbondata_table(
+  Dime_1 String,
+  BEGIN_TIME bigint,
+  HOST String,
+  MSISDN String,
+  counter_1 double,
+  counter_2 double,
+  ...
+  counter_100 double
+  )STORED BY &#39;org.apache.carbondata.format&#39; 
+  TBLPROPERTIES ( &#39;DICTIONARY_EXCLUDE&#39;=&#39;MSISDN,HOST,IMSI&#39;,
+  &#39;DICTIONARY_INCLUDE&#39;=&#39;Dime_1,END_TIME,BEGIN_TIME&#39;);
+</code> The performance analysis of this test case shows a reduction in query execution time from 15 to 3 seconds, thereby improving performance by nearly 5 times.</p>
+<ul>
+  <li><strong>Columns with incremental values should be arranged at the end of the dimensions</strong></li>
+</ul><p>Consider a scenario where data is loaded each day and start_time is incremental for each load; it is suggested to put start_time at the end of the dimensions.</p><p>Incremental values make efficient use of the min/max index. The create table command can be modified as below:</p><p><code>
+  create table carbondata_table(
+  Dime_1 String,
+  HOST String,
+  MSISDN String,
+  counter_1 double,
+  counter_2 double,
+  BEGIN_TIME bigint,
+  ...
+  counter_100 double
+  )STORED BY &#39;org.apache.carbondata.format&#39; 
+  TBLPROPERTIES ( &#39;DICTIONARY_EXCLUDE&#39;=&#39;MSISDN,HOST,IMSI&#39;,
+  &#39;DICTIONARY_INCLUDE&#39;=&#39;Dime_1,END_TIME,BEGIN_TIME&#39;); 
+</code></p>
+<ul>
+  <li><strong>Avoid adding high cardinality columns to dictionary</strong></li>
+</ul><p>If the system has a low memory configuration, it is suggested to exclude high-cardinality columns from the dictionary to enhance load performance. Creating a dictionary for high-cardinality columns at load time degrades load performance due to excessive memory usage.</p><p>By default CarbonData determines the cardinality at the first data load and allows dictionary creation only if the cardinality is less than 1 million.</p><h2>Configurations for Optimizing CarbonData Performance</h2><p>Recently we did a performance POC on CarbonData for the finance and telecommunication fields. It involved detailed queries and aggregation scenarios. After completion of the POC, some of the configurations impacting performance were identified and are tabulated below:</p>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Location </th>
+      <th>Used For </th>
+      <th>Description </th>
+      <th>Tuning </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.sort.intermediate.files.limit </td>
+      <td>spark/carbonlib/carbon.properties </td>
+      <td>Data loading </td>
+      <td>During data loading, the local temp directory is used to sort the data. This number specifies the minimum number of intermediate files after which merge sort has to be initiated. </td>
+      <td>Increasing the parameter to a higher value will improve load performance. For example, when we increased the value from 20 to 100, data load performance increased from 35 MB/s to more than 50 MB/s. Higher values of this parameter consume more memory during the load. </td>
+    </tr>
+    <tr>
+      <td>carbon.number.of.cores.while.loading </td>
+      <td>spark/carbonlib/carbon.properties </td>
+      <td>Data loading </td>
+      <td>Specifies the number of cores used for data processing during data loading in CarbonData. </td>
+      <td>If you have more CPUs available, you can increase this value, which will improve performance. For example, increasing the value from 2 to 4 can nearly double the CSV reading performance. </td>
+    </tr>
+    <tr>
+      <td>carbon.compaction.level.threshold </td>
+      <td>spark/carbonlib/carbon.properties </td>
+      <td>Data loading and Querying </td>
+      <td>For minor compaction, specifies the number of segments to be merged in stage 1 and number of compacted segments to be merged in stage 2. </td>
+      <td>Each CarbonData load creates one segment; if every load is small in size, many small segments accumulate over time, impacting query performance. Configuring this parameter merges the small segments into one big segment, which sorts the data and improves performance. For example, in one telecommunication scenario the performance improved by about 2 times after minor compaction. </td>
+    </tr>
+    <tr>
+      <td>spark.sql.shuffle.partitions </td>
+      <td>spark/conf/spark-defaults.conf </td>
+      <td>Querying </td>
+      <td>The number of tasks started during a Spark shuffle. </td>
+      <td>The value can be 1 to 2 times the number of executor cores. In one aggregation scenario, reducing the number from 200 to 32 reduced the query time from 17 to 9 seconds. </td>
+    </tr>
+    <tr>
+      <td>num-executors/executor-cores/executor-memory </td>
+      <td>spark/conf/spark-defaults.conf </td>
+      <td>Querying </td>
+      <td>The number of executors, CPU cores, and memory used for CarbonData query. </td>
+      <td>In the bank scenario, providing 4 CPU cores and 15 GB of memory for each executor gave good performance. These two values do not follow "the more the better"; they need to be configured properly when resources are limited. For example, in the bank scenario each node has enough CPU (32 cores) but comparatively little memory (64 GB), so we cannot simply assign more CPU cores with less memory per executor. With 4 cores and 12 GB per executor, GC sometimes occurred during the query, degrading query performance from 3 seconds to more than 15 seconds. In such a scenario you need to increase the memory or decrease the number of CPU cores. </td>
+    </tr>
+    <tr>
+      <td>carbon.detail.batch.size </td>
+      <td>spark/carbonlib/carbon.properties </td>
+      <td>Querying </td>
+      <td>The buffer size used to store records returned from the block scan. </td>
+      <td>This parameter is very important in LIMIT scenarios. For example, if your query limit is 1000 but this value is set to 3000, the scan returns 3000 records while Spark takes only 1000 rows, so the remaining 2000 are wasted. In one finance test case, setting it to 100 in a limit-1000 scenario improved performance by about 2 times compared to setting it to 12000. </td>
+    </tr>
+    <tr>
+      <td>carbon.use.local.dir </td>
+      <td>spark/carbonlib/carbon.properties </td>
+      <td>Data loading </td>
+      <td>Whether to use YARN local directories for multi-table load disk load balancing. </td>
+      <td>If this is set to true, CarbonData uses YARN local directories for multi-table load disk load balancing, which improves data load performance. </td>
+    </tr>
+  </tbody>
+</table>
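+<p>As a hedged illustration, the snippet below shows how the carbon.properties entries discussed above might look; the values simply echo the examples in the table, and the compaction threshold value is an illustrative assumption:</p><p><code>
+  # spark/carbonlib/carbon.properties (example values only)
+  carbon.sort.intermediate.files.limit=100
+  carbon.number.of.cores.while.loading=4
+  carbon.compaction.level.threshold=4,3
+  carbon.detail.batch.size=100
+  carbon.use.local.dir=true
+</code></p><p>The spark.sql.shuffle.partitions and executor settings are configured separately in spark/conf/spark-defaults.conf.</p>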
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/user-guide-toc.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/user-guide-toc.html b/src/main/webapp/docs/latest_htmls/user-guide-toc.html
new file mode 100644
index 0000000..f9124fe
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/user-guide-toc.html
@@ -0,0 +1,45 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>User Guide</h1><p>Welcome to Apache CarbonData. Apache CarbonData (incubating) is a new big data file format for faster interactive queries. It uses advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps speed up queries by an order of magnitude over petabytes of data. This user guide provides a detailed description of CarbonData and its features.</p><p>Let's get started!</p>
+<ul>
+  <li><a href="overview-of-carbondata.md">Overview</a>
+  <ul>
+    <li>Introduction</li>
+    <li>Features</li>
+    <li><a href="supported-data-types-in-carbondata.md">Data Types</a></li>
+    <li><a href="file-structure-of-carbondata.md">CarbonData File Structure</a></li>
+  </ul></li>
+  <li><a href="installation-guide.md">Installation Guide</a>
+  <ul>
+    <li>Installing and Configuring CarbonData on Standalone Spark Cluster</li>
+    <li>Installing and Configuring CarbonData on "Spark on YARN" Cluster</li>
+  </ul></li>
+  <li><a href="configuration-parameters.md">Configuring CarbonData</a>
+  <ul>
+    <li>System Configuration</li>
+    <li>Performance Configuration</li>
+    <li>Miscellaneous Configuration</li>
+    <li>Spark Configuration</li>
+  </ul></li>
+  <li><a href="using-carbondata.md">Using CarbonData</a>
+  <ul>
+    <li><a href="data-management.md">Data Management</a></li>
+    <li><a href="ddl-operation-on-carbondata.md">DDL Operations on CarbonData</a></li>
+    <li><a href="dml-operation-on-carbondata.md">DML Operations on CarbonData</a></li>
+  </ul></li>
+</ul>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/docs/latest_htmls/using-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest_htmls/using-carbondata.html b/src/main/webapp/docs/latest_htmls/using-carbondata.html
new file mode 100644
index 0000000..c501d0f
--- /dev/null
+++ b/src/main/webapp/docs/latest_htmls/using-carbondata.html
@@ -0,0 +1,14 @@
+<h1>Using CarbonData</h1><p>This tutorial discusses the disciplines related to managing data in Apache CarbonData. Each section below gives a brief introduction to the respective data management discipline.</p><h2>Data Management</h2><p>This section deals with managing data in the application, focusing on conceptual details of operations such as loading data, deleting data, updating data, and compacting data.</p><p>For complete details refer to <a href="data-management.md">Data Management</a></p><h2>Data Definition Language Support</h2><p>This section deals with the creation and modification of the database structure. It discusses in detail:</p>
+<ul>
+  <li>Table creation</li>
+  <li>Table deletion</li>
+  <li>Table description</li>
+  <li>Compaction</li>
+</ul><p>For complete details refer to <a href="ddl-operation-on-carbondata.md">DDL Operations on CarbonData</a></p><h2>Data Manipulation Language Support</h2><p>This section deals with the aspects related to data manipulation in database. It shall discuss in detail about selecting, loading and deleting in a database. This manipulation comprises of</p>
+<ul>
+  <li>Loading data into database tables</li>
+  <li>Retrieving existing data</li>
+  <li>Deleting data from existing tables</li>
+  <li>Deleting segments from existing tables</li>
+  <li>Updating data in existing tables</li>
+</ul><p>For complete details refer to <a href="dml-operation-on-carbondata.md">DML Operations on CarbonData</a></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/9ebca155/src/main/webapp/index.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/index.html b/src/main/webapp/index.html
index 6453275..8894021 100644
--- a/src/main/webapp/index.html
+++ b/src/main/webapp/index.html
@@ -83,8 +83,8 @@
                                 <a href="https://github.com/apache/incubator-carbondata/blob/master/docs/How-to-contribute-to-Apache-CarbonData.md"
                                    target="_blank">Contributing to CarbonData</a></li>
                             <li>
-                                <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Committers"
-                                   target="_blank">Project Committers</a></li>
+                                <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/PPMC+and+Committers+member+list"
+                                   target="_blank">Project PPMC and Committers</a></li>
                             <li><a href="meetup.html">CarbonData Meetups </a></li>
                             <li><a href="security.html">Apache CarbonData Security</a></li>
                         </ul>


