carbondata-commits mailing list archives

From chenliang...@apache.org
Subject [07/13] incubator-carbondata-site git commit: Updated website for CarbonData V 1.0
Date Thu, 19 Jan 2017 23:14:57 GMT
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/data-management.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/data-management.html b/src/main/webapp/docs/latest1/data-management.html
new file mode 100644
index 0000000..f5f99c2
--- /dev/null
+++ b/src/main/webapp/docs/latest1/data-management.html
@@ -0,0 +1,175 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Data Management</h1><p>This tutorial introduces the conceptual details of data management in CarbonData:</p>
+<ul>
+  <li><a href="#loading-data">Loading Data</a></li>
+  <li><a href="#deleting-data">Deleting Data</a></li>
+  <li><a href="#compacting-data">Compacting Data</a></li>
+  <li><a href="#updating-data">Updating Data</a></li>
+</ul><h2>Loading Data</h2>
+<ul>
+  <li><strong>Scenario</strong></li>
+</ul><p>After creating a table, you can load data into it using the <a href="dml-operation-on-carbondata.md">LOAD DATA</a> command; the loaded data is then available for querying. When a data load is triggered, the data is encoded in CarbonData format and copied into the HDFS CarbonData store path (specified in the carbon.properties file) in a compressed, multi-dimensional columnar format for fast analytic queries. The same command can be used to load new data or to update existing data. Only one data load can be triggered for a table at a time. High cardinality columns are automatically recognized and excluded from dictionary encoding.</p>
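+<p>For example, a minimal load command (the table name and CSV path below are illustrative):</p><p><code>
+-- table name and path are illustrative
+LOAD DATA INPATH &#39;hdfs://hacluster/data/sample.csv&#39; INTO TABLE sales_table;
+</code></p>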
+<ul>
+  <li><strong>Procedure</strong></li>
+</ul><p>Data loading is a process that involves execution of multiple steps to read, sort and encode the data in CarbonData store format. Each step is executed on a different thread. After the data loading process is complete, the status (Success/Partial Success) is updated in the CarbonData store metadata. The table below lists the possible load statuses.</p>
+<table>
+  <thead>
+    <tr>
+      <th>Status </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>Success </td>
+      <td>All the data is loaded into the table and no bad records are found. </td>
+    </tr>
+    <tr>
+      <td>Partial Success </td>
+      <td>Data is loaded into the table but bad records are found. Bad records are stored at the location configured in carbon.badrecords.location. </td>
+    </tr>
+  </tbody>
+</table><p>In case of failure, the error is logged in the error log. Details of loads can be seen with the <a href="dml-operation-on-carbondata.md">SHOW SEGMENTS</a> command. The output of the SHOW SEGMENTS command consists of:</p>
+<ul>
+  <li>SegmentSequenceID</li>
+  <li>START_TIME OF LOAD</li>
+  <li>END_TIME OF LOAD</li>
+  <li>LOAD STATUS</li>
+</ul><p>The latest load will be displayed first in the output.</p><p>Refer to <a href="dml-operation-on-carbondata.md">DML operations on CarbonData</a> for load commands.</p><h2>Deleting Data</h2>
+<ul>
+  <li><strong>Scenario</strong></li>
+</ul><p>If you have loaded wrong data into the table, or too many bad records are present and you want to modify and reload the data, you can delete the required data loads. A load can be deleted using its Segment Sequence ID, or, if the table contains a date field, the data can be deleted using that date field. If specific records need to be deleted based on some filter condition(s), you can delete by record.</p>
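+<p>For instance, specific records can be removed with a filter, following the DELETE syntax in the DML reference (the table name and value are illustrative):</p><p><code>
+-- table name and filter value are illustrative
+DELETE FROM sales_table WHERE country = &#39;china&#39;;
+</code></p>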
+<ul>
+  <li><strong>Procedure</strong></li>
+</ul><p>The loaded data can be deleted in the following ways:</p>
+<ul>
+  <li><p>Delete by Segment ID</p><p>After you get the segment ID of the segment that you want to delete, execute the <a href="dml-operation-on-carbondata.md">DELETE</a> command for the selected segment. The status of the deleted segment is updated to Marked for Delete / Marked for Update, as shown in the listing below.</p></li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>SegmentSequenceId </th>
+      <th>Status </th>
+      <th>Load Start Time </th>
+      <th>Load End Time </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>0 </td>
+      <td>Success </td>
+      <td>2015-11-19 19:14:... </td>
+      <td>2015-11-19 19:14:... </td>
+    </tr>
+    <tr>
+      <td>1 </td>
+      <td>Marked for Update </td>
+      <td>2015-11-19 19:54:... </td>
+      <td>2015-11-19 20:08:... </td>
+    </tr>
+    <tr>
+      <td>2 </td>
+      <td>Marked for Delete </td>
+      <td>2015-11-19 20:25:... </td>
+      <td>2015-11-19 20:49:... </td>
+    </tr>
+  </tbody>
+</table>
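+<p>For example, a single segment can then be removed by its ID, following the DELETE SEGMENT syntax in the DML reference (the segment ID and table name are illustrative):</p><p><code>
+-- segment ID and table name are illustrative
+DELETE SEGMENT 2 FROM TABLE table1;
+</code></p>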
+<ul>
+  <li><p>Delete by Date Field</p><p>If the table contains a date field, you can delete the data based on a specific date.</p></li>
+  <li><p>Delete by Record</p><p>Records can be deleted from a CarbonData table based on filter condition(s).</p><p>For delete commands refer to <a href="dml-operation-on-carbondata.md">DML operations on CarbonData</a>.</p></li>
+  <li><p><strong>NOTE</strong>:</p>
+  <ul>
+    <li>When the delete segment DML is called, the segment is not deleted physically from the file system. Instead, the segment status is marked as "Marked for Delete". During query execution, this deleted segment is excluded.</li>
+  </ul>
+  <ul>
+    <li>The deleted segment is deleted physically during the next load operation, and only after the maximum query execution time configured using "max.query.execution.time" has elapsed. By default this is 60 minutes.</li>
+  </ul>
+  <ul>
+    <li>To force delete the segment physically, use the CLEAN FILES command.</li>
+  </ul></li>
+</ul><p>Example :</p><p><code>
+CLEAN FILES FOR TABLE table1
+</code></p><p>This DML physically deletes the segments which are "Marked for Delete" immediately.</p><h2>Compacting Data</h2>
+<ul>
+  <li><strong>Scenario</strong></li>
+</ul><p>Frequent data ingestion results in several fragmented CarbonData files in the store directory. Since data is sorted only within each load, the indices also apply only within each load. This means there is one index per load, and as the number of data loads increases, the number of indices also increases. Because each index works only on one load, index performance degrades. CarbonData therefore provides a provision for compacting the loads. The compaction process combines several segments into one large segment by merge-sorting the data across the segments. </p>
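+<p>A minimal sketch of the manual compaction commands described in the DDL reference (the table name is illustrative):</p><p><code>
+-- table name is illustrative
+ALTER TABLE sales_table COMPACT &#39;MINOR&#39;;
+ALTER TABLE sales_table COMPACT &#39;MAJOR&#39;;
+</code></p>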
+<ul>
+  <li><strong>Procedure</strong></li>
+</ul><p>There are two types of compaction: Minor and Major compaction.</p>
+<ul>
+  <li><p><strong>Minor Compaction</strong></p><p>In minor compaction, the user can specify how many loads are to be merged. Minor compaction is triggered for every data load if the parameter carbon.enable.auto.load.merge is set. If any segments are available to be merged, then compaction runs in parallel with the data load. There are two levels in minor compaction.</p>
+  <ul>
+    <li>Level 1: Merging of the segments which are not yet compacted.</li>
+    <li>Level 2: Merging of the compacted segments again to form a bigger segment.</li>
+  </ul></li>
+  <li><p><strong>Major Compaction</strong></p><p>In major compaction, many segments can be merged into one big segment. The user specifies the compaction size up to which segments can be merged. Major compaction is usually done during off-peak time. </p></li>
+</ul><p>There are a number of parameters related to compaction that can be set in the carbon.properties file. </p>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Default </th>
+      <th>Application </th>
+      <th>Description </th>
+      <th>Valid Values </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.compaction.level.threshold </td>
+      <td>4, 3 </td>
+      <td>Minor </td>
+      <td>This property is for minor compaction and decides how many segments are to be merged. Example: if it is set to 2, 3 then minor compaction is triggered for every 2 segments; 3 is the number of level 1 compacted segments that are further compacted into a new segment. </td>
+      <td>NA </td>
+    </tr>
+    <tr>
+      <td>carbon.major.compaction.size </td>
+      <td>1024 MB </td>
+      <td>Major </td>
+      <td>Major compaction size can be configured using this parameter. Segments whose combined size is below this threshold will be merged. </td>
+      <td>NA </td>
+    </tr>
+    <tr>
+      <td>carbon.numberof.preserve.segments </td>
+      <td>0 </td>
+      <td>Minor/Major </td>
+      <td>Set this property to preserve a number of segments from being compacted. Example: if carbon.numberof.preserve.segments=2, then the 2 latest segments will always be excluded from compaction. No segments are preserved by default. </td>
+      <td>0-100 </td>
+    </tr>
+    <tr>
+      <td>carbon.allowed.compaction.days </td>
+      <td>0 </td>
+      <td>Minor/Major </td>
+      <td>Compaction will merge the segments which are loaded within the specific number of days configured. Example: If the configuration is 2, then the segments which are loaded in the time frame of 2 days only will get merged. Segments which are loaded 2 days apart will not be merged. This is disabled by default. </td>
+      <td>0-100 </td>
+    </tr>
+    <tr>
+      <td>carbon.number.of.cores.while.compacting </td>
+      <td>2 </td>
+      <td>Minor/Major </td>
+      <td>Number of cores used to write data during compaction. </td>
+      <td>0-100 </td>
+    </tr>
+  </tbody>
+</table><p>For compaction commands refer to <a href="ddl-operation-on-carbondata.md">DDL operations on CarbonData</a></p><h2>Updating Data</h2>
+<ul>
+  <li><p><strong>Scenario</strong></p><p>Sometimes data that has already been ingested into the system needs to be updated. There may also be situations where specific columns need to be updated on the basis of a column expression and optional filter conditions.</p></li>
+  <li><p><strong>Procedure</strong></p><p>To update, specify the column expression with optional filter condition(s); a minimal example follows this list.</p><p>For update commands refer to <a href="dml-operation-on-carbondata.md">DML operations on CarbonData</a>.</p></li>
+</ul>
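+<p>A minimal sketch of such an update, following the UPDATE syntax in the DML reference (the table, column and filter values are illustrative):</p><p><code>
+-- table, column and filter values are illustrative
+UPDATE sales_table SET (unit_price) = (unit_price * 1.1) WHERE region = &#39;south&#39;;
+</code></p>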
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/ddl-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/ddl-operation-on-carbondata.html b/src/main/webapp/docs/latest1/ddl-operation-on-carbondata.html
new file mode 100644
index 0000000..fe162a1
--- /dev/null
+++ b/src/main/webapp/docs/latest1/ddl-operation-on-carbondata.html
@@ -0,0 +1,182 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>DDL Operations on CarbonData</h1><p>This tutorial guides you through the data definition language support provided by CarbonData.</p><h2>Overview</h2><p>The following DDL operations are supported in CarbonData :</p>
+<ul>
+  <li><a href="#create-table">CREATE TABLE</a></li>
+  <li><a href="#show-table">SHOW TABLE</a></li>
+  <li><a href="#drop-table">DROP TABLE</a></li>
+  <li><a href="#compaction">COMPACTION</a></li>
+</ul><h2>CREATE TABLE</h2><p>This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.</p><p><code>
+   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
+                    [(col_name data_type , ...)]               
+   STORED BY &#39;carbondata&#39;
+   [TBLPROPERTIES (property_name=property_value, ...)]
+   // All Carbon&#39;s additional table options will go into properties
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>db_name </td>
+      <td>Name of the database. Database name should consist of alphanumeric characters and underscore(_) special character. </td>
+      <td>Yes </td>
+    </tr>
+    <tr>
+      <td>field_list </td>
+      <td>Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(_) special character. </td>
+      <td>No </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. </td>
+      <td>No </td>
+    </tr>
+    <tr>
+      <td>STORED BY </td>
+      <td>"org.apache.carbondata.format", identifies and creates a CarbonData table. </td>
+      <td>No </td>
+    </tr>
+    <tr>
+      <td>TBLPROPERTIES </td>
+      <td>List of CarbonData table properties. </td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3><p>Following are the guidelines for using table properties.</p>
+<ul>
+  <li><p><strong>Dictionary Encoding Configuration</strong></p><p>Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.</p></li>
+</ul><p><code>
+       TBLPROPERTIES (&quot;DICTIONARY_EXCLUDE&quot;=&quot;column1, column2&quot;) 
+       TBLPROPERTIES (&quot;DICTIONARY_INCLUDE&quot;=&quot;column1, column2&quot;) 
+</code></p><p>Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is applicable for high-cardinality columns and is an optional parameter. DICTIONARY_INCLUDE will generate a dictionary for the columns specified in the list.</p>
+<ul>
+  <li><p><strong>Row/Column Format Configuration</strong></p><p>Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.</p></li>
+</ul><p><code>
+TBLPROPERTIES (&quot;COLUMN_GROUPS&quot;=&quot;(column1,column3),
+(Column4,Column5,Column6)&quot;) 
+</code></p>
+<ul>
+  <li><p><strong>Table Block Size Configuration</strong></p><p>The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and it supports a range of 1 MB to 2048 MB. If you do not specify this value in the DDL command, the default value is used. </p></li>
+</ul><p><code>
+       TBLPROPERTIES (&quot;TABLE_BLOCKSIZE&quot;=&quot;512 MB&quot;)
+</code></p><p>Here 512 MB means the block size of this table is 512 MB; you can also set it as 512M or 512.</p>
+<ul>
+  <li><p><strong>Inverted Index Configuration</strong></p><p>An inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns that are in a rearward position. By default the inverted index is enabled. The user can disable inverted index creation for some columns.</p></li>
+</ul><p><code>
+       TBLPROPERTIES (&quot;NO_INVERTED_INDEX&quot;=&quot;column1,column3&quot;)
+</code></p><p>No inverted index will be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable to high-cardinality columns and is an optional parameter.</p><p>NOTE:</p>
+<ul>
+  <li><p>By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.</p></li>
+  <li><p>All dimensions except complex datatype columns are part of the multi-dimensional key (MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in the multi-dimensional key, the column can be included in either DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.</p><h3>Example:</h3><p><code>
+   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                            productNumber Int,
+                            productName String, 
+                            storeCity String, 
+                            storeProvince String, 
+                            productCategory String, 
+                            productBatch String,
+                            saleQuantity Int,
+                            revenue Int)       
+   STORED BY &#39;carbondata&#39; 
+   TBLPROPERTIES (&#39;COLUMN_GROUPS&#39;=&#39;(productName,productCategory)&#39;,
+              &#39;DICTIONARY_EXCLUDE&#39;=&#39;productName&#39;,
+              &#39;DICTIONARY_INCLUDE&#39;=&#39;productNumber&#39;,
+              &#39;NO_INVERTED_INDEX&#39;=&#39;productBatch&#39;)
+</code></p></li>
+</ul><h2>SHOW TABLE</h2><p>This command can be used to list all the tables in the current database or all the tables of a specific database. <code>
+  SHOW TABLES [IN db_Name];
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>IN db_Name </td>
+      <td>Name of the database. Required only if tables of this specific database are to be listed. </td>
+      <td>Yes </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+  SHOW TABLES IN ProductSchema;
+</code></p><h2>DROP TABLE</h2><p>This command is used to delete an existing table.</p><p><code>
+  DROP TABLE [IF EXISTS] [db_name.]table_name;
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>db_Name </td>
+      <td>Name of the database. If not specified, current database will be selected. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>Name of the table to be deleted. </td>
+      <td>NO </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+  DROP TABLE IF EXISTS productSchema.productSalesTable;
+</code></p><h2>COMPACTION</h2><p>This command merges the specified number of segments into one segment. This enhances the query performance of the table.</p><p><code>
+  ALTER TABLE [db_name.]table_name COMPACT &#39;MINOR/MAJOR&#39;;
+</code></p><p>To get details about Compaction refer to <a href="data-management.md">Data Management</a></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database.</td>
+      <td>NO </td>
+    </tr>
+  </tbody>
+</table><h3>Syntax</h3>
+<ul>
+  <li><strong>Minor Compaction</strong></li>
+</ul><p><code>
+ALTER TABLE table_name COMPACT &#39;MINOR&#39;;
+</code></p>
+<ul>
+  <li><strong>Major Compaction</strong></li>
+</ul><p><code>
+ALTER TABLE table_name COMPACT &#39;MAJOR&#39;;
+</code></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/dml-operation-on-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/dml-operation-on-carbondata.html b/src/main/webapp/docs/latest1/dml-operation-on-carbondata.html
new file mode 100644
index 0000000..17b0740
--- /dev/null
+++ b/src/main/webapp/docs/latest1/dml-operation-on-carbondata.html
@@ -0,0 +1,345 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>DML Operations on CarbonData</h1><p>This tutorial guides you through the data manipulation language support provided by CarbonData.</p><h2>Overview</h2><p>The following DML operations are supported in CarbonData :</p>
+<ul>
+  <li><a href="#load-data">LOAD DATA</a></li>
+  <li><a href="#insert-data-into-a-carbondata-table">INSERT DATA INTO A CARBONDATA TABLE</a></li>
+  <li><a href="#show-segments">SHOW SEGMENTS</a></li>
+  <li><a href="#delete-segment-by-id">DELETE SEGMENT BY ID</a></li>
+  <li><a href="#delete-segment-by-date">DELETE SEGMENT BY DATE</a></li>
+  <li><a href="#update-carbondata-table">UPDATE CARBONDATA TABLE</a></li>
+  <li><a href="#delete-records-from-carbondata-table">DELETE RECORDS FROM CARBONDATA TABLE</a></li>
+</ul><h2>LOAD DATA</h2><p>This command loads user data in raw format into the CarbonData-specific data format store, which allows CarbonData to provide good query performance. Please visit <a href="data-management.md">Data Management</a> for more details on LOAD.</p><h3>Syntax</h3><p><code>
+LOAD DATA [LOCAL] INPATH &#39;folder_path&#39; 
+INTO TABLE [db_name.]table_name 
+OPTIONS(property_name=property_value, ...)
+</code></p><p>OPTIONS is not mandatory for the data loading process. Inside OPTIONS the user can provide any of the options such as DELIMITER, QUOTECHAR, ESCAPECHAR and MULTILINE as required.</p><p>NOTE: The path shall be a canonical path.</p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>folder_path </td>
+      <td>Path of raw csv data folder or file. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>OPTIONS </td>
+      <td>Extra options provided to Load </td>
+      <td>YES </td>
+    </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3><p>You can use the following options to load data:</p>
+<ul>
+  <li><p><strong>DELIMITER:</strong> Delimiters can be provided in the load command.</p><p><code>
+OPTIONS(&#39;DELIMITER&#39;=&#39;,&#39;)
+</code></p></li>
+  <li><p><strong>QUOTECHAR:</strong> Quote Characters can be provided in the load command.</p><p><code>
+OPTIONS(&#39;QUOTECHAR&#39;=&#39;&quot;&#39;)
+</code></p></li>
+  <li><p><strong>COMMENTCHAR:</strong> Comment characters can be provided in the load command if the user wants to comment out lines.</p><p><code>
+OPTIONS(&#39;COMMENTCHAR&#39;=&#39;#&#39;)
+</code></p></li>
+  <li><p><strong>FILEHEADER:</strong> Headers can be provided in the LOAD DATA command if headers are missing in the source files.</p><p><code>
+OPTIONS(&#39;FILEHEADER&#39;=&#39;column1,column2&#39;) 
+</code></p></li>
+  <li><p><strong>MULTILINE:</strong> CSV with new line character in quotes.</p><p><code>
+OPTIONS(&#39;MULTILINE&#39;=&#39;true&#39;) 
+</code></p></li>
+  <li><p><strong>ESCAPECHAR:</strong> An escape character can be provided if the user wants strict validation of the escape character in the CSV.</p><p><code>
+OPTIONS(&#39;ESCAPECHAR&#39;=&#39;\&#39;) 
+</code></p></li>
+  <li><p><strong>COMPLEX_DELIMITER_LEVEL_1:</strong> Split the complex type data column in a row (eg., a$b$c --&gt; Array = {a,b,c}).</p><p><code>
+OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;) 
+</code></p></li>
+  <li><p><strong>COMPLEX_DELIMITER_LEVEL_2:</strong> Split the complex type nested data column in a row. Applies level_1 delimiter &amp; applies level_2 based on complex data type (eg., a:b$c:d --&gt; Array&gt; = {{a,b},{c,d}}).</p><p><code>
+OPTIONS(&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;) 
+</code></p></li>
+  <li><p><strong>ALL_DICTIONARY_PATH:</strong> All dictionary files path.</p><p><code>
+OPTIONS(&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;)
+</code></p></li>
+  <li><p><strong>COLUMNDICT:</strong> Dictionary file path for specified column.</p><p><code>
+OPTIONS(&#39;COLUMNDICT&#39;=&#39;column1:dictionaryFilePath1,
+column2:dictionaryFilePath2&#39;)
+</code></p><p>NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.</p></li>
+  <li><p><strong>DATEFORMAT:</strong> Date format for specified column.</p><p><code>
+OPTIONS(&#39;DATEFORMAT&#39;=&#39;column1:dateFormat1, column2:dateFormat2&#39;)
+</code></p><p>NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are same as in JAVA. Refer to <a href="http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html">SimpleDateFormat</a>.</p></li>
+</ul><h3>Example:</h3><p><code>
+LOAD DATA local inpath &#39;/opt/rawdata/data.csv&#39; INTO table carbontable
+options(&#39;DELIMITER&#39;=&#39;,&#39;, &#39;QUOTECHAR&#39;=&#39;&quot;&#39;,&#39;COMMENTCHAR&#39;=&#39;#&#39;,
+&#39;FILEHEADER&#39;=&#39;empno,empname,designation,doj,workgroupcategory,
+ workgroupcategoryname,deptno,deptname,projectcode,
+ projectjoindate,projectenddate,attendance,utilization,salary&#39;,
+&#39;MULTILINE&#39;=&#39;true&#39;,&#39;ESCAPECHAR&#39;=&#39;\&#39;,&#39;COMPLEX_DELIMITER_LEVEL_1&#39;=&#39;$&#39;, 
+&#39;COMPLEX_DELIMITER_LEVEL_2&#39;=&#39;:&#39;,
+&#39;ALL_DICTIONARY_PATH&#39;=&#39;/opt/alldictionary/data.dictionary&#39;
+)
+</code></p><h2>INSERT DATA INTO A CARBONDATA TABLE</h2><p>This command inserts data into a CarbonData table. It is defined as a combination of two queries: Insert and Select. It inserts records from a source table into a target CarbonData table. The source table can be a Hive table, a Parquet table or a CarbonData table itself. It can also aggregate the records of a table by performing a Select query on the source table and loading the resultant records into a CarbonData table.</p><p><strong>NOTE</strong>: The client node where the INSERT command is executed must be part of the cluster.</p><h3>Syntax</h3><p><code>
+INSERT INTO TABLE &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName 
+[ WHERE { &lt;filter_condition&gt; } ];
+</code></p><p>You can also omit the <code>table</code> keyword and write your query as:</p><p><code>
+INSERT INTO &lt;CARBONDATA TABLE&gt; SELECT * FROM sourceTableName 
+[ WHERE { &lt;filter_condition&gt; } ];
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CARBON TABLE </td>
+      <td>The name of the Carbon table in which you want to perform the insert operation. </td>
+    </tr>
+    <tr>
+      <td>sourceTableName </td>
+      <td>The table from which the records are read and inserted into destination CarbonData table. </td>
+    </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a successful insert operation:</p>
+<ul>
+  <li>The source table and the CarbonData table must have the same table schema.</li>
+  <li>The table must be created.</li>
+  <li>Overwrite is not supported for CarbonData table.</li>
+  <li>The data types of the source and destination table columns should be the same; otherwise, the data from the source table will be treated as bad records and the INSERT command fails.</li>
+  <li>The INSERT INTO command does not support partial success: if bad records are found, it fails.</li>
+  <li>Data cannot be loaded or updated in source table while insert from source table to target table is in progress.</li>
+</ul><p>To enable data load or update during insert operation, configure the following property to true.</p><p><code>
+carbon.insert.persist.enable=true
+</code></p><p>By default the above configuration will be false.</p><p><strong>NOTE</strong>: Enabling this property will reduce the performance.</p><h3>Examples</h3><p><code>
+INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM 
+table2 group by item1;
+</code></p><p><code>
+INSERT INTO table1 SELECT item1, item2, item3 FROM table2 
+where item2=&#39;xyz&#39;;
+</code></p><p><code>
+INSERT INTO table1 SELECT * FROM table2 
+where exists (select * from table3 
+where table2.item1 = table3.item1);
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log.</strong></p><h2>SHOW SEGMENTS</h2><p>This command is used to get the segments of CarbonData table.</p><p><code>
+SHOW SEGMENTS FOR TABLE [db_name.]table_name 
+LIMIT number_of_segments;
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>number_of_segments </td>
+      <td>Limit the output to this number. </td>
+      <td>YES </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4;
+</code></p><h2>DELETE SEGMENT BY ID</h2><p>This command is used to delete segment by using the segment ID. Each segment has a unique segment ID associated with it. Using this segment ID, you can remove the segment.</p><p>The following command will get the segmentID.</p><p><code>
+SHOW SEGMENTS FOR Table dbname.tablename LIMIT number_of_segments
+</code></p><p>After you retrieve the segment ID of the segment that you want to delete, execute the following command to delete the selected segment.</p><p><code>
+DELETE SEGMENT segment_sequence_id1, segments_sequence_id2, .... 
+FROM TABLE tableName
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>segment_id </td>
+      <td>Segment Id of the load. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+DELETE SEGMENT 0 FROM TABLE CarbonDatabase.CarbonTable;
+DELETE SEGMENT 0.1,5,8 FROM TABLE CarbonDatabase.CarbonTable;
+</code>  NOTE: Here 0.1 is a compacted segment sequence ID. </p><h2>DELETE SEGMENT BY DATE</h2><p>This command allows you to delete CarbonData segment(s) from the store based on the date provided by the user in the DML command. The segments created before the particular date will be removed from the store.</p><p><code>
+DELETE FROM TABLE [schema_name.]table_name 
+WHERE[DATE_FIELD]BEFORE [DATE_VALUE]
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Optional </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>DATE_VALUE </td>
+      <td>Valid segment load start time value. All the segments before this specified date will be deleted. </td>
+      <td>NO </td>
+    </tr>
+    <tr>
+      <td>db_name </td>
+      <td>Database name, if it is not specified then it uses the current database. </td>
+      <td>YES </td>
+    </tr>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the table in provided database. </td>
+      <td>NO </td>
+    </tr>
+  </tbody>
+</table><h3>Example:</h3><p><code>
+ DELETE SEGMENTS FROM TABLE CarbonDatabase.CarbonTable 
+ WHERE STARTTIME BEFORE &#39;2017-06-01 12:05:06&#39;;  
+</code></p><h2>Update CarbonData Table</h2><p>This command allows you to update a CarbonData table based on a column expression and optional filter conditions.</p><h3>Syntax</h3><p><code>
+ UPDATE &lt;table_name&gt;
+ SET (column_name1, column_name2, ... column_name n) =
+ (column1_expression , column2_expression . .. column n_expression )
+ [ WHERE { &lt;filter_condition&gt; } ];
+</code></p><p>Alternatively, the following command can also be used for updating the CarbonData table:</p><p><code>
+UPDATE &lt;table_name&gt;
+SET (column_name1, column_name2,) =
+(select sourceColumn1, sourceColumn2 from sourceTable
+[ WHERE { &lt;filter_condition&gt; } ] )
+[ WHERE { &lt;filter_condition&gt; } ];
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the Carbon table in which you want to perform the update operation. </td>
+    </tr>
+    <tr>
+      <td>column_name </td>
+      <td>The destination columns to be updated. </td>
+    </tr>
+    <tr>
+      <td>sourceColumn </td>
+      <td>The source table column values to be updated in destination table. </td>
+    </tr>
+    <tr>
+      <td>sourceTable </td>
+      <td>The table from which the records are updated into destination Carbon table. </td>
+    </tr>
+  </tbody>
+</table><h3>Usage Guidelines</h3><p>The following conditions must be met for a successful update operation:</p>
+<ul>
+  <li>The update command fails if multiple input rows in source table are matched with single row in destination table.</li>
+  <li>If the source table generates empty records, the update operation will complete successfully without updating the table.</li>
+  <li>If a source table row does not correspond to any of the existing rows in a destination table, the update operation will complete successfully without updating the table.</li>
+  <li>In sub-query, if the source table and the target table are same, then the update operation fails.</li>
+  <li>If the sub-query used in UPDATE statement contains aggregate method or group by query, then the UPDATE operation fails.</li>
+</ul><h3>Examples</h3><p>Update is not supported for queries that contain aggregate or group by.</p><p><code>
+ UPDATE t_carbn01 a
+ SET (a.item_type_code, a.profit) = ( SELECT b.item_type_cd,
+ sum(b.profit) from t_carbn01b b
+ WHERE item_type_cd =2 group by item_type_code);
+</code></p><p>Here the update operation fails because the query contains the aggregate function sum(b.profit) and a group by clause in the sub-query.</p><p><code>
+UPDATE carbonTable1 d
+SET(d.column3,d.column5 ) = (SELECT s.c33 ,s.c55
+FROM sourceTable1 s WHERE d.column1 = s.c11)
+WHERE d.column1 = &#39;china&#39; EXISTS( SELECT * from table3 o where o.c2 &gt; 1);
+</code></p><p><code>
+UPDATE carbonTable1 d SET (c3) = (SELECT s.c33 from sourceTable1 s
+WHERE d.column1 = s.c11)
+WHERE exists( select * from iud.other o where o.c2 &gt; 1);
+</code></p><p><code>
+UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5 , &quot;y&quot; ));
+</code></p><p><code>
+UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
+WHERE d.column1 = &#39;india&#39;;
+</code></p><p><code>
+UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, &quot;xyx&quot;)
+WHERE d.column1 = &#39;india&#39;
+and EXISTS( SELECT * FROM table3 o WHERE o.column2 &gt; 1);
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the client.</strong></p><h2>Delete Records from CarbonData Table</h2><p>This command allows us to delete records from CarbonData table.</p><h3>Syntax</h3><p><code>
+DELETE FROM table_name [WHERE expression];
+</code></p><h3>Parameter Description</h3>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>table_name </td>
+      <td>The name of the Carbon table in which you want to perform the delete. </td>
+    </tr>
+  </tbody>
+</table><h3>Examples</h3><p><code>
+DELETE FROM columncarbonTable1 d WHERE d.column1  = &#39;china&#39;;
+</code></p><p><code>
+DELETE FROM dest WHERE column1 IN (&#39;china&#39;, &#39;USA&#39;);
+</code></p><p><code>
+DELETE FROM columncarbonTable1
+WHERE column1 IN (SELECT column11 FROM sourceTable2);
+</code></p><p><code>
+DELETE FROM columncarbonTable1
+WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE
+column1 = &#39;USA&#39;);
+</code></p><p><code>
+DELETE FROM columncarbonTable1 WHERE column2 &gt;= 4
+</code></p><p><strong>The Status Success/Failure shall be captured in the driver log and the client.</strong></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/faq.html b/src/main/webapp/docs/latest1/faq.html
new file mode 100644
index 0000000..0645fea
--- /dev/null
+++ b/src/main/webapp/docs/latest1/faq.html
@@ -0,0 +1,26 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>FAQs</h1>
+<ul>
+  <li><p><strong>Auto Compaction not Working</strong></p><p>The property carbon.enable.auto.load.merge in carbon.properties needs to be set to true; a sample entry is shown after this list.</p></li>
+  <li><p><strong>Getting Abstract method error</strong></p><p>You need to specify the Spark version while using Maven to build the project.</p></li>
+  <li><p><strong>Getting NotImplementedException for subquery using IN and EXISTS</strong></p><p>Subqueries with IN and EXISTS are not supported in CarbonData.</p></li>
+  <li><p><strong>Getting Exceptions on creating a view</strong></p><p>Views are not supported in CarbonData.</p></li>
+  <li><p><strong>How to verify if ColumnGroups have been created as desired?</strong></p><p>Try using a DESC table query.</p></li>
+  <li><p><strong>Did anyone try to run CarbonData on Windows? Is it supported on Windows?</strong></p><p>We may provide support for Windows in the future. You are welcome to contribute if you want to add the support. :) </p></li>
+</ul>
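+<p>A minimal carbon.properties sketch for the auto compaction item above:</p><p><code>
+# enables auto compaction after data loads
+carbon.enable.auto.load.merge=true
+</code></p>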
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/installation-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/installation-guide.html b/src/main/webapp/docs/latest1/installation-guide.html
new file mode 100644
index 0000000..60b6685
--- /dev/null
+++ b/src/main/webapp/docs/latest1/installation-guide.html
@@ -0,0 +1,245 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Installation Guide</h1><p>This tutorial guides you through the installation and configuration of CarbonData in the following two modes :</p>
+<ul>
+  <li><a href="#installing-and-configuring-carbondata-on-standalone-spark-cluster">Installing and Configuring CarbonData on Standalone Spark Cluster</a></li>
+  <li><a href="#installing-and-configuring-carbondata-on-spark-on-yarn-cluster">Installing and Configuring CarbonData on ?Spark on YARN? Cluster</a></li>
+</ul><p>followed by :</p>
+<ul>
+  <li><a href="#query-execution-using-carbondata-thrift-server">Query Execution using CarbonData Thrift Server</a></li>
+</ul><h2>Installing and Configuring CarbonData on Standalone Spark Cluster</h2><h3>Prerequisites</h3>
+<ul>
+  <li><p>Hadoop HDFS and Yarn should be installed and running.</p></li>
+  <li><p>Spark should be installed and running on all the cluster nodes.</p></li>
+  <li><p>CarbonData user should have permission to access HDFS.</p></li>
+</ul><h3>Procedure</h3>
+<ul>
+  <li><p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project and get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar" and put in the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder.</p><p>NOTE: Create the carbonlib folder if it does not exists inside <code>&quot;&lt;SPARK_HOME&gt;&quot;</code> path.</p></li>
+  <li><p>Add the carbonlib folder path in the Spark classpath. (Edit <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-env.sh&quot;</code> file and modify the value of SPARK_CLASSPATH by appending <code>&quot;&lt;SPARK_HOME&gt;/carbonlib/*&quot;</code> to the existing value)</p></li>
+  <li><p>Copy carbon.properties.template from the "./conf/" folder of the CarbonData repository to <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>.</p></li>
+  <li><p>Copy the "carbonplugins" folder to <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder from "./processing/" folder of CarbonData repository.</p><p>NOTE: carbonplugins will contain .kettle folder.</p></li>
+  <li><p>In Spark node, configure the properties mentioned in the following table in <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code> file.</p></li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Value </th>
+      <th>Description </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>$SPARK_HOME /carbonlib/carbonplugins </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraJavaOptions </td>
+      <td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties </td>
+      <td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraJavaOptions </td>
+      <td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties </td>
+      <td>A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space. </td>
+    </tr>
+  </tbody>
+</table>
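+<p>For reference, a minimal sketch of the corresponding <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code> entries, assuming the default $SPARK_HOME layout described in the table above:</p><p><code>
+# values below are illustrative; adjust paths for your installation
+carbon.kettle.home               $SPARK_HOME/carbonlib/carbonplugins
+spark.driver.extraJavaOptions    -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties
+spark.executor.extraJavaOptions  -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties
+</code></p>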
+<ul>
+  <li>Add the following properties in <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>:</li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Required </th>
+      <th>Description </th>
+      <th>Example </th>
+      <th>Remark </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.storelocation </td>
+      <td>NO </td>
+      <td>Location where CarbonData will create the store and write the data in its own format. </td>
+      <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore </td>
+      <td>Propose to set HDFS directory </td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>YES </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td>$SPARK_HOME/carbonlib/carbonplugins </td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table>
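+<p>A minimal sketch of the resulting <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code> entries (the HDFS host, port and path are illustrative):</p><p><code>
+# HDFS host, port and store path are illustrative
+carbon.storelocation=hdfs://HOSTNAME:PORT/Opt/CarbonStore
+carbon.kettle.home=$SPARK_HOME/carbonlib/carbonplugins
+</code></p>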
+<ul>
+  <li>Verify the installation. For example:</li>
+</ul><p><code>
+   ./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 2
+   --executor-memory 2G
+</code></p><p>NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.</p><p>To get started with CarbonData : <a href="quick-start-guide.md">Quick Start</a> , <a href="ddl-operation-on-carbondata.md">DDL Operations on CarbonData</a></p><h2>Installing and Configuring CarbonData on "Spark on YARN" Cluster</h2><p>This section provides the procedure to install CarbonData on "Spark on YARN" cluster.</p><h3>Prerequisites</h3>
+<ul>
+  <li>Hadoop HDFS and Yarn should be installed and running.</li>
+  <li>Spark should be installed and running in all the clients.</li>
+  <li>CarbonData user should have permission to access HDFS.</li>
+</ul><h3>Procedure</h3><p>The following steps are only for Driver Nodes. (Driver nodes are the one which starts the spark context.)</p>
+<ul>
+  <li><p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project and get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar" and put in the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder.</p><p>NOTE: Create the carbonlib folder if it does not exists inside <code>&quot;&lt;SPARK_HOME&gt;&quot;</code> path.</p></li>
+  <li><p>Copy "carbonplugins" folder to <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder from "./processing/" folder of CarbonData repository.  carbonplugins will contain .kettle folder.</p></li>
+  <li><p>Copy the "carbon.properties.template" to <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code> folder from conf folder of CarbonData repository.</p></li>
+  <li>Modify the parameters in "spark-default.conf" located in the <code>&quot;&lt;SPARK_HOME&gt;/conf</code>"</li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Description </th>
+      <th>Value </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>spark.master </td>
+      <td>Set this value to run the Spark in yarn cluster mode. </td>
+      <td>Set "yarn-client" to run the Spark in yarn cluster mode. </td>
+    </tr>
+    <tr>
+      <td>spark.yarn.dist.files </td>
+      <td>Comma-separated list of files to be placed in the working directory of each executor. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>spark.yarn.dist.archives </td>
+      <td>Comma-separated list of archives to be extracted into the working directory of each executor. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to executors, for instance GC settings or other logging. NOTE: You can enter multiple values separated by space. </td>
+      <td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraClassPath </td>
+      <td>Extra classpath entries to prepend to the classpath of executors. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the values in below parameter spark.driver.extraClassPath </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraClassPath </td>
+      <td>Extra classpath entries to prepend to the classpath of the driver. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the value in below parameter spark.driver.extraClassPath. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. </td>
+      <td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbonplugins</code> </td>
+    </tr>
+  </tbody>
+</table>
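+<p>A minimal sketch of the corresponding <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code> entries for the "Spark on YARN" mode (paths and the jar name are illustrative):</p><p><code>
+# paths and jar name are illustrative; adjust for your installation
+spark.master                     yarn-client
+spark.yarn.dist.files            &lt;YOUR_SPARK_HOME_PATH&gt;/conf/carbon.properties
+spark.yarn.dist.archives         &lt;YOUR_SPARK_HOME_PATH&gt;/carbonlib/carbondata_xxx.jar
+spark.executor.extraJavaOptions  -Dcarbon.properties.filepath=&lt;YOUR_SPARK_HOME_PATH&gt;/conf/carbon.properties
+spark.executor.extraClassPath    &lt;YOUR_SPARK_HOME_PATH&gt;/carbonlib/carbondata_xxx.jar
+spark.driver.extraClassPath      &lt;YOUR_SPARK_HOME_PATH&gt;/carbonlib/carbondata_xxx.jar
+spark.driver.extraJavaOptions    -Dcarbon.properties.filepath=&lt;YOUR_SPARK_HOME_PATH&gt;/conf/carbon.properties
+carbon.kettle.home               &lt;YOUR_SPARK_HOME_PATH&gt;/carbonlib/carbonplugins
+</code></p>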
+<ul>
+  <li>Add the following properties in <code>&lt;SPARK_HOME&gt;/conf/carbon.properties</code>:</li>
+</ul>
+<table>
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Required </th>
+      <th>Description </th>
+      <th>Example </th>
+      <th>Default Value </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.storelocation </td>
+      <td>NO </td>
+      <td>Location where CarbonData will create the store and write the data in its own format. </td>
+      <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore </td>
+      <td>Propose to set HDFS directory</td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>YES </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td>$SPARK_HOME/carbonlib/carbonplugins </td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table>
+<ul>
+  <li>Verify the installation.</li>
+</ul><p><code>
+     ./bin/spark-shell --master yarn-client --driver-memory 1g 
+     --executor-cores 2 --executor-memory 2G
+</code>  NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.</p><p>Getting started with CarbonData : <a href="quick-start-guide.md">Quick Start</a> , <a href="ddl-operation-on-carbondata.md">DDL Operations on CarbonData</a></p><h2>Query Execution Using CarbonData Thrift Server</h2><h3>Starting CarbonData Thrift Server</h3><p>a. cd <code>&lt;SPARK_HOME&gt;</code></p><p>b. Run the following command to start the CarbonData thrift server.</p><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
+$SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR &lt;carbon_store_path&gt;
+</code></p>
+<table>
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Example </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CARBON_ASSEMBLY_JAR </td>
+      <td>CarbonData assembly jar name present in the <code>&quot;&lt;SPARK_HOME&gt;&quot;/carbonlib/</code> folder. </td>
+      <td>carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar </td>
+    </tr>
+    <tr>
+      <td>carbon_store_path </td>
+      <td>This is a parameter to the CarbonThriftServer class. It is an HDFS path where CarbonData files will be kept. It is strongly recommended to set this to the same value as the carbon.storelocation parameter in carbon.properties. </td>
+      <td><code>hdfs://&lt;host_name&gt;:54310/user/hive/warehouse/carbon.store</code> </td>
+    </tr>
+  </tbody>
+</table><h3>Examples</h3>
+<ul>
+  <li>Start with default memory and executors</li>
+</ul><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
+$SPARK_HOME/carbonlib
+/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
+hdfs://hacluster/user/hive/warehouse/carbon.store
+</code></p>
+<ul>
+  <li>Start with Fixed executors and resources</li>
+</ul><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
+--num-executors 3 --driver-memory 20g --executor-memory 250g 
+--executor-cores 32 
+/srv/OSCON/BigData/HACluster/install/spark/sparkJdbc/lib
+/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
+hdfs://hacluster/user/hive/warehouse/carbon.store
+</code></p><h3>Connecting to CarbonData Thrift Server Using Beeline</h3><p><code>
+cd &lt;SPARK_HOME&gt;
+./bin/beeline jdbc:hive2://&lt;thriftserver_host&gt;:port
+</code></p><p>Example:</p><p><code>
+./bin/beeline jdbc:hive2://10.10.10.10:10000
+</code></p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/overview-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/overview-of-carbondata.html b/src/main/webapp/docs/latest1/overview-of-carbondata.html
new file mode 100644
index 0000000..a31f9d6
--- /dev/null
+++ b/src/main/webapp/docs/latest1/overview-of-carbondata.html
@@ -0,0 +1,124 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Overview</h1><p>This tutorial provides a detailed overview about :</p>
+<ul>
+  <li><a href="#introduction">Introduction</a></li>
+  <li><a href="#carbondata-file-structure">CarbonData File Structure</a></li>
+  <li><a href="#features">Features</a></li>
+  <li><a href="#data-types">Data Types</a></li>
+  <li><a href="#interfaces">Interfaces</a></li>
+</ul><h2>Introduction</h2><p>CarbonData is a fully indexed columnar and Hadoop native data-store for processing heavy analytical workloads and detailed queries on big data. CarbonData allows faster interactive queries using advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps speed up queries by an order of magnitude over petabytes of data.</p><p>In customer benchmarks, CarbonData has proven to manage petabytes of data running on extraordinarily low-cost hardware and answers queries around 10 times faster than the current open source solutions (column-oriented SQL on Hadoop data-stores).</p><p>Some of the salient features of CarbonData are:</p>
+<ul>
+  <li>Low-Latency for various types of data access patterns like Sequential, Random and OLAP.</li>
+  <li>Fast query on fast data.</li>
+  <li>Space efficiency.</li>
+  <li>General format available on Hadoop-ecosystem.</li>
+</ul><h2>CarbonData File Structure</h2><p>CarbonData files contain groups of data called blocklets, along with all required information such as schema, offsets and indices, in a file footer, co-located in HDFS.</p><p>The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries.</p><p>Each blocklet in the file is further divided into chunks of data called data chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in a file contain the same number and type of data chunks.</p><p><img src="../../../src/site/markdown/images/carbon_data_file_structure_new.png?raw=true" alt="CarbonData File Structure" /></p><p>Each data chunk contains multiple groups of data called pages. There are three types of pages:</p>
+<ul>
+  <li>Data Page: Contains the encoded data of a column/group of columns.</li>
+  <li>Row ID Page (optional): Contains the row ID mappings used when the data page is stored as an inverted index.</li>
+  <li>RLE Page (optional): Contains additional metadata used when the data page is RLE coded.</li>
+</ul><p><img src="../../../src/site/markdown/images/carbon_data_format_new.png?raw=true" alt="CarbonData File Format" /></p><h2>Features</h2><p>The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format has, such as splittable files, compression schemes and complex data types. In addition, CarbonData has the following unique features:</p>
+<ul>
+  <li><p>Unique Data Organization: Though CarbonData stores data in columnar format, it differs from traditional columnar formats as the columns in each row group (Data Block) are sorted independently of the other columns. Though this arrangement requires CarbonData to store the row-number mapping against each column value, it makes it possible to use binary search for faster filtering, and since the values are sorted, same/similar values come together, which yields better compression and offsets the storage overhead required by the row number mapping.</p></li>
+  <li><p>Advanced Push Down Optimizations: CarbonData pushes as much of query processing as possible close to the data to minimize the amount of data being read, processed, converted and transmitted/shuffled. Using projections and filters it reads only the required columns from the store and also reads only the rows that match the filter conditions provided in the query.</p></li>
+  <li><p>Multi Level Indexing: CarbonData uses multiple indices at various levels to enable faster search and speed up query processing.</p></li>
+  <li><p>Global Multi Dimensional Keys (MDK) based B+Tree Index for all non-measure columns: Aids in quickly locating the row groups (Data Blocks) that contain the data matching search/filter criteria.</p></li>
+  <li><p>Min-Max Index for all columns: Aids in quickly locating the row groups (Data Blocks) that contain the data matching search/filter criteria.</p></li>
+  <li><p>Data Block level Inverted Index for all columns: Aids in quickly locating the rows that contain the data matching search/filter criteria within a row group (Data Block).</p></li>
+  <li><p>Dictionary Encoding: Most databases and big data SQL data stores employ columnar encoding to achieve data compression by storing small integer numbers (surrogate values) instead of full string values. However, almost all existing databases and data stores divide the data into row groups containing anywhere from a few thousand to a million rows and employ dictionary encoding only within each row group. Hence, the same column value can have different surrogate values in different row groups, and while reading the data, conversion from surrogate value to actual value needs to be done immediately after the data is read from disk. CarbonData instead employs a global surrogate key, which means that a common dictionary is maintained for the full store on one machine/node. CarbonData can therefore perform all query processing work, such as grouping/aggregation and sorting, on lightweight surrogate values; the conversion from surrogate to actual values needs to be done only on the final result. This improves performance in two ways: conversion from surrogate values to actual values is done only for the final result rows, which are far fewer than the rows read from the store, and all query processing and computation such as grouping/aggregation and sorting is done on lightweight surrogate values, which require less memory and CPU time than actual values.</p></li>
+  <li><p>Deep Spark Integration: It has built-in integration for Spark 1.5 and 1.6, with interfaces for Spark SQL, the DataFrame API and query optimization. It supports bulk data ingestion and allows saving of Spark DataFrames as CarbonData files.</p></li>
+  <li><p>Update and Delete Support: It supports batch updates, such as daily update scenarios for OLAP, using a Base+Delta file based design.</p></li>
+  <li><p>Store data along with index: Significantly accelerates query performance and reduces I/O scans and CPU resources when there are filters in the query. The CarbonData index consists of multiple levels of indices; a processing framework can leverage this index to reduce the tasks it needs to schedule and process, and can also do skip scans at a finer grain (the blocklet) in task-side scanning instead of scanning the whole file.</p></li>
+  <li><p>Operable encoded data: It supports efficient compression and global encoding schemes and can query on compressed/encoded data. The data can be converted just before returning the results to the users, which is "late materialized".</p></li>
+  <li><p>Column group: Allows multiple columns to form a column group that would be stored as row format. This reduces the row reconstruction cost at query time.</p></li>
+  <li><p>Support for various use cases with one single Data format: Examples are interactive OLAP-style query, Sequential Access (big scan) and Random Access (narrow scan).</p></li>
+</ul><h2>Data Types</h2><h4>CarbonData supports the following data types:</h4>
+<ul>
+  <li><p>Numeric Types</p>
+  <ul>
+    <li>SMALLINT</li>
+    <li>INT/INTEGER</li>
+    <li>BIGINT</li>
+    <li>DOUBLE</li>
+    <li>DECIMAL</li>
+  </ul></li>
+  <li><p>Date/Time Types</p>
+  <ul>
+    <li>TIMESTAMP</li>
+  </ul></li>
+  <li><p>String Types</p>
+  <ul>
+    <li>STRING</li>
+  </ul></li>
+  <li><p>Complex Types</p>
+  <ul>
+    <li>arrays: ARRAY<code>&lt;data_type&gt;</code></li>
+    <li>structs: STRUCT<code>&lt;col_name : data_type COMMENT col_comment, ...&gt;</code></li>
+  </ul></li>
+</ul><h2>Interfaces</h2><h4>API</h4><p>CarbonData can be used in the following scenarios:</p>
+<ul>
+  <li>For MapReduce application user</li>
+</ul><p>This user API is provided by carbon-hadoop. In this scenario, users can process CarbonData files in their MapReduce applications by choosing CarbonInput/OutputFormat, and are responsible for using it correctly. Currently only CarbonInputFormat is provided; an OutputFormat will be provided soon.</p>
+<ul>
+  <li>For Spark user</li>
+</ul><p>This user API is provided by Spark itself. There are two levels of APIs:</p>
+<ul>
+  <li><p><strong>CarbonData File</strong></p><p>Similar to parquet, json, or other data sources in Spark, CarbonData can be used with the data source API. For example (please refer to DataFrameAPIExample for more detail):</p></li>
+</ul><pre><code>  // User can create a DataFrame from any data source
+  // or transformation.
+  val df = ...
+
+  // Write data
+  // User can write a DataFrame to a CarbonData file
+  df.write
+  .format(&quot;carbondata&quot;)
+  .option(&quot;tableName&quot;, &quot;carbontable&quot;)
+  .mode(SaveMode.Overwrite)
+  .save()
+
+
+  // read CarbonData back through the data source API
+  val carbonDf = carbonContext.read
+  .format(&quot;carbondata&quot;)
+  .option(&quot;tableName&quot;, &quot;carbontable&quot;)
+  .load(&quot;/path&quot;)
+
+  // User can then use the DataFrame for analysis
+  carbonDf.count
+  SVMWithSGD.train(carbonDf, numIterations)
+
+  // User can also register the DataFrame with a table name,
+  // and use SQL for analysis
+  carbonDf.registerTempTable(&quot;t1&quot;)  // register a temporary table
+                                    // in the SparkSQL catalog
+  carbonDf.registerHiveTable(&quot;t2&quot;)  // or, use an implicit function
+                                    // to register to the Hive metastore
+  sqlContext.sql(&quot;select count(*) from t1&quot;).show
+</code></pre>
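+<p>The <code>val df = ...</code> above is elided in the example. As one hedged possibility, a small in-memory DataFrame can be built directly in the shell; this sketch assumes the same carbonContext (any SQLContext's implicits would work) and uses illustrative data:</p>
+<pre><code>  // build a small DataFrame to feed the write example above (illustrative data)
+  import carbonContext.implicits._
+  val df = sc.parallelize(Seq(
+    (&quot;david&quot;, &quot;shenzhen&quot;, 31),
+    (&quot;eason&quot;, &quot;shenzhen&quot;, 27)
+  )).toDF(&quot;name&quot;, &quot;city&quot;, &quot;age&quot;)
+</code></pre>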
+<ul>
+  <li><p><strong>Managed CarbonData Table</strong></p><p>CarbonData has built-in support for high level concepts like Table and Database, and supports full data lifecycle management. Instead of dealing with just files, users can use CarbonData-specific DDL to manipulate data at the Table and Database level. Please refer to <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DDL">DDL</a> and <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DML">DML</a>.</p></li>
+</ul><p><code>
+      // Use SQL to manage table and query data
+      create database db1;
+      use db1;
+      show databases;
+      create table tbl1 using org.apache.carbondata.spark;
+      load data into table tbl1 path &#39;some_files&#39;;
+      select count(*) from tbl1;
+</code></p>
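+<p>The same managed-table statements can also be issued from a Spark shell through the SQL API. The following is a minimal sketch, assuming the carbonContext used in the DataFrame example above and the tbl1 table created by the DDL shown here:</p>
+<pre><code>  // run the managed-table SQL programmatically
+  carbonContext.sql(&quot;use db1&quot;)
+  carbonContext.sql(&quot;select count(*) from tbl1&quot;).show
+  // list the loads (segments) of the table
+  carbonContext.sql(&quot;show segments for table tbl1&quot;).show
+</code></pre>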
+<ul>
+  <li><p>For developers who want to integrate CarbonData into processing engines like Spark, Hive or Flink, use the APIs provided by carbon-hadoop and carbon-processing:</p>
+  <ul>
+    <li><strong>Query</strong> : Integrate carbon-hadoop with engine specific API, like spark data source API.</li>
+  </ul>
+  <ul>
+    <li><strong>Data life cycle management</strong> : CarbonData provides utility functions in carbon-processing to manage the data life cycle, such as data loading, compaction, retention and schema evolution. Developers can implement DDLs of their choice and leverage these utility functions for data life cycle management.</li>
+  </ul></li>
+</ul>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/quick-start-guide.html b/src/main/webapp/docs/latest1/quick-start-guide.html
new file mode 100644
index 0000000..67c283a
--- /dev/null
+++ b/src/main/webapp/docs/latest1/quick-start-guide.html
@@ -0,0 +1,103 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Quick Start</h1><p>This tutorial provides a quick introduction to using CarbonData.</p><h2>Getting started with Apache CarbonData</h2>
+<ul>
+  <li><a href="#installation">Installation</a></li>
+  <li><a href="#prerequisites">Prerequisites</a></li>
+  <li><a href="#interactive-analysis-with-spark-shell">Interactive Analysis with Spark Shell Version 2.1</a></li>
+  <li>Basics</li>
+  <li>Executing Queries
+  <ul>
+    <li>Creating a Table</li>
+    <li>Loading Data to a Table</li>
+    <li>Query Data from a Table</li>
+  </ul></li>
+  <li>Interactive Analysis with Spark Shell Version 1.6</li>
+  <li>Basics</li>
+  <li>Executing Queries
+  <ul>
+    <li>Creating a Table</li>
+    <li>Loading Data to a Table</li>
+    <li>Query Data from a Table</li>
+  </ul></li>
+  <li><a href="#building-carbondata">Building CarbonData</a></li>
+</ul><h2>Installation</h2>
+<ul>
+  <li>Download a released package of <a href="http://spark.apache.org/downloads.html">Spark 1.6.2 or 2.1.0</a>.</li>
+  <li>Download and install <a href="http://thrift-tutorial.readthedocs.io/en/latest/installation.html">Apache Thrift 0.9.3</a>, and make sure Thrift is added to the system path.</li>
+  <li>Download <a href="https://github.com/apache/incubator-carbondata">Apache CarbonData code</a> and build it. Please visit <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Building CarbonData And IDE Configuration</a> for more information.</li>
+</ul><h2>Prerequisites</h2>
+<ul>
+  <li>Create a sample.csv file using the following commands. The CSV file is required for loading data into CarbonData.</li>
+</ul><p><code>
+$ cd carbondata
+$ cat &gt; sample.csv &lt;&lt; EOF
+  id,name,city,age
+  1,david,shenzhen,31
+  2,eason,shenzhen,27
+  3,jarry,wuhan,35
+  EOF
+</code></p><h2>Interactive Analysis with Spark Shell</h2><h2>Version 2.1</h2><p>Apache Spark Shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. Please visit <a href="http://spark.apache.org/docs/latest/">Apache Spark Documentation</a> for more details on Spark shell.</p><h4>Basics</h4><p>Start Spark shell by running the following command in the Spark directory:</p><p><code>
+./bin/spark-shell --jars &lt;carbondata jar path&gt;
+</code></p><p>In this shell, SparkSession is readily available as 'spark' and Spark context is readily available as 'sc'.</p><p>In order to create a CarbonSession we will have to configure it explicitly in the following manner :</p>
+<ul>
+  <li>Import the following :</li>
+</ul><p><code>
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.CarbonSession._
+</code></p>
+<ul>
+  <li>Create a CarbonSession :</li>
+</ul><p><code>
+val carbon = SparkSession.builder()
+             .config(sc.getConf)
+             .getOrCreateCarbonSession()
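+// A store location can also be passed explicitly when building the session,
+// e.g. (a sketch assuming the store-path overload of getOrCreateCarbonSession):
+// val carbon = SparkSession.builder().config(sc.getConf)
+//              .getOrCreateCarbonSession(&quot;hdfs://hacluster/user/hive/warehouse/carbon.store&quot;)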
+</code></p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+scala&gt;carbon.sql(&quot;create table if not exists test_table
+                (id string, name string, city string, age Int)
+                STORED BY &#39;carbondata&#39;&quot;)
+</code></p><h5>Loading Data to a Table</h5><p><code>
+scala&gt;carbon.sql(s&quot;load data inpath
+&#39;${new java.io.File(&quot;../carbondata/sample.csv&quot;).getCanonicalPath}&#39;
+into table test_table&quot;)
+</code></p><h5>Query Data from a Table</h5><p><code>
+scala&gt;carbon.sql(&quot;select * from test_table&quot;).show
+scala&gt;carbon.sql(&quot;select city, avg(age), sum(age)
+from test_table group by city&quot;).show
+</code></p><h2>Interactive Analysis with Spark Shell</h2><h2>Version 1.6</h2><h4>Basics</h4><p>Start Spark shell by running the following command in the Spark directory:</p><p><code>
+./bin/spark-shell --jars &lt;carbondata jar path&gt;
+</code></p><p>NOTE: In this shell, SparkContext is readily available as sc.</p>
+<ul>
+  <li>In order to execute the Queries we need to import CarbonContext:</li>
+</ul><p><code>
+import org.apache.spark.sql.CarbonContext
+</code></p>
+<ul>
+  <li>Create an instance of CarbonContext in the following manner :</li>
+</ul><p><code>
+val cc = new CarbonContext(sc)
+</code></p><p>NOTE: By default the store location points to "../carbon.store"; users can provide their own store location to CarbonContext, for example new CarbonContext(sc, storeLocation).</p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+scala&gt;cc.sql(&quot;create table if not exists test_table
+(id string, name string, city string, age Int) STORED BY &#39;carbondata&#39;&quot;)
+</code> To see the table created :</p><p><code>
+scala&gt;cc.sql(&quot;show tables&quot;).show
+</code></p><h5>Loading Data to a Table</h5><p><code>
+scala&gt;cc.sql(s&quot;load data inpath
+&#39;${new java.io.File(&quot;../carbondata/sample.csv&quot;).getCanonicalPath}&#39;
+into table test_table&quot;)
+</code></p><h5>Query Data from a Table</h5><p><code>
+scala&gt;cc.sql(&quot;select * from test_table&quot;).show
+scala&gt;cc.sql(&quot;select city, avg(age), sum(age)
+from test_table group by city&quot;).show
+</code></p><h2>Building CarbonData</h2><p>To get started, get CarbonData from the <a href="http://carbondata.incubator.apache.org/">downloads</a> section of the <a href="http://carbondata.incubator.apache.org/">Apache CarbonData website</a>. CarbonData uses Hadoop's client libraries for HDFS and YARN, as well as Spark's libraries. Downloads are pre-packaged for a handful of popular Spark versions.</p><p>If you'd like to build CarbonData from source, visit <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Building CarbonData And IDE Configuration</a>.</p>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/table-of-content.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/table-of-content.html b/src/main/webapp/docs/latest1/table-of-content.html
new file mode 100644
index 0000000..04ccd79
--- /dev/null
+++ b/src/main/webapp/docs/latest1/table-of-content.html
@@ -0,0 +1,53 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Table of Contents</h1>
+<ul>
+  <li><a href="quick-start-guide.md">Quick Start</a>
+  <ul>
+    <li><a href="">Getting started with Apache CarbonData</a></li>
+  </ul></li>
+  <li><a href="user-guide-toc.md">User Guide</a>
+  <ul>
+    <li><a href="overview-of-carbondata.md">Overview</a></li>
+    <li>Introduction</li>
+    <li>CarbonData File Structure</li>
+    <li>Features</li>
+    <li>Data Types</li>
+    <li>Interfaces</li>
+    <li><a href="installation-guide.md">Installation Guide</a></li>
+    <li>Installing and Configuring CarbonData on Standalone Spark Cluster</li>
+    <li>Installing and Configuring CarbonData on "Spark on YARN" Cluster</li>
+    <li><a href="configuration-parameters.md">Configuring CarbonData</a></li>
+    <li>System Configuration</li>
+    <li>Performance Configuration</li>
+    <li>Miscellaneous Configuration</li>
+    <li>Spark Configuration</li>
+    <li><a href="using-carbondata.md">Using CarbonData</a></li>
+    <li><a href="data-management.md">Data Management</a></li>
+    <li><a href="ddl-operation-on-carbondata.md">DDL Operations on CarbonData</a></li>
+    <li><a href="dml-operation-on-carbondata.md">DML Operations on CarbonData</a></li>
+  </ul></li>
+  <li><a href="useful-tips-on-carbondata.md">Useful Tips</a>
+  <ul>
+    <li>Suggestion to create CarbonData Table</li>
+    <li>Configurations for Optimizing CarbonData Performance</li>
+  </ul></li>
+  <li><a href="use-cases-of-carbondata.md">CarbonData Use Cases</a></li>
+  <li><a href="troubleshooting.md">Troubleshooting</a></li>
+  <li><a href="faq.md">FAQs</a></li>
+</ul>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/troubleshooting.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/troubleshooting.html b/src/main/webapp/docs/latest1/troubleshooting.html
new file mode 100644
index 0000000..995e0f0
--- /dev/null
+++ b/src/main/webapp/docs/latest1/troubleshooting.html
@@ -0,0 +1,22 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Troubleshooting</h1><p>This tutorial provides troubleshooting guidance for end users and developers who are building, deploying, and using CarbonData.</p><h3>General Prevention and Best Practices</h3>
+<ul>
+  <li><p>When trying to create a table with a single numeric column, table creation fails: at least one column that can be treated as a dimension is mandatory for table creation (see the example after this list).</p></li>
+  <li><p>"Files locked for updation" when the same table is accessed from two or more instances: Remove metastore_db from the examples folder.</p></li>
+</ul>
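+<p>For example, the following table definition succeeds because it includes a string column that can act as a dimension alongside the numeric column. This is a hedged sketch in the quick-start style; the table and column names are illustrative:</p>
+<pre><code>  // a string (dimension) column plus a numeric column: table creation succeeds
+  cc.sql(&quot;create table if not exists sales_ok &quot; +
+         &quot;(id string, amount int) STORED BY &#39;carbondata&#39;&quot;)
+</code></pre>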
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest1/use-cases-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest1/use-cases-of-carbondata.html b/src/main/webapp/docs/latest1/use-cases-of-carbondata.html
new file mode 100644
index 0000000..c14e95f
--- /dev/null
+++ b/src/main/webapp/docs/latest1/use-cases-of-carbondata.html
@@ -0,0 +1,49 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>CarbonData Use Cases</h1><p>This tutorial discusses the problems that CarbonData addresses and takes you through the identified top use cases of CarbonData.</p><h2>Introduction</h2><p>For big data interactive analysis scenarios, many customers expect sub-second responses when querying TB-PB scale data on general-purpose hardware clusters with just a few nodes.</p><p>In the current big data ecosystem, there are a few columnar storage formats, such as ORC and Parquet, that are designed for SQL on big data. Apache Hive's ORC format is a columnar storage format with basic indexing capability. However, ORC cannot meet the sub-second query response expectation on TB-level data, as it performs only stride-level dictionary encoding and all analytical operations such as filtering and aggregation are done on the actual data. Apache Parquet is a columnar storage format that can improve performance in comparison to ORC due to its more efficient storage organization. Though Parquet can provide query responses on TB-level data in a few seconds, it is still far from the sub-second expectation of interactive analysis users. Cloudera Kudu can effectively solve some query performance issues, but Kudu is not Hadoop native and cannot seamlessly integrate historic HDFS data into a new Kudu system.</p><p>CarbonData, in contrast, uses specially engineered optimizations targeted at analytical queries that include filters, aggregations and distinct counts. Because the required data is stored in an indexed, well-organized, read-optimized format, CarbonData's query performance can achieve sub-second response.</p><h2>Motivation: Single Format to Provide Low Latency Response for all Use Cases</h2><p>The main motivation behind CarbonData is to provide a single storage format for all the use cases of querying big data on Hadoop. Thus CarbonData is able to cover all use cases with a single storage format.</p><p><img src="../../../src/site/markdown/images/carbon_data_motivation.png?raw=true" alt="Motivation" /></p><h2>Use Cases</h2><h3>Sequential Access</h3>
+<ul>
+  <li>Supports queries that select only a few columns with a group by clause but do not contain any filters. This results in a full scan over the complete store for the selected columns.</li>
+</ul><p><img src="../../../src/site/markdown/images/carbon_data_full_scan.png?raw=true" alt="Sequential_Scan" /></p><p><strong>Scenario</strong></p>
+<ul>
+  <li>ETL jobs</li>
+  <li>Log Analysis</li>
+</ul><h3>Random Access</h3>
+<ul>
+  <li>Supports point queries. These are queries used from operational applications; they usually select all or most of the columns and involve a large number of filters which reduce the result to a small size. Such queries generally do not involve any aggregation or group by clause.
+  <ul>
+    <li>Row-key query(like HBase)</li>
+    <li>Narrow Scan</li>
+    <li>Requires second/sub-second level low latency</li>
+  </ul></li>
+</ul><p><img src="../../../src/site/markdown/images/carbon_data_random_scan.png?raw=true" alt="random_access" /></p><p><strong>Scenario</strong></p>
+<ul>
+  <li>Operational Query</li>
+  <li>User Profiling</li>
+</ul><h3>OLAP Style Query</h3>
+<ul>
+  <li>Supports interactive data analysis on any dimension. These are queries which are typically fired from interactive analysis tools. Such queries often select a few columns and involve filters and a group by on a column or a grouping expression. It also supports queries that:
+  <ul>
+    <li>Involves aggregation/join</li>
+    <li>Roll-up,Drill-down,Slicing and Dicing</li>
+    <li>Low-latency ad-hoc query</li>
+  </ul></li>
+</ul><p><img src="../../../src/site/markdown/images/carbon_data_olap_scan.png?raw=true" alt="Olap_style_query" /></p><p><strong>Scenario</strong></p>
+<ul>
+  <li>Dashboard reporting</li>
+  <li>Fraud &amp; Ad-hoc Analysis</li>
+</ul>
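+<p>To make these access patterns concrete, the hedged sketch below shows one representative query of each shape issued from a Spark shell; the session name (carbon) follows the Quick Start, and the table and column names (fact, msisdn, city, sales) are purely illustrative:</p>
+<pre><code>  // Sequential Access: a few columns, group by, no filter (full scan)
+  carbon.sql(&quot;select city, sum(sales) from fact group by city&quot;).show
+
+  // Random Access: most columns, many filters, very small result (narrow scan)
+  carbon.sql(&quot;select * from fact where msisdn = &#39;13800000000&#39; and city = &#39;shenzhen&#39;&quot;).show
+
+  // OLAP-style query: filter plus group by on a dimension
+  carbon.sql(&quot;select city, count(*) from fact where sales &gt; 100 group by city&quot;).show
+</code></pre>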
\ No newline at end of file

