carbondata-commits mailing list archives

From chenliang...@apache.org
Subject [09/13] incubator-carbondata-site git commit: Updated website for CarbonData V 1.0
Date Thu, 19 Jan 2017 23:14:59 GMT
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/faq.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/faq.html b/src/main/webapp/docs/latest/faq.html
index cde4194..0645fea 100644
--- a/src/main/webapp/docs/latest/faq.html
+++ b/src/main/webapp/docs/latest/faq.html
@@ -1,67 +1,26 @@
-<!DOCTYPE html><html><head><meta charset="utf-8"><title>Untitled Document.md</title><style>
-
-</style></head><body id="preview">
 <!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-“License”); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
 
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-&quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-    
-<h1><a id="FAQs_0"></a>FAQs</h1>
-<ul>
-<li>
-<p><strong>Auto Compaction not Working</strong></p>
-<p>The Property carbon.enable.auto.load.merge in carbon.properties need to be set to true.</p>
-</li>
-<li>
-<p><strong>Getting Abstract method error</strong></p>
-<p>You need to specify the spark version while using Maven to build project.</p>
-</li>
-<li>
-<p><strong>Getting NotImplementedException for subquery using IN and EXISTS</strong></p>
-<p>Subquery with in and exists not supported in CarbonData.</p>
-</li>
-<li>
-<p><strong>Getting Exceptions on creating  a view</strong></p>
-<p>View not supported in CarbonData.</p>
-</li>
-<li>
-<p><strong>How to verify if ColumnGroups have been created as desired.</strong></p>
-<p>Try using desc table query.</p>
-</li>
-<li>
-<p><strong>Did anyone try to run CarbonData on windows? Is it supported on Windows?</strong></p>
-<p>We may provide support for windows in future. You are welcome to contribute if you want to add the support :)</p>
-</li>
-</ul>
-<!--
-<center>
-  <b><a href="#top">Top</a></b>
-</center>-->
+      http://www.apache.org/licenses/LICENSE-2.0
 
-<script type="text/javascript">
- $('a[href*="#"]:not([href="#"])').click(function() {
-   if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
-    var target = $(this.hash);
-    target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
-    if (target.length) 
-        { $('html, body').animate({ scrollTop: target.offset().top - 52 },100);
-          return false;
-        }
-     }
-  });
-</script>
-
-</body></html>
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>FAQs</h1>
+<ul>
+  <li><p><strong>Auto Compaction not Working</strong></p><p>The property carbon.enable.auto.load.merge in carbon.properties needs to be set to true.</p></li>
+  <li><p><strong>Getting Abstract method error</strong></p><p>You need to specify the Spark version when using Maven to build the project.</p></li>
+  <li><p><strong>Getting NotImplementedException for subquery using IN and EXISTS</strong></p><p>Subqueries using IN and EXISTS are not supported in CarbonData.</p></li>
+  <li><p><strong>Getting Exceptions on creating a view</strong></p><p>Views are not supported in CarbonData.</p></li>
+  <li><p><strong>How to verify whether ColumnGroups have been created as desired?</strong></p><p>Run a DESC query on the table and inspect the output.</p></li>
+  <li><p><strong>Is CarbonData supported on Windows?</strong></p><p>Windows is not supported at present; we may add support in future. You are welcome to contribute if you want to add the support :)</p></li>
+</ul>
\ No newline at end of file
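
The auto-compaction answer in the FAQ above can be sketched as a quick shell check. The path used here is illustrative (the FAQ only names the property, not the file location; in a real deployment it would live under `<SPARK_HOME>/conf/`):

```shell
# Illustrative sketch: enable auto compaction by setting the property the FAQ
# names, carbon.enable.auto.load.merge, in a carbon.properties file.
# CONF is a placeholder path, not the actual <SPARK_HOME>/conf location.
CONF=./carbon.properties
echo "carbon.enable.auto.load.merge=true" >> "$CONF"
# Confirm the property is present and set to true
grep "^carbon.enable.auto.load.merge=true$" "$CONF"
```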

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/CarbonData_icon.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/CarbonData_icon.png b/src/main/webapp/docs/latest/images/CarbonData_icon.png
new file mode 100644
index 0000000..3ea7f54
Binary files /dev/null and b/src/main/webapp/docs/latest/images/CarbonData_icon.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/CarbonData_logo.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/CarbonData_logo.png b/src/main/webapp/docs/latest/images/CarbonData_logo.png
new file mode 100644
index 0000000..bc09b23
Binary files /dev/null and b/src/main/webapp/docs/latest/images/CarbonData_logo.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/carbon_data_file_structure_new.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_file_structure_new.png b/src/main/webapp/docs/latest/images/carbon_data_file_structure_new.png
new file mode 100644
index 0000000..3f9241b
Binary files /dev/null and b/src/main/webapp/docs/latest/images/carbon_data_file_structure_new.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/carbon_data_format_new.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_format_new.png b/src/main/webapp/docs/latest/images/carbon_data_format_new.png
new file mode 100644
index 0000000..9d0b194
Binary files /dev/null and b/src/main/webapp/docs/latest/images/carbon_data_format_new.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/carbon_data_full_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_full_scan.png b/src/main/webapp/docs/latest/images/carbon_data_full_scan.png
new file mode 100644
index 0000000..46715e7
Binary files /dev/null and b/src/main/webapp/docs/latest/images/carbon_data_full_scan.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/carbon_data_motivation.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_motivation.png b/src/main/webapp/docs/latest/images/carbon_data_motivation.png
new file mode 100644
index 0000000..6e454c6
Binary files /dev/null and b/src/main/webapp/docs/latest/images/carbon_data_motivation.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png b/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png
new file mode 100644
index 0000000..c1dfb18
Binary files /dev/null and b/src/main/webapp/docs/latest/images/carbon_data_olap_scan.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/carbon_data_random_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/carbon_data_random_scan.png b/src/main/webapp/docs/latest/images/carbon_data_random_scan.png
new file mode 100644
index 0000000..7d44d34
Binary files /dev/null and b/src/main/webapp/docs/latest/images/carbon_data_random_scan.png differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/CarbonData_icon.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/CarbonData_icon.png b/src/main/webapp/docs/latest/images/format/CarbonData_icon.png
deleted file mode 100644
index 3ea7f54..0000000
Binary files a/src/main/webapp/docs/latest/images/format/CarbonData_icon.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/CarbonData_logo.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/CarbonData_logo.png b/src/main/webapp/docs/latest/images/format/CarbonData_logo.png
deleted file mode 100755
index bc09b23..0000000
Binary files a/src/main/webapp/docs/latest/images/format/CarbonData_logo.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/carbon_data_file_structure_new.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/carbon_data_file_structure_new.png b/src/main/webapp/docs/latest/images/format/carbon_data_file_structure_new.png
deleted file mode 100755
index 3f9241b..0000000
Binary files a/src/main/webapp/docs/latest/images/format/carbon_data_file_structure_new.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/carbon_data_format_new.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/carbon_data_format_new.png b/src/main/webapp/docs/latest/images/format/carbon_data_format_new.png
deleted file mode 100755
index 9d0b194..0000000
Binary files a/src/main/webapp/docs/latest/images/format/carbon_data_format_new.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/carbon_data_full_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/carbon_data_full_scan.png b/src/main/webapp/docs/latest/images/format/carbon_data_full_scan.png
deleted file mode 100755
index 46715e7..0000000
Binary files a/src/main/webapp/docs/latest/images/format/carbon_data_full_scan.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/carbon_data_motivation.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/carbon_data_motivation.png b/src/main/webapp/docs/latest/images/format/carbon_data_motivation.png
deleted file mode 100755
index 6e454c6..0000000
Binary files a/src/main/webapp/docs/latest/images/format/carbon_data_motivation.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/carbon_data_olap_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/carbon_data_olap_scan.png b/src/main/webapp/docs/latest/images/format/carbon_data_olap_scan.png
deleted file mode 100755
index c1dfb18..0000000
Binary files a/src/main/webapp/docs/latest/images/format/carbon_data_olap_scan.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/images/format/carbon_data_random_scan.png
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/images/format/carbon_data_random_scan.png b/src/main/webapp/docs/latest/images/format/carbon_data_random_scan.png
deleted file mode 100755
index 7d44d34..0000000
Binary files a/src/main/webapp/docs/latest/images/format/carbon_data_random_scan.png and /dev/null differ

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/index.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/index.html b/src/main/webapp/docs/latest/index.html
deleted file mode 100644
index 6068944..0000000
--- a/src/main/webapp/docs/latest/index.html
+++ /dev/null
@@ -1,88 +0,0 @@
-<!DOCTYPE html><html><head><meta charset="utf-8"><title>Untitled Document.md</title><style>
-
-</style></head><body id="preview">
-  <!--
-  Licensed to the Apache Software Foundation (ASF) under one
-  or more contributor license agreements.  See the NOTICE file
-  distributed with this work for additional information
-  regarding copyright ownership.  The ASF licenses this file
-  to you under the Apache License, Version 2.0 (the
-  “License”); you may not use this file except in compliance
-  with the License.  You may obtain a copy of the License at
-   http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing,
-  software distributed under the License is distributed on an
-  &quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-  KIND, either express or implied.  See the License for the
-  specific language governing permissions and limitations
-  under the License.
-  -->
-<h1><a id="Table_of_Contents_1"></a>Table of Contents</h1>
-<ul>
-<li><a href="quick_start.html">Quick Start</a>
-<ul>
-<li><a href="quick_start.html#getting-started-with-carbondata">Getting started with Apache CarbonData</a></li>
-<li><a href="">First CarbonData Project</a></li>
-</ul>
-</li>
-<li><a href="user_guide.html">User Guide</a>
-<ul>
-<li><a href="overview.html">Overview</a>
-<ul>
-<li><a href="overview.html#introduction">Introduction</a></li>
-<li><a href="overview.html#file-format">CarbonData File Structure</a></li>
-<li><a href="overview.html#features">Features</a></li>
-<li><a href="overview.html#data-types">Data Types</a></li>
-<!--<li><a href="overview.html#compatibility">Compatibility</a></li>-->
-<li><a href="overview.html#packaging-interfaces">Interfaces</a></li>
-</ul>
-</li>
-<li><a href="installation.html">Installing CarbonData</a>
-<ul>
-<li><a href="installation.html#spark-cluster">Standalone Spark Cluster</a></li>
-<li><a href="installation.html#yarn-cluster">Spark on Yarn Cluster</a></li>
-</ul>
-</li>
-<li><a href="configuring.html">Configuring CarbonData</a>
-<ul>
-<li><a href="configuring.html#system-configuration">System Configuration</a></li>
-<li><a href="configuring.html#performance-configuration">Performance Configuration</a></li>
-<li><a href="configuring.html#miscellaneous-configuration">Miscellaneous Configuration</a></li>
-<li><a href="configuring.html#spark-configuration">Spark Configuration</a></li>
-</ul>
-</li>
-<li><a href="usingCarbonData.html">Using CarbonData</a>
-<ul>
-<li><a href="data_management.html">Data Management</a></li>
-<li><a href="ddl.html">DDL Operations</a></li>
-<li><a href="dml.html">DML Operations</a></li>
-</ul>
-</li>
-</ul>
-</li>
-<li><a href="useful_tips.html">Useful Tips</a>
-<ul>
-<li><a href="useful_tips.html#suggestions-to-create-carbondata-tables">Suggestion to create CarbonData table</a></li>
-<li><a href="useful_tips.html#configurations-for-optimizing-carbondata-performance">Configurations For Optimizing CarbonData Performance</a></li>
-</ul>
-</li>
-<li><a href="use_cases.html">Use Cases</a></li>
-<li><a href="troubleshooting.html">Troubleshooting</a></li>
-<li><a href="faq.html">FAQ</a></li>
-</ul>
-
-<script type="text/javascript">
- $('a[href*="#"]:not([href="#"])').click(function() {
-   if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
-    var target = $(this.hash);
-    target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
-    if (target.length) 
-        { $('html, body').animate({    scrollTop: target.offset().top - 52 },100);
-          return false;
-        }
-     }
-  });
-</script>
-
-</body></html>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/installation-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/installation-guide.html b/src/main/webapp/docs/latest/installation-guide.html
new file mode 100644
index 0000000..7e23c48
--- /dev/null
+++ b/src/main/webapp/docs/latest/installation-guide.html
@@ -0,0 +1,245 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Installation Guide</h1><p>This tutorial guides you through the installation and configuration of CarbonData in the following two modes:</p>
+<ul>
+  <li><a href="#installing-and-configuring-carbondata-on-standalone-spark-cluster">Installing and Configuring CarbonData on Standalone Spark Cluster</a></li>
+  <li><a href="#installing-and-configuring-carbondata-on-spark-on-yarn-cluster">Installing and Configuring CarbonData on "Spark on YARN" Cluster</a></li>
+</ul><p>followed by:</p>
+<ul>
+  <li><a href="#query-execution-using-carbondata-thrift-server">Query Execution using CarbonData Thrift Server</a></li>
+</ul><h2>Installing and Configuring CarbonData on Standalone Spark Cluster</h2><h3>Prerequisites</h3>
+<ul>
+  <li><p>Hadoop HDFS and Yarn should be installed and running.</p></li>
+  <li><p>Spark should be installed and running on all the cluster nodes.</p></li>
+  <li><p>CarbonData user should have permission to access HDFS.</p></li>
+</ul><h3>Procedure</h3>
+<ul>
+  <li><p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project, get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar", and put it in the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder.</p><p>NOTE: Create the carbonlib folder if it does not exist inside the <code>&quot;&lt;SPARK_HOME&gt;&quot;</code> path.</p></li>
+  <li><p>Add the carbonlib folder path to the Spark classpath. (Edit the <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-env.sh&quot;</code> file and modify the value of SPARK_CLASSPATH by appending <code>&quot;&lt;SPARK_HOME&gt;/carbonlib/*&quot;</code> to the existing value.)</p></li>
+  <li><p>Copy carbon.properties.template from the "./conf/" folder of the CarbonData repository to <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>.</p></li>
+  <li><p>Copy the "carbonplugins" folder from the "./processing/" folder of the CarbonData repository to the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder.</p><p>NOTE: The carbonplugins folder contains the .kettle folder.</p></li>
+  <li><p>On each Spark node, configure the properties listed in the following table in the <code>&quot;&lt;SPARK_HOME&gt;/conf/spark-defaults.conf&quot;</code> file.</p></li>
+</ul>
+<table class="table table-striped table-bordered">
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Description </th>
+      <th>Value </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data </td>
+      <td>$SPARK_HOME /carbonlib/carbonplugins </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. </td>
+      <td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space. </td>
+      <td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties </td>
+    </tr>
+  </tbody>
+</table>
+<ul>
+  <li>Add the following properties to <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>:</li>
+</ul>
+<table class="table table-striped table-bordered">
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Required </th>
+      <th>Description </th>
+      <th>Example </th>
+      <th>Remark </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.storelocation </td>
+      <td>NO </td>
+      <td>Location where CarbonData will create the store and write the data in its own format. </td>
+      <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore </td>
+      <td>It is recommended to set an HDFS directory. </td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>YES </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td>$SPARK_HOME/carbonlib/carbonplugins </td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table>
+<ul>
+  <li>Verify the installation. For example:</li>
+</ul><p><code>
+   ./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 2
+   --executor-memory 2G
+</code></p><p>NOTE: Make sure the user starting the driver and executors has permission to access the CarbonData JARs and files.</p><p>To get started with CarbonData: <a href="quick-start-guide.html">Quick Start</a>, <a href="ddl-operation-on-carbondata.html">DDL Operations on CarbonData</a></p><h2>Installing and Configuring CarbonData on "Spark on YARN" Cluster</h2><p>This section provides the procedure to install CarbonData on a "Spark on YARN" cluster.</p><h3>Prerequisites</h3>
+<ul>
+  <li>Hadoop HDFS and Yarn should be installed and running.</li>
+  <li>Spark should be installed and running on all the client nodes.</li>
+  <li>CarbonData user should have permission to access HDFS.</li>
+</ul><h3>Procedure</h3><p>The following steps are only for driver nodes. (Driver nodes are the nodes that start the Spark context.)</p>
+<ul>
+  <li><p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project, get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar", and put it in the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder.</p><p>NOTE: Create the carbonlib folder if it does not exist inside the <code>&quot;&lt;SPARK_HOME&gt;&quot;</code> path.</p></li>
+  <li><p>Copy the "carbonplugins" folder from the "./processing/" folder of the CarbonData repository to the <code>&quot;&lt;SPARK_HOME&gt;/carbonlib&quot;</code> folder. The carbonplugins folder contains the .kettle folder.</p></li>
+  <li><p>Copy carbon.properties.template from the conf folder of the CarbonData repository to <code>&quot;&lt;SPARK_HOME&gt;/conf/carbon.properties&quot;</code>.</p></li>
+  <li>Modify the parameters in "spark-defaults.conf" located in <code>&quot;&lt;SPARK_HOME&gt;/conf&quot;</code>.</li>
+</ul>
+<table class="table table-striped table-bordered">
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Description </th>
+      <th>Value </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>spark.master </td>
+      <td>Set this value to run Spark on YARN. </td>
+      <td>Set to "yarn-client" to run Spark in YARN client mode. </td>
+    </tr>
+    <tr>
+      <td>spark.yarn.dist.files </td>
+      <td>Comma-separated list of files to be placed in the working directory of each executor. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>spark.yarn.dist.archives </td>
+      <td>Comma-separated list of archives to be extracted into the working directory of each executor. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space. </td>
+      <td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>spark.executor.extraClassPath </td>
+      <td>Extra classpath entries to prepend to the classpath of executors. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, comment it out and append the value to the spark.driver.extraClassPath parameter below. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraClassPath </td>
+      <td>Extra classpath entries to prepend to the classpath of the driver. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, comment it out and append the value to this parameter. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbondata_xxx.jar</code> </td>
+    </tr>
+    <tr>
+      <td>spark.driver.extraJavaOptions </td>
+      <td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. </td>
+      <td><code>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</code> </td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td><code>&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/carbonlib/carbonplugins</code> </td>
+    </tr>
+  </tbody>
+</table>
+<ul>
+  <li>Add the following properties to <code>&lt;SPARK_HOME&gt;/conf/carbon.properties</code>:</li>
+</ul>
+<table class="table table-striped table-bordered">
+  <thead>
+    <tr>
+      <th>Property </th>
+      <th>Required </th>
+      <th>Description </th>
+      <th>Example </th>
+      <th>Default Value </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>carbon.storelocation </td>
+      <td>NO </td>
+      <td>Location where CarbonData will create the store and write the data in its own format. </td>
+      <td>hdfs://HOSTNAME:PORT/Opt/CarbonStore </td>
+      <td>It is recommended to set an HDFS directory.</td>
+    </tr>
+    <tr>
+      <td>carbon.kettle.home </td>
+      <td>YES </td>
+      <td>Path that will be used by CarbonData internally to create graph for loading the data. </td>
+      <td>$SPARK_HOME/carbonlib/carbonplugins </td>
+      <td> </td>
+    </tr>
+  </tbody>
+</table>
+<ul>
+  <li>Verify the installation.</li>
+</ul><p><code>
+     ./bin/spark-shell --master yarn-client --driver-memory 1g 
+     --executor-cores 2 --executor-memory 2G
+</code>  NOTE: Make sure the user starting the driver and executors has permission to access the CarbonData JARs and files.</p><p>To get started with CarbonData: <a href="quick-start-guide.html">Quick Start</a>, <a href="ddl-operation-on-carbondata.html">DDL Operations on CarbonData</a></p><h2>Query Execution Using CarbonData Thrift Server</h2><h3>Starting CarbonData Thrift Server</h3><p>a. cd <code>&lt;SPARK_HOME&gt;</code></p><p>b. Run the following command to start the CarbonData Thrift Server.</p><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
+$SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR &lt;carbon_store_path&gt;
+</code></p>
+<table class="table table-striped table-bordered">
+  <thead>
+    <tr>
+      <th>Parameter </th>
+      <th>Description </th>
+      <th>Example </th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>CARBON_ASSEMBLY_JAR </td>
+      <td>CarbonData assembly jar name present in the <code>&quot;&lt;SPARK_HOME&gt;&quot;/carbonlib/</code> folder. </td>
+      <td>carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar </td>
+    </tr>
+    <tr>
+      <td>carbon_store_path </td>
+      <td>This is a parameter to the CarbonThriftServer class. It is an HDFS path where CarbonData files will be kept. It is strongly recommended to set this to the same value as the carbon.storelocation parameter in carbon.properties. </td>
+      <td><code>hdfs://&lt;host_name&gt;:54310/user/hive/warehouse/carbon.store</code> </td>
+    </tr>
+  </tbody>
+</table><h3>Examples</h3>
+<ul>
+  <li>Start with default memory and executors</li>
+</ul><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
+$SPARK_HOME/carbonlib
+/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
+hdfs://hacluster/user/hive/warehouse/carbon.store
+</code></p>
+<ul>
+  <li>Start with fixed executors and resources</li>
+</ul><p><code>
+./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
+--class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
+--num-executors 3 --driver-memory 20g --executor-memory 250g 
+--executor-cores 32 
+/srv/OSCON/BigData/HACluster/install/spark/sparkJdbc/lib
+/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
+hdfs://hacluster/user/hive/warehouse/carbon.store
+</code></p><h3>Connecting to CarbonData Thrift Server Using Beeline</h3><p><code>
+cd &lt;SPARK_HOME&gt;
+./bin/beeline jdbc:hive2://&lt;thriftserver_host&gt;:port
+</code></p>
+<pre><code> Example
+ ./bin/beeline jdbc:hive2://10.10.10.10:10000
+</code></pre>
\ No newline at end of file
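
The Beeline connection shown at the end of this hunk can be sketched as follows. The host and port are illustrative placeholders, and only the JDBC URL construction is shown, since an actual connection needs a running Thrift Server:

```shell
# Illustrative sketch: build the hive2 JDBC URL that Beeline expects when
# connecting to the CarbonData Thrift Server. Host and port are placeholders.
THRIFT_HOST=10.10.10.10
THRIFT_PORT=10000
JDBC_URL="jdbc:hive2://${THRIFT_HOST}:${THRIFT_PORT}"
echo "$JDBC_URL"
# With a server running, the connection itself (as in the docs above) would be:
#   cd <SPARK_HOME> && ./bin/beeline "$JDBC_URL"
```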

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/installation.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/installation.html b/src/main/webapp/docs/latest/installation.html
deleted file mode 100644
index 162d433..0000000
--- a/src/main/webapp/docs/latest/installation.html
+++ /dev/null
@@ -1,303 +0,0 @@
-<!DOCTYPE html><html><head><meta charset="utf-8"><title>Untitled Document.md</title><style>
-
-</style></head><body id="preview">
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-“License”); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-&quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-    
-<h1><a id="Installation_Guide_0"></a>Installation Guide</h1>
-<p>This tutorial guides you through the installation and configuration of CarbonData in the following two modes:</p>
-<ul>
-<li><a href="#spark-cluster">Installing and Configuring CarbonData on Standalone Spark Cluster</a></li>
-<li><a href="#yarn-cluster">Installing and Configuring CarbonData on “Spark on YARN” Cluster</a></li>
-</ul>
-<p>followed by :</p>
-<ul>
-<li><a href="#thrift-server">Query Execution using CarbonData Thrift Server</a></li>
-</ul>
-<div id="spark-cluster"></div>
-<h2><a id="Installing_and_Configuring_CarbonData_on_Standalone_Spark_Cluster_9"></a>Installing and Configuring CarbonData on Standalone Spark Cluster</h2>
-<h3><a id="Prerequisite_11"></a>Prerequisites</h3>
-<ul>
-<li>Hadoop HDFS and Yarn should be installed and running.</li>
-<li>Spark should be installed and running on all the cluster nodes.</li>
-<li>CarbonData user should have permission to access HDFS.</li>
-</ul>
-<h3><a id="Procedure_16"></a>Procedure</h3>
-<ol>
-<li>
-<p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project and get the assembly jar from “./assembly/target/scala-2.10/carbondata_xxx.jar” and put in the “&lt;SPARK_HOME&gt;/carbonlib” folder.</p>
-<p>NOTE: Create the carbonlib folder if it does not exists inside “&lt;SPARK_HOME&gt;” path.</p>
-</li>
-<li>
-<p>Add the carbonlib folder path in the Spark classpath. (Edit “&lt;SPARK_HOME&gt;/conf/spark-env.sh” file and modify the value of SPARK_CLASSPATH by appending “&lt;SPARK_HOME&gt;/carbonlib/*” to the existing value)</p>
-</li>
-<li>
-<p>Copy the carbon.properties.template to “&lt;SPARK_HOME&gt;/conf/carbon.properties” folder from “./conf/” of CarbonData repository.</p>
-</li>
-<li>
-<p>Copy the “carbonplugins” folder  to “&lt;SPARK_HOME&gt;/carbonlib” folder from “./processing/” folder of CarbonData repository.</p>
-<p>NOTE: carbonplugins will contain .kettle folder.</p>
-</li>
-<li>
-<p>In Spark node, configure the properties mentioned in the following table in “&lt;SPARK_HOME&gt;/conf/spark-defaults.conf” file.</p>
-<table class="table table-striped table-bordered">
-<thead>
-<tr>
-<th>Property</th>
-<th>Description</th>
-<th>Value</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>carbon.kettle.home</td>
-<td>Path that will be used by CarbonData internally to create graph for loading the data</td>
-<td>$SPARK_HOME /carbonlib/carbonplugins</td>
-</tr>
-<tr>
-<td>spark.driver.extraJavaOptions</td>
-<td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging.</td>
-<td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties</td>
-</tr>
-<tr>
-<td>spark.executor.extraJavaOptions</td>
-<td>A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space.</td>
-<td>-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties</td>
-</tr>
-</tbody>
-</table>
-</li>
-<li>Add the following properties in “&lt;SPARK_HOME&gt;/conf/” carbon.properties:
-<table class="table table-striped table-bordered">
-<thead>
-<tr>
-<th>Property</th>
-<th>Required</th>
-<th>Description</th>
-<th>Example</th>
-<th>Remark</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>carbon.storelocation</td>
-<td>NO</td>
-<td>Location where data CarbonData will create the store and write the data in its own format.</td>
-<td>hdfs://IP:PORT/Opt/CarbonStore</td>
-<td>Propose to set HDFS directory</td>
-</tr>
-<tr>
-<td>carbon.kettle.home</td>
-<td>YES</td>
-<td>Path that will used by CarbonData internally to create graph for loading the data.</td>
-<td>$SPARK_HOME/carbonlib/carbonplugins</td>
-<td></td>
-</tr>
-</tbody>
-</table>
-</li>
-<li>Verify the installation. For example:
-<pre><code>./spark-shell --master spark://IP:PORT --total-executor-cores 2 --executor-memory 2G
-</code></pre>
-<p>NOTE: Make sure that you have permissions for CarbonData JARs and files through which driver and executor will start.</p>
-<p>To get started with CarbonData : <a href="quick_start.html">Quick Start</a>, <a href="data_management.html#DDL_Operations_on_CarbonData_0">DDL Operations on CarbonData</a></p>
-</li>
-</ol>
-<div id="yarn-cluster"></div>
-<h2><a id="Installing_and_Configuring_Carbon_on_Spark_on_YARN_Cluster_53"></a>Installing and Configuring CarbonData on “Spark on YARN” Cluster</h2>
-<p>This section provides the procedure to install CarbonData on “Spark on YARN” cluster.</p>
-<h3><a id="Prerequisite_56"></a>Prerequisites</h3>
-<ul>
-<li>Hadoop HDFS and Yarn should be installed and running.</li>
-<li>Spark should be installed and running in all the clients.</li>
-<li>CarbonData user should have permission to access HDFS.</li>
-</ul>
-<h3><a id="Procedure_61"></a>Procedure</h3>
-<p>The following steps are only for Driver Nodes. (Driver nodes are the one which starts the spark context.)</p>
-<ol>
-<li>
-<p><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Build the CarbonData</a> project and get the assembly jar from “./assembly/target/scala-2.10/carbondata_xxx.jar” and put in the “&lt;SPARK_HOME&gt;/carbonlib” folder.</p>
-<p>NOTE: Create the carbonlib folder if it does not exists inside “&lt;SPARK_HOME&gt;” path.</p>
-</li>
-<li>
-<p>Copy the "carbonplugins" folder to “&lt;SPARK_HOME&gt;/carbonlib” folder from “./processing/” of CarbonData repository. carbonplugins will contain .kettle folder.</p>
-</li>
-<li>
-<p>Copy the “carbon.properties.template” to “&lt;SPARK_HOME&gt;/conf/carbon.properties” folder from conf folder of CarbonData repository.</p>
-</li>
-<li>
-<p>Modify the parameters in “spark-default.conf” located in the “&lt;SPARK_HOME&gt;/conf”</p>
-<table class="table table-striped table-bordered">
-<thead>
-<tr>
-<th>Property</th>
-<th>Description</th>
-<th>Value</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>spark.master</td>
-<td>Set this value to run the Spark in yarn cluster mode.</td>
-<td>Set “yarn-client” to run the Spark in yarn cluster mode.</td>
-</tr>
-<tr>
-<td>spark.yarn.dist.files</td>
-<td>Comma-separated list of files to be placed in the working directory of each executor.</td>
-<td>“&lt;YOUR_SPARK_HOME_PATH&gt;”/conf/carbon.properties</td>
-</tr>
-<tr>
-<td>spark.yarn.dist.archives</td>
-<td>Comma-separated list of archives to be extracted into the working directory of each executor.</td>
-<td>“&lt;YOUR_SPARK_HOME_PATH&gt;”/carbonlib/carbondata_xxx.jar</td>
-</tr>
-<tr>
-<td>spark.executor.extraJavaOptions</td>
-<td>A string of extra JVM options to pass to executors. For instance  NOTE: You can enter multiple values separated by space.</td>
-<td>-Dcarbon.properties.filepath=carbon.properties</td>
-</tr>
-<tr>
-<td>spark.executor.extraClassPath</td>
-<td>Extra classpath entries to prepend to the classpath of executors. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the values in below parameter spark.driver.extraClassPath</td>
-<td>“&lt;YOUR_SPARK_HOME_PATH&gt;”/carbonlib/carbonlib/carbondata_xxx.jar</td>
-</tr>
-<tr>
-<td>spark.driver.extraClassPath</td>
-<td>Extra classpath entries to prepend to the classpath of the driver. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the value in below parameter spark.driver.extraClassPath.</td>
-<td>“&lt;YOUR_SPARK_HOME_PATH&gt;”/carbonlib/carbonlib/carbondata_xxx.jar</td>
-</tr>
-<tr>
-<td>spark.driver.extraJavaOptions</td>
-<td>A string of extra JVM options to pass to the driver. For instance, GC settings or other logging.</td>
-<td>-Dcarbon.properties.filepath=&quot;&lt;YOUR_SPARK_HOME_PATH&gt;&quot;/conf/carbon.properties</td>
-</tr>
-<tr>
-<td>carbon.kettle.home</td>
-<td>Path that will used by CarbonData internally to create graph for loading the data.</td>
-<td>“&lt;YOUR_SPARK_HOME_PATH&gt;”/carbonlib/carbonplugins</td>
-</tr>
-</tbody>
-</table>
-</li>
-
-<li>Add the following properties in &lt;SPARK_HOME&gt;/conf/ carbon.properties:
-
-<table class="table table-striped table-bordered">
-<thead>
-<tr>
-<th>Property</th>
-<th>Required</th>
-<th>Description</th>
-<th>Example</th>
-<th>Default Value</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>carbon.storelocation</td>
-<td>NO</td>
-<td>Location where data CarbonData will create the store and write the data in its own format.</td>
-<td>hdfs://IP:PORT/Opt/CarbonStore</td>
-<td>Propose to set HDFS directory</td>
-</tr>
-<tr>
-<td>carbon.kettle.home</td>
-<td>YES</td>
-<td>Path that will used by CarbonData internally to create graph for loading the data.</td>
-<td>$SPARK_HOME/carbonlib/carbonplugins</td>
-<td></td>
-</tr>
-</tbody>
-</table>
-</li>
-<li>Verify installation.
-<pre><code>./bin/spark-shell --master yarn-client --driver-memory 1g --executor-cores 2 --executor-memory 2G
-</code></pre>
-<p>NOTE: Make sure that you have permissions for CarbonData JARs and files through which driver and executor will start.</p>
-<p>Getting started with CarbonData :<a href="quick_start.html">Quick Start</a>, <a href="data_management.html#DDL_Operations_on_CarbonData_0">DDL Operations on CarbonData</a></p>
-</li>
-</ol>
-<div id="thrift-server"></div>
-<h2><a id="Query_execution_using_Carbon_thrift_server_99"></a>Query Execution Using CarbonData Thrift Server</h2>
-<h3><a id="Start_Thrift_server_101"></a>Starting CarbonData Thrift server</h3>
-<ol>
-<li>
-<p>cd &lt;SPARK_HOME&gt;</p>
-</li>
-<li>
-<p>Run the following command to start the CarbonData thrift server.</p>
-<pre><code>./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
-$SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR &lt;carbon_store_path&gt;
-</code></pre>
-<table class="table table-striped table-bordered">
-<thead>
-<tr>
-<th>Parameter</th>
-<th>Description</th>
-<th>Example</th>
-</tr>
-</thead>
-<tbody>
-<tr>
-<td>CARBON_ASSEMBLY_JAR</td>
-<td>CarbonData assembly jar name present in the “”/carbonlib/ folder.</td>
-<td>carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar</td>
-</tr>
-<tr>
-<td>carbon_store_path</td>
-<td>This is a parameter to the CarbonThriftServer class. This a HDFS path where CarbonData files will be kept. Strongly Recommended to put same as carbon.storelocation parameter of carbon.proeprties.</td>
-<td>hdfs//hacluster/user/hive/warehouse/carbon.store hdfs//10.10.10.10:54310 /user/hive/warehouse/carbon.store</td>
-</tr>
-</tbody>
-</table>
-</li>
-</ol>
-<h3><a id="Examples_114"></a>Examples</h3>
-<ul>
-<li>Start with default memory and executors.
-<pre><code>./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer $SPARK_HOME/carbonlib/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar hdfs://hacluster/user/hive/warehouse/carbon.store
-</code></pre>
-</li>
-<li>Start with Fixed executors and resources.
-
-<pre><code>./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer --num-executors 3 --driver-memory 20g --executor-memory 250g --executor-cores 32 /srv/OSCON/BigData/HACluster/install/spark/sparkJdbc/lib/carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar hdfs://hacluster/user/hive/warehouse/carbon.store
-</code></pre>
-</li>
-</ul>
-<h3><a id="Connecting_to_Carbon_Thrift_Server_Using_Beeline_123"></a>Connecting to CarbonData Thrift Server Using Beeline</h3>
-<pre><code>cd &lt;SPARK_HOME&gt;
-./bin/beeline jdbc:hive2://&lt;thrftserver_host&gt;:port
-
-Example
-./bin/beeline jdbc:hive2://10.10.10.10:10000
-</code></pre>
-
-<script type="text/javascript">
- $('a[href*="#"]:not([href="#"])').click(function() {
-   if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
-    var target = $(this.hash);
-    target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
-    if (target.length) 
-        { $('html, body').animate({    scrollTop: target.offset().top - 52 },100);
-          return false;
-        }
-     }
-  });
-</script>
-
-</body></html>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/mainpage.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/mainpage.html b/src/main/webapp/docs/latest/mainpage.html
new file mode 100644
index 0000000..6cc23ab
--- /dev/null
+++ b/src/main/webapp/docs/latest/mainpage.html
@@ -0,0 +1,154 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <link href='../../images/favicon.ico' rel='shortcut icon' type='image/x-icon'>
+    <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
+    <title>CarbonData</title>
+<style>
+
+</style>
+    <!-- Bootstrap -->
+  
+    <link rel="stylesheet" href="../../css/bootstrap.min.css">
+    <link href="../../css/style.css" rel="stylesheet">      
+    <link href="../../css/print.css" rel="stylesheet" >     
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
+    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
+    <!--[if lt IE 9]>
+      <script src="https://oss.maxcdn.com/html5shiv/3.7.3/html5shiv.min.js"></script>
+      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
+    <![endif]-->
+    <script src="../../js/jquery.min.js"></script>
+    <script src="../../js/bootstrap.min.js"></script>    
+    
+
+ 
+  </head>
+  <body>
+    <header>
+     <nav class="navbar navbar-default navbar-custom cd-navbar-wrapper" >
+      <div class="container">
+        <div class="navbar-header">
+          <button aria-controls="navbar" aria-expanded="false" data-target="#navbar" data-toggle="collapse" class="navbar-toggle collapsed" type="button">
+            <span class="sr-only">Toggle navigation</span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <a href="../../index.html" class="logo">
+             <img src="../../images/CarbonDataLogo.png" alt="CarbonData logo" title="CarbonData logo"  />
+          </a>
+        </div>
+        <div class="navbar-collapse collapse cd_navcontnt" id="navbar">         
+         <ul class="nav navbar-nav navbar-right navlist-custom">
+              <li><a href="../../index.html" class="hidden-xs"><i class="fa fa-home" aria-hidden="true"></i> </a></li>
+              <li><a href="../../index.html" class="hidden-lg hidden-md hidden-sm">Home</a></li>
+              <li class="dropdown">
+                  <a href="#" class="dropdown-toggle " data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false"> Download <span class="caret"></span></a>
+                  <ul class="dropdown-menu">
+                    <li><a href="https://www.apache.org/dyn/closer.lua/incubator/carbondata/0.2.0-incubating"  target="_blank">Apache CarbonData 0.2.0</a></li>
+                    <li><a href="https://www.apache.org/dyn/closer.lua/incubator/carbondata/0.1.1-incubating"  target="_blank">Apache CarbonData 0.1.1</a></li>
+                    <li><a href="https://www.apache.org/dyn/closer.lua/incubator/carbondata/0.1.0-incubating"  target="_blank">Apache CarbonData 0.1.0</a></li>
+                    <li><a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Releases"  target="_blank">Release Archive</a></li>
+
+                    </ul>
+                </li>
+
+              <li><a href="mainpage.html" class="">Documentation</a></li>
+                <!--<li class="dropdown">
+                  <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Documentation <span class="caret"></span></a>
+                  <ul class="dropdown-menu">
+                    <li><a href="quickstartDoc.html">Quick Start</a></li>
+                    <li><a href="overDoc.html">Overview</a></li>
+                    <li><a href="installingcarbondataDoc.html">Installation Guide</a></li>
+                    <li><a href="configuringcarbondataDoc.html">Configuring CarbonData</a></li>
+                    <li><a href="usingcarbondataDoc.html">Using CarbonData</a></li>
+                    <li><a href="usefultipsDoc.html">Useful Tips</a></li>
+                    <li><a href="usecasesDoc.html">CarbonData Use Cases</a></li>
+                    <li><a href="troubleshootingDoc.html">Troubleshooting</a></li>
+                    <li><a href="faqDoc.html">FAQs</a></li> 
+                  </ul>
+
+              </li> -->     
+              <li class="dropdown">
+                  <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Community <span class="caret"></span></a>                  
+                  <ul class="dropdown-menu">
+                    <li><a href="../../contributingDoc.html">Contributing to CarbonData</a></li>
+                    <li><a href="../../contributingDoc.html#Mailing_List">Mailing Lists</a></li>
+                    <li><a href="../../contributingDoc.html#Apache_Jira">Issue Tracker</a></li>
+                    <li><a href="../../contributingDoc.html#Committers">Project Committers</a></li>
+                    <li><a href="../../jenkins.html">Jenkins for Apache CarbonData</a></li>
+                    <li><a href="../../meetup.html">CarbonData Meetups </a></li>
+                  </ul>
+                </li>
+                <li class="dropdown">
+                  <a href="http://www.apache.org/" class="apache_link hidden-xs dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Apache</a>
+                   <ul class="dropdown-menu">
+                      <li><a href="http://www.apache.org/"  target="_blank">Apache Homepage</a></li>
+                      <li><a href="http://www.apache.org/licenses/"  target="_blank">License</a></li>
+                      <li><a href="http://www.apache.org/foundation/sponsorship.html"  target="_blank">Sponsorship</a></li>
+                      <li><a href="http://www.apache.org/foundation/thanks.html"  target="_blank">Thanks</a></li>                      
+                    </ul>
+                </li>
+
+                <li class="dropdown">
+                  <a href="http://www.apache.org/" class="hidden-lg hidden-md hidden-sm dropdown-toggle" data-toggle="dropdown" role="button" aria-haspopup="true" aria-expanded="false">Apache</a>
+                   <ul class="dropdown-menu">
+                      <li><a href="http://www.apache.org/"  target="_blank">Apache Homepage</a></li>
+                      <li><a href="http://www.apache.org/licenses/"  target="_blank">License</a></li>
+                      <li><a href="http://www.apache.org/foundation/sponsorship.html"  target="_blank">Sponsorship</a></li>
+                      <li><a href="http://www.apache.org/foundation/thanks.html"  target="_blank">Thanks</a></li>                      
+                    </ul>
+                </li>
+
+           </ul>
+        </div><!--/.nav-collapse -->
+      </div>
+    </nav>
+     </header> <!-- end Header part -->
+   
+   <div class="fixed-padding"></div> <!--  top padding with fixde header  -->
+ 
+   <section><!-- Dashboard nav -->
+    <div class="container-fluid container-fluid-full">
+      <div class="row">
+        <div class="col-sm-3 col-md-3 sidebar leftMainMenu">   
+          <div id="leftmenu" name="leftmenu" onload="pageLoaded()"></div>
+        </div>     
+        <div class="col-sm-9 col-sm-offset-3 col-md-9 col-md-offset-3 maindashboard">
+              <div class="row">            
+                <section>
+                  <div style="padding:10px 15px;">
+                    <div class="doc-header">                    
+                       <img src="../../images/format/CarbonData_icon.png" alt="" class="logo-print" >
+                       <span>Version: 0.2.0 | Last Published: 21-11-2016</span> 
+                       <i class="fa fa-print print-icon" aria-hidden="true" onclick="divPrint();"></i>
+                    </div>
+                    <div id="viewpage" name="viewpage">   </div>
+                    <div class="doc-footer">
+                         <a href="#top" class="scroll-top">Top</a>
+                    </div>
+                  </div>
+                </section>
+              </div>         
+        </div>
+      </div>
+     </div>
+    </section><!-- End systemblock part -->
+
+  <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
+
+    <script src="../../js/custom.js"></script> 
+    <script src="../../js/mdNavigation.js" type="text/javascript"></script>
+
+    <script type="text/javascript">
+      $("#leftmenu").load("table-of-content.html");
+    </script>   
+   
+     
+    
+  </body>
+  </html>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/overview-of-carbondata.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/overview-of-carbondata.html b/src/main/webapp/docs/latest/overview-of-carbondata.html
new file mode 100644
index 0000000..ae7e62c
--- /dev/null
+++ b/src/main/webapp/docs/latest/overview-of-carbondata.html
@@ -0,0 +1,136 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Overview</h1><p>This tutorial provides a detailed overview about :</p>
+<ul>
+  <li><a href="#introduction">Introduction</a></li>
+  <li><a href="#carbondata-file-structure">CarbonData File Structure</a></li>
+  <li><a href="#features">Features</a></li>
+  <li><a href="#data-types">Data Types</a></li>
+  <li><a href="#interfaces">Interfaces</a></li>
+</ul>
+
+<div id="introduction"></div>
+<h2>Introduction</h2><p>CarbonData is a fully indexed, columnar, Hadoop-native data store for processing heavy analytical workloads and detailed queries on big data. CarbonData enables faster interactive queries by using advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps speed up queries by an order of magnitude over petabytes of data.</p><p>In customer benchmarks, CarbonData has proven to manage petabytes of data running on extraordinarily low-cost hardware and to answer queries around 10 times faster than current open source solutions (column-oriented SQL-on-Hadoop data stores).</p><p>Some of the salient features of CarbonData are:</p>
+<ul>
+  <li>Low-Latency for various types of data access patterns like Sequential, Random and OLAP.</li>
+  <li>Fast query on fast data.</li>
+  <li>Space efficiency.</li>
+  <li>General format available on Hadoop-ecosystem.</li>
+</ul>
+<div id="carbondata-file-structure"></div>
+<h2>CarbonData File Structure</h2><p>CarbonData files contain groups of data called blocklets, along with all required information such as schema, offsets and indices, in a file footer, co-located in HDFS.</p><p>The file footer can be read once to build the indices in memory, which can then be utilized to optimize scans and processing for all subsequent queries.</p><p>Each blocklet in the file is further divided into chunks of data called data chunks. Each data chunk is organized either in columnar format or in row format, and stores the data of either a single column or a set of columns. All blocklets in a file contain the same number and type of data chunks.</p><p><img src="../../../webapp/docs/latest/images/carbon_data_file_structure_new.png?raw=true" alt="CarbonData File Structure" /></p><p>Each data chunk contains multiple groups of data called pages. There are three types of pages.</p>
+<ul>
+  <li>Data Page: Contains the encoded data of a column/group of columns.</li>
+  <li>Row ID Page (optional): Contains the row ID mappings used when the data page is stored as an inverted index.</li>
+  <li>RLE Page (optional): Contains additional metadata used when the data page is RLE coded.</li>
+</ul><p><img src="../../../webapp/docs/latest/images/carbon_data_format_new.png?raw=true" alt="CarbonData File Format" /></p>
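As a minimal illustration of how the footer-resident indices described above can prune scans, consider the following Python sketch. The names (`BlockletIndexEntry`, `blocklets_to_scan`) are ours for illustration only, not CarbonData's actual API or on-disk format:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BlockletIndexEntry:
    """One footer index entry: where a blocklet starts and its value range."""
    offset: int
    min_value: int
    max_value: int

def blocklets_to_scan(index: List[BlockletIndexEntry],
                      predicate_value: int) -> List[int]:
    """Return offsets of blocklets whose [min, max] range may contain the value.

    Blocklets whose range excludes the value are skipped without any I/O.
    """
    return [e.offset for e in index
            if e.min_value <= predicate_value <= e.max_value]

# Footer index built once, kept in memory for all subsequent queries.
index = [
    BlockletIndexEntry(offset=0,    min_value=1,   max_value=100),
    BlockletIndexEntry(offset=4096, min_value=101, max_value=200),
    BlockletIndexEntry(offset=8192, min_value=50,  max_value=150),
]

# A filter like `col = 120` only needs the second and third blocklets.
print(blocklets_to_scan(index, 120))
```

The same idea generalizes to range predicates: any blocklet whose min-max interval does not intersect the predicate's interval can be pruned before scanning.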
+
+<div id="features"></div>
+<h2>Features</h2><p>The CarbonData file format is a columnar store in HDFS. It has many features that a modern columnar format offers, such as splittability, compression schemes and complex data types, along with the following unique features:</p>
+<ul>
+  <li><p>Unique Data Organization: Though CarbonData stores data in a columnar format, it differs from traditional columnar formats in that the columns in each row group (Data Block) are sorted independently of the other columns. Though this arrangement requires CarbonData to store the row-number mapping against each column value, it makes faster filtering via binary search possible, and since the values are sorted, the same or similar values come together, which yields better compression and offsets the storage overhead of the row-number mapping.</p></li>
+  <li><p>Advanced Push Down Optimizations: CarbonData pushes as much query processing as possible close to the data, to minimize the amount of data being read, processed, converted and transmitted/shuffled. Using projections and filters, it reads only the required columns from the store and only the rows that match the filter conditions provided in the query.</p></li>
+  <li><p>Multi Level Indexing: CarbonData uses multiple indices at various levels to enable faster search and speed up query processing.</p></li>
+  <li><p>Global Multi Dimensional Keys (MDK) based B+Tree Index for all non-measure columns: Aids in quickly locating the row groups (Data Blocks) that contain the data matching the search/filter criteria.</p></li>
+  <li><p>Min-Max Index for all columns: Aids in quickly locating the row groups(Data Blocks) that contain the data matching search/filter criteria.</p></li>
+  <li><p>Data Block level Inverted Index for all columns: Aids in quickly locating the rows that contain the data matching search/filter criteria within a row group(Data Blocks).</p></li>
+  <li><p>Dictionary Encoding: Most databases and big data SQL stores employ columnar encoding to achieve data compression by storing small integer numbers (surrogate values) instead of full string values. However, almost all existing databases and data stores divide the data into row groups containing anywhere from a few thousand to a million rows, and employ dictionary encoding only within each row group. Hence, the same column value can have different surrogate values in different row groups, so while reading the data, conversion from surrogate value to actual value needs to be done immediately after the data is read from disk. CarbonData instead employs a global surrogate key, meaning a common dictionary is maintained for the full store on one machine/node, so CarbonData can perform all query processing work such as grouping/aggregation and sorting on lightweight surrogate values. The conversion from surrogate to actual values needs to be done only on the final result. This procedure improves performance in two ways: conversion from surrogate values to actual values is done only for the final result rows, which are far fewer than the rows read from the store; and all query processing and computation such as grouping/aggregation and sorting is done on lightweight surrogate values, which require less memory and CPU time than the actual values.</p></li>
+  <li><p>Deep Spark Integration: Built-in Spark integration for Spark 1.5 and 1.6, with interfaces for Spark SQL, the DataFrame API and query optimization. It supports bulk data ingestion and allows saving Spark DataFrames as CarbonData files.</p></li>
+  <li><p>Update and Delete Support: Supports batch updates, such as daily update scenarios for OLAP, using a Base+Delta file based design.</p></li>
+  <li><p>Store data along with index: Significantly accelerates query performance and reduces I/O scans and CPU usage when the query contains filters. The CarbonData index consists of multiple levels of indices; a processing framework can leverage this index to reduce the tasks it needs to schedule and process, and can also do skip-scans at a finer granularity (the blocklet) during task-side scanning instead of scanning the whole file.</p></li>
+  <li><p>Operable encoded data: It supports efficient compression and global encoding schemes and can query on compressed/encoded data. The data can be converted just before returning the results to the users, which is "late materialized".</p></li>
+  <li><p>Column group: Allows multiple columns to form a column group that is stored in row format. This reduces the row-reconstruction cost at query time.</p></li>
+  <li><p>Support for various use cases with one single Data format: Examples are interactive OLAP-style query, Sequential Access (big scan) and Random Access (narrow scan).</p></li>
+</ul>
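The global dictionary encoding and late materialization described above can be sketched in a few lines of Python. This is a conceptual illustration only; the function names (`build_global_dictionary`, `encode`, `group_count`, `decode_result`) are ours, not CarbonData's internals:

```python
def build_global_dictionary(values):
    """Assign a surrogate integer to each distinct value, store-wide.

    Unlike per-row-group dictionaries, one dictionary serves the whole
    store, so a value's surrogate is stable across all row groups.
    """
    dictionary = {}
    for v in values:
        if v not in dictionary:
            dictionary[v] = len(dictionary)
    return dictionary

def encode(values, dictionary):
    """Replace full values with their lightweight integer surrogates."""
    return [dictionary[v] for v in values]

def group_count(surrogates):
    """Aggregate directly on surrogates instead of full string values."""
    counts = {}
    for s in surrogates:
        counts[s] = counts.get(s, 0) + 1
    return counts

def decode_result(counts, dictionary):
    """Late materialization: decode surrogates only for the final result rows."""
    reverse = {s: v for v, s in dictionary.items()}
    return {reverse[s]: c for s, c in counts.items()}

column = ["beijing", "shenzhen", "beijing", "hangzhou", "beijing"]
d = build_global_dictionary(column)
print(decode_result(group_count(encode(column, d)), d))
```

Because grouping runs on small integers rather than strings, and decoding touches only the handful of result rows, both memory use and CPU time drop relative to operating on the raw values.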
+<div id="data-types"></div>
+
+<h2>Data Types</h2><h4>CarbonData supports the following data types:</h4>
+<ul>
+  <li><p>Numeric Types</p>
+  <ul>
+    <li>SMALLINT</li>
+    <li>INT/INTEGER</li>
+    <li>BIGINT</li>
+    <li>DOUBLE</li>
+    <li>DECIMAL</li>
+  </ul></li>
+  <li><p>Date/Time Types</p>
+  <ul>
+    <li>TIMESTAMP</li>
+  </ul></li>
+  <li><p>String Types</p>
+  <ul>
+    <li>STRING</li>
+  </ul></li>
+  <li><p>Complex Types</p>
+  <ul>
+    <li>arrays: ARRAY<code>&lt;data_type&gt;</code></li>
+    <li>structs: STRUCT<code>&lt;col_name : data_type COMMENT col_comment, ...&gt;</code></li>
+  </ul></li>
+</ul>
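As a hypothetical illustration of these types in use (the table and column names are ours, and the exact STORED BY clause may vary by CarbonData version; consult the DDL manual for the authoritative syntax), a CREATE TABLE statement might look like:

```sql
-- Hypothetical table combining the supported data types
CREATE TABLE IF NOT EXISTS sales (
  order_id   BIGINT,
  quantity   INT,
  tax        DOUBLE,
  price      DECIMAL,
  order_time TIMESTAMP,
  city       STRING,
  tags       ARRAY<STRING>,
  customer   STRUCT<name: STRING, age: INT>
)
STORED BY 'carbondata';
```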
+<div id="interfaces"></div>
+<h2>Interfaces</h2><h4>API</h4><p>CarbonData can be used in following scenarios:</p>
+<ul>
+  <li>For MapReduce application user</li>
+</ul><p>This user API is provided by carbon-hadoop. In this scenario, the user can process CarbonData files in their MapReduce application by choosing CarbonInput/OutputFormat, and is responsible for using it correctly. Currently only CarbonInputFormat is provided; an OutputFormat will be provided soon.</p>
+<ul>
+  <li>For Spark user</li>
+</ul><p>This user API is provided by Spark itself. There are two levels of APIs:</p>
+<ul>
+  <li><p><strong>CarbonData File</strong></p><p>Similar to Parquet, JSON, or other data sources in Spark, CarbonData can be used with the data source API. For example (please refer to DataFrameAPIExample for more detail):</p></li>
+</ul><pre><code>  // User can create a DataFrame from any data source
+  // or transformation.
+  val df = ...
+</code></pre>
+<pre><code>  // Write data
+  // User can write a DataFrame to a CarbonData file
+  df.write
+  .format(&quot;carbondata&quot;)
+  .option(&quot;tableName&quot;, &quot;carbontable&quot;)
+  .mode(SaveMode.Overwrite)
+  .save()
+
+
+  // read CarbonData by data source API
+  df = carbonContext.read
+  .format(&quot;carbondata&quot;)
+  .option(&quot;tableName&quot;, &quot;carbontable&quot;)
+  .load(&quot;/path&quot;)
+
+  // User can then use DataFrame for analysis
+  df.count
+  SVMWithSGD.train(df, numIterations)
+
+  // User can also register the DataFrame with a table name, 
+  // and use SQL for analysis
+  df.registerTempTable(&quot;t1&quot;)  // register temporary table 
+                              // in SparkSQL catalog
+  df.registerHiveTable(&quot;t2&quot;)  // Or, use an implicit function
+                              // to register to Hive metastore
+  sqlContext.sql(&quot;select count(*) from t1&quot;).show
+</code></pre>
+<ul>
+  <li><p><strong>Managed CarbonData Table</strong></p><p>CarbonData has built-in support for high-level concepts like Table and Database, and supports full data lifecycle management. Instead of dealing with just files, users can use CarbonData-specific DDL to manipulate data at the Table and Database level. Please refer to <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DDL">DDL</a> and <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DML">DML</a>.</p></li>
+</ul><p><code>
+      // Use SQL to manage table and query data
+      create database db1;
+      use database db1;
+      show databases;
+      create table tbl1 using org.apache.carbondata.spark;
+      load data into table tbl1 path &#39;some_files&#39;;
+      select count(*) from tbl1;
+</code></p>
+<ul>
+  <li><p>For developers who want to integrate CarbonData into processing engines like Spark, Hive or Flink, use the APIs provided by carbon-hadoop and carbon-processing:</p>
+  <ul>
+    <li><strong>Query</strong> : Integrate carbon-hadoop with an engine-specific API, like the Spark data source API.</li>
+  </ul>
+  <ul>
+    <li><strong>Data life cycle management</strong> : CarbonData provides utility functions in carbon-processing to manage the data life cycle, such as data loading, compaction, retention and schema evolution. Developers can implement DDLs of their choice and leverage these utility functions to do data life cycle management.</li>
+  </ul></li>
+</ul>
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/overview.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/overview.html b/src/main/webapp/docs/latest/overview.html
deleted file mode 100644
index 2828265..0000000
--- a/src/main/webapp/docs/latest/overview.html
+++ /dev/null
@@ -1,235 +0,0 @@
-<!DOCTYPE html><html><head><meta charset="utf-8"><title>Untitled Document.md</title><style>
-
-</style></head><body id="preview">
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-“License”); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
- http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-&quot;AS IS&quot; BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->    
-<h1><b>Overview</b></h1>
-<p>This tutorial provides a detailed overview about :</p>
-<ul>
-<li><a href="#introduction">Introduction</a></li>
-<li><a href="#file-format">CarbonData File Structure</a></li>
-<li><a href="#features">Features</a></li>
-<li><a href="#data-types">Data Types</a></li>
-<li><a href="#packaging-interfaces">Interfaces</a></li>
-</ul>
-<div id="introduction"></div>
-<h2><a id="Introduction_11"></a>Introduction</h2>
-<p>CarbonData is a fully indexed columnar and Hadoop native data-store for processing heavy analytical workloads and detailed queries on big data. CarbonData allows  faster interactive query using advanced columnar storage, index, compression and encoding techniques to improve computing efficiency, which helps in speeding up queries by an order of magnitude faster over PetaBytes of data.</p>
-<p>In customer benchmarks, CarbonData has proven to manage Petabyte of data running on extraordinarily low-cost hardware and answers queries around 10 times faster than the current open source solutions (column-oriented SQL on Hadoop data-stores).</p>
-<p>Some of the salient features of CarbonData are :</p>
-<ul>
-<li>Low-Latency for various types of data access patterns like Sequential, Random and OLAP.</li>
-<li>Fast query on fast data.</li>
-<li>Space efficiency.</li>
-<li>General format available on Hadoop-ecosystem.</li>
-</ul>
-<div id="file-format"></div>
-<h2><a id="CarbonData_File_Structure_23"></a>CarbonData File Structure</h2>
-<p>CarbonData files contain groups of data called blocklets, along with all required information like schema, offsets and indices etc, in a file footer, co-located in HDFS. The file footer can be read once to build the indices in memory, which can be utilized for optimizing the scans and processing for all subsequent queries.</p>
-<p>Each blocklet in the file is further divided into chunks of data called data chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in a file contain the same number and type of data chunks.</p>
-  <div style="text-align:center">
-    <img src="images/format/carbon_data_file_structure_new.png?raw=true" alt="CarbonData File Structure">
-  </div>
-<p>Each data chunk contains multiple groups of data called as pages. There are three types of pages.</p>
-<ul>
-<li>Data Page: Contains the encoded data of a column/group of columns.</li>
-<li>Row ID Page (optional): Contains the row ID mappings used when the data page is stored as an inverted index.</li>
-<li>RLE Page (optional): Contains additional metadata used when the data page is RLE coded.</li>
-</ul>
-<div style="text-align:center">
-<img src="images/format/carbon_data_format_new.png?raw=true" alt="CarbonData File Format">
-</div>
-
-<div id="features"></div>
-<h2><a id="Features_40"></a>Features</h2>
-<p>CarbonData file format is a columnar store in HDFS, it has many features that a modern columnar format has, such as splittable, compression schema, complex data type etc and CarbonData has following unique features:</p>
-<ul>
-
-<li><div id="Unique_Data_Organization">
-	<b>Unique Data Organization</b>: Though CarbonData stores data in Columnar format, it differs from traditional Columnar formats as the columns in each row-group(Data Block) is sorted independent of the other columns. Though this arrangement requires CarbonData to store the row-number mapping against each column value, it makes it possible to use binary search for faster filtering and since the values are sorted, same/similar values come together which yields better compression and offsets the storage overhead required by the row number mapping.
-</div>
-</li>
-
-<li>
-<div id="Advanced_Push_Down_Optimizations">	
-<b>Advanced Push Down Optimizations</b>: CarbonData pushes as much of query processing as possible close to the data to minimize the amount of data being read, processed, converted and transmitted/shuffled. Using projections and filters it reads only the required columns form the store and also reads only the rows that match the filter conditions provided in the query.
-</div>
-</li>
-
-<li>
-<div id="Multi_Level_Indexing">
-<b>Multi Level Indexing</b>: CarbonData uses multiple indices at various levels to enable faster search and speed up query processing.
-<ul>
-<li>
-Global Multi Dimensional Keys(MDK) based B+Tree Index for all non- measure columns: Aids in quickly locating the row groups(Data Blocks) that contain the data matching search/filter criteria.</li>
-<li>
-Min-Max Index for all columns: Aids in quickly locating the row groups(Data Blocks) that contain the data matching search/filter criteria.</li>
-<li>
-Data Block level Inverted Index for all columns: Aids in quickly locating the rows that contain the data matching search/filter criteria within a row group(Data Blocks).
-</li>
-</ul>
-</div>
-</li>
-<li>
-<div id="Dictionary_Encoding">
-<b>Dictionary Encoding</b>: Most databases and big data SQL data stores employ columnar encoding to achieve data compression by storing small integers numbers (surrogate value) instead of full string values. However, almost all existing databases and data stores divide the data into row groups containing anywhere from few thousand to a million rows and employ dictionary encoding only within each row group. Hence, the same column value can have different surrogate values in different row groups. So, while reading the data, conversion from surrogate value to actual value needs to be done immediately after the data is read from the disk. But CarbonData employs global surrogate key which means that a common dictionary is maintained for the full store on one machine/node. So CarbonData can perform all the query processing work such as grouping/aggregation, sorting etc on light weight surrogate values. The conversion from surrogate to actual values needs to be done only on the final resul
 t. This procedure improves performance on two aspects.	
-Conversion from surrogate values to actual values is done only for the final result rows which are much less than the actual rows read from the store.
-All query processing and computation such as grouping/aggregation, sorting, and so on is done on lightweight surrogate values which requires less memory and CPU time compared to actual values.
-</div>
-</li>
-<li>
-<div id="Update_Delete_Support">
-	<b>Update Delete Support</b>: It supports batch updates like daily update scenarios for OLAP and Base+Delta file based design.
-	</div>
-</li>
-<li>
-<div id="Deep_Spark_Integgration">
-	<b>Deep Spark Integration</b>: It has built-in spark integration for Spark 1.5, 1.6 and interfaces for Spark SQL, DataFrame API and query optimization. It supports bulk data ingestion and allows saving of spark dataframes as CarbonData files.
-	</div>
-</li>
-<li><b>Store data along with index</b>: Significantly accelerates query performance and reduces the I/O scans and CPU resources when there are filters in the query. CarbonData index consists of multiple levels of indices. A processing framework can leverage this index to reduce the task it needs to schedule and process. It can also do skip scan in more finer grain units (called blocklet) in task side scanning instead of scanning the whole file.</li>
-<li><b>Operable encoded data</b>: By supporting efficient compression and global encoding schemes, it can query on compressed/encoded data, the data can be converted just before returning the results to the users, which is “late materialized”.</li>
-<li><b>Column group</b>: Allows multiple columns to form a column group that would be stored as row format. This reduces the row reconstruction cost at query time.</li>
-<li><b>Supports for various use cases with one single Data format</b>: Examples are interactive OLAP-style query, Sequential Access (big scan), and Random Access (narrow scan).</li>
-</ul>
-<div id="data-types"></div>
-<h2><a id="Data_Types_49"></a>Data Types</h2>
-<p>CarbonData supports the following data types:</p>
-<ul>
-<li>
-<p>Numeric Types</p>
-<ul>
-<li>SMALLINT</li>
-<li>INT/INTEGER</li>
-<li>BIGINT</li>
-<li>DOUBLE</li>
-<li>DECIMAL</li>
-</ul>
-</li>
-<li>
-<p>Date/Time Types</p>
-<ul>
-<li>TIMESTAMP</li>
-</ul>
-</li>
-<li>
-<p>String Types</p>
-<ul>
-<li>STRING</li>
-</ul>
-</li>
-<li>
-<p>Complex Types</p>
-<ul>
-<li>arrays: ARRAY&lt;data_type&gt;</li>
-<li>structs: STRUCT&lt;col_name : data_type [COMMENT col_comment], …&gt;</li>
-</ul>
-</li>
-</ul>
-<!-- <div id="compatibility"></div>
-<h2><a id="Compatibility_70"></a>Compatibility</h2> -->
-
-<div id="packaging-interfaces"></div>
-<h2><a id="Packaging_and_Interfaces_73"></a>Interfaces</h2>
-<ul>
-<li>
-<h4><a id="API_90"></a>API</h4>
-<p>CarbonData can be used in following scenarios:</p>
-<ol>
-<li>
-<p>For MapReduce application user<br>
-This User API is provided by carbon-hadoop. In this scenario, user can process CarbonData files in his MapReduce application by choosing CarbonInput/OutputFormat, and is responsible for using it correctly. Currently only CarbonInputFormat is provided and OutputFormat will be provided soon.</p>
-</li>
-<li>
-<p>For Spark user<br>
-This User API is provided by Spark itself. There are two levels of APIs</p>
-<ul>
-<li>
-<p><strong>CarbonData File</strong></p>
-<p>Similar to parquet, json, or other data source in Spark, CarbonData can be used with data source API. For example (please refer to DataFrameAPIExample for more detail):</p>
-<pre><code>// User can create a DataFrame from any data source or transformation.
-val df = ...
-
-// Write data
-// User can write a DataFrame to a CarbonData file
-df.write
-.format(&quot;carbondata&quot;)
-.option(&quot;tableName&quot;, &quot;carbontable&quot;)
-.mode(SaveMode.Overwrite)
-.save()
-
-
-// read CarbonData data by data source API
-df = carbonContext.read
-.format(&quot;carbondata&quot;)
-.option(&quot;tableName&quot;, &quot;carbontable&quot;)
-.load(&quot;/path&quot;)
-
-// User can then use DataFrame for analysis
-df.count
-SVMWithSGD.train(df, numIterations)
-
-// User can also register the DataFrame with a table name, and use SQL for analysis
-df.registerTempTable(&quot;t1&quot;)  // register temporary table in SparkSQL catalog
-df.registerHiveTable(&quot;t2&quot;)  // Or, use a implicit funtion to register to Hive metastore
-sqlContext.sql(&quot;select count(*) from t1&quot;).show
-</code></pre>
-</li>
-<li>
-<p><strong>Managed CarbonData Table</strong></p>
-<p>CarbonData has in built support for high level concept like Table, Database, and supports full data lifecycle management, instead of dealing with just files user can use CarbonData specific DDL to manipulate data in Table and Database level. Please refer <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DDL">DDL</a> and <a href="https://github.com/HuaweiBigData/carbondata/wiki/Language-Manual:-DML">DML</a>.</p>
-<pre><code>// Use SQL to manage table and query data
-create database db1;
-use database db1;
-show databases;
-create table tbl1 using org.apache.carbondata.spark;
-load data into table tlb1 path 'some_files';
-select count(*) from tbl1;
-</code></pre>
-</li>
-</ul>
-</li>
-<li>
-<p>For developer who want to integrate CarbonData into processing engines like spark, hive or flink, use API provided by carbon-hadoop and carbon-processing:</p>
-<ul>
-<li>
-<p><strong>Query</strong> : Integrate carbon-hadoop with engine specific API, like spark data source API.</p>
-</li>
-<li>
-<p><strong>Data life cycle management</strong> : CarbonData provides utility functions in carbon-processing to manage data life cycle, like data loading, compact, retention, schema evolution. Developer can implement DDLs of their choice and leverage these utility function to do data life cycle management.</p>
-</li>
-</ul>
-</li>
-</ol>
-</li>
-</ul>
-
-<script type="text/javascript">
- $('a[href*="#"]:not([href="#"])').click(function() {
-   if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '') && location.hostname == this.hostname) {
-    var target = $(this.hash);
-    target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
-    if (target.length) 
-        { $('html, body').animate({    scrollTop: target.offset().top - 52 },100);
-          return false;
-        }
-     }
-  });
-</script>
-
-</body></html>

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/b427c61c/src/main/webapp/docs/latest/quick-start-guide.html
----------------------------------------------------------------------
diff --git a/src/main/webapp/docs/latest/quick-start-guide.html b/src/main/webapp/docs/latest/quick-start-guide.html
new file mode 100644
index 0000000..f17cf13
--- /dev/null
+++ b/src/main/webapp/docs/latest/quick-start-guide.html
@@ -0,0 +1,103 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+--><h1>Quick Start</h1><p>This tutorial provides a quick introduction to using CarbonData.</p><h2>Getting started with Apache CarbonData</h2>
+<ul>
+  <li><a href="#installation">Installation</a></li>
+  <li><a href="#prerequisites">Prerequisites</a></li>
+  <li><a href="#interactive-analysis-with-spark-shell">Interactive Analysis with Spark Shell Version 2.1</a></li>
+  <li>Basics</li>
+  <li>Executing Queries
+  <ul>
+    <li>Creating a Table</li>
+    <li>Loading Data to a Table</li>
+    <li>Query Data from a Table</li>
+  </ul></li>
+  <li>Interactive Analysis with Spark Shell Version 1.6</li>
+  <li>Basics</li>
+  <li>Executing Queries
+  <ul>
+    <li>Creating a Table</li>
+    <li>Loading Data to a Table</li>
+    <li>Query Data from a Table</li>
+  </ul></li>
+  <li><a href="#building-carbondata">Building CarbonData</a></li>
+
+</ul>
+<div id="installation"></div>
+<h2>Installation</h2>
+<ul>
+  <li>Download a released package of <a href="http://spark.apache.org/downloads.html">Spark 1.6.2 or 2.1.0</a>.</li>
+  <li>Download and install <a href="http://thrift-tutorial.readthedocs.io/en/latest/installation.html">Apache Thrift 0.9.3</a>, and make sure Thrift is added to the system path.</li>
+  <li>Download <a href="https://github.com/apache/incubator-carbondata">Apache CarbonData code</a> and build it. Please visit <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Building CarbonData And IDE Configuration</a> for more information.</li>
+</ul>
+<div id="prerequisites"></div>
+<h2>Prerequisites</h2>
+<ul>
+  <li>Create a sample.csv file using the following commands. The CSV file is required for loading data into CarbonData.</li>
+</ul><p><code>
+$ cd carbondata
+$ cat &gt; sample.csv &lt;&lt; EOF
+id,name,city,age
+1,david,shenzhen,31
+2,eason,shenzhen,27
+3,jarry,wuhan,35
+EOF
+</code></p>
+<div id="interactive-analysis-with-spark-shell"></div>
+<h2>Interactive Analysis with Spark Shell</h2><h2>Version 2.1</h2><p>Apache Spark Shell provides a simple way to learn the API, as well as a powerful tool to analyze data interactively. Please visit <a href="http://spark.apache.org/docs/latest/">Apache Spark Documentation</a> for more details on Spark shell.</p><h4>Basics</h4><p>Start Spark shell by running the following command in the Spark directory:</p><p><code>
+./bin/spark-shell --jars &lt;carbondata jar path&gt;
+</code></p><p>In this shell, SparkSession is readily available as 'spark' and the Spark context is readily available as 'sc'.</p><p>In order to create a CarbonSession we will have to configure it explicitly in the following manner:</p>
+<ul>
+  <li>Import the following:</li>
+</ul><p><code>
+import org.apache.spark.sql.SparkSession
+import org.apache.spark.sql.CarbonSession._
+</code></p>
+<ul>
+  <li>Create a CarbonSession:</li>
+</ul><p><code>
+val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession()
+</code></p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+scala&gt;carbon.sql(&quot;create table if not exists test_table (id string, name string, city string, age Int) STORED BY &#39;carbondata&#39;&quot;)
+</code></p><h5>Loading Data to a Table</h5><p><code>
+scala&gt;carbon.sql(s&quot;load data inpath &#39;${new java.io.File(&quot;../carbondata/sample.csv&quot;).getCanonicalPath}&#39; into table test_table&quot;)
+</code></p><h5>Query Data from a Table</h5><p><code>
+scala&gt;carbon.sql(&quot;select * from test_table&quot;).show
+scala&gt;carbon.sql(&quot;select city, avg(age), sum(age) from test_table group by city&quot;).show
+</code></p><h2>Interactive Analysis with Spark Shell</h2><h2>Version 1.6</h2><h4>Basics</h4><p>Start Spark shell by running the following command in the Spark directory:</p><p><code>
+./bin/spark-shell --jars &lt;carbondata jar path&gt;
+</code></p><p>NOTE: In this shell, SparkContext is readily available as sc.</p>
+<ul>
+  <li>In order to execute the Queries we need to import CarbonContext:</li>
+</ul><p><code>
+import org.apache.spark.sql.CarbonContext
+</code></p>
+<ul>
+  <li>Create an instance of CarbonContext in the following manner :</li>
+</ul><p><code>
+val cc = new CarbonContext(sc)
+</code></p><p>NOTE: By default the store location points to "../carbon.store"; users can provide their own store location to CarbonContext, e.g. new CarbonContext(sc, storeLocation).</p><h4>Executing Queries</h4><h5>Creating a Table</h5><p><code>
+scala&gt;cc.sql(&quot;create table if not exists test_table (id string, name string, city string, age Int) STORED BY &#39;carbondata&#39;&quot;)
+</code> To see the table created:</p><p><code>
+scala&gt;cc.sql(&quot;show tables&quot;).show
+</code></p><h5>Loading Data to a Table</h5><p><code>
+scala&gt;cc.sql(s&quot;load data inpath &#39;${new java.io.File(&quot;../carbondata/sample.csv&quot;).getCanonicalPath}&#39; into table test_table&quot;)
+</code></p><h5>Query Data from a Table</h5><p><code>
+scala&gt;cc.sql(&quot;select * from test_table&quot;).show
+scala&gt;cc.sql(&quot;select city, avg(age), sum(age) from test_table group by city&quot;).show
+  <div id="building-carbondata"></div>
+</code></p><h2>Building CarbonData</h2><p>To get started, get CarbonData from the <a href="http://carbondata.incubator.apache.org/">downloads</a> section on the <a href="http://carbondata.incubator.apache.org.">http://carbondata.incubator.apache.org.</a> CarbonData uses Hadoop?s client libraries for HDFS and YARN and Spark's libraries. Downloads are pre-packaged for a handful of popular Spark versions.</p><p>If you?d like to build CarbonData from source, visit <a href="https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration">Building CarbonData And IDE Configuration</a>.</p>
\ No newline at end of file

