carbondata-commits mailing list archives

From ravipes...@apache.org
Subject [2/2] carbondata git commit: [CARBONDATA-1770] Update error docs and consolidate DDL, DML, Partition docs
Date Wed, 22 Nov 2017 15:19:34 GMT
[CARBONDATA-1770] Update error docs and consolidate DDL,DML,Partition docs

Update documents: fix some erroneous descriptions.
Consolidate the Data Management, DDL, DML and Partition docs so that each feature is described in only one place.

This closes #1534


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/9a69d638
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/9a69d638
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/9a69d638

Branch: refs/heads/master
Commit: 9a69d638bba50a4b1382f168f2f759d181113e2a
Parents: ab9c2c0
Author: chenliang613 <chenliang613@huawei.com>
Authored: Sun Nov 19 21:12:11 2017 +0800
Committer: Ravindra Pesala <ravi.pesala@gmail.com>
Committed: Wed Nov 22 20:49:04 2017 +0530

----------------------------------------------------------------------
 README.md                                      |   5 +-
 docs/How-to-contribute-to-Apache-CarbonData.md |  24 +-
 docs/configuration-parameters.md               |  24 +-
 docs/data-management-on-carbondata.md          | 713 ++++++++++++++++++++
 docs/data-management.md                        | 157 -----
 docs/ddl-operation-on-carbondata.md            | 448 ------------
 docs/dml-operation-on-carbondata.md            | 484 -------------
 docs/faq.md                                    |  24 +-
 docs/file-structure-of-carbondata.md           |  24 +-
 docs/installation-guide.md                     |  24 +-
 docs/partition-guide.md                        | 188 ------
 docs/quick-start-guide.md                      |  27 +-
 docs/release-guide.md                          |  24 +-
 docs/supported-data-types-in-carbondata.md     |  24 +-
 docs/troubleshooting.md                        |  47 +-
 docs/useful-tips-on-carbondata.md              | 271 +++-----
 16 files changed, 942 insertions(+), 1566 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index ed55de0..b35339c 100644
--- a/README.md
+++ b/README.md
@@ -41,10 +41,7 @@ CarbonData is built using Apache Maven, to [build CarbonData](https://github.com
 * [Quick Start](https://github.com/apache/carbondata/blob/master/docs/quick-start-guide.md)
 * [CarbonData File Structure](https://github.com/apache/carbondata/blob/master/docs/file-structure-of-carbondata.md)
 * [Data Types](https://github.com/apache/carbondata/blob/master/docs/supported-data-types-in-carbondata.md)
-* [Data Management](https://github.com/apache/carbondata/blob/master/docs/data-management.md)
-* [DDL Operations on CarbonData](https://github.com/apache/carbondata/blob/master/docs/ddl-operation-on-carbondata.md) 
-* [DML Operations on CarbonData](https://github.com/apache/carbondata/blob/master/docs/dml-operation-on-carbondata.md)  
-* [Partition Table](https://github.com/apache/carbondata/blob/master/docs/partition-guide.md)
+* [Data Management on CarbonData](https://github.com/apache/carbondata/blob/master/docs/data-management-on-carbondata.md)
 * [Cluster Installation and Deployment](https://github.com/apache/carbondata/blob/master/docs/installation-guide.md)
 * [Configuring Carbondata](https://github.com/apache/carbondata/blob/master/docs/configuration-parameters.md)
 * [FAQ](https://github.com/apache/carbondata/blob/master/docs/faq.md)

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/How-to-contribute-to-Apache-CarbonData.md
----------------------------------------------------------------------
diff --git a/docs/How-to-contribute-to-Apache-CarbonData.md b/docs/How-to-contribute-to-Apache-CarbonData.md
index f57642d..1d9cbe9 100644
--- a/docs/How-to-contribute-to-Apache-CarbonData.md
+++ b/docs/How-to-contribute-to-Apache-CarbonData.md
@@ -1,20 +1,18 @@
 <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
+    Licensed to the Apache Software Foundation (ASF) under one or more 
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership. 
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with 
+    the License.  You may obtain a copy of the License at
 
       http://www.apache.org/licenses/LICENSE-2.0
 
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
+    Unless required by applicable law or agreed to in writing, software 
+    distributed under the License is distributed on an "AS IS" BASIS, 
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and 
+    limitations under the License.
 -->
 
 # How to contribute to Apache CarbonData

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/docs/configuration-parameters.md b/docs/configuration-parameters.md
index 141a60c..5875529 100644
--- a/docs/configuration-parameters.md
+++ b/docs/configuration-parameters.md
@@ -1,20 +1,18 @@
 <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
+    Licensed to the Apache Software Foundation (ASF) under one or more 
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership. 
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with 
+    the License.  You may obtain a copy of the License at
 
       http://www.apache.org/licenses/LICENSE-2.0
 
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
+    Unless required by applicable law or agreed to in writing, software 
+    distributed under the License is distributed on an "AS IS" BASIS, 
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and 
+    limitations under the License.
 -->
 
 # Configuring CarbonData

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
new file mode 100644
index 0000000..6880ba1
--- /dev/null
+++ b/docs/data-management-on-carbondata.md
@@ -0,0 +1,713 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one or more 
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership. 
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with 
+    the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing, software 
+    distributed under the License is distributed on an "AS IS" BASIS, 
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and 
+    limitations under the License.
+-->
+
+# Data Management on CarbonData
+
+This tutorial introduces all the commands and data operations supported by CarbonData.
+
+* [CREATE TABLE](#create-table)
+* [TABLE MANAGEMENT](#table-management)
+* [LOAD DATA](#load-data)
+* [UPDATE AND DELETE](#update-and-delete)
+* [COMPACTION](#compaction)
+* [PARTITION](#partition)
+* [BUCKETING](#bucketing)
+* [SEGMENT MANAGEMENT](#segment-management)
+
+## CREATE TABLE
+
+  This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.
+  
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name[(col_name data_type , ...)]
+  STORED BY 'carbondata'
+  [TBLPROPERTIES (property_name=property_value, ...)]
+  ```  
+  
+### Usage Guidelines
+
+  Following are the guidelines for TBLPROPERTIES. CarbonData's additional table options can be set via carbon.properties.
+  
+   - **Dictionary Encoding Configuration**
+
+     Dictionary encoding is turned off for all columns by default from version 1.3 onwards. You can use this property to include columns for dictionary encoding.
+     Suggested use case: enable dictionary encoding for low cardinality columns; it may help improve the data compression ratio and performance.
+
+     ```
+     TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
+     ```
+     
+   - **Inverted Index Configuration**
+
+     The inverted index is enabled by default. It may help improve the compression ratio and query speed, especially for low cardinality columns.
+     Suggested use case: for high cardinality columns, you can disable the inverted index to improve data loading performance.
+
+     ```
+     TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
+     ```
+
+   - **Sort Columns Configuration**
+
+     This property is for users to specify which columns belong to the MDK (Multi-Dimensional Key) index.
+     * If the "SORT_COLUMNS" property is not specified, the MDK index is built by default using all dimension columns except complex data type columns.
+     * If this property is specified but with an empty argument, then the table will be loaded without sort.
+     Suggested use case: build the MDK index only for the required columns; it may help improve data loading performance.
+
+     ```
+     TBLPROPERTIES ('SORT_COLUMNS'='column1, column3')
+     OR
+     TBLPROPERTIES ('SORT_COLUMNS'='')
+     ```
+
+   - **Sort Scope Configuration**
+   
+     This property specifies the scope of the sort during data load. The following sort scopes are supported:
+     
+     * LOCAL_SORT: It is the default sort scope.
+     * NO_SORT: It loads the data in an unsorted manner; it can significantly increase load performance.
+     * BATCH_SORT: It increases the load performance but decreases the query performance if the number of identified blocks is greater than the parallelism.
+     * GLOBAL_SORT: It increases query performance, especially for high-concurrency point queries.
+       It is also useful if you need strict isolation of loading resources: the system uses Spark GroupBy to sort the data, so the resources can be controlled by Spark.
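+
+     For example, the sort scope can be set through TBLPROPERTIES (NO_SORT is shown here; any of the scopes above can be used), as also done in the CREATE TABLE example below:
+
+     ```
+     TBLPROPERTIES ('SORT_SCOPE'='NO_SORT')
+     ```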
+ 
+   - **Table Block Size Configuration**
+
+     This property sets the block size of this table; the default value is 1024 MB and the supported range is 1 MB to 2048 MB.
+
+     ```
+     TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
+     ```
+     Note: 512 or 512M both are accepted.
+
+### Example:
+    ```
+    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                   productNumber Int,
+                                   productName String,
+                                   storeCity String,
+                                   storeProvince String,
+                                   productCategory String,
+                                   productBatch String,
+                                   saleQuantity Int,
+                                   revenue Int)
+    STORED BY 'carbondata'
+    TBLPROPERTIES ('DICTIONARY_INCLUDE'='productNumber',
+                   'NO_INVERTED_INDEX'='productBatch',
+                   'SORT_COLUMNS'='productName,storeCity',
+                   'SORT_SCOPE'='NO_SORT',
+                   'TABLE_BLOCKSIZE'='512')
+    ```
+        
+## TABLE MANAGEMENT  
+
+### SHOW TABLE
+
+  This command can be used to list all the tables in the current database or all the tables of a specific database.
+  ```
+  SHOW TABLES [IN db_Name]
+  ```
+
+  Example:
+  ```
+  SHOW TABLES
+  OR
+  SHOW TABLES IN defaultdb
+  ```
+
+### ALTER TABLE
+
+  The following section introduces the commands to modify the physical or logical state of existing table(s).
+
+   - **RENAME TABLE**
+   
+     This command is used to rename the existing table.
+     ```
+     ALTER TABLE [db_name.]table_name RENAME TO new_table_name
+     ```
+
+     Examples:
+     ```
+     ALTER TABLE carbon RENAME TO carbondata
+     OR
+     ALTER TABLE test_db.carbon RENAME TO test_db.carbondata
+     ```
+
+   - **ADD COLUMNS**
+   
+     This command is used to add a new column to the existing table.
+     ```
+     ALTER TABLE [db_name.]table_name ADD COLUMNS (col_name data_type,...)
+     TBLPROPERTIES('DICTIONARY_INCLUDE'='col_name,...',
+     'DEFAULT.VALUE.COLUMN_NAME'='default_value')
+     ```
+
+     Examples:
+     ```
+     ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
+     ```
+
+     ```
+     ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DICTIONARY_INCLUDE'='a1')
+     ```
+
+     ```
+     ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING) TBLPROPERTIES('DEFAULT.VALUE.a1'='10')
+     ```
+
+   - **DROP COLUMNS**
+   
+     This command is used to delete the existing column(s) in a table.
+     ```
+     ALTER TABLE [db_name.]table_name DROP COLUMNS (col_name, ...)
+     ```
+
+     Examples:
+     ```
+     ALTER TABLE carbon DROP COLUMNS (b1)
+     OR
+     ALTER TABLE test_db.carbon DROP COLUMNS (b1)
+     
+     ALTER TABLE carbon DROP COLUMNS (c1,d1)
+     ```
+
+   - **CHANGE DATA TYPE**
+   
+     This command is used to change the data type from INT to BIGINT or decimal precision from lower to higher.
+     Change of decimal data type from lower precision to higher precision will only be supported for cases where there is no data loss.
+     ```
+     ALTER TABLE [db_name.]table_name CHANGE col_name col_name changed_column_type
+     ```
+
+     Valid Scenarios
+     - Invalid scenario - Change of decimal precision from (10,2) to (10,5) is invalid as in this case only scale is increased but total number of digits remains the same.
+     - Valid scenario - Change of decimal precision from (10,2) to (12,3) is valid as the total number of digits are increased by 2 but scale is increased only by 1 which will not lead to any data loss.
+     - NOTE: The allowed upper limit is (38,38) for (precision, scale); this upper limit case is valid as it does not result in data loss.
+
+     Example 1: Changing data type of column a1 from INT to BIGINT.
+     ```
+     ALTER TABLE test_db.carbon CHANGE a1 a1 BIGINT
+     ```
+     
+     Example 2: Changing decimal precision of column a1 from 10 to 18.
+     ```
+     ALTER TABLE test_db.carbon CHANGE a1 a1 DECIMAL(18,2)
+     ```
+
+### DROP TABLE
+  
+  This command is used to delete an existing table.
+  ```
+  DROP TABLE [IF EXISTS] [db_name.]table_name
+  ```
+
+  Example:
+  ```
+  DROP TABLE IF EXISTS productSchema.productSalesTable
+  ```
+  
+## LOAD DATA
+
+### LOAD FILES TO CARBONDATA TABLE
+  
+  This command is used to load CSV files into a CarbonData table. OPTIONS are not mandatory for the data loading process. 
+  Inside OPTIONS, the user can provide any of the options such as DELIMITER, QUOTECHAR, FILEHEADER, ESCAPECHAR and MULTILINE as per requirement.
+  
+  ```
+  LOAD DATA [LOCAL] INPATH 'folder_path' 
+  INTO TABLE [db_name.]table_name 
+  OPTIONS(property_name=property_value, ...)
+  ```
+
+  You can use the following options to load data:
+  
+  - **DELIMITER:** Delimiters can be provided in the load command.
+
+    ``` 
+    OPTIONS('DELIMITER'=',')
+    ```
+
+  - **QUOTECHAR:** Quote Characters can be provided in the load command.
+
+    ```
+    OPTIONS('QUOTECHAR'='"')
+    ```
+
+  - **COMMENTCHAR:** A comment character can be provided in the load command if the user wants certain lines to be treated as comments.
+
+    ```
+    OPTIONS('COMMENTCHAR'='#')
+    ```
+
+  - **FILEHEADER:** Headers can be provided in the LOAD DATA command if headers are missing in the source files.
+
+    ```
+    OPTIONS('FILEHEADER'='column1,column2') 
+    ```
+
+  - **MULTILINE:** Set to true if the CSV contains newline characters within quoted fields.
+
+    ```
+    OPTIONS('MULTILINE'='true') 
+    ```
+
+  - **ESCAPECHAR:** An escape character can be provided if the user wants strict validation of escape characters in the CSV.
+
+    ```
+    OPTIONS('ESCAPECHAR'='\') 
+    ```
+
+  - **COMPLEX_DELIMITER_LEVEL_1:** Split the complex type data column in a row (e.g., a$b$c --> Array = {a,b,c}).
+
+    ```
+    OPTIONS('COMPLEX_DELIMITER_LEVEL_1'='$') 
+    ```
+
+  - **COMPLEX_DELIMITER_LEVEL_2:** Split the nested complex type data column in a row. The level_1 delimiter is applied first, then the level_2 delimiter based on the complex data type (e.g., a:b$c:d --> Array of Array = {{a,b},{c,d}}).
+
+    ```
+    OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':')
+    ```
+
+  - **ALL_DICTIONARY_PATH:** All dictionary files path.
+
+    ```
+    OPTIONS('ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary')
+    ```
+
+  - **COLUMNDICT:** Dictionary file path for specified column.
+
+    ```
+    OPTIONS('COLUMNDICT'='column1:dictionaryFilePath1,column2:dictionaryFilePath2')
+    ```
+    NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.
+    
+  - **DATEFORMAT:** Date format for specified column.
+
+    ```
+    OPTIONS('DATEFORMAT'='column1:dateFormat1, column2:dateFormat2')
+    ```
+    NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are same as in JAVA. Refer to [SimpleDateFormat](http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html).
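+
+    For example, assuming a date column named doj stored in yyyy-MM-dd format (the column name is illustrative):
+
+    ```
+    OPTIONS('DATEFORMAT'='doj:yyyy-MM-dd')
+    ```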
+
+  - **SINGLE_PASS:** Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.
+
+   This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE.
+
+    ```
+    OPTIONS('SINGLE_PASS'='TRUE')
+    ```
+   Note :
+   * If this option is set to TRUE then data loading will take less time.
+   * If this option is set to some invalid value other than TRUE or FALSE then it uses the default value.
+   * If this option is set to TRUE, then high.cardinality.identify.enable property will be disabled during data load.
+   
+   Example:
+   ```
+   LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
+   options('DELIMITER'=',', 'QUOTECHAR'='"','COMMENTCHAR'='#',
+   'FILEHEADER'='empno,empname,designation,doj,workgroupcategory,
+   workgroupcategoryname,deptno,deptname,projectcode,
+   projectjoindate,projectenddate,attendance,utilization,salary',
+   'MULTILINE'='true','ESCAPECHAR'='\','COMPLEX_DELIMITER_LEVEL_1'='$',
+   'COMPLEX_DELIMITER_LEVEL_2'=':',
+   'ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary',
+   'SINGLE_PASS'='TRUE')
+   ```
+
+  - **BAD RECORDS HANDLING:** Methods of handling bad records are as follows:
+
+    * Load all of the data before dealing with the errors.
+    * Clean or delete bad records before loading data or stop the loading when bad records are found.
+
+    ```
+    OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true', 'BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon', 'BAD_RECORDS_ACTION'='REDIRECT', 'IS_EMPTY_DATA_BAD_RECORD'='false')
+    ```
+
+  NOTE:
+  * If the REDIRECT option is used, CarbonData will add all bad records into a separate CSV file. However, this file must not be used for subsequent data loading because the content may not exactly match the source record. You are advised to cleanse the original source record for further data ingestion. This option is used to remind you which records are bad.
+  * If all the records in the loaded data are bad records, the BAD_RECORDS_ACTION is not applied and the load operation fails.
+  * The maximum number of characters per column is 100000. If there are more than 100000 characters in a column, data loading will fail.
+
+  Example:
+  ```
+  LOAD DATA INPATH 'filepath.csv' INTO TABLE tablename
+  OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true','BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon',
+  'BAD_RECORDS_ACTION'='REDIRECT','IS_EMPTY_DATA_BAD_RECORD'='false')
+  ```
+
+### INSERT DATA INTO CARBONDATA TABLE
+
+  This command inserts data into a CarbonData table. It is defined as a combination of two queries: an Insert and a Select query. 
+  It inserts records from a source table into a target CarbonData table; the source table can be a Hive table, a Parquet table or a CarbonData table itself. 
+  It can also aggregate the records of a table by performing a Select query on the source table and loading the resulting records into the CarbonData table.
+
+  ```
+  INSERT INTO TABLE <CARBONDATA TABLE> SELECT * FROM sourceTableName 
+  [ WHERE { <filter_condition> } ]
+  ```
+
+  You can also omit the `table` keyword and write your query as:
+ 
+  ```
+  INSERT INTO <CARBONDATA TABLE> SELECT * FROM sourceTableName 
+  [ WHERE { <filter_condition> } ]
+  ```
+
+  Overwrite insert data:
+  ```
+  INSERT OVERWRITE <CARBONDATA TABLE> SELECT * FROM sourceTableName 
+  [ WHERE { <filter_condition> } ]
+  ```
+
+  NOTE:
+  * The source table and the CarbonData table must have the same table schema.
+  * The data type of the source and destination table columns should be the same.
+  * The INSERT INTO command does not support partial success; if bad records are found, it fails.
+  * Data cannot be loaded or updated in the source table while an insert from that source table to the target table is in progress.
+
+  Examples
+  ```
+  INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM table2 group by item1
+  ```
+
+  ```
+  INSERT INTO table1 SELECT item1, item2, item3 FROM table2 where item2='xyz'
+  ```
+
+  ```
+  INSERT OVERWRITE table1 SELECT * FROM TABLE2
+  ```
+
+## UPDATE AND DELETE
+  
+### UPDATE
+  
+  This command allows you to update the CarbonData table based on a column expression and optional filter conditions.
+    
+  ```
+  UPDATE <table_name> 
+  SET (column_name1, column_name2, ... column_name n) = (column1_expression , column2_expression, ... column n_expression )
+  [ WHERE { <filter_condition> } ]
+  ```
+  
+  Alternatively, the following command can also be used to update the CarbonData table:
+  
+  ```
+  UPDATE <table_name>
+  SET (column_name1, column_name2) =(select sourceColumn1, sourceColumn2 from sourceTable [ WHERE { <filter_condition> } ] )
+  [ WHERE { <filter_condition> } ]
+  ```
+  
+  NOTE: The update command fails if multiple input rows in the source table match a single row in the destination table.
+  
+  Examples:
+  ```
+  UPDATE t3 SET (t3_salary) = (t3_salary + 9) WHERE t3_name = 'aaa1'
+  ```
+  
+  ```
+  UPDATE t3 SET (t3_date, t3_country) = ('2017-11-18', 'india') WHERE t3_salary < 15003
+  ```
+  
+  ```
+  UPDATE t3 SET (t3_country, t3_name) = (SELECT t5_country, t5_name FROM t5 WHERE t5_id = 5) WHERE t3_id < 5
+  ```
+  
+  ```
+  UPDATE t3 SET (t3_date, t3_serialname, t3_salary) = (SELECT '2099-09-09', t5_serialname, '9999' FROM t5 WHERE t5_id = 5) WHERE t3_id < 5
+  ```
+  
+  
+  ```
+  UPDATE t3 SET (t3_country, t3_salary) = (SELECT t5_country, t5_salary FROM t5 FULL JOIN t3 u WHERE u.t3_id = t5_id and t5_id=6) WHERE t3_id >6
+  ```
+    
+### DELETE
+
+  This command allows us to delete records from a CarbonData table.
+  ```
+  DELETE FROM table_name [WHERE expression]
+  ```
+  
+  Examples:
+  
+  ```
+  DELETE FROM carbontable WHERE column1  = 'china'
+  ```
+  
+  ```
+  DELETE FROM carbontable WHERE column1 IN ('china', 'USA')
+  ```
+  
+  ```
+  DELETE FROM carbontable WHERE column1 IN (SELECT column11 FROM sourceTable2)
+  ```
+  
+  ```
+  DELETE FROM carbontable WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE column1 = 'USA')
+  ```
+
+## COMPACTION
+
+  Compaction improves query performance significantly. 
+  During data load, several CarbonData files are generated because data is sorted only within each load (each load creates one segment and one B+ tree index).
+  
+  There are two types of compaction, Minor and Major compaction.
+  
+  ```
+  ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR'
+  ```
+
+  - **Minor Compaction**
+  
+  In Minor compaction, the user can specify the number of loads to be merged. 
+  Minor compaction is triggered for every data load if the parameter carbon.enable.auto.load.merge is set to true. 
+  If any segments are available to be merged, then compaction runs in parallel with the data load. There are 2 levels in minor compaction:
+  * Level 1: Merging of the segments which are not yet compacted.
+  * Level 2: Merging of the compacted segments again to form a larger segment.
+  
+  ```
+  ALTER TABLE table_name COMPACT 'MINOR'
+  ```
+  
+  - **Major Compaction**
+  
+  In Major compaction, multiple segments can be merged into one large segment. 
+  The user specifies the compaction size up to which segments can be merged; Major compaction is usually done during off-peak time.
+  This command merges the segments into one segment: 
+     
+  ```
+  ALTER TABLE table_name COMPACT 'MAJOR'
+  ```
+
+## PARTITION
+
+  Similar to the partition features of other systems, CarbonData's partition feature can also be used to improve query performance by filtering on the partition column.
+
+### Create Hash Partition Table
+
+  This command allows us to create hash partition.
+  
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type , ...)]
+  PARTITIONED BY (partition_col_name data_type)
+  STORED BY 'carbondata'
+  [TBLPROPERTIES ('PARTITION_TYPE'='HASH',
+                  'NUM_PARTITIONS'='N' ...)]
+  //N is the number of hash partitions
+  ```
+
+  Example:
+  ```
+  CREATE TABLE IF NOT EXISTS hash_partition_table(
+      col_A String,
+      col_B Int,
+      col_C Long,
+      col_D Decimal(10,2),
+      col_F Timestamp
+  ) PARTITIONED BY (col_E Long)
+  STORED BY 'carbondata' TBLPROPERTIES('PARTITION_TYPE'='HASH','NUM_PARTITIONS'='9')
+  ```
+
+### Create Range Partition Table
+
+  This command allows us to create range partition.
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type , ...)]
+  PARTITIONED BY (partition_col_name data_type)
+  STORED BY 'carbondata'
+  [TBLPROPERTIES ('PARTITION_TYPE'='RANGE',
+                  'RANGE_INFO'='2014-01-01, 2015-01-01, 2016-01-01' ...)]
+  ```
+
+  NOTE:
+  * The 'RANGE_INFO' must be defined in ascending order in the table properties.
+  * The default format for partition column of Date/Timestamp type is yyyy-MM-dd. Alternate formats for Date/Timestamp could be defined in CarbonProperties.
+
+  Example:
+  ```
+  CREATE TABLE IF NOT EXISTS range_partition_table(
+      col_A String,
+      col_B Int,
+      col_C Long,
+      col_D Decimal(10,2),
+      col_E Long
+   ) PARTITIONED BY (col_F Timestamp)
+   STORED BY 'carbondata'
+   TBLPROPERTIES('PARTITION_TYPE'='RANGE',
+   'RANGE_INFO'='2015-01-01, 2016-01-01, 2017-01-01, 2017-02-01')
+  ```
+
+### Create List Partition Table
+
+  This command allows us to create list partition.
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type , ...)]
+  PARTITIONED BY (partition_col_name data_type)
+  STORED BY 'carbondata'
+  [TBLPROPERTIES ('PARTITION_TYPE'='LIST',
+                  'LIST_INFO'='A, B, C' ...)]
+  ```
+  NOTE: List partition supports grouping of list values at one level only (values can be grouped with parentheses, as in the example below).
+
+  Example:
+  ```
+  CREATE TABLE IF NOT EXISTS list_partition_table(
+      col_B Int,
+      col_C Long,
+      col_D Decimal(10,2),
+      col_E Long,
+      col_F Timestamp
+   ) PARTITIONED BY (col_A String)
+   STORED BY 'carbondata'
+   TBLPROPERTIES('PARTITION_TYPE'='LIST',
+   'LIST_INFO'='aaaa, bbbb, (cccc, dddd), eeee')
+  ```
+
+
+### Show Partitions
+
+  The following command is executed to get the partition information of the table
+
+  ```
+  SHOW PARTITIONS [db_name.]table_name
+  ```
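+
+  Example (using the hash_partition_table created above):
+  ```
+  SHOW PARTITIONS hash_partition_table
+  ```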
+
+### Add a new partition
+
+  ```
+  ALTER TABLE [db_name].table_name ADD PARTITION('new_partition')
+  ```
+
+### Split a partition
+
+  ```
+  ALTER TABLE [db_name].table_name SPLIT PARTITION(partition_id) INTO('new_partition1', 'new_partition2'...)
+  ```
+
+### Drop a partition
+
+  ```
+  //Only drop partition definition, but keep data
+  ALTER TABLE [db_name].table_name DROP PARTITION(partition_id)
+
+  //Drop both partition definition and data
+  ALTER TABLE [db_name].table_name DROP PARTITION(partition_id) WITH DATA
+  ```
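+
+  Illustrative examples of the three commands above, assuming the list_partition_table created earlier; the partition values and IDs are examples only (actual IDs can be checked with SHOW PARTITIONS):
+  ```
+  ALTER TABLE list_partition_table ADD PARTITION('ffff')
+  ALTER TABLE list_partition_table SPLIT PARTITION(4) INTO('cccc', 'dddd')
+  ALTER TABLE list_partition_table DROP PARTITION(2) WITH DATA
+  ```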
+
+  NOTE:
+  * The ADD, SPLIT and DROP commands are not supported for hash partition tables.
+  * Partition ID: unlike Hive, CarbonData does not use folders to separate partitions; instead, the partition ID replaces the task ID in the data file names. This takes advantage of the existing naming scheme and reduces some metadata.
+
+  ```
+  SegmentDir/0_batchno0-0-1502703086921.carbonindex
+            ^
+  SegmentDir/part-0-0_batchno0-0-1502703086921.carbondata
+                     ^
+  ```
+
+  Here are some useful tips to improve query performance of a CarbonData partition table:
+  * The partition column can be excluded from SORT_COLUMNS; this lets the other columns be sorted efficiently.
+  * When writing SQL on a partition table, try to use filters on the partition column, as in the example below.
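+
+  A minimal illustration, assuming the hash_partition_table created above (col_E is its partition column):
+  ```
+  SELECT * FROM hash_partition_table WHERE col_E = 5
+  ```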
+
+## BUCKETING
+
+  The bucketing feature can be used to distribute/organize the table/partition data into multiple files such
+  that similar records are present in the same file. While creating a table, the user needs to specify the
+  columns to be used for bucketing and the number of buckets. The hash value of the bucket columns is used to
+  select the bucket.
+
+  ```
+  CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type, ...)]
+  STORED BY 'carbondata'
+  TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets',
+  'BUCKETCOLUMNS'='columnname')
+  ```
+
+  NOTE:
+  * Bucketing cannot be performed for columns of complex data types.
+  * Columns in the BUCKETCOLUMNS parameter must be dimensions only. The BUCKETCOLUMNS parameter cannot contain a measure or a combination of measures and dimensions.
+
+  Example:
+  ```
+  CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                productNumber Int,
+                                saleQuantity Int,
+                                productName String,
+                                storeCity String,
+                                storeProvince String,
+                                productCategory String,
+                                productBatch String,
+                                revenue Int)
+  STORED BY 'carbondata'
+  TBLPROPERTIES ('BUCKETNUMBER'='4', 'BUCKETCOLUMNS'='productName')
+  ```
+  
+## SEGMENT MANAGEMENT  
+
+### SHOW SEGMENT
+
+  This command is used to get the segments of a CarbonData table.
+
+  ```
+  SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments
+  ```
+  
+  Example:
+  ```
+  SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4
+  ```
+
+### DELETE SEGMENT BY ID
+
+  This command is used to delete a segment by its segment ID. Each segment has a unique segment ID associated with it. 
+  Using this segment ID, you can remove the segment.
+
+  The following command will get the segmentID.
+
+  ```
+  SHOW SEGMENTS FOR TABLE [db_name.]table_name LIMIT number_of_segments
+  ```
+
+  After you retrieve the segment ID of the segment that you want to delete, execute the following command to delete the selected segment.
+
+  ```
+  DELETE FROM TABLE [db_name.]table_name WHERE SEGMENT.ID IN (segment_id1, segments_id2, ...)
+  ```
+
+  Example:
+
+  ```
+  DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0)
+  DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0,5,8)
+  ```
+
+### DELETE SEGMENT BY DATE
+
+  This command allows you to delete CarbonData segment(s) from the store based on the date provided in the DML command. 
+  Segments created before the specified date will be removed from the store.
+
+  ```
+  DELETE FROM TABLE [db_name.]table_name WHERE SEGMENT.STARTTIME BEFORE DATE_VALUE
+  ```
+
+  Example:
+  ```
+  DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.STARTTIME BEFORE '2017-06-01 12:05:06' 
+  ```
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/data-management.md
----------------------------------------------------------------------
diff --git a/docs/data-management.md b/docs/data-management.md
deleted file mode 100644
index b1a3eef..0000000
--- a/docs/data-management.md
+++ /dev/null
@@ -1,157 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-
-# Data Management
-This tutorial is going to introduce you to the conceptual details of data management like:
-
-* [Loading Data](#loading-data)
-* [Deleting Data](#deleting-data)
-* [Compacting Data](#compacting-data)
-* [Updating Data](#updating-data)
-
-## Loading Data
-
-* **Scenario**
-
-   After creating a table, you can load data to the table using the [LOAD DATA](dml-operation-on-carbondata.md) command. The loaded data is available for querying.
-   When data load is triggered, the data is encoded in CarbonData format and copied into HDFS CarbonData store path (specified in carbon.properties file) 
-   in compressed, multi dimensional columnar format for quick analysis queries. The same command can be used to load new data or to
-   update the existing data. Only one data load can be triggered for one table. The high cardinality columns of the dictionary encoding are 
-   automatically recognized and these columns will not be used for dictionary encoding.
-
-* **Procedure**
-  
-   Data loading is a process that involves execution of multiple steps to read, sort and encode the data in CarbonData store format.
-   Each step is executed on different threads. After data loading process is complete, the status (success/partial success) is updated to 
-   CarbonData store metadata. The table below lists the possible load status.
-   
-   
-| Status | Description |
-|-----------------|------------------------------------------------------------------------------------------------------------|
-| Success | All the data is loaded into table and no bad records found. |
-| Partial Success | Data is loaded into table and bad records are found. Bad records are stored at carbon.badrecords.location. |
-   
-   In case of failure, the error will be logged in error log. Details of loads can be seen with [SHOW SEGMENTS](dml-operation-on-carbondata.md#show-segments) command. The show segment command output consists of :
-   
-   - SegmentSequenceId
-   - Status
-   - Load Start Time
-   - Load End Time 
- 
-   The latest load will be displayed first in the output.
-   
-   Refer to [DML operations on CarbonData](dml-operation-on-carbondata.md) for load commands.
-   
-   
-## Deleting Data  
-
-* **Scenario**
-   
-   If you have loaded wrong data into the table, or too many bad records are present and you want to modify and reload the data, you can delete required data loads. 
-   The load can be deleted using the Segment Sequence Id or if the table contains date field then the data can be deleted using the date field.
-   If there are some specific records that need to be deleted based on some filter condition(s) we can delete by records.
-   
-* **Procedure** 
-
-   The loaded data can be deleted in the following ways:
-   
-   * Delete by Segment ID
-      
-      After you get the segment ID of the segment that you want to delete, execute the delete command for the selected segment.
-      The status of deleted segment is updated to Marked for delete / Marked for Update.
-      
-| SegmentSequenceId | Status            | Load Start Time      | Load End Time        |
-|-------------------|-------------------|----------------------|----------------------|
-| 0                 | Success           | 2015-11-19 19:14:... | 2015-11-19 19:14:... |
-| 1                 | Marked for Update | 2015-11-19 19:54:... | 2015-11-19 20:08:... |
-| 2                 | Marked for Delete | 2015-11-19 20:25:... | 2015-11-19 20:49:... |
-
-   * Delete by Date Field
-   
-      If the table contains date field, you can delete the data based on a specific date.
-
-   * Delete by Record
-
-      To delete records from CarbonData table based on some filter Condition(s).
-      
-      For delete commands refer to [DML operations on CarbonData](dml-operation-on-carbondata.md).
-      
-   * **NOTE**:
-    
-     - When the delete segment DML is called, segment will not be deleted physically from the file system. Instead the segment status will be marked as "Marked for Delete". For the query execution, this deleted segment will be excluded.
-     
-     - The deleted segment will be deleted physically during the next load operation and only after the maximum query execution time configured using "max.query.execution.time". By default it is 60 minutes.
-     
-     - If the user wants to force delete the segment physically then he can use CLEAN FILES Command.
-        
-Example :
-          
-```
-CLEAN FILES FOR TABLE table1
-```
-
- This DML will physically delete the segment which are "Marked for delete" and "Compacted" immediately.
-
-## Compacting Data
-      
-* **Scenario**
-  
-  Frequent data ingestion results in several fragmented CarbonData files in the store directory. Since data is sorted only within each load, the indices perform only within each 
-  load. This means that there will be one index for each load and as number of data load increases, the number of indices also increases. As each index works only on one load, 
-  the performance of indices is reduced. CarbonData provides provision for compacting the loads. Compaction process combines several segments into one large segment by merge sorting the data from across the segments.  
-      
-* **Procedure**
-
-  There are two types of compaction Minor and Major compaction.
-  
-  - **Minor Compaction**
-    
-     In minor compaction the user can specify how many loads to be merged. Minor compaction triggers for every data load if the parameter carbon.enable.auto.load.merge is set. If any segments are available to be merged, then compaction will 
-     run parallel with data load. There are 2 levels in minor compaction.
-     
-     - Level 1: Merging of the segments which are not yet compacted.
-     - Level 2: Merging of the compacted segments again to form a bigger segment. 
-  - **Major Compaction**
-     
-     In Major compaction, many segments can be merged into one big segment. User will specify the compaction size until which segments can be merged. Major compaction is usually done during the off-peak time. 
-      
-   There are number of parameters related to Compaction that can be set in carbon.properties file 
-   
-| Parameter | Default | Application | Description | Valid Values |
-|-----------------------------------------|---------|-------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|
-| carbon.compaction.level.threshold | 4, 3 | Minor | This property is for minor compaction which decides how many segments to be merged. Example: If it is set as "2, 3", then minor compaction will be triggered for every 2 segments in level 1. 3 is the number of level 1 compacted segment which is further compacted to new segment in level 2. | NA |
-| carbon.major.compaction.size | 1024 MB | Major | Major compaction size can be configured using this parameter. Sum of the segments which is below this threshold will be merged. | NA |
-| carbon.numberof.preserve.segments | 0 | Minor/Major | This property configures number of segments to preserve from being compacted. Example: carbon.numberof.preserve.segments=2 then 2 latest segments will always be excluded from the compaction. No segments will be preserved by default. | 0-100 |
-| carbon.allowed.compaction.days | 0 | Minor/Major | Compaction will merge the segments which are loaded within the specific number of days configured. Example: If the configuration is 2, then the segments which are loaded in the time frame of 2 days only will get merged. Segments which are loaded 2 days apart will not be merged. This is disabled by default. | 0-100 |
-| carbon.number.of.cores.while.compacting | 2 | Minor/Major | Number of cores which is used to write data during compaction. | 0-100 |   
-  
-   For compaction commands refer to [DDL operations on CarbonData](ddl-operation-on-carbondata.md)
-
-## Updating Data
-
-* **Scenario**
-
-    Sometimes after the data has been ingested into the System, it is required to be updated. Also there may be situations where some specific columns need to be updated
-    on the basis of column expression and optional filter conditions.
-
-* **Procedure**
-
-    To update we need to specify the column expression with an optional filter condition(s).
-
-    For update commands refer to [DML operations on CarbonData](dml-operation-on-carbondata.md).

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/ddl-operation-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/ddl-operation-on-carbondata.md b/docs/ddl-operation-on-carbondata.md
deleted file mode 100644
index d1fee46..0000000
--- a/docs/ddl-operation-on-carbondata.md
+++ /dev/null
@@ -1,448 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-
-# DDL Operations on CarbonData
-This tutorial guides you through the data definition language support provided by CarbonData.
-
-## Overview
-The following DDL operations are supported in CarbonData :
-
-* [CREATE TABLE](#create-table)
-* [SHOW TABLE](#show-table)
-* [ALTER TABLE](#alter-table)
-  - [RENAME TABLE](#rename-table)
-  - [ADD COLUMN](#add-column)
-  - [DROP COLUMNS](#drop-columns)
-  - [CHANGE DATA TYPE](#change-data-type)
-* [DROP TABLE](#drop-table)
-* [COMPACTION](#compaction)
-* [BUCKETING](#bucketing)
-
-
-## CREATE TABLE
-  This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.
-
-```
-   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
-                    [(col_name data_type , ...)]
-   STORED BY 'carbondata'
-   [TBLPROPERTIES (property_name=property_value, ...)]
-   // All Carbon's additional table options will go into properties
-```
-
-### Parameter Description
-
-| Parameter | Description | Optional |
-|---------------|-----------------------------------------------------------------------------------------------------------------------------------------------|----------|
-| db_name | Name of the database. Database name should consist of alphanumeric characters and underscore(\_) special character. | YES |
-| field_list | Comma separated List of fields with data type. The field names should consist of alphanumeric characters and underscore(\_) special character. | NO |
-| table_name | The name of the table in Database. Table name should consist of alphanumeric characters and underscore(\_) special character. | NO |
-| STORED BY | "org.apache.carbondata.format", identifies and creates a CarbonData table. | NO |
-| TBLPROPERTIES | List of CarbonData table properties. | YES |
-
-### Usage Guidelines
-
-   Following are the guidelines for using table properties.
-
-   - **Dictionary Encoding Configuration**
-
-       Dictionary encoding is turned off for all columns by default. You can include and exclude columns for dictionary encoding.
-
-```
-       TBLPROPERTIES ('DICTIONARY_EXCLUDE'='column1, column2')
-       TBLPROPERTIES ('DICTIONARY_INCLUDE'='column1, column2')
-```
-
-   Here, DICTIONARY_INCLUDE will improve the performance for low cardinality dimensions, considerably for string. DICTIONARY_INCLUDE will generate dictionary for the columns specified.
-
-
-
-   - **Table Block Size Configuration**
-
-     The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.
-     If you do not specify this value in the DDL command, default value is used.
-
-```
-       TBLPROPERTIES ('TABLE_BLOCKSIZE'='512')
-```
-
-  Here 512 MB means the block size of this table is 512 MB, you can also set it as 512M or 512.
-
-   - **Inverted Index Configuration**
-
-      Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns which are in reward position.
-      By default inverted index is enabled. The user can disable the inverted index creation for some columns.
-
-```
-       TBLPROPERTIES ('NO_INVERTED_INDEX'='column1, column3')
-```
-
-  No inverted index shall be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable on columns with high-cardinality and is an optional parameter.
-
-   NOTE:
-
-   - By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.
-
-   - All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.
-
-   - **Sort Columns Configuration**
-
-     "SORT_COLUMN" property is for users to specify which columns belong to the MDK index. If user don't specify "SORT_COLUMN" property, by default MDK index be built by using all dimension columns except complex datatype column. 
-
-```
-       TBLPROPERTIES ('SORT_COLUMNS'='column1, column3')
-```
-
-### Example:
-```
-    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                   productNumber Int,
-                                   productName String,
-                                   storeCity String,
-                                   storeProvince String,
-                                   productCategory String,
-                                   productBatch String,
-                                   saleQuantity Int,
-                                   revenue Int)
-      STORED BY 'carbondata'
-      TBLPROPERTIES ('DICTIONARY_EXCLUDE'='storeCity',
-                     'DICTIONARY_INCLUDE'='productNumber',
-                     'NO_INVERTED_INDEX'='productBatch',
-                     'SORT_COLUMNS'='productName,storeCity')
-```
-
-   - **SORT_COLUMNS**
-
-      This table property specifies the order of the sort column.
-
-```
-    TBLPROPERTIES('SORT_COLUMNS'='column1, column3')
-```
-
-   NOTE:
-
-   - If this property is not specified, then by default SORT_COLUMNS consist of all dimension (exclude Complex Column).
-
-   - If this property is specified but with empty argument, then the table will be loaded without sort. For example, ('SORT_COLUMNS'='')
-   
-   - **SORT_SCOPE**
-      This option specifies the scope of the sort during data load. Following are the types of sort scope.
-     * BATCH_SORT: it will increase the load performance but decreases the query performance if identified blocks > parallelism.
-```
-    OPTIONS ('SORT_SCOPE'='BATCH_SORT')
-```
-      You can also specify the sort size option for sort scope.
-```
-    OPTIONS ('SORT_SCOPE'='BATCH_SORT', 'batch_sort_size_inmb'='7')
-```
-     * GLOBAL_SORT: it increases the query performance, especially point query.
-```
-    OPTIONS ('SORT_SCOPE'= GLOBAL_SORT ')
-```
-	 You can also specify the number of partitions to use when shuffling data for sort. If it is not configured, or configured less than 1, then it uses the number of map tasks as reduce tasks. It is recommended that each reduce task deal with 512MB - 1GB data.
-```
-    OPTIONS( 'SORT_SCOPE'='GLOBAL_SORT', 'GLOBAL_SORT_PARTITIONS'='2')
-```
-   NOTE:
-   - Increasing number of partitions might require increasing spark.driver.maxResultSize as sampling data collected at driver increases with increasing partitions.
-   - Increasing number of partitions might increase the number of Btree.
-     * LOCAL_SORT: it is the default sort scope.
-	 * NO_SORT: it will load the data in unsorted manner.
-	 
-
-## SHOW TABLE
-
-  This command can be used to list all the tables in current database or all the tables of a specific database.
-```
-  SHOW TABLES [IN db_Name];
-```
-
-### Parameter Description
-| Parameter  | Description                                                                               | Optional |
-|------------|-------------------------------------------------------------------------------------------|----------|
-| IN db_Name | Name of the database. Required only if tables of this specific database are to be listed. | YES      |
-
-### Example:
-```
-  SHOW TABLES IN ProductSchema;
-```
-
-## ALTER TABLE
-
-The following section shall discuss the commands to modify the physical or logical state of the existing table(s).
-
-### **RENAME TABLE**
-
-This command is used to rename the existing table.
-```
-    ALTER TABLE [db_name.]table_name RENAME TO new_table_name;
-```
-
-#### Parameter Description
-| Parameter     | Description                                                                                   | Optional |
-|---------------|-----------------------------------------------------------------------------------------------|----------|
-| db_Name       | Name of the database. If this parameter is left unspecified, the current database is selected.|   YES    |
-|table_name     | Name of the existing table.                                                                   |   NO     |
-|new_table_name | New table name for the existing table.                                                        |   NO     |
-
-#### Usage Guidelines
-
-- Queries that require the formation of path using the table name for reading carbon store files, running in parallel with Rename command might fail during the renaming operation.
-
-- Renaming of Secondary index table(s) is not permitted.
-
-#### Examples:
-
-```
-    ALTER TABLE carbon RENAME TO carbondata;
-```
-
-```
-    ALTER TABLE test_db.carbon RENAME TO test_db.carbondata;
-```
-
-### **ADD COLUMN**
-
-This command is used to add a new column to the existing table.
-
-```
-    ALTER TABLE [db_name.]table_name ADD COLUMNS (col_name data_type,...)
-    TBLPROPERTIES('DICTIONARY_INCLUDE'='col_name,...',
-    'DICTIONARY_EXCLUDE'='col_name,...',
-    'DEFAULT.VALUE.COLUMN_NAME'='default_value');
-```
-
-#### Parameter Description
-| Parameter        | Description                                                                                               |Optional|
-|------------------|-----------------------------------------------------------------------------------------------------------|------------|
-|db_Name           | Name of the database. If this parameter is left unspecified, the current database is selected.            |YES|
-|table_name        | Name of the existing table.                                                                               |NO |
-|col_name data_type| Name of comma-separated column with data type. Column names contain letters, digits, and underscores (\_). |NO |
-
-NOTE: Do not name the column after name, tupleId, PositionId, and PositionReference when creating Carbon tables because they are used internally by UPDATE, DELETE, and secondary index.
-
-#### Usage Guidelines
-
-- Apart from DICTIONARY_INCLUDE, DICTIONARY_EXCLUDE and default_value no other property will be read. If any other property name is specified, error will not be thrown, it will be ignored.
-
-- If default value is not specified, then NULL will be considered as the default value for the column.
-
-- For addition of column, if DICTIONARY_INCLUDE and DICTIONARY_EXCLUDE are not specified, then the decision will be taken based on data type of the column.
-
-#### Examples:
-
-```
-    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING);
-```
-
-```
-    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
-    TBLPROPERTIES('DICTIONARY_EXCLUDE'='b1');
-```
-
-```
-    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
-    TBLPROPERTIES('DICTIONARY_INCLUDE'='a1');
-```
-
-```
-    ALTER TABLE carbon ADD COLUMNS (a1 INT, b1 STRING)
-    TBLPROPERTIES('DEFAULT.VALUE.a1'='10');
-```
-
-
-### **DROP COLUMNS**
-
-This command is used to delete a existing column or multiple columns in a table.
-
-```
-    ALTER TABLE [db_name.]table_name DROP COLUMNS (col_name, ...);
-```
-
-#### Parameter Description
-| Parameter  | Description                                                                                              | Optional |
-|------------|----------------------------------------------------------------------------------------------------------|----------|
-| db_Name    | Name of the database. If this parameter is left unspecified, the current database is selected.           |  YES     |
-| table_name | Name of the existing table.                                                                              |  NO      |
-| col_name   | Name of comma-separated column with data type. Column names contain letters, digits, and underscores (\_) | NO      |
-
-#### Usage Guidelines
-
-- Deleting a column also clears the corresponding dictionary files, provided the column is dictionary encoded.
-
-- For the delete column operation, at least one key column must remain in the schema after deletion; otherwise an error message is displayed and the operation fails.
-
-#### Examples:
-
-Assume the table contains four columns named a1, b1, c1, and d1.
-
-- **To delete a single column:**
-
-```
-   ALTER TABLE carbon DROP COLUMNS (b1);
-```
-
-```
-    ALTER TABLE test_db.carbon DROP COLUMNS (b1);
-```
-
-
-- **To delete multiple columns:**
-
-```
-   ALTER TABLE carbon DROP COLUMNS (c1,d1);
-```
-
-
-### **CHANGE DATA TYPE**
-
-This command is used to change the data type from INT to BIGINT, or to change the decimal precision from lower to higher.
-
-```
-    ALTER TABLE [db_name.]table_name
-    CHANGE col_name col_name changed_column_type;
-```
-
-#### Parameter Description
-| Parameter           | Description                                                                                               |Optional|
-|---------------------|-----------------------------------------------------------------------------------------------------------|-------|
-| db_Name             | Name of the database. If this parameter is left unspecified, the current database is selected.            |  YES  |
-| table_name          | Name of the existing table.                                                                               |  NO |
-| col_name            | Name of the column whose data type is to be changed. Column names can contain letters, digits, and underscores (\_). | NO |
-| changed_column_type | The new data type for the column.                                                                         |  NO |
-
-#### Usage Guidelines
-
-- Change of decimal data type from lower precision to higher precision will only be supported for cases where there is no data loss.
-
-#### Valid Scenarios
-- Invalid scenario - Change of decimal precision from (10,2) to (10,5) is invalid as in this case only scale is increased but total number of digits remains the same.
-
-- Valid scenario - Change of decimal precision from (10,2) to (12,3) is valid as the total number of digits are increased by 2 but scale is increased only by 1 which will not lead to any data loss.
-
-- NOTE: The allowed upper limit is (38,38) (precision, scale); a change up to this limit is valid as long as it does not result in data loss.
-
-#### Examples:
-- **Changing data type of column a1 from INT to BIGINT**
-
-```
-   ALTER TABLE test_db.carbon CHANGE a1 a1 BIGINT;
-```
-- **Changing decimal precision of column a1 from 10 to 18.**
-
-```
-   ALTER TABLE test_db.carbon CHANGE a1 a1 DECIMAL(18,2);
-```
-
-## DROP TABLE
-
- This command is used to delete an existing table.
-```
-  DROP TABLE [IF EXISTS] [db_name.]table_name;
-```
-
-### Parameter Description
-| Parameter | Description | Optional |
-|-----------|-------------| -------- |
-| db_Name | Name of the database. If not specified, current database will be selected. | YES |
-| table_name | Name of the table to be deleted. | NO |
-
-### Example:
-```
-  DROP TABLE IF EXISTS productSchema.productSalesTable;
-```
-
-## COMPACTION
-
-This command merges the specified number of segments into one segment. This enhances the query performance of the table.
-```
-  ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR';
-```
-
-  To get details about Compaction refer to [Data Management](data-management.md)
-
-### Parameter Description
-
-| Parameter | Description | Optional |
-| ------------- | -----| ----------- |
-| db_name | Database name, if it is not specified then it uses current database. | YES |
-| table_name | The name of the table in provided database.| NO |
-
-### Syntax
-
-- **Minor Compaction**
-```
-ALTER TABLE table_name COMPACT 'MINOR';
-```
-- **Major Compaction**
-```
-ALTER TABLE table_name COMPACT 'MAJOR';
-```
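-
-For illustration, compaction can also be triggered on a table in a specific database, using the syntax above; the database and table names below are hypothetical:
-```
-ALTER TABLE test_db.carbon COMPACT 'MINOR';
-```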
-
-## BUCKETING
-
-The bucketing feature can be used to distribute/organize table or partition data into multiple files such
-that similar records are present in the same file. While creating a table, the user needs to specify the
-columns to be used for bucketing and the number of buckets. The hash value of the bucketing columns is
-used to select the bucket for each record.
-
-```
-   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
-                    [(col_name data_type, ...)]
-   STORED BY 'carbondata'
-   TBLPROPERTIES('BUCKETNUMBER'='noOfBuckets',
-   'BUCKETCOLUMNS'='columnname')
-```
-
-### Parameter Description
-
-| Parameter 	| Description 	| Optional 	|
-|---------------	|------------------------------------------------------------------------------------------------------------------------------	|----------	|
-| BUCKETNUMBER 	| Specifies the number of Buckets to be created. 	| No 	|
-| BUCKETCOLUMNS 	| Specifies the columns to be considered for bucketing. 	| No 	|
-
-### Usage Guidelines
-
-- The feature is supported for Spark 1.6.2 onwards, but the performance optimization is evident from Spark 2.1 onwards.
-
-- Bucketing cannot be performed on columns of complex data types.
-
-- Columns in the BUCKETCOLUMNS parameter must be dimensions only. The BUCKETCOLUMNS parameter cannot include a measure or a combination of measures and dimensions.
-
-
-### Example:
-
-```
- CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                productNumber Int,
-                                saleQuantity Int,
-                                productName String,
-                                storeCity String,
-                                storeProvince String,
-                                productCategory String,
-                                productBatch String,
-                                revenue Int)
-   STORED BY 'carbondata'
-   TBLPROPERTIES ('DICTIONARY_EXCLUDE'='productName',
-                  'DICTIONARY_INCLUDE'='productNumber,saleQuantity',
-                  'NO_INVERTED_INDEX'='productBatch',
-                  'BUCKETNUMBER'='4',
-                  'BUCKETCOLUMNS'='productName')
-```
-

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/dml-operation-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/dml-operation-on-carbondata.md b/docs/dml-operation-on-carbondata.md
deleted file mode 100644
index 66109e8..0000000
--- a/docs/dml-operation-on-carbondata.md
+++ /dev/null
@@ -1,484 +0,0 @@
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-
-# DML Operations on CarbonData
-This tutorial guides you through the data manipulation language support provided by CarbonData.
-
-## Overview 
-The following DML operations are supported in CarbonData :
-
-* [LOAD DATA](#load-data)
-* [INSERT DATA INTO A CARBONDATA TABLE](#insert-data-into-a-carbondata-table)
-* [SHOW SEGMENTS](#show-segments)
-* [DELETE SEGMENT BY ID](#delete-segment-by-id)
-* [DELETE SEGMENT BY DATE](#delete-segment-by-date)
-* [UPDATE CARBONDATA TABLE](#update-carbondata-table)
-* [DELETE RECORDS FROM CARBONDATA TABLE](#delete-records-from-carbondata-table)
-
-## LOAD DATA
-
-This command loads user data in raw format into the CarbonData-specific storage format, which allows CarbonData to provide good performance while querying the data.
-Please visit [Data Management](data-management.md) for more details on LOAD.
-
-### Syntax
-
-```
-LOAD DATA [LOCAL] INPATH 'folder_path' 
-INTO TABLE [db_name.]table_name 
-OPTIONS(property_name=property_value, ...)
-```
-
-OPTIONS is not mandatory for the data loading process. Inside OPTIONS, the user can provide any of the options such as DELIMITER, QUOTECHAR, ESCAPECHAR, or MULTILINE as per requirement.
-
-NOTE: The path must be a canonical path.
-
-### Parameter Description
-
-| Parameter     | Description                                                          | Optional |
-| ------------- | ---------------------------------------------------------------------| -------- |
-| folder_path   | Path of raw csv data folder or file.                                 | NO       |
-| db_name       | Database name, if it is not specified then it uses the current database. | YES      |
-| table_name    | The name of the table in provided database.                          | NO       |
-| OPTIONS       | Extra options provided to Load                                       | YES      |
- 
-
-### Usage Guidelines
-
-You can use the following options to load data:
-
-- **DELIMITER:** Delimiters can be provided in the load command.
-    
-    ``` 
-    OPTIONS('DELIMITER'=',')
-    ```
-
-- **QUOTECHAR:** Quote Characters can be provided in the load command.
-
-    ```
-    OPTIONS('QUOTECHAR'='"')
-    ```
-
-- **COMMENTCHAR:** A comment character can be provided in the load command if the user wants to comment out lines.
-
-    ```
-    OPTIONS('COMMENTCHAR'='#')
-    ```
-
-- **FILEHEADER:** Headers can be provided in the LOAD DATA command if headers are missing in the source files.
-
-    ```
-    OPTIONS('FILEHEADER'='column1,column2') 
-    ```
-
-- **MULTILINE:** CSV with new line character in quotes.
-
-    ```
-    OPTIONS('MULTILINE'='true') 
-    ```
-
-- **ESCAPECHAR:** An escape character can be provided if the user wants strict validation of the escape character in the CSV files.
-
-    ```
-    OPTIONS('ESCAPECHAR'='\') 
-    ```
-
-- **COMPLEX_DELIMITER_LEVEL_1:** Splits the complex type data column in a row (e.g., a$b$c --> Array = {a,b,c}).
-
-    ```
-    OPTIONS('COMPLEX_DELIMITER_LEVEL_1'='$') 
-    ```
-
-- **COMPLEX_DELIMITER_LEVEL_2:** Splits the complex type nested data column in a row. The level_1 delimiter is applied first, then level_2 based on the complex data type (e.g., a:b$c:d --> Array of Array = {{a,b},{c,d}}).
-
-    ```
-    OPTIONS('COMPLEX_DELIMITER_LEVEL_2'=':')
-    ```
-
-- **ALL_DICTIONARY_PATH:** All dictionary files path.
-
-    ```
-    OPTIONS('ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary')
-    ```
-
-- **COLUMNDICT:** Dictionary file path for specified column.
-
-    ```
-    OPTIONS('COLUMNDICT'='column1:dictionaryFilePath1,
-    column2:dictionaryFilePath2')
-    ```
-
-    NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.
-    
-- **DATEFORMAT:** Date format for specified column.
-
-    ```
-    OPTIONS('DATEFORMAT'='column1:dateFormat1, column2:dateFormat2')
-    ```
-
-    NOTE: Date formats are specified by date pattern strings. The date pattern letters in CarbonData are same as in JAVA. Refer to [SimpleDateFormat](http://docs.oracle.com/javase/7/docs/api/java/text/SimpleDateFormat.html).
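-
-    As an illustration, assuming a hypothetical column doj that stores dates in the form 2016-01-31, the option might look like:
-
-    ```
-    OPTIONS('DATEFORMAT'='doj:yyyy-MM-dd')
-    ```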
-
-- **SINGLE_PASS:** Single pass loading enables a single job to finish data loading with dictionary generation on the fly. It enhances performance in scenarios where subsequent data loads after the initial load involve fewer incremental updates to the dictionary.
-
-   This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE.
-
-    ```
-    OPTIONS('SINGLE_PASS'='TRUE')
-    ```
-
-   Note :
-
-   * If this option is set to TRUE then data loading will take less time.
-
-   * If this option is set to some invalid value other than TRUE or FALSE then it uses the default value.
-   
-   * If this option is set to TRUE, then high.cardinality.identify.enable property will be disabled during data load.
-   
-### Example:
-
-```
-LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
-options('DELIMITER'=',', 'QUOTECHAR'='"','COMMENTCHAR'='#',
-'FILEHEADER'='empno,empname,designation,doj,workgroupcategory,
- workgroupcategoryname,deptno,deptname,projectcode,
- projectjoindate,projectenddate,attendance,utilization,salary',
-'MULTILINE'='true','ESCAPECHAR'='\','COMPLEX_DELIMITER_LEVEL_1'='$',
-'COMPLEX_DELIMITER_LEVEL_2'=':',
-'ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary',
-'SINGLE_PASS'='TRUE'
-)
-```
-
-- **BAD RECORDS HANDLING:** Methods of handling bad records are as follows:
-
-    * Load all of the data before dealing with the errors.
-
-    * Clean or delete bad records before loading data or stop the loading when bad records are found.
-
-    ```
-    OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true', 'BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon', 'BAD_RECORDS_ACTION'='REDIRECT', 'IS_EMPTY_DATA_BAD_RECORD'='false')
-    ```
-
-    NOTE:
-
-    * If the REDIRECT option is used, Carbon will add all bad records into a separate CSV file. However, this file must not be used for subsequent data loading because its content may not exactly match the source records. You are advised to cleanse the original source records for further data ingestion. This option is used to remind you which records are bad records.
-
-    * In loaded data, if all records are bad records, the BAD_RECORDS_ACTION is invalid and the load operation fails.
-
-    * The maximum number of characters per column is 100000. If there are more than 100000 characters in a column, data loading will fail.
-
-### Example:
-
-```
-LOAD DATA INPATH 'filepath.csv'
-INTO TABLE tablename
-OPTIONS('BAD_RECORDS_LOGGER_ENABLE'='true',
-'BAD_RECORD_PATH'='hdfs://hacluster/tmp/carbon',
-'BAD_RECORDS_ACTION'='REDIRECT',
-'IS_EMPTY_DATA_BAD_RECORD'='false');
-```
-
- **Bad Records Management Options:**
-
- | Options                   | Default Value | Description                                                                                                                                                                                                                                                                                                                                                                                                                                              |
- |---------------------------|---------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
- | BAD_RECORDS_LOGGER_ENABLE | false         | Whether to create logs with details about bad records.                                                                                                                                                                                                                                                                                                                                                                                                   |
- | BAD_RECORDS_ACTION        | FORCE          | Following are the four types of action for bad records:  FORCE: Auto-corrects the data by storing the bad records as NULL.  REDIRECT: Bad records are written to the raw CSV instead of being loaded.  IGNORE: Bad records are neither loaded nor written to the raw CSV.  FAIL: Data loading fails if any bad records are found.  NOTE: In loaded data, if all records are bad records, the BAD_RECORDS_ACTION is invalid and the load operation fails. |
 | IS_EMPTY_DATA_BAD_RECORD  | false         | If false, empty ("" or '' or ,,) data is not considered a bad record, and vice versa.                                                                                                                                                                                                                                                                                                                                                          |
- | BAD_RECORD_PATH           | -             | Specifies the HDFS path where bad records are stored. By default the value is Null. This path must be configured by the user if the bad records logger is enabled or the bad records action is set to REDIRECT.                                                                                                                                                                                                                                                           |
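-
-For comparison, a minimal sketch of a load that should stop as soon as any bad record is found (the table and file names are illustrative):
-
-```
-LOAD DATA INPATH 'filepath.csv'
-INTO TABLE tablename
-OPTIONS('BAD_RECORDS_ACTION'='FAIL');
-```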
-
-## INSERT DATA INTO A CARBONDATA TABLE
-
-This command inserts data into a CarbonData table. It is a combination of an INSERT and a SELECT query: it inserts records from a source table into a target CarbonData table. The source table can be a Hive table, a Parquet table, or a CarbonData table itself. It also allows aggregating the records of a table by performing a SELECT query on the source table and loading the resulting records into a CarbonData table.
-
-**NOTE** :  The client node where the INSERT command is executed must be part of the cluster.
-
-### Syntax
-
-```
-INSERT INTO TABLE <CARBONDATA TABLE> SELECT * FROM sourceTableName 
-[ WHERE { <filter_condition> } ];
-```
-
-You can also omit the `table` keyword and write your query as:
- 
-```
-INSERT INTO <CARBONDATA TABLE> SELECT * FROM sourceTableName 
-[ WHERE { <filter_condition> } ];
-```
-
-### Parameter Description
-
-| Parameter | Description |
-|--------------|---------------------------------------------------------------------------------|
-| CARBON TABLE | The name of the Carbon table in which you want to perform the insert operation. |
-| sourceTableName | The table from which the records are read and inserted into destination CarbonData table. |
-
-### Usage Guidelines
-The following conditions must be met for a successful insert operation:
-
-- The source table and the CarbonData table must have the same table schema.
-- The table must be created.
-- Overwrite is not supported for CarbonData table.
-- The data types of the source and destination table columns should be the same; otherwise the data from the source table is treated as bad records and the INSERT command fails.
-- The INSERT INTO command does not support partial success; if bad records are found, it fails.
-- Data cannot be loaded or updated in source table while insert from source table to target table is in progress.
-
-To enable data load or update during insert operation, configure the following property to true.
-
-```
-carbon.insert.persist.enable=true
-```
-
-By default, the above configuration is set to false.
-
-**NOTE**: Enabling this property will reduce the performance.
-
-### Examples
-```
-INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM 
-table2 group by item1;
-```
-
-```
-INSERT INTO table1 SELECT item1, item2, item3 FROM table2 
-where item2='xyz';
-```
-
-```
-INSERT INTO table1 SELECT * FROM table2 
-where exists (select * from table3 
-where table2.item1 = table3.item1);
-```
-
-**The Status Success/Failure shall be captured in the driver log.**
-
-## SHOW SEGMENTS
-
-This command is used to get the segments of CarbonData table.
-
-```
-SHOW SEGMENTS FOR TABLE [db_name.]table_name 
-LIMIT number_of_segments;
-```
-
-### Parameter Description
-
-| Parameter          | Description                                                          | Optional |
-| ------------------ | ---------------------------------------------------------------------| ---------|
-| db_name            | Database name, if it is not specified then it uses the current database. | YES      |
-| table_name         | The name of the table in provided database.                          | NO       |
-| number_of_segments | Limit the output to this number.                                     | YES      |
-
-### Example:
-
-```
-SHOW SEGMENTS FOR TABLE CarbonDatabase.CarbonTable LIMIT 4;
-```
-
-## DELETE SEGMENT BY ID
-
-This command is used to delete a segment by using its segment ID. Each segment has a unique segment ID associated with it.
-Using this segment ID, you can remove the segment.
-
-The following command will get the segmentID.
-
-```
-SHOW SEGMENTS FOR Table [db_name.]table_name LIMIT number_of_segments
-```
-
-After you retrieve the segment ID of the segment that you want to delete, execute the following command to delete the selected segment.
-
-```
-DELETE FROM TABLE [db_name.]table_name WHERE SEGMENT.ID IN (segment_id1, segments_id2, ...)
-```
-
-### Parameter Description
-| Parameter  | Description                                                          | Optional |
-| -----------| ---------------------------------------------------------------------|----------|
-| segment_id | Segment Id of the load.                                              | NO       |
-| db_name    | Database name, if it is not specified then it uses the current database. | YES      |
-| table_name | The name of the table in provided database.                          | NO       |
-
-### Example:
-
-```
-DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0);
-DELETE FROM TABLE CarbonDatabase.CarbonTable WHERE SEGMENT.ID IN (0,5,8);
-```
-  NOTE: A fractional segment ID such as 0.1 denotes a compacted segment (for example, the segment produced by compacting segment 0 and its subsequent segments).
-
-## DELETE SEGMENT BY DATE
-
-This command deletes CarbonData segment(s) from the store based on the date provided by the user in the DML command.
-Segments created before the specified date are removed from the store.
-
-```
-DELETE FROM TABLE [db_name.]table_name 
-WHERE SEGMENT.STARTTIME BEFORE DATE_VALUE
-```
-
-### Parameter Description
-
-| Parameter  | Description                                                                                        | Optional |
-| ---------- | ---------------------------------------------------------------------------------------------------| -------- |
-| DATE_VALUE | Valid segment load start time value. All the segments before this specified date will be deleted. | NO       |
-| db_name    | Database name, if it is not specified then it uses the current database.                               | YES      |
-| table_name | The name of the table in provided database.                                                        | NO       |
-
-### Example:
-
-```
- DELETE FROM TABLE CarbonDatabase.CarbonTable 
- WHERE SEGMENT.STARTTIME BEFORE '2017-06-01 12:05:06';  
-```
-
-## Update CarbonData Table
-This command updates the CarbonData table based on the column expression and optional filter conditions.
-
-### Syntax
-
-```
- UPDATE <table_name>
- SET (column_name1, column_name2, ... column_name n) =
- (column1_expression , column2_expression, ... column n_expression )
- [ WHERE { <filter_condition> } ];
-```
-
-Alternatively, the following command can also be used to update the CarbonData table:
-
-```
-UPDATE <table_name>
-SET (column_name1, column_name2) =
-(select sourceColumn1, sourceColumn2 from sourceTable
-[ WHERE { <filter_condition> } ] )
-[ WHERE { <filter_condition> } ];
-```
-
-### Parameter Description
-
-| Parameter | Description |
-|--------------|---------------------------------------------------------------------------------|
-| table_name | The name of the Carbon table in which you want to perform the update operation. |
-| column_name | The destination columns to be updated. |
-| sourceColumn | The source table column values to be updated in destination table. |
-| sourceTable | The table from which the records are updated into destination Carbon table. |
-
-NOTE: This functionality is currently not supported in Spark 2.x and will be supported soon.
-
-### Usage Guidelines
-The following conditions must be met for a successful update:
-
-- The update command fails if multiple input rows in the source table match a single row in the destination table.
-- If the source table generates empty records, the update operation completes successfully without updating the table.
-- If a source table row does not correspond to any of the existing rows in the destination table, the update operation completes successfully without updating the table.
-- In a sub-query, if the source table and the target table are the same, the update operation fails.
-- If the sub-query used in the UPDATE statement contains an aggregate function or a GROUP BY clause, the UPDATE operation fails.
-
-### Examples
-
- Update is not supported for queries that contain an aggregate function or a GROUP BY clause, as the following example shows.
-
-```
- UPDATE t_carbn01 a
- SET (a.item_type_code, a.profit) = ( SELECT b.item_type_cd,
- sum(b.profit) from t_carbn01b b
- WHERE item_type_cd =2 group by item_type_code);
-```
-
-Here the update operation fails because the sub-query contains the aggregate function sum(b.profit) and a GROUP BY clause.
-
-
-```
-UPDATE carbonTable1 d
-SET(d.column3,d.column5 ) = (SELECT s.c33 ,s.c55
-FROM sourceTable1 s WHERE d.column1 = s.c11)
-WHERE d.column1 = 'china' AND EXISTS( SELECT * FROM table3 o WHERE o.c2 > 1);
-```
-
-
-```
-UPDATE carbonTable1 d SET (c3) = (SELECT s.c33 from sourceTable1 s
-WHERE d.column1 = s.c11)
-WHERE exists( select * from iud.other o where o.c2 > 1);
-```
-
-
-```
-UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5 , "y" ));
-```
-
-
-```
-UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, "xyx")
-WHERE d.column1 = 'india';
-```
-
-
-```
-UPDATE carbonTable1 d SET (c2, c5 ) = (c2 + 1, "xyx")
-WHERE d.column1 = 'india'
-and EXISTS( SELECT * FROM table3 o WHERE o.column2 > 1);
-```
-
-**The Status Success/Failure shall be captured in the driver log and the client.**
-
-
-## Delete Records from CarbonData Table
-This command allows us to delete records from a CarbonData table.
-
-### Syntax
-
-```
-DELETE FROM table_name [WHERE expression];
-```
-
-### Parameter Description
-
-| Parameter | Description |
-|--------------|-----------------------------------------------------------------------|
-| table_name | The name of the Carbon table in which you want to perform the delete. |
-
-NOTE: This functionality is currently not supported in Spark 2.x and will be supported soon.
-
-### Examples
-
-```
-DELETE FROM columncarbonTable1 d WHERE d.column1  = 'china';
-```
-
-```
-DELETE FROM dest WHERE column1 IN ('china', 'USA');
-```
-
-```
-DELETE FROM columncarbonTable1
-WHERE column1 IN (SELECT column11 FROM sourceTable2);
-```
-
-```
-DELETE FROM columncarbonTable1
-WHERE column1 IN (SELECT column11 FROM sourceTable2 WHERE
-column1 = 'USA');
-```
-
-```
-DELETE FROM columncarbonTable1 WHERE column2 >= 4;
-```
-
-**The Status Success/Failure shall be captured in the driver log and the client.**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/faq.md
----------------------------------------------------------------------
diff --git a/docs/faq.md b/docs/faq.md
index 45fd960..6bbd4f7 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -1,20 +1,18 @@
 <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
+    Licensed to the Apache Software Foundation (ASF) under one or more 
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership. 
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with 
+    the License.  You may obtain a copy of the License at
 
       http://www.apache.org/licenses/LICENSE-2.0
 
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
+    Unless required by applicable law or agreed to in writing, software 
+    distributed under the License is distributed on an "AS IS" BASIS, 
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and 
+    limitations under the License.
 -->
 
 # FAQs

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/file-structure-of-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/file-structure-of-carbondata.md b/docs/file-structure-of-carbondata.md
index 7ac234c..303d0e0 100644
--- a/docs/file-structure-of-carbondata.md
+++ b/docs/file-structure-of-carbondata.md
@@ -1,20 +1,18 @@
 <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
+    Licensed to the Apache Software Foundation (ASF) under one or more 
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership. 
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with 
+    the License.  You may obtain a copy of the License at
 
       http://www.apache.org/licenses/LICENSE-2.0
 
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
+    Unless required by applicable law or agreed to in writing, software 
+    distributed under the License is distributed on an "AS IS" BASIS, 
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and 
+    limitations under the License.
 -->
 
 # CarbonData File Structure

http://git-wip-us.apache.org/repos/asf/carbondata/blob/9a69d638/docs/installation-guide.md
----------------------------------------------------------------------
diff --git a/docs/installation-guide.md b/docs/installation-guide.md
index acb952a..1ba5dd1 100644
--- a/docs/installation-guide.md
+++ b/docs/installation-guide.md
@@ -1,20 +1,18 @@
 <!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
+    Licensed to the Apache Software Foundation (ASF) under one or more 
+    contributor license agreements.  See the NOTICE file distributed with
+    this work for additional information regarding copyright ownership. 
+    The ASF licenses this file to you under the Apache License, Version 2.0
+    (the "License"); you may not use this file except in compliance with 
+    the License.  You may obtain a copy of the License at
 
       http://www.apache.org/licenses/LICENSE-2.0
 
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
+    Unless required by applicable law or agreed to in writing, software 
+    distributed under the License is distributed on an "AS IS" BASIS, 
+    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+    See the License for the specific language governing permissions and 
+    limitations under the License.
 -->
 
 # Installation Guide

