carbondata-commits mailing list archives

From chenliang...@apache.org
Subject [1/3] incubator-carbondata-site git commit: Synchronized MD Files with Incubator CarbonData
Date Wed, 15 Mar 2017 14:46:33 GMT
Repository: incubator-carbondata-site
Updated Branches:
  refs/heads/asf-site 0839eb10c -> f89c04dcd


http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/configuration-parameters.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/configuration-parameters.md b/src/site/markdown/configuration-parameters.md
index 71fddf7..75001be 100644
--- a/src/site/markdown/configuration-parameters.md
+++ b/src/site/markdown/configuration-parameters.md
@@ -34,10 +34,10 @@ This section provides the details of all the configurations required for the Car
 | Property | Default Value | Description |
 |----------------------------|-------------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | carbon.storelocation | /user/hive/warehouse/carbon.store | Location where CarbonData will create the store, and write the data in its own format. NOTE: Store location should be in HDFS. |
-| carbon.ddl.base.hdfs.url | hdfs://hacluster/opt/data | This property is used to configure the HDFS relative path, the path configured in carbon.ddl.base.hdfs.url will be appended to the HDFS path configured in fs.defaultFS. If this path is configured, then user need not pass the complete path while dataload. For example: If absolute path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from property fs.defaultFS and user can configure the /data/cnbc/ as carbon.ddl.base.hdfs.url. Now while dataload user can specify the csv path as /2016/xyz.csv.  |
+| carbon.ddl.base.hdfs.url | hdfs://hacluster/opt/data | This property is used to configure the HDFS relative path, the path configured in carbon.ddl.base.hdfs.url will be appended to the HDFS path configured in fs.defaultFS. If this path is configured, then user need not pass the complete path while dataload. For example: If absolute path of the csv file is hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv, the path "hdfs://10.18.101.155:54310" will come from property fs.defaultFS and user can configure the /data/cnbc/ as carbon.ddl.base.hdfs.url. Now while dataload user can specify the csv path as /2016/xyz.csv. |
 | carbon.badRecords.location | /opt/Carbon/Spark/badrecords | Path where the bad records are stored. |
 | carbon.kettle.home | $SPARK_HOME/carbonlib/carbonplugins | Configuration for loading the data with kettle. |
-| carbon.data.file.version | 2 | If this parameter value is set to 1, then CarbonData will support the data load which is in old format(0.x version). If the value is set to 2(1.x onwards version), then CarbonData will support the data load of new format only. |
+| carbon.data.file.version | 2 | If this parameter value is set to 1, then CarbonData will support the data load which is in old format(0.x version). If the value is set to 2(1.x onwards version), then CarbonData will support the data load of new format only.|                    
 
 ##  Performance Configuration
 This section provides the details of all the configurations required for CarbonData Performance Optimization.
@@ -140,9 +140,9 @@ This section provides the details of all the configurations required for CarbonD
 ##  Spark Configuration
  <b><p align="center">Spark Configuration Reference in spark-defaults.conf</p></b>
  
-| Parameter 	| Default Value 	| Description 	|
-|----------------------------------------	|--------------------------------------------------------	|----------------------------------------------------------------------	|
-| spark.driver.memory 	| 1g 	| Amount of memory to be used for the driver process. 	|
-| spark.executor.memory 	| 1g 	| Amount of memory to be used per executor process. 	|
-| spark.sql.bigdata.register.analyseRule 	| org.apache.spark.sql.hive.acl.CarbonAccessControlRules 	| CarbonAccessControlRules need to be set for enabling Access Control. 	|
+| Parameter | Default Value | Description |
+|----------------------------------------|--------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
+| spark.driver.memory | 1g | Amount of memory to be used by the driver process. |
+| spark.executor.memory | 1g | Amount of memory to be used per executor process. |
+| spark.sql.bigdata.register.analyseRule | org.apache.spark.sql.hive.acl.CarbonAccessControlRules | CarbonAccessControlRules need to be set for enabling Access Control. |
  
\ No newline at end of file

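To make the relative-path behaviour described in the carbon.ddl.base.hdfs.url row above concrete, here is a minimal sketch of the two equivalent load statements, issued through a CarbonSession as in the quick-start guide. The session variable `carbon` and the table `sales` are illustrative assumptions; the paths are taken from the example in the table row.

```
// Sketch only: assumes fs.defaultFS = hdfs://10.18.101.155:54310 and
// carbon.ddl.base.hdfs.url = /data/cnbc in carbon.properties, plus an
// existing CarbonSession `carbon` (see quick-start-guide.md).
// The table `sales` is hypothetical.

// Load using the absolute HDFS path:
carbon.sql("LOAD DATA INPATH 'hdfs://10.18.101.155:54310/data/cnbc/2016/xyz.csv' INTO TABLE sales")

// Equivalent load giving only the part of the path after carbon.ddl.base.hdfs.url:
carbon.sql("LOAD DATA INPATH '/2016/xyz.csv' INTO TABLE sales")
```
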
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/data-management.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/data-management.md b/src/site/markdown/data-management.md
old mode 100755
new mode 100644

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/ddl-operation-on-carbondata.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/ddl-operation-on-carbondata.md b/src/site/markdown/ddl-operation-on-carbondata.md
index ec3008b..de4999e 100644
--- a/src/site/markdown/ddl-operation-on-carbondata.md
+++ b/src/site/markdown/ddl-operation-on-carbondata.md
@@ -31,15 +31,14 @@ The following DDL operations are supported in CarbonData :
 
 ## CREATE TABLE
   This command can be used to create a CarbonData table by specifying the list of fields along with the table properties.
-  
 ```
-   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name 
-                    [(col_name data_type, ...)]
+   CREATE TABLE [IF NOT EXISTS] [db_name.]table_name
+                    [(col_name data_type , ...)]
    STORED BY 'carbondata'
    [TBLPROPERTIES (property_name=property_value, ...)]
    // All Carbon's additional table options will go into properties
 ```
-   
+
 ### Parameter Description
 
 | Parameter | Description | Optional |
@@ -49,48 +48,43 @@ The following DDL operations are supported in CarbonData :
 | table_name | The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. | No |
 | STORED BY | "org.apache.carbondata.format", identifies and creates a CarbonData table. | No |
 | TBLPROPERTIES | List of CarbonData table properties. |  |
- 
- 
+
 ### Usage Guidelines
-            
+
    Following are the guidelines for using table properties.
-     
+
    - **Dictionary Encoding Configuration**
-   
+
        Dictionary encoding is enabled by default for all String columns, and disabled for non-String columns. You can include and exclude columns for dictionary encoding.
-     
 ```
-       TBLPROPERTIES ("DICTIONARY_EXCLUDE"="column1, column2") 
-       TBLPROPERTIES ("DICTIONARY_INCLUDE"="column1, column2") 
+       TBLPROPERTIES ("DICTIONARY_EXCLUDE"="column1, column2")
+       TBLPROPERTIES ("DICTIONARY_INCLUDE"="column1, column2")
 ```
-       
+
    Here, DICTIONARY_EXCLUDE will exclude dictionary creation. This is applicable for high-cardinality columns and is an optional parameter. DICTIONARY_INCLUDE will generate dictionary for the columns specified in the list.
-     
+
    - **Row/Column Format Configuration**
-     
+
        Column groups with more than one column are stored in row format, instead of columnar format. By default, each column is a separate column group.
-     
 ```
 TBLPROPERTIES ("COLUMN_GROUPS"="(column1, column3),
-(Column4,Column5,Column6)") 
+(Column4,Column5,Column6)")
 ```
-   
+
    - **Table Block Size Configuration**
-   
+
      The block size of table files can be defined using the property TABLE_BLOCKSIZE. It accepts only integer values. The default value is 1024 MB and supports a range of 1 MB to 2048 MB.
      If you do not specify this value in the DDL command, default value is used.
-     
 ```
        TBLPROPERTIES ("TABLE_BLOCKSIZE"="512 MB")
 ```
-     
+
   Here 512 MB means the block size of this table is 512 MB, you can also set it as 512M or 512.
-   
+
    - **Inverted Index Configuration**
-     
+
      Inverted index is very useful to improve compression ratio and query speed, especially for those low-cardinality columns that are in reward position.
       By default inverted index is enabled. The user can disable the inverted index creation for some columns.
-     
 ```
        TBLPROPERTIES ("NO_INVERTED_INDEX"="column1, column3")
 ```
@@ -98,44 +92,42 @@ TBLPROPERTIES ("COLUMN_GROUPS"="(column1, column3),
   No inverted index shall be generated for the columns specified in NO_INVERTED_INDEX. This property is applicable on columns with high-cardinality and is an optional parameter.
 
    NOTE:
-     
-   - By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures. 
-    
+
+   - By default all columns other than numeric datatype are treated as dimensions and all columns of numeric datatype are treated as measures.
+
    - All dimensions except complex datatype columns are part of multi dimensional key(MDK). This behavior can be overridden by using TBLPROPERTIES. If the user wants to keep any column (except columns of complex datatype) in multi dimensional key then he can keep the columns either in DICTIONARY_EXCLUDE or DICTIONARY_INCLUDE.
-     
-     
+
 ### Example:
 ```
-   CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
-                                productNumber Int,
-                                productName String, 
-                                storeCity String, 
-                                storeProvince String, 
-                                productCategory String, 
-                                productBatch String,
-                                saleQuantity Int,
-                                revenue Int)       
-   STORED BY 'carbondata' 
-   TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
-                  'DICTIONARY_EXCLUDE'='productName',
-                  'DICTIONARY_INCLUDE'='productNumber',
-                  'NO_INVERTED_INDEX'='productBatch')
+    CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
+                                   productNumber Int,
+                                   productName String,
+                                   storeCity String,
+                                   storeProvince String,
+                                   productCategory String,
+                                   productBatch String,
+                                   saleQuantity Int,
+                                   revenue Int)
+      STORED BY 'carbondata'
+      TBLPROPERTIES ('COLUMN_GROUPS'='(productNumber,productName)',
+                     'DICTIONARY_EXCLUDE'='storeCity',
+                     'DICTIONARY_INCLUDE'='productNumber',
+                     'NO_INVERTED_INDEX'='productBatch')
 ```
-    
+
 ## SHOW TABLE
 
   This command can be used to list all the tables in current database or all the tables of a specific database.
 ```
   SHOW TABLES [IN db_Name];
 ```
-  
+
 ### Parameter Description
 | Parameter  | Description                                                                               | Optional |
 |------------|-------------------------------------------------------------------------------------------|----------|
 | IN db_Name | Name of the database. Required only if tables of this specific database are to be listed. | Yes      |
 
 ### Example:
-  
 ```
   SHOW TABLES IN ProductSchema;
 ```
@@ -143,7 +135,6 @@ TBLPROPERTIES ("COLUMN_GROUPS"="(column1, column3),
 ## DROP TABLE
 
  This command is used to delete an existing table.
-
 ```
   DROP TABLE [IF EXISTS] [db_name.]table_name;
 ```
@@ -155,7 +146,6 @@ TBLPROPERTIES ("COLUMN_GROUPS"="(column1, column3),
 | table_name | Name of the table to be deleted. | NO |
 
 ### Example:
-
 ```
   DROP TABLE IF EXISTS productSchema.productSalesTable;
 ```
@@ -163,13 +153,12 @@ TBLPROPERTIES ("COLUMN_GROUPS"="(column1, column3),
 ## COMPACTION
 
 This command merges the specified number of segments into one segment. This enhances the query performance of the table.
-
 ```
   ALTER TABLE [db_name.]table_name COMPACT 'MINOR/MAJOR';
 ```
-  
+
   To get details about Compaction refer to [Data Management](data-management.md)
-  
+
 ### Parameter Description
 
 | Parameter | Description | Optional |
@@ -180,17 +169,14 @@ This command merges the specified number of segments into one segment. This enha
 ### Syntax
 
 - **Minor Compaction**
-
 ```
 ALTER TABLE table_name COMPACT 'MINOR';
 ```
 - **Major Compaction**
-
 ```
 ALTER TABLE table_name COMPACT 'MAJOR';
 ```
 
-
 ## BUCKETING
 
 Bucketing feature can be used to distribute/organize the table/partition data into multiple files such
@@ -203,8 +189,7 @@ of columns is used.
                     [(col_name data_type, ...)]
    STORED BY 'carbondata'
    TBLPROPERTIES(“BUCKETNUMBER”=”noOfBuckets”,
-   “BUCKETCOLUMNS”=’’columnname”, “TABLENAME”=”tablename”)
-
+   “BUCKETCOLUMNS”=’’columnname”)
 ```
   
 ## Parameter Description
@@ -213,7 +198,6 @@ of columns is used.
 |---------------	|------------------------------------------------------------------------------------------------------------------------------	|----------	|
 | BUCKETNUMBER 	| Specifies the number of Buckets to be created. 	| No 	|
 | BUCKETCOLUMNS 	| Specify the columns to be considered for Bucketing  	| No 	|
-| TABLENAME 	| The name of the table in Database. Table Name should consist of alphanumeric characters and underscore(_) special character. 	| Yes 	|
 
 ## Usage Guidelines
 
@@ -221,12 +205,12 @@ of columns is used.
 
 - Bucketing can not be performed for columns of Complex Data Types.
 
-- Columns in the BUCKETCOLUMN parameter must be either a dimension or a measure but combination of both is not supported.
+- Columns in the BUCKETCOLUMN parameter must be dimensions only. The BUCKETCOLUMN parameter cannot contain a measure or a combination of measures and dimensions.
 
 
 ## Example :
 
- ```
+```
  CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                                 productNumber Int,
                                 productName String,
@@ -237,13 +221,11 @@ of columns is used.
                                 saleQuantity Int,
                                 revenue Int)
    STORED BY 'carbondata'
-   TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productCategory)',
+   TBLPROPERTIES ('COLUMN_GROUPS'='(productName,productNumber)',
                   'DICTIONARY_EXCLUDE'='productName',
                   'DICTIONARY_INCLUDE'='productNumber',
                   'NO_INVERTED_INDEX'='productBatch',
                   'BUCKETNUMBER'='4',
-                  'BUCKETCOLUMNS'='productNumber,saleQuantity',
-                  'TABLENAME'='productSalesTable')
-
-  ```
+                  'BUCKETCOLUMNS'='productName')
+ ```
 

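As a rough illustration of how the DDL statements documented in this file fit together, the following sketch creates a table with a few of the TBLPROPERTIES discussed above, lists it, triggers a minor compaction and finally drops it. It assumes a CarbonSession named `carbon` as created in the quick-start guide; the schema and property values are examples, not recommendations.

```
// Sketch, assuming an existing CarbonSession `carbon`.
carbon.sql("""CREATE TABLE IF NOT EXISTS productSchema.productSalesTable (
                 productNumber Int,
                 productName String,
                 saleQuantity Int)
              STORED BY 'carbondata'
              TBLPROPERTIES ('DICTIONARY_INCLUDE'='productNumber',
                             'DICTIONARY_EXCLUDE'='productName',
                             'TABLE_BLOCKSIZE'='512 MB')""")

// List the tables of the database.
carbon.sql("SHOW TABLES IN productSchema").show()

// Merge existing segments with a minor compaction.
carbon.sql("ALTER TABLE productSchema.productSalesTable COMPACT 'MINOR'")

// Clean up.
carbon.sql("DROP TABLE IF EXISTS productSchema.productSalesTable")
```
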
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/dml-operation-on-carbondata.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/dml-operation-on-carbondata.md b/src/site/markdown/dml-operation-on-carbondata.md
index 6fdfcde..74fa0b0 100644
--- a/src/site/markdown/dml-operation-on-carbondata.md
+++ b/src/site/markdown/dml-operation-on-carbondata.md
@@ -87,7 +87,7 @@ You can use the following options to load data:
     ```
 
 - **MULTILINE:** CSV with new line character in quotes.
-5
+
     ```
     OPTIONS('MULTILINE'='true') 
     ```
@@ -123,7 +123,7 @@ You can use the following options to load data:
     column2:dictionaryFilePath2')
     ```
 
-    NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can not be used together.
+    NOTE: ALL_DICTIONARY_PATH and COLUMNDICT can't be used together.
     
 - **DATEFORMAT:** Date format for specified column.
 
@@ -141,8 +141,7 @@ You can use the following options to load data:
 
    Note :  It is recommended to set the value for this option as false.
 
-- **SINGLE_PASS:** Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance
-   in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.
+- **SINGLE_PASS:** Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.
 
    This option specifies whether to use single pass for loading data or not. By default this option is set to FALSE.
 
@@ -155,7 +154,6 @@ You can use the following options to load data:
    * If this option is set to TRUE then data loading will take less time.
 
    * If this option is set to some invalid value other than TRUE or FALSE then it uses the default value.
-
 ### Example:
 
 ```
@@ -164,7 +162,7 @@ options('DELIMITER'=',', 'QUOTECHAR'='"','COMMENTCHAR'='#',
 'FILEHEADER'='empno,empname,designation,doj,workgroupcategory,
  workgroupcategoryname,deptno,deptname,projectcode,
  projectjoindate,projectenddate,attendance,utilization,salary',
-'MULTILINE'='true','ESCAPECHAR'='\','COMPLEX_DELIMITER_LEVEL_1'='$', 
+'MULTILINE'='true','ESCAPECHAR'='\','COMPLEX_DELIMITER_LEVEL_1'='$',
 'COMPLEX_DELIMITER_LEVEL_2'=':',
 'ALL_DICTIONARY_PATH'='/opt/alldictionary/data.dictionary',
 'USE_KETTLE'='FALSE',
@@ -222,7 +220,7 @@ By default the above configuration will be false.
 
 ### Examples
 ```
-INSERT INTO table1 SELECT item1, sum(item2 + 1000) as result FROM
+INSERT INTO table1 SELECT item1 ,sum(item2 + 1000) as result FROM 
 table2 group by item1;
 ```
 
@@ -328,7 +326,7 @@ This command will allow to update the carbon table based on the column expressio
 ```
  UPDATE <table_name>
  SET (column_name1, column_name2, ... column_name n) =
- (column1_expression, column2_expression . .. column n_expression )
+ (column1_expression , column2_expression . .. column n_expression )
  [ WHERE { <filter_condition> } ];
 ```
 
@@ -376,7 +374,7 @@ Here the Update Operation fails as the query contains aggregate function sum(b.p
 
 ```
 UPDATE carbonTable1 d
-SET(d.column3,d.column5 ) = (SELECT s.c33, s.c55
+SET(d.column3,d.column5 ) = (SELECT s.c33 ,s.c55
 FROM sourceTable1 s WHERE d.column1 = s.c11)
 WHERE d.column1 = 'china' EXISTS( SELECT * from table3 o where o.c2 > 1);
 ```
@@ -390,7 +388,7 @@ WHERE exists( select * from iud.other o where o.c2 > 1);
 
 
 ```
-UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5, "y" ));
+UPDATE carbonTable1 SET (c2, c5 ) = (c2 + 1, concat(c5 , "y" ));
 ```
 
 

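A compact, hedged sketch of the load, insert and update flow described in this file, again issued through a CarbonSession `carbon`; the table names, column names and CSV path are placeholders rather than prescribed values.

```
// Sketch only: `carbontable`, `othertable`, their columns and the CSV path are placeholders.
carbon.sql("""LOAD DATA LOCAL INPATH '/opt/rawdata/data.csv' INTO TABLE carbontable
              OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"', 'SINGLE_PASS'='TRUE')""")

// Copy rows from another table with a compatible schema.
carbon.sql("INSERT INTO carbontable SELECT * FROM othertable")

// Update two columns of the rows matching the filter.
carbon.sql("""UPDATE carbontable SET (c2, c5) = (c2 + 1, concat(c5, "y"))
              WHERE c1 = 'china'""")
```
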
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/faq.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/faq.md b/src/site/markdown/faq.md
old mode 100755
new mode 100644
index d02d296..57ac171
--- a/src/site/markdown/faq.md
+++ b/src/site/markdown/faq.md
@@ -19,80 +19,43 @@
 
 # FAQs
 
-* [Can we preserve Segments from Compaction?](#can-we-preserve-segments-from-compaction)
-* [Can we disable horizontal compaction?](#can-we-disable-horizontal-compaction)
-* [What is horizontal compaction?](#what-is-horizontal-compaction)
-* [How to enable Compaction while data loading?](#how-to-enable-compaction-while-data-loading)
-* [Where are Bad Records Stored in CarbonData?](#where-are-bad-records-stored-in-carbondata)
 * [What are Bad Records?](#what-are-bad-records)
-* [Can we use CarbonData on Standalone Spark Cluster?](#can-we-use-carbondata-on-standalone-spark-cluster)
-* [What versions of Apache Spark are Compatible with CarbonData?](#what-versions-of-apache-spark-are-compatible-with-carbondata)
-* [Can we Load Data from excel?](#can-we-load-data-from-excel)
-* [How to enable Single Pass Data Loading?](#how-to-enable-single-pass-data-loading)
-* [What is Single Pass Data Loading?](#what-is-single-pass-data-loading)
-* [How to specify the data loading format for CarbonData ?](#how-to-specify-the-data-loading-format-for-carbondata)
-* [How to resolve store location can’t be found?](#how-to-resolve-store-location-can-not-be-found)
-* [What is carbon.lock.type?]()
-* [How to enable Auto Compaction?](#how-to-enable-auto-compaction)
+* [Where are Bad Records Stored in CarbonData?](#where-are-bad-records-stored-in-carbondata)
+* [How to enable Bad Record Logging?](#how-to-enable-bad-record-logging)
+* [How to ignore the Bad Records?](#how-to-ignore-the-bad-records)
+* [How to specify store location while creating carbon session?](#how-to-specify-store-location-while-creating-carbon-session)
+* [What is Carbon Lock Type?](#what-is-carbon-lock-type)
 * [How to resolve Abstract Method Error?](#how-to-resolve-abstract-method-error)
-* [Getting Exception on Creating a View](#getting-exception-on-creating-a-view)
-* [Is CarbonData supported for Windows?](#is-carbondata-supported-for-windows)
-
-## Can we preserve Segments from Compaction?
-If you want to preserve number of segments from being compacted then you can set the property  **carbon.numberof.preserve.segments**  equal to the **value of number of segments to be preserved**.
-
-Note : *No segments are preserved by Default.*
-
-## Can we disable horizontal compaction?
-Yes, to disable horizontal compaction, set **carbon.horizontal.compaction.enable** to ``FALSE`` in carbon.properties file.
 
-## What is horizontal compaction?
-Compaction performed after Update and Delete operations is referred as Horizontal Compaction. After every DELETE and UPDATE operation, horizontal compaction may occur in case the delta (DELETE/ UPDATE) files becomes more than specified threshold.
-
-By default the parameter **carbon.horizontal.compaction.enable** enabling the horizontal compaction is set to ``TRUE``.
-
-## How to enable Compaction while data loading?
-To enable compaction while data loading, set **carbon.enable.auto.load.merge** to ``TRUE`` in carbon.properties file.
+## What are Bad Records?
+Records that fail to get loaded into the CarbonData due to data type incompatibility or are empty or have incompatible format are classified as Bad Records.
 
 ## Where are Bad Records Stored in CarbonData?
 The bad records are stored at the location set in carbon.badRecords.location in carbon.properties file.
 By default **carbon.badRecords.location** specifies the following location ``/opt/Carbon/Spark/badrecords``.
 
-## What are Bad Records?
-Records that fail to get loaded into the CarbonData due to data type incompatibility are classified as Bad Records.
-
-## Can we use CarbonData on Standalone Spark Cluster?
-Yes, CarbonData can be used on a Standalone spark cluster. But using a standalone cluster has following limitations:
-- single node cluster cannot be scaled up
-- the maximum memory and the CPU computation power has a fixed limit
-- the number of processors are limited in a single node cluster
-
-To harness the actual speed of execution of CarbonData on petabytes of data, it is suggested to use a Multinode Cluster.
-
-## What versions of Apache Spark are Compatible with CarbonData?
-Currently **Spark 1.6.2** and **Spark 2.1** is compatible with CarbonData.
+## How to enable Bad Record Logging?
+While loading data we can specify the approach to handle Bad Records. In order to analyse the cause of the Bad Records, the parameter ``BAD_RECORDS_LOGGER_ENABLE`` must be set to ``TRUE``. There are multiple approaches to handle Bad Records, which can be specified by the parameter ``BAD_RECORDS_ACTION``.
 
-## Can we Load Data from excel?
-Yes, the data can be loaded from excel provided the data is in CSV format.
+- To pad the incorrect values of the csv rows with NULL value and load the data in CarbonData, set the following in the query :
+```
+'BAD_RECORDS_ACTION'='FORCE'
+```
 
-## How to enable Single Pass Data Loading?
-You need to set **SINGLE_PASS** to ``True`` and append it to ``OPTIONS`` Section in the query as demonstrated in the Load Query below :
+- To write the Bad Records without padding incorrect values with NULL in the raw csv (set in the parameter **carbon.badRecords.location**), set the following in the query :
 ```
-LOAD DATA local inpath '/opt/rawdata/data.csv' INTO table carbontable
-OPTIONS('DELIMITER'=',', 'QUOTECHAR'='"','FILEHEADER'='empno,empname,designation','USE_KETTLE'='FALSE')
+'BAD_RECORDS_ACTION'='REDIRECT'
 ```
-Refer to [DML-operations-in-CarbonData](https://github.com/PallaviSingh1992/incubator-carbondata/blob/6b4dd5f3dea8c93839a94c2d2c80ab7a799cf209/docs/dml-operation-on-carbondata.md) for more details and example.
 
-## What is Single Pass Data Loading?
-Single Pass Loading enables single job to finish data loading with dictionary generation on the fly. It enhances performance in the scenarios where the subsequent data loading after initial load involves fewer incremental updates on the dictionary.
-This option specifies whether to use single pass for loading data or not. By default this option is set to ``FALSE``.
+## How to ignore the Bad Records?
+To ignore the Bad Records from getting stored in the raw csv, we need to set the following in the query :
+```
+'BAD_RECORDS_ACTION'='IGNORE'
+```
 
-## How to specify the data loading format for CarbonData?
-Edit carbon.properties file. Modify the value of parameter **carbon.data.file.version**.
-Setting the parameter **carbon.data.file.version** to ``1`` will support data loading in ``old format(0.x version)`` and setting **carbon.data.file.version** to ``2`` will support data loading in ``new format(1.x onwards)`` only.
-By default the data loading is supported using the new format.
+## How to specify store location while creating carbon session?
+The store location specified while creating the carbon session is used by CarbonData to store metadata such as the schema, dictionary files, dictionary metadata and sort indexes.
 
-## How to resolve store location can not be found?
 Try creating ``carbonsession`` with ``storepath`` specified in the following manner :
 ```
 val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession(<store_path>)
@@ -102,20 +65,13 @@ Example:
 val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://localhost:9000/carbon/store ")
 ```
 
-## What is carbon.lock.type?
-This property configuration specifies the type of lock to be acquired during concurrent operations on table. This property can be set with the following values :
+## What is Carbon Lock Type?
+Apache CarbonData acquires a lock on the files to prevent concurrent operations from modifying the same files. The lock can be of the following types depending on the storage location; for HDFS we specify it to be of type HDFSLOCK. By default it is set to type LOCALLOCK.
+The property carbon.lock.type specifies the type of lock to be acquired during concurrent operations on a table. This property can be set with the following values :
 - **LOCALLOCK** : This Lock is created on local file system as file. This lock is useful when only one spark driver (thrift server) runs on a machine and no other CarbonData spark application is launched concurrently.
 - **HDFSLOCK** : This Lock is created on HDFS file system as file. This lock is useful when multiple CarbonData spark applications are launched and no ZooKeeper is running on cluster and the HDFS supports, file based locking.
 
-## How to enable Auto Compaction?
-To enable compaction set **carbon.enable.auto.load.merge** to ``TRUE`` in the carbon.properties file.
-
 ## How to resolve Abstract Method Error?
-You need to specify the ``spark version`` while using Maven to build project.
-
-## Getting Exception on Creating a View
-View not supported in CarbonData.
+In order to build the CarbonData project it is necessary to specify the spark profile, which sets the Spark version. You need to specify the ``spark version`` while using Maven to build the project.
 
-## Is CarbonData supported for Windows?
-We may provide support for windows in future. You are welcome to contribute if you want to add the support :)
 

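The bad-record options above are easiest to see side by side in a single load command. The sketch below assumes the CarbonSession `carbon` from the quick-start guide and a hypothetical table and CSV path; only one BAD_RECORDS_ACTION value would be chosen in practice.

```
// Sketch only: the table name and CSV path are placeholders.
// BAD_RECORDS_LOGGER_ENABLE turns on logging of bad records.
// BAD_RECORDS_ACTION chooses FORCE (pad bad values with NULL and load),
// REDIRECT (write bad records to carbon.badRecords.location) or
// IGNORE (skip them without writing the raw CSV).
carbon.sql("""LOAD DATA INPATH '/opt/rawdata/data.csv' INTO TABLE carbontable
              OPTIONS('DELIMITER'=',',
                      'BAD_RECORDS_LOGGER_ENABLE'='TRUE',
                      'BAD_RECORDS_ACTION'='REDIRECT')""")
```
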
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/file-structure-of-carbondata.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/file-structure-of-carbondata.md b/src/site/markdown/file-structure-of-carbondata.md
index 482c57c..63e34ec 100644
--- a/src/site/markdown/file-structure-of-carbondata.md
+++ b/src/site/markdown/file-structure-of-carbondata.md
@@ -1,3 +1,22 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
 #  CarbonData File Structure
 
 CarbonData files contain groups of data called blocklets, along with all required information like schema, offsets and indices etc, in a file footer, co-located in HDFS.
@@ -6,7 +25,7 @@ The file footer can be read once to build the indices in memory, which can be ut
 
 Each blocklet in the file is further divided into chunks of data called data chunks. Each data chunk is organized either in columnar format or row format, and stores the data of either a single column or a set of columns. All blocklets in a file contain the same number and type of data chunks.
 
-![CarbonData File Structure](../../../src/site/markdown/images/carbon_data_file_structure_new.png?raw=true)
+![CarbonData File Structure](../docs/images/carbon_data_file_structure_new.png?raw=true)
 
 Each data chunk contains multiple groups of data called as pages. There are three types of pages.
 
@@ -14,4 +33,4 @@ Each data chunk contains multiple groups of data called as pages. There are thre
 * Row ID Page (optional): Contains the row ID mappings used when the data page is stored as an inverted index.
 * RLE Page (optional): Contains additional metadata used when the data page is RLE coded.
 
-![CarbonData File Format](../../../src/site/markdown/images/carbon_data_format_new.png?raw=true)
+![CarbonData File Format](../docs/images/carbon_data_format_new.png?raw=true)

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/installation-guide.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/installation-guide.md b/src/site/markdown/installation-guide.md
index c2194ac..c5bf6df 100644
--- a/src/site/markdown/installation-guide.md
+++ b/src/site/markdown/installation-guide.md
@@ -40,42 +40,46 @@ followed by :
 
 ### Procedure
 
-* [Build the CarbonData](https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration) project and get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar" and put in the ``"<SPARK_HOME>/carbonlib"`` folder.
+1. [Build the CarbonData](https://github.com/apache/incubator-carbondata/blob/master/build/README.md) project and get the assembly jar from `./assembly/target/scala-2.1x/carbondata_xxx.jar`. 
 
-     NOTE: Create the carbonlib folder if it does not exists inside ``"<SPARK_HOME>"`` path.
+2. Copy `./assembly/target/scala-2.1x/carbondata_xxx.jar` to `$SPARK_HOME/carbonlib` folder.
 
-* Add the carbonlib folder path in the Spark classpath. (Edit ``"<SPARK_HOME>/conf/spark-env.sh"`` file and modify the value of SPARK_CLASSPATH by appending ``"<SPARK_HOME>/carbonlib/*"`` to the existing value)
+     **NOTE**: Create the carbonlib folder if it does not exist inside `$SPARK_HOME` path.
 
-* Copy the carbon.properties.template to ``"<SPARK_HOME>/conf/carbon.properties"`` folder from "./conf/" of CarbonData repository.
+3. Add the carbonlib folder path in the Spark classpath. (Edit `$SPARK_HOME/conf/spark-env.sh` file and modify the value of `SPARK_CLASSPATH` by appending `$SPARK_HOME/carbonlib/*` to the existing value)
 
-* Copy the "carbonplugins" folder  to ``"<SPARK_HOME>/carbonlib"`` folder from "./processing/" folder of CarbonData repository.
+4. Copy the `./conf/carbon.properties.template` file from CarbonData repository to `$SPARK_HOME/conf/` folder and rename the file to `carbon.properties`.
 
-    NOTE: carbonplugins will contain .kettle folder.
+5. Copy the `./processing/carbonplugins` folder from CarbonData repository to `$SPARK_HOME/carbonlib/` folder.
+
+    **NOTE**: carbonplugins will contain .kettle folder.
+
+6. Repeat Step 2 to Step 5 in all the nodes of the cluster.
     
-* In Spark node, configure the properties mentioned in the following table in ``"<SPARK_HOME>/conf/spark-defaults.conf"`` file.
+7. In Spark node[master], configure the properties mentioned in the following table in `$SPARK_HOME/conf/spark-defaults.conf` file.
 
-| Property | Value | Description |
-|---------------------------------|-----------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
-| carbon.kettle.home | $SPARK_HOME /carbonlib/carbonplugins | Path that will be used by CarbonData internally to create graph for loading the data |
-| spark.driver.extraJavaOptions | -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |
-| spark.executor.extraJavaOptions | -Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties | A string of extra JVM options to pass to executors. For instance, GC settings or other logging. NOTE: You can enter multiple values separated by space. |
+   | Property | Value | Description |
+   |---------------------------------|-----------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
+   | carbon.kettle.home | `$SPARK_HOME/carbonlib/carbonplugins` | Path that will be used by CarbonData internally to create graph for loading the data |
+   | spark.driver.extraJavaOptions | `-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties` | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |
+   | spark.executor.extraJavaOptions | `-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties` | A string of extra JVM options to pass to executors. For instance, GC settings or other logging. **NOTE**: You can enter multiple values separated by space. |
 
-* Add the following properties in ``"<SPARK_HOME>/conf/" carbon.properties``:
+8. Add the following properties in `$SPARK_HOME/conf/carbon.properties` file:
 
-| Property             | Required | Description                                                                            | Example                             | Remark  |
-|----------------------|----------|----------------------------------------------------------------------------------------|-------------------------------------|---------|
-| carbon.storelocation | NO       | Location where data CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore      | Propose to set HDFS directory |
-| carbon.kettle.home   | YES      | Path that will be used by CarbonData internally to create graph for loading the data.         | $SPARK_HOME/carbonlib/carbonplugins |         |
+   | Property             | Required | Description                                                                            | Example                             | Remark  |
+   |----------------------|----------|----------------------------------------------------------------------------------------|-------------------------------------|---------|
+   | carbon.storelocation | NO       | Location where CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore      | Propose to set HDFS directory |
+   | carbon.kettle.home   | YES      | Path that will be used by CarbonData internally to create graph for loading the data.         | `$SPARK_HOME/carbonlib/carbonplugins` |         |
 
 
-* Verify the installation. For example:
+9. Verify the installation. For example:
 
-```
+   ```
    ./spark-shell --master spark://HOSTNAME:PORT --total-executor-cores 2
    --executor-memory 2G
-```
+   ```
 
-NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.
+**NOTE**: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.
 
 To get started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Operations on CarbonData](ddl-operation-on-carbondata.md)
 
@@ -92,77 +96,87 @@ To get started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Opera
 
    The following steps are only for Driver Nodes. (Driver nodes are the one which starts the spark context.)
 
-* [Build the CarbonData](https://cwiki.apache.org/confluence/display/CARBONDATA/Building+CarbonData+And+IDE+Configuration) project and get the assembly jar from "./assembly/target/scala-2.10/carbondata_xxx.jar" and put in the ``"<SPARK_HOME>/carbonlib"`` folder.
+1. [Build the CarbonData](https://github.com/apache/incubator-carbondata/blob/master/build/README.md) project and get the assembly jar from `./assembly/target/scala-2.1x/carbondata_xxx.jar` and copy to `$SPARK_HOME/carbonlib` folder.
 
-      NOTE: Create the carbonlib folder if it does not exists inside ``"<SPARK_HOME>"`` path.
+    **NOTE**: Create the carbonlib folder if it does not exists inside `$SPARK_HOME` path.
 
-* Copy "carbonplugins" folder to ``"<SPARK_HOME>/carbonlib"`` folder from "./processing/" folder of CarbonData repository.
-      carbonplugins will contain .kettle folder.
+2. Copy the `./processing/carbonplugins` folder from CarbonData repository to `$SPARK_HOME/carbonlib/` folder.
 
-* Copy the "carbon.properties.template" to ``"<SPARK_HOME>/conf/carbon.properties"`` folder from conf folder of CarbonData repository.
-* Modify the parameters in "spark-default.conf" located in the ``"<SPARK_HOME>/conf``"
+    **NOTE**: carbonplugins will contain .kettle folder.
 
-| Property | Description | Value |
-|---------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
-| spark.master | Set this value to run the Spark in yarn cluster mode. | Set "yarn-client" to run the Spark in yarn cluster mode. |
-| spark.yarn.dist.files | Comma-separated list of files to be placed in the working directory of each executor. |``"<YOUR_SPARK_HOME_PATH>"/conf/carbon.properties`` |
-| spark.yarn.dist.archives | Comma-separated list of archives to be extracted into the working directory of each executor. |``"<YOUR_SPARK_HOME_PATH>"/carbonlib/carbondata_xxx.jar`` |
-| spark.executor.extraJavaOptions | A string of extra JVM options to pass to executors. For instance  NOTE: You can enter multiple values separated by space. |``-Dcarbon.properties.filepath="<YOUR_SPARK_HOME_PATH>"/conf/carbon.properties`` |
-| spark.executor.extraClassPath | Extra classpath entries to prepend to the classpath of executors. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the values in below parameter spark.driver.extraClassPath |``"<YOUR_SPARK_HOME_PATH>"/carbonlib/carbonlib/carbondata_xxx.jar`` |
-| spark.driver.extraClassPath | Extra classpath entries to prepend to the classpath of the driver. NOTE: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the value in below parameter spark.driver.extraClassPath. |``"<YOUR_SPARK_HOME_PATH>"/carbonlib/carbonlib/carbondata_xxx.jar`` |
-| spark.driver.extraJavaOptions | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |``-Dcarbon.properties.filepath="<YOUR_SPARK_HOME_PATH>"/conf/carbon.properties`` |
-| carbon.kettle.home | Path that will be used by CarbonData internally to create graph for loading the data. |``"<YOUR_SPARK_HOME_PATH>"/carbonlib/carbonplugins`` |
+3. Copy the `./conf/carbon.properties.template` file from CarbonData repository to `$SPARK_HOME/conf/` folder and rename the file to `carbon.properties`.
 
-* Add the following properties in ``<SPARK_HOME>/conf/ carbon.properties``:
+4. Create a `tar.gz` file of the carbonlib folder and move it inside the carbonlib folder.
 
-| Property | Required | Description | Example | Default Value |
-|----------------------|----------|----------------------------------------------------------------------------------------|-------------------------------------|---------------|
-| carbon.storelocation | NO | Location where CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore | Propose to set HDFS directory|
-| carbon.kettle.home | YES | Path that will be used by CarbonData internally to create graph for loading the data. | $SPARK_HOME/carbonlib/carbonplugins |  |
+    ```
+	cd $SPARK_HOME
+	tar -zcvf carbondata.tar.gz carbonlib/
+	mv carbondata.tar.gz carbonlib/
+    ```
 
+5. Configure the properties mentioned in the following table in `$SPARK_HOME/conf/spark-defaults.conf` file.
 
-* Verify the installation.
-   
-```
-     ./bin/spark-shell --master yarn-client --driver-memory 1g 
+   | Property | Description | Value |
+   |---------------------------------|--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------|
+   | spark.master | Set this value to run the Spark in yarn cluster mode. | Set yarn-client to run the Spark in yarn cluster mode. |
+   | spark.yarn.dist.files | Comma-separated list of files to be placed in the working directory of each executor. |`$SPARK_HOME/conf/carbon.properties` |
+   | spark.yarn.dist.archives | Comma-separated list of archives to be extracted into the working directory of each executor. |`$SPARK_HOME/carbonlib/carbondata.tar.gz` |
+   | spark.executor.extraJavaOptions | A string of extra JVM options to pass to executors. For instance  **NOTE**: You can enter multiple values separated by space. |`-Dcarbon.properties.filepath=carbon.properties` |
+   | spark.executor.extraClassPath | Extra classpath entries to prepend to the classpath of executors. **NOTE**: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the values in below parameter spark.driver.extraClassPath |`carbondata.tar.gz/carbonlib/*` |
+   | spark.driver.extraClassPath | Extra classpath entries to prepend to the classpath of the driver. **NOTE**: If SPARK_CLASSPATH is defined in spark-env.sh, then comment it and append the value in below parameter spark.driver.extraClassPath. |`$SPARK_HOME/carbonlib/carbonlib/*` |
+   | spark.driver.extraJavaOptions | A string of extra JVM options to pass to the driver. For instance, GC settings or other logging. |`-Dcarbon.properties.filepath=$SPARK_HOME/conf/carbon.properties` |
+
+
+6. Add the following properties in `$SPARK_HOME/conf/carbon.properties`:
+
+   | Property | Required | Description | Example | Default Value |
+   |----------------------|----------|----------------------------------------------------------------------------------------|-------------------------------------|---------------|
+   | carbon.storelocation | NO | Location where CarbonData will create the store and write the data in its own format. | hdfs://HOSTNAME:PORT/Opt/CarbonStore | Propose to set HDFS directory|
+   | carbon.kettle.home | YES | Path that will be used by CarbonData internally to create graph for loading the data. | carbondata.tar.gz/carbonlib/carbonplugins |  |
+
+
+7. Verify the installation.
+
+   ```
+     ./bin/spark-shell --master yarn-client --driver-memory 1g
      --executor-cores 2 --executor-memory 2G
-```
-  NOTE: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.
+   ```
+  **NOTE**: Make sure you have permissions for CarbonData JARs and files through which driver and executor will start.
 
   Getting started with CarbonData : [Quick Start](quick-start-guide.md), [DDL Operations on CarbonData](ddl-operation-on-carbondata.md)
 
 ## Query Execution Using CarbonData Thrift Server
 
-### Starting CarbonData Thrift Server
+### Starting CarbonData Thrift Server.
 
-   a. cd ``<SPARK_HOME>``
+   a. cd `$SPARK_HOME`
 
    b. Run the following command to start the CarbonData thrift server.
-     
-```
-./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
---class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
-$SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR <carbon_store_path>
-```
-  
+
+   ```
+   ./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true
+   --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer
+   $SPARK_HOME/carbonlib/$CARBON_ASSEMBLY_JAR <carbon_store_path>
+   ```
+
 | Parameter | Description | Example |
 |---------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------|
-| CARBON_ASSEMBLY_JAR | CarbonData assembly jar name present in the ``"<SPARK_HOME>"/carbonlib/`` folder. | carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar |
-| carbon_store_path | This is a parameter to the CarbonThriftServer class. This a HDFS path where CarbonData files will be kept. Strongly Recommended to put same as carbon.storelocation parameter of carbon.properties. | ``hdfs//<host_name>:54310/user/hive/warehouse/carbon.store`` |
+| CARBON_ASSEMBLY_JAR | CarbonData assembly jar name present in the `$SPARK_HOME/carbonlib/` folder. | carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar |
+| carbon_store_path | This is a parameter to the CarbonThriftServer class. This is an HDFS path where CarbonData files will be kept. It is strongly recommended to keep it the same as the carbon.storelocation parameter of carbon.properties. | `hdfs://<host_name>:port/user/hive/warehouse/carbon.store` |
 
-### Examples
+**Examples**
    
-   * Start with default memory and executors
+   * Start with default memory and executors.
 
 ```
 ./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
 --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer 
 $SPARK_HOME/carbonlib
 /carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
-hdfs://hacluster/user/hive/warehouse/carbon.store
+hdfs://<host_name>:port/user/hive/warehouse/carbon.store
 ```
    
-   * Start with Fixed executors and resources
+   * Start with Fixed executors and resources.
 
 ```
 ./bin/spark-submit --conf spark.sql.hive.thriftServer.singleSession=true 
@@ -171,13 +185,13 @@ hdfs://hacluster/user/hive/warehouse/carbon.store
 --executor-cores 32 
 /srv/OSCON/BigData/HACluster/install/spark/sparkJdbc/lib
 /carbondata_2.10-0.1.0-incubating-SNAPSHOT-shade-hadoop2.7.2.jar 
-hdfs://hacluster/user/hive/warehouse/carbon.store
+hdfs://<host_name>:port/user/hive/warehouse/carbon.store
 ```
   
-### Connecting to CarbonData Thrift Server Using Beeline
+### Connecting to CarbonData Thrift Server Using Beeline.
 
 ```
-     cd <SPARK_HOME>
+     cd $SPARK_HOME
      ./bin/beeline jdbc:hive2://<thrftserver_host>:port
 
      Example

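Once the shell from the verification step comes up, a quick smoke test along the lines of the quick-start guide can confirm that the CarbonData jar and carbon.properties are being picked up. This is only a sketch for the Spark 2.x shell; the store path and the table `smoke_test` are assumptions, and the path should match the configured carbon.storelocation.

```
// Sketch: run inside the spark-shell started in the verification step (Spark 2.x).
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.CarbonSession._

// The HDFS store path is an assumption; use the carbon.storelocation configured above.
val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("hdfs://HOSTNAME:PORT/Opt/CarbonStore")

carbon.sql("CREATE TABLE IF NOT EXISTS smoke_test (id String) STORED BY 'carbondata'")
carbon.sql("SHOW TABLES").show()
carbon.sql("DROP TABLE IF EXISTS smoke_test")
```
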
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/quick-start-guide.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/quick-start-guide.md b/src/site/markdown/quick-start-guide.md
index d98ad39..c298b0d 100644
--- a/src/site/markdown/quick-start-guide.md
+++ b/src/site/markdown/quick-start-guide.md
@@ -24,15 +24,15 @@ This tutorial provides a quick introduction to using CarbonData.
 * [Installation and building CarbonData](https://github.com/apache/incubator-carbondata/blob/master/build).
 * Create a sample.csv file using the following commands. The CSV file is required for loading data into CarbonData.
 
-```
-cd carbondata
-cat > sample.csv << EOF
-id,name,city,age
-1,david,shenzhen,31
-2,eason,shenzhen,27
-3,jarry,wuhan,35
-EOF
-```
+  ```
+  cd carbondata
+  cat > sample.csv << EOF
+  id,name,city,age
+  1,david,shenzhen,31
+  2,eason,shenzhen,27
+  3,jarry,wuhan,35
+  EOF
+  ```
 
 ## Interactive Analysis with Spark Shell Version 2.1
 
@@ -60,20 +60,16 @@ import org.apache.spark.sql.CarbonSession._
 * Create a CarbonSession :
 
 ```
-val carbon = SparkSession
-            .builder()
-            .config(sc.getConf)
-            .getOrCreateCarbonSession()
+val carbon = SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<hdfs store path>")
 ```
+**NOTE**: By default the metastore location points to `../carbon.metastore`; the user can provide their own metastore location to CarbonSession, for example `SparkSession.builder().config(sc.getConf).getOrCreateCarbonSession("<hdfs store path>", "<local metastore path>")`.
 
 #### Executing Queries
 
 ##### Creating a Table
 
 ```
-scala>carbon.sql("CREATE TABLE IF NOT EXISTS test_table
-     (id string, name string, city string, age Int)
-     STORED BY 'carbondata'")
+scala>carbon.sql("CREATE TABLE IF NOT EXISTS test_table(id string, name string, city string, age Int) STORED BY 'carbondata'")
 ```
 
 ##### Loading Data to a Table
@@ -81,15 +77,14 @@ scala>carbon.sql("CREATE TABLE IF NOT EXISTS test_table
 ```
 scala>carbon.sql("LOAD DATA INPATH 'sample.csv file path' INTO TABLE test_table")
 ```
-NOTE:Please provide the real file path of sample.csv for the above script.
+**NOTE**: Please provide the real file path of `sample.csv` for the above script.
 
 ###### Query Data from a Table
 
 ```
 scala>carbon.sql("SELECT * FROM test_table").show()
 
-scala>carbon.sql("SELECT city, avg(age), sum(age)
-      FROM test_table GROUP BY city").show()
+scala>carbon.sql("SELECT city, avg(age), sum(age) FROM test_table GROUP BY city").show()
 ```
 
 ## Interactive Analysis with Spark Shell Version 1.6
@@ -102,7 +97,7 @@ Start Spark shell by running the following command in the Spark directory:
 ./bin/spark-shell --jars <carbondata assembly jar path>
 ```
 
-NOTE: In this shell, SparkContext is readily available as sc.
+**NOTE**: In this shell, SparkContext is readily available as `sc`.
 
 * In order to execute the Queries we need to import CarbonContext:
 
@@ -113,19 +108,16 @@ import org.apache.spark.sql.CarbonContext
 * Create an instance of CarbonContext in the following manner :
 
 ```
-val cc = new CarbonContext(sc)
+val cc = new CarbonContext(sc, "<hdfs store path>")
 ```
-
-NOTE: By default store location is pointed to "../carbon.store", user can provide own store location to CarbonContext like new CarbonContext(sc, storeLocation).
+**NOTE**: If running on a local machine without HDFS, configure the local machine's store path instead of the HDFS store path.
 
 #### Executing Queries
 
 ##### Creating a Table
 
 ```
-scala>cc.sql("CREATE TABLE IF NOT EXISTS test_table
-     (id string, name string, city string, age Int)
-     STORED BY 'carbondata'")
+scala>cc.sql("CREATE TABLE IF NOT EXISTS test_table (id string, name string, city string, age Int) STORED BY 'carbondata'")
 ```
 To see the table created :
 
@@ -136,15 +128,13 @@ scala>cc.sql("SHOW TABLES").show()
 ##### Loading Data to a Table
 
 ```
-scala>cc.sql("LOAD DATA INPATH 'sample.csv file path'
-      INTO TABLE test_table")
+scala>cc.sql("LOAD DATA INPATH 'sample.csv file path' INTO TABLE test_table")
 ```
-NOTE:Please provide the real file path of sample.csv for the above script.
+**NOTE**: Please provide the real file path of `sample.csv` for the above script.
 
 ##### Query Data from a Table
 
 ```
 scala>cc.sql("SELECT * FROM test_table").show()
-scala>cc.sql("SELECT city, avg(age), sum(age)
-      FROM test_table GROUP BY city").show()
-```
\ No newline at end of file
+scala>cc.sql("SELECT city, avg(age), sum(age) FROM test_table GROUP BY city").show()
+```

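As a small, optional extension of the quick-start session above (Spark 2.1 variant), a filtered query and a clean-up of the test table might look like this; nothing here is required by the guide.

```
// Continues the Spark 2.1 quick-start session: `carbon` and `test_table` as created above.
carbon.sql("SELECT * FROM test_table WHERE city = 'shenzhen'").show()

// Drop the test table when finished (see ddl-operation-on-carbondata.md).
carbon.sql("DROP TABLE IF EXISTS test_table")
```
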
http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/supported-data-types-in-carbondata.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/supported-data-types-in-carbondata.md b/src/site/markdown/supported-data-types-in-carbondata.md
index 01bd6e3..d71b59b 100644
--- a/src/site/markdown/supported-data-types-in-carbondata.md
+++ b/src/site/markdown/supported-data-types-in-carbondata.md
@@ -1,3 +1,22 @@
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+
+      http://www.apache.org/licenses/LICENSE-2.0
+
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
 #  Data Types
 
 #### CarbonData supports the following data types:

http://git-wip-us.apache.org/repos/asf/incubator-carbondata-site/blob/594dedec/src/site/markdown/troubleshooting.md
----------------------------------------------------------------------
diff --git a/src/site/markdown/troubleshooting.md b/src/site/markdown/troubleshooting.md
index 2c892ae..9181d83 100644
--- a/src/site/markdown/troubleshooting.md
+++ b/src/site/markdown/troubleshooting.md
@@ -21,19 +21,6 @@
 This tutorial is designed to provide troubleshooting for end users and developers
 who are building, deploying, and using CarbonData.
 
-* [Failed to load thrift libraries](#failed-to-load-thrift-libraries)
-* [Failed to launch the Spark Shell](#failed-to-launch-the-spark-shell)
-* [Query Failure with Generic Error on the Beeline](#query-failure-with-generic-error-on-the-beeline)
-* [Failed to execute load query on cluster](#failed-to-execute-load-query-on-cluster)
-* [Failed to execute insert query on cluster](#failed-to-execute-insert-query-on-cluster)
-* [Failed to connect to hiveuser with thrift](#failed-to-connect-to-hiveuser-with-thrift)
-* [Failure to read the metastore db during table creation](#failure-to-read-the-metastore-db-during-table-creation)
-* [Failed to load data on the cluster](#failed-to-load-data-on-the-cluster)
-* [Failed to insert data on the cluster](#failed-to-insert-data-on-the-cluster)
-* [Failed to execute Concurrent Operations](#failed-to-execute-concurrent-operations)
-* [Failed to create a table with a single numeric column](#failed-to-create-a-table-with-a-single-numeric-column)
-* [Data Failure because of Bad Records](#data-failure-because-of-bad-records)
-
 ## Failed to load thrift libraries
 
   **Symptom**
@@ -51,26 +38,7 @@ who are building, deploying, and using CarbonData.
 
   **Procedure**
 
-  Follow the steps below to ensure loading of libraries appropriately :
-
-  1. For ubuntu you have to add a custom.conf file to /etc/ld.so.conf.d
-     For example,
-
-     ```
-     sudo gedit /etc/ld.so.conf.d/randomLibs.conf
-     ```
-
-     Inside this file you are supposed to configure the complete path to the directory that contains all the libraries that you wish to add to the system, let us say /home/ubuntu/localLibs
-
-  2. To ensure your library location ,check for existence of libthrift.so
-
-  3. Save and run the following command to update the system with this libs.
-
-      ```
-      sudo ldconfig
-      ```
-
-    Note : Remember to add only the path to the directory, not the full path for that file, all the libraries inside that path will be automatically indexed.
+  Follow the Apache thrift docs at [https://thrift.apache.org/docs/install](https://thrift.apache.org/docs/install) to install thrift correctly.
 
 ## Failed to launch the Spark Shell
 
@@ -99,41 +67,6 @@ who are building, deploying, and using CarbonData.
     ```
 
     Note :  Refrain from using "mvn clean package" without specifying the profile.
-    
-## Query Failure with Generic Error on the Beeline
-
-   **Symptom**
-
-   Query fails on the executor side and generic error message is printed on the beeline console
-
-   ![Query Failure Beeline](../../../src/site/markdown/images/query_failure_beeline.png?raw=true)
-
-   **Possible Causes**
-
-   - In Query flow, Table B-Tree will be loaded into memory on the driver side and filter condition is validated against the min-max of each block to identify false positive,
-   Once the blocks are selected, based on number of available executors, blocks will be distributed to each executor node as shown in below driver logs snapshot
-
-   ![Query Failure Logs](../../../src/site/markdown/images/query_failure_logs.png?raw=true)
-
-   - When the error occurs in driver side while b-tree loading or block distribution, detail error message will be printed on the beeline console and error trace will be printed on the driver logs.
-
-   - When the error occurs in the executor side, generic error message will be printed as shown in issue description.
-
-   ![Query Failure Job Details](../../../src/site/markdown/images/query_failure_job_details.png?raw=true)
-
-   - Details of the failed stages can be seen in the Spark Application UI by clicking on the failed stages on the failed job as shown in previous snapshot
-
-   ![Query Failure Spark UI](../../../src/site/markdown/images/query_failure_spark_ui.png?raw=true)
-
-   **Procedure**
-
-   Details of the error can be analyzed in details using executor logs available in stdout
-
-   ![Query Failure Spark UI](../../../src/site/markdown/images/query_failure_procedure.png?raw=true)
-
-   Below snapshot shows executor logs with error message for query failure which can be helpful to locate the error
-
-   ![Query Failure Spark UI](../../../src/site/markdown/images/query_failure_issue.png?raw=true)    
 
 ## Failed to execute load query on cluster.
 
@@ -277,11 +210,11 @@ who are building, deploying, and using CarbonData.
 
    2. For the changes to take effect, restart the Spark cluster.
 
-## Failed to execute Concurrent Operations.
+## Failed to execute Concurrent Operations (Load, Insert, Update) on table by multiple workers.
 
   **Symptom**
 
-  Execution of  Concurrent Operations (Load,Insert,Update) on table by multiple workers fails with the following exception :
+  Execution fails with the following exception :
 
    ```
    Table is locked for updation.
@@ -312,29 +245,3 @@ who are building, deploying, and using CarbonData.
   **Procedure**
 
   A single column that can be considered as dimension is mandatory for table creation.
-
-## Data Failure because of Bad Records
-
-   **Symptom**
-
-   Data Loading fails with the following exception
-
-   ```
-   Error: java.lang.Exception: Data load failed due to Bad record
-   ```
-
-   **Possible Causes**
-
-   The parameter BAD_RECORDS_ACTION has not been specified in the Query.
-
-   **Procedure**
-
-   Set the following parameter in the load command OPTIONS as shown below :
-
-   'BAD_RECORDS_ACTION'='FORCE‘
-
-   *Example :*
-
-   ```
-   LOAD DATA INPATH 'hdfs://hacluster/user/loader/moredata01.csv' INTO TABLE flow_carbon_256b OPTIONS('DELIMITER'=',', 'BAD_RECORDS_ACTION'='FORCE');
-   ```


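To illustrate the last point, here is a minimal sketch of the difference between a table definition that trips over the single-numeric-column rule and one that satisfies it. It assumes a CarbonSession `carbon`; the table names are placeholders.

```
// Sketch only; `carbon` is a CarbonSession and the table names are placeholders.

// Expected to fail as described above: the only column is numeric,
// so no column can be treated as a dimension.
carbon.sql("CREATE TABLE IF NOT EXISTS bad_table (amount Int) STORED BY 'carbondata'")

// Works: the String column can act as a dimension.
carbon.sql("CREATE TABLE IF NOT EXISTS good_table (name String, amount Int) STORED BY 'carbondata'")
```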