carbondata-commits mailing list archives

From ravipes...@apache.org
Subject [06/25] carbondata git commit: [Documentation] Editorial review
Date Sat, 03 Mar 2018 12:43:53 GMT
[Documentation] Editorial review

Correct some docs descriptions

This closes #1992


Project: http://git-wip-us.apache.org/repos/asf/carbondata/repo
Commit: http://git-wip-us.apache.org/repos/asf/carbondata/commit/e5d9802a
Tree: http://git-wip-us.apache.org/repos/asf/carbondata/tree/e5d9802a
Diff: http://git-wip-us.apache.org/repos/asf/carbondata/diff/e5d9802a

Branch: refs/heads/branch-1.3
Commit: e5d9802abe244e24a64fc883690632732d94f306
Parents: 6c25d24
Author: sgururajshetty <sgururajshetty@gmail.com>
Authored: Fri Feb 23 17:05:17 2018 +0530
Committer: ravipesala <ravi.pesala@gmail.com>
Committed: Sat Mar 3 17:46:26 2018 +0530

----------------------------------------------------------------------
 docs/data-management-on-carbondata.md | 36 +++++++++++++++---------------
 docs/faq.md                           |  4 ++--
 docs/troubleshooting.md               |  4 ++--
 docs/useful-tips-on-carbondata.md     |  2 +-
 4 files changed, 23 insertions(+), 23 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/carbondata/blob/e5d9802a/docs/data-management-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/data-management-on-carbondata.md b/docs/data-management-on-carbondata.md
index f70e0b7..78ab010 100644
--- a/docs/data-management-on-carbondata.md
+++ b/docs/data-management-on-carbondata.md
@@ -178,7 +178,7 @@ This tutorial is going to introduce all commands and data operations on CarbonData
   SHOW TABLES IN defaultdb
   ```
 
-### ALTER TALBE
+### ALTER TABLE
 
  The following section introduces the commands to modify the physical or logical state of the existing table(s).
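  For instance, a minimal sketch of one of these commands (the RENAME TO form; table names are illustrative):

  ```
  ALTER TABLE carbon_table RENAME TO carbon_table_new
  ```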
 
@@ -494,7 +494,7 @@ This tutorial is going to introduce all commands and data operations on CarbonData
   [ WHERE { <filter_condition> } ]
   ```
   
-  alternatively the following the command can also be used for updating the CarbonData Table :
+  Alternatively, the following command can also be used for updating the CarbonData Table:
   
   ```
   UPDATE <table_name>
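  -- A hedged, illustrative completion of this syntax using the SET (...) = (...)
  -- form; the table, column, and filter values below are assumptions:
  UPDATE t_carbon
  SET (t_salary) = (t_salary + 100)
  WHERE t_name = 'Alice'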
@@ -674,7 +674,7 @@ This tutorial is going to introduce all commands and data operations on CarbonData
 
 #### Insert OVERWRITE
   
-  This command allows you to insert or load overwrite on a spcific partition.
+  This command allows you to insert or load overwrite on a specific partition.
   
   ```
    INSERT OVERWRITE TABLE table_name
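   -- A hedged, illustrative completion (the partition column, values, and source
   -- table are assumptions, following the Hive-style partition overwrite form):
   INSERT OVERWRITE TABLE sales_part
   PARTITION (country = 'US')
   SELECT id, amount FROM staging_sales WHERE country = 'US'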
@@ -898,50 +898,50 @@ will fetch the data from the main table **sales**
 For existing table with loaded data, data load to pre-aggregate table will be triggered by the 
 CREATE DATAMAP statement when user creates the pre-aggregate table.
 For incremental loads after aggregate tables are created, loading data to main table triggers 
-the load to pre-aggregate tables once main table loading is complete.These loads are automic 
+the load to pre-aggregate tables once main table loading is complete. These loads are atomic, 
 meaning that data on main table and aggregate tables are only visible to the user after all tables 
 are loaded.
 
 ##### Querying data from pre-aggregate tables
-Pre-aggregate tables cannot be queries directly.Queries are to be made on main table.Internally 
-carbondata will check associated pre-aggregate tables with the main table and if the 
+Pre-aggregate tables cannot be queried directly. Queries are to be made on the main table. Internally, 
+carbondata will check associated pre-aggregate tables with the main table, and if the 
 pre-aggregate tables satisfy the query condition, the plan is transformed automatically to use 
-pre-aggregate table to fetch the data
+the pre-aggregate table to fetch the data.
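A minimal sketch of this behavior (datamap, table, and column names are illustrative, using the CREATE DATAMAP 'preaggregate' syntax described earlier in this document):

```
CREATE DATAMAP agg_sales ON TABLE sales
USING 'preaggregate'
AS SELECT country, SUM(quantity) FROM sales GROUP BY country

-- Query the main table as usual; when the plan matches the datamap,
-- it is rewritten to read from the pre-aggregate table:
SELECT country, SUM(quantity) FROM sales GROUP BY country
```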
 
 ##### Compacting pre-aggregate tables
 Compaction command (ALTER TABLE COMPACT) needs to be run separately on each pre-aggregate table.
 Running Compaction command on main table will **not automatically** compact the pre-aggregate 
 tables. Compaction is an optional operation for pre-aggregate table. If compaction is performed on
 main table but not performed on pre-aggregate table, all queries still can benefit from 
-pre-aggregate tables.To further improve performance on pre-aggregate tables, compaction can be 
+pre-aggregate tables. To further improve performance on pre-aggregate tables, compaction can be 
 triggered on pre-aggregate tables directly; it will merge the segments inside the pre-aggregate table. 
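As a hedged sketch (assuming the pre-aggregate child table is addressed as the main table name suffixed with the datamap name, e.g. sales_agg_sales; verify the actual child table name in your deployment):

```
ALTER TABLE sales_agg_sales COMPACT 'minor'
```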
 
 ##### Update/Delete Operations on pre-aggregate tables
 This functionality is not supported.
 
   NOTE (<b>RESTRICTION</b>):
-  * Update/Delete operations are <b>not supported</b> on main table which has pre-aggregate tables 
-  created on it.All the pre-aggregate tables <b>will have to be dropped</b> before update/delete 
-  operations can be performed on the main table.Pre-aggregate tables can be rebuilt manually 
+  Update/Delete operations are <b>not supported</b> on main table which has pre-aggregate tables 
+  created on it. All the pre-aggregate tables <b>will have to be dropped</b> before update/delete 
+  operations can be performed on the main table. Pre-aggregate tables can be rebuilt manually 
   after update/delete operations are completed
  
 ##### Delete Segment Operations on pre-aggregate tables
 This functionality is not supported.
 
   NOTE (<b>RESTRICTION</b>):
-  * Delete Segment operations are <b>not supported</b> on main table which has pre-aggregate tables 
-  created on it.All the pre-aggregate tables <b>will have to be dropped</b> before update/delete 
-  operations can be performed on the main table.Pre-aggregate tables can be rebuilt manually 
+  Delete Segment operations are <b>not supported</b> on main table which has pre-aggregate tables 
+  created on it. All the pre-aggregate tables <b>will have to be dropped</b> before update/delete 
+  operations can be performed on the main table. Pre-aggregate tables can be rebuilt manually 
   after delete segment operations are completed
   
 ##### Alter Table Operations on pre-aggregate tables
 This functionality is not supported.
 
   NOTE (<b>RESTRICTION</b>):
-  * Adding new column in new table does not have any affect on pre-aggregate tables. However if 
+  Adding a new column to the main table does not have any effect on pre-aggregate tables. However, if 
   dropping or renaming a column has an impact on the pre-aggregate table, such operations will be 
-  rejected and error will be thrown.All the pre-aggregate tables <b>will have to be dropped</b> 
-  before Alter Operations can be performed on the main table.Pre-aggregate tables can be rebuilt 
+  rejected and an error will be thrown. All the pre-aggregate tables <b>will have to be dropped</b> 
+  before Alter Operations can be performed on the main table. Pre-aggregate tables can be rebuilt 
   manually after Alter Table operations are completed
   
 ### Supporting timeseries data (Alpha feature in 1.3.0)
@@ -1012,7 +1012,7 @@ roll-up for the queries on these hierarchies.
   ```
   
  It is **not necessary** to create pre-aggregate tables for each granularity unless required for 
-  query.Carbondata can roll-up the data and fetch it.
+  query. Carbondata can roll-up the data and fetch it.
    
  For example: For the main table **sales**, if pre-aggregate tables were created as  
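  As a hedged illustration of such roll-up (the doc's original example list is elided in this excerpt; the datamap name, event-time column, and DMPROPERTIES keys are assumptions, and the exact keys varied across 1.3.x releases):

  ```
  CREATE DATAMAP agg_day ON TABLE sales
  USING 'timeseries'
  DMPROPERTIES ('event_time' = 'order_time', 'day_granularity' = '1')
  AS SELECT order_time, country, SUM(quantity)
  FROM sales GROUP BY order_time, country

  -- A month-level query can still be answered by rolling up the day-level datamap:
  SELECT TIMESERIES(order_time, 'month'), SUM(quantity)
  FROM sales GROUP BY TIMESERIES(order_time, 'month')
  ```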
   

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e5d9802a/docs/faq.md
----------------------------------------------------------------------
diff --git a/docs/faq.md b/docs/faq.md
index baa46cc..8f04e4f 100644
--- a/docs/faq.md
+++ b/docs/faq.md
@@ -80,7 +80,7 @@ In order to build CarbonData project it is necessary to specify the spark profile
 
 ## How will Carbon behave when an insert operation is executed in abnormal scenarios?
 Carbon supports the insert operation; you can refer to the syntax mentioned in [DML Operations on CarbonData](dml-operation-on-carbondata.md).
-First, create a soucre table in spark-sql and load data into this created table.
+First, create a source table in spark-sql and load data into this created table.
 
 ```
 CREATE TABLE source_table(
@@ -124,7 +124,7 @@ id  city    name
 
 As the result shows, the second column is city in the carbon table, but what is inside is name, such as jack. This phenomenon is the same as inserting data into a hive table.
 
-If you want to insert data into corresponding column in carbon table, you have to specify the column order same in insert statment. 
+If you want to insert data into the corresponding columns in the carbon table, you have to specify the same column order in the insert statement. 
 
 ```
 INSERT INTO TABLE carbon_table SELECT id, city, name FROM source_table;
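-- The select list above (id, city, name) is reordered to match carbon_table's
-- column order. For contrast, a sketch of the mismatch described earlier,
-- reconstructed from this FAQ's narrative (not shown verbatim in this excerpt):
-- INSERT INTO TABLE carbon_table SELECT id, name, city FROM source_table;
-- here the name values would land in carbon_table's second column, city.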

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e5d9802a/docs/troubleshooting.md
----------------------------------------------------------------------
diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md
index 68dd538..0156121 100644
--- a/docs/troubleshooting.md
+++ b/docs/troubleshooting.md
@@ -177,7 +177,7 @@ Note :  Refrain from using "mvn clean package" without specifying the profile.
  Data loading fails with the following exception:
 
    ```
-   Data Load failure exeception
+   Data Load failure exception
    ```
 
   **Possible Cause**
@@ -208,7 +208,7 @@ Note :  Refrain from using "mvn clean package" without specifying the profile.
  Insertion fails with the following exception:
 
    ```
-   Data Load failure exeception
+   Data Load failure exception
    ```
 
   **Possible Cause**

http://git-wip-us.apache.org/repos/asf/carbondata/blob/e5d9802a/docs/useful-tips-on-carbondata.md
----------------------------------------------------------------------
diff --git a/docs/useful-tips-on-carbondata.md b/docs/useful-tips-on-carbondata.md
index aaf6460..4d43003 100644
--- a/docs/useful-tips-on-carbondata.md
+++ b/docs/useful-tips-on-carbondata.md
@@ -138,7 +138,7 @@
  |carbon.number.of.cores.while.loading|Default: 2. This value should be >= 2|Specifies the number of cores used for data processing during data loading in CarbonData.|
  |carbon.sort.size|Default: 100000. The value should be >= 100.|Threshold to write local file in sort step when loading data|
  |carbon.sort.file.write.buffer.size|Default: 50000.|DataOutputStream buffer.|
-  |carbon.number.of.cores.block.sort|Default: 7 | If you have huge memory and cpus, increase it as you will|
+  |carbon.number.of.cores.block.sort|Default: 7 | If you have huge memory and CPUs, increase this value as needed|
  |carbon.merge.sort.reader.thread|Default: 3 |Specifies the number of cores used for temp file merging during data loading in CarbonData.|
  |carbon.merge.sort.prefetch|Default: true | You may want to set this value to false if you do not have enough memory|
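  As a rough sketch, these keys are typically tuned in carbon.properties (the key names come from the table above; the values shown are the listed defaults, not recommendations):

  ```
  carbon.number.of.cores.while.loading=2
  carbon.sort.size=100000
  carbon.sort.file.write.buffer.size=50000
  carbon.number.of.cores.block.sort=7
  carbon.merge.sort.reader.thread=3
  carbon.merge.sort.prefetch=true
  ```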
 

