carbondata-issues mailing list archives

From "kumar vishal (JIRA)" <j...@apache.org>
Subject [jira] [Assigned] (CARBONDATA-1777) Carbon1.3.0-Pre-AggregateTable - Pre-aggregate tables created in Spark-shell sessions are not used in the beeline session
Date Wed, 20 Dec 2017 08:45:00 GMT

     [ https://issues.apache.org/jira/browse/CARBONDATA-1777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

kumar vishal reassigned CARBONDATA-1777:
----------------------------------------

    Assignee: Kunal Kapoor  (was: kumar vishal)

> Carbon1.3.0-Pre-AggregateTable - Pre-aggregate tables created in Spark-shell sessions are not used in the beeline session
> -------------------------------------------------------------------------------------------------------------------------
>
>                 Key: CARBONDATA-1777
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-1777
>             Project: CarbonData
>          Issue Type: Bug
>          Components: data-load
>    Affects Versions: 1.3.0
>         Environment: Test - 3 node ant cluster
>            Reporter: Ramakrishna S
>            Assignee: Kunal Kapoor
>            Priority: Minor
>              Labels: DFX
>             Fix For: 1.3.0
>
>
> Steps:
> Beeline:
> 1. Create table and load data
> Spark-shell:
> 1. create a pre-aggregate table
> Beeline:
> 1. Run aggregate query
> *+Expected:+* Pre-aggregate table should be used in the aggregate query 
> *+Actual:+* Pre-aggregate table is not used
> 1.
> create table if not exists lineitem1(L_SHIPDATE string,L_SHIPMODE string,L_SHIPINSTRUCT string,L_RETURNFLAG string,L_RECEIPTDATE string,L_ORDERKEY string,L_PARTKEY string,L_SUPPKEY string,L_LINENUMBER int,L_QUANTITY double,L_EXTENDEDPRICE double,L_DISCOUNT double,L_TAX double,L_LINESTATUS string,L_COMMITDATE string,L_COMMENT string) STORED BY 'org.apache.carbondata.format' TBLPROPERTIES ('table_blocksize'='128','NO_INVERTED_INDEX'='L_SHIPDATE,L_SHIPMODE,L_SHIPINSTRUCT,L_RETURNFLAG,L_RECEIPTDATE,L_ORDERKEY,L_PARTKEY,L_SUPPKEY','sort_columns'='');
> load data inpath "hdfs://hacluster/user/test/lineitem.tbl.5" into table lineitem1 options('DELIMITER'='|','FILEHEADER'='L_ORDERKEY,L_PARTKEY,L_SUPPKEY,L_LINENUMBER,L_QUANTITY,L_EXTENDEDPRICE,L_DISCOUNT,L_TAX,L_RETURNFLAG,L_LINESTATUS,L_SHIPDATE,L_COMMITDATE,L_RECEIPTDATE,L_SHIPINSTRUCT,L_SHIPMODE,L_COMMENT');
> 2. 
>  carbon.sql("create datamap agr1_lineitem1 ON TABLE lineitem1 USING 'org.apache.carbondata.datamap.AggregateDataMapHandler' as select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem1 group by l_returnflag, l_linestatus").show();
> 3. 
> select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity) from lineitem1 where l_returnflag = 'R' group by l_returnflag, l_linestatus;
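> A sketch of how the rewrite could be checked from the spark-shell side (assumes the same `carbon` CarbonSession used above; the exact plan text varies by CarbonData build):
>
> // Sketch only, not part of the original repro. If the pre-aggregate
> // rewrite fires, the plan should reference the child table
> // lineitem1_agr1_lineitem1 rather than scanning lineitem1 directly.
> carbon.sql(
>   "explain select l_returnflag, l_linestatus, sum(l_quantity), " +
>   "avg(l_quantity), count(l_quantity) from lineitem1 " +
>   "where l_returnflag = 'R' group by l_returnflag, l_linestatus"
> ).show(false)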
> Actual:
> 0: jdbc:hive2://10.18.98.136:23040> show tables;
> +-----------+---------------------------+--------------+--+
> | database  |         tableName         | isTemporary  |
> +-----------+---------------------------+--------------+--+
> | test_db2  | lineitem1                 | false        |
> | test_db2  | lineitem1_agr1_lineitem1  | false        |
> +-----------+---------------------------+--------------+--+
> 2 rows selected (0.047 seconds)
> Logs:
> 2017-11-20 15:46:48,314 | INFO  | [pool-23-thread-53] | Running query 'select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity)
from lineitem1 where l_returnflag = 'R' group by l_returnflag, l_linestatus' with 7f3091a8-4d7b-40ac-840f-9db6f564c9cf
| org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,314 | INFO  | [pool-23-thread-53] | Parsing command: select l_returnflag,l_linestatus,sum(l_quantity),avg(l_quantity),count(l_quantity)
from lineitem1 where l_returnflag = 'R' group by l_returnflag, l_linestatus | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,353 | INFO  | [pool-23-thread-53] | 55: get_table : db=test_db2 tbl=lineitem1
| org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
> 2017-11-20 15:46:48,353 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr
cmd=get_table : db=test_db2 tbl=lineitem1	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
> 2017-11-20 15:46:48,354 | INFO  | [pool-23-thread-53] | 55: Opening raw store with implemenation
class:org.apache.hadoop.hive.metastore.ObjectStore | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:589)
> 2017-11-20 15:46:48,355 | INFO  | [pool-23-thread-53] | ObjectStore, initialize called
| org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:289)
> 2017-11-20 15:46:48,360 | INFO  | [pool-23-thread-53] | Reading in results for query
"org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing | org.datanucleus.util.Log4JLogger.info(Log4JLogger.java:77)
> 2017-11-20 15:46:48,362 | INFO  | [pool-23-thread-53] | Using direct SQL, underlying
DB is MYSQL | org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:139)
> 2017-11-20 15:46:48,362 | INFO  | [pool-23-thread-53] | Initialized ObjectStore | org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:272)
> 2017-11-20 15:46:48,376 | INFO  | [pool-23-thread-53] | Parsing command: array<string>
| org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,399 | INFO  | [pool-23-thread-53] | Schema changes have been detected
for table: `lineitem1` | org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,399 | INFO  | [pool-23-thread-53] | 55: get_table : db=test_db2 tbl=lineitem1
| org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
> 2017-11-20 15:46:48,400 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr
cmd=get_table : db=test_db2 tbl=lineitem1	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
> 2017-11-20 15:46:48,413 | INFO  | [pool-23-thread-53] | Parsing command: array<string>
| org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,415 | INFO  | [pool-23-thread-53] | 55: get_table : db=test_db2 tbl=lineitem1
| org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
> 2017-11-20 15:46:48,415 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr
cmd=get_table : db=test_db2 tbl=lineitem1	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
> 2017-11-20 15:46:48,428 | INFO  | [pool-23-thread-53] | Parsing command: array<string>
| org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,431 | INFO  | [pool-23-thread-53] | 55: get_database: test_db2 |
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
> 2017-11-20 15:46:48,431 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr
cmd=get_database: test_db2	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
> 2017-11-20 15:46:48,434 | INFO  | [pool-23-thread-53] | 55: get_database: test_db2 |
org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
> 2017-11-20 15:46:48,434 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr
cmd=get_database: test_db2	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
> 2017-11-20 15:46:48,437 | INFO  | [pool-23-thread-53] | 55: get_tables: db=test_db2 pat=*
| org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logInfo(HiveMetaStore.java:746)
> 2017-11-20 15:46:48,437 | INFO  | [pool-23-thread-53] | ugi=anonymous	ip=unknown-ip-addr
cmd=get_tables: db=test_db2 pat=*	 | org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.logAuditEvent(HiveMetaStore.java:371)
> 2017-11-20 15:46:48,522 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Starting to
optimize plan | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
> 2017-11-20 15:46:48,536 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Skip CarbonOptimizer
| org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
> 2017-11-20 15:46:48,679 | INFO  | [pool-23-thread-53] | Code generated in 41.000919 ms
| org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,766 | INFO  | [pool-23-thread-53] | Code generated in 61.651832 ms
| org.apache.spark.internal.Logging$class.logInfo(Logging.scala:54)
> 2017-11-20 15:46:48,821 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Table block
size not specified for test_db2_lineitem1. Therefore considering the default value 1024 MB
| org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
> 2017-11-20 15:46:48,872 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Time taken to load blocklet datamap from file : hdfs://hacluster/user/test2/lineitem1/Fact/Part0/Segment_0/1_batchno0-0-1511163544085.carbonindex is 2 | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
> 2017-11-20 15:46:48,873 | INFO  | [pool-23-thread-53] | pool-23-thread-53 Time taken to load blocklet datamap from file : hdfs://hacluster/user/test2/lineitem1/Fact/Part0/Segment_0/0_batchno0-0-1511163544085.carbonindex is 1 | org.apache.carbondata.common.logging.impl.StandardLogService.logInfoMessage(StandardLogService.java:150)
> 2017-11-20 15:46:48,884 | INFO  | [pool-23-thread-53] | 
>  Identified no.of.blocks: 2,
>  no.of.tasks: 2,
>  no.of.nodes: 0,
>  parallelism: 2



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
