carbondata-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (CARBONDATA-272) Two test cases are failing on second Maven build without 'clean'
Date Sun, 25 Sep 2016 04:50:20 GMT

    [ https://issues.apache.org/jira/browse/CARBONDATA-272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15520186#comment-15520186
] 

ASF GitHub Bot commented on CARBONDATA-272:
-------------------------------------------

GitHub user vinodkc opened a pull request:

    https://github.com/apache/incubator-carbondata/pull/197

    [CARBONDATA-272] Fixed test case failure on second mvn build

    Currently, the test cases pass only when 'clean' is used with mvn.
    This is due to improper table drops in the test cases AllDataTypesTestCaseAggregate and NO_DICTIONARY_COL_TestCase.

    During development, running tests with 'clean' makes the build take longer, so it is better
    to ensure the tables are dropped properly.
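
    The failure mode described above (a table left behind by one run making the
    CREATE TABLE in the next run abort) can be sketched with sqlite3 standing in
    for the persistent metastore. This is only an illustrative analogy, not the
    actual CarbonData fix, which lives in the Scala test suites; the function
    names here are hypothetical.

```python
import sqlite3

# Stands in for the metastore that persists between Maven builds.
catalog = sqlite3.connect(":memory:")

def run_suite_without_cleanup():
    # Mirrors the buggy behaviour: CREATE TABLE with no matching DROP,
    # so the table survives into the next "build".
    catalog.execute("CREATE TABLE alldatatypestableagg (id INTEGER)")

def run_suite_with_cleanup():
    # The fix pattern: drop the table idempotently around the suite, so a
    # second build without 'clean' never hits "table already exists".
    catalog.execute("DROP TABLE IF EXISTS alldatatypestableagg")
    catalog.execute("CREATE TABLE alldatatypestableagg (id INTEGER)")
    catalog.execute("DROP TABLE IF EXISTS alldatatypestableagg")

run_suite_without_cleanup()
try:
    run_suite_without_cleanup()   # second "build": aborts, like the suites above
    second_run_failed = False
except sqlite3.OperationalError:  # "table alldatatypestableagg already exists"
    second_run_failed = True
catalog.execute("DROP TABLE alldatatypestableagg")

run_suite_with_cleanup()
run_suite_with_cleanup()          # second run now passes
```

    The same idea applies to the Spark suites: issuing DROP TABLE IF EXISTS in
    the suite setup/teardown makes the tests idempotent across builds.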


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/vinodkc/incubator-carbondata fixTestcasefailure

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/incubator-carbondata/pull/197.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #197
    
----
commit 65e4b6520b9b60058a20f4dc3be07161d71ad557
Author: vinodkc <vinod.kc.in@gmail.com>
Date:   2016-09-24T16:37:46Z

    drop table corrected

----


> Two test cases are failing on second Maven build without 'clean'
> ------------------------------------------------------------------
>
>                 Key: CARBONDATA-272
>                 URL: https://issues.apache.org/jira/browse/CARBONDATA-272
>             Project: CarbonData
>          Issue Type: Bug
>          Components: spark-integration
>            Reporter: Vinod KC
>            Priority: Trivial
>              Labels: test
>
> Two test cases fail during a second build without mvn clean.
> For example:
> 1) Run: mvn -Pspark-1.6 -Dspark.version=1.6.2 install
> 2) After a successful build, run mvn -Pspark-1.6 -Dspark.version=1.6.2 install again
> *** 2 SUITES ABORTED ***
> [INFO] ------------------------------------------------------------------------
> [INFO] Reactor Summary:
> [INFO] 
> [INFO] Apache CarbonData :: Parent ........................ SUCCESS [ 11.412 s]
> [INFO] Apache CarbonData :: Common ........................ SUCCESS [  5.585 s]
> [INFO] Apache CarbonData :: Format ........................ SUCCESS [  7.079 s]
> [INFO] Apache CarbonData :: Core .......................... SUCCESS [ 15.874 s]
> [INFO] Apache CarbonData :: Processing .................... SUCCESS [ 12.417 s]
> [INFO] Apache CarbonData :: Hadoop ........................ SUCCESS [ 17.330 s]
> [INFO] Apache CarbonData :: Spark ......................... FAILURE [07:47 min]
> [INFO] Apache CarbonData :: Assembly ...................... SKIPPED
> [INFO] Apache CarbonData :: Examples ...................... SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> The reason for the failure is that the test cases AllDataTypesTestCaseAggregate and NO_DICTIONARY_COL_TestCase do not properly drop the tables they create.
> Refer to the error log below:
> - skip auto identify high cardinality column for column group
> AllDataTypesTestCaseAggregate:
> ERROR 24-09 08:31:29,368 - Table alldatatypescubeAGG not found: default.alldatatypescubeAGG table not found
> AUDIT 24-09 08:31:29,383 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [alldatatypestableagg]
> AUDIT 24-09 08:31:29,385 - [vinod][vinod][Thread-1]Table creation with Database name [default] and Table name [alldatatypestableagg] failed. Table [alldatatypestableagg] already exists under database [default]
> ERROR 24-09 08:31:29,401 - Table Desc1 not found: default.Desc1 table not found
> ERROR 24-09 08:31:29,414 - Table Desc2 not found: default.Desc2 table not found
> AUDIT 24-09 08:31:29,422 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [desc1]
> Exception encountered when invoking run on a nested suite - Table [alldatatypestableagg] already exists under database [default] *** ABORTED ***
>   java.lang.RuntimeException: Table [alldatatypestableagg] already exists under database [default]
>   at scala.sys.package$.error(package.scala:27)
>   at org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:853)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
>   ...
> NO_DICTIONARY_COL_TestCase:
> ERROR 24-09 08:31:29,954 - Table filtertestTables not found: default.filtertestTables table not found
> AUDIT 24-09 08:31:30,041 - [vinod][vinod][Thread-1]Deleting table [no_dictionary_carbon_6] under database [default]
> AUDIT 24-09 08:31:30,115 - [vinod][vinod][Thread-1]Deleted table [no_dictionary_carbon_6] under database [default]
> AUDIT 24-09 08:31:30,122 - [vinod][vinod][Thread-1]Deleting table [no_dictionary_carbon_7] under database [default]
> AUDIT 24-09 08:31:30,191 - [vinod][vinod][Thread-1]Deleted table [no_dictionary_carbon_7] under database [default]
> AUDIT 24-09 08:31:30,454 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [no_dictionary_carbon_6]
> AUDIT 24-09 08:31:30,480 - [vinod][vinod][Thread-1]Table created with Database name [default] and Table name [no_dictionary_carbon_6]
> AUDIT 24-09 08:31:30,583 - [vinod][vinod][Thread-1]Data load request has been received for table default.no_dictionary_carbon_6
> AUDIT 24-09 08:31:30,665 - [vinod][vinod][Thread-1]Data load is successful for default.no_dictionary_carbon_6
> AUDIT 24-09 08:31:30,684 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [no_dictionary_carbon_7]
> AUDIT 24-09 08:31:30,727 - [vinod][vinod][Thread-1]Table created with Database name [default] and Table name [no_dictionary_carbon_7]
> AUDIT 24-09 08:31:30,822 - [vinod][vinod][Thread-1]Data load request has been received for table default.no_dictionary_carbon_7
> AUDIT 24-09 08:31:31,077 - [vinod][vinod][Thread-1]Data load is successful for default.no_dictionary_carbon_7
> AUDIT 24-09 08:31:31,090 - [vinod][vinod][Thread-1]Creating Table with Database name [default] and Table name [filtertesttable]
> AUDIT 24-09 08:31:31,092 - [vinod][vinod][Thread-1]Table creation with Database name [default] and Table name [filtertesttable] failed. Table [filtertesttable] already exists under database [default]
> Exception encountered when invoking run on a nested suite - Table [filtertesttable] already exists under database [default] *** ABORTED ***
>   java.lang.RuntimeException: Table [filtertesttable] already exists under database [default]
>   at scala.sys.package$.error(package.scala:27)
>   at org.apache.spark.sql.execution.command.CreateTable.run(carbonTableSchema.scala:853)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:70)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
>   at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
>   at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:55)
>   ...



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
