hbase-issues mailing list archives

From "Mike Drob (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HBASE-18175) Add hbase-spark integration test into hbase-spark-it
Date Wed, 28 Jun 2017 15:25:00 GMT

    [ https://issues.apache.org/jira/browse/HBASE-18175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16066693#comment-16066693 ]

Mike Drob commented on HBASE-18175:
-----------------------------------

QA Test failure is unrelated.

Verified IT locally with {{mvn clean verify -Dtest=none -Dit.test=IntegrationTestSparkBulkLoad -pl hbase-spark/hbase-spark-it -am -DwildcardSuites=none}}.
Got this failure in stdout:

{noformat}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 168.136 sec <<< FAILURE!
- in org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad
testBulkLoad(org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad)  Time elapsed: 168.006
sec  <<< ERROR!
org.apache.spark.SparkException: Only one SparkContext may be running in this JVM (see SPARK-2243).
To ignore this error, set spark.driver.allowMultipleContexts = true. The currently running
SparkContext was created at:
org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.runCheck(IntegrationTestSparkBulkLoad.java:280)
org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.runCheckWithRetry(IntegrationTestSparkBulkLoad.java:309)
org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.testBulkLoad(IntegrationTestSparkBulkLoad.java:523)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
	at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:2257)
	at org.apache.spark.SparkContext$$anonfun$assertNoOtherContextIsRunning$1.apply(SparkContext.scala:2239)
	at scala.Option.foreach(Option.scala:236)
	at org.apache.spark.SparkContext$.assertNoOtherContextIsRunning(SparkContext.scala:2239)
	at org.apache.spark.SparkContext$.setActiveContext(SparkContext.scala:2325)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:2197)
	at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
	at org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.runCheck(IntegrationTestSparkBulkLoad.java:280)
	at org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.runCheckWithRetry(IntegrationTestSparkBulkLoad.java:313)
	at org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.testBulkLoad(IntegrationTestSparkBulkLoad.java:523)


Results :

Tests in error: 
  IntegrationTestSparkBulkLoad.testBulkLoad:523->runCheckWithRetry:313->runCheck:280
» Spark
{noformat}
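
If the retry ends up creating a second {{JavaSparkContext}} while the first is still registered, one way around it (a minimal sketch only; {{SparkContextHolder}} and its methods are hypothetical and not part of the patch) is to funnel all context creation through a single holder and stop the old context before retrying, rather than setting {{spark.driver.allowMultipleContexts}}:

{code:java}
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

// Sketch: keep at most one SparkContext alive in the JVM (see SPARK-2243).
public final class SparkContextHolder {
  private static JavaSparkContext jsc;

  private SparkContextHolder() {}

  public static synchronized JavaSparkContext getOrCreate(SparkConf conf) {
    if (jsc == null) {
      jsc = new JavaSparkContext(conf);
    }
    return jsc;
  }

  public static synchronized void close() {
    if (jsc != null) {
      jsc.stop();  // unregisters the active context so a new one may be created
      jsc = null;
    }
  }
}
{code}

With something like that, {{runCheckWithRetry}} could call {{SparkContextHolder.close()}} between attempts instead of constructing a fresh context each time.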

I think this was actually caused by the following failure in the logs:

{noformat}
2017-06-28 10:10:04,275 WARN  [main] spark.IntegrationTestSparkBulkLoad (IntegrationTestSparkBulkLoad.java:runCheckWithRetry(311))
- Received java.lang.NullPointerException
	at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad$LinkKey.compareTo(IntegrationTestBulkLoad.java:468)
	at org.apache.hadoop.hbase.mapreduce.IntegrationTestBulkLoad$LinkKey.compareTo(IntegrationTestBulkLoad.java:445)
	at org.spark-project.guava.collect.NaturalOrdering.compare(NaturalOrdering.java:41)
	at org.spark-project.guava.collect.NaturalOrdering.compare(NaturalOrdering.java:28)
	at scala.math.LowPriorityOrderingImplicits$$anon$7.compare(Ordering.scala:153)
	at scala.math.Ordering$$anon$5.compare(Ordering.scala:122)
	at java.util.TimSort.countRunAndMakeAscending(TimSort.java:355)
	at java.util.TimSort.sort(TimSort.java:234)
	at java.util.Arrays.sort(Arrays.java:1438)
	at scala.collection.SeqLike$class.sorted(SeqLike.scala:615)
	at scala.collection.AbstractSeq.sorted(Seq.scala:40)
	at scala.collection.SeqLike$class.sortBy(SeqLike.scala:594)
	at scala.collection.AbstractSeq.sortBy(Seq.scala:40)
	at org.apache.spark.RangePartitioner$.determineBounds(Partitioner.scala:281)
	at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:154)
	at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:62)
	at org.apache.spark.rdd.OrderedRDDFunctions$$anonfun$sortByKey$1.apply(OrderedRDDFunctions.scala:61)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
	at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
	at org.apache.spark.rdd.OrderedRDDFunctions.sortByKey(OrderedRDDFunctions.scala:61)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:902)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:872)
	at org.apache.spark.api.java.JavaPairRDD.sortByKey(JavaPairRDD.scala:862)
	at org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.runCheck(IntegrationTestSparkBulkLoad.java:300)
	at org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad.runCheckWithRetry(IntegrationTestSparkBulkLoad.java:309)
{noformat}
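
The NPE in {{LinkKey.compareTo}} looks like a field being dereferenced when it was never set (or was left null by Spark serialization). A null-tolerant comparison would sidestep it; here is a rough sketch (the field names {{chainId}} and {{order}} are assumptions, not necessarily the real {{LinkKey}} layout):

{code:java}
import java.util.Comparator;

// Sketch of a null-safe compareTo for a key with two Long components.
public class LinkKey implements Comparable<LinkKey> {
  private Long chainId;
  private Long order;

  @Override
  public int compareTo(LinkKey other) {
    // nullsFirst gives nulls a stable sort position instead of throwing
    // an NPE when a component was never populated.
    Comparator<Long> byLong = Comparator.nullsFirst(Comparator.naturalOrder());
    int cmp = byLong.compare(chainId, other.chainId);
    return cmp != 0 ? cmp : byLong.compare(order, other.order);
  }
}
{code}

Of course, if the field should never be null in the first place, the real fix is upstream in how the keys get built and deserialized.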

> Add hbase-spark integration test into hbase-spark-it
> ----------------------------------------------------
>
>                 Key: HBASE-18175
>                 URL: https://issues.apache.org/jira/browse/HBASE-18175
>             Project: HBase
>          Issue Type: Test
>          Components: spark
>            Reporter: Yi Liang
>            Assignee: Yi Liang
>            Priority: Critical
>             Fix For: 2.0.0
>
>         Attachments: hbase-18175-master-v2.patch, hbase-18175-master-v3.patch, hbase-18175-master-v4.patch, hbase-18175-v1.patch
>
>
> After HBASE-17574, all tests under hbase-spark are regarded as unit tests; this JIRA adds an hbase-spark integration test into hbase-it. The patch runs the same tests as mapreduce.IntegrationTestBulkLoad, just changing mapreduce to spark.
> test in Maven:
> mvn verify -Dit.test=IntegrationTestSparkBulkLoad
> test on cluster:
> spark-submit --class org.apache.hadoop.hbase.spark.IntegrationTestSparkBulkLoad HBASE_HOME/lib/hbase-it-2.0.0-SNAPSHOT-tests.jar -Dhbase.spark.bulkload.chainlength=500000 -m slowDeterministic



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
