Date: Tue, 3 Jan 2017 16:15:18 +0000 (UTC)
From: Apache Jenkins Server
To: commits@phoenix.apache.org, tdsilva@apache.org, elserj@apache.org, jmahonin@interset.com, ankitsinghal59@gmail.com
Subject: Build failed in Jenkins: Phoenix | Master #1529
X-Jenkins-Job: Phoenix-master
X-Jenkins-Result: FAILURE

See Changes:

[jmahonin] PHOENIX-3333 Support Spark 2.0
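For context on the module that fails below: PHOENIX-3333 touches the phoenix-spark connector, which lets Spark read and write Phoenix tables. The following is a minimal, hedged sketch of typical connector usage under Spark 2.0; the application name, table name, and ZooKeeper URL are placeholders and are not taken from this build.

    // Illustrative sketch only; "TABLE1" and the ZooKeeper URL are placeholders.
    import org.apache.spark.sql.SparkSession

    object PhoenixSparkReadSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("phoenix-spark-read-sketch")   // placeholder app name
          .master("local[*]")
          .getOrCreate()

        // Load a Phoenix table through the connector's DataSource name.
        val df = spark.read
          .format("org.apache.phoenix.spark")
          .option("table", "TABLE1")              // placeholder table
          .option("zkUrl", "localhost:2181")      // placeholder ZooKeeper quorum
          .load()

        df.show()
        spark.stop()
      }
    }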
------------------------------------------
[...truncated 18357 lines...]
  at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:591)
  at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
  at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
  at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
  at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
  ...
  Cause: java.io.IOException: Failed to save in any storage directories while saving namespace.
  at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1176)
  at org.apache.hadoop.hdfs.server.namenode.FSImage.saveFSImageInAllDirs(FSImage.java:1133)
  at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:163)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:991)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:342)
  at org.apache.hadoop.hdfs.DFSTestUtil.formatNameNode(DFSTestUtil.java:176)
  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:973)
  at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
  at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
  at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:585)
  ...
990 [ScalaTest-3] ERROR org.apache.hadoop.hdfs.server.namenode.FSImage - Failed to load image from FSImageFile(file= cpktTxId=0000000000000000000)
java.io.IOException: No MD5 file found corresponding to image file
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:940)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:740)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:676)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:976)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1111)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:585)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:982)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:863)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:845)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:832)
    at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:588)
    at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
    at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
    at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
    at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
    at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
    at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
    at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
    at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.run(AbstractPhoenixSparkIT.scala:44)
    at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
1100 [ScalaTest-3] ERROR org.apache.hadoop.hdfs.server.namenode.FSImage - Failed to load image from FSImageFile(file= cpktTxId=0000000000000000000)
java.io.IOException: No MD5 file found corresponding to image file
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:940)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImageFile(FSImage.java:740)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:676)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:976)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1111)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:585)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:982)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:863)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:845)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:832)
    at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:588)
    at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
    at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
    at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
    at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
    at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
    at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
    at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
    at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.run(AbstractPhoenixSparkIT.scala:44)
    at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
1211 [ScalaTest-3] ERROR org.apache.hadoop.hdfs.MiniDFSCluster - IOE creating namenodes. Permissions dump:
path ':
    absolute:
    permissions: ----
path ':
    absolute:
    permissions: drwx
path ':
    absolute:
    permissions: drwx
path ':
    absolute:
    permissions: drwx
path ':
    absolute:
    permissions: drwx
path ':
    absolute:
    permissions: drwx
path ':
    absolute:
    permissions: drwx
path '/home/jenkins/jenkins-slave/workspace':
    absolute:/home/jenkins/jenkins-slave/workspace
    permissions: drwx
path '/home/jenkins/jenkins-slave':
    absolute:/home/jenkins/jenkins-slave
    permissions: drwx
path '/home/jenkins':
    absolute:/home/jenkins
    permissions: drwx
path '/home':
    absolute:/home
    permissions: dr-x
path '/':
    absolute:/
    permissions: dr-x
java.io.IOException: Failed to load an FSImage file!
    at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:687)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:976)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1111)
    at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:982)
    at org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:811)
    at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:742)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniDFSCluster(HBaseTestingUtility.java:585)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:982)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:863)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:845)
    at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:832)
    at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:588)
    at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
    at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
    at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
    at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
    at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
    at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
    at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
    at org.scalatest.BeforeAndAfterAll$class.run(BeforeAndAfterAll.scala:253)
    at org.apache.phoenix.spark.AbstractPhoenixSparkIT.run(AbstractPhoenixSparkIT.scala:44)
    at org.scalatest.tools.SuiteRunner.run(SuiteRunner.scala:55)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Exception encountered when invoking run on a nested suite - java.io.IOException: Failed to load an FSImage file! *** ABORTED ***
  java.lang.RuntimeException: java.io.IOException: Failed to load an FSImage file!
  at org.apache.phoenix.query.BaseTest.initMiniCluster(BaseTest.java:591)
  at org.apache.phoenix.query.BaseTest.setUpTestCluster(BaseTest.java:509)
  at org.apache.phoenix.query.BaseTest.checkClusterInitialized(BaseTest.java:483)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:561)
  at org.apache.phoenix.query.BaseTest.setUpTestDriver(BaseTest.java:557)
  at org.apache.phoenix.end2end.BaseHBaseManagedTimeIT.doSetup(BaseHBaseManagedTimeIT.java:57)
  at org.apache.phoenix.spark.PhoenixSparkITHelper$.doSetup(AbstractPhoenixSparkIT.scala:33)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:88)
  at org.scalatest.BeforeAndAfterAll$class.beforeAll(BeforeAndAfterAll.scala:187)
  at org.apache.phoenix.spark.AbstractPhoenixSparkIT.beforeAll(AbstractPhoenixSparkIT.scala:44)
  ...
  Cause: java.io.IOException: Failed to load an FSImage file!
  at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:687)
  at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:976)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:681)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:644)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:811)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:795)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1488)
  at org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1111)
  ...
Run completed in 3 seconds, 806 milliseconds.
Total number of tests run: 0
Suites: completed 2, aborted 2
Tests: succeeded 0, failed 0, canceled 0, ignored 0, pending 0
*** 2 SUITES ABORTED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Apache Phoenix ..................................... SUCCESS [  2.328 s]
[INFO] Phoenix Core ....................................... SUCCESS [51:28 min]
[INFO] Phoenix - Flume .................................... SUCCESS [01:23 min]
[INFO] Phoenix - Pig ...................................... SUCCESS [03:49 min]
[INFO] Phoenix Query Server Client ........................ SUCCESS [ 13.804 s]
[INFO] Phoenix Query Server ............................... SUCCESS [01:45 min]
[INFO] Phoenix - Pherf .................................... SUCCESS [01:41 min]
[INFO] Phoenix - Spark .................................... FAILURE [ 55.383 s]
[INFO] Phoenix - Hive ..................................... SKIPPED
[INFO] Phoenix Client ..................................... SKIPPED
[INFO] Phoenix Server ..................................... SKIPPED
[INFO] Phoenix Assembly ................................... SKIPPED
[INFO] Phoenix - Tracing Web Application .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 01:01 h
[INFO] Finished at: 2017-01-03T16:09:20+00:00
[INFO] Final Memory: 106M/1208M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (integration-test) on project phoenix-spark: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :phoenix-spark
Build step 'Invoke top-level Maven targets' marked build as failure
Archiving artifacts
Compressed 1.57 GB of artifacts by 64.4% relative to #1528
Updating PHOENIX-3333
Recording test results
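For reference, the two aborted Spark suites never reach a test: the stack traces above show AbstractPhoenixSparkIT.beforeAll driving BaseTest.setUpTestDriver, which starts an HBase/HDFS mini cluster through HBaseTestingUtility, and the embedded NameNode cannot save or reload its FSImage under the workspace (note the drwx-only permissions dump). The following is a stripped-down sketch of that setup shape in ScalaTest terms; it mirrors the call chain in the traces but is illustrative, not the actual Phoenix test code.

    // Illustrative sketch of the beforeAll/afterAll mini-cluster lifecycle
    // the aborted suites rely on; not the actual Phoenix test code.
    import org.apache.hadoop.hbase.HBaseTestingUtility
    import org.scalatest.{BeforeAndAfterAll, FunSuite}

    class MiniClusterLifecycleSketchIT extends FunSuite with BeforeAndAfterAll {
      private val hbaseTestUtil = new HBaseTestingUtility()

      override def beforeAll(): Unit = {
        // Starts a MiniDFSCluster plus an HBase mini cluster; this is the step
        // that aborts in the log above when the NameNode cannot write its
        // FSImage under the Jenkins workspace.
        hbaseTestUtil.startMiniCluster()
      }

      override def afterAll(): Unit = {
        hbaseTestUtil.shutdownMiniCluster()
      }

      test("mini cluster is available to the suite") {
        assert(hbaseTestUtil.getHBaseCluster != null)
      }
    }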