mahout-dev mailing list archives

From Apache Jenkins Server <jenk...@builds.apache.org>
Subject Build failed in Jenkins: Mahout-Quality #2965
Date Thu, 19 Feb 2015 17:35:17 GMT
See <https://builds.apache.org/job/Mahout-Quality/2965/>

------------------------------------------
[...truncated 6255 lines...]
  2  =>	{0:0.15935325307337903,1:0.07468219465060774,2:-0.37963073350622206}
}
ALS factorized approximation block:
{
  0  =>	{0:0.3947518179224231,1:-0.08695389395155544,2:-1.0574494478839265}
  1  =>	{0:0.4076399435035176,1:0.013566854170126305,2:-0.6050777716454986}
  2  =>	{0:0.15934618500050707,1:0.0746779194686282,2:-0.3796636071626735}
}
norm of residuals 0.009174
train iteration rmses: List(1.7992630164822846E-7, 1.308589600373296E-7, 1.1214324598457284E-7, 1.6832499777966416E-7)
- dals
DrmLikeOpsSuite:
{
  0  =>	{0:2.0,1:3.0,2:4.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
  2  =>	{0:4.0,1:5.0,2:6.0}
  3  =>	{0:5.0,1:6.0,2:7.0}
}
- mapBlock
{
  0  =>	{0:2.0,1:3.0}
  1  =>	{0:3.0,1:4.0}
  2  =>	{0:4.0,1:5.0}
  3  =>	{0:5.0,1:6.0}
}
- col range
{
  0  =>	{0:2.0,1:3.0,2:4.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
}
- row range
{
  0  =>	{0:3.0,1:4.0}
  1  =>	{0:4.0,1:5.0}
}
- col, row range
- exact, min and auto ||
NBSparkTestSuite:
- Simple Standard NB Model
- NB Aggregator
- Model DFS Serialization
- Spark NB Aggregator
ItemSimilarityDriverSuite:
- ItemSimilarityDriver, non-full-spec CSV
- ItemSimilarityDriver TSV 
- ItemSimilarityDriver log-ish files
- ItemSimilarityDriver legacy supported file format
- ItemSimilarityDriver write search engine output
- ItemSimilarityDriver recursive file discovery using filename patterns
- ItemSimilarityDriver, two input paths
- ItemSimilarityDriver, two inputs of different dimensions
- ItemSimilarityDriver cross similarity two separate items spaces
- A.t %*% B after changing row cardinality of A
- Changing row cardinality of an IndexedDataset
- ItemSimilarityDriver cross similarity two separate items spaces, missing rows in B
BlasSuite:
AB' num partitions = 2.
{
  2  =>	{0:50.0,1:74.0}
  1  =>	{0:38.0,1:56.0}
  0  =>	{0:26.0,1:38.0}
}
- ABt
- A * B Hadamard
- A + B Elementwise
- A - B Elementwise
- A / B Elementwise
{
  0  =>	{0:5.0,1:8.0}
  1  =>	{0:8.0,1:13.0}
}
{
  0  =>	{0:5.0,1:8.0}
  1  =>	{0:8.0,1:13.0}
}
- AtA slim
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{0:2.0,1:3.0,2:4.0}
  2  =>	{0:3.0,1:4.0,2:5.0}
}
- At
SimilarityAnalysisSuite:
- cooccurrence [A'A], [B'A] boolbean data using LLR
- cooccurrence [A'A], [B'A] double data using LLR
- cooccurrence [A'A], [B'A] integer data using LLR
- cooccurrence two matrices with different number of columns
- LLR calc
- downsampling by number per row
log4j:WARN No appenders could be found for logger (org.apache.spark.storage.BlockManager).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
ClassifierStatsSparkTestSuite:
- testFullRunningAverageAndStdDev
- testBigFullRunningAverageAndStdDev
- testStddevFullRunningAverageAndStdDev
- testFullRunningAverage
- testFullRunningAveragCopyConstructor
- testInvertedRunningAverage
- testInvertedRunningAverageAndStdDev
- testBuild
- GetMatrix
- testPrecisionRecallAndF1ScoreAsScikitLearn
RLikeDrmOpsSuite:
- A.t
{
  1  =>	{0:25.0,1:39.0}
  0  =>	{0:11.0,1:17.0}
}
{
  1  =>	{0:25.0,1:39.0}
  0  =>	{0:11.0,1:17.0}
}
- C = A %*% B
{
  0  =>	{0:11.0,1:17.0}
  1  =>	{0:25.0,1:39.0}
}
{
  0  =>	{0:11.0,1:17.0}
  1  =>	{0:25.0,1:39.0}
}
Q=
{
  0  =>	{0:0.40273861426601687,1:-0.9153150324187648}
  1  =>	{0:0.9153150324227656,1:0.40273861426427493}
}
- C = A %*% B mapBlock {}
- C = A %*% B incompatible B keys
- Spark-specific C = At %*% B , join
- C = At %*% B , join, String-keyed
- C = At %*% B , zippable, String-keyed
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = A %*% inCoreB
{
  0  =>	{0:26.0,1:35.0,2:46.0,3:51.0}
  1  =>	{0:50.0,1:69.0,2:92.0,3:105.0}
  2  =>	{0:62.0,1:86.0,2:115.0,3:132.0}
  3  =>	{0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = inCoreA %*%: B
- C = A.t %*% A
- C = A.t %*% A fat non-graph
- C = A.t %*% A non-int key
- C = A + B
A=
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{0:3.0,1:4.0,2:5.0}
  2  =>	{0:5.0,1:6.0,2:7.0}
}
B=
{
  0  =>	{0:0.7055659047464417,1:0.1473878001035579,2:0.0046582646756648804}
  1  =>	{0:0.8011606025805866,1:0.898829969053996,2:0.3463798557503821}
  2  =>	{0:0.36637956234231994,1:0.45094945513528684,2:0.19896459714711479}
}
C=
{
  0  =>	{0:1.7055659047464418,1:2.147387800103558,2:3.004658264675665}
  1  =>	{0:3.8011606025805866,1:4.898829969053996,2:5.346379855750382}
  2  =>	{0:5.36637956234232,1:6.450949455135287,2:7.198964597147115}
}
- C = A + B, identically partitioned
- C = A + B side test 1
- C = A + B side test 2
- C = A + B side test 3
- Ax
- A'x
- colSums, colMeans
- rowSums, rowMeans
- A.diagv
- numNonZeroElementsPerColumn
- C = A cbind B, cogroup
- C = A cbind B, zip
- B = A + 1.0
- C = A rbind B
- C = A rbind B, with empty
- scalarOps
- C = A + B missing rows
- C = cbind(A, B) with missing rows
collected A = 
{
  0  =>	{0:1.0,1:2.0,2:3.0}
  1  =>	{}
  2  =>	{}
  3  =>	{0:3.0,1:4.0,2:5.0}
}
collected B = 
{
  2  =>	{0:1.0,1:1.0,2:1.0}
  1  =>	{0:1.0,1:1.0,2:1.0}
  3  =>	{0:4.0,1:5.0,2:6.0}
  0  =>	{0:2.0,1:3.0,2:4.0}
}
- B = A + 1.0 missing rows
Run completed in 1 minute, 43 seconds.
Total number of tests run: 89
Suites: completed 12, aborted 0
Tests: succeeded 88, failed 1, canceled 0, ignored 1, pending 0
*** 1 TEST FAILED ***
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Mahout Build Tools ................................ SUCCESS [4.749s]
[INFO] Apache Mahout ..................................... SUCCESS [1.727s]
[INFO] Mahout Math ....................................... SUCCESS [2:14.471s]
[INFO] Mahout MapReduce Legacy ........................... SUCCESS [11:05.791s]
[INFO] Mahout Integration ................................ SUCCESS [1:34.856s]
[INFO] Mahout Examples ................................... SUCCESS [52.924s]
[INFO] Mahout Release Package ............................ SUCCESS [0.094s]
[INFO] Mahout Math Scala bindings ........................ SUCCESS [2:09.344s]
[INFO] Mahout Spark bindings ............................. FAILURE [2:24.977s]
[INFO] Mahout Spark bindings shell ....................... SKIPPED
[INFO] Mahout H2O backend ................................ SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 20:31.211s
[INFO] Finished at: Thu Feb 19 17:34:27 UTC 2015
[INFO] Final Memory: 86M/390M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0-M2:test (test) on project mahout-spark_2.10: There are test failures -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :mahout-spark_2.10
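The Maven hints above can be combined when reproducing this failure locally. As a hedged sketch only — the actual goals the Jenkins job passes are not shown in this log, so `clean package` below is an assumption to be replaced with the job's real goals:

```shell
# Resume the reactor from the failing module, skipping the modules
# that already built SUCCESS above (goals are an assumed placeholder):
mvn clean package -rf :mahout-spark_2.10

# Add -e for full stack traces of the scalatest failure,
# or -X for full debug logging, as the log suggests:
mvn clean package -e -rf :mahout-spark_2.10
mvn clean package -X -rf :mahout-spark_2.10
```

Running with `-rf` avoids rebuilding the nine modules that already succeeded, which took roughly 18 of the 20 minutes in this run.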
Build step 'Invoke top-level Maven targets' marked build as failure
[PMD] Skipping publisher since build result is FAILURE
[TASKS] Skipping publisher since build result is FAILURE
Archiving artifacts
Sending artifact delta relative to Mahout-Quality #2964
Archived 72 artifacts
Archive block size is 32768
Received 3655 blocks and 20758672 bytes
Compression is 85.2%
Took 13 sec
Recording test results
Publishing Javadoc
