systemml-issues mailing list archives

From "Fei Hu (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SYSTEMML-1762) Fix the matrix reshape function for the Spark mode
Date Thu, 13 Jul 2017 03:05:00 GMT

    [ https://issues.apache.org/jira/browse/SYSTEMML-1762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16084968#comment-16084968 ]

Fei Hu edited comment on SYSTEMML-1762 at 7/13/17 3:04 AM:
-----------------------------------------------------------

When setting the training parameters as follows:

{code:java}
    val N = 64
    val Nval = 64
    val Ntest = 1
    val C = 3
    val Hin = 224
    val Win = 224
    val K = 10
    val batchSize = 32
    val paralellBatches = 4
    val epochs = 1
{code}

the errors come from the dense matrix reshape, as follows:

{code:java}
17/07/12 17:20:40 INFO DAGScheduler: ShuffleMapStage 111 (flatMapToPair at MatrixReshapeSPInstruction.java:106)
failed in 0.290 s due to Job aborted due to stage failure: Task 3 in stage 111.0 failed 1
times, most recent failure: Lost task 3.0 in stage 111.0 (TID 331, localhost, executor driver):
java.lang.NullPointerException
	at org.apache.sysml.runtime.matrix.data.LibMatrixReorg.reshapeDense(LibMatrixReorg.java:1550)
	at org.apache.sysml.runtime.matrix.data.LibMatrixReorg.reshape(LibMatrixReorg.java:506)
	at org.apache.sysml.runtime.instructions.spark.MatrixReshapeSPInstruction$RDDReshapeFunction.call(MatrixReshapeSPInstruction.java:138)
	at org.apache.sysml.runtime.instructions.spark.MatrixReshapeSPInstruction$RDDReshapeFunction.call(MatrixReshapeSPInstruction.java:114)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
	at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$3$1.apply(JavaRDDLike.scala:143)
	at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
	at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
	at org.apache.spark.util.collection.ExternalSorter.insertAll(ExternalSorter.scala:191)
	at org.apache.spark.shuffle.sort.SortShuffleWriter.write(SortShuffleWriter.scala:63)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:96)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:53)
	at org.apache.spark.scheduler.Task.run(Task.scala:99)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:282)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:748)
{code}
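The {{NullPointerException}} at {{LibMatrixReorg.reshapeDense}} is consistent with a block-index lookup returning null. The following toy sketch (plain Java, not SystemML code; {{BlockIndex}} is a hypothetical stand-in for {{MatrixIndexes}}) shows how a disagreement between the computed target block index and the keys used to populate the output map yields a null block, which then triggers exactly this kind of NPE:

```java
import java.util.HashMap;
import java.util.Map;

public class ReshapeIndexMismatch {
    // Hypothetical stand-in for MatrixIndexes (1-based block row/col);
    // record gives us the equals/hashCode that HashMap lookups rely on.
    record BlockIndex(long r, long c) {}

    public static void main(String[] args) {
        int rows2 = 6, cols2 = 4;    // target shape after reshape
        int blk = 2;                 // 2x2 blocks

        // Pre-create the target blocks keyed by 1-based block indices,
        // as the reshape output map would be.
        Map<BlockIndex, double[]> out = new HashMap<>();
        for (long br = 1; br <= rows2 / blk; br++)
            for (long bc = 1; bc <= cols2 / blk; bc++)
                out.put(new BlockIndex(br, bc), new double[blk * blk]);

        // Correct lookup: 1-based block index of target cell (i2, j2).
        long i2 = 5, j2 = 1;  // 0-based target cell coordinates
        BlockIndex good = new BlockIndex(i2 / blk + 1, j2 / blk + 1);
        System.out.println("correct key found: " + (out.get(good) != null));

        // Buggy lookup (0-based, i.e. the computed index and the map's
        // keys disagree): get() returns null, and any dereference of
        // the returned block then throws a NullPointerException.
        BlockIndex bad = new BlockIndex(i2 / blk, j2 / blk);
        double[] blkData = out.get(bad);
        System.out.println("buggy key found: " + (blkData != null));
    }
}
```

This mirrors the reported cause: the index computed by {{computeResultBlockIndex}} must match the keys of {{HashMap<MatrixIndexes, MatrixBlock> rix}} exactly, or {{rix.get(ixtmp)}} returns null.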

> Fix the matrix reshape function for the Spark mode
> --------------------------------------------------
>
>                 Key: SYSTEMML-1762
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-1762
>             Project: SystemML
>          Issue Type: Bug
>          Components: Algorithms, ParFor, Runtime
>            Reporter: Fei Hu
>            Assignee: Fei Hu
>         Attachments: MNIST_Distrib_Sgd.scala
>
>
> When running the [distributed MNIST LeNet example|https://github.com/apache/systemml/blob/master/scripts/nn/examples/mnist_lenet_distrib_sgd.dml],
it works in the hybrid mode, but in the Spark mode it fails with
> {{java.lang.NullPointerException}} and {{java.lang.ArrayIndexOutOfBoundsException: 1000}}
errors when reshaping the matrix. The functions involved are {{org.apache.sysml.runtime.matrix.data.LibMatrixReorg#reshapeSparse}}
and {{org.apache.sysml.runtime.matrix.data.LibMatrixReorg#reshapeDense}}. The cause is that
the output matrix index computed by {{org.apache.sysml.runtime.matrix.data.LibMatrixReorg#computeResultBlockIndex}}
does not match the keys in the {{HashMap<MatrixIndexes, MatrixBlock> rix}}.
> To reproduce the error, the attached Scala file {{MNIST_Distrib_Sgd.scala}} can be used
to run the distributed MNIST example.
> In addition, if code is added to ignore the null output matrix block returned by {{MatrixBlock
out = rix.get(ixtmp)}}, the distributed MNIST example can run in the Spark mode, but the
result may not be correct.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
