mahout-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAHOUT-1818) dals test failing in Flink-bindings
Date Sun, 10 Apr 2016 22:59:25 GMT

    [ https://issues.apache.org/jira/browse/MAHOUT-1818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234323#comment-15234323 ]

ASF GitHub Bot commented on MAHOUT-1818:
----------------------------------------

GitHub user andrewpalumbo opened a pull request:

    https://github.com/apache/mahout/pull/218

    MAHOUT-1818: Workaround: Create a FlinkDistributedDecompositionSuite and clean up Tests…

    Creates `FlinkDistributedDecompositionSuite`, identical to `DistributedDecompositionsSuiteBase` except that its `dals` test uses a 350 x 350 matrix rather than a 500 x 500 one, due to Flink serialization issues.
    
    Also removes unneeded tests ahead of the release.
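
    A minimal sketch of the pattern, assuming ScalaTest's `FunSuite` and hypothetical names (the real suites also mix in Mahout's distributed-context traits, omitted here); this is not the PR's exact code:

    import org.scalatest.FunSuite
    import scala.util.Random

    // Shared base: the dals input size is factored out so a backend-specific
    // suite can shrink it without duplicating the test.
    trait DecompositionSuiteSketch extends FunSuite {
      def dalsMatrixSize: Int = 500 // default used by the common base

      test("dals") {
        val n = dalsMatrixSize
        val rnd = new Random(1234)
        // Stand-in body: the real test wraps an n x n matrix in a DRM and
        // runs the distributed ALS decomposition on it.
        val a = Array.fill(n, n)(rnd.nextDouble())
        assert(a.length == n && a.forall(_.length == n))
      }
    }

    // Flink-specific suite: identical tests, smaller dals input
    // (works around the MAHOUT-1818 serialization OOM).
    class FlinkDecompositionSuiteSketch extends DecompositionSuiteSketch {
      override def dalsMatrixSize: Int = 350
    }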

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/andrewpalumbo/mahout flink-tests

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/mahout/pull/218.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #218
    
----
commit 9a55b49a01c7ce0c60084d2b0ab8c5ce0ca0df8e
Author: Andrew Palumbo <apalumbo@apache.org>
Date:   2016-04-10T22:51:49Z

    (NOJIRA) Create a FlinkDistributedDecompositionSuite and clean up Tests for Flink Release

----


> dals test failing in Flink-bindings
> -----------------------------------
>
>                 Key: MAHOUT-1818
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1818
>             Project: Mahout
>          Issue Type: Bug
>          Components: Flink
>    Affects Versions: 0.11.2
>            Reporter: Andrew Palumbo
>            Assignee: Andrew Palumbo
>            Priority: Blocker
>             Fix For: 0.12.0
>
>
> The {{dals}} test fails in the Flink bindings with an OOM. Numerically the test passes when the matrix being decomposed is lowered to 50 x 50, but the default matrix size in {{DistributedDecompositionsSuiteBase}} is 500 x 500.
> {code}
> java.lang.OutOfMemoryError: Java heap space
> 	at java.util.Arrays.copyOf(Arrays.java:2271)
> 	at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
> 	at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
> 	at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
> 	at java.io.ObjectOutputStream$BlockDataOutputStream.writeBlockHeader(ObjectOutputStream.java:1893)
> 	at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1874)
> 	at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1785)
> 	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1188)
> 	at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1547)
> 	at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1508)
> 	at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1431)
> 	at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1177)
> 	at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:347)
> 	at org.apache.flink.util.InstantiationUtil.serializeObject(InstantiationUtil.java:300)
> 	at org.apache.flink.util.InstantiationUtil.writeObjectToConfig(InstantiationUtil.java:252)
> 	at org.apache.flink.runtime.operators.util.TaskConfig.setStubWrapper(TaskConfig.java:273)
> 	at org.apache.flink.optimizer.plantranslate.JobGraphGenerator.createDataSourceVertex(JobGraphGenerator.java:893)
> 	at org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:286)
> 	at org.apache.flink.optimizer.plantranslate.JobGraphGenerator.preVisit(JobGraphGenerator.java:109)
> 	at org.apache.flink.optimizer.plan.SourcePlanNode.accept(SourcePlanNode.java:86)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.SingleInputPlanNode.accept(SingleInputPlanNode.java:199)
> 	at org.apache.flink.optimizer.plan.OptimizedPlan.accept(OptimizedPlan.java:128)
> 	at org.apache.flink.optimizer.plantranslate.JobGraphGenerator.compileJobGraph(JobGraphGenerator.java:188)
> {code}
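> 
> The bottom of the trace shows the client-side job-graph generator Java-serializing a data source's user code into its task configuration ({{InstantiationUtil.serializeObject}} via {{TaskConfig.setStubWrapper}}), with the heap exhausted while a {{ByteArrayOutputStream}} grows. A plausible reading (an assumption, not confirmed here) is that the in-core test matrix is captured in the serialized source stub, and serializing such payloads for the many operators of the iterative ALS plan, plus the copy-on-grow of the intermediate buffers, exhausts the client heap. A rough, self-contained way to gauge the per-matrix payload (hypothetical helper, plain Java serialization of a dense stand-in):
> {code}
> import java.io.{ByteArrayOutputStream, ObjectOutputStream}
> 
> // Prints the Java-serialized size of a dense n x n Array[Array[Double]],
> // a stand-in for the test matrix at the sizes mentioned in this ticket.
> object SerializedSizeSketch extends App {
>   def serializedBytes(o: AnyRef): Int = {
>     val bos = new ByteArrayOutputStream()
>     val oos = new ObjectOutputStream(bos)
>     oos.writeObject(o)
>     oos.close()
>     bos.size()
>   }
> 
>   for (n <- Seq(50, 350, 500)) {
>     val m = Array.fill(n, n)(1.0)
>     println(s"$n x $n -> ${serializedBytes(m)} bytes")
>   }
> }
> {code}
> A single 500 x 500 dense matrix is only about 2 MB serialized, so the failure presumably requires many such serializations across the plan; that would also explain why merely shrinking the matrix sidesteps it.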



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
