spark-issues mailing list archives

From "Sean Owen (JIRA)" <>
Subject [jira] [Commented] (SPARK-20228) Random Forest instable results depending on spark.executor.memory
Date Fri, 07 Apr 2017 08:24:41 GMT


Sean Owen commented on SPARK-20228:

Without more detail I'm not sure what to make of it. Just giving more memory shouldn't change
anything directly, but it could affect things like caching and therefore locality, or whether
your jobs are failing for lack of memory. I have never encountered this when working with
decision forests and varying memory settings. It's hard to investigate with no reproduction.
Can you suggest what the issue is?
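One plausible mechanism (an assumption, not confirmed by this report): if memory settings change caching or partitioning, partial results may be aggregated in a different order, and floating-point addition is not associative, so small differences can appear and compound through training. A minimal pure-Python sketch of the effect:

```python
# Floating-point summation is order-dependent: the same values summed
# in two different orders can give different results. A large/small
# magnitude mix makes the effect easy to see.
vals = [0.1] * 10 + [1e16, -1e16]

forward = sum(vals)         # the ~1.0 is absorbed by 1e16, then cancelled
backward = sum(vals[::-1])  # 1e16 and -1e16 cancel first, ~1.0 survives

print(forward, backward)    # the two sums differ
```

If exact run-to-run reproducibility is the goal, note that Spark's random forest implementations accept a `seed` parameter (e.g. on `pyspark.ml.classification.RandomForestClassifier`); fixing it removes one source of variation, though it would not eliminate order-dependent floating-point differences.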

> Random Forest instable results depending on spark.executor.memory
> -----------------------------------------------------------------
>                 Key: SPARK-20228
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: PySpark
>    Affects Versions: 2.1.0
>            Reporter: Ansgar Schulze
> If I deploy a random forest model with, for example,
> spark.executor.memory            20480M
> I get a different result than if I deploy the model with
> spark.executor.memory            6000M
> I expected the same results but different runtimes.

This message was sent by Atlassian JIRA
