flink-issues mailing list archives

From "Maximilian Michels (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-2235) Local Flink cluster allocates too much memory
Date Mon, 22 Jun 2015 15:43:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-2235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14596099#comment-14596099 ]

Maximilian Michels commented on FLINK-2235:

Actually, I think the explanation is simple. The doc string for {{Runtime.maxMemory()}} says:
     * Returns the maximum amount of memory that the Java virtual machine will
     * attempt to use.  If there is no inherent limit then the value {@link
     * java.lang.Long#MAX_VALUE} will be returned.

So it returned {{Long.MAX_VALUE}} because it found "no inherent limit". As a quick fix, I
would check whether the returned value equals {{Long.MAX_VALUE}} and, in that case, use
{{Runtime.freeMemory()}} instead of {{Runtime.maxMemory() - Runtime.totalMemory() + Runtime.freeMemory()}}.
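
A rough sketch of what I mean (the method name here is made up for illustration; the actual change would go into {{EnvironmentInformation}}):

{code:java}
// Hypothetical sketch of the proposed quick fix, not the actual Flink code.
public static long estimateFreeHeapMemory() {
    Runtime r = Runtime.getRuntime();
    long maxMemory = r.maxMemory();

    if (maxMemory == Long.MAX_VALUE) {
        // The JVM reported "no inherent limit" (observed on some Java 1.6 builds),
        // so maxMemory() is unusable; fall back to the currently free heap.
        return r.freeMemory();
    } else {
        // Free heap after a full GC: headroom above the committed heap
        // plus what is still free inside it.
        return maxMemory - r.totalMemory() + r.freeMemory();
    }
}
{code}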

> Local Flink cluster allocates too much memory
> ---------------------------------------------
>                 Key: FLINK-2235
>                 URL: https://issues.apache.org/jira/browse/FLINK-2235
>             Project: Flink
>          Issue Type: Bug
>          Components: Local Runtime, TaskManager
>    Affects Versions: 0.9
>         Environment: Oracle JDK: 1.6.0_65-b14-462
> Eclipse
>            Reporter: Maximilian Michels
>            Priority: Minor
> When executing a Flink job locally, the task manager gets initialized with an insane amount of memory. After a quick look in the code it seems that the call to {{EnvironmentInformation.getSizeOfFreeHeapMemoryWithDefrag()}} returns a wrong estimate of the heap memory size.
> Moreover, the same user switched to Oracle JDK 1.8 and that made the error disappear. So I'm guessing this is some Java 1.6 quirk.

This message was sent by Atlassian JIRA
