flink-issues mailing list archives

From "Xintong Song (Jira)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-17493) Possible direct memory leak in cassandra sink
Date Fri, 15 May 2020 03:23:00 GMT

[ https://issues.apache.org/jira/browse/FLINK-17493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17107890#comment-17107890 ]

Xintong Song commented on FLINK-17493:

bq. Repeat step 2 and step 3; the direct memory used keeps growing until it reaches the MaxDirectMemorySize (set via JVM options, which is framework-off-heap + task-off-heap + network-memory).

What happened after MaxDirectMemorySize was reached? Did you see a direct memory OOM?
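For context, the "Outside JVM" direct count shown in the web UI reflects the JVM's built-in "direct" buffer pool, which any process can read through the platform MXBeans. A minimal sketch of that probe (class name is illustrative, not from Flink):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

// Reads the JVM's built-in "direct" buffer pool, the same counters the
// Flink web UI reports under "Outside JVM". Class name is illustrative.
public class DirectPoolProbe {

    static BufferPoolMXBean directPool() {
        for (BufferPoolMXBean p : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(p.getName())) {
                return p;
            }
        }
        throw new IllegalStateException("direct buffer pool not found");
    }

    public static void main(String[] args) {
        BufferPoolMXBean direct = directPool();
        long countBefore = direct.getCount();

        // Allocating a direct buffer increments the pool's count and used bytes.
        ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20); // 1 MiB off-heap

        System.out.println(buf.isDirect());                  // true
        System.out.println(direct.getCount() > countBefore); // true
    }
}
```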

> Possible direct memory leak in cassandra sink
> ---------------------------------------------
>                 Key: FLINK-17493
>                 URL: https://issues.apache.org/jira/browse/FLINK-17493
>             Project: Flink
>          Issue Type: Bug
>          Components: Connectors / Cassandra
>    Affects Versions: 1.9.0, 1.10.0
>            Reporter: nobleyd
>            Priority: Major
>         Attachments: image-2020-05-14-21-58-59-152.png
> # The Cassandra sink uses direct memory.
>  # Start a standalone cluster (1 machine) for the test.
>  # After the cluster has started, check the Flink web UI and record the task manager's memory info, specifically the direct memory part.
>  # Start a job that reads from Kafka and writes to Cassandra using the Cassandra sink; you can see the direct memory count in the 'Outside JVM' part go up.
>  # Stop the job; the direct memory count does not decrease (even when using 'jmap -histo:live pid' to make the task manager GC).
>  # Repeat several times: the direct memory count keeps growing.
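The 'jmap -histo:live' step in the report is significant because that command forces a full GC first, and a direct buffer's native memory is released only once its owning ByteBuffer becomes unreachable and is collected. A minimal sketch of that lifecycle (class name is hypothetical; the GC retry loop is a robustness workaround, since cleaner execution is not instantaneous):

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;

// Shows that direct memory is freed only after the owning ByteBuffer is
// garbage-collected -- the mechanism 'jmap -histo:live' relies on.
// Class name is hypothetical, not from Flink or the Cassandra driver.
public class DirectBufferGcDemo {

    static BufferPoolMXBean directPool() {
        for (BufferPoolMXBean p : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            if ("direct".equals(p.getName())) {
                return p;
            }
        }
        throw new IllegalStateException("direct buffer pool not found");
    }

    public static void main(String[] args) throws InterruptedException {
        BufferPoolMXBean direct = directPool();

        long baseline = direct.getMemoryUsed();
        ByteBuffer buf = ByteBuffer.allocateDirect(8 << 20); // 8 MiB, still referenced
        long allocated = direct.getMemoryUsed();
        System.out.println(allocated - baseline >= (8 << 20)); // true: grew while referenced

        buf = null; // drop the only reference
        long afterGc = allocated;
        // Retry GC for up to ~5s; the buffer's cleaner runs shortly after collection.
        for (int i = 0; i < 50 && afterGc >= allocated; i++) {
            System.gc();
            Thread.sleep(100);
            afterGc = direct.getMemoryUsed();
        }
        System.out.println(afterGc < allocated); // true: memory freed after GC
    }
}
```

By the same logic, if the sink released its buffers cleanly the count would drop after the forced GC in step 5; a count that survives the GC and keeps climbing across job restarts suggests the buffers are still strongly referenced somewhere.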

This message was sent by Atlassian Jira
