spark-issues mailing list archives

From "dylanzhou (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (SPARK-13183) Bytebuffers occupy a large amount of heap memory
Date Tue, 16 Feb 2016 07:49:18 GMT

    [ https://issues.apache.org/jira/browse/SPARK-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15148205#comment-15148205 ]

dylanzhou edited comment on SPARK-13183 at 2/16/16 7:48 AM:
------------------------------------------------------------

@Sean Owen this may be a memory leak; the program eventually fails with java.lang.OutOfMemoryError:
Java heap space. When I increase the driver memory, the streaming program just runs a little
longer; in my opinion the byte[] objects cannot be reclaimed by the GC. Can you give me some
advice? My question is posted here, thank you!
http://apache-spark-user-list.1001560.n3.nabble.com/the-memory-leak-problem-of-use-sparkstreamimg-and-sparksql-with-kafka-in-spark-1-4-1-td26231.html





> Bytebuffers occupy a large amount of heap memory
> ------------------------------------------------
>
>                 Key: SPARK-13183
>                 URL: https://issues.apache.org/jira/browse/SPARK-13183
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.4.1
>            Reporter: dylanzhou
>
> When I used Spark Streaming with Spark SQL and cached a table, I found that the old
> generation grows very fast and full GC is very frequent; after running for a period of
> time the job runs out of memory. Analyzing the heap, I found a large number of
> org.apache.spark.sql.columnar.ColumnBuilder[38] @ 0xd022a0b8 objects taking up 90% of
> the space; looking at the source, the space is occupied by HeapByteBuffer. I don't know
> why these objects are not released; they just wait for the GC to recycle them. If I do
> not cache the table this problem does not occur, but I need to query this table repeatedly.
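The retention pattern described above (buffers that stay reachable for as long as the table is cached, so the GC cannot reclaim them) can be sketched in plain Java. This is an illustrative sketch, not Spark code; the class and method names below are hypothetical. In Spark SQL the analogous remedy is to uncache the table (e.g. `sqlContext.uncacheTable("t")`) once it is no longer needed.

```java
import java.nio.ByteBuffer;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a cache holding strong references to column buffers.
// ByteBuffer.allocate() returns a HeapByteBuffer backed by a byte[] on the
// Java heap; while the map retains the reference, the GC cannot reclaim it.
public class ColumnCacheSketch {
    private final Map<String, ByteBuffer> cache = new HashMap<>();

    void cacheColumn(String name, int sizeBytes) {
        // Each cached column pins sizeBytes of heap until the entry is dropped.
        cache.put(name, ByteBuffer.allocate(sizeBytes));
    }

    void uncacheAll() {
        // Dropping the references is what makes the buffers collectable --
        // analogous to uncaching the table in Spark SQL.
        cache.clear();
    }

    int cachedCount() {
        return cache.size();
    }

    public static void main(String[] args) {
        ColumnCacheSketch c = new ColumnCacheSketch();
        for (int i = 0; i < 100; i++) {
            c.cacheColumn("col" + i, 1 << 16); // 64 KiB per column
        }
        System.out.println(c.cachedCount()); // buffers pinned on the heap
        c.uncacheAll();
        System.out.println(c.cachedCount()); // now eligible for GC
    }
}
```

The point of the sketch: the buffers are not "leaked" in the JVM sense, they are still strongly reachable from the cache, so no amount of full GC will free them until the references are released.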



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

