spark-issues mailing list archives

From "Marcelo Vanzin (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-21157) Report Total Memory Used by Spark Executors
Date Wed, 21 Jun 2017 01:22:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-21157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16056779#comment-16056779 ]

Marcelo Vanzin commented on SPARK-21157:
----------------------------------------

I'm pretty confused by the description of flights. How are they presented to the user? I see
nothing in the API that lets me get information about a specific flight, nor anything in the
mock UI that shows how different stage boundaries affect the metrics, so it's unclear from the
document how flights would be used, or how they would affect the information that is shown.

The whole section seems to indicate that you would have some API for stage-specific memory
usage, or some extra information about how stages affect memory usage in the executors, but
I see nothing like that explained. Instead, I see a description of flights, and then the rest
of the document pretty much ignores them.

> Report Total Memory Used by Spark Executors
> -------------------------------------------
>
>                 Key: SPARK-21157
>                 URL: https://issues.apache.org/jira/browse/SPARK-21157
>             Project: Spark
>          Issue Type: Improvement
>          Components: Input/Output
>    Affects Versions: 2.1.1
>            Reporter: Jose Soltren
>         Attachments: TotalMemoryReportingDesignDoc.pdf
>
>
> Building on some of the core ideas of SPARK-9103, this JIRA proposes tracking total memory
> used by Spark executors, and a means of broadcasting, aggregating, and reporting memory usage
> data in the Spark UI.
> Here, "total memory used" refers to memory usage that is visible outside of Spark, to
> an external observer such as YARN, Mesos, or the operating system. The goal of this enhancement
> is to give Spark users more information about how Spark clusters are using memory. Total memory
> will include non-Spark JVM memory and all off-heap memory.
> Please consult the attached design document for further details.
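
For illustration only (this is not from the attached design document, and the object and method
names below are made up), a minimal Scala sketch of what "memory usage that is visible outside
of Spark" means in practice: on Linux, the VmRSS figure in /proc/self/status is roughly what an
external observer such as YARN's memory monitor or the operating system sees, and it can be
compared with the JVM's own heap and non-heap accounting.

import java.lang.management.ManagementFactory
import scala.io.Source

// Hypothetical sketch: sample the OS-visible resident set size of this process
// and contrast it with the JVM's internal view of memory usage.
object TotalMemorySketch {

  // Read VmRSS (resident set size) from /proc/self/status; Linux only.
  def rssBytes(): Option[Long] = {
    val source = Source.fromFile("/proc/self/status")
    try {
      source.getLines()
        .find(_.startsWith("VmRSS:"))          // e.g. "VmRSS:   123456 kB"
        .flatMap(_.split("\\s+").lift(1))      // the numeric field, in kB
        .map(_.toLong * 1024L)                 // convert kB to bytes
    } finally {
      source.close()
    }
  }

  def main(args: Array[String]): Unit = {
    val mem = ManagementFactory.getMemoryMXBean
    println(s"JVM heap used:     ${mem.getHeapMemoryUsage.getUsed} bytes")
    println(s"JVM non-heap used: ${mem.getNonHeapMemoryUsage.getUsed} bytes")
    println(s"OS-visible RSS:    ${rssBytes().getOrElse(-1L)} bytes")
  }
}

The gap between the OS-visible RSS and the JVM's heap figure is, loosely, the "non-Spark JVM
memory and off-heap memory" the issue proposes to surface; how the design doc actually collects
and aggregates this is described in the attachment, not here.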



