spark-dev mailing list archives

From Reynold Xin <>
Subject Re: New Spark json endpoints
Date Thu, 17 Sep 2015 05:17:53 GMT
Do we need to increment the version number if the changes are just strict additions?
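
The question hinges on backward compatibility: a JSON client that reads only the fields it was written against keeps working when the server adds new ones, which is why strictly additive changes are often shipped without a version bump. A minimal sketch of that property (the payloads and `coresGranted` field below are illustrative, not taken from the actual Spark API):

```python
import json

# Response from a hypothetical v1 endpoint.
v1_payload = '{"id": "app-001", "name": "my-job"}'

# The same endpoint after a strictly additive change: one new field.
v1_additive_payload = '{"id": "app-001", "name": "my-job", "coresGranted": 8}'

def read_app(raw):
    """A client that touches only the fields it knows about."""
    data = json.loads(raw)
    return data["id"], data["name"]

# The old client parses both payloads identically; the new field is ignored.
assert read_app(v1_payload) == read_app(v1_additive_payload) == ("app-001", "my-job")
```

Removing or renaming a field, by contrast, would break such a client and is the usual trigger for a new API version.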

On Wed, Sep 16, 2015 at 7:10 PM, Kevin Chen <> wrote:

> Just wanted to bring this email up again in case there were any thoughts.
> Having all the information from the web UI accessible through a supported
> json API is very important to us; are there any objections to us adding a
> v2 API to Spark?
> Thanks!
> From: Kevin Chen <>
> Date: Friday, September 11, 2015 at 11:30 AM
> To: "" <>
> Cc: Matt Cheah <>, Mingyu Kim <>
> Subject: New Spark json endpoints
> Hello Spark Devs,
>  I noticed that [SPARK-3454], which introduces new json endpoints at
> /api/v1/[path] for information previously only shown on the web UI, does
> not expose several useful properties about Spark jobs that are exposed on
> the web UI and on the unofficial /json endpoint.
>  Specific examples include the maximum number of allotted cores per
> application, the amount of memory allotted to each slave, and the number
> of cores used by each worker. These are provided as 'app.cores',
> 'app.memoryperslave', and 'worker.coresused' in the /json endpoint, and
> all also appear on the web UI.
>  Is there any specific reason that these fields are not exposed in the
> public API? If not, would it be reasonable to add them to the json blobs,
> possibly in a future /api/v2 API?
> Thank you,
> Kevin Chen
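
For context, a client can pull the fields Kevin names from the unofficial /json endpoint today. A hedged sketch, assuming a simplified response shape keyed as described in the thread ('cores', 'memoryperslave' under applications; 'coresused' under workers) — the real payload contains more fields:

```python
import json

# Assumed, simplified shape of the /json endpoint's response; the key
# names below come from the thread, the surrounding structure is a guess.
sample = '''
{
  "activeapps": [{"name": "my-job", "cores": 8, "memoryperslave": 1024}],
  "workers": [{"id": "worker-1", "coresused": 4}]
}
'''

def summarize(raw):
    """Extract per-app core/memory allotments and per-worker core usage."""
    data = json.loads(raw)
    apps = [(a["name"], a["cores"], a["memoryperslave"]) for a in data["activeapps"]]
    workers = [(w["id"], w["coresused"]) for w in data["workers"]]
    return apps, workers

apps, workers = summarize(sample)
assert apps == [("my-job", 8, 1024)]
assert workers == [("worker-1", 4)]
```

Exposing the same fields under the supported /api/v1 (or a future /api/v2) path would let such clients drop the unofficial endpoint entirely.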
