openwhisk-dev mailing list archives

From Carlos Santana <>
Subject Re: Limiting response size from actions
Date Sat, 28 Jan 2017 13:33:46 GMT
That's what I'm proposing: that we stream the JSON, truncate it at the 1MB
mark, and store that.

Oh, now re-reading the proposal, I see it said "top level"; I think I missed that part.

I would still like it to be easy to turn on and off, so we can run a load
test and compare results: what is the performance hit of the extra compute
and memory needed to determine the last field that fits within the 1MB mark?

It might be a wash, in which case the discussion is a moot point.

I just want to play devil's advocate and challenge any potential extra work
in the critical path; I want to be sure that every CPU cycle and byte of
memory gets squeezed :-)

I agree that this is very important for usability. The question is whether
it is worth sending a parsed JSON result or just raw text. The client will
still have access to all the data either way, but with raw text the client
gets every byte of that 1MB, whereas with JSON it might not get all the
data, only the last field that fits. Say there are 200kb of space left and
the next field is 300kb: in the JSON case, the client would not even be
able to see the name of the culprit field that is causing the overrun, but
if raw text is sent instead, the client will be able to see that last
field's name and the first 200kb of its value.
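To make the tradeoff concrete, here is a small Python sketch of the two
behaviors; the 50-byte limit and the field contents are made-up stand-ins
for the real 1MB case, not anything from OpenWhisk itself:

```python
import json

LIMIT = 50  # stand-in for the real 1MB limit

# "big" alone is larger than the space left after "small"
result = {"small": "x" * 10, "big": "y" * 100}
raw = json.dumps(result)

# Raw-text truncation: the client sees the name of the oversized
# field ("big") and the first bytes of its value.
raw_truncated = raw[:LIMIT]

# Top-level-field truncation: only whole fields that fit are kept,
# so "big" is dropped entirely and its name is lost.
partial, used = {}, 2  # 2 bytes for the enclosing braces
for key, value in result.items():
    piece = len(json.dumps({key: value})) - 2  # size of this field alone
    if used + piece <= LIMIT:
        partial[key] = value
        used += piece
```

With these numbers, `raw_truncated` still contains the string `"big"`,
while `partial` contains only `small`, which is exactly the visibility
difference described above.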

I'm OK with either solution; I'm not strongly voting for either, just
articulating my observations.

-- Carlos

On Sat, Jan 28, 2017 at 7:20 AM Nick Mitchell <> wrote:

> i like the proposed solution. i think this is an important problem to
> address, from a usability perspective. as an example: recently, a client
> application started to experience bimodal performance: either very fast, or
> 30.x seconds. the logs gave no indication as to what was going on, and so a
> bit of a wild goose chase ensued.
> side note: are we truly parsing the json results in memory? can't json
> validation and decoration be done in a streaming fashion?
> On Fri, Jan 27, 2017 at 10:35 AM, Rodric Rabbah <> wrote:
> > Currently, the size of an action result is not checked by the system and
> > can exceed the assumed limit of 1MB. This manifests itself in blocking
> > invokes, which will see at least a 30s delay in getting their results.
> > Similarly, larger-than-allowed invoke-time parameters are not rejected as
> > early in the pipeline as they should be. This is a standing open defect;
> > you can read more about it here [1], along with my proposed solution [2],
> > which I include below for convenience.
> >
> >
> >    1. Response is sized by the invoker.
> >       1. If it exceeds the system limit, the invoker will reject it.
> >       2. If it is within the limits, it will be processed as usual.
> >    2. When the invoker rejects an action response, it will do the
> >       following:
> >       1. Construct an error response object with a message explaining the
> >          failure.
> >       2. In addition, provide as a debugging aid a partial result that
> >          carries as many of the top-level fields from the dictionary
> >          response as fit into the error object.
> >    3. The error response for large responses shall include:
> >       1. "Action result of X MB exceeds system limit of Y MB".
> >       2. The number of top-level fields contained in the response and how
> >          many are actually included in the partial response.
> >
> > As part of doing this work, I will also apply the same technique to
> > reject large incoming payloads, which currently get rejected a bit later
> > in the pipeline than they should be.
> >
> > Comments and feedback welcomed and appreciated.
> >
> > -r
> >
> > [1]
> > [2] issuecomment-275687757
> >
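For illustration, the invoker-side check in the quoted proposal (size the
result, pass it through if it fits, otherwise return an error object with a
partial result) might look roughly like the Python sketch below. This is
not actual OpenWhisk invoker code: the function name, the `fieldsTotal`/
`fieldsIncluded`/`partialResult` keys, and the byte-accounting details are
all assumptions; only the error-message wording comes from the proposal.

```python
import json

LIMIT_MB = 1
LIMIT_BYTES = LIMIT_MB * 1024 * 1024

def check_response(result):
    """Sketch of the proposal: size the action result, pass it through
    if it fits, otherwise build an error object carrying as many whole
    top-level fields as fit, plus counts for debugging."""
    size = len(json.dumps(result).encode("utf-8"))
    if size <= LIMIT_BYTES:
        return result  # within limits: processed as usual

    # Keep only whole top-level fields that fit into the limit.
    partial, used = {}, 2  # 2 bytes for the enclosing braces
    for key, value in result.items():
        piece = len(json.dumps({key: value}).encode("utf-8")) - 2
        if used + piece <= LIMIT_BYTES:
            partial[key] = value
            used += piece

    return {
        "error": "Action result of %.2f MB exceeds system limit of %d MB"
                 % (size / 1024 / 1024, LIMIT_MB),
        "fieldsTotal": len(result),        # fields in the full response
        "fieldsIncluded": len(partial),    # fields that made it in
        "partialResult": partial,
    }
```

A result under 1MB is returned unchanged; one with a single oversized field
comes back as an error object whose partial result contains only the fields
that fit, so the client can at least see what was dropped by count.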
