perl-modperl mailing list archives

From "Frotz Fa'Atuai (ffaatuai)" <>
Subject Re: close connection for request, but continue
Date Thu, 21 Apr 2016 14:54:08 GMT
You will need:

[] a background state storage location (a database table with a unique row
ID, or a directory of unique IDs, each pointing to a state file).
[] Your user-facing request page accepts the request, schedules the work,
and responds with a page which auto-refreshes against the GUID and reports
the status of the background request.
[] Your user-facing status page auto-refreshes to itself while the job is
in motion.
[] Your user-facing status page auto-refreshes to a "you're done; your
results are here / have been mailed to you" page.
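The state-storage item above can be sketched with SQLite. The `jobs` table, its columns, and the status values are illustrative assumptions, not part of the original post:

```python
# Minimal sketch of a background job state store, assuming SQLite and
# an illustrative "jobs" schema (guid / status / result_uri).
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")  # a real deployment would use a shared database
conn.execute("""
    CREATE TABLE jobs (
        guid       TEXT PRIMARY KEY,   -- unique ID the status page refreshes against
        status     TEXT NOT NULL,      -- queued / active / completed / failed
        result_uri TEXT                -- where the finished output lives
    )
""")

def schedule_job() -> str:
    """Accept a request: record it as queued and hand back its GUID."""
    guid = str(uuid.uuid4())
    conn.execute("INSERT INTO jobs (guid, status) VALUES (?, 'queued')", (guid,))
    conn.commit()
    return guid

guid = schedule_job()
status, = conn.execute("SELECT status FROM jobs WHERE guid = ?", (guid,)).fetchone()
```

The request page would return `guid` to the client; every later status lookup keys off that same row.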

[] My corporate business users can follow along with a 15-second
auto-refresh (as long as the page clearly indicates that it will
auto-refresh in 15 seconds).  Count-down timers are probably better.
[] My technical users close the pop-up tab after the first request (not
caring about the intermediate status pages and knowing that the work will
be completed or the results mailed to them).
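The auto-refresh behaviour described above can be sketched as a small page renderer. The function name, the status values, and the page wording are assumptions for illustration:

```python
# Sketch of a status page that reloads itself every 15 seconds while the
# job is in motion, and stops refreshing once the job is done.
def status_page(guid: str, status: str, refresh_seconds: int = 15) -> str:
    """Render the HTML for one poll of the job identified by guid."""
    if status in ("queued", "active"):
        # Still running: a meta refresh tag sends the browser back here,
        # and the text clearly announces the upcoming refresh.
        return (
            f'<html><head><meta http-equiv="refresh" content="{refresh_seconds}">'
            f"</head><body>Job {guid} is {status}. "
            f"This page will auto-refresh in {refresh_seconds} seconds.</body></html>"
        )
    # Finished (completed or failed): no refresh tag, just the outcome.
    return f"<html><body>Job {guid} is {status}; your results are ready.</body></html>"

running_page = status_page("abc-123", "active")
done_page = status_page("abc-123", "completed")
```

A count-down timer would replace the static "in 15 seconds" text with a small piece of JavaScript, but the meta-refresh version works without any scripting.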

[] Some of our backend jobs take a long time (lots of data to grind
through); these tend toward email status notifications.
[] The database-backed queue view (assuming you're internally facing only)
allows your support teams to observe queued jobs (things which will
happen in the future), active jobs (things running right now on some
machine), completed jobs (jobs which succeeded), and failed jobs (jobs
which did not succeed).
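The queue view for support teams might be sketched as a per-status summary query. The schema and status names are assumptions carried over from the checklist, not anything the original post specifies:

```python
# Sketch of a support-team queue view: count jobs in each state so that
# queued, active, completed, and failed work is visible at a glance.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (guid TEXT PRIMARY KEY, status TEXT NOT NULL)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?)",
    [("j1", "queued"), ("j2", "active"), ("j3", "completed"),
     ("j4", "failed"), ("j5", "completed")],
)

def queue_view(conn) -> dict:
    """Summarize the queue by status, the way a support dashboard would."""
    rows = conn.execute("SELECT status, COUNT(*) FROM jobs GROUP BY status")
    # summary maps each status to its job count
    return dict(rows.fetchall())

summary = queue_view(conn)
```

A real dashboard would also list the individual rows per status (with timestamps and the machine running each active job), but the grouping query is the heart of it.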

Hopefully these implementation specifics and operational observations
assist you as you take André's excellent summary and put it all to work.
Cisco Systems, Inc.

On 2016/04/21, 07:36, "André Warnier (tomcat)" <> wrote:

>On 21.04.2016 11:20, Iosif Fettich wrote:
>> Dear mod_perl list,
>> please consider my gratefulness for any hints/insight :)
>> I'm trying to achieve the following: when there is an incoming request,
>> I want to set a time limit in which an answer should be delivered to
>> the client, no matter what.
>> However, since the work triggered by the initial request (there is
>> another request to another site involved) might take much longer than
>> that time limit, I want that work to properly finish, despite the fact
>> that the initial request was 'served'.
>The "canonical" way to do this would be something like:
>- the client sends the request to the server
>- the server allocates a process (or thread or whatever) to process this
>request
>- this request-processing process "delegates" this browser request to
>some other, independent-of-the-webserver process, which can take as long
>as necessary to fulfill the (background part of the) request
>- the request-processing process does not wait for the response or the
>exit of that independent process, but returns a response right away to
>the client browser (such as "Thank you for your request. It is being
>handled by our back-office. You will receive an email when it's done.")
>- and then, as far as the webserver is concerned, this client request is
>finished (cleanly), and the request-processing process can be
>re-allocated to some other incoming request.
>Optionally, you could provide a way for the client to periodically
>enquire as to the advancement status of his request.
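The delegation step in the quoted summary can be sketched as follows; the worker command here is a stand-in for the real back-office job, and the function name is an illustrative assumption:

```python
# Sketch of delegating slow work to a process independent of the
# webserver, so the request handler can answer the client immediately.
import subprocess
import sys

def delegate(job_guid: str) -> subprocess.Popen:
    """Launch the worker in its own session; the handler never calls
    wait()/communicate() on it, so it never blocks on the slow work."""
    worker = subprocess.Popen(
        [sys.executable, "-c", "import time; time.sleep(1)"],  # placeholder job
        start_new_session=True,  # detach from the handler's process group
    )
    return worker

worker = delegate("abc-123")
# The handler returns right away, e.g. with:
response = "Thank you for your request. You will receive an email when it's done."
```

The worker would update the job's row in the state store as it progresses, which is what the auto-refreshing status page (or the optional periodic enquiry) reads back to the client.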
