commons-dev mailing list archives

From Jon Stevens <>
Subject Re: Workflow & Turbine design questions
Date Wed, 28 Nov 2001 19:32:36 GMT
Woah! I hope you decide to contribute your service back to the Turbine
group. :-) At some point soon, I'm going to need something very similar for

Brain dump:

That said, one way you can do things in a web environment is to spawn a
thread to do the processing so the user doesn't have to wait. Web
environments are very disconnected (i.e., you don't know if the user has just
closed their browser window)...
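A minimal sketch of that idea: hand the long-running work to a background thread so the request can return immediately. The class and method names here are hypothetical, not part of Turbine or Commons Workflow:

```java
// Sketch: run expensive work on a background thread so the HTTP request
// returns right away. Names (BackgroundJob, submit) are made up for illustration.
public class BackgroundJob {

    public static Thread submit(Runnable work) {
        Thread worker = new Thread(work, "workflow-worker");
        worker.setDaemon(true); // don't keep the JVM alive for abandoned jobs
        worker.start();
        return worker;          // the caller returns to the user immediately
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = submit(() -> System.out.println("processing files..."));
        t.join(); // only for this demo; a servlet would NOT wait on the worker
    }
}
```

In a real Turbine Action you would just call submit() and render the response; the worker outlives the request.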

Then, when the thread is finished, an email is sent to the user to let them
know that things have finished. The process can also do things like update a
database table which can be read each time the user accesses a page on
the site. That table would be able to maintain the state of what is going on
with regards to the processing and relay that to the user as they browse the
site.
I hope that helps a bit...


on 11/27/01 7:11 PM, "Brett Gullan" <> wrote:

> I am working on a project to develop a Turbine-based workflow engine for
> file processing, using the Commons Workflow package, and I would be
> interested in any feedback or suggestions as to how best to implement
> this.
> The main focus of the project is to automate file processing for
> print/publishing workflow. Users move/copy files into predefined
> "hotfolders" which trigger a series of workflow steps such as converting
> a file from PostScript to PDF, compressing images or transferring a file
> or directory to a remote server (via FTP, HTTP, WebDAV, etc...).
> Currently I have created a WorkflowService, modelled after the Turbine
> SchedulerService, that retrieves items from a queue and initiates the
> processing sequence. A scheduled job periodically retrieves the list of
> hotfolders (from storage; they are implemented as OM/Peers), checks for
> any new files and submits them to the workflow queue, along with a
> reference to an Activity definition that will be used to process the
> files.
> My intention is that as each entry is retrieved from the queue the
> associated Activity definition file is 'digested' and a processing
> thread started to perform the workflow steps.
> I also need to ensure there is a mechanism for maintaining
> state/progress information for each workflow step -- both to report back
> to the client UI and for 'recovery' should a process be interrupted.
> Workflow steps may be quite expensive and/or time consuming -- for
> instance, PDF conversion of large publishing files typically takes 1-5
> minutes, thus processing a directory of files could take significantly
> longer. During this period a user needs to see progress information such
> as a progress bar or estimated time to completion.
> I am unclear at this stage as to how the workflow Context and Scope fit
> in. I figure the processing thread itself should probably be a Context
> implementation and maybe the Scope implementation is the conduit to the
> UI? A Velocity Screen/Action could request a list of processing
> threads/Contexts from the WorkflowService and insert each Scope
> implementation into the Velocity Context? The Scope would probably also
> need to be backed by an OM/Peer to persist state information.
> I would appreciate any suggestions or alternative design ideas.
> Thanks,
> Brett

