geronimo-dev mailing list archives

From "Aaron Mulder" <ammul...@alumni.princeton.edu>
Subject Re: GBeans representing separately persistent data
Date Wed, 14 Jun 2006 19:20:16 GMT
Yeah, I'm not imagining using this approach for "thousands" of jobs.
Though in truth, it shouldn't take an XML writer an unreasonable
amount of time to emit a thousand (or ten thousand) elements, so I'm
not sure there would be a huge problem in having more entries in
config.xml.
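
To put a rough number on that, here's a quick back-of-the-envelope
sketch using the standard StAX API.  The element and attribute names
are invented for illustration, not the actual config.xml schema, but
something shaped like this should finish in a small fraction of a
second on any modern machine:

    import javax.xml.stream.XMLOutputFactory;
    import javax.xml.stream.XMLStreamWriter;
    import java.io.FileWriter;

    // Timing sketch only: "gbean"/"name" here are placeholders,
    // not the real config.xml vocabulary.
    public class ConfigWriteTiming {
        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            XMLStreamWriter out = XMLOutputFactory.newInstance()
                    .createXMLStreamWriter(new FileWriter("config-test.xml"));
            out.writeStartDocument();
            out.writeStartElement("configuration");
            for (int i = 0; i < 10000; i++) {
                out.writeStartElement("gbean");
                out.writeAttribute("name", "job" + i);
                out.writeCharacters("0 0 12 * * ?");  // cron-style payload
                out.writeEndElement();
            }
            out.writeEndElement();
            out.writeEndDocument();
            out.close();
            System.out.println("Wrote 10000 elements in "
                    + (System.currentTimeMillis() - start) + " ms");
        }
    }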

Anyway, if you have thousands of jobs, I'd recommend a dedicated tool
or GUI.  If you have an application with "some" but not "thousands" of
jobs, I imagine it would be nice to have a convenient way to deploy
and manage them through Geronimo.  To me, these are not overlapping
use cases.  I don't know exactly where to draw the line between them,
but I think we can clarify what we're targeting with each approach and
let the developer decide which to take.

Thanks,
    Aaron

On 6/14/06, Dain Sundstrom <dain@iq80.com> wrote:
> On Jun 12, 2006, at 8:11 PM, John Sisson wrote:
>
> > How scalable would this be?  I would imagine there would be
> > applications that may create thousands of jobs (possibly per day).
> > Wouldn't startup be slow if we had to de-serialize thousands of
> > jobs at startup in the process of loading all the GBeans that
> > represent the jobs?  Not having looked at Quartz myself, it seems
> > it would be much better to have the jobs in a database.  For
> > example, thousands of jobs could be created that aren't to be
> > executed until next year.  I would expect that a job management
> > engine would optimize processing, e.g. only read jobs from the
> > database into memory that are to be executed today or in the next
> > hour.
>
>
> And that config.xml file is going to get mighty large, so every time
> someone makes a small change we are writing out all the jobs...
>
> -dain
>
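
As a rough illustration of John's suggestion above (pull only the
jobs that are due soon out of the database, instead of materializing
everything at startup), a loader along these lines might work.  The
table and column names ("scheduled_jobs", "next_fire_time",
"job_data") are made up for illustration; Quartz's own JDBC job store
does something similar with a next-fire-time column:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Timestamp;

    // Hypothetical sketch only: the schema below is invented.
    public class UpcomingJobLoader {

        // Load only the jobs scheduled to fire within the next hour.
        public void loadDueJobs(Connection conn) throws Exception {
            long now = System.currentTimeMillis();
            long horizon = now + 60L * 60L * 1000L;  // one hour ahead
            PreparedStatement stmt = conn.prepareStatement(
                    "SELECT job_name, job_data FROM scheduled_jobs"
                    + " WHERE next_fire_time BETWEEN ? AND ?");
            stmt.setTimestamp(1, new Timestamp(now));
            stmt.setTimestamp(2, new Timestamp(horizon));
            ResultSet rs = stmt.executeQuery();
            while (rs.next()) {
                scheduleInMemory(rs.getString("job_name"),
                                 rs.getBytes("job_data"));
            }
            rs.close();
            stmt.close();
        }

        private void scheduleInMemory(String name, byte[] data) {
            // Hand the job to the in-memory scheduler (details elided).
        }
    }

With that shape, startup cost is proportional to the number of jobs
due soon, not the total number of jobs ever scheduled, which covers
the jobs-created-for-next-year case John raised.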
