geronimo-dev mailing list archives

From: Matt Hogstrom <>
Subject: Re: GBeans representing separately persistent data
Date: Thu, 15 Jun 2006 04:26:37 GMT
Thinking about this from an operational model (not as a developer), I think the database approach
makes a lot of sense.  If the jobs are hosted in a DB they can be managed directly from a GUI that
makes more sense to the operators or other folks that aren't developers.  It strikes me as a bit
heavy to have each job as a GBean in the config.xml.
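To make this concrete, here's a rough sketch of what a database-backed scheduler could look
like using Quartz's JDBC job store.  This is only an illustration: the property names are
Quartz's standard StdSchedulerFactory settings, but the data source name, Derby driver, and
connection details are made up.

    import java.util.Properties;
    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    public class DatabaseBackedSchedulerExample {

        public static Scheduler createScheduler() throws Exception {
            Properties props = new Properties();
            props.setProperty("org.quartz.scheduler.instanceName", "GeronimoScheduler");
            props.setProperty("org.quartz.threadPool.class", "org.quartz.simpl.SimpleThreadPool");
            props.setProperty("org.quartz.threadPool.threadCount", "5");

            // Persist jobs in a database instead of in memory or config.xml.
            props.setProperty("org.quartz.jobStore.class",
                    "org.quartz.impl.jdbcjobstore.JobStoreTX");
            props.setProperty("org.quartz.jobStore.driverDelegateClass",
                    "org.quartz.impl.jdbcjobstore.StdJDBCDelegate");
            props.setProperty("org.quartz.jobStore.dataSource", "jobDS");  // made-up name
            props.setProperty("org.quartz.jobStore.tablePrefix", "QRTZ_");

            // Hypothetical embedded-Derby connection details for the example.
            props.setProperty("org.quartz.dataSource.jobDS.driver",
                    "org.apache.derby.jdbc.EmbeddedDriver");
            props.setProperty("org.quartz.dataSource.jobDS.URL", "jdbc:derby:jobsdb;create=true");
            props.setProperty("org.quartz.dataSource.jobDS.user", "app");
            props.setProperty("org.quartz.dataSource.jobDS.password", "");
            props.setProperty("org.quartz.dataSource.jobDS.maxConnections", "5");

            Scheduler scheduler = new StdSchedulerFactory(props).getScheduler();
            scheduler.start();
            return scheduler;
        }
    }

With a setup like this the jobs live in the QRTZ_* tables, so a management GUI can list and
edit them directly without anyone touching config.xml.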

Based on what I know it seems to make a lot of sense to have a GBean that bootstraps the
scheduler container, and that container manages the jobs.  I was thinking back to the premise
outlined in another e-mail thread by Aaron that said we should be as easy as a Mac.  I think
one GBean per job is not quite in line with that simple premise (I'm thinking of
non-developers here).

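Something like the following sketch is what I have in mind: a single GBean whose only job is
to start and stop the scheduler container.  It assumes Geronimo's GBeanLifecycle and
GBeanInfoBuilder APIs and Quartz's default scheduler factory; the class name is made up.

    import org.apache.geronimo.gbean.GBeanInfo;
    import org.apache.geronimo.gbean.GBeanInfoBuilder;
    import org.apache.geronimo.gbean.GBeanLifecycle;
    import org.quartz.Scheduler;
    import org.quartz.impl.StdSchedulerFactory;

    // One GBean bootstraps the scheduler container; the individual jobs
    // live in the scheduler's own (database) store, not in config.xml.
    public class SchedulerContainerGBean implements GBeanLifecycle {

        private Scheduler scheduler;

        public void doStart() throws Exception {
            // The factory picks up quartz.properties (or the programmatic
            // properties shown above) and finds the JDBC job store.
            scheduler = StdSchedulerFactory.getDefaultScheduler();
            scheduler.start();
        }

        public void doStop() throws Exception {
            scheduler.shutdown();
        }

        public void doFail() {
            try {
                scheduler.shutdown();
            } catch (Exception ignored) {
            }
        }

        public static final GBeanInfo GBEAN_INFO;

        static {
            GBeanInfoBuilder builder = GBeanInfoBuilder.createStatic(SchedulerContainerGBean.class);
            GBEAN_INFO = builder.getBeanInfo();
        }

        public static GBeanInfo getGBeanInfo() {
            return GBEAN_INFO;
        }
    }

Adding or removing a job then goes through the scheduler's own store (the database above)
rather than through the deployment plan or config.xml.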
The people that will be adding and removing jobs are most likely not developers and will not
have the same skill set.

Does this make sense?


Aaron Mulder wrote:
> Yeah, I'm not imagining using this approach for "thousands" of jobs.
> Though in truth, it shouldn't take an XML parser an unreasonable
> amount of time to write a thousand (or ten thousand) elements, so I'm
> not sure there would be a huge problem in having more entries in
> config.xml.
> Anyway, if you have thousands of jobs, I'd recommend a dedicated tool
> or GUI.  If you have an application with "some" but not "thousands" of
> jobs, I imagine it would be nice to have a convenient way to deploy
> and manage them through Geronimo.  To me, these are not overlapping
> use cases.  I don't know where to draw the line in between, but I
> think we can clarify what we're targeting with each approach and let
> the developer decide which to take.
> Thanks,
>    Aaron
> On 6/14/06, Dain Sundstrom <> wrote:
>> On Jun 12, 2006, at 8:11 PM, John Sisson wrote:
>> > How scalable would this be?  I would imagine there would be
>> > applications that may create thousands of jobs (possibly per day).
>> > Wouldn't startup be slow if we had to de-serialize thousands of
>> > jobs at startup in the process of loading all the GBeans that
>> > represent the jobs?  Not having looked at Quartz myself, it seems
>> > it would be much better to have jobs in a database.  For example,
>> > thousands of Jobs could be created that aren't to be executed until
>> > next year.   I would expect that a job management engine would
>> > optimize processing, e.g. only read jobs from the database into
>> > memory that are to be executed today or in the next hour.
>> And that config.xml file is going to get mighty large, so every time
>> someone makes a small change we are writing out all the jobs...
>> -dain
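
For what it's worth, the lazy loading John describes is roughly what Quartz's JDBC job store
does already: triggers stay in the database and are only acquired into memory as their fire
time approaches.  Deploying a job then becomes an API call against the scheduler rather than
an edit to config.xml.  Here is a sketch using the Quartz 1.x API that was current at the
time; the ReportJob class and the job and trigger names are made up.

    import java.util.Date;
    import org.quartz.Job;
    import org.quartz.JobDetail;
    import org.quartz.JobExecutionContext;
    import org.quartz.JobExecutionException;
    import org.quartz.Scheduler;
    import org.quartz.SchedulerException;
    import org.quartz.SimpleTrigger;

    public class ReportJob implements Job {

        public void execute(JobExecutionContext context) throws JobExecutionException {
            // the actual work of the job goes here
        }

        // Deploy a one-shot job that fires an hour from now.  With the JDBC
        // job store it is written to the database, not held in memory.
        public static void deploy(Scheduler scheduler) throws SchedulerException {
            JobDetail job = new JobDetail("nightlyReport", Scheduler.DEFAULT_GROUP,
                    ReportJob.class);
            SimpleTrigger trigger = new SimpleTrigger("nightlyReportTrigger",
                    Scheduler.DEFAULT_GROUP,
                    new Date(System.currentTimeMillis() + 60L * 60 * 1000L));
            scheduler.scheduleJob(job, trigger);
        }
    }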
