camel-users mailing list archives

From "lakshmi.prashant" <>
Subject Re: Quartz job data deletion in clustered quartz2
Date Mon, 10 Nov 2014 04:33:50 GMT
Hi Claus,

  There was a miscommunication - I don't think we need a special classloader
helper.

  The issue was that on un-deployment of one camel blueprint bundle
(with a camel-quartz2 route), the quartz job data is not deleted from the DB
when clustered quartz is used.

 Unfortunately, we do not want to delete the job data when the route is
stopped via a RoutePolicySupport subclass, as the main intent of clustered
quartz is job recovery.
  - The scheduler is shut down (QuartzComponent: doStop()) if there are
no more jobs (i.e. the scheduler is not shared across camel context bundles),
and that works fine.
  - But if the scheduler configuration / scheduler instance is shared across
camel-quartz routes / bundles, the scheduler continues to run.
  - When the scheduler acquires the next triggers, the trigger belonging to
the undeployed bundle is also obtained, and it then tries to execute that
trigger by loading the CamelJob class from the uninstalled bundle.
  - If it cannot load the class for that trigger, it throws an exception and
the rest of the triggers do not get executed at that time.

Please refer to line no. 876 in the quartz source - it throws an
exception if the job class cannot be loaded and does not proceed further:
     job.setJobClass(loadHelper.loadClass(rs.getString(COL_JOB_CLASS)));
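The failure mode is ordinary class loading by name: once the owning bundle is
uninstalled, no classloader can resolve the job class name stored in the DB. A
minimal non-OSGi illustration (the class name here is hypothetical, standing in
for the value read from COL_JOB_CLASS):

```java
public class LoadClassDemo {
    public static void main(String[] args) {
        // Hypothetical job class name, as Quartz would read it from COL_JOB_CLASS.
        String stored = "com.example.UninstalledCamelJob";
        try {
            Class<?> jobClass = Class.forName(stored);
            System.out.println("loaded " + jobClass.getName());
        } catch (ClassNotFoundException e) {
            // Quartz throws at this point and stops processing the remaining triggers.
            System.out.println("cannot load " + stored);
        }
    }
}
```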

  1. I have written an OSGi EventHandler service that listens to the 'bundle
undeploy' events that get published.
  2. If the OSGi bundle containing a camel-quartz2 route is undeployed, it
removes the corresponding job data from the DB.
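A rough shape of the two steps above, with the OSGi EventHandler wiring
omitted and a tiny local map standing in for the clustered Quartz job store
(in the real workaround, `deleteJob(JobKey)` on `org.quartz.Scheduler` would do
the actual removal) - a sketch under those assumptions, not drop-in code:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class BundleUndeployCleanup {
    // Local stand-in for the clustered Quartz job store (in reality, rows in the DB).
    static Map<String, Set<String>> jobsByBundle = new HashMap<>();

    // What the OSGi EventHandler would invoke on a 'bundle undeploy' event.
    static void onBundleUndeployed(String bundleSymbolicName) {
        Set<String> jobs = jobsByBundle.remove(bundleSymbolicName);
        if (jobs != null) {
            for (String jobKey : jobs) {
                // Real code: scheduler.deleteJob(JobKey.jobKey(jobKey))
                // removes the job and its trigger rows from the DB.
                System.out.println("deleted job " + jobKey);
            }
        }
    }

    public static void main(String[] args) {
        // Hypothetical bundle and job names for illustration.
        jobsByBundle.put("my-camel-route-bundle",
                new HashSet<>(Set.of("quartz2-route-job")));
        onBundleUndeployed("my-camel-route-bundle");
        System.out.println("remaining bundles: " + jobsByBundle.size());
    }
}
```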

If this can be handled by camel-quartz2 itself, it would be simpler for users.

Separately, there is an issue in camel-quartz2's addJobInScheduler(): we
were getting misfires in some nodes of the cluster, due to the issue below.

   a) If the trigger does not exist in the DB, it tries to schedule the job.
   b) But this is not an atomic transaction - after the call to find the
trigger in the DB is made, some other node in the cluster could have created
it, resulting in an ObjectAlreadyExistsException when the call to schedule the
job is made.
   c) Misfires then happen on that cluster node, as the Quartz component /
camel context itself does not get started.
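Steps (a)-(b) are a classic check-then-act race. A minimal single-threaded
simulation of the same shape (the map and method names are hypothetical
stand-ins for the Quartz trigger table and scheduleJob(), which throws on
duplicates just like ObjectAlreadyExistsException):

```java
import java.util.HashMap;
import java.util.Map;

public class CheckThenActDemo {
    // Stand-in for the shared trigger table in the clustered DB.
    static Map<String, String> db = new HashMap<>();

    // Stand-in for scheduler.scheduleJob(): fails on duplicates,
    // like Quartz's ObjectAlreadyExistsException.
    static void insert(String key, String value) {
        if (db.containsKey(key)) {
            throw new IllegalStateException("already exists: " + key);
        }
        db.put(key, value);
    }

    public static void main(String[] args) {
        String key = "triggerKey";
        // Node A checks: trigger not present yet.
        boolean present = db.containsKey(key);
        // ...meanwhile node B creates the same trigger (the race window).
        insert(key, "created-by-node-B");
        // Node A now acts on its stale check and collides.
        String result;
        try {
            if (!present) {
                insert(key, "created-by-node-A");
            }
            result = "scheduled";
        } catch (IllegalStateException e) {
            // Recovery proposed in this mail: re-read the trigger
            // instead of failing the whole camel context startup.
            result = "reused:" + db.get(key);
        }
        System.out.println(result);
    }
}
```

Catching the duplicate and re-reading, rather than trusting the earlier check, is exactly the correction proposed for addJobInScheduler() below.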

  private void addJobInScheduler() throws Exception {
        // Add or use existing trigger to/from scheduler
        Scheduler scheduler = getComponent().getScheduler();
        JobDetail jobDetail;
        Trigger trigger = scheduler.getTrigger(triggerKey);
        if (trigger == null) {
            jobDetail = createJobDetail();
            trigger = createTrigger(jobDetail);

            // Schedule it now. Remember that the scheduler may not have been
            // started yet, but we can still schedule the job.
            try {
                Date nextFireDate = scheduler.scheduleJob(jobDetail, trigger);
                if (LOG.isInfoEnabled()) {
                    LOG.info("Job {} (triggerType={}, jobClass={}) is scheduled. Next fire date is {}",
                             new Object[]{trigger.getKey(), trigger.getClass().getSimpleName(),
                                          jobDetail.getJobClass().getSimpleName(), nextFireDate});
                }
            } catch (ObjectAlreadyExistsException e) {
                // double-check: some other VM might have already stored the
                // job & trigger in clustered mode
                if (!getComponent().isClustered()) {
                    throw e;
                }
                trigger = scheduler.getTrigger(triggerKey);
                if (trigger == null) {
                    throw new SchedulerException("Trigger could not be found in quartz scheduler.");
                }
            }
        } else {
            // ... (use the existing trigger; rest of the method unchanged)
        }
    }

Can the above correction be made?

