camel-users mailing list archives

From "lakshmi.prashant" <lakshmi.prash...@gmail.com>
Subject Quartz job data deletion in clustered quartz2
Date Mon, 13 Oct 2014 07:13:34 GMT
Hi,

  While using camel quartz2 in clustered mode, the job data is not deleted
when we un-deploy the bundles.

Due to this, when we try to re-deploy the bundles, or stop and start the
cluster, we encounter the following errors:

a) After the camel blueprint bundle is un-deployed, we get the error:

	Failed to execute CamelJob. org.quartz.JobExecutionException: No CamelContext could be found with name: 621-Quartz2_Mig_Test
		at org.apache.camel.component.quartz2.CamelJob.getCamelContext(CamelJob.java:77)

b) If we try to re-deploy the same bundle again, we are unable to do so &
get an error:

 	org.quartz.ObjectAlreadyExistsException: Unable to store Trigger with
name: 'myTimerName5' and group: 'myGroup5', because one already exists with
this identification.

c) Even if we delete the specific quartz entries from the job details and
trigger tables of quartz, sometimes the other jobs / triggers shared by that
quartz instance also stop running or misfire after the deletion. Hence we
have to use a different quartz instance for each schedule (i.e. for each
camel quartz2 route).
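For reference, this is how we delete the entries manually, respecting the
foreign keys between the quartz tables (a sketch, assuming the default QRTZ_
table prefix and that the job and trigger share our endpoint's name/group;
your prefix and names may differ):

```sql
-- child trigger tables first, then the trigger, then the job
DELETE FROM QRTZ_CRON_TRIGGERS   WHERE TRIGGER_NAME = 'myTimerName5' AND TRIGGER_GROUP = 'myGroup5';
DELETE FROM QRTZ_SIMPLE_TRIGGERS WHERE TRIGGER_NAME = 'myTimerName5' AND TRIGGER_GROUP = 'myGroup5';
DELETE FROM QRTZ_TRIGGERS        WHERE TRIGGER_NAME = 'myTimerName5' AND TRIGGER_GROUP = 'myGroup5';
DELETE FROM QRTZ_JOB_DETAILS     WHERE JOB_NAME = 'myTimerName5' AND JOB_GROUP = 'myGroup5';
```

Even with this order, the misfires on the remaining jobs described above
still occur sometimes.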

d) Why doesn't camel try to remove the Job / Job Data from quartz when the
routes are stopped (bundles are un-deployed) in the cluster?
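For completeness, this is the kind of endpoint configuration I would expect
to control this behaviour (a sketch, assuming I am reading the deleteJob
option correctly; the cron expression is just an example):

```xml
<route>
  <from uri="quartz2://myGroup5/myTimerName5?cron=0/10+*+*+*+*+?&amp;deleteJob=true"/>
  <to uri="log:quartz2Test"/>
</route>
```

But it is not clear to me how deleteJob is supposed to behave when the
JobStore is shared by several nodes in a cluster.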

e) To circumvent this, we have tried to add a custom RoutePolicy that tries
to delete the job data when the routes are stopped.
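The idea behind the route policy is roughly the following (a sketch only,
against Camel 2.x APIs; the class name, the job key, and the way the
scheduler is looked up from the endpoint are illustrative assumptions, not
tested code):

```java
import org.apache.camel.Route;
import org.apache.camel.impl.RoutePolicySupport;
import org.quartz.JobKey;
import org.quartz.Scheduler;

public class QuartzCleanupRoutePolicy extends RoutePolicySupport {

    private final Scheduler scheduler;   // the quartz scheduler backing the route
    private final JobKey jobKey;         // key of the job this route registered

    public QuartzCleanupRoutePolicy(Scheduler scheduler, JobKey jobKey) {
        this.scheduler = scheduler;
        this.jobKey = jobKey;
    }

    @Override
    public void onStop(Route route) {
        try {
            // remove the job and its triggers from the shared JobStore
            scheduler.deleteJob(jobKey);
        } catch (Exception e) {
            // log and continue; cleanup must not block route shutdown
        }
        super.onStop(route);
    }
}
```

The difficulty is not the deletion itself but deciding *when* it is safe to
run it, as described below.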

f) Whenever the cluster is re-started, the bundles will be re-deployed /
re-started. When the camel quartz IFlow bundles become active, the quartz
job data will be re-created.

g) Route stop event will be triggered:

   a) When the camel blueprint bundle is un-deployed
   b) When a cluster node goes down & there are other nodes in the cluster
   c) When a cluster node goes down & there are no more nodes in the cluster
   d) When the cluster goes down / is stopped during a planned downtime.

We need to trigger the clean-up of quartz job data in cases a, c & d only.
The flip side is: we need to check the quartz scheduler state (after the
associated check-in interval) to know whether other nodes are still alive,
and delete the quartz data only if there are no other nodes in the cluster.
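The liveness check we have in mind runs against the QRTZ_SCHEDULER_STATE
table (a sketch, assuming the default table prefix; a node counts as alive
if it has checked in within its own check-in interval):

```sql
-- :now is the current time in millis, matching how quartz stores LAST_CHECKIN_TIME
SELECT COUNT(*)
FROM QRTZ_SCHEDULER_STATE
WHERE LAST_CHECKIN_TIME + CHECKIN_INTERVAL >= :now;
```

Only if this count (excluding the stopping node itself) is zero would the
route policy be allowed to delete the job data.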


h) Am I missing something here? Have I missed anything in the camel quartz2
documentation?

As this is a generic issue, can this be achieved easily with camel quartz2
endpoint configuration, without our custom route policy?
Please help.

i) My blueprint xml: beans_quartz2.xml
<http://camel.465427.n5.nabble.com/file/n5757508/beans_quartz2.xml>

Thanks,
Lakshmi



--
View this message in context: http://camel.465427.n5.nabble.com/Quartz-job-data-deletion-in-clustered-quartz2-tp5757508.html
Sent from the Camel - Users mailing list archive at Nabble.com.
