karaf-user mailing list archives

From Jean-Baptiste Onofré <...@nanthrax.net>
Subject Re: Cellar clustering issue
Date Wed, 06 Jan 2016 15:05:10 GMT
Both on each node
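
For reference, a sketch of what the tcp-ip join stanza with both members could look like in hazelcast.xml on each node (IPofBox1/IPofBox2 are placeholders for the actual addresses used in this thread):

```xml
<join>
    <!-- multicast disabled, tcp-ip discovery with both nodes listed -->
    <multicast enabled="false"/>
    <tcp-ip enabled="true">
        <member>IPofBox1:5701</member>
        <member>IPofBox2:5701</member>
    </tcp-ip>
</join>
```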

On 01/06/2016 03:55 PM, barry.barnett@wellsfargo.com wrote:
> Just put one member, the remote one?  Not the local?  Or do I put both box1 and box2 as members?
>
> Regards,
>
> Barry
>
>
> -----Original Message-----
> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
> Sent: Wednesday, January 06, 2016 9:54 AM
> To: user@karaf.apache.org
> Subject: Re: Cellar clustering issue
>
> OK, so now disable multicast, and enable tcp-ip with <member> containing the IP of the other host.
>
> Regards
> JB
>
> On 01/06/2016 03:49 PM, barry.barnett@wellsfargo.com wrote:
>> Yes, still doesn't see the remote node from either box.
>>
>> Regards,
>>
>> Barry
>>
>>
>> -----Original Message-----
>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>> Sent: Wednesday, January 06, 2016 9:43 AM
>> To: user@karaf.apache.org
>> Subject: Re: Cellar clustering issue
>>
>> Did you run cluster:node-list to check which nodes the cluster is able to see?
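
From the Karaf console, that check would look something like this (node names and groups will vary per setup):

```shell
karaf@root> cluster:node-list
karaf@root> cluster:group-list
```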
>>
>> On 01/06/2016 03:39 PM, barry.barnett@wellsfargo.com wrote:
>>> Ok, so I try to do a cluster:group-set dev box2, from box1...
>>>
>>> cluster:group-set dev box2:5701
>>> Cluster node box2:5701 doesn't exist
>>>
>>> 06 Jan 2016 08:33:51,141 | DEBUG | Thread-213 | LoggingCommandSessionListener | 21 - org.apache.karaf.shell.console - 2.4.3 | Command: 'cluster:group-set dev box2' returned 'null'
>>>
>>>
>>>
>>> Regards,
>>>
>>> Barry
>>>
>>>
>>> -----Original Message-----
>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>> Sent: Wednesday, January 06, 2016 9:08 AM
>>> To: user@karaf.apache.org
>>> Subject: Re: Cellar clustering issue
>>>
>>> Please, can you try:
>>>
>>> 1. disable tcp-ip and use multicast
>>> 2. disable interfaces
>>> 3. send the debug log message on each box
>>> 4. result of cluster:node-list
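
For step 1, the join section of hazelcast.xml would be flipped to multicast-only, e.g. (a sketch reusing the multicast group/port values from the config posted later in this thread):

```xml
<join>
    <multicast enabled="true">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
    </multicast>
    <tcp-ip enabled="false"/>
</join>
```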
>>>
>>> Thanks,
>>> Regards
>>> JB
>>>
>>> On 01/06/2016 03:05 PM, barry.barnett@wellsfargo.com wrote:
>>>> In the tcp-ip stanza, I only have member entries for box1 and box2.  I don't specify interface.  Should I also include interface there?  With just the members, it's still not picking up the remote node from either box.
>>>>
>>>> Regards,
>>>>
>>>> Barry
>>>>
>>>>
>>>> -----Original Message-----
>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>> Sent: Wednesday, January 06, 2016 9:03 AM
>>>> To: user@karaf.apache.org
>>>> Subject: Re: Cellar clustering issue
>>>>
>>>> If you mean <interface/> inside <interfaces/>, yes correct.
>>>>
>>>> <interface/> or <member/> in <tcp-ip/> can help (depending on your network configuration).
>>>>
>>>> Regards
>>>> JB
>>>>
>>>>
>>>> On 01/06/2016 02:58 PM, barry.barnett@wellsfargo.com wrote:
>>>>> So to connect one node to another remote node, it is not necessary to specify interface?  Just set it to false and allow it to bind on all?
>>>>>
>>>>> Regards,
>>>>>
>>>>> Barry
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>>> Sent: Wednesday, January 06, 2016 8:52 AM
>>>>> To: user@karaf.apache.org
>>>>> Subject: Re: Cellar clustering issue
>>>>>
>>>>> Hi Barry,
>>>>>
>>>>> Your interface configuration is not correct.
>>>>>
>>>>> It's the network interface of your machine.
>>>>>
>>>>> By default, Hazelcast binds on all interfaces of your machine (0.0.0.0).
>>>>>
>>>>> You use <interface/> to specify which "local" interface you want to bind on.
>>>>> For instance, you have eth0 (192.169.1.1) and eth1 (192.168.134.10) on your machine. You want to bind on eth1, so you do:
>>>>>
>>>>> <interfaces enabled="true">
>>>>>         <interface>192.168.134.10</interface>
>>>>> </interfaces>
>>>>>
>>>>> If that's not your case, I advise disabling <interfaces/> in order to bind on all interfaces (0.0.0.0).
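
A disabled interfaces stanza (so Hazelcast binds on 0.0.0.0) would simply be:

```xml
<interfaces enabled="false"/>
```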
>>>>>
>>>>> As Hazelcast doesn't start, that explains why the Cellar ClusterManager service is not present.
>>>>>
>>>>> Regards
>>>>> JB
>>>>>
>>>>> On 01/06/2016 02:46 PM, barry.barnett@wellsfargo.com wrote:
>>>>>> My interface is set to:
>>>>>> <interfaces enabled="true">
>>>>>>                   <interface>IPBox1</interface>
>>>>>>                   <interface>IPBox2</interface>
>>>>>>               </interfaces>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> 06 Jan 2016 08:34:10,339 | ERROR | FelixStartLevel | AddressPicker | 250 - com.hazelcast - 2.6.9 | Hazelcast CANNOT start on this node. No matching network interface found.
>>>>>> Interface matching must be either disabled or updated in the hazelcast.xml config file.
>>>>>> 06 Jan 2016 08:34:10,340 | ERROR | FelixStartLevel | AddressPicker | 250 - com.hazelcast - 2.6.9 | Hazelcast CANNOT start on this node. No matching network interface found.
>>>>>> Interface matching must be either disabled or updated in the hazelcast.xml config file.
>>>>>> java.lang.RuntimeException: Hazelcast CANNOT start on this node. No matching network interface found.
>>>>>> Interface matching must be either disabled or updated in the hazelcast.xml config file.
>>>>>>               at com.hazelcast.impl.AddressPicker.pickAddress(AddressPicker.java:147)
>>>>>>               at com.hazelcast.impl.AddressPicker.pickAddress(AddressPicker.java:51)
>>>>>>               at com.hazelcast.impl.Node.<init>(Node.java:144)
>>>>>>               at com.hazelcast.impl.FactoryImpl.<init>(FactoryImpl.java:386)
>>>>>>               at com.hazelcast.impl.FactoryImpl.newHazelcastInstanceProxy(FactoryImpl.java:133)
>>>>>>               at com.hazelcast.impl.FactoryImpl.newHazelcastInstanceProxy(FactoryImpl.java:119)
>>>>>>               at com.hazelcast.impl.FactoryImpl.newHazelcastInstanceProxy(FactoryImpl.java:104)
>>>>>>               at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:507)[250:com.hazelcast:2.6.9]
>>>>>>               at org.apache.karaf.cellar.hazelcast.factory.HazelcastServiceFactory.buildInstance(HazelcastServiceFactory.java:107)[253:org.apache.karaf.cellar.hazelcast:2.3.6]
>>>>>>               at org.apache.karaf.cellar.hazelcast.factory.HazelcastServiceFactory.getInstance(HazelcastServiceFactory.java:92)[253:org.apache.karaf.cellar.hazelcast:2.3.6]
>>>>>>               at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_45]
>>>>>>               at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_45]
>>>>>>               at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_45]
>>>>>>               at java.lang.reflect.Method.invoke(Method.java:497)[:1.8.0_45]
>>>>>>               at org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:297)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:958)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.BeanRecipe.getInstance(BeanRecipe.java:298)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:806)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:787)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
>>>>>>               at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.di.RefRecipe.internalCreate(RefRecipe.java:62)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:106)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.ServiceRecipe.createService(ServiceRecipe.java:284)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.ServiceRecipe.internalGetService(ServiceRecipe.java:251)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.ServiceRecipe.internalCreate(ServiceRecipe.java:148)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
>>>>>>               at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:245)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:183)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:682)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:377)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:269)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:294)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:263)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintExtender.modifiedBundle(BlueprintExtender.java:253)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.customizerModified(BundleHookBundleTracker.java:500)[13:org.apache.aries.util:1.1.0]
       at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.customizerModified(BundleHookBundleTracker.java:433)[13:org.apache.aries.util:1.1.0]
       at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$AbstractTracked.track(BundleHookBundleTracker.java:725)[13:org.apache.aries.util:1.1.0]
       at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.bundleChanged(BundleHookBundleTracker.java:463)[13:org.apache.aries.util:1.1.0]
       at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$BundleEventHook.event(BundleHookBundleTracker.java:422)[13:org.apache.aries.util:1.1.0]
>>>>>>               at org.apache.felix.framework.util.SecureAction.invokeBundleEventHook(SecureAction.java:1127)[org.apache.felix.framework-4.4.1.jar:]
       at org.apache.felix.framework.util.EventDispatcher.createWhitelistFromHooks(EventDispatcher.java:696)[org.apache.felix.framework-4.4.1.jar:]
>>>>>>               at org.apache.felix.framework.util.EventDispatcher.fireBundleEvent(EventDispatcher.java:484)[org.apache.felix.framework-4.4.1.jar:]
>>>>>>               at org.apache.felix.framework.Felix.fireBundleEvent(Felix.java:4429)[org.apache.felix.framework-4.4.1.jar:]
>>>>>>               at org.apache.felix.framework.Felix.startBundle(Felix.java:2100)[org.apache.felix.framework-4.4.1.jar:]
       at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1299)[org.apache.felix.framework-4.4.1.jar:]
>>>>>>               at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:304)[org.apache.felix.framework-4.4.1.jar:]
>>>>>>               at java.lang.Thread.run(Thread.java:745)[:1.8.0_45]
>>>>>> 06 Jan 2016 08:34:10,349 | ERROR | FelixStartLevel | ServiceRecipe | 18 - org.apache.aries.blueprint.core - 1.4.3 | Error retrieving service from ServiceRecipe[name='.component-1']
>>>>>> org.osgi.service.blueprint.container.ComponentDefinitionException: Error when instantiating bean hazelcast of class com.hazelcast.core.Hazelcast
>>>>>>               at org.apache.aries.blueprint.container.BeanRecipe.getInstance(BeanRecipe.java:300)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:806)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:787)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
>>>>>>               at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.di.RefRecipe.internalCreate(RefRecipe.java:62)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:106)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.ServiceRecipe.createService(ServiceRecipe.java:284)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.ServiceRecipe.internalGetService(ServiceRecipe.java:251)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at org.apache.aries.blueprint.container.ServiceRecipe.internalCreate(ServiceRecipe.java:148)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>>>>               at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
       at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:245)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:183)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:682)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:377)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:269)[18:org.apache.aries.blueprint.core:1.4.3]
       at org.apache.aries.blueprint.container.BlueprintExtender.createContain
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>>
>>>>>> Barry
>>>>>>
>>>>>>
>>>>>> -----Original Message-----
>>>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>>>> Sent: Tuesday, January 05, 2016 10:27 AM
>>>>>> To: user@karaf.apache.org
>>>>>> Subject: Re: Cellar clustering issue
>>>>>>
>>>>>> Can you set the log level to DEBUG and send karaf.log to me?
>>>>>>
>>>>>> It looks like bundle-hazelcast doesn't expose the ClusterManager service.
>>>>>>
>>>>>> Regards
>>>>>> JB
>>>>>>
>>>>>> On 01/05/2016 03:42 PM, barry.barnett@wellsfargo.com wrote:
>>>>>>> Ok, now I've put in the stanza for required-members.  But when I do that, and have interfaces enabled, I get the following:
>>>>>>>
>>>>>>> 05 Jan 2016 09:40:20,739 | INFO  | Thread-198 | ReferenceRecipe | 18 - org.apache.aries.blueprint.core - 1.4.3 | No matching service for optional OSGi service reference (objectClass=org.apache.karaf.cellar.core.ClusterManager)
>>>>>>> 05 Jan 2016 09:40:20,741 | ERROR | Thread-198 | Console | 21 - org.apache.karaf.shell.console - 2.4.3 | Exception caught while executing command
>>>>>>> org.osgi.service.blueprint.container.ServiceUnavailableException: No matching service for optional OSGi service reference: (objectClass=org.apache.karaf.cellar.core.ClusterManager)
>>>>>>>                at org.apache.aries.blueprint.container.ReferenceRecipe.getService(ReferenceRecipe.java:236)
>>>>>>>                at org.apache.aries.blueprint.container.ReferenceRecipe.access$000(ReferenceRecipe.java:55)
>>>>>>>                at org.apache.aries.blueprint.container.ReferenceRecipe$ServiceDispatcher.call(ReferenceRecipe.java:298)
>>>>>>>                at Proxy2882e1b3_fe00_4c30_818e_5ad671ebc492.listNodes(Unknown Source)
>>>>>>>                at org.apache.karaf.cellar.shell.NodesListCommand.doExecute(NodesListCommand.java:29)
>>>>>>>                at org.apache.karaf.shell.console.OsgiCommandSupport.execute(OsgiCommandSupport.java:38)
>>>>>>>                at org.apache.felix.gogo.commands.basic.AbstractCommand.execute(AbstractCommand.java:35)
>>>>>>>                at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_45]
>>>>>>>                at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_45]
>>>>>>>                at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_45]
>>>>>>>                at java.lang.reflect.Method.invoke(Method.java:497)[:1.8.0_45]
>>>>>>>                at org.apache.aries.proxy.impl.ProxyHandler$1.invoke(ProxyHandler.java:54)
>>>>>>>                at org.apache.aries.proxy.impl.ProxyHandler.invoke(ProxyHandler.java:119)
>>>>>>>                at org.apache.karaf.shell.console.commands.$BlueprintCommand1144753357.execute(Unknown Source)[21:org.apache.karaf.shell.console:2.4.3]
>>>>>>>                at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:78)[21:org.apache.karaf.shell.console:2.4.3]
>>>>>>>                at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:477)[21:org.apache.karaf.shell.console:2.4.3]
>>>>>>>                at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:403)[21:org.apache.karaf.shell.console:2.4.3]
>>>>>>>                at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[21:org.apache.karaf.shell.console:2.4.3]
>>>>>>>                at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)[21:org.apache.karaf.shell.console:2.4.3]
>>>>>>>                at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)[21:org.apache.karaf.shell.console:2.4.3]
>>>>>>>                at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:92)
>>>>>>>                at org.apache.karaf.shell.console.jline.Console.run(Console.java:195)
>>>>>>>                at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$1.runConsole(ShellFactoryImpl.java:167)[36:org.apache.karaf.shell.ssh:2.4.3]
>>>>>>>                at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$1$1.run(ShellFactoryImpl.java:126)
>>>>>>>                at java.security.AccessController.doPrivileged(Native Method)[:1.8.0_45]
>>>>>>>                at org.apache.karaf.jaas.modules.JaasHelper.doAs(JaasHelper.java:47)[20:org.apache.karaf.jaas.modules:2.4.3]
>>>>>>>                at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$1.run(ShellFactoryImpl.java:124)[36:org.apache.karaf.shell.ssh:2.4.3]
>>>>>>>
>>>>>>> Regards,
>>>>>>>
>>>>>>> Barry
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>>>>> Sent: Tuesday, January 05, 2016 8:55 AM
>>>>>>> To: user@karaf.apache.org
>>>>>>> Subject: Re: Cellar clustering issue
>>>>>>>
>>>>>>> As I'm not able to reproduce your issue, it's not easy to figure it out.
>>>>>>>
>>>>>>> Clearly, the problem is that the nodes don't see each other, and I suspect we're missing something obvious in the network configuration. So yes, tweaking the tcp-ip configuration can help.
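
As a quick network sanity check between the boxes, a small script like this (an editorial sketch, not part of Cellar; "box2" is a placeholder for the remote node's address from the <member/> entries) can confirm the Hazelcast port is reachable:

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections, timeouts, DNS failures
        return False

# Run on box1 with the remote node's hostname or IP (placeholder shown):
print(can_reach("box2", 5701))
```

If this prints False from either box, the join problem is in the network (routing, firewall), not in the Cellar/Hazelcast configuration.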
>>>>>>>
>>>>>>> The only weird thing for me is that you have a Cellar bundle in failed state. Is that still the case?
>>>>>>>
>>>>>>> Regards
>>>>>>> JB
>>>>>>>
>>>>>>> On 01/05/2016 02:50 PM, barry.barnett@wellsfargo.com wrote:
>>>>>>>> Should I try the following?
>>>>>>>>
>>>>>>>> <tcp-ip enabled="true">
>>>>>>>>                         <required-member>IPofBox1</required-member>
>>>>>>>>                         <member>IPofBox1</member>
>>>>>>>>                         <members>IPofBox1,IPofBox2</members>
>>>>>>>>                     </tcp-ip>
>>>>>>>>
>>>>>>>> I currently only use:
>>>>>>>> <tcp-ip enabled="true">
>>>>>>>>                         <member>IPofBox1</member>
>>>>>>>>                         <member>IPofBox2</member>
>>>>>>>>                     </tcp-ip>
>>>>>>>>
>>>>>>>> Regards,
>>>>>>>>
>>>>>>>> Barry
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> -----Original Message-----
>>>>>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>>>>>> Sent: Monday, January 04, 2016 11:09 AM
>>>>>>>> To: user@karaf.apache.org
>>>>>>>> Subject: Re: Cellar clustering issue
>>>>>>>>
>>>>>>>> Do you mind providing a link to the karaf log?
>>>>>>>>
>>>>>>>> Thanks,
>>>>>>>> Regards
>>>>>>>> JB
>>>>>>>>
>>>>>>>> On 01/04/2016 04:59 PM, barry.barnett@wellsfargo.com wrote:
>>>>>>>>> Iptables is not enabled on the Linux boxes.
>>>>>>>>>
>>>>>>>>> Config in hazelcast.xml for Box1 (Box2 is basically a mirror image):
>>>>>>>>>
>>>>>>>>> <?xml version="1.0" encoding="UTF-8"?>
>>>>>>>>> <hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-2.5.xsd"
>>>>>>>>>            xmlns="http://www.hazelcast.com/schema/config"
>>>>>>>>>            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
>>>>>>>>>              <group>
>>>>>>>>>                  <name>dev</name>
>>>>>>>>>                  <password>pass</password>
>>>>>>>>>              </group>
>>>>>>>>>              <management-center enabled="false">http://localhost:8080/mancenter</management-center>
>>>>>>>>>              <network>
>>>>>>>>>                  <port auto-increment="true">5701</port>
>>>>>>>>>                  <outbound-ports>
>>>>>>>>>                      <ports>0</ports>
>>>>>>>>>                  </outbound-ports>
>>>>>>>>>                  <join>
>>>>>>>>>                      <multicast enabled="true">
>>>>>>>>>                          <multicast-group>224.2.2.3</multicast-group>
>>>>>>>>>                          <multicast-port>54327</multicast-port>
>>>>>>>>>                      </multicast>
>>>>>>>>>                      <tcp-ip enabled="true">
>>>>>>>>>                          <member>IPforBox1:5701</member>
>>>>>>>>>                          <member>IPforBox2:5701</member>
>>>>>>>>>                      </tcp-ip>
>>>>>>>>>                      <aws enabled="false">
>>>>>>>>>                          <access-key>my-access-key</access-key>
>>>>>>>>>                          <secret-key>my-secret-key</secret-key>
>>>>>>>>>                          <!-- optional, default is us-east-1 -->
>>>>>>>>>                          <region>us-west-1</region>
>>>>>>>>>                          <!-- optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
>>>>>>>>>                          <hostHeader>ec2.amazonaws.com</hostHeader>
>>>>>>>>>                          <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
>>>>>>>>>                          <security-group-name>hazelcast-sg</security-group-name>
>>>>>>>>>                          <tag-key>type</tag-key>
>>>>>>>>>                          <tag-value>hz-nodes</tag-value>
>>>>>>>>>                      </aws>
>>>>>>>>>                  </join>
>>>>>>>>>                  <interfaces enabled="true">
>>>>>>>>>                      <interface>IPforBox1</interface>
>>>>>>>>>                      <interface>IPforBox2</interface>
>>>>>>>>>                  </interfaces>
>>>>>>>>>                  <ssl enabled="false"/>
>>>>>>>>>                  <socket-interceptor enabled="false"/>
>>>>>>>>>                  <symmetric-encryption enabled="false">
>>>>>>>>>                      <!--
>>>>>>>>>                         encryption algorithm such as
>>>>>>>>>                         DES/ECB/PKCS5Padding,
>>>>>>>>>                         PBEWithMD5AndDES,
>>>>>>>>>                         AES/CBC/PKCS5Padding,
>>>>>>>>>                         Blowfish,
>>>>>>>>>                         DESede
>>>>>>>>>                      -->
>>>>>>>>>                      <algorithm>PBEWithMD5AndDES</algorithm>
>>>>>>>>>                      <!-- salt value to use when generating the secret key -->
>>>>>>>>>                      <salt>thesalt</salt>
>>>>>>>>>                      <!-- pass phrase to use when generating the secret key -->
>>>>>>>>>                      <password>thepass</password>
>>>>>>>>>                      <!-- iteration count to use when generating the secret key -->
>>>>>>>>>                      <iteration-count>19</iteration-count>
>>>>>>>>>                  </symmetric-encryption>
>>>>>>>>>                  <asymmetric-encryption enabled="false">
>>>>>>>>>                      <!-- encryption algorithm -->
>>>>>>>>>                      <algorithm>RSA/NONE/PKCS1PADDING</algorithm>
>>>>>>>>>                      <!-- private key password -->
>>>>>>>>>                      <keyPassword>thekeypass</keyPassword>
>>>>>>>>>                      <!-- private key alias -->
>>>>>>>>>                      <keyAlias>local</keyAlias>
>>>>>>>>>                      <!-- key store type -->
>>>>>>>>>                      <storeType>JKS</storeType>
>>>>>>>>>                      <!-- key store password -->
>>>>>>>>>                      <storePassword>thestorepass</storePassword>
>>>>>>>>>                      <!-- path to the key store -->
>>>>>>>>>                      <storePath>keystore</storePath>
>>>>>>>>>                  </asymmetric-encryption>
>>>>>>>>>              </network>
>>>>>>>>>              <partition-group enabled="false"/>
>>>>>>>>>              <executor-service>
>>>>>>>>>                  <core-pool-size>16</core-pool-size>
>>>>>>>>>                  <max-pool-size>64</max-pool-size>
>>>>>>>>>                  <keep-alive-seconds>60</keep-alive-seconds>
>>>>>>>>>              </executor-service>
>>>>>>>>>              <queue name="default">
>>>>>>>>>                  <!--
>>>>>>>>>                      Maximum size of the queue. When a JVM's local queue size reaches the maximum,
>>>>>>>>>                      all put/offer operations will get blocked until the queue size
>>>>>>>>>                      of the JVM goes down below the maximum.
>>>>>>>>>                      Any integer between 0 and Integer.MAX_VALUE. 0 means Integer.MAX_VALUE. Default is 0.
>>>>>>>>>                  -->
>>>>>>>>>                  <max-size-per-jvm>0</max-size-per-jvm>
>>>>>>>>>                  <!--
>>>>>>>>>                      Name of the map configuration that will be used for the backing distributed
>>>>>>>>>                      map for this queue.
>>>>>>>>>                  -->
>>>>>>>>>                  <backing-map-ref>default</backing-map-ref>
>>>>>>>>>              </queue>
>>>>>>>>>              <map name="default">
>>>>>>>>>                  <!--
>>>>>>>>>                      Number of backups. If 1 is set as the backup-count for example,
>>>>>>>>>                      then all entries of the map will be copied to another JVM for
>>>>>>>>>                      fail-safety. 0 means no backup.
>>>>>>>>>                  -->
>>>>>>>>>                  <backup-count>1</backup-count>
>>>>>>>>>                  <!--
>>>>>>>>>                      Maximum number of seconds for each entry to stay in the map. Entries that are
>>>>>>>>>                      older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
>>>>>>>>>                      will get automatically evicted from the map.
>>>>>>>>>                      Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
>>>>>>>>>                  -->
>>>>>>>>>                  <time-to-live-seconds>0</time-to-live-seconds>
>>>>>>>>>                  <!--
>>>>>>>>>                      Maximum number of seconds for each entry to stay idle in the map. Entries that are
>>>>>>>>>                      idle (not touched) for more than <max-idle-seconds> will get
>>>>>>>>>                      automatically evicted from the map. Entry is touched if get, put or containsKey is called.
>>>>>>>>>                      Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
>>>>>>>>>                  -->
>>>>>>>>>                  <max-idle-seconds>0</max-idle-seconds>
>>>>>>>>>                  <!--
>>>>>>>>>                      Valid values are:
>>>>>>>>>                      NONE (no eviction),
>>>>>>>>>                      LRU (Least Recently Used),
>>>>>>>>>                      LFU (Least Frequently Used).
>>>>>>>>>                      NONE is the default.
>>>>>>>>>                  -->
>>>>>>>>>                  <eviction-policy>NONE</eviction-policy>
>>>>>>>>>                  <!--
>>>>>>>>>                      Maximum size of the map. When max size is reached,
>>>>>>>>>                      the map is evicted based on the policy defined.
>>>>>>>>>                      Any integer between 0 and Integer.MAX_VALUE. 0 means Integer.MAX_VALUE. Default is 0.
>>>>>>>>>                  -->
>>>>>>>>>                  <max-size policy="cluster_wide_map_size">0</max-size>
>>>>>>>>>                  <!--
>>>>>>>>>                      When max. size is reached, the specified percentage of
>>>>>>>>>                      the map will be evicted. Any integer between 0 and 100.
>>>>>>>>>                      If 25 is set for example, 25% of the entries will get evicted.
>>>>>>>>>                  -->
>>>>>>>>>                  <eviction-percentage>25</eviction-percentage>
>>>>>>>>>                  <!--
>>>>>>>>>                      While recovering from split-brain (network partitioning),
>>>>>>>>>                      map entries in the small cluster will merge into the bigger cluster
>>>>>>>>>                      based on the policy set here. When an entry merges into the
>>>>>>>>>                      cluster, there might be an existing entry with the same key already.
>>>>>>>>>                      Values of these entries might be different for that same key.
>>>>>>>>>                      Which value should be set for the key? The conflict is resolved by
>>>>>>>>>                      the policy set here. The default policy is hz.ADD_NEW_ENTRY.
>>>>>>>>>
>>>>>>>>>                      There are built-in merge policies such as
>>>>>>>>>                      hz.NO_MERGE      ; no entry will merge.
>>>>>>>>>                      hz.ADD_NEW_ENTRY ; entry will be added if the merging entry's key
>>>>>>>>>                                         doesn't exist in the cluster.
>>>>>>>>>                      hz.HIGHER_HITS   ; entry with the higher hits wins.
>>>>>>>>>                      hz.LATEST_UPDATE ; entry with the latest update wins.
>>>>>>>>>                  -->
>>>>>>>>>                  <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
>>>>>>>>>              </map>
>>>>>>>>>
>>>>>>>>>              <!-- Cellar MERGE POLICY -->
>>>>>>>>>              <!--
>>>>>>>>>              <merge-policies>
>>>>>>>>>                  <map-merge-policy name="CELLAR_MERGE_POLICY">
>>>>>>>>>                      <class-name>org.apache.karaf.cellar.hazelcast.merge.CellarMergePolicy</class-name>
>>>>>>>>>                  </map-merge-policy>
>>>>>>>>>              </merge-policies>
>>>>>>>>>              -->
>>>>>>>>>
>>>>>>>>>
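The fix JB suggests earlier in the thread — disable multicast and list the members explicitly — maps to the join section of etc/hazelcast.xml on each node. A minimal sketch, assuming Hazelcast 2.x as shipped with Cellar 2.3.6, placeholder hostnames box1/box2, and the default port 5701:

```xml
<join>
    <!-- multicast discovery disabled, per JB's suggestion -->
    <multicast enabled="false"/>
    <!-- explicit TCP/IP discovery: list BOTH nodes on each box -->
    <tcp-ip enabled="true">
        <member>box1:5701</member>
        <member>box2:5701</member>
    </tcp-ip>
</join>
```

Per JB's "Both on each node" answer at the top of the thread, the same member list goes into the hazelcast.xml of both instances, then both Karaf containers are restarted.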
>>>>>>>>> Regards,
>>>>>>>>>
>>>>>>>>> Barry
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> -----Original Message-----
>>>>>>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>>>>>>> Sent: Monday, January 04, 2016 9:16 AM
>>>>>>>>> To: user@karaf.apache.org
>>>>>>>>> Subject: Re: Cellar clustering issue
>>>>>>>>>
>>>>>>>>> Hi Barry,
>>>>>>>>>
>>>>>>>>> For now, I don't have any issue with Karaf 2.4.3 and Cellar 2.3.6
>>>>>>>>> (on Linux, using different VMs).
>>>>>>>>>
>>>>>>>>> The only case that looks like yours is when I enable iptables on one
>>>>>>>>> machine (in that case, it doesn't see the other nodes).
>>>>>>>>>
>>>>>>>>> Any chance you could provide me with more details about your setup?
>>>>>>>>>
>>>>>>>>> iptables -L (for the different tables), and the Karaf log
>>>>>>>>>
>>>>>>>>> Thanks,
>>>>>>>>> Regards
>>>>>>>>> JB
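Since JB points to iptables as the one case that reproduced the symptom, a quick reachability check between the boxes can rule the firewall in or out. A sketch with a placeholder hostname (box2) and the default Hazelcast port 5701:

```shell
# On each box: dump the current filter rules JB asked for
iptables -L -n

# From box1: verify box2's Hazelcast port is reachable
nc -zv box2 5701
```

If the `nc` probe fails while both Karaf instances are running, the problem is network/firewall rather than Cellar configuration.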
>>>>>>>>>
>>>>>>>>> On 01/04/2016 03:03 PM, barry.barnett@wellsfargo.com wrote:
>>>>>>>>>> Any results with your testing?
>>>>>>>>>>
>>>>>>>>>> Regards,
>>>>>>>>>>
>>>>>>>>>> Barry
>>>>>>>>>>
>>>>>>>>>> -----Original Message-----
>>>>>>>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>>>>>>>> Sent: Sunday, December 27, 2015 2:15 AM
>>>>>>>>>> To: user@karaf.apache.org
>>>>>>>>>> Subject: Re: Cellar clustering issue
>>>>>>>>>>
>>>>>>>>>> Hi Barry,
>>>>>>>>>>
>>>>>>>>>> I just tested Cellar 2.3.6 with Karaf 2.4.3 and it works fine.
>>>>>>>>>>
>>>>>>>>>> Let me try another test case.
>>>>>>>>>>
>>>>>>>>>> Regards
>>>>>>>>>> JB
>>>>>>>>>>
>>>>>>>>>> On 12/07/2015 04:54 PM, barry.barnett@wellsfargo.com wrote:
>>>>>>>>>>> Hello,
>>>>>>>>>>> I have installed Cellar v2.3.6 in each of my Karaf instances.
>>>>>>>>>>> Karaf1 - IP aaa.aaa.aaa, port bbbb
>>>>>>>>>>> Karaf2 - IP bbb.bbb.bbb, port cccc
>>>>>>>>>>> Why is it that when I issue the following on Karaf1, I get
>>>>>>>>>>> 'Cluster node bbb.bbb.bbb doesn't exist':
>>>>>>>>>>> Karaf root> cluster:group-set dev bbb.bbb.bbb:cccc
>>>>>>>>>>> I thought it would pick it up right away.
>>>>>>>>>>> Regards,
>>>>>>>>>>> Barry
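The error Barry hits ("Cluster node ... doesn't exist") means the node id passed to cluster:group-set is not currently visible to the Hazelcast cluster, so the usual sequence is to confirm discovery first and only then assign the group. A sketch of the console session, reusing the thread's placeholder node id:

```
karaf@root> cluster:node-list
karaf@root> cluster:group-set dev bbb.bbb.bbb:cccc
```

cluster:group-set can only succeed once cluster:node-list shows the target node; if the node never appears, the discovery configuration (multicast vs. tcp-ip in hazelcast.xml) is what needs fixing, not the command.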
>>>>>>>>>>

-- 
Jean-Baptiste Onofré
jbonofre@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com
