From: barry.barnett@wellsfargo.com
To: user@karaf.apache.org
Subject: RE: Cellar clustering issue
Date: Wed, 6 Jan 2016 14:39:18 +0000
Message-ID: <443BCE38E921434394B45FF0813FA57850562275@MSGEXSIL1119.ent.wfb.bank.corp>
In-Reply-To: <568D1FB5.9000005@nanthrax.net>

Ok, so I try to do a cluster:group-set dev box2, from box1...

cluster:group-set dev box2:5701
Cluster node box2:5701 doesn't exist

06 Jan 2016 08:33:51,141 | DEBUG | Thread-213 | LoggingCommandSessionListener | 21 - org.apache.karaf.shell.console - 2.4.3 | Command: 'cluster:group-set dev box2' returned 'null'

Regards,

Barry

-----Original Message-----
From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
Sent: Wednesday, January 06, 2016 9:08 AM
To: user@karaf.apache.org
Subject: Re: Cellar clustering issue

Please, can you try:

1. disable tcp-ip and use multicast
2. disable interfaces
3. send the debug log message on each box
4. result of cluster:node-list

Thanks,
Regards
JB

On 01/06/2016 03:05 PM, barry.barnett@wellsfargo.com wrote:
> In the tcp-ip stanza, I only have a member for box1 and box2. I don't specify interface. Should I also include interface there? With just the members, it's still not picking up the remote node from either box.
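For the archive: JB's suggestions 1 and 2 map onto the network section of hazelcast.xml roughly as follows. This is only a sketch against the Hazelcast 2.x schema; the multicast group/port are the defaults quoted later in this thread, and the rest is illustrative rather than Barry's actual configuration.

```xml
<!-- Sketch only: multicast join enabled and static TCP/IP disabled
     (JB's suggestion 1), plus interface matching disabled so
     Hazelcast binds on 0.0.0.0 (suggestion 2). -->
<network>
    <port auto-increment="true">5701</port>
    <join>
        <multicast enabled="true">
            <multicast-group>224.2.2.3</multicast-group>
            <multicast-port>54327</multicast-port>
        </multicast>
        <tcp-ip enabled="false"/>
    </join>
    <interfaces enabled="false"/>
</network>
```

Suggestions 3 and 4 are operational rather than configuration: raise the log level (log:set DEBUG) on each box and compare the output of cluster:node-list on both.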
>
> Regards,
>
> Barry
>
>
> -----Original Message-----
> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
> Sent: Wednesday, January 06, 2016 9:03 AM
> To: user@karaf.apache.org
> Subject: Re: Cellar clustering issue
>
> If you mean inside <tcp-ip>, yes correct.
>
> <interface> or <required-member> in <tcp-ip> can help (depending on your network configuration).
>
> Regards
> JB
>
>
> On 01/06/2016 02:58 PM, barry.barnett@wellsfargo.com wrote:
>> So to connect one node to another remote node, it is not necessary to specify interface? Just set <interfaces> to false and allow Hazelcast to bind on all?
>>
>> Regards,
>>
>> Barry
>>
>>
>> -----Original Message-----
>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>> Sent: Wednesday, January 06, 2016 8:52 AM
>> To: user@karaf.apache.org
>> Subject: Re: Cellar clustering issue
>>
>> Hi Barry,
>>
>> Your interface configuration is not correct.
>>
>> It's the network interface of your machine.
>>
>> By default, Hazelcast binds on all interfaces of your machine (0.0.0.0).
>>
>> You use <interface> to specify on which "local" interface you want to bind.
>> For instance, you have eth0 (192.169.1.1) and eth1 (192.168.134.10) on your machine. You want to bind on eth1, so you do:
>>
>> <interfaces enabled="true">
>>     <interface>192.168.134.10</interface>
>> </interfaces>
>>
>> If it's not your case, I advise to disable interfaces in order to bind on all interfaces (0.0.0.0).
>>
>> As Hazelcast doesn't start, it explains why the Cellar ClusterManager service is not present.
>>
>> Regards
>> JB
>>
>> On 01/06/2016 02:46 PM, barry.barnett@wellsfargo.com wrote:
>>> My interface config is set to:
>>>
>>> <interfaces enabled="true">
>>>     <interface>IPBox1</interface>
>>>     <interface>IPBox2</interface>
>>> </interfaces>
>>>
>>> 06 Jan 2016 08:34:10,339 | ERROR | FelixStartLevel | AddressPicker | 250 - com.hazelcast - 2.6.9 | Hazelcast CANNOT start on this node. No matching network interface found.
>>> Interface matching must be either disabled or updated in the hazelcast.xml config file.
>>> 06 Jan 2016 08:34:10,340 | ERROR | FelixStartLevel | AddressPicker | 250 - com.hazelcast - 2.6.9 | Hazelcast CANNOT start on this node.
No matching network interface found.
>>> Interface matching must be either disabled or updated in the hazelcast.xml config file.
>>> java.lang.RuntimeException: Hazelcast CANNOT start on this node. No matching network interface found.
>>> Interface matching must be either disabled or updated in the hazelcast.xml config file.
>>>         at com.hazelcast.impl.AddressPicker.pickAddress(AddressPicker.java:147)
>>>         at com.hazelcast.impl.AddressPicker.pickAddress(AddressPicker.java:51)
>>>         at com.hazelcast.impl.Node.<init>(Node.java:144)
>>>         at com.hazelcast.impl.FactoryImpl.<init>(FactoryImpl.java:386)
>>>         at com.hazelcast.impl.FactoryImpl.newHazelcastInstanceProxy(FactoryImpl.java:133)
>>>         at com.hazelcast.impl.FactoryImpl.newHazelcastInstanceProxy(FactoryImpl.java:119)
>>>         at com.hazelcast.impl.FactoryImpl.newHazelcastInstanceProxy(FactoryImpl.java:104)
>>>         at com.hazelcast.core.Hazelcast.newHazelcastInstance(Hazelcast.java:507)[250:com.hazelcast:2.6.9]
>>>         at org.apache.karaf.cellar.hazelcast.factory.HazelcastServiceFactory.buildInstance(HazelcastServiceFactory.java:107)[253:org.apache.karaf.cellar.hazelcast:2.3.6]
>>>         at org.apache.karaf.cellar.hazelcast.factory.HazelcastServiceFactory.getInstance(HazelcastServiceFactory.java:92)[253:org.apache.karaf.cellar.hazelcast:2.3.6]
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_45]
>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_45]
>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_45]
>>>         at java.lang.reflect.Method.invoke(Method.java:497)[:1.8.0_45]
>>>         at org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:297)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:958)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BeanRecipe.getInstance(BeanRecipe.java:298)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:806)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:787)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.RefRecipe.internalCreate(RefRecipe.java:62)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:106)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.ServiceRecipe.createService(ServiceRecipe.java:284)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.ServiceRecipe.internalGetService(ServiceRecipe.java:251)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.ServiceRecipe.internalCreate(ServiceRecipe.java:148)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:245)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:183)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:682)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:377)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:269)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:294)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintExtender.createContainer(BlueprintExtender.java:263)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintExtender.modifiedBundle(BlueprintExtender.java:253)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.customizerModified(BundleHookBundleTracker.java:500)[13:org.apache.aries.util:1.1.0]
>>>         at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.customizerModified(BundleHookBundleTracker.java:433)[13:org.apache.aries.util:1.1.0]
>>>         at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$AbstractTracked.track(BundleHookBundleTracker.java:725)[13:org.apache.aries.util:1.1.0]
>>>         at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$Tracked.bundleChanged(BundleHookBundleTracker.java:463)[13:org.apache.aries.util:1.1.0]
>>>         at org.apache.aries.util.tracker.hook.BundleHookBundleTracker$BundleEventHook.event(BundleHookBundleTracker.java:422)[13:org.apache.aries.util:1.1.0]
>>>         at org.apache.felix.framework.util.SecureAction.invokeBundleEventHook(SecureAction.java:1127)[org.apache.felix.framework-4.4.1.jar:]
>>>         at org.apache.felix.framework.util.EventDispatcher.createWhitelistFromHooks(EventDispatcher.java:696)[org.apache.felix.framework-4.4.1.jar:]
>>>         at org.apache.felix.framework.util.EventDispatcher.fireBundleEvent(EventDispatcher.java:484)[org.apache.felix.framework-4.4.1.jar:]
>>>         at org.apache.felix.framework.Felix.fireBundleEvent(Felix.java:4429)[org.apache.felix.framework-4.4.1.jar:]
>>>         at org.apache.felix.framework.Felix.startBundle(Felix.java:2100)[org.apache.felix.framework-4.4.1.jar:]
>>>         at org.apache.felix.framework.Felix.setActiveStartLevel(Felix.java:1299)[org.apache.felix.framework-4.4.1.jar:]
>>>         at org.apache.felix.framework.FrameworkStartLevelImpl.run(FrameworkStartLevelImpl.java:304)[org.apache.felix.framework-4.4.1.jar:]
>>>         at java.lang.Thread.run(Thread.java:745)[:1.8.0_45]
>>> 06 Jan 2016 08:34:10,349 | ERROR | FelixStartLevel | ServiceRecipe | 18 - org.apache.aries.blueprint.core - 1.4.3 | Error retrieving service from ServiceRecipe[name='.component-1']
>>> org.osgi.service.blueprint.container.ComponentDefinitionException: Error when instantiating bean hazelcast of class com.hazelcast.core.Hazelcast
>>>         at org.apache.aries.blueprint.container.BeanRecipe.getInstance(BeanRecipe.java:300)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BeanRecipe.internalCreate2(BeanRecipe.java:806)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:787)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.RefRecipe.internalCreate(RefRecipe.java:62)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:106)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.ServiceRecipe.createService(ServiceRecipe.java:284)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.ServiceRecipe.internalGetService(ServiceRecipe.java:251)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.ServiceRecipe.internalCreate(ServiceRecipe.java:148)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe$1.call(AbstractRecipe.java:79)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:266)[:1.8.0_45]
>>>         at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:88)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:245)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:183)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:682)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:377)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:269)[18:org.apache.aries.blueprint.core:1.4.3]
>>>         at org.apache.aries.blueprint.container.BlueprintExtender.createContain
>>>
>>>
>>> Regards,
>>>
>>> Barry
>>>
>>>
>>> -----Original Message-----
>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>> Sent: Tuesday, January 05, 2016 10:27 AM
>>> To: user@karaf.apache.org
>>> Subject: Re: Cellar clustering issue
>>>
>>> Can you set the log level to DEBUG and send karaf.log to me?
>>>
>>> It looks like bundle-hazelcast doesn't expose the ClusterManager service.
>>>
>>> Regards
>>> JB
>>>
>>> On 01/05/2016 03:42 PM, barry.barnett@wellsfargo.com wrote:
>>>> Ok, now I've put in the stanza for required-members. But when I do that, and have interfaces enabled, I get the following:
>>>>
>>>> 05 Jan 2016 09:40:20,739 | INFO | Thread-198 | ReferenceRecipe | 18 - org.apache.aries.blueprint.core - 1.4.3 | No matching service for optional OSGi service reference (objectClass=org.apache.karaf.cellar.core.ClusterManager)
>>>> 05 Jan 2016 09:40:20,741 | ERROR | Thread-198 | Console | 21 - org.apache.karaf.shell.console - 2.4.3 | Exception caught while executing command
>>>> org.osgi.service.blueprint.container.ServiceUnavailableException: No matching service for optional OSGi service reference: (objectClass=org.apache.karaf.cellar.core.ClusterManager)
>>>>         at org.apache.aries.blueprint.container.ReferenceRecipe.getService(ReferenceRecipe.java:236)
>>>>         at org.apache.aries.blueprint.container.ReferenceRecipe.access$000(ReferenceRecipe.java:55)
>>>>         at org.apache.aries.blueprint.container.ReferenceRecipe$ServiceDispatcher.call(ReferenceRecipe.java:298)
>>>>         at Proxy2882e1b3_fe00_4c30_818e_5ad671ebc492.listNodes(Unknown Source)
>>>>         at org.apache.karaf.cellar.shell.NodesListCommand.doExecute(NodesListCommand.java:29)
>>>>         at org.apache.karaf.shell.console.OsgiCommandSupport.execute(OsgiCommandSupport.java:38)
>>>>         at org.apache.felix.gogo.commands.basic.AbstractCommand.execute(AbstractCommand.java:35)
>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.8.0_45]
>>>>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[:1.8.0_45]
>>>>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[:1.8.0_45]
>>>>         at java.lang.reflect.Method.invoke(Method.java:497)[:1.8.0_45]
>>>>         at org.apache.aries.proxy.impl.ProxyHandler$1.invoke(ProxyHandler.java:54)
>>>>         at org.apache.aries.proxy.impl.ProxyHandler.invoke(ProxyHandler.java:119)
>>>>         at org.apache.karaf.shell.console.commands.$BlueprintCommand1144753357.execute(Unknown Source)[21:org.apache.karaf.shell.console:2.4.3]
>>>>         at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:78)[21:org.apache.karaf.shell.console:2.4.3]
>>>>         at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:477)[21:org.apache.karaf.shell.console:2.4.3]
>>>>         at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:403)[21:org.apache.karaf.shell.console:2.4.3]
>>>>         at org.apache.felix.gogo.runtime.Pipe.run(Pipe.java:108)[21:org.apache.karaf.shell.console:2.4.3]
>>>>         at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:183)[21:org.apache.karaf.shell.console:2.4.3]
>>>>         at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:120)[21:org.apache.karaf.shell.console:2.4.3]
>>>>         at org.apache.felix.gogo.runtime.CommandSessionImpl.execute(CommandSessionImpl.java:92)
>>>>         at org.apache.karaf.shell.console.jline.Console.run(Console.java:195)
>>>>         at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$1.runConsole(ShellFactoryImpl.java:167)[36:org.apache.karaf.shell.ssh:2.4.3]
>>>>         at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$1$1.run(ShellFactoryImpl.java:126)
>>>>         at java.security.AccessController.doPrivileged(Native Method)[:1.8.0_45]
>>>>         at org.apache.karaf.jaas.modules.JaasHelper.doAs(JaasHelper.java:47)[20:org.apache.karaf.jaas.modules:2.4.3]
>>>>         at org.apache.karaf.shell.ssh.ShellFactoryImpl$ShellImpl$1.run(ShellFactoryImpl.java:124)[36:org.apache.karaf.shell.ssh:2.4.3]
>>>>
>>>> Regards,
>>>>
>>>> Barry
>>>>
>>>> -----Original Message-----
>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>> Sent: Tuesday, January 05, 2016 8:55 AM
>>>> To: user@karaf.apache.org
>>>> Subject: Re: Cellar clustering issue
>>>>
>>>> As I'm not able to reproduce your issue, it's not easy to figure it out.
>>>>
>>>> Clearly, the problem is that the nodes don't see each other, and I suspect we're missing something obvious in the network configuration. So yes, tweaking the tcp-ip configuration can help.
>>>>
>>>> The only weird thing for me is the fact that you have a Cellar bundle in failed state. Is it still the case?
>>>>
>>>> Regards
>>>> JB
>>>>
>>>> On 01/05/2016 02:50 PM, barry.barnett@wellsfargo.com wrote:
>>>>> Should I try the following?
>>>>>
>>>>> <tcp-ip enabled="true">
>>>>>     <required-member>IPofBox1</required-member>
>>>>>     <member>IPofBox1</member>
>>>>>     <members>IPofBox1,IPofBox2</members>
>>>>> </tcp-ip>
>>>>>
>>>>> I currently only use:
>>>>>
>>>>> <tcp-ip enabled="true">
>>>>>     <member>IPofBox1</member>
>>>>>     <member>IPofBox2</member>
>>>>> </tcp-ip>
>>>>>
>>>>> Regards,
>>>>>
>>>>> Barry
>>>>>
>>>>>
>>>>> -----Original Message-----
>>>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>>>> Sent: Monday, January 04, 2016 11:09 AM
>>>>> To: user@karaf.apache.org
>>>>> Subject: Re: Cellar clustering issue
>>>>>
>>>>> Do you mind providing a link to the karaf log?
>>>>>
>>>>> Thanks,
>>>>> Regards
>>>>> JB
>>>>>
>>>>> On 01/04/2016 04:59 PM, barry.barnett@wellsfargo.com wrote:
>>>>>> Iptables is not enabled on the Linux boxes.
>>>>>>
>>>>>> Config on hazelcast.xml for Box1.
Box2 is a mirror image basically:
>>>>>>
>>>>>> <hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-2.5.xsd"
>>>>>>            xmlns="http://www.hazelcast.com/schema/config"
>>>>>>            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
>>>>>>     <group>
>>>>>>         <name>dev</name>
>>>>>>         <password>pass</password>
>>>>>>     </group>
>>>>>>     <management-center enabled="false">http://localhost:8080/mancenter</management-center>
>>>>>>     <network>
>>>>>>         <port auto-increment="true">5701</port>
>>>>>>         <outbound-ports>
>>>>>>             <ports>0</ports>
>>>>>>         </outbound-ports>
>>>>>>         <join>
>>>>>>             <multicast enabled="false">
>>>>>>                 <multicast-group>224.2.2.3</multicast-group>
>>>>>>                 <multicast-port>54327</multicast-port>
>>>>>>             </multicast>
>>>>>>             <tcp-ip enabled="true">
>>>>>>                 <member>IPforBox1:5701</member>
>>>>>>                 <member>IPforBox2:5701</member>
>>>>>>             </tcp-ip>
>>>>>>             <aws enabled="false">
>>>>>>                 <access-key>my-access-key</access-key>
>>>>>>                 <secret-key>my-secret-key</secret-key>
>>>>>>                 <region>us-west-1</region>
>>>>>>                 <host-header>ec2.amazonaws.com</host-header>
>>>>>>                 <security-group-name>hazelcast-sg</security-group-name>
>>>>>>                 <tag-key>type</tag-key>
>>>>>>                 <tag-value>hz-nodes</tag-value>
>>>>>>             </aws>
>>>>>>         </join>
>>>>>>         <interfaces enabled="true">
>>>>>>             <interface>IPforBox1</interface>
>>>>>>             <interface>IPforBox2</interface>
>>>>>>         </interfaces>
>>>>>>         <symmetric-encryption enabled="false">
>>>>>>             <algorithm>PBEWithMD5AndDES</algorithm>
>>>>>>             <salt>thesalt</salt>
>>>>>>             <password>thepass</password>
>>>>>>             <iteration-count>19</iteration-count>
>>>>>>         </symmetric-encryption>
>>>>>>         <asymmetric-encryption enabled="false">
>>>>>>             <algorithm>RSA/NONE/PKCS1PADDING</algorithm>
>>>>>>             <keyPassword>thekeypass</keyPassword>
>>>>>>             <keyAlias>local</keyAlias>
>>>>>>             <storeType>JKS</storeType>
>>>>>>             <storePassword>thestorepass</storePassword>
>>>>>>             <storePath>keystore</storePath>
>>>>>>         </asymmetric-encryption>
>>>>>>     </network>
>>>>>>     <executor-service>
>>>>>>         <core-pool-size>16</core-pool-size>
>>>>>>         <max-pool-size>64</max-pool-size>
>>>>>>         <keep-alive-seconds>60</keep-alive-seconds>
>>>>>>     </executor-service>
>>>>>>     <queue name="default">
>>>>>>         <max-size-per-jvm>0</max-size-per-jvm>
>>>>>>         <backing-map-ref>default</backing-map-ref>
>>>>>>     </queue>
>>>>>>     <map name="default">
>>>>>>         <backup-count>1</backup-count>
>>>>>>         <time-to-live-seconds>0</time-to-live-seconds>
>>>>>>         <max-idle-seconds>0</max-idle-seconds>
>>>>>>         <eviction-policy>NONE</eviction-policy>
>>>>>>         <max-size>0</max-size>
>>>>>>         <eviction-percentage>25</eviction-percentage>
>>>>>>         <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
>>>>>>     </map>
>>>>>> </hazelcast>
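The recurring "No matching network interface found" failure in this thread traces back to the interfaces section: each node's hazelcast.xml lists both boxes' addresses, but Hazelcast only matches that list against the local machine's own interfaces. A per-node variant consistent with JB's advice would look like the following sketch (IPforBox1 is the same placeholder used above; Box2 would list its own address instead):

```xml
<!-- Sketch for Box1's hazelcast.xml: list only addresses that
     exist on Box1 itself, never the remote node's address. -->
<interfaces enabled="true">
    <interface>IPforBox1</interface>
</interfaces>

<!-- Or disable matching entirely so Hazelcast binds on all local
     interfaces (0.0.0.0), as JB suggested: -->
<interfaces enabled="false"/>
```

Either way, the tcp-ip member list keeps both nodes; only the interfaces section is machine-local.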