karaf-user mailing list archives

From Jean-Baptiste Onofré <...@nanthrax.net>
Subject Re: Cellar clustering issue
Date Tue, 05 Jan 2016 13:55:21 GMT
As I'm not able to reproduce your issue, it's not easy to figure out 
what's going wrong.

Clearly, the problem is that the nodes don't see each other, and I 
suspect we're missing something obvious in the network configuration. So 
yes, tweaking the tcp-ip configuration can help.
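
For instance, inside the existing <join> section of etc/hazelcast.xml, a 
tcp-ip-only setup would look roughly like this (a sketch only: multicast 
is turned off so discovery relies purely on the listed members, 
IPofBox1/IPofBox2 are placeholders for your two boxes, 5701 is the 
Hazelcast default port your hazelcast.xml already uses, and the <aws> 
part can stay disabled as it is). Both nodes would need the same change 
and a restart afterwards:

    <multicast enabled="false">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
    </multicast>
    <tcp-ip enabled="true">
        <member>IPofBox1:5701</member>
        <member>IPofBox2:5701</member>
    </tcp-ip>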

The only weird thing for me is the fact that you have a Cellar bundle in 
a failed state. Is that still the case?
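
If so, something like the following from the Karaf console on each box 
should show the state of the Cellar bundles and whether the nodes see 
each other (a sketch only; the exact output will of course depend on 
your installation):

    karaf@root> osgi:list | grep -i cellar
    karaf@root> cluster:node-list
    karaf@root> log:display | grep -i hazelcast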

Regards
JB

On 01/05/2016 02:50 PM, barry.barnett@wellsfargo.com wrote:
> Should I try the following?
>
> <tcp-ip enabled="true">
>                  <required-member>IPofBox1</required-member>
>                  <member>IPofBox1</member>
>                  <members>IPofBox1,IPofBox2</members>
>              </tcp-ip>
>
> I currently only use:
> <tcp-ip enabled="true">
>                  <member>IPofBox1</member>
>                  <member>IPofBox2</member>
>              </tcp-ip>
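
For reference, if you go with the <members> list form, I would keep the 
Hazelcast port in each entry, roughly like this sketch (5701 is the 
default port from your hazelcast.xml and the IPs are placeholders):

    <tcp-ip enabled="true">
        <members>IPofBox1:5701,IPofBox2:5701</members>
    </tcp-ip>

As far as I remember, <required-member> only makes a node wait for that 
specific member before it joins the cluster, so it shouldn't be needed 
for a two-node setup.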
>
> Regards,
>
> Barry
>
>
>
> -----Original Message-----
> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
> Sent: Monday, January 04, 2016 11:09 AM
> To: user@karaf.apache.org
> Subject: Re: Cellar clustering issue
>
> Do you mind providing a link to the Karaf log?
>
> Thanks,
> Regards
> JB
>
> On 01/04/2016 04:59 PM, barry.barnett@wellsfargo.com wrote:
>> Iptables is not enabled on the Linux boxes.
>>
>> Config in hazelcast.xml for Box1. Box2 is basically a mirror image:
>>
>> <?xml version="1.0" encoding="UTF-8"?>
>> <hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-2.5.xsd"
>>              xmlns="http://www.hazelcast.com/schema/config"
>>              xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
>>       <group>
>>           <name>dev</name>
>>           <password>pass</password>
>>       </group>
>>       <management-center enabled="false">http://localhost:8080/mancenter</management-center>
>>       <network>
>>           <port auto-increment="true">5701</port>
>>           <outbound-ports>
>>               <ports>0</ports>
>>           </outbound-ports>
>>           <join>
>>               <multicast enabled="true">
>>                   <multicast-group>224.2.2.3</multicast-group>
>>                   <multicast-port>54327</multicast-port>
>>               </multicast>
>>               <tcp-ip enabled="true">
>>                   <member>IPforBox1:5701</member>
>>                   <member>IPforBox2:5701</member>
>>               </tcp-ip>
>>               <aws enabled="false">
>>                   <access-key>my-access-key</access-key>
>>                   <secret-key>my-secret-key</secret-key>
>>                   <!--optional, default is us-east-1 -->
>>                   <region>us-west-1</region>
>>                   <!-- optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
>>                   <hostHeader>ec2.amazonaws.com</hostHeader>
>>                   <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
>>                   <security-group-name>hazelcast-sg</security-group-name>
>>                   <tag-key>type</tag-key>
>>                   <tag-value>hz-nodes</tag-value>
>>
>>               </aws>
>>           </join>
>>           <interfaces enabled="true">
>>               <interface>IPforBox1</interface>
>>               <interface>IPforBox2</interface>
>>           </interfaces>
>>           <ssl enabled="false"/>
>>           <socket-interceptor enabled="false"/>
>>           <symmetric-encryption enabled="false">
>>               <!--
>>                  encryption algorithm such as
>>                  DES/ECB/PKCS5Padding,
>>                  PBEWithMD5AndDES,
>>                  AES/CBC/PKCS5Padding,
>>                  Blowfish,
>>                  DESede
>>               -->
>>               <algorithm>PBEWithMD5AndDES</algorithm>
>>               <!-- salt value to use when generating the secret key -->
>>               <salt>thesalt</salt>
>>               <!-- pass phrase to use when generating the secret key -->
>>               <password>thepass</password>
>>               <!-- iteration count to use when generating the secret key -->
>>               <iteration-count>19</iteration-count>
>>           </symmetric-encryption>
>>           <asymmetric-encryption enabled="false">
>>               <!-- encryption algorithm -->
>>               <algorithm>RSA/NONE/PKCS1PADDING</algorithm>
>>               <!-- private key password -->
>>               <keyPassword>thekeypass</keyPassword>
>>               <!-- private key alias -->
>>               <keyAlias>local</keyAlias>
>>               <!-- key store type -->
>>               <storeType>JKS</storeType>
>>               <!-- key store password -->
>>               <storePassword>thestorepass</storePassword>
>>               <!-- path to the key store -->
>>               <storePath>keystore</storePath>
>>           </asymmetric-encryption>
>>       </network>
>>       <partition-group enabled="false"/>
>>       <executor-service>
>>           <core-pool-size>16</core-pool-size>
>>           <max-pool-size>64</max-pool-size>
>>           <keep-alive-seconds>60</keep-alive-seconds>
>>       </executor-service>
>>       <queue name="default">
>>           <!--
>>               Maximum size of the queue. When a JVM's local queue size reaches the maximum,
>>               all put/offer operations will get blocked until the queue size
>>               of the JVM goes down below the maximum.
>>               Any integer between 0 and Integer.MAX_VALUE. 0 means
>>               Integer.MAX_VALUE. Default is 0.
>>           -->
>>           <max-size-per-jvm>0</max-size-per-jvm>
>>           <!--
>>               Name of the map configuration that will be used for the backing distributed
>>               map for this queue.
>>           -->
>>           <backing-map-ref>default</backing-map-ref>
>>       </queue>
>>       <map name="default">
>>           <!--
>>               Number of backups. If 1 is set as the backup-count for example,
>>               then all entries of the map will be copied to another JVM for
>>               fail-safety. 0 means no backup.
>>           -->
>>           <backup-count>1</backup-count>
>>           <!--
>>               Maximum number of seconds for each entry to stay in the map. Entries that are
>>               older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
>>               will get automatically evicted from the map.
>>               Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
>>           -->
>>           <time-to-live-seconds>0</time-to-live-seconds>
>>           <!--
>>               Maximum number of seconds for each entry to stay idle in the map. Entries that are
>>               idle (not touched) for more than <max-idle-seconds> will get
>>               automatically evicted from the map. Entry is touched if get, put or containsKey is called.
>>               Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
>>           -->
>>           <max-idle-seconds>0</max-idle-seconds>
>>           <!--
>>               Valid values are:
>>               NONE (no eviction),
>>               LRU (Least Recently Used),
>>               LFU (Least Frequently Used).
>>               NONE is the default.
>>           -->
>>           <eviction-policy>NONE</eviction-policy>
>>           <!--
>>               Maximum size of the map. When max size is reached,
>>               map is evicted based on the policy defined.
>>               Any integer between 0 and Integer.MAX_VALUE. 0 means
>>               Integer.MAX_VALUE. Default is 0.
>>           -->
>>           <max-size policy="cluster_wide_map_size">0</max-size>
>>           <!--
>>               When max. size is reached, specified percentage of
>>               the map will be evicted. Any integer between 0 and 100.
>>               If 25 is set for example, 25% of the entries will
>>               get evicted.
>>           -->
>>           <eviction-percentage>25</eviction-percentage>
>>           <!--
>>               While recovering from split-brain (network partitioning),
>>               map entries in the small cluster will merge into the bigger cluster
>>               based on the policy set here. When an entry merge into the
>>               cluster, there might an existing entry with the same key already.
>>               Values of these entries might be different for that same key.
>>               Which value should be set for the key? Conflict is resolved by
>>               the policy set here. Default policy is hz.ADD_NEW_ENTRY
>>
>>               There are built-in merge policies such as
>>               hz.NO_MERGE      ; no entry will merge.
>>               hz.ADD_NEW_ENTRY ; entry will be added if the merging entry's key
>>                                  doesn't exist in the cluster.
>>               hz.HIGHER_HITS   ; entry with the higher hits wins.
>>               hz.LATEST_UPDATE ; entry with the latest update wins.
>>           -->
>>           <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
>>       </map>
>>
>>       <!-- Cellar MERGE POLICY -->
>>       <!--
>>       <merge-policies>
>>           <map-merge-policy name="CELLAR_MERGE_POLICY">
>>               <class-name>org.apache.karaf.cellar.hazelcast.merge.CellarMergePolicy</class-name>
>>           </map-merge-policy>
>>       </merge-policies>
>>
>>
>> Regards,
>>
>> Barry
>>
>>
>>
>> -----Original Message-----
>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>> Sent: Monday, January 04, 2016 9:16 AM
>> To: user@karaf.apache.org
>> Subject: Re: Cellar clustering issue
>>
>> Hi Barry,
>>
>> For now, I don't have any issue with Karaf 2.4.3 and Cellar 2.3.6 (on Linux, using different VMs).
>>
>> The only case that looks like yours is when I enable iptables on one machine (in that case, it doesn't see the other nodes).
>>
>> Any chance you could provide me with more details about your setup?
>>
>> iptables -L (for the different tables)
>> Karaf log
>>
>> Thanks,
>> Regards
>> JB
>>
>> On 01/04/2016 03:03 PM, barry.barnett@wellsfargo.com wrote:
>>> Any results with your testing?
>>>
>>> Regards,
>>>
>>> Barry
>>>
>>> -----Original Message-----
>>> From: Jean-Baptiste Onofré [mailto:jb@nanthrax.net]
>>> Sent: Sunday, December 27, 2015 2:15 AM
>>> To: user@karaf.apache.org
>>> Subject: Re: Cellar clustering issue
>>>
>>> Hi Barry,
>>>
>>> I just tested Cellar 2.3.6 with Karaf 2.4.3 and it works fine.
>>>
>>> Let me try another test case.
>>>
>>> Regards
>>> JB
>>>
>>> On 12/07/2015 04:54 PM, barry.barnett@wellsfargo.com wrote:
>>>> Hello,
>>>> I have installed Cellar v2.3.6 in each of my Karaf instances.
>>>> Karaf1 - IP aaa.aaa.aaa, port bbbb
>>>> Karaf2 - IP bbb.bbb.bbb, port cccc
>>>> Why is it that when I issue the following on Karaf1, I get 'Cluster
>>>> node bbb.bbb.bbb doesn't exist':
>>>> Karaf root> cluster:group-set dev bbb.bbb.bbb:cccc
>>>> I thought it would pick it up right away.
>>>> Regards,
>>>> Barry
>>>
>>> --
>>> Jean-Baptiste Onofré
>>> jbonofre@apache.org
>>> http://blog.nanthrax.net
>>> Talend - http://www.talend.com
>>>
>>
>> --
>> Jean-Baptiste Onofré
>> jbonofre@apache.org
>> http://blog.nanthrax.net
>> Talend - http://www.talend.com
>>
>

-- 
Jean-Baptiste Onofré
jbonofre@apache.org
http://blog.nanthrax.net
Talend - http://www.talend.com
