Please find my answers inline.

On Fri, Sep 19, 2014 at 9:57 AM, Isuru Haththotuwa <isuruh@apache.org> wrote:
Hi Raj,

On Thu, Sep 18, 2014 at 10:53 PM, Rajkumar Rajaratnam <rajkumarr@wso2.com> wrote:

As discussed in the other thread [1], I am restructuring the existing cluster monitor design while implementing the Docker cluster monitor.

I am implementing the Docker cluster monitor according to the following design. Please note that I am renaming the existing cluster monitors as well, because the current cluster monitors' names are very confusing: we have an abstract 'AbstractMonitor' and one of its concrete subclasses named just 'ClusterMonitor', which doesn't convey anything specific (how is it different from the others?).
Currently we only have an LBClusterMonitor and a ClusterMonitor for all non-LB services. Maybe that is why the name was not given a specific prefix. +1 to having meaningful names now that we are going to have several monitor types.

Hence I am proposing the following names for Stratos cluster monitors. The basic idea is that each cluster monitor monitors either a service cluster or an LB cluster, and each of these clusters can be either a VM cluster or a container cluster.

Are we okay with class names and design?

As you can see, plugging in a new 'entity' cluster monitor becomes easier with this new design. 
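To illustrate the naming scheme, the hierarchy could look roughly like this. This is just a sketch; the class names (VMClusterMonitor, VMServiceClusterMonitor, VMLbClusterMonitor, ContainerClusterMonitor) are my shorthand for the scheme described above, not the final code:

```java
// Rough sketch of the proposed monitor hierarchy; names are illustrative.
abstract class AbstractClusterMonitor {
    protected final String clusterId;
    AbstractClusterMonitor(String clusterId) { this.clusterId = clusterId; }
    abstract void monitor();
}

// VM-based monitors share VM-specific behaviour here
abstract class VMClusterMonitor extends AbstractClusterMonitor {
    VMClusterMonitor(String clusterId) { super(clusterId); }
}

class VMServiceClusterMonitor extends VMClusterMonitor {
    VMServiceClusterMonitor(String clusterId) { super(clusterId); }
    void monitor() { /* min check and scaling per network partition */ }
}

class VMLbClusterMonitor extends VMClusterMonitor {
    VMLbClusterMonitor(String clusterId) { super(clusterId); }
    void monitor() { /* maintain LB instances across network partitions */ }
}

// Container-based monitor for Kubernetes-managed clusters
class ContainerClusterMonitor extends AbstractClusterMonitor {
    ContainerClusterMonitor(String clusterId) { super(clusterId); }
    void monitor() { /* adjust replica count on the replication controller */ }
}
```

With this shape, a new 'entity' monitor only needs to extend AbstractClusterMonitor (or one of the intermediate abstract classes).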

ContainerClusterMonitor has an attribute of type KubernetesClusterContext to keep track of information such as kubernetes_cluster_id, active/pending/obsoleted members, monitoring interval, min/max replicas, etc., per service cluster.
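For illustration, a KubernetesClusterContext holding the per-cluster state listed above could look roughly like this (field and method names are my guesses, not the actual implementation):

```java
// Illustrative sketch of KubernetesClusterContext; names are hypothetical.
import java.util.ArrayList;
import java.util.List;

class KubernetesClusterContext {
    private final String kubernetesClusterId;
    private final int minReplicas;
    private final int maxReplicas;

    // member bookkeeping per service cluster
    private final List<String> activeMembers = new ArrayList<>();
    private final List<String> pendingMembers = new ArrayList<>();
    private final List<String> obsoletedMembers = new ArrayList<>();

    KubernetesClusterContext(String kubernetesClusterId, int minReplicas, int maxReplicas) {
        this.kubernetesClusterId = kubernetesClusterId;
        this.minReplicas = minReplicas;
        this.maxReplicas = maxReplicas;
    }

    void addPendingMember(String memberId) { pendingMembers.add(memberId); }

    // move a member from pending to active once health events confirm it
    void activateMember(String memberId) {
        if (pendingMembers.remove(memberId)) {
            activeMembers.add(memberId);
        }
    }

    int getActiveMemberCount() { return activeMembers.size(); }
    int getMinReplicas() { return minReplicas; }
    int getMaxReplicas() { return maxReplicas; }
    String getKubernetesClusterId() { return kubernetesClusterId; }
}
```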

Currently, we do not have a factory design pattern for creating cluster monitors. I have introduced a ClusterMonitorFactory, which will be used by AutoscalerTopologyEventReceiver.
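The factory could be as simple as the following sketch. The monitor classes here are minimal stubs and the selection logic is illustrative; in the real AutoscalerTopologyEventReceiver the deciding information would come from the cluster-created topology event:

```java
// Illustrative ClusterMonitorFactory sketch with stub monitor types.
abstract class ClusterMonitor { }
class VMServiceClusterMonitor extends ClusterMonitor { }
class VMLbClusterMonitor extends ClusterMonitor { }
class ContainerClusterMonitor extends ClusterMonitor { }

class ClusterMonitorFactory {
    // pick the concrete monitor for a cluster based on its type
    static ClusterMonitor getMonitor(boolean isKubernetes, boolean isLb) {
        if (isKubernetes) {
            return new ContainerClusterMonitor();
        }
        return isLb ? new VMLbClusterMonitor() : new VMServiceClusterMonitor();
    }
}
```

This keeps the event receiver free of monitor-construction details, so adding a new monitor type only touches the factory.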

As Nirmal mentioned in the other thread [1], the min check will be handled by Kubernetes itself. The ContainerClusterMonitor's job is to create a Kubernetes replication controller with the minimum number of replicas initially (via CC), then update the replica count of that same replication controller (again via CC) based on the health stats, making sure the number of replicas never exceeds the maximum replica count.
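The core of that update step is just clamping the desired replica count into the configured range before asking CC to resize the replication controller. A minimal sketch (the class and method names are hypothetical; the actual call would go through CC to Kubernetes):

```java
// Illustrative sketch: keep the desired replica count within [min, max]
// before updating the Kubernetes replication controller via CC.
class ReplicaController {

    // clamp the health-stat-driven desired count into the allowed range
    static int nextReplicaCount(int desired, int minReplicas, int maxReplicas) {
        return Math.max(minReplicas, Math.min(desired, maxReplicas));
    }
}
```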

Earlier we used to spin up an LB per network partition. AFAIK, that's why we had to keep a separate LBClusterMonitor, as it monitors almost all the network partitions. Since we don't have the network partition concept in Kubernetes, how are we going to handle LB cartridges? Do you think we still need a separate LBClusterMonitor for Docker? Also, how can we achieve high availability by having more than one LB with Docker?


Appreciate your feedback and suggestions.

I will keep updating this thread with progress.

1. Stratos and Kubernetes and docker support


Rajkumar Rajaratnam
Software Engineer | WSO2, Inc.

Reka Thirunavukkarasu
Senior Software Engineer,
WSO2, Inc.: http://wso2.com
Mobile: +94776442007