incubator-cloudstack-dev mailing list archives

From Vijay Venkatachalam <Vijay.Venkatacha...@citrix.com>
Subject RE: [DISCUSS] Method/Algorithm to gather Health Check states
Date Thu, 07 Mar 2013 15:51:14 GMT

I have mentioned the pros and cons below; thanks for asking!

Hopefully this gives a better picture now.

> -----Original Message-----
> From: Chip Childers [mailto:chip.childers@sungard.com]
> Sent: Thursday, March 07, 2013 7:40 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: Re: [DISCUSS] Method/Algorithm to gather Health Check states
> 
> On Thu, Mar 07, 2013 at 07:17:17PM +0530, Vijay Venkatachalam wrote:
> >
> > Any  votes/re-assurances to the thought process from the Networking
> Universe?
> >
> > My Vote: Method-2
> >
> > -Vijay
> 
> So I'm struggling to understand the implications of doing it via method
> 1 vs method 2.  Can you provide what you think the pros/cons are?
> 
> >
> > -----Original Message-----
> > From: Vijay Venkatachalam
> > Sent: Wednesday, March 06, 2013 7:58 PM
> > To: cloudstack-dev@incubator.apache.org
> > Subject: [DISCUSS] Method/Algorithm to gather Health Check states
> >
> > Hi,
> >
> > As part of the HealthCheck feature review, I have suggested introducing a
> new Capability for HealthCheck, so that any Network Element that has this
> capability can advertise it.
> >
> > Any alternative thoughts?
> >
> > The most important part of the health check functionality is to update the
> status of each destination VM in the CS DB (as seen by the LB Appliance) from a
> scheduled thread launched at every time interval.
> >
> > So on every iteration, we need to visit every LB rule and find the status
> of its destinations.
> >
> > I can think of 2 ways to get this done
> >
> > Method 1:
> >   A.  List all the Health Monitors; For each monitor=>
> >   B.  Work backwards find the LbRule
> >   C. Find the network for LbRule
> >   D. Find the LB provider in the network
> >   E. Call LoadBalancingServiceProvider.updateStatus by passing LB and its
> destinations.
> >   F. Which will reach the Resource and ultimately the Appliance
> >

Pros:
1.  Right at the beginning you have all the LbRules in the entire system, so you
    have a precise working set for which statuses must be collected.

Cons:
1.  There is no way to logically narrow down the problem set; each monitor is
    processed individually, with no grouping by network or appliance.
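The monitor-first loop of Method 1 can be sketched roughly as below. This is an illustrative sketch only: `HealthMonitor`, `LbRule`, `LbProvider`, `gatherStatuses` and the map-based lookups are hypothetical stand-ins, not actual CloudStack classes or signatures.

```java
import java.util.List;
import java.util.Map;

public class Method1Sketch {
    // Minimal illustrative stand-ins for the CloudStack entities involved.
    record LbRule(long id, long networkId) {}
    record HealthMonitor(long id, long lbRuleId) {}

    interface LbProvider {
        // Steps E/F: push the rule to the resource, which queries the appliance.
        void updateStatus(LbRule rule);
    }

    static void gatherStatuses(List<HealthMonitor> monitors,
                               Map<Long, LbRule> rulesById,
                               Map<Long, LbProvider> providerByNetwork) {
        for (HealthMonitor m : monitors) {                       // A: list all monitors
            LbRule rule = rulesById.get(m.lbRuleId());           // B: work back to the LbRule
            LbProvider p = providerByNetwork.get(rule.networkId()); // C+D: network -> provider
            p.updateStatus(rule);                                // E/F: one call per rule
        }
    }
}
```

Note that each monitor triggers its own `updateStatus` call, which is exactly the con above: there is no point at which rules for the same network or appliance are grouped.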

> > Method 2:
> >   A. List all the Networks; For each network =>
> >   B. Find the LB provider in the network
> >   C. Find the NetworkElement for the LB provider
> >   D. Proceed to (E.)  If NetworkElement has the HealthCheck Capability
> >   E. Call LoadBalancingServiceProvider.updateStatus by passing LB and its
> destinations.
> >   F. Which will reach the Resource and ultimately the Appliance
> >

Pros:
1.  Logical breakdown of the working set: "Network" -> "LBRules" -> "Appliance".
2.  Room for optimization: multiple LB rules of a network can be passed together
    to the NetworkElement, which can in turn send all LB rules destined for the
    same Appliance together to the Resource for status.
3.  If we later decide to report the status of destination VMs even when the user
    has not explicitly configured HealthMonitors (feasible because almost all LB
    appliances track destination status by binding a default monitor that at
    least checks connectivity to the port), this is the only way to do it.
4.  If responsibility is divided among multiple Management Servers, and that
    division is based on networks, then this approach fits that partitioning
    naturally.

Cons:
1.  Steps A-D are executed for every network even when no LB rules with health
    monitors exist.
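For contrast, the network-first loop of Method 2 can be sketched as below, including the proposed capability check and the per-network batching from pro #2. Again, `NetworkElement`, `hasHealthCheckCapability`, `updateStatus` and the lookup maps are hypothetical illustrations, not the real CloudStack API.

```java
import java.util.List;
import java.util.Map;

public class Method2Sketch {
    // Minimal illustrative stand-in for an LB rule.
    record LbRule(long id, long networkId) {}

    interface NetworkElement {
        boolean hasHealthCheckCapability();        // the proposed HealthCheck Capability
        void updateStatus(List<LbRule> rules);     // E/F: batched per network
    }

    static void gatherStatuses(List<Long> networkIds,
                               Map<Long, NetworkElement> elementByNetwork,
                               Map<Long, List<LbRule>> rulesByNetwork) {
        for (long networkId : networkIds) {                      // A: list all networks
            NetworkElement el = elementByNetwork.get(networkId); // B+C: provider -> element
            if (el == null || !el.hasHealthCheckCapability()) {
                continue;                                        // D: skip non-capable elements
            }
            // Pro #2: pass all of the network's rules in one call so the element
            // can group rules per appliance before hitting the resource.
            List<LbRule> rules = rulesByNetwork.getOrDefault(networkId, List.of());
            el.updateStatus(rules);
        }
    }
}
```

The con above is also visible here: the loop body runs for every network, even those whose rule list turns out to be empty.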

> > I like the top-down approach of Method 2. Any thoughts?
> >
> > Thanks,
> > Vijay V.
> >
> >
> > From: Vijay Venkatachalam [mailto:noreply@reviews.apache.org] On
> Behalf Of Vijay Venkatachalam
> > Sent: Wednesday, March 06, 2013 6:10 PM
> > To: Vijay Venkatachalam
> > Cc: Rajesh Battala; cloudstack
> > Subject: Re: Review Request: AWS Style HealthCheck feature BugID : 664
