cloudstack-dev mailing list archives

From Koushik Das <koushik....@citrix.com>
Subject Re: Possible bug in DeploymentPlanner?
Date Wed, 16 Oct 2013 12:20:30 GMT

On 16-Oct-2013, at 3:12 AM, Prachi Damle <Prachi.Damle@citrix.com> wrote:

> 
> -----Original Message-----
> From: Koushik Das [mailto:koushik.das@citrix.com] 
> Sent: Tuesday, October 15, 2013 11:43 AM
> To: <dev@cloudstack.apache.org>
> Subject: Re: Possible bug in DeploymentPlanner?
> 
> 
> Thanks for the explanation Prachi.
> 
> Here is one issue I see with what you explained below: if an allocator is not supposed
> to handle a particular scenario, how should it deal with the avoid set? Should it put all
> available pools into the avoid set? In that case subsequent allocators won't get a chance
> to do anything, since every pool is already avoided. If more than one allocator can operate
> on a particular pool, the situation becomes trickier still.
> 
> To give an example: if there is a request for a volume on shared storage and the first
> allocator called is the local allocator, it is not going to handle the case and will return
> an empty/null list. Should it then put all the other shared pools in the avoid set, or only
> the local pools? Similarly, if the request is for a local volume and it first goes to the
> zone-wide or cluster-wide storage allocator, what should end up in the avoid set?
> 
> 
> [Prachi] I see the bug you are thinking about - DeploymentPlanningManager::canAvoidCluster()
> only checks the shared storage pools, and thus does not put the cluster in the avoid set
> when local storage is not a match for the host.
> 
> I think the storage allocators should put the resources they did not consider into the
> avoid set, but only resources within their own scope, and only when they are supposed to
> handle the request. So the local allocator sees only local pools and should place the ones
> it rejected into the avoid set, and only when it is the allocator meant to handle the
> scenario; the zone-wide/cluster-wide allocators should likewise add only pools within
> their scope.
> Also, the allocators are invoked in a defined order, and we need to maintain the order
> defined in componentContext.xml.
> 
> Now, to fix the bug, DeploymentPlanningManager::canAvoidCluster() needs to consider which
> scenario applies for the given VM profile - i.e. whether it is local or shared storage -
> so that it can correctly decide whether the cluster can be avoided, by considering the
> applicable pools only.
> Does this work for your scenario?
> 
> 
[Koushik] Not sure if canAvoidCluster() is the right place. Shared and local pools can belong
to the same cluster, so the cluster cannot be put in the avoid set; only the pools need to
be selectively added to the avoid set.
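[Koushik] To make the point concrete, here is a minimal, hypothetical sketch of the scope rule discussed above (the class and method names are made up for illustration and do not match the real CloudStack allocator API): an allocator touches only pools within its own scope, and only when it is the allocator meant to handle the request.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical, simplified model of the avoid-set rule: add pools
// selectively, never the whole cluster. Not the real CloudStack classes.
public class AvoidSetSketch {
    enum Scope { LOCAL, SHARED }

    record Pool(String name, Scope scope) {}

    static class ExcludeList {
        final Set<String> poolsToAvoid = new HashSet<>();
        void addPool(Pool p) { poolsToAvoid.add(p.name()); }
    }

    // A local-storage allocator: if the request is not for local storage,
    // it is not supposed to handle it, so it touches nothing; otherwise it
    // adds only the local pools it rejected to the avoid set.
    static void localAllocator(List<Pool> clusterPools, Scope requested, ExcludeList avoid) {
        if (requested != Scope.LOCAL) {
            return; // not our scenario: leave the avoid set alone
        }
        for (Pool p : clusterPools) {
            if (p.scope() == Scope.LOCAL) {
                avoid.addPool(p); // in real code, only pools it rejected
            }
        }
    }

    public static void main(String[] args) {
        List<Pool> pools = List.of(new Pool("local-1", Scope.LOCAL),
                                   new Pool("shared-1", Scope.SHARED));
        ExcludeList avoid = new ExcludeList();
        // Shared request: the local allocator stays out entirely, so the
        // shared pools remain available to the next allocator in order.
        localAllocator(pools, Scope.SHARED, avoid);
        System.out.println(avoid.poolsToAvoid); // prints []
    }
}
```

The point is that the cluster itself never enters the avoid set here; a shared pool in the same cluster stays eligible.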

> 
> On 15-Oct-2013, at 11:31 PM, Prachi Damle <Prachi.Damle@citrix.com>
> wrote:
> 
>> Koushik,
>> 
>> The deployment planning process is divided between DeploymentPlanners and allocators.
>> The planners feed the allocators a list of clusters to look into to find a potential
>> destination, while the allocators return an avoid set to the planners consisting of
>> every resource not considered.
>> 
>> How the planners work:
>> Planners can set/reset clusters in the avoid set depending on the heuristics they
>> implement and the order in which they want the clusters to be traversed.
>> When all of a planner's options are exhausted, it should return an empty cluster list,
>> halting the search.
>> 
>> What should the allocators do?
>> Iterate over the given cluster list and, in each cluster, find the suitable resources.
>> All other resources - those not considered AND those not suitable - MUST be added to the
>> avoid set so that the planners get correct avoid input.
>> This is necessary for the logic not to enter an infinite loop.
>> 
>> As you can see, only the planners can halt the search, by referring to the avoid set
>> provided by the allocators. If you see any case not following this, that case needs to
>> be fixed.
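>> The contract above can be sketched as follows (a hypothetical, stripped-down model;
>> the real planner and allocator interfaces are richer than this). The invariant that
>> guarantees termination: every pass either succeeds or strictly grows the avoid set,
>> so the planner's candidate list eventually empties and the search halts.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the planner/allocator loop contract: the planner
// orders clusters not yet avoided; the allocator must add every cluster it
// could not use to the avoid set, so the candidate list shrinks each pass.
public class PlannerLoopSketch {
    static List<String> orderClusters(List<String> all, Set<String> avoid) {
        List<String> out = new ArrayList<>();
        for (String c : all) if (!avoid.contains(c)) out.add(c);
        return out;
    }

    // Returns a destination, or null after updating the avoid set.
    static String allocate(List<String> clusters, Set<String> avoid) {
        for (String c : clusters) {
            if (c.startsWith("good")) return c;  // pretend this one fits
            avoid.add(c);                        // MUST record every reject
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> all = List.of("bad-1", "bad-2", "good-1");
        Set<String> avoid = new HashSet<>();
        String dest = null;
        while (dest == null) {
            List<String> candidates = orderClusters(all, avoid);
            if (candidates.isEmpty()) break;   // planner halts the search
            dest = allocate(candidates, avoid);
        }
        System.out.println(dest); // prints good-1
    }
}
```

>> If allocate() failed to record its rejects, orderClusters() would return
>> the same list forever - the infinite loop being discussed.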

>> 
>> Also, I think it will be safer in general to add a configurable retry limit on this
>> loop, to keep control in case any new allocator logic doesn't follow this reasoning.
>> I will add that limit.
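>> Such a limit might look roughly like this (a hypothetical sketch only; the constant
>> name and the stand-in pass are made up, and in practice the value would come from a
>> global configuration setting rather than a hard-coded field):

```java
// Hypothetical guard for the proposed retry limit: bound the planning loop
// by a configurable count so a misbehaving allocator that never grows the
// avoid set cannot spin forever.
public class RetryLimitSketch {
    static final int MAX_PLAN_RETRIES = 5; // would come from global config

    static String planDeployment() {
        for (int attempt = 0; attempt < MAX_PLAN_RETRIES; attempt++) {
            String dest = tryOnePass(attempt);
            if (dest != null) return dest;   // found a destination
        }
        return null; // give up instead of looping forever
    }

    // Stand-in for one orderClusters/allocate pass; succeeds on the third try.
    static String tryOnePass(int attempt) {
        return attempt == 2 ? "cluster-3" : null;
    }

    public static void main(String[] args) {
        System.out.println(planDeployment()); // prints cluster-3
    }
}
```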
>> 
>> Thanks,
>> Prachi
>> 
>> 
>> 
>> -----Original Message-----
>> From: Koushik Das [mailto:koushik.das@citrix.com] 
>> Sent: Tuesday, October 15, 2013 9:19 AM
>> To: <dev@cloudstack.apache.org>
>> Subject: Possible bug in DeploymentPlanner?
>> 
>> I was making some changes in the storage pool allocators for a bug fix and came across
>> this code snippet in the planDeployment() method of DeploymentPlanningManagerImpl.java.
>> In this, if checkClustersforDestination() returns null and the 'avoids' parameter is not
>> correctly updated (one such place can be the storage allocators), then the while loop
>> will never terminate. Is there any assumption about how the 'avoids' parameter needs to
>> be updated? From the code it is not very intuitive, and I saw some places in the storage
>> pool allocators where it will not get updated.
>> 
>> I wanted to understand the reason for doing it this way. Can the while (true) condition
>> be replaced with something more intuitive?
>> 
>>           while (true) {
>>               if (planner instanceof DeploymentClusterPlanner) {
>>                   ExcludeList plannerAvoidInput = new ExcludeList(avoids.getDataCentersToAvoid(),
>>                           avoids.getPodsToAvoid(), avoids.getClustersToAvoid(), avoids.getHostsToAvoid(),
>>                           avoids.getPoolsToAvoid());
>> 
>>                   clusterList = ((DeploymentClusterPlanner) planner).orderClusters(vmProfile, plan, avoids);
>>                   if (clusterList != null && !clusterList.isEmpty()) {
>>                       // planner refactoring. call allocators to list hosts
>>                       ExcludeList plannerAvoidOutput = new ExcludeList(avoids.getDataCentersToAvoid(),
>>                               avoids.getPodsToAvoid(), avoids.getClustersToAvoid(), avoids.getHostsToAvoid(),
>>                               avoids.getPoolsToAvoid());
>> 
>>                       resetAvoidSet(plannerAvoidOutput, plannerAvoidInput);
>> 
>>                       dest = checkClustersforDestination(clusterList, vmProfile, plan, avoids, dc,
>>                               getPlannerUsage(planner, vmProfile, plan, avoids), plannerAvoidOutput);
>>                       if (dest != null) {
>>                           return dest;
>>                       }
>>                       // reset the avoid input to the planners
>>                       resetAvoidSet(avoids, plannerAvoidOutput);
>> 
>>                   } else {
>>                       return null;
>>                   }
>>               } else {
>> ............
>> ............
>>               }
>>           }
>> 
>> 
>> 
>> 
>> 
>> 
> 

