cloudstack-dev mailing list archives

From John Burwell <jburw...@basho.com>
Subject Re: Hypervisor Host Type Required at Zone Level for Primary Storage?
Date Mon, 17 Jun 2013 20:49:05 GMT
Marcus,

I am coming to the viewpoint that ImageService (ISOs and Templates), hypervisor snapshotting,
and DataMotionService should be moved from the Storage layer into the Hypervisor layer for the
following reasons:

1. The storage layer should treat the data it stores as opaque.  These services deal with
content, not data management, in a manner that is specific to one or more hypervisors.  The
Storage layer should simply provide operations to read as a stream, read through a file handle,
write through a stream, write through a file handle, list contents, and delete data based on a
logical URI.  These higher-level, content-oriented services then compose these lower-level
primitive operations to operate on content.
2. These elements are hypervisor specific.  Therefore, tracking their storage location and
association with a hypervisor should be part of the hypervisor layer.

As I have said in numerous threads (so I apologize for the repetition), we have to break this
cyclic dependency for a whole range of good reasons.  I am beginning to think that until these
services are moved to the Hypervisor layer, we won't be able to break it.

Thanks,
-John
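
[Editorial sketch: the storage contract John describes (opaque bytes addressed by a logical URI, with stream/handle read and write, list, and delete primitives) might look roughly like the following. All names here — OpaqueStore, InMemoryStore, StoreDemo — are illustrative assumptions, not CloudStack APIs.]

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URI;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Hypothetical content-agnostic storage contract: the storage layer only
 * moves opaque bytes addressed by a logical URI. Higher-level services
 * (templates, ISOs, snapshots) would compose these primitives.
 */
interface OpaqueStore {
    InputStream openRead(URI uri);   // read as a stream
    OutputStream openWrite(URI uri); // write as a stream
    List<URI> list(URI prefix);      // list contents under a logical prefix
    void delete(URI uri);            // delete data by logical URI
}

/** Minimal in-memory implementation, for illustration only. */
class InMemoryStore implements OpaqueStore {
    private final Map<URI, byte[]> blobs = new HashMap<>();

    public InputStream openRead(URI uri) {
        return new ByteArrayInputStream(blobs.getOrDefault(uri, new byte[0]));
    }

    public OutputStream openWrite(URI uri) {
        // Capture the written bytes when the caller closes the stream.
        return new ByteArrayOutputStream() {
            @Override public void close() { blobs.put(uri, toByteArray()); }
        };
    }

    public List<URI> list(URI prefix) {
        List<URI> out = new ArrayList<>();
        for (URI u : blobs.keySet()) {
            if (u.toString().startsWith(prefix.toString())) out.add(u);
        }
        return out;
    }

    public void delete(URI uri) { blobs.remove(uri); }
}

/** Convenience helpers so callers need not handle checked IO exceptions. */
class StoreDemo {
    static void putBytes(OpaqueStore s, URI uri, byte[] data) {
        try (OutputStream os = s.openWrite(uri)) {
            os.write(data);
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }

    static byte[] getBytes(OpaqueStore s, URI uri) {
        try {
            return s.openRead(uri).readAllBytes();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

Under this split, a hypervisor-layer ImageService would decide *what* a blob means (a XenServer VHD vs. a VMware OVA) while the store above never inspects it.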

On Jun 17, 2013, at 4:23 PM, Marcus Sorensen <shadowsor@gmail.com> wrote:

> I can understand the intention, for example templates are tied to a
> hypervisor because the OS installed works with that hypervisor (drivers,
> etc), and templates end up on primary storage.
> 
> To some extent what's on the volume is hypervisor dependent, AND the
> storage technology is possibly hypervisor dependent. But I agree that it
> doesn't sit well to have the dependency.
> On Jun 17, 2013 3:12 PM, "Mike Tutkowski" <mike.tutkowski@solidfire.com>
> wrote:
> 
>> I figured you might have something to say about this, John. :)
>> 
>> Yeah, I have no idea about the motivation for this change other than what
>> Edison just said in a recent e-mail.
>> 
>> It sounds like this change went in so that the allocators could look at the
>> VM characteristics and see the hypervisor type. With this info, the
>> allocator can decide if a particular zone-wide storage is acceptable. This
>> doesn't apply to my situation, as I'm dealing with a SAN, but some
>> zone-wide storage is static (just a volume "out there" somewhere). Once
>> such a volume is used for, say, XenServer purposes, it can only be used
>> for XenServer going forward.
>> 
>> For more details, I would recommend that Edison comment.
>> 
>> 
>> On Mon, Jun 17, 2013 at 2:01 PM, John Burwell <jburwell@basho.com> wrote:
>> 
>>> Mike,
>>> 
>>> I know my thoughts will come as a galloping shock, but the idea of a
>>> hypervisor type being attached to a volume is the type of dependency I
>>> think we need to remove from the Storage layer.  What attributes of a
>>> DataStore/StoragePool require association to a hypervisor type?  My
>>> thought is that we should expose query methods that allow the Hypervisor
>>> layer to determine if a DataStore/StoragePool requires such a reservation,
>>> and we track that reservation in the Hypervisor layer.
>>> 
>>> Thanks,
>>> -John
>>> 
>>> On Jun 17, 2013, at 3:48 PM, Mike Tutkowski <mike.tutkowski@solidfire.com>
>>> wrote:
>>> 
>>>> Hi Edison,
>>>> 
>>>> How about if I add this logic into ZoneWideStoragePoolAllocator (below)?
>>>> 
>>>> After filtering storage pools by tags, it saves off the ones that are
>>>> for any hypervisor.
>>>> 
>>>> Next, we filter the list down more by hypervisor.
>>>> 
>>>> Then, we add the storage pools back into the list that were for any
>>>> hypervisor.
>>>> 
>>>> @Override
>>>> protected List<StoragePool> select(DiskProfile dskCh,
>>>>         VirtualMachineProfile<? extends VirtualMachine> vmProfile,
>>>>         DeploymentPlan plan, ExcludeList avoid, int returnUpTo) {
>>>>     s_logger.debug("ZoneWideStoragePoolAllocator to find storage pool");
>>>> 
>>>>     List<StoragePool> suitablePools = new ArrayList<StoragePool>();
>>>> 
>>>>     List<StoragePoolVO> storagePools =
>>>>         _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
>>>>             dskCh.getTags());
>>>> 
>>>>     if (storagePools == null) {
>>>>         storagePools = new ArrayList<StoragePoolVO>();
>>>>     }
>>>> 
>>>>     List<StoragePoolVO> anyHypervisorStoragePools =
>>>>         new ArrayList<StoragePoolVO>();
>>>> 
>>>>     for (StoragePoolVO storagePool : storagePools) {
>>>>         if (storagePool.getHypervisor().equals(HypervisorType.Any)) {
>>>>             anyHypervisorStoragePools.add(storagePool);
>>>>         }
>>>>     }
>>>> 
>>>>     List<StoragePoolVO> storagePoolsByHypervisor =
>>>>         _storagePoolDao.findZoneWideStoragePoolsByHypervisor(plan.getDataCenterId(),
>>>>             dskCh.getHypervisorType());
>>>> 
>>>>     storagePools.retainAll(storagePoolsByHypervisor);
>>>>     storagePools.addAll(anyHypervisorStoragePools);
>>>> 
>>>>     // add remaining pools in zone, that did not match tags, to avoid set
>>>>     List<StoragePoolVO> allPools =
>>>>         _storagePoolDao.findZoneWideStoragePoolsByTags(plan.getDataCenterId(),
>>>>             null);
>>>>     allPools.removeAll(storagePools);
>>>> 
>>>>     for (StoragePoolVO pool : allPools) {
>>>>         avoid.addPool(pool.getId());
>>>>     }
>>>> 
>>>>     for (StoragePoolVO storage : storagePools) {
>>>>         if (suitablePools.size() == returnUpTo) {
>>>>             break;
>>>>         }
>>>> 
>>>>         StoragePool pol = (StoragePool) this.dataStoreMgr
>>>>             .getPrimaryDataStore(storage.getId());
>>>> 
>>>>         if (filter(avoid, pol, dskCh, plan)) {
>>>>             suitablePools.add(pol);
>>>>         } else {
>>>>             avoid.addPool(pol.getId());
>>>>         }
>>>>     }
>>>> 
>>>>     return suitablePools;
>>>> }
>>>> 
>>>> 
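
[Editorial sketch: the core selection step in the code above — keep tag-matched pools that are either hypervisor-agnostic ("Any") or dedicated to the requested hypervisor — can be illustrated in a minimal standalone form. The class and field names below are illustrative assumptions, not the actual CloudStack types.]

```java
import java.util.ArrayList;
import java.util.List;

class PoolFilterSketch {
    enum Hypervisor { Any, XenServer, VMware, KVM }

    static class Pool {
        final String name;
        final Hypervisor hypervisor;
        Pool(String name, Hypervisor hypervisor) {
            this.name = name;
            this.hypervisor = hypervisor;
        }
    }

    // Keep pools that are hypervisor-agnostic ("Any") or dedicated to the
    // requested hypervisor, preserving the input order.
    static List<Pool> select(List<Pool> tagMatched, Hypervisor wanted) {
        List<Pool> result = new ArrayList<>();
        for (Pool p : tagMatched) {
            if (p.hypervisor == Hypervisor.Any || p.hypervisor == wanted) {
                result.add(p);
            }
        }
        return result;
    }
}
```

This mirrors the retainAll/addAll dance in Mike's allocator: the "Any" pools are saved off first so the hypervisor-specific intersection does not drop them.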
>>>> On Mon, Jun 17, 2013 at 11:40 AM, Mike Tutkowski <
>>>> mike.tutkowski@solidfire.com> wrote:
>>>> 
>>>>> Hi Edison,
>>>>> 
>>>>> I haven't looked into this much, so maybe what I suggest here won't
>> make
>>>>> sense, but here goes:
>>>>> 
>>>>> What about a Hypervisor.MULTIPLE enum option ('Hypervisor' might not
>>>>> be the name of the enumeration...I forget)? The
>>>>> ZoneWideStoragePoolAllocator could use this to be less choosy about
>>>>> whether a storage pool qualifies to be used.
>>>>> 
>>>>> Does that make any sense?
>>>>> 
>>>>> Thanks!
>>>>> 
>>>>> 
>>>>> On Mon, Jun 17, 2013 at 11:28 AM, Edison Su <Edison.su@citrix.com>
>>> wrote:
>>>>> 
>>>>>> I think it's due to this:
>>>>>> https://cwiki.apache.org/confluence/display/CLOUDSTACK/Zone-wide+primary+storage+target
>>>>>> There are zone-wide storages that may only work with one particular
>>>>>> hypervisor. For example, the data store created on vCenter can be shared
>>>>>> by all the clusters in a DC, but only for VMware. And CloudStack
>>>>>> supports multiple hypervisors in one zone, so we somehow need a way to
>>>>>> tell the mgt server that a particular zone-wide storage can only work
>>>>>> with certain hypervisors.
>>>>>> You can treat the hypervisor type on the storage pool as another tag
>>>>>> that helps the storage pool allocator find the proper storage pool. But
>>>>>> it seems hypervisor type is not enough for your case, as your storage
>>>>>> pool can work with both vmware/xenserver, but not with other
>>>>>> hypervisors (that's your current code's implementation limitation, not
>>>>>> that your storage itself can't do that).
>>>>>> So I'd think you need to extend ZoneWideStoragePoolAllocator; maybe a
>>>>>> new allocator called SolidFireZoneWideStoragePoolAllocator. And replace
>>>>>> the following line in applicationContext.xml:
>>>>>> <bean id="zoneWideStoragePoolAllocator"
>>>>>> class="org.apache.cloudstack.storage.allocator.ZoneWideStoragePoolAllocator"
>>>>>> />
>>>>>> with your SolidFireZoneWideStoragePoolAllocator.
>>>>>> It also means that, for each CloudStack mgt server deployment, the
>>>>>> admin needs to configure applicationContext.xml for their needs.
>>>>>> 
>>>>>>> -----Original Message-----
>>>>>>> From: Mike Tutkowski [mailto:mike.tutkowski@solidfire.com]
>>>>>>> Sent: Saturday, June 15, 2013 11:34 AM
>>>>>>> To: dev@cloudstack.apache.org
>>>>>>> Subject: Hypervisor Host Type Required at Zone Level for Primary
>>>>>>> Storage?
>>>>>>> 
>>>>>>> Hi,
>>>>>>> 
>>>>>>> I recently updated my local repo and noticed that we now require a
>>>>>>> hypervisor type to be associated with zone-wide primary storage.
>>>>>>> 
>>>>>>> I was wondering what the motivation for this might be?
>>>>>>> 
>>>>>>> In my case, my zone-wide primary storage represents a SAN. Volumes are
>>>>>>> carved out of the SAN as needed and can currently be utilized on both
>>>>>>> Xen and VMware (although, of course, once you've used a given volume
>>>>>>> on one hypervisor type or the other, you can only continue to use it
>>>>>>> with that hypervisor type).
>>>>>>> 
>>>>>>> I guess the point being my primary storage can be associated with more
>>>>>>> than one hypervisor type because of its dynamic nature.
>>>>>>> 
>>>>>>> Can someone fill me in on the reasons behind this recent change and
>>>>>>> recommendations on how I should proceed here?
>>>>>>> 
>>>>>>> Thanks!
>>>>>>> 
>>>>>>> --
>>>>>>> *Mike Tutkowski*
>>>>>>> *Senior CloudStack Developer, SolidFire Inc.*
>>>>>>> e: mike.tutkowski@solidfire.com
>>>>>>> o: 303.746.7302
>>>>>>> Advancing the way the world uses the
>>>>>>> cloud<http://solidfire.com/solution/overview/?video=play>
>>>>>>> *™*
>>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>> 
>>>> 
>>>> 
>>> 
>>> 
>> 
>> 
>> 

