cloudstack-users mailing list archives

From Clayton Weise <cwe...@iswest.net>
Subject RE: CloudStack and OpenFiler2.99
Date Thu, 20 Sep 2012 17:04:45 GMT
I think it's just a language issue or miscommunication, because what you have
is exactly what I would recommend.

-----Original Message-----
From: claude bariot [mailto:clobariot@gmail.com] 
Sent: Thursday, September 20, 2012 9:22 AM
To: cloudstack-users@incubator.apache.org
Subject: Re: CloudStack and OpenFiler2.99

Physically, I plan to set it up as follows:
In *VLAN1*: the management server, the hosts (bond0), and the storage server
==> VLAN1 is reserved for the compute nodes.
In *VLAN2*: the guest VMs and the hosts (bond1).

If I apply your recommendation, should the *storage server* be in VLAN2 and
not in VLAN1?

Can my cloud run fine if the management server and the storage server are not
on the same VLAN? I'm not sure.

Regards



On 20 September 2012 17:54, Clayton Weise <cweise@iswest.net> wrote:

> On the management server you could do tagged interfaces (VLAN) on the same
> NIC, but on your hosts I would recommend using 1 NIC for storage and 1 NIC
> for management and guest traffic.
>
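> (As a minimal sketch of a tagged interface on an Ubuntu management server --
> the NIC name, VLAN ID, and address below are illustrative assumptions, not
> values from this thread, and the 8021q module / vlan package must be
> available:)
>
>     # hypothetical example: tag VLAN 100 on eth0
>     modprobe 8021q
>     vconfig add eth0 100
>     ifconfig eth0.100 10.0.100.10 netmask 255.255.255.0 up
>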
> -----Original Message-----
> From: claude bariot [mailto:clobariot@gmail.com]
> Sent: Thursday, September 20, 2012 2:32 AM
> To: cloudstack-users@incubator.apache.org
> Subject: Re: CloudStack and OpenFiler2.99
>
> For your VLAN question.  You can put both VLANs on the same interface if
> you want, but I recommend that you keep management and storage traffic on
> separate NICs for performance reasons. ==> Is this the same for the
> *management server* NICs? Should I also separate them on the *host* NICs?
>
> On 20 September 2012 09:17, claude bariot <clobariot@gmail.com> wrote:
>
> > Thanks a lot for your response.
> >
> > regards
> >
> >
> > On 19 September 2012 21:57, Clayton Weise <cweise@iswest.net> wrote:
> >
> >> The LVM volumes are not files, so no, you can't save them like that.
> >> They can be converted to VHD files though, which is what CloudStack does
> >> when you take a snapshot of a volume.  CloudStack converts the LVM volume
> >> to a VHD file and places it on secondary storage.
> >>
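> >> (A hedged pointer: assuming the usual NFS secondary storage layout, with
> >> /export/secondary as the mount point as earlier in this thread, the
> >> converted VHDs typically land under a snapshots/ subtree -- the exact
> >> directory structure can vary by CloudStack version:)
> >>
> >>     # on the secondary storage server; path is illustrative
> >>     ls -lR /export/secondary/snapshots/
> >>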
> >> For your VLAN question.  You can put both VLANs on the same interface if
> >> you want, but I recommend that you keep management and storage traffic on
> >> separate NICs for performance reasons.
> >>
> >> -----Original Message-----
> >> From: claude bariot [mailto:clobariot@gmail.com]
> >> Sent: Wednesday, September 19, 2012 12:06 PM
> >> To: cloudstack-users@incubator.apache.org
> >> Subject: Re: CloudStack and OpenFiler2.99
> >>
> >> If I need to back up the VMs' root disks, can I just save these files?
> >>
> >> I have another question concerning VLANs:
> >> I plan to create 2 VLANs: VLAN1 for admin traffic between the management
> >> server, the hosts, and the storage node, and VLAN2 for VM traffic (the
> >> application VLAN).
> >>
> >> My question is: is it necessary to dedicate one host NIC (eth0) to VLAN1
> >> and the second host NIC (eth1) to VLAN2?
> >>
> >> I hope I'll get a response quickly.
> >>
> >> Regards
> >>
> >>
> >> On 19 September 2012 18:16, Clayton Weise <cweise@iswest.net> wrote:
> >>
> >> > Right, those are LVM volumes that have been created by XenServer and
> >> > assigned to virtual machines.  If you want to see what VMs they're
> >> > attached to, you can run various 'xe' commands.  For example, 'xe
> >> > vm-list' will give you a list of all of your VMs.  If you know the UUID
> >> > of the VM already, then you can run 'xe vm-disk-list uuid=(uuid)' where
> >> > (uuid) is the UUID of the VM in question.
> >> >
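> >> > (A concrete sketch of that workflow -- the UUID below is a placeholder,
> >> > not a value from this setup:)
> >> >
> >> >     # list all VMs with their UUIDs and names
> >> >     xe vm-list params=uuid,name-label
> >> >     # list the disks (VDIs) of one VM; the VDI UUIDs match the
> >> >     # VHD-<uuid> symlink names in the LVM volume group listing
> >> >     xe vm-disk-list uuid=<vm-uuid>
> >> >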
> >> > -----Original Message-----
> >> > From: claude bariot [mailto:clobariot@gmail.com]
> >> > Sent: Wednesday, September 19, 2012 3:04 AM
> >> > To: cloudstack-users@incubator.apache.org
> >> > Subject: Re: CloudStack and OpenFiler2.99
> >> >
> >> > OK,
> >> > look below at what I see from the management server:
> >> >
> >> > root@cloud-cms1:/export/primary# ls -l /dev/VG_XenStorage-d870c716-4c81-1a64-4d90-5a91f835f422/
> >> > total 0
> >> > drwxr-xr-x.  2 root root  240 2012-09-13 16:56 ./
> >> > drwxr-xr-x. 18 root root 3620 2012-09-17 15:40 ../
> >> > lrwxrwxrwx.  1 root root  110 2012-09-13 16:56 hb-5010b2b2-8cb4-447d-aae1-2453571df587 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-hb--5010b2b2--8cb4--447d--aae1--2453571df587
> >> > lrwxrwxrwx.  1 root root  110 2012-09-13 16:56 hb-c1a30f79-7327-4f16-b450-defa14442433 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-hb--c1a30f79--7327--4f16--b450--defa14442433
> >> > lrwxrwxrwx.  1 root root   69 2012-09-13 16:56 MGT -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-MGT
> >> > lrwxrwxrwx.  1 root root  111 2012-09-13 16:56 VHD-35e13f3a-e126-4e7a-bb7d-1e1732f59e84 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-VHD--35e13f3a--e126--4e7a--bb7d--1e1732f59e84
> >> > lrwxrwxrwx.  1 root root  111 2012-09-13 16:56 VHD-38afc9cc-366e-44bd-95a9-21fd2f194785 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-VHD--38afc9cc--366e--44bd--95a9--21fd2f194785
> >> > lrwxrwxrwx.  1 root root  111 2012-09-13 16:56 VHD-51b1d1f7-767f-4268-ab07-696932dddc96 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-VHD--51b1d1f7--767f--4268--ab07--696932dddc96
> >> > lrwxrwxrwx.  1 root root  111 2012-09-13 16:56 VHD-5c717365-bbab-4781-a1ea-4651db5efca6 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-VHD--5c717365--bbab--4781--a1ea--4651db5efca6
> >> > lrwxrwxrwx.  1 root root  111 2012-09-13 16:56 VHD-b849c29a-e7aa-4d77-b965-7845e8be079b -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-VHD--b849c29a--e7aa--4d77--b965--7845e8be079b
> >> > lrwxrwxrwx.  1 root root  111 2012-09-13 16:56 VHD-c23673fb-3e40-4b0f-9cee-e2c947bdda59 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-VHD--c23673fb--3e40--4b0f--9cee--e2c947bdda59
> >> > lrwxrwxrwx.  1 root root  111 2012-09-13 16:56 VHD-d2358d95-b4e8-4426-9c68-33686251a2b3 -> ../mapper/VG_XenStorage--d870c716--4c81--1a64--4d90--5a91f835f422-VHD--d2358d95--b4e8--4426--9c68--33686251a2b3
> >> >
> >> > So the VMs' root volumes are stored on the primary storage?
> >> > If that's true, how can I associate these files with the VMs' hostnames
> >> > or instance names?
> >> >
> >> > Regards
> >> >
> >> >
> >> > On 18 September 2012 18:31, Jason Davis <scr512@gmail.com> wrote:
> >> >
> >> > > Hence why you should consider NFS for management simplicity :)
> >> > >
> >> > > On Tue, Sep 18, 2012 at 11:06 AM, Clayton Weise <cweise@iswest.net> wrote:
> >> > >
> >> > > > If it's a LUN, it's a block device.  You can't just look in "/" and
> >> > > > find it.  In the case of XenServer, it creates a CLVM device for the
> >> > > > iSCSI or FC LUN and that logical volume is shared amongst all of the
> >> > > > hosts in the cluster.  Then, for each virtual disk (VDI) you create,
> >> > > > XenServer creates an LVM "partition" (partition isn't actually the
> >> > > > correct technical term, but it's the easiest way to express it) for
> >> > > > that virtual disk.  So essentially, each virtual disk is another LVM
> >> > > > partition.
> >> > > >
> >> > > > It's not a filesystem; it's not like NFS.  You can't just browse
> >> > > > into it, in the same way you can't just plug in a hard drive and
> >> > > > browse into that either.  You need a filesystem on top of the block
> >> > > > device that you then need to mount.
> >> > > >
> >> > > > If you're used to iSCSI or FC with VMware, the reason you can browse
> >> > > > into a block device is because VMware formats the device with a
> >> > > > filesystem called VMFS.  In the case of XenServer, there is no file
> >> > > > system, just block devices that are handed to individual virtual
> >> > > > machines.
> >> > > >
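> >> > > > (To see those per-disk logical volumes from a host's console, a
> >> > > > minimal sketch using the standard LVM tools -- the volume group name
> >> > > > is the one from the listing earlier in this thread:)
> >> > > >
> >> > > >     # run on a XenServer host in the cluster
> >> > > >     lvs VG_XenStorage-d870c716-4c81-1a64-4d90-5a91f835f422
> >> > > >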
> >> > > > -----Original Message-----
> >> > > > From: claude bariot [mailto:clobariot@gmail.com]
> >> > > > Sent: Tuesday, September 18, 2012 7:31 AM
> >> > > > To: cloudstack-users@incubator.apache.org
> >> > > > Subject: Re: CloudStack and OpenFiler2.99
> >> > > >
> >> > > > Thanks for this precision.
> >> > > >
> >> > > > But how can I display the data (the VMs' root disks) on the LUN?
> >> > > > Do I need to run an ls command on the storage server, or not?
> >> > > > How can I find the directory where the files are stored when I want
> >> > > > to display them?
> >> > > >
> >> > > > regards
> >> > > >
> >> > > >
> >> > > > On 18 September 2012 16:14, Jason Davis <scr512@gmail.com> wrote:
> >> > > >
> >> > > > > Remember that if you will be presenting LUNs to your XenServer
> >> > > > > cluster, the LUN is effectively shared between all hosts within
> >> > > > > the cluster... i.e., LUNs are not tied to a specific host, per se.
> >> > > > >
> >> > > > > On Tue, Sep 18, 2012 at 8:38 AM, claude bariot <clobariot@gmail.com> wrote:
> >> > > > >
> >> > > > > > We'll be using XenServer as the hypervisor.
> >> > > > > > My boss prefers to create some LUNs with OpenFiler; each LUN
> >> > > > > > will then be attached to a Xen host.
> >> > > > > >
> >> > > > > > Maybe we can use it as local primary storage.
> >> > > > > >
> >> > > > > > I know that we can add primary storage via the CloudStack UI.
> >> > > > > >
> >> > > > > > On 18 September 2012 15:21, Jason Davis <scr512@gmail.com> wrote:
> >> > > > > >
> >> > > > > > > Ah time to chime in :)
> >> > > > > > >
> >> > > > > > > I would recommend using NFS for primary and secondary
> >> > > > > > > storage... NFS is *much* more straightforward.  That, and VMs
> >> > > > > > > provisioned via NFS are inherently thin.  With iSCSI this may
> >> > > > > > > or may not be true (dependent on what hypervisor is being
> >> > > > > > > used).
> >> > > > > > >
> >> > > > > > > Based on what you are setting up, the theoretical speed
> >> > > > > > > advantages of iSCSI vs NFSv3 are moot.
> >> > > > > > >
> >> > > > > > > -Jason
> >> > > > > > >
> >> > > > > > >
> >> > > > > > >
> >> > > > > > > On Tue, Sep 18, 2012 at 6:23 AM, claude bariot <clobariot@gmail.com> wrote:
> >> > > > > > >
> >> > > > > > > > One more question:
> >> > > > > > > > How can I find the files (for primary and secondary storage)
> >> > > > > > > > when using iSCSI devices (LUNs)?
> >> > > > > > > >
> >> > > > > > > > regards
> >> > > > > > > >
> >> > > > > > > >
> >> > > > > > > > On 18 September 2012 10:30, claude bariot <clobariot@gmail.com> wrote:
> >> > > > > > > >
> >> > > > > > > > > I'm building a testbed platform.
> >> > > > > > > > > Could you let me know the best procedures (the best way)
> >> > > > > > > > > to configure it?
> >> > > > > > > > > I have 2 hosts (XenServer), 1 management server (Ubuntu
> >> > > > > > > > > 10.04), and 1 storage server with OpenFiler.
> >> > > > > > > > >
> >> > > > > > > > > I need the process, please.
> >> > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > > > regards
> >> > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > > > On 17 September 2012 23:51, Geoff Higginbottom <geoff.higginbottom@shapeblue.com> wrote:
> >> > > > > > > > >
> >> > > > > > > > >> Hi Claude,
> >> > > > > > > > >>
> >> > > > > > > > >> We have frequently used OpenFiler on our test and proof
> >> > > > > > > > >> of concept (POC) builds; however, to be fair, none of our
> >> > > > > > > > >> clients have ever used it in a production environment.
> >> > > > > > > > >>
> >> > > > > > > > >> Regards
> >> > > > > > > > >>
> >> > > > > > > > >> Geoff
> >> > > > > > > > >>
> >> > > > > > > > >>
> >> > > > > > > > >> On 17 Sep 2012, at 14:45, "claude bariot" <clobariot@gmail.com> wrote:
> >> > > > > > > > >>
> >> > > > > > > > >> Hello
> >> > > > > > > > >>
> >> > > > > > > > >> Has anyone already used CloudStack with OpenFiler for
> >> > > > > > > > >> storage management?
> >> > > > > > > > >
> >> > > > > > > > >
> >> > > > > > > >
> >> > > > > > >
> >> > > > > >
> >> > > > >
> >> > > >
> >> > >
> >> >
> >>
> >
> >
>
