cloudstack-dev mailing list archives

From benoit lair <kurushi4...@gmail.com>
Subject Re: CS 4.2.0 - Bug with ssvm - wrong capacity recognized
Date Thu, 03 Oct 2013 12:56:51 GMT
Hi Indra,

Thanks for your response.

Indeed, I can't contact my NFS server.

Wei, my management.network.cidr is 172.20.0.0/22, and my NFS server is at
172.20.0.57, so it is within the management CIDR.

Does this affect my zone configuration?

My pod's IP range runs from 172.20.0.100 to 172.20.1.254. Is it a problem
that my secondary storage NFS server has the IP 172.20.0.57, outside that range?
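The address arithmetic above can be checked with a quick script. This is just a sanity-check sketch (not CloudStack code), using the values from this thread: it confirms that 172.20.0.57 sits inside the 172.20.0.0/22 management CIDR but outside the pod range 172.20.0.100-172.20.1.254.

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  IFS=. read -r a b c d <<EOF
$1
EOF
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

NFS=$(ip_to_int 172.20.0.57)
NET=$(ip_to_int 172.20.0.0)
BCAST=$(( NET + (1 << (32 - 22)) - 1 ))   # last address in the /22
POD_START=$(ip_to_int 172.20.0.100)
POD_END=$(ip_to_int 172.20.1.254)

[ "$NFS" -ge "$NET" ] && [ "$NFS" -le "$BCAST" ] \
  && echo "172.20.0.57 is inside 172.20.0.0/22"
{ [ "$NFS" -lt "$POD_START" ] || [ "$NFS" -gt "$POD_END" ]; } \
  && echo "172.20.0.57 is outside the pod range"
```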

The reason the pod's IP range excludes the NFS server is that the subnet is
already used by another CloudStack management instance (I'm migrating from
CS 4.0.0 to CS 4.2.0 without using the official upgrade process).

So my NFS server at 172.20.0.57 is managed by CS 4.2.0, with an advanced
zone composed of a pod whose IP range excludes the NFS server (the pod
contains a XenServer 6.2 cluster).

Does that explain my problem?

While writing this email, I noticed that the routing table of my SSVM
looks like this:

root@s-1-VM:~# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         10.14.0.1       0.0.0.0         UG    0      0        0 eth2
google-public-d 172.20.0.1      255.255.255.255 UGH   0      0        0 eth1
10.14.0.0       *               255.254.0.0     U     0      0        0 eth2
10.32.0.0       *               255.240.0.0     U     0      0        0 eth3
link-local      *               255.255.0.0     U     0      0        0 eth0
172.20.0.0      *               255.255.252.0   U     0      0        0 eth1
172.20.0.57     10.32.0.1       255.255.255.255 UGH   0      0        0 eth3
relais3.altitud 172.20.0.1      255.255.255.255 UGH   0      0        0 eth1


So I can see that my NFS server (172.20.0.57) is routed via the gateway
10.32.0.1 (my storage network).

Why does it need to be routed via my storage network?
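For reference, Indra's suggested manual check could look like the sketch below, run from inside the SSVM. The export path /export/secondary comes from my configuration; the mount point /mnt/sectest is an arbitrary name, and the exact output will depend on your environment.

```shell
# From inside the SSVM (ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@<link-local IP>):
ip route get 172.20.0.57                  # which interface/gateway is used to reach the NFS server
ping -c 3 172.20.0.57                     # basic reachability over that route
showmount -e 172.20.0.57                  # list exports (requires rpcbind/mountd to be reachable)

# Try the mount by hand; if this fails, the SSVM cannot reach or mount
# secondary storage, which matches the symptom in the dashboard.
mkdir -p /mnt/sectest
mount -t nfs 172.20.0.57:/export/secondary /mnt/sectest && df -h /mnt/sectest
umount /mnt/sectest
```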

Thanks for your advice.


Regards, Benoit Lair.


2013/10/3 Wei ZHOU <ustcweizhou@gmail.com>

> Could you check management.network.cidr and host in Global Setting?
>
>
> 2013/10/3 Indra Pramana <indra@sg.or.id>
>
> > Hi,
> >
> > I encountered this problem before; it happens when, for some reason, the
> > SSVM is not able to mount your NFS server. Try to SSH into the SSVM and
> > run the mount command to check whether you can mount it manually. If you
> > cannot, then it needs further investigation.
> >
> > HTH.
> >
> > Thank you.
> >
> >
> > On Thu, Oct 3, 2013 at 6:48 PM, benoit lair <kurushi4000@gmail.com> wrote:
> >
> > > Hello guys,
> > >
> > > I'm trying CS 4.2.0, installed from the CloudStack RPM repositories
> > > onto a CentOS 6.3 management server.
> > >
> > > I've created a zone, a pod, and a cluster with a XenServer 6.2 host.
> > >
> > > I created my networks and deployed the system VM template (the latest
> > > version, not the one in the docs).
> > >
> > > I have an issue with the SSVM.
> > >
> > > My dashboard says I have 275 MB of storage available, which is the size
> > > of the root filesystem of the SSVM.
> > > When I SSH into the SSVM (ssh -i /root/.ssh/id_rsa.cloud -p 3922
> > > root@169.254.0.82), I see the following with df -h:
> > >
> > > root@s-1-VM:~# df -h
> > > Filesystem                                              Size  Used Avail Use% Mounted on
> > > rootfs                                                  276M  118M  145M  45% /
> > > udev                                                     10M     0   10M   0% /dev
> > > tmpfs                                                    25M  152K   25M   1% /run
> > > /dev/disk/by-uuid/3bbaf5c6-5317-468b-9742-0e68c65ad565  276M  118M  145M  45% /
> > > tmpfs                                                   5.0M     0  5.0M   0% /run/lock
> > > tmpfs                                                    79M     0   79M   0% /run/shm
> > > /dev/xvda1                                               30M   18M   11M  63% /boot
> > > /dev/xvda6                                               53M  4.9M   45M  10% /home
> > > /dev/xvda8                                              368M   11M  339M   3% /opt
> > > /dev/xvda10                                              48M  4.9M   41M  11% /tmp
> > > /dev/xvda7                                              610M  502M   77M  87% /usr
> > > /dev/xvda9                                              415M  107M  287M  27% /var
> > >
> > >
> > > That's not okay.
> > >
> > > In fact, the SSVM is mounting its own / partition and reporting that to
> > > the management server.
> > > However, I configured an NFS server as secondary storage with IP
> > > 172.20.0.57 and mount point /export/secondary.
> > >
> > > Why don't I see my NFS server mounted on the SSVM?
> > >
> > > Because of this bug, I can't upload my VM templates or populate my
> > > template database.
> > >
> > >
> > > Thanks a lot for your help.
> > >
> > > Regards.
> > >
> > > Benoit Lair.
> > >
> >
>
