cloudstack-users mailing list archives

From Andrija Panic <andrija.pa...@gmail.com>
Subject Re: Management IP on guest VM using public IP
Date Fri, 24 Oct 2014 14:05:50 GMT
and internal DNS - it must be accessible/routable from the mgmt network - when
I set the internal DNS to 8.8.8.8, the SSVM check was failing and I had bad
routes, just as you do at the moment...
So in my opinion no routes for the internal DNS should be added at all; it's
on the administrator of the CS to make the public IP (whether it is a private
or public IP) reachable/routable from the mgmt network...
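
For example, a quick sanity check from inside the SSVM (just a sketch, using
the values from the cmdline below) is to confirm which route each of those
addresses takes and whether port 8250 answers:

ip route get 10.1.40.3        # internaldns1 - should go out the mgmt interface
ip route get XX.47.90.4       # mgmt server - should not hit the 10.1.40.1 host route
echo | socat - TCP:XX.47.90.4:8250,connect-timeout=3   # same test ssvm-check.sh runs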

On 24 October 2014 16:03, Andrija Panic <andrija.panic@gmail.com> wrote:

> I had the same/similar issue when my mgmt server and agents (the bridge on
> them) had a public IP... I added the host as 10.x.x.x to CloudStack, but the
> cs-agent reads the public IP address from the bridge and registers that
> public IP as the host IP inside CloudStack (ACS 4.4).
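>
> A quick way to see what the agent will pick up is to check the address on the
> bridge (a sketch - cloudbr0 is just the usual default name, use whatever your
> traffic label maps to):
>
> ip -4 addr show cloudbr0
>
> and compare that with the host IP CloudStack shows for the hypervisor.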
>
> On 24 October 2014 01:06, John Pletka <jpletka@abraxis.com> wrote:
>
>> This is probably something simple, but I can't find it.
>> *TL;DR: how do I send guests the internal IP of the management server
>> instead of the public one?*
>>
>> My setup:
>>
>> xx.47.90.0/24 => cloud-public
>> 10.1.40.0/24 => cloud-private (SAN + MGMT)
>>
>> Management server: xx.47.90.4 (public IP) and 10.1.40.4 (private IP)
>>
>> I'm trying to run the storage server health check
>> (/usr/local/cloud/systemvm/ssvm-check.sh) and it fails when it tries to
>> reach the management server.
>>
>> Good: DNS resolves download.cloud.com
>> ================================================
>> nfs is currently mounted
>> ================================================
>> Management server is XX.47.90.4. Checking connectivity.
>> ERROR: Cannot connect to XX.47.90.4 port 8250
>> 2014/10/23 22:51:29 socat[4617] E write(3, 0x86773c8, 1): No route to host
>>
>> *The reason it is failing is that a host route gets added on the Secondary
>> Storage VM pointing that public IP at the local gateway.* *If I
>> manually delete that route, the health check works.*
>>
>> root@s-6-VM:~# route -n
>> Kernel IP routing table
>> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
>> 0.0.0.0         xx.47.90.1      0.0.0.0         UG    0      0        0 eth2
>> 10.1.40.0       0.0.0.0         255.255.255.0   U     0      0        0 eth1
>> 10.1.40.0       0.0.0.0         255.255.255.0   U     0      0        0 eth3
>> 169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
>> xx.47.90.0      0.0.0.0         255.255.255.0   U     0      0        0 eth2
>> xx.47.90.4      10.1.40.1       255.255.255.255 UGH   0      0        0 eth1
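>>
>> For reference, the manual workaround is just dropping that last host route,
>> something like this (a sketch - match the gateway/device to your own table):
>>
>> ip route del XX.47.90.4/32 via 10.1.40.1 dev eth1
>>
>> (or the net-tools equivalent, route del -host XX.47.90.4), after which the
>> connectivity check to port 8250 succeeds.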
>>
>> The test is getting that IP from /var/cache/cloud/cmdline
>>
>> root@s-6-VM:~# cat /var/cache/cloud/cmdline
>> root=UUID=3bbaf5c6-5317-468b-9742-0e68c65ad565 ro debian-installer=en_US
>> quiet -- quiet console=hvc0 template=domP type=secstorage host=XX.47.90.4
>> port=8250 name=s-6-VM zone=1 pod=1 guid=s-6-VM
>> resource=com.cloud.storage.resource.PremiumSecondaryStorageResource
>> instance=SecStorage sslcopy=false role=templateProcessor mtu=1500
>> eth2ip=XX.47.90.195 eth2mask=255.255.255.0 gateway=XX.47.90.1
>> eth0ip=169.254.2.79 eth0mask=255.255.0.0 eth1ip=10.1.40.207
>> eth1mask=255.255.255.0 mgmtcidr=XX.47.90.0/24 localgw=10.1.40.1
>> private.network.device=eth1 eth3ip=10.1.40.65 eth3mask=255.255.255.0
>> storageip=10.1.40.65 storagenetmask=255.255.255.0 storagegateway=10.1.40.1
>> internaldns1=10.1.40.3 internaldns2= dns1=XX.47.64.201 dns2=XX.47.67.201
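>>
>> (Roughly what ssvm-check.sh does with that file - a sketch, not the exact
>> script:
>>
>> MGMTSERVER=$(sed -n 's/.*host=\([^ ]*\).*/\1/p' /var/cache/cloud/cmdline)
>> echo "Management server is $MGMTSERVER. Checking connectivity."
>> echo | socat - TCP:$MGMTSERVER:8250,connect-timeout=3
>>
>> so whatever ends up in host= there is the address that gets tested.)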
>>
>



-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------
