cloudstack-users mailing list archives

From: Ahmad Emneina <aemne...@gmail.com>
Subject: Re: How to recover a CloudStack deployment
Date: Thu, 31 Oct 2013 19:50:52 GMT
If you have two hosts in that cluster, remove one, reinstall the
hypervisor, and add it back to the cluster. Then do the same for the
other host.
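For reference, that rolling replacement can be driven through the CloudStack API, e.g. with the cloudmonkey CLI. The sketch below is an assumption-laden outline rather than a tested procedure: the host/zone/pod/cluster UUIDs, the host URL, and the password are placeholders you must substitute, and DRY_RUN=1 (the default) only prints what would be run. Reinstalling the hypervisor between the remove and the add is still a manual step.

```shell
#!/bin/bash
# Hedged sketch of the "remove one host at a time" procedure above.
# All UUIDs, the host URL, and the password are placeholders (assumptions).
DRY_RUN=${DRY_RUN:-1}

run() {
  # In dry-run mode, print the command instead of executing it.
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}

replace_host() {
  local host_uuid=$1 host_url=$2
  # 1. Drain the host, then remove it from the cluster.
  run cloudmonkey prepare hostformaintenance id="$host_uuid"
  run cloudmonkey delete host id="$host_uuid"
  # 2. Reinstall the hypervisor on the machine by hand, then re-add it.
  run cloudmonkey add host zoneid=ZONE-UUID podid=POD-UUID \
      clusterid=CLUSTER-UUID hypervisor=XenServer \
      url="$host_url" username=root password=PASSWORD
}

replace_host HOST1-UUID http://172.30.45.31   # then repeat for the other host
```

Run each step manually and verify the cluster state in between rather than executing this end to end.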


On Thu, Oct 31, 2013 at 12:48 PM, Carlos Reategui <carlos@reategui.com> wrote:

> I currently have 2 hosts.  What steps do you suggest I try?
>
>
> On Thu, Oct 31, 2013 at 11:31 AM, Ahmad Emneina <aemneina@gmail.com> wrote:
>
>> Ideally you can add another XenServer to the same cluster and remove the
>> original. I can't say for certain that removing your only host and adding
>> it back would work. I think you might actually have to remove the original
>> host and reinstall the hypervisor before adding it back. So, as a safety
>> measure, add another new host to the same cluster and test it out before
>> yanking the original host.
>>
>>
>> > On Thu, Oct 31, 2013 at 11:13 AM, Carlos Reategui <carlos@reategui.com> wrote:
>>
>> > Ahmad,
>> > Would it be safe to remove the hosts and re-add them? Will that
>> > preserve my instances?
>> > thanks
>> > Carlos
>> >
>> >
>> > On Wed, Oct 30, 2013 at 4:24 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >
>> >> Is there a way to tell CloudStack to launch the system VMs?
>> >>
>> >> Port 443 from the MS to both hosts is fine:
>> >>
>> >> # telnet 172.30.45.32 443
>> >> Trying 172.30.45.32...
>> >> Connected to 172.30.45.32.
>> >> Escape character is '^]'.
>> >> ^]
>> >>
>> >> telnet> quit
>> >> Connection closed.
>> >>
>> >> # telnet 172.30.45.31 443
>> >> Trying 172.30.45.31...
>> >> Connected to 172.30.45.31.
>> >> Escape character is '^]'.
>> >> ^]
>> >>
>> >> telnet> quit
>> >> Connection closed.
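The same reachability test can be scripted instead of run interactively in telnet. A minimal sketch using bash's /dev/tcp redirection, with the two host IPs taken from this thread:

```shell
#!/bin/bash
# check_port HOST PORT: exit 0 if a TCP connection succeeds within 3 seconds.
check_port() {
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Port 443 is what the management server uses to reach XenServer's API.
for h in 172.30.45.32 172.30.45.31; do
  if check_port "$h" 443; then
    echo "$h:443 reachable"
  else
    echo "$h:443 NOT reachable"
  fi
done
```

This only proves TCP connectivity, the same thing the telnet test shows; it says nothing about XAPI authentication or CloudStack's agent state.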
>> >>
>> >>
>> >>
>> >>
>> >>
>> >> On Wed, Oct 30, 2013 at 4:10 PM, Ahmad Emneina <aemneina@gmail.com> wrote:
>> >>
>> >>> CloudStack isn't in a state to launch VMs; you want to see it spin up
>> >>> system VMs first. Your deployVM command didn't even appear in the
>> >>> xensource log you provided, because the command didn't make it far in
>> >>> CloudStack (it'd be rejected pretty quickly since it can't spin up the
>> >>> system VMs). In your management-server logs you see:
>> >>>
>> >>> 2013-10-30 12:58:34,923 DEBUG [cloud.storage.StorageManagerImpl] (StatsCollector-3:null) Unable to send storage pool command to Pool[202|NetworkFilesystem] via 1
>> >>> com.cloud.exception.OperationTimedoutException: Commands 314507300 to Host 1 timed out after 3600
>> >>>
>> >>> and
>> >>>
>> >>>
>> >>> 2013-10-30 12:58:48,425 DEBUG [storage.secondary.SecondaryStorageManagerImpl] (secstorage-1:null) Zone 1 is not ready to launch secondary storage VM yet
>> >>>
>> >>> Can you telnet to port 443 from the MS to the host? That
>> >>> OperationTimedoutException looks suspicious.
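One way to gauge how widespread those timeouts are is to summarize them from the management-server log. A sketch; the log path below is the usual default for this era of CloudStack and may differ on your install:

```shell
#!/bin/bash
# Print a count of agent command timeouts plus the last few matching lines.
timeout_summary() {
  local log=$1
  echo "timeouts: $(grep -c 'OperationTimedoutException' "$log")"
  grep 'timed out after' "$log" | tail -n 5
}

# Path is an assumption; adjust to wherever your management-server.log lives.
timeout_summary /var/log/cloud/management/management-server.log 2>/dev/null || true
```

A steadily growing count would point at the MS-to-host channel rather than a one-off slow command.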
>> >>>
>> >>>
>> >>>
>> >>> On Wed, Oct 30, 2013 at 3:47 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >>>
>> >>>> Forgot to include the list in my reply.
>> >>>>
>> >>>>
>> >>>> On Wed, Oct 30, 2013 at 3:44 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >>>>
>> >>>>> That was a cut-and-paste from a tail -f I did just before hitting
>> >>>>> the start VM button on the console, so not a log rollover.
>> >>>>>
>> >>>>> The management server logs are GMT-8.
>> >>>>>
>> >>>>>
>> >>>>> On Wed, Oct 30, 2013 at 3:36 PM, Ahmad Emneina <aemneina@gmail.com> wrote:
>> >>>>>
>> >>>>>> I don't see any deploys in that xensource log... it might have
>> >>>>>> rolled over as well.
>> >>>>>>
>> >>>>>>
>> >>>>>> On Wed, Oct 30, 2013 at 3:11 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >>>>>>
>> >>>>>>> Nothing that seems related to trying to start an instance in the
>> >>>>>>> xensource.log file (see below). The SMlog file is empty (rotated
>> >>>>>>> a short while ago).
>> >>>>>>>
>> >>>>>>> I have uploaded the new management log file to:
>> >>>>>>> http://reategui.com/cloudstack/management-server.log.new
>> >>>>>>>
>> >>>>>>> Here is the xensource.log section from after the restart of the
>> >>>>>>> management server (from my pool master -- the second node is just
>> >>>>>>> showing heartbeat logs):
>> >>>>>>> [20131030T21:58:45.674Z|debug|srvengxen02|199 sr_scan|SR scanner D:a852c04f1e68|xapi] Automatically scanning SRs = [ OpaqueRef:6fe82fcd-15b3-be18-6636-5380ee19de1d ]
>> >>>>>>> [20131030T21:58:45.676Z|debug|srvengxen02|17932||dummytaskhelper] task scan one D:75bf6896f47a created by task D:a852c04f1e68
>> >>>>>>> [20131030T21:58:45.676Z|debug|srvengxen02|17932|scan one D:75bf6896f47a|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:58:45.679Z|debug|srvengxen02|17933 unix-RPC||dummytaskhelper] task dispatch:session.slave_login D:a72377292526 created by task D:75bf6896f47a
>> >>>>>>> [20131030T21:58:45.684Z| info|srvengxen02|17933 unix-RPC|session.slave_login D:f33a1c65be6f|xapi] Session.create trackid=f493c253a7b55c2571816cd0c9c90355 pool=true uname= is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
>> >>>>>>> [20131030T21:58:45.686Z|debug|srvengxen02|17933 unix-RPC|session.slave_login D:f33a1c65be6f|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:58:45.689Z|debug|srvengxen02|17934 unix-RPC||dummytaskhelper] task dispatch:session.get_uuid D:02f721a999e5 created by task D:f33a1c65be6f
>> >>>>>>> [20131030T21:58:45.696Z|debug|srvengxen02|17932|scan one D:75bf6896f47a|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:58:45.698Z|debug|srvengxen02|17935 unix-RPC||dummytaskhelper] task dispatch:SR.scan D:1926372044db created by task D:75bf6896f47a
>> >>>>>>> [20131030T21:58:45.707Z| info|srvengxen02|17935 unix-RPC|dispatch:SR.scan D:1926372044db|taskhelper] task SR.scan R:73f6dbba33aa (uuid:a3799647-e481-3697-74ea-49798804b477) created (trackid=f493c253a7b55c2571816cd0c9c90355) by task D:75bf6896f47a
>> >>>>>>> [20131030T21:58:45.707Z|debug|srvengxen02|17935 unix-RPC|SR.scan R:73f6dbba33aa|xapi] SR.scan: SR = 'd340de31-8a2f-51b3-926d-5306e2b3405c (NFS ISO library)'
>> >>>>>>> [20131030T21:58:45.709Z|debug|srvengxen02|17935 unix-RPC|SR.scan R:73f6dbba33aa|xapi] Marking SR for SR.scan (task=OpaqueRef:73f6dbba-33aa-a95c-178a-9f9d5ac10b6e)
>> >>>>>>> [20131030T21:58:45.713Z|debug|srvengxen02|17935 unix-RPC|SR.scan R:73f6dbba33aa|sm] SM iso sr_scan sr=OpaqueRef:6fe82fcd-15b3-be18-6636-5380ee19de1d
>> >>>>>>> [20131030T21:58:45.720Z| info|srvengxen02|17935 unix-RPC|sm_exec D:1161068a7230|xapi] Session.create trackid=4ee84749e687d05ce996d016e609f518 pool=false uname= is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
>> >>>>>>> [20131030T21:58:45.721Z|debug|srvengxen02|17935 unix-RPC|sm_exec D:1161068a7230|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:58:45.725Z|debug|srvengxen02|17936 unix-RPC||dummytaskhelper] task dispatch:session.get_uuid D:f2e326cdf62a created by task D:1161068a7230
>> >>>>>>> [20131030T21:58:45.873Z|debug|srvengxen02|17937 unix-RPC||dummytaskhelper] task dispatch:host.get_other_config D:bc0c5e2a3024 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:45.910Z|debug|srvengxen02|17938 unix-RPC||dummytaskhelper] task dispatch:SR.get_by_uuid D:ee7f022e0661 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:45.920Z|debug|srvengxen02|17939 unix-RPC||dummytaskhelper] task dispatch:VDI.get_all_records_where D:b2a1cf32cca0 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:45.991Z|debug|srvengxen02|17940 unix-RPC||dummytaskhelper] task dispatch:SR.get_by_uuid D:29146266030e created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:46.000Z|debug|srvengxen02|17941 unix-RPC||dummytaskhelper] task dispatch:SR.set_virtual_allocation D:e82fe926a634 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:46.013Z|debug|srvengxen02|17942 unix-RPC||dummytaskhelper] task dispatch:SR.set_physical_size D:ce753e786464 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:46.025Z|debug|srvengxen02|17943 unix-RPC||dummytaskhelper] task dispatch:SR.set_physical_utilisation D:51d59ee2c652 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:46.037Z|debug|srvengxen02|17944 unix-RPC||dummytaskhelper] task dispatch:SR.get_by_uuid D:ccbe694ba9b5 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:46.047Z|debug|srvengxen02|17945 unix-RPC||dummytaskhelper] task dispatch:VDI.get_all_records_where D:1a049c6abb04 created by task R:73f6dbba33aa
>> >>>>>>> [20131030T21:58:46.126Z| info|srvengxen02|17935 unix-RPC|sm_exec D:1161068a7230|xapi] Session.destroy trackid=4ee84749e687d05ce996d016e609f518
>> >>>>>>> [20131030T21:58:46.131Z|debug|srvengxen02|17935 unix-RPC|SR.scan R:73f6dbba33aa|xapi] Unmarking SR after SR.scan (task=OpaqueRef:73f6dbba-33aa-a95c-178a-9f9d5ac10b6e)
>> >>>>>>> [20131030T21:58:46.146Z|debug|srvengxen02|17932|scan one D:75bf6896f47a|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:58:46.149Z|debug|srvengxen02|17946 unix-RPC||dummytaskhelper] task dispatch:session.logout D:beff7c62a646 created by task D:75bf6896f47a
>> >>>>>>> [20131030T21:58:46.154Z| info|srvengxen02|17946 unix-RPC|session.logout D:74779ffb1f36|xapi] Session.destroy trackid=f493c253a7b55c2571816cd0c9c90355
>> >>>>>>> [20131030T21:58:46.157Z|debug|srvengxen02|17932|scan one D:75bf6896f47a|xapi] Scan of SR d340de31-8a2f-51b3-926d-5306e2b3405c complete.
>> >>>>>>> [20131030T21:59:15.705Z|debug|srvengxen02|199 sr_scan|SR scanner D:a852c04f1e68|xapi] Automatically scanning SRs = [ OpaqueRef:6fe82fcd-15b3-be18-6636-5380ee19de1d ]
>> >>>>>>> [20131030T21:59:15.707Z|debug|srvengxen02|17949||dummytaskhelper] task scan one D:f838014d67c1 created by task D:a852c04f1e68
>> >>>>>>> [20131030T21:59:15.707Z|debug|srvengxen02|17949|scan one D:f838014d67c1|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:59:15.710Z|debug|srvengxen02|17950 unix-RPC||dummytaskhelper] task dispatch:session.slave_login D:d7f8041f0ea8 created by task D:f838014d67c1
>> >>>>>>> [20131030T21:59:15.715Z| info|srvengxen02|17950 unix-RPC|session.slave_login D:9d5021ebb7bb|xapi] Session.create trackid=614665040ebf96f8d3688b561cae32fa pool=true uname= is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
>> >>>>>>> [20131030T21:59:15.717Z|debug|srvengxen02|17950 unix-RPC|session.slave_login D:9d5021ebb7bb|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:59:15.720Z|debug|srvengxen02|17951 unix-RPC||dummytaskhelper] task dispatch:session.get_uuid D:9b961577759f created by task D:9d5021ebb7bb
>> >>>>>>> [20131030T21:59:15.726Z|debug|srvengxen02|17949|scan one D:f838014d67c1|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:59:15.729Z|debug|srvengxen02|17952 unix-RPC||dummytaskhelper] task dispatch:SR.scan D:34238362aa17 created by task D:f838014d67c1
>> >>>>>>> [20131030T21:59:15.737Z| info|srvengxen02|17952 unix-RPC|dispatch:SR.scan D:34238362aa17|taskhelper] task SR.scan R:9993e27cadc8 (uuid:24efaaf1-1a7f-2f8b-aec5-d0bf386b4158) created (trackid=614665040ebf96f8d3688b561cae32fa) by task D:f838014d67c1
>> >>>>>>> [20131030T21:59:15.737Z|debug|srvengxen02|17952 unix-RPC|SR.scan R:9993e27cadc8|xapi] SR.scan: SR = 'd340de31-8a2f-51b3-926d-5306e2b3405c (NFS ISO library)'
>> >>>>>>> [20131030T21:59:15.739Z|debug|srvengxen02|17952 unix-RPC|SR.scan R:9993e27cadc8|xapi] Marking SR for SR.scan (task=OpaqueRef:9993e27c-adc8-e77e-7813-995168649469)
>> >>>>>>> [20131030T21:59:15.743Z|debug|srvengxen02|17952 unix-RPC|SR.scan R:9993e27cadc8|sm] SM iso sr_scan sr=OpaqueRef:6fe82fcd-15b3-be18-6636-5380ee19de1d
>> >>>>>>> [20131030T21:59:15.750Z| info|srvengxen02|17952 unix-RPC|sm_exec D:f56ec69d7e1f|xapi] Session.create trackid=8d403a50eeb9a2bca0c314a3d1985803 pool=false uname= is_local_superuser=true auth_user_sid= parent=trackid=9834f5af41c964e225f24279aefe4e49
>> >>>>>>> [20131030T21:59:15.751Z|debug|srvengxen02|17952 unix-RPC|sm_exec D:f56ec69d7e1f|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:59:15.754Z|debug|srvengxen02|17953 unix-RPC||dummytaskhelper] task dispatch:session.get_uuid D:fddc51c18b37 created by task D:f56ec69d7e1f
>> >>>>>>> [20131030T21:59:15.903Z|debug|srvengxen02|17954 unix-RPC||dummytaskhelper] task dispatch:host.get_other_config D:13fe3c6a32c9 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:15.934Z|debug|srvengxen02|17955 unix-RPC||dummytaskhelper] task dispatch:SR.get_by_uuid D:c192d2117790 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:15.944Z|debug|srvengxen02|17956 unix-RPC||dummytaskhelper] task dispatch:VDI.get_all_records_where D:ff0b31aa3a4f created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:16.012Z|debug|srvengxen02|17957 unix-RPC||dummytaskhelper] task dispatch:SR.get_by_uuid D:5d0800007279 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:16.022Z|debug|srvengxen02|17958 unix-RPC||dummytaskhelper] task dispatch:SR.set_virtual_allocation D:dec8e9bb21e3 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:16.034Z|debug|srvengxen02|17959 unix-RPC||dummytaskhelper] task dispatch:SR.set_physical_size D:b205f6a468e2 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:16.046Z|debug|srvengxen02|17960 unix-RPC||dummytaskhelper] task dispatch:SR.set_physical_utilisation D:dda160d61e63 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:16.059Z|debug|srvengxen02|17961 unix-RPC||dummytaskhelper] task dispatch:SR.get_by_uuid D:276d90611376 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:16.069Z|debug|srvengxen02|17962 unix-RPC||dummytaskhelper] task dispatch:VDI.get_all_records_where D:7e73bcb1cb79 created by task R:9993e27cadc8
>> >>>>>>> [20131030T21:59:16.150Z| info|srvengxen02|17952 unix-RPC|sm_exec D:f56ec69d7e1f|xapi] Session.destroy trackid=8d403a50eeb9a2bca0c314a3d1985803
>> >>>>>>> [20131030T21:59:16.155Z|debug|srvengxen02|17952 unix-RPC|SR.scan R:9993e27cadc8|xapi] Unmarking SR after SR.scan (task=OpaqueRef:9993e27c-adc8-e77e-7813-995168649469)
>> >>>>>>> [20131030T21:59:16.170Z|debug|srvengxen02|17949|scan one D:f838014d67c1|xapi] Attempting to open /var/xapi/xapi
>> >>>>>>> [20131030T21:59:16.173Z|debug|srvengxen02|17963 unix-RPC||dummytaskhelper] task dispatch:session.logout D:c7be76666e35 created by task D:f838014d67c1
>> >>>>>>> [20131030T21:59:16.178Z| info|srvengxen02|17963 unix-RPC|session.logout D:b075ca9e306b|xapi] Session.destroy trackid=614665040ebf96f8d3688b561cae32fa
>> >>>>>>> [20131030T21:59:16.181Z|debug|srvengxen02|17949|scan one D:f838014d67c1|xapi] Scan of SR d340de31-8a2f-51b3-926d-5306e2b3405c complete.
>> >>>>>>>
>> >>>>>>>
>> >>>>>>>
>> >>>>>>> On Wed, Oct 30, 2013 at 2:56 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >>>>>>>
>> >>>>>>>> Currently, CloudStack is not showing an SSVM. It does have a
>> >>>>>>>> CPVM and a VR that are both stuck in an expunging state, and
>> >>>>>>>> they have been for over a day.
>> >>>>>>>>
>> >>>>>>>> I'll clear the management log, restart the management server,
>> >>>>>>>> and try to start one of my instances that is currently in the
>> >>>>>>>> stopped state.
>> >>>>>>>>
>> >>>>>>>> I'll upload the logs once I'm done.
>> >>>>>>>>
>> >>>>>>>> Thank you for looking at these.
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> On Wed, Oct 30, 2013 at 2:44 PM, Ahmad Emneina <aemneina@gmail.com> wrote:
>> >>>>>>>>
>> >>>>>>>>> Hrm... it should work just fine from CloudStack. Do
>> >>>>>>>>> xensource.log or SMlog on the XenServer say anything specific
>> >>>>>>>>> when starting the VMs via CloudStack?
>> >>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> On Wed, Oct 30, 2013 at 2:00 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >>>>>>>>>
>> >>>>>>>>> > Installed and ran fine using the same NFS SR that the rest
>> >>>>>>>>> > of my CS root disks are on.
>> >>>>>>>>> >
>> >>>>>>>>> >
>> >>>>>>>>> > On Wed, Oct 30, 2013 at 1:44 PM, Ahmad Emneina <aemneina@gmail.com> wrote:
>> >>>>>>>>> >
>> >>>>>>>>> >> Launch a VM independently via XenCenter.
>> >>>>>>>>> >>
>> >>>>>>>>> >>
>> >>>>>>>>> >> On Wed, Oct 30, 2013 at 1:36 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >>>>>>>>> >>
>> >>>>>>>>> >>> Should I try launching a VM independently, or should I
>> >>>>>>>>> >>> try to start one of the VHDs that is in primary storage?
>> >>>>>>>>> >>>
>> >>>>>>>>> >>>
>> >>>>>>>>> >>> On Wed, Oct 30, 2013 at 1:34 PM, Ahmad Emneina <aemneina@gmail.com> wrote:
>> >>>>>>>>> >>>
>> >>>>>>>>> >>>> Outside of CloudStack, can you deploy a VM on your host
>> >>>>>>>>> >>>> to the desired storage pool? It looks like the hypervisor
>> >>>>>>>>> >>>> host can't connect to its storage server.
>> >>>>>>>>> >>>>
>> >>>>>>>>> >>>> On Wed, Oct 30, 2013 at 1:31 PM, Carlos Reategui <carlos@reategui.com> wrote:
>> >>>>>>>>> >>>>
>> >>>>>>>>> >>>> > Here is a link to the log file:
>> >>>>>>>>> >>>> > http://reategui.com/cloudstack/management-server.log
>> >>>>>>>>> >>>> >
>> >>>>>>>>> >>>> >
>> >>>>>>>>> >>>> >
>> >>>>>>>>> >>>> > On Wed, Oct 30, 2013 at 12:22 PM, Ahmad Emneina <aemneina@gmail.com> wrote:
>> >>>>>>>>> >>>> >
>> >>>>>>>>> >>>> >> Can we get the full logs? There should be something
>> >>>>>>>>> >>>> >> simple blocking the reconnection of the management
>> >>>>>>>>> >>>> >> server to the hosts. I worked past this just this
>> >>>>>>>>> >>>> >> past weekend against 4.2, so I don't think your
>> >>>>>>>>> >>>> >> results will differ by upgrading to 4.2...
>> >>>>>>>>> >>>> >>
>> >>>>>>>>> >>>> >>
>> >>>>>>>>> >>>> >> On Wed, Oct 30, 2013 at 12:14 PM, Carlos Reategui <creategui@gmail.com> wrote:
>> >>>>>>>>> >>>> >>
>> >>>>>>>>> >>>> >> > No replies to my other emails. I really need help
>> >>>>>>>>> >>>> >> > getting my CS 4.1.1 cluster back up.
>> >>>>>>>>> >>>> >> >
>> >>>>>>>>> >>>> >> > I basically have a CloudStack console that thinks
>> >>>>>>>>> >>>> >> > everything is fine, but looking at the management
>> >>>>>>>>> >>>> >> > logs there seems to be a problem connecting to the
>> >>>>>>>>> >>>> >> > hosts. XenCenter does not seem to agree and thinks
>> >>>>>>>>> >>>> >> > all is fine with the hosts. Iptables is disabled on
>> >>>>>>>>> >>>> >> > the hosts and the management server, so it is not a
>> >>>>>>>>> >>>> >> > firewall issue. Primary storage is mounted on the
>> >>>>>>>>> >>>> >> > hosts and I am able to mount secondary storage.
>> >>>>>>>>> >>>> >> >
>> >>>>>>>>> >>>> >> > I believe I have the following options:
>> >>>>>>>>> >>>> >> > 1) Back up all my VHDs, reinstall XenServer and CS,
>> >>>>>>>>> >>>> >> > import the VHDs as templates, and relaunch my 20+
>> >>>>>>>>> >>>> >> > VMs. I see this as a last-resort option that I
>> >>>>>>>>> >>>> >> > would rather not have to do.
>> >>>>>>>>> >>>> >> > 2) Remove my XS hosts from CS (assuming that won't
>> >>>>>>>>> >>>> >> > get rid of my instances), clear tags (or re-install
>> >>>>>>>>> >>>> >> > XS), re-add the XS hosts, and hope for the best.
>> >>>>>>>>> >>>> >> > 3) Attempt to upgrade to 4.2 and hope my problems
>> >>>>>>>>> >>>> >> > go away.
>> >>>>>>>>> >>>> >> >
>> >>>>>>>>> >>>> >> > Anyone have any thoughts on how to proceed?
>> >>>>>>>>> >>>> >> >
>> >>>>>>>>> >>>> >> > thanks
>> >>>>>>>>> >>>> >> > Carlos
>> >>>>>>>>> >>>> >> >
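The storage-side checks Carlos lists (primary storage mounted on the hosts, secondary storage mountable) can be spot-checked with a small helper. A sketch; the mount point below is a placeholder, not the path from this deployment:

```shell
#!/bin/bash
# is_mounted MOUNTPOINT: exit 0 if MOUNTPOINT appears in /proc/mounts.
is_mounted() {
  awk -v m="$1" '$2 == m { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /mnt/primary; then   # placeholder mount point (assumption)
  echo "primary storage mounted"
else
  echo "primary storage NOT mounted"
fi
```

Run this on each host; a mount that is present but stale would still need an NFS-level check (e.g. a quick `ls` of the mount) to confirm the server is actually responding.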
>> >>>>>>>>> >>>> >>
>> >>>>>>>>> >>>> >
>> >>>>>>>>> >>>> >
>> >>>>>>>>> >>>>
>> >>>>>>>>> >>>
>> >>>>>>>>> >>>
>> >>>>>>>>> >>
>> >>>>>>>>> >
>> >>>>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>
>> >>>>>
>> >>>>
>> >>>
>> >>
>> >
>>
>
>
