cloudstack-users mailing list archives

From Jeremy Peterson <jpeter...@acentek.net>
Subject RE: Recreating SystemVM's
Date Wed, 14 Jun 2017 16:59:12 GMT
Is there anyone out there reading these messages?

Am I just not seeing responses?  

Jeremy


-----Original Message-----
From: Jeremy Peterson [mailto:jpeterson@acentek.net] 
Sent: Wednesday, June 14, 2017 8:12 AM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

Since this is still happening, I opened an issue: CLOUDSTACK-9960

Jeremy

-----Original Message-----
From: Jeremy Peterson [mailto:jpeterson@acentek.net]
Sent: Sunday, June 11, 2017 9:10 AM
To: users@cloudstack.apache.org
Subject: Re: Recreating SystemVM's

Any other suggestions?

I am going to schedule XenServer updates.  But this all points back to CANNOT_ATTACH_NETWORK.

I've verified nothing is active on the Public IP space that those two VM's were living on.
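One thing worth ruling out on the host is whether the attach keeps landing on a bond-slave PIF (a sketch against the stock xe CLI on XenServer 6.5; the slave_pifs helper is illustrative, and the guard makes the whole thing a no-op where xe is absent):

```shell
# Sketch: flag PIFs that are bond slaves, since XenCenter reported
# "This PIF is a bond slave and cannot be plugged" earlier in the thread.
slave_pifs() {
  # expects "xe pif-list" style output on stdin; prints uuids of PIFs
  # whose bond-slave-of field points at a bond (not "<not in database>")
  awk '/^uuid/ {uuid=$NF} /bond-slave-of/ && !/not in database/ {print uuid}'
}
if command -v xe >/dev/null 2>&1; then
  xe pif-list params=uuid,device,bond-slave-of | slave_pifs
fi
```

On a healthy host the PIF that gets plugged for guest traffic should be the bond master, never one of its slaves.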

Jeremy
________________________________________
From: Jeremy Peterson <jpeterson@acentek.net>
Sent: Friday, June 9, 2017 9:58 AM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

I see the VM's try to create on a host that I just removed from maintenance mode to install
updates; here are the logs.

I don't see anything that sticks out to me as a failure message.

Jun  9 09:53:54 Xen3 SM: [13068] ['ip', 'route', 'del', '169.254.0.0/16']
Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13068] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13068] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12',
'src', '169.254.0.1']
Jun  9 09:53:54 Xen3 SM: [13068]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13071] ['ip', 'route', 'del', '169.254.0.0/16']
Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13071] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS
Jun  9 09:53:54 Xen3 SM: [13071] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12',
'src', '169.254.0.1']
Jun  9 09:53:54 Xen3 SM: [13071]   pread SUCCESS


Jun  9 09:54:00 Xen3 SM: [13115] on-slave.multi: {'vgName': 'VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2',
'lvName1': 'VHD-633338a7-6c40-4aa6-b88e-c798b6fdc04d', 'action1': 'deactivateNoRefcount',
'action2': 'cleanupLock', 'uuid2': '633338a7-6c40-4aa6-b88e-c798b6fdc04d', 'ns2': 'lvm-469b6dcd-8466-3d03-de0e-cc3983e1b6e2'}
Jun  9 09:54:00 Xen3 SM: [13115] LVMCache created for VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2
Jun  9 09:54:00 Xen3 SM: [13115] on-slave.action 1: deactivateNoRefcount
Jun  9 09:54:00 Xen3 SM: [13115] LVMCache: will initialize now
Jun  9 09:54:00 Xen3 SM: [13115] LVMCache: refreshing
Jun  9 09:54:00 Xen3 SM: [13115] ['/usr/sbin/lvs', '--noheadings', '--units', 'b', '-o', '+lv_tags',
'/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2']
Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS
Jun  9 09:54:00 Xen3 SM: [13115] ['/usr/sbin/lvchange', '-an', '/dev/VG_XenStorage-469b6dcd-8466-3d03-de0e-cc3983e1b6e2/VHD-633338a7-6c40-4aa6-b88e-c798b6fdc04d']
Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS
Jun  9 09:54:00 Xen3 SM: [13115] ['/sbin/dmsetup', 'status', 'VG_XenStorage--469b6dcd--8466--3d03--de0e--cc3983e1b6e2-VHD--633338a7--6c40--4aa6--b88e--c798b6fdc04d']
Jun  9 09:54:00 Xen3 SM: [13115]   pread SUCCESS
Jun  9 09:54:00 Xen3 SM: [13115] on-slave.action 2: cleanupLock

Jun  9 09:54:16 Xen3 SM: [13230] ['ip', 'route', 'del', '169.254.0.0/16']
Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS
Jun  9 09:54:16 Xen3 SM: [13230] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS
Jun  9 09:54:16 Xen3 SM: [13230] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12',
'src', '169.254.0.1']
Jun  9 09:54:16 Xen3 SM: [13230]   pread SUCCESS
Jun  9 09:54:19 Xen3 updatempppathd: [15446] The garbage collection routine returned: 0
Jun  9 09:54:23 Xen3 SM: [13277] ['ip', 'route', 'del', '169.254.0.0/16']
Jun  9 09:54:23 Xen3 SM: [13277]   pread SUCCESS
Jun  9 09:54:23 Xen3 SM: [13277] ['ifconfig', 'xapi12', '169.254.0.1', 'netmask', '255.255.0.0']
Jun  9 09:54:23 Xen3 SM: [13277]   pread SUCCESS
Jun  9 09:54:23 Xen3 SM: [13277] ['ip', 'route', 'add', '169.254.0.0/16', 'dev', 'xapi12',
'src', '169.254.0.1']
Jun  9 09:54:23 Xen3 SM: [13277]   pread SUCCESS

Jeremy


-----Original Message-----
From: Jeremy Peterson [mailto:jpeterson@acentek.net]
Sent: Friday, June 9, 2017 9:53 AM
To: users@cloudstack.apache.org
Subject: RE: Recreating SystemVM's

I am checking SMlog now on all hosts.
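Something like this is enough to skim each host's SMlog for failures (a sketch; /var/log/SMlog is the XenServer 6.5 default path, and the guard makes it a no-op where the file is missing):

```shell
# Sketch: narrow SMlog down to failure lines around the time the
# system VMs try to start. Adjust the path if yours differs.
SMLOG=${SMLOG:-/var/log/SMlog}
if [ -r "$SMLOG" ]; then
  grep -iE 'fail|error|exception' "$SMLOG" | tail -n 50
fi
```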

Jeremy


-----Original Message-----
From: Rajani Karuturi [mailto:rajani@apache.org]
Sent: Friday, June 9, 2017 9:00 AM
To: Users <users@cloudstack.apache.org>
Subject: Re: Recreating SystemVM's

In the XenServer log, did you check what is causing
"HOST_CANNOT_ATTACH_NETWORK"?
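For example (a sketch; the paths are XenServer 6.5 defaults, and each grep is skipped where the log is absent):

```shell
# Sketch: on the XenServer host, look for what raised the error in the
# xapi and storage-manager logs.
for log in /var/log/xensource.log /var/log/SMlog; do
  if [ -r "$log" ]; then
    grep -i 'CANNOT_ATTACH_NETWORK' "$log" | tail -n 20
  fi
done
```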

~Rajani
http://cloudplatform.accelerite.com/

On Fri, Jun 9, 2017 at 7:00 PM, Jeremy Peterson <jpeterson@acentek.net>
wrote:

> 08:28:43        select * from vm_instance where name like 's-%' limit
> 10000     7481 row(s) returned    0.000 sec / 0.032 sec
>
> All VM's 'state' returned Destroyed except the current VM, 7873,
> which is in a Stopped state, but that one goes Destroyed and a new one gets created.
>
> Any other suggestions?
>
> Jeremy
>
>
> -----Original Message-----
> From: Jeremy Peterson [mailto:jpeterson@acentek.net]
> Sent: Thursday, June 8, 2017 12:47 AM
> To: users@cloudstack.apache.org
> Subject: Re: Recreating SystemVM's
>
> I'll make that change in the am.
>
> Today I put a host in maintence and rebooted because proxy and 
> secstore vm were constantly being created on that host and still no change.
>
> Let you know tomorrow.
>
> Jeremy
>
>
> Sent from my Verizon, Samsung Galaxy smartphone
>
>
> -------- Original message --------
> From: Rajani Karuturi <rajani@apache.org>
> Date: 6/8/17 12:07 AM (GMT-06:00)
> To: Users <users@cloudstack.apache.org>
> Subject: Re: Recreating SystemVM's
>
> Did you check SMLog on xenserver?
> unable to destroy task(com.xensource.xenapi.Task@256829a8) on
> host(b34f086e-fabf-471e-9feb-8f54362d7d0f) due to You gave an invalid 
> object reference.  The object may have recently been deleted.  The 
> class parameter gives the type of reference given, and the handle 
> parameter echoes the bad value given.
>
> Looks like Destroy of SSVM failed. What state is SSVM in? Mark it as
> Destroyed in the cloud DB and wait for CloudStack to create a new SSVM.
>
> ~Rajani
> http://cloudplatform.accelerite.com/
>
> On Thu, Jun 8, 2017 at 1:11 AM, Jeremy Peterson 
> <jpeterson@acentek.net>
> wrote:
>
> > Probably agreed.
> >
> > But I ran a toolstack restart on all hypervisors, and v-3193 just tried
> > to create and failed, along with s-5398.
> >
> > The PIF error went away, but VM's are still recreating.
> >
> > https://pastebin.com/4n4xBgMT
> >
> > New log from this afternoon.
> >
> > My catalina.out is over 4GB
> >
> > Jeremy
> >
> >
> > -----Original Message-----
> > From: Makrand [mailto:makrandsanap@gmail.com]
> > Sent: Wednesday, June 7, 2017 12:52 AM
> > To: users@cloudstack.apache.org
> > Subject: Re: Recreating SystemVM's
> >
> > Hi there,
> >
> > Looks more like hypervisor issue.
> >
> > Just run *xe-toolstack-restart* on the hosts where these VMs are trying
> > to start, or, if you don't have too many hosts, run it on all members
> > including the master. Most I/O-related issues are squared off by a
> > toolstack bounce.
> >
> > --
> > Makrand
> >
> >
> > On Wed, Jun 7, 2017 at 3:01 AM, Jeremy Peterson 
> > <jpeterson@acentek.net>
> > wrote:
> >
> > > Ok so I pulled this from Sunday morning.
> > >
> > > https://pastebin.com/nCETw1sC
> > >
> > >
> > > errorInfo: [HOST_CANNOT_ATTACH_NETWORK, 
> > > OpaqueRef:65d0c844-bd70-81e9-4518-8809e1dc0ee7,
> > > OpaqueRef:0093ac3f-9f3a-37e1-9cdb-581398d27ba2]
> > >
> > > XenServer error.
> > >
> > > Now this still gets me because all of the other VM's launched just
> fine.
> > >
> > > Going into XenCenter I see an error at the bottom: "This PIF is a
> > > bond slave and cannot be plugged."
> > >
> > > ???
> > >
> > > If I go to networking on the hosts I see the storage vlans and 
> > > bonds are all there.
> > >
> > > I see my GUEST-PUB bond is there and LACP is setup correct.
> > >
> > > Any suggestions?
> > >
> > >
> > > Jeremy
> > >
> > >
> > > -----Original Message-----
> > > From: Jeremy Peterson [mailto:jpeterson@acentek.net]
> > > Sent: Tuesday, June 6, 2017 9:23 AM
> > > To: users@cloudstack.apache.org
> > > Subject: RE: Recreating SystemVM's
> > >
> > > Thank you all for those responses.
> > >
> > > I'll comb through my management-server.log and post a pastebin if 
> > > I'm scratching my head.
> > >
> > > Jeremy
> > >
> > > -----Original Message-----
> > > From: Rajani Karuturi [mailto:rajani@apache.org]
> > > Sent: Tuesday, June 6, 2017 6:53 AM
> > > To: users@cloudstack.apache.org
> > > Subject: Re: Recreating SystemVM's
> > >
> > > If the zone is enabled, cloudstack should recreate them automatically.
> > >
> > > ~ Rajani
> > >
> > > http://cloudplatform.accelerite.com/
> > >
> > > On June 6, 2017 at 11:37 AM, Erik Weber (terbolous@gmail.com)
> > > wrote:
> > >
> > > CloudStack should recreate them automatically; check the mgmt server
> > > logs for hints about why it isn't happening.
> > >
> > > --
> > > Erik
> > >
> > > On Tue, Jun 6, 2017 at 04:29 Jeremy Peterson
> > > <jpeterson@acentek.net> wrote:
> > >
> > > I had an issue Sunday morning with CloudStack 4.9.0 and XenServer 6.5.0.
> > > My hosts stopped sending LACP PDUs, which caused a network drop to
> > > iSCSI primary storage.
> > >
> > > So all my instances recovered, since HA is enabled.
> > >
> > > But my console proxy and secondary storage system VM's got stuck 
> > > in a boot state that would not power on.
> > >
> > > At this time they are expunged and gone.
> > >
> > > How do I tell cloudstack-management to recreate system VM's?
> > >
> > > I'm drawing a blank; since deploying CS two years ago I've just been
> > > keeping things running and adding hosts and more storage, and
> > > everything has been so stable.
> > >
> > > Jeremy
> > >
> >
>
