cloudstack-dev mailing list archives

From Marcus Sorensen <>
Subject Re: [VOTE] Release Apache CloudStack 4.2.0 (sixth round)
Date Mon, 23 Sep 2013 06:47:26 GMT
Further testing shows that this only seems to affect an upgraded
agent. I can reproduce it if I install agent 4.1.1 and then upgrade to
4.2, but not if I start fresh with agent 4.2. If I had to guess, the
upgrade maybe makes the local storage pool re-register (maybe it
creates a new host in the table or something like that, I haven't
looked), where it looks like a duplicate.

On Mon, Sep 23, 2013 at 12:36 AM, Marcus Sorensen <> wrote:
> I think this does have to do with the new default storage plugin. I
> think it only affects local storage, as the agent attempts to register
> its local storage pool uuid as a pool with the mgmt server, which says
> 'sorry, this pool already exists in the database'.
> On Mon, Sep 23, 2013 at 12:24 AM, Marcus Sorensen <> wrote:
>> +0 (binding). Created a 4.1.1 zone with vms and a vpc, upgraded to 4.2
>> via RPMs. Restarted the vpc. Then wiped the database and redeployed the
>> zone and vms to test 4.2 basic functionality. All on CentOS 6.4.
>> The reason for the +0 is that I found a regression when going beyond
>> the basic testing and trying things like maintenance mode. I'm not
>> sure if it's related to the new storage framework or not, but it
>> definitely didn't exist in the previous release. I'm including the bug
>> below. Basically the agent fails to connect to the management server
>> on restart because we are trying to register the same pool multiple
>> times. We should just check for the pool, match the uuid/properties,
>> and say 'success' if it already exists. There is a hokey workaround,
>> which is why I'm not going to block: if you stop libvirtd after taking
>> the host out of maintenance and wait for the agent to reconnect, you
>> can then start libvirtd again and everything will jumpstart into
>> action. That can be done in order to complete a successful upgrade,
>> and upon deploying a fresh zone I don't think anyone will notice until
>> they go to use maintenance mode.
>> Should be easy to fix, but I'm not going to look at it right this
>> second. Maybe Edison can take a peek in the meantime.
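A minimal sketch of the idempotent registration suggested above, in Java. All class, field, and method names here are hypothetical illustrations, not CloudStack's actual API: the idea is simply that the management server should look up the incoming pool by uuid and report success when an existing entry matches its properties, rejecting only a genuine conflict.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical sketch: register a storage pool idempotently, so that an
// agent re-registering its local pool after an upgrade does not fail.
public class PoolRegistry {
    static class Pool {
        final String uuid;
        final String path;
        Pool(String uuid, String path) { this.uuid = uuid; this.path = path; }
    }

    private final Map<String, Pool> poolsByUuid = new HashMap<>();

    /** Returns true on success; rejects only a real uuid/property conflict. */
    public boolean register(Pool incoming) {
        Pool existing = poolsByUuid.get(incoming.uuid);
        if (existing == null) {
            poolsByUuid.put(incoming.uuid, incoming); // first registration
            return true;
        }
        // Re-registration (e.g. agent reconnect after upgrade): succeed
        // when the properties match instead of saying "already exists".
        return Objects.equals(existing.path, incoming.path);
    }

    public static void main(String[] args) {
        PoolRegistry registry = new PoolRegistry();
        Pool local = new Pool("aaaa-bbbb", "/var/lib/libvirt/images");
        System.out.println(registry.register(local)); // true: new pool
        System.out.println(registry.register(local)); // true: same pool, idempotent
        System.out.println(registry.register(
            new Pool("aaaa-bbbb", "/other"))); // false: conflicting properties
    }
}
```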
>> Looks like nobody has voted yet, I'm not sure if that means nobody has
>> tested over the weekend or if they're being more thorough. If we do
>> find that nobody has tested, and decide to fix this, I'd request we
>> pull in 39f7ddbb8f7eedb050da2991cdc1fb72a9e97f5f from 4.2-forward as
>> well.
>> On Fri, Sep 20, 2013 at 9:36 PM, Animesh Chaturvedi
>> <> wrote:
>>> I've created a 4.2.0 release, with the following artifacts up for a vote:
>>> Git Branch and Commit SH:
>>> Commit: 69c459342c568e2400d57ee88572b301603d8686
>>> List of changes:
>>> Source release (checksums and signatures are available at the same
>>> location):
>>> PGP release keys (signed using 94BE0D7C):
>>> Testing instructions are here:
>>> Vote will be open for 72 hours (Monday 9/23 PST EOD).
>>> For sanity in tallying the vote, can PMC members please be sure to indicate "(binding)" with their vote?
>>> [ ] +1  approve
>>> [ ] +0  no opinion
>>> [ ] -1  disapprove (and reason why)
