cloudstack-dev mailing list archives

From Paul Angus <paul.an...@shapeblue.com>
Subject RE: [VOTE] Apache Cloudstack 4.10.0.0 RC3
Date Wed, 28 Jun 2017 06:48:49 GMT
Those new PRs should not have been merged.

Those on the mailing list should respect the process and accept that they will have to wait
until code is unfrozen.





Kind regards,

Paul Angus

paul.angus@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
  
 


-----Original Message-----
From: Rajani Karuturi [mailto:rajani@apache.org] 
Sent: 28 June 2017 07:45
To: dev@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

Paul,

Which shows we are not actively following the RCs. That PR was a blocker for RC3 and was well
discussed. It is a perfect example of how we are not working as a community to release code.
It is a fix for a blocker which stayed open for more than 45 days.

If you look, up to RC2 only blockers were merged. But since it has taken a lot more
time to fix blockers, more PRs were merged on request on the mailing list (and we don't even
have people to object to them). You can think of it as a combination of two releases due to the
time it has taken.

~ Rajani

http://cloudplatform.accelerite.com/

On June 28, 2017 at 12:06 PM, Paul Angus
(paul.angus@shapeblue.com) wrote:

Rajani,

I suspect that the fatigue with the 4.10 release testing we are seeing is due to the time
it has taken to release it, and that this has been caused by new code going in, which has introduced
new bugs.

This was demonstrated by the last -1 from Kris: the change in question was merged 10 days ago.

Kind regards,

Paul Angus

paul.angus@shapeblue.com
www.shapeblue.com ( http://www.shapeblue.com )
53 Chandos Place, Covent Garden, London WC2N 4HS, UK @shapeblue

-----Original Message-----
From: Rajani Karuturi [mailto:rajani@apache.org]
Sent: 28 June 2017 06:14
To: dev@cloudstack.apache.org
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

We can do a release every month as long as we have enough people actively participating in
the release process.

We have people who want to have their code/features checked in.
We very clearly do not have enough people working on releases/blockers. How many of us are
testing/voting on releases or PRs? We have blockers in Jira with no one to fix them. We have PRs
open for release blockers for more than a month with no one to test them.

I would ask everyone to start testing releases/PRs and voting on them actively.

We need people who can do the work. We already know what needs to be done as outlined in the
release principles wiki after long discussions on this list.

Whether we create a branch off the RC or continue on master won't change the current situation.

We, as a community, should commit to testing and releasing code.
Principles and theory won't help.

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On June 27, 2017 at 9:43 PM, Rafael Weingärtner
(rafaelweingartner@gmail.com) wrote:

+1 to what Paul said.
IMHO, as soon as we start a release candidate to close a version, all merges should stop (period);
the only exceptions should be PRs that address specific problems in the RC.
I always thought that we had a protocol for that [1]; maybe for this version, we have not
followed it?

[1]
https://cwiki.apache.org/confluence/display/CLOUDSTACK/Release+principles+for+Apache+CloudStack+4.6+and+up#ReleaseprinciplesforApacheCloudStack4.6andup-Preparingnewrelease:masterfrozen

On Tue, Jun 27, 2017 at 1:32 AM, Paul Angus <paul.angus@shapeblue.com>
wrote:

Hi All,

From my viewpoint, 'we' have been the architects of our own downfall. Once a code freeze is
in place, NO new features and NO enhancements should be going in. Once we're at an RC stage, NO
new bug fixes other than for the blockers should be going in.
That way the release gets out, and the next one can get going. If
4.10 had gone out in a timely fashion, then we'd probably be on
4.11 if not 4.12 by now, with all the new features AND all the new fixes in.

People sliding new changes/bug fixes/enhancements in are not making the product better; they're
stopping progress, as we can clearly see here.

Kind regards,

Paul Angus

paul.angus@shapeblue.com
www.shapeblue.com ( http://www.shapeblue.com )
53 Chandos Place, Covent Garden, London WC2N 4HS, UK @shapeblue

-----Original Message-----
From: Tutkowski, Mike [mailto:Mike.Tutkowski@netapp.com]
Sent: 27 June 2017 01:25
To: dev@cloudstack.apache.org
Cc: Wido den Hollander <wido@widodh.nl>
Subject: Re: [VOTE] Apache Cloudstack 4.10.0.0 RC3

I tend to agree with you here, Daan. I know the downside we’ve discussed in the past is
that overall community participation in the RC process has dropped off when such a new branch
is created (since the community as a whole tends to focus more on the new branch rather than
on testing the RC and releasing it).

I believe we should do the following: As we approach the first RC, we need to limit the number
of PRs going into the branch (in order to stabilize it). If we had a super duper array of
automated regression tests that ran against the code, then we might be able to avoid this,
but our automated test suite is not extensive enough for us to do so.

As we approach the first RC, only blocker PRs and trivial PRs (e.g. text changes) should be
permitted in. Once we cut the first RC, we create a new branch for ongoing dev work. In between
RCs, we only allow in code related to blocker PRs (or trivial text changes, as discussed before).

What do people think?

On 6/13/17, 4:56 AM, "Daan Hoogland" <daan.hoogland@gmail.com>
wrote:

This is why I say we should branch on the first RC, fix in the release branch only, and merge forward.

On Tue, Jun 13, 2017 at 12:41 PM, Will Stevens <williamstevens@gmail.com> wrote:

I know it is hard to justify not merging PRs that seem ready but are not blockers in an RC,
but it is a vicious circle which ultimately results in a longer RC process.

It is something I struggled with as a release manager as well.

On Jun 13, 2017 1:56 AM, "Rajani Karuturi" <rajani@apache.org> wrote:

Thanks Mike,

Will hold off on the next RC until we hear an update from you.

Regarding merging non-blockers, unfortunately it's a side effect of taking more than three
months in the RC phase :(

Thanks,

~ Rajani

http://cloudplatform.accelerite.com/

On June 13, 2017 at 10:10 AM, Tutkowski, Mike
(Mike.Tutkowski@netapp.com) wrote:

Hi everyone,

I had a little time this evening and re-ran some VMware-related tests around managed storage.
I noticed a problem that I’d like to investigate before we spin up the next RC. Let’s
hold off on the next RC until I can find out more (I should know more within
24 hours).

Thanks!
Mike

On 6/12/17, 2:40 AM, "Wido den Hollander" <wido@widodh.nl>
wrote:

On 10 June 2017 at 21:18, "Tutkowski, Mike" <Mike.Tutkowski@netapp.com> wrote:

Hi,

I opened a PR against the most recent RC:

https://github.com/apache/cloudstack/pull/2141

I ran all managed-storage regression tests against it and they pass (as noted in detail in the PR).

If someone wants to take this code and create a new RC from it, I’m +1 on the new RC as long
as this is the only commit added to it since the current RC.

Thanks Mike!

If this PR is good we should probably merge it asap and go for RC5.

4.10 should really be released by now.

Wido

Thanks!
Mike

On 6/9/17, 7:43 PM, "Tutkowski, Mike" <Mike.Tutkowski@netapp.com> wrote:

Hi everyone,

I found a critical issue that was introduced into this RC since the most recent RC, so I am
-1 on this RC.

The fix for this ticket breaks the support for storing volume snapshots on primary storage
(which is a feature managed storage can support):

https://issues.apache.org/jira/browse/CLOUDSTACK-9685

Here is the SHA: 336df84f1787de962a67d0a34551f9027303040e

At a high level, what it does is remove a row from the cloud.snapshot_store_ref table when a
volume is deleted that has one or more volume snapshots.

This is fine for non-managed (traditional) storage; however, managed storage can store volume
snapshots on primary storage, so removing this row breaks that functionality.

I can fix the problem that this commit introduced by looking at the primary storage that
supports the volume snapshot and checking the following: 1) Is this managed storage? 2) If yes,
is the snapshot in question stored on that primary storage?
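
A rough sketch of that check (the type and method names here, like StoragePool,
SnapshotStoreRefDao, isManaged() and listByVolumeAndStore(), are stand-ins for
illustration only, not the actual CloudStack classes or DAO API):

import java.util.List;

// Sketch only: the interfaces and method names are hypothetical stand-ins
// for the real CloudStack types; they illustrate the two checks described above.
public class VolumeCleanupSketch {

    interface StoragePool {
        long getId();
        boolean isManaged();
    }

    interface SnapshotStoreRefDao {
        // snapshot_store_ref rows that keep a volume's snapshots on a given store
        List<Long> listByVolumeAndStore(long volumeId, long storeId);
    }

    private final SnapshotStoreRefDao snapshotStoreDao;

    VolumeCleanupSketch(SnapshotStoreRefDao snapshotStoreDao) {
        this.snapshotStoreDao = snapshotStoreDao;
    }

    // Returns true only when it is safe to delete the snapshot_store_ref rows
    // for a volume that is being removed.
    boolean canRemoveSnapshotStoreRefs(StoragePool pool, long volumeId) {
        // 1) Is this managed storage? Traditional storage keeps volume snapshots
        //    on secondary storage, so the primary-storage rows can go.
        if (!pool.isManaged()) {
            return true;
        }
        // 2) If yes, is any snapshot of this volume still stored on that primary
        //    storage? If so, keep the rows; deleting them breaks the feature.
        return snapshotStoreDao.listByVolumeAndStore(volumeId, pool.getId()).isEmpty();
    }
}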

The problem is I will be out of the office for a couple weeks and will not be able to address
this until I return.

We could revert the commit, but I still will not have time to run the managed-storage
regression test suite until I return.

On a side note, it looks like this commit was introduced since the most recent RC. I would
argue that it was not a blocker and should not have been placed into the new RC. We (as a
community) tend to have a lot of code go in between RCs, and that just increases the chances
of introducing critical issues and thus delaying the release. We’ve gotten better at this over
the years, but we should focus on only allowing new code into a follow-on RC that is critical
(or so trivial as to be unlikely to introduce any problems… like fixing an error message).

Thanks for your efforts on this, everyone!
Mike

On 6/9/17, 8:52 AM, "Tutkowski, Mike" <Mike.Tutkowski@netapp.com> wrote:

Hi Rajani,

I will see if I can get all of my managed-storage testing (both automated and manual) done
today. If not, we’ll need to see if someone else can complete it before we OK this RC, as I
won’t be back in the office for a couple weeks. I’ll report back later today.

Thanks,
Mike

On 6/9/17, 2:34 AM, "Rajani Karuturi" <rajani@apache.org> wrote:

Yup, that's right. I don't know how it happened, but it was created from the previous RC
commit. The script is supposed to do a git pull. I didn't notice any failures. Not sure what
went wrong.

Thanks for finding it, Mike. I am creating RC4 now and cancelling this.

~ Rajani

http://cloudplatform.accelerite.com/

On June 9, 2017 at 12:07 PM, Tutkowski, Mike
(Mike.Tutkowski@netapp.com) wrote:

Hi Rajani,

I don’t see the following PR in this RC:

https://github.com/apache/cloudstack/pull/2098

I ran all of my managed-storage regression tests. They all passed with the exception of the
one that led to PR 2098.

Examining the RC in a bit more detail, I see that it sits on top of ed2f573, but I think it
should sit on top of ed376fc.

As a result, I am -1 on the RC.

It takes me about a day to run all of the managed-storage regression tests and I am out of
the office for the next couple weeks, so I’d really like to avoid another RC until I’m back
and able to test the next RC.

Thanks!
Mike

On 6/7/17, 4:36 AM, "Rajani Karuturi" <rajani@apache.org> wrote:

Hi All,

I've created a 4.10.0.0 release with the following artifacts up for a vote:

Git Branch and Commit SHA:

https://gitbox.apache.org/repos/asf?p=cloudstack.git;a=commit;h=a55738a31d0073f2925c6fb84bf7a6bb32f4ca27

Commit: a55738a31d0073f2925c6fb84bf7a6bb32f4ca27
Branch: 4.10.0.0-RC20170607T1407

Source release (checksums and signatures are available at the same
location):
https://dist.apache.org/repos/dist/dev/cloudstack/4.10.0.0/

SystemVm Templates:
http://download.cloudstack.org/systemvm/4.10/RC3/

PGP release keys (signed using CBB44821):
https://dist.apache.org/repos/dist/release/cloudstack/KEYS
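
For anyone checking the source artifact before voting, here is a minimal sketch for computing
its SHA-512 with plain JDK classes so it can be compared with the published checksum. The
default file name below is an assumption, not necessarily the exact artifact name in the dist
directory, and the .asc signature still needs a separate gpg --verify against the KEYS file.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

// Sketch only: computes the SHA-512 of a downloaded artifact for comparison
// against the published checksum. Pass the real artifact path and the expected
// digest as command-line arguments.
public class VerifySourceChecksum {
    public static void main(String[] args) throws Exception {
        String artifact = args.length > 0 ? args[0] : "apache-cloudstack-4.10.0.0-src.tar.bz2";
        String expected = args.length > 1 ? args[1] : "";

        MessageDigest sha512 = MessageDigest.getInstance("SHA-512");
        try (InputStream in = Files.newInputStream(Paths.get(artifact))) {
            byte[] buf = new byte[8192];
            int read;
            while ((read = in.read(buf)) != -1) {
                sha512.update(buf, 0, read);
            }
        }

        StringBuilder hex = new StringBuilder();
        for (byte b : sha512.digest()) {
            hex.append(String.format("%02x", b));
        }

        System.out.println("computed: " + hex);
        System.out.println("expected: " + expected);
        System.out.println(hex.toString().equalsIgnoreCase(expected) ? "MATCH" : "MISMATCH");
    }
}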

Vote will be open for 72 hours.

For sanity in tallying the vote, can PMC members please be sure to indicate
"(binding)" with their vote?

[ ] +1 approve
[ ] +0 no opinion
[ ] -1 disapprove (and reason why)

Thanks,
~Rajani
http://cloudplatform.accelerite.com/

--
Daan

--
Rafael Weingärtner