cloudstack-dev mailing list archives

From Daan Hoogland <daan.hoogl...@gmail.com>
Subject Re: Release cadence
Date Thu, 13 Mar 2014 23:33:49 GMT
I agree that we can't move to our end goal in one go. But I disagree
that we should go on with business as usual right now. Baby steps, but
never stop taking steps.
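
As a concrete first baby step, David's proposal further down (no bug is
treated as a blocker during the RC stage unless an automated test case
comes with it) is something we could already try. Just to show how small
such a check can be, here is a rough Marvin-style skeleton; the class
name, tags and API call are made up for the example, not an agreed
convention:

# illustrative skeleton only; adapt the names and the assertion to the bug
from nose.plugins.attrib import attr
from marvin.cloudstackTestCase import cloudstackTestCase
from marvin.cloudstackAPI import listZones

class TestBlockerRegression(cloudstackTestCase):
    """Automated check that would accompany a hypothetical blocker report."""

    def setUp(self):
        # usual Marvin pattern: get the API client from the test client
        self.apiclient = self.testClient.getApiClient()

    @attr(tags=["advanced", "basic"], required_hardware="false")
    def test_reported_behaviour(self):
        # reproduce the reported failure path through the API and assert the
        # expected behaviour, so the regression cannot silently come back
        zones = self.apiclient.listZones(listZones.listZonesCmd())
        self.assertTrue(zones, "expected at least one zone to be returned")

Even a sketch like that, attached to the bug report, gives the release
manager something repeatable to run against the next RC.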

On Fri, Mar 14, 2014 at 12:20 AM, Mike Tutkowski
<mike.tutkowski@solidfire.com> wrote:
> My reasoning here is that otherwise we will just be futilely creating RCs
> for 4.4 since this expectation was not clearly defined ahead of the release.
>
> If we set expectations appropriately for 4.5, then we should expect we can
> begin RC building right after Feature Freeze for that release.
>
>
> On Thu, Mar 13, 2014 at 5:18 PM, Mike Tutkowski <
> mike.tutkowski@solidfire.com> wrote:
>
>> I think we should set that as a goal for 4.5. We should treat 4.4 as
>> business as usual at this point and give "fair warning" for the next
>> release.
>>
>> We should formally define what "tested" means for 4.5 and then take the
>> appropriate course of action from an RC point of view.
>>
>>
>> On Thu, Mar 13, 2014 at 5:00 PM, Daan Hoogland <daan.hoogland@gmail.com> wrote:
>>
>>> That's how I like to see it and why I asked. Is there a reason people
>>> merge and then commit their features instead of rebasing and running a
>>> standard set of integration tests to validate before merging? I am not
>>> better than average on this myself, but I think this is where we have
>>> room to improve, if anywhere.
>>>
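>>> To make "rebase and validate before merging" cheap to do, even a tiny
>>> shared script would help. A rough sketch in Python (the Maven profile and
>>> the smoke-test invocation below are placeholders, not an agreed standard):
>>>
>>> import subprocess
>>> import sys
>>>
>>> # every step must pass before the feature branch may be merged to master
>>> steps = [
>>>     ["git", "fetch", "origin"],
>>>     ["git", "rebase", "origin/master"],          # rebase, no merge commit
>>>     ["mvn", "-Pdeveloper", "clean", "install"],  # build plus unit tests
>>>     # placeholder for whatever "standard set" of integration tests we agree on:
>>>     ["nosetests", "--with-marvin",
>>>      "--marvin-config=setup/dev/advanced.cfg", "test/integration/smoke"],
>>> ]
>>> for step in steps:
>>>     if subprocess.call(step) != 0:
>>>         sys.exit("pre-merge validation failed at: " + " ".join(step))
>>> print("pre-merge validation passed")
>>>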
>>> So do we create 4.4 RC1 next Monday?
>>>
>>>
>>>
>>> On Thu, Mar 13, 2014 at 11:19 PM, Mike Tutkowski
>>> <mike.tutkowski@solidfire.com> wrote:
>>> > I think many people (myself included) are used to performing rigorous, but
>>> > focused feature-specific testing before feature freeze, but are under the
>>> > impression that once feature freeze arrives we are in
>>> > integration-testing mode (where our feature is tested in combination with
>>> > other features...not so isolated anymore). At this point, we tend to find
>>> > bugs that were not hit pre feature freeze because that mode of testing was
>>> > more confined.
>>> >
>>> > Perhaps we simply need to decide on how tested a feature should be for
>>> > feature freeze. Does it need to be fully tested from an integration with
>>> > other features standpoint or not? If yes, then we are basically "done" with
>>> > the release at feature freeze time and can begin the release-candidate
>>> > process.
>>> >
>>> >
>>> > On Thu, Mar 13, 2014 at 4:11 PM, David Nalley <david@gnsa.us> wrote:
>>> >
>>> >> That's a very good point - we are effectively saying we know the
>>> >> features we merged in have potentially months' worth of bugs. Though
>>> >> really, our hiccups don't generally seem to be in new features, but in
>>> >> old features.
>>> >>
>>> >> On Thu, Mar 13, 2014 at 3:44 PM, Marcus <shadowsor@gmail.com> wrote:
>>> >> > It's a good point. I had thought about it. Essentially we are saying
>>> >> > that we know the features we just merged need another few months of work.
>>> >> > On Mar 13, 2014 1:01 PM, "Daan Hoogland" <daan.hoogland@gmail.com> wrote:
>>> >> >
>>> >> >> Just a thought,
>>> >> >>
>>> >> >> Why isn't the freshly cut branch the first RC from the get-go? It is
>>> >> >> quite sure not to pass, but it should contain what we want to ship
>>> >> >> feature-wise.
>>> >> >>
>>> >> >> On Thu, Mar 13, 2014 at 6:35 PM, Mike Tutkowski
>>> >> >> <mike.tutkowski@solidfire.com> wrote:
>>> >> >> > OK, so it sounds like a three-month dev cycle for a four-month release
>>> >> >> > was on purpose.
>>> >> >> >
>>> >> >> > Just curious...thanks :)
>>> >> >> >
>>> >> >> >
>>> >> >> > On Thu, Mar 13, 2014 at 11:31 AM, David Nalley <david@gnsa.us> wrote:
>>> >> >> >
>>> >> >> >> This was (IIRC) part of the explicit decision in how to do things.
>>> >> >> >> The thought being that if you are restricting what people can do with
>>> >> >> >> a release branch, people still need to be able to have a place to base
>>> >> >> >> their ongoing work; and master should be that place. Some features
>>> >> >> >> will take more than a cycle to get integrated.
>>> >> >> >>
>>> >> >> >> --David
>>> >> >> >>
>>> >> >> >> On Thu, Mar 13, 2014 at 1:11 PM, Mike Tutkowski
>>> >> >> >> <mike.tutkowski@solidfire.com> wrote:
>>> >> >> >> > Yeah, if you "abandon" the "old" release as soon as a release branch
>>> >> >> >> > is cut for it, then you essentially have three months on the new
>>> >> >> >> > release before its release branch is cut and you move on to the newer
>>> >> >> >> > release. I'm not sure that was the intent when such a schedule was
>>> >> >> >> > created. It means we're releasing every four months, but developing
>>> >> >> >> > for only three.
>>> >> >> >> >
>>> >> >> >> >
>>> >> >> >> > On Thu, Mar 13, 2014 at 11:03 AM, Marcus <shadowsor@gmail.com> wrote:
>>> >> >> >> >
>>> >> >> >> >> The overlap is simply a byproduct of cutting the branch; I'm not sure
>>> >> >> >> >> there's a way around it. It's a good point though, that essentially
>>> >> >> >> >> the window is one month shorter than I think was intended. Better
>>> >> >> >> >> testing will help that, however, the point being that we shouldn't be
>>> >> >> >> >> doing a ton of work to make the release branch stable. It should push
>>> >> >> >> >> the majority of the work back into the pre-branch stage.
>>> >> >> >> >>
>>> >> >> >> >> On Thu, Mar 13, 2014 at 10:50 AM, Mike Tutkowski
>>> >> >> >> >> <mike.tutkowski@solidfire.com> wrote:
>>> >> >> >> >> > I wanted to add a little comment/question in general about our
>>> >> >> >> >> > release process:
>>> >> >> >> >> >
>>> >> >> >> >> > Right now we typically have a one-month overlap between releases.
>>> >> >> >> >> > That being the case, if you are focusing on the current release
>>> >> >> >> >> > until it is out the door, you effectively lose a month of
>>> >> >> >> >> > development for the future release. It might be tempting during
>>> >> >> >> >> > this one-month time period to focus instead on the future release
>>> >> >> >> >> > and leave the current release alone.
>>> >> >> >> >> >
>>> >> >> >> >> > Would it make sense to keep a four-month release cycle, but not
>>> >> >> >> >> > have an overlapping month of two releases?
>>> >> >> >> >> >
>>> >> >> >> >> > Just a thought
>>> >> >> >> >> >
>>> >> >> >> >> >
>>> >> >> >> >> > On Thu, Mar 13, 2014 at 10:42 AM, David Nalley <david@gnsa.us> wrote:
>>> >> >> >> >> >
>>> >> >> >> >> >> The RC7 vote thread contained a lot of discussion around release
>>> >> >> >> >> >> cadence, and I figured I'd move that to a thread that has a better
>>> >> >> >> >> >> subject so there is better visibility to list participants who
>>> >> >> >> >> >> don't read every thread.
>>> >> >> >> >> >>
>>> >> >> >> >> >> When I look at things schedule-wise, I see our aims and our
>>> >> >> >> >> >> reality. We have a relatively short development window (in the
>>> >> >> >> >> >> schedule) and we have almost 50% of our time in the schedule
>>> >> >> >> >> >> allocated to testing (over two months). However, it seems that a
>>> >> >> >> >> >> lot of testing - or at least a lot of testing for what became
>>> >> >> >> >> >> blockers to the release - didn't appear to happen until RCs were
>>> >> >> >> >> >> kicked out, and that's where our schedule has fallen apart for
>>> >> >> >> >> >> multiple releases. The automated tests we have were clean when we
>>> >> >> >> >> >> issued RCs, so we clearly don't have the depth needed from an
>>> >> >> >> >> >> automated standpoint.
>>> >> >> >> >> >>
>>> >> >> >> >> >> Two problems, one cultural and one technical. The technical
>>> >> >> >> >> >> problem is that our automated test suite isn't deep enough to give
>>> >> >> >> >> >> us a high level of confidence that we should release. The cultural
>>> >> >> >> >> >> problem is that many of us wait until the release period of the
>>> >> >> >> >> >> schedule to test.
>>> >> >> >> >> >>
>>> >> >> >> >> >> What does that have to do with release cadence? Well, inherently
>>> >> >> >> >> >> not much; but let me describe my concerns. As a project, the
>>> >> >> >> >> >> schedule is meaningless if we don't follow it, and effectively the
>>> >> >> >> >> >> release date is held hostage. Personally, I do want as few bugs as
>>> >> >> >> >> >> possible, but it's a balancing act where people doubt our ability
>>> >> >> >> >> >> if we aren't able to ship. I don't think it matters if we move to
>>> >> >> >> >> >> 6-month cycles; if this behavior continues, we'd miss the 6-month
>>> >> >> >> >> >> date as well and push to 8 or 9 months. See my radical proposition
>>> >> >> >> >> >> at the bottom for an idea on dealing with this.
>>> >> >> >> >> >>
>>> >> >> >> >> >> I also find myself agreeing with Daan on the additional
>>> >> >> >> >> >> complexity. Increasing the window for release inherently increases
>>> >> >> >> >> >> the window for feature development. As soon as we branch a
>>> >> >> >> >> >> release, master is open for feature development again. This means
>>> >> >> >> >> >> a potential for greater change at each release. Change is a risk
>>> >> >> >> >> >> to quality, or at least an unknown that we again have to test. The
>>> >> >> >> >> >> greater the quantity of change, the greater the potential threat
>>> >> >> >> >> >> to quality.
>>> >> >> >> >> >>
>>> >> >> >> >> >> Radical proposition:
>>> >> >> >> >> >>
>>> >> >> >> >> >> Because we have two problems, of a different nature, we are in a
>>> >> >> >> >> >> difficult situation. This is a possible solution, and I'd
>>> >> >> >> >> >> appreciate you reading and considering it. Feedback is welcome. I
>>> >> >> >> >> >> propose that after we enter the RC stage we not entertain any bugs
>>> >> >> >> >> >> as blockers that don't have automated test cases associated with
>>> >> >> >> >> >> them. This means that you are still welcome to do manual testing
>>> >> >> >> >> >> of your pet feature and the things that are important to you
>>> >> >> >> >> >> during the testing window (or anytime really). However, if the
>>> >> >> >> >> >> automation suite isn't also failing, then we consider the release
>>> >> >> >> >> >> as high enough quality to ship. This isn't something we can
>>> >> >> >> >> >> codify, but the PMC can certainly adopt this attitude as a group
>>> >> >> >> >> >> when voting, which also means that we can deviate from it. If you
>>> >> >> >> >> >> brought up a blocker for release, we should be immediately looking
>>> >> >> >> >> >> at how we can write a test for that behavior. This should also
>>> >> >> >> >> >> mean several other behaviors need to become a valid part of our
>>> >> >> >> >> >> process. We need to ensure that things are well tested before
>>> >> >> >> >> >> allowing a merge. This means we need a known state of master, and
>>> >> >> >> >> >> we need to perform testing that allows us to confirm that a patch
>>> >> >> >> >> >> does no harm. We also need to insist on implementation of
>>> >> >> >> >> >> comprehensive tests for every inbound feature.
>>> >> >> >> >> >>
>>> >> >> >> >> >> Thoughts, comments, flames, death threats? :)
>>> >> >> >> >> >>
>>> >> >> >> >> >> --David
>>> >> >> >> >> >>
>>> >> >> >> >> >
>>> >> >> >> >> >
>>> >> >> >> >> >
>>> >> >> >> >> > --
>>> >> >> >> >> > *Mike Tutkowski*
>>> >> >> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> >> >> > e: mike.tutkowski@solidfire.com
>>> >> >> >> >> > o: 303.746.7302
>>> >> >> >> >> > Advancing the way the world uses the
>>> >> >> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >> >> >> >> > *(tm)*
>>> >> >> >> >>
>>> >> >> >> >
>>> >> >> >> >
>>> >> >> >> >
>>> >> >> >> > --
>>> >> >> >> > *Mike Tutkowski*
>>> >> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> >> > e: mike.tutkowski@solidfire.com
>>> >> >> >> > o: 303.746.7302
>>> >> >> >> > Advancing the way the world uses the
>>> >> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >> >> >> > *(tm)*
>>> >> >> >>
>>> >> >> >
>>> >> >> >
>>> >> >> >
>>> >> >> > --
>>> >> >> > *Mike Tutkowski*
>>> >> >> > *Senior CloudStack Developer, SolidFire Inc.*
>>> >> >> > e: mike.tutkowski@solidfire.com
>>> >> >> > o: 303.746.7302
>>> >> >> > Advancing the way the world uses the
>>> >> >> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> >> >> > *(tm)*
>>> >> >>
>>> >> >>
>>> >> >>
>>> >> >> --
>>> >> >> Daan
>>> >> >>
>>> >>
>>> >
>>> >
>>> >
>>> > --
>>> > *Mike Tutkowski*
>>> > *Senior CloudStack Developer, SolidFire Inc.*
>>> > e: mike.tutkowski@solidfire.com
>>> > o: 303.746.7302
>>> > Advancing the way the world uses the
>>> > cloud<http://solidfire.com/solution/overview/?video=play>
>>> > *(tm)*
>>>
>>>
>>>
>>> --
>>> Daan
>>>
>>
>>
>>
>> --
>> *Mike Tutkowski*
>> *Senior CloudStack Developer, SolidFire Inc.*
>> e: mike.tutkowski@solidfire.com
>> o: 303.746.7302
>> Advancing the way the world uses the cloud<http://solidfire.com/solution/overview/?video=play>
>> *(tm)*
>>
>
>
>
> --
> *Mike Tutkowski*
> *Senior CloudStack Developer, SolidFire Inc.*
> e: mike.tutkowski@solidfire.com
> o: 303.746.7302
> Advancing the way the world uses the
> cloud<http://solidfire.com/solution/overview/?video=play>
> *(tm)*



-- 
Daan
