httpd-dev mailing list archives

From Leif W <warp-...@usa.net>
Subject Re: Branching and release scheduling
Date Wed, 17 Nov 2004 02:23:36 GMT
> On: Tue, 16 Nov 2004 07:55:13 PM EST, Jim Jagielski wrote
> 
> > On Nov 16, 2004, at 3:16 PM, Manoj Kasichainula wrote:
> > 
> > We had a good discussion over lunch today on our release processes and 
> > how to have stable releases while making new feature development as 
> > fun and easy for the geeks as possible.
> 
> I find it somewhat hard to get "excited" by 2.x development because
> 2.1 almost seems like a black hole at times... You put the energy
> into the development but have to wait for it to be folded into
> an actual version that gets released and used.

[snip]

> Stability is great, but we should be careful that we don't "unduly"
> sacrifice developer energy. This may not be so clear, so feel free
> to grab me :)

Hello,

Yes, I've lurked on this list barely a week, and my only source contributions
have been very minor bug fixes (see my PATCH nags and s/CVS/SVN/g).  However, I
was recently surfing around (for no apparent reason) and stumbled upon the "GCC
Development Plan" ( http://gcc.gnu.org/develop.html ), which seems relevant to
the current discussion.

Of particular note is the schedule section, which sets a definite time frame
of two months per release.  IMO this shows commitment to the project, keeps
everything moving forward, and avoids the tendency to procrastinate.  It's a
powerful thing to put something in writing.  I'm not a GCC developer, so I'm
not biased, and have no experience with the plan in practice.  I am, however,
a user of GCC, and I do notice new features every few months.

The idea of "constant development" seems unbalanced.  If code is developed but
never tested, merged, released, stabilized, or documented, all of that time is
wasted as far as the user is concerned, because they will never know about any
of it unless there's a link to a release on the main site's "downloads" page.
And who is going to know the code better than the person who wrote it?

If everyone is constantly "playing" with new features, who does the "work" of
writing good docs, testing, merging, and fixing bugs?  Development isn't even
half of the job, and as such it needs to be balanced with all of the rest.
Otherwise you end up with software which tries to do too much and doesn't
quite deliver everything it promises.

On the other hand, I know how it is: I go to bed sleepy, thinking of some code
feature or problem, wake up a few times during the night still thinking of it,
and have some ideas in the morning which may disappear forever unless I make
use of them.  Sometimes just making a few notes isn't enough.  Is this the
argument for constant development?  To have the flexibility and convenience to
continually capture as many new ideas as possible?

The balance often seems to be three stages: develop & merge (alpha, odd
minors), freeze & test (beta, odd minors), and bugfix only (gamma, even
minors).  So then the question again: how to keep releasing on a regular
schedule?  Each piece of new code that is developed or merged will need to be
maintained later on (documentation, testing, and bug fixes), so plan ahead.
Is the original coder going to commit to that work, or would they prefer
adding new features?  If they want to keep developing, then they must find a
maintainer to work with the committers who have write access.
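
The odd/even minor convention above could be sketched roughly like this (a
hypothetical helper for illustration only, not actual httpd policy; the
function name and stage labels are my own invention):

```python
def release_stage(version):
    """Classify a version string under a hypothetical odd/even minor scheme:
    odd minor numbers are development trees (alpha/beta), even minors are
    stable, bugfix-only trees."""
    major, minor, patch = (int(part) for part in version.split("."))
    return "development" if minor % 2 else "stable"

# e.g. release_stage("2.1.3") -> "development"
#      release_stage("2.0.52") -> "stable"
```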

Are there enough qualified and trusted people with write access to the source
repository to review code submissions?  Let's not overload them with a bunch
of abandoned projects or too much new code from one person.  Are there any
prequalifications like has-docs, has-maintainer(s), needs-testing, has-orphans,
or needs-updates (with respective threshold values to allow new code but
prevent a single developer from adding too much new code until they finish
their old code), which could let a maintainer focus on code which is ready
to be added or merged?  Or is submission throttling a bad idea, and why?
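
To make the throttling idea concrete, here is a minimal sketch of what such a
prequalification gate might look like.  Everything here is hypothetical: the
flag names, the threshold value, and the function are assumptions of mine, not
anything that exists in the httpd project.

```python
# Assumed threshold: how many open (unmerged/unfinished) submissions a
# single developer may have before new ones are deferred.
MAX_OPEN_SUBMISSIONS = 3

def accept_submission(submission, open_counts, max_open=MAX_OPEN_SUBMISSIONS):
    """Return True if a submission passes the hypothetical prequalification
    gates: it has docs, it has a committed maintainer, and its author is not
    sitting on too much unfinished earlier code."""
    if not submission.get("has_docs"):
        return False  # needs-docs: defer until documented
    if not submission.get("has_maintainer"):
        return False  # has-orphans: no one has committed to maintain it
    if open_counts.get(submission["author"], 0) >= max_open:
        return False  # throttle: finish old code before adding more
    return True
```

For example, a documented, maintained patch from an author with one open
submission would pass, while a third-or-later submission from someone with a
backlog would be deferred until their earlier code is finished.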

It seems like there's a lot of code with a lot of easy-to-fix bugs, and not
enough people for the grunt work of reviewing, responding to, and ultimately
closing all of them, but plenty of people to keep pushing new code.  Result:
old bugs don't get fixed, new code doesn't get merged, and no joy all around.

I look forward to constructive responses if any.  I'm just throwing this out
there for the heck of it.

Thanks as always for all the hard work: thinking of good ideas, implementing
new policies and code to strengthen the development infrastructure, and the
resultant excellent software, and documentation which makes the software come
alive for the rest of us.

Leif


