hadoop-common-dev mailing list archives

From Chris Douglas <chris.doug...@gmail.com>
Subject Re: [DISCUSSION] Release process
Date Thu, 01 Apr 2010 20:59:34 GMT
> Thus far the changes suggested for a 1.0 branch are:
>  - de-deprecate "classic" mapred APIs (no Jira issue yet)

Why? Tom and Owen's proposal preserves compatibility with the
deprecated FileSystem and mapred APIs up to 1.0. After Tom cuts a
release, from either the 0.21 branch or trunk, the issues with
missing mapred.lib classes, partial implementations, etc. will be
ameliorated and the new APIs will actually become usable. Telling users to
ignore them and use the classic APIs only deepens our debt.

I don't mind releasing 1.0 with the classic APIs. Given the installed
base, it's probably required. But let's not kill the new APIs by
calling them "experimental," thereby granting the old ones "official"
status at the moment the new ones become viable.

>  - add HDFS-200 (improved append)
>  - add HADOOP-6668 & MAPREDUCE-1623 (audience and stability annotations)
>  - add MAPREDUCE-1650 (exclude private elements from javadoc)

OK. From some previous messages, I thought you were proposing some mix
of 0.20 + security + HDFS-200 + et al., to better reflect what many
run in production, possibly spreading that backporting work over
several 1.x releases. This comparably meager set, with a vote on
HDFS-200, could easily be 0.20.3, plus a set of bug fixes Todd and I
have been assembling.

> Would you strongly oppose such a 3-week process?

Having spent 2009 in the shadow of 0.20, I oppose any decision that
prevents Apache from releasing the last year of work, or backporting
existing work *again* onto that branch. With 0.21 finally coming out,
a line of 1.x releases based on 0.20 would kneecap Owen and Tom's
effort to restart the project. -C
