hadoop-general mailing list archives

From Steve Loughran <ste...@apache.org>
Subject Re: Defining Hadoop Compatibility -revisiting-
Date Thu, 12 May 2011 09:23:01 GMT
On 11/05/2011 22:24, Eric Baldeschwieler wrote:
> This is a really interesting topic!  I completely agree that we need to get ahead of
> I would be really interested in learning of any experience other apache projects, such
> as apache or tomcat have with these issues.

I don't know about Apache httpd.

Tomcat is the JCP reference implementation of JSP; the JSP jar is 
broadly reused, and the JCP program defines a test kit (with licensing 
T&Cs) that defines compatibility. That is possible because the JCP 
program was designed to separate specification from implementation.

Hadoop doesn't have that split, which is both a strength and a weakness. 
Strength: agility. Weakness: compatibility, both between versions and 
with other implementations.

I think Sun NFS might be a good example of a similar de facto standard, 
as might MS SMB -it is up to others to show they are compatible with 
what is effectively the reference implementation. Being closed source, 
there was no option for anyone to include Sun's NFS or MS's SMB code in 
their products -so the issue of "how much of SunOS NFS can you include 
before you have to stop calling it that" never arose.
