From: Konstantin Boudnik <cos@boudnik.org>
Date: Mon, 16 May 2011 18:52:17 -0700
Subject: Re: Defining Hadoop Compatibility -revisiting-
To: general@hadoop.apache.org

We have the following method coverage:

  Common  ~60%
  HDFS    ~80%
  MR      ~70%

(better analysis will be available after our projects are connected to Sonar, I think).

While method coverage isn't a completely adequate answer to your question, I'd say there is a possibility of sneaking in some semantic and even API changes that might go entirely unvalidated by the test suites. The risk isn't very high, but it does exist.

A better approach to validating semantics is to run cluster tests (e.g. system tests), which have better potential to exercise public APIs than functional tests do. There's HADOOP-7278 to address this for 0.22 (and potentially others).
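To make the system-test idea concrete, a check of roughly the following shape goes only through public FileSystem entry points against a live cluster, so changed semantics surface as failures instead of slipping past unit tests. This is just a sketch under stated assumptions: JUnit 4, a cluster reachable through the default Configuration, and class/test names that are purely illustrative.

  import static org.junit.Assert.assertEquals;
  import static org.junit.Assert.assertTrue;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.junit.Test;

  // Illustrative system-level check: uses only public FileSystem APIs,
  // so it validates observable semantics rather than implementation details.
  public class TestPublicFsSemantics {

    @Test
    public void createdFileRoundTripsItsContents() throws Exception {
      Configuration conf = new Configuration();   // picks up the cluster's site configuration
      FileSystem fs = FileSystem.get(conf);       // public entry point only
      Path p = new Path("/tmp/compat-check-" + System.currentTimeMillis());

      FSDataOutputStream out = fs.create(p);
      out.writeUTF("hello");
      out.close();

      assertTrue("file should be visible after close", fs.exists(p));

      FSDataInputStream in = fs.open(p);
      assertEquals("contents must round-trip unchanged", "hello", in.readUTF());
      in.close();

      fs.delete(p, false);
    }
  }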
--
  Take care,
  Konstantin (Cos) Boudnik

Disclaimer: Opinions expressed in this email are those of the author and do not necessarily represent the views of any company the author might be affiliated with at the moment of writing.

On Mon, May 16, 2011 at 14:59, Ian Holsman wrote:
>
>> Does "Hadoop compatibility" and the ability to say "includes Apache Hadoop" only apply when we're talking about MR and HDFS APIs?
>
> It is confusing, isn't it?
>
> We could go down the route Java did and say that the APIs are 'hadoop' and ours is just a reference implementation of them
> (but, as others pointed out, we don't want to become a certification group).
>
> Out of curiosity, how good is our test suite in exercising our APIs?
> Is it sophisticated enough to capture someone adding a functionality-changing patch (e.g. the append one) and have it flag it as a test failure?
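To make the "append" example concrete: a semantic regression test of roughly the shape below would start failing if a patch quietly changed append behaviour. It is only a sketch under assumptions not stated in the thread: JUnit 4, a cluster reachable through the default Configuration with append support enabled, and class/test names that are purely illustrative.

  import static org.junit.Assert.assertEquals;

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;
  import org.junit.Test;

  // Illustrative semantics guard: pins down observable append() behaviour so a
  // functionality-changing patch surfaces as a test failure.
  public class TestAppendSemantics {

    @Test
    public void appendGrowsFileByExactlyTheBytesWritten() throws Exception {
      Configuration conf = new Configuration();
      FileSystem fs = FileSystem.get(conf);
      Path p = new Path("/tmp/append-check-" + System.currentTimeMillis());

      FSDataOutputStream out = fs.create(p);
      out.write(new byte[128]);
      out.close();

      long lengthBefore = fs.getFileStatus(p).getLen();

      // may require dfs.support.append=true, depending on the release under test
      FSDataOutputStream appender = fs.append(p);
      appender.write(new byte[64]);
      appender.close();

      long lengthAfter = fs.getFileStatus(p).getLen();
      assertEquals("append must extend, not rewrite, the file", lengthBefore + 64, lengthAfter);

      fs.delete(p, false);
    }
  }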