Subject: Re: [Vote] Merge branch-trunk-win to trunk
From: Konstantin Shvachko
To: hdfs-dev@hadoop.apache.org
Cc: common-dev@hadoop.apache.org, mapreduce-dev@hadoop.apache.org, yarn-dev@hadoop.apache.org
Date: Fri, 1 Mar 2013 13:57:42 -0800

Commitment is a good thing.
I think the two builds that I proposed are a prerequisite for Windows
support. If we commit the Windows patch, people will start breaking it
the next day, which we won't know without the nightly build and won't
be able to fix without the on-demand one.
Setting up the two builds is less than two days' work, imho, given that
a Windows node is available and that the mvn targets are in place.
Correct me if I missed any complications in the process.

Thanks,
--Konst

On Fri, Mar 1, 2013 at 1:28 PM, Chris Douglas wrote:
> Konstantin-
>
> There's no debate on the necessity of CI and related infrastructure to
> support the platform well. Suresh outlined the support to effect this
> here: http://s.apache.org/s1
>
> Is the commitment to establish this infrastructure after the merge
> sufficient?
> -C
>
> On Fri, Mar 1, 2013 at 12:18 PM, Konstantin Shvachko wrote:
>> -1
>> We should have a CI infrastructure in place before we can commit to
>> supporting the Windows platform.
>>
>> Eric is right that Win/Cygwin has been supported since day one.
>> I had a Windows box under my desk running nightly builds back in 2006-07.
>> People were irritated, but I was filing Windows bugs until the 0.22 release.
>> Times are changing, and I am glad to see wider support for the Windows platform.
>>
>> But in order to make it work you guys need to put the CI process in place:
>>
>> 1. Windows Jenkins build: could be nightly or PreCommit.
>> - Nightly would mean that changes can be committed to trunk based on the
>> Linux PreCommit build, and people will file bugs if a change breaks the
>> Windows nightly build.
>> - A PreCommit-win build would mean automatically reporting failed tests to
>> the respective JIRA and blocking commits, the same way it works now with
>> Linux PreCommit builds.
>> We should discuss which way is more efficient for developers.
>>
>> 2. On-demand Windows Jenkins build.
>> I see it as a build to which I can attach my patch and which will
>> run my changes on a dedicated Windows box.
>> That way people can test their changes without having personal Windows nodes.
>>
>> I think this is the minimal set of requirements for us to be able to
>> commit to the new platform.
>> Right now I see only one Windows-related build,
>> https://builds.apache.org/view/Hadoop/job/Hadoop-1-win/
>> which has been failing since Sept 8, 2012 and has not run in the last month.
>>
>> Thanks,
>> --Konst
>>
>> On Thu, Feb 28, 2013 at 8:47 PM, Eric Baldeschwieler wrote:
>>> +1 (non-binding)
>>>
>>> A few observations:
>>>
>>> - Windows has actually been a supported platform for Hadoop since 0.1.
>>> Doug championed supporting Windows then, and we've continued to do it
>>> with varying vigor over time. To my knowledge we've never made a decision
>>> to drop Windows support. The change here is improving our support and
>>> dropping the requirement of Cygwin. We had Nutch Windows users on the
>>> list in 2006, and we've been supporting Windows FS requirements since
>>> inception.
>>>
>>> - A little pragmatism will go a long way. As a community we've got to
>>> stay committed to keeping Hadoop simple (so it does work on many
>>> platforms) and extending it to take advantage of key emerging OS/hardware
>>> features, such as containers, new FSs, virtualization, flash ... We
>>> should all plan to let new features & optimizations emerge that don't
>>> work everywhere, if they are compelling and central to Hadoop's mission
>>> of being THE best fabric for storing and processing big data.
>>>
>>> - A UI project like KDE has to deal with the MANY differences between
>>> Windows and Linux UI APIs. Hadoop faces no such complex challenge and
>>> hence can be maintained from a single codeline IMO. It is mostly
>>> abstracted from the OS APIs via Java and our design choices. Where it is
>>> not, we can continue to add pluggable abstractions.
>>>