From: Philip Zeyliger
Date: Mon, 10 Aug 2009 18:32:40 -0700
Message-ID: <15da8a100908101832p4a911835n6a6fe2670a52426@mail.gmail.com>
Subject: Re: Question: how to run hadoop after the project split?
To: hdfs-dev@hadoop.apache.org
Cc: common-dev@hadoop.apache.org, mapreduce-dev@hadoop.apache.org

FWIW, I've been using the following simple shell script:

[0]doorstop:hadoop(149128)$ cat damnit.sh
#!/bin/bash
set -o errexit
set -x
cd hadoop-common
ant binary
cd ..
cd hadoop-hdfs
ant binary
cd ..
cd hadoop-mapreduce
ant binary
cd ..
mkdir -p all/bin all/lib all/contrib
cp hadoop-common/bin/* all/bin
cp **/build/*.jar all/lib || true
cp **/build/*-dev/lib/* all/lib || true
cp **/build/*-dev/contrib/**/*.jar all/contrib

It may very well make sense to have a meta-ant target that aggregates
these things together in a sensible way.

-- Philip

On Mon, Aug 10, 2009 at 6:24 PM, Jay Booth wrote:

> Yeah, I'm hitting the same issues. The patch problems weren't really an
> issue (a same-line-for-same-line conflict on my checkout), but not having
> the webapps is sort of a pain.
>
> It looks like ant bin-package puts the webapps dir in
> HDFS_HOME/build/hadoop-hdfs-0.21.0-dev/webapps, while the daemon is
> expecting build/webapps/hdfs. Does anyone know off the top of their head
> where this is specified, or have a recommended solution? Otherwise I can
> hack away.
>
> On Mon, Aug 10, 2009 at 8:59 PM, Tsz Wo (Nicholas), Sze <
> s29752-hadoopdev@yahoo.com> wrote:
>
> > Hi Todd,
> >
> > Two problems:
> > - The patch in HADOOP-6152 cannot be applied.
> >
> > - I have tried an approach similar to the one described by the slides,
> > but it did not work since Jetty cannot find the webapps directory.
> > See below:
> >
> > 2009-08-10 17:54:41,671 WARN org.mortbay.log: Web application not found
> > file:/D:/@sze/hadoop/common/c2/build/webapps/hdfs
> > 2009-08-10 17:54:41,671 WARN org.mortbay.log: Failed startup of context
> > org.mortbay.jetty.webapp.WebAppContext@1884a40
> > {/,file:/D:/@sze/hadoop/common/c2/build/webapps/hdfs}
> > java.io.FileNotFoundException:
> > file:/D:/@sze/hadoop/common/c2/build/webapps/hdfs
> >     at org.mortbay.jetty.webapp.WebAppContext.resolveWebApp(WebAppContext.java:959)
> >     at org.mortbay.jetty.webapp.WebAppContext.getWebInf(WebAppContext.java:793)
> >     at org.mortbay.jetty.webapp.WebInfConfiguration.configureClassLoader(WebInfConfiguration.java:62)
> >     at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:456)
> >     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> >     at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
> >     at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
> >     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> >     at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
> >     at org.mortbay.jetty.Server.doStart(Server.java:222)
> >     at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
> >     at org.apache.hadoop.http.HttpServer.start(HttpServer.java:464)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.startHttpServer(NameNode.java:362)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.activate(NameNode.java:309)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:300)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:405)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:399)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1165)
> >     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1174)
> >
> > Thanks,
> > Nicholas
> >
> >
> > ----- Original Message ----
> > > From: Todd Lipcon
> > > To: common-dev@hadoop.apache.org
> > > Cc: hdfs-dev@hadoop.apache.org; mapreduce-dev@hadoop.apache.org
> > > Sent: Monday, August 10, 2009 5:30:52 PM
> > > Subject: Re: Question: how to run hadoop after the project split?
> > >
> > > Hey Nicholas,
> > >
> > > Aaron gave a presentation with his best guess at the HUG last month.
> > > His slides are here:
> > > http://www.cloudera.com/blog/2009/07/17/the-project-split/
> > > (starting at slide 16)
> > > (I'd let him reply himself, but he's out of the office this afternoon
> > > ;-) )
> > >
> > > Hopefully we'll get towards something better soon :-/
> > >
> > > -Todd
> > >
> > > On Mon, Aug 10, 2009 at 5:25 PM, Tsz Wo (Nicholas), Sze <
> > > s29752-hadoopdev@yahoo.com> wrote:
> > >
> > > > I have to admit that I don't know the official answer. The hack
> > > > below seems to work:
> > > > - compile all 3 sub-projects;
> > > > - copy everything in hdfs/build and mapreduce/build to common/build;
> > > > - then run hadoop by the scripts in common/bin as before.
> > > >
> > > > Any better idea?
> > > >
> > > > Nicholas Sze
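[Editor's note] Nicholas's hack at the bottom of the thread (build all three
subprojects, then overlay hdfs/build and mapreduce/build onto common/build)
and Jay's webapps-path mismatch can be sketched together in one script. This
is only a sketch under assumptions: the checkout directory names (common,
hdfs, mapreduce) and the versioned hadoop-hdfs-0.21.0-dev path are guessed
from the thread, and since a 2009-era ant build cannot be reproduced here,
the build step is stubbed out and the overlay is demonstrated against stub
directories instead.

```shell
#!/bin/bash
# Sketch of the post-split "overlay the builds" hack from this thread.
set -o errexit

# In a real checkout you would run the builds first; stubbed out here:
#   for proj in common hdfs mapreduce; do (cd "$proj" && ant binary); done

# Stub build trees standing in for real ant output, so the overlay logic
# below is runnable on its own (names assumed from the thread):
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p common/build hdfs/build mapreduce/build
touch hdfs/build/hadoop-hdfs-0.21.0-dev.jar
touch mapreduce/build/hadoop-mapred-0.21.0-dev.jar
mkdir -p hdfs/build/hadoop-hdfs-0.21.0-dev/webapps/hdfs

# Nicholas's hack: copy everything from hdfs/build and mapreduce/build into
# common/build, so the scripts in common/bin see one merged tree as before.
cp -R hdfs/build/. common/build/
cp -R mapreduce/build/. common/build/

# Jay's mismatch: the daemon expects build/webapps/hdfs, while bin-package
# leaves webapps under the versioned *-dev directory. A symlink (target path
# guessed from the thread) bridges the two:
mkdir -p common/build/webapps
ln -sfn "$workdir"/hdfs/build/hadoop-hdfs-0.21.0-dev/webapps/hdfs \
  common/build/webapps/hdfs
```

The copy direction matters: common/build is the destination because the
launch scripts in common/bin resolve jars relative to their own tree.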