From: Harish Mallipeddi
Date: Sat, 20 Jun 2009 22:10:09 +0530
Subject: Re: Help with EC2 using HDFS client (port open issue?)
To: core-user@hadoop.apache.org

Hi Tim,

I don't know the answer to your specific problem, but IIRC all ports on EC2 machines (within the same security group?) are open and reachable from within the EC2 environment. You only have to open ports (via ec2-authorize) if you want to reach them from outside EC2. So typically for Hadoop clusters you open the ports corresponding to the web admin consoles (50030, 50060, 50070, etc.) so you can see them from your browser. I haven't used EC2 for a year now, so things might have changed.

Also, do use the 'public' hostnames for configuration purposes - they resolve to internal IPs from inside EC2 and to external IPs from outside EC2.
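For the web consoles, the commands would look roughly like this (untested on my side; I'm assuming the 'hdfs-cluster' group from your own ec2-authorize command, and you would probably want to replace 0.0.0.0/0 with your own IP range):

    ec2-authorize hdfs-cluster -P tcp -p 50070 -s 0.0.0.0/0   # NameNode web UI
    ec2-authorize hdfs-cluster -P tcp -p 50030 -s 0.0.0.0/0   # JobTracker web UI
    ec2-authorize hdfs-cluster -P tcp -p 50060 -s 0.0.0.0/0   # TaskTracker web UI

Port 54310 itself shouldn't need an ec2-authorize rule as long as both machines are in the same security group.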
More on this: http://mail-archives.apache.org/mod_mbox/hadoop-core-user/200905.mbox/%3CDFD95197F3AE8C45B0A96C2F4BA3A2556BF123EE37@SC-MBXC1.TheFacebook.com%3E

- Harish

On Sat, Jun 20, 2009 at 6:12 PM, tim robertson wrote:

> Hi all,
>
> I am using Hadoop to build a read-only store for Voldemort on EC2 and
> for some reason can't get it to talk across the nodes.
> I know this is a specific EC2/Linux setup question, but I was hoping
> someone could help me, as I am sure all the apps built on Hadoop are
> doing this - I'm not very hot on Linux.
>
> The client is calling
> hdfs://ip-10-244-191-175.ec2.internal:54310/user/root/output/fullPD/stage2/node-0
>
> and I have run
>   ec2-authorize hdfs-cluster -p 54310
> (but I am not sure this is the way to open the port)
>
> I'm using the Cloudera AMI.
> Full trace is below and any pointers are greatly appreciated!
>
> Cheers
>
> Tim
>
>
> 09/06/20 06:54:09 ERROR gui.ReadOnlyStoreManagementServlet: Error while performing operation.
> java.net.ConnectException: Call to ip-10-244-191-175.ec2.internal/10.244.191.175:54310 failed on
> connection exception: java.net.ConnectException: Connection refused
>         at org.apache.hadoop.ipc.Client.wrapException(Client.java:743)
>         at org.apache.hadoop.ipc.Client.call(Client.java:719)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:216)
>         at org.apache.hadoop.dfs.$Proxy6.getProtocolVersion(Unknown Source)
>         at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:348)
>         at org.apache.hadoop.dfs.DFSClient.createRPCNamenode(DFSClient.java:103)
>         at org.apache.hadoop.dfs.DFSClient.<init>(DFSClient.java:172)
>         at org.apache.hadoop.dfs.DistributedFileSystem.initialize(DistributedFileSystem.java:67)
>         at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1328)
>         at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:56)
>         at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1343)
>         at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:213)
>         at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>         at voldemort.store.readonly.fetcher.HdfsFetcher.fetch(HdfsFetcher.java:82)
>         at voldemort.server.http.gui.ReadOnlyStoreManagementServlet.doFetch(ReadOnlyStoreManagementServlet.java:162)
>         at voldemort.server.http.gui.ReadOnlyStoreManagementServlet.doPost(ReadOnlyStoreManagementServlet.java:125)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>         at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:502)
>         at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:389)
>         at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:765)
>         at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
>         at org.mortbay.jetty.Server.handle(Server.java:326)
>         at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:534)
>         at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:879)
>         at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:747)
>         at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:218)
>         at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
>         at org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:409)
>         at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:520)

--
Harish Mallipeddi
http://blog.poundbang.in
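P.S. Since the error is "Connection refused" (a blocked port usually shows up as a timeout instead), it might be worth checking whether the namenode is actually listening on that host:port before touching the security group. Something along these lines - the config path is a guess based on the Cloudera AMI, so adjust as needed:

    # from the Voldemort node: can the HDFS client reach the namenode at all?
    hadoop fs -ls hdfs://ip-10-244-191-175.ec2.internal:54310/

    # on the namenode: is anything bound on 54310, and on which address?
    netstat -tlnp | grep 54310

    # does fs.default.name match the host:port the client is using?
    grep -A1 fs.default.name /etc/hadoop/conf/hadoop-site.xml

If fs.default.name points at localhost or a different hostname, the namenode won't be reachable as ip-10-244-191-175.ec2.internal even with every port open.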