Mailing-List: user@hadoop.apache.org (run by ezmlm; contact user-help@hadoop.apache.org)
From: Harsh J
Date: Thu, 7 Nov 2013 05:18:42 +0530
Subject: Re: access to hadoop cluster to post tasks remotely
To: user@hadoop.apache.org, gerasimov@mlab.cs.msu.su

Data in HDFS is read and written via the individual DataNodes' port 50010, which you would also need to open up to avoid these errors. Data isn't written or read through the NameNode.

On Thu, Nov 7, 2013 at 4:50 AM, Sergey Gerasimov wrote:
> Hello,
>
> I have a problem posting a jar to my cluster remotely from a client machine
> located somewhere on the Web. I use stock hadoop-1.2.1.
>
> I installed Hadoop on the client machine (same version as on the cluster) and
> configured fs.default.name and mapred.job.tracker.
>
> Remote access to DFS works fine. I can successfully use "hadoop fs" commands.
>
> But when I submit a job, for example:
>
> hadoop jar hadoop-examples-1.2.1.jar sleep 1
>
> I see output like:
>
> 13/11/07 02:44:42 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> xx.xx.xx.xx:50010 java.net.ConnectException: Connection timed out
> 13/11/07 02:44:42 INFO hdfs.DFSClient: Abandoning blk_1089181243677159149_31717
> 13/11/07 02:44:42 INFO hdfs.DFSClient: Excluding datanode xx.xx.xx.xx:50010
> 13/11/07 02:45:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> xx.xx.xx.xx:50010 java.net.ConnectException: Connection timed out
> 13/11/07 02:45:45 INFO hdfs.DFSClient: Abandoning blk_6550586867464091073_31717
> 13/11/07 02:45:45 INFO hdfs.DFSClient: Excluding datanode xx.xx.xx.xx:50010
> 13/11/07 02:46:48 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> xx.xx.xx.xx:50010 java.net.ConnectException: Connection timed out
> 13/11/07 02:46:48 INFO hdfs.DFSClient: Abandoning blk_5814098597599107248_31717
> 13/11/07 02:46:48 INFO hdfs.DFSClient: Excluding datanode xx.xx.xx.xx:50010
> 13/11/07 02:47:51 INFO hdfs.DFSClient: Exception in createBlockOutputStream
> xx.xx.xx.xx:50010 java.net.ConnectException: Connection timed out
> 13/11/07 02:47:51 INFO hdfs.DFSClient: Abandoning blk_6368219524592897749_31717
>
> The same jar submitted from inside the cluster runs fine.
>
> The network where the cluster lives is protected by a firewall, with only
> the NameNode and JobTracker ports opened externally.
> iptables is off on all nodes.
>
> I have no idea what causes these messages in the log. Until now I was sure
> that the only entry points to a Hadoop cluster were the NameNode and
> JobTracker ports. Both are open.
>
> Please help!

-- 
Harsh J
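To verify Harsh's point from the client side, each port can be probed directly before resubmitting the job. A minimal sketch in Python; the hostnames below are hypothetical placeholders, not the poster's actual machines, and 50010 is Hadoop 1.x's default DataNode data-transfer port:

```python
import socket

def probe(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within
    timeout, False on refusal or timeout (the symptom in the log above)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example usage (placeholder hostnames):
#   probe("namenode.example.com", 8020)     # NameNode RPC: open in the firewall
#   probe("datanode1.example.com", 50010)   # DataNode data transfer: times out
#                                           # until the firewall admits it
```

A probe that times out against every DataNode, while the NameNode and JobTracker ports answer, matches the situation described: the fix is in the site firewall (admitting TCP 50010 on each DataNode host), not in Hadoop itself.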