From: Steve Loughran <stevel@apache.org>
Date: Wed, 21 Apr 2010 10:31:17 +0100
To: common-user@hadoop.apache.org
Subject: Re: Trouble copying local file to hdfs
Message-ID: <4BCEC5E5.5050406@apache.org>
In-Reply-To: <1281f1817cc.-2434269201981041555.-4271435516673101065@zoho.com>

manas.tomar wrote:
> I have set up Hadoop on an OpenSUSE 11.2 VM using VirtualBox. I ran the Hadoop examples in standalone mode successfully.
> Now I want to run in distributed mode using 2 nodes.
> Hadoop starts fine and jps lists all the daemons, but when I try to put any file or run any example, I get an error. For example:
>
> hadoop@master:~/hadoop> ./bin/hadoop dfs -copyFromLocal ./input inputsample
> 10/04/17 14:42:46 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Operation not supported
> 10/04/17 14:42:46 INFO hdfs.DFSClient: Abandoning block blk_8951413748418693186_1080
> ....
> 10/04/17 14:43:04 INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
> 10/04/17 14:43:04 INFO hdfs.DFSClient: Abandoning block blk_838428157309440632_1081
> 10/04/17 14:43:10 WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> 10/04/17 14:43:10 WARN hdfs.DFSClient: Error Recovery for block blk_838428157309440632_1081 bad datanode[0] nodes == null
> 10/04/17 14:43:10 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/inputsample/check" - Aborting...
> copyFromLocal: Protocol not available
> 10/04/17 14:43:10 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/inputsample/check : java.net.SocketException: Protocol not available
> java.net.SocketException: Protocol not available
> 	at sun.nio.ch.Net.getIntOption0(Native Method)
> 	at sun.nio.ch.Net.getIntOption(Net.java:178)
> 	at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
> 	at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
> 	at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
> 	at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
> 	at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
> 	at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
> 	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> I can see the files on HDFS through the web interface, but they are empty.
> Any suggestion on how I can get over this?
That is a very low-level socket error. I would file a bug report against Hadoop and include full details of your machines, as something very odd in your underlying machine or network stack is stopping Hadoop from tweaking TCP buffer sizes.
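For reference, the call that blows up in the trace is the JDK asking the OS for the socket's SO_SNDBUF through the NIO socket adaptor. A standalone sketch like the one below (my own test class, not Hadoop code) exercises the same getSendBufferSize() path over a loopback connection; if it also throws "Protocol not available", the fault lies in the guest JVM/kernel combination rather than in Hadoop:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

public class SendBufCheck {

    // Opens a loopback connection and queries SO_SNDBUF -- the same
    // JDK call (SocketAdaptor.getSendBufferSize) that fails in the
    // stack trace above. Returns the reported buffer size in bytes.
    static int probeSendBufferSize() throws IOException {
        ServerSocketChannel server = ServerSocketChannel.open();
        try {
            // Bind to an ephemeral port on loopback
            server.socket().bind(new InetSocketAddress("127.0.0.1", 0));
            int port = server.socket().getLocalPort();

            // Connect a client channel, much as DFSClient does to a datanode
            SocketChannel client =
                SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
            try {
                // The failing call from the trace
                return client.socket().getSendBufferSize();
            } finally {
                client.close();
            }
        } finally {
            server.close();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println("SO_SNDBUF = " + probeSendBufferSize() + " bytes");
    }
}
```

Run it inside the VirtualBox guest with the same JVM the datanode uses; on a healthy stack it prints a positive buffer size.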