From: "manas (JIRA)"
To: hdfs-dev@hadoop.apache.org
Date: Wed, 28 Apr 2010 07:11:34 -0400 (EDT)
Subject: [jira] Created: (HDFS-1115) DFSClient unable to create new block

DFSClient unable to create new block
------------------------------------

                 Key: HDFS-1115
                 URL: https://issues.apache.org/jira/browse/HDFS-1115
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: hdfs client
    Affects Versions: 0.20.2
         Environment: OpenSuse 11.2 running as a Virtual Machine on Windows Vista
            Reporter: manas
            Priority: Blocker

Here, input is a folder containing all the .xml files from ./conf. Then trying the command:

    ./bin/hadoop fs -copyFromLocal input input

the following message is displayed:

{noformat}
INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Operation not supported
INFO hdfs.DFSClient: Abandoning block blk_-1884214035513073759_1010
INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
INFO hdfs.DFSClient: Abandoning block blk_5533397873275401028_1010
INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
INFO hdfs.DFSClient: Abandoning block blk_-237603871573204731_1011
INFO hdfs.DFSClient: Exception in createBlockOutputStream java.net.SocketException: Protocol not available
INFO hdfs.DFSClient: Abandoning block blk_-8668593183126057334_1011
WARN hdfs.DFSClient: DataStreamer Exception: java.io.IOException: Unable to create new block.
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2845)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
WARN hdfs.DFSClient: Error Recovery for block blk_-8668593183126057334_1011 bad datanode[0] nodes == null
WARN hdfs.DFSClient: Could not get block locations. Source file "/user/max/input/core-site.xml" - Aborting...
copyFromLocal: Protocol not available
ERROR hdfs.DFSClient: Exception closing file /user/max/input/core-site.xml : java.net.SocketException: Protocol not available
java.net.SocketException: Protocol not available
	at sun.nio.ch.Net.getIntOption0(Native Method)
	at sun.nio.ch.Net.getIntOption(Net.java:178)
	at sun.nio.ch.SocketChannelImpl$1.getInt(SocketChannelImpl.java:419)
	at sun.nio.ch.SocketOptsImpl.getInt(SocketOptsImpl.java:60)
	at sun.nio.ch.SocketOptsImpl.sendBufferSize(SocketOptsImpl.java:156)
	at sun.nio.ch.SocketOptsImpl$IP$TCP.sendBufferSize(SocketOptsImpl.java:286)
	at sun.nio.ch.OptionAdaptor.getSendBufferSize(OptionAdaptor.java:129)
	at sun.nio.ch.SocketAdaptor.getSendBufferSize(SocketAdaptor.java:328)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.createBlockOutputStream(DFSClient.java:2873)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2826)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
	at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
{noformat}

However, only empty files are created on HDFS.
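For context when triaging: the innermost frames show the failure happening while DFSClient queries the send buffer size of the datanode connection as it opens a block output stream. The snippet below is a minimal, standalone sketch of that call pattern, not the DFSClient source; the datanode host and port are placeholder assumptions. On an affected JVM/OS combination, the getSendBufferSize() call is where java.net.SocketException: Protocol not available would surface.

{code:java}
import java.net.InetSocketAddress;
import java.nio.channels.SocketChannel;

// Hypothetical probe (assumed names/address, not part of the report): reproduces
// the NIO socket-option query seen in the trace above, where
// sun.nio.ch.SocketAdaptor.getSendBufferSize(...) fails with SocketException.
public class SendBufferSizeProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder datanode address; adjust to a reachable datanode.
        SocketChannel ch = SocketChannel.open(new InetSocketAddress("localhost", 50010));
        try {
            // Equivalent of the failing step in createBlockOutputStream: ask the
            // channel's socket adaptor for its send buffer size.
            int sendBuf = ch.socket().getSendBufferSize();
            System.out.println("send buffer size = " + sendBuf);
        } finally {
            ch.close();
        }
    }
}
{code}

Running this against a reachable datanode address would show whether the plain NIO socket-option query fails the same way outside of Hadoop on this OpenSuse-on-Vista VM setup.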
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.