Subject: Re: fuse_dfs dfs problem
From: fenix.serega@gmail.com
To: hdfs-user@hadoop.apache.org
Date: Thu, 21 Jan 2010 16:49:05 +0200

Thanks.

2010/1/20 Eli Collins :
> Hey Sergey,
>
> Here's a link to the jira: http://issues.apache.org/jira/browse/HDFS-856
>
> You can find a patch under the file attachment section, here's a direct link:
>
> http://issues.apache.org/jira/secure/attachment/12429027/HADOOP-856.patch
>
> Thanks,
> Eli
>
> On Wed, Jan 20, 2010 at 8:03 AM,  wrote:
>> Hello, Eli, could you please point me to where I can get this patch (from
>> the jira) to fix this issue?
>>
>> Regards,
>> Sergey S. Ropchan
>>
>> 2010/1/13 Eli Collins :
>>> Hey Klaus,
>>>
>>> That's HDFS-856; you can apply the patch from the jira. The fix will
>>> also be in the next cdh2 release.
>>>
>>> Thanks,
>>> Eli
>>>
>>> On Tue, Jan 12, 2010 at 8:23 PM, Klaus Nagel wrote:
>>>> Hello, I am using Hadoop 0.20.1 and have a little problem with it... hope
>>>> someone can help...
>>>>
>>>> I have a 3-node setup, and set dfs.replication and dfs.replication.max to
>>>> 1 (in the file hdfs-site.xml).
>>>> That works great when putting a file to the Hadoop filesystem
>>>> (e.g.
>>>> ./hadoop fs -put ~/debian-503-i386-businesscard.iso abc.iso)
>>>>
>>>> When I try that with fuse_dfs I get the following error message from the
>>>> fuse_dfs_wrapper.sh script:
>>>>
>>>> LOOKUP /temp/test.test
>>>>    unique: 21, error: -2 (No such file or directory), outsize: 16
>>>> unique: 22, opcode: CREATE (35), nodeid: 7, insize: 58
>>>> WARN: hdfs does not truly support O_CREATE && O_EXCL
>>>> Exception in thread "Thread-6" org.apache.hadoop.ipc.RemoteException:
>>>> java.io.IOException: failed to create file /temp/test.test on client
>>>> 10.8.0.1.
>>>> Requested replication 3 exceeds maximum 1
>>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
>>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
>>>>        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
>>>> ...
>>>> ...
>>>> ...
>>>>
>>>> ...same messages in the namenode log:
>>>> 2010-01-13 04:36:57,183 WARN org.apache.hadoop.hdfs.StateChange: DIR*
>>>> NameSystem.startFile: failed to create file /temp/test.test on client
>>>> 10.8.0.1.
>>>> Requested replication 3 exceeds maximum 1
>>>> 2010-01-13 04:36:57,183 INFO org.apache.hadoop.ipc.Server: IPC Server
>>>> handler 4 on 9000, call create(/temp/test.test, rwxr-xr-x,
>>>> DFSClient_814881830$
>>>> Requested replication 3 exceeds maximum 1
>>>> java.io.IOException: failed to create file /temp/test.test on client
>>>> 10.8.0.1.
>>>> Requested replication 3 exceeds maximum 1
>>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1074)
>>>>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFile(FSNamesystem.java:977)
>>>>        at org.apache.hadoop.hdfs.server.namenode.NameNode.create(NameNode.java:377)
>>>> ...
>>>> ...
>>>>
>>>> ...hope someone can help me solve that problem,
>>>> best regards: Klaus
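
For reference, the replication cap described in the thread lives in hdfs-site.xml. A minimal sketch with the values Klaus mentions (dfs.replication and dfs.replication.max both set to 1); note the stack trace shows the fuse_dfs client still requesting the default replication of 3, which is the behavior HDFS-856 patches:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- Default replication factor for newly created files -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <!-- Hard upper bound on replication, enforced by the namenode -->
  <property>
    <name>dfs.replication.max</name>
    <value>1</value>
  </property>
</configuration>
```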