Subject: Re: fuse_dfs and iozone
From: Eli Collins
To: hdfs-user@hadoop.apache.org, das@gibtsdochgar.net
Date: Tue, 19 Jan 2010 10:53:38 -0800
List-Id: hdfs-user@hadoop.apache.org

On Sun, Jan 17, 2010 at 5:44 PM, Klaus Nagel wrote:
> Hi everybody - new week, new problem... hope you can help me another
> time...
>
> I want to run iozone in a fuse_dfs mounted directory (command: iozone -a)
> but that doesn't work. (Hadoop version 0.20.1, system Debian Lenny 32-bit
> with a 2.6.26-2-686 kernel, only one datanode configured.)
>
> ### Output from iozone:
> Can not open temp file: iozone.tmp
> open: Input/output error
>
> ### fuse_dfs_wrapper output in debug mode:
> LOOKUP /temp/iozone.tmp
>    unique: 26, error: -2 (No such file or directory), outsize: 16
> unique: 27, opcode: CREATE (35), nodeid: 6, insize: 59
>    NODEID: 7
>    unique: 27, error: 0 (Success), outsize: 152
> unique: 28, opcode: GETATTR (3), nodeid: 6, insize: 56
>    unique: 28, error: 0 (Success), outsize: 112
> unique: 29, opcode: SETATTR (4), nodeid: 7, insize: 128
>    unique: 29, error: 0 (Success), outsize: 112
> unique: 30, opcode: FLUSH (25), nodeid: 7, insize: 64
> FLUSH[167051800]
>    unique: 30, error: 0 (Success), outsize: 16
> unique: 31, opcode: RELEASE (18), nodeid: 7, insize: 64
> RELEASE[167051800] flags: 0x8001
> unique: 32, opcode: UNLINK (10), nodeid: 6, insize: 51
> Exception in thread "Thread-6" org.apache.hadoop.ipc.RemoteException:
> java.io.IOException: Could not complete write to file /temp/iozone.tmp by DFSClient_1838156575
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>
>         at org.apache.hadoop.ipc.Client.call(Client.java:739)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.complete(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.complete(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3226)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3150)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
> Call to org/apache/hadoop/fs/FSDataOutputStream::close failed!
> ERROR: dfs problem - could not close file_handle(166166016) for /temp/iozone.tmp fuse_impls_release.c:59
>    unique: 31, error: 0 (Success), outsize: 16
>  CREATE[167051800] flags: 0x8041 /temp/iozone.tmp
> UNLINK /temp/iozone.tmp
>    unique: 32, error: 0 (Success), outsize: 16
>
> ### namenode's log:
> 2010-01-18 02:22:11,535 INFO org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: file /temp/iozone.tmp is closed by DFSClient_-585130158
> 2010-01-18 02:22:11,539 WARN org.apache.hadoop.hdfs.StateChange: DIR* NameSystem.completeFile: failed to complete /temp/iozone.tmp because dir.getFileBlocks() is null and pendingFile is null
> 2010-01-18 02:22:11,539 INFO org.apache.hadoop.ipc.Server: IPC Server handler 7 on 9000, call complete(/temp/iozone.tmp, DFSClient_-585130158) from 10.8.0.1:49371: error: java.io.IOException: Could not complete write to file /temp/iozone.tmp by DFSClient_-585130158
> java.io.IOException: Could not complete write to file /temp/iozone.tmp by DFSClient_-585130158
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.complete(NameNode.java:449)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 2010-01-18 02:22:11,590 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit: ugi=hadoop,hadoop,fuse  ip=/10.8.0.1  cmd=delete  src=/temp/iozone.tmp  dst=null  perm=null
>
> ### Nothing is logged about that operation in the datanode.
>
> ...that was about all. I tested it with many different iozone options,
> but got the same result every time... (creating a file with e.g. echo ... > file
> works perfectly)
>
> ...hope someone knows about that problem...
>
> thanks for any reply: klaus

Looks like the file is being unlinked while the client is still completing the creation of the file. I'd need to see all of the fuse_dfs_wrapper output, and what iozone was doing, to really understand what's going on.

Thanks,
Eli
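For context on that hypothesis: HDFS tracks a file being written as a "pending file" on the NameNode, and deleting the path while a writer still has it open discards that pending state, so the writer's later close() fails with the same "Could not complete write" error (matching the "pendingFile is null" warning in the namenode log). A minimal sketch of that sequence against the HDFS Java API follows; this is an illustration of the suspected race, not code from the thread, and it assumes a reachable cluster picked up from the default Configuration:

```java
// Sketch of the suspected unlink-before-close race (assumes a running
// HDFS cluster configured via the default Hadoop Configuration).
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UnlinkBeforeClose {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path p = new Path("/temp/iozone.tmp");

        // Create the file and write some data; the stream is still open,
        // so the NameNode holds the file in a pending (under-construction) state.
        FSDataOutputStream out = fs.create(p);
        out.write(new byte[4096]);

        // Deleting the path now removes the pending file on the NameNode...
        fs.delete(p, false);

        // ...so completing the write fails: close() throws an IOException
        // ("Could not complete write to file ..."), as seen in the logs.
        out.close();
    }
}
```

If fuse_dfs dispatches the UNLINK before the close triggered by RELEASE has finished on the NameNode, it would produce exactly this failure mode; the full debug trace Eli asks for would show whether the operations actually interleave that way.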