hadoop-hdfs-issues mailing list archives

From "Jing Zhao (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4847) hdfs dfs -count of a .snapshot directory fails claiming file does not exist
Date Tue, 28 May 2013 17:51:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13668488#comment-13668488 ]

Jing Zhao commented on HDFS-4847:
---------------------------------

Yes, I will check the snapshot document to make sure we mention that ".snapshot" is not a valid
directory.
[~schu], I will mark this as invalid for now. Feel free to create a new jira if you think
providing a more accurate error message to end users is necessary.
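To illustrate the point (this is a toy sketch, NOT Hadoop's actual code; all names below are hypothetical): ".snapshot" is a virtual path component that is synthesized when listing a snapshottable directory, so it never exists as a stored inode. An operation like getContentSummary (which backs -count and -du) that insists on resolving the path to a real inode therefore reports "File does not exist", even though -ls on the same path succeeds.

```java
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of the observed behavior -- not Hadoop code.
public class DotSnapshotSketch {
    static final String DOT_SNAPSHOT = ".snapshot";
    // Real namespace entries (stored inodes).
    static final Set<String> inodes = new HashSet<>();
    // Snapshots of the snapshottable directory (hypothetical names).
    static final List<String> snapshots = Arrays.asList("s1", "s2", "s3");

    // -ls special-cases the virtual component and lists snapshots directly.
    static List<String> list(String path) {
        if (path.endsWith("/" + DOT_SNAPSHOT)) {
            return snapshots;            // no stored inode is consulted
        }
        requireInode(path);
        return Collections.emptyList();
    }

    // -count / -du style lookup: demands a stored inode, which ".snapshot"
    // does not have, so it fails.
    static void contentSummary(String path) {
        requireInode(path);
    }

    static void requireInode(String path) {
        if (!inodes.contains(path)) {
            throw new RuntimeException("File does not exist: " + path);
        }
    }

    public static void main(String[] args) {
        inodes.add("/tmp");
        System.out.println(list("/tmp/.snapshot"));   // listing succeeds
        try {
            contentSummary("/tmp/.snapshot");         // throws: no inode
        } catch (RuntimeException e) {
            System.out.println("count: " + e.getMessage());
        }
    }
}
```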
                
> hdfs dfs -count of a .snapshot directory fails claiming file does not exist
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-4847
>                 URL: https://issues.apache.org/jira/browse/HDFS-4847
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: snapshots
>    Affects Versions: 3.0.0
>            Reporter: Stephen Chu
>              Labels: snapshot, snapshots
>
> I successfully allow snapshots for /tmp and create three snapshots. I verify that the three snapshots are in /tmp/.snapshot.
> However, when I attempt _hdfs dfs -count /tmp/.snapshot_ I get a file does not exist exception.
> Running -count on /tmp finds /tmp successfully.
> {code}
> schu-mbp:~ schu$ hadoop fs -ls /tmp/.snapshot
> 2013-05-24 10:27:10,070 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> Found 3 items
> drwxr-xr-x   - schu supergroup          0 2013-05-24 10:26 /tmp/.snapshot/s1
> drwxr-xr-x   - schu supergroup          0 2013-05-24 10:27 /tmp/.snapshot/s2
> drwxr-xr-x   - schu supergroup          0 2013-05-24 10:27 /tmp/.snapshot/s3
> schu-mbp:~ schu$ hdfs dfs -count /tmp
> 2013-05-24 10:27:20,510 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>           12            0                  0 /tmp
> schu-mbp:~ schu$ hdfs dfs -count /tmp/.snapshot
> 2013-05-24 10:27:30,397 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> count: File does not exist: /tmp/.snapshot
> schu-mbp:~ schu$ hdfs dfs -count -q /tmp/.snapshot
> 2013-05-24 10:28:23,252 WARN  [main] util.NativeCodeLoader (NativeCodeLoader.java:<clinit>(62)) - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
> count: File does not exist: /tmp/.snapshot
> schu-mbp:~ schu$
> {code}
> In the NN logs, I see:
> {code}
> 2013-05-24 10:27:30,857 INFO  [IPC Server handler 6 on 8020] FSNamesystem.audit (FSNamesystem.java:logAuditEvent(6143)) - allowed=true	ugi=schu (auth:SIMPLE)	ip=/127.0.0.1	cmd=getfileinfo	src=/tmp/.snapshot	dst=null	perm=null
> 2013-05-24 10:27:30,891 ERROR [IPC Server handler 7 on 8020] security.UserGroupInformation (UserGroupInformation.java:doAs(1492)) - PriviledgedActionException as:schu (auth:SIMPLE) cause:java.io.FileNotFoundException: File does not exist: /tmp/.snapshot
> 2013-05-24 10:27:30,891 INFO  [IPC Server handler 7 on 8020] ipc.Server (Server.java:run(1864)) - IPC Server handler 7 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getContentSummary from 127.0.0.1:49738: error: java.io.FileNotFoundException: File does not exist: /tmp/.snapshot
> java.io.FileNotFoundException: File does not exist: /tmp/.snapshot
> 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.getContentSummary(FSDirectory.java:2267)
> 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getContentSummary(FSNamesystem.java:3188)
> 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getContentSummary(NameNodeRpcServer.java:829)
> 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getContentSummary(ClientNamenodeProtocolServerSideTranslatorPB.java:726)
> 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:48057)
> 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:527)
> 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1033)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1842)
> 	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1838)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:396)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1489)
> 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1836)
> {code}
> Likewise, the _hdfs dfs -du_ command fails with the same problem.
> Hadoop version:
> {code}
> schu-mbp:~ schu$ hadoop version
> Hadoop 3.0.0-SNAPSHOT
> Subversion git://github.com/apache/hadoop-common.git -r ccaf5ea09118eedbe17fd3f5b3f0c516221dd613
> Compiled by schu on 2013-05-24T04:45Z
> From source with checksum ee94d984bcf5cc38ca12a1efedb68fc
> This command was run using /Users/schu/hadoop-3.0.0-SNAPSHOT/share/hadoop/common/hadoop-common-3.0.0-SNAPSHOT.jar
> {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
