hadoop-hdfs-dev mailing list archives

From "Ravi Phulari (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HDFS-777) A zero size file is created when SpaceQuota exceeded
Date Thu, 19 Nov 2009 19:47:39 GMT

     [ https://issues.apache.org/jira/browse/HDFS-777?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ravi Phulari resolved HDFS-777.
-------------------------------

    Resolution: Duplicate

 This Jira is a duplicate of HDFS-172.

> A zero size file is created when SpaceQuota exceeded
> ----------------------------------------------------
>
>                 Key: HDFS-777
>                 URL: https://issues.apache.org/jira/browse/HDFS-777
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 0.20.1
>         Environment: Debian GNU/Linux 5.0 
> hadoop-0.20.1
> java version "1.6.0_12"
> Java(TM) SE Runtime Environment (build 1.6.0_12-b04)
> Java HotSpot(TM) Server VM (build 11.2-b01, mixed mode)
>            Reporter: freestyler
>
> The issue can be reproduced by the following steps:
> $ cd hadoop
> $ bin/hadoop fs -mkdir /tmp
> $ bin/hadoop dfsadmin -setSpaceQuota 1m /tmp
> $ bin/hadoop fs -count -q /tmp              
>         none             inf         1048576         1048576            1            0                  0 hdfs://debian:9000/tmp
> $ ls -l hadoop-0.20.1-core.jar
> -rw-r--r-- 1 freestyler freestyler 2682112 2009-09-02 04:59 hadoop-0.20.1-core.jar
> $ bin/hadoop fs -put hadoop-0.20.1-core.jar /tmp/test.jar
> {quote}
> 09/11/19 12:09:35 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /tmp is exceeded: quota=1048576 diskspace consumed=128.0m
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2906)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /tmp is exceeded: quota=1048576 diskspace consumed=128.0m
>         at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:156)
>         at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.updateNumItemsInTree(INodeDirectoryWithQuota.java:127)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:859)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:265)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1436)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1285)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>         at org.apache.hadoop.ipc.Client.call(Client.java:739)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>         ... 3 more
> 09/11/19 12:09:35 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> 09/11/19 12:09:35 WARN hdfs.DFSClient: Could not get block locations. Source file "/tmp/test.jar" - Aborting...
> put: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /tmp is exceeded: quota=1048576 diskspace consumed=128.0m
> {quote}
> Then a zero-size file is left behind, which is bad.
> [freestyler@debian hadoop]$ bin/hadoop fs -lsr  /tmp
> -rw-r--r--   2 freestyler supergroup          0 2009-11-19 12:09 /tmp/test.jar
> Even worse is that when I try to write to this '/tmp/test.jar' using libhdfs (see the code below), it gets stuck in hdfsOpenFile for about 2 minutes.
> $ cat test.c
> {code:c|title=test.c}
> #include "hdfs.h"
> #include <stdio.h>   /* fprintf */
> #include <stdlib.h>  /* exit */
> #include <string.h>  /* strlen */
> int main(int argc, char **argv) {
>     hdfsFS fs = hdfsConnect("default", 0);
>     if(!fs) {
>         fprintf(stderr, "Oops! Failed to connect to hdfs!\n");
>         exit(-1);
>     }
>     const char* writePath = "/tmp/test.jar";
>     {
>         //Write tests
>         fprintf(stderr, "opening %s for writing!\n", writePath);
>         hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
>         if(!writeFile) {
>             fprintf(stderr, "Failed to open %s for writing!\n", writePath);
>             exit(-1);
>         }
>         fprintf(stderr, "Opened %s for writing successfully...\n", writePath);
>         char* buffer = "Hello, World!";
>         tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
>         fprintf(stderr, "Wrote %d bytes\n", num_written_bytes);
>         hdfsCloseFile(fs, writeFile);
>     }
>     hdfsDisconnect(fs);
>     return 0;
> }
> {code}
> $ time ./a.out
> {quote}
> opening /tmp/test.jar for writing!
> Opened /tmp/test.jar for writing successfully...
> Wrote 14 bytes
> Nov 19, 2009 12:14:45 PM org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer run
> WARNING: DataStreamer Exception: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /tmp is exceeded: quota=1048576 diskspace consumed=128.0m
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2906)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /tmp is exceeded: quota=1048576 diskspace consumed=128.0m
>         at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:156)
>         at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.updateNumItemsInTree(INodeDirectoryWithQuota.java:127)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:859)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:265)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1436)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1285)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>         at org.apache.hadoop.ipc.Client.call(Client.java:739)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>         ... 3 more
> Nov 19, 2009 12:14:45 PM org.apache.hadoop.hdfs.DFSClient$DFSOutputStream processDatanodeError
> WARNING: Error Recovery for block null bad datanode[0] nodes == null
> Nov 19, 2009 12:14:45 PM org.apache.hadoop.hdfs.DFSClient$DFSOutputStream processDatanodeError
> WARNING: Could not get block locations. Source file "/tmp/test.jar" - Aborting...
> Exception in thread "main" org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /tmp is exceeded: quota=1048576 diskspace consumed=128.0m
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:96)
>         at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:58)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2906)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2786)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2076)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2262)
> Caused by: org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.DSQuotaExceededException: The DiskSpace quota of /tmp is exceeded: quota=1048576 diskspace consumed=128.0m
>         at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INodeDirectoryWithQuota.java:156)
>         at org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.updateNumItemsInTree(INodeDirectoryWithQuota.java:127)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.java:859)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:265)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem.java:1436)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1285)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>         at org.apache.hadoop.ipc.Client.call(Client.java:739)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2904)
>         ... 3 more
> Call to org/apache/hadoop/fs/FSDataOutputStream::close failed!
> real    1m0.579s
> user    0m0.404s
> sys     0m0.052s
> {quote}
> And the above code did not write successfully:
> $ bin/hadoop fs -lsr  /tmp
> -rw-r--r--   2 freestyler supergroup          0 2009-11-19 12:14 /tmp/test.jar

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

