hadoop-hdfs-user mailing list archives

From Boris Shkolnik <bo...@yahoo-inc.com>
Subject Re: HDFS Quota
Date Mon, 25 Jan 2010 20:20:24 GMT
>>> /user/root/input is exceeded: namespace quota=-1 file count=2,
From your example it looks like you have a namespace quota (not a diskspace
quota), i.e., a quota on the number of files (not on size).


> A quota-exceeded exception occurred, but a file of size 0 was created.
Yes, I think this is by design. When HDFS creates a file, it doesn't know
how big the file will be, so the diskspace quota is checked when writing to
the file (not on creation).

>Q3:Why did input file's capacity exceed the limit(space quota)?
What is your block size? I don't think it makes sense to set a quota smaller
than a block size.
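The block-size point can be illustrated with a small calculation. This is only a sketch, assuming the default 64 MB block size of Hadoop of that era and the replication factor of 2 from the poster's hdfs-site.xml: at block allocation time the space quota check charges a full block per replica, which would explain the diskspace=134217728 figure in the error for a 64 KB file.

```shell
# Sketch (assumptions: default 64 MB dfs.block.size, dfs.replication=2).
# At block allocation HDFS reserves a full block per replica, and that
# reserved size is what the space quota check compares against the quota.
block_size=$((64 * 1024 * 1024))        # 67108864 bytes
replication=2
reserved=$((block_size * replication))
echo "reserved=$reserved"               # reserved=134217728
quota=192000                            # the poster's -setSpaceQuota value
if [ "$reserved" -gt "$quota" ]; then
  echo "quota exceeded at allocation"   # matches the QuotaExceededException
fi
```

On this reading, any space quota smaller than block size × replication would be exceeded by the very first block allocation, no matter how small the file.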

Boris.

On 1/25/10 1:01 AM, "tatebet@nttdata.co.jp" <tatebet@nttdata.co.jp> wrote:

> Hi everyone,
> 
> I set a space quota for the amount of space used on HDFS,
> but I have some questions.
> 
> ■Question
> Q1: Why does the diskspace usage become megabytes when the input file is only
> kilobytes? (base-1024 calculation in HDFS)
> Q2: Does anyone have information about this problem, where a quota-exceeded
> exception occurs but a file of size 0 is still created?
> Q3: Why was the input file allowed to exceed the limit (space quota)?
>   
> 
> ■Details of question
> ★First time
> Even when I put a file smaller than the capacity set by the space quota,
> it resulted in an error.
> 
> The size of the input file I put is 64000 bytes = 64 KB.
> 
> ・input file
> $ ls -ltr
> $ -rw-r--r-- 1 root root  64000 Jan 21 04:42 xaa
> $ du -h xaa
> 68K     xaa
> 
> ・hdfs-site.xml(replication)
> <property>
>   <name>dfs.replication</name>
>   <value>2</value>
>  </property>
> 
> ・Space Quota
> $ ./bin/hadoop dfsadmin -setSpaceQuota 192000 /user/root/input/
> $ ./bin/hadoop fs -count -q /user/root/input/
>         none  inf  192000  192000  1  0  0  hdfs://drbd-test-vm03/user/root/input
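For readers puzzling over the `-count -q` output, the columns can be read as in the sketch below; the column names are taken from the Hadoop FS shell documentation, and the hard-coded sample line mirrors the output above.

```shell
# Fields of 'hadoop fs -count -q' output, in order:
# QUOTA REM_QUOTA SPACE_QUOTA REM_SPACE_QUOTA DIR_COUNT FILE_COUNT CONTENT_SIZE PATHNAME
out="none inf 192000 192000 1 0 0 hdfs://drbd-test-vm03/user/root/input"
set -- $out   # split the line into positional parameters
echo "space quota: $3, remaining space quota: $4"  # space quota: 192000, remaining space quota: 192000
echo "files: $6, bytes used: $7"                   # files: 0, bytes used: 0
```

So at this point the directory has its full 192000-byte space quota remaining, with no files and no content stored.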
> $ ./bin/hadoop dfs -put input/xaa /user/root/input/
> 10/01/21 19:35:58 WARN hdfs.DFSClient: DataStreamer Exception:
> org.apache.hadoop.hdfs.protocol.QuotaExceededException:
> org.apache.hadoop.hdfs.protocol.QuotaExceededException: The quota of
> /user/root/input is exceeded: namespace quota=-1 file count=2, diskspace
> quota=192000 diskspace=134217728
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
>         at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccesso
> rImpl.java:39)
>         at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructo
> rAccessorImpl.java:27)
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
>         at 
> org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.jav
> a:96)
>         at 
> org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.ja
> va:58)
>         at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClien
> t.java:2875)
>         at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClie
> nt.java:2755)
>         at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:20
> 46)
>         at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.ja
> va:2232)
> Caused by: org.apache.hadoop.ipc.RemoteException:
> org.apache.hadoop.hdfs.protocol.QuotaExceededException: The quota of
> /user/root/input is exceeded: namespace quota=-1 file count=2, diskspace
> quota=192000 diskspace=134217728
>         at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.verifyQuota(INo
> deDirectoryWithQuota.java:161)
>         at 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectoryWithQuota.updateNumItemsI
> nTree(INodeDirectoryWithQuota.java:134)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.updateCount(FSDirectory.jav
> a:859)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory.addBlock(FSDirectory.java:2
> 65)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.allocateBlock(FSNamesystem
> .java:1427)
>         at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNames
> ystem.java:1274)
>         at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.j
> ava:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:396)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> 
>         at org.apache.hadoop.ipc.Client.call(Client.java:739)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.j
> ava:25)
>         at java.lang.reflect.Method.invoke(Method.java:597)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocation
> Handler.java:82)
>         at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandle
> r.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at 
> org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClien
> t.java:2873)
>         ... 3 more
> 
> 10/01/21 19:35:58 WARN hdfs.DFSClient: Error Recovery for block null bad
> datanode[0] nodes == null
> 10/01/21 19:35:58 WARN hdfs.DFSClient: Could not get block locations. Source
> file "/user/root/input/xaa" - Aborting...
> put: org.apache.hadoop.hdfs.protocol.QuotaExceededException: The quota of
> /user/root/input is exceeded: namespace quota=-1 file count=2, diskspace
> quota=192000 diskspace=134217728
> 
> In the error messages above, it says diskspace=134217728.
> I think this is 134217728 / 1024^2 = 128 MBytes.
> 
> I understand that this is why the quota (192000 = 192 KBytes) of
> /user/root/input was exceeded.
> 
> Q1: Why does the diskspace usage become megabytes when the input file is only
> kilobytes? (base-1024 calculation in HDFS)
> 
> [root@drbd-test-vm03 current]# ./bin/hadoop dfs -lsr /user/root/input/xaa
> -rw-r--r--   2 root supergroup          0 2010-01-21 19:35
> /user/root/input/xaa
> 
> A quota-exceeded exception occurred, but a file of size 0 was created.
> 
> This seems to be a problem similar to the one described at the following URL:
> http://issues.apache.org/jira/browse/HDFS-172
> 
> Has this problem not been solved yet?
> 
> Q2: Does anyone have information about this problem, where a quota-exceeded
> exception occurs but a file of size 0 is still created?
> 
> ★From the second attempt onward
> The result from the second attempt onward was different from the first.
> From the second attempt onward, no error occurs even when the quota is
> exceeded.
> 
> The input file's size is 64000 bytes = 64 KB.
> The replication factor is 2.
> Calculated result:
> 102400 - (64000 × 2) = -25600 <-- Why does it become negative without causing
> an error?
> 
> I thought that the space quota was a limit on the number of bytes used by
> files in that directory.
> However, the result shows that my assumption was wrong.
> 
> Q3: Why was the input file allowed to exceed the limit (space quota)?
> 
> [root@drbd-test-vm03 current]# ./bin/hadoop dfsadmin -setSpaceQuota 100K
> /user/root/input/
> [root@drbd-test-vm03 current]# ./bin/hadoop fs -count -q /user/root/input/
>         none  inf  102400  102400  1  0  0  hdfs://drbd-test-vm03/user/root/input
> [root@drbd-test-vm03 current]# ./bin/hadoop dfs -put xaa /user/root/input/
> [root@drbd-test-vm03 current]# ./bin/hadoop fs -count -q /user/root/input/
>         none  inf  102400  -25600  1  1  64000  hdfs://drbd-test-vm03/user/root/input
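The negative remaining space quota shown in the `-count -q` output is consistent with the poster's own arithmetic: once the 64000-byte file is stored with replication 2, usage exceeds the 100 K quota. A quick check, using only the numbers from this thread:

```shell
space_quota=$((100 * 1024))          # 102400, set via -setSpaceQuota 100K
file_size=64000                      # size of xaa
replication=2                        # dfs.replication
remaining=$((space_quota - file_size * replication))
echo "$remaining"                    # -25600, matching REM_SPACE_QUOTA above
```

So the accounting itself is correct; the open question in the thread is why the put was not rejected at write time on this second attempt.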
> 
> Best regards,
> Tadashi.
> 

