hadoop-hdfs-dev mailing list archives

From "Eli Collins (JIRA)" <j...@apache.org>
Subject [jira] Created: (HDFS-1026) Quota checks fail for small files and quotas
Date Sat, 06 Mar 2010 00:05:27 GMT
Quota checks fail for small files and quotas
--------------------------------------------

                 Key: HDFS-1026
                 URL: https://issues.apache.org/jira/browse/HDFS-1026
             Project: Hadoop HDFS
          Issue Type: Bug
    Affects Versions: 0.20.2, 0.20.1, 0.20.3, 0.21.0, 0.22.0
            Reporter: Eli Collins
             Fix For: 0.20.3, 0.22.0


If a directory has a space quota smaller than blockSize * numReplicas, then no file can be
added to it, even when the file itself is smaller than the quota. This is because
FSDirectory#addBlock updates the consumed-space count assuming at least one block will be
written in full. We don't know how much of the block will actually be written when addBlock
is called, and supporting such small quotas is not important, so perhaps we should document
this limitation and log a clear error message instead of making small (less than blockSize
* numReplicas) quotas work.

{code}
// check quota limits and update the space consumed
updateCount(inodes, inodes.length-1, 0,
    fileINode.getPreferredBlockSize() * fileINode.getReplication(), true);
{code}
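To illustrate the arithmetic, here is a minimal standalone sketch (not HDFS code; the class and method names are hypothetical) of the full-block charge that addBlock makes against the quota. With the 0.20 defaults of a 64 MB block size and replication factor 3, a 191 MB quota is exceeded by the 192 MB charge before a single byte is written:

```java
// Sketch of the quota pre-charge behavior described above (assumed names,
// not the actual FSDirectory implementation).
public class QuotaCheckSketch {
    static final long MB = 1024L * 1024L;

    // Mirrors the charge in FSDirectory#addBlock: the quota is debited for a
    // complete block times the replication factor, regardless of how much of
    // the block the client will actually write.
    static boolean fitsInQuota(long spaceQuota, long spaceConsumed,
                               long blockSize, short replication) {
        long charge = blockSize * replication;
        return spaceConsumed + charge <= spaceQuota;
    }

    public static void main(String[] args) {
        long quota = 191 * MB;     // hdfs dfsadmin -setSpaceQuota 191M
        long blockSize = 64 * MB;  // 0.20 default dfs.block.size
        short replication = 3;     // default dfs.replication

        // Even a 64 KB file is rejected: 64 MB * 3 = 192 MB > 191 MB quota.
        System.out.println(fitsInQuota(quota, 0, blockSize, replication)); // false
    }
}
```

This is why the repro below fails: the actual file size never enters the check.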

You can reproduce with the following commands:
{code}
$ dd if=/dev/zero of=temp bs=1000 count=64
$ hadoop fs -mkdir /user/eli/dir
$ hdfs dfsadmin -setSpaceQuota 191M /user/eli/dir
$ hadoop fs -put temp /user/eli/dir  # Causes DSQuotaExceededException
{code}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

