hadoop-hdfs-dev mailing list archives

From Todd Lipcon <t...@cloudera.com>
Subject Re: Overwriting the same block instead of creating a new one
Date Tue, 22 Jun 2010 04:52:54 GMT
HDFS assumes in hundreds of places that blocks never shrink. So, there is no
option to truncate a block.
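To make the delete-and-recreate semantics concrete, here is a minimal toy sketch (not HDFS source; the class and method names are hypothetical) of a namespace where blocks can only be appended, never shrunk. An overwrite drops the old block list and allocates fresh blocks, analogous to removing the INode and creating a new INodeFileUnderConstruction rather than truncating in place:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model (NOT HDFS code): a file is a list of blocks that can only be
// appended, mirroring the "blocks never shrink" invariant Todd describes.
class ToyNamespace {
    static final int BLOCK_SIZE = 4; // tiny block size for illustration

    private final Map<String, List<byte[]>> files = new HashMap<>();

    // Overwrite = discard the old block list and allocate new blocks.
    // The old blocks become garbage for the block manager to reclaim,
    // which is exactly the redundancy the question is asking about.
    void create(String path, byte[] data, boolean overwrite) {
        if (files.containsKey(path) && !overwrite) {
            throw new IllegalStateException("file exists: " + path);
        }
        List<byte[]> blocks = new ArrayList<>();
        for (int off = 0; off < data.length; off += BLOCK_SIZE) {
            int len = Math.min(BLOCK_SIZE, data.length - off);
            byte[] blk = new byte[len];
            System.arraycopy(data, off, blk, 0, len);
            blocks.add(blk);
        }
        files.put(path, blocks); // old list (if any) is simply replaced
    }

    int blockCount(String path) {
        return files.get(path).size();
    }
}
```

With a 4-byte block size, writing 10 bytes allocates 3 blocks; overwriting the same path with 2 bytes replaces them with a single fresh block instead of truncating the old ones, which is what keeps the "blocks never shrink" assumption intact.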

-Todd

On Mon, Jun 21, 2010 at 9:41 PM, Vidur Goyal <vidur@students.iiit.ac.in> wrote:

> Hi All,
>
> In FSNamesystem#startFileInternal, whenever the overwrite flag is set,
> why is the INode removed from the namespace and a new
> INodeFileUnderConstruction created? Why can't we convert the same INode
> to an INodeFileUnderConstruction? We could then start writing to the
> same blocks on the same datanodes (after incrementing the generation
> stamp), followed by either truncating the remaining blocks (if the file
> size decreases) or allocating new blocks (if the file size increases).
> This would reduce data redundancy, lighten the work of the garbage
> collector, and improve security.
>
> vidur
>


-- 
Todd Lipcon
Software Engineer, Cloudera
