hadoop-common-dev mailing list archives

From: Eric Baldeschwieler <eri...@yahoo-inc.com>
Subject: Re: Hadoop Distributed File System requirements on Wiki
Date: Thu, 06 Jul 2006 19:24:41 GMT

On Jul 6, 2006, at 12:02 PM, Paul Sutter wrote:

...
> *Constant size file blocks (#16),  -1*
>
> I vote to keep variable-size blocks, especially because you are adding
> atomic append capabilities (#25). Variable-length blocks create the
> possibility for blocks that contain only whole records. This:
> - improves recoverability for large important files with one or more
> irrevocably lost blocks, and
> - makes it very clean for mappers to process local data blocks

...  I think we can achieve our goal without compromising yours.
Each block can be of any size up to the file's fixed block size.  The
system can be aware of that and provide an API to report gaps and/or
an API option to skip them or see them as NULLs.  This reporting can
be done at the datanode level, allowing us to remove all the size
data & logic at the namenode level.
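
To make that concrete, the datanode-side API could look roughly like
this (Java sketch only; the interface and method names are made up
for illustration, not a proposed signature):

    // Rough sketch -- the datanode already knows each block's actual
    // length, so a reader could ask where the gap is (the space between
    // a block's actual end and the file's fixed block size) and choose
    // to skip it or read it back as NULs.
    public interface BlockGapReporting {

      /** Actual number of bytes stored in the given block; may be less
       *  than the file's fixed block size. */
      long getActualBlockLength(String blockId);

      /** Missing bytes between the block's actual end and the file's
       *  fixed block size. */
      long getGapLength(String blockId, long fixedBlockSize);

      /** How a reader wants gaps presented. */
      enum GapMode { SKIP, AS_NULLS }
    }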

** If you agree, why don't we just add the above annotation to
Konstantin's doc?

> *Recoverability and Availability Goals*
...
> *Backup Scheme*
>
> We might want to start discussion of a backup scheme for HDFS,
> especially given all the courageous rewriting and feature-addition
> likely to occur.

** I agree, this needs to be on the list.  I'm imagining a command
that hardlinks every datanode's (and, if needed, the namenode's) files
into a snapshot directory, and another command that moves all current
state into a snapshot directory and hardlinks a snapshot's state back
into the working directory.  This would be very fast and would not
cost much space in the short term.  Thoughts?  (Yes, hardlinks are a
pain on the PC; we can discuss design later.)
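
Per datanode, the snapshot command would amount to something like the
following (plain Java sketch; the class name and directory handling
are invented for illustration, and the restore command would just do
the reverse):

    // Sketch of "snapshot by hardlink": walk a datanode's data directory
    // and hardlink every block file into a parallel snapshot tree.  The
    // data is never copied, so this is fast and costs little disk space
    // until blocks start being rewritten or deleted.
    import java.io.IOException;
    import java.nio.file.*;
    import java.util.stream.Stream;

    public class HardlinkSnapshot {

      public static void snapshot(Path dataDir, Path snapshotDir) throws IOException {
        try (Stream<Path> files = Files.walk(dataDir)) {
          for (Path src : (Iterable<Path>) files::iterator) {
            Path dst = snapshotDir.resolve(dataDir.relativize(src));
            if (Files.isDirectory(src)) {
              Files.createDirectories(dst);   // recreate the directory tree
            } else {
              Files.createLink(dst, src);     // hardlink, no data copy
            }
          }
        }
      }

      public static void main(String[] args) throws IOException {
        snapshot(Paths.get(args[0]), Paths.get(args[1]));
      }
    }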

> *Rebalancing (#22,#21)*
>
> I would suggest that keeping disk usage balanced is more than a
> performance feature; it's important for the success of running jobs
> with large map outputs or large sorts. Our most common reducer
> failure is running out of disk space during sort, and this is caused
> by imbalanced block allocation.

** Good point.  Any interest in helping us with this one?
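
Just to sketch the kind of policy change it implies (made-up class
names, not the current namenode code): new blocks would be placed
with a bias toward nodes that still have room, e.g. by weighting the
random choice of a target by remaining space:

    // Illustration only: pick a block target at random, weighted by
    // remaining capacity, so fuller nodes receive proportionally fewer
    // new blocks and disk usage stays roughly balanced.
    import java.util.List;
    import java.util.Random;

    public class SpaceWeightedChooser {

      /** Hypothetical view of a datanode's remaining capacity in bytes. */
      public static class NodeInfo {
        final String name;
        final long remainingBytes;
        NodeInfo(String name, long remainingBytes) {
          this.name = name;
          this.remainingBytes = remainingBytes;
        }
      }

      public static NodeInfo choose(List<NodeInfo> nodes, Random rand) {
        long total = 0;
        for (NodeInfo n : nodes) {
          total += n.remainingBytes;
        }
        long pick = (long) (rand.nextDouble() * total);
        for (NodeInfo n : nodes) {
          pick -= n.remainingBytes;
          if (pick < 0) {
            return n;
          }
        }
        return nodes.get(nodes.size() - 1);   // fallback for rounding edge cases
      }
    }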

>
> On 6/30/06, Konstantin Shvachko <shv@yahoo-inc.com> wrote:
>>
>> I've created a Wiki page that summarizes DFS requirements and
>> proposed changes.
>> This is a summary of discussions held in this mailing list and
>> additional internal discussions.
>> The page is here:
>>
>> http://wiki.apache.org/lucene-hadoop/DFS_requirements
>>
>> I see there is an ongoing related discussion in HADOOP-337.
>> We prioritized our goals as
>> (1) Reliability (which includes Recoverability and Availability)
>> (2) Scalability
>> (3) Functionality
>> (4) Performance
>> (5) other
>> But then we gave higher priority to some features, like the append
>> functionality.
>>
>> Happy holidays to everybody.
>>
>> --Konstantin Shvachko
>>

