hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-51) per-file replication counts
Date Mon, 10 Apr 2006 02:56:00 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-51?page=comments#action_12373803 ] 

Doug Cutting commented on HADOOP-51:

> the idiom of conf.getType("config.value",defaultValue) is good for user-defined values,
> but shouldn't the default be skipped for things that are defined in hadoop-default.xml, in

The value from hadoop-default.xml is used in preference to the defaultValue parameter.  The
parameter is only used as a last resort, when no value is found in hadoop-default.xml or any
other config file.
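
For concreteness, here is a minimal sketch of that precedence, assuming the
org.apache.hadoop.conf.Configuration accessors (getInt) and a dfs.replication entry in
hadoop-default.xml; the key names and numbers are illustrative only, not taken from this issue.

    import org.apache.hadoop.conf.Configuration;

    public class ConfDefaultDemo {
        public static void main(String[] args) {
            // The default constructor loads hadoop-default.xml (and hadoop-site.xml).
            Configuration conf = new Configuration();

            // dfs.replication is defined in hadoop-default.xml, so that file's value
            // is returned; the 7 passed here is ignored.
            int replication = conf.getInt("dfs.replication", 7);
            System.out.println("dfs.replication = " + replication);

            // A key that appears in no config file falls back to the in-code default.
            int unset = conf.getInt("my.example.unset.key", 42);
            System.out.println("my.example.unset.key = " + unset);   // prints 42
        }
    }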

> per-file replication counts
> ---------------------------
>          Key: HADOOP-51
>          URL: http://issues.apache.org/jira/browse/HADOOP-51
>      Project: Hadoop
>         Type: New Feature
>   Components: dfs
>     Versions: 0.2
>     Reporter: Doug Cutting
>     Assignee: Konstantin Shvachko
>      Fix For: 0.2
>  Attachments: Replication.patch
>
> It should be possible to specify different replication counts for different files.  Perhaps
> an option when creating a new file should be the desired replication count.  MapReduce should
> take advantage of this feature so that job.xml and job.jar files, which are frequently accessed
> by lots of machines, are more highly replicated than large data files.
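
The description proposes passing the desired count at file-creation time.  Purely as
illustration, a hypothetical sketch of that shape follows; the DfsClientSketch interface, the
createWithReplication method, and the replication values are assumptions made up for this
sketch, not the contents of Replication.patch.  The point is only that the count travels with
the create call, so small, widely-read files can ask for more copies than large data files.

    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical interface: name and signature are assumptions for illustration.
    interface DfsClientSketch {
        // replication: how many datanodes should hold each block of the new file.
        OutputStream createWithReplication(String path, short replication) throws IOException;
    }

    class JobSubmitterSketch {
        // MapReduce could request more copies of small, hot files (job.xml, job.jar)
        // than it does for bulk data files.
        static final short HOT_FILE_REPLICATION = 10;   // illustrative value
        static final short DATA_FILE_REPLICATION = 3;   // illustrative value

        void writeJobFiles(DfsClientSketch dfs) throws IOException {
            try (OutputStream jobXml = dfs.createWithReplication("/jobs/job_0001/job.xml",
                                                                 HOT_FILE_REPLICATION)) {
                // ... write the job configuration here ...
            }
            try (OutputStream data = dfs.createWithReplication("/data/part-00000",
                                                               DATA_FILE_REPLICATION)) {
                // ... write bulk output here ...
            }
        }
    }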

This message is automatically generated by JIRA.
