hadoop-common-dev mailing list archives

From "Doug Cutting (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-340) Using wildcards in config pathnames
Date Fri, 07 Jul 2006 12:51:30 GMT
    [ http://issues.apache.org/jira/browse/HADOOP-340?page=comments#action_12419700 ] 

Doug Cutting commented on HADOOP-340:

Directories listed in dfs.data.dir that do not exist on a host are ignored.  So, instead of a wildcard,
you can simply list all possible names used anywhere in your cluster, and only those that actually
exist on a given host will be used.  The same applies to mapred.local.dir.  Does that suffice?

> Using wildcards in config pathnames
> -----------------------------------
>          Key: HADOOP-340
>          URL: http://issues.apache.org/jira/browse/HADOOP-340
>      Project: Hadoop
>         Type: Improvement

>   Components: conf
>     Versions: 0.4.0
>  Environment: a cluster with different disk setups
>     Reporter: Johan Oskarson
>     Priority: Minor

> In our cluster there are machines with very different disk setups.
> I've solved this by not rsyncing hadoop-site.xml, but as you probably understand this
> means new settings will not get copied properly.
> I'd like to be able to use wildcards in the dfs.data.dir path for example:
> <property>
>   <name>dfs.data.dir</name>
>   <value>/home/hadoop/disk*/dfs/data</value>
> </property>
> then every disk mounted in that directory would be used

This message is automatically generated by JIRA.
