hadoop-common-user mailing list archives

From "Rafael Turk" <rafael.t...@gmail.com>
Subject Re: Hadoop 4 disks per server
Date Thu, 31 Jul 2008 01:07:54 GMT
Thank you all! It worked like a charm.

On Wed, Jul 30, 2008 at 3:05 PM, Konstantin Shvachko <shv@yahoo-inc.com> wrote:

> On hdfs see
> http://wiki.apache.org/hadoop/FAQ#15
> In addition to James's suggestion, you can also specify dfs.name.dir
> for the name-node to store extra copies of the namespace.
>
>
>
> James Moore wrote:
>
>> On Tue, Jul 29, 2008 at 6:37 PM, Rafael Turk <rafael.turk@gmail.com>
>> wrote:
>>
>>> Hi All,
>>>
>>>  I'm setting up a cluster with 4 disks per server. Is there any way to
>>> make
>>> Hadoop aware of this setup and take benefits from that?
>>>
>>
>> I believe all you need to do is give four directories (one on each
>> drive) as the value for dfs.data.dir and mapred.local.dir.  Something
>> like:
>>
>> <property>
>>  <name>dfs.data.dir</name>
>>
>>  <value>/drive1/myDfsDir,/drive2/myDfsDir,/drive3/myDfsDir,/drive4/myDfsDir</value>
>>  <description>Determines where on the local filesystem a DFS data node
>>  should store its blocks.  If this is a comma-delimited
>>  list of directories, then data will be stored in all named
>>  directories, typically on different devices.
>>  Directories that do not exist are ignored.
>>  </description>
>> </property>
>>
>>
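Putting the two replies together, a sketch of the relevant hadoop-site.xml settings for a four-drive box, assuming the same layout as above (the /driveN/... paths and directory names are illustrative, not required names):

```xml
<!-- Spread DataNode block storage across all four drives.
     Blocks are distributed among the listed directories. -->
<property>
  <name>dfs.data.dir</name>
  <value>/drive1/myDfsDir,/drive2/myDfsDir,/drive3/myDfsDir,/drive4/myDfsDir</value>
</property>

<!-- Spread MapReduce intermediate/temporary data across the same drives. -->
<property>
  <name>mapred.local.dir</name>
  <value>/drive1/mapredLocal,/drive2/mapredLocal,/drive3/mapredLocal,/drive4/mapredLocal</value>
</property>

<!-- On the NameNode only: keep redundant copies of the namespace.
     Unlike dfs.data.dir, each listed directory holds a FULL copy
     of the image and edit log, so this gives redundancy, not striping. -->
<property>
  <name>dfs.name.dir</name>
  <value>/drive1/nameDir,/drive2/nameDir</value>
</property>
```

Note the asymmetry: dfs.data.dir and mapred.local.dir stripe data across directories for capacity and I/O parallelism, while dfs.name.dir replicates the namespace into every listed directory for safety.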
