hadoop-hdfs-user mailing list archives

From "Agarwal, Nikhil" <Nikhil.Agar...@netapp.com>
Subject RE: MapReduce on Local FileSystem
Date Fri, 31 May 2013 07:24:26 GMT

Thank you for your reply. One simple answer could be to reduce the time taken to ingest
the data into HDFS.


From: Sanjay Subramanian [mailto:Sanjay.Subramanian@wizecommerce.com]
Sent: Friday, May 31, 2013 12:50 PM
To: <user@hadoop.apache.org>
Cc: user@hadoop.apache.org
Subject: Re: MapReduce on Local FileSystem

Basic question: why would you want to do that? Also, I think the MapR Hadoop distribution has
an NFS-mountable HDFS.

Sent from my iPhone

On May 30, 2013, at 11:37 PM, "Agarwal, Nikhil" <Nikhil.Agarwal@netapp.com> wrote:

Is it possible to run MapReduce on multiple nodes using the local file system (file:///)?
I am able to run it in a single-node setup, but in a multi-node setup the "slave" nodes are
not able to access the "jobtoken" file, which is present in hadoop.tmp.dir on the "master".

Please let me know if it is possible to do this.
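For the multi-node case described above, one approach is to make hadoop.tmp.dir point at storage that every node sees at the same path (for example, an NFS mount), so the job staging files written by the master are readable by the slaves. A minimal core-site.xml sketch, assuming Hadoop 1.x-era property names (fs.default.name rather than the later fs.defaultFS) and a hypothetical shared mount point /mnt/shared/hadoop-tmp:

```xml
<!-- core-site.xml: a sketch, not a tested configuration.
     Property names are the Hadoop 1.x-era ones; on newer releases
     fs.default.name is superseded by fs.defaultFS. -->
<configuration>
  <property>
    <!-- Use the local filesystem instead of HDFS. -->
    <name>fs.default.name</name>
    <value>file:///</value>
  </property>
  <property>
    <!-- Must resolve to the same path on every node (e.g. an NFS
         mount), otherwise slaves cannot read the jobtoken file the
         master stages here. /mnt/shared/hadoop-tmp is hypothetical. -->
    <name>hadoop.tmp.dir</name>
    <value>/mnt/shared/hadoop-tmp</value>
  </property>
</configuration>
```

Whether this fully resolves the jobtoken error would need to be verified on the cluster in question; the key requirement is that the staging directory is shared, not merely identically named local storage.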

Thanks & Regards,

