From: Preethi Vinayak Ponangi
Date: Sat, 26 Jan 2013 10:46:58 -0600
Subject: Re: Difference between HDFS and local filesystem
To: user@hadoop.apache.org

Yes, it's possible to use your local file system instead of HDFS. As you said, it doesn't matter much when you are running a pseudo-distributed cluster, and it is generally fine if your dataset is fairly small. Where HDFS really shines is when your input is huge, typically several TB or PB: individual mappers can then read different partitions of the data on different nodes, improving performance.

In fully distributed mode, your data is partitioned and stored across several different nodes in HDFS. When you use local data instead, it is neither replicated nor partitioned; it's just like accessing a single file.

On Sat, Jan 26, 2013 at 9:49 AM, Sundeep Kambhampati <kambhamp@cse.ohio-state.edu> wrote:

> Hi Users,
> I am kind of new to MapReduce programming and I am trying to understand the
> integration between MapReduce and HDFS.
> I understand that MapReduce can use HDFS for data access, but is it possible
> not to use HDFS at all and still run MapReduce programs?
> HDFS does file replication and partitioning.
> But if I use the following command to run the example MaxTemperature:
>
>   bin/hadoop jar /usr/local/hadoop/maxtemp.jar MaxTemperature file:///usr/local/ncdcinput/sample.txt file:///usr/local/out4
>
> instead of
>
>   bin/hadoop jar /usr/local/hadoop/maxtemp.jar MaxTemperature usr/local/ncdcinput/sample.txt usr/local/out4   ->> this will use the HDFS file system,
>
> it uses local file system files and writes to the local file system when I
> run in pseudo-distributed mode. Since it is a single node, there is no problem
> of non-local data.
> What happens in fully distributed mode? Will the files be copied to other
> machines, or will it throw errors? Will the files be replicated and
> partitioned for running MapReduce if I use the local file system?
>
> Can someone please explain?
>
> Regards,
> Sundeep
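To make the behavior of the two commands above concrete: Hadoop picks the FileSystem implementation from the scheme of the path URI (`file://` for the local filesystem, `hdfs://` for HDFS), and scheme-less paths fall back to the default filesystem configured in core-site.xml. A minimal sketch of that configuration (property name as in Hadoop 1.x; the host and port are placeholders for your own NameNode address):

```
<!-- core-site.xml: default filesystem used to resolve scheme-less paths -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- With an hdfs:// value, paths like usr/local/ncdcinput/sample.txt
         resolve against HDFS; with file:///, they would resolve against
         the local filesystem instead. -->
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```

With this default in place, the second command's `usr/local/ncdcinput/sample.txt` is read from HDFS, while the first command's explicit `file:///...` URIs bypass HDFS entirely regardless of the configured default.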