hadoop-common-user mailing list archives

From "Ted Dunning" <tdunn...@veoh.com>
Subject RE: Using Map/Reduce without HDFS?
Date Wed, 29 Aug 2007 20:18:08 GMT

It certainly would help me to be able to do this and I would guess it would ease the "limited
number of files" problem that people encounter.

-----Original Message-----
From: Jeff Hammerbacher [mailto:jeff.hammerbacher@gmail.com]
Sent: Wed 8/29/2007 12:30 PM
To: hadoop-user@lucene.apache.org
Subject: Re: Using Map/Reduce without HDFS?
 
haven't heard much on this subject actually:
http://issues.apache.org/jira/browse/HADOOP-1700

On 8/29/07, Ted Dunning <tdunning@veoh.com> wrote:
>
>
> You can't append in hadoop, AFAIK.
>
> The appending would be done outside of Hadoop with a periodic copy into
> HDFS.
>
> I hear that append operations are coming soon.
>
> -----Original Message-----
> From: mfc [mailto:mikefconnell@verizon.net]
> Sent: Mon 8/27/2007 6:48 PM
> To: hadoop-user@lucene.apache.org
> Subject: Re: Using Map/Reduce without HDFS?
>
>
> Hi,
>
> Can you elaborate how this is done in Hadoop?
>
> Thanks
>
>
> Ted Dunning-3 wrote:
> >
> >
> > It is often also possible to merge the receiving of the new data with
> > the appending to a large file.  The append-only nature of the writing
> > makes this very much more efficient than scanning a pile of old files.
> >
> >
>
> --
> View this message in context:
> http://www.nabble.com/Using-Map-Reduce-without-HDFS--tf4331338.html#a12360816
> Sent from the Hadoop Users mailing list archive at Nabble.com.
>
>
>

