hadoop-common-user mailing list archives

From Ted Dunning <tdunn...@veoh.com>
Subject Re: Using Map/Reduce without HDFS?
Date Mon, 27 Aug 2007 15:31:47 GMT

Yes.

And unless you are a very unusual person, it is not all that rare for more
than one scan of the consolidated data to be required, especially during
development.  Can you say "When can we have these new statistics for Wed-Wed
unique users"?

It is often also possible to merge the receiving of the new data with the
appending to a large file.  The append nature of the writing makes this very
much more efficient than scanning a pile of old files.
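
A minimal sketch of that kind of consolidation, assuming new records land as
small files in an incoming/ directory (the directory and file names here are
hypothetical, not something from this thread):

    # Compress the newly received files and append them to one large
    # archive.  Concatenated gzip members form a valid gzip stream, so
    # zcat/gunzip can later read the whole file back in one sequential pass.
    # (Naive: assumes no new files arrive between the cat and the rm.)
    cat incoming/*.log | gzip >> consolidated/events.gz
    rm incoming/*.log

A later job (map/reduce or a plain zcat pipeline) then makes one sequential
scan over consolidated/events.gz instead of seeking through a pile of small
files.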
 


On 8/27/07 4:16 AM, "mfc" <mikefconnell@verizon.net> wrote:

> 
> Hi,
> 
> One benefit to the pre-processing step is that the random i/o during
> pre-processing is only done on "new" data, i.e. it is incremental. So you
> only pay the random i/o cost once, when new data is added. This is better
> than having to pay the random i/o cost every time on all the data (old and
> new), as would be required if a map/reduce job were to run directly on the
> local file system.
> 
> Thanks
> 
> 
> mfc wrote:
>> 
>> Hi,
>> 
>> I can see a benefit to this approach if it replaces random
>> access of a local file system with sequential access to
>> large files in HDFS. We are talking about physical disks, and
>> seek time is expensive.
>> 
>> But the random access of the local file system still happens,
>> it just gets moved to the pre-processing step.
>> 
>> How about walking thru the relative cost of this pre-processing step
>> (which still must do random access), and some approaches to how
>> this could be done? You mentioned cat | gzip (assuming parallel instances
>> of this); is that what you do?
>> 
>> Thanks
>> 
>> 

