hadoop-general mailing list archives

From Aaron Kimball <aa...@cloudera.com>
Subject Re: Hadoop Data Sharing
Date Thu, 06 May 2010 01:04:08 GMT

In general if you need to perform a multi-pass MapReduce workflow, each pass
materializes its output to files. The subsequent pass then reads those same
files back in as input. This allows the workflow to start at the last
"checkpoint" if it gets interrupted. There is no persistent in-memory
distributed storage feature in Hadoop that would allow a MapReduce job to
post results to memory for consumption by a subsequent job.

So you would just read your initial data from /input and write your interim
results to /iteration0. The next pass then reads from /iteration0 and writes
to /iteration1, and so on.
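As a rough sketch, a driver that chains passes by directory might look like the following. The class name, pass count, and helper methods are illustrative assumptions, not part of the original message; the actual Hadoop job setup and submission calls are only indicated in comments.

```java
// Sketch of a multi-pass driver: each pass reads the previous
// pass's output directory and writes the next one, so the
// workflow can restart from the last completed "checkpoint".
public class IterativeDriver {

    // /input for pass 0, then /iteration0, /iteration1, ...
    static String inputPathFor(int pass) {
        return pass == 0 ? "/input" : "/iteration" + (pass - 1);
    }

    static String outputPathFor(int pass) {
        return "/iteration" + pass;
    }

    public static void main(String[] args) throws Exception {
        int passes = 3;  // number of MapReduce passes (illustrative)
        for (int i = 0; i < passes; i++) {
            String in = inputPathFor(i);
            String out = outputPathFor(i);
            // In a real driver you would configure and submit a Job here,
            // e.g. FileInputFormat.addInputPath(job, new Path(in)),
            // FileOutputFormat.setOutputPath(job, new Path(out)),
            // then job.waitForCompletion(true).
            System.out.println("pass " + i + ": " + in + " -> " + out);
        }
    }
}
```

Because each pass's output is materialized, an interrupted workflow can resume from the last /iterationN directory that was fully written.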

If your data is reasonably small and you think it could fit in memory
somewhere, then you could experiment with using other distributed key-value
stores (memcached, HBase, Cassandra, etc.) to hold intermediate results.
But this will require some integration work on your part.
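The shape of that integration is a plain put/get against an external store: reducers publish results after each pass, and the next pass's mappers fetch them instead of reading HDFS files. This sketch uses a ConcurrentHashMap as a stand-in for a real store client (memcached, HBase, etc.); the class and key scheme are assumptions for illustration, not a Hadoop API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Stand-in for an external key-value store (memcached, HBase,
// Cassandra, ...). In a real job each reducer would hold a client
// connection to the store and issue puts; the next job's mappers
// would issue gets for the interim values.
public class InterimStore {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Called from a reducer after a pass: publish an interim result.
    public void put(String key, String value) {
        store.put(key, value);
    }

    // Called from the next pass's mapper: fetch the prior result,
    // or null if it was never written.
    public String get(String key) {
        return store.get(key);
    }
}
```

Note that unlike materialized files, an in-memory store gives you no free checkpointing: if the store or the job dies mid-workflow, you restart from scratch.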
- Aaron

On Wed, May 5, 2010 at 8:29 AM, Renato Marroquín Mogrovejo <
renatoj.marroquin@gmail.com> wrote:

> Hi everyone, I have recently started to play around with Hadoop, but I am
> running into some "design" problems.
> I need to loop and execute the same job several times, and in each
> iteration get the processed values (not using a file, because I would need
> to read it back). I was using a static vector in my main class (the one
> that iterates and executes the job in each iteration) to retrieve those
> values, and it worked while I was using standalone mode. Now I have tried
> it in pseudo-distributed mode and it is obviously not working.
> Any suggestions, please?
> Thanks in advance,
> Renato M.
