hadoop-general mailing list archives

From Vijay Rao <raovi...@gmail.com>
Subject Fundamental question
Date Sun, 09 May 2010 05:42:24 GMT
Hello,

I am just starting to read about and understand Hadoop and all its components.
However, I have a fundamental question that I cannot find answered in any of
the online material out there.

1) If Hadoop is used, do all the slaves and other machines in the cluster
need to be formatted with the HDFS file system? If so, what happens to the
terabytes of data that need to be crunched? Or does that data live on a
different machine?

2) Everywhere it is mentioned that the main advantage of MapReduce and
Hadoop is that computation runs on data that is available locally. Does this
mean that once the file system is formatted, I have to move my terabytes of
data into it and have them split across the cluster?
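To make the second question concrete, the loading step I am asking about would, as I understand it, look roughly like the following (a sketch only, assuming a running cluster and the standard `hadoop fs` shell; the paths here are made up):

```shell
# Sketch only -- assumes Hadoop is installed and a cluster is up.
# Copying a local directory of input files into HDFS; HDFS itself splits
# each file into blocks (64 MB by default in this era of Hadoop) and
# replicates them across the DataNodes, so no manual splitting is needed.
hadoop fs -mkdir /user/vj/input
hadoop fs -put /local/data/*.log /user/vj/input

# Verify the files landed and inspect block/replication placement.
hadoop fs -ls /user/vj/input
hadoop fsck /user/vj/input -files -blocks
```

So is this the expected workflow, or does the data normally stay outside HDFS somewhere?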

Thanks
VJ
