hadoop-hdfs-user mailing list archives

From Geoffry Roberts <threadedb...@gmail.com>
Subject Re: Reg: Setting up Hadoop Cluster
Date Thu, 13 Mar 2014 21:14:05 GMT
Andy,

Once you have Hadoop running, you can run your jobs from the CLI of the
name node. When I write a MapReduce job, I jar it up and place it in,
say, my home directory and run it from there.  I do the same with Pig
scripts.  I've used neither Hive nor Cascading, but I imagine they would
work the same.
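For example (the jar name, main class, and paths below are placeholders,
not anything from a real cluster):

    # copy some input into HDFS first
    hadoop fs -mkdir -p /user/andy/input
    hadoop fs -put data.txt /user/andy/input

    # run the job jar from the name node's shell; the main class and the
    # input/output paths are handed to the job as arguments
    hadoop jar wordcount.jar com.example.WordCount /user/andy/input /user/andy/output

    # a Pig script runs the same way
    pig myscript.pig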

Another approach I've tried is WebHDFS.  It's for manipulating HDFS via
a RESTful interface.  It worked well enough for me.  I stopped using it
when I discovered it didn't support MapFiles, but that's another story.
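In case it helps, the calls look something like this (assuming the default
name node web port, 50070, and placeholder host and paths):

    # list a directory
    curl -i "http://namenode:50070/webhdfs/v1/user/andy?op=LISTSTATUS"

    # read a file; -L follows the redirect to the data node that serves it
    curl -i -L "http://namenode:50070/webhdfs/v1/user/andy/data.txt?op=OPEN"

    # create a file: the name node answers with a redirect, then you PUT
    # the actual bytes to the data node location it gave you
    curl -i -X PUT "http://namenode:50070/webhdfs/v1/user/andy/new.txt?op=CREATE"
    curl -i -X PUT -T new.txt "http://<datanode-location-from-redirect>"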


On Thu, Mar 13, 2014 at 5:00 PM, ados1984@gmail.com <ados1984@gmail.com> wrote:

> Hello Team,
>
> I have one question regarding putting data into HDFS and running MapReduce
> on data present in HDFS.
>
>    1. HDFS is a file system, so what kinds of clients are available to
>    interact with it? Also, where do we need to install those clients?
>    2. Regarding Pig, Hive, and MapReduce: where do we install them on the
>    Hadoop cluster, from where do we run all the scripts, and how does it
>    internally know whether it needs to run on node 1, node 2, or node 3?
>
> any inputs here would be really helpful.
>
> Thanks, Andy.
>



-- 
There are ways and there are ways,

Geoffry Roberts
