ambari-user mailing list archives

From Alejandro Fernandez <afernan...@hortonworks.com>
Subject Re: Running spark and map reduce jobs
Date Thu, 27 Aug 2015 18:06:23 GMT
Hi Jitendra,

What version of Ambari and HDP are you running?
You only need to install the Oozie server on one host; then pick which hosts should get the Oozie clients.
In HDP 2.3, it's possible to have multiple Oozie servers for High Availability.
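If you ever want to add an Oozie client to another host after the initial install, you can do it through Ambari's REST API instead of the wizard. A minimal sketch, assuming placeholder hostnames and cluster name (ambari-host.example.com, MyCluster, node1.example.com are illustrative, and admin:admin is the default credential):

```shell
# Assumptions: Ambari server host, cluster name, and target node are placeholders.
AMBARI_HOST="ambari-host.example.com"
CLUSTER="MyCluster"
NODE="node1.example.com"
URL="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}/hosts/${NODE}/host_components/OOZIE_CLIENT"

# Register the OOZIE_CLIENT component on the host
# (the X-Requested-By header is required by Ambari's API):
# curl -u admin:admin -H 'X-Requested-By: ambari' -X POST "$URL"

# Then trigger the actual install by moving the component to INSTALLED:
# curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
#   -d '{"HostRoles": {"state": "INSTALLED"}}' "$URL"

echo "$URL"
```

The same POST/PUT pattern works for any client component (HIVE_CLIENT, SPARK_CLIENT, etc.), which is why the wizard asks you to pick client hosts up front.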

HDP binaries are in /usr/hdp/current/spark-server/bin
Note that /usr/hdp/current/spark-server is a symlink to /usr/hdp/2.#.#.#-####/spark
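To see this layout for yourself, you can resolve the symlink with readlink. A small sketch that mimics the layout in a scratch directory so it runs anywhere (the version string 2.3.0.0-2557 is purely illustrative; on a real node you would simply run `readlink /usr/hdp/current/spark-server`):

```shell
# Mimic the /usr/hdp layout Ambari creates, in a temp dir.
HDP_ROOT="$(mktemp -d)"

# Versioned install directory (version number is an illustrative assumption):
mkdir -p "$HDP_ROOT/2.3.0.0-2557/spark/bin"
mkdir -p "$HDP_ROOT/current"

# "current" holds symlinks into the active versioned directory:
ln -s "$HDP_ROOT/2.3.0.0-2557/spark" "$HDP_ROOT/current/spark-server"

# Resolving the symlink shows which HDP version is active:
readlink "$HDP_ROOT/current/spark-server"
```

In practice this means you can always launch jobs via the stable path, e.g. `/usr/hdp/current/spark-client/bin/spark-submit` for Spark or `hadoop jar` for MapReduce, without hard-coding the HDP version.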

Thanks,
Alejandro

From: Jeetendra G <jeetendra.g@housing.com>
Reply-To: "user@ambari.apache.org" <user@ambari.apache.org>
Date: Thursday, August 27, 2015 at 4:06 AM
To: "user@ambari.apache.org" <user@ambari.apache.org>
Subject: Running spark and map reduce jobs

Hi All, I have installed Ambari, and with Ambari I have installed Hadoop, Spark, Hive, and Oozie.
When I was installing Oozie, it asked me where in my cluster I need Oozie, i.e. on
how many nodes.
I don't really understand why it asks which nodes should get Oozie;
shouldn't it just install on any one node?


Also, how can I run my MapReduce and Spark jobs?

Where does Ambari install the binaries of the installed packages? In /bin?


Regards
Jeetendra
