hadoop-common-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Getting statuses of jobs
Date Sun, 25 Mar 2012 17:12:44 GMT
If your real problem is a misbehaving client that you do not want
running jobs (or that you do not want to be granted all of the
cluster's resources when it does), why not tackle that directly
instead of working around it?

Hadoop supports user authorization, and the MR schedulers also let
you restrict submissions to defined queues/pools, aside from letting
you configure cluster resources per user/job. HDFS also carries
permission features to prevent global write access to directories,
if that is part of your issue.
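
For example, on the cluster side something along these lines enables
queue ACLs and limits who may submit to a given queue. This is only a
sketch with Hadoop 1.x-style property names; the queue, user and group
names are placeholders.

In mapred-site.xml:

  <property>
    <name>mapred.acls.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>mapred.queue.names</name>
    <value>default,analytics</value>
  </property>

In mapred-queue-acls.xml:

  <!-- only these users/groups may submit jobs to the 'analytics' queue -->
  <property>
    <name>mapred.queue.analytics.acl-submit-job</name>
    <value>alice,bob analysts</value>
  </property>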

On Sun, Mar 25, 2012 at 8:08 PM, shlomi java <shlomijava@gmail.com> wrote:
> hi all,
>
> I want to figure out, from a client of the Hadoop cluster, the statuses of
> jobs that are currently running on the cluster.
> I need it in order to prevent the client from submitting certain jobs to
> the cluster when certain other jobs are already running on it.
> I know how to recognize my jobs - by their name.
>
> How do I do it?
>
> I saw the JobTracker code in jobtracker.jsp. I would be happy to use its
> 'runningJobs' method.
> The thing is that in the JSP, which runs on the cluster (master), the
> JobTracker is obtained from the application context.
>
> Is it safe to instantiate my own JobTracker, given the right conf, from the
> client?
>
> thanks
> ShlomiJ
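
Regarding the question above: there is no need to construct a JobTracker
on the client; the client-side JobClient API talks to the running
JobTracker over RPC and can report job statuses. Below is a minimal
sketch assuming the Hadoop 1.x-era mapred API; the class name and the
jobtracker address are only examples.

import java.io.IOException;

import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.JobStatus;
import org.apache.hadoop.mapred.RunningJob;

public class RunningJobNames {
  public static void main(String[] args) throws IOException {
    JobConf conf = new JobConf();
    // Example address only -- point this at your own jobtracker.
    conf.set("mapred.job.tracker", "jobtracker.example.com:8021");

    JobClient client = new JobClient(conf);
    // jobsToComplete() returns statuses of jobs that have not finished yet.
    for (JobStatus status : client.jobsToComplete()) {
      if (status.getRunState() != JobStatus.RUNNING) {
        continue; // skip jobs still in PREP, etc.
      }
      // Fetch the RunningJob handle to get at the job name.
      RunningJob job = client.getJob(status.getJobID());
      if (job != null) {
        System.out.println(job.getID() + "\t" + job.getJobName());
      }
    }
    client.close();
  }
}

A client that checks these names before submitting can then decline to
submit when a conflicting job is already running on the cluster.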



-- 
Harsh J
