hadoop-mapreduce-user mailing list archives

From Steve Lewis <lordjoe2...@gmail.com>
Subject Running from a client machine does not work under 1.03
Date Fri, 07 Dec 2012 18:19:20 GMT
I have been running Hadoop jobs from my local box - on the net but outside
the cluster.

      Configuration conf = new Configuration();
      String jarFile = "somelocalfile.jar";
      conf.set("mapred.jar", jarFile);

hdfs-site.xml has

and all policies in hadoop-policy.xml are set to *
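For reference, "all policies set to *" means each ACL entry in hadoop-policy.xml grants access to everyone; `security.client.protocol.acl` below is one of the standard keys (the others follow the same pattern):

```xml
<property>
  <name>security.client.protocol.acl</name>
  <value>*</value>
</property>
```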

When I run the job on my local machine it executes properly on a Hadoop 0.20
cluster. All directories in HDFS are owned by the local user - something
like Asterix\Steve - but HDFS does not seem to care, and jobs run well.

I have a colleague with a Hadoop 1.0.3 cluster, and pointing the config at
that cluster's file system and jobtracker and passing in a local jar
gives permission errors.

I read that security has changed in 1.0.3. My question is: was this EVER
supposed to work? If it used to work, why does it not work now - is it the
security changes? Is there a way to change the Hadoop cluster so it works
under 1.0.3, or (preferably) to supply a username and password and ask the
cluster to execute under that user from a client system, rather than opening
an ssh channel to the cluster?
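On the username part of the question: Hadoop 1.x has no username/password login, but with simple (non-Kerberos) authentication the client can declare which user it is acting as via `UserGroupInformation.createRemoteUser`, and the cluster will trust that name. A sketch, assuming the cluster address from this post and a hypothetical account name `hadoopuser`:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class SubmitAsUser {
    public static void main(String[] args) throws Exception {
        final Configuration conf = new Configuration();
        conf.set("fs.default.name", "hdfs://MyCluster:9000");
        conf.set("mapred.job.tracker", "MyCluster:9001");

        // Act as a named remote user on the client side. With simple auth
        // the NameNode trusts this name; no password is involved.
        // "hadoopuser" is a placeholder for an account that owns the
        // HDFS directories on the 1.0.3 cluster.
        UserGroupInformation ugi =
                UserGroupInformation.createRemoteUser("hadoopuser");
        ugi.doAs(new PrivilegedExceptionAction<Void>() {
            public Void run() throws Exception {
                FileSystem fs = FileSystem.get(conf);
                // Any FileSystem or job-submission calls made here run
                // as "hadoopuser" rather than the local OS user.
                System.out.println(fs.exists(new Path("/user/hadoopuser")));
                return null;
            }
        });
    }
}
```

This only sidesteps the user-name mismatch; with Kerberos enabled on the cluster it would not be enough.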

        String hdfsHost = "hdfs://MyCluster:9000";
        conf.set("fs.default.name", hdfsHost);
        String jobTracker = "MyCluster:9001";
        conf.set("mapred.job.tracker", jobTracker);
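Put together, the client-side submission described above amounts to something like the following (old `mapred` API; the input/output paths and job name are placeholders, not from the original post):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class RemoteSubmit {
    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf();
        // Point the client at the remote cluster.
        conf.set("fs.default.name", "hdfs://MyCluster:9000");
        conf.set("mapred.job.tracker", "MyCluster:9001");
        // Ship a jar that lives on the client machine to the cluster.
        conf.set("mapred.jar", "somelocalfile.jar");

        conf.setJobName("remote-submit-test"); // placeholder name
        FileInputFormat.setInputPaths(conf, new Path("/user/Steve/input"));
        FileOutputFormat.setOutputPath(conf, new Path("/user/Steve/output"));

        // On the 0.20 cluster this runs; on the 1.0.3 cluster this is
        // where the permission errors appear.
        JobClient.runJob(conf);
    }
}
```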

On the cluster in hdfs

Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com
