hadoop-mapreduce-user mailing list archives

From Benson Qiu <benson....@salesforce.com>
Subject Cluster.getJob IOException
Date Wed, 04 Jan 2017 02:28:11 GMT
I'm trying to test the following code:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.JobID;

Configuration conf = new Configuration(...);
Cluster cluster = new Cluster(conf);
cluster.getJob(new JobID("-1", -1));

The code looks up a nonexistent job. It calls out to the ResourceManager (RM),
and since the job will never be found there, the RM redirects the lookup to the
JobHistoryServer (JHS). The JHS stores history files on HDFS, so it accesses
the NameNode as well. `cluster.getJob(...)` returns null.


If I run Hadoop in pseudo-distributed mode and kill the NameNode process, I
get an IOException. This is expected, since the client tries to reach the
NameNode on its default IPC port, 8020:
java.io.IOException:
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.yarn.exceptions.YarnRuntimeException):
java.net.ConnectException: Call From <omitted> to localhost:8020 failed on
connection exception: java.net.ConnectException: Connection refused; For
more details see:  http://wiki.apache.org/hadoop/ConnectionRefused


However, if I override the Configuration to point the default filesystem at an
unreachable host (i.e. conf.set("fs.default.name", "hdfs://foo:99999")), I
don't get an error. *Why doesn't the code below throw an IOException?*

Configuration conf = new Configuration(...);
conf.set("fs.default.name", "hdfs://foo:99999");
Cluster cluster = new Cluster(conf);
cluster.getJob(new JobID("-1", -1));
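For what it's worth, conf.set(...) by itself only stores a string; nothing
validates or contacts the endpoint at that point. A minimal sketch with plain
java.net.URI (no Hadoop involved, just my assumption about how the value is
parsed) shows the override parses cleanly even though the host is unresolvable
and the port is outside the usual 0-65535 range:

```java
import java.net.URI;

public class LazyConfigDemo {
    public static void main(String[] args) {
        // Parsing the filesystem URI never opens a connection:
        // java.net.URI accepts any syntactically valid authority,
        // including an unresolvable host and an out-of-range port.
        URI fsUri = URI.create("hdfs://foo:99999");
        System.out.println(fsUri.getHost()); // foo
        System.out.println(fsUri.getPort()); // 99999
    }
}
```

So if nothing along the getJob(...) path actually instantiates a FileSystem
against that URI, no connection attempt would ever be made, which might
explain the missing IOException.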

Thanks,
Benson
