spark-issues mailing list archives

From "Michael Gummelt (JIRA)" <>
Subject [jira] [Commented] (SPARK-20328) HadoopRDDs create a MapReduce JobConf, but are not MapReduce jobs
Date Thu, 13 Apr 2017 21:24:41 GMT


Michael Gummelt commented on SPARK-20328:

cc [~colorant] [~hfeng] [~vanzin]

> HadoopRDDs create a MapReduce JobConf, but are not MapReduce jobs
> -----------------------------------------------------------------
>                 Key: SPARK-20328
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 2.1.0, 2.1.1, 2.1.2
>            Reporter: Michael Gummelt
> In order to obtain {{InputSplit}} information, {{HadoopRDD}} creates a MapReduce {{JobConf}}
out of the Hadoop {{Configuration}}:
> Semantically, this is a problem because a {{HadoopRDD}} does not represent a Hadoop MapReduce
job.  Practically, this is a problem because this line results in MapReduce-specific security
code being called, which assumes the MapReduce master is configured.  If it isn't, an exception is thrown.
> So I'm seeing this exception thrown as I'm trying to add Kerberos support for the Spark
Mesos scheduler.  I have a workaround where I set a YARN-specific configuration variable to
trick {{TokenCache}} into thinking YARN is configured, but this is obviously suboptimal.
> The proper fix to this would likely require significant {{hadoop}} refactoring to make
split information available without going through {{JobConf}}, so I'm not yet sure what the
best course of action is.
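
The failure mode described above can be modelled with a minimal, self-contained sketch. The names below ({{Configuration}}, {{JobConf}}, {{TokenCacheModel}}, and the key {{mapreduce.master.address}}) are simplified stand-ins for the real Hadoop types and settings, not the actual Hadoop API:

```scala
// Stand-in for org.apache.hadoop.conf.Configuration: a generic key/value
// configuration, tied to no particular execution framework.
class Configuration {
  private val settings = scala.collection.mutable.Map.empty[String, String]
  def set(key: String, value: String): Unit = settings(key) = value
  def get(key: String): Option[String] = settings.get(key)
}

// Stand-in for org.apache.hadoop.mapred.JobConf: wrapping a Configuration
// silently reinterprets it as describing a MapReduce job.
class JobConf(val underlying: Configuration)

// Stand-in for the TokenCache-style security code: it assumes the wrapped
// configuration names a MapReduce master, and throws if it does not.
object TokenCacheModel {
  def obtainTokens(job: JobConf): Unit =
    if (job.underlying.get("mapreduce.master.address").isEmpty)
      throw new IllegalStateException("MapReduce master not configured")
}

// What HadoopRDD effectively does: wrap a plain Configuration (here, one
// with no MapReduce master set) just to reach split information.
val hadoopConf = new Configuration
val jobConf = new JobConf(hadoopConf)

val failed =
  try { TokenCacheModel.obtainTokens(jobConf); false }
  catch { case _: IllegalStateException => true }
println(s"security check failed without a MapReduce master: $failed")
```

Setting the master key before the check (the shape of the YARN-variable workaround mentioned above) makes the same call succeed, which is why tricking {{TokenCache}} works, and also why it is fragile: the fix lives in configuration rather than in the code path that wrongly assumes a MapReduce job.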

This message was sent by Atlassian JIRA
