spark-reviews mailing list archives

From steveloughran <>
Subject [GitHub] spark pull request #14371: [SPARK-16736] WiP Core+ SQL superfluous fs calls
Date Wed, 27 Jul 2016 14:14:38 GMT
Github user steveloughran commented on a diff in the pull request:
    --- Diff: core/src/main/scala/org/apache/spark/SparkContext.scala ---
    @@ -1410,10 +1410,12 @@ class SparkContext(config: SparkConf) extends Logging with ExecutorAllocationCli
         val scheme = new URI(schemeCorrectedPath).getScheme
         if (!Array("http", "https", "ftp").contains(scheme)) {
           val fs = hadoopPath.getFileSystem(hadoopConfiguration)
    -      if (!fs.exists(hadoopPath)) {
    -        throw new FileNotFoundException(s"Added file $hadoopPath does not exist.")
    +      val isDir = try {
    +        fs.getFileStatus(hadoopPath).isDirectory
    +      } catch {
    +        case f: FileNotFoundException =>
    +          throw new FileNotFoundException(s"Added file $hadoopPath does not exist.").initCause(f)
    --- End diff ---
    @rxin the exception logic is in there, hidden in the `fs.exists()` code, which is just a try/catch
wrapper around `getFileStatus()`.
    This patch merges back-to-back calls of the method (e.g. exists + getFileStatus, exists + isDirectory),
and also drops the probe from operations like exists + open, where the second call raises
`FileNotFoundException` anyway. This bit of the patch is ugly because I was preserving all the
existing error messages; I'm about to submit a leaner one which passes the FNFE up directly,
with no try/catch.
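To illustrate the pattern the diff applies: instead of an `exists()` probe followed by a second
metadata call (two round trips to the filesystem/namenode), a single `getFileStatus()` call is made
and the `FileNotFoundException` it throws covers the "does not exist" case. A minimal sketch of that
shape, using a hypothetical `FakeFs` stand-in for Hadoop's `FileSystem` (the real API is not
depended on here):

```scala
import java.io.FileNotFoundException

// Hypothetical stand-in for the one piece of Hadoop's FileSystem API used
// here: getFileStatus throws FileNotFoundException when the path is absent.
case class FakeStatus(isDirectory: Boolean)

trait FakeFs {
  def getFileStatus(path: String): FakeStatus
}

// Before the patch: fs.exists(path) followed by fs.getFileStatus(path)
// meant two metadata round trips. After: one call, with the FNFE rethrown
// carrying a clearer message and the original as its cause.
def isDir(fs: FakeFs, path: String): Boolean =
  try {
    fs.getFileStatus(path).isDirectory
  } catch {
    case f: FileNotFoundException =>
      throw new FileNotFoundException(s"Added file $path does not exist.")
        .initCause(f)
  }

val fs = new FakeFs {
  def getFileStatus(p: String): FakeStatus =
    if (p == "/tmp/dir") FakeStatus(isDirectory = true)
    else throw new FileNotFoundException(p)
}

println(isDir(fs, "/tmp/dir"))   // single metadata call, no exists() probe
```

The point of the single-call form is that on a remote filesystem (HDFS, object stores) each
metadata operation is a network round trip, so folding the existence check into the call that
needed the status anyway halves the cost on the common path.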

