spark-reviews mailing list archives

From mateiz <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-2670] FetchFailedException should be th...
Date Thu, 31 Jul 2014 07:21:07 GMT
Github user mateiz commented on a diff in the pull request:

    https://github.com/apache/spark/pull/1578#discussion_r15628876
  
    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockFetcherIterator.scala ---
    @@ -199,15 +199,22 @@ object BlockFetcherIterator {
           // Get the local blocks while remote blocks are being fetched. Note that it's okay to do
           // these all at once because they will just memory-map some files, so they won't consume
           // any memory that might exceed our maxBytesInFlight
    -      for (id <- localBlocksToFetch) {
    -        getLocalFromDisk(id, serializer) match {
    -          case Some(iter) => {
    -            // Pass 0 as size since it's not in flight
    -            results.put(new FetchResult(id, 0, () => iter))
    -            logDebug("Got local block " + id)
    -          }
    -          case None => {
    -            throw new BlockException(id, "Could not get block " + id + " from local machine")
    +      var fetchIndex = 0
    +      try {
    +        for (id <- localBlocksToFetch) {
    +
    +          // getLocalFromDisk never returns None; it throws BlockException on failure
    +          val iter = getLocalFromDisk(id, serializer).get
    +          // Pass 0 as size since it's not in flight
    +          results.put(new FetchResult(id, 0, () => iter))
    +          fetchIndex += 1
    +          logDebug("Got local block " + id)
    +        }
    +      } catch {
    +        case e: Exception => {
    +          logError(s"Error occurred while fetching local blocks", e)
    +          for (id <- localBlocksToFetch.drop(fetchIndex)) {
    +            results.put(new FetchResult(id, -1, null))
    --- End diff --
    
    I thought next() would return a failure block, and then the caller of BlockFetcherIterator would just stop. Did you see it not doing that? I think all you have to do is put *one* FetchResult with size = -1 in the queue and return, and everything will be fine.
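
The reviewer's suggestion can be sketched as follows. This is a minimal, hypothetical stand-in, not the real Spark code: `FetchResult` here is a simplified case class, `getLocalFromDisk` is passed in as a plain function, and the block ids are strings. The key point is that on failure we enqueue a *single* sentinel result with size = -1 and return, rather than one failure result per remaining block; the consumer stops at the first failed result it takes.

```scala
import java.util.concurrent.LinkedBlockingQueue

// Hypothetical, simplified stand-in for the FetchResult in
// BlockFetcherIterator: a size of -1 marks a failed fetch.
case class FetchResult(blockId: String, size: Long, deserialize: () => Iterator[Any])

object LocalFetchSketch {
  def fetchLocalBlocks(
      localBlocksToFetch: Seq[String],
      getLocalFromDisk: String => Iterator[Any],
      results: LinkedBlockingQueue[FetchResult]): Unit = {
    try {
      for (id <- localBlocksToFetch) {
        val iter = getLocalFromDisk(id)
        // Pass 0 as size since local blocks are not counted against maxBytesInFlight.
        results.put(FetchResult(id, 0, () => iter))
      }
    } catch {
      case e: Exception =>
        // One sentinel with size = -1 is enough: the consumer treats the
        // first failed result as fatal, so there is no need to enqueue a
        // failure for every remaining block. The id used here is arbitrary
        // since the consumer only inspects the size.
        results.put(FetchResult(localBlocksToFetch.head, -1, null))
    }
  }
}
```

A quick exercise of the failure path: with blocks `a`, `b`, `c` and a fetcher that throws on `b`, the queue ends up with the successful result for `a` followed by exactly one size = -1 sentinel.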


