spark-reviews mailing list archives

From jinxing64 <...@git.apache.org>
Subject [GitHub] spark issue #16989: [SPARK-19659] Fetch big blocks to disk when shuffle-read...
Date Thu, 11 May 2017 00:58:42 GMT
Github user jinxing64 commented on the issue:

    https://github.com/apache/spark/pull/16989
  
    As @mridulm mentioned, `HighlyCompressedMapStatus` could be made configurable in two respects:

    >1. The minimum size before we consider something a large block.
    >2. The fraction '2' should also be configurable.
    
     I spent quite a while thinking about this but didn't come up with good names for these two configurations. @mridulm @cloud-fan Could you please give some advice? How about `spark.shuffle.accurate.block.bound` and `spark.shuffle.accurate.block.multiples`? Actually, I don't think they are good :-(
    Should I put this in this PR or make a separate one?
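    To make the idea concrete, here is a rough sketch of how the two proposed settings could decide which block sizes get stored exactly rather than folded into the average. This is just an illustration, not the actual `HighlyCompressedMapStatus` code; the config names follow the proposals above, and the defaults are assumptions:

    ```scala
    // Rough sketch (placeholder names/defaults, not the merged implementation):
    // pick which shuffle-block sizes should be recorded accurately.
    object AccurateBlockSketch {
      // Proposed `spark.shuffle.accurate.block.bound`: minimum size in bytes
      // before a block is even considered "large" (assumed default: 100 MiB).
      val accurateBlockBound: Long = 100L * 1024 * 1024

      // Proposed `spark.shuffle.accurate.block.multiples`: record a size exactly
      // once it exceeds this multiple of the average (the hard-coded '2' today).
      val accurateBlockMultiples: Double = 2.0

      // Returns (blockIndex -> size) for every block whose size is kept exactly.
      def largeBlocks(sizes: Array[Long]): Map[Int, Long] = {
        val nonEmpty = sizes.filter(_ > 0)
        val avg = if (nonEmpty.isEmpty) 0L else nonEmpty.sum / nonEmpty.length
        val threshold = math.max(accurateBlockBound, (avg * accurateBlockMultiples).toLong)
        sizes.zipWithIndex.collect { case (size, i) if size > threshold => i -> size }.toMap
      }
    }
    ```

    With defaults like these, a single 500 MiB block among kilobyte-sized ones would be reported exactly, so the reducer could fetch it to disk instead of memory, which is the point of this PR.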




