flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-9061) add entropy to s3 path for better scalability
Date Mon, 16 Jul 2018 06:46:00 GMT

    https://issues.apache.org/jira/browse/FLINK-9061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16544858#comment-16544858

ASF GitHub Bot commented on FLINK-9061:

Github user indrc commented on the issue:

    @StephanEwen Thanks for your comments and feedback; my replies are below.
    1. After talking to @StefanRRichter, we decided to bring in this dependency so that we use a standard
library instead; initially the PR had its own implementation of a random-generator function. Are you suggesting
we revert to the old version?
    (Due to a squash, I lost the review comments; sorry about losing the context.)
    2. The feature is enabled through a new config key, and the specific constructor initializes
the new feature. My thinking was that, this way, it is not exposed on the default code path. Do you
see a problem with this approach?
    3. Sounds good; I will remove that warning.
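
The gating described in point 2 could look roughly like this — a minimal sketch, assuming the feature is routed through a dedicated constructor only when a config key is set. The key name `s3.entropy.enabled` and all class/method names here are hypothetical illustrations, not the actual PR code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a feature-specific constructor is only invoked when
// a new config key is present, so the default code path stays untouched.
// Key and class names are illustrative, not Flink's actual configuration.
public class EntropyConfigSketch {
    static final String ENTROPY_KEY = "s3.entropy.enabled"; // hypothetical key

    static class Writer {
        final boolean injectEntropy;

        Writer() {                       // default code path
            this(false);
        }

        Writer(boolean injectEntropy) {  // feature-specific constructor
            this.injectEntropy = injectEntropy;
        }
    }

    static Writer createWriter(Map<String, String> config) {
        // Only an explicit opt-in routes construction through the new path.
        if (Boolean.parseBoolean(config.getOrDefault(ENTROPY_KEY, "false"))) {
            return new Writer(true);
        }
        return new Writer();
    }
}
```

Under this shape, existing callers never see the new constructor unless the key is explicitly set.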

> add entropy to s3 path for better scalability
> ---------------------------------------------
>                 Key: FLINK-9061
>                 URL: https://issues.apache.org/jira/browse/FLINK-9061
>             Project: Flink
>          Issue Type: Bug
>          Components: FileSystem, State Backends, Checkpointing
>    Affects Versions: 1.5.0, 1.4.2
>            Reporter: Jamie Grier
>            Assignee: Indrajit Roychoudhury
>            Priority: Critical
>              Labels: pull-request-available
> I think we need to modify the way we write checkpoints to S3 for high-scale jobs (those
with many total tasks).  The issue is that we are writing all the checkpoint data under a
common key prefix.  This is the worst-case scenario for S3 performance, since the key is used
as a partition key.
> In the worst case, checkpoints fail with a 500 status code coming back from S3 and an
internal error type of TooBusyException.
> One possible solution would be to add a hook in the Flink filesystem code that allows
me to "rewrite" paths.  For example, say I have the checkpoint directory set to:
> s3://bucket/flink/checkpoints
> I would hook that and rewrite that path to:
> s3://bucket/[HASH]/flink/checkpoints, where HASH is the hash of the original path
> This would distribute the checkpoint write load around the S3 cluster evenly.
> For reference: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-performance-improve/
> Has anyone else hit this issue?  Any other ideas for solutions?  This is a pretty
serious problem for people trying to checkpoint to S3.
> -Jamie
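
The hash-prefix rewrite proposed above could be sketched as follows — a minimal illustration, assuming SHA-256 over the original key and a fixed-length hex prefix. The class name, method name, and prefix length are assumptions for the sketch, not Flink's actual API:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of the proposed rewrite: inject a short hash of the
// original key as the first key segment, so checkpoint writes spread across
// S3 partitions instead of sharing one key prefix. Names are illustrative.
public class EntropyPathSketch {
    private static final int HASH_LENGTH = 4; // hex chars to inject (assumed)

    static String rewritePath(String s3Path) {
        // Split "s3://bucket/key..." into bucket and key.
        String scheme = "s3://";
        int slash = s3Path.indexOf('/', scheme.length());
        String bucket = s3Path.substring(scheme.length(), slash);
        String key = s3Path.substring(slash + 1);

        // Hash the original key so the injected segment is stable per path.
        byte[] digest;
        try {
            digest = MessageDigest.getInstance("SHA-256")
                    .digest(key.getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("SHA-256 unavailable", e);
        }
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < HASH_LENGTH / 2; i++) {
            hex.append(String.format("%02x", digest[i]));
        }
        // s3://bucket/[HASH]/flink/checkpoints
        return scheme + bucket + "/" + hex + "/" + key;
    }
}
```

For example, `s3://bucket/flink/checkpoints` becomes `s3://bucket/[HASH]/flink/checkpoints`, where the hash segment differs per checkpoint directory and therefore lands on different S3 partitions.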

This message was sent by Atlassian JIRA
