flink-issues mailing list archives

From EAlexRojas <...@git.apache.org>
Subject [GitHub] flink pull request #5991: [FLINK-9303] [kafka] Adding support for unassign d...
Date Fri, 11 May 2018 13:36:37 GMT
GitHub user EAlexRojas opened a pull request:

    https://github.com/apache/flink/pull/5991

    [FLINK-9303] [kafka] Add support for dynamically unassigning partitions from the Kafka consumer when they become unavailable

    ## What is the purpose of the change
    
    This pull request adds an option to the Kafka consumer to check for unavailable partitions
and unassign them from the consumer. That way, the consumer does not request records from
invalid partitions, which prevents log noise.
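Enabling the option might look like the following sketch. The partition-discovery interval key matches Flink's documented `flink.partition-discovery.interval-millis` property; the key for the unassign option is hypothetical, since the PR text does not spell out its name:

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    public static Properties buildKafkaProperties() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "demo-group");
        // Existing Flink option: enable periodic partition discovery (interval in ms).
        props.setProperty("flink.partition-discovery.interval-millis", "10000");
        // Hypothetical key for the option added in this PR; the actual
        // property name is defined in the PR's code, not shown here.
        props.setProperty("flink.partition-discovery.unassign-unavailable-partitions", "true");
        return props;
    }

    public static void main(String[] args) {
        Properties props = buildKafkaProperties();
        System.out.println(props.getProperty("flink.partition-discovery.interval-millis"));
    }
}
```

These properties would then be passed to the `FlinkKafkaConsumer` constructor as usual.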
    
    ## Brief change log
    
    - Modify the partition discovery system to check not only for new partitions, but also for
partitions that are no longer available.
    - Check whether partitions recovered from state are no longer available.
    - Add an option on the Kafka consumer to activate these checks.
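The first two items amount to diffing the assigned partitions against what the broker currently reports. A minimal sketch of that diff in plain Java (class and method names are illustrative, not the actual classes touched by this PR):

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Sketch of the discovery diff: compare the currently assigned partitions
// against the partitions Kafka reports as existing, yielding both the
// newly discovered ones and those that have disappeared.
public class PartitionDiff {
    public static Set<String> newlyDiscovered(Set<String> assigned, List<String> fetched) {
        Set<String> discovered = new HashSet<>(fetched);
        discovered.removeAll(assigned);            // present on the broker, not yet assigned
        return discovered;
    }

    public static Set<String> noLongerAvailable(Set<String> assigned, List<String> fetched) {
        Set<String> removed = new HashSet<>(assigned);
        removed.removeAll(new HashSet<>(fetched)); // assigned, but gone from the broker
        return removed;
    }
}
```

Before this PR, only the first half of the diff was acted upon; the second half is what allows unassigning partitions of deleted topics.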
    
    ## Verifying this change
    
    This change can be verified manually as follows:
    - Create a job with a Kafka consumer listening to a topic pattern, with partition
discovery activated and the property introduced in this PR set to true.
    - Configure Kafka with the following properties:
       delete.topic.enable=true
       auto.create.topics.enable=false
    - Create some topics matching the pattern.
    - Run the job.
    - While running, remove some of the topics.
    - Verify that the partitions are unassigned and the job continues running without log noise.
    
    *This could probably be covered by e2e tests, but I'm not familiar with the system in
place.*
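The topic-pattern subscription in the first step boils down to regex matching against topic names. A self-contained illustration (the pattern and topic names here are made up, not taken from the PR):

```java
import java.util.List;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Illustration of how a topic pattern selects topics for subscription.
public class TopicPatternDemo {
    public static List<String> matchingTopics(Pattern topicPattern, List<String> allTopics) {
        return allTopics.stream()
                .filter(t -> topicPattern.matcher(t).matches())
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Pattern pattern = Pattern.compile("test-topic-.*");
        List<String> topics = List.of("test-topic-1", "test-topic-2", "other-topic");
        // Only the topics matching the pattern would be picked up by the consumer.
        System.out.println(matchingTopics(pattern, topics));
    }
}
```

Deleting one of the matching topics while the job runs is what exercises the unassignment path added by this PR.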
    
    ## Does this pull request potentially affect one of the following parts:
    
      - Dependencies (does it add or upgrade a dependency): (no)
      - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
      - The serializers: (no)
      - The runtime per-record code paths (performance sensitive): (no)
      - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing,
Yarn/Mesos, ZooKeeper: (no)
      - The S3 file system connector: (no)
    
    ## Documentation
    
      - Does this pull request introduce a new feature? (yes)
      - If yes, how is the feature documented? (JavaDocs)


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/EAlexRojas/flink kafka-unassign-partitions-fix

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/flink/pull/5991.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #5991
    
----
commit a17d0dcdeaac5b2508f4748d08fd4cb879fa5033
Author: EAlexRojas <alexrojas235@...>
Date:   2018-04-18T14:35:57Z

    [FLINK-9303] [kafka] Add support for dynamically unassigning partitions from the Kafka consumer when they become unavailable
    - Check for unavailable partitions recovered from state
    - Use a Kafka consumer option to activate these validations

----


---
