flink-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (FLINK-6352) FlinkKafkaConsumer should support to use timestamp to set up start offset
Date Tue, 27 Feb 2018 15:46:00 GMT

    [ https://issues.apache.org/jira/browse/FLINK-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16378802#comment-16378802 ]

ASF GitHub Bot commented on FLINK-6352:
---------------------------------------

Github user aljoscha commented on a diff in the pull request:

    https://github.com/apache/flink/pull/5282#discussion_r170966200
  
    --- Diff: flink-connectors/flink-connector-kafka-base/src/test/java/org/apache/flink/streaming/connectors/kafka/KafkaConsumerTestBase.java
---
    @@ -621,12 +621,70 @@ public void runStartFromSpecificOffsets() throws Exception {
     		partitionsToValueCountAndStartOffsets.put(2, new Tuple2<>(28, 22));	// partition 2 should read offset 22-49
     		partitionsToValueCountAndStartOffsets.put(3, new Tuple2<>(50, 0));	// partition 3 should read offset 0-49
     
    -		readSequence(env, StartupMode.SPECIFIC_OFFSETS, specificStartupOffsets, readProps, topicName, partitionsToValueCountAndStartOffsets);
    +		readSequence(env, StartupMode.SPECIFIC_OFFSETS, specificStartupOffsets, null, readProps, topicName, partitionsToValueCountAndStartOffsets);
     
     		kafkaOffsetHandler.close();
     		deleteTestTopic(topicName);
     	}
     
    +	/**
    +	 * This test ensures that the consumer correctly uses the user-supplied timestamp when explicitly configured to
    +	 * start from a timestamp.
    +	 *
    +	 * <p>The validated Kafka data is written in 2 steps: first, an initial 50 records are written to each partition.
    +	 * After that, another 30 records are appended to each partition. Before each step, a timestamp is recorded.
    +	 * For the validation, when the read job is configured to start from the first timestamp, each partition should start
    +	 * from offset 0 and read a total of 80 records. When configured to start from the second timestamp,
    +	 * each partition should start from offset 50 and read only the 30 appended records.
    +	 */
    +	public void runStartFromTimestamp() throws Exception {
    +		// 4 partitions with 50 records each
    +		final int parallelism = 4;
    +		final int initialRecordsInEachPartition = 50;
    +		final int appendRecordsInEachPartition = 30;
    +
    +		long firstTimestamp = 0;
    +		long secondTimestamp = 0;
    +		String topic = "";
    +
    +		// attempt to create an appended test sequence, where the timestamp of writing the appended sequence
    +		// is assured to be larger than the timestamp of the original sequence.
    +		final int maxRetries = 3;
    +		int attempt = 0;
    +		while (attempt != maxRetries) {
    +			firstTimestamp = System.currentTimeMillis();
    +			topic = writeSequence("runStartFromTimestamp", initialRecordsInEachPartition, parallelism, 1);
    --- End diff --
    
    Ah, I just thought that we could have a simple loop there:
    
    ```
    long secondTimestamp = System.currentTimeMillis();
    while (secondTimestamp <= firstTimestamp) {
      Thread.sleep(1); // sleep briefly until the clock advances past firstTimestamp
      secondTimestamp = System.currentTimeMillis();
    }
    ```
    what do you think?
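
    For reference, a compilable, self-contained sketch of that busy-wait as a standalone helper; the class and method names and the 1 ms sleep interval are illustrative, not from the PR:

    ```
    // Sketch: block until the wall clock has advanced strictly past a previously
    // recorded timestamp, so that records written afterwards are guaranteed to
    // carry a larger timestamp than the original sequence.
    public final class ClockAdvance {

    	private ClockAdvance() {}

    	/** Returns the first wall-clock time strictly larger than firstTimestamp. */
    	public static long waitUntilAfter(long firstTimestamp) throws InterruptedException {
    		long secondTimestamp = System.currentTimeMillis();
    		while (secondTimestamp <= firstTimestamp) {
    			Thread.sleep(1); // millisecond granularity matches System.currentTimeMillis()
    			secondTimestamp = System.currentTimeMillis();
    		}
    		return secondTimestamp;
    	}
    }
    ```

    With such a helper, the retry loop around writeSequence would no longer be needed, since the second timestamp is guaranteed to be strictly larger than the first.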


> FlinkKafkaConsumer should support to use timestamp to set up start offset
> -------------------------------------------------------------------------
>
>                 Key: FLINK-6352
>                 URL: https://issues.apache.org/jira/browse/FLINK-6352
>             Project: Flink
>          Issue Type: Improvement
>          Components: Kafka Connector
>            Reporter: Fang Yong
>            Assignee: Fang Yong
>            Priority: Blocker
>             Fix For: 1.5.0
>
>
>     Currently "auto.offset.reset" is used to initialize the start offset of FlinkKafkaConsumer, and the value should be earliest/latest/none. This setting can only make the job consume from the beginning or from the most recent data, but cannot specify a particular Kafka offset from which to start consuming.
>     So, there should be a configuration item (such as "flink.source.start.time", with the format "yyyy-MM-dd HH:mm:ss") that allows the user to configure the initial offset of Kafka. The behavior of "flink.source.start.time" is as follows:
> 1) The job starts from a checkpoint / savepoint:
>   a> If the offset of a partition can be restored from the checkpoint/savepoint, "flink.source.start.time" will be ignored.
>   b> If there is no checkpoint/savepoint state for the partition (for example, the partition was newly added), "flink.source.start.time" will be used to initialize the offset of the partition.
> 2) The job has no checkpoint / savepoint, so "flink.source.start.time" is used to initialize the Kafka offsets:
>   a> If "flink.source.start.time" is valid, use it to set the Kafka start offsets.
>   b> If "flink.source.start.time" is out of range, behave as the consumer does currently with no initial offset: get Kafka's current offset and start reading.
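
As a rough illustration of the proposed semantics, the sketch below parses the proposed "flink.source.start.time" property (format "yyyy-MM-dd HH:mm:ss", both taken from the description above) into epoch milliseconds; the class name and the println are illustrative only, and the actual resolution of the timestamp to per-partition start offsets is only indicated in a comment:

```
import java.text.SimpleDateFormat;
import java.util.Properties;

// Sketch of how the proposed "flink.source.start.time" property could be
// interpreted: parse the configured wall-clock time into epoch milliseconds,
// which a consumer would then resolve to per-partition start offsets.
public class StartFromTimestampSketch {

	public static void main(String[] args) throws Exception {
		Properties props = new Properties();
		props.setProperty("bootstrap.servers", "localhost:9092");
		props.setProperty("flink.source.start.time", "2018-02-27 15:46:00");

		String configured = props.getProperty("flink.source.start.time");
		long startTimestampMillis =
				new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").parse(configured).getTime();

		// A consumer honoring the proposal would hand this timestamp to Kafka's
		// offsetsForTimes(...) lookup to find the start offset of each partition,
		// falling back to the current offset when the timestamp is out of range
		// (case 2b above).
		System.out.println("start offset lookup timestamp = " + startTimestampMillis);
	}
}
```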



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
