Date: Tue, 8 Aug 2017 14:58:00 +0000 (UTC)
From: "Pawel Bartoszek (JIRA)"
To: commits@beam.apache.org
Subject: [jira] [Updated] (BEAM-2752) Job fails to checkpoint with kinesis stream as an input for Flink job

    [ https://issues.apache.org/jira/browse/BEAM-2752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Pawel Bartoszek updated BEAM-2752:
----------------------------------
Description:

Our job reads from a Kinesis stream as its input. Quite often, when the job checkpoints for the first time, the exception below is thrown.

The scenario that produces the exception:
# Upload a new jar file with the job logic
# Start a new job
# Stop the job with a savepoint that is written to S3
# Upload a new jar file with the job logic (in this case the jar contains the same code, but our pipeline generates a new jar file name for every build)
# Start a new job from the savepoint
# The first checkpoint fails, causing the job to be cancelled

If the job is started without passing a savepoint, checkpointing works fine.
Other information:
Flink version 1.2.1
Beam 2.0.0
Flink Parallelism - 20 slots
Number of task managers - 4

{code:java}
java.lang.Exception: Error while triggering checkpoint 59 for Source: Read(KinesisSource) -> Flat Map -> ParMultiDo(KinesisExtractor) -> Flat Map -> ParMultiDo(StringToRecord) -> Flat Map -> ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ToRRecord) -> Flat Map -> ParMultiDo(AddTimestamps) -> Flat Map -> xxxx.yyyy.GroupByOneMinuteWindow GROUP RDOTRECORDS BY ONE MINUTE WINDOWS/Window.Assign.out -> (ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ToSomeKey) -> Flat Map -> ToKeyedWorkItem, ParMultiDo(ToCompositeKey) -> Flat Map -> ParMultiDo(Anonymous) -> Flat Map -> ToKeyedWorkItem, ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ApplyShardingKey) -> Flat Map -> ToKeyedWorkItem) (1/20)
	at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1136)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not perform checkpoint 59 for operator Source: Read(KinesisSource) -> Flat Map -> ParMultiDo(KinesisExtractor) -> Flat Map -> ParMultiDo(StringToRecord) -> Flat Map -> ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ToRRecord) -> Flat Map -> ParMultiDo(AddTimestamps) -> Flat Map -> xxxx.yyyy.GroupByOneMinuteWindow GROUP RDOTRECORDS BY ONE MINUTE WINDOWS/Window.Assign.out -> (ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ToSomeKey) -> Flat Map -> ToKeyedWorkItem, ParMultiDo(ToCompositeKey) -> Flat Map -> ParMultiDo(Anonymous) -> Flat Map -> ToKeyedWorkItem, ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ApplyShardingKey) -> Flat Map -> ToKeyedWorkItem) (1/20).
	at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:524)
	at org.apache.flink.runtime.taskmanager.Task$3.run(Task.java:1125)
	... 5 more
Caused by: java.lang.Exception: Could not complete snapshot 59 for operator Source: Read(KinesisSource) -> Flat Map -> ParMultiDo(KinesisExtractor) -> Flat Map -> ParMultiDo(StringToRecord) -> Flat Map -> ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ToRRecord) -> Flat Map -> ParMultiDo(AddTimestamps) -> Flat Map -> xxxx.yyyy.GroupByOneMinuteWindow GROUP RDOTRECORDS BY ONE MINUTE WINDOWS/Window.Assign.out -> (ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ToSomeKey) -> Flat Map -> ToKeyedWorkItem, ParMultiDo(ToCompositeKey) -> Flat Map -> ParMultiDo(Anonymous) -> Flat Map -> ToKeyedWorkItem, ParMultiDo(Anonymous) -> Flat Map -> ParMultiDo(ApplyShardingKey) -> Flat Map -> ToKeyedWorkItem) (1/20).
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:379)
	at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.checkpointStreamOperator(StreamTask.java:1157)
	at org.apache.flink.streaming.runtime.tasks.StreamTask$CheckpointingOperation.executeCheckpointing(StreamTask.java:1090)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.checkpointState(StreamTask.java:630)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.performCheckpoint(StreamTask.java:575)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.triggerCheckpoint(StreamTask.java:518)
	... 6 more
Caused by: java.util.ConcurrentModificationException
	at java.util.ArrayDeque$DeqIterator.next(ArrayDeque.java:643)
	at org.apache.beam.sdks.java.io.kinesis.repackaged.com.google.common.collect.TransformedIterator.next(TransformedIterator.java:47)
	at org.apache.beam.sdks.java.io.kinesis.repackaged.com.google.common.collect.ImmutableCollection$Builder.addAll(ImmutableCollection.java:409)
	at org.apache.beam.sdks.java.io.kinesis.repackaged.com.google.common.collect.ImmutableList$Builder.addAll(ImmutableList.java:699)
	at org.apache.beam.sdks.java.io.kinesis.repackaged.com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:256)
	at org.apache.beam.sdks.java.io.kinesis.repackaged.com.google.common.collect.ImmutableList.copyOf(ImmutableList.java:209)
	at org.apache.beam.sdk.io.kinesis.KinesisReaderCheckpoint.<init>(KinesisReaderCheckpoint.java:44)
	at org.apache.beam.sdk.io.kinesis.KinesisReaderCheckpoint.asCurrentStateOf(KinesisReaderCheckpoint.java:49)
	at org.apache.beam.sdk.io.kinesis.KinesisReader.getCheckpointMark(KinesisReader.java:137)
	at org.apache.beam.runners.flink.translation.wrappers.streaming.io.UnboundedSourceWrapper.snapshotState(UnboundedSourceWrapper.java:379)
	at org.apache.flink.streaming.api.functions.util.StreamingFunctionUtils.trySnapshotFunctionState(StreamingFunctionUtils.java:118)
	at org.apache.flink.streaming.api.functions.util.StreamingFunctionUtils.snapshotFunctionState(StreamingFunctionUtils.java:99)
	at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.snapshotState(AbstractUdfStreamOperator.java:100)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator.snapshotState(AbstractStreamOperator.java:357)
	... 11 more
{code}

> Job fails to checkpoint with kinesis stream as an input for Flink job
> ---------------------------------------------------------------------
>
>                 Key: BEAM-2752
>                 URL: https://issues.apache.org/jira/browse/BEAM-2752
>             Project: Beam
>          Issue Type: Bug
>          Components: sdk-java-extensions
>    Affects Versions: 2.0.0
>            Reporter: Pawel Bartoszek
>            Assignee: Davor Bonaci
>            Priority: Minor
>
-- 
This message was sent by Atlassian JIRA
(v6.4.14#64029)
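The root cause at the bottom of the trace is a java.util.ConcurrentModificationException: KinesisReaderCheckpoint copies the reader's shard state with ImmutableList.copyOf while another thread is still mutating the underlying ArrayDeque, and the deque's fail-fast iterator aborts. A minimal sketch of that hazard follows; the class and method names are invented for illustration, and ArrayList stands in for the ArrayDeque in the trace because its iterator is deterministically fail-fast on every JDK version.

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    // Snapshot a shard list while it is being mutated mid-iteration.
    // This loosely models the checkpoint thread copying the reader's
    // shard state while the reader thread keeps advancing it.
    public static boolean snapshotWhileMutating() {
        List<String> shards = new ArrayList<>();
        shards.add("shard-0000");
        shards.add("shard-0001");
        try {
            List<String> snapshot = new ArrayList<>();
            for (String s : shards) {      // fail-fast iterator
                snapshot.add(s);
                shards.add("shard-x");     // structural change during iteration
            }
            return false;                  // no exception: snapshot succeeded
        } catch (ConcurrentModificationException e) {
            return true;                   // what the checkpoint thread saw
        }
    }

    // One common remedy: make mutation and snapshot mutually exclusive,
    // e.g. copy under the same lock that guards all writers.
    public static List<String> safeSnapshot(List<String> shards, Object lock) {
        synchronized (lock) {
            return new ArrayList<>(shards); // atomic copy w.r.t. writers
        }
    }

    public static void main(String[] args) {
        // Prints: CME during unsynchronized snapshot: true
        System.out.println("CME during unsynchronized snapshot: " + snapshotWhileMutating());
    }
}
```

Whether Beam's eventual fix used a lock, a concurrent collection, or an atomic copy is not stated here; the sketch only shows the general race behind the exception.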