spark-reviews mailing list archives

From tdas <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-11359][STREAMING][KINESIS] Checkpoint t...
Date Thu, 05 Nov 2015 22:59:46 GMT
Github user tdas commented on a diff in the pull request:

    https://github.com/apache/spark/pull/9421#discussion_r44082707
  
    --- Diff: extras/kinesis-asl/src/main/scala/org/apache/spark/streaming/kinesis/KinesisCheckpointer.scala
---
    @@ -0,0 +1,117 @@
    +/*
    + * Licensed to the Apache Software Foundation (ASF) under one or more
    + * contributor license agreements.  See the NOTICE file distributed with
    + * this work for additional information regarding copyright ownership.
    + * The ASF licenses this file to You under the Apache License, Version 2.0
    + * (the "License"); you may not use this file except in compliance with
    + * the License.  You may obtain a copy of the License at
    + *
    + *    http://www.apache.org/licenses/LICENSE-2.0
    + *
    + * Unless required by applicable law or agreed to in writing, software
    + * distributed under the License is distributed on an "AS IS" BASIS,
    + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    + * See the License for the specific language governing permissions and
    + * limitations under the License.
    + */
    +package org.apache.spark.streaming.kinesis
    +
    +import java.util.concurrent._
    +
    +import scala.util.control.NonFatal
    +
    +import com.amazonaws.services.kinesis.clientlibrary.interfaces.IRecordProcessorCheckpointer
    +import com.amazonaws.services.kinesis.clientlibrary.types.ShutdownReason
    +
    +import org.apache.spark.Logging
    +import org.apache.spark.streaming.Duration
    +import org.apache.spark.util.ThreadUtils
    +
    +/**
    + * This is a helper class for managing Kinesis checkpointing.
    + *
    + * @param receiver The receiver that keeps track of which sequence numbers we can checkpoint
    + * @param checkpointInterval How frequently we will checkpoint to DynamoDB
    + * @param workerId Worker Id of KCL worker for logging purposes
    + */
    +private[kinesis] class KinesisCheckpointer(
    +    receiver: KinesisReceiver[_],
    +    checkpointInterval: Duration,
    +    workerId: String) extends Logging {
    +
    +  // a map from shardIds to checkpointers
    +  private val checkpointers = new ConcurrentHashMap[String, IRecordProcessorCheckpointer]()
    +
    +  private val lastCheckpointedSeqNums = new ConcurrentHashMap[String, String]()
    +
    +  private val checkpointerThread = startCheckpointerThread()
    +
    +  /** Update the checkpointer instance to the most recent one for the given shardId. */
    +  def setCheckpointer(shardId: String, checkpointer: IRecordProcessorCheckpointer): Unit = {
    +    checkpointers.put(shardId, checkpointer)
    +  }
    +
    +  /**
    +   * Stop tracking the specified shardId.
    +   *
    +   * If a checkpointer is provided, e.g. on IRecordProcessor.shutdown [[ShutdownReason.TERMINATE]],
    --- End diff --
    
    Why do we need to do two different things? Unlike the usual `IRecordProcessor`
implementations, which checkpoint as soon as data is received, we checkpoint only data that
has been received AND reliably stored in Spark. If some data has already been stored,
isn't it strictly better to write the corresponding offset to DynamoDB under any condition?
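    The distinction above, checkpointing only sequence numbers that Spark has durably stored rather than checkpointing on receipt, can be sketched as follows. This is a hypothetical simplification, not the PR's actual API: `markStored`, `checkpointIfNeeded`, and the `writeToDynamo` callback are illustrative stand-ins for the KCL's `IRecordProcessorCheckpointer.checkpoint(...)` path.
    
    ```scala
    // Hypothetical sketch of "checkpoint only what Spark has stored".
    // Identifiers are illustrative; the real class wires this to the KCL.
    import java.util.concurrent.ConcurrentHashMap
    
    class CheckpointerSketch {
      // shardId -> highest sequence number that Spark has reliably stored
      private val storedSeqNums = new ConcurrentHashMap[String, String]()
      // shardId -> last sequence number written to DynamoDB
      private val lastCheckpointedSeqNums = new ConcurrentHashMap[String, String]()
    
      /** Called once a block of records has been durably stored by Spark. */
      def markStored(shardId: String, seqNum: String): Unit =
        storedSeqNums.put(shardId, seqNum)
    
      /** Periodic checkpoint: write only if something new was stored since last time. */
      def checkpointIfNeeded(shardId: String, writeToDynamo: String => Unit): Unit = {
        val stored = storedSeqNums.get(shardId)
        if (stored != null && stored != lastCheckpointedSeqNums.get(shardId)) {
          writeToDynamo(stored) // in the real code: checkpointer.checkpoint(stored)
          lastCheckpointedSeqNums.put(shardId, stored)
        }
      }
    }
    ```
    
    Because the checkpointed offset always trails what is already stored, writing it to DynamoDB is safe regardless of the shutdown reason, which is the point of the question above.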


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscribe@spark.apache.org
For additional commands, e-mail: reviews-help@spark.apache.org

