Date: Tue, 27 Mar 2018 22:57:00 +0000 (UTC)
From: "Srinivas Dhruvakumar (JIRA)"
To: jira@kafka.apache.org
Subject: [jira] [Commented] (KAFKA-6649) ReplicaFetcher stopped after non fatal exception is thrown

    [ https://issues.apache.org/jira/browse/KAFKA-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16416382#comment-16416382 ]

Srinivas Dhruvakumar commented on KAFKA-6649:
---------------------------------------------

I am trying out the patch for "high watermark could be incorrectly set to -1", but I am unable to reproduce the scenario above: "org.apache.kafka.common.errors.OffsetOutOfRangeException: Cannot increment the log start offset to 2098535 of partition [[TOPIC_NAME_REMOVED]]-84 since it is larger than the high watermark -1". Does anyone know how to reproduce this error?
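For what it is worth, my current reading (an assumption on my part, not a quote of the broker source; the names below are simplified stand-ins) is that the message comes from a guard that refuses to advance the log start offset past the high watermark, and that a follower replica created for a brand-new partition can still be sitting at the sentinel high watermark of -1 when a log start offset update arrives from the leader. A minimal, self-contained sketch of that shape:

{noformat}
// Minimal, self-contained sketch (not the actual broker code; simplified names) of
// the kind of guard that appears to produce the error above: advancing the log start
// offset past the current high watermark is rejected, and an uninitialized high
// watermark is reported as -1.
object HighWatermarkGuardSketch {
  final class OffsetOutOfRangeException(msg: String) extends RuntimeException(msg)

  final case class Replica(partition: String, var logStartOffset: Long, var highWatermark: Long)

  def maybeIncrementLogStartOffset(replica: Replica, newLogStartOffset: Long): Unit = {
    if (newLogStartOffset > replica.highWatermark)
      throw new OffsetOutOfRangeException(
        s"Cannot increment the log start offset to $newLogStartOffset of partition ${replica.partition} " +
          s"since it is larger than the high watermark ${replica.highWatermark}")
    replica.logStartOffset = newLogStartOffset
  }

  def main(args: Array[String]): Unit = {
    // Follower just created for a new partition: high watermark not yet initialized.
    val follower = Replica("[[TOPIC_NAME_REMOVED]]-84", logStartOffset = 0L, highWatermark = -1L)
    maybeIncrementLogStartOffset(follower, newLogStartOffset = 2098535L) // throws, matching the log line
  }
}
{noformat}

If that reading is correct, a reproduction probably needs the leader's log start offset (for example, after a DeleteRecords request) to reach a follower before the follower's high watermark has been initialized.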
> ReplicaFetcher stopped after non fatal exception is thrown
> ----------------------------------------------------------
>
>                 Key: KAFKA-6649
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6649
>             Project: Kafka
>          Issue Type: Bug
>          Components: replication
>    Affects Versions: 1.0.0, 0.11.0.2, 1.1.0, 1.0.1
>            Reporter: Julio Ng
>            Priority: Major
>
> We have seen several under-replicated partitions, usually triggered by topic creation. After digging into the logs, we see the below:
> {noformat}
> [2018-03-12 22:40:17,641] ERROR [ReplicaFetcher replicaId=12, leaderId=0, fetcherId=1] Error due to (kafka.server.ReplicaFetcherThread)
> kafka.common.KafkaException: Error processing data for partition [[TOPIC_NAME_REMOVED]]-84 offset 2098535
>  at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:204)
>  at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1$$anonfun$apply$2.apply(AbstractFetcherThread.scala:169)
>  at scala.Option.foreach(Option.scala:257)
>  at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:169)
>  at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2$$anonfun$apply$mcV$sp$1.apply(AbstractFetcherThread.scala:166)
>  at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>  at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
>  at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply$mcV$sp(AbstractFetcherThread.scala:166)
>  at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
>  at kafka.server.AbstractFetcherThread$$anonfun$processFetchRequest$2.apply(AbstractFetcherThread.scala:166)
>  at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
>  at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:164)
>  at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:111)
>  at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
> Caused by: org.apache.kafka.common.errors.OffsetOutOfRangeException: Cannot increment the log start offset to 2098535 of partition [[TOPIC_NAME_REMOVED]]-84 since it is larger than the high watermark -1
> [2018-03-12 22:40:17,641] INFO [ReplicaFetcher replicaId=12, leaderId=0, fetcherId=1] Stopped (kafka.server.ReplicaFetcherThread){noformat}
> It looks like after the ReplicaFetcherThread is stopped, the replicas start to lag behind, presumably because we are no longer fetching from the leader.
> Looking further at the ShutdownableThread.scala object:
> {noformat}
> override def run(): Unit = {
>   info("Starting")
>   try {
>     while (isRunning)
>       doWork()
>   } catch {
>     case e: FatalExitError =>
>       shutdownInitiated.countDown()
>       shutdownComplete.countDown()
>       info("Stopped")
>       Exit.exit(e.statusCode())
>     case e: Throwable =>
>       if (isRunning)
>         error("Error due to", e)
>   } finally {
>     shutdownComplete.countDown()
>   }
>   info("Stopped")
> }{noformat}
> For the Throwable (non-fatal) case, it just exits the while loop and the thread stops doing work. I am not sure whether this is the intended behavior of the ShutdownableThread, or whether the exception should be caught and we should keep calling doWork().
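As a side note on the question in the last quoted paragraph above: here is a minimal, self-contained sketch (simplified, stand-alone names; not the actual ShutdownableThread and not a proposed patch) of what "keep calling doWork()" could look like, with the catch for non-fatal errors moved inside the loop so the worker logs the failure and carries on instead of silently leaving run():

{noformat}
import scala.util.control.NonFatal

// Illustrative only: a worker loop that survives non-fatal failures.
object RetryingWorkerSketch {
  @volatile private var isRunning = true
  private var attempt = 0

  // Stand-in for the real doWork(): fails twice, then succeeds and stops the loop.
  private def doWork(): Unit = {
    attempt += 1
    if (attempt <= 2) throw new RuntimeException(s"transient failure #$attempt")
    println("work done")
    isRunning = false
  }

  def main(args: Array[String]): Unit = {
    while (isRunning) {
      try doWork()
      catch {
        case NonFatal(e) => println(s"Error due to ${e.getMessage}; continuing") // non-fatal: keep the worker alive
      }
    }
    println("Stopped")
  }
}
{noformat}

Whether that is the right trade-off for the ReplicaFetcherThread, as opposed to letting it stop and restarting it, is exactly the open question raised above; the sketch is only meant to make the alternative concrete.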