Date: Thu, 11 Jan 2018 19:07:00 +0000 (UTC)
From: "Andrew Olson (JIRA)"
To: jira@kafka.apache.org
Subject: [jira] [Comment Edited] (KAFKA-5413) Log cleaner fails due to large offset in segment file

    [ https://issues.apache.org/jira/browse/KAFKA-5413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16322759#comment-16322759 ]

Andrew Olson edited comment on KAFKA-5413 at 1/11/18 7:06 PM:
--------------------------------------------------------------

We ran into this for a partition of the __consumer_offsets topic. The issue was discovered when we noticed that the open file count for a Kafka broker had been growing steadily for a couple of months and was about 2x higher than on any other broker in the cluster.
When we restarted this broker it seemed able to recover, deleting a large number of old log segments, and the open file count returned to a more normal value.


> Log cleaner fails due to large offset in segment file
> -----------------------------------------------------
>
>                 Key: KAFKA-5413
>                 URL: https://issues.apache.org/jira/browse/KAFKA-5413
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.10.2.0, 0.10.2.1
>         Environment: Ubuntu 14.04 LTS, Oracle Java 8u92, kafka_2.11-0.10.2.0
>            Reporter: Nicholas Ngorok
>            Assignee: Kelvin Rutt
>            Priority: Critical
>              Labels: reliability
>             Fix For: 0.10.2.2, 0.11.0.0
>
>         Attachments: 00000000000000000000.index.cleaned, 00000000000000000000.log, 00000000000000000000.log.cleaned, 00000000000000000000.timeindex.cleaned, 00000000002147422683.log, kafka-5413.patch
>
>
> The log cleaner thread in our brokers is failing with the trace below:
> {noformat}
> [2017-06-08 15:49:54,822] INFO {kafka-log-cleaner-thread-0} Cleaner 0: Cleaning segment 0 in log __consumer_offsets-12 (largest timestamp Thu Jun 08 15:48:59 PDT 2017) into 0, retaining deletes. (kafka.log.LogCleaner)
> [2017-06-08 15:49:54,822] INFO {kafka-log-cleaner-thread-0} Cleaner 0: Cleaning segment 2147343575 in log __consumer_offsets-12 (largest timestamp Thu Jun 08 15:49:06 PDT 2017) into 0, retaining deletes. (kafka.log.LogCleaner)
> [2017-06-08 15:49:54,834] ERROR {kafka-log-cleaner-thread-0} [kafka-log-cleaner-thread-0], Error due to (kafka.log.LogCleaner)
> java.lang.IllegalArgumentException: requirement failed: largest offset in message set can not be safely converted to relative offset.
>         at scala.Predef$.require(Predef.scala:224)
>         at kafka.log.LogSegment.append(LogSegment.scala:109)
>         at kafka.log.Cleaner.cleanInto(LogCleaner.scala:478)
>         at kafka.log.Cleaner$$anonfun$cleanSegments$1.apply(LogCleaner.scala:405)
>         at kafka.log.Cleaner$$anonfun$cleanSegments$1.apply(LogCleaner.scala:401)
>         at scala.collection.immutable.List.foreach(List.scala:381)
>         at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:401)
>         at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:363)
>         at kafka.log.Cleaner$$anonfun$clean$4.apply(LogCleaner.scala:362)
>         at scala.collection.immutable.List.foreach(List.scala:381)
>         at kafka.log.Cleaner.clean(LogCleaner.scala:362)
>         at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:241)
>         at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:220)
>         at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
> [2017-06-08 15:49:54,835] INFO {kafka-log-cleaner-thread-0} [kafka-log-cleaner-thread-0], Stopped (kafka.log.LogCleaner)
> {noformat}
> This seems to point at the specific line [here|https://github.com/apache/kafka/blob/0.11.0/core/src/main/scala/kafka/log/LogSegment.scala#L92] in the Kafka source, where the difference can actually be larger than MAXINT since both baseOffset and offset are of type Long. It was introduced in this [pr|https://github.com/apache/kafka/pull/2210/files/56d1f8196b77a47b176b7bbd1e4220a3be827631].
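The failing {{require}} reduces to a range check: offsets inside a segment are stored in the index as 4-byte offsets relative to the segment's base offset. Below is a minimal Scala sketch of that invariant, with hypothetical names; it is not the actual LogSegment.append code:

{code:scala}
object RelativeOffsetCheck {
  // Hypothetical helper, not Kafka's real API: an offset can only be written
  // into a segment if its distance from the segment's base offset fits in a
  // 4-byte relative offset.
  def canAppend(baseOffset: Long, largestOffset: Long): Boolean = {
    val delta = largestOffset - baseOffset
    delta >= 0 && delta <= Int.MaxValue
  }

  def main(args: Array[String]): Unit = {
    // Offsets taken from the segment dumps quoted below.
    println(canAppend(0L, 1810054758L)) // true: fits in an Int
    println(canAppend(0L, 2147539884L)) // false: trips the require(...)
  }
}
{code}

Cleaning segment 2147343575 "into 0" means that retained records with offsets past MAXINT must be rewritten relative to base offset 0, which is exactly where this check fails.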
> These were the outputs of dumping the first two log segments:
> {noformat}
> :~$ /usr/bin/kafka-run-class kafka.tools.DumpLogSegments --deep-iteration --files /kafka-logs/__consumer_offsets-12/00000000000000000000.log
> Dumping /kafka-logs/__consumer_offsets-12/00000000000000000000.log
> Starting offset: 0
> offset: 1810054758 position: 0 NoTimestampType: -1 isvalid: true payloadsize: -1 magic: 0 compresscodec: NONE crc: 3127861909 keysize: 34
> :~$ /usr/bin/kafka-run-class kafka.tools.DumpLogSegments --deep-iteration --files /kafka-logs/__consumer_offsets-12/00000000002147343575.log
> Dumping /kafka-logs/__consumer_offsets-12/00000000002147343575.log
> Starting offset: 2147343575
> offset: 2147539884 position: 0 NoTimestampType: -1 isvalid: true payloadsize: -1 magic: 0 compresscodec: NONE crc: 2282192097 keysize: 34
> {noformat}
> My guess is that since 2147539884 is larger than MAXINT, we are hitting this exception. Was there a specific reason this check was added in 0.10.2?
> E.g. if the first record has key "key 0" and is followed by MAXINT + 1 records with key "key 1", wouldn't we run into this situation whenever the log cleaner runs?
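The arithmetic in that scenario can be checked directly. The sketch below is only a worked example of the guess above, assuming compaction retains the latest record per key and rewrites it into a cleaned segment with base offset 0; it is not a confirmed reproduction:

{code:scala}
object OffsetOverflowExample {
  def main(args: Array[String]): Unit = {
    val baseOffset = 0L // assumed base offset of the cleaned segment

    // Compaction keeps the latest record per key: offset 0 for "key 0" and
    // offset MAXINT + 1 for the last of the "key 1" records.
    val retained = Seq(0L, Int.MaxValue.toLong + 1)

    retained.foreach { offset =>
      val fitsInInt = offset - baseOffset <= Int.MaxValue
      println(s"offset $offset relative to base $baseOffset fits in an Int: $fitsInInt")
    }

    // The observed case from the dumps: offset 2147539884 rewritten against
    // base offset 0 overflows in the same way.
    println(2147539884L - baseOffset > Int.MaxValue) // true
  }
}
{code}

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)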