From: "Hudson (JIRA)"
To: issues@ambari.apache.org
Reply-To: dev@ambari.apache.org
Date: Wed, 27 Jul 2016 22:52:20 +0000 (UTC)
Subject: [jira] [Commented] (AMBARI-17929) Kafka brokers went down after Ambari upgrade due to IllegalArgumentException

    [ https://issues.apache.org/jira/browse/AMBARI-17929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15396578#comment-15396578 ]

Hudson commented on AMBARI-17929:
---------------------------------

FAILURE: Integrated in Ambari-trunk-Commit #5402 (See [https://builds.apache.org/job/Ambari-trunk-Commit/5402/])
AMBARI-17929. Kafka brokers went down after Ambari upgrade due to IllegalArgumentException (vbrodetskyi: [http://git-wip-us.apache.org/repos/asf?p=ambari.git&a=commit&h=d6b8617167484d22ceccfc6f1eb71d0b246392f4])
* ambari-server/src/test/java/org/apache/ambari/server/upgrade/UpgradeCatalog240Test.java
* ambari-server/src/main/java/org/apache/ambari/server/upgrade/UpgradeCatalog240.java
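The change lands in UpgradeCatalog240, i.e. it runs while Ambari migrates the kafka-broker configuration during the upgrade. As a minimal sketch of the kind of rewrite such a step has to perform for the case in this issue (plain Java with illustrative names, not Ambari's actual API): when security.inter.broker.protocol is PLAINTEXTSASL but the listeners value only exposes PLAINTEXT and SSL endpoints, the PLAINTEXT endpoints need to become PLAINTEXTSASL.

{code}
// Hypothetical sketch only, not the actual UpgradeCatalog240 code.
// Rewrites a kafka-broker "listeners" value so that its protocol set
// covers the configured security.inter.broker.protocol.
import java.util.Arrays;
import java.util.stream.Collectors;

public class KafkaListenersMigration {

    static String fixListeners(String listeners, String interBrokerProtocol) {
        // Only the PLAINTEXT -> PLAINTEXTSASL mismatch from this bug is handled.
        if (!"PLAINTEXTSASL".equals(interBrokerProtocol)
                || listeners.contains("PLAINTEXTSASL://")) {
            return listeners;
        }
        return Arrays.stream(listeners.split(","))
                .map(ep -> ep.startsWith("PLAINTEXT://")
                        ? "PLAINTEXTSASL://" + ep.substring("PLAINTEXT://".length())
                        : ep)
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // The failing shape from the log below, with hostnames shortened:
        String listeners = "PLAINTEXT://broker-1:6667,SSL://broker-1:6666";
        System.out.println(fixListeners(listeners, "PLAINTEXTSASL"));
        // -> PLAINTEXTSASL://broker-1:6667,SSL://broker-1:6666
    }
}
{code}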
> Kafka brokers went down after Ambari upgrade due to IllegalArgumentException
> ----------------------------------------------------------------------------
>
>                 Key: AMBARI-17929
>                 URL: https://issues.apache.org/jira/browse/AMBARI-17929
>             Project: Ambari
>          Issue Type: Bug
>          Components: ambari-server
>    Affects Versions: 2.4.0
>            Reporter: Vitaly Brodetskyi
>            Assignee: Vitaly Brodetskyi
>            Priority: Blocker
>             Fix For: 2.4.0
>
>         Attachments: AMBARI-17929.patch
>
>
> *Steps*
> # Deploy an HDP-2.4.2 cluster with Ambari 2.2.2.0
> # Upgrade Ambari to 2.4.0.0
> # Observe the status of the Kafka brokers
> *Result*
> All brokers report down. The logs show:
> {code}
> [2016-07-27 05:48:26,535] INFO Initializing Kafka Timeline Metrics Sink (org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter)
> [2016-07-27 05:48:26,571] INFO Started Kafka Timeline metrics reporter with polling period 10 seconds (org.apache.hadoop.metrics2.sink.kafka.KafkaTimelineMetricsReporter)
> [2016-07-27 05:48:26,716] INFO KafkaConfig values:
> request.timeout.ms = 30000
> log.roll.hours = 168
> inter.broker.protocol.version = 0.9.0.X
> log.preallocate = false
> security.inter.broker.protocol = PLAINTEXTSASL
> controller.socket.timeout.ms = 30000
> broker.id.generation.enable = true
> ssl.keymanager.algorithm = SunX509
> ssl.key.password = [hidden]
> log.cleaner.enable = true
> ssl.provider = null
> num.recovery.threads.per.data.dir = 1
> background.threads = 10
> unclean.leader.election.enable = true
> sasl.kerberos.kinit.cmd = /usr/bin/kinit
> replica.lag.time.max.ms = 10000
> ssl.endpoint.identification.algorithm = null
> auto.create.topics.enable = true
> zookeeper.sync.time.ms = 2000
> ssl.client.auth = none
> ssl.keystore.password = [hidden]
> log.cleaner.io.buffer.load.factor = 0.9
> offsets.topic.compression.codec = 0
> log.retention.hours = 168
> log.dirs = /kafka-logs
> ssl.protocol = TLS
> log.index.size.max.bytes = 10485760
> sasl.kerberos.min.time.before.relogin = 60000
> log.retention.minutes = null
> connections.max.idle.ms = 600000
> ssl.trustmanager.algorithm = PKIX
> offsets.retention.minutes = 86400000
> max.connections.per.ip = 2147483647
> replica.fetch.wait.max.ms = 500
> metrics.num.samples = 2
> port = 6667
> offsets.retention.check.interval.ms = 600000
> log.cleaner.dedupe.buffer.size = 134217728
> log.segment.bytes = 1073741824
> group.min.session.timeout.ms = 6000
> producer.purgatory.purge.interval.requests = 10000
> min.insync.replicas = 1
> ssl.truststore.password = [hidden]
> log.flush.scheduler.interval.ms = 9223372036854775807
> socket.receive.buffer.bytes = 102400
> leader.imbalance.per.broker.percentage = 10
> num.io.threads = 8
> zookeeper.connect = nats11-36-alzs-dgm10toeriedwngdha-s11-3.openstacklocal:2181,nats11-36-alzs-dgm10toeriedwngdha-s11-4.openstacklocal:2181,nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:2181
> queued.max.requests = 500
> offsets.topic.replication.factor = 3
> replica.socket.timeout.ms = 30000
> offsets.topic.segment.bytes = 104857600
> replica.high.watermark.checkpoint.interval.ms = 5000
> broker.id = -1
> ssl.keystore.location = /etc/security/serverKeys/keystore.jks
> listeners = PLAINTEXT://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6667,SSL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6666
> log.flush.interval.messages = 9223372036854775807
> principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
> log.retention.ms = null
> offsets.commit.required.acks = -1
> sasl.kerberos.principal.to.local.rules = [DEFAULT]
> group.max.session.timeout.ms = 30000
> num.replica.fetchers = 1
> advertised.listeners = PLAINTEXT://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6667,SSL://nats11-36-alzs-dgm10toeriedwngdha-s11-1.openstacklocal:6666
> replica.socket.receive.buffer.bytes = 65536
> delete.topic.enable = false
> log.index.interval.bytes = 4096
> metric.reporters = []
> compression.type = producer
> log.cleanup.policy = delete
> controlled.shutdown.max.retries = 3
> log.cleaner.threads = 1
> quota.window.size.seconds = 1
> zookeeper.connection.timeout.ms = 25000
> offsets.load.buffer.size = 5242880
> zookeeper.session.timeout.ms = 30000
> ssl.cipher.suites = null
> authorizer.class.name = org.apache.ranger.authorization.kafka.authorizer.RangerKafkaAuthorizer
> sasl.kerberos.ticket.renew.jitter = 0.05
> sasl.kerberos.service.name = null
> controlled.shutdown.enable = true
> offsets.topic.num.partitions = 50
> quota.window.num = 11
> message.max.bytes = 1000000
> log.cleaner.backoff.ms = 15000
> log.roll.jitter.hours = 0
> log.retention.check.interval.ms = 300000
> replica.fetch.max.bytes = 1048576
> log.cleaner.delete.retention.ms = 86400000
> fetch.purgatory.purge.interval.requests = 10000
> log.cleaner.min.cleanable.ratio = 0.5
> offsets.commit.timeout.ms = 5000
> zookeeper.set.acl = false
> log.retention.bytes = -1
> offset.metadata.max.bytes = 4096
> leader.imbalance.check.interval.seconds = 300
> quota.consumer.default = 9223372036854775807
> log.roll.jitter.ms = null
> reserved.broker.max.id = 1000
> replica.fetch.backoff.ms = 1000
> advertised.host.name = null
> quota.producer.default = 9223372036854775807
> log.cleaner.io.buffer.size = 524288
> controlled.shutdown.retry.backoff.ms = 5000
> log.dir = /tmp/kafka-logs
> log.flush.offset.checkpoint.interval.ms = 60000
> log.segment.delete.delay.ms = 60000
> num.partitions = 1
> num.network.threads = 3
> socket.request.max.bytes = 104857600
> sasl.kerberos.ticket.renew.window.factor = 0.8
> log.roll.ms = null
> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
> socket.send.buffer.bytes = 102400
> log.flush.interval.ms = null
> ssl.truststore.location = /etc/security/serverKeys/truststore.jks
> log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
> default.replication.factor = 1
> metrics.sample.window.ms = 30000
> auto.leader.rebalance.enable = true
> host.name =
> ssl.truststore.type = JKS
> advertised.port = null
> max.connections.per.ip.overrides =
> replica.fetch.min.bytes = 1
> ssl.keystore.type = JKS
> (kafka.server.KafkaConfig)
> [2016-07-27 05:48:26,804] FATAL (kafka.Kafka$)
> java.lang.IllegalArgumentException: requirement failed: security.inter.broker.protocol must be a protocol in the configured set of advertised.listeners. The valid options based on currently configured protocols are Set(PLAINTEXT, SSL)
>     at scala.Predef$.require(Predef.scala:233)
>     at kafka.server.KafkaConfig.validateValues(KafkaConfig.scala:957)
>     at kafka.server.KafkaConfig.<init>(KafkaConfig.scala:935)
>     at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:699)
>     at kafka.server.KafkaConfig$.fromProps(KafkaConfig.scala:696)
>     at kafka.server.KafkaServerStartable$.fromProps(KafkaServerStartable.scala:28)
>     at kafka.Kafka$.main(Kafka.scala:58)
>     at kafka.Kafka.main(Kafka.scala)
> {code}
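The constraint that fails is plain set membership: Kafka collects the protocol of every endpoint in listeners/advertised.listeners into a set and requires security.inter.broker.protocol to be a member. Below is a minimal Java re-creation of that check; the real implementation is the Scala require(...) in kafka.server.KafkaConfig.validateValues named in the stack trace, and the class and method names here are illustrative.

{code}
// Minimal re-creation of the failing startup check; illustrative only.
import java.util.Set;

public class InterBrokerProtocolCheck {

    static void validate(Set<String> listenerProtocols, String interBrokerProtocol) {
        if (!listenerProtocols.contains(interBrokerProtocol)) {
            throw new IllegalArgumentException(
                    "requirement failed: security.inter.broker.protocol must be a protocol "
                    + "in the configured set of advertised.listeners. The valid options "
                    + "based on currently configured protocols are " + listenerProtocols);
        }
    }

    public static void main(String[] args) {
        // The combination from the log: listeners advertise PLAINTEXT and SSL,
        // while inter-broker security is PLAINTEXTSASL, so startup aborts.
        validate(Set.of("PLAINTEXT", "SSL"), "PLAINTEXTSASL");
    }
}
{code}

Either side of the mismatch fixes the startup failure: add a PLAINTEXTSASL endpoint to listeners and advertised.listeners, or point security.inter.broker.protocol at one of the already-advertised protocols. On a Kerberized cluster the former is the sensible route, which is what the upgrade-catalog change above appears to do.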