From: Quentin Derory
Date: Thu, 3 Aug 2017 11:48:39 -0700
Subject: SASL SCRAM failing during Handshake
To: dev@kafka.apache.org

Hello,

I have a Kafka 0.11.0.0 cluster running with SSL and SASL SCRAM enabled, connected to a ZooKeeper 3.5.3-beta ensemble with SASL PLAIN and SSL enabled. Kafka connects to ZooKeeper correctly (I replaced the default ZooKeeper library with the one matching the ensemble and added the Netty library as well), but the brokers then fail to connect to each other with a SASL error after a successful SSL handshake. I had previously registered the admin user with the kafka-configs command, using the corresponding password from the server JAAS file. Note: the same configuration works perfectly with SASL PLAIN instead of SASL SCRAM.
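For reference, the admin SCRAM credential was created with roughly the following command before starting the brokers (the password matches the one in the server JAAS file further down; the exact invocation may have differed slightly):

# register the SCRAM-SHA-256 credential for the inter-broker admin user in ZooKeeper
bin/kafka-configs.sh --zookeeper zk1.private.io:2281 --alter \
  --add-config 'SCRAM-SHA-256=[password=admin-secret]' \
  --entity-type users --entity-name admin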
What I tried:

- Got rid of the ZooKeeper ACLs.
- Enabled SASL only on the external listener: the Kafka cluster then boots up correctly, but consumers/producers fail after the SSL handshake with the errors below (a sketch of the client configuration follows the errors).

On the client side:

org.apache.kafka.common.errors.IllegalSaslStateException: Unexpected handshake request with client mechanism SCRAM-SHA-256, enabled mechanisms are [SCRAM-SHA-256]

On the server side:

at org.apache.kafka.common.security.scram.ScramSaslServer.evaluateResponse(ScramSaslServer.java:102)
at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:232)
... 6 more
Caused by: javax.security.sasl.SaslException: Authentication failed: Invalid user credentials
at org.apache.kafka.common.security.scram.ScramSaslServer.evaluateResponse(ScramSaslServer.java:92)
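For completeness, the consumer/producer in that test used a client configuration along these lines (only a sketch: the truststore path, user and password are placeholders, not my exact files):

# client.properties (sketch, placeholders only)
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="<scram-user>" password="<scram-password>";
ssl.truststore.location=/path/to/client.truststore.jks
ssl.truststore.password=<truststore-password>

passed to the console tools, e.g. kafka-console-producer.sh --broker-list <external-host>:<external-port> --topic <topic> --producer.config client.properties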
*server.log* (looping on this error):

[2017-07-31 22:55:03,234] DEBUG SSL handshake completed successfully with peerHost 'kafka1.arrayent.io' peerPort 9093 peerPrincipal 'CN=kafka1.private.io, OU=private, O=Private, L=Redwood city, ST=CA, C=US' cipherSuite 'TLS_DHE_DSS_WITH_AES_256_CBC_SHA256' (org.apache.kafka.common.network.SslTransportLayer)
[2017-07-31 22:55:03,234] DEBUG Set SASL client state to RECEIVE_HANDSHAKE_RESPONSE (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2017-07-31 22:55:03,234] DEBUG Set SASL server state to HANDSHAKE_REQUEST (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2017-07-31 22:55:03,234] DEBUG Handle Kafka request SASL_HANDSHAKE (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2017-07-31 22:55:03,234] DEBUG Using SASL mechanism 'SCRAM-SHA-256 (or SCRAM-SHA-512)' provided by client (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2017-07-31 22:55:03,234] DEBUG Set SASL server state to AUTHENTICATE (org.apache.kafka.common.security.authenticator.SaslServerAuthenticator)
[2017-07-31 22:55:03,234] DEBUG Set SASL client state to INITIAL (org.apache.kafka.common.security.authenticator.SaslClientAuthenticator)
[2017-07-31 22:55:03,234] WARN Unexpected error from kafka1.private.io/172.21.1.18; closing connection (org.apache.kafka.common.network.Selector)
java.lang.NullPointerException
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.sendSaslToken(SaslClientAuthenticator.java:214)
at org.apache.kafka.common.security.authenticator.SaslClientAuthenticator.authenticate(SaslClientAuthenticator.java:183)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:76)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:374)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:432)
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:264)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:218)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
[2017-07-31 22:55:03,234] DEBUG Node 0 disconnected. (org.apache.kafka.clients.NetworkClient)
[2017-07-31 22:55:03,234] WARN Connection to node 0 terminated during authentication. This may indicate that authentication failed due to invalid credentials. (org.apache.kafka.clients.NetworkClient)
[2017-07-31 22:55:03,234] DEBUG Connection with ip-172-21-1-18.us-west-2.compute.internal/172.21.1.18 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
at org.apache.kafka.common.network.SslTransportLayer.read(SslTransportLayer.java:492)
at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:85)
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:75)
at org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.authenticate(SaslServerAuthenticator.java:214)
at org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:76)
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:374)
at org.apache.kafka.common.network.Selector.poll(Selector.java:326)
at kafka.network.Processor.poll(SocketServer.scala:499)
at kafka.network.Processor.run(SocketServer.scala:435)
at java.lang.Thread.run(Thread.java:745)

Logs of the controller node (looping on these errors; the same three WARN/IOException blocks, one per broker, repeat indefinitely):

[2017-07-31 22:57:55,497] WARN [Controller-0-to-broker-1-send-thread]: Controller 0's connection to broker kafka2.private.io:9093 (id: 1 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to kafka2.private.io:9093 (id: 1 rack: null) failed.
at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:68)
at kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:264)
at kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:218)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:64)
[2017-07-31 22:57:55,524] WARN [Controller-0-to-broker-0-send-thread]: Controller 0's connection to broker kafka1.private.io:9093 (id: 0 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to kafka1.private.io:9093 (id: 0 rack: null) failed.
(same stack trace as above)
[2017-07-31 22:57:55,560] WARN [Controller-0-to-broker-2-send-thread]: Controller 0's connection to broker kafka3.private.io:9093 (id: 2 rack: null) was unsuccessful (kafka.controller.RequestSendThread)
java.io.IOException: Connection to kafka3.private.io:9093 (id: 2 rack: null) failed.
(same stack trace as above)
*My broker properties*:

broker.id=1
listeners=INTERNAL://kafka1.private.io:9093,EXTERNAL://ec2-102.52.36.54.us-west-2.compute.amazonaws.com:{{externalPort}}
zookeeper.connect=zk1.private.io:2281,zk2.private.io:2281,zk3.private.io:2281

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/logs/kafka/1

############################# SECURITY #############################
ssl.keystore.location={{kafka.keystoreLocation}}
ssl.keystore.password={{kafka.keystorePassword}}
ssl.key.password={{kafka.keyPassword}}
ssl.truststore.location={{kafka.truststoreLocation}}
ssl.truststore.password={{kafka.truststorePassword}}
ssl.client.auth=none
ssl.enabled.protocols=TLSv1.2,TLSv1.1,TLSv1

# InterBroker
#security.inter.broker.protocol=SASL_SSL
inter.broker.listener.name=INTERNAL
listener.security.protocol.map=EXTERNAL:SASL_SSL,INTERNAL:SASL_SSL

## Authentication SASL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

# SASL with zookeeper
zookeeper.set.acl=true

## ACL
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
super.users=User:admin
allow.everyone.if.no.acl.found=true

##########################################################################
num.partitions=30
message.max.bytes=1000000
auto.create.topics.enable=false
log.index.interval.bytes=4096
log.index.size.max.bytes=10485760
log.retention.hours=12
log.flush.interval.ms=10000
log.flush.interval.messages=20000
log.flush.scheduler.interval.ms=2000
log.roll.hours=12
log.retention.check.interval.ms=300000
log.segment.bytes=1073741824

# Replication configurations
num.replica.fetchers=4
replica.fetch.max.bytes=1048576
replica.fetch.wait.max.ms=500
replica.high.watermark.checkpoint.interval.ms=5000
replica.socket.timeout.ms=30000
replica.socket.receive.buffer.bytes=65536
replica.lag.time.max.ms=10000
replica.lag.max.messages=4000
controller.socket.timeout.ms=30000
controller.message.queue.size=10

# ZK configuration
zookeeper.connection.timeout.ms=6000
zookeeper.sync.time.ms=2000

# Socket server configuration
num.io.threads=20
num.network.threads=20
socket.request.max.bytes=104857600
socket.receive.buffer.bytes=1048576
socket.send.buffer.bytes=1048576
queued.max.requests=1000
fetch.purgatory.purge.interval.requests=100
producer.purgatory.purge.interval.requests=100
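As a sanity check on the credential registration, the stored SCRAM configuration for the admin user can be listed with something like:

# describe the per-user config stored in ZooKeeper for 'admin'
bin/kafka-configs.sh --zookeeper zk1.private.io:2281 --describe \
  --entity-type users --entity-name admin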
*My jaas server file:*

KafkaServer {
    org.apache.kafka.common.security.scram.ScramLoginModule required
    username="admin"
    password="admin-secret";
};

Client {
    org.apache.zookeeper.server.auth.DigestLoginModule required
    username="kafka1"
    password="kafka1secret";
};

Thanks in advance for your help,

Quentin