kafka-dev mailing list archives

From Ismael Juma <isma...@gmail.com>
Subject Re: [DISCUSS] KIP-84: Support SASL/SCRAM mechanisms
Date Thu, 03 Nov 2016 13:25:39 GMT
Hi all,

I currently have limited internet access, so I have not done a proper
review of the KIP; my apologies. A couple of thoughts:

1. I agree with Rajini and I don't see the value in bumping the protocol
version in this case. As Rajini said, the enabled mechanisms are
independent of the broker version. If and when we have a feature API, we
can consider exposing it that way.

2. Do we really want to support SHA-1? Its security has been questioned
and the general recommendation these days is to avoid it.

Ismael

On 3 Nov 2016 6:51 am, "Rajini Sivaram" <rajinisivaram@googlemail.com>
wrote:

> I don't have a strong objection to bumping up SaslHandshakeRequest version,
> though I am still not convinced of the usefulness. I do agree that KIP-35
> should be standard way to determine client/broker compatibility. But I am
> not sure ApiVersionsRequest is a good fit for clients to choose a SASL
> mechanism.
>
>    1. SaslHandshakeRequest is the only way a client can figure out if a
>    SASL mechanism is actually enabled in the broker. The fact that broker
>    version n supports SCRAM doesn't imply that a broker of version n
>    actually has the mechanism enabled. Since enabling a mechanism involves
>    some effort, like installing an authentication server and/or
>    configuring credentials, SASL mechanisms are a feature of a broker
>    installation, very unlike versions. As you said, "features" would have
>    worked better.
>    2. New SASL mechanisms can be added to older Kafka broker versions.
>    With some tweaking, the SCRAM implementation from KIP-84 can be enabled
>    in a 0.10.0 broker without changing any broker code. KIP-86 would make
>    this even easier, but it is already possible to add new or custom
>    mechanisms to existing broker versions. A client using
>    ApiVersionsRequest to choose a SASL mechanism is only checking when a
>    mechanism was included in a broker release, not the "capability" of a
>    broker to support the mechanism. I am not sure we should encourage
>    clients to choose mechanisms based on versions.
>    3. Clients need additional configuration based on the chosen mechanism.
>    One of the reasons I couldn't see any value in using ApiVersionsRequest
>    in the Java client was that clients are configured with a single SASL
>    mechanism and a JAAS configuration corresponding to that mechanism. If
>    a client wants to choose between Kerberos and SCRAM, the client would
>    need a keytab/principal for Kerberos and a username/password for SCRAM.
>    Clients that possess multiple credentials without knowing what the
>    broker supports - that sounds like a rather unusual scenario.
>    4. For some mechanisms, clients may want to choose the mechanism at
>    runtime. For example, a client configured with a username/password
>    wants to choose between SCRAM-SHA-1/SCRAM-SHA-256/PLAIN depending on
>    which mechanisms are enabled in the broker. To choose between
>    SCRAM-SHA-1 and SCRAM-SHA-256, clients have to use SaslHandshakeRequest
>    since they were added in the same version. To choose between PLAIN and
>    SCRAM-SHA-256, a version-based approach would work, but wouldn't it be
>    better for clients to rely on just SaslHandshakeRequest rather than
>    ApiVersionsRequest+SaslHandshakeRequest so that the solution works in
>    all scenarios?
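[Editor's note] The runtime choice described in point 4 amounts to intersecting a client-side preference list with the enabled mechanisms the broker reports in its SaslHandshakeResponse. A minimal sketch (class and method names here are illustrative, not the Kafka client API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Optional;

// Illustrative sketch, not Kafka client code: pick the strongest SASL
// mechanism that the broker's SaslHandshakeResponse reports as enabled.
public class MechanismChooser {
    // Client preference order, strongest first (assumed for this example).
    static final List<String> PREFERENCE =
            Arrays.asList("SCRAM-SHA-256", "SCRAM-SHA-1", "PLAIN");

    static Optional<String> choose(List<String> enabledOnBroker) {
        // First preferred mechanism that the broker actually has enabled.
        return PREFERENCE.stream().filter(enabledOnBroker::contains).findFirst();
    }

    public static void main(String[] args) {
        // Suppose the handshake response lists [PLAIN, SCRAM-SHA-1]:
        System.out.println(choose(Arrays.asList("PLAIN", "SCRAM-SHA-1"))); // Optional[SCRAM-SHA-1]
    }
}
```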
>
>
>
>
> On Thu, Nov 3, 2016 at 4:33 AM, Ewen Cheslack-Postava <ewen@confluent.io>
> wrote:
>
> > I think the bump isn't strictly required, but if the client is KIP-35
> > aware, it can proactively choose a compatible SASL mechanism based on
> > its initial ApiVersionRequest and avoid an extra connection round trip
> > when there are client/broker version differences. Without this, a newer
> > client would have to do two sets of requests since the first SASL
> > mechanism might not be compatible.
> >
> > I don't think this is a deal breaker, but I do think it would be good
> > to just standardize on KIP-35 as the way we figure out client/broker
> > compatibility. The SASL stuff happened in parallel with (maybe before?)
> > KIP-35 and ended up with its own mechanism, but I'm in favor of trying
> > to simplify everything by centralizing those considerations into a
> > single API call. (By the way, dredging up now ancient history from the
> > KIP-35 discussion, this is also why "features" vs "API version" is
> > relevant. If we wanted to configure a newer broker to disable SASL
> > mechanisms we no longer want to allow use of, this isn't really
> > possible to express via API versions unless we also explicitly add an
> > API version that doesn't support that mechanism, whereas features would
> > make this easier to toggle on/off. The SaslHandshakeRequest probably
> > makes it easier to keep things secure compared to the current state of
> > ApiVersionRequest.)
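[Editor's note] The round-trip cost Ewen describes can be made concrete with a toy model (nothing below is Kafka code): each rejected handshake forces the client to reconnect and try its next mechanism, so a client that already knows the broker's capabilities connects only once.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;

// Toy model of the reconnect cost of SASL mechanism negotiation; not Kafka code.
public class HandshakeRoundTrips {
    // Each attempted mechanism costs one connection; a rejected
    // SaslHandshakeRequest forces the client to reconnect and try the next.
    static int connectionsNeeded(List<String> clientPreference, Set<String> brokerEnabled) {
        int connections = 0;
        for (String mech : clientPreference) {
            connections++;
            if (brokerEnabled.contains(mech)) {
                return connections;
            }
        }
        return -1; // no common mechanism
    }

    public static void main(String[] args) {
        Set<String> olderBroker = Set.of("PLAIN"); // e.g. a broker without SCRAM enabled
        // A newer client trying SCRAM first pays an extra connection:
        System.out.println(connectionsNeeded(Arrays.asList("SCRAM-SHA-256", "PLAIN"), olderBroker)); // 2
        // A client that already chose PLAIN from upfront version info connects once:
        System.out.println(connectionsNeeded(Arrays.asList("PLAIN"), olderBroker)); // 1
    }
}
```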
> >
> > -Ewen
> >
> > On Tue, Nov 1, 2016 at 2:09 PM, Rajini Sivaram <
> > rajinisivaram@googlemail.com> wrote:
> >
> > > Gwen,
> > >
> > > I had thought the same too, and hence I am assuming that Java clients
> > > could simply use SaslHandshakeRequest. SaslHandshakeRequest returns
> > > the list of mechanisms enabled in the broker. I think Jun's point was
> > > that by incrementing the version of SaslHandshakeRequest, clients can
> > > use ApiVersionsRequest to figure out the mechanisms the broker is
> > > capable of supporting and use that information to choose a mechanism
> > > to send in SaslHandshakeRequest. Not sure how useful this actually is,
> > > so will wait for Jun's response.
> > >
> > >
> > >
> > > On Tue, Nov 1, 2016 at 8:18 PM, Gwen Shapira <gwen@confluent.io>
> wrote:
> > >
> > > > Wait, I thought SaslHandshakeResponse includes a list of mechanisms
> > > > supported, so I'm not sure why we need to bump the version?
> > > >
> > > > I expect clients will send SaslHandshakeRequest_V0, see which
> > > > mechanisms are supported and make a call based on that? Which means
> > > > KIP-35 is not required in that case? Am I missing something?
> > > >
> > > > On Tue, Nov 1, 2016 at 1:07 PM, Rajini Sivaram <
> > > > rajinisivaram@googlemail.com> wrote:
> > > >
> > > > > Jun,
> > > > >
> > > > > I have added the following text to the KIP. Does this match your
> > > > > expectation?
> > > > >
> > > > > *SaslHandshakeRequest version will be increased from 0 to 1 so
> > > > > that clients can determine if the broker is capable of supporting
> > > > > SCRAM mechanisms using ApiVersionsRequest. Java clients will not
> > > > > be updated to use ApiVersionsRequest to choose the SASL mechanism
> > > > > under this KIP. Java clients will continue to use their configured
> > > > > SASL mechanism and will fail the connection if the requested
> > > > > mechanism is not enabled in the broker.*
> > > > >
> > > > > Thank you,
> > > > >
> > > > > Rajini
> > > > >
> > > > > On Tue, Nov 1, 2016 at 4:54 PM, Jun Rao <jun@confluent.io> wrote:
> > > > >
> > > > > > Hi, Rajini,
> > > > > >
> > > > > > One more thing. It seems that we should bump up the version of
> > > > > > SaslHandshakeRequest? This way, the client can figure out which
> > > > > > SASL mechanisms the broker is capable of supporting through
> > > > > > ApiVersionRequest. We discussed this briefly as part of KIP-43.
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Jun
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Tue, Nov 1, 2016 at 7:41 AM, Rajini Sivaram <
> > > > > > rajinisivaram@googlemail.com> wrote:
> > > > > >
> > > > > > > If there are no more comments, I will start the vote on this
> > > > > > > KIP later this week. In the meantime, please feel free to post
> > > > > > > any feedback or suggestions. The initial implementation is
> > > > > > > here: https://github.com/apache/kafka/pull/2086.
> > > > > > >
> > > > > > > Thank you,
> > > > > > >
> > > > > > > Rajini
> > > > > > >
> > > > > > > On Thu, Oct 27, 2016 at 11:18 AM, Rajini Sivaram <
> > > > > > > rajinisivaram@googlemail.com> wrote:
> > > > > > >
> > > > > > > > Jun,
> > > > > > > >
> > > > > > > > 4) Agree, it does make the implementation simpler. Updated
> > > > > > > > KIP.
> > > > > > > > 5) Thank you, that looks neater. Updated KIP.
> > > > > > > >
> > > > > > > > On Wed, Oct 26, 2016 at 6:59 PM, Jun Rao <jun@confluent.io>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > >> Hi, Rajini,
> > > > > > > >>
> > > > > > > >> Thanks for the reply.
> > > > > > > >>
> > > > > > > >> 4. Implementation wise, it seems to me that it's simpler to
> > > > > > > >> read from the cache than to read directly from ZK, since
> > > > > > > >> the config manager already propagates all config changes
> > > > > > > >> through ZK. Also, it's probably a good idea to limit the
> > > > > > > >> places in the code base that directly access ZK.
> > > > > > > >>
> > > > > > > >> 5. Yes, it seems that it makes sense to add the new SCRAM
> > > > > > > >> configurations to the existing /config/users/<encoded-user>.
> > > > > > > >> I am not sure how one would remove the SCRAM configurations
> > > > > > > >> in the example though, since the properties specified in
> > > > > > > >> add-config are not the ones actually being stored. An
> > > > > > > >> alternative is to do something like the following. It may
> > > > > > > >> still feel a bit weird and I am wondering if there is a
> > > > > > > >> clearer approach.
> > > > > > > >>
> > > > > > > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter
> > > > > > > >> --add-config
> > > > > > > >> 'scram_sha-256=[password=alice-secret,iterations=4096],
> > > > > > > >> scram_sha-1=[password=alice-secret,iterations=4096]'
> > > > > > > >> --entity-type users --entity-name alice
> > > > > > > >>
> > > > > > > >> bin/kafka-configs.sh --zookeeper localhost:2181 --alter
> > > > > > > >> --delete-config 'scram_sha-256,scram_sha-1' --entity-type
> > > > > > > >> users --entity-name alice
> > > > > > > >>
> > > > > > > >> Thanks,
> > > > > > > >>
> > > > > > > >> Jun
> > > > > > > >>
> > > > > > > >>
> > > > > > > >> On Wed, Oct 26, 2016 at 4:35 AM, Rajini Sivaram <
> > > > > > > >> rajinisivaram@googlemail.com> wrote:
> > > > > > > >>
> > > > > > > >> > Hi Jun,
> > > > > > > >> >
> > > > > > > >> > Thank you for reviewing the KIP. Answers below:
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> >    1. Yes, agree. Updated KIP.
> > > > > > > >> >    2. The user specifies a password and iteration count.
> > > > > > > >> >    kafka-configs.sh generates a random salt and then
> > > > > > > >> >    generates the StoredKey and ServerKey for that password
> > > > > > > >> >    using the same message formatter implementation used
> > > > > > > >> >    for SCRAM authentication. I have added some more detail
> > > > > > > >> >    to the KIP (
> > > > > > > >> >    https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms#KIP-84:SupportSASLSCRAMmechanisms-Tools).
> > > > > > > >> >    Does that answer the question?
> > > > > > > >> >    3. I started off thinking just one (SCRAM-SHA-256) and
> > > > > > > >> >    then thought another one was required to make sure that
> > > > > > > >> >    the implementation can cope with multiple SCRAM
> > > > > > > >> >    mechanisms. But actually you are right, we can support
> > > > > > > >> >    them all. I haven't added the old md2/md5 mechanisms
> > > > > > > >> >    that aren't very secure, but I have included all the
> > > > > > > >> >    SHA algorithms.
> > > > > > > >> >    4. Since credentials are only required when a
> > > > > > > >> >    connection is made, it feels like we can just read the
> > > > > > > >> >    latest value from ZK rather than cache all users and
> > > > > > > >> >    keep them updated. Having said that, we can always add
> > > > > > > >> >    caching later if we find that the overhead of reading
> > > > > > > >> >    from ZK every time is too expensive. Since caching
> > > > > > > >> >    doesn't change any externals, this can be done in a
> > > > > > > >> >    JIRA later - would that be ok?
> > > > > > > >> >    5. Thanks, updated. I have changed the property names
> > > > > > > >> >    to include the mechanism. To avoid four separate
> > > > > > > >> >    properties per mechanism in ZK, I have changed the
> > > > > > > >> >    format to use a single property (lower-case mechanism
> > > > > > > >> >    name) with four values concatenated in a format similar
> > > > > > > >> >    to SCRAM messages.
> > > > > > > >> >
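[Editor's note] The derivation described in point 2 follows RFC 5802: Hi() is PBKDF2 with the mechanism's HMAC, ClientKey and ServerKey are HMACs of the salted password, and StoredKey is a hash of ClientKey. A self-contained sketch for SCRAM-SHA-256 using only the JDK (class and variable names are illustrative, not the actual KIP-84 implementation):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.Mac;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of SCRAM credential generation per RFC 5802 (SCRAM-SHA-256 variant);
// illustrative, not the KIP-84 code.
public class ScramCredentialSketch {

    // Hi(password, salt, i) from RFC 5802 is PBKDF2 with the mechanism's HMAC.
    static byte[] hi(String password, byte[] salt, int iterations) throws Exception {
        PBEKeySpec spec = new PBEKeySpec(password.toCharArray(), salt, iterations, 256);
        return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                .generateSecret(spec).getEncoded();
    }

    static byte[] hmac(byte[] key, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(key, "HmacSHA256"));
        return mac.doFinal(message.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws Exception {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);  // random salt, as the tool would generate
        byte[] saltedPassword = hi("alice-secret", salt, 4096);
        byte[] clientKey = hmac(saltedPassword, "Client Key");
        byte[] storedKey = MessageDigest.getInstance("SHA-256").digest(clientKey);
        byte[] serverKey = hmac(saltedPassword, "Server Key");
        // salt, iteration count, StoredKey and ServerKey are what the broker
        // persists; the plaintext password never needs to be stored.
        System.out.println(storedKey.length + " " + serverKey.length); // 32 32
    }
}
```

Note that verifying a client proof only requires StoredKey, which is why storing it (rather than the password) is sufficient for the server side of the exchange.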
> > > > > > > >> > Do you think storing SCRAM credentials in
> > > > > > > >> > /config/users/<encoded-user> along with the existing
> > > > > > > >> > quota properties is okay? Or should they be under a
> > > > > > > >> > different path (e.g. /config/credentials/<encoded-user>)?
> > > > > > > >> > Or should they be on a completely different path like
> > > > > > > >> > ACLs, with a separate tool instead of reusing
> > > > > > > >> > kafka-configs.sh?
> > > > > > > >> >
> > > > > > > >> > Thank you,
> > > > > > > >> >
> > > > > > > >> > Rajini
> > > > > > > >> >
> > > > > > > >> > On Tue, Oct 25, 2016 at 11:55 PM, Jun Rao <
> > > > > > > >> > jun@confluent.io> wrote:
> > > > > > > >> >
> > > > > > > >> > > Hi, Rajini,
> > > > > > > >> > >
> > > > > > > >> > > Thanks for the proposal. Looks good overall and seems
> > > > > > > >> > > quite useful (e.g. for supporting delegation tokens). A
> > > > > > > >> > > few comments/questions below.
> > > > > > > >> > >
> > > > > > > >> > > 1. For the ZK data format change, should we use the
> > > > > > > >> > > same convention as in KIP-55 and use an encoded user
> > > > > > > >> > > name (i.e., /config/users/<encoded-user1>)?
> > > > > > > >> > >
> > > > > > > >> > > 2. For tooling, could you describe how a user typically
> > > > > > > >> > > generates scram_server_key and scram_stored_key to be
> > > > > > > >> > > used by kafka-configs.sh?
> > > > > > > >> > >
> > > > > > > >> > > 3. Is there a particular reason to only support SHA-1
> > > > > > > >> > > and SHA-256? Should we support more of the hashes
> > > > > > > >> > > listed below in the future?
> > > > > > > >> > > http://www.iana.org/assignments/hash-function-text-names/hash-function-text-names.xhtml
> > > > > > > >> > >
> > > > > > > >> > > 4. Is there a reason not to cache user credentials in
> > > > > > > >> > > the broker? The dynamic config mechanism already
> > > > > > > >> > > supports loading configs into the broker's cache.
> > > > > > > >> > > Checking credentials from the broker's cache is more
> > > > > > > >> > > efficient than reading from ZK each time.
> > > > > > > >> > >
> > > > > > > >> > > 5. Typo "scram_iteration-4096" (= instead of -).
> > > > > > > >> > >
> > > > > > > >> > > Thanks,
> > > > > > > >> > >
> > > > > > > >> > > Jun
> > > > > > > >> > >
> > > > > > > >> > >
> > > > > > > >> > >
> > > > > > > >> > > On Tue, Oct 4, 2016 at 6:43 AM, Rajini Sivaram <
> > > > > > > >> > > rajinisivaram@googlemail.com> wrote:
> > > > > > > >> > >
> > > > > > > >> > > > Hi all,
> > > > > > > >> > > >
> > > > > > > >> > > > I have just created KIP-84 to add SCRAM-SHA-1 and
> > > > > > > >> > > > SCRAM-SHA-256 SASL mechanisms to Kafka:
> > > > > > > >> > > >
> > > > > > > >> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-84%3A+Support+SASL+SCRAM+mechanisms
> > > > > > > >> > > >
> > > > > > > >> > > >
> > > > > > > >> > > > Comments and suggestions are welcome.
> > > > > > > >> > > >
> > > > > > > >> > > > Thank you...
> > > > > > > >> > > >
> > > > > > > >> > > > Regards,
> > > > > > > >> > > >
> > > > > > > >> > > > Rajini
> > > > > > > >> > > >
> > > > > > > >> > >
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> >
> > > > > > > >> > --
> > > > > > > >> > Regards,
> > > > > > > >> >
> > > > > > > >> > Rajini
> > > > > > > >> >
> > > > > > > >>
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Regards,
> > > > > > > >
> > > > > > > > Rajini
> > > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > --
> > > > > > > Regards,
> > > > > > >
> > > > > > > Rajini
> > > > > > >
> > > > > >
> > > > >
> > > > >
> > > > >
> > > > > --
> > > > > Regards,
> > > > >
> > > > > Rajini
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > *Gwen Shapira*
> > > > Product Manager | Confluent
> > > > 650.450.2760 | @gwenshap
> > > > Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> > > > <http://www.confluent.io/blog>
> > > >
> > >
> > >
> > >
> > > --
> > > Regards,
> > >
> > > Rajini
> > >
> >
> >
> >
> > --
> > Thanks,
> > Ewen
> >
>
>
>
> --
> Regards,
>
> Rajini
>
