hadoop-common-issues mailing list archives

From "Sean Busbey (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-12076) Incomplete Cache Mechanism in CredentialProvider API
Date Wed, 10 Jun 2015 15:58:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-12076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14580694#comment-14580694 ]

Sean Busbey commented on HADOOP-12076:
--------------------------------------

{quote}
bq. Since AbstractJavaKeyStore isn't thread safe, do we know what happens if multiple instances
are pointing at the same jks file?

I'm not entirely sure of the use case that you have in mind. There are read/write locks in
this class to provide thread safety. Perhaps there are state issues that aren't covered
properly? If so, we should file separate jiras for them.
{quote}

The {{getCache}} method means we have no control over how the HashMap backing the cache is
accessed. A follow-on jira is fine by me.
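
To illustrate the concern, here's a minimal sketch (hypothetical names, not the actual AbstractJavaKeyStoreProvider code) of how a getter that exposes the backing map defeats a class's internal locking:

{code:java}
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Sketch of the hazard: a provider that guards its own accesses with a
// read/write lock but leaks the mutable backing map through a getter.
class CachingProviderSketch {
  private final ReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, char[]> cache = new HashMap<>();

  char[] getCredential(String alias) {
    lock.readLock().lock();
    try {
      return cache.get(alias);  // safe: guarded by the provider's lock
    } finally {
      lock.readLock().unlock();
    }
  }

  // The problem: callers can read and mutate the HashMap directly,
  // entirely outside the locking discipline above.
  Map<String, char[]> getCache() {
    return cache;
  }
}
{code}

A caller that does {{getCache().put(...)}} from another thread races with the locked reads above, so no amount of locking inside the class can make the cache safe on its own.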

{quote}
bq. Presuming the above works, how do we reconcile changes that happen to the underlying jks
against the cache?

I think this is largely answered in the previous question. I guess if passwords are added
to the store, they will be picked up without any need for a restart or new instances, since
they aren't cached. If you used the CLI and deleted an SSL password, it would still be
returned by the cache until a restart or new instances (again, Configuration.getPassword isn't
a problem there). If you change the existing password for a keystore, it will probably
continue to work until you restart and try to load the certs again, at which point it will
work if the new password matches the keystore and fail otherwise.
{quote}

Since JavaKeyStoreProvider has already been in released versions, any caching done here has
to maintain the same behavior as found in 2.6.0 and 2.7.0 (or the change needs to be marked
incompatible and release noted).

{quote}
Interestingly, while trying to change the test to ensure that non-cached items are not
returned when the underlying store is deleted, I found that the in-memory keystore instance
itself serves as a cache. Once a credentialEntry is added to the in-memory keystore, it is
always returned, even if the underlying jks is deleted and the value wasn't queried beforehand.
I even persisted the store with a flush() and instantiated a new provider. The act of loading
the keystore reads everything into memory, so even when I remove the file the entry is still
returned by getCredentialEntry, since it is in the in-memory keystore. It doesn't even need to
be in the cache.

I'm not sure what value the additional cache adds here. There may be some overhead to pulling
an entry out of the keystore and the KeyEntry, but I'm not sure.
{quote}

Does the loaded KeyStore recognize (non-deleted) changes to the underlying file after it's
been loaded? It's possible that, while the file still exists, the keystore could use filesystem
stat calls to detect that it has changed and update its cache accordingly.
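
One rough sketch of such a staleness check, using {{java.io.File.lastModified()}} (the real provider goes through a Hadoop {{FileSystem}}, so the details here are assumptions, not the actual implementation):

{code:java}
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.security.GeneralSecurityException;
import java.security.KeyStore;

// Sketch: reload the in-memory keystore only when the backing jks
// file's modification time has changed since the last load.
class ReloadingKeyStoreSketch {
  private final File jksFile;
  private final char[] password;
  private KeyStore keyStore;
  private long loadedMtime = -1;

  ReloadingKeyStoreSketch(File jksFile, char[] password) {
    this.jksFile = jksFile;
    this.password = password;
  }

  synchronized KeyStore get() throws IOException, GeneralSecurityException {
    long mtime = jksFile.lastModified();  // stat call on the backing file
    if (keyStore == null || mtime != loadedMtime) {
      keyStore = KeyStore.getInstance("JCEKS");
      try (FileInputStream in = new FileInputStream(jksFile)) {
        keyStore.load(in, password);      // re-reads everything into memory
      }
      loadedMtime = mtime;
    }
    return keyStore;
  }
}
{code}

That would catch non-deleted edits to the file, though it wouldn't help with the delete case described in the quote above.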

Sounds like we should change the test to be backed by MiniDFS instead of a local file URI?
That should give us a better idea of what's happening in ACCUMULO-3890.
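
A rough sketch of what that test setup might look like (the path, alias, and assertions here are assumptions, not the actual test):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.apache.hadoop.security.alias.CredentialProvider;
import org.apache.hadoop.security.alias.CredentialProviderFactory;

// Sketch: back the credential provider with MiniDFS instead of a
// local file:// URI so the test exercises a real FileSystem.
public class TestJksOnMiniDfsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    MiniDFSCluster cluster = new MiniDFSCluster.Builder(conf).build();
    try {
      String authority = cluster.getFileSystem().getUri().getAuthority();
      conf.set(CredentialProviderFactory.CREDENTIAL_PROVIDER_PATH,
          "jceks://hdfs@" + authority + "/test/creds.jceks");
      CredentialProvider provider =
          CredentialProviderFactory.getProviders(conf).get(0);
      provider.createCredentialEntry("ssl.keypass", "secret".toCharArray());
      provider.flush();
      // ...then delete or rewrite the file via cluster.getFileSystem()
      // and assert on what getCredentialEntry returns afterwards...
    } finally {
      cluster.shutdown();
    }
  }
}
{code}

That would let us see whether the in-memory keystore behaves the same way when the backing store lives in HDFS rather than on the local filesystem.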

> Incomplete Cache Mechanism in CredentialProvider API
> ----------------------------------------------------
>
>                 Key: HADOOP-12076
>                 URL: https://issues.apache.org/jira/browse/HADOOP-12076
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>            Reporter: Larry McCay
>            Assignee: Larry McCay
>         Attachments: HADOOP-12076-001.patch
>
>
> The AbstractJavaKeyStoreProvider class in the CredentialProvider API has a cache member
> variable and interrogation of it during access but does not populate it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
