hadoop-common-issues mailing list archives

From "Arun Suresh (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-10224) JavaKeyStoreProvider has to protect against corrupting underlying store
Date Thu, 31 Jul 2014 18:01:46 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-10224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14081192#comment-14081192 ]

Arun Suresh commented on HADOOP-10224:
--------------------------------------

[~tucu00], thanks for the review.

{code}
..
  if (fs.exists(keyStorePath)) {
    if (fs.exists(newPath)) {
      //THROW EXCEPTION, something weird happened, admin should take care of
    }
..
{code}

Should we actually throw an exception here? If "new" exists, it implies that the flush
did not run to completion, in which case, I was thinking, the JKS should silently restart
with the last known good configuration. I am not sure the admin needs to be flagged at
this point.
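To illustrate the silent-recovery idea being proposed, here is a minimal sketch. It models the files with java.nio.file rather than Hadoop's FileSystem API, and the class and method names are hypothetical, not the provider's actual code:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the proposed recovery: if an interrupted flush left
// a "new" file behind, discard it and keep serving the last known good store.
// (Simplified model; the real provider uses Hadoop's FileSystem, not NIO.)
public class JksRecoverySketch {
    static Path recover(Path current, Path newPath) throws IOException {
        if (Files.exists(current) && Files.exists(newPath)) {
            // Flush did not complete: "new" is a half-written intermediate.
            // Silently drop it and restart from the last known good store.
            Files.delete(newPath);
        }
        return current;
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("jks");
        Path current = dir.resolve("keystore.jks");
        Path newPath = dir.resolve("keystore.jks_NEW");
        Files.write(current, new byte[]{1});  // last known good store
        Files.write(newPath, new byte[]{2});  // leftover from a failed flush
        Path good = recover(current, newPath);
        System.out.println(good.equals(current));   // serve the old store
        System.out.println(Files.exists(newPath));  // leftover removed
    }
}
```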

Also, during startup, 

{code}
..
if (fs.exists(newPath) || fs.exists(oldPath)) {
      if (fs.exists(newPath)) {
        try {
..
{code}

I was wondering whether we need to check if "new" exists at all. Ideally, on startup, the
JKS should be brought to one of two states: the "old" state or a completely flushed state
("current"). The "new" file, in my opinion, is an intermediate file (and should be deleted
by flush() if it runs to completion).

I was also wondering whether:

{code}
if (fs.exists(newPath) || fs.exists(oldPath)) {
    //THROW EXCEPTION, something weird happened, admin should take care of
  }
{code}

is needed, since we can safely assume that if the JKS has initialized properly, neither a
"new" nor an "old" file should exist at the time of flush. No?
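For context on that invariant, a rename-based flush can be sketched as below. This is a simplified NIO model of the swap sequence under discussion, with hypothetical names; after a completed flush, neither "new" nor "old" remains, which is what makes the check above look redundant:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Hypothetical sketch of a rename-based flush: write to "new", swap it in
// via renames, and clean up so that neither "new" nor "old" survives a
// completed flush. (Not the actual JavaKeyStoreProvider code.)
public class JksFlushSketch {
    static void flush(Path current, Path newPath, Path oldPath, byte[] data)
            throws IOException {
        Files.write(newPath, data);                  // 1. write fresh store
        if (Files.exists(current)) {
            Files.move(current, oldPath,             // 2. keep last good copy
                    StandardCopyOption.REPLACE_EXISTING);
        }
        Files.move(newPath, current,                 // 3. promote new store
                StandardCopyOption.REPLACE_EXISTING);
        Files.deleteIfExists(oldPath);               // 4. cleanup: flush done
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("jks");
        Path current = dir.resolve("keystore.jks");
        Path newPath = dir.resolve("keystore.jks_NEW");
        Path oldPath = dir.resolve("keystore.jks_OLD");
        Files.write(current, new byte[]{1});
        flush(current, newPath, oldPath, new byte[]{2});
        // After a completed flush, only "current" exists.
        System.out.println(!Files.exists(newPath) && !Files.exists(oldPath)); // true
    }
}
```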



> JavaKeyStoreProvider has to protect against corrupting underlying store
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-10224
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10224
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: security
>            Reporter: Larry McCay
>            Assignee: Arun Suresh
>         Attachments: HADOOP-10224.1.patch, HADOOP-10224.2.patch
>
>
> Java keystores get corrupted at times. A key management operation that writes the store
to disk could cause a corruption, and all protected data would then be inaccessible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
