hadoop-hdfs-issues mailing list archives

From "Anu Engineer (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12195) Ozone: DeleteKey-1: KSM replies delete key request asynchronously
Date Thu, 27 Jul 2017 05:18:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16102720#comment-16102720 ]

Anu Engineer commented on HDFS-12195:
-------------------------------------

*OzoneConsts.java*
{code}
public static final String KSM_DELETING_KEY_PREFIX =
    KSM_USER_PREFIX + "DELETING" + KSM_USER_PREFIX + KSM_VOLUME_PREFIX;
{code}
I thought [~cheersyang] wanted to use #deleting# as the prefix and not $deleting$. My apologies if some of my older emails or docs created this confusion.

*MetadataManagerImpl.java*
getDeletingKey() -- Why are we sending the volume name, bucket name, and key name? Isn't the key name complete with just keyName? Also, shouldn't this function just call into getDBKeyForKey() and prepend the deleting prefix to those bytes? And should we rename this function to getDeletedKeyName?
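To make the suggestion concrete, here is a minimal sketch of what I mean. The helper names, the separator, and the #deleting# prefix value are illustrative stand-ins for the real MetadataManagerImpl helpers and OzoneConsts constants, not the actual API:

```java
// Illustrative sketch only: getDBKeyForKey and the prefix constant below are
// stand-ins for the real MetadataManagerImpl helpers and OzoneConsts values.
public class DeletedKeyNameSketch {
  static final String KEY_SEPARATOR = "/";          // hypothetical separator
  static final String DELETING_KEY_PREFIX = "#deleting#"; // hypothetical prefix

  // Mirrors the existing helper that builds the plain DB key for a key.
  static String getDBKeyForKey(String volume, String bucket, String key) {
    return KEY_SEPARATOR + volume + KEY_SEPARATOR + bucket
        + KEY_SEPARATOR + key;
  }

  // Proposed shape of getDeletedKeyName: reuse getDBKeyForKey and prepend
  // the deleting prefix, instead of re-deriving the full name from parts.
  static String getDeletedKeyName(String volume, String bucket, String key) {
    return DELETING_KEY_PREFIX + getDBKeyForKey(volume, bucket, key);
  }

  public static void main(String[] args) {
    // prints #deleting#/vol1/bucket1/key1
    System.out.println(getDeletedKeyName("vol1", "bucket1", "key1"));
  }
}
```

This keeps a single source of truth for how a DB key is formed, so the deleting-key layout can never drift from the live-key layout.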

*KeySpaceManagerImpl.java*

* I see that we have removed the lookup function for a key. So what happens if I attempt to delete a non-existent key? I don't think your code introduces any new issue, but I am wondering how that is handled today. In fact, I think we should have KEY_NOT_FOUND as a distinct error from FAILED_KEY_DELETION.

* I know that you are doing this under a writeLock(), but I think this should be under a transaction.
{code}
      metadataManager.delete(objectKey);
      metadataManager.put(deletingKey, objectValue);
{code}
* Here is a case where having a write lock is not good enough.
  ## Say the delete(objectKey) call works, but the put fails due to some database error.
  ## In that case we have some ghost blocks: if the client tries to delete the key again, we will return an error, but we have not cleaned up the blocks. In other words, the delete and the put must either both succeed or both fail.
* Just out of paranoia, I would flip the order of the delete and the put. That is, I would do the put first and then the actual delete. If we get a failure like the one above, the pending delete will still be processed, so the blocks would be gone and we would be left with a damaged key entry, which we can delete later.
* Can we have some test cases that confirm that the key got moved to the new location in the database?
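To illustrate the flipped ordering and the KEY_NOT_FOUND distinction, here is a sketch. The in-memory map stands in for the KSM metadata store, and moveToDeleting is a hypothetical helper, not the actual KeySpaceManagerImpl code; in the real store the two mutations would go into a single atomic write batch so they either both apply or neither does:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of delete-key with the put-before-delete ordering. The map below is
// a stand-in for the KSM metadata DB; a real implementation would wrap both
// mutations in one atomic batch/transaction offered by the store.
public class DeleteKeyOrderingSketch {
  final Map<String, byte[]> store = new HashMap<>();

  void moveToDeleting(String objectKey, String deletingKey) {
    byte[] value = store.get(objectKey);
    if (value == null) {
      // A missing key is a distinct condition from a failed deletion.
      throw new IllegalStateException("KEY_NOT_FOUND: " + objectKey);
    }
    // Put first: if the subsequent delete fails, the pending-delete entry
    // still exists and the background cleanup will reclaim the blocks,
    // leaving only a damaged key entry that can be removed later.
    store.put(deletingKey, value);
    store.remove(objectKey);
  }

  public static void main(String[] args) {
    DeleteKeyOrderingSketch ksm = new DeleteKeyOrderingSketch();
    ksm.store.put("/vol1/bucket1/key1", new byte[] {1});
    ksm.moveToDeleting("/vol1/bucket1/key1", "#deleting#/vol1/bucket1/key1");
    // prints true false
    System.out.println(ksm.store.containsKey("#deleting#/vol1/bucket1/key1")
        + " " + ksm.store.containsKey("/vol1/bucket1/key1"));
  }
}
```

A test along these lines would also answer the last bullet: assert the old key is gone and the prefixed key is present after the delete request returns.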

> Ozone: DeleteKey-1: KSM replies delete key request asynchronously
> -----------------------------------------------------------------
>
>                 Key: HDFS-12195
>                 URL: https://issues.apache.org/jira/browse/HDFS-12195
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: ozone
>    Affects Versions: HDFS-7240
>            Reporter: Weiwei Yang
>            Assignee: Yuanbo Liu
>         Attachments: client-ksm.png, HDFS-12195-HDFS-7240.001.patch
>
>
> We will implement delete key in ozone in multiple child tasks; this is one of the child tasks, covering client-to-SCM communication. We need to do it in an async manner: once the key state is changed in KSM metadata, KSM is ready to reply to the client with a success message. Actual deletes on the other layers will happen some time later.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


