hadoop-hdfs-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (HDDS-1672) Improve locking in OzoneManager
Date Thu, 20 Jun 2019 18:19:01 GMT

     [ https://issues.apache.org/jira/browse/HDDS-1672?focusedWorklogId=263990&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-263990 ]

ASF GitHub Bot logged work on HDDS-1672:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Jun/19 18:18
            Start Date: 20/Jun/19 18:18
    Worklog Time Spent: 10m 
      Work Description: anuengineer commented on pull request #949: HDDS-1672. Improve locking in OzoneManager.
URL: https://github.com/apache/hadoop/pull/949#discussion_r295896363
 
 

 ##########
 File path: hadoop-ozone/common/src/main/java/org/apache/hadoop/ozone/om/OzoneManagerLock.java
 ##########
 @@ -59,32 +68,39 @@
  * <br>
  * {@literal ->} acquireVolumeLock (will work)<br>
  *   {@literal +->} acquireBucketLock (will work)<br>
- *     {@literal +-->} acquireUserLock (will throw Exception)<br>
+ *     {@literal +-->} acquireS3BucketLock (will throw Exception)<br>
  * </p>
  * <br>
- * To acquire a user lock you should not hold any Volume/Bucket lock. Similarly
- * to acquire a Volume lock you should not hold any Bucket lock.
+ * To acquire a S3 lock you should not hold any Volume/Bucket lock. Similarly
+ * to acquire a Volume lock you should not hold any Bucket/User/S3
+ * Secret/Prefix lock.
  */
 public final class OzoneManagerLock {
 
+  private static final Logger LOG =
+      LoggerFactory.getLogger(OzoneManagerLock.class);
+
+  private static final String S3_BUCKET_LOCK = "s3BucketLock";
   private static final String VOLUME_LOCK = "volumeLock";
   private static final String BUCKET_LOCK = "bucketLock";
-  private static final String PREFIX_LOCK = "prefixLock";
-  private static final String S3_BUCKET_LOCK = "s3BucketLock";
+  private static final String USER_LOCK = "userLock";
   private static final String S3_SECRET_LOCK = "s3SecretetLock";
+  private static final String PREFIX_LOCK = "prefixLock";
+
 
   private final LockManager<String> manager;
 
   // To maintain locks held by current thread.
   private final ThreadLocal<Map<String, AtomicInteger>> myLocks =
       ThreadLocal.withInitial(
-          () -> ImmutableMap.of(
-              VOLUME_LOCK, new AtomicInteger(0),
-              BUCKET_LOCK, new AtomicInteger(0),
-              PREFIX_LOCK, new AtomicInteger(0),
-              S3_BUCKET_LOCK, new AtomicInteger(0),
-              S3_SECRET_LOCK, new AtomicInteger(0)
-          )
+          () -> ImmutableMap.<String, AtomicInteger>builder()
+              .put(S3_BUCKET_LOCK, new AtomicInteger(0))
+              .put(VOLUME_LOCK, new AtomicInteger(0))
+              .put(BUCKET_LOCK, new AtomicInteger(0))
+              .put(USER_LOCK, new AtomicInteger(0))
+              .put(S3_SECRET_LOCK, new AtomicInteger(0))
+              .put(PREFIX_LOCK, new AtomicInteger(0))
+              .build()
 
 Review comment:
   Not a suggestion for this patch, but more of a question: should we just maintain a bitset here and flip the bit up and down to track whether a lock is held? Or we could maintain a 32-bit integer and easily find whether a lock is held by ANDing it with the correct mask. I feel that might be super efficient. @nandakumar131. But as I said, let us not do that in this patch.
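
A minimal sketch of the bitmask idea raised in the comment above, assuming one bit per lock type and a single per-thread int; the class, field, and method names are illustrative only and are not part of the patch.

// Minimal sketch of the bitmask idea: one bit per lock type in a per-thread int.
// Class, field and method names are illustrative, not part of the patch.
public final class HeldLockBits {

  // One bit position per lock type, following the ordering used above (assumed).
  private static final int S3_BUCKET = 1;       // 0b000001
  private static final int VOLUME    = 1 << 1;  // 0b000010
  private static final int BUCKET    = 1 << 2;  // 0b000100
  private static final int USER      = 1 << 3;  // 0b001000
  private static final int S3_SECRET = 1 << 4;  // 0b010000
  private static final int PREFIX    = 1 << 5;  // 0b100000

  // A single int per thread instead of a Map<String, AtomicInteger>.
  private final ThreadLocal<Integer> held = ThreadLocal.withInitial(() -> 0);

  boolean isHeld(int lockBit) {
    // ANDing with the mask answers "is this bit set"; XOR would flip it.
    return (held.get() & lockBit) != 0;
  }

  void markAcquired(int lockBit) {
    held.set(held.get() | lockBit);
  }

  void markReleased(int lockBit) {
    held.set(held.get() & ~lockBit);
  }
}

One trade-off the question leaves open: a single bit cannot count reentrant acquisitions the way the per-lock AtomicInteger counters in the hunk above can.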
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
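
To make the ordering rules in the javadoc of the hunk above concrete, here is a hedged usage sketch. Only the acquire* method names come from that javadoc; the constructor argument and the parameter lists are assumptions, not the real signatures.

// Hedged usage sketch of the ordering described in the javadoc above.
// The constructor argument and parameter lists are assumptions.
public class LockOrderingExample {
  public static void main(String[] args) {
    OzoneManagerLock lock = new OzoneManagerLock(new OzoneConfiguration()); // assumed ctor

    lock.acquireVolumeLock("vol1");        // works: no heavier lock is held
    lock.acquireBucketLock("vol1", "b1");  // works: bucket after volume is allowed
    lock.acquireS3BucketLock("s3b1");      // throws: per the javadoc, an S3 bucket lock
                                           // may not be taken while volume/bucket locks are held
  }
}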


Issue Time Tracking
-------------------

    Worklog Id:     (was: 263990)
    Time Spent: 3h 50m  (was: 3h 40m)

> Improve locking in OzoneManager
> -------------------------------
>
>                 Key: HDDS-1672
>                 URL: https://issues.apache.org/jira/browse/HDDS-1672
>             Project: Hadoop Distributed Data Store
>          Issue Type: Improvement
>          Components: Ozone Manager
>    Affects Versions: 0.4.0
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Ozone Locks in OM.pdf
>
>          Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> In this Jira, we shall follow the new lock ordering. In this way, for volume requests we can solve the acquire/release/reacquire problem, as well as a few bugs in the current implementation of S3Bucket/Volume operations.
>  
> Currently, after acquiring the volume lock, we cannot acquire the user lock. 
> This causes an issue in the Volume request implementation: we must acquire, release, and then reacquire the volume lock.
>  
> Case of Delete Volume Request: 
>  # Acquire volume lock.
>  # Get volume info from DB.
>  # Release volume lock. (We release it because, while holding the volume lock, we cannot acquire the user lock.)
>  # Get owner from the volume info read from DB.
>  # Acquire owner lock.
>  # Acquire volume lock.
>  # Do delete logic.
>  # Release volume lock.
>  # Release user lock.
>  
> We can avoid this acquire/release/reacquire lock issue by giving the volume lock a lower weight in the lock ordering.
>  
> In this way, the above deleteVolume request will change as below:
>  # Acquire volume lock.
>  # Get volume info from DB.
>  # Get owner from the volume info read from DB.
>  # Acquire owner lock.
>  # Do delete logic.
>  # Release owner lock.
>  # Release volume lock.
> The same issue is seen with the SetOwner for Volume request as well.
> During HDDS-1620, [~arp] brought up this issue.
> I am proposing the above solution to solve it; any other ideas/suggestions are welcome.
> This also resolves a bug in the setOwner for Volume request.
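
A minimal sketch (not from the Jira or the patch) of the proposed deleteVolume flow under the new ordering; omLock, metadataManager, the OmVolumeArgs accessor, and every method name below are assumed placeholders rather than the actual OzoneManager API.

// Hedged sketch of the proposed deleteVolume flow under the new lock ordering.
// omLock, metadataManager and all method names are hypothetical placeholders.
public void deleteVolume(String volume) throws IOException {
  omLock.acquireVolumeLock(volume);                  // 1. acquire volume lock
  try {
    OmVolumeArgs volumeInfo =
        metadataManager.getVolumeInfo(volume);       // 2. get volume info from DB
    String owner = volumeInfo.getOwnerName();        // 3. get owner from volume info
    omLock.acquireUserLock(owner);                   // 4. acquire owner lock
    try {
      metadataManager.deleteVolume(volume, owner);   // 5. do delete logic
    } finally {
      omLock.releaseUserLock(owner);                 // 6. release owner lock
    }
  } finally {
    omLock.releaseVolumeLock(volume);                // 7. release volume lock
  }
}

The try/finally nesting mirrors the list above: the owner lock is always released before the volume lock, and the volume lock is never dropped and retaken in the middle of the request.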



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

