hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements
Date Mon, 21 Aug 2017 22:00:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16135903#comment-16135903 ]

Uma Maheswara Rao G commented on HDFS-12225:

Hi [~surendrasingh], thank you for working on this. I have a few questions/comments.

{code}
 public void addInodeToPendingSPSList(long id) {
+    pendingSPSxAttrInode.add(id);
+    // Notify waiting PendingSPSTaskScanner thread about the newly
+    // added SPS path.
+    synchronized (pendingSPSxAttrInode) {
+      pendingSPSxAttrInode.notify();
+    }
+  }
{code}
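As a side note on the snippet above, a minimal sketch (class and method names are assumed for illustration, not taken from the patch) of the conventional wait/notify discipline, where the mutation and the notify happen under the same lock so the scanner thread cannot check emptiness between the add and the signal:

```java
import java.util.LinkedList;
import java.util.Queue;

class SpsPathQueue {
  private final Queue<Long> pendingIds = new LinkedList<>();

  // Mutate and signal under the same monitor, so a waiting consumer
  // cannot miss an element added between its emptiness check and wait().
  public synchronized void add(long inodeId) {
    pendingIds.add(inodeId);
    notifyAll();
  }

  // Blocking consumer: loop on the condition to guard against
  // spurious wakeups.
  public synchronized long take() throws InterruptedException {
    while (pendingIds.isEmpty()) {
      wait();
    }
    return pendingIds.poll();
  }

  // Non-blocking variant; returns null when nothing is queued.
  public synchronized Long poll() {
    return pendingIds.poll();
  }
}
```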
How about we abstract everything under storageMovementNeeded?
With this patch the flow is not well structured: elements are added to the SPS class, then processed
back into storageMovementNeeded, and then SPS picks them up again.
So, my thought is, we should abstract everything under storageMovementNeeded, and SPS will
simply consume from it.

A suggestion on naming: pendingSPSxAttrInode -> spsDirsToBeTraversed?
Under the BlockStorageMovementNeeded class, the spsDirsToBeTraversed list should be traversed
and its elements collected back into storageMovementNeeded.
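To illustrate the suggested structure, a rough sketch (all names, fields, and signatures are assumed for illustration, not the actual patch): BlockStorageMovementNeeded owns both the directory list and the movement queue, a traversal step expands queued directories into file-level items, and SPS consumes from a single method without seeing the traversal details.

```java
import java.util.LinkedList;
import java.util.Queue;
import java.util.function.LongFunction;

class BlockStorageMovementNeeded {
  // Directories whose sub-inodes still need to be discovered.
  private final Queue<Long> spsDirsToBeTraversed = new LinkedList<>();
  // File-level items ready for SPS to process.
  private final Queue<Long> storageMovementNeeded = new LinkedList<>();

  synchronized void addDir(long dirInodeId) {
    spsDirsToBeTraversed.add(dirInodeId);
    notifyAll();  // wake the traversal thread
  }

  // Traversal thread: expand one queued directory into file-level items.
  // childrenOf stands in for a namespace lookup.
  synchronized void traverseOnce(LongFunction<long[]> childrenOf) {
    Long dir = spsDirsToBeTraversed.poll();
    if (dir != null) {
      for (long child : childrenOf.apply(dir)) {
        storageMovementNeeded.add(child);
      }
    }
  }

  // SPS consumes only from here; returns null when nothing is ready.
  synchronized Long next() {
    return storageMovementNeeded.poll();
  }
}
```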

{quote}// Remove xAttr if trackID does't exit in{quote}
-> It should be "// Remove xAttr if trackID doesn't exist in"

{code}
+   * Clear queues for give track id.
+   */
{code}
It should be "Clear queues for given track id."

Question: in FSDirXAttrOp.java
{code}
 if (existingXAttrs.size() != newXAttrs.size()) {
+      for (XAttr xattr : toRemove) {
+            .equals(XAttrHelper.getPrefixedName(xattr))) {
+          fsd.getBlockManager().getStoragePolicySatisfier()
+              .clearQueue(inode.getId());
+          break;
+        }
+      }
{code}
Why is this required? We will remove the xAttr only when the queue really becomes empty, right?

> [SPS]: Optimize extended attributes for tracking SPS movements
> --------------------------------------------------------------
>                 Key: HDFS-12225
>                 URL: https://issues.apache.org/jira/browse/HDFS-12225
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, namenode
>            Reporter: Uma Maheswara Rao G
>            Assignee: Surendra Singh Lilhore
>         Attachments: HDFS-12225-HDFS-10285-01.patch, HDFS-12225-HDFS-10285-02.patch,
HDFS-12225-HDFS-10285-03.patch, HDFS-12225-HDFS-10285-04.patch, HDFS-12225-HDFS-10285-05.patch,
HDFS-12225-HDFS-10285-06.patch, HDFS-12225-HDFS-10285-07.patch
> We discussed optimizing the number of extended attributes and agreed to file a separate
JIRA while implementing [HDFS-11150 | https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127]
> This is the JIRA to track that work.
> For the context, comment copied from HDFS-11150
> {quote}
> [~yuanbo] wrote : I've tried that before. There is an issue here if we only mark the
directory. When recovering from FsImage, the InodeMap isn't built up, so we don't know the
sub-inode of a given inode, in the end, We cannot add these inodes to movement queue in FSDirectory#addToInodeMap,
any thoughts?{quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. OK, for simplicity we can add for
all inodes now. To handle this 100%, we may need intermittent processing: first we
should add them to some intermittent list while loading the fsImage; once it is fully loaded and when
starting active services, we should process that list and do the required work. But that may
add some additional complexity. Let's do it with all file inodes now and we can revisit
later if it really creates issues. How about you raise a JIRA for it and think about optimizing it?
> {quote}
> {quote}
> [~andrew.wang] wrote in the HDFS-10285 merge-time review: HDFS-10899 also stores the cursor
of the iterator in the EZ root xattr to track progress and handle restarts. I wonder if we
can do something similar here to avoid having an xattr per file being moved.
> {quote}

This message was sent by Atlassian JIRA
