hadoop-hdfs-issues mailing list archives

From "Uma Maheswara Rao G (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-12225) [SPS]: Optimize extended attributes for tracking SPS movements
Date Sat, 29 Jul 2017 06:16:00 GMT

     [ https://issues.apache.org/jira/browse/HDFS-12225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Uma Maheswara Rao G updated HDFS-12225:
---------------------------------------
    Summary: [SPS]: Optimize extended attributes for tracking SPS movements  (was: Optimize
extended attributes for tracking SPS movements)

> [SPS]: Optimize extended attributes for tracking SPS movements
> --------------------------------------------------------------
>
>                 Key: HDFS-12225
>                 URL: https://issues.apache.org/jira/browse/HDFS-12225
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: datanode, namenode
>            Reporter: Uma Maheswara Rao G
>
> We discussed optimizing the number of extended attributes and agreed to file a separate
> JIRA while implementing [HDFS-11150 | https://issues.apache.org/jira/browse/HDFS-11150?focusedCommentId=15766127&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15766127].
> This is the JIRA to track that work.
> For context, the comments below are copied from HDFS-11150:
> {quote}
> [~yuanbo] wrote: I've tried that before. There is an issue if we only mark the
> directory: when recovering from the FsImage, the InodeMap isn't built up yet, so we don't
> know the sub-inodes of a given inode. In the end, we cannot add these inodes to the movement
> queue in FSDirectory#addToInodeMap. Any thoughts?
> {quote}
> {quote}
> [~umamaheswararao] wrote: I got what you are saying. OK, for simplicity we can add all
> file inodes for now. To handle this 100%, we may need intermediate processing: first we
> add them to some intermediate list while loading the FsImage; once it is fully loaded and
> active services are starting, we process that list and do the required work. But that may
> add some additional complexity. Let's go with all file inodes now and revisit later if it
> really creates issues. How about you raise a JIRA for it and look at optimizing it
> separately?
> {quote}
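The deferred-processing idea in the comments above can be sketched as follows. This is an illustrative toy, not HDFS code; all class and method names here are hypothetical. While the FsImage is being loaded the InodeMap is incomplete, so inodes carrying the SPS xattr are parked in an intermediate list and only enqueued for movement once active services have started.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

/**
 * Hypothetical sketch of the "intermediate list" approach: inodes discovered
 * during FsImage loading are held back until active services start, because
 * only then is the InodeMap complete enough to process them.
 */
public class SpsDeferredQueueSketch {
    private final List<Long> pendingDuringImageLoad = new ArrayList<>();
    private final Queue<Long> movementQueue = new ArrayDeque<>();
    private boolean activeServicesStarted = false;

    /** Called for each inode seen while loading the FsImage (or at runtime). */
    public void onInodeLoaded(long inodeId, boolean hasSpsXattr) {
        if (!hasSpsXattr) {
            return;
        }
        if (activeServicesStarted) {
            // Normal runtime path: the InodeMap is available, enqueue directly.
            movementQueue.add(inodeId);
        } else {
            // Image still loading: park the inode in the intermediate list.
            pendingDuringImageLoad.add(inodeId);
        }
    }

    /** Called once the image is fully loaded and active services start. */
    public void startActiveServices() {
        activeServicesStarted = true;
        // The InodeMap is now complete, so drain the parked inodes.
        movementQueue.addAll(pendingDuringImageLoad);
        pendingDuringImageLoad.clear();
    }

    public int queuedCount() {
        return movementQueue.size();
    }
}
```

As the comment notes, this adds complexity (two code paths and a hand-off point), which is why the simpler "mark all file inodes" approach was chosen first.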



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

