hadoop-hdfs-issues mailing list archives

From "Vinayakumar B (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel
Date Fri, 11 Dec 2015 04:36:11 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15052152#comment-15052152
] 

Vinayakumar B commented on HDFS-8578:
-------------------------------------

Synchronization on {{DataStorage}} affects only this part of the code, which basically
does the formatting/loading of the VERSION file at the datanode level. Since the data loaded from the VERSION
file is the same across all VERSION files, it should not matter.
{noformat}
          File root = dataDir.getFile();
          try {
            if (!containsStorageDir(root)) {
              // It first ensures the datanode level format is completed.
              StorageDirectory sd = loadStorageDirectory(datanode, nsInfo,
                  root, startOpt);
              addStorageDir(sd);
            } else {
              LOG.info("Storage directory " + dataDir
                  + " has already been used.");
            }
          } catch (IOException e) {
            LOG.warn("Failed to add Storage directory " + dataDir, e);
            return null;
          }
{noformat}
Earlier, getDatanodeUuid() was called and was getting blocked, which is why I had removed the synchronization
on {{addStorageLocations()}}.
Still, I believe there should not be any problem with keeping the synchronization.

The only thing is, since I have restored the synchronization on {{addStorageLocations()}}, the synchronization
on {{DataStorage#setFieldsFromProperties(..)}} is removed in the latest patch. Since all the
values read are the same, there should not be any problem with this either.
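
To make the point concrete, here is a rough sketch (simplified, with assumed java.util.concurrent plumbing; not the actual patch) of how the per-directory loading can run in parallel while only the datanode-level format/load stays serialized on {{DataStorage}}:
{code}
// Rough sketch only (hypothetical structure, not the actual patch): each data
// dir is loaded by its own task, while the datanode-level VERSION format/load
// is still serialized on the DataStorage instance. Since every VERSION file
// yields the same values, the serialization affects ordering, not the result.
ExecutorService executor = Executors.newFixedThreadPool(dataDirs.size());
List<Future<StorageDirectory>> futures = new ArrayList<>();
for (final StorageLocation dataDir : dataDirs) {
  futures.add(executor.submit(new Callable<StorageDirectory>() {
    @Override
    public StorageDirectory call() throws IOException {
      File root = dataDir.getFile();
      synchronized (DataStorage.this) {
        // Datanode-level format/load only; same values in every VERSION file.
        if (!containsStorageDir(root)) {
          return loadStorageDirectory(datanode, nsInfo, root, startOpt);
        }
      }
      LOG.info("Storage directory " + dataDir + " has already been used.");
      return null;
    }
  }));
}
// (The caller would then wait on the futures, add the returned storage
//  directories, and shut the executor down.)
{code}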

> On upgrade, Datanode should process all storage/data dirs in parallel
> ---------------------------------------------------------------------
>
>                 Key: HDFS-8578
>                 URL: https://issues.apache.org/jira/browse/HDFS-8578
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Raju Bairishetti
>            Assignee: Vinayakumar B
>            Priority: Critical
>         Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, HDFS-8578-03.patch, HDFS-8578-04.patch,
HDFS-8578-05.patch, HDFS-8578-06.patch, HDFS-8578-07.patch, HDFS-8578-08.patch, HDFS-8578-09.patch,
HDFS-8578-10.patch, HDFS-8578-11.patch, HDFS-8578-12.patch, HDFS-8578-13.patch, HDFS-8578-14.patch,
HDFS-8578-15.patch, HDFS-8578-16.patch, HDFS-8578-branch-2.6.0.patch, HDFS-8578-branch-2.7-001.patch,
HDFS-8578-branch-2.7-002.patch, HDFS-8578-branch-2.7-003.patch, h8578_20151210.patch
>
>
> Right now, during upgrades the datanode processes all the storage dirs sequentially. Assuming it takes ~20 mins to process a single storage dir, a datanode which has ~10 disks will take around 3 hours to come up.
> *BlockPoolSliceStorage.java*
> {code}
>    for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>       doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>       assert getCTime() == nsInfo.getCTime() 
>           : "Data-node and name-node CTimes must be the same.";
>     }
> {code}
> It would save a lot of time during major upgrades if the datanode processed all storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel?
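
For illustration only (not from the original report or any attached patch; assumes the usual java.util.concurrent utilities), the sequential loop quoted above could be parallelized roughly along these lines:
{code}
// Rough sketch only (hypothetical, not the actual patch): run doTransition for
// each storage dir in its own task and wait for all of them before the
// datanode continues startup.
ExecutorService executor = Executors.newFixedThreadPool(getNumStorageDirs());
List<Future<Void>> futures = new ArrayList<>();
for (int idx = 0; idx < getNumStorageDirs(); idx++) {
  final StorageDirectory sd = getStorageDir(idx);
  futures.add(executor.submit(new Callable<Void>() {
    @Override
    public Void call() throws IOException {
      doTransition(datanode, sd, nsInfo, startOpt);
      assert getCTime() == nsInfo.getCTime()
          : "Data-node and name-node CTimes must be the same.";
      return null;
    }
  }));
}
for (Future<Void> f : futures) {
  try {
    f.get();   // surfaces any IOException from a per-directory upgrade task
  } catch (InterruptedException | ExecutionException e) {
    throw new IOException("Upgrade of a storage directory failed", e);
  }
}
executor.shutdown();
{code}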



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
