hadoop-hdfs-issues mailing list archives

From "Raju Bairishetti (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-8578) On upgrade, Datanode should process all storage/data dirs in parallel
Date Thu, 11 Jun 2015 11:00:07 GMT

     [ https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Raju Bairishetti updated HDFS-8578:
-----------------------------------
    Description: 
Right now, during upgrades the datanode processes all the storage dirs sequentially. Assuming it takes ~20 minutes to process a single storage dir, a datanode with ~10 disks will take around 3 hours to come up.

*BlockPoolSliceStorage.java*
{code}
    for (int idx = 0; idx < getNumStorageDirs(); idx++) {
      doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
      assert getCTime() == nsInfo.getCTime()
          : "Data-node and name-node CTimes must be the same.";
    }
{code}

It would save a lot of time during major upgrades if the datanode processed all storage dirs/disks in parallel.

Can we make the datanode process all storage dirs in parallel?
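
One possible shape for this, offered only as a rough sketch and not the actual implementation: submit each storage dir's doTransition() call to a java.util.concurrent thread pool and wait for all of them before the CTime check. The pool sizing, exception wrapping, and Java 8 lambda syntax below are illustrative assumptions.

{code}
// Rough sketch only: run doTransition() for each storage dir on its own
// thread, then check CTime once every dir has finished. Needs the usual
// java.util.concurrent imports; the error handling here is an assumption.
ExecutorService pool = Executors.newFixedThreadPool(getNumStorageDirs());
try {
  List<Future<Void>> futures = new ArrayList<>();
  for (int idx = 0; idx < getNumStorageDirs(); idx++) {
    final StorageDirectory sd = getStorageDir(idx);
    futures.add(pool.submit((Callable<Void>) () -> {
      doTransition(datanode, sd, nsInfo, startOpt);
      return null;
    }));
  }
  for (Future<Void> f : futures) {
    f.get();                      // surfaces the first failure, if any
  }
} catch (InterruptedException | ExecutionException e) {
  throw new IOException("Upgrade of a storage directory failed", e);
} finally {
  pool.shutdown();
}
assert getCTime() == nsInfo.getCTime()
    : "Data-node and name-node CTimes must be the same.";
{code}

Since each storage dir normally sits on its own disk, the per-dir work is mostly independent disk I/O, so with one thread per dir the upgrade should take roughly as long as the slowest single dir instead of the sum of all dirs.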


  was:
Right now, during upgrades the datanode processes all the storage dirs sequentially. Assuming it takes ~20 minutes to process a single storage dir, a datanode with ~10 disks will take around 3 hours to come up.

*BlockPoolSliceStorage.java*
{code}
    for (int idx = 0; idx < getNumStorageDirs(); idx++) {
      doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
      assert getCTime() == nsInfo.getCTime()
          : "Data-node and name-node CTimes must be the same.";
    }
{code}

Can we make the datanode process all the storage dirs in parallel? This would save a lot of time during upgrades.



> On upgrade, Datanode should process all storage/data dirs in parallel
> ---------------------------------------------------------------------
>
>                 Key: HDFS-8578
>                 URL: https://issues.apache.org/jira/browse/HDFS-8578
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>            Reporter: Raju Bairishetti
>            Priority: Critical
>
> Right now, during upgrades the datanode processes all the storage dirs sequentially. Assuming it takes ~20 minutes to process a single storage dir, a datanode with ~10 disks will take around 3 hours to come up.
> *BlockPoolSliceStorage.java*
> {code}
>     for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>       doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>       assert getCTime() == nsInfo.getCTime()
>           : "Data-node and name-node CTimes must be the same.";
>     }
> {code}
> It would save a lot of time during major upgrades if the datanode processed all storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
