From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-1269) DFS Scalability: namenode throughput impacted because of global FSNamesystem lock
Date Fri, 25 May 2007 23:06:17 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-1269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-1269:
-------------------------------------

    Attachment: chooseTargetLock.patch

This is a very toned-down version of the fine-grained locking model that I was playing around
with. I have successfully tested randomWriter and dfsio on a 1000-node cluster with this fix.
My workload runs to a successful completion with this patch (but fails without it).

The chooseTarget() method consumes quite a bit of CPU; it allocates a set of datanodes for
each newly allocated block. This patch changes the locking so that chooseTarget() can run
outside the global FSNamesystem lock.

The chooseTarget() method uses the clusterMap to determine "distances" between nodes. The
clusterMap used to be protected by a monitor lock; it now has a reader/writer lock. This is
appropriate because changes to the cluster are rare (they occur only when datanodes go
down or come back up).
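
To illustrate the idea, here is a minimal sketch (not the actual Hadoop code; the class and
field names are made up) of a cluster-map-style structure guarded by a
java.util.concurrent ReentrantReadWriteLock, so that distance lookups from chooseTarget()
can proceed concurrently while rare topology changes take the write lock:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    class SimpleClusterMap {
        private final List<NodeInfo> nodes = new ArrayList<NodeInfo>();
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Topology changes (a datanode joining or leaving) are rare, so they
        // take the write lock.
        void addNode(NodeInfo node) {
            lock.writeLock().lock();
            try {
                nodes.add(node);
            } finally {
                lock.writeLock().unlock();
            }
        }

        // Read-only queries, such as the distance lookups that chooseTarget()
        // performs, can run concurrently under the read lock.
        int distance(NodeInfo a, NodeInfo b) {
            lock.readLock().lock();
            try {
                return a.rack.equals(b.rack) ? 2 : 4;   // toy rack-aware distance
            } finally {
                lock.readLock().unlock();
            }
        }

        static class NodeInfo {
            final String name;
            final String rack;
            NodeInfo(String name, String rack) { this.name = name; this.rack = rack; }
        }
    }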

When a client requests a new block for a file, the namenode acquires the global FSNamesystem
lock, checks leases, allocates a new block id, inserts it into pendingCreates, and so on. It
then releases the global FSNamesystem lock and invokes chooseTarget(), which acquires the
clusterMap lock in read mode and selects a set of datanode locations for the block.
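
A rough sketch of that flow, assuming illustrative names for the lock object, the
pendingCreates map and the helper methods (this is not the patch itself):

    import java.util.HashMap;
    import java.util.Map;

    class BlockAllocationSketch {
        private final Object fsNamesystemLock = new Object();          // stands in for the global lock
        private final Map<String, Long> pendingCreates = new HashMap<String, Long>();
        private long nextBlockId = 0;

        long[] getAdditionalBlock(String src) {
            long blockId;
            synchronized (fsNamesystemLock) {
                checkLease(src);                    // verify the client still holds the lease
                blockId = nextBlockId++;            // allocate a new block id
                pendingCreates.put(src, blockId);   // record the block as under construction
            }
            // Global lock released; chooseTarget() needs only the clusterMap read lock.
            return chooseTarget(blockId);
        }

        private void checkLease(String src) { /* lease bookkeeping elided */ }

        private long[] chooseTarget(long blockId) {
            // Placeholder: the real method consults the clusterMap under its read
            // lock and returns a set of datanode locations.
            return new long[] { blockId };
        }
    }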

This patch does not change the lock acquisition hierarchy.

> DFS Scalability: namenode throughput impacted because of global FSNamesystem lock
> ---------------------------------------------------------------------------------
>
>                 Key: HADOOP-1269
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1269
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: dhruba borthakur
>         Assigned To: dhruba borthakur
>         Attachments: chooseTargetLock.patch, serverThreads1.html, serverThreads40.html
>
>
> I have been running a 2000 node cluster and measuring namenode performance. There are
quite a few "Calls dropped" messages in the namenode log. The namenode machine has 4 CPUs
and each CPU is about 30% busy. Profiling the namenode shows that the methods that consume
the most CPU are addStoredBlock() and getAdditionalBlock(). The first method is invoked when
a datanode confirms the presence of a newly created block. The second method is invoked when
a DFSClient requests a new block for a file.
> I am attaching two files that were generated by the profiler. serverThreads40.html captures
the scenario when the namenode had 40 server handler threads. serverThreads1.html is with
1 server handler thread (with a max_queue_size of 4000).
> In the case when there are 40 handler threads, the total elapsed time taken by FSNamesystem.getAdditionalBlock()
is 1957 seconds, whereas the method that it invokes (chooseTarget) takes only about 97
seconds. FSNamesystem.getAdditionalBlock() is blocked on the global FSNamesystem lock for
the remaining 1860 seconds.
> My proposal is to implement a finer-grained locking model in the namenode. The FSNamesystem
has a few important data structures, e.g. blocksMap, datanodeMap, leases, neededReplication,
pendingCreates, heartbeats, etc. Many of these data structures already have their own lock.
My proposal is to have a lock for each one of these data structures. Each individual lock
will protect the integrity of the contents of the data structure it guards. The global
FSNamesystem lock is still needed to maintain consistency across different data structures.
> If we implement the above proposal, neither addStoredBlock() nor getAdditionalBlock() needs
to hold the global FSNamesystem lock. startFile() and closeFile() still need to acquire the
global FSNamesystem lock because they must ensure consistency across multiple data
structures.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

