hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1942) Increase the concurrency of transaction logging to edits log
Date Wed, 10 Oct 2007 21:12:50 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1942?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12533876 ]

Raghu Angadi commented on HADOOP-1942:

Yes, the following line shows nearly 18 ms for each sync (179477 ms / 9823 syncs ≈ 18.3 ms). 10 ms does not
surprise me much; I wonder what was happening before. Would the sync time vary much with the amount of data synced?

2007-10-09 19:45:37,778 INFO org.apache.hadoop.fs.FSNamesystem: Number of transactions: 240414 Total time for transactions(ms): 1242 Number of syncs: 9823 SyncTimes(ms): 179477

> Increase the concurrency of transaction logging to edits log
> ------------------------------------------------------------
>                 Key: HADOOP-1942
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1942
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.15.0
>         Attachments: transactionLogSync.patch, transactionLogSync2.patch, transactionLogSync3.patch,
> transactionLogSync4.patch, transactionLogSync5.patch, transactionLogSync6.patch, transactionLogSync8.patch,
> For some typical workloads, the throughput of the namenode is bottlenecked by the rate
> of transactions that are being logged into the edits log. In the current code, a batching
> scheme means that not every transaction has to incur a sync of the edits log to disk.
> However, the existing batching scheme can be improved.
> One option is to keep two buffers associated with the edits file. Threads write to the primary
> buffer while holding the FSNamesystem lock. Then the thread releases the FSNamesystem lock,
> acquires a new lock called the syncLock, swaps the buffers, and flushes the old buffer to the
> persistent store. Since the buffers are swapped, new transactions continue to get logged into
> the new buffer. (Of course, the new transactions cannot complete before this new buffer is
> synced to disk.)
> This approach does a better job of batching syncs to disk, thus improving performance.
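
To make the two-buffer idea concrete, here is a minimal sketch in Java of the swap-and-flush sequence described above. It is not the attached patch; the class, field, and method names (DoubleBufferedEditLog, logTransaction, sync, syncLock) are hypothetical, and the real edit log would also track transaction IDs so a thread can skip the flush entirely when another thread's sync has already covered its transaction.

    // Hypothetical sketch of the double-buffer scheme described above; not the
    // actual HADOOP-1942 patch.
    import java.io.ByteArrayOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class DoubleBufferedEditLog {
      private ByteArrayOutputStream primary = new ByteArrayOutputStream();  // being written
      private ByteArrayOutputStream syncing = new ByteArrayOutputStream();  // being flushed
      private final Object syncLock = new Object();  // serializes flushes to disk
      private final FileOutputStream editsFile;

      public DoubleBufferedEditLog(FileOutputStream editsFile) {
        this.editsFile = editsFile;
      }

      // Called while the caller holds the FSNamesystem lock: only an in-memory append.
      public synchronized void logTransaction(byte[] record) {
        primary.write(record, 0, record.length);
      }

      // Called after the FSNamesystem lock has been released.
      public void sync() throws IOException {
        ByteArrayOutputStream toFlush;
        synchronized (syncLock) {
          synchronized (this) {        // brief exclusion of writers, just for the swap
            toFlush = primary;
            primary = syncing;
            syncing = toFlush;
          }
          toFlush.writeTo(editsFile);  // the slow disk write and fsync happen outside
          editsFile.getFD().sync();    // the namesystem lock; only syncLock is held
          toFlush.reset();
        }
      }
    }

The key point is that only the brief buffer swap needs mutual exclusion with writers; the long disk write and fsync run under syncLock alone, so other threads keep appending to the fresh buffer instead of blocking behind the sync.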

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
