hadoop-hdfs-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7609) startup used too much time to load edits
Date Sat, 24 Jan 2015 17:59:34 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14290735#comment-14290735 ]

Chris Nauroth commented on HDFS-7609:
-------------------------------------

Specifically, the retry cache was added in 2.1.0-beta, so the theory in my last comment would
only be valid if you're running RPC clients older than that.

> startup used too much time to load edits
> ----------------------------------------
>
>                 Key: HDFS-7609
>                 URL: https://issues.apache.org/jira/browse/HDFS-7609
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Carrey Zhan
>         Attachments: HDFS-7609-CreateEditsLogWithRPCIDs.patch, recovery_do_not_use_retrycache.patch
>
>
> One day my namenode crashed because two journal nodes timed out at the same time under very
> high load, leaving behind about 100 million transactions in the edits log. (I still have no
> idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be needed to finish,
> and it was loading fsedits most of the time. I also tried to restart the namenode in recovery
> mode, but the loading speed was no different.
> I looked at the stack trace and judged that the slowness was caused by the retry cache, so I
> set dfs.namenode.enable.retrycache to false, and the restart finished in half an hour (see the
> configuration sketch after this quoted description).
> I think the retry cache is useless during startup, at least during the recovery process.
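
For reference, the workaround described in the report corresponds to the following hdfs-site.xml
entry. This is a minimal sketch: dfs.namenode.enable.retrycache defaults to true, and restoring
the default after the NameNode is back up is advisable, since the retry cache protects
non-idempotent RPCs against duplicate execution on client retries.

    <!-- hdfs-site.xml: skip retry cache population while replaying a large edits log.
         Workaround only; restore the default (true) after recovery. -->
    <property>
      <name>dfs.namenode.enable.retrycache</name>
      <value>false</value>
    </property>

The property is read at NameNode startup, so the change takes effect on the next restart.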



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
