hadoop-hdfs-issues mailing list archives

From "Tsz Wo Nicholas Sze (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7609) startup used too much time to load edits
Date Tue, 19 May 2015 07:02:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14549939#comment-14549939 ]

Tsz Wo Nicholas Sze commented on HDFS-7609:

PriorityQueue#remove is O(n), so that definitely could be problematic. It's odd that there
would be so many collisions that this would become noticeable, though. Are any of you running
a significant number of legacy applications linked against the RPC code from before the
introduction of retry cache support? If that were the case, then perhaps a huge number of calls
are not supplying a call ID, the NN is getting a default call ID value from protobuf decoding,
and that is causing a lot of collisions.
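To make the cost concrete, here is a minimal standalone sketch (not Hadoop code) showing why java.util.PriorityQueue is a poor fit for arbitrary removal: remove(Object) must scan the backing array, so each removal over n entries is linear, while poll() on the head is logarithmic.

```java
import java.util.PriorityQueue;

public class PqRemoveDemo {
    public static void main(String[] args) {
        // PriorityQueue.remove(Object) does a linear scan of the backing
        // array, so removing an arbitrary element from n entries costs O(n).
        PriorityQueue<Integer> queue = new PriorityQueue<>();
        int n = 200_000;
        for (int i = 0; i < n; i++) {
            queue.add(i);
        }

        long start = System.nanoTime();
        // Removing an element near the tail forces a scan over most entries.
        boolean removed = queue.remove(Integer.valueOf(n - 1));
        long removeNanos = System.nanoTime() - start;

        start = System.nanoTime();
        // poll() pops the head via sift-down in O(log n), by contrast.
        Integer head = queue.poll();
        long pollNanos = System.nanoTime() - start;

        System.out.println("removed tail element: " + removed
                + " in " + removeNanos + " ns");
        System.out.println("polled head " + head
                + " in " + pollNanos + " ns");
    }
}
```

Doing one such linear removal per replayed transaction is what turns a large edit log into an O(n^2) replay.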
The priority queue can be improved using a balanced tree, as stated in the Java comment in
LightWeightCache.  We should do it if it would fix the problem.
  /*
   * The memory footprint for java.util.PriorityQueue is low but the
   * remove(Object) method runs in linear time. We may improve it by using a
   * balanced tree. However, we do not yet have a low memory footprint balanced
   * tree implementation.
   */
  private final PriorityQueue<Entry> queue;
BTW, the priority queue is used to evict entries according to the expiration time.  All the
entries (with any key, i.e., any call ID) are stored in it.
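A minimal sketch of the balanced-tree alternative that the comment hints at: a TreeMap (a red-black tree) keyed by expiration time makes both eviction of the oldest entry and removal of an arbitrary entry O(log n). The class and method names below are hypothetical illustrations, not code from LightWeightCache.

```java
import java.util.Map;
import java.util.TreeMap;

// Hypothetical sketch: index cache entries by expiration time in a balanced
// tree so eviction and arbitrary removal both run in O(log n), unlike
// PriorityQueue.remove(Object), which scans in O(n).
public class ExpirationIndex<K> {
    // Expiration timestamp -> key. Assumes unique timestamps for simplicity;
    // a real implementation would map each timestamp to a set of keys.
    private final TreeMap<Long, K> byExpiration = new TreeMap<>();

    public void put(long expirationTime, K key) {
        byExpiration.put(expirationTime, key);
    }

    // O(log n): drop the entry with the smallest expiration time.
    public K evictOldest() {
        Map.Entry<Long, K> oldest = byExpiration.pollFirstEntry();
        return oldest == null ? null : oldest.getValue();
    }

    // O(log n): remove a specific entry by its expiration time.
    public K remove(long expirationTime) {
        return byExpiration.remove(expirationTime);
    }
}
```

The trade-off the LightWeightCache comment mentions still applies: a TreeMap allocates a node object per entry, so its memory footprint is higher than the array-backed PriorityQueue.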

> startup used too much time to load edits
> ----------------------------------------
>                 Key: HDFS-7609
>                 URL: https://issues.apache.org/jira/browse/HDFS-7609
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Carrey Zhan
>            Assignee: Ming Ma
>              Labels: BB2015-05-RFC
>         Attachments: HDFS-7609-CreateEditsLogWithRPCIDs.patch, HDFS-7609.patch, recovery_do_not_use_retrycache.patch
> One day my namenode crashed because two journal nodes timed out at the same time under
very high load, leaving behind about 100 million transactions in the edits log. (I still have
no idea why they were not rolled into the fsimage.)
> I tried to restart the namenode, but it showed that almost 20 hours would be needed to
finish, and it was loading fsedits most of the time. I also tried to restart the namenode in
recovery mode; the loading speed was no different.
> I looked into the stack trace and judged that it was caused by the retry cache. So I set
dfs.namenode.enable.retrycache to false, and the restart process finished in half an hour.
> I think the retry cache is useless during startup, at least during the recovery process.
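The workaround the reporter describes would be applied in hdfs-site.xml before restarting the NameNode; a minimal fragment, using the property name exactly as given in the report, might look like:

```xml
<property>
  <!-- Disable the NameNode retry cache so edit-log replay skips
       the per-transaction cache maintenance described above. -->
  <name>dfs.namenode.enable.retrycache</name>
  <value>false</value>
</property>
```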

This message was sent by Atlassian JIRA
