hadoop-yarn-issues mailing list archives

From "Rakesh R (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-2962) ZKRMStateStore: Limit the number of znodes under a znode
Date Thu, 18 Dec 2014 06:40:13 GMT

    [ https://issues.apache.org/jira/browse/YARN-2962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14251292#comment-14251292 ]

Rakesh R commented on YARN-2962:

OK. Recently I gave a talk about ZooKeeper; please refer to the [ZooKeeper In The Wild|http://events.linuxfoundation.org/sites/events/files/slides/ZooKeeper%20in%20the%20Wild.pdf]
presentation slides, where I mentioned a similar case. Probably you guys can have a look
at the slides, slide no. 30. The idea here is to use a hierarchical structure instead of a flat
one. For this, the user needs to split the single znode name to form a hierarchy. With this
approach the user can store many znodes. AFAIK this is a proven method in [Apache BookKeeper|http://zookeeper.apache.org/bookkeeper/docs/r4.3.0].

For example, {{application_1418470446447_0049}} can be split to form a hierarchy like {{(app_root)/application_/141/84704/46447_/0049}}.
If there is data for this application, the user can store it in the leaf znode. Since I am not
very familiar with YARN, you guys can find a better way to split the znode name for holding n
znodes. Provide a parser to read it back by iterating over the child znodes and re-forming {{application_1418470446447_0049}}.
Since ZooKeeper read operations are low latency, I guess it won't hurt performance. Probably
we can do a test and see the performance graph.
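The split-and-rejoin idea above can be sketched roughly as follows. This is only an illustration, not anything YARN or ZooKeeper prescribes: the fixed segment width of 8 characters and the {{/app_root}} parent znode are hypothetical choices, and a real implementation would pick a scheme that balances fan-out per level.

```python
# Sketch of the hierarchical-znode idea: cut one long flat znode name
# into fixed-width segments, store each segment as a path level, and
# re-form the original name by concatenating the segments back.
# The width (8) and "/app_root" parent are illustrative assumptions.

ROOT = "/app_root"  # hypothetical parent znode

def split_app_id(app_id: str, width: int = 8) -> list[str]:
    """Cut the flat znode name into fixed-width segments."""
    return [app_id[i:i + width] for i in range(0, len(app_id), width)]

def to_znode_path(app_id: str, width: int = 8) -> str:
    """Build the hierarchical path, one segment per level."""
    return ROOT + "/" + "/".join(split_app_id(app_id, width))

def from_znode_path(path: str) -> str:
    """Parser that re-forms the original application id by
    concatenating the child-znode segments under the root."""
    return "".join(path[len(ROOT) + 1:].split("/"))

app_id = "application_1418470446447_0049"
path = to_znode_path(app_id)
# path is "/app_root/applicat/ion_1418/47044644/7_0049"
assert from_znode_path(path) == app_id
```

The round trip is what matters: any application id must map to exactly one path and back, so the state store can enumerate children level by level and rebuild the ids without ambiguity.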


> ZKRMStateStore: Limit the number of znodes under a znode
> --------------------------------------------------------
>                 Key: YARN-2962
>                 URL: https://issues.apache.org/jira/browse/YARN-2962
>             Project: Hadoop YARN
>          Issue Type: Bug
>          Components: resourcemanager
>    Affects Versions: 2.6.0
>            Reporter: Karthik Kambatla
>            Assignee: Varun Saxena
>            Priority: Critical
> We ran into this issue where we were hitting the default ZK server message size configs,
> primarily because the message had too many znodes, even though individually they were
> all small.

This message was sent by Atlassian JIRA
