hadoop-yarn-issues mailing list archives

From "Zhijie Shen (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (YARN-3448) Add Rolling Time To Lives Level DB Plugin Capabilities
Date Fri, 10 Apr 2015 03:45:13 GMT

    [ https://issues.apache.org/jira/browse/YARN-3448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14488844#comment-14488844 ]

Zhijie Shen commented on YARN-3448:
-----------------------------------

[~jeagles], thanks for updating the patch. It looks good to me overall. I have some additional
comments about it:

1. Should RollingLevelDB extend AbstractService?

2. In computeCheckMillis(), shall we prevent fall-through across the different cases in the
switch block? (A sketch of what I mean follows after these comments.)

3. The javadoc of RollingLevelDBTimelineStore needs to be updated accordingly.

4. I'm still wondering whether writing entitydb and indexdb is atomic. What if entitydb is written,
but indexdb isn't for an entity? Shall we delete the data in entitydb too in that case? (Sketched below.)

5. I think another improvement could be having one thread per type of db, and per db inside
rollingdb, and writing those dbs simultaneously to increase concurrency (roughly as sketched below),
but I'm not sure how difficult that would be to implement. Thoughts?

6. Can we make sure TimelineStoreTestUtils' test cases pass with RollingLevelDBTimelineStore
too?

7. Is the TimelineDataManager code change related to this patch, or otherwise necessary?

8. Is it better to clean up the directory after completing the test cases in TestRollingLevelDB?
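
For point 2, a minimal sketch of what I mean, assuming the switch keys off a rolling period;
the enum and the values below are made up for illustration, not the actual computeCheckMillis() code:

{code:java}
import java.util.concurrent.TimeUnit;

// Hypothetical sketch only: the point is that every case ends with a break,
// so the configuration for one period cannot fall through into the next case.
public class FallThroughSketch {
  enum RollingPeriod { DAILY, WEEKLY, HOURLY }

  static long computeCheckMillis(RollingPeriod period) {
    long checkMillis;
    switch (period) {
      case DAILY:
        checkMillis = TimeUnit.DAYS.toMillis(1);
        break;  // without this break, execution would continue into WEEKLY
      case WEEKLY:
        checkMillis = TimeUnit.DAYS.toMillis(7);
        break;
      default:
        checkMillis = TimeUnit.HOURS.toMillis(1);
        break;
    }
    return checkMillis;
  }
}
{code}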
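
For point 4, roughly the kind of compensation I have in mind; the databases and keys here
are placeholders, not the patch's actual code:

{code:java}
import org.iq80.leveldb.DB;
import org.iq80.leveldb.DBException;

// Illustrative sketch: if the index write fails after the entity write
// succeeded, roll back the entity write so the two DBs do not drift apart.
// This is best effort, not a real cross-DB transaction.
public class CompensatingWriteSketch {
  private final DB entityDb;
  private final DB indexDb;

  CompensatingWriteSketch(DB entityDb, DB indexDb) {
    this.entityDb = entityDb;
    this.indexDb = indexDb;
  }

  void put(byte[] entityKey, byte[] entityValue,
           byte[] indexKey, byte[] indexValue) {
    entityDb.put(entityKey, entityValue);
    try {
      indexDb.put(indexKey, indexValue);
    } catch (DBException e) {
      entityDb.delete(entityKey);  // compensating delete
      throw e;
    }
  }
}
{code}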
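
And for point 5, the shape of what I'm picturing; executor sizing and names are placeholders:

{code:java}
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.WriteBatch;

// Sketch: one writer task per underlying DB so the entity and index batches
// are written concurrently instead of serially.
public class ConcurrentWriteSketch {
  private final ExecutorService writers = Executors.newFixedThreadPool(2);

  void writeConcurrently(final DB entityDb, final WriteBatch entityBatch,
                         final DB indexDb, final WriteBatch indexBatch)
      throws Exception {
    Future<?> entityWrite = writers.submit(new Runnable() {
      public void run() {
        entityDb.write(entityBatch);
      }
    });
    Future<?> indexWrite = writers.submit(new Runnable() {
      public void run() {
        indexDb.write(indexBatch);
      }
    });
    entityWrite.get();  // surface any write failure
    indexWrite.get();
  }
}
{code}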

> Add Rolling Time To Lives Level DB Plugin Capabilities
> ------------------------------------------------------
>
>                 Key: YARN-3448
>                 URL: https://issues.apache.org/jira/browse/YARN-3448
>             Project: Hadoop YARN
>          Issue Type: Sub-task
>            Reporter: Jonathan Eagles
>            Assignee: Jonathan Eagles
>         Attachments: YARN-3448.1.patch, YARN-3448.2.patch, YARN-3448.3.patch, YARN-3448.4.patch, YARN-3448.5.patch
>
>
> For large applications, the majority of the time in LeveldbTimelineStore is spent deleting
> old entities one record at a time. An exclusive write lock is held during the entire deletion
> phase, which in practice can last for hours. If we are willing to relax some of the consistency
> constraints, other performance-enhancing techniques can be employed to maximize throughput
> and minimize locking time.
> Split the 5 sections of the leveldb database (domain, owner, start time, entity, index)
> into 5 separate databases. This allows each database to maximize read cache effectiveness
> based on its own usage pattern, and with 5 separate databases each lookup is much faster.
> It can also help with I/O to place the entity and index databases on separate disks.
> Use rolling DBs for the entity and index DBs. 99.9% of the data is in these two sections,
> at roughly a 4:1 ratio (index to entity), at least for Tez. If we create a rolling set of
> databases that age out and can be removed efficiently, we can replace per-record DB removal
> with file system removal. To do this we must add a constraint that an entity's events are
> always placed into the correct rolling db instance based on its start time. This allows us
> to stitch the data back together while reading and enables artificial paging.
> Relax the synchronous write constraint. If we are willing to accept losing some records
> that were not flushed by the operating system during a crash, we can use async writes,
> which can be much faster.
> Prefer sequential writes. Sequential writes can be several times faster than random writes.
> Spend some small effort arranging the writes in a way that trends towards sequential write
> performance rather than random write performance.
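
To make the points in the description above concrete, a few sketches follow. First, the
"five separate databases" split, using the iq80/leveldbjni API that LeveldbTimelineStore
already depends on; the paths and cache size here are illustrative assumptions:

{code:java}
import static org.fusesource.leveldbjni.JniDBFactory.factory;

import java.io.File;
import java.io.IOException;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;

// Sketch: open the five timeline sections as five independent LevelDB
// instances so each gets its own read cache and can live on its own disk.
public class SplitDbSketch {
  public static void main(String[] args) throws IOException {
    File base = new File("/tmp/timeline");  // illustrative base path
    String[] sections = {"domain", "owner", "starttime", "entity", "index"};
    DB[] dbs = new DB[sections.length];
    for (int i = 0; i < sections.length; i++) {
      Options options = new Options()
          .createIfMissing(true)
          .cacheSize(64 * 1024 * 1024);  // per-DB read cache, tuned per section
      dbs[i] = factory.open(new File(base, sections[i]), options);
    }
    for (DB db : dbs) {
      db.close();
    }
  }
}
{code}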
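
Next, the rolling-DB idea: bucket an entity's data by its start time so that a whole expired
bucket can be dropped as a directory instead of record by record. Hourly granularity and the
naming scheme are assumptions for illustration, not the patch's exact layout:

{code:java}
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.TimeZone;
import java.util.concurrent.TimeUnit;

// Sketch: map an entity's start time to the rolling DB instance that owns it.
// Buckets older than the TTL are removed as whole directories elsewhere,
// rather than deleting their records one at a time.
public class RollingBucketSketch {
  private static final long PERIOD_MILLIS = TimeUnit.HOURS.toMillis(1);

  // Directory name of the rolling DB instance for a given start time.
  static String bucketFor(long startTimeMillis) {
    long periodStart = (startTimeMillis / PERIOD_MILLIS) * PERIOD_MILLIS;
    SimpleDateFormat fmt = new SimpleDateFormat("yyyyMMddHH");
    fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
    return "entity-" + fmt.format(new Date(periodStart));
  }

  public static void main(String[] args) {
    System.out.println(bucketFor(System.currentTimeMillis()));
  }
}
{code}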
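
The relaxed-durability point maps directly onto LevelDB's WriteOptions; a minimal sketch,
again assuming the iq80/leveldbjni API, with illustrative keys and paths:

{code:java}
import static org.fusesource.leveldbjni.JniDBFactory.factory;

import java.io.File;
import java.io.IOException;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.Options;
import org.iq80.leveldb.WriteBatch;
import org.iq80.leveldb.WriteOptions;

// Sketch: batch writes with sync(false) so LevelDB does not fsync each write.
// A crash can lose records the OS had not flushed yet, which is the trade-off
// the description accepts in exchange for much faster writes.
public class AsyncWriteSketch {
  public static void main(String[] args) throws IOException {
    DB db = factory.open(new File("/tmp/timeline-entity"),  // illustrative path
        new Options().createIfMissing(true));
    WriteBatch batch = db.createWriteBatch();
    try {
      batch.put("entity!1".getBytes("UTF-8"), "value".getBytes("UTF-8"));
      db.write(batch, new WriteOptions().sync(false));  // async write
    } finally {
      batch.close();
      db.close();
    }
  }
}
{code}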
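
Finally, one way to bias toward sequential writes is to stage puts in a sorted map and hand
them to LevelDB in key order; whether the patch arranges writes this way is an assumption:

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.TreeMap;

import org.iq80.leveldb.DB;
import org.iq80.leveldb.WriteBatch;

// Sketch: sort staged puts by key before batching them, so writes trend
// toward sequential rather than random key order. The String-keyed staging
// map is purely illustrative.
public class SequentialWriteSketch {
  static void flushSorted(DB db, Map<String, byte[]> staged) throws IOException {
    TreeMap<String, byte[]> sorted = new TreeMap<String, byte[]>(staged);
    WriteBatch batch = db.createWriteBatch();
    try {
      for (Map.Entry<String, byte[]> e : sorted.entrySet()) {
        batch.put(e.getKey().getBytes("UTF-8"), e.getValue());
      }
      db.write(batch);
    } finally {
      batch.close();
    }
  }
}
{code}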



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
