hadoop-hdfs-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (HDDS-1499) OzoneManager Cache
Date Wed, 15 May 2019 17:20:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-1499?focusedWorklogId=242690&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-242690

ASF GitHub Bot logged work on HDDS-1499:

                Author: ASF GitHub Bot
            Created on: 15/May/19 17:19
            Start Date: 15/May/19 17:19
    Worklog Time Spent: 10m 
      Work Description: bharatviswa504 commented on pull request #798: HDDS-1499. OzoneManager
URL: https://github.com/apache/hadoop/pull/798#discussion_r284364154

 File path: hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OmMetadataManagerImpl.java
 @@ -245,42 +247,50 @@ protected DBStoreBuilder addOMTablesAndCodecs(DBStoreBuilder builder)
   protected void initializeOmTables() throws IOException {
     userTable =
-        this.store.getTable(USER_TABLE, String.class, VolumeList.class);
+        this.store.getTable(USER_TABLE, String.class, VolumeList.class,
 Review comment:
   Some tables do not need a cache (for example, tables that SCM does not read through this path), which is the reason for introducing this as an option per table.
   Likewise, tables such as deletedTable, which are used by a background thread in OM, do not need a cache.
   For the volume and bucket tables we plan to keep the entire table in memory, so cache cleanup for them is a no-op even after a flush to the DB, whereas for a partial cache the flushed entries are cleaned up. That is the only difference between a full table cache and a partial table cache.
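   The full-vs-partial distinction above can be sketched roughly as follows. This is an illustrative assumption only: the interface and class names (TableCache, PartialTableCache, FullTableCache) are hypothetical and do not match the actual Ozone classes or signatures.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch: names and signatures here are assumptions,
// not the real Ozone cache classes.
interface TableCache<K, V> {
    V get(K key);
    void put(K key, V value);
    // Called after a batch of transactions has been flushed to RocksDB.
    void cleanup(List<K> flushedKeys);
}

// Partial cache: entries are evicted once they have been flushed to the DB.
class PartialTableCache<K, V> implements TableCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    public V get(K key) { return cache.get(key); }
    public void put(K key, V value) { cache.put(key, value); }
    public void cleanup(List<K> flushedKeys) {
        flushedKeys.forEach(cache::remove);
    }
}

// Full cache (e.g. volume/bucket tables): the whole table stays in memory,
// so cleanup after a DB flush is a no-op.
class FullTableCache<K, V> implements TableCache<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    public V get(K key) { return cache.get(key); }
    public void put(K key, V value) { cache.put(key, value); }
    public void cleanup(List<K> flushedKeys) { /* no-op: keep entire table */ }
}
```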
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:

Issue Time Tracking

    Worklog Id:     (was: 242690)
    Time Spent: 6h 10m  (was: 6h)

> OzoneManager Cache
> ------------------
>                 Key: HDDS-1499
>                 URL: https://issues.apache.org/jira/browse/HDDS-1499
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: Ozone Manager
>            Reporter: Bharat Viswanadham
>            Assignee: Bharat Viswanadham
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 6h 10m
>  Remaining Estimate: 0h
> In this Jira, we shall implement a cache for Table.
> With OM HA, we plan to use a double-buffer implementation that flushes transactions in batches instead of calling rocksdb put() for every operation. Once that is in place, OzoneManager HA needs a cache to handle/serve requests for validation and for returning responses.
> This Jira implements the cache as an integral part of the table, so users of the table do not need to check the cache and the DB separately. For this, the table's get API is updated to consult the cache.
> This Jira will implement:
>  # Cache as a part of each Table.
>  # Use of this cache in get().
>  # APIs for adding entries to the cache and for cleanup.
> Wiring up the code paths that add entries to the cache will be done in further Jiras.
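The cache-aware get() and the add/cleanup APIs described above could look roughly like this. This is a hypothetical sketch: the class name CachedTable and the Map standing in for the RocksDB-backed store are illustrative assumptions, not the actual Ozone table implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a table whose get() consults the cache first;
// the Map-backed "db" field stands in for the real RocksDB-backed table.
class CachedTable<K, V> {
    private final Map<K, V> cache = new HashMap<>();
    private final Map<K, V> db;

    CachedTable(Map<K, V> db) { this.db = db; }

    // Callers see a single get(): a cache hit wins, otherwise we fall
    // back to the DB, so callers never check cache and DB separately.
    V get(K key) {
        V cached = cache.get(key);
        return cached != null ? cached : db.get(key);
    }

    // Exposed APIs to add entries and to clean up flushed entries.
    void addCacheEntry(K key, V value) { cache.put(key, value); }
    void cleanupCache(List<K> flushedKeys) {
        flushedKeys.forEach(cache::remove);
    }
}
```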

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
