hadoop-hdfs-issues mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (HDDS-1685) Recon: Add support for "start" query param to containers and containers/{id} endpoints
Date Fri, 28 Jun 2019 01:57:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-1685?focusedWorklogId=268975&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-268975 ]

ASF GitHub Bot logged work on HDDS-1685:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 28/Jun/19 01:56
            Start Date: 28/Jun/19 01:56
    Worklog Time Spent: 10m 
      Work Description: avijayanhwx commented on pull request #987: HDDS-1685. Recon: Add
support for 'start' query param to containers…
URL: https://github.com/apache/hadoop/pull/987#discussion_r298428634
 
 

 ##########
 File path: hadoop-ozone/ozone-recon/src/main/java/org/apache/hadoop/ozone/recon/spi/impl/ContainerDBServiceProviderImpl.java
 ##########
 @@ -164,38 +206,49 @@ public Integer getCountForForContainerKeyPrefix(
     return prefixes;
   }
 
-  /**
-   * Get all the containers.
-   *
-   * @return Map of containerID -> containerMetadata.
-   * @throws IOException
-   */
-  @Override
-  public Map<Long, ContainerMetadata> getContainers() throws IOException {
-    // Set a negative limit to get all the containers.
-    return getContainers(-1);
-  }
-
   /**
    * Iterate the DB to construct a Map of containerID -> containerMetadata
-   * only for the given limit.
+   * only for the given limit from the given start key. The start containerID
+   * is skipped from the result.
    *
    * Return all the containers if limit < 0.
    *
+   * @param limit No of containers to get.
+   * @param prevKey containerID after which the list of containers are scanned.
    * @return Map of containerID -> containerMetadata.
    * @throws IOException
    */
   @Override
-  public Map<Long, ContainerMetadata> getContainers(int limit)
+  public Map<Long, ContainerMetadata> getContainers(int limit, long prevKey)
       throws IOException {
     Map<Long, ContainerMetadata> containers = new LinkedHashMap<>();
     TableIterator<ContainerKeyPrefix, ? extends KeyValue<ContainerKeyPrefix,
         Integer>> containerIterator = containerKeyTable.iterator();
+    boolean skipPrevKey = false;
+    ContainerKeyPrefix seekKey;
+    if (prevKey > 0L) {
+      skipPrevKey = true;
 
 Review comment:
   I believe we don't need this skipPrevKey flag. If the value of prevKey < 0, the
method will already return at Line 238. Also, instead of iterating the entire key space
of the prevKey container, could we set the seek key to prevKey + 1? That way we will
always pick up the next container.
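 
 The seek-to-prevKey + 1 idea above can be sketched with a NavigableMap standing in
for the RocksDB-backed container table (the real code seeks a TableIterator with a
ContainerKeyPrefix; the class and field names here are illustrative, not the actual
Recon implementation):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class ContainerSeekSketch {
  // Simulated container table: containerID -> metadata placeholder.
  static final NavigableMap<Long, String> TABLE = new TreeMap<>();

  /**
   * Return up to 'limit' containers whose ID is strictly greater than prevKey,
   * or all such containers when limit < 0. Seeking to prevKey + 1 means the
   * iterator can never land on prevKey itself, so no skipPrevKey flag is needed.
   */
  static Map<Long, String> getContainers(int limit, long prevKey) {
    Map<Long, String> result = new LinkedHashMap<>();
    // tailMap(prevKey + 1, true) mimics iterator.seek(prevKey + 1).
    for (Map.Entry<Long, String> e : TABLE.tailMap(prevKey + 1, true).entrySet()) {
      if (limit >= 0 && result.size() >= limit) {
        break;
      }
      result.put(e.getKey(), e.getValue());
    }
    return result;
  }
}
```

 With containers 1..3 in the table, getContainers(2, 1L) starts the scan at
container 2 and returns containers 2 and 3, without ever visiting key 1.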
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 268975)
    Time Spent: 2.5h  (was: 2h 20m)

> Recon: Add support for "start" query param to containers and containers/{id} endpoints
> --------------------------------------------------------------------------------------
>
>                 Key: HDDS-1685
>                 URL: https://issues.apache.org/jira/browse/HDDS-1685
>             Project: Hadoop Distributed Data Store
>          Issue Type: Sub-task
>          Components: Ozone Recon
>    Affects Versions: 0.4.0
>            Reporter: Vivek Ratnavel Subramanian
>            Assignee: Vivek Ratnavel Subramanian
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> * Support "start" query param to seek to the given key in RocksDB.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

