hadoop-hdfs-issues mailing list archives

From "ASF GitHub Bot (Jira)" <j...@apache.org>
Subject [jira] [Work logged] (HDDS-1753) Datanode unable to find chunk while replication data using ratis.
Date Tue, 27 Aug 2019 12:48:01 GMT

     [ https://issues.apache.org/jira/browse/HDDS-1753?focusedWorklogId=301945&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-301945 ]

ASF GitHub Bot logged work on HDDS-1753:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 27/Aug/19 12:47
            Start Date: 27/Aug/19 12:47
    Worklog Time Spent: 10m 
      Work Description: bshashikant commented on pull request #1318: HDDS-1753. Datanode unable to find chunk while replication data using ratis.
URL: https://github.com/apache/hadoop/pull/1318#discussion_r318059374
 
 

 ##########
 File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/keyvalue/statemachine/background/BlockDeletingService.java
 ##########
 @@ -143,6 +150,52 @@ public BackgroundTaskQueue getTasks() {
     return queue;
   }
 
+  public List<ContainerData> chooseContainerForBlockDeletion(int count,
+      ContainerDeletionChoosingPolicy deletionPolicy)
+      throws StorageContainerException {
+    Map<Long, ContainerData> containerDataMap =
+        ozoneContainer.getContainerSet().getContainerMap().entrySet().stream()
+            .filter(e -> isDeletionAllowed(e.getValue().getContainerData(),
+                deletionPolicy)).collect(Collectors
+            .toMap(Map.Entry::getKey, e -> e.getValue().getContainerData()));
+    return deletionPolicy
+        .chooseContainerForBlockDeletion(count, containerDataMap);
+  }
+
+  private boolean isDeletionAllowed(ContainerData containerData,
+      ContainerDeletionChoosingPolicy deletionPolicy) {
+    if (!deletionPolicy
+        .isValidContainerType(containerData.getContainerType())) {
+      return false;
+    } else if (!containerData.isClosed()) {
+      return false;
+    } else {
+      if (ozoneContainer.getWriteChannel() instanceof XceiverServerRatis) {
+        try {
+          XceiverServerRatis ratisServer =
+              (XceiverServerRatis) ozoneContainer.getWriteChannel();
+          long minReplicatedIndex = ratisServer.getMinReplicatedIndex(PipelineID
+              .valueOf(UUID.fromString(containerData.getOriginPipelineId())));
 
 Review comment:
   Addressed in the latest patch.
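
 The hunk above breaks off right after the minimum replicated index is fetched. For context, here is a minimal sketch of how the rest of isDeletionAllowed plausibly completes, assuming the intent is to defer block deletion until the slowest follower has applied the container's last committed block. The comparison and the getBlockCommitSequenceId() accessor are assumptions for illustration, not the verbatim patch; the fragment relies on the surrounding class's fields and imports shown in the diff.

{code}
// Hedged sketch, not the verbatim patch: a container becomes eligible for
// block deletion only once every follower's applied log index has passed
// the container's last committed block, so replication can no longer need
// the chunks that deletion would remove.
private boolean isDeletionAllowed(ContainerData containerData,
    ContainerDeletionChoosingPolicy deletionPolicy) {
  if (!deletionPolicy.isValidContainerType(containerData.getContainerType())
      || !containerData.isClosed()) {
    return false;
  }
  if (ozoneContainer.getWriteChannel() instanceof XceiverServerRatis) {
    try {
      XceiverServerRatis ratisServer =
          (XceiverServerRatis) ozoneContainer.getWriteChannel();
      long minReplicatedIndex = ratisServer.getMinReplicatedIndex(PipelineID
          .valueOf(UUID.fromString(containerData.getOriginPipelineId())));
      // getBlockCommitSequenceId() is an assumed accessor for the
      // container's last committed block index.
      return minReplicatedIndex >= containerData.getBlockCommitSequenceId();
    } catch (IOException ioe) {
      // If the replicated index cannot be determined, skip this container
      // for now rather than risk deleting a chunk a follower still needs.
      return false;
    }
  }
  return true;
}
{code}

 The conservative fallback on failure keeps deletion from racing ahead of replication, which is exactly the failure mode recorded in the logs quoted below.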
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 301945)
    Time Spent: 4h 50m  (was: 4h 40m)

> Datanode unable to find chunk while replication data using ratis.
> -----------------------------------------------------------------
>
>                 Key: HDDS-1753
>                 URL: https://issues.apache.org/jira/browse/HDDS-1753
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 0.4.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Shashikant Banerjee
>            Priority: Major
>              Labels: MiniOzoneChaosCluster, pull-request-available
>         Attachments: HDDS-1753.000.patch
>
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> The leader datanode is unable to read a chunk from the datanode while replicating data from the leader to a follower.
> Please note that deletion of keys is also happening while the data is being replicated.
> {code}
> 2019-07-02 19:39:22,604 INFO  impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. Reply:76a3eb0f-d7cd-477b-8973-db1014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#70:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 ERROR impl.ChunkManagerImpl (ChunkUtils.java:readData(161)) - Unable to find the chunk file. chunk info : ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot (9770) already has the append entries (first index: 1)
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. Reply:76a3eb0f-d7cd-477b-8973-db1014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#71:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 2019-07-02 19:39:22,605 INFO  keyvalue.KeyValueHandler (ContainerUtils.java:logAndReturnError(146)) - Operation: ReadChunk : Trace ID: 4216d461a4679e17:4216d461a4679e17:0:0 : Message: Unable to find the chunk file. chunk info ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048} : Result: UNABLE_TO_FIND_CHUNK
> 2019-07-02 19:39:22,605 INFO  impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(990)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: Failed appendEntries as latest snapshot (9770) already has the append entries (first index: 2)
> 2019-07-02 19:39:22,606 INFO  impl.RaftServerImpl (RaftServerImpl.java:checkInconsistentAppendEntries(972)) - 5ac88709-a3a2-4c8f-91de-5e54b617f05e: inconsistency entries. Reply:76a3eb0f-d7cd-477b-8973-db1014feb398<-5ac88709-a3a2-4c8f-91de-5e54b617f05e#72:FAIL,INCONSISTENCY,nextIndex:9771,term:2,followerCommit:9782
> 19:39:22.606 [pool-195-thread-19] ERROR DNAudit - user=null | ip=null | op=READ_CHUNK {blockData=conID: 3 locID: 102372189549953034 bcsId: 0} | ret=FAILURE
> java.lang.Exception: Unable to find the chunk file. chunk info ChunkInfo{chunkName='76ec669ae2cb6e10dd9f08c0789c5fdf_stream_a2850dce-def3-4d64-93d8-fa2ebafee933_chunk_1, offset=0, len=2048}
>         at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatchRequest(HddsDispatcher.java:320) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.impl.HddsDispatcher.dispatch(HddsDispatcher.java:148) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.dispatchCommand(ContainerStateMachine.java:346) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.readStateMachineData(ContainerStateMachine.java:476) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$getCachedStateMachineData$2(ContainerStateMachine.java:495) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4767) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache.get(LocalCache.java:3965) ~[guava-11.0.2.jar:?]
>         at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4764) ~[guava-11.0.2.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.getCachedStateMachineData(ContainerStateMachine.java:494) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at org.apache.hadoop.ozone.container.common.transport.server.ratis.ContainerStateMachine.lambda$readStateMachineData$4(ContainerStateMachine.java:542) ~[hadoop-hdds-container-service-0.5.0-SNAPSHOT.jar:?]
>         at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1590) [?:1.8.0_171]
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_171]
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_171]
> {code}
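
The stack trace shows the read being served through a Guava cache whose loader re-reads the chunk from disk on a miss. A minimal, self-contained sketch of that pattern follows; the class and method names are illustrative assumptions, not the actual ContainerStateMachine code. If the BlockDeletingService has already removed the chunk file, the loader's read fails, which surfaces as the UNABLE_TO_FIND_CHUNK result above.

{code}
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.ExecutionException;

public class StateMachineDataCacheSketch {
  // Cache keyed by Raft log index; values are the chunk bytes.
  private final Cache<Long, byte[]> stateMachineDataCache =
      CacheBuilder.newBuilder().maximumSize(1024).build();

  // Illustrative stand-in for the ReadChunk dispatch; in the failing run
  // the chunk file has already been deleted, so the read throws.
  private byte[] readChunkFromDisk(long logIndex) throws Exception {
    throw new Exception("Unable to find the chunk file");
  }

  public byte[] getCachedStateMachineData(long logIndex)
      throws ExecutionException {
    // A cache miss triggers the disk read; a concurrent block deletion
    // makes that read fail, which is what the stack trace records.
    return stateMachineDataCache.get(logIndex,
        () -> readChunkFromDisk(logIndex));
  }
}
{code}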



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


