hadoop-hdfs-issues mailing list archives

From "Mukul Kumar Singh (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDDS-834) Datanode goes OOM because of segment size
Date Tue, 13 Nov 2018 18:00:00 GMT

     [ https://issues.apache.org/jira/browse/HDDS-834?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

Mukul Kumar Singh updated HDDS-834:
    Attachment:     (was: HDDS-834-ozone-0.3.001.patch)

> Datanode goes OOM because of segment size
> ------------------------------------------
>                 Key: HDDS-834
>                 URL: https://issues.apache.org/jira/browse/HDDS-834
>             Project: Hadoop Distributed Data Store
>          Issue Type: Bug
>          Components: Ozone Datanode
>    Affects Versions: 0.3.0
>            Reporter: Mukul Kumar Singh
>            Assignee: Mukul Kumar Singh
>            Priority: Major
>         Attachments: HDDS-834.001.patch
> Currently the Ratis segment size is set to 1GB. After RATIS-253, the data of a WriteChunk
request is no longer counted towards the size of the entry written to the Raft log.
> This jira reduces the segment size to 16KB, which limits the number of WriteChunk entries
per segment to 64. This means that with 16MB chunks, the total data pending in a
segment is capped at 1GB.
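
A segment size change like the one described above would typically be applied through the
datanode's Ratis configuration. The fragment below is only a sketch: the key name
`dfs.container.ratis.segment.size` is an assumption based on Ozone's configuration naming
conventions and should be verified against the actual release and patch.

```xml
<!-- Hypothetical ozone-site.xml fragment (key name is an assumption, not
     confirmed by the patch): cap the Raft log segment at 16KB so that at
     most ~64 WriteChunk entries (16MB of chunk data each) can be pending
     in a single segment, bounding memory use at roughly 1GB. -->
<property>
  <name>dfs.container.ratis.segment.size</name>
  <value>16384</value>
</property>
```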

This message was sent by Atlassian JIRA

To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org
