hadoop-common-issues mailing list archives

From "ASF GitHub Bot (Jira)" <j...@apache.org>
Subject [jira] [Work logged] (HADOOP-17195) Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs
Date Wed, 09 Sep 2020 16:20:00 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-17195?focusedWorklogId=480908&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-480908 ]

ASF GitHub Bot logged work on HADOOP-17195:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 09/Sep/20 16:19
            Start Date: 09/Sep/20 16:19
    Worklog Time Spent: 10m 
      Work Description: steveloughran opened a new pull request #2294:
URL: https://github.com/apache/hadoop/pull/2294


   
   This is the successor to #2179
   
   1. The ABFS store creates a single thread pool, configurable with a fixed size or as a multiple of the number of cores.
   2. Each output stream is given its own semaphored pool, which limits the access that stream has to the shared pool (see the sketch after this list).
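
   A minimal sketch of that shape (one shared store pool plus a per-stream semaphored wrapper) is below. The class name, method names and pool sizes are my own illustration, not code from the patch:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;

/**
 * Illustrative only: one shared pool per store; each stream gets a wrapper
 * that caps how many tasks it may have queued or running at once.
 */
public class SemaphoredStreamExecutor {

  private final ExecutorService sharedPool;   // the store-wide pool
  private final Semaphore permits;            // per-stream concurrency cap

  public SemaphoredStreamExecutor(ExecutorService sharedPool, int maxOutstandingPerStream) {
    this.sharedPool = sharedPool;
    this.permits = new Semaphore(maxOutstandingPerStream);
  }

  /** Blocks the caller (the stream's write path) once its quota is used up. */
  public void submit(Runnable task) throws InterruptedException {
    permits.acquire();
    try {
      sharedPool.execute(() -> {
        try {
          task.run();
        } finally {
          permits.release();       // free the permit when the upload finishes
        }
      });
    } catch (RuntimeException e) {
      permits.release();           // don't leak a permit if submission is rejected
      throw e;
    }
  }

  public static void main(String[] args) throws InterruptedException {
    // Hypothetical sizing: pool = 4 x cores, each stream limited to 4 outstanding uploads.
    ExecutorService storePool =
        Executors.newFixedThreadPool(4 * Runtime.getRuntime().availableProcessors());
    SemaphoredStreamExecutor streamExecutor = new SemaphoredStreamExecutor(storePool, 4);
    for (int i = 0; i < 16; i++) {
      final int block = i;
      streamExecutor.submit(() -> System.out.println("uploading block " + block));
    }
    storePool.shutdown();
  }
}
```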
   
   To actually defend against OOMs, it is the per-stream queue length that needs to be managed; looking at the patch, it still has the problem of #2179: you need one buffer per pending upload in the pools.
   
   Ultimately, the S3A connector fixed this by moving to disk buffering by default. A more performant design might be a blocking byte buffer factory which limits the number of buffers the streams can request, putting an upper bound on the amount of memory a single ABFS store instance can demand (sketched below).
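
   A minimal sketch of such a blocking buffer factory, under assumed names and sizes (nothing here is from the patch); the point is simply that requestBuffer() blocks once the store-wide quota of in-flight buffers is reached:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.Semaphore;

/**
 * Illustrative only: hands out at most maxBuffers buffers at a time,
 * blocking callers until a buffer is returned, so one store instance
 * can never demand more than maxBuffers * bufferSize bytes of heap.
 */
public class BlockingByteBufferFactory {

  private final Semaphore available;
  private final int bufferSize;

  public BlockingByteBufferFactory(int maxBuffers, int bufferSize) {
    this.available = new Semaphore(maxBuffers);
    this.bufferSize = bufferSize;
  }

  /** Blocks the writing thread when the quota of outstanding buffers is exhausted. */
  public ByteBuffer requestBuffer() throws InterruptedException {
    available.acquire();
    return ByteBuffer.allocate(bufferSize);
  }

  /** Must be called when the upload of the buffer completes. */
  public void returnBuffer(ByteBuffer buffer) {
    buffer.clear();
    available.release();
  }

  public static void main(String[] args) throws InterruptedException {
    // Hypothetical sizing: cap the store at 8 x 8 MB = 64 MB of upload buffers.
    BlockingByteBufferFactory factory = new BlockingByteBufferFactory(8, 8 * 1024 * 1024);
    ByteBuffer buffer = factory.requestBuffer();
    System.out.println("got a buffer of " + buffer.capacity() + " bytes");
    factory.returnBuffer(buffer);
  }
}
```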
   
   
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

            Worklog Id:     (was: 480908)
    Remaining Estimate: 0h
            Time Spent: 10m

> Intermittent OutOfMemory error while performing hdfs CopyFromLocal to abfs 
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-17195
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17195
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/azure
>    Affects Versions: 3.3.0
>            Reporter: Mehakmeet Singh
>            Assignee: Bilahari T H
>            Priority: Major
>              Labels: abfsactive
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> OutOfMemory error due to a new ThreadPool being created each time an AbfsOutputStream is created. Since the thread pools aren't limited, a lot of data is loaded into buffers, which causes the OutOfMemory error (a rough sketch of the arithmetic follows the fix list below).
> Possible fixes:
> - Limit the thread count while performing hdfs copyFromLocal (using the -t option).
> - Reduce OUTPUT_BUFFER_SIZE significantly, which would limit the amount of data buffered by the threads.
> - Don't create a new ThreadPool each time an AbfsOutputStream is created, and limit the number of ThreadPools each AbfsOutputStream can create.
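
A rough sketch of the arithmetic behind the OOM, with entirely assumed figures (stream count, buffers per stream and buffer size are illustrative, not measured from this issue):

```java
/** Illustrative arithmetic only; all figures are assumptions, not taken from the issue. */
public class AbfsBufferMemoryEstimate {
  public static void main(String[] args) {
    int concurrentStreams = 16;              // assumed: e.g. copyFromLocal -t 16
    int buffersPerStream = 32;               // assumed: one buffer per pending upload in the per-stream pool
    long bufferSizeBytes = 8L * 1024 * 1024; // assumed write buffer size

    long worstCaseBytes = (long) concurrentStreams * buffersPerStream * bufferSizeBytes;
    System.out.printf("worst-case buffered data: %d MB%n", worstCaseBytes / (1024 * 1024));
    // 16 streams x 32 buffers x 8 MB = 4096 MB, easily beyond a typical client heap.
  }
}
```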



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

