hadoop-common-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-11446) S3AOutputStream should use shared thread pool to avoid OutOfMemoryError
Date Tue, 30 Dec 2014 18:43:13 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-11446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14261344#comment-14261344 ]

Hadoop QA commented on HADOOP-11446:
------------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12689537/hadoop-11446-003.patch
  against trunk revision 6621c35.

    {color:green}+1 @author{color}.  The patch does not contain any @author tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no new tests are needed for this patch.
                        Also please list what manual steps were performed to verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new Findbugs (version 2.0.3) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase the total number of release audit warnings.

    {color:green}+1 core tests{color}.  The patch passed unit tests in hadoop-tools/hadoop-aws.

Test results: https://builds.apache.org/job/PreCommit-HADOOP-Build/5352//testReport/
Console output: https://builds.apache.org/job/PreCommit-HADOOP-Build/5352//console

This message is automatically generated.

> S3AOutputStream should use shared thread pool to avoid OutOfMemoryError
> -----------------------------------------------------------------------
>
>                 Key: HADOOP-11446
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11446
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Ted Yu
>            Assignee: Ted Yu
>         Attachments: hadoop-11446-001.patch, hadoop-11446-002.patch, hadoop-11446-003.patch
>
>
> Here is part of the output, including the OOME, when an HBase snapshot is exported to s3a (nofile ulimit was increased to 102400):
> {code}
> 2014-12-19 13:15:03,895 INFO  [main] s3a.S3AFileSystem: OutputStream for key 'FastQueryPOC/2014-12-11/EVENT1-IDX-snapshot/.hbase-snapshot/.tmp/EVENT1_IDX_snapshot_2012_12_11/650a5678810fbdaa91809668d11ccf09/.regioninfo' closed. Now beginning upload
> 2014-12-19 13:15:03,895 INFO  [main] s3a.S3AFileSystem: Minimum upload part size: 16777216 threshold 2147483647
> Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:713)
>         at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:949)
>         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
>         at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
>         at com.amazonaws.services.s3.transfer.internal.UploadMonitor.<init>(UploadMonitor.java:129)
>         at com.amazonaws.services.s3.transfer.TransferManager.upload(TransferManager.java:449)
>         at com.amazonaws.services.s3.transfer.TransferManager.upload(TransferManager.java:382)
>         at org.apache.hadoop.fs.s3a.S3AOutputStream.close(S3AOutputStream.java:127)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>         at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:54)
>         at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:366)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:356)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:356)
>         at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:338)
>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.run(ExportSnapshot.java:791)
>         at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.innerMain(ExportSnapshot.java:882)
>         at org.apache.hadoop.hbase.snapshot.ExportSnapshot.main(ExportSnapshot.java:886)
> {code}
> In S3AOutputStream#close():
> {code}
>       TransferManager transfers = new TransferManager(client);
> {code}
> This results in each TransferManager creating its own thread pool, leading to the OOME.
> One solution is to pass a shared thread pool to TransferManager, for example:
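>
> A minimal sketch of the idea (not the attached patch; the factory class name and fixed pool size are illustrative assumptions), using the AWS SDK's TransferManager(AmazonS3, ExecutorService) constructor so every S3AOutputStream reuses one executor:
> {code}
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
>
> import com.amazonaws.services.s3.AmazonS3;
> import com.amazonaws.services.s3.transfer.TransferManager;
>
> public class SharedTransferManagerFactory {
>   // Single pool shared by all callers; the size 10 is an arbitrary example,
>   // a real fix would make it configurable.
>   private static final ExecutorService SHARED_POOL =
>       Executors.newFixedThreadPool(10);
>
>   public static TransferManager create(AmazonS3 client) {
>     // Reusing SHARED_POOL means closing many output streams no longer
>     // spawns a fresh thread pool per TransferManager.
>     return new TransferManager(client, SHARED_POOL);
>   }
> }
> {code}
> One caveat with this approach: shutting down a TransferManager built this way needs care so that it does not also tear down the shared executor that other streams are still using.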



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
