hadoop-mapreduce-issues mailing list archives

From "Feng Yuan (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (MAPREDUCE-7083) Map native output collector hang on NativeBatchProcess.nativeFinsh
Date Wed, 18 Apr 2018 11:32:00 GMT

     [ https://issues.apache.org/jira/browse/MAPREDUCE-7083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Feng Yuan updated MAPREDUCE-7083:
---------------------------------
    Description: 
I ran the terasort benchmark from the example apps.

The input size is 100 GB and the block size is 1 GB, so there are 100 files in total, and each file has only one split.

Terasort therefore runs 100 map tasks, which means every map task processes 1 GB of data.
Each map task's JVM settings are:
{code:java}
-Xmx2048m -Xms2048m -Xmn256m -XX:MaxDirectMemorySize=128m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m
-XX:ParallelGCThreads=10

io.sort.mb is 512mb
{code}
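For context, these JVM options and the sort buffer are normally wired into the job configuration, and the native output collector is enabled through {{mapreduce.job.map.output.collector.class}}. A minimal sketch of the relevant mapred-site.xml entries (the property names are standard Hadoop ones; the values here are only taken from the figures quoted above, not from the original job configuration):
{code:xml}
<!-- Sketch only: map-task JVM options and sort buffer as described above -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx2048m -Xms2048m -Xmn256m -XX:MaxDirectMemorySize=128m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m -XX:ParallelGCThreads=10</value>
</property>
<property>
  <!-- newer name for io.sort.mb -->
  <name>mapreduce.task.io.sort.mb</name>
  <value>512</value>
</property>
<property>
  <!-- enables the nativetask output collector involved in the hang -->
  <name>mapreduce.job.map.output.collector.class</name>
  <value>org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator</value>
</property>
{code}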
This is the Java stack:
 [^js_1]
This is the native stack:
 [^js_native_1]
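For reproduction, a sketch of how such a run is typically launched (the commands and paths below are assumptions for illustration; only the sizes come from the description above):
{code:bash}
# 100 GB input = 10^9 rows of 100 bytes each; 1 GB blocks give 100 splits
hadoop jar hadoop-mapreduce-examples-*.jar teragen \
  -Ddfs.blocksize=1073741824 1000000000 /bench/terasort-input

# run terasort with the native output collector enabled
hadoop jar hadoop-mapreduce-examples-*.jar terasort \
  -Dmapreduce.job.map.output.collector.class=org.apache.hadoop.mapred.nativetask.NativeMapOutputCollectorDelegator \
  /bench/terasort-input /bench/terasort-output
{code}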

  was:
I ran the terasort benchmark from the example apps.

The input size is 100 GB and the block size is 1 GB, so there are 100 files in total, and each file has only one split.

Terasort therefore runs 100 map tasks, which means every map task processes 1 GB of data.
Each map task's JVM settings are:
{code:java}
-Xmx2048m -Xms2048m -Xmn256m -XX:MaxDirectMemorySize=128m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m
-XX:ParallelGCThreads=10

io.sort.mb is 512mb
{code}
This is the heap snapshot:
 !4910BB94-7966-4BFB-B2EC-66FE42768F3A.png!
This is the working output files:
 !734822C1-F4CB-4B37-8F43-C598CB18F52D.png!
This is the Java stack:
 [^js_1]
This is the native stack:
 [^js_native_1]


> Map native output collector hang on NativeBatchProcess.nativeFinsh
> ------------------------------------------------------------------
>
>                 Key: MAPREDUCE-7083
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7083
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: nativetask
>    Affects Versions: 3.0.0-alpha1
>            Reporter: Feng Yuan
>            Priority: Major
>         Attachments: 4910BB94-7966-4BFB-B2EC-66FE42768F3A.png, 734822C1-F4CB-4B37-8F43-C598CB18F52D.png,
js_1, js_native_1
>
>
> I ran the terasort benchmark from the example apps.
> The input size is 100 GB and the block size is 1 GB, so there are 100 files in total, and each file has only one split.
> Terasort therefore runs 100 map tasks, which means every map task processes 1 GB of data.
> Each map task's JVM settings are:
> {code:java}
> -Xmx2048m -Xms2048m -Xmn256m -XX:MaxDirectMemorySize=128m -XX:SurvivorRatio=6 -XX:MaxPermSize=128m -XX:ParallelGCThreads=10
> io.sort.mb is 512mb
> {code}
> This is the Java stack:
>  [^js_1]
> This is the native stack:
>  [^js_native_1]



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: mapreduce-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: mapreduce-issues-help@hadoop.apache.org

