hadoop-hdfs-issues mailing list archives

From "Chen Zhang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12936) java.lang.OutOfMemoryError: unable to create new native thread
Date Mon, 18 Dec 2017 09:42:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16294704#comment-16294704 ]

Chen Zhang commented on HDFS-12936:
-----------------------------------

Hey [~1028344078@qq.com], this error usually means your system limits are not set appropriately.
Many different system limits can cause this issue, such as max-threads-per-process, max-open-files-per-process,
etc.
This [answer on stackoverflow | https://stackoverflow.com/questions/34452302/how-to-increase-maximum-number-of-jvm-threads-linux-64bit]
is a good guide for finding out which limit is misconfigured. Hope it helps.
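As a concrete starting point, the limits mentioned above can be inspected on a typical Linux box with commands like these (a minimal sketch; run them as the user that starts the DataNode, since per-user limits differ, and note that file names and defaults vary by distribution):

```shell
# Per-user limits for the current shell
ulimit -u   # max user processes (on Linux this also caps threads)
ulimit -n   # max open file descriptors

# System-wide ceilings on threads and process IDs
cat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max

# Per-thread stack size also matters: every native thread reserves
# this much virtual memory, so a large value limits how many threads fit
ulimit -s   # stack size in KB (or "unlimited")
```

Persistent changes to the per-user values usually go through `nproc` and `nofile` entries in `/etc/security/limits.conf` (or a service manager override), not just the interactive shell.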

> java.lang.OutOfMemoryError: unable to create new native thread
> --------------------------------------------------------------
>
>                 Key: HDFS-12936
>                 URL: https://issues.apache.org/jira/browse/HDFS-12936
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.6.0
>         Environment: CDH5.12
> hadoop2.6
>            Reporter: Jepson
>   Original Estimate: 96h
>  Remaining Estimate: 96h
>
> I configured the max user processes to 65535 for every user, and the datanode memory is 8G.
> When a lot of data was being written, the datanode was shut down.
> But I can see the memory use is only < 1000M.
> Please see https://pan.baidu.com/s/1o7BE0cy
> *DataNode shutdown error log:*  
> {code:java}
> 2017-12-17 23:58:14,422 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1437036909-192.168.17.36-1509097205664:blk_1074725940_987917, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
> 2017-12-17 23:58:31,425 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
> 	at java.lang.Thread.start0(Native Method)
> 	at java.lang.Thread.start(Thread.java:714)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
> 	at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:01,426 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
> 	at java.lang.Thread.start0(Native Method)
> 	at java.lang.Thread.start(Thread.java:714)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
> 	at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:05,520 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
> 	at java.lang.Thread.start0(Native Method)
> 	at java.lang.Thread.start(Thread.java:714)
> 	at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:154)
> 	at java.lang.Thread.run(Thread.java:745)
> 2017-12-17 23:59:31,429 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1437036909-192.168.17.36-1509097205664:blk_1074725951_987928 src: /192.168.17.54:40478 dest: /192.168.17.48:50010
> {code}
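The trace above shows DataXceiverServer failing inside Thread.start(), which points at a per-process thread ceiling rather than heap exhaustion (consistent with the reporter seeing < 1000M of memory in use). A hedged sketch for checking the limits the kernel actually enforces on a running process, and its live thread count, via /proc (it uses the current shell's PID as a stand-in; in practice substitute the DataNode's PID, e.g. from `jps` or `pgrep`):

```shell
# Stand-in PID for illustration; point this at the DataNode process instead,
# since limits can differ between your shell and a daemon started at boot.
PID=$$

# Limits the kernel enforces on this specific process
grep -E "Max (processes|open files)" /proc/$PID/limits

# Number of native threads the process currently has
ls /proc/$PID/task | wc -l
```

Comparing the live thread count against the "Max processes" soft limit as load ramps up shows how close the DataNode is to the ceiling before the OutOfMemoryError fires.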



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

