From: Drake 민영근
To: user@hadoop.apache.org, 조주일
Date: Fri, 24 Apr 2015 16:58:46 +0900
Subject: Re: rolling upgrade(2.4.1 to 2.6.0) problem

Hi,

How about the ulimit setting of the user for the HDFS datanode?

Drake 민영근, Ph.D
kt NexR
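For reference, here is a minimal way to check those limits for the account the DataNode runs as (a sketch only; it assumes the daemon runs as a user named "hdfs"; substitute your actual service account). The "unable to create new native thread" error in the log below is often a symptom of a low per-user process/thread limit (ulimit -u) or file-descriptor limit (ulimit -n) rather than of heap exhaustion:

  # Limits of the assumed "hdfs" service account
  su - hdfs -c 'ulimit -u; ulimit -n'

  # Limits actually applied to the running DataNode process
  # (replace <datanode-pid> with the real pid from jps)
  cat /proc/<datanode-pid>/limits

  # To raise them persistently, add lines like these (illustrative values only)
  # to /etc/security/limits.conf and restart the DataNode:
  #   hdfs  -  nproc   32768
  #   hdfs  -  nofile  64000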
On Wed, Apr 22, 2015 at 6:25 PM, 조주일 <tjstory@kgrid.co.kr> wrote:

> I allocated 5G.
> I think OOM is not essentially the cause.
>
> -----Original Message-----
> From: "Han-Cheol Cho" <hancheol.cho@nhn-playart.com>
> To: user@hadoop.apache.org
> Sent: 2015-04-22 (Wed) 15:32:35
> Subject: RE: rolling upgrade(2.4.1 to 2.6.0) problem
>
> Hi,
>
> The first warning shows an out-of-memory error in the JVM.
> Did you give the DataNode daemons enough max heap memory?
> DN daemons use a max heap size of 1GB by default, so if your DN needs more
> than that, it will be in trouble.
>
> You can check the memory consumption of your DN daemons (e.g., with the top command)
> and the memory allocated to them by the -Xmx option (e.g., jps -lmv).
> If the max heap size is too small, you can use the HADOOP_DATANODE_OPTS variable
> (e.g., HADOOP_DATANODE_OPTS="-Xmx4g") to override it.
>
> Best wishes,
> Han-Cheol
>
> -----Original Message-----
> From: "조주일" <tjstory@kgrid.co.kr>
> To: user@hadoop.apache.org
> Sent: 2015-04-22 (Wed) 14:54:16
> Subject: rolling upgrade(2.4.1 to 2.6.0) problem
>
> My cluster is:
> hadoop 2.4.1
> Capacity: 1.24PB
> Used: 1.1PB
> 16 datanodes
> Each node has a capacity of 65TB, 96TB, 80TB, etc.
>
> I had to proceed with a rolling upgrade from 2.4.1 to 2.6.0.
> Upgrading one datanode takes about 40 minutes.
> Under-replicated blocks occur while the upgrade is in progress.
>
> 10 nodes completed the upgrade to 2.6.0.
> A problem occurred at some point during the rolling upgrade of the remaining nodes.
>
> Heartbeats of many nodes (2.6.0 only) have failed.
>
> I changed the following attributes, but it did not fix the problem:
> dfs.datanode.handler.count = 100 ---> 300, 400, 500
> dfs.datanode.max.transfer.threads = 4096 ---> 8000, 10000
>
> My guess:
> 1. Something causes a delay in processing threads; I think it may be because of block replication between different versions.
> 2. As a result, many more handlers and xceivers became necessary.
> 3. As a result, an out-of-memory error occurs, or a problem arises on the datanode.
> 4. Heartbeats fail, and the datanode dies.
>
> I found the datanode error log below,
> but I cannot determine the cause from it.
> I think it is caused by block replication between different versions.
>
> Someone, please help me!
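A quick way to double-check that the raised values above are actually in effect on a given datanode (a hedged sketch; hdfs getconf reads the configuration files on the host where it runs, so run it on the datanode itself, and remember the DataNode must be restarted to pick up hdfs-site.xml changes):

  # Run on a datanode host after editing hdfs-site.xml and restarting the daemon;
  # these are the two properties mentioned above.
  hdfs getconf -confKey dfs.datanode.handler.count
  hdfs getconf -confKey dfs.datanode.max.transfer.threads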
> DATANODE LOG
> --------------------------------------------------------------------------
> ### I had to check a few thousand CLOSE_WAIT connections from the datanode.
>
> org.apache.hadoop.hdfs.server.datanode.DataNode: Slow BlockReceiver write packet to mirror took 1207ms (threshold=300ms)
>
> 2015-04-21 22:46:01,772 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DataNode is out of memory. Will retry in 30 seconds.
> java.lang.OutOfMemoryError: unable to create new native thread
>         at java.lang.Thread.start0(Native Method)
>         at java.lang.Thread.start(Thread.java:640)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:145)
>         at java.lang.Thread.run(Thread.java:662)
> 2015-04-21 22:49:45,378 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: datanode-192.168.1.207:40010:DataXceiverServer:java.io.IOException: Xceiver count 8193 exceeds the limit of concurrent xcievers: 8192
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:140)
>         at java.lang.Thread.run(Thread.java:662)
> 2015-04-22 01:01:25,632 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: datanode-192.168.1.207:40010:DataXceiverServer:java.io.IOException: Xceiver count 8193 exceeds the limit of concurrent xcievers: 8192
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiverServer.run(DataXceiverServer.java:140)
>         at java.lang.Thread.run(Thread.java:662)
> 2015-04-22 03:49:44,125 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: datanode-192.168.1.204:40010:DataXceiver error processing READ_BLOCK operation  src: /192.168.2.174:45606 dst: /192.168.1.204:40010
> java.io.IOException: cannot find BPOfferService for bpid=BP-1770955034-0.0.0.0-1401163460236
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.getDNRegistrationForBP(DataNode.java:1387)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:470)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
>         at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
>         at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
>         at java.lang.Thread.run(Thread.java:662)
> 2015-04-22 05:30:28,947 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(192.168.1.203, datanodeUuid=654f22ef-84b3-4ecb-a959-2ea46d817c19, infoPort=40075, ipcPort=40020, storageInfo=lv=-56;cid=CID-CLUSTER;nsid=239138164;c=1404883838982):Failed to transfer BP-1770955034-0.0.0.0-1401163460236:blk_1075354042_1613403 to 192.168.2.156:40010 got
> java.net.SocketException: Original Exception : java.io.IOException: Connection reset by peer
>         at sun.nio.ch.FileChannelImpl.transferTo0(Native Method)
>         at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:405)
>         at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:506)
>         at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:223)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:559)
>         at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:728)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:2017)
>         at java.lang.Thread.run(Thread.java:662)
> Caused by: java.io.IOException: Connection reset by peer
>         ... 8 more
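For what it is worth, Han-Cheol's heap suggestion above would normally be applied through hadoop-env.sh on each datanode. A minimal sketch (the 4g figure is only the example value quoted in his mail, not a recommendation):

  # In $HADOOP_CONF_DIR/hadoop-env.sh on each datanode, then restart the DataNode.
  # -Xmx4g is just the example value from the thread; size it for your node.
  export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"

  # Verify what the running daemon was actually started with:
  jps -lmv | grep -i datanode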