Date: Tue, 18 Jun 2019 20:23:00 +0000 (UTC)
From: Íñigo Goiri (JIRA)
To: hdfs-issues@hadoop.apache.org
Subject: [jira] [Comment Edited] (HDFS-14579) In refreshNodes, avoid performing a DNS lookup while holding the write lock

    [ https://issues.apache.org/jira/browse/HDFS-14579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16867011#comment-16867011 ]

Íñigo Goiri edited comment on HDFS-14579 at 6/18/19
8:22 PM:
-------------------------------------------------------------

So here you can see it stuck getting the address:
{code}
"main" #1 prio=5 os_prio=0 tid=0x0000000001b38800 nid=0x6a0c runnable [0x0000000001b1e000]
   java.lang.Thread.State: RUNNABLE
	at java.net.Inet4AddressImpl.lookupAllHostAddr(Native Method)
	at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
	at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
	at java.net.InetAddress.getAllByName0(InetAddress.java:1276)
	at java.net.InetAddress.getAllByName(InetAddress.java:1192)
	at java.net.InetAddress.getAllByName(InetAddress.java:1126)
	at java.net.InetAddress.getByName(InetAddress.java:1076)
	at java.net.InetSocketAddress.<init>(InetSocketAddress.java:220)
	at org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager.parseEntry(HostFileManager.java:94)
	at org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager.readFile(HostFileManager.java:80)
	at org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager.refresh(HostFileManager.java:157)
	at org.apache.hadoop.hdfs.server.blockmanagement.HostFileManager.refresh(HostFileManager.java:70)
	at org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager.<init>(DatanodeManager.java:274)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:416)
	at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.<init>(BlockManager.java:408)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:792)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:707)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:673)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:750)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1000)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:979)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1726)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1794)
{code}
The code path is not exactly the same, but it's the same DNS resolution issue. This is Hadoop 2.9.1, BTW.


> In refreshNodes, avoid performing a DNS lookup while holding the write lock
> ---------------------------------------------------------------------------
>
>                 Key: HDFS-14579
>                 URL: https://issues.apache.org/jira/browse/HDFS-14579
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 3.3.0
>            Reporter: Stephen O'Donnell
>            Assignee: Stephen O'Donnell
>            Priority: Major
>         Attachments: HDFS-14579.001.patch
>
>
> When refreshNodes is called on a large cluster, or on a cluster where DNS is not performing well, it can cause the namenode to hang for a long time. This is because the refreshNodes operation holds the global write lock while it is running. Most of the refreshNodes code is simple and hence fast, but unfortunately it performs a DNS lookup for each host in the cluster while the lock is held.
> Right now, it calls:
> {code}
> public void refreshNodes(final Configuration conf) throws IOException {
>   refreshHostsReader(conf);
>   namesystem.writeLock();
>   try {
>     refreshDatanodes();
>     countSoftwareVersions();
>   } finally {
>     namesystem.writeUnlock();
>   }
> }
> {code}
> The line refreshHostsReader(conf); reads the new config file and does a DNS lookup on each entry - the write lock is not held here. Then the main work is done here:
> {code}
> private void refreshDatanodes() {
>   final Map<String, DatanodeDescriptor> copy;
>   synchronized (this) {
>     copy = new HashMap<>(datanodeMap);
>   }
>   for (DatanodeDescriptor node : copy.values()) {
>     // Check if not include.
>     if (!hostConfigManager.isIncluded(node)) {
>       node.setDisallowed(true);
>     } else {
>       long maintenanceExpireTimeInMS =
>           hostConfigManager.getMaintenanceExpirationTimeInMS(node);
>       if (node.maintenanceNotExpired(maintenanceExpireTimeInMS)) {
>         datanodeAdminManager.startMaintenance(
>             node, maintenanceExpireTimeInMS);
>       } else if (hostConfigManager.isExcluded(node)) {
>         datanodeAdminManager.startDecommission(node);
>       } else {
>         datanodeAdminManager.stopMaintenance(node);
>         datanodeAdminManager.stopDecommission(node);
>       }
>     }
>     node.setUpgradeDomain(hostConfigManager.getUpgradeDomain(node));
>   }
> }
> {code}
> All the isIncluded() and isExcluded() methods call node.getResolvedAddress(), which does the DNS lookup. We could probably change things to perform all the DNS lookups outside of the write lock, and then take the lock and process the nodes. We could also change or overload isIncluded() etc. to take the InetAddress rather than the DatanodeDescriptor.
> It would not shorten the time the operation takes to run overall, but it would move the long-running part out of the write lock and avoid blocking the namenode for the entire time.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
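The restructuring suggested in the description — do the slow lookups first, then take the write lock only for the fast bookkeeping — can be sketched as a small standalone class. This is an illustrative sketch, not the actual HDFS-14579 patch: the class and method names are hypothetical stand-ins for HostFileManager/DatanodeManager, and a plain ReentrantReadWriteLock stands in for the namesystem lock.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Hypothetical sketch of the two-phase refresh: resolve outside the
// lock, swap the results in under the lock.
class RefreshSketch {
  private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
  private final Map<String, InetAddress> resolved = new HashMap<>();

  public void refreshNodes(List<String> hosts) {
    // Phase 1: slow DNS lookups with no lock held. A stalled resolver
    // now only delays this thread instead of blocking every namesystem
    // operation that needs the write lock.
    Map<String, InetAddress> fresh = new HashMap<>();
    for (String host : hosts) {
      try {
        fresh.put(host, InetAddress.getByName(host));
      } catch (UnknownHostException e) {
        // Skip unresolvable entries; real code would log them.
      }
    }
    // Phase 2: a fast map swap is all that happens under the write lock.
    lock.writeLock().lock();
    try {
      resolved.clear();
      resolved.putAll(fresh);
    } finally {
      lock.writeLock().unlock();
    }
  }

  // Analogue of the overloaded isIncluded() taking an InetAddress
  // rather than a DatanodeDescriptor: no DNS work under the lock.
  public boolean isIncluded(InetAddress addr) {
    lock.readLock().lock();
    try {
      return resolved.containsValue(addr);
    } finally {
      lock.readLock().unlock();
    }
  }
}
```

The design choice mirrors the description: the overall refresh is no faster, but the write-lock critical section shrinks from "one DNS round-trip per host" to a single map swap.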