Date: Thu, 3 Aug 2017 20:30:00 +0000 (UTC)
From: "Jonathan Eagles (JIRA)"
To: common-issues@hadoop.apache.org
Subject: [jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

     [ https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Eagles updated HADOOP-14727:
-------------------------------------
    Status: Patch Available  (was: Open)

Verified that CLOSE_WAIT sockets were being leaked on a branch-2 JobHistory Server with a simple {{lsof | grep CLOSE_WAIT}} while reloading a specific MapReduce job configuration. With the patch applied, no CLOSE_WAIT sockets are left behind. The fix is to flag an InputStream as auto-close when Configuration opened it itself, and to leave it as is when the InputStream was passed in as a resource, so that a stream opened by the user is never closed out from under them.
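The real change is in the attached patches; purely as an illustration of that ownership rule (hypothetical class and method names, not Hadoop's actual {{Configuration}} internals), a sketch could look like this:

{code:java}
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

/**
 * Hypothetical sketch (not the actual HADOOP-14727 patch) of the ownership rule
 * described above: a stream the loader opened itself is auto-closed after parsing,
 * while a stream handed in by the caller is left open for the caller to close.
 */
public class ConfStreamOwnershipSketch {

    /** A resource paired with a flag recording whether the loader opened the stream. */
    static final class Resource {
        final InputStream stream;
        final boolean autoClose; // true only when the loader opened the stream itself

        Resource(InputStream stream, boolean autoClose) {
            this.stream = stream;
            this.autoClose = autoClose;
        }
    }

    /** The loader opens this stream, so it takes responsibility for closing it. */
    static Resource fromPath(Path path) throws IOException {
        return new Resource(Files.newInputStream(path), true);
    }

    /** The caller opened this stream; the loader must not close it. */
    static Resource fromUserStream(InputStream in) {
        return new Resource(in, false);
    }

    /** "Parses" the resource, closing the stream only when the loader owns it. */
    static void load(Resource resource) throws IOException {
        try {
            // ... parse the configuration XML from resource.stream here ...
        } finally {
            if (resource.autoClose) {
                // Closing loader-owned streams here is what prevents the CLOSE_WAIT leak.
                resource.stream.close();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Caller-owned stream: load() leaves it open; try-with-resources closes it here.
        try (InputStream userStream = new ByteArrayInputStream(
                "<configuration/>".getBytes(StandardCharsets.UTF_8))) {
            load(fromUserStream(userStream));
        }
        // A loader-owned stream, e.g. load(fromPath(...)), would instead be closed
        // inside load() itself.
    }
}
{code}

The point of the flag is that the close decision is made where the stream is opened, not where it is parsed, so user-supplied streams keep their existing lifecycle.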
> Socket not closed properly when reading Configurations with BlockReaderRemote
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-14727
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14727
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 3.0.0-alpha4, 2.9.0
>            Reporter: Xiao Chen
>            Assignee: Jonathan Eagles
>            Priority: Blocker
>         Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch
>
>
> This was caught by Cloudera's internal testing of the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that both the Oozie server and the YARN JobHistoryServer had tons of sockets in {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow this down to a consistent reproduction: simply visit the JHS web UI and click through a job and its logs.
> I then looked at {{BlockReaderRemote}} and related code, and didn't spot any leaks in the implementation. After adding a debug log whenever a {{Peer}} is created, closed, or moved into/out of the {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: ____ associated peer NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
> at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
> at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
> at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
> at com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
> at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
> at org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:220)
> at org.apache.hadoop.mapreduce.v2.app.webapp.AppController.requireJob(AppController.java:416)
> at org.apache.hadoop.mapreduce.v2.app.webapp.AppController.attempts(AppController.java:277)
> at org.apache.hadoop.mapreduce.v2.hs.webapp.HsController.attempts(HsController.java:152)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:162)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
> at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
> at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
> at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
> at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
> at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
> at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
> at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
> at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
> at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
> at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
> at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
> at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
> at com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
> at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1552)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:534)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
> at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
> at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
> at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
> at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
> at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> I was able to further confirm this theory by backing out the 4 recent commits to {{Configuration}} on alpha3 and no longer seeing {{CLOSE_WAIT}} sockets.
> - HADOOP-14501.
> - HADOOP-14399. (only reverted to make other reverts easier)
> - HADOOP-14216. Addendum
> - HADOOP-14216.
> It's not clear to me who's responsible to close the InputStream though.

--
This message was sent by Atlassian JIRA
(v6.4.14#64029)