hadoop-common-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote
Date Fri, 04 Aug 2017 02:30:00 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16113825#comment-16113825 ]

Hadoop QA commented on HADOOP-14727:
------------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 30s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 58s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 44s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 59s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 38s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 42s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 50s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 35s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 38s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 38s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 147 unchanged - 1 fixed = 148 total (was 148) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 42s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 49s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 54s{color} | {color:red} hadoop-common in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m  1s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_131 Failed junit tests | hadoop.net.TestDNS |
| JDK v1.7.0_131 Failed junit tests | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14727 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12880297/HADOOP-14727.001-branch-2.patch |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  unit  findbugs  checkstyle  |
| uname | Linux 722ab85978b7 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / b6729a7 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| findbugs | v3.0.0 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/12945/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/12945/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common-jdk1.7.0_131.txt |
| JDK v1.7.0_131  Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/12945/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/12945/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Socket not closed properly when reading Configurations with BlockReaderRemote
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-14727
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14727
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.9.0, 3.0.0-alpha4
>            Reporter: Xiao Chen
>            Assignee: Jonathan Eagles
>            Priority: Blocker
>         Attachments: HADOOP-14727.001-branch-2.patch, HADOOP-14727.001.patch
>
>
> This was caught by Cloudera's internal testing of the alpha4 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that both the Oozie server and the YARN JobHistoryServer had large numbers of sockets stuck in the {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow this down to a consistent reproduction: simply visit the JHS web UI and click through a job and its logs.
> I then looked at {{BlockReaderRemote}} and related code, and didn't spot any leaks in the implementation. After adding a debug log whenever a {{Peer}} is created, closed, or moved in or out of the {{PeerCache}} (a sketch of this instrumentation follows the trace), it looks like all the {{CLOSE_WAIT}} sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO org.apache.hadoop.hdfs.client.impl.BlockReaderFactory: ____
associated peer NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with blockreader
org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
>         at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
>         at org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
>         at org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
>         at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
>         at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
>         at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
>         at java.io.DataInputStream.read(DataInputStream.java:149)
>         at com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
>         at com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
>         at com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
>         at com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
>         at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
>         at com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
>         at com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
>         at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
>         at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
>         at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
>         at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
>         at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
>         at org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
>         at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
>         at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
>         at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
>         at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
>         at org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
>         at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
>         at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
>         at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
>         at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
>         at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
>         at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
>         at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
>         at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
>         at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
>         at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>         at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
>         at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
>         at com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
>         at org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
>         at org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:220)
>         at org.apache.hadoop.mapreduce.v2.app.webapp.AppController.requireJob(AppController.java:416)
>         at org.apache.hadoop.mapreduce.v2.app.webapp.AppController.attempts(AppController.java:277)
>         at org.apache.hadoop.mapreduce.v2.hs.webapp.HsController.attempts(HsController.java:152)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:498)
>         at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:162)
>         at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>         at com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>         at com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
>         at com.google.inject.servlet.ServletDefinition.service(ServletDefinition.java:182)
>         at com.google.inject.servlet.ManagedServletPipeline.service(ManagedServletPipeline.java:91)
>         at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:85)
>         at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:941)
>         at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:875)
>         at com.sun.jersey.spi.container.servlet.ServletContainer.doFilter(ServletContainer.java:829)
>         at com.google.inject.servlet.FilterChainInvocation.doFilter(FilterChainInvocation.java:82)
>         at com.google.inject.servlet.ManagedFilterPipeline.dispatch(ManagedFilterPipeline.java:119)
>         at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:133)
>         at com.google.inject.servlet.GuiceFilter$1.call(GuiceFilter.java:130)
>         at com.google.inject.servlet.GuiceFilter$Context.call(GuiceFilter.java:203)
>         at com.google.inject.servlet.GuiceFilter.doFilter(GuiceFilter.java:130)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>         at org.apache.hadoop.security.http.XFrameOptionsFilter.doFilter(XFrameOptionsFilter.java:57)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>         at org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:109)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>         at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1552)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>         at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:534)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>         at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>         at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
>         at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
>         at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>         at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>         at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>         at org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>         at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>         at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>         at java.lang.Thread.run(Thread.java:748)
> {noformat}
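> The stack capture above can be produced with instrumentation along these lines. This is a minimal, hypothetical sketch: the class name, method, and logger are illustrative, not the actual {{BlockReaderFactory}} code.
> {code:java}
> import org.slf4j.Logger;
> import org.slf4j.LoggerFactory;
>
> class PeerTraceExample {
>   private static final Logger LOG =
>       LoggerFactory.getLogger(PeerTraceExample.class);
>
>   // Log the association of a Peer with a BlockReaderRemote. The
>   // Exception is never thrown; it exists only so the logger prints
>   // the call stack, which is how the trace above was captured.
>   static void tracePeerAssociation(Object peer, Object blockReader) {
>     LOG.info("associated peer " + peer + " with blockreader " + blockReader,
>         new Exception("test"));
>   }
> }
> {code}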
> I was able to further confirm this theory by backing out the 4 recent commits to {{Configuration}} on alpha3, after which the {{CLOSE_WAIT}} sockets no longer appeared:
> - HADOOP-14501. 
> - HADOOP-14399. (only reverted to make other reverts easier)
> - HADOOP-14216. Addendum 
> - HADOOP-14216. 
> It's not clear to me who is responsible for closing the InputStream, though.
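> To make the question concrete: the trace shows {{Configuration.parse}} consuming a {{DFSInputStream}}, and until someone closes that stream the underlying {{Peer}} (and its socket) stays open, which is what lingers in {{CLOSE_WAIT}}. Below is a minimal sketch of the whoever-opens-it-closes-it pattern, assuming a hypothetical {{openHdfsResource()}} that stands in for whatever produces the stream; this is illustrative only, not the actual HADOOP-14727 fix.
> {code:java}
> import java.io.InputStream;
> import javax.xml.stream.XMLInputFactory;
> import javax.xml.stream.XMLStreamReader;
>
> class ConfigParseExample {
>   void parseResource(XMLInputFactory factory) throws Exception {
>     // try-with-resources guarantees the stream is closed even if
>     // parsing throws, releasing the Peer and its socket.
>     try (InputStream in = openHdfsResource()) {
>       XMLStreamReader reader = factory.createXMLStreamReader(in);
>       try {
>         while (reader.hasNext()) {
>           reader.next(); // consume configuration XML events
>         }
>       } finally {
>         reader.close(); // closes the parser only, not the stream
>       }
>     }
>   }
>
>   // Hypothetical placeholder for the code that opens the resource
>   // (a DFSInputStream in the trace above).
>   InputStream openHdfsResource() throws Exception {
>     throw new UnsupportedOperationException("placeholder");
>   }
> }
> {code}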



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


