ignite-issues mailing list archives

From "Vladimir Ozerov (JIRA)" <j...@apache.org>
Subject [jira] [Closed] (IGNITE-3285) A reference to HadoopClassLoader may be held in IGFS service pool threads after Job finish.
Date Tue, 28 Jun 2016 08:47:57 GMT

     [ https://issues.apache.org/jira/browse/IGNITE-3285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Vladimir Ozerov closed IGNITE-3285.
-----------------------------------

Duplicate of IGNITE-3351.

> A reference to HadoopClassLoader may be held in IGFS service pool threads after Job finish.
> -------------------------------------------------------------------------------------------
>
>                 Key: IGNITE-3285
>                 URL: https://issues.apache.org/jira/browse/IGNITE-3285
>             Project: Ignite
>          Issue Type: Bug
>          Components: IGFS
>    Affects Versions: 1.6
>            Reporter: Ivan Veselovsky
>            Assignee: Vladimir Ozerov
>             Fix For: 1.7
>
>
> Memory profiling shows that an instance of HadoopClassLoader used for a Hadoop job may still be referenced after that job finishes. This happens for two reasons.
> 1) When a new thread in the IGFS pool is created by a thread that has HadoopClassLoader as its current context class loader, this class loader is implicitly propagated as the context class loader of the created thread:
> {code}
> java.lang.Throwable: #### 00igfs-#85%null%.<init>: set hadoop class loader: HadoopClassLoader [name=hadoop-task-6b4d1037-65df-4e83-a7f8-7338e13ab1cf_1-SETUP-0]. Current cl = HadoopClassLoader [name=hadoop-task-6b4d1037-65df-4e83-a7f8-7338e13ab1cf_1-SETUP-0]
>         at org.apache.ignite.thread.IgniteThread.<init>(IgniteThread.java:83)
>         at org.apache.ignite.thread.IgniteThread.<init>(IgniteThread.java:62)
>         at org.apache.ignite.thread.IgniteThreadFactory$1.<init>(IgniteThreadFactory.java:62)
>         at org.apache.ignite.thread.IgniteThreadFactory.newThread(IgniteThreadFactory.java:62)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.<init>(ThreadPoolExecutor.java:610)
>         at java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:924)
>         at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1360)
>         at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:132)
>         at org.apache.ignite.internal.processors.igfs.IgfsDataManager.callIgfsLocalSafe(IgfsDataManager.java:1133)
>         at org.apache.ignite.internal.processors.igfs.IgfsDataManager.processBatch(IgfsDataManager.java:1024)
>         at org.apache.ignite.internal.processors.igfs.IgfsDataManager.access$2500(IgfsDataManager.java:100)
>         at org.apache.ignite.internal.processors.igfs.IgfsDataManager$BlocksWriter.storeDataBlocks(IgfsDataManager.java:1416)
>         at org.apache.ignite.internal.processors.igfs.IgfsDataManager.storeDataBlocks(IgfsDataManager.java:538)
>         at org.apache.ignite.internal.processors.igfs.IgfsOutputStreamImpl.storeDataBlock(IgfsOutputStreamImpl.java:193)
>         at org.apache.ignite.internal.processors.igfs.IgfsOutputStreamAdapter.sendData(IgfsOutputStreamAdapter.java:252)
>         at org.apache.ignite.internal.processors.igfs.IgfsOutputStreamAdapter.write(IgfsOutputStreamAdapter.java:135)
>         at org.apache.ignite.internal.processors.hadoop.igfs.HadoopIgfsInProc.writeData(HadoopIgfsInProc.java:440)
>         at org.apache.ignite.internal.processors.hadoop.igfs.HadoopIgfsOutputStream.write(HadoopIgfsOutputStream.java:112)
>         at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
>         at java.io.BufferedOutputStream.write(BufferedOutputStream.java:126)
>         at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:58)
>         at java.io.DataOutputStream.write(DataOutputStream.java:107)
>         at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1333)
>         at org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat$1.write(SequenceFileOutputFormat.java:83)
>         at org.apache.ignite.internal.processors.hadoop.v2.HadoopV2Context.write(HadoopV2Context.java:144)
>         at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
>         at org.apache.hadoop.examples.RandomTextWriter$RandomTextMapper.map(RandomTextWriter.java:140)
>         at org.apache.hadoop.examples.RandomTextWriter$RandomTextMapper.map(RandomTextWriter.java:102)
>         at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
>         at org.apache.ignite.internal.processors.hadoop.v2.HadoopV2MapTask.run0(HadoopV2MapTask.java:74)
>         at org.apache.ignite.internal.processors.hadoop.v2.HadoopV2Task.run(HadoopV2Task.java:54)
>         at org.apache.ignite.internal.processors.hadoop.v2.HadoopV2TaskContext.run(HadoopV2TaskContext.java:249)
>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.runTask(HadoopRunnableTask.java:201)
>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call0(HadoopRunnableTask.java:144)
>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:116)
>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask$1.call(HadoopRunnableTask.java:114)
>         at org.apache.ignite.internal.processors.hadoop.v2.HadoopV2TaskContext.runAsJobOwner(HadoopV2TaskContext.java:544)
>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:114)
>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopRunnableTask.call(HadoopRunnableTask.java:46)
>         at org.apache.ignite.internal.processors.hadoop.taskexecutor.HadoopExecutorService$2.body(HadoopExecutorService.java:186)
>         at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110)
>         at java.lang.Thread.run(Thread.java:745)
> {code} 
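
The following is a minimal, standalone sketch of the inheritance shown in the trace above, assuming plain JDK thread creation. It is not Ignite code; the class and variable names are illustrative only.

{code}
import java.net.URL;
import java.net.URLClassLoader;

// Standalone demo (not Ignite code): a thread created while a job-scoped
// loader is the caller's context class loader inherits that loader and
// therefore keeps a strong reference to it.
public class ContextClassLoaderInheritanceDemo {
    public static void main(String[] args) throws Exception {
        // Stand-in for a job-scoped HadoopClassLoader.
        ClassLoader jobLoader = new URLClassLoader(new URL[0]);

        Thread creator = new Thread(() -> {
            // A Hadoop task thread effectively does this before running user code.
            Thread.currentThread().setContextClassLoader(jobLoader);

            // Any thread spawned now (e.g. lazily by ThreadPoolExecutor.addWorker)
            // inherits the creator's context class loader and so pins jobLoader.
            Thread worker = new Thread(() -> { });
            System.out.println("worker inherited job loader: "
                + (worker.getContextClassLoader() == jobLoader)); // prints true
        });

        creator.start();
        creator.join();
    }
}
{code}
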
> 2) After a new thread is constructed, its java.lang.Thread#inheritedAccessControlContext field may hold a reference to the parent thread's context class loader through the java.security.ProtectionDomain#classloader field.
> {code}
> this     - value: org.apache.ignite.internal.processors.hadoop.HadoopClassLoader #2
>  <- classloader     - class: java.security.ProtectionDomain, value: org.apache.ignite.internal.processors.hadoop.HadoopClassLoader #2
>   <- [4]     - class: java.security.ProtectionDomain[], value: java.security.ProtectionDomain #123
>    <- context     - class: java.security.AccessControlContext, value: java.security.ProtectionDomain[] #67 (8 items)
>     <- inheritedAccessControlContext (thread object)     - class: org.apache.ignite.thread.IgniteThreadFactory$1, value: java.security.AccessControlContext #105
> {code}
> Note that, strictly speaking, these references do not form a leak, since the number of threads in the pool is limited by a constant. Nevertheless, this should be fixed, since it results in inefficient memory usage.
>  
> Currently we suggest fixing this simply by pre-creating all the threads in the IGFS service pool.
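
A hedged sketch of the suggested mitigation, not the actual Ignite patch: it relies on the standard ThreadPoolExecutor.prestartAllCoreThreads() API, and the pool size and class name are assumptions made for illustration.

{code}
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Sketch only: pre-create every pool thread before any Hadoop job installs its
// class loader as the context loader, so lazily spawned workers never capture it.
public class PreStartedIgfsPoolSketch {
    public static void main(String[] args) {
        int poolSize = 8; // hypothetical IGFS service pool size

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
            poolSize, poolSize,
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<>());

        // Eagerly start all workers while the context class loader is still the
        // stable node/system loader.
        int started = pool.prestartAllCoreThreads();
        System.out.println("Pre-started threads: " + started);

        // Tasks submitted later from a job thread (whose context loader is a
        // HadoopClassLoader) reuse these workers; no new thread is created that
        // would inherit the job loader.
        pool.submit(() -> System.out.println(
            Thread.currentThread().getContextClassLoader()));

        pool.shutdown();
    }
}
{code}
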



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
