hadoop-hdfs-issues mailing list archives

From "Yanping Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8562) HDFS Performance is impacted by FileInputStream Finalizer
Date Thu, 17 Dec 2015 23:29:46 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15063079#comment-15063079 ]

Yanping Wang commented on HDFS-8562:
------------------------------------

The Oracle Java team has a bug filed for this issue: https://bugs.openjdk.java.net/browse/JDK-8080225,
"FileInputStream cleanup should be improved."
The solution is to use a Phantom Reference instead of a Final Reference. To make Phantom
References work properly for GC in FileInputStream and FileOutputStream, there were two
related issues: "Clear phantom reference as soft and weak references do" and "Remove
REF_CLEANER reference category".

I think your suggestion is good and reasonable; I will pass it along to the Oracle
runtime and OpenJDK teams.

> HDFS Performance is impacted by FileInputStream Finalizer
> ---------------------------------------------------------
>
>                 Key: HDFS-8562
>                 URL: https://issues.apache.org/jira/browse/HDFS-8562
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode, performance
>    Affects Versions: 2.5.0
>         Environment: Impact any application that uses HDFS
>            Reporter: Yanping Wang
>         Attachments: HDFS-8562.002b.patch, HDFS-8562.01.patch
>
>
> While running HBase on top of HDFS, we noticed excessively high GC pause spikes on the
> datanodes. For example, with JDK 8 update 40 and the G1 collector, we saw datanode GC
> pauses spike toward 160 milliseconds when they should be around 20 milliseconds.
> We tracked this down in the GC logs and found that those long GC pauses were spent
> processing a high number of final references.
> For example, this Young GC:
> 2715.501: [GC pause (G1 Evacuation Pause) (young) 0.1529017 secs]
> 2715.572: [SoftReference, 0 refs, 0.0001034 secs]
> 2715.572: [WeakReference, 0 refs, 0.0000123 secs]
> 2715.572: [FinalReference, 8292 refs, 0.0748194 secs]
> 2715.647: [PhantomReference, 0 refs, 160 refs, 0.0001333 secs]
> 2715.647: [JNI Weak Reference, 0.0000140 secs]
> [Ref Proc: 122.3 ms]
> [Eden: 910.0M(910.0M)->0.0B(911.0M) Survivors: 11.0M->10.0M Heap: 951.1M(1536.0M)->40.2M(1536.0M)]
> [Times: user=0.47 sys=0.01, real=0.15 secs]
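> For reference, the per-reference-type timings shown above can be produced on a HotSpot
> JDK 8 with roughly the following flags (the heap size here is illustrative):
> {code}
> java -XX:+UseG1GC -Xmx1536m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintReferenceGC <main-class>
> {code}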
> This young GC incurred a 152.9 millisecond STW pause, of which 122.3 milliseconds went
> to reference processing, including 74.8 milliseconds to process 8292 FinalReferences
> plus some overhead.
> Using JFR and jmap with Memory Analyzer, we tracked those FinalReferences down and
> found they all came from FileInputStream. We checked the HDFS code and saw the
> following use of FileInputStream in the datanode:
> https://apache.googlesource.com/hadoop-common/+/refs/heads/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/MappableBlock.java
> {code}
> public static MappableBlock load(long length,
>     FileInputStream blockIn, FileInputStream metaIn,
>     String blockFileName) throws IOException {
>   MappableBlock mappableBlock = null;
>   MappedByteBuffer mmap = null;
>   FileChannel blockChannel = null;
>   try {
>     blockChannel = blockIn.getChannel();
>     if (blockChannel == null) {
>       throw new IOException("Block InputStream has no FileChannel.");
>     }
>     mmap = blockChannel.map(MapMode.READ_ONLY, 0, length);
>     NativeIO.POSIX.getCacheManipulator().mlock(blockFileName, mmap, length);
>     verifyChecksum(length, metaIn, blockChannel, blockFileName);
>     mappableBlock = new MappableBlock(mmap, length);
>   } finally {
>     IOUtils.closeQuietly(blockChannel);
>     if (mappableBlock == null) {
>       if (mmap != null) {
>         NativeIO.POSIX.munmap(mmap); // unmapping also unlocks
>       }
>     }
>   }
>   return mappableBlock;
> }
> {code}
> We looked at https://docs.oracle.com/javase/7/docs/api/java/io/FileInputStream.html
> and
> http://hg.openjdk.java.net/jdk7/jdk7/jdk/file/23bdcede4e39/src/share/classes/java/io/FileInputStream.java
> and noticed that FileInputStream relies on its finalizer to release its resources.
> When an instance of a class that has a finalizer is created, an entry for that
> instance is put on a queue in the JVM, so the JVM knows the instance has a finalizer
> that needs to be executed.
> The current issue is: even when programmers do call close() after using a
> FileInputStream, its finalize() method will still be called. In other words, we still
> get the side effect of the FinalReference being registered at FileInputStream
> allocation time, plus the reference processing needed to reclaim that FinalReference
> during GC (any GC implementation has to deal with this).
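> The effect is easy to demonstrate in isolation. The following minimal sketch (our own
> illustration, assuming a HotSpot JDK 8; the file to read is passed as args[0]), run
> with the logging flags shown earlier, reports thousands of FinalReferences at the next
> GC even though every stream was closed:
> {code}
> import java.io.FileInputStream;
> import java.io.IOException;
>
> public class FinalizerPressure {
>     public static void main(String[] args) throws IOException {
>         for (int i = 0; i < 100000; i++) {
>             // close() releases the file descriptor but does not unregister
>             // the finalizer that was queued when the stream was allocated.
>             try (FileInputStream in = new FileInputStream(args[0])) {
>                 in.read();
>             }
>         }
>         System.gc(); // the GC log now shows a large FinalReference count
>     }
> }
> {code}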
> In an industry-scale HDFS deployment, millions of files can be opened and closed,
> resulting in a very large number of finalizers being registered and subsequently
> executed. That can cause very long GC pause times.
> We tried to use Files.newInputStream() to replace FileInputStream, but it was clear we
> could not replace FileInputStream in
> hdfs/server/datanode/fsdataset/impl/MappableBlock.java: load() above needs the
> FileChannel obtained from FileInputStream.getChannel(), which the stream returned by
> Files.newInputStream() does not expose.
> We notified the Oracle JVM team of this performance issue, which impacts all Big Data
> applications using HDFS, and recommended that the proper fix be made in Java SE's
> FileInputStream, because (1) there is really nothing wrong with using FileInputStream
> in the datanode code above, and (2) since an object with a finalizer is registered on
> the JVM's internal finalizer list at allocation time, an explicit call to close or
> free the resources that the finalizer would otherwise handle should pull the finalizer
> off that list. That releases the JVM from having to treat the object as special
> because it has a finalizer, i.e. GC no longer needs to execute the finalizer as part
> of reference processing.
> As the Java fix involves both JVM code and Java SE code, it might take time for the
> full solution to become available in future JDK releases. We would like to file this
> JIRA to make the Big Data and HDFS communities aware of this issue when using HDFS and
> when writing code that uses FileInputStream.
> One alternative is to substitute Files.newInputStream() for FileInputStream where
> possible. Files.newInputStream() returns an InputStream and does so in a manner that
> does not involve a finalizer.
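> For example (a minimal sketch; the file path is passed on the command line):
> {code}
> import java.io.IOException;
> import java.io.InputStream;
> import java.nio.file.Files;
> import java.nio.file.Paths;
>
> public class NoFinalizerRead {
>     public static void main(String[] args) throws IOException {
>         // The returned stream is channel-backed and has no finalize() method,
>         // so no FinalReference is registered for GC to process.
>         try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
>             byte[] buf = new byte[8192];
>             int n;
>             while ((n = in.read(buf)) != -1) {
>                 // process buf[0..n)
>             }
>         }
>     }
> }
> {code}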
> We welcome the HDFS community to discuss this issue and see whether there are
> additional ideas for solving this problem.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
