hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-4461) DirectoryScanner: volume path prefix takes up memory for every block that is scanned
Date Fri, 01 Feb 2013 18:08:14 GMT

    [ https://issues.apache.org/jira/browse/HDFS-4461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13568920#comment-13568920 ]

Colin Patrick McCabe commented on HDFS-4461:

bq. I doubt that the directory scanner is the cause of OOM error. It is probably happening
due to some other issue. How many blocks per storage directory do you have, when OOME happened?

We analyzed a DN heap dump from a production cluster with the Eclipse Memory Analyzer and found
that the memory was full of {{ScanInfo}} objects.  The memory histogram showed that {{java.lang.String}}
was the third-largest consumer of memory in the system.  Unfortunately, I can't share the heap dump itself.

bq. I have hard time understanding the picture. How many bytes are we saving per ScanInfo?

In the particular case shown in memory-analysis.png, we save 86 characters in each string.
 The volume prefix that we avoid storing is {{/home/cmccabe/hadoop4/hadoop-hdfs-project/hadoop-hdfs/build//test/data/dfs/data/data1/}}.
 Java uses 2 bytes per character (UCS-2 encoding), and we store the prefix in both metaPath and blockPath,
so multiply 86 by 4 to get 344 bytes.  Then add the overhead of the two {{File}} objects that wrap
the path strings instead of just the strings themselves-- probably around an extra 16 bytes per
object, for 376 bytes in total saved per {{ScanInfo}}.
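The arithmetic above can be sketched as a back-of-the-envelope calculation.  The class and method names here are hypothetical, for illustration only; the 16-byte per-object overhead is the rough estimate from the comment, not a measured value.

```java
// Rough estimate of bytes saved per ScanInfo when a shared volume
// prefix is no longer duplicated into both stored path strings.
public class PathOverhead {
    static int savedBytesPerScanInfo(int prefixChars) {
        int bytesPerChar = 2;        // Java strings use 2 bytes per char (UCS-2)
        int stringsPerScanInfo = 2;  // blockPath and metaPath both carry the prefix
        int fileObjectOverhead = 16; // rough per-File wrapper overhead (assumption)
        return prefixChars * bytesPerChar * stringsPerScanInfo
                + stringsPerScanInfo * fileObjectOverhead;
    }

    public static void main(String[] args) {
        // 86-character prefix from memory-analysis.png: 86*2*2 + 2*16 = 376
        System.out.println(savedBytesPerScanInfo(86));
    }
}
```

Multiply by the number of blocks on a DataNode (often millions) and the waste becomes significant.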

You might think that {{/home/cmccabe/hadoop4/hadoop-hdfs-project/hadoop-hdfs/build//test/data/dfs/data/data1/}}
is an unrealistically long volume path, but here is an example of a real volume path in use
on a production cluster:


Putting the disk UUID into the volume path is an obvious thing to do if you're a system administrator.
> DirectoryScanner: volume path prefix takes up memory for every block that is scanned

> -------------------------------------------------------------------------------------
>                 Key: HDFS-4461
>                 URL: https://issues.apache.org/jira/browse/HDFS-4461
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.3-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HDFS-4461.002.patch, HDFS-4461.003.patch, memory-analysis.png
> In the {{DirectoryScanner}}, we create a {{ScanInfo}} object for every block.  This object
> contains two File objects-- one for the metadata file and one for the block file.  Since
> those File objects contain full paths, users who pick a lengthy path for their volume roots
> will end up using an extra N_blocks * path_prefix bytes per block scanned.  We also don't
> really need to store File objects-- storing strings and creating File objects on demand
> would be cheaper.  This has been causing out-of-memory conditions for users who pick such
> long volume paths.
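A minimal sketch of the idea in the description -- store only the suffix relative to the volume root and build File objects on demand -- might look like the following.  This is an illustration of the technique, not the actual HDFS patch; the class name and fields are hypothetical (in HDFS the volume would be a shared FsVolumeSpi reference rather than a plain String).

```java
import java.io.File;

// One volumeRoot String is shared by every block on the volume, so the
// long prefix is held in memory once instead of N_blocks times.
public class LeanScanInfo {
    private final String volumeRoot;   // shared across all blocks on this volume
    private final String blockSuffix;  // block path relative to the volume root
    private final String metaSuffix;   // metadata path relative to the volume root

    public LeanScanInfo(String volumeRoot, String blockSuffix, String metaSuffix) {
        this.volumeRoot = volumeRoot;
        this.blockSuffix = blockSuffix;
        this.metaSuffix = metaSuffix;
    }

    // File objects are materialized only when a caller needs them,
    // instead of being stored for the lifetime of the scan.
    public File getBlockFile() { return new File(volumeRoot, blockSuffix); }
    public File getMetaFile()  { return new File(volumeRoot, metaSuffix); }
}
```

With this layout, per-block cost is two short suffix strings plus one shared reference, rather than two full-path File objects.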

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira
