hadoop-hdfs-issues mailing list archives

From "Harsh J (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-4630) Datanode is going OOM due to small files in hdfs
Date Sun, 24 Mar 2013 15:25:15 GMT

     [ https://issues.apache.org/jira/browse/HDFS-4630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Harsh J resolved HDFS-4630.
---------------------------

    Resolution: Invalid

Closing again per Suresh's comment, as this is by design: you are simply required to raise
your heap to accommodate more files (and thereby more blocks). Please also see HDFS-4465
and HDFS-4461 for optimizations in this area.
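
As a concrete sketch (the -Xmx values below are placeholders, not recommendations; tune
them against your actual file and block counts), the daemon heaps can be raised in
conf/hadoop-env.sh:

    # conf/hadoop-env.sh -- placeholder -Xmx sizes, to be tuned per cluster
    export HADOOP_NAMENODE_OPTS="-Xmx8g $HADOOP_NAMENODE_OPTS"
    export HADOOP_DATANODE_OPTS="-Xmx4g $HADOOP_DATANODE_OPTS"

Restart the NameNode and DataNodes after the change for the new limits to take effect.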
                
> Datanode is going OOM due to small files in hdfs
> ------------------------------------------------
>
>                 Key: HDFS-4630
>                 URL: https://issues.apache.org/jira/browse/HDFS-4630
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, namenode
>    Affects Versions: 2.0.0-alpha
>         Environment: Ubuntu, Java 1.6
>            Reporter: Ankush Bhatiya
>            Priority: Blocker
>
> Hi, 
> We have very small files (sizes ranging from 10 KB to 1 MB) in our HDFS,
> and the number of files is in the tens of millions. Because of this, both
> the NameNode and the DataNode go out of memory very frequently. When we
> analysed the heap dump of the DataNode, most of the memory was used by the
> ReplicaMap.
> Can we use EhCache or something similar so that not all of this data is
> kept in memory?
> Thanks
> Ankush
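
For scale, a rough back-of-envelope on the reporter's numbers (assuming the commonly
cited figure of roughly 150 bytes of NameNode heap per file, directory, or block object;
exact costs vary by version):

    tens of millions of small files, each well under one block in size
    e.g. 30M files -> ~30M file objects + ~30M block objects on the NameNode
    ~60M objects x ~150 bytes/object => on the order of 9 GB of NameNode heap

On each DataNode, the ReplicaMap likewise holds one entry per locally stored block
replica, so its heap grows with the number of replicas the node hosts.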

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
