hadoop-hdfs-issues mailing list archives

From "Ruslan Dautkhanov (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-12113) `hadoop fs -setrep` requires huge amount of memory on client side
Date Tue, 11 Jul 2017 06:31:00 GMT

    [ https://issues.apache.org/jira/browse/HDFS-12113?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16081750#comment-16081750 ]

Ruslan Dautkhanov commented on HDFS-12113:
------------------------------------------

[~brahmareddy], looks very similar. I left a comment on HADOOP-12502. Thanks for pointing to
that JIRA.

> `hadoop fs -setrep` requires huge amount of memory on client side
> -----------------------------------------------------------------
>
>                 Key: HDFS-12113
>                 URL: https://issues.apache.org/jira/browse/HDFS-12113
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.6.0, 2.6.5
>         Environment: Java 7
>            Reporter: Ruslan Dautkhanov
>
> {code}
> $ hadoop fs -setrep -w 3 /
> {code}
> was failing with:
> {noformat}
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.util.Arrays.copyOf(Arrays.java:2367)
> at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
> at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
> at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
> at java.lang.StringBuilder.append(StringBuilder.java:132)
> at org.apache.hadoop.fs.shell.PathData.getStringForChildPath(PathData.java:305)
> at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:272)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:319)
> at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:289)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at org.apache.hadoop.fs.shell.SetReplication.processArguments(SetReplication.java:76)
> at org.apache.hadoop.fs.shell.FsCommand.processRawArguments(FsCommand.java:118)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:315)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:372)
> {noformat}
> It kept failing until the `hadoop fs` CLI's Java heap was allowed to grow to roughly 5 GB (`HADOOP_HEAPSIZE` is in megabytes and becomes the client JVM's `-Xmx`):
> {code}
> HADOOP_HEAPSIZE=5000 hadoop fs -setrep -w 3 /
> {code}
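> The allocation in the stack trace (string building during directory listing) is likely just where the heap finally ran out; the growth itself is consistent with the client retaining a reference per visited file. With `-w`, the command has to remember every file it touched so it can later wait for replication to settle. A minimal sketch of that retention pattern (hypothetical names, not the actual Hadoop sources):
> {code}
> import java.util.ArrayList;
> import java.util.List;
>
> public class SetrepWaitSketch {
>     // With -w, every visited file is remembered for the later wait phase,
>     // so the list grows with the total number of files under the start path.
>     private final List<String> waitList = new ArrayList<>();
>
>     void processPath(String path) {
>         // ... issue the setReplication RPC for 'path' here ...
>         waitList.add(path);  // retained until the command exits => O(files) heap
>     }
>
>     void waitForReplication() {
>         for (String path : waitList) {
>             // ... poll the file's blocks until replication reaches the target ...
>         }
>     }
> }
> {code}
> If that is what is happening, the heap requirement scales with the total file count rather than with any single directory, which matches the observation below.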
> Note that this setrep change was applied to the entire HDFS filesystem. So it looks like
> the memory used by the `hadoop fs -setrep` command depends on the total number of files
> in HDFS? And this is not a huge HDFS filesystem; I would even call it "small" by current
> standards.
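> For a rough sense of scale (illustrative numbers only, nothing here is measured): if each retained path costs a few hundred bytes of heap, tens of millions of files already add up to several gigabytes on the client, which would line up with the ~5 GB workaround above.
> {code}
> public class HeapEstimate {
>     public static void main(String[] args) {
>         // Illustrative assumption: ~250 bytes of heap per retained file
>         // (char[] for the absolute path plus String/object header overhead).
>         long bytesPerFile = 250;
>         long files = 20_000_000L;  // hypothetical file count, not this cluster's
>         double gb = files * bytesPerFile / (1024.0 * 1024 * 1024);
>         System.out.printf("~%.1f GB of client heap for %,d files%n", gb, files);
>     }
> }
> {code}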





