hadoop-common-issues mailing list archives

From "Chris Nauroth (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-10093) hadoop-env.cmd sets HADOOP_CLIENT_OPTS with a max heap size that is too small.
Date Tue, 12 Nov 2013 22:37:18 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-10093?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chris Nauroth updated HADOOP-10093:
-----------------------------------

         Description: HADOOP-9211 increased the default max heap size set by hadoop-env.sh
to 512m.  The same change needs to be applied to hadoop-env.cmd for Windows.  (was: When WASB
is configured as the default file system, if you run this:
 hadoop fs -copyFromLocal largefile(>150MB) /test

You'll see this error message:
 Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2271)
 at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
 at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
 at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
 at com.microsoft.windowsazure.services.blob.client.BlobOutputStream.writeInternal(BlobOutputStream.java:618)
 at com.microsoft.windowsazure.services.blob.client.BlobOutputStream.write(BlobOutputStream.java:545)
 at java.io.DataOutputStream.write(DataOutputStream.java:107)
 at org.apache.hadoop.fs.azurenative.NativeAzureFileSystem$NativeAzureFsOutputStream.write(NativeAzureFileSystem.java:307)
 at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:59)
 at java.io.DataOutputStream.write(DataOutputStream.java:107)
 at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
 at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
 at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
 at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:299)
 at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:281)
 at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:245)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:188)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:173)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
 at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
 at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:145)
 at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:229)
 at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:305)
)
    Target Version/s: 3.0.0, 2.3.0
             Summary: hadoop-env.cmd sets HADOOP_CLIENT_OPTS with a max heap size that is
too small.  (was: hadoop.cmd fs -copyFromLocal fails with large files on WASB)

Example stack trace:

 Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
 at java.util.Arrays.copyOf(Arrays.java:2271)
 at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:113)
 at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
 at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:140)
 at com.microsoft.windowsazure.services.blob.client.BlobOutputStream.writeInternal(BlobOutputStream.java:618)
 at com.microsoft.windowsazure.services.blob.client.BlobOutputStream.write(BlobOutputStream.java:545)
 at java.io.DataOutputStream.write(DataOutputStream.java:107)
 at org.apache.hadoop.fs.azurenative.NativeAzureFileSystem$NativeAzureFsOutputStream.write(NativeAzureFileSystem.java:307)
 at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.write(FSDataOutputStream.java:59)
 at java.io.DataOutputStream.write(DataOutputStream.java:107)
 at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:80)
 at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:52)
 at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:112)
 at org.apache.hadoop.fs.shell.CommandWithDestination$TargetFileSystem.writeStreamToFile(CommandWithDestination.java:299)
 at org.apache.hadoop.fs.shell.CommandWithDestination.copyStreamToTarget(CommandWithDestination.java:281)
 at org.apache.hadoop.fs.shell.CommandWithDestination.copyFileToTarget(CommandWithDestination.java:245)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:188)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processPath(CommandWithDestination.java:173)
 at org.apache.hadoop.fs.shell.Command.processPaths(Command.java:306)
 at org.apache.hadoop.fs.shell.Command.processPathArgument(Command.java:278)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processPathArgument(CommandWithDestination.java:168)
 at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:260)
 at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:244)
 at org.apache.hadoop.fs.shell.CommandWithDestination.processArguments(CommandWithDestination.java:145)
 at org.apache.hadoop.fs.shell.CopyCommands$Put.processArguments(CopyCommands.java:229)
 at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
 at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
 at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at org.apache.hadoop.fs.FsShell.main(FsShell.java:305)
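
The trace shows the WASB output stream buffering the upload in a ByteArrayOutputStream, so copying a file larger than roughly 150 MB exhausts the small default client heap before the buffer is flushed. A minimal sketch of the kind of change this issue asks for, assuming hadoop-env.cmd carries the same HADOOP_CLIENT_OPTS default that HADOOP-9211 raised to 512m in hadoop-env.sh (the exact contents of the shipped file may differ):

 @rem Hedged sketch of the intended hadoop-env.cmd change, mirroring hadoop-env.sh after HADOOP-9211.
 @rem Raise the default client JVM max heap (presumably still 128m on Windows) to 512m.
 set HADOOP_CLIENT_OPTS=-Xmx512m %HADOOP_CLIENT_OPTS%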


> hadoop-env.cmd sets HADOOP_CLIENT_OPTS with a max heap size that is too small.
> ------------------------------------------------------------------------------
>
>                 Key: HADOOP-10093
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10093
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: conf
>    Affects Versions: 2.2.0
>            Reporter: shanyu zhao
>            Assignee: shanyu zhao
>         Attachments: HADOOP-10093.patch
>
>
> HADOOP-9211 increased the default max heap size set by hadoop-env.sh to 512m.  The same
> change needs to be applied to hadoop-env.cmd for Windows.
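
Until a patched hadoop-env.cmd is available, a per-session override from the Windows command prompt is a plausible workaround. This sketch assumes hadoop-env.cmd appends any pre-set HADOOP_CLIENT_OPTS after its own -Xmx default, in which case the later flag wins; verify against the installed file before relying on it:

 @rem Hypothetical workaround: raise the client heap for this shell session only,
 @rem then retry the copy that previously ran out of heap space.
 @rem "largefile" stands in for the >150MB file from the original report.
 set HADOOP_CLIENT_OPTS=-Xmx512m
 hadoop fs -copyFromLocal largefile /test

If the override has no effect, editing hadoop-env.cmd directly (as sketched above) remains the reliable fix.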



--
This message was sent by Atlassian JIRA
(v6.1#6144)
