hadoop-mapreduce-issues mailing list archives

From "Takuya Fukudome (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (MAPREDUCE-6062) Use TestDFSIO test random read : job failed
Date Tue, 24 May 2016 06:21:12 GMT

    [ https://issues.apache.org/jira/browse/MAPREDUCE-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297754#comment-15297754 ]

Takuya Fukudome commented on MAPREDUCE-6062:
--------------------------------------------

Hi,

This error seems to occur when the random read test is run on files larger than 2GB, because the random read test casts the long value of the target file size to int. I could reproduce the error by running the random read test after invoking TestDFSIO write with "-size 2GB".
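
For illustration, here is a minimal standalone sketch of the overflow (not the TestDFSIO source itself; the 8000 MB figure is taken from the reported "-size 8000" command and assumed to mean megabytes):

    import java.util.Random;

    public class LongToIntOverflowSketch {
        public static void main(String[] args) {
            Random rnd = new Random();

            // 8000 MB, as in the reported "-size 8000" command, expressed in bytes.
            long fileSize = 8000L * 1024 * 1024;   // 8,388,608,000 bytes

            // Anything above Integer.MAX_VALUE (2,147,483,647) overflows when cast to int;
            // this particular value wraps around to -201,326,592.
            int truncatedBound = (int) fileSize;
            System.out.println("truncated bound = " + truncatedBound);

            // Random.nextInt(bound) requires a positive bound, so a negative bound throws
            // java.lang.IllegalArgumentException (reported as "n must be positive" by the
            // JDK version in the stack trace below).
            int offset = rnd.nextInt(truncatedBound);
            System.out.println("offset = " + offset);
        }
    }

One possible direction would be to derive the offset from a 64-bit random value or a double scaled by the long file size instead of an int bound, though this is only a sketch, not the actual fix.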

> Use TestDFSIO test random read : job failed
> -------------------------------------------
>
>                 Key: MAPREDUCE-6062
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6062
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: benchmarks
>    Affects Versions: 2.2.0
>         Environment: command : hadoop jar $JAR_PATH TestDFSIO-read -random -nrFiles 12 -size 8000
>            Reporter: chongyuanhuang
>
> This is log:
> 2014-09-01 13:57:29,876 WARN [main] org.apache.hadoop.mapred.YarnChild: Exception running child : java.lang.IllegalArgumentException: n must be positive
> 	at java.util.Random.nextInt(Random.java:300)
> 	at org.apache.hadoop.fs.TestDFSIO$RandomReadMapper.nextOffset(TestDFSIO.java:601)
> 	at org.apache.hadoop.fs.TestDFSIO$RandomReadMapper.doIO(TestDFSIO.java:580)
> 	at org.apache.hadoop.fs.TestDFSIO$RandomReadMapper.doIO(TestDFSIO.java:546)
> 	at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:134)
> 	at org.apache.hadoop.fs.IOMapperBase.map(IOMapperBase.java:37)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:54)
> 	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:430)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:342)
> 	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:167)
> 	at java.security.AccessController.doPrivileged(Native Method)
> 	at javax.security.auth.Subject.doAs(Subject.java:415)
> 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1554)
> 	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
> 2014-09-01 13:57:29,886 INFO [main] org.apache.hadoop.mapred.Task: Runnning cleanup for the task
> 2014-09-01 13:57:29,894 WARN [main] org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete hdfs://m101:8020/benchmarks/TestDFSIO/io_random_read/_temporary/1/_temporary/attempt_1409538816633_0005_m_000001_0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


