hadoop-common-user mailing list archives

From Andrew Hitchcock <adpow...@gmail.com>
Subject Re: S3 Exception for a Map Reduce job on EC2
Date Thu, 29 Oct 2009 19:48:31 GMT
You are correct: this happens because the S3 native file system tries to
create the bucket too often. It looks like the patch wasn't applied
correctly, otherwise the problem would have gone away. This is the patch
you need:

http://issues.apache.org/jira/browse/HADOOP-4422
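
The underlying problem is that every initialization of the native S3 file
system issues a bucket-creation PUT, and concurrent PUTs for the same bucket
can conflict. As an illustration of the general idea only (this is not the
actual patch), a client could check for the bucket before creating it with
jets3t; the class name, credentials and bucket name below are placeholders:

    import org.jets3t.service.S3Service;
    import org.jets3t.service.S3ServiceException;
    import org.jets3t.service.impl.rest.httpclient.RestS3Service;
    import org.jets3t.service.model.S3Bucket;
    import org.jets3t.service.security.AWSCredentials;

    public class EnsureBucket {

        // Create the bucket only if this account does not already own it,
        // instead of issuing a PUT on every file system initialization.
        static void ensureBucket(S3Service s3, String bucketName)
                throws S3ServiceException {
            for (S3Bucket owned : s3.listAllBuckets()) {
                if (owned.getName().equals(bucketName)) {
                    return; // bucket already exists, nothing to do
                }
            }
            s3.createBucket(bucketName);
        }

        public static void main(String[] args) throws S3ServiceException {
            // Placeholder credentials and bucket name.
            S3Service s3 = new RestS3Service(
                    new AWSCredentials("ACCESS_KEY", "SECRET_KEY"));
            ensureBucket(s3, "my-job-bucket");
        }
    }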

Andrew

On Wed, Oct 28, 2009 at 9:30 PM, Harshit Kumar <hkumar.arora@gmail.com> wrote:
> Hi
>
> I am processing about 1 GB of RDF/OWL files on EC2, and the execution throws
> the following exception:
> -------------------
>
> 08/11/19 16:08:27 WARN mapred.JobClient: Use GenericOptionsParser for
> parsing the arguments. Applications should implement Tool for the
> same.
> org.apache.hadoop.fs.s3.S3Exception:
> org.jets3t.service.S3ServiceException: S3 PUT failed for '/' XML Error
> Message: <?xml version="1.0"
> encoding="UTF-8"?><Error><Code>OperationAborted</Code><Message>*A
> conflicting conditional operation is currently in progress against
> **this** resource*. Please try
> again.</Message><RequestId>324E696A4BCA8731</RequestId><HostId>{REMOVED}</HostId></Error>
>        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.createBucket(Jets3tNativeFileSystemStore.java:74)
>        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.initialize(Jets3tNativeFileSystemStore.java:63)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>        at org.apache.hadoop.fs.s3native.$Proxy2.initialize(Unknown Source)
>        at org.apache.hadoop.fs.s3native.NativeS3FileSystem.initialize(NativeS3FileSystem.java:215)
>        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1339)
>        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:56)
>        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1351)
>        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:213)
>        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:175)
>        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:158)
>        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:210)
>        at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:742)
>        at org.apache.hadoop.streaming.StreamJob.submitAndMonitorJob(StreamJob.java:925)
>        at org.apache.hadoop.streaming.StreamJob.go(StreamJob.java:115)
>        at org.apache.hadoop.streaming.HadoopStreaming.main(HadoopStreaming.java:33)
>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>        at java.lang.reflect.Method.invoke(Method.java:597)
>        at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>        at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>        at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
> Caused by: org.jets3t.service.S3ServiceException: S3 PUT failed for
> '/' XML Error Message: <?xml version="1.0"
> encoding="UTF-8"?><Error><Code>OperationAborted</Code><Message>A
> conflicting conditional operation is currently in progress against
> this resource. Please try
> again.</Message><RequestId>{REMOVED}</RequestId><HostId>{REMOVED}</HostId></Error>
>        at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:424)
>        at org.jets3t.service.impl.rest.httpclient.RestS3Service.performRestPut(RestS3Service.java:734)
>        at org.jets3t.service.impl.rest.httpclient.RestS3Service.createObjectImpl(RestS3Service.java:1357)
>        at org.jets3t.service.impl.rest.httpclient.RestS3Service.createBucketImpl(RestS3Service.java:1234)
>        at org.jets3t.service.S3Service.createBucket(S3Service.java:1390)
>        at org.jets3t.service.S3Service.createBucket(S3Service.java:1158)
>        at org.jets3t.service.S3Service.createBucket(S3Service.java:1177)
>        at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.createBucket(Jets3tNativeFileSystemStore.java:69)
>        ... 29 more
>
> ---------------------
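
Side note on the WARN line at the top of that output: for a custom driver the
usual way to address it is to implement Tool and launch through ToolRunner,
which feeds the command line through GenericOptionsParser. A bare-bones
sketch (the class name is made up and the actual job setup is omitted):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    public class MyJobDriver extends Configured implements Tool {

        // ToolRunner runs the arguments through GenericOptionsParser, so
        // generic options such as -D and -fs are handled before run() is called.
        public int run(String[] args) throws Exception {
            Configuration conf = getConf();
            // ... configure and submit the MapReduce job here ...
            return 0;
        }

        public static void main(String[] args) throws Exception {
            int exitCode = ToolRunner.run(new Configuration(), new MyJobDriver(), args);
            System.exit(exitCode);
        }
    }
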
> There were about 50 failed map tasks out of a total of 1141. Although the
> job tracker successfully re-executed the failed tasks, every failure and
> re-run adds to the execution time. If this exception could be avoided
> altogether, the job would finish faster.
>
> I came across a JIRA issue which says the exception occurs because S3 tries
> to create a bucket that it should not create; a patch is posted there as
> well. I applied the patch, but I still get the same error.
>
> IMO, this error occurs because an operation is trying to access a part of a
> file that is already being accessed by another map task.
>
> Please help me resolve this issue, or pass on any pointers that might
> suggest the source of the error.
>
> Thanks and Regards
>
> H. Kumar
> Phone(Mobile): +82-10-2892-9663
> Phone(Office): +82-31-
> skype: harshit900
> Blog: http://harshitkumar.wordpress.com
> Website: http://kumarharmuscat.tripod.com
>
