hadoop-common-dev mailing list archives

From "Arun Jacob (JIRA)" <j...@apache.org>
Subject [jira] Resolved: (HADOOP-5802) issue creating buffer directory when using native S3FileSystem
Date Tue, 19 May 2009 16:32:45 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-5802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arun Jacob resolved HADOOP-5802.

    Resolution: Invalid

This is actually specific to the Cloudera distro: HADOOP-4377 hasn't been backported to that distro. I'm closing this out and following up with the Cloudera team.

> issue creating buffer directory when using native S3FileSystem
> --------------------------------------------------------------
>                 Key: HADOOP-5802
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5802
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: fs/s3
>    Affects Versions: 0.18.3
>         Environment: ec2, Cloudera AMI, m-xlarge, 20 nodes. 
>            Reporter: Arun Jacob
> Note the following settings in hadoop-site.xml, which mean I've configured 40 reduce
tasks to run across 20 nodes. I checked JIRA and found HADOOP-4377, which refers to this
as a race condition that has been fixed; that bug was closed. I'm re-opening because this may
be configuration specific: I am running multiple reduce tasks on the same node, and no mention
of that was made in the original bug.
> <property>
>   <name>mapred.reduce.tasks</name>
>   <value>40</value>
> </property>
> <property>
>   <name>mapred.tasktracker.reduce.tasks.maximum</name>
>   <value>4</value>
>   <final>true</final>
> </property>
> One of the tasks fails trying to create the S3 buffer directory:
> attempt_200905111058_0001_r_000019_0: 1589020 [main] WARN org.apache.hadoop.mapred.TaskTracker - Error running child
> attempt_200905111058_0001_r_000019_0: java.io.IOException: Cannot create S3 buffer directory:
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.newBackupFile(NativeS3FileSystem.java:152)
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsOutputStream.<init>(NativeS3FileSystem.java:136)
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.fs.s3native.NativeS3FileSystem.create(NativeS3FileSystem.java:278)
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:503)
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:403)
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.mapred.TextOutputFormat.getRecordWriter(TextOutputFormat.java:117)
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:293)
> attempt_200905111058_0001_r_000019_0:   at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:2198)
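The race condition referenced above (HADOOP-4377) is the classic check-then-act pattern on directory creation: with several reduce tasks per node, two tasks can race on the buffer directory, so one of them sees mkdirs() return false and fails spuriously. The sketch below illustrates the pattern and a race-tolerant variant; the method names and exact checks are illustrative assumptions, not the actual Hadoop patch.

```java
import java.io.File;
import java.io.IOException;

public class BufferDirDemo {

    // Racy check-then-create (illustrative, not the Hadoop source):
    // between exists() and mkdirs(), another task on the same node can
    // create the directory, making mkdirs() return false and this task
    // fail even though the directory now exists.
    static void createRacy(File dir) throws IOException {
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IOException("Cannot create S3 buffer directory: " + dir);
        }
    }

    // Race-tolerant variant: only fail if, after mkdirs() returns false,
    // the directory still does not exist. A false return caused by a
    // concurrent winner is treated as success.
    static void createSafe(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("Cannot create S3 buffer directory: " + dir);
        }
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "s3-buffer-demo");
        createSafe(dir);  // creates the directory
        createSafe(dir);  // already exists: no exception in the safe variant
        System.out.println(dir.isDirectory());
        dir.delete();
    }
}
```

Running multiple reducers per node (mapred.tasktracker.reduce.tasks.maximum = 4 here) makes the concurrent-creation window much easier to hit, which is consistent with the reporter's suspicion that the failure is configuration specific.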

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
