hadoop-common-dev mailing list archives

From "Raghu Angadi (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1513) A likely race condition between the creation of a directory and checking for its existence in the DiskChecker class
Date Fri, 22 Jun 2007 15:53:26 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12507424 ]

Raghu Angadi commented on HADOOP-1513:
--------------------------------------

Are you saying File.mkdirs() has this problem based on its implementation, its contract, or a reproducible error? I still don't see how mkdirs()'s documentation alone implies this problem. The patch you have seems correct (except that you don't need to call exists() when mkdirs() returns true... though in this DFSClient case mkdirs() would return false most of the time anyway).

I think I understand the case you are describing with two threads, but why do you think mkdirs() first invokes {{exists()}} and then {{mkdir()}}?
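
A minimal sketch of the ordering being discussed (this is not the attached 1513.patch; the class name and exception type are placeholders): call mkdirs() first, and only treat a false return as an error if the directory still does not exist.

{code:java}
import java.io.File;
import java.io.IOException;

// Sketch only: attempt the creation first, and treat a false return from mkdirs()
// as an error only if the directory still does not exist afterwards, since a false
// return may simply mean another thread or process created the directory first.
public class DiskCheckSketch {
    public static void checkDir(File dir) throws IOException {
        if (!dir.mkdirs() && !dir.isDirectory()) {
            throw new IOException("can not create directory: " + dir);
        }
    }
}
{code}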

> A likely race condition between the creation of a directory and checking for its existence in the DiskChecker class
> -------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1513
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1513
>             Project: Hadoop
>          Issue Type: Bug
>          Components: fs
>    Affects Versions: 0.14.0
>            Reporter: Devaraj Das
>            Assignee: Devaraj Das
>            Priority: Critical
>             Fix For: 0.14.0
>
>         Attachments: 1513.patch
>
>
> Got this exception in a job run. It looks like the problem is a race condition between the creation of a directory and checking for its existence. Specifically, the line:
> if (!dir.exists() && !dir.mkdirs()), doesn't seem safe when invoked by multiple processes at the same time (see the sketch after the quoted trace below).
> 2007-06-21 07:55:33,583 INFO org.apache.hadoop.mapred.MapTask: numReduceTasks: 1
> 2007-06-21 07:55:33,818 WARN org.apache.hadoop.fs.AllocatorPerContext: org.apache.hadoop.util.DiskChecker$DiskErrorException: can not create directory: /export/crawlspace/kryptonite/ddas/dfs/data/tmp
> 	at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:26)
> 	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createPath(LocalDirAllocator.java:211)
> 	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:248)
> 	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:276)
> 	at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:155)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.newBackupFile(DFSClient.java:1171)
> 	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:1136)
> 	at org.apache.hadoop.dfs.DFSClient.create(DFSClient.java:342)
> 	at org.apache.hadoop.dfs.DistributedFileSystem$RawDistributedFileSystem.create(DistributedFileSystem.java:145)
> 	at org.apache.hadoop.fs.ChecksumFileSystem$FSOutputSummer.<init>(ChecksumFileSystem.java:368)
> 	at org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:443)
> 	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:254)
> 	at org.apache.hadoop.io.SequenceFile$Writer.<init>(SequenceFile.java:675)
> 	at org.apache.hadoop.io.SequenceFile.createWriter(SequenceFile.java:165)
> 	at org.apache.hadoop.examples.RandomWriter$Map.map(RandomWriter.java:137)
> 	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:46)
> 	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:189)
> 	at org.apache.hadoop.mapred.TaskTracker$Child.main(TaskTracker.java:1740)
> 2007-06-21 07:55:33,821 WARN org.apache.hadoop.mapred.TaskTracker: Error running child
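
The failure mode in the quoted report can be illustrated by two callers running the same check concurrently. The class below is a hypothetical, standalone demo and not Hadoop code; the spurious failure depends on both threads hitting the window between exists() and mkdirs() at the same time, so it will not reproduce on every run.

{code:java}
import java.io.File;
import java.io.IOException;

// Illustration only (not Hadoop code): both threads run the same
// "!dir.exists() && !dir.mkdirs()" check. If both observe exists() == false,
// only one mkdirs() call creates the directory and returns true; the other
// returns false because the directory now exists, and a spurious error is raised.
public class MkdirsRaceDemo {
    static void racyCheck(File dir) throws IOException {
        if (!dir.exists() && !dir.mkdirs()) {
            throw new IOException("can not create directory: " + dir);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        File dir = new File(System.getProperty("java.io.tmpdir"), "mkdirs-race-demo");
        Runnable task = () -> {
            try {
                racyCheck(dir);
            } catch (IOException e) {
                System.out.println(Thread.currentThread().getName() + ": " + e.getMessage());
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join();  b.join();
    }
}
{code}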

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

