hadoop-common-dev mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Updated: (HADOOP-89) files are not visible until they are closed
Date Mon, 06 Aug 2007 19:57:59 GMT

https://issues.apache.org/jira/browse/HADOOP-89?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel

dhruba borthakur updated HADOOP-89:

    Attachment: atomicCreation.patch

This patch makes a file visible in the file system as soon as it is created by an application.
However, the data blocks are associated with the file only when the file is closed.

So if a DFS client A has created a file and is writing data to it, another DFS client will see
the file with a size of zero until A closes the file.
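As a rough illustration of the semantics described above, here is a toy model of the namespace behavior: the file entry becomes visible at create(), but its blocks (and hence its reported length) are attached only at close(). The class and method names are purely illustrative and are not the actual HDFS NameNode API.

```python
class ToyNamespace:
    """Minimal sketch of the visibility semantics in this patch."""

    def __init__(self):
        self.files = {}  # path -> list of committed block sizes

    def create(self, path):
        # The file is visible immediately, but has no blocks yet.
        self.files[path] = []
        return PendingFile(self, path)

    def exists(self, path):
        return path in self.files

    def length(self, path):
        # Length reflects only committed blocks.
        return sum(self.files[path])


class PendingFile:
    """Writer-side handle; blocks are committed only on close()."""

    def __init__(self, ns, path):
        self.ns, self.path = ns, path
        self.pending_blocks = []  # written but not yet committed

    def write_block(self, size):
        self.pending_blocks.append(size)

    def close(self):
        # Only now are the data blocks associated with the file.
        self.ns.files[self.path] = self.pending_blocks


ns = ToyNamespace()
f = ns.create("/logs/app.log")
f.write_block(64)
print(ns.exists("/logs/app.log"))  # True: visible as soon as it is created
print(ns.length("/logs/app.log"))  # 0: other clients see size zero...
f.close()
print(ns.length("/logs/app.log"))  # 64: ...until the writer closes it
```

A second client calling exists() sees the file right after create(), while length() stays zero until the writer's close() commits the blocks, which matches the behavior this patch introduces.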

> files are not visible until they are closed
> -------------------------------------------
>                 Key: HADOOP-89
>                 URL: https://issues.apache.org/jira/browse/HADOOP-89
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.1.0
>            Reporter: Yoram Arnon
>            Assignee: Sameer Paranjpye
>            Priority: Critical
>         Attachments: atomicCreation.patch
> the current behaviour, whereby a file is not visible until it is closed, has several flaws, including:
> 1. no practical way to know if a file/job is progressing
> 2. no way to implement files that never close, such as log files
> 3. failure to close a file results in loss of the file
> The part of the file that's written should be visible.

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
