hadoop-hdfs-issues mailing list archives

From "Tsz Wo Nicholas Sze (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-9164) hdfs-nfs connector fails on O_TRUNC
Date Tue, 24 Nov 2015 22:30:10 GMT

     [ https://issues.apache.org/jira/browse/HDFS-9164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Tsz Wo Nicholas Sze updated HDFS-9164:
--------------------------------------
    Component/s:     (was: HDFS)
                 nfs

> hdfs-nfs connector fails on O_TRUNC
> -----------------------------------
>
>                 Key: HDFS-9164
>                 URL: https://issues.apache.org/jira/browse/HDFS-9164
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: nfs
>            Reporter: Constantine Peresypkin
>            Assignee: Constantine Peresypkin
>         Attachments: HDFS-9164.1.patch
>
>
> The Linux NFS client will issue `open(..., O_TRUNC); write()` when overwriting a file that is already in the NFS client cache (probably to avoid evicting the inode). That sequence fails spectacularly on hdfs-nfs with an I/O error (see the sketch below the example).
> Example:
> $ cp /some/file /to/hdfs/mount/
> $ cp /some/file /to/hdfs/mount/
> I/O error
> The first copy succeeds if the file is not yet in the client cache; the second one always fails.
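
A minimal C sketch of the failing open(O_TRUNC) + write() sequence described above. The path and contents are hypothetical; it assumes an hdfs-nfs mount at /to/hdfs/mount with the target file already present and cached by the Linux NFS client, which, per the description, is when the client overwrites in place rather than recreating the file:

    /* Hedged reproduction sketch: issues the same syscall sequence the Linux
     * NFS client generates for the second `cp`, i.e. truncate-and-overwrite
     * of an already-cached inode. Path and data are placeholders. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        const char *path = "/to/hdfs/mount/file";   /* hypothetical target */
        const char buf[] = "overwritten contents\n";

        /* Overwrite in place: O_TRUNC on the cached inode instead of
         * unlink + create -- the pattern the issue describes. */
        int fd = open(path, O_WRONLY | O_TRUNC);
        if (fd < 0) {
            perror("open(O_TRUNC)");
            return 1;
        }

        /* Per the issue description, this sequence is reported back to the
         * application as an I/O error on an hdfs-nfs mount. */
        if (write(fd, buf, strlen(buf)) < 0)
            perror("write");

        if (close(fd) < 0)
            perror("close");
        return 0;
    }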



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
