hadoop-hdfs-issues mailing list archives

From "Plamen Jeliazkov (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HDFS-9516) truncate file fails with data dirs on multiple disks
Date Thu, 10 Dec 2015 20:51:11 GMT

     [ https://issues.apache.org/jira/browse/HDFS-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Plamen Jeliazkov updated HDFS-9516:
    Attachment:     (was: HDFS-9516_testFailures.patch)

> truncate file fails with data dirs on multiple disks
> ----------------------------------------------------
>                 Key: HDFS-9516
>                 URL: https://issues.apache.org/jira/browse/HDFS-9516
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.7.1
>            Reporter: Bogdan Raducanu
>            Assignee: Plamen Jeliazkov
>         Attachments: Main.java, truncate.dn.log
> FileSystem.truncate returns false (no exception), but the file is never closed and is not writable afterwards.
> This appears to be caused by copy-on-truncate, which is used because the system is in an upgrade state. In this case a rename between devices is attempted.
> See attached log and repro code.
> It probably also affects truncating a snapshotted file, where copy-on-truncate is likewise used.
> Possibly it affects not only truncate but any block recovery.
> I think the problem is in updateReplicaUnderRecovery:
> {code}
> ReplicaBeingWritten newReplicaInfo = new ReplicaBeingWritten(
>             newBlockId, recoveryId, rur.getVolume(), blockFile.getParentFile(),
>             newlength);
> {code}
> blockFile is created by copyReplicaWithNewBlockIdAndGS, which is allowed to choose any volume, so rur.getVolume() is not where the block is actually located.
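For background on why the cross-device rename fails: a plain filesystem rename cannot cross device boundaries, and the usual workaround is a copy-then-delete fallback. A minimal sketch of that pattern using `java.nio` (this is illustrative only, not HDFS code; the method name `moveAcrossDevices` is hypothetical):

```java
import java.io.IOException;
import java.nio.file.AtomicMoveNotSupportedException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class MoveWithFallback {
    // Try an atomic rename first; if the source and target live on
    // different filesystems (devices), fall back to copy + delete.
    static void moveAcrossDevices(Path src, Path dst) throws IOException {
        try {
            Files.move(src, dst, StandardCopyOption.ATOMIC_MOVE);
        } catch (AtomicMoveNotSupportedException e) {
            // Atomic rename is impossible across devices: copy, then
            // remove the original.
            Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
            Files.delete(src);
        }
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("mv");
        Path src = dir.resolve("a");
        Files.write(src, "data".getBytes());
        moveAcrossDevices(src, dir.resolve("b"));
        System.out.println(Files.exists(dir.resolve("b")));
    }
}
```

Note that `java.io.File.renameTo` simply returns false in the cross-device case, which matches the silent `truncate` failure described above.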
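The volume mismatch described above can be modeled without any HDFS internals. In this sketch, plain directories stand in for DataNode volumes, and all class and method names (`Replica`, `copyToVolume`) are hypothetical stand-ins for `ReplicaBeingWritten` and `copyReplicaWithNewBlockIdAndGS`:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal model of the bug: a replica remembers the volume it lives on,
// but the copy helper may place the new block file on a DIFFERENT volume.
// Rebuilding a replica from the OLD replica's volume then points at a
// file that does not exist.
public class VolumeMismatch {
    static class Replica {
        final Path volume;       // stands in for rur.getVolume()
        final String blockName;
        Replica(Path volume, String blockName) {
            this.volume = volume;
            this.blockName = blockName;
        }
        Path blockFile() { return volume.resolve(blockName); }
    }

    // Stands in for copyReplicaWithNewBlockIdAndGS: free to choose any
    // target volume, not necessarily the source replica's volume.
    static Path copyToVolume(Replica r, Path targetVolume, String newName)
            throws IOException {
        return Files.copy(r.blockFile(), targetVolume.resolve(newName));
    }

    public static void main(String[] args) throws IOException {
        Path vol1 = Files.createTempDirectory("vol1");
        Path vol2 = Files.createTempDirectory("vol2");
        Replica rur = new Replica(vol1, "blk_1");
        Files.write(rur.blockFile(), new byte[] {1, 2, 3});

        // The copy lands on vol2...
        Path blockFile = copyToVolume(rur, vol2, "blk_2");

        // ...but the buggy pattern pairs the new block with the old volume.
        Replica wrong = new Replica(rur.volume, "blk_2");
        System.out.println(Files.exists(wrong.blockFile())); // missing
        System.out.println(Files.exists(blockFile));         // actual copy
    }
}
```

The suggested direction of a fix follows from the model: the new replica should be constructed from the volume that actually holds `blockFile` (e.g. derived from `blockFile.getParentFile()`), not from `rur.getVolume()`.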

This message was sent by Atlassian JIRA
