hadoop-hdfs-issues mailing list archives

From "Yuanbo Liu (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-11293) FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation
Date Thu, 05 Jan 2017 09:54:58 GMT

    [ https://issues.apache.org/jira/browse/HDFS-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15800939#comment-15800939 ]

Yuanbo Liu commented on HDFS-11293:
-----------------------------------

[~umamaheswararao] Thanks for your response.
{code}
 the scheduling is wrong if that happening right? 
{code}
The current answer is yes, and I've encountered it while testing SPS.
But generally speaking, choosing A[SSD] as a target seems reasonable, because the block replica
exists in A[DISK], not A[SSD]. Are there any considerations against placing replicas on the
same node under different storage types/dirs?
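
To illustrate the point, here is a minimal standalone sketch (hypothetical names, not the actual SPS/BlockManager code) of the difference between excluding a target because the node already holds the replica, versus excluding only the exact (node, storage type) pair that holds it:
{code}
// Hypothetical sketch only: shows why A[SSD] can still look like a sensible
// target even though A[DISK] already holds a replica of the block.
import java.util.Set;

class TargetChoiceSketch {
  enum StorageType { DISK, SSD, ARCHIVE }

  // A candidate target is a (datanode, storage type) pair.
  record Target(String datanode, StorageType type) {}

  // Strict rule: skip any node that already stores the replica, regardless of storage type.
  static boolean allowedPerNode(Target candidate, Set<String> nodesWithReplica) {
    return !nodesWithReplica.contains(candidate.datanode());
  }

  // Looser rule: skip only the exact (node, storage type) pair that already stores it.
  static boolean allowedPerStorage(Target candidate, Set<Target> replicas) {
    return !replicas.contains(candidate);
  }

  public static void main(String[] args) {
    Set<Target> replicas = Set.of(
        new Target("A", StorageType.DISK),
        new Target("B", StorageType.DISK));
    Set<String> nodesWithReplica = Set.of("A", "B");

    Target candidate = new Target("A", StorageType.SSD);
    // per-node rule rejects A[SSD] (prints false); per-storage rule accepts it (prints true)
    System.out.println("per-node rule allows A[SSD]?    " + allowedPerNode(candidate, nodesWithReplica));
    System.out.println("per-storage rule allows A[SSD]? " + allowedPerStorage(candidate, replicas));
  }
}
{code}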

> FsDatasetImpl throws ReplicaAlreadyExistsException in a wrong situation
> -----------------------------------------------------------------------
>
>                 Key: HDFS-11293
>                 URL: https://issues.apache.org/jira/browse/HDFS-11293
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Yuanbo Liu
>            Assignee: Yuanbo Liu
>            Priority: Critical
>
> In {{FsDatasetImpl#createTemporary}}, we use {{volumeMap}} to get replica info by block
> pool id. But consider this situation:
> {code}
> datanode A => {DISK, SSD}, datanode B => {DISK, ARCHIVE}.
> 1. the same block replica exists in A[DISK] and B[DISK].
> 2. the block pool id of datanode A and datanode B are the same.
> {code}
> Then we start to change the file's storage policy and move block replicas around the cluster.
> Very likely we have to move a block from B[DISK] to A[SSD]; at this point, datanode A throws
> ReplicaAlreadyExistsException, which is not correct behavior.
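
To make the failure mode described above concrete, a simplified sketch (hypothetical names, not the actual FsDatasetImpl code) of a replica lookup keyed only on block pool id and block id, which rejects the move to A[SSD] because A[DISK] already has an entry for the block:
{code}
// Simplified sketch of the reported behavior, not the real FsDatasetImpl:
// the replica map is keyed by (block pool id, block id) only, so a node that
// already has the block on DISK rejects a new copy destined for SSD.
import java.util.HashMap;
import java.util.Map;

class VolumeMapSketch {
  enum StorageType { DISK, SSD, ARCHIVE }

  static class ReplicaAlreadyExistsException extends RuntimeException {
    ReplicaAlreadyExistsException(String msg) { super(msg); }
  }

  // Key ignores storage type, mirroring a lookup by block pool id + block id.
  private final Map<String, StorageType> volumeMap = new HashMap<>();

  private static String key(String bpid, long blockId) {
    return bpid + "/" + blockId;
  }

  void addReplica(String bpid, long blockId, StorageType type) {
    volumeMap.put(key(bpid, blockId), type);
  }

  // In this sketch, any existing entry for the block causes the new write to be
  // rejected, even though the requested storage type differs from the existing one.
  void createTemporary(String bpid, long blockId, StorageType target) {
    StorageType existing = volumeMap.get(key(bpid, blockId));
    if (existing != null) {
      throw new ReplicaAlreadyExistsException(
          "Block " + blockId + " already exists on " + existing + ", requested " + target);
    }
    volumeMap.put(key(bpid, blockId), target);
  }

  public static void main(String[] args) {
    VolumeMapSketch datanodeA = new VolumeMapSketch();
    datanodeA.addReplica("BP-1", 1001L, StorageType.DISK);    // replica already on A[DISK]
    datanodeA.createTemporary("BP-1", 1001L, StorageType.SSD); // throws, matching the description
  }
}
{code}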



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

