hadoop-hdfs-issues mailing list archives

From "Hudson (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-6107) When a block can't be cached due to limited space on the DataNode, that block becomes uncacheable
Date Mon, 17 Mar 2014 19:01:51 GMT

    [ https://issues.apache.org/jira/browse/HDFS-6107?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13938211#comment-13938211 ]

Hudson commented on HDFS-6107:
------------------------------

SUCCESS: Integrated in Hadoop-trunk-Commit #5341 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/5341/])
HDFS-6107. When a block cannot be cached due to limited space on the DataNode, it becomes uncacheable (cmccabe) (cmccabe: http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1578508)
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPOfferService.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetCache.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/DFSTestUtil.java
* /hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestFsDatasetCache.java


> When a block can't be cached due to limited space on the DataNode, that block becomes uncacheable
> -------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-6107
>                 URL: https://issues.apache.org/jira/browse/HDFS-6107
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode
>    Affects Versions: 2.4.0
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>             Fix For: 2.4.0
>
>         Attachments: HDFS-6107.001.patch
>
>
> When a block can't be cached due to limited space on the DataNode, that block becomes uncacheable. This is because the CachingTask fails to reset the block state in this error-handling case.



--
This message was sent by Atlassian JIRA
(v6.2#6252)
