hadoop-hdfs-issues mailing list archives

From "Allen Wittenauer (JIRA)" <j...@apache.org>
Subject [jira] [Resolved] (HDFS-1951) Null pointer exception comes when Namenode recovery happens and there is no response from client to NN more than the hardlimit for NN recovery and the current block is more than the prev block size in NN
Date Tue, 10 Mar 2015 03:48:39 GMT

     [ https://issues.apache.org/jira/browse/HDFS-1951?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HDFS-1951.
------------------------------------
    Resolution: Won't Fix

> Null pointer exception comes when Namenode recovery happens and there is no response
from client to NN more than the hardlimit for NN recovery and the current block is more than
the prev block size in NN 
> ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-1951
>                 URL: https://issues.apache.org/jira/browse/HDFS-1951
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 0.20-append
>            Reporter: ramkrishna.s.vasudevan
>         Attachments: HDFS-1951.patch
>
>
> Null pointer exception occurs when NameNode lease recovery runs while the client has not responded to the NN for longer than the recovery hard limit, and the current block is larger than the block size previously recorded on the NN.
> 1. Write to 2 datanodes using a client.
> 2. Kill one datanode and allow pipeline recovery.
> 3. Write some more data to the same block.
> 4. In parallel, allow NameNode recovery to happen.
> A NullPointerException is then thrown in the addStoredBlock API.
>  
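The failure mode described in the report can be illustrated with a small self-contained Java sketch (hypothetical class and method names; this is not the actual FSNamesystem code): when lease recovery runs while the client has kept writing, the block length the datanode reports can exceed the length the NameNode recorded, the stored-block lookup then yields null, and dereferencing that result without a guard produces the NPE.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the NPE pattern from HDFS-1951; names and logic
// are illustrative, not the real NameNode implementation.
public class AddStoredBlockSketch {
    // blockId -> block length recorded on the NameNode
    private final Map<Long, Long> storedBlocks = new HashMap<>();

    public void record(long blockId, long length) {
        storedBlocks.put(blockId, length);
    }

    // Mimics a lookup that returns null when the reported length exceeds
    // what the NameNode has recorded (client wrote more data after
    // pipeline recovery while lease recovery ran in parallel).
    public Long getStoredLength(long blockId, long reportedLength) {
        Long recorded = storedBlocks.get(blockId);
        if (recorded == null || reportedLength > recorded) {
            return null; // unknown block, or block grew past the recorded size
        }
        return recorded;
    }

    // Buggy pattern: dereferences the lookup result unconditionally.
    public long addStoredBlockBuggy(long blockId, long reportedLength) {
        Long len = getStoredLength(blockId, reportedLength);
        return len.longValue(); // NullPointerException when the lookup returned null
    }

    // Guarded pattern: fall back to the reported length when the lookup fails.
    public long addStoredBlockGuarded(long blockId, long reportedLength) {
        Long len = getStoredLength(blockId, reportedLength);
        return (len != null) ? len : reportedLength;
    }

    public static void main(String[] args) {
        AddStoredBlockSketch nn = new AddStoredBlockSketch();
        nn.record(1L, 100L);          // NN recorded 100 bytes for block 1
        // Client kept writing: the block is now 150 bytes, larger than recorded.
        System.out.println(nn.addStoredBlockGuarded(1L, 150L)); // prints 150
        try {
            nn.addStoredBlockBuggy(1L, 150L);
        } catch (NullPointerException e) {
            System.out.println("NPE, as in the report");
        }
    }
}
```

The guarded variant shows the shape of fix a patch like the attached HDFS-1951.patch would typically take: check the lookup result for null before using it instead of assuming the recorded block always covers the reported length.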



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
