hadoop-hdfs-dev mailing list archives

From sravan korumilli <sravan.korumi...@huawei.com>
Subject Possible Dataloss
Date Fri, 17 Jun 2011 10:55:57 GMT
 
Hi, 

        In BackupStorage.convergeJournalSpool, while jsState is INPROGRESS we
load the edits.new file once. Later, in order to pick up the intermediate
edits, we load edits.new a second time after setting jsState to WAIT. But
because we reuse the same BufferedInputStream, the second read resumes at the
end-of-file marker, i.e. -1, so the intermediate edits are never loaded into
memory.

        The second load of the edits therefore serves no purpose.
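To illustrate the buffering behaviour we suspect, here is a minimal standalone Java sketch (the temp file, class name, and byte values are invented for the example; this is not Hadoop code). Once a BufferedInputStream has filled its buffer, later changes to the underlying file are not visible for bytes served from that buffer:

```java
import java.io.*;

// Standalone illustration (not Hadoop code): once a BufferedInputStream has
// filled its buffer, subsequent changes to the underlying file are invisible
// for any bytes that are served from that stale buffer.
public class StaleBufferDemo {

    static int[] demo() throws IOException {
        File f = File.createTempFile("editlog", ".tmp");
        f.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(f)) {
            out.write(new byte[] {1, 2, 3});
        }

        BufferedInputStream in =
            new BufferedInputStream(new FileInputStream(f), 4096);
        int first = in.read();   // fill() buffers the whole 3-byte file

        // A concurrent writer updates the file after the buffer was filled.
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw")) {
            raf.seek(1);
            raf.write(9);        // on disk, offset 1 now holds 9
        }

        int second = in.read();  // served from the stale buffer: still the old 2
        in.close();
        return new int[] {first, second};
    }

    public static void main(String[] args) throws IOException {
        int[] r = demo();
        System.out.println(r[0] + " " + r[1]); // prints "1 2"; the 9 is never seen
    }
}
```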

We are using the Hadoop 0.21 version.
This is the corresponding code snippet from the
BackupStorage.convergeJournalSpool method:

  if (jSpoolFile.exists()) {
    // load edits.new
    EditLogFileInputStream edits = new EditLogFileInputStream(jSpoolFile);
    DataInputStream in = edits.getDataInputStream();
    numEdits += editLog.loadFSEdits(in, false);

    // first time reached the end of spool
    jsState = JSpoolState.WAIT;
    numEdits += editLog.loadEditRecords(getLayoutVersion(), in, true);
    getFSNamesystem().dir.updateCountForINodeWithQuota();
    edits.close();
  }


We are in the process of prototyping the "Hot Backup Node", and we are
seeing possible data loss in the following scenario:

1) A checkpoint is in progress.
2) The first load in convergeJournalSpool completes.
3) Any new transactions arriving during step #2 (for example, creating a DFS
file) are not reflected in the BufferedInputStream's buffer.
4) The second load does not pick up these new transactions as expected,
because the file is not re-read into the buffer.
5) If a switch happens now, the newly added transactions are lost.
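One direction we are considering is to re-open the spool with a fresh stream for the second pass, skipping the bytes already consumed, so the buffer is refilled from disk. A minimal standalone sketch of that pattern (invented file names and byte counts; not the actual BackupStorage code):

```java
import java.io.*;

// Standalone sketch (not Hadoop code): re-open the spool file for the second
// pass so that edits written after the first pass are read from disk.
public class ReopenSpoolDemo {

    static int demo() throws IOException {
        File spool = File.createTempFile("spool", ".tmp");
        spool.deleteOnExit();
        try (FileOutputStream out = new FileOutputStream(spool)) {
            out.write(new byte[] {1, 2});   // edits present at checkpoint time
        }

        // First pass: consume the spool, remembering how many bytes were loaded.
        long consumed = 0;
        try (BufferedInputStream first =
                 new BufferedInputStream(new FileInputStream(spool))) {
            while (first.read() != -1) {
                consumed++;
            }
        }

        // Intermediate edits arrive while jsState is still INPROGRESS.
        try (FileOutputStream out = new FileOutputStream(spool, true)) {
            out.write(new byte[] {3, 4});
        }

        // Second pass: a fresh stream, skipping the edits already applied.
        int newEdits = 0;
        try (BufferedInputStream second =
                 new BufferedInputStream(new FileInputStream(spool))) {
            second.skip(consumed);
            while (second.read() != -1) {
                newEdits++;
            }
        }
        return newEdits;                    // both intermediate bytes are seen
    }

    public static void main(String[] args) throws IOException {
        System.out.println(demo());         // prints 2
    }
}
```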

Does anyone have any suggestions to overcome this problem? 


Regards, 
Sravan kumar.
