Date: Thu, 8 Jan 2015 06:13:34 +0000 (UTC)
From: "Ashish Singhi (JIRA)"
To: issues@hbase.apache.org
Subject: [jira] [Updated] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.

     [ https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ashish Singhi updated HBASE-5878:
---------------------------------
    Attachment: HBASE-5878.patch

> Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
> -----------------------------------------------------------------------
>
>                 Key: HBASE-5878
>                 URL: https://issues.apache.org/jira/browse/HBASE-5878
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>            Reporter: Uma Maheswara Rao G
>            Assignee: Uma Maheswara Rao G
>             Fix For: 1.0.0
>
>         Attachments: HBASE-5878.patch
>
>
> SequenceFileLogReader:
> Currently HBase uses the getFileLength api from the DFSInputStream class via reflection.
> DFSInputStream is not exposed as public, so this may change in the future. HDFS now exposes HdfsDataInputStream as a public API.
> We can make use of it, keeping the getFileLength reflection as the else condition for when the public API is not available, so that we will not have any sudden surprise like the one we are facing today.
> Also, the current code just logs one warn message and proceeds if getting the length throws an exception. I think we can re-throw the exception, because there is no point in continuing with data loss.
> {code}
>     long adjust = 0;
>     try {
>       Field fIn = FilterInputStream.class.getDeclaredField("in");
>       fIn.setAccessible(true);
>       Object realIn = fIn.get(this.in);
>       // In hadoop 0.22, DFSInputStream is a standalone class. Before this,
>       // it was an inner class of DFSClient.
>       if (realIn.getClass().getName().endsWith("DFSInputStream")) {
>         Method getFileLength = realIn.getClass().
>             getDeclaredMethod("getFileLength", new Class[] {});
>         getFileLength.setAccessible(true);
>         long realLength = ((Long) getFileLength.
>             invoke(realIn, new Object[] {})).longValue();
>         assert(realLength >= this.length);
>         adjust = realLength - this.length;
>       } else {
>         LOG.info("Input stream class: " + realIn.getClass().getName() +
>             ", not adjusting length");
>       }
>     } catch (Exception e) {
>       SequenceFileLogReader.LOG.warn(
>           "Error while trying to get accurate file length. " +
>           "Truncation / data loss may occur if RegionServers die.", e);
>     }
>     return adjust + super.getPos();
> {code}

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
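The else-condition fallback described above could be sketched as follows. This is a hypothetical illustration, not the attached HBASE-5878.patch: the helper name `visibleLength` is invented for this sketch. It assumes only that Hadoop 2's public `HdfsDataInputStream.getVisibleLength()` exists, falls back to the old reflective `getFileLength` lookup for older Hadoop, and re-throws (rather than warn-and-continue) when neither works:

```java
import java.io.IOException;
import java.lang.reflect.Method;

// Hypothetical sketch of the proposed fallback, not the attached patch.
class VisibleLengthHelper {

  // Returns the number of bytes visible to readers on the given stream.
  // Prefers Hadoop 2's public HdfsDataInputStream.getVisibleLength(); if that
  // method is absent (older Hadoop), falls back to the package-private
  // DFSInputStream.getFileLength() via reflection; otherwise re-throws,
  // since continuing with a wrong length risks silent data loss.
  static long visibleLength(Object realIn) throws IOException {
    try {
      // Hadoop 2 path: getVisibleLength() is a public API.
      Method m = realIn.getClass().getMethod("getVisibleLength");
      m.setAccessible(true); // defensive, in case the declaring class is not public
      return ((Long) m.invoke(realIn)).longValue();
    } catch (NoSuchMethodException e) {
      // Older Hadoop path: getFileLength() exists but is not public.
      try {
        Method m = realIn.getClass().getDeclaredMethod("getFileLength");
        m.setAccessible(true);
        return ((Long) m.invoke(realIn)).longValue();
      } catch (Exception inner) {
        throw new IOException(
            "Cannot determine visible length of " + realIn.getClass().getName(), inner);
      }
    } catch (Exception e) {
      throw new IOException(
          "Cannot determine visible length of " + realIn.getClass().getName(), e);
    }
  }
}
```

With such a helper, the quoted getPos() code would compute `adjust = visibleLength(realIn) - this.length;` and drop its warn-and-continue catch block, letting the IOException propagate.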