hbase-issues mailing list archives

From "Andrew Purtell (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HBASE-5878) Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
Date Tue, 11 Aug 2015 00:14:46 GMT

     [ https://issues.apache.org/jira/browse/HBASE-5878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]

Andrew Purtell updated HBASE-5878:
----------------------------------
       Resolution: Fixed
     Hadoop Flags: Reviewed
    Fix Version/s: 1.1.2
                   1.2.0
                   1.0.2
                   0.98.14
           Status: Resolved  (was: Patch Available)

Thanks.

Pushed the v6 master patch to master, branch-1, and branch-1.2. Pushed the v1 branch-1.0 patch
to branch-1.0 and branch-1.1. Pushed the v7 0.98 patch to 0.98 (checked Hadoop 1 and 2 builds).
All WAL unit tests pass.


> Use getVisibleLength public api from HdfsDataInputStream from Hadoop-2.
> -----------------------------------------------------------------------
>
>                 Key: HBASE-5878
>                 URL: https://issues.apache.org/jira/browse/HBASE-5878
>             Project: HBase
>          Issue Type: Bug
>          Components: wal
>            Reporter: Uma Maheswara Rao G
>            Assignee: Ashish Singhi
>             Fix For: 2.0.0, 0.98.14, 1.0.2, 1.2.0, 1.1.2, 1.3.0
>
>         Attachments: HBASE-5878-branch-1.0.patch, HBASE-5878-v2.patch, HBASE-5878-v3.patch,
HBASE-5878-v4.patch, HBASE-5878-v5-0.98.patch, HBASE-5878-v5.patch, HBASE-5878-v5.patch, HBASE-5878-v6-0.98.patch,
HBASE-5878-v6.patch, HBASE-5878-v7-0.98.patch, HBASE-5878.patch
>
>
> SequenceFileLogReader:
> Currently HBase uses the getFileLength api from the DFSInputStream class via reflection. DFSInputStream
is not exposed as a public API, so it may change in the future. HDFS now exposes HdfsDataInputStream
as a public API.
> We can make use of it, falling back to the reflective getFileLength lookup on DFSInputStream
only when HdfsDataInputStream is not available, so that we will not face any sudden surprise like the one we are facing today.
> Also, the current code just logs one warn message and proceeds if any exception is thrown
while getting the length. I think we should re-throw the exception, because there is no point
in continuing and risking data loss.
> {code}
> long adjust = 0;
>           try {
>             Field fIn = FilterInputStream.class.getDeclaredField("in");
>             fIn.setAccessible(true);
>             Object realIn = fIn.get(this.in);
>             // In hadoop 0.22, DFSInputStream is a standalone class.  Before this,
>             // it was an inner class of DFSClient.
>             if (realIn.getClass().getName().endsWith("DFSInputStream")) {
>               Method getFileLength = realIn.getClass().
>                 getDeclaredMethod("getFileLength", new Class<?> []{});
>               getFileLength.setAccessible(true);
>               long realLength = ((Long)getFileLength.
>                 invoke(realIn, new Object []{})).longValue();
>               assert(realLength >= this.length);
>               adjust = realLength - this.length;
>             } else {
>               LOG.info("Input stream class: " + realIn.getClass().getName() +
>                   ", not adjusting length");
>             }
>           } catch(Exception e) {
>             SequenceFileLogReader.LOG.warn(
>               "Error while trying to get accurate file length.  " +
>               "Truncation / data loss may occur if RegionServers die.", e);
>           }
>           return adjust + super.getPos();
> {code}
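The fix the issue describes boils down to a pattern: prefer the typed public API when the stream supports it, keep the reflective lookup only as a fallback for older Hadoop, and re-throw on failure instead of logging and proceeding. A minimal, self-contained sketch of that pattern is below; `LegacyStream` and `ModernStream` are hypothetical stand-ins for the old `DFSInputStream` (reached via reflection) and the public Hadoop-2 `HdfsDataInputStream` (with its `getVisibleLength` method), not the real Hadoop classes.

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for the old DFSInputStream: only a non-public method.
class LegacyStream {
    long getFileLength() { return 42L; }
}

// Hypothetical stand-in for Hadoop-2's public HdfsDataInputStream.
class ModernStream {
    public long getVisibleLength() { return 42L; }
}

public class VisibleLengthSketch {
    // Prefer the typed public API; fall back to reflection; re-throw on
    // failure rather than warn-and-continue, since continuing with a wrong
    // length risks silent data loss.
    static long visibleLength(Object stream) throws Exception {
        if (stream instanceof ModernStream) {
            // Typed call, no reflection needed on Hadoop 2.
            return ((ModernStream) stream).getVisibleLength();
        }
        try {
            // Reflective fallback for the legacy stream class.
            Method m = stream.getClass().getDeclaredMethod("getFileLength");
            m.setAccessible(true);
            return (Long) m.invoke(stream);
        } catch (Exception e) {
            // Re-throw instead of logging a warning, as the issue proposes.
            throw new Exception(
                "Cannot determine file length; aborting to avoid data loss", e);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(visibleLength(new ModernStream())); // 42
        System.out.println(visibleLength(new LegacyStream()));  // 42
    }
}
```

In the real patch the `instanceof` check would be against `HdfsDataInputStream` and the reflective branch against `DFSInputStream`, but the control flow is the same: the reflection path becomes the `else` condition rather than the only path.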



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
