hadoop-hdfs-issues mailing list archives

From "Andrew Wang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-7678) Erasure coding: DFSInputStream with decode functionality
Date Wed, 29 Apr 2015 00:10:06 GMT

    [ https://issues.apache.org/jira/browse/HDFS-7678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14518401#comment-14518401 ]

Andrew Wang commented on HDFS-7678:

Thanks for the patch Zhe, some nice functionality here. Some review comments:


* Extra imports in DFSStripedInputStream
* Some lines longer than 80chars


* I see us swallowing InterruptedException, which is quite naughty, but a lot of other input
stream code does the same. It's a code smell; we really should be cleaning up and rethrowing
the exception. Think about it at least for this patch, and we should file a follow-on for
trunk and potentially the rest of the EC code.
* waitNextCompletion, shouldn't the read timeout be an overall timeout, not a per-task timeout?
Users at least want an overall timeout.
* Throwing InterruptedException on empty futures is semantically incorrect; why not just return?
* waitNextCompletion and its usage seems kind of complicated. Let's think about simplifying
* Do we actually need missingBlkIndices or the non-success cases? It's the set complement
of fetchedBlkIndices, so we can determine it afterwards.
* If we enforce the overall timeout in fetchBlockByteRange, we can do the futures cleanup
there too. Pass the delta timeout down to waitNextCompletion. This feels better, since it
links the timeout case with the timeout cleanup. Maybe another wrapper function to encapsulate
this, since waitNextCompletion is used in two places.
* Comments all over this logic would be good.
* Is it possible to have a 0 rp.getReadLength()? Precondition check this?
* In general I would prefer to see Precondition checks rather than asserts, since asserts
are disabled outside of tests.
* We always go through a function called "fetchExtraBlks..." even if we successfully got all
the blocks we need the first time. No early exit?
* Also seems like we have some code dupe between fetch and fetchExtra, let's think about breaking
out some shared functions. I wonder if it'd be better to do all the fetching first (including
parity if necessary), then pass it over to a decode function (if necessary).
* "found" is not used
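On the InterruptedException point above, a minimal sketch of the cleanup-and-rethrow pattern (names here are illustrative, not from the patch): restore the interrupt flag, then surface the failure as an InterruptedIOException so it fits read paths that already declare IOException.

```java
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.concurrent.CountDownLatch;

public class InterruptRethrow {
  // Hypothetical helper, not from the patch: wait for a read to complete
  // without swallowing the interrupt.
  static void awaitRead(CountDownLatch done) throws IOException {
    try {
      done.await();
    } catch (InterruptedException e) {
      // Re-assert the interrupt status so callers further up the stack
      // can still observe it, then rethrow instead of swallowing.
      Thread.currentThread().interrupt();
      throw new InterruptedIOException("interrupted while waiting for read");
    }
  }

  public static void main(String[] args) throws IOException {
    CountDownLatch done = new CountDownLatch(0); // already completed
    awaitRead(done); // returns immediately; no interrupt in play
    System.out.println("ok");
  }
}
```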
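The overall-timeout suggestion (compute one deadline in fetchBlockByteRange, pass the remaining delta down to waitNextCompletion) could look roughly like this; the helper name and structure are assumptions, not code from the patch.

```java
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class DeadlinePoll {
  // Hypothetical sketch: one overall deadline is fixed up front; each poll
  // waits only for the time remaining, so the total wait is bounded.
  static <T> T pollWithDeadline(CompletionService<T> cs, long deadlineNanos)
      throws IOException, InterruptedException, ExecutionException {
    long remaining = deadlineNanos - System.nanoTime();
    if (remaining <= 0) {
      throw new InterruptedIOException("overall read timeout exceeded");
    }
    Future<T> f = cs.poll(remaining, TimeUnit.NANOSECONDS);
    if (f == null) {
      throw new InterruptedIOException("overall read timeout exceeded");
    }
    return f.get();
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    CompletionService<Integer> cs = new ExecutorCompletionService<>(pool);
    cs.submit(() -> 42);
    long deadline = System.nanoTime() + TimeUnit.SECONDS.toNanos(5);
    System.out.println(pollWithDeadline(cs, deadline)); // prints 42
    pool.shutdown();
  }
}
```

This also links the timeout case with the futures cleanup: the caller that owns the deadline is the one place that knows when to cancel outstanding tasks.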
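To make the asserts-vs-Preconditions point concrete: asserts are no-ops unless the JVM runs with -ea, so a production read would skip the check entirely. Hadoop code typically uses Guava's Preconditions.checkArgument; the plain-Java equivalent below (hypothetical names) avoids the dependency for illustration.

```java
public class ReadLengthCheck {
  // Hypothetical validation of rp.getReadLength()'s value.
  static void checkReadLength(int readLength) {
    // assert readLength > 0;  // silently disabled without -ea
    if (readLength <= 0) {     // always enforced, like checkArgument
      throw new IllegalArgumentException(
          "readLength must be > 0, got " + readLength);
    }
  }

  public static void main(String[] args) {
    checkReadLength(1024); // valid length passes through
    System.out.println("ok");
  }
}
```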
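The fetch/fetchExtra restructuring suggested in the last comments, sketched with entirely illustrative names and stand-in bodies: do all the fetching first (parity only when needed), take the early exit when every data block arrived, then hand off to a single decode step.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FetchThenDecode {
  // Hypothetical shared read path for one stripe of dataBlks blocks.
  static byte[][] readStripe(int dataBlks, Set<Integer> missing) {
    byte[][] buf = fetchDataBlocks(dataBlks, missing);
    if (missing.isEmpty()) {
      return buf;               // early exit: nothing to decode
    }
    fetchParityBlocks(buf, missing);
    decode(buf, missing);       // one shared decode path
    return buf;
  }

  static byte[][] fetchDataBlocks(int n, Set<Integer> missing) {
    byte[][] buf = new byte[n][];
    for (int i = 0; i < n; i++) {
      // Stand-in for a real block fetch; missing blocks stay null.
      buf[i] = missing.contains(i) ? null : new byte[]{(byte) i};
    }
    return buf;
  }

  static void fetchParityBlocks(byte[][] buf, Set<Integer> missing) {
    // Stand-in: would read parity blocks needed for reconstruction.
  }

  static void decode(byte[][] buf, Set<Integer> missing) {
    for (int i : missing) {
      buf[i] = new byte[]{(byte) i}; // stand-in for erasure decoding
    }
  }

  public static void main(String[] args) {
    byte[][] out = readStripe(3, new HashSet<>(Arrays.asList(1)));
    System.out.println(out[1][0]); // prints 1
  }
}
```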

> Erasure coding: DFSInputStream with decode functionality
> --------------------------------------------------------
>                 Key: HDFS-7678
>                 URL: https://issues.apache.org/jira/browse/HDFS-7678
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>    Affects Versions: HDFS-7285
>            Reporter: Li Bo
>            Assignee: Zhe Zhang
>         Attachments: BlockGroupReader.patch, HDFS-7678-HDFS-7285.002.patch, HDFS-7678.000.patch,
> A block group reader will read data from BlockGroup no matter in striping layout or contiguous
> layout. The corrupt blocks can be known before reading (told by namenode), or just be found
> during reading. The block group reader needs to do decoding work when some blocks are found
> corrupt.

This message was sent by Atlassian JIRA
