accumulo-dev mailing list archives

From echeipesh <>
Subject [GitHub] accumulo pull request: ACCUMULO-3602 BatchScanner optimization for...
Date Wed, 08 Apr 2015 16:49:05 GMT
Github user echeipesh commented on a diff in the pull request:
    --- Diff: core/src/main/java/org/apache/accumulo/core/client/mapred/
    @@ -384,7 +387,21 @@ public static InputTableConfig getInputTableConfig(JobConf job, String
       protected abstract static class AbstractRecordReader<K,V> implements RecordReader<K,V>
         protected long numKeysRead;
         protected Iterator<Map.Entry<Key,Value>> scannerIterator;
    -    protected RangeInputSplit split;
    +    protected org.apache.accumulo.core.client.mapreduce.impl.AccumuloInputSplit split;
    +    protected ScannerBase scannerBase;
    +    /**
    +     * Configures the iterators on a scanner for the given table name.
    +     *
    +     * @param job
    +     *          the Hadoop job configuration
    +     * @param scanner
    +     *          the scanner for which to configure the iterators
    +     * @param tableName
    +     *          the table name for which the scanner is configured
    +     * @since 1.7.0
    +     */
     +    protected abstract void setupIterators(JobConf job, ScannerBase scanner, String tableName,
     +        AccumuloInputSplit split);
    --- End diff --
    Actually, that's just factored out of the old `RangeInputSplit`. Looking over the code,
there doesn't seem to be a hard reason for it: all of those options come from `InputTableConfig`,
which, as you say, is deserialized from the `JobContext`, so there's no strict need for them
(or most fields in `AccumuloInputSplit`) to live on the split. That said, I think keeping
them there makes the code a little easier to reason about.
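For illustration, a concrete subclass's `setupIterators` would essentially just loop over the configured iterator settings and attach each one to the scanner. The sketch below uses minimal stand-ins for the Accumulo types (`IteratorSetting`, the relevant slice of `ScannerBase`) so it is self-contained; in the real code the settings would come from `InputTableConfig.getIterators()` and be applied via `ScannerBase.addScanIterator(IteratorSetting)`.

```java
import java.util.ArrayList;
import java.util.List;

public class SetupIteratorsSketch {

  // Stand-in for org.apache.accumulo.core.client.IteratorSetting.
  static class IteratorSetting {
    final int priority;
    final String name;
    final String iteratorClass;

    IteratorSetting(int priority, String name, String iteratorClass) {
      this.priority = priority;
      this.name = name;
      this.iteratorClass = iteratorClass;
    }
  }

  // Stand-in for the part of ScannerBase the hook needs.
  static class ScannerStub {
    final List<IteratorSetting> applied = new ArrayList<>();

    void addScanIterator(IteratorSetting setting) {
      applied.add(setting);
    }
  }

  // Sketch of the hook body: apply every configured iterator to the scanner.
  // In the real implementation the list would be looked up per table name
  // from the InputTableConfig deserialized out of the job configuration.
  static void setupIterators(List<IteratorSetting> configured, ScannerStub scanner) {
    for (IteratorSetting setting : configured) {
      scanner.addScanIterator(setting);
    }
  }

  public static void main(String[] args) {
    List<IteratorSetting> configured = new ArrayList<>();
    configured.add(new IteratorSetting(50, "vers",
        "org.apache.accumulo.core.iterators.user.VersioningIterator"));

    ScannerStub scanner = new ScannerStub();
    setupIterators(configured, scanner);
    System.out.println(scanner.applied.size()); // prints 1
  }
}
```

Since the settings are per-table, passing the table name (and split) into the hook lets each subclass resolve the right `InputTableConfig` without the split itself having to carry those options.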

