beam-commits mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (BEAM-92) Data-dependent sinks
Date Tue, 11 Jul 2017 16:21:00 GMT

    [ https://issues.apache.org/jira/browse/BEAM-92?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16082458#comment-16082458 ]

ASF GitHub Bot commented on BEAM-92:
------------------------------------

GitHub user reuvenlax opened a pull request:

    https://github.com/apache/beam/pull/3541

    [BEAM-92] Support value-dependent files in AvroIO.

    Extends the FileBasedSink dynamic filename support to AvroIO. In addition to allowing
dynamic destinations, we also allow per-destination Avro schemas, metadata, and codecs, and
we allow the DynamicDestinations class to access side inputs if desired.
    
    Note: Currently AvroIOTest.testDynamicDestinations fails on the direct runner. This appears
to be due to bugs in the proto translation support in Beam. Working with @kennknowles to debug
and fix this.
    
    R: @jkff 
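
    For readers skimming the archive, here is a minimal, Beam-free sketch of the
idea: using only the plain Avro file API, records are routed to files chosen from
the data itself, with metadata and a codec set per destination. The names below
are illustrative, not taken from this PR, which instead wires the same idea
through FileBasedSink's dynamic-destination support.

    // Beam-free sketch: one open Avro writer per destination, where the
    // destination, the file metadata, and the codec are all chosen from
    // each record at run time.
    import java.io.File;
    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.avro.Schema;
    import org.apache.avro.SchemaBuilder;
    import org.apache.avro.file.CodecFactory;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.generic.GenericRecordBuilder;

    public class DynamicAvroWriteSketch {
      public static void main(String[] args) throws IOException {
        Schema schema = SchemaBuilder.record("Event").fields()
            .requiredString("country")
            .requiredString("payload")
            .endRecord();

        Map<String, DataFileWriter<GenericRecord>> writers = new HashMap<>();
        String[][] events = {{"US", "a"}, {"DE", "b"}, {"US", "c"}};

        for (String[] e : events) {
          String destination = e[0];  // the "where": computed from the record
          DataFileWriter<GenericRecord> writer = writers.get(destination);
          if (writer == null) {
            writer = new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema));
            writer.setCodec(CodecFactory.deflateCodec(6));  // per-destination codec
            writer.setMeta("destination", destination);     // per-destination metadata
            writer.create(schema, new File("events-" + destination + ".avro"));
            writers.put(destination, writer);
          }
          writer.append(new GenericRecordBuilder(schema)
              .set("country", e[0]).set("payload", e[1]).build());
        }
        for (DataFileWriter<GenericRecord> writer : writers.values()) {
          writer.close();
        }
      }
    }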

You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/reuvenlax/incubator-beam dynamic_avro_write

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/beam/pull/3541.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #3541
    
----
commit a61076e1561ff9109f315e67c19ce7466901e3e0
Author: Reuven Lax <relax@google.com>
Date:   2017-07-07T03:22:25Z

    Support DynamicDestinations in AvroIO.

----


> Data-dependent sinks
> --------------------
>
>                 Key: BEAM-92
>                 URL: https://issues.apache.org/jira/browse/BEAM-92
>             Project: Beam
>          Issue Type: New Feature
>          Components: sdk-java-core
>            Reporter: Eugene Kirpichov
>            Assignee: Reuven Lax
>
> Current sink API writes all data to a single destination, but there are many use cases
> where different pieces of data need to be routed to different destinations, and the set of
> destinations is data-dependent (so it can't be implemented with a Partition transform).
> One internally discussed proposal was an API of the form:
> {code}
> PCollection<Void> PCollection<T>.apply(
>     Write.using(DoFn<T, SinkT> where,
>                 MapFn<SinkT, WriteOperation<WriteResultT, T>> how))
> {code}
> so an item T gets written to a destination (or multiple destinations) determined by "where",
> and the writing strategy is determined by "how", which produces a WriteOperation (the current
> API's global init/write/global finalize hooks) for any given destination.
> This API also has other benefits:
> * allows the SinkT to be computed dynamically (in "where"), rather than specified at
> pipeline construction time
> * removes the necessity for a Sink class entirely
> * is sequenceable w.r.t. downstream transforms (you can stick transforms onto the returned
> PCollection<Void>, while the current Write.to() returns a PDone)
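
As a toy, Beam-free model of the proposed shape (all names illustrative, not
actual Beam API): "where" maps each element to one or more destinations, and
"how" supplies a WriteOperation-style init/write/finalize strategy the first
time a destination is seen at run time, so the set of destinations never has
to be known at pipeline construction time.

    // Toy model of the proposed Write.using(where, how) shape; illustrative only.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    public class DataDependentWriteSketch {
      /** Stand-in for WriteOperation's global init / write / global finalize hooks. */
      interface WriteOp<T> {
        void init();
        void write(T element);
        void finish();
      }

      /** "where" routes each element; "how" builds a strategy per new destination. */
      static <T, SinkT> void write(
          Iterable<T> input,
          Function<T, List<SinkT>> where,
          Function<SinkT, WriteOp<T>> how) {
        Map<SinkT, WriteOp<T>> ops = new HashMap<>();
        for (T element : input) {
          for (SinkT sink : where.apply(element)) {
            WriteOp<T> op = ops.computeIfAbsent(sink, s -> {
              WriteOp<T> o = how.apply(s);
              o.init();  // global init, once per destination
              return o;
            });
            op.write(element);
          }
        }
        ops.values().forEach(WriteOp::finish);  // global finalize per destination
      }
    }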



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
