beam-commits mailing list archives

From "peay (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (BEAM-1751) Singleton ByteKeyRange with BigtableIO and Dataflow runner
Date Fri, 24 Mar 2017 19:40:41 GMT

    [ https://issues.apache.org/jira/browse/BEAM-1751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15941015#comment-15941015 ]

peay commented on BEAM-1751:
----------------------------

I just ran it with {{0.7.0-SNAPSHOT}} from this morning -- same issue, nothing changed.

{code}
Exception: java.lang.IllegalArgumentException: Start [xxxx] must be less than end [xxxx]
org.apache.beam.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
org.apache.beam.sdk.io.range.ByteKeyRange.<init>(ByteKeyRange.java:288)
org.apache.beam.sdk.io.range.ByteKeyRange.withEndKey(ByteKeyRange.java:278)
org.apache.beam.sdk.io.gcp.bigtable.BigtableIO$BigtableSource.withEndKey(BigtableIO.java:728)
org.apache.beam.sdk.io.gcp.bigtable.BigtableIO$BigtableReader.splitAtFraction(BigtableIO.java:1034)
org.apache.beam.sdk.io.gcp.bigtable.BigtableIO$BigtableReader.splitAtFraction(BigtableIO.java:953)
org.apache.beam.runners.dataflow.DataflowUnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$ResidualSource.getCheckpointMark(DataflowUnboundedReadFromBoundedSource.java:530)
org.apache.beam.runners.dataflow.DataflowUnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.getCheckpointMark(DataflowUnboundedReadFromBoundedSource.java:386)
org.apache.beam.runners.dataflow.DataflowUnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.getCheckpointMark(DataflowUnboundedReadFromBoundedSource.java:283)
com.google.cloud.dataflow.worker.runners.worker.StreamingModeExecutionContext.flushState(StreamingModeExecutionContext.java:278)
{code}

When submitting the job, I see:
{code}
15:34:11.767 [main] INFO  o.a.b.sdk.io.gcp.bigtable.BigtableIO - About to split into bundles of size 805306368 with sampleRowKeys length 1 first element offset_bytes: 805306368
15:34:11.767 [main] INFO  o.a.b.sdk.io.gcp.bigtable.BigtableIO - Generated 1 splits. First split: BigtableSource{tableId=apRadioProperties-default, filter=null, range=ByteKeyRange{startKey=[], endKey=[]}, estimatedSizeBytes=805306368}
{code}

From the worker logs:
{code}
"Proposing to split ByteKeyRangeTracker{range=ByteKeyRange{startKey=[xxxx], endKey=[]}, position=null} at fraction 0.0 (key [xxxx])"
{code}

> Singleton ByteKeyRange with BigtableIO and Dataflow runner
> ----------------------------------------------------------
>
>                 Key: BEAM-1751
>                 URL: https://issues.apache.org/jira/browse/BEAM-1751
>             Project: Beam
>          Issue Type: Bug
>          Components: runner-dataflow, sdk-java-gcp
>    Affects Versions: 0.5.0
>            Reporter: peay
>            Assignee: Daniel Halperin
>
> I am getting this exception on a smallish table of a couple hundreds of rows from Bigtable,
when running on Dataflow with a single worker.
> This doesn't occur with the direct runner on my laptop, only when running on Dataflow.
Backtrace is from Beam 0.5.
> {code}
> java.lang.IllegalArgumentException: Start [xxxxxxxxxx] must be less than end [xxxxxxxxxx]
> 	at org.apache.beam.sdk.repackaged.com.google.common.base.Preconditions.checkArgument(Preconditions.java:146)
> 	at org.apache.beam.sdk.io.range.ByteKeyRange.<init>(ByteKeyRange.java:288)
> 	at org.apache.beam.sdk.io.range.ByteKeyRange.withEndKey(ByteKeyRange.java:278)
> 	at org.apache.beam.sdk.io.gcp.bigtable.BigtableIO$BigtableSource.withEndKey(BigtableIO.java:728)
> 	at org.apache.beam.sdk.io.gcp.bigtable.BigtableIO$BigtableReader.splitAtFraction(BigtableIO.java:1034)
> 	at org.apache.beam.sdk.io.gcp.bigtable.BigtableIO$BigtableReader.splitAtFraction(BigtableIO.java:953)
> 	at org.apache.beam.runners.dataflow.DataflowUnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$ResidualSource.getCheckpointMark(DataflowUnboundedReadFromBoundedSource.java:530)
> 	at org.apache.beam.runners.dataflow.DataflowUnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.getCheckpointMark(DataflowUnboundedReadFromBoundedSource.java:386)
> 	at org.apache.beam.runners.dataflow.DataflowUnboundedReadFromBoundedSource$BoundedToUnboundedSourceAdapter$Reader.getCheckpointMark(DataflowUnboundedReadFromBoundedSource.java:283)
> 	at com.google.cloud.dataflow.worker.runners.worker.StreamingModeExecutionContext.flushState(StreamingModeExecutionContext.java:278)
> 	at com.google.cloud.dataflow.worker.runners.worker.StreamingDataflowWorker.process(StreamingDataflowWorker.java:778)
> 	at com.google.cloud.dataflow.worker.runners.worker.StreamingDataflowWorker.access$700(StreamingDataflowWorker.java:105)
> 	at com.google.cloud.dataflow.worker.runners.worker.StreamingDataflowWorker$9.run(StreamingDataflowWorker.java:858)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> 	at java.lang.Thread.run(Thread.java:745)
> {code}
> This is in the log right before:
> {code}
> "Proposing to split ByteKeyRangeTracker{range=ByteKeyRange{startKey=[xxxxxxxxxx], endKey=[]}, position=null} at fraction 0.0 (key [xxxxxxxxxx])"
> {code}
> I have replaced the actual key with {{xxxxxxxxxx}}; it is the same key everywhere.
In https://github.com/apache/beam/blob/e68a70e08c9fe00df9ec163d1532da130f69588a/sdks/java/core/src/main/java/org/apache/beam/sdk/io/range/ByteKeyRange.java#L260,
the end position is obtained by truncating the fractional part of {{size * fraction}}, so
the resulting offset can be zero when {{fraction}} is small. However, {{ByteKeyRange}} does
not allow a singleton range. Since {{fraction}} is zero here, the call to {{splitAtFraction}}
fails.
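
The failure mode described above can be sketched as follows. This is a hypothetical simplification, not the real Beam code: the class and method names are illustrative, and plain {{long}} offsets stand in for byte keys. The point is that truncating {{size * fraction}} at fraction 0.0 yields offset zero, so the proposed split key equals the start key, and a {{start < end}} precondition on the range then throws.

```java
// Hypothetical simplification of the failing path (names are illustrative,
// not the actual ByteKeyRange/BigtableIO API). Offsets stand in for byte keys.
public class SplitAtFractionSketch {

    // Mirrors the truncation around ByteKeyRange.java#L260: the fractional
    // part of (end - start) * fraction is dropped, so a small enough fraction
    // maps to offset 0, i.e. the start key itself.
    static long interpolateOffset(long start, long end, double fraction) {
        return start + (long) ((end - start) * fraction);
    }

    // Mirrors the Preconditions.checkArgument in ByteKeyRange's constructor:
    // a singleton range (start == end) is rejected.
    static void newRange(long start, long end) {
        if (start >= end) {
            throw new IllegalArgumentException(
                "Start [" + start + "] must be less than end [" + end + "]");
        }
    }

    public static void main(String[] args) {
        long start = 100, end = 200;
        // Dataflow proposes a split at fraction 0.0 ...
        long splitKey = interpolateOffset(start, end, 0.0); // equals start
        try {
            // ... and building the new range [start, splitKey) produces a
            // singleton, so the precondition fails, as in the stack trace.
            newRange(start, splitKey);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

If I read the {{BoundedSource.BoundedReader}} contract correctly, an unsatisfiable split request is supposed to be declined by returning {{null}} from {{splitAtFraction}} rather than by throwing, so rejecting fractions that truncate back to the current start key would be one way to avoid this.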



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
