From: Pawel Szczur <pawelszczur@gmail.com>
Date: Tue, 31 May 2016 14:51:27 +0200
Subject: Re: Items not grouped correctly - CoGroupByKey - FlinkPipelineRunner
To: user@beam.incubator.apache.org

I've also added the test for GroupByKey. It fails. It kind of makes Flink broken at the moment, doesn't it?

I'm wondering: could it be related to some windowing issue?
2016-05-31 14:40 GMT+02:00 Pawel Szczur <pawelszczur@gmail.com>:
I've just tested it. It fails.

Also added the test to the repo: https://github.com/orian/cogroup-wrong-grouping

I reason this means that GroupByKey is flawed. If you open an official issue, please add it to this discussion.

2016-05-31 11:55 GMT+02:00 Aljoscha Krettek <aljoscha@apache.org>:
Does 2. work for the cases where CoGroupByKey fails? The reason I'm asking is that CoGroupByKey is essentially implemented like that internally: create tagged union -> flatten -> GroupByKey.
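
For illustration, a minimal sketch of that expansion — not Beam's actual internals. The inputs `records` and `configs` are hypothetical String-valued PCollections, the helper lives inside your pipeline class, and the API style follows the Beam Java SDK:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;

// Tag each element with an integer identifying its source collection.
static DoFn<KV<String, String>, KV<String, KV<Integer, String>>> tagWith(final int tag) {
  return new DoFn<KV<String, String>, KV<String, KV<Integer, String>>>() {
    @ProcessElement
    public void process(ProcessContext c) {
      c.output(KV.of(c.element().getKey(), KV.of(tag, c.element().getValue())));
    }
  };
}

// Tagged union -> flatten -> a single GroupByKey, as described above.
PCollection<KV<String, Iterable<KV<Integer, String>>>> grouped =
    PCollectionList
        .of(records.apply("TagRecords", ParDo.of(tagWith(0))))
        .and(configs.apply("TagConfigs", ParDo.of(tagWith(1))))
        .apply(Flatten.pCollections())
        .apply(GroupByKey.create());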

On Tue, 31 May 2016 at 01:16 Pawel Szczur <pawelszczur@gmail.com> wrote:
I've naively tried a few other key types; it seems to be unrelated to the key type.

For now I have two workarounds, and ignorance:
 1. If there is one dominant dataset and the other datasets are small (size << GB), I use a SideInput (see the sketch after this list).
 2. If I have multiple datasets of similar size, I enclose them in a common container, flatten them and GroupByKey.
 3. I measure occurrences and ignore the bug for now.
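
A minimal sketch of workaround 1, assuming the small Configs set fits in memory; `Config` and `Record` are placeholder types, not names from the repo:

import java.util.Map;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.transforms.View;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionView;

// Broadcast the small dataset to every worker instead of co-grouping.
final PCollectionView<Map<String, Config>> configView =
    configs.apply(View.<String, Config>asMap());

PCollection<KV<String, Record>> joined = records.apply(
    "JoinAgainstSideInput",
    ParDo.of(new DoFn<KV<String, Record>, KV<String, Record>>() {
      @ProcessElement
      public void process(ProcessContext c) {
        // Look the key up in the broadcast map; drop records without a Config.
        Config cfg = c.sideInput(configView).get(c.element().getKey());
        if (cfg != null) {
          c.output(c.element());
        }
      }
    }).withSideInputs(configView));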

Do you have an idea how a test for this could be constructed? It seems like it would be handy.

I also found two things, maybe they'll help you:
 1. the issue doesn't appear without parallelism
 2. the issue doesn't appear with tiny datasets

2016-05-30 17:13 GMT+02:00 Aljoscha Krettek <aljoscha@apache.org>:
You're right. I'm still looking into this; unfortunately I haven't made progress so far. I'll keep you posted.

On Sun, 29 May 2016 at 18:20 Pawel Szczur <pawelszczur@gmail.com> wrote:
Hi,

I used the config as in the repo.
Please grep the log for "hereGoesLongStringID0,2"; you will see that this key is processed multiple times.

This is how I understand CoGroupByKey: one has two (or more) PCollection<KV<K, ?>>. Both sets are grouped by key. For each unique key a KV<K, CoGbkResult> is produced; a given CoGbkResult contains all values from all input PCollections which have the given key.
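
That understanding matches the canonical usage; a minimal sketch, where `Record` and `Config` are placeholder types:

import org.apache.beam.sdk.transforms.join.CoGbkResult;
import org.apache.beam.sdk.transforms.join.CoGroupByKey;
import org.apache.beam.sdk.transforms.join.KeyedPCollectionTuple;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.TupleTag;

final TupleTag<Record> recordTag = new TupleTag<>();
final TupleTag<Config> configTag = new TupleTag<>();

// Expect exactly one KV<K, CoGbkResult> per unique key.
PCollection<KV<String, CoGbkResult>> grouped =
    KeyedPCollectionTuple.of(recordTag, records)
        .and(configTag, configs)
        .apply(CoGroupByKey.create());

// Per key: result.getAll(recordTag) yields that key's Records,
// result.getOnly(configTag, null) its Config (null if absent).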

But from the log it seems that each key produced more than one CoGbkResult.

The final counters didn't catch the bug because, in your case, the value from dataset1 was replicated for each key.

Cheers, Pawel

2016-05-29 15:59 GMT+02:00 Aljoscha Krettek <aljoscha@apache.org>:
Hi,
I ran your data generator with these configs:
p.apply(Create.of(new Config(3, 5, 600_000, 1)))
    .apply(ParDo.of(new Generator()))
    .apply(AvroIO.Write.to("/tmp/dataset1").withSchema(DumbData.class).withNumShards(6));

p.apply(Create.of(new Config(3, 5, 600_000, 2)))
    .apply(ParDo.of(new Generator()))
    .apply(AvroIO.Write.to("/tmp/dataset2").withSchema(DumbData.class).withNumShards(6));

Then I ran the job with parallelism=6. I couldn't reproduce the problem; this is the log file from one of several runs: https://gist.github.com/aljoscha/ef1d804f57671cd472c75b92b4aee51b
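
For anyone reproducing this, a sketch of setting the parallelism programmatically, assuming the Flink runner's FlinkPipelineOptions. Class names follow later Beam releases; the version in this thread still used FlinkPipelineRunner:

import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

FlinkPipelineOptions options =
    PipelineOptionsFactory.fromArgs(args).as(FlinkPipelineOptions.class);
options.setRunner(FlinkRunner.class);
options.setParallelism(6);  // the setting used in the runs above

Pipeline p = Pipeline.create(options);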

Could you please send me the exact config that you used? By the way, I ran it inside an IDE; do the problems also occur in the IDE for you, or only when you execute on a cluster?

Cheers,
Aljoscha

On Sun, 29 May 2016 at 01:51 Pawel Szczur <pawelszczur@gmail.com> wrote:
Hi Aljoscha.

I've created a repo with a fake dataset to make it easy to reproduce the problem:
https://github.com/orian/cogroup-wrong-grouping

What I noticed: if the dataset is too small, the bug doesn't appear.

You can modify the size of the dataset, but ideally it should be a few hundred thousand records per key (I guess it depends on the machine you run it on).

Cheers, Pawel

2016-05-28 12:45 GMT+02:00 Aljoscha Krettek <aljoscha@apache.org>:
Hi,
Which version of Beam/Flink are you using?

Could you maybe also provide example data and code that showcases the problem? If you have concerns about sending it to a public list, you can also send it to me directly.

Cheers,
Aljoscha

On Fri, 27 May 2016 at 20:53 Pawel Szczur <pawelszczur@gmail.com> wrote:
*Data description.*

I have two datasets.

Records - the first, contains around 0.5-1M records per (key, day). For testing I use 2-3 keys and 5-10 days of data. What I'm shooting for is 1000+ keys. Each record contains a key, a timestamp in microseconds and some other data.
Configs - the second, is rather small. It describes a key over time; you can think of it as a list of tuples: (key, start date, end date, description).

For the exploration I've encoded the data as files of length-prefixed, binary-encoded Protocol Buffer messages. Additionally, the files are compressed with gzip. Data is sharded by date. Each file is around 10MB.
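
A sketch of reading one such shard, assuming the length prefix is protobuf's standard varint framing (writeDelimitedTo/parseDelimitedFrom) and a generated message class named Record; the actual repo may frame and name things differently:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.GZIPInputStream;

// Decompress on the fly and read delimited messages until EOF.
static void readShard(String path) throws IOException {
  try (InputStream in = new GZIPInputStream(new FileInputStream(path))) {
    Record r;
    while ((r = Record.parseDelimitedFrom(in)) != null) {
      // handle one record, e.g. extract its key and timestamp_usec
    }
  }
}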

*Pipeline*

First I add keys to both datasets. For the Records dataset the key is (key, day-rounded timestamp). For Configs the key is (key, day), where day is each timestamp value between start date and end date (pointing to midnight).
The datasets are merged using CoGroupByKey.

As the key type I use org.apache.flink.api.java.tuple.Tuple2 with a Tuple2Coder from this repo.
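
For concreteness, a sketch of the keying step described above. `Record` and its accessors are placeholders, and the real pipeline also needs the Tuple2Coder registered for the key type:

import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.values.KV;
import org.apache.flink.api.java.tuple.Tuple2;

static final long MICROS_PER_DAY = 24L * 60 * 60 * 1_000_000L;

// Key each record by (key, timestamp rounded down to midnight).
class KeyByDay extends DoFn<Record, KV<Tuple2<String, Long>, Record>> {
  @ProcessElement
  public void process(ProcessContext c) {
    Record r = c.element();
    long day = (r.getTimestampUsec() / MICROS_PER_DAY) * MICROS_PER_DAY;
    c.output(KV.of(Tuple2.of(r.getKey(), day), r));
  }
}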

*The problem*

If the Records dataset is tiny, like 5 days, everything seems fine (check normal_run.log).

 INFO [main] (FlinkPipelineRunner.java:124) - Final aggregator values:
 INFO [main] (FlinkPipelineRunner.java:127) - item count : 4322332
 INFO [main] (FlinkPipelineRunner.java:127) - missing val1 : 0
 INFO [main] (FlinkPipelineRunner.java:127) - multiple val1 : 0

When I run the pipeline against 10+ days, I encounter an error indicating that for some Records there is no Config (wrong_run.log).

 INFO [main] (FlinkPipelineRunner.java:124) - Final aggregator values:
 INFO [main] (FlinkPipelineRunner.java:127) - item count : 8577197
 INFO [main] (FlinkPipelineRunner.java:127) - missing val1 : 6
 INFO [main] (FlinkPipelineRunner.java:127) - multiple val1 : 0


Then I've added some extra logging messages:

(ConvertToItem.java:144) - 68643 items for KeyValue3 on: 1462665600000000
(ConvertToItem.java:140) - no items for KeyValue3 on: 1463184000000000
(ConvertToItem.java:123) - missing for KeyValue3 on: 1462924800000000
(ConvertToItem.java:142) - 753707 items for KeyValue3 on: 1462924800000000 marked as no-loc
(ConvertToItem.java:123) - missing for KeyValue3 on: 1462752000000000
(ConvertToItem.java:142) - 749901 items for KeyValue3 on: 1462752000000000 marked as no-loc
(ConvertToItem.java:144) - 754578 items for KeyValue3 on: 1462406400000000
(ConvertToItem.java:144) - 751574 items for KeyValue3 on: 1463011200000000
(ConvertToItem.java:123) - missing for KeyValue3 on: 1462665600000000
(ConvertToItem.java:142) - 754758 items for KeyValue3 on: 1462665600000000 marked as no-loc
(ConvertToItem.java:123) - missing for KeyValue3 on: 1463184000000000
(ConvertToItem.java:142) - 694372 items for KeyValue3 on: 1463184000000000 marked as no-loc


You can spot that in the first line 68643 items were processed for KeyValue3 and time 1462665600000000.
Later on, in line 9, it seems the operation processes the same key again, but it reports that no Config was available for these Records.
Line 10 reports that they've been marked as no-loc.

Line 2 says that there were no items for KeyValue3 and time 1463184000000000, but in line 11 you can read that the items for this (key, day) pair were processed later and that they lacked a Config.

*Work-around (after more testing: it doesn't work; staying with Tuple2)*

I've switched from using Tuple2 to a Protocol Buffer message:

message KeyDay {
  optional bytes key = 1;
  optional int64 timestamp_usec = 2;
}

But using Tuple2.of() was just easier than KeyDay.newBuilder().setKey(...).setTimestampUsec(...).build().
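
If terseness is the only objection, a tiny factory gets close to Tuple2.of(); this is a hypothetical helper, assuming the generated KeyDay class from the message above:

import com.google.protobuf.ByteString;

// Convenience wrapper so call sites read like Tuple2.of(key, day).
static KeyDay keyDay(ByteString key, long timestampUsec) {
  return KeyDay.newBuilder()
      .setKey(key)
      .setTimestampUsec(timestampUsec)
      .build();
}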

// The original description comes from: http://stackoverflow.com/questions/37473682/items-not-groupped-correctly-cogroupbykey