Subject: Re: Re-sampling time data with MR job. Ideas
From: Mirko Kämpf <mirko.kaempf@gmail.com>
Date: Fri, 19 Sep 2014 10:36:48 +0100
To: user@hadoop.apache.org
If aggregation is used, than this happens on the reducer later on or maybe already in the combiner, but here you have to think about the right types for the MapOutputKey. Cheers, Mirko 2014-09-19 10:06 GMT+01:00 Georgi Ivanov : > Hi Mirko, > Thanks for the reply. > > Lets assume i have a record every 1 second for every given entity. > > entity_id | timestamp | data > > 1 , 2014-01-01 12:13:01 - i want this > ..some more for different entity > 1 , 2014-01-01 12:13:02 > 1 , 2014-01-01 12:13:03 > 1 , 2014-01-01 12:13:04 > 1 , 2014-01-01 12:13:05 > ........ > 1 , 2014-01-01 12:23:01 - I want this > 1 , 2014-01-01 12:23:02 > > The problem is that in reality this is not coming sorted by entity_id , > timestamp > so i can't filter in the mapper . > The mapper will get different entity_id's and based on the input split. > > > > Georgi > > > On 19.09.2014 10:34, Mirko K=C3=A4mpf wrote: > > Hi Georgi, > > I would already emit the new time stamp (with resolution 10 min) in the > mapper. This allows you to (pre)aggregate the data already in the mapper > and you have less traffic during the shuffle & sort stage. Changing the > resolution means you have to aggregate the individual entities or do you > still need all individual entities and just want to translate the timesta= mp > to another resolution (5s =3D> 10 min)? > > Cheers, > Mirko > > > > > 2014-09-19 9:17 GMT+01:00 Georgi Ivanov : > >> Hello, >> I have time related data like this : >> entity_id, timestamp , data >> >> The resolution of the data is something like 5 seconds. >> I want to extract the data with 10 minutes resolution. >> >> So what i can do is : >> Just emit everything in the mapper as data is not sorted there . >> Emit only every 10 minutes from reducer. The reducer is receiving data >> sorted by entity_id,timestamp pair (secondary sorting) >> >> This will work fine, but it will take forever, since i have to process >> TB's of data. >> Also the data emitted to the reducer will be huge( as i am not filtering >> in map phase at all) and the number of reducers is much smaller than the >> number of mappers. >> >> Are there any better ideas how to do this ? >> >> Georgi >> > > > --001a11c2c0a846febc050367d8df Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable
I would only change the time resolution:

1 , 2014-01-01 12:13:02
1 , 2014-01-01 12:13:03
...
2 , 2014-01-01 12:23:04
2 , 2014-01-01 12:24:05

==>

1 , 2014-01-01 12:10:00
1 , 2014-01-01 12:10:00
..
2 , 2014-01-01 12:20:00
2 , 2014-01-01 12:20:00

It is all about selecting the right (k,v) types going out of the mapper, and this depends on what you really want to do. If transforming the timestamp is the only task, then a map-only job will also work.

This is just a transformation of the individual data point. Resolution goes from 5 s to 10 min. No need for any order in this case.
Even if a data point with an earlier timestamp arrives later, it still works.
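
A rough sketch of such a map-only mapper (hedged: the comma-separated line layout and all class names are just assumptions for illustration; run it with job.setNumReduceTasks(0) so no shuffle happens at all):

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Map-only resampling: rewrite each record's timestamp from 5 s to
// 10 min resolution and pass the record through otherwise unchanged.
public class ResampleMapper
    extends Mapper<LongWritable, Text, Text, NullWritable> {

  private static final long BUCKET_MS = 10L * 60 * 1000; // 10 minutes
  private final SimpleDateFormat fmt =
      new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
  private final Text out = new Text();

  @Override
  protected void map(LongWritable key, Text value, Context ctx)
      throws IOException, InterruptedException {
    // Expected line: "entity_id , yyyy-MM-dd HH:mm:ss , data"
    String[] f = value.toString().split("\\s*,\\s*", 3);
    try {
      long t = fmt.parse(f[1]).getTime();
      long bucket = (t / BUCKET_MS) * BUCKET_MS; // truncate to 10 min
      out.set(f[0] + " , " + fmt.format(new Date(bucket))
          + (f.length > 2 ? " , " + f[2] : ""));
      ctx.write(out, NullWritable.get());
    } catch (ParseException e) {
      ctx.getCounter("resample", "bad_records").increment(1); // skip bad lines
    }
  }
}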

If aggregation is used, then this happens later on the reducer, or maybe already in the combiner, but there you have to think about the right types for the MapOutputKey.
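
If, say, a count per entity and 10-minute window is wanted, one possible shape (assuming a Text MapOutputKey like "entity_id<TAB>bucket_timestamp" and LongWritable counts; an average would need a sum/count pair instead):

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Works as both combiner and reducer, because summing partial counts
// is associative and commutative.
public class SumReducer
    extends Reducer<Text, LongWritable, Text, LongWritable> {

  private final LongWritable total = new LongWritable();

  @Override
  protected void reduce(Text key, Iterable<LongWritable> values, Context ctx)
      throws IOException, InterruptedException {
    long sum = 0;
    for (LongWritable v : values) {
      sum += v.get();
    }
    total.set(sum);
    ctx.write(key, total);
  }
}

Wire it in with job.setMapOutputKeyClass(Text.class), job.setMapOutputValueClass(LongWritable.class), job.setCombinerClass(SumReducer.class) and job.setReducerClass(SumReducer.class).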

Cheers,
Mirko



2014-09-19 10:06 GMT+01:00 Georgi Ivanov <ivanov@vesseltracker.com>:
Hi Mirko,
Thanks for the reply.

Let's assume I have a record every 1 second for each given entity.
entity_id | timestamp | data

1 , 2014-01-01 12:13:01 - I want this
..some more rows for different entities
1 , 2014-01-01 12:13:02
1 , 2014-01-01 12:13:03
1 , 2014-01-01 12:13:04
1 , 2014-01-01 12:13:05
........
1 , 2014-01-01 12:23:01 - I want this
1 , 2014-01-01 12:23:02

The problem is that in reality the data does not arrive sorted by the entity_id, timestamp pair,
so I can't filter in the mapper.
The mapper will get different entity_id's depending on the input split.



Georgi


On 19.09.2014 10:34, Mirko Kämpf wrote:
Hi Georgi,

I would already emit the new timestamp (with resolution 10 min) in the mapper. This allows you to (pre)aggregate the data already in the mapper, and you have less traffic during the shuffle & sort stage. Changing the resolution means you have to aggregate the individual entities; or do you still need all individual entities and just want to translate the timestamp to another resolution (5 s => 10 min)?
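
(One concrete reading of "(pre)aggregate in the mapper" is classic in-mapper combining: buffer partial counts per key and flush them once in cleanup(). A sketch, where the bucketKey() helper and the fixed-width timestamp layout are assumptions:)

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// In-mapper combining: accumulate partial counts per (entity, bucket)
// and emit them once per task; this cuts shuffle traffic even further
// than a separate combiner.
public class PreAggMapper
    extends Mapper<LongWritable, Text, Text, LongWritable> {

  private final Map<String, Long> partial = new HashMap<String, Long>();

  @Override
  protected void map(LongWritable key, Text value, Context ctx) {
    String k = bucketKey(value.toString());
    Long c = partial.get(k);
    partial.put(k, c == null ? 1L : c + 1L);
  }

  @Override
  protected void cleanup(Context ctx)
      throws IOException, InterruptedException {
    Text outKey = new Text();
    LongWritable outVal = new LongWritable();
    for (Map.Entry<String, Long> e : partial.entrySet()) {
      outKey.set(e.getKey());
      outVal.set(e.getValue());
      ctx.write(outKey, outVal);
    }
  }

  // Hypothetical helper: "1 , 2014-01-01 12:13:02 , data" becomes
  // "1<TAB>2014-01-01 12:10:00" (keeps the tens digit of the minute,
  // zeroes the rest; assumes the fixed-width timestamp format).
  private String bucketKey(String line) {
    String[] f = line.split("\\s*,\\s*", 3);
    return f[0] + "\t" + f[1].substring(0, 15) + "0:00";
  }
}

If the per-task key space is large, flush the map whenever it exceeds a size threshold to bound memory.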

Cheers,
Mirko




2014-09-19 9:17 GMT+01:00 Georgi Ivanov <ivanov@vesseltracker.com>:
Hello,
I have time-related data like this:
entity_id, timestamp, data

The resolution of the data is something like 5 seconds.
I want to extract the data with 10-minute resolution.

So what I can do is:
Just emit everything in the mapper, as the data is not sorted there.
Emit only every 10 minutes from the reducer. The reducer receives the data sorted by the entity_id, timestamp pair (secondary sorting).
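
(For reference, the reducer-side filtering described here could look roughly like this; the secondary-sort plumbing, the value layout, and all names are assumptions:)

import java.io.IOException;
import java.text.ParseException;
import java.text.SimpleDateFormat;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Reducer-side downsampling: values arrive sorted by timestamp per entity
// (secondary sort assumed); keep one record per 10-minute window.
public class DownsampleReducer extends Reducer<Text, Text, Text, Text> {

  private static final long WINDOW_MS = 10L * 60 * 1000;
  private final SimpleDateFormat fmt =
      new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");

  @Override
  protected void reduce(Text entity, Iterable<Text> values, Context ctx)
      throws IOException, InterruptedException {
    long lastEmitted = Long.MIN_VALUE;
    for (Text v : values) {
      long t;
      try {
        // Assumes each value starts with "yyyy-MM-dd HH:mm:ss".
        t = fmt.parse(v.toString().substring(0, 19)).getTime();
      } catch (ParseException e) {
        continue; // skip malformed records
      }
      if (lastEmitted == Long.MIN_VALUE || t - lastEmitted >= WINDOW_MS) {
        ctx.write(entity, v);
        lastEmitted = t;
      }
    }
  }
}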

This will work fine, but it will take forever, since I have to process TBs of data.
Also, the data emitted to the reducers will be huge (as I am not filtering in the map phase at all), and the number of reducers is much smaller than the number of mappers.

Are there any better ideas on how to do this?

Georgi


