Subject: Re: Deal with duplicates in Flume with a crash.
From: Guillermo Ortiz
To: user@flume.apache.org
Date: Thu, 4 Dec 2014 09:14:11 +0100

What I don't understand is that you are getting a UUID for sets of 1000 lines, am I right? How could you know whether there are duplicates if you are evaluating sets of lines and not checking line by line with a UUID?

I thought that what you were doing was:
1. Get a line from source X.
2. Calculate a UUID for that single line with an interceptor.
3. Have another interceptor check this UUID in HBase. If it doesn't exist, send the event to the channel and put the UUID in HBase.

If you are grouping the lines, aren't you checking duplicates at the set level? Maybe you're checking the UUID in the sink, although I see the same problem there. Where am I wrong?
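For concreteness, here is a minimal sketch of the per-line flow in steps 1-3: look the event's UUID up in HBase, and record it once the event has been handled. This is not code from the thread; the UuidDedupStore class, the "flume_dedup" table, the "d"/"seen" column family and qualifier, and the "id" header name (the header the UUID interceptor writes, as far as I recall) are all assumptions for illustration.

    // Sketch only: HBase-backed "have we seen this UUID before?" store (HBase 0.98-era client API).
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HConnection;
    import org.apache.hadoop.hbase.client.HConnectionManager;
    import org.apache.hadoop.hbase.client.HTableInterface;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class UuidDedupStore {
        private static final byte[] CF = Bytes.toBytes("d");      // column family (assumed)
        private static final byte[] COL = Bytes.toBytes("seen");  // qualifier (assumed)
        private final HTableInterface table;

        public UuidDedupStore() throws Exception {
            HConnection conn = HConnectionManager.createConnection(HBaseConfiguration.create());
            this.table = conn.getTable("flume_dedup");             // table name is made up
        }

        // Has this UUID been recorded already? If yes, the event is a duplicate and can be dropped.
        public boolean alreadySeen(String uuid) throws Exception {
            return table.exists(new Get(Bytes.toBytes(uuid)));
        }

        // Record the UUID only after the event has actually been written out,
        // so a failed/rolled-back write leaves no trace and the retry still goes through.
        public void markSeen(String uuid) throws Exception {
            Put put = new Put(Bytes.toBytes(uuid));
            put.add(CF, COL, Bytes.toBytes(System.currentTimeMillis()));
            table.put(put);
        }
    }

Note that a separate exists() check followed by a later put() leaves exactly the small window Mike describes below, where two agents can both miss the row and both write; an atomic checkAndPut would close that window, at the cost of recording the UUID before the write has succeeded.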
2014-12-04 0:50 GMT+01:00 Mike Keane :
> I'm not sure I understand your question, but I'll be the first to admit this is not foolproof.
>
> That said, here are a couple of inherent risks I am taking. Assume FlumeEventA is one of 1000 events in a batch. If FlumeEventA makes it to FlumeAgent1 but the batch fails, it is entirely possible that when the batch is resent it goes to FlumeAgent2. Now this event is on 2 separate file channels, separate JVMs and separate servers. It is possible, but extremely unlikely, that FlumeEventA is processed at the exact same time in FlumeAgent1 and FlumeAgent2. Both agents pop the event off the channel, pull the UUID off the header and check whether it is in HBase. Both do not find it, so both write to HDFS and we have a duplicate. Considering the architecture, we believe the odds of this are incredibly small and we are OK with the risk.
>
> Since the write to HDFS is in a transaction, if it fails I don't do an HBase put of the UUID; the transaction rolls back and we try again. I did a fair amount of studying of the sink and BucketWriter code at the time to understand what the fail conditions are when writing to HDFS. If I remember right, it could fail creating the file, writing to the file, closing the file and renaming the file. We all have our own SLAs to meet. After a pretty thorough review and a fair amount of testing, we were comfortable this met our SLA better than a MapReduce job to dedupe 90 billion log lines per day. [A rough sketch of this check/write/put ordering follows the quoted thread below.]
>
> Joey Echeverria wrote:
>
> What happens if the write to HDFS succeeds before the HBase put?
>
> -Joey
>
> On Wed, Dec 3, 2014 at 2:35 PM, Mike Keane wrote:
>> We effectively mitigated this problem by using the UUID interceptor and customizing the HDFS Sink to do a check and put of the UUID to HBase. In the customized sink we check HBase to see if we have seen the UUID before; if we have, it is a duplicate, we log a new duplicate metric alongside the existing sink metrics and throw the event away. If we have not seen the UUID before, we write the Event to HDFS and do a put of the UUID to HBase.
>>
>> Because of our volume, to minimize the number of check/puts to HBase we put multiple logs in a single FlumeEvent.
>>
>> -Mike
>>
>> ________________________________________
>> From: Guillermo Ortiz [konstt2000@gmail.com]
>> Sent: Wednesday, December 03, 2014 4:15 PM
>> To: user@flume.apache.org
>> Subject: Re: Deal with duplicates in Flume with a crash.
>>
>> I didn't know anything about a Hive Sink, I'll check the JIRA about it, thanks.
>> The pipeline is Flume-Kafka-SparkStreaming-XXX
>>
>> So I guess I should deal with it in Spark Streaming, right? I guess it would be easy to do with a UUID interceptor, or is there an easier way? [A sketch of UUID filtering at the Spark Streaming stage appears at the end of this message.]
>>
>> 2014-12-03 22:56 GMT+01:00 Roshan Naik :
>>> Using the UUID interceptor at the source closest to data origination will help identify duplicate events after they are delivered.
>>>
>>> If it satisfies your use case, the upcoming Hive Sink will mitigate the problem a little bit (since it uses transactions to write to the destination).
>>>
>>> -roshan
>>>
>>>
>>> On Wed, Dec 3, 2014 at 8:44 AM, Joey Echeverria wrote:
>>>>
>>>> There's nothing built into Flume to deal with duplicates; it only provides at-least-once delivery semantics.
>>>>
>>>> You'll have to handle it in your data processing applications or add an ETL step to deal with duplicates before making the data available for other queries.
>>>>
>>>> -Joey
>>>>
>>>> On Wed, Dec 3, 2014 at 5:46 AM, Guillermo Ortiz wrote:
>>>> > Hi,
>>>> >
>>>> > I would like to know if there's an easy way to deal with data
>>>> > duplication when an agent crashes and it resends the same data again.
>>>> >
>>>> > Is there any mechanism to deal with it in Flume?
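To make the ordering Mike describes concrete, here is a rough sketch of what a deduplicating sink's process() loop can look like: check HBase first, write to HDFS inside the channel transaction, and only put the UUID once the write has gone through, so a rollback leaves nothing in HBase and the redelivered event is not mistaken for a duplicate. This is not the customized HDFS sink from the thread: the class name, the writeToHdfs() stand-in, the plain duplicate counter and the reuse of UuidDedupStore from the first sketch are illustrative, and the real sink works on whole batches rather than one event at a time.

    // Sketch only: per-event version of the check / write / put ordering discussed above.
    import org.apache.flume.Channel;
    import org.apache.flume.Event;
    import org.apache.flume.EventDeliveryException;
    import org.apache.flume.Transaction;
    import org.apache.flume.sink.AbstractSink;

    public class DedupingHdfsSinkSketch extends AbstractSink {
        private UuidDedupStore dedupStore;   // HBase-backed store from the first sketch
        private long duplicateCount;         // stand-in for the extra "duplicates" sink metric

        @Override
        public synchronized void start() {
            super.start();
            try {
                dedupStore = new UuidDedupStore();
            } catch (Exception e) {
                throw new RuntimeException("Could not open the HBase dedup table", e);
            }
        }

        @Override
        public Status process() throws EventDeliveryException {
            Channel channel = getChannel();
            Transaction txn = channel.getTransaction();
            txn.begin();
            try {
                Event event = channel.take();
                if (event == null) {
                    txn.commit();
                    return Status.BACKOFF;
                }
                String uuid = event.getHeaders().get("id");    // header written by the UUID interceptor (assumed name)
                if (dedupStore.alreadySeen(uuid)) {
                    duplicateCount++;                          // duplicate: count it and drop the event
                    txn.commit();
                    return Status.READY;
                }
                writeToHdfs(event);                            // can fail creating, writing, closing or renaming the file
                dedupStore.markSeen(uuid);                     // only reached if the HDFS write did not throw
                txn.commit();
                return Status.READY;
            } catch (Exception e) {
                txn.rollback();                                // nothing recorded in HBase for a failed write
                throw new EventDeliveryException("Failed to deliver event", e);
            } finally {
                txn.close();
            }
        }

        private void writeToHdfs(Event event) throws Exception {
            // stand-in for the real HDFS sink / BucketWriter path
        }
    }

Joey's question above is exactly about the gap between the HDFS write and the HBase put: if the process dies between those two steps, the UUID is never recorded and the redelivered event is written a second time, which is the residual duplicate risk Mike accepts.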
>>>>
>>>> --
>>>> Joey Echeverria
>
>
> --
> Joey Echeverria
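Since Guillermo's pipeline is Flume-Kafka-SparkStreaming-XXX, the same UUID header can also be used to drop re-delivered events at the Spark Streaming stage instead of in a custom sink. A sketch against the Spark 1.x Java API, assuming each record arrives as "uuid<TAB>payload" and reusing the hypothetical UuidDedupStore from the first sketch:

    // Sketch only: keep only first-seen UUIDs in each micro-batch, backed by the HBase store.
    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.streaming.api.java.JavaDStream;

    public class StreamingDedup {

        // Given a DStream of raw lines (e.g. the values read from Kafka),
        // filter out records whose UUID is already recorded in HBase.
        public static JavaDStream<String> dedupe(JavaDStream<String> lines) {
            return lines.transform((JavaRDD<String> rdd) ->
                rdd.mapPartitions((Iterator<String> partition) -> {
                    UuidDedupStore store = new UuidDedupStore();  // one HBase connection per partition
                    List<String> kept = new ArrayList<String>();
                    while (partition.hasNext()) {
                        String line = partition.next();
                        String uuid = line.split("\t", 2)[0];     // assumed layout: uuid<TAB>payload
                        if (!store.alreadySeen(uuid)) {
                            store.markSeen(uuid);
                            kept.add(line);
                        }
                    }
                    return kept;                                   // Spark 1.x mapPartitions expects an Iterable
                })
            );
        }
    }

Opening the store inside mapPartitions keeps the HBase connection on the executors rather than in the driver; the check-then-mark pair still has the same small duplicate window discussed earlier in the thread.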