Date: Mon, 23 Apr 2012 23:55:44 -0400 (EDT)
From: Mark Grover <mgrover@oanda.com>
To: user@hive.apache.org
Cc: jqcoffey@gmail.com
Subject: Re: Lifecycle and Configuration of a hive UDF

Added a tiny blurb here:
https://cwiki.apache.org/confluence/display/Hive/LanguageManual+UDF#LanguageManualUDF-UDFinternals

Comments/suggestions welcome! Thanks for bringing it up, Justin.

Mark

Mark Grover, Business Intelligence Analyst
OANDA Corporation
www: oanda.com  www: fxtrade.com
e: mgrover@oanda.com
"Best Trading Platform" - World Finance's Forex Awards 2009.
"The One to Watch" - Treasury Today's Adam Smith Awards 2009.

----- Original Message -----
From: "Justin Coffey" <jqcoffey@gmail.com>
To: user@hive.apache.org
Sent: Monday, April 23, 2012 5:19:15 AM
Subject: Re: Lifecycle and Configuration of a hive UDF

Hello All,

Thank you all for the responses. I can confirm that the lag function implementation works in my case:

create temporary function lag as 'com.example.hive.udf.Lag';
select session_id, hit_datetime_gmt, lag(hit_datetime_gmt, session_id)
from (
  select session_id, hit_datetime_gmt
  from omni2
  where visit_day = '2012-01-12' and session_id is not null
  distribute by session_id
  sort by session_id, hit_datetime_gmt
) X
distribute by session_id
limit 1000;

For rank it looks like:

create temporary function rank as 'com.example.hadoop.hive.udf.Rank';
select user_id, time, rank(user_id) as rank
from (
  select user_id, time
  from log
  where day = '2012-04-01' and hour = 7
  distribute by user_id
  sort by user_id, time
) X
distribute by user_id
limit 2000;

As mentioned by others, this appears to force the UDF to be executed reduce-side.
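[Editorial aside: the actual com.example.hive.udf.Lag class is Java and is not shown in this thread, but the stateful behavior the queries above rely on — the UDF remembering the previous value within each distributed-and-sorted key group — can be sketched in a few lines. The names and sample values below are hypothetical, not taken from the real implementation.]

```python
def make_lag():
    """Simulate a stateful lag(value, key) call: return the previous
    value seen for the same key, or None at a key boundary."""
    state = {"key": None, "prev": None}

    def lag(value, key):
        out = state["prev"] if state["key"] == key else None
        state["key"], state["prev"] = key, value
        return out

    return lag

# Rows as they arrive at one reducer: distributed by session_id,
# sorted by (session_id, hit_datetime_gmt).
rows = [
    ("s1", "09:00"), ("s1", "09:05"), ("s1", "09:07"),
    ("s2", "10:00"), ("s2", "10:01"),
]
lag = make_lag()
result = [(sid, t, lag(t, sid)) for sid, t in rows]
# result -> [("s1", "09:00", None), ("s1", "09:05", "09:00"), ...]
```

This is exactly why the inner `distribute by` / `sort by` matters: the state is only meaningful if all rows for a session arrive at the same task, in order.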
At least, I can't figure out how it works otherwise, because only one MapReduce job is created (with multiple reducers).

As a note to the documentation maintainers, it might be nice to have the procedural workflow of UDF/UDTF/UDAFs documented in the wiki. I know it is logical that an aggregation function happens reducer-side, but I think there is sufficient complexity in an SQL-to-MR translator that it is worth the effort to explicitly document it and the other function types (or please just bludgeon me over the head if I happened to miss it). Not to be pedantic, but, for example, the UDAF case study doc does not even mention the word "reduce": https://cwiki.apache.org/Hive/genericudafcasestudy.html

Thanks again for all the pointers!

-Justin

On Fri, Apr 20, 2012 at 8:18 PM, Alex Kozlov <alexvk@cloudera.com> wrote:

You might also look at http://www.quora.com/Hive-computing/How-are-SQL-type-analytic-and-windowing-functions-accomplished-in-Hadoop-Hive for a way to utilize secondary sort for analytic windowing functions.

RANK() OVER(...) requires grouping and sorting. While it could be done in the mapper or reducer stage, it is better to utilize Hadoop's shuffle to accomplish both. The disadvantage may be that you can compute only one RANK() per MapReduce job.

-- Alex K

On Fri, Apr 20, 2012 at 10:54 AM, Philip Tromans <philip.j.tromans@gmail.com> wrote:

Have a read of the thread "Lag function in Hive", linked from: http://mail-archives.apache.org/mod_mbox/hive-user/201204.mbox/thread

There's an example of how to force a function to run reduce-side. I've written a UDF which replicates RANK() OVER (...), but it requires the syntactic sugar given in that thread. I'd like to make changes to the Hive query planner at some point, so that you can annotate a UDF with a "run on reducer" hint, and after that I'd happily open-source everything. If you want more details on how to implement your own partitionedRowNumber() UDF, I'd be happy to elaborate.
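[Editorial aside: the secondary-sort pattern Alex and Phil describe — let the shuffle do the grouping and sorting, then compute the window function in a single pass per partition — can be simulated outside Hadoop. The sketch below is a hypothetical illustration of that data flow, not code from the thread.]

```python
from itertools import groupby

def windowed_rank(rows, key, order):
    """Simulate distribute-by + sort-by (the shuffle with a secondary
    sort) followed by a single-pass, per-partition rank."""
    # The shuffle delivers rows grouped by key and sorted within each key.
    shuffled = sorted(rows, key=lambda r: (key(r), order(r)))
    out = []
    for _, group in groupby(shuffled, key=key):
        # One sequential pass per partition, as a reducer would do.
        for i, row in enumerate(group, start=1):
            out.append(row + (i,))
    return out

rows = [("u2", 5), ("u1", 3), ("u1", 1), ("u2", 2)]
ranked = windowed_rank(rows, key=lambda r: r[0], order=lambda r: r[1])
# ranked -> [("u1", 1, 1), ("u1", 3, 2), ("u2", 2, 1), ("u2", 5, 2)]
```

The single `sorted(...)` call stands in for what the shuffle gives you for free; that is also why only one such window per job is cheap, as Alex notes — each additional RANK() with a different partitioning would need its own shuffle.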
Cheers,
Phil.

On 20 April 2012 18:35, Mark Grover <mgrover@oanda.com> wrote:

> Hi Ranjan and Justin,
>
> As per my understanding, the scope of a UDF is only one row of data at a time. Therefore, it can all be done map-side, without the reducer being involved. Now, depending on where you are storing the result of the query, your query may have reducers that do something.
>
> A simple query like the one Ranjan mentioned:
>
> select MyUDF(field1, field2) from table;
>
> should have the UDF's execute() called in the map phase.
>
> Now to Justin's question: the rank function ( http://msdn.microsoft.com/en-us/library/ms176102%28v=sql.110%29.aspx ) has a syntax like:
>
> RANK ( ) OVER ( [ partition_by_clause ] order_by_clause )
>
> Rank works on a collection of rows (distributed by some column - the same one you would use in your partition_by_clause in MS SQL). You can accomplish that using a UDAF (read more at https://cwiki.apache.org/Hive/genericudafcasestudy.html ) or by writing a custom reducer (see https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Transform ).
>
> I don't think rank can be done using a UDF.
>
> Good luck!
>
> Mark
>
> Mark Grover, Business Intelligence Analyst
> OANDA Corporation
>
> ----- Original Message -----
> From: "Justin Coffey" <jqcoffey@gmail.com>
> To: user@hive.apache.org
> Sent: Thursday, April 19, 2012 10:29:11 AM
> Subject: Re: Lifecycle and Configuration of a hive UDF
>
> Hello All,
>
> I second this question. I have an MS SQL-style "rank" function which I would like to run; the results it gives appear to suggest it is executed mapper-side as opposed to reducer-side, even when run with "cluster by" constraints.
> -Justin
>
> On Thu, Apr 19, 2012 at 1:21 AM, Ranjan Bagchi <ranjan@powerreviews.com> wrote:
>
> Hi,
>
> What's the lifecycle of a Hive UDF? If I call
>
> select MyUDF(field1, field2) from table;
>
> then is MyUDF instantiated once per mapper, with execute(field1, field2) called within that mapper for each row? I hope this is the case, but I can't find anything about this in the documentation.
>
> I'd also like to have some run-time configuration of my UDF, and I'm curious how people do this. Is there a way I can send it a value, or have it access a file, etc.? How about performing a query against the Hive store?
>
> Thanks,
>
> Ranjan

--
jqcoffey@gmail.com
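[Editorial aside: the lifecycle Ranjan asks about — one UDF instance per task, with execute() invoked once per row on only that row's fields — is what Mark's answer describes. A hypothetical simulation of that contract (class and field names invented for illustration, not Hive API):]

```python
class MyUDF:
    """Simulate the lifecycle described in the thread: the framework
    constructs one instance per (map) task, then calls execute() once
    per row, passing only that row's fields."""

    def __init__(self):
        # Per-instance state lives for the duration of the task,
        # which is why a stateful UDF can work at all.
        self.calls = 0

    def execute(self, field1, field2):
        self.calls += 1
        return f"{field1}:{field2}"

udf = MyUDF()  # instantiated once for the task
out = [udf.execute(a, b) for a, b in [("x", 1), ("y", 2)]]
# out -> ["x:1", "y:2"]; udf.calls -> 2
```

Under this model, a plain row-at-a-time UDF never sees neighboring rows, which is Mark's point about rank; the lag/rank trick earlier in the thread works only because per-instance state survives across execute() calls within one task.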