From: Ted Dunning
Date: Fri, 19 Oct 2012 12:45:17 -0700
Subject: Re: rules engine with Hadoop
To: user@hadoop.apache.org

Unification in a parallel cluster is a difficult problem. Writing very large scale unification programs is an even harder problem.

What problem are you trying to solve?

One option would be that you need to evaluate a conventionally-sized rulebase against many inputs. Map-reduce should be trivially capable of this.

Another option would be that you want to evaluate a huge rulebase against a few inputs. It isn't clear that this would be useful given the problems of huge rulebases and the typically super-linear cost of resolution algorithms.

Another option is that you want to evaluate many conventionally-sized rulebases against one or many inputs in order to implement a boosted rule engine. Map-reduce should be relatively trivial for this as well.

What is it that you are trying to do?
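To make the first option concrete, here is a minimal toy sketch of the map side: every mapper loads the same small rulebase once in setup() and evaluates it against each input record independently. The "rules" here are just keyword-to-label pairs read from a made-up configuration key; a real job would embed an actual engine (Drools, Jess, ...) at that point instead.

    import java.io.IOException;
    import java.util.HashMap;
    import java.util.Map;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Toy illustration of "one conventionally-sized rulebase, many inputs".
    // The rulebase is loaded once per mapper; each record is evaluated independently.
    public class RuleEvalMapper extends Mapper<LongWritable, Text, Text, Text> {

      private final Map<String, String> rules = new HashMap<String, String>();

      @Override
      protected void setup(Context context) {
        // Hypothetical config format: rules.spec = "error=ALERT,timeout=RETRY"
        String spec = context.getConfiguration().get("rules.spec", "");
        for (String rule : spec.split(",")) {
          String[] parts = rule.split("=");
          if (parts.length == 2) {
            rules.put(parts[0], parts[1]);
          }
        }
      }

      @Override
      protected void map(LongWritable key, Text value, Context context)
          throws IOException, InterruptedException {
        String record = value.toString();
        // Emit (label, record) for every rule whose condition matches this record.
        for (Map.Entry<String, String> rule : rules.entrySet()) {
          if (record.contains(rule.getKey())) {
            context.write(new Text(rule.getValue()), value);
          }
        }
      }
    }

Scaling out is then just a matter of how many input splits you have; the rulebase itself never needs to be distributed beyond shipping it to each task.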
On Fri, Oct 19, 2012 at 12:25 PM, Luangsay Sourygna wrote:
> Hi,
>
> Does anyone know of any (open source) project that builds a rules engine
> (based on RETE) on top of Hadoop?
> Searching a bit on the net, I have only seen a small reference to
> Concord/IBM, but there is barely any information available (and it is
> surely not open source).
>
> Alpha and beta memories would be stored in HBase. Should be possible, no?
>
> Regards,
>
> Sourygna
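On the HBase point above: mechanically, writing alpha- or beta-memory entries as rows is straightforward. Below is a very rough sketch against the 0.94-era HBase client API; the table name, column family, row-key layout, and fact encoding are invented purely for illustration, and the hard parts (key design, token retraction, concurrency) are not addressed here.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    // Sketch: persist one alpha-memory entry (a fact that passed a condition node).
    public class AlphaMemoryStore {

      public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "alpha_memory");   // hypothetical table
        try {
          String conditionId = "cond-42";    // which alpha node matched
          String factId = "fact-0001";       // which working-memory element
          // Row key = condition id + fact id, so one condition's memory is a contiguous scan.
          Put put = new Put(Bytes.toBytes(conditionId + "|" + factId));
          put.add(Bytes.toBytes("m"), Bytes.toBytes("fact"),
                  Bytes.toBytes("{\"severity\":\"high\"}"));
          table.put(put);
        } finally {
          table.close();
        }
      }
    }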