hadoop-user mailing list archives

From Luangsay Sourygna <luang...@gmail.com>
Subject Re: rules engine with Hadoop
Date Sat, 20 Oct 2012 14:24:53 GMT
In your RETE implementation, did you rely solely on RAM to store the
alpha and beta memories?
What if there is a huge number of facts/WMEs/nodes and you have to
retain them for quite a long period (I mean: what happens if the
alpha and beta memories grow larger than the RAM of your server)?
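
For context, here is a minimal sketch (in Java, with made-up class and
field names, just to illustrate what I mean by alpha and beta memories):

    import java.util.ArrayList;
    import java.util.List;

    // A working memory element (WME): one fact, e.g. (B1 ^color red).
    class WME {
        final String id, attribute, value;
        WME(String id, String attribute, String value) {
            this.id = id;
            this.attribute = attribute;
            this.value = value;
        }
    }

    // Alpha memory: all WMEs that passed one constant test
    // (e.g. attribute == "color").
    class AlphaMemory {
        final List<WME> items = new ArrayList<WME>();
    }

    // Beta memory: tokens, i.e. partial matches joining several WMEs.
    class BetaMemory {
        final List<List<WME>> tokens = new ArrayList<List<WME>>();
    }

With many long-lived facts, these lists are what I am afraid will
outgrow a single server's RAM.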

HBase seemed interesting to me because it lets me "scale out"
this amount of memory and gives me the MapReduce boost. Maybe there is
a better-suited database/distributed cache for that?
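
Something along these lines is what I had in mind (a rough sketch
against the HBase 0.9x-era client API; the table name, row key and
column layout are just assumptions on my part):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class AlphaMemoryStore {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            // Hypothetical table "alpha_memory",
            // row key = alpha node id + WME id.
            HTable table = new HTable(conf, "alpha_memory");
            Put put = new Put(Bytes.toBytes("alphaNode42:wme-0001"));
            // Single column family "m" holding the serialized WME.
            put.add(Bytes.toBytes("m"), Bytes.toBytes("wme"),
                    Bytes.toBytes("(B1 ^color red)"));
            table.put(put);
            table.close();
        }
    }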

A big thank you anyway for your reply: I googled your name and found
many papers that should point me in the right direction (from this
link:
http://www.thecepblog.com/2010/03/06/rete-engines-must-forwards-and-backwards-chain/).
Until now, the only paper I had found was
http://reports-archive.adm.cs.cmu.edu/anon/1995/CMU-CS-95-113.pdf
(found on Wikipedia), which I have started to read.

On Fri, Oct 19, 2012 at 10:30 PM, Peter Lin <woolfel@gmail.com> wrote:
> Since I've implemented the RETE algorithm, I can say that is a terrible
> idea and it wouldn't be efficient.
>
> Storing alpha and beta memories in HBase is technically feasible, but
> it would be so slow as to be useless.
>
