incubator-directmemory-dev mailing list archives

From "Raffaele P. Guidi" <raffaele.p.gu...@gmail.com>
Subject Re: Involvement as a developer
Date Wed, 19 Oct 2011 14:56:52 GMT
It's probably my fault: as a former basketball player I believe that when an
assist is missed you have to blame the one who threw the ball :) Also,
after having worked alone on DirectMemory for a long time, discussing these
matters with skilled professionals like you is a real pleasure.

Thanks,
    Raffaele

On Wed, Oct 19, 2011 at 3:33 PM, Ashish <paliwalashish@gmail.com> wrote:

> On Wed, Oct 19, 2011 at 7:01 PM, Raffaele P. Guidi
> <raffaele.p.guidi@gmail.com> wrote:
> > Sorry, Ashish, but I think there must be a misunderstanding: the map
> > doesn't contain the actual data, it is just the index to the data itself,
> > which lives in the off-heap memory. In fact it is a collection of Pointer
> > objects, which contain the offset and the length of the DirectBuffer
> > region that holds the actual byte array. So: replicating the map (which
> > is natively offered by both hazelcast and terracotta) means replicating
> > the INDEX of the data, not the data itself.
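> >
> > To make it concrete, a Pointer is little more than a struct like this
> > (just a sketch - the field names are illustrative, not necessarily the
> > exact ones in the codebase):
> >
> > public class Pointer {
> >     int bufferNumber; // which off-heap ByteBuffer holds the payload
> >     int start;        // offset of the serialized byte array in that buffer
> >     int end;          // end offset (start + payload length)
> >     long expires;     // optional expiration timestamp
> > }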
>
> Aha, got it. I thought it was about the data, that's why I brought up the point :)
>
> >
> > Again: replication of the map (the index) is one matter, distribution of
> > the data is a different question. I'm not proposing to use terracotta or
> > hazelcast for their caching features but for their *clustering* features.
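> >
> > With hazelcast, for example, the shared index could simply be a
> > distributed map - again just a sketch, the map name is made up and the
> > exact hazelcast API depends on the version:
> >
> > import com.hazelcast.core.Hazelcast;
> > import com.hazelcast.core.HazelcastInstance;
> > import java.util.Map;
> >
> > public class ReplicatedIndexSketch {
> >     public static void main(String[] args) {
> >         // join (or start) the cluster: only the index is clustered,
> >         // the payload bytes stay in each node's local off-heap buffers
> >         HazelcastInstance hz = Hazelcast.newHazelcastInstance(null);
> >         Map<String, Pointer> index = hz.getMap("directmemory-index");
> >
> >         // the Pointer would need to be Serializable to travel in the map
> >         Pointer p = new Pointer();
> >         index.put("user:42", p);
> >     }
> > }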
>
> Got it now :)
>
> >
> > On Wed, Oct 19, 2011 at 2:46 PM, Ashish <paliwalashish@gmail.com> wrote:
> >
> >> On Wed, Oct 19, 2011 at 5:41 PM, Raffaele P. Guidi
> >> <raffaele.p.guidi@gmail.com> wrote:
> >> > Also, on replication/distribution, we have two distinct aspects:
> >> >
> >> >   1. *map replication* - the pointers map has to be replicated to all
> >> >   nodes, and each pointer also has to contain a reference to the node
> >> >   that "owns" the real data
> >> >   2. *communication between nodes* - once a node knows that an entry is
> >> >   held by node "n", it has to ask that node for it
> >> >
> >> > The first point is easily covered by terracotta or hazelcast, while the
> >> > second one should be implemented using an RPC mechanism (Thrift and Avro
> >> > are both good choices). Another option is to cover point 1 as well with
> >> > a custom replication built on top of the chosen RPC framework - of
> >> > course this would lead to yet another (do we really need it?)
> >> > distributed map implementation.
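> >> >
> >> > Point 2 then boils down to a single remote call, something like this
> >> > (the interface and method names are just a guess - whatever framework
> >> > we pick, Thrift or Avro, would generate the real stubs):
> >> >
> >> > // the only call really needed between nodes:
> >> > // "give me the bytes for this key"
> >> > public interface RemoteStore {
> >> >     byte[] retrieve(String key);
> >> > }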
> >>
> >> Disagree on this. Be it TC or Hazelcast, they cover both points.
> >> Let's take the example of Terracotta. It's a client-server architecture
> >> with striping on the server side.
> >> Now if you choose TC (short for Terracotta), you have 3 options:
> >> 1. Use DSO or Distributed Shared Object mode - needs instrumentation
> >> and other stuff, not recommended
> >> 2. Use Ehcache at the back, and TC takes care of distributing the data
> >> 3. Use a Map via the TC Toolkit
> >>
> >> TC will not let you know where it's storing the key (which in fact is
> >> stored in an HA manner on the server stripe). That's the beauty of TC.
> >> It does the faulting/flushing transparently to the user code.
> >>
> >> On the Hazelcast side, it does let you know where the key is, but the
> >> moment you use its client, that becomes transparent to you.
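> >>
> >> For reference, the owner lookup in Hazelcast is roughly the following
> >> (the exact API differs between versions, so treat this as a sketch):
> >>
> >> import com.hazelcast.core.Hazelcast;
> >> import com.hazelcast.core.HazelcastInstance;
> >> import com.hazelcast.core.Member;
> >>
> >> public class OwnerLookupSketch {
> >>     public static void main(String[] args) {
> >>         HazelcastInstance hz = Hazelcast.newHazelcastInstance(null);
> >>         // ask the partition service which member owns a given key...
> >>         Member owner =
> >>             hz.getPartitionService().getPartition("user:42").getOwner();
> >>         System.out.println("user:42 is owned by " + owner);
> >>         // ...through the Hazelcast client this detail is hidden again
> >>     }
> >> }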
> >>
> >> IMHO, using any existing cache solution would complicate the user story.
> >>
> >> Distribution is a nice-to-have feature, and in fact it would lead to
> >> wider adoption :)
> >>
> >> >
> >> > Keeping things like this is easy - of course making it
> >> > efficient/performant is a different story (e.g., should I keep a local
> >> > cache of frequently accessed items stored on other nodes? etc.).
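> >> >
> >> > A plain LRU in front of the remote fetch would probably be enough for
> >> > a first cut - e.g. an access-ordered LinkedHashMap (sizes and names
> >> > here are arbitrary):
> >> >
> >> > import java.util.LinkedHashMap;
> >> > import java.util.Map;
> >> >
> >> > public class NearCacheSketch {
> >> >     private static final int MAX_ENTRIES = 1024;
> >> >
> >> >     // crude local LRU for payloads owned by other nodes
> >> >     private final Map<String, byte[]> nearCache =
> >> >         new LinkedHashMap<String, byte[]>(MAX_ENTRIES, 0.75f, true) {
> >> >             @Override
> >> >             protected boolean removeEldestEntry(Map.Entry<String, byte[]> eldest) {
> >> >                 return size() > MAX_ENTRIES;
> >> >             }
> >> >         };
> >> > }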
> >> >
> >> > Ciao,
> >> >    R
> >> >
> >>
> >> thanks
> >> ashish
> >>
> >
>
>
>
> --
> thanks
> ashish
>
> Blog: http://www.ashishpaliwal.com/blog
> My Photo Galleries: http://www.pbase.com/ashishpaliwal
>
