jakarta-jcs-users mailing list archives

From Travis Savo <ts...@IFILM.com>
Subject RE: diff between Remote Cache and Lateral TCP Cache
Date Sat, 19 Jun 2004 23:49:42 GMT
>Do you mean that if the user can't tolerate such a limitation and problem,
they shouldn't use the lateral cache?

No, I mean that if they can't tolerate having some redundancy in their
index, either because the law of averages and random distribution don't
apply in their application, or because index uniqueness is a requirement, or
simply because bandwidth is that much more plentiful (or memory is very
scarce), they must supply their own (distributed) locking mechanism on top
of JCS to provide cache consistency. The same is true of Remote Cache: in
the window after a get miss, or even between a put event being queued and
that event being received, another node could put, causing the object on the
original server to be re-put (or deleted, in my configuration). It's
explicitly stated that there is no cache consistency among the caches; any
and all consistency must be supplied via an external mechanism.

Pseudo code follows:

lock(key)
try {
  result = cache.get(key)
  if result is not null
    return result
  result = doWork()
  cache.put(key, result)
  return result
} finally {
  unlock(key)
}

JCS does not supply the ability to lock a key, which is what's required
here. Instead, it's assumed that you'll use your caches in whatever way best
suits your purposes, knowing the strengths and weaknesses of your
configuration and the requirements of your business logic.

On a side note: you'd also have to remove the event queues in order to make
puts and removes to and from the caches synchronous, but that's pretty easy.
It just involves ripping out a bit of code.

>Therefore, the "local put" upon retrieval from lateral cache will make the
lateral cache no less configurable but will have an absolute speed
advantage.

But 'speed' isn't the point of lateral cache. The point is to treat all
'lateral' caches as distributed indexes, not redundant stores (which is what
Remote Cache does). It's strictly a way of making several distributed
indexes appear like one big index on the get. The problem of which machine
to 'put' to in order to distribute your index is not touched by lateral
because it's a problem that will change in every environment.
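As an illustration of why "which machine to put to" is an application-level decision, one common approach is simple hash partitioning. This is an assumed sketch, not anything JCS provides; `ownerOf` and the node names are invented:

```java
public class Partitioner {
    // Pick the node that "owns" a key by hashing across N servers.
    // floorMod keeps the index non-negative even for negative hashCodes.
    public static int ownerOf(String key, int serverCount) {
        return Math.floorMod(key.hashCode(), serverCount);
    }

    public static void main(String[] args) {
        String[] servers = {"node-a", "node-b", "node-c"};
        // Every node computes the same owner for a given key, so puts
        // converge on a single index holder with no coordination traffic.
        System.out.println(servers[ownerOf("document-42", servers.length)]);
    }
}
```

Whether plain modulo, consistent hashing, or something environment-specific is right depends on exactly the per-deployment concerns described above.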

>The local cache has a configured maximum capacity beyond which the LRU
cache items will be dropped from the local cache. So a request will be made
to the peer caches on the network only for items that were dropped from, or
never present in, the local cache.

The behavior you're describing is exactly what Remote Cache does. If high
redundancy across caches for frequently accessed items is a Good Thing(tm)
for you, use Remote Cache. If your index is big and storage is available
only in islands (Google), use Lateral.
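For reference, a bounded LRU store in front of a peer lookup can be sketched as follows. This is only an illustration of the behavior (LinkedHashMap standing in for the memory cache, a plain Map standing in for the network peers), not JCS internals:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LocalLru {
    private final LinkedHashMap<String, Object> local;

    public LocalLru(int capacity) {
        // Access-ordered LinkedHashMap evicts the least recently used
        // entry once the configured capacity is exceeded.
        this.local = new LinkedHashMap<String, Object>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Object> eldest) {
                return size() > capacity;
            }
        };
    }

    public void put(String key, Object value) { local.put(key, value); }
    public boolean holdsLocally(String key) { return local.containsKey(key); }

    // Only keys dropped from (or never put into) local memory
    // fall through to the peer lookup.
    public Object get(String key, Map<String, Object> peers) {
        Object hit = local.get(key);
        if (hit != null) return hit;
        Object fetched = peers.get(key);
        if (fetched != null) local.put(key, fetched);
        return fetched;
    }
}
```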

If you refer to my prior example, you can see where I outlined a scenario
where both are used.

Imagine, if you will, that Tier-n (the lowest tier) consists of workstations
in a corporation that lateral to each other and use a domain server as their
remote cache. The domain servers in turn lateral to the other domain servers
(in other offices) and remote to the HQ (Tier-1).

Information would trickle to the edge on demand, be cached at the domain
level, and be pushed up to the HQ. Workstations would share documents among
themselves, as would the domain servers at their level, and everything would
make its way up to HQ, but stay within the domain unless explicitly moved.
Think about how the rules of lateral and remote cache come into play when
someone changes a document locally (local put).
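In JCS terms, a workstation in that picture would wire a lateral and a remote auxiliary into the same region in its cache.ccf. The sketch below follows the factory and attribute naming conventions of jakarta JCS; the region name, hostnames, and ports are invented, and the exact class names should be checked against the release you're running:

```
# One region backed by both auxiliaries
jcs.region.documents=LTCP,RC

# Lateral TCP cache: share with peer workstations
jcs.auxiliary.LTCP=org.apache.jcs.auxiliary.lateral.socket.tcp.LateralTCPCacheFactory
jcs.auxiliary.LTCP.attributes=org.apache.jcs.auxiliary.lateral.socket.tcp.TCPLateralCacheAttributes
jcs.auxiliary.LTCP.attributes.TcpServers=ws2:1110,ws3:1110
jcs.auxiliary.LTCP.attributes.TcpListenerPort=1110

# Remote cache: push up to / pull from the domain server
jcs.auxiliary.RC=org.apache.jcs.auxiliary.remote.RemoteCacheFactory
jcs.auxiliary.RC.attributes=org.apache.jcs.auxiliary.remote.RemoteCacheAttributes
jcs.auxiliary.RC.attributes.RemoteHost=domain-server
jcs.auxiliary.RC.attributes.RemotePort=1102
```

The domain server would carry a similar region definition, lateraled to its sibling domain servers and remoted to HQ.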

Unless I've missed my mark, this would be the most scalable -=distributed=-
caching architecture I can think of. Hell... stick a locking and permissions
scheme around this, and you've got a marketable product.

-Travis Savo <tsavo@ifilm.com>


-----Original Message-----
From: Hanson Char [mailto:hanson_char@yahoo.com]
Sent: Saturday, June 19, 2004 2:22 PM
To: 'Turbine JCS Users List'
Subject: RE: diff between Remote Cache and Lateral TCP Cache


>You have no cache consistency. Period. This is a known limitation.
>Ensuring the index gets properly distributed is your problem.

This doesn't sound right.  The "known limitation" and the "index
distribution problem" are both inherently due to the design of the lateral
cache itself.  Why would such a limitation and problem be the users' problem?
Do you mean that if the user can't tolerate such a limitation and problem,
they shouldn't use the lateral cache?

>The fact that you can configure it for whatever suits your purpose best
>(maximum speed vs. maximum memory efficiency) is (I think) better than
>providing one very fast, but not configurable implementation
>that won't always be the best solution in every environment.

This also doesn't sound right.  The "local put" will make it "very fast",
but it won't make it "not configurable" or less configurable. The local
cache has a configured maximum capacity beyond which the LRU cache items
will be dropped from the local cache.  So a request will be made to the peer
caches on the network only for items that were dropped from, or never
present in, the local cache.

Therefore, the "local put" upon retrieval from lateral cache will make the
lateral cache no less configurable but will have an absolute speed
advantage.

Hanson

---------------------------------------------------------------------
To unsubscribe, e-mail: turbine-jcs-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: turbine-jcs-user-help@jakarta.apache.org

