perl-modperl mailing list archives

From Jonathan Vanasco <modperl-l...@2xlp.com>
Subject Re: changing global data strategy
Date Wed, 08 Mar 2006 06:34:20 GMT

how big are these data structures?

200k?  2mb?  20mb?

if they're not too big, you could just use memcached.

	http://danga.com:80/memcached/
	http://search.cpan.org/~bradfitz/Cache-Memcached-1.15/Memcached.pm

it's ridiculously painless to implement. i found it easier than a lot
of other approaches.
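
the whole thing is basically this (a minimal sketch; the server
address, key name, and placeholder data below are assumptions, not
anything from your setup):

	use Cache::Memcached;

	# one client object per child; point it at your memcached daemon(s)
	my $memd = Cache::Memcached->new({
	    servers => [ '127.0.0.1:11211' ],
	});

	# a stand-in for one of the real structures
	my %lookup = ( foo => 1, bar => 2 );

	# store a reference under a key, with a 1-hour expiry;
	# Cache::Memcached serializes refs via Storable for you
	$memd->set( 'lookup_table', \%lookup, 3600 );

	# any child, on any webserver, reads the same copy back
	my $table = $memd->get('lookup_table');

whoever does the db update just calls set() again, and every child
sees the new copy on its next get() instead of waiting on a semaphore.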

but if you have 50mb of data, i'd rethink what you're doing.

you're just going to keep getting screwed when your cache db updates
(because the updates will only be per-child, not per parent
process).  so you've got the potential for a 50MB parent process
having children that each read in 50MB of data?  that's a cascading
nightmare.

if you need to precache such giant data structures, i'd do something
like a 2-tiered setup (sketched below):
	apache a - talks to web users / load balancer; sends data / whatever
	           for specific processing to
	daemon b - either apache or some custom server, which handles
	           precaching of the db and parsing requests from apache
	db       - datastore
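
a rough sketch of what that could look like if daemon b were a small
custom server -- two separate processes, shown back to back.
HTTP::Daemon / LWP is just one way to wire it up; the port, URL path,
and placeholder data are made up:

	# ---- daemon b: loads the big structures once, answers lookups over HTTP
	use strict;
	use warnings;
	use HTTP::Daemon;
	use HTTP::Status;
	use HTTP::Response;

	my %big_lookup = ( foo => 'bar' );   # stand-in for the 25-50MB of hashes

	my $d = HTTP::Daemon->new( LocalPort => 8081 ) or die "can't listen: $!";
	while ( my $conn = $d->accept ) {
	    while ( my $req = $conn->get_request ) {
	        my ($key) = $req->uri->path =~ m{^/lookup/(.+)$};
	        if ( defined $key && exists $big_lookup{$key} ) {
	            my $res = HTTP::Response->new(RC_OK);
	            $res->content( $big_lookup{$key} );
	            $conn->send_response($res);
	        }
	        else {
	            $conn->send_error(RC_NOT_FOUND);
	        }
	    }
	    $conn->close;
	}

	# ---- apache a (separate process): the mod_perl side just asks daemon b
	# for the piece it needs instead of holding everything in every child
	use LWP::UserAgent;

	my $ua    = LWP::UserAgent->new( timeout => 2 );
	my $res   = $ua->get('http://backend-host:8081/lookup/foo');
	my $value = $res->is_success ? $res->decoded_content : undef;

that way only daemon b ever holds the 25-50MB in memory; the apache
children stay small and just ask for the pieces they need.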

having all of that data in modperl would be a nightmare though.  even
with memcached, you'll update fast and everyone can access it, but
you're going to keep eating memory.  if every session is going to
toss through 20MB hashes of info, i'd keep that info out of apache
entirely.




On Mar 8, 2006, at 12:16 AM, Will Fould wrote:

> at this point, the application is on a single machine, but I'm
> being tasked with moving our database onto another machine and
> implementing load balancing b/w 2 webservers.
>
> william
>
>
> On 3/7/06, Will Fould <willfould@gmail.com> wrote:
> an old issue:
>    "a dream solution would be if all child processes could *update*  
> a large global structure."
>
> we have a tool that loads a huge store of data (25-50MB+) from a
> database into many perl hashes at start-up: each session needs
> access to all of this data, but it would be prohibitive to use mysql
> or another database for multiple, large lookups (and builds) at
> each session: there are quite a few structures, and each is very big.
>
> if the data never changed, it would be trivial; load/build just at  
> start-up.
>
> but since the data changes often, we use a semaphore strategy to
> determine when children should reload/rebuild the structures (after
> updates have been made).
>
> this is painful. there has got to be a better way of doing this -  
> I've seen posts on memcache and other, more exotic animals.
>
> can someone point me in the right direction: a reference/read, or
> stable modules that exist for our situation?

