manifoldcf-user mailing list archives

From Erlend Garåsen <e.f.gara...@usit.uio.no>
Subject Re: Disk usage for big crawl
Date Tue, 26 Jul 2011 14:00:08 GMT

Thanks for your suggestions, Farzad!

As you probably read in my previous post, I started a full crawl 
yesterday. Instead of the Null Output connector, I used our own request 
handler, which is more advanced than the regular ExtractingRequestHandler 
(Solr Cell). We can configure our handler to skip posting the data to 
Solr and instead dump the content to disk. That makes a recrawl 
unnecessary when we just want to build a new Solr index for testing 
purposes, since reading the data back from the generated file is much faster.
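
For what it's worth, the replay step can be as simple as the sketch 
below. It is only an illustration: it assumes the dump file holds one 
JSON document per line and that a Solr update handler accepts JSON at 
/update (older Solr versions use /update/json). Our actual handler, 
dump format and URLs differ, so treat every name here as a placeholder.

# replay_dump.py - re-index a content dump without recrawling (sketch only).
# Assumptions, not facts from our setup: one JSON document per line in the
# dump, and a Solr update handler that accepts JSON at SOLR_UPDATE_URL.
import json
import urllib.request

SOLR_UPDATE_URL = "http://localhost:8983/solr/update?commit=true"  # placeholder
DUMP_FILE = "crawl-dump.jsonl"                                     # placeholder

def post_batch(docs):
    # Solr's JSON update format: posting a JSON array of documents adds them.
    body = json.dumps(docs).encode("utf-8")
    req = urllib.request.Request(
        SOLR_UPDATE_URL, data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()

batch = []
with open(DUMP_FILE, encoding="utf-8") as fh:
    for line in fh:
        line = line.strip()
        if line:
            batch.append(json.loads(line))
        if len(batch) >= 500:
            post_batch(batch)
            batch = []
if batch:
    post_batch(batch)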

Erlend

On 25.07.11 21.29, Farzad Valad wrote:
> You can also do some smaller tests and project a number to satisfy your
> db admins. Perform a few small crawls, say 100, 500 and 1,000 documents,
> and estimate a growth rate. The other thing you can do is a full crawl
> with the Null Output connector. Depending on your system you can get
> speeds of up to 60 docs a second; even at half that speed the crawl will
> finish in less than an hour, and you'll at least know the input (crawl)
> half of the requirement for that set. Depending on the output connector,
> you may or may not have additional, growing storage needs. You can use
> both of these techniques to get closer to a reasonable guesstimate : )
>
> On 7/25/2011 8:16 AM, Karl Wright wrote:
>> Hi Erlend,
>>
>> I can't answer for how PostgreSQL allocates space on the whole - the
>> PostgreSQL documentation may tell you more. I can say this much:
>>
>> (1) PostgreSQL keeps "dead tuples" around until they are "vacuumed".
>> This implies that the table space grows until the vacuuming operation
>> takes place.
>> (2) At MetaCarta, we found that PostgreSQL's normal autovacuuming
>> process (which runs in background) was insufficient to keep up with
>> ManifoldCF going at full tilt in a web crawl.
>> (3) The solution at MetaCarta was to periodically run "maintenance",
>> which involves running a VACUUM FULL operation on the database. This
>> will cause the crawl to stall while the vacuum operation is running,
>> since a new (compact) disk image of every table must be made, and thus
>> each table is locked for a period of time.
>>
>> So my suggestion is to adopt a maintenance strategy first, make sure
>> it is working for you, and then calculate how much disk space you will
>> need for that strategy. Typically maintenance might be done once or
>> twice a week. Under heavy crawling (lots and lots of hosts being
>> crawled), you might do maintenance once every 2 days or so.
>>
>> Karl
>>
>>
>> On Mon, Jul 25, 2011 at 9:06 AM, Erlend
>> Garåsen <e.f.garasen@usit.uio.no> wrote:
>>> Hello list,
>>>
>>> In order to crawl around 100,000 documents, how much disk usage/table
>>> space will be needed for PostgreSQL? Our database administrators are now
>>> asking. Instead of starting up this crawl (which will take a lot of time)
>>> and trying to measure it manually, I hope we can get an answer from the
>>> list members instead.
>>>
>>> And will the table space increase significantly for every recrawl?
>>>
>>> Erlend
>>>
>>> --
>>> Erlend Garåsen
>>> Center for Information Technology Services
>>> University of Oslo
>>> P.O. Box 1086 Blindern, N-0317 OSLO, Norway
>>> Ph: (+47) 22840193, Fax: (+47) 22852970, Mobile: (+47) 91380968, VIP:
>>> 31050
>>>
>
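
To make Farzad's back-of-the-envelope projection above a bit more 
concrete, here is a rough sketch. The sample sizes below are invented; 
the real numbers would come from measuring the database after each 
small crawl, e.g. with SELECT pg_size_pretty(pg_database_size('manifoldcf'));

# estimate_tablespace.py - project PostgreSQL table space from a few small
# crawls (sketch; the (docs, bytes) pairs are made-up placeholders).
samples = [
    (100, 6_500_000),    # hypothetical size after a 100-document crawl
    (500, 28_000_000),   # hypothetical size after a 500-document crawl
    (1000, 52_000_000),  # hypothetical size after a 1,000-document crawl
]

# Simple least-squares line: bytes ~ fixed_overhead + per_doc * docs
n = len(samples)
mean_x = sum(d for d, _ in samples) / n
mean_y = sum(b for _, b in samples) / n
per_doc = (sum((d - mean_x) * (b - mean_y) for d, b in samples)
           / sum((d - mean_x) ** 2 for d, _ in samples))
overhead = mean_y - per_doc * mean_x

target_docs = 100_000
estimate = overhead + per_doc * target_docs
print(f"~{per_doc / 1024:.1f} KiB per document, projected "
      f"{estimate / 1024 ** 3:.1f} GiB for {target_docs} documents")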

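And a minimal sketch of the kind of scheduled maintenance Karl 
describes. Only the VACUUM FULL itself comes from his mail; the 
psycopg2 connection details, the dead-tuple check and the idea of 
driving this from cron are assumptions on my part.

# maintenance.py - run VACUUM FULL on the ManifoldCF database, e.g. from cron
# once or twice a week. Connection parameters are placeholders. Note that the
# crawl will stall while this runs, since VACUUM FULL rewrites and locks each
# table.
import psycopg2

conn = psycopg2.connect(
    host="localhost", dbname="manifoldcf", user="manifoldcf", password="secret"
)
conn.autocommit = True  # VACUUM cannot run inside a transaction block
cur = conn.cursor()

# Optional: show how much dead-tuple bloat has built up since the last run.
cur.execute("SELECT relname, n_dead_tup FROM pg_stat_user_tables "
            "ORDER BY n_dead_tup DESC")
for relname, dead in cur.fetchall():
    print(f"{relname}: {dead} dead tuples")

cur.execute("VACUUM FULL")  # compacts every table; each is locked while rewritten
cur.close()
conn.close()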

-- 
Erlend Garåsen
Center for Information Technology Services
University of Oslo
P.O. Box 1086 Blindern, N-0317 OSLO, Norway
Ph: (+47) 22840193, Fax: (+47) 22852970, Mobile: (+47) 91380968, VIP: 31050
