lucene-solr-user mailing list archives

From "al patel" <alps....@gmail.com>
Subject Re: Backup and distributed index/backup management
Date Sun, 25 Mar 2007 22:46:46 GMT
Thanks Chris.

So, it looks like one has to delete entries to keep the index manageable,
then.

In my case, we need to preserve entries - so I wanted to "archive" snapshots,
but still keep them searchable (thaw certain indices, if you will).


So, is there anyone out there looking into "ever-increasing index sizes" and
having to maintain older data?

Rgds
-a

On 3/24/07, Chris Hostetter <hossman_lucene@fucit.org> wrote:
>
>
> : My question is, even with backup, solr will still have a single index,
> : right? We will have a huge amount of data in the index - it is ever
> : increasing.
>
> if you have older docs you want to retire out of your index, you'll need
> to do that manually (delete by query can come in handy)
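
For illustration, a minimal sketch of that kind of retirement from a script -
it assumes a stock update handler at http://localhost:8983/solr/update and a
"timestamp" date field in the schema, both of which are assumptions of this
sketch rather than details from the thread:

    # Minimal sketch: delete everything indexed more than 14 days ago, then commit.
    # The endpoint URL and the "timestamp" field name are assumptions.
    import urllib.request

    SOLR_UPDATE = "http://localhost:8983/solr/update"

    def post_xml(xml):
        """POST a raw XML update message to Solr and print the HTTP status."""
        req = urllib.request.Request(
            SOLR_UPDATE,
            data=xml.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=UTF-8"},
        )
        with urllib.request.urlopen(req) as resp:
            print(xml, "->", resp.status)

    post_xml('<delete><query>timestamp:[* TO NOW-14DAYS]</query></delete>')
    post_xml('<commit/>')
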
>
> : I want to archive older data - say every 2 weeks and start a new index -
> : but want the older indices to be searchable.
> :
> : I can potentially take a snapshot at the master at a 2-week interval, back
> : it up, and restart the master with a fresh index.
>
> you don't really need to restart the master ... you could pull snapshots
> from your master to a slave, and then when you decide that slave is "full"
> of old docs you stop pulling snapshots, and delete the old docs from your
> master and start replicating to a new slave.
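
For illustration, a minimal sketch of that rotation driven from cron on the
current slave - it assumes the snappuller/snapinstaller collection-distribution
scripts that ship with Solr of this vintage, an install under /opt/solr, and a
local SLAVE_ACTIVE flag file; the flag file is purely an invention of this
sketch, not a Solr feature:

    # Minimal sketch: pull and install the latest snapshot only while this slave
    # is still the "current" one; once frozen, its index stays as an archive and
    # keeps serving the old docs.
    import os
    import subprocess

    SOLR_BIN = "/opt/solr/bin"               # assumed install location
    ACTIVE_FLAG = "/opt/solr/SLAVE_ACTIVE"   # remove this file to freeze the slave

    def pull_latest_snapshot():
        if not os.path.exists(ACTIVE_FLAG):
            return  # frozen: stop replicating, keep the existing index searchable
        subprocess.check_call([os.path.join(SOLR_BIN, "snappuller")])
        subprocess.check_call([os.path.join(SOLR_BIN, "snapinstaller")])

    if __name__ == "__main__":
        pull_latest_snapshot()
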
>
> : Does solr handle this - how? Or how do I solve this problem? Open to
> : other suggestions too.
>
> what you're describing is fairly outside of what I would consider "normal"
> Solr usage .. it seems very special-purpose.
>
>
>
> -Hoss
