httpd-docs mailing list archives

From "Justin Erenkrantz" <jus...@erenkrantz.com>
Subject Re: Using Solr to index and search the Apache HTTPD Documents
Date Mon, 08 Oct 2007 19:04:25 GMT
On Oct 8, 2007 11:51 AM, Vincent Bray <noodlet@gmail.com> wrote:
> I'm very much in favour of seeing how far we can take Solr as the
> search mechanism for the httpd docs.

What are the production requirements for Solr?  IOW, what do we need
to run on www.apache.org to make this happen?  How much disk space?
How much RAM?  We do not currently run Java on our main web servers,
so running and maintaining it would have to be sorted out.  I don't
know if the Solr guys are even interested in helping us maintain a
local search engine.  (Previously, the Perl guys tried and gave up.)
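
For concreteness, the integration surface is small: Solr runs as its own
Java process and everything else talks to it over plain HTTP, so the docs
search front-end could stay a thin script on www.apache.org.  Here is a
minimal sketch (not a working deployment), assuming Solr's stock Jetty
setup on its default port 8983 and the standard /solr/select handler;
the 'title' and 'url' field names are made up for illustration.

# Hypothetical front-end query against a local Solr instance.
import json
import urllib.parse
import urllib.request

SOLR_SELECT = "http://localhost:8983/solr/select"   # assumed default

def search_docs(query, rows=10):
    """Return (title, url) pairs for a keyword query against the docs index."""
    params = urllib.parse.urlencode({
        "q": query,      # the user's search terms
        "rows": rows,    # how many hits to return
        "wt": "json",    # ask Solr for a JSON response
    })
    with urllib.request.urlopen(SOLR_SELECT + "?" + params) as resp:
        docs = json.load(resp)["response"]["docs"]
    # 'title' and 'url' are hypothetical field names for an httpd-docs schema.
    return [(doc.get("title"), doc.get("url")) for doc in docs]

if __name__ == "__main__":
    for title, url in search_docs("mod_rewrite"):
        print(title, url)

Either way, the Java side is self-contained; the open questions are really
the resources (disk, RAM) and who keeps that JVM running.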

The ASF infrastructure team has a checklist of things that must be
satisfied before adding any new 'critical services' (which this falls
under).  See below for the current list.

So, I sort of think that just signing up for a 'custom search engine'
account would be a *lot* less work.  =)  -- justin

---
This provides a list of requirements and doctrines for web applications
that wish to be deployed on the Apache Infrastructure.  It is intended to
help address many of the recurring issues we see with deployment and
maintenance of applications.

Definition of 'system': Any web application or site which will receive
traffic from public users in any manner.

Definition of 'critical systems': Any web application or site which runs
under www.apache.org, or is expected to receive a significant portion of
traffic.

1) All systems must be generally secure and robust. In cases of failure,
they should not damage the entire machine.

2) All systems must provide reliable backups, at least once a day, with
a preference for incremental, real-time, or <1 hour snapshots.

3) All systems must be maintainable by multiple active members of the
infrastructure team.

4) All systems must come with a 'runbook' describing what to do in event
of failures, reboots, etc.  (If someone who has root needs to reboot the
box, what do they need to pay attention to?)

5) All systems must provide at least minimal monitoring via Nagios.
(See the check sketch after this list.)

6) All systems must be restorable and relocatable to other machines
without significant pain.

7) All systems must have some kind of critical mass.  In general, we do
not want to host one-offs of any system.

8) All system configuration files must be checked into Subversion.

9) All system source must either be checked into Subversion, be at a
well-known public location, or be provided by the base OS.  (Hosting
binary-only webapps is a non-starter.)

10) All systems, prior to consideration of deployment, must provide a
detailed performance impact analysis (bandwidth and CPU).  How are
techniques like HTTP caching used?  Lack of HTTP caching was MoinMoin's
initial PITA.  (See the caching sketch after this list.)

11) All systems must have clearly articulated, defined, and recorded
dependencies.

12) All critical systems must be replicated across multiple machines,
with a preference for cross-Atlantic replication.

13) All systems must have single-command operations to start, restart,
and stop the system.  Support for the init scripts used by the base
operating system is preferred.
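
Re item 5, a Nagios check for something like Solr can be tiny.  Below is
a minimal sketch, assuming the default port 8983 and Solr's standard
/solr/admin/ping handler (the URL would have to match whatever we actually
deploy).  It follows the usual Nagios plugin convention of printing one
status line and exiting 0 (OK), 1 (WARNING) or 2 (CRITICAL).

#!/usr/bin/env python
# check_solr.py -- hypothetical Nagios plugin for a local Solr instance.
import sys
import urllib.request

PING_URL = "http://localhost:8983/solr/admin/ping"   # assumed default

def main():
    try:
        with urllib.request.urlopen(PING_URL, timeout=5) as resp:
            if resp.status == 200:
                print("SOLR OK - ping handler answered")
                return 0
            print("SOLR WARNING - unexpected HTTP status %d" % resp.status)
            return 1
    except Exception as exc:   # connection refused, timeout, HTTP error, ...
        print("SOLR CRITICAL - %s" % exc)
        return 2

if __name__ == "__main__":
    sys.exit(main())

Wiring it into Nagios is then just a command and service definition that
point at the script.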

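Re item 10, the cheapest HTTP caching win is to make the search results
themselves cacheable, so browsers and proxies absorb repeat traffic
instead of the box.  A hypothetical WSGI sketch (the renderer is a stub,
and the cache lifetime is an arbitrary example value):

#!/usr/bin/env python
# Hypothetical sketch: cacheable responses plus conditional GET (ETag/304).
import hashlib
from wsgiref.simple_server import make_server

def render_results(environ):
    # Stand-in for the real search/render step.
    query = environ.get("QUERY_STRING", "")
    return ("<html><body>results for %s</body></html>" % query).encode("utf-8")

def app(environ, start_response):
    body = render_results(environ)
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Cache-Control", "public, max-age=600"),   # cacheable for 10 minutes
        ("ETag", etag),
    ]
    # Conditional GET: answer 304 instead of re-sending the whole body.
    if environ.get("HTTP_IF_NONE_MATCH") == etag:
        start_response("304 Not Modified", headers)
        return [b""]
    start_response("200 OK", headers)
    return [body]

if __name__ == "__main__":
    make_server("localhost", 8000, app).serve_forever()

That is the sort of thing item 10 is asking each proposal to spell out up
front.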