httpd-dev mailing list archives

From Fabien COELHO <>
Subject apache non-regression test proposal
Date Sat, 16 Jan 1999 13:20:18 GMT


I re-read a few mails exchanged (or just sent without further echo)
about automatic tests for apache. Contributions were made by Ralf (#1 ARTS),
ix? (#2 look at mod_perl), Ben Hyde (#3 network test driver),
Ben Laurie (#4 use perl), and myself (#5 wget+diff, #6 discussion from
my previous experience).

I looked at and thought about these ideas or codes. 

Some comments:

(#4) It seems damn useful to presuppose some powerful tool...
     So let's assume perl must be available (as done by #1, #2...).
     perl seems a more attractive assumption than C for this purpose.

#7 Testing apache is, in effect, testing the http protocol from the client side.
   This means the tests must rely on an http client implementation
   (say HTTP::*, wget, or whatever...). Okay, it's pretty obvious, but...

   However, the client must be quite "fine grain" to be able to test
   persistent connections, for instance. Also the control over requests
   and responses should be precise... for instance, should the order of
   headers be controllable, or kept as is?

   This rules out tests based on the HTTP::* stuff, which does implement
   a client, but without the control needed. For instance, I
   could not see how to use persistent connections if I wanted to test
   them. Headers seem to be reordered internally. I could not see how to
   direct the connection to a host:port of my choosing, independently of the
   requested URL...
   So IMHO the "ARTS" test software, which is based on this implementation,
   is not appropriate for general and fine grain testing.
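   To make the "fine grain" point concrete, here is a small sketch of the
   kind of raw client the engine would need. It is in Python only for
   brevity (the engine itself could as well be perl); all names are
   illustrative, not part of any existing tool. The point is that header
   order, connection reuse, and the target host:port are all under the
   caller's control, which is exactly what HTTP::* does not offer:

```python
import socket

# Sketch (assumed names): build byte-exact HTTP requests, keeping the
# header order exactly as the caller gives it.
def build_request(method, path, headers, body=b"", version="HTTP/1.1"):
    lines = [f"{method} {path} {version}"]
    lines += [f"{name}: {value}" for name, value in headers]  # order preserved
    head = "\r\n".join(lines) + "\r\n\r\n"
    return head.encode("latin-1") + body

def send_raw(host, port, payloads):
    """Send several requests on ONE connection (to exercise persistent
    connections), then naively read everything until the server closes."""
    with socket.create_connection((host, port)) as s:
        for p in payloads:
            s.sendall(p)
        s.shutdown(socket.SHUT_WR)
        out = b""
        while chunk := s.recv(4096):
            out += chunk
        return out
```

   Because the requests are plain bytes, even deliberately malformed input
   (the "raw" test type proposed below) goes through unchanged.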

#8 Tests MUST be as simple as possible to set up, write, and so on.

   This means that adding a new test should take as few lines as possible.
   "Simple and short" rules out perl as the main language for the tests, IMO.
   It does not mean that perl must be out of the system, just that
   the test specifications themselves should not need to be written in perl.
   If I want to test HTTP, I expect to write some HTTP, that's all.

#9 Tests should run on any machine, IMO, without requiring an internet
   connection. The "network driver" does not meet this requirement.

Proposal for ANTS (Apache Non-regression Test Suite;-):

- test specs are kept in some special files, say *.ants

- basically the test file contains raw requests and meta information
  for a test engine to run. It should also contain parameters such
  as the host or port to run the test on. It *might* look like this:

#ants! required modules: mod_this # mod_this must be compiled in for this test
#ants! configuration file: test01.conf
#ants! socket: open
#ants! host: %HOSTNAME%
#ants! port: %PORT2% # direct the request to the second test port.
#ants! type: request
#ants! drop headers: Date, Last-Modified # headers to drop from responses.
#ants! contents: drop # don't output the contents to the result file
GET / HTTP/1.1

HEAD /1.html HTTP/1.1

#ants! comment: now test a non-conforming request
#ants! socket: close # after this request
#ants! type: raw # do not try to interpret the following as a request...
I do not;-)

#ants! socket: per_request # should be the default...
#ants! type: request
POST /foo/test.cgi HTTP/1.0

post content.

- some engine (possibly written in perl) takes these specs,
  runs httpd with test01.conf (possibly preprocessed to fix
  the test port and other variable settings such as paths...),
  then directs each request to %HOSTNAME%:%PORT2% and stores the
  output from the server in, say, test01.out.tmp.
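  The directive syntax above is simple enough that the engine's parser
  could be a few lines. A sketch follows, again in Python for brevity
  (the format details are assumed from the example above; a real engine
  would also handle per-directive defaults and the "raw" type):

```python
import re

# "#ants! name: value  # trailing comment" -- capture name and value.
DIRECTIVE = re.compile(r"^#ants!\s*([^:]+):\s*([^#]*)")

def parse_ants(text):
    """Return a list of (options, raw_request) pairs; each non-empty run
    of plain lines under the current option settings is one request."""
    tests, options, body = [], {}, []
    def flush():
        if body:
            tests.append((dict(options), "\n".join(body)))
            body.clear()
    for line in text.splitlines():
        m = DIRECTIVE.match(line)
        if m:
            flush()
            options[m.group(1).strip()] = m.group(2).strip()
        elif line.strip():
            body.append(line)
        else:
            flush()  # blank line separates requests
    flush()
    return tests
```

  Later directives simply override earlier ones, which gives the
  "settings stay in effect until changed" behaviour the example relies on.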

- at the end of the processing, the result file is compared against
  test01.out, which holds the expected reference result. Each
  response that differs for a given request is a failure.
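  The "drop headers" directive is what makes this comparison stable: a
  sketch of the idea, with assumed names (a real engine would restrict
  the stripping to the header section of each response):

```python
# Strip headers that vary between runs, then compare against the
# stored reference result.
def normalize(response, drop=("Date", "Last-Modified")):
    keep = []
    for line in response.splitlines():
        name = line.split(":", 1)[0].strip()
        if name not in drop:
            keep.append(line)
    return "\n".join(keep)

def responses_match(actual, expected, drop=("Date", "Last-Modified")):
    return normalize(actual, drop) == normalize(expected, drop)
```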

- the issues:

  * what are the options/parameters needed in the test engine?
    to name but a few: socket policy (one connect per request, or when to open 
    and close...), request type (a well-formed request, anything...),
    request mode (wait for responses after each request or submit all
    requests and then wait for the responses...), parallelism? spawn?...

  * for all options, decide on the proper default value, so that most
    tests need not set them, yet they remain available to those who need them.

  * think about extension hooks. For instance, a request can be
    the output of another program...
  * how to reuse part of the output (headers) of a response in a later request
    (to check cookies, digest authentication)...
    [some option like: remember headers: ... passed later...]
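    The "remember headers" hook could work roughly like this sketch
    (Python for brevity; the %HEADER:...% placeholder syntax is purely
    an illustration, not part of the proposal):

```python
# Capture named headers from one response, then substitute them into
# later raw requests (e.g. echo a Set-Cookie back as a Cookie).
def remember(response, names):
    saved = {}
    for line in response.splitlines():
        if ":" in line:
            k, v = line.split(":", 1)
            if k.strip() in names:
                saved[k.strip()] = v.strip()
    return saved

def substitute(raw_request, saved):
    """Replace %HEADER:Name% placeholders with remembered values."""
    for name, value in saved.items():
        raw_request = raw_request.replace(f"%HEADER:{name}%", value)
    return raw_request
```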

  * how to make it modular enough. If someone wants to test mod_ssl,
    he may need an https client...

  * how to run the server on one machine and the client on another,
    for some tests?

So to sum up: a test engine (possibly written in perl) which would
interpret a kind of specialized "language", or support many option settings,
dedicated to http server testing.

The language simply describes requests and how to submit them.

Yes, I suggest yet another (very simple) "language". The rationale for
this is to enable very simple test writing. If writing a test takes too long,
requires special skills (perl), or is not parametric enough for the purpose
(this seems to be the case with the current HTTP::* client implementation),
it just won't be used.

Another key idea of this proposal is to move the complexity as far as
possible into the test engine, *programmed once*, and make it available to
all testers, who could then implement test suites with it.

The questions: 

What are people's opinions about this kind of test engine?

Also, suggestions about what the options should/could look like and be,
from a syntactic and semantic point of view, are welcome.

Once the "what" is defined, the "how" to implement it is the next issue.
Well, there is also the "who";-)

Hope this might help and lead to something.

