From: minfrin@apache.org
To: cvs@httpd.apache.org
Subject: svn commit: r502365 - in /httpd/httpd/trunk: CHANGES modules/cache/mod_cache.c modules/cache/mod_cache.h modules/cache/mod_disk_cache.c modules/cache/mod_disk_cache.h modules/cache/mod_mem_cache.c
Date: Thu, 01 Feb 2007 21:28:36 -0000
Message-Id: <20070201212837.36CD41A981A@eris.apache.org>

Author: minfrin
Date: Thu Feb  1 13:28:34 2007
New Revision: 502365

URL: http://svn.apache.org/viewvc?view=rev&rev=502365
Log:
This time from the top, with three part harmony AND feeling...
Revert the read-while-caching and large-file-crash fixes for
mod_disk_cache, ready to start again.

Reverted:
r450105 r450188 r462571 r462601 r462696 r467655 r467684 r468044
r468373 r468409 r470455

Modified:
    httpd/httpd/trunk/CHANGES
    httpd/httpd/trunk/modules/cache/mod_cache.c
    httpd/httpd/trunk/modules/cache/mod_cache.h
    httpd/httpd/trunk/modules/cache/mod_disk_cache.c
    httpd/httpd/trunk/modules/cache/mod_disk_cache.h
    httpd/httpd/trunk/modules/cache/mod_mem_cache.c

Modified: httpd/httpd/trunk/CHANGES
URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/CHANGES?view=diff&rev=502365&r1=502364&r2=502365
==============================================================================
--- httpd/httpd/trunk/CHANGES [utf-8] (original)
+++ httpd/httpd/trunk/CHANGES [utf-8] Thu Feb  1 13:28:34 2007
@@ -61,15 +61,6 @@
      make sense and leads to a division by zero. PR 40576. [Xuekun Hu ]
 
-  *) mod_cache: Pass the output filter stack through the store_body()
-     hook, giving each cache backend the ability to make a better
-     decision as to how it will allocate the tasks of writing to the
-     cache and writing to the network. Previously the write to the
-     cache task needed to be complete before the same brigade was
-     written to the network, and this caused timing and memory issues
-     on large cached files. This fix replaces the previous fix for this
-     PR below. PR39380 [Graham Leggett]
-
   *) Fix issue which could cause error messages to be written to access
      logs on Win32. PR 40476. [Tom Donovan ]
 
@@ -89,27 +80,12 @@
   *) mod_proxy_fcgi: Added win32 build. [Mladen Turk]
 
-  *) mod_disk_cache: Implement read-while-caching.
-     [Niklas Edmundsson ]
-
-  *) mod_disk_cache: NULL fd pointers when closing them, fix missing
-     close/flush, remove some unneccessary code duplication instead
-     of calling the right helper in replace_brigade_with_cache().
-     [Niklas Edmundsson ]
-
   *) sendfile_nonblocking() takes the _brigade_ as an argument, gets
      the first bucket from the brigade, finds it not to be a FILE
      bucket and barfs. The fix is to pass a bucket rather than a
      brigade. [Niklas Edmundsson ]
 
-  *) mod_disk_cache: Do away with the write-to-file-then-move-in-place
-     mentality. [Niklas Edmundsson ]
-
   *) mod_rewrite: support rewritemap by SQL query [Nick Kew]
-
-  *) mod_disk_cache: Make caching of large files possible on 32bit machines
-     by determining whether the cached file should be copied on disk rather
-     than loaded into RAM. PR39380 [Niklas Edmundsson ]
 
   *) mod_proxy: Print the correct error message for erroneous configured
      ProxyPass directives. PR 40439. [serai lans-tv.com]

Modified: httpd/httpd/trunk/modules/cache/mod_cache.c
URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cache/mod_cache.c?view=diff&rev=502365&r1=502364&r2=502365
==============================================================================
--- httpd/httpd/trunk/modules/cache/mod_cache.c (original)
+++ httpd/httpd/trunk/modules/cache/mod_cache.c Thu Feb  1 13:28:34 2007
@@ -366,7 +366,13 @@
     /* pass the brigades into the cache, then pass them
      * up the filter stack
      */
-    return cache->provider->store_body(cache->handle, f, in);
+    rv = cache->provider->store_body(cache->handle, r, in);
+    if (rv != APR_SUCCESS) {
+        ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server,
+                     "cache: Cache provider's store_body failed!");
+        ap_remove_output_filter(f);
+    }
+    return ap_pass_brigade(f->next, in);
 }
 
 /*
@@ -823,8 +829,14 @@
         return ap_pass_brigade(f->next, in);
     }
 
-    return cache->provider->store_body(cache->handle, f, in);
+    rv = cache->provider->store_body(cache->handle, r, in);
+    if (rv != APR_SUCCESS) {
+        ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server,
+                     "cache: store_body failed");
+        ap_remove_output_filter(f);
+    }
+    return ap_pass_brigade(f->next, in);
 }
 
 /*

Modified: httpd/httpd/trunk/modules/cache/mod_cache.h
URL:
http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cache/mod_cache.h?view=diff&rev=502365&r1=502364&r2=502365
==============================================================================
--- httpd/httpd/trunk/modules/cache/mod_cache.h (original)
+++ httpd/httpd/trunk/modules/cache/mod_cache.h Thu Feb  1 13:28:34 2007
@@ -210,7 +210,7 @@
 typedef struct {
     int (*remove_entity) (cache_handle_t *h);
     apr_status_t (*store_headers)(cache_handle_t *h, request_rec *r, cache_info *i);
-    apr_status_t (*store_body)(cache_handle_t *h, ap_filter_t *f, apr_bucket_brigade *b);
+    apr_status_t (*store_body)(cache_handle_t *h, request_rec *r, apr_bucket_brigade *b);
     apr_status_t (*recall_headers) (cache_handle_t *h, request_rec *r);
     apr_status_t (*recall_body) (cache_handle_t *h, apr_pool_t *p, apr_bucket_brigade *bb);
     int (*create_entity) (cache_handle_t *h, request_rec *r,

Modified: httpd/httpd/trunk/modules/cache/mod_disk_cache.c
URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cache/mod_disk_cache.c?view=diff&rev=502365&r1=502364&r2=502365
==============================================================================
--- httpd/httpd/trunk/modules/cache/mod_disk_cache.c (original)
+++ httpd/httpd/trunk/modules/cache/mod_disk_cache.c Thu Feb  1 13:28:34 2007
@@ -37,15 +37,13 @@
  * re-read in .header (must be format #2)
  * read in .data
  *
- * Always first in the header file:
- *   disk_cache_format_t format;
- *
- * VARY_FORMAT_VERSION:
+ * Format #1:
+ *   apr_uint32_t format;
  *   apr_time_t expire;
  *   apr_array_t vary_headers (delimited by CRLF)
  *
- * DISK_FORMAT_VERSION:
- *   disk_cache_info_t
+ * Format #2:
+ *   disk_cache_info_t (first sizeof(apr_uint32_t) bytes is the format)
  *   entity name (dobj->name) [length is in disk_cache_info_t->name_len]
  *   r->headers_out (delimited by CRLF)
  *   CRLF
@@ -58,244 +56,13 @@
 
 /* Forward declarations */
 static int remove_entity(cache_handle_t *h);
 static apr_status_t store_headers(cache_handle_t *h, request_rec *r, cache_info *i);
-static
apr_status_t store_body(cache_handle_t *h, ap_filter_t *f, apr_bucket_brigade *b); +static apr_status_t store_body(cache_handle_t *h, request_rec *r, apr_bucket_brigade *b); static apr_status_t recall_headers(cache_handle_t *h, request_rec *r); static apr_status_t recall_body(cache_handle_t *h, apr_pool_t *p, apr_bucket_brigade *bb); static apr_status_t read_array(request_rec *r, apr_array_header_t* arr, apr_file_t *file); /* - * Modified file bucket implementation to be able to deliver files - * while caching. - */ - -/* Derived from apr_buckets_file.c */ - -#define BUCKET_IS_DISKCACHE(e) ((e)->type == &bucket_type_diskcache) -APU_DECLARE_DATA const apr_bucket_type_t bucket_type_diskcache; - -static void diskcache_bucket_destroy(void *data) -{ - diskcache_bucket_data *f = data; - - if (apr_bucket_shared_destroy(f)) { - /* no need to close files here; it will get - * done automatically when the pool gets cleaned up */ - apr_bucket_free(f); - } -} - - -/* The idea here is to convert diskcache buckets to regular file buckets - as data becomes available */ -/* FIXME: Maybe we should care about the block argument, right now we're - always blocking */ -static apr_status_t diskcache_bucket_read(apr_bucket *e, const char **str, - apr_size_t *len, - apr_read_type_e block) -{ - diskcache_bucket_data *a = e->data; - apr_file_t *f = a->fd; - apr_bucket *b = NULL; - char *buf; - apr_status_t rv; - apr_finfo_t finfo; - apr_size_t filelength = e->length; /* bytes remaining in file past offset */ - apr_off_t fileoffset = e->start; - apr_off_t fileend; - apr_size_t available; -#if APR_HAS_THREADS && !APR_HAS_XTHREAD_FILES - apr_int32_t flags; -#endif - -#if APR_HAS_THREADS && !APR_HAS_XTHREAD_FILES - if ((flags = apr_file_flags_get(f)) & APR_XTHREAD) { - /* this file descriptor is shared across multiple threads and - * this OS doesn't support that natively, so as a workaround - * we must reopen the file into a->readpool */ - const char *fname; - apr_file_name_get(&fname, f); - - 
rv = apr_file_open(&f, fname, (flags & ~APR_XTHREAD), 0, a->readpool); - if (rv != APR_SUCCESS) - return rv; - - a->fd = f; - } -#endif - - /* in case we die prematurely */ - *str = NULL; - *len = 0; - - while(1) { - /* Figure out how big the file is right now, sit here until - it's grown enough or we get bored */ - fileend = 0; - rv = apr_file_seek(f, APR_END, &fileend); - if(rv != APR_SUCCESS) { - return rv; - } - - if(fileend >= fileoffset + MIN(filelength, CACHE_BUF_SIZE)) { - break; - } - - rv = apr_file_info_get(&finfo, APR_FINFO_MTIME, f); - if(rv != APR_SUCCESS || - finfo.mtime < (apr_time_now() - a->updtimeout) ) - { - return APR_EGENERAL; - } - apr_sleep(CACHE_LOOP_SLEEP); - } - - /* Convert this bucket to a zero-length heap bucket so we won't be called - again */ - buf = apr_bucket_alloc(0, e->list); - apr_bucket_heap_make(e, buf, 0, apr_bucket_free); - - /* Wrap as much as possible into a regular file bucket */ - available = MIN(filelength, fileend-fileoffset); - b = apr_bucket_file_create(f, fileoffset, available, a->readpool, e->list); - APR_BUCKET_INSERT_AFTER(e, b); - - /* Put any remains in yet another bucket */ - if(available < filelength) { - e=b; - /* for efficiency, we can just build a new apr_bucket struct - * to wrap around the existing bucket */ - b = apr_bucket_alloc(sizeof(*b), e->list); - b->start = fileoffset + available; - b->length = filelength - available; - b->data = a; - b->type = &bucket_type_diskcache; - b->free = apr_bucket_free; - b->list = e->list; - APR_BUCKET_INSERT_AFTER(e, b); - } - else { - diskcache_bucket_destroy(a); - } - - *str = buf; - return APR_SUCCESS; -} - -static apr_bucket * diskcache_bucket_make(apr_bucket *b, - apr_file_t *fd, - apr_off_t offset, - apr_size_t len, - apr_interval_time_t timeout, - apr_pool_t *p) -{ - diskcache_bucket_data *f; - - f = apr_bucket_alloc(sizeof(*f), b->list); - f->fd = fd; - f->readpool = p; - f->updtimeout = timeout; - - b = apr_bucket_shared_make(b, f, offset, len); - b->type = 
&bucket_type_diskcache; - - return b; -} - -static apr_bucket * diskcache_bucket_create(apr_file_t *fd, - apr_off_t offset, - apr_size_t len, - apr_interval_time_t timeout, - apr_pool_t *p, - apr_bucket_alloc_t *list) -{ - apr_bucket *b = apr_bucket_alloc(sizeof(*b), list); - - APR_BUCKET_INIT(b); - b->free = apr_bucket_free; - b->list = list; - return diskcache_bucket_make(b, fd, offset, len, timeout, p); -} - - -/* FIXME: This is probably only correct for the first case, that seems - to be the one that occurs all the time... */ -static apr_status_t diskcache_bucket_setaside(apr_bucket *data, - apr_pool_t *reqpool) -{ - diskcache_bucket_data *a = data->data; - apr_file_t *fd = NULL; - apr_file_t *f = a->fd; - apr_pool_t *curpool = apr_file_pool_get(f); - - if (apr_pool_is_ancestor(curpool, reqpool)) { - return APR_SUCCESS; - } - - if (!apr_pool_is_ancestor(a->readpool, reqpool)) { - /* FIXME: Figure out what needs to be done here */ - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, NULL, - "disk_cache: diskcache_bucket_setaside: FIXME1"); - a->readpool = reqpool; - } - - /* FIXME: Figure out what needs to be done here */ - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, NULL, - "disk_cache: diskcache_bucket_setaside: FIXME2"); - - apr_file_setaside(&fd, f, reqpool); - a->fd = fd; - return APR_SUCCESS; -} - -APU_DECLARE_DATA const apr_bucket_type_t bucket_type_diskcache = { - "DISKCACHE", 5, APR_BUCKET_DATA, - diskcache_bucket_destroy, - diskcache_bucket_read, - diskcache_bucket_setaside, - apr_bucket_shared_split, - apr_bucket_shared_copy -}; - -/* From apr_brigade.c */ - -/* A "safe" maximum bucket size, 1Gb */ -#define MAX_BUCKET_SIZE (0x40000000) - -static apr_bucket * diskcache_brigade_insert(apr_bucket_brigade *bb, - apr_file_t *f, apr_off_t - start, apr_off_t length, - apr_interval_time_t timeout, - apr_pool_t *p) -{ - apr_bucket *e; - - if (length < MAX_BUCKET_SIZE) { - e = diskcache_bucket_create(f, start, (apr_size_t)length, timeout, p, - bb->bucket_alloc); - } - else 
{ - /* Several buckets are needed. */ - e = diskcache_bucket_create(f, start, MAX_BUCKET_SIZE, timeout, p, - bb->bucket_alloc); - - while (length > MAX_BUCKET_SIZE) { - apr_bucket *ce; - apr_bucket_copy(e, &ce); - APR_BRIGADE_INSERT_TAIL(bb, ce); - e->start += MAX_BUCKET_SIZE; - length -= MAX_BUCKET_SIZE; - } - e->length = (apr_size_t)length; /* Resize just the last bucket */ - } - - APR_BRIGADE_INSERT_TAIL(bb, e); - return e; -} - -/* --------------------------------------------------------------- */ - -/* * Local static functions */ @@ -335,9 +102,9 @@ } } -static apr_status_t mkdir_structure(disk_cache_conf *conf, const char *file, apr_pool_t *pool) +static void mkdir_structure(disk_cache_conf *conf, const char *file, apr_pool_t *pool) { - apr_status_t rv = APR_SUCCESS; + apr_status_t rv; char *p; for (p = (char*)file + conf->cache_root_len + 1;;) { @@ -348,17 +115,12 @@ rv = apr_dir_make(file, APR_UREAD|APR_UWRITE|APR_UEXECUTE, pool); - *p = '/'; if (rv != APR_SUCCESS && !APR_STATUS_IS_EEXIST(rv)) { - break; + /* XXX */ } + *p = '/'; ++p; } - if (rv != APR_SUCCESS && !APR_STATUS_IS_EEXIST(rv)) { - return rv; - } - - return APR_SUCCESS; } /* htcacheclean may remove directories underneath us. @@ -388,6 +150,33 @@ return rv; } +static apr_status_t file_cache_el_final(disk_cache_object_t *dobj, + request_rec *r) +{ + /* move the data over */ + if (dobj->tfd) { + apr_status_t rv; + + apr_file_close(dobj->tfd); + + /* This assumes that the tempfile is on the same file system + * as the cache_root. If not, then we need a file copy/move + * rather than a rename. 
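The commit step that file_cache_el_final restores above is the classic write-to-tempfile-then-rename pattern: readers never see a half-written data file because the rename publishes it in one step. As the comment in the diff notes, this is only safe when the tempfile and cache_root share a filesystem, since rename() is not atomic (or possible) across filesystems. A minimal standalone sketch of that step, using plain stdio instead of APR; the function and error-handling names are illustrative, not httpd API:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Write the body to a temporary file, then rename() it into place.
 * On any failure, remove the partial tempfile, mirroring the
 * apr_file_remove() call in the diff above. */
static int commit_cache_file(const char *tmppath, const char *finalpath,
                             const char *data, size_t len)
{
    FILE *f = fopen(tmppath, "wb");
    if (f == NULL)
        return -1;
    if (fwrite(data, 1, len, f) != len) {
        fclose(f);
        remove(tmppath);          /* clean up the partial tempfile */
        return -1;
    }
    fclose(f);
    if (rename(tmppath, finalpath) != 0) {
        remove(tmppath);          /* rename failed: discard, don't publish */
        return -1;
    }
    return 0;                     /* readers now see a complete file */
}
```

On a cross-filesystem setup the rename() call fails with EXDEV, which is exactly the case where a copy-and-move fallback would be needed instead.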
+ */ + rv = apr_file_rename(dobj->tempfile, dobj->datafile, r->pool); + if (rv != APR_SUCCESS) { + ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server, + "disk_cache: rename tempfile to datafile failed:" + " %s -> %s", dobj->tempfile, dobj->datafile); + apr_file_remove(dobj->tempfile, r->pool); + } + + dobj->tfd = NULL; + } + + return APR_SUCCESS; +} + static apr_status_t file_cache_errorcleanup(disk_cache_object_t *dobj, request_rec *r) { /* Remove the header file and the body file. */ @@ -405,6 +194,53 @@ } +/* These two functions get and put state information into the data + * file for an ap_cache_el, this state information will be read + * and written transparent to clients of this module + */ +static int file_cache_recall_mydata(apr_file_t *fd, cache_info *info, + disk_cache_object_t *dobj, request_rec *r) +{ + apr_status_t rv; + char *urlbuff; + disk_cache_info_t disk_info; + apr_size_t len; + + /* read the data from the cache file */ + len = sizeof(disk_cache_info_t); + rv = apr_file_read_full(fd, &disk_info, len, &len); + if (rv != APR_SUCCESS) { + return rv; + } + + /* Store it away so we can get it later. */ + dobj->disk_info = disk_info; + + info->status = disk_info.status; + info->date = disk_info.date; + info->expire = disk_info.expire; + info->request_time = disk_info.request_time; + info->response_time = disk_info.response_time; + + /* Note that we could optimize this by conditionally doing the palloc + * depending upon the size. */ + urlbuff = apr_palloc(r->pool, disk_info.name_len + 1); + len = disk_info.name_len; + rv = apr_file_read_full(fd, urlbuff, len, &len); + if (rv != APR_SUCCESS) { + return rv; + } + urlbuff[disk_info.name_len] = '\0'; + + /* check that we have the same URL */ + /* Would strncmp be correct? 
*/ + if (strcmp(urlbuff, dobj->name) != 0) { + return APR_EGENERAL; + } + + return APR_SUCCESS; +} + static const char* regen_key(apr_pool_t *p, apr_table_t *headers, apr_array_header_t *varray, const char *oldkey) { @@ -524,90 +360,70 @@ dobj->datafile = data_file(r->pool, conf, dobj, key); dobj->hdrsfile = header_file(r->pool, conf, dobj, key); dobj->tempfile = apr_pstrcat(r->pool, conf->cache_root, AP_TEMPFILE, NULL); - dobj->initial_size = len; - dobj->file_size = -1; - dobj->updtimeout = conf->updtimeout; - dobj->frv = APR_SUCCESS; return OK; } - -static apr_status_t file_read_timeout(apr_file_t *file, char * buf, - apr_size_t len, apr_time_t timeout) +static int open_entity(cache_handle_t *h, request_rec *r, const char *key) { - apr_size_t left, done; - apr_finfo_t finfo; + apr_uint32_t format; + apr_size_t len; + const char *nkey; apr_status_t rc; + static int error_logged = 0; + disk_cache_conf *conf = ap_get_module_config(r->server->module_config, + &disk_cache_module); + apr_finfo_t finfo; + cache_object_t *obj; + cache_info *info; + disk_cache_object_t *dobj; + int flags; - done = 0; - left = len; - - while(1) { - rc = apr_file_read_full(file, buf+done, left, &len); - if (rc == APR_SUCCESS) { - break; - } - done += len; - left -= len; + h->cache_obj = NULL; - if(!APR_STATUS_IS_EOF(rc)) { - return rc; - } - rc = apr_file_info_get(&finfo, APR_FINFO_MTIME, file); - if(rc != APR_SUCCESS) { - return rc; - } - if(finfo.mtime < (apr_time_now() - timeout) ) { - return APR_ETIMEDOUT; + /* Look up entity keyed to 'url' */ + if (conf->cache_root == NULL) { + if (!error_logged) { + error_logged = 1; + ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, + "disk_cache: Cannot cache files to disk without a CacheRoot specified."); } - apr_sleep(CACHE_LOOP_SLEEP); + return DECLINED; } - return APR_SUCCESS; -} + /* Create and init the cache object */ + h->cache_obj = obj = apr_pcalloc(r->pool, sizeof(cache_object_t)); + obj->vobj = dobj = apr_pcalloc(r->pool, 
sizeof(disk_cache_object_t)); + info = &(obj->info); -static apr_status_t open_header(cache_handle_t *h, request_rec *r, - const char *key, disk_cache_conf *conf) -{ - int flags; - disk_cache_format_t format; - apr_status_t rc; - const char *nkey = key; - disk_cache_info_t disk_info; - cache_object_t *obj = h->cache_obj; - disk_cache_object_t *dobj = obj->vobj; + /* Open the headers file */ + dobj->prefix = NULL; - flags = APR_READ|APR_BINARY|APR_BUFFERED; + /* Save the cache root */ + dobj->root = apr_pstrndup(r->pool, conf->cache_root, conf->cache_root_len); + dobj->root_len = conf->cache_root_len; + dobj->hdrsfile = header_file(r->pool, conf, dobj, key); + flags = APR_READ|APR_BINARY|APR_BUFFERED; rc = apr_file_open(&dobj->hfd, dobj->hdrsfile, flags, 0, r->pool); if (rc != APR_SUCCESS) { - return CACHE_EDECLINED; + return DECLINED; } /* read the format from the cache file */ - rc = apr_file_read_full(dobj->hfd, &format, sizeof(format), NULL); - if(APR_STATUS_IS_EOF(rc)) { - return CACHE_ENODATA; - } - else if(rc != APR_SUCCESS) { - return rc; - } + len = sizeof(format); + apr_file_read_full(dobj->hfd, &format, len, &len); - /* Vary-files are being written to tmpfile and moved in place, so - the should always be complete */ if (format == VARY_FORMAT_VERSION) { apr_array_header_t* varray; apr_time_t expire; - rc = apr_file_read_full(dobj->hfd, &expire, sizeof(expire), NULL); - if(rc != APR_SUCCESS) { - return rc; - } + len = sizeof(expire); + apr_file_read_full(dobj->hfd, &expire, len, &len); if (expire < r->request_time) { - return CACHE_EDECLINED; + return DECLINED; } varray = apr_array_make(r->pool, 5, sizeof(char*)); @@ -616,346 +432,127 @@ ap_log_error(APLOG_MARK, APLOG_ERR, rc, r->server, "disk_cache: Cannot parse vary header file: %s", dobj->hdrsfile); - return CACHE_EDECLINED; + return DECLINED; } apr_file_close(dobj->hfd); nkey = regen_key(r->pool, r->headers_in, varray, key); + dobj->hashfile = NULL; dobj->prefix = dobj->hdrsfile; - dobj->hdrsfile = 
data_file(r->pool, conf, dobj, nkey); + dobj->hdrsfile = header_file(r->pool, conf, dobj, nkey); + flags = APR_READ|APR_BINARY|APR_BUFFERED; rc = apr_file_open(&dobj->hfd, dobj->hdrsfile, flags, 0, r->pool); if (rc != APR_SUCCESS) { - dobj->hfd = NULL; - return CACHE_EDECLINED; - } - rc = apr_file_read_full(dobj->hfd, &format, sizeof(format), NULL); - if(APR_STATUS_IS_EOF(rc)) { - return CACHE_ENODATA; - } - else if(rc != APR_SUCCESS) { - return rc; + return DECLINED; } } - - if(format != DISK_FORMAT_VERSION) { - ap_log_error(APLOG_MARK, APLOG_INFO, 0, r->server, - "disk_cache: File '%s' had a version mismatch. File had " - "version: %d (current is %d). Deleted.", dobj->hdrsfile, - format, DISK_FORMAT_VERSION); - file_cache_errorcleanup(dobj, r); - return CACHE_EDECLINED; + else if (format != DISK_FORMAT_VERSION) { + ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, + "disk_cache: File '%s' has a version mismatch. File had version: %d.", + dobj->hdrsfile, format); + return DECLINED; + } + else { + apr_off_t offset = 0; + /* This wasn't a Vary Format file, so we must seek to the + * start of the file again, so that later reads work. 
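The open_entity logic above dispatches on a format tag at the start of the header file: a Vary-format file leads to key regeneration and a second open, a plain disk-format file is read directly (after seeking back so the tag is re-read as part of disk_cache_info_t), and anything else is a version mismatch. A small standalone sketch of that dispatch; the version numbers here are illustrative placeholders, not httpd's real constants:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define SKETCH_VARY_FORMAT 3u   /* stand-in for VARY_FORMAT_VERSION */
#define SKETCH_DISK_FORMAT 4u   /* stand-in for DISK_FORMAT_VERSION */

enum header_kind { HDR_VARY, HDR_DISK, HDR_MISMATCH };

/* Classify a header buffer by its leading 32-bit format tag, the way
 * open_entity branches after its first apr_file_read_full(). */
static enum header_kind classify_header(const unsigned char *buf, size_t len)
{
    uint32_t format;

    if (len < sizeof(format))
        return HDR_MISMATCH;      /* too short to carry a format tag */
    memcpy(&format, buf, sizeof(format));
    if (format == SKETCH_VARY_FORMAT)
        return HDR_VARY;
    if (format == SKETCH_DISK_FORMAT)
        return HDR_DISK;
    return HDR_MISMATCH;          /* unknown version: decline the entry */
}
```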
+ */ + apr_file_seek(dobj->hfd, APR_SET, &offset); + nkey = key; } obj->key = nkey; + dobj->key = nkey; dobj->name = key; + dobj->datafile = data_file(r->pool, conf, dobj, nkey); + dobj->tempfile = apr_pstrcat(r->pool, conf->cache_root, AP_TEMPFILE, NULL); - /* read the data from the header file */ - rc = apr_file_read_full(dobj->hfd, &disk_info, sizeof(disk_info), NULL); - if(APR_STATUS_IS_EOF(rc)) { - return CACHE_ENODATA; + /* Open the data file */ + flags = APR_READ|APR_BINARY; +#ifdef APR_SENDFILE_ENABLED + flags |= APR_SENDFILE_ENABLED; +#endif + rc = apr_file_open(&dobj->fd, dobj->datafile, flags, 0, r->pool); + if (rc != APR_SUCCESS) { + /* XXX: Log message */ + return DECLINED; } - else if(rc != APR_SUCCESS) { - return rc; + + rc = apr_file_info_get(&finfo, APR_FINFO_SIZE, dobj->fd); + if (rc == APR_SUCCESS) { + dobj->file_size = finfo.size; } - /* Store it away so we can get it later. */ - dobj->disk_info = disk_info; + /* Read the bytes to setup the cache_info fields */ + rc = file_cache_recall_mydata(dobj->hfd, info, dobj, r); + if (rc != APR_SUCCESS) { + /* XXX log message */ + return DECLINED; + } - return APR_SUCCESS; + /* Initialize the cache_handle callback functions */ + ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, + "disk_cache: Recalled cached URL info header %s", dobj->name); + return OK; } +static int remove_entity(cache_handle_t *h) +{ + /* Null out the cache object pointer so next time we start from scratch */ + h->cache_obj = NULL; + return OK; +} -static apr_status_t open_header_timeout(cache_handle_t *h, request_rec *r, - const char *key, disk_cache_conf *conf, - disk_cache_object_t *dobj) +static int remove_url(cache_handle_t *h, apr_pool_t *p) { apr_status_t rc; - apr_finfo_t finfo; - - while(1) { - if(dobj->hfd) { - apr_file_close(dobj->hfd); - dobj->hfd = NULL; - } - rc = open_header(h, r, key, conf); - if(rc != APR_SUCCESS && rc != CACHE_ENODATA) { - if(rc != CACHE_EDECLINED) { - ap_log_error(APLOG_MARK, APLOG_ERR, rc, 
r->server, - "disk_cache: Cannot load header file: %s", - dobj->hdrsfile); - } - return rc; - } + disk_cache_object_t *dobj; - /* Objects with unknown body size will have file_size == -1 until the - entire body is written and the header updated with the actual size. - And since we depend on knowing the body size we wait until the size - is written */ - if(rc == APR_SUCCESS && dobj->disk_info.file_size >= 0) { - break; - } - rc = apr_file_info_get(&finfo, APR_FINFO_MTIME, dobj->hfd); - if(rc != APR_SUCCESS) { - return rc; - } - if(finfo.mtime < (apr_time_now() - dobj->updtimeout)) { - ap_log_error(APLOG_MARK, APLOG_WARNING, 0, r->server, - "disk_cache: Timed out waiting for header for URL %s" - " - caching the body failed?", key); - return CACHE_EDECLINED; - } - apr_sleep(CACHE_LOOP_SLEEP); + /* Get disk cache object from cache handle */ + dobj = (disk_cache_object_t *) h->cache_obj->vobj; + if (!dobj) { + return DECLINED; } - return APR_SUCCESS; -} - - -static apr_status_t open_body_timeout(request_rec *r, const char *key, - disk_cache_object_t *dobj) -{ - apr_off_t off; - apr_time_t starttime = apr_time_now(); - int flags; - apr_status_t rc; -#if APR_HAS_SENDFILE - core_dir_config *pdconf = ap_get_module_config(r->per_dir_config, - &core_module); -#endif + /* Delete headers file */ + if (dobj->hdrsfile) { + ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, NULL, + "disk_cache: Deleting %s from cache.", dobj->hdrsfile); - flags = APR_READ|APR_BINARY|APR_BUFFERED; -#if APR_HAS_SENDFILE - flags |= ((pdconf->enable_sendfile == ENABLE_SENDFILE_OFF) - ? 
0 : APR_SENDFILE_ENABLED); -#endif - - /* Wait here until we get a body cachefile, data in it, and do quick sanity - * check */ - - while(1) { - if(dobj->fd == NULL) { - rc = apr_file_open(&dobj->fd, dobj->datafile, flags, 0, r->pool); - if(rc != APR_SUCCESS) { - if(starttime < (apr_time_now() - dobj->updtimeout) ) { - ap_log_error(APLOG_MARK, APLOG_WARNING, 0, r->server, - "disk_cache: Timed out waiting for body for " - "URL %s - caching failed?", key); - return CACHE_EDECLINED; - } - apr_sleep(CACHE_LOOP_SLEEP); - continue; - } + rc = apr_file_remove(dobj->hdrsfile, p); + if ((rc != APR_SUCCESS) && !APR_STATUS_IS_ENOENT(rc)) { + /* Will only result in an output if httpd is started with -e debug. + * For reason see log_error_core for the case s == NULL. + */ + ap_log_error(APLOG_MARK, APLOG_DEBUG, rc, NULL, + "disk_cache: Failed to delete headers file %s from cache.", + dobj->hdrsfile); + return DECLINED; } + } - dobj->file_size = 0; - rc = apr_file_seek(dobj->fd, APR_END, &dobj->file_size); - if(rc != APR_SUCCESS) { - return rc; - } + /* Delete data file */ + if (dobj->datafile) { + ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, NULL, + "disk_cache: Deleting %s from cache.", dobj->datafile); - if(dobj->initial_size < dobj->file_size) { - ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, - "disk_cache: Bad cached body for URL %s, size %" - APR_OFF_T_FMT " != %" APR_OFF_T_FMT, dobj->name, - dobj->initial_size, dobj->file_size); - file_cache_errorcleanup(dobj, r); - return CACHE_EDECLINED; - } - else if(dobj->initial_size > dobj->file_size) { - /* Still caching or failed? 
*/ - apr_finfo_t finfo; - - rc = apr_file_info_get(&finfo, APR_FINFO_MTIME, dobj->fd); - if(rc != APR_SUCCESS || - finfo.mtime < (apr_time_now() - dobj->updtimeout) ) - { - ap_log_error(APLOG_MARK, APLOG_WARNING, rc, r->server, - "disk_cache: Body for URL %s is too small - " - "caching the body failed?", dobj->name); - return CACHE_EDECLINED; - } - } - if(dobj->file_size > 0) { - break; + rc = apr_file_remove(dobj->datafile, p); + if ((rc != APR_SUCCESS) && !APR_STATUS_IS_ENOENT(rc)) { + /* Will only result in an output if httpd is started with -e debug. + * For reason see log_error_core for the case s == NULL. + */ + ap_log_error(APLOG_MARK, APLOG_DEBUG, rc, NULL, + "disk_cache: Failed to delete data file %s from cache.", + dobj->datafile); + return DECLINED; } - apr_sleep(CACHE_LOOP_SLEEP); } - /* Go back to the beginning */ - off = 0; - rc = apr_file_seek(dobj->fd, APR_SET, &off); - if(rc != APR_SUCCESS) { - return rc; - } - - return APR_SUCCESS; -} - - -static int open_entity(cache_handle_t *h, request_rec *r, const char *key) -{ - apr_status_t rc; - disk_cache_object_t *dobj; - cache_info *info; - apr_size_t len; - static int error_logged = 0; - disk_cache_conf *conf = ap_get_module_config(r->server->module_config, - &disk_cache_module); - char urlbuff[MAX_STRING_LEN]; - - h->cache_obj = NULL; - - /* Look up entity keyed to 'url' */ - if (conf->cache_root == NULL) { - if (!error_logged) { - error_logged = 1; - ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, - "disk_cache: Cannot cache files to disk without a " - "CacheRoot specified."); - } - return DECLINED; - } - - /* Create and init the cache object */ - h->cache_obj = apr_pcalloc(r->pool, sizeof(cache_object_t)); - h->cache_obj->vobj = dobj = apr_pcalloc(r->pool, sizeof(disk_cache_object_t)); - info = &(h->cache_obj->info); - - /* Save the cache root */ - dobj->root = apr_pstrndup(r->pool, conf->cache_root, conf->cache_root_len); - dobj->root_len = conf->cache_root_len; - - dobj->hdrsfile = 
header_file(r->pool, conf, dobj, key); - - dobj->updtimeout = conf->updtimeout; - - /* Open header and read basic info, wait until header contains - valid size information for the body */ - rc = open_header_timeout(h, r, key, conf, dobj); - if(rc != APR_SUCCESS) { - return DECLINED; - } - - /* TODO: We have the ability to serve partially cached requests, - * however in order to avoid some sticky what if conditions - * should the content turn out to be too large to be cached, - * we must only allow partial cache serving if the cached - * entry has a content length known in advance. - */ - - info->status = dobj->disk_info.status; - info->date = dobj->disk_info.date; - info->expire = dobj->disk_info.expire; - info->request_time = dobj->disk_info.request_time; - info->response_time = dobj->disk_info.response_time; - - dobj->initial_size = (apr_off_t) dobj->disk_info.file_size; - dobj->tempfile = apr_pstrcat(r->pool, conf->cache_root, AP_TEMPFILE, NULL); - - len = dobj->disk_info.name_len; - - if(len > 0) { - rc = file_read_timeout(dobj->hfd, urlbuff, len, dobj->updtimeout); - if (rc == APR_ETIMEDOUT) { - ap_log_error(APLOG_MARK, APLOG_WARNING, rc, r->server, - "disk_cache: Timed out waiting for urlbuff for " - "URL %s - caching failed?", key); - return DECLINED; - } - else if(rc != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_WARNING, rc, r->server, - "disk_cache: Error reading urlbuff for URL %s", - key); - return DECLINED; - } - } - urlbuff[len] = '\0'; - - /* check that we have the same URL */ - if (strcmp(urlbuff, dobj->name) != 0) { - ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, - "disk_cache: Cached URL %s didn't match requested " - "URL %s", urlbuff, dobj->name); - return DECLINED; - } - - dobj->datafile = data_file(r->pool, conf, dobj, h->cache_obj->key); - dobj->tempfile = apr_pstrcat(r->pool, conf->cache_root, AP_TEMPFILE, NULL); - - /* Only need body cachefile if we have a body */ - if(dobj->initial_size > 0) { - rc = open_body_timeout(r, key, dobj); 
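The reverted read-while-caching code (file_read_timeout, open_body_timeout, diskcache_bucket_read) all share the same polling shape: keep waiting while the cache file is still growing, and give up once the writer has been idle longer than the update timeout, judged by the file's mtime. A sketch of the two decisions inside that loop, using plain integers in place of apr_time_t/apr_off_t; the helper names are ours, not httpd's:

```c
#include <assert.h>

/* Enough data past `offset` to satisfy this read? Mirrors the
 * fileend >= fileoffset + MIN(filelength, CACHE_BUF_SIZE) test in
 * the reverted diskcache_bucket_read. */
static int enough_data(long long fileend, long long offset,
                       long long wanted, long long bufsize)
{
    long long need = wanted < bufsize ? wanted : bufsize;
    return fileend >= offset + need;
}

/* Has the writer gone away? Mirrors the
 * mtime < (now - updtimeout) checks that turn a stalled cache file
 * into a timeout error instead of waiting forever. */
static int writer_timed_out(long long mtime, long long now,
                            long long updtimeout)
{
    return mtime < now - updtimeout;
}
```

The loop itself is then just: if enough_data, proceed; else if writer_timed_out, fail; else sleep CACHE_LOOP_SLEEP and retry.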
- if(rc != APR_SUCCESS) { - return DECLINED; - } - } - else { - dobj->file_size = 0; - } - - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: Recalled status for cached URL %s", dobj->name); - return OK; -} - - -static int remove_entity(cache_handle_t *h) -{ - /* Null out the cache object pointer so next time we start from scratch */ - h->cache_obj = NULL; - return OK; -} - -static int remove_url(cache_handle_t *h, apr_pool_t *p) -{ - apr_status_t rc; - disk_cache_object_t *dobj; - - /* Get disk cache object from cache handle */ - dobj = (disk_cache_object_t *) h->cache_obj->vobj; - if (!dobj) { - return DECLINED; - } - - /* Delete headers file */ - if (dobj->hdrsfile) { - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, NULL, - "disk_cache: Deleting %s from cache.", dobj->hdrsfile); - - rc = apr_file_remove(dobj->hdrsfile, p); - if ((rc != APR_SUCCESS) && !APR_STATUS_IS_ENOENT(rc)) { - /* Will only result in an output if httpd is started with -e debug. - * For reason see log_error_core for the case s == NULL. - */ - ap_log_error(APLOG_MARK, APLOG_DEBUG, rc, NULL, - "disk_cache: Failed to delete headers file %s from cache.", - dobj->hdrsfile); - return DECLINED; - } - } - - /* Delete data file */ - if (dobj->datafile) { - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, NULL, - "disk_cache: Deleting %s from cache.", dobj->datafile); - - rc = apr_file_remove(dobj->datafile, p); - if ((rc != APR_SUCCESS) && !APR_STATUS_IS_ENOENT(rc)) { - /* Will only result in an output if httpd is started with -e debug. - * For reason see log_error_core for the case s == NULL. 
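remove_entity above nulls h->cache_obj so the next lookup starts from scratch. One of the reverted CHANGES entries applies the same defensive idiom to file descriptors ("NULL fd pointers when closing them"): closing through a pointer-to-pointer and clearing it makes an accidental double close a harmless no-op. A stdio sketch of the idiom; the wrapper name is ours, not httpd's:

```c
#include <assert.h>
#include <stdio.h>

/* Close *fp if open, then clear the caller's pointer so any later
 * call (or close attempt elsewhere) sees NULL and does nothing. */
static void close_and_clear(FILE **fp)
{
    if (*fp != NULL) {
        fclose(*fp);
        *fp = NULL;
    }
}
```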
- */ - ap_log_error(APLOG_MARK, APLOG_DEBUG, rc, NULL, - "disk_cache: Failed to delete data file %s from cache.", - dobj->datafile); - return DECLINED; - } - } - - /* now delete directories as far as possible up to our cache root */ - if (dobj->root) { - const char *str_to_copy; + /* now delete directories as far as possible up to our cache root */ + if (dobj->root) { + const char *str_to_copy; str_to_copy = dobj->hdrsfile ? dobj->hdrsfile : dobj->datafile; if (str_to_copy) { @@ -1061,7 +658,7 @@ &amt); } -static apr_status_t read_table(request_rec *r, +static apr_status_t read_table(cache_handle_t *handle, request_rec *r, apr_table_t *table, apr_file_t *file) { char w[MAX_STRING_LEN]; @@ -1074,6 +671,8 @@ /* ### What about APR_EOF? */ rv = apr_file_gets(w, MAX_STRING_LEN - 1, file); if (rv != APR_SUCCESS) { + ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, + "Premature end of cache headers."); return rv; } @@ -1116,7 +715,7 @@ } if (maybeASCII > maybeEBCDIC) { ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, - "disk_cache: CGI Interface Error: Script headers apparently ASCII: (CGI = %s)", + "CGI Interface Error: Script headers apparently ASCII: (CGI = %s)", r->filename); inbytes_left = outbytes_left = cp - w; apr_xlate_conv_buffer(ap_hdrs_from_ascii, @@ -1141,50 +740,6 @@ return APR_SUCCESS; } - -static apr_status_t read_table_timeout(cache_handle_t *handle, request_rec *r, - apr_table_t **table, apr_file_t *file, - apr_time_t timeout) -{ - apr_off_t off; - apr_finfo_t finfo; - apr_status_t rv; - - off = 0; - rv = apr_file_seek(file, APR_CUR, &off); - if(rv != APR_SUCCESS) { - return rv; - } - - while(1) { - *table = apr_table_make(r->pool, 20); - rv = read_table(r, *table, file); - if(rv == APR_SUCCESS) { - break; - } - apr_table_clear(*table); - - rv = apr_file_seek(file, APR_SET, &off); - if(rv != APR_SUCCESS) { - return rv; - } - - rv = apr_file_info_get(&finfo, APR_FINFO_MTIME, file); - if(rv != APR_SUCCESS || - finfo.mtime < (apr_time_now() - timeout) ) - { - 
ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, r, - "disk_cache: Timed out waiting for cache headers " - "URL %s", handle->cache_obj->key); - return APR_EGENERAL; - } - apr_sleep(CACHE_LOOP_SLEEP); - } - - return APR_SUCCESS; -} - - /* * Reads headers from a buffer and returns an array of headers. * Returns NULL on file error @@ -1195,7 +750,6 @@ static apr_status_t recall_headers(cache_handle_t *h, request_rec *r) { disk_cache_object_t *dobj = (disk_cache_object_t *) h->cache_obj->vobj; - apr_status_t rv; /* This case should not happen... */ if (!dobj->hfd) { @@ -1203,24 +757,14 @@ return APR_NOTFOUND; } - rv = read_table_timeout(h, r, &(h->resp_hdrs), dobj->hfd, dobj->updtimeout); - if(rv != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: Timed out waiting for response headers " - "for URL %s - caching failed?", dobj->name); - return rv; - } + h->req_hdrs = apr_table_make(r->pool, 20); + h->resp_hdrs = apr_table_make(r->pool, 20); - rv = read_table_timeout(h, r, &(h->req_hdrs), dobj->hfd, dobj->updtimeout); - if(rv != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: Timed out waiting for request headers " - "for URL %s - caching failed?", dobj->name); - return rv; - } + /* Call routine to read the header lines/status line */ + read_table(h, r, h->resp_hdrs, dobj->hfd); + read_table(h, r, h->req_hdrs, dobj->hfd); apr_file_close(dobj->hfd); - dobj->hfd = NULL; ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, "disk_cache: Recalled headers for URL %s", dobj->name); @@ -1232,25 +776,7 @@ apr_bucket *e; disk_cache_object_t *dobj = (disk_cache_object_t*) h->cache_obj->vobj; - /* Insert as much as possible as regular file (ie. 
sendfile():able) */ - if(dobj->file_size > 0) { - if(apr_brigade_insert_file(bb, dobj->fd, 0, - dobj->file_size, p) == NULL) - { - return APR_ENOMEM; - } - } - - /* Insert any remainder as read-while-caching bucket */ - if(dobj->file_size < dobj->initial_size) { - if(diskcache_brigade_insert(bb, dobj->fd, dobj->file_size, - dobj->initial_size - dobj->file_size, - dobj->updtimeout, p - ) == NULL) - { - return APR_ENOMEM; - } - } + apr_brigade_insert_file(bb, dobj->fd, 0, dobj->file_size, p); e = apr_bucket_eos_create(bb->bucket_alloc); APR_BRIGADE_INSERT_TAIL(bb, e); @@ -1292,199 +818,101 @@ return rv; } - -static apr_status_t open_new_file(request_rec *r, const char *filename, - apr_file_t **fd, disk_cache_conf *conf) +static apr_status_t store_headers(cache_handle_t *h, request_rec *r, cache_info *info) { - int flags; + disk_cache_conf *conf = ap_get_module_config(r->server->module_config, + &disk_cache_module); apr_status_t rv; -#if APR_HAS_SENDFILE - core_dir_config *pdconf = ap_get_module_config(r->per_dir_config, - &core_module); -#endif - - flags = APR_CREATE | APR_WRITE | APR_READ | APR_BINARY | APR_BUFFERED | APR_EXCL | APR_TRUNCATE; -#if APR_HAS_SENDFILE - flags |= ((pdconf->enable_sendfile == ENABLE_SENDFILE_OFF) - ? 
0 : APR_SENDFILE_ENABLED); -#endif - - while(1) { - rv = apr_file_open(fd, filename, flags, - APR_FPROT_UREAD | APR_FPROT_UWRITE, r->pool); - - ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server, - "disk_cache: open_new_file: Opening %s", filename); - - if(APR_STATUS_IS_EEXIST(rv)) { - apr_finfo_t finfo; - - rv = apr_stat(&finfo, filename, APR_FINFO_MTIME, r->pool); - if(APR_STATUS_IS_ENOENT(rv)) { - /* Someone else has already removed it, try again */ - continue; - } - else if(rv != APR_SUCCESS) { - return rv; - } + apr_size_t amt; + disk_cache_object_t *dobj = (disk_cache_object_t*) h->cache_obj->vobj; - if(finfo.mtime < (apr_time_now() - conf->updtimeout) ) { - /* Something stale that's left around */ + disk_cache_info_t disk_info; + struct iovec iov[2]; - rv = apr_file_remove(filename, r->pool); - if(rv != APR_SUCCESS && !APR_STATUS_IS_ENOENT(rv)) { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: open_new_file: Failed to " - "remove old %s", filename); - return rv; - } - continue; - } - else { - /* Someone else has just created the file, return identifiable - status so calling function can do the right thing */ + /* This is flaky... 
we need to manage the cache_info differently */ + h->cache_obj->info = *info; - return CACHE_EEXIST; - } - } - else if(APR_STATUS_IS_ENOENT(rv)) { - /* The directory for the file didn't exist */ + if (r->headers_out) { + const char *tmp; - rv = mkdir_structure(conf, filename, r->pool); - if(rv != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: open_new_file: Failed to make " - "directory for %s", filename); - return rv; - } - continue; - } - else if(rv == APR_SUCCESS) { - return APR_SUCCESS; - } - else { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: open_new_file: Failed to open %s", - filename); - return rv; - } - } + tmp = apr_table_get(r->headers_out, "Vary"); - /* We should never get here, so */ - return APR_EGENERAL; -} + if (tmp) { + apr_array_header_t* varray; + apr_uint32_t format = VARY_FORMAT_VERSION; + mkdir_structure(conf, dobj->hdrsfile, r->pool); -static apr_status_t store_vary_header(cache_handle_t *h, disk_cache_conf *conf, - request_rec *r, cache_info *info, - const char *varyhdr) -{ - disk_cache_object_t *dobj = (disk_cache_object_t*) h->cache_obj->vobj; - apr_array_header_t* varray; - const char *vfile; - apr_status_t rv; - int flags; - disk_cache_format_t format = VARY_FORMAT_VERSION; - struct iovec iov[2]; - apr_size_t amt; + rv = apr_file_mktemp(&dobj->tfd, dobj->tempfile, + APR_CREATE | APR_WRITE | APR_BINARY | APR_EXCL, + r->pool); - if(dobj->prefix != NULL) { - vfile = dobj->prefix; - } - else { - vfile = dobj->hdrsfile; - } + if (rv != APR_SUCCESS) { + return rv; + } - flags = APR_CREATE | APR_WRITE | APR_BINARY | APR_EXCL | APR_BUFFERED; - rv = apr_file_mktemp(&dobj->tfd, dobj->tempfile, flags, r->pool); - if (rv != APR_SUCCESS) { - return rv; - } + amt = sizeof(format); + apr_file_write(dobj->tfd, &format, &amt); - iov[0].iov_base = (void*)&format; - iov[0].iov_len = sizeof(format); + amt = sizeof(info->expire); + apr_file_write(dobj->tfd, &info->expire, &amt); - iov[1].iov_base = 
(void*)&info->expire; - iov[1].iov_len = sizeof(info->expire); + varray = apr_array_make(r->pool, 6, sizeof(char*)); + tokens_to_array(r->pool, tmp, varray); - rv = apr_file_writev(dobj->tfd, (const struct iovec *) &iov, 2, &amt); - if (rv != APR_SUCCESS) { - file_cache_errorcleanup(dobj, r); - return rv; - } + store_array(dobj->tfd, varray); - varray = apr_array_make(r->pool, 6, sizeof(char*)); - tokens_to_array(r->pool, varyhdr, varray); + apr_file_close(dobj->tfd); - rv = store_array(dobj->tfd, varray); - if (rv != APR_SUCCESS) { - file_cache_errorcleanup(dobj, r); - return rv; - } + dobj->tfd = NULL; - rv = apr_file_close(dobj->tfd); - dobj->tfd = NULL; - if (rv != APR_SUCCESS) { - file_cache_errorcleanup(dobj, r); - return rv; - } + rv = safe_file_rename(conf, dobj->tempfile, dobj->hdrsfile, + r->pool); + if (rv != APR_SUCCESS) { + ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server, + "disk_cache: rename tempfile to varyfile failed: %s -> %s", + dobj->tempfile, dobj->hdrsfile); + apr_file_remove(dobj->tempfile, r->pool); + return rv; + } - rv = safe_file_rename(conf, dobj->tempfile, vfile, r->pool); - if (rv != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: rename tempfile to varyfile failed: " - "%s -> %s", dobj->tempfile, vfile); - file_cache_errorcleanup(dobj, r); - return rv; + dobj->tempfile = apr_pstrcat(r->pool, conf->cache_root, AP_TEMPFILE, NULL); + tmp = regen_key(r->pool, r->headers_in, varray, dobj->name); + dobj->prefix = dobj->hdrsfile; + dobj->hashfile = NULL; + dobj->datafile = data_file(r->pool, conf, dobj, tmp); + dobj->hdrsfile = header_file(r->pool, conf, dobj, tmp); + } } - dobj->tempfile = apr_pstrcat(r->pool, conf->cache_root, AP_TEMPFILE, NULL); - if(dobj->prefix == NULL) { - const char *tmp = regen_key(r->pool, r->headers_in, varray, dobj->name); + rv = apr_file_mktemp(&dobj->hfd, dobj->tempfile, + APR_CREATE | APR_WRITE | APR_BINARY | + APR_BUFFERED | APR_EXCL, r->pool); - dobj->prefix = 
dobj->hdrsfile; - dobj->hdrsfile = header_file(r->pool, conf, dobj, tmp); + if (rv != APR_SUCCESS) { + return rv; } - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: Stored vary header for URL %s", dobj->name); - - return APR_SUCCESS; -} - - -static apr_status_t store_disk_header(disk_cache_object_t *dobj, - request_rec *r, cache_info *info) -{ - disk_cache_format_t format = DISK_FORMAT_VERSION; - struct iovec iov[3]; - int niov; - disk_cache_info_t disk_info; - apr_size_t amt; - apr_status_t rv; + dobj->name = h->cache_obj->key; + disk_info.format = DISK_FORMAT_VERSION; disk_info.date = info->date; disk_info.expire = info->expire; disk_info.entity_version = dobj->disk_info.entity_version++; disk_info.request_time = info->request_time; disk_info.response_time = info->response_time; disk_info.status = info->status; - disk_info.file_size = dobj->initial_size; - - niov = 0; - iov[niov].iov_base = (void*)&format; - iov[niov++].iov_len = sizeof(format); - iov[niov].iov_base = (void*)&disk_info; - iov[niov++].iov_len = sizeof(disk_cache_info_t); disk_info.name_len = strlen(dobj->name); - iov[niov].iov_base = (void*)dobj->name; - iov[niov++].iov_len = disk_info.name_len; - rv = apr_file_writev(dobj->hfd, (const struct iovec *) &iov, niov, &amt); + iov[0].iov_base = (void*)&disk_info; + iov[0].iov_len = sizeof(disk_cache_info_t); + iov[1].iov_base = (void*)dobj->name; + iov[1].iov_len = disk_info.name_len; + + rv = apr_file_writev(dobj->hfd, (const struct iovec *) &iov, 2, &amt); if (rv != APR_SUCCESS) { - file_cache_errorcleanup(dobj, r); return rv; } @@ -1504,7 +932,6 @@ r->err_headers_out); rv = store_table(dobj->hfd, headers_out); if (rv != APR_SUCCESS) { - file_cache_errorcleanup(dobj, r); return rv; } } @@ -1518,394 +945,128 @@ r->server); rv = store_table(dobj->hfd, headers_in); if (rv != APR_SUCCESS) { - file_cache_errorcleanup(dobj, r); return rv; } } - return APR_SUCCESS; -} - - -static apr_status_t store_headers(cache_handle_t *h, request_rec 
*r, - cache_info *info) -{ - disk_cache_conf *conf = ap_get_module_config(r->server->module_config, - &disk_cache_module); - apr_status_t rv; - int flags=0, rewriting; - disk_cache_object_t *dobj = (disk_cache_object_t*) h->cache_obj->vobj; - - - /* This is flaky... we need to manage the cache_info differently */ - h->cache_obj->info = *info; - - if(dobj->hfd) { - ap_log_error(APLOG_MARK, APLOG_INFO, 0, r->server, - "disk_cache: Rewriting headers for URL %s", dobj->name); - - rewriting = TRUE; - } - else { - ap_log_error(APLOG_MARK, APLOG_INFO, 0, r->server, - "disk_cache: Storing new headers for URL %s", dobj->name); - - rewriting = FALSE; - } + apr_file_close(dobj->hfd); /* flush and close */ - if (r->headers_out) { - const char *tmp; - - tmp = apr_table_get(r->headers_out, "Vary"); - - if (tmp) { - rv = store_vary_header(h, conf, r, info, tmp); - if(rv != APR_SUCCESS) { - return rv; - } - } - } - - if(rewriting) { - /* Assume we are just rewriting the header if we have an fd. The - fd might be readonly though, in that case reopen it for writes. - Something equivalent to fdopen would have been handy. 
*/ - - flags = apr_file_flags_get(dobj->hfd); - - if(!(flags & APR_WRITE)) { - apr_file_close(dobj->hfd); - rv = apr_file_open(&dobj->hfd, dobj->hdrsfile, - APR_WRITE | APR_BINARY | APR_BUFFERED, 0, r->pool); - if (rv != APR_SUCCESS) { - dobj->hfd = NULL; - return rv; - } - } - else { - /* We can write here, so let's just move to the right place */ - apr_off_t off=0; - rv = apr_file_seek(dobj->hfd, APR_SET, &off); - if (rv != APR_SUCCESS) { - return rv; - } - } - } - else { - rv = open_new_file(r, dobj->hdrsfile, &(dobj->hfd), conf); - if(rv == CACHE_EEXIST) { - dobj->skipstore = TRUE; - } - else if(rv != APR_SUCCESS) { - return rv; - } - } - - if(dobj->skipstore) { - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: Skipping store for URL %s: Someone else " - "beat us to it", dobj->name); - return APR_SUCCESS; + /* Remove old file with the same name. If remove fails, then + * perhaps we need to create the directory tree where we are + * about to write the new headers file. 
+ */ + rv = apr_file_remove(dobj->hdrsfile, r->pool); + if (rv != APR_SUCCESS) { + mkdir_structure(conf, dobj->hdrsfile, r->pool); } - rv = store_disk_header(dobj, r, info); - if(rv != APR_SUCCESS) { + rv = safe_file_rename(conf, dobj->tempfile, dobj->hdrsfile, r->pool); + if (rv != APR_SUCCESS) { + ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, + "disk_cache: rename tempfile to hdrsfile failed: %s -> %s", + dobj->tempfile, dobj->hdrsfile); + apr_file_remove(dobj->tempfile, r->pool); return rv; } - /* If the body size is unknown, the header file will be rewritten later - so we can't close it */ - if(dobj->initial_size < 0) { - rv = apr_file_flush(dobj->hfd); - } - else { - rv = apr_file_close(dobj->hfd); - dobj->hfd = NULL; - } - if(rv != APR_SUCCESS) { - return rv; - } + dobj->tempfile = apr_pstrcat(r->pool, conf->cache_root, AP_TEMPFILE, NULL); ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, "disk_cache: Stored headers for URL %s", dobj->name); return APR_SUCCESS; } -/** - * Store the body of the response in the disk cache. - * - * As the data is written to the cache, it is also written to - * the filter provided. On network write failure, the full body - * will still be cached. 
- */ -static apr_status_t store_body(cache_handle_t *h, ap_filter_t *f, apr_bucket_brigade *bb) +static apr_status_t store_body(cache_handle_t *h, request_rec *r, + apr_bucket_brigade *bb) { - apr_bucket *e, *b; - request_rec *r = f->r; + apr_bucket *e; apr_status_t rv; disk_cache_object_t *dobj = (disk_cache_object_t *) h->cache_obj->vobj; disk_cache_conf *conf = ap_get_module_config(r->server->module_config, &disk_cache_module); - dobj->store_body_called++; - - if(r->no_cache) { - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: store_body called for URL %s even though" - "no_cache is set", dobj->name); - file_cache_errorcleanup(dobj, r); - ap_remove_output_filter(f); - return ap_pass_brigade(f->next, bb); - } - - if(dobj->initial_size == 0) { - /* Don't waste a body cachefile on a 0 length body */ - return ap_pass_brigade(f->next, bb); - } - - if(!dobj->skipstore && dobj->fd == NULL) { - rv = open_new_file(r, dobj->datafile, &(dobj->fd), conf); - if (rv == CACHE_EEXIST) { - /* Someone else beat us to storing this */ - dobj->skipstore = TRUE; - } - else if (rv != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, - "disk_cache: store_body tried to open cached file " - "for URL %s and this failed", dobj->name); - ap_remove_output_filter(f); - return ap_pass_brigade(f->next, bb); - } - else { - dobj->file_size = 0; + /* We write to a temp file and then atomically rename the file over + * in file_cache_el_final(). + */ + if (!dobj->tfd) { + rv = apr_file_mktemp(&dobj->tfd, dobj->tempfile, + APR_CREATE | APR_WRITE | APR_BINARY | + APR_BUFFERED | APR_EXCL, r->pool); + if (rv != APR_SUCCESS) { + return rv; } + dobj->file_size = 0; } - if(dobj->skipstore) { - /* Someone else beat us to storing this object. 
- * We are too late to take advantage of this storage :( */ - ap_remove_output_filter(f); - return ap_pass_brigade(f->next, bb); - } - - /* set up our temporary brigade */ - if (!dobj->tmpbb) { - dobj->tmpbb = apr_brigade_create(r->pool, r->connection->bucket_alloc); - } - else { - apr_brigade_cleanup(dobj->tmpbb); - } - - /* start caching the brigade */ - ap_log_error(APLOG_MARK, APLOG_INFO, 0, r->server, - "disk_cache: Caching body for URL %s", dobj->name); - - e = APR_BRIGADE_FIRST(bb); - while (e != APR_BRIGADE_SENTINEL(bb)) { - + for (e = APR_BRIGADE_FIRST(bb); + e != APR_BRIGADE_SENTINEL(bb); + e = APR_BUCKET_NEXT(e)) + { const char *str; apr_size_t length, written; - apr_off_t offset = 0; - - /* try write all data buckets to the cache, except for metadata buckets */ - if(!APR_BUCKET_IS_METADATA(e)) { - - /* read in a bucket fragment */ - rv = apr_bucket_read(e, &str, &length, APR_BLOCK_READ); - if (rv != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: Error when reading bucket for URL %s, aborting request", - dobj->name); - file_cache_errorcleanup(dobj, r); - /* not being able to read the bucket is fatal, - * return this up the filter stack - */ - return rv; - } - - /* try write the bucket fragment to the cache */ - apr_file_seek(dobj->fd, APR_END, &offset); - rv = apr_file_write_full(dobj->fd, str, length, &written); - offset = - (apr_off_t)written; - apr_file_seek(dobj->fd, APR_END, &offset); - - /* if the cache write was successful, swap the original bucket - * with a file bucket pointing to the same data in the cache. - * - * This is done because: - * - * - The ap_core_output_filter can take advantage of its ability - * to do non blocking writes on file buckets. - * - * - We are prevented from the need to read the original bucket - * a second time inside ap_core_output_filter, which could be - * expensive or memory consuming. 
- * - * - The cache, in theory, should be faster than the backend, - * otherwise there would be little point in caching in the first - * place. - */ - if (APR_SUCCESS == rv) { - - /* remove and destroy the original bucket from the brigade */ - b = e; - e = APR_BUCKET_NEXT(e); - APR_BUCKET_REMOVE(b); - apr_bucket_destroy(b); - - /* Is our network connection still alive? - * If not, we must continue caching the file, so keep looping. - * We will return the error at the end when caching is done. - */ - if (APR_SUCCESS == dobj->frv) { - - /* insert a file bucket pointing to the cache into out temporary brigade */ - if (diskcache_brigade_insert(dobj->tmpbb, dobj->fd, dobj->file_size, - written, - dobj->updtimeout, r->pool) == NULL) { - return APR_ENOMEM; - } - - /* TODO: If we are not able to guarantee that - * apr_core_output_filter() will not block on our - * file buckets, then the check for whether the - * socket will block must go here. - */ - - /* send our new brigade to the network */ - dobj->frv = ap_pass_brigade(f->next, dobj->tmpbb); - - } - - /* update the write counter, and sanity check the size */ - dobj->file_size += written; - if (dobj->file_size > conf->maxfs) { - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: URL %s failed the size check " - "(%" APR_OFF_T_FMT " > %" APR_OFF_T_FMT ")", - dobj->name, dobj->file_size, conf->maxfs); - file_cache_errorcleanup(dobj, r); - ap_remove_output_filter(f); - return ap_pass_brigade(f->next, bb); - } - - } - - /* - * If the cache write failed, continue to loop and pass data to - * the network. Remove the cache filter from the output filters - * so we don't inadvertently try to cache write again, leaving - * a hole in the cached data. 
- */ - else { - - /* mark the write as having failed */ - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "disk_cache: Error when writing cache file for " - "URL %s", dobj->name); - - /* step away gracefully */ - file_cache_errorcleanup(dobj, r); - ap_remove_output_filter(f); - - /* write the rest of the brigade to the network, and leave */ - return ap_pass_brigade(f->next, bb); - - } - - + rv = apr_bucket_read(e, &str, &length, APR_BLOCK_READ); + if (rv != APR_SUCCESS) { + ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, + "disk_cache: Error when reading bucket for URL %s", + h->cache_obj->key); + /* Remove the intermediate cache file and return non-APR_SUCCESS */ + file_cache_errorcleanup(dobj, r); + return rv; } - - /* write metadata buckets direct to the output filter */ - else { - - /* move the metadata bucket to our temporary brigade */ - b = e; - e = APR_BUCKET_NEXT(e); - APR_BUCKET_REMOVE(b); - APR_BRIGADE_INSERT_HEAD(dobj->tmpbb, b); - - /* Is our network connection still alive? - * If not, we must continue looping, but stop writing to the network. - */ - if (APR_SUCCESS == dobj->frv) { - - /* TODO: If we are not able to guarantee that - * apr_core_output_filter() will not block on our - * file buckets, then the check for whether the - * socket will block must go here. 
- */ - - /* send our new brigade to the network */ - dobj->frv = ap_pass_brigade(f->next, dobj->tmpbb); - - } - + rv = apr_file_write_full(dobj->tfd, str, length, &written); + if (rv != APR_SUCCESS) { + ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server, + "disk_cache: Error when writing cache file for URL %s", + h->cache_obj->key); + /* Remove the intermediate cache file and return non-APR_SUCCESS */ + file_cache_errorcleanup(dobj, r); + return rv; + } + dobj->file_size += written; + if (dobj->file_size > conf->maxfs) { + ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, + "disk_cache: URL %s failed the size check " + "(%" APR_OFF_T_FMT ">%" APR_OFF_T_FMT ")", + h->cache_obj->key, dobj->file_size, conf->maxfs); + /* Remove the intermediate cache file and return non-APR_SUCCESS */ + file_cache_errorcleanup(dobj, r); + return APR_EGENERAL; } - - apr_brigade_cleanup(dobj->tmpbb); - - } - - - /* Drop out here if this wasn't the end */ - if (!APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb))) { - return APR_SUCCESS; - } - - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: Done caching URL %s, len %" APR_OFF_T_FMT, - dobj->name, dobj->file_size); - - if (APR_SUCCESS != dobj->frv) { - ap_log_error(APLOG_MARK, APLOG_ERR, dobj->frv, r->server, - "disk_cache: An error occurred while writing to the " - "network for URL %s.", - h->cache_obj->key); } - if (dobj->file_size < conf->minfs) { - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: URL %s failed the size check " - "(%" APR_OFF_T_FMT "<%" APR_OFF_T_FMT ")", - h->cache_obj->key, dobj->file_size, conf->minfs); - /* Remove the intermediate cache file and return filter status */ - file_cache_errorcleanup(dobj, r); - return dobj->frv; - } - if (dobj->initial_size < 0) { - /* Update header information now that we know the size */ - dobj->initial_size = dobj->file_size; - rv = store_headers(h, r, &(h->cache_obj->info)); - if (rv != APR_SUCCESS) { + /* Was this the final bucket? 
If yes, close the temp file and perform + * sanity checks. + */ + if (APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb))) { + if (r->connection->aborted || r->no_cache) { + ap_log_error(APLOG_MARK, APLOG_INFO, 0, r->server, + "disk_cache: Discarding body for URL %s " + "because connection has been aborted.", + h->cache_obj->key); + /* Remove the intermediate cache file and return non-APR_SUCCESS */ file_cache_errorcleanup(dobj, r); - return dobj->frv; + return APR_EGENERAL; + } + if (dobj->file_size < conf->minfs) { + ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, + "disk_cache: URL %s failed the size check " + "(%" APR_OFF_T_FMT "<%" APR_OFF_T_FMT ")", + h->cache_obj->key, dobj->file_size, conf->minfs); + /* Remove the intermediate cache file and return non-APR_SUCCESS */ + file_cache_errorcleanup(dobj, r); + return APR_EGENERAL; } - } - else if (dobj->initial_size != dobj->file_size) { - ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: URL %s - body size mismatch: suggested %" - APR_OFF_T_FMT " bodysize %" APR_OFF_T_FMT ")", - dobj->name, dobj->initial_size, dobj->file_size); - file_cache_errorcleanup(dobj, r); - return dobj->frv; - } - /* All checks were fine, close output file */ - rv = apr_file_close(dobj->fd); - dobj->fd = NULL; - if (rv != APR_SUCCESS) { + /* All checks were fine. 
Move tempfile to final destination */ + /* Link to the perm file, and close the descriptor */ + file_cache_el_final(dobj, r); ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, - "disk_cache: While trying to close the cache file for " - "URL %s, the close failed", dobj->name); - file_cache_errorcleanup(dobj, r); - return dobj->frv; + "disk_cache: Body for URL %s cached.", dobj->name); } - return dobj->frv; + return APR_SUCCESS; } - static void *create_config(apr_pool_t *p, server_rec *s) { disk_cache_conf *conf = apr_pcalloc(p, sizeof(disk_cache_conf)); @@ -1915,7 +1076,6 @@ conf->dirlength = DEFAULT_DIRLENGTH; conf->maxfs = DEFAULT_MAX_FILE_SIZE; conf->minfs = DEFAULT_MIN_FILE_SIZE; - conf->updtimeout = DEFAULT_UPDATE_TIMEOUT; conf->cache_root = NULL; conf->cache_root_len = 0; @@ -1999,25 +1159,6 @@ return NULL; } - -static const char -*set_cache_updtimeout(cmd_parms *parms, void *in_struct_ptr, const char *arg) -{ - apr_int64_t val; - disk_cache_conf *conf = ap_get_module_config(parms->server->module_config, - &disk_cache_module); - - if (apr_strtoff(&val, arg, NULL, 0) != APR_SUCCESS || val < 0) - { - return "CacheUpdateTimeout argument must be a non-negative integer representing the timeout in milliseconds for cache update operations"; - } - - conf->updtimeout = val * 1000; - - return NULL; -} - - static const command_rec disk_cache_cmds[] = { AP_INIT_TAKE1("CacheRoot", set_cache_root, NULL, RSRC_CONF, @@ -2030,8 +1171,6 @@ "The minimum file size to cache a document"), AP_INIT_TAKE1("CacheMaxFileSize", set_cache_maxfs, NULL, RSRC_CONF, "The maximum file size to cache a document"), - AP_INIT_TAKE1("CacheUpdateTimeout", set_cache_updtimeout, NULL, RSRC_CONF, - "Timeout in ms for cache updates"), {NULL} }; Modified: httpd/httpd/trunk/modules/cache/mod_disk_cache.h URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cache/mod_disk_cache.h?view=diff&rev=502365&r1=502364&r2=502365 ============================================================================== 
--- httpd/httpd/trunk/modules/cache/mod_disk_cache.h (original) +++ httpd/httpd/trunk/modules/cache/mod_disk_cache.h Thu Feb 1 13:28:34 2007 @@ -22,19 +22,12 @@ */ #define VARY_FORMAT_VERSION 3 -#define DISK_FORMAT_VERSION_OLD 4 -#define DISK_FORMAT_VERSION 5 +#define DISK_FORMAT_VERSION 4 #define CACHE_HEADER_SUFFIX ".header" #define CACHE_DATA_SUFFIX ".data" #define CACHE_VDIR_SUFFIX ".vary" -#define CACHE_BUF_SIZE 65536 - -/* How long to sleep before retrying while looping */ -#define CACHE_LOOP_SLEEP 200000 - - #define AP_TEMPFILE_PREFIX "/" #define AP_TEMPFILE_BASE "aptmp" #define AP_TEMPFILE_SUFFIX "XXXXXX" @@ -42,10 +35,9 @@ #define AP_TEMPFILE_NAMELEN strlen(AP_TEMPFILE_BASE AP_TEMPFILE_SUFFIX) #define AP_TEMPFILE AP_TEMPFILE_PREFIX AP_TEMPFILE_BASE AP_TEMPFILE_SUFFIX -/* Indicates the format of the header struct stored on-disk. */ -typedef apr_uint32_t disk_cache_format_t; - typedef struct { + /* Indicates the format of the header struct stored on-disk. */ + apr_uint32_t format; /* The HTTP status code returned for this response. */ int status; /* The size of the entity name that follows. */ @@ -57,9 +49,6 @@ apr_time_t expire; apr_time_t request_time; apr_time_t response_time; - /* The body size forced to 64bit to not break when people go from non-LFS - * to LFS builds */ - apr_int64_t file_size; } disk_cache_info_t; /* @@ -75,19 +64,12 @@ const char *hdrsfile; /* name of file where the hdrs will go */ const char *hashfile; /* Computed hash key for this URI */ const char *name; /* Requested URI without vary bits - suitable for mortals. */ + const char *key; /* On-disk prefix; URI with Vary bits (if present) */ apr_file_t *fd; /* data file */ apr_file_t *hfd; /* headers file */ apr_file_t *tfd; /* temporary file for data */ apr_off_t file_size; /* File size of the cached data file */ - apr_off_t initial_size; /* Initial file size reported by caller */ disk_cache_info_t disk_info; /* Header information. 
*/ - - apr_interval_time_t updtimeout; /* Cache update timeout */ - - int skipstore; /* Set if we should skip storing stuff */ - int store_body_called; /* Number of times store_body() has executed */ - apr_bucket_brigade *tmpbb; /* Temporary bucket brigade. */ - apr_status_t frv; /* Last known status of network write */ } disk_cache_object_t; @@ -100,7 +82,6 @@ #define DEFAULT_DIRLENGTH 2 #define DEFAULT_MIN_FILE_SIZE 1 #define DEFAULT_MAX_FILE_SIZE 1000000 -#define DEFAULT_UPDATE_TIMEOUT apr_time_from_sec(10) typedef struct { const char* cache_root; @@ -109,26 +90,6 @@ int dirlength; /* Length of subdirectory names */ apr_off_t minfs; /* minimum file size for cached files */ apr_off_t maxfs; /* maximum file size for cached files */ - apr_interval_time_t updtimeout; /* Cache update timeout */ } disk_cache_conf; - -#define CACHE_ENODATA (APR_OS_START_USERERR+1) -#define CACHE_EDECLINED (APR_OS_START_USERERR+2) -#define CACHE_EEXIST (APR_OS_START_USERERR+3) - - -typedef struct diskcache_bucket_data diskcache_bucket_data; -struct diskcache_bucket_data { - /* Number of buckets using this memory */ - apr_bucket_refcount refcount; - apr_file_t *fd; - /* The pool into which any needed structures should - * be created while reading from this file bucket */ - apr_pool_t *readpool; - /* Cache update timeout */ - apr_interval_time_t updtimeout; - -}; - #endif /*MOD_DISK_CACHE_H*/ Modified: httpd/httpd/trunk/modules/cache/mod_mem_cache.c URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cache/mod_mem_cache.c?view=diff&rev=502365&r1=502364&r2=502365 ============================================================================== --- httpd/httpd/trunk/modules/cache/mod_mem_cache.c (original) +++ httpd/httpd/trunk/modules/cache/mod_mem_cache.c Thu Feb 1 13:28:34 2007 @@ -71,7 +71,6 @@ long total_refs; /**< total number of references this entry has had */ apr_uint32_t pos; /**< the position of this entry in the cache */ - apr_status_t frv; /* last known status of writing 
to the output filter */ } mem_cache_object_t; @@ -102,7 +101,7 @@ /* Forward declarations */ static int remove_entity(cache_handle_t *h); static apr_status_t store_headers(cache_handle_t *h, request_rec *r, cache_info *i); -static apr_status_t store_body(cache_handle_t *h, ap_filter_t *f, apr_bucket_brigade *b); +static apr_status_t store_body(cache_handle_t *h, request_rec *r, apr_bucket_brigade *b); static apr_status_t recall_headers(cache_handle_t *h, request_rec *r); static apr_status_t recall_body(cache_handle_t *h, apr_pool_t *p, apr_bucket_brigade *bb); @@ -621,10 +620,9 @@ return APR_SUCCESS; } -static apr_status_t store_body(cache_handle_t *h, ap_filter_t *f, apr_bucket_brigade *b) +static apr_status_t store_body(cache_handle_t *h, request_rec *r, apr_bucket_brigade *b) { apr_status_t rv; - request_rec *r = f->r; cache_object_t *obj = h->cache_obj; cache_object_t *tobj = NULL; mem_cache_object_t *mobj = (mem_cache_object_t*) obj->vobj; @@ -669,9 +667,7 @@ rv = apr_file_open(&tmpfile, name, mobj->flags, APR_OS_DEFAULT, r->pool); if (rv != APR_SUCCESS) { - ap_log_error(APLOG_MARK, APLOG_ERR, rv, r->server, - "mem_cache: Failed to open file '%s' while attempting to cache the file descriptor.", name); - return ap_pass_brigade(f->next, b); + return rv; } apr_file_inherit_unset(tmpfile); apr_os_file_get(&(mobj->fd), tmpfile); @@ -680,7 +676,7 @@ ap_log_error(APLOG_MARK, APLOG_INFO, 0, r->server, "mem_cache: Cached file: %s with key: %s", name, obj->key); obj->complete = 1; - return ap_pass_brigade(f->next, b); + return APR_SUCCESS; } /* Content not suitable for fd caching. Cache in-memory instead. 
*/ @@ -694,12 +690,7 @@ if (mobj->m == NULL) { mobj->m = malloc(mobj->m_len); if (mobj->m == NULL) { - /* we didn't have space to cache it, fall back gracefully */ - cleanup_cache_object(obj); - ap_remove_output_filter(f); - ap_log_error(APLOG_MARK, APLOG_ERR, APR_ENOMEM, r->server, - "mem_cache: Could not store body - not enough memory."); - return ap_pass_brigade(f->next, b); + return APR_ENOMEM; } obj->count = 0; } @@ -720,12 +711,7 @@ * buffer */ mobj->m = realloc(mobj->m, obj->count); if (!mobj->m) { - /* we didn't have space to cache it, fall back gracefully */ - cleanup_cache_object(obj); - ap_remove_output_filter(f); - ap_log_error(APLOG_MARK, APLOG_ERR, APR_ENOMEM, r->server, - "mem_cache: Could not store next bit of body - not enough memory."); - return ap_pass_brigade(f->next, b); + return APR_ENOMEM; } /* Now comes the crufty part... there is no way to tell the @@ -781,36 +767,26 @@ } rv = apr_bucket_read(e, &s, &len, eblock); if (rv != APR_SUCCESS) { - cleanup_cache_object(obj); - /* not being able to read the bucket is fatal, - * return this up the filter stack - */ return rv; } if (len) { /* Check for buffer overflow */ - if ((obj->count + len) > mobj->m_len) { - /* we didn't have space to cache it, fall back gracefully */ - cleanup_cache_object(obj); - ap_remove_output_filter(f); - ap_log_error(APLOG_MARK, APLOG_ERR, APR_ENOMEM, r->server, - "mem_cache: Could not store body - buffer overflow."); - return ap_pass_brigade(f->next, b); - } - else { + if ((obj->count + len) > mobj->m_len) { + return APR_ENOMEM; + } + else { memcpy(cur, s, len); cur+=len; obj->count+=len; - } + } } /* This should not fail, but if it does, we are in BIG trouble * cause we just stomped all over the heap. */ AP_DEBUG_ASSERT(obj->count <= mobj->m_len); } - return ap_pass_brigade(f->next, b); + return APR_SUCCESS; } - /** * Configuration and start-up */