From: Zachary Zolton
Date: Fri, 4 Dec 2009 10:08:25 -0600
Subject: Re: Get Document Size
To: user@couchdb.apache.org

This seems to work for attachments:

function(doc) {
  if (doc._attachments) {
    var attachmentSize = 0;
    // "for each" iterates values (SpiderMonkey extension, available in CouchDB's view server)
    for each (var stub in doc._attachments) {
      attachmentSize += stub.length;
    }
    emit(null, attachmentSize);
  }
}

On Fri, Dec 4, 2009 at 9:46 AM, Sebastian Cohnen wrote:
> this still does not include the document's attachments...
>
> On 04.12.2009, at 16:33, Zachary Zolton wrote:
>
>> You mean something like this map function?
>>
>> function(doc) {
>>   function toJson(doc) {
>>     // code...
>>   }
>>
>>   emit(doc.someAttr, toJson(doc).length);
>> }
>>
>>
>> On Fri, Dec 4, 2009 at 3:46 AM, Andreas Pavlogiannis wrote:
>>> That might work for a single document, but I'd rather calculate the size
>>> inside a map function so that I can then use a reduce to sum up.
>>>
>>> 2009/12/4 Sebastian Cohnen
>>>
>>>> what about doing a HEAD request and look for Content-Length?
>>>>
>>>> $ curl -I http://localhost:5984/test/DOCUMENT-ID
>>>> HTTP/1.1 200 OK
>>>> Server: CouchDB/0.11.0b60a6b3e7-git (Erlang OTP/R13B)
>>>> Etag: "1-1a6c2a80b8615b2399ff5ba66d18534d"
>>>> Date: Fri, 04 Dec 2009 09:38:34 GMT
>>>> Content-Type: text/plain;charset=utf-8
>>>> Content-Length: 169
>>>> Cache-Control: must-revalidate
>>>>
>>>>
>>>> On 04.12.2009, at 10:17, Andreas Pavlogiannis wrote:
>>>>
>>>>> Hello,
>>>>>
>>>>> Is there a uniform way to obtain a document's size (attachments' size
>>>>> included)?
>>>>>
>>>>> Thanks, Andreas
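
[Archive note] The two ideas in this thread (approximate the JSON body size in a map function, add attachment stub lengths, then use a reduce to sum) can be combined into one sketch. This is an illustration, not code from the thread: the function name `sizeMap` is made up, and `emit` is passed as a parameter here only so the sketch can run outside CouchDB's view server, where `emit` is a global. In a real design document you would pair this map with the built-in `_sum` reduce.

```javascript
// Hypothetical map function: emits a rough per-document size in bytes,
// counting the serialized JSON body plus each attachment stub's length.
// In CouchDB, `emit` is provided globally; it is a parameter here only
// to make the sketch testable in plain JavaScript.
function sizeMap(doc, emit) {
  // Rough body size: length of the document serialized as JSON.
  var size = JSON.stringify(doc).length;
  if (doc._attachments) {
    // Attachment stubs carry a `length` field with the attachment's
    // size in bytes, even when the content itself is not inlined.
    for (var name in doc._attachments) {
      size += doc._attachments[name].length;
    }
  }
  emit(null, size);
}
```

With `reduce: "_sum"` on such a view, querying it with `group=false` would return one total over all documents, which is what the original poster asked for. Note this is an approximation: the JSON string length is not the exact on-disk size.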