Well, theoretically, no purpose at all.
What I *really* want to do is regenerate all the views - and I discovered that if you compact a view that doesn't exist, it'll create it.

"Why", you may ask, "Why would you do that, and just not query the (missing) view and have it get recreated?"
Don't really have an easy answer to that - i guess it just fits pretty easily into the maintenance scripts we have sitting around (nice complicated scripts that compact the databases and views, check for the statuses, etc., etc., which retrofit really easily for this purpose)
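
For the record, it's just the ordinary view-compaction call - roughly like this (untested sketch; the db name, design doc name and credentials are placeholders):

DB=http://admin:secret@localhost:5984/mydb

# ask CouchDB to compact the index for _design/stats; if the index
# files are missing, it gets rebuilt as a side effect
curl -s -X POST -H "Content-Type: application/json" "$DB/_compact/stats"

# check the index status afterwards (compact_running, updater_running, ...)
curl -s "$DB/_design/stats/_info"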

cheers


Mahesh Paolini-Subramanya | CTO | mahesh@aptela.com | 703.386.1500 Ext. 9100
2250 Corporate Park Drive | Suite 150 | Herndon, VA | www.aptela.com


On Jan 2, 2011, at 7:25 PM, Adam Kocoloski wrote:

Hmm, so you deleted all the views, then compacted them?  What purpose does that serve?

Adam

On Jan 2, 2011, at 7:03 PM, Mahesh Paolini-Subramanya wrote:

Updater?  Mildly confused....  (sorry 'bout that - terminology incomprehension on my part...)

cheers

On Jan 2, 2011, at 5:10 PM, Adam Kocoloski wrote:

Mahesh, do you by chance have an updater running for the view group when the view compaction completes?

On Jan 2, 2011, at 4:53 PM, Adam Kocoloski wrote:

These reports do sound suspiciously like the problems described in COUCHDB-901 that caused us to rewrite the OS process management in BigCouch.  Mahesh, yours is the first report I've heard that specifically implicated view compaction.  That's significant.  I didn't understand what you meant by "clobber all the views", though.

I'll poke around and see if I can devise a scenario where the OS process would not be released after view compaction.  In my experience the internal ets tables maintained in CouchDB to track OS processes got out-of-sync - one table would report 1000 available processes, while another table would claim only 2.  Best,

Adam

On Jan 2, 2011, at 12:25 PM, Mahesh Paolini-Subramanya wrote:

Sorry - apologies - hit Send by accident :-(

Wow - weirdness here.  I'm running 1.0.1, and have noticed the same thing happening.  The specific (and, for me at least, very reproducible) scenario is as follows:
- I've got around a thousand dbs, each with around 50K documents and about 5 (JavaScript) design documents.
- I clobber all the views, and remotely (LWP::UserAgent) compact each view ($db/_compact/$design).
- The compaction of the design document results in the views being regenerated exactly (this is good).
- The compaction also leaves a couchjs process just lying around (this would be bad).
- After a few thousand couchjs processes, CouchDB just chokes.
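
(For reference, a rough curl equivalent of that loop - the real scripts use LWP::UserAgent, and the db/design-doc names below are made up:)

COUCH=http://admin:secret@localhost:5984

for db in db_0001 db_0002 db_0003; do
  for ddoc in stats calls billing; do
    # views already clobbered on disk; the "compaction" call makes
    # CouchDB rebuild the index for this design doc...
    curl -s -X POST -H "Content-Type: application/json" \
      "$COUCH/$db/_compact/$ddoc"
  done
done

# ...but each pass can leave a couchjs process behind
ps ax | grep -c '[c]ouchjs'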

Any ideas? Is this related in any way to the process issues in BigCouch (Jira COUCHDB-901)?

cheers




On Mon, Sep 7, 2009 at 12:18 AM, Paul Davis <paul.joseph.davis@gmail.com> wrote:

Another random thought: after your clients have been running a bit, an easy way to check what type of stuff is going on with sockets is:

$ netstat -ap tcp

If your apps have periods of high turnover, check the sockets to see
if you have lots of them in TIME_WAIT state.
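
A quick way to count them (netstat -an works on both BSD-ish and Linux boxes):

$ netstat -an | grep -c TIME_WAIT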

Paul

On Sun, Sep 6, 2009 at 11:39 PM, Arun Thampi <arun.thampi@gmail.com> wrote:
Thanks Paul for the quick response. I'll poke around further to see if there is anything amiss and will keep the list posted.
Cheers,
Arun

On Mon, Sep 7, 2009 at 11:24 AM, Paul Davis <paul.joseph.davis@gmail.com> wrote:

The algorithm for CouchDB's use of OS processes is pretty simple:

"Give me an os process plz"

If none are available, then it creates a new one. A large number of OS processes would suggest to me that something isn't releasing OS processes correctly. My first guess would be to look at _list and _show, as those both require an OS process but are also at the mercy of client connection semantics. I might be reaching a bit, but I wonder if CouchRest is failing to close sockets properly. It's not much more than random finger-pointing, but there have been other errors recently that also suggest CouchRest isn't doing proper socket handling.
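
(If it helps, an easy way to see whether processes are being released is to just watch the couchjs count over time - e.g.:)

while true; do
  printf '%s couchjs: %s\n' "$(date '+%H:%M:%S')" "$(ps ax | grep -c '[c]ouchjs')"
  sleep 10
done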

The other possibility is there's something weird with killing
processes, but couchspawnkillable should've fixed that.

HTH,
Paul Davis

On Sun, Sep 6, 2009 at 11:18 PM, Arun Thampi <arun.thampi@gmail.com> wrote:
Hi guys - Been running CouchDB trunk (r804727) in production for about 3 weeks now, and one thing I've noticed is that the number of couchjs processes (/usr/local/lib/couchdb/bin/couchjs /usr/local/share/couchdb/server/main.js) keeps increasing to a large number. Is this normal? Does CouchDB manage these processes and eventually kill inactive couchjs processes?
Just FYI, I'm using CouchRest as part of a Rails app to query two different views in my db.

Thanks in advance.

Cheers,
Arun

--
It's better to be a pirate than join the Navy - Steve Jobs
http://mclov.in










--
Mahesh Paolini-Subramanya | CTO | mahesh@aptela.com | 703.386.1500 Ext. 9100
13454 Sunrise Valley Drive | Suite 500 | Herndon, VA