couchdb-user mailing list archives

From Peyton Vaughn <pvau...@6fusion.com>
Subject Re: Sharding question for clustered CouchDB 2.0
Date Fri, 22 Jul 2016 22:05:07 GMT
Hi, thanks for taking the time to reply.

Actually, there are no duplicate documents, nor are there any deletions.
This is the result of posting roughly a million small documents to the
cluster.
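
For context, the load was just many small batched POSTs against the
standard _bulk_docs endpoint; something along these lines (the db name,
doc shape, and batch size here are illustrative, not my exact script):

  curl -X POST http://localhost:5984/testdb/_bulk_docs \
       -H 'Content-Type: application/json' \
       -d '{"docs":[{"_id":"doc-000001","v":1},{"_id":"doc-000002","v":2}]}'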

I'll run compaction when I have a chance and see what impact that has
(rough commands below, plus a shard-size check after the quoted thread).
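
If it's useful to anyone searching the archives later: my understanding
is that compaction can be triggered per database through the clustered
port, and its progress shows up in _active_tasks (db name illustrative):

  curl -X POST http://localhost:5984/testdb/_compact \
       -H 'Content-Type: application/json'
  curl http://localhost:5984/_active_tasks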



On Fri, Jul 22, 2016 at 5:47 PM, Robert Newson <rnewson@apache.org> wrote:

> Are you updating one doc over and over? That's my inference. Also you'll
> need to run compaction on all shards then look at the distribution
> afterward.
>
> Sent from my iPhone
>
> > On 22 Jul 2016, at 21:02, Peyton Vaughn <pvaughn@6fusion.com> wrote:
> >
> > Hi,
> >
> > I've been working through getting a Couch cluster set up in Kubernetes.
> > Finally got to the point of testing it and am a bit surprised by the
> > distribution of data I see amongst the shards (this is for 2 nodes on 2
> > separate hosts):
> >
> > node1:
> > ~>du -hs *
> >
> > 6.7G    shards/00000000-1fffffff
> > 855M    shards/20000000-3fffffff
> > 859M    shards/40000000-5fffffff
> > 856M    shards/60000000-7fffffff
> > 859M    shards/80000000-9fffffff
> > 858M    shards/a0000000-bfffffff
> > 6.5G    shards/c0000000-dfffffff
> > 851M    shards/e0000000-ffffffff
> >
> > node2:
> > ~>du -hs *
> > 853M    00000000-1fffffff
> > 855M    20000000-3fffffff
> > 859M    40000000-5fffffff
> > 856M    60000000-7fffffff
> > 859M    80000000-9fffffff
> > 858M    a0000000-bfffffff
> > 853M    c0000000-dfffffff
> > 851M    e0000000-ffffffff
> >
> > Two of the shards really stand out in terms of disk usage... so I was
> > wondering if this is expected behavior, or have I managed to misconfigure
> > something?
> >
> >
> > I really appreciate any insight; I'm trying to understand 2.0 as best I
> > can.
> > Thanks!
> > Peyton
>
>
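
Follow-up on the shard sizes quoted above, for the archives: to tell
whether those two oversized shard files are mostly garbage that compaction
will reclaim (rather than genuinely more documents hashing into those
ranges), I'm planning to compare doc_count and data_size vs. disk_size per
shard through the node-local port. A rough sketch, assuming the default
5986 backdoor port (the db name and timestamp suffix below are made up;
_all_dbs on 5986 lists the real shard names):

  curl http://localhost:5986/_all_dbs
  curl http://localhost:5986/shards%2F00000000-1fffffff%2Ftestdb.1469224507

If data_size is similar across shards but disk_size is inflated on the two
big ones, compaction should level things out; if doc_count itself is
skewed, that would point at the document ID hashing instead.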
