couchdb-dev mailing list archives

From: Nolan Lawson <no...@nolanlawson.com>
Subject: Re: internal_server_error : No DB shards could be opened.
Date: Mon, 06 Feb 2017 18:23:27 GMT
I’ve also seen “No DB shards could be opened” while attempting to get the pouchdb-find
test suite running against CouchDB 2.0. According to Garren, it seems to be fixed on the
CouchDB master branch?

https://github.com/pouchdb/pouchdb/pull/6201#issuecomment-277540242

Cheers,
Nolan

> On Feb 6, 2017, at 9:56 AM, Garth Gutenberg <garth.gutenberg@gmail.com> wrote:
> 
> Hey guys.  I'm having a problem that I hope someone can shed some light on.
> 
> I have a 3 node cluster.  I just imported 420 DBs into node 0 (about 20gb
> on disk) via bulk insert and triggered view indexes along the way.  Nodes 1
> and 2 were happily replicating (or whatever the cluster term for that is
> now), and all was good.  Import completed, all the boxes were dormant.
> However when I load Fauxton on node 1, I get the following message beside
> each DB:
> 
> This database failed to load.
> 
> 
> Each DB produces the following log entries when it's accessed on this
> node:
> 
> [error] 2017-02-06T17:35:32.309294Z
> couchdb@couchdb1.aries.aws.weeverapps.com.aries.aws.weeverapps.com
> <0.1206.9> 0ef0b93a1b req_err(1995524407) internal_server_error : No DB
> shards could be opened.
>     [<<"fabric_util:get_shard/4 L180">>,
>      <<"fabric:get_security/2 L146">>,
>      <<"chttpd_auth_request:db_authorization_check/1 L87">>,
>      <<"chttpd_auth_request:authorize_request/1 L19">>,
>      <<"chttpd:process_request/1 L291">>,
>      <<"chttpd:handle_request_int/1 L229">>,
>      <<"mochiweb_http:headers/6 L122">>,
>      <<"proc_lib:init_p_do_apply/3 L237">>]
> [notice] 2017-02-06T17:35:32.309544Z
> couchdb@couchdb1.aries.aws.weeverapps.com.aries.aws.weeverapps.com
> <0.1206.9> 0ef0b93a1b couchdb1.aries.aws.weeverapps.com:5984 10.150.0.42
> undefined GET /app_18950%2Fconfig 500 ok 1
> 
> It looks like it's appending the search domain to the FQDN for some reason,
> but only on this node.  Also, if I query for membership I get:
> 
> {"all_nodes":["couchdb@couchdb0.aries.aws.weeverapps.com","
> couchdb@couchdb2.aries.aws.weeverapps.com"],"cluster_nodes":["
> couchdb@couchdb0.aries.aws.weeverapps.com","
> couchdb@couchdb1.aries.aws.weeverapps.com","
> couchdb@couchdb2.aries.aws.weeverapps.com"]}
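> 
> (That output is from the _membership endpoint; the check, assuming the
> default port 5984 and adding admin credentials if the node requires
> them, is roughly:
> 
>     curl -s http://couchdb1.aries.aws.weeverapps.com:5984/_membership
> 
> couchdb@couchdb1 appearing in "cluster_nodes" but not in "all_nodes" is
> consistent with node 1 not currently being connected to the other
> nodes.)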
> 
> Both nodes 0 and 2 appear to be operating fine.  Thankfully this is still
> in a lab environment, but we'd really like to get this into production, so
> we need to understand and solve this problem ASAP.
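> 
> For what it's worth, the node name comes from the -name entry in
> etc/vm.args. A fully qualified entry (sketch below, assuming a default
> 2.0 layout) is taken literally, whereas a bare entry like "-name couchdb"
> makes Erlang derive the host part from the local hostname/resolver, which
> is where a search domain could get appended:
> 
>     # etc/vm.args on node 1 (fully qualified form)
>     -name couchdb@couchdb1.aries.aws.weeverapps.com
> 
> So comparing vm.args on node 1 against nodes 0 and 2, and checking
> "hostname -f" and the resolv.conf search entry on node 1, seems like the
> place to start.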

