incubator-couchdb-user mailing list archives

From Michael Bykov <m.by...@gmail.com>
Subject Re: replication: lexical error: invalid char in json text
Date Sat, 06 Oct 2012 09:51:03 GMT
2012/10/5 Dave Cottlehuber <dch@jsonified.com>:
> On 5 October 2012 17:51, Dave Cottlehuber <dch@jsonified.com> wrote:
>> On 5 October 2012 16:33, Michael Bykov <m.bykov@gmail.com> wrote:
>>> Hi,
>>>
>>> Local CouchDB works great, but replication does not.
>
>>> Replication worked perfectly several days ago. You can see the part
>>> that was replicated correctly here:
>>> http://diglossa.ru:5984/_utils/database.html?greek
>>>
>>> Please advise: how can I find the invalid record in the local DB?
>>>
>>> I got this in the log:
>>>
>>>
>>> =CRASH REPORT==== 5-Oct-2012::18:16:23 ===
>>>   crasher:
>>>     initial call: couch_replicator:init/1
>>>     pid: <0.720.0>
>>>     registered_name: []
>>>     exception exit: {worker_died,<0.727.0>,
>>>                         {{nocatch,
>>>                              {invalid_json,
>>>                                  {{error,
>>>                                       {1,
>>>                                        "lexical error: invalid char in
>>> json text.\n"}},
>>>                                   <<"<html>\r\n<head><title>413
>>> Request Entity Too Large</title></head>\r\n<body
>>> bgcolor=\"white\">\r\n<center><h1>413 Request Entity Too
>>> Large</h1></center>\r\n<hr><center>nginx/1.1.9</center>\r\n</body>\r\n</html>\r\n">>}}},
>>>                          [{ejson,nif_decode,1,[{file,"ejson.erl"},{line,57}]},
>>>                           {ejson,decode,1,[{file,"ejson.erl"},{line,38}]},
>>>                           {couch_replicator_httpc,process_response,5,
>>>                               [{file,"src/couch_replicator_httpc.erl"},
>>>                                {line,88}]},
>>>
>>>
>>>
>>> --
>>> М.
>>>
>>> http://diglossa.ru
>>> xmpp://m.bykov@jabber.ru
>>
>> I'm guessing the underlying issue is that the replication source is an
>> older version of CouchDB, or that for some reason (e.g. a different
>> SpiderMonkey or JSON parsing library) it is unable to finish parsing
>> one document, causing the replication to die when only a partial doc
>> is sent. The error could be a lot more informative about where it
>> failed, however. If you're using filtered replication, that may also
>> be a mode of failure.
>>
>> There are a couple of ways to find it; the simplest is to sort and
>> then diff the _all_docs lists from both DBs, with whatever sensible
>> tidy-up is needed.
>>
>> Alternatively, I think either the source or the destination log (when
>> you're in debug mode) will show the last successful doc received or
>> transferred.
>>
>> A+
>> Dave
>
> Actually, after reading the HTML bit, the issue is more likely with
> your nginx config:
>
>     413 Request Entity Too Large
>
> I'd still diff _all_docs and then use single doc replication [1] as a
> quick test, instead of needing to re-run the whole replication.
>
> A+
> Dave
>
> [1]: http://wiki.apache.org/couchdb/Replication#Named_Document_Replication

Hi Dave,

Thank you; by diffing the _all_docs lists I easily found the invalid doc.
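
For the record, this is roughly what I did; a minimal sketch, assuming no
auth, with localhost as the local side and the database name from this
thread:

    import json
    from urllib.request import urlopen

    def all_docs(base_url):
        # Fetch _all_docs and return {doc_id: rev} for the whole database.
        with urlopen(base_url + "/_all_docs") as resp:
            rows = json.load(resp)["rows"]
        return {row["id"]: row["value"]["rev"] for row in rows}

    local  = all_docs("http://localhost:5984/greek")
    remote = all_docs("http://diglossa.ru:5984/greek")

    # Docs that exist locally but are missing, or at a different rev, remotely:
    for doc_id in sorted(local):
        if remote.get(doc_id) != local[doc_id]:
            print(doc_id, local[doc_id], "->", remote.get(doc_id))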

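To re-test just that document I'll try the named document replication from
your link [1]; a sketch, with a placeholder doc id:

    import json
    from urllib.request import Request, urlopen

    # POST to _replicate with a doc_ids list so only the suspect doc is pushed.
    body = json.dumps({
        "source": "greek",
        "target": "http://diglossa.ru:5984/greek",
        "doc_ids": ["<the-suspect-doc-id>"],
    }).encode("utf-8")

    req = Request("http://localhost:5984/_replicate", data=body,
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        print(json.load(resp))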

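And I assume the 413 itself comes from the default body-size limit in the
nginx proxy in front of the target; if so, something like this in the
relevant server/location block (the value is just an example) should let
the bigger docs through:

    # allow request bodies larger than nginx's 1m default
    client_max_body_size 10m;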

-- 
М.

http://diglossa.ru
xmpp://m.bykov@jabber.ru
