couchdb-commits mailing list archives

From kxe...@apache.org
Subject [03/14] Documentation was moved to couchdb-documentation repository
Date Thu, 16 Oct 2014 09:09:15 GMT
http://git-wip-us.apache.org/repos/asf/couchdb/blob/cdac7299/share/doc/src/query-server/protocol.rst
----------------------------------------------------------------------
diff --git a/share/doc/src/query-server/protocol.rst b/share/doc/src/query-server/protocol.rst
deleted file mode 100644
index c1f0b49..0000000
--- a/share/doc/src/query-server/protocol.rst
+++ /dev/null
@@ -1,967 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-
-.. _query-server/protocol:
-
-=====================
-Query Server Protocol
-=====================
-
-The `Query Server` is an external process that communicates with CouchDB via a
-JSON protocol over stdio and handles all design function calls:
-`views`, `shows`, `lists`, `filters`, `updates` and `validate_doc_update`.
-
-CouchDB communicates with the Query Server process through a stdio interface
-using JSON messages terminated by a newline character. Messages sent to the
-Query Server are always arrays matching the pattern
-``[<command>, <*arguments>]\n``.
-
-.. note::
-   For readability, the examples below omit the trailing ``\n`` character and
-   show formatted JSON values, while real data is transferred in compact form
-   without formatting spaces.
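
The framing above can be sketched as follows (a minimal model, assuming
Node.js; the ``handlers`` table and its contents are illustrative
placeholders, not part of the protocol):

```javascript
// A minimal sketch of Query Server message framing, assuming Node.js.
// The `handlers` table is a hypothetical stand-in for real command handlers.
const handlers = {
  reset: () => true,
  add_fun: (source) => true,
};

// Parse one newline-terminated JSON message and produce the JSON answer.
function dispatch(line) {
  const [command, ...args] = JSON.parse(line);  // ["<command>", <arguments...>]
  return JSON.stringify(handlers[command](...args)) + '\n';
}

// In a real server, lines would be read from stdin (e.g. with the `readline`
// module) and answers written to stdout, one per line.
```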
-
-.. _qs/reset:
-
-``reset``
-=========
-
-:Command: ``reset``
-:Arguments: :ref:`Query server state <config/query_server_config>` (optional)
-:Returns: ``true``
-
-This resets the state of the Query Server and makes it forget all previous
-input. If applicable, this is the point to run garbage collection.
-
-CouchDB sends::
-
-    ["reset"]
-
-The Query Server answers::
-
-    true
-
-To set up a new Query Server state, pass an object with configuration data as
-the second argument.
-
-CouchDB sends::
-
-    ["reset", {"reduce_limit": true, "timeout": 5000}]
-
-The Query Server answers::
-
-    true
-
-
-.. _qs/add_lib:
-
-``add_lib``
-===========
-
-:Command: ``add_lib``
-:Arguments: A CommonJS library object keyed by its ``views/lib`` path
-:Returns: ``true``
-
-Adds a :ref:`CommonJS <commonjs>` library to the Query Server state for later
-use in `map` functions.
-
-CouchDB sends::
-
-  [
-    "add_lib",
-    {
-      "utils": "exports.MAGIC = 42;"
-    }
-  ]
-
-The Query Server answers::
-
-  true
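
The stored source is an ordinary CommonJS module. A rough simulation of how
such a library could be evaluated (the ``requireLib`` helper is hypothetical;
it only illustrates the module-evaluation idea, not CouchDB's actual
implementation):

```javascript
// Simulating evaluation of a views/lib CommonJS module.
// `requireLib` is a hypothetical helper, not part of the protocol.
const libs = { utils: "exports.MAGIC = 42;" };  // as sent by add_lib

function requireLib(name) {
  const module = { exports: {} };
  // Evaluate the module source with its own `exports`/`module` objects.
  new Function('exports', 'module', libs[name])(module.exports, module);
  return module.exports;
}
```

A map function could then access the library's exports, e.g. read the
``MAGIC`` value defined above.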
-
-
-.. note::
-
-   The library shouldn't have any side effects or track its own state, or
-   you're in for long debugging sessions when something goes wrong. Remember
-   that a complete index rebuild is a heavy operation, and it is the only way
-   to fix mistakes caused by shared state.
-
-.. _qs/add_fun:
-
-``add_fun``
-===========
-
-:Command: ``add_fun``
-:Arguments: Map function source code.
-:Returns: ``true``
-
-When creating or updating a view the Query Server gets sent the view function
-for evaluation. The Query Server should parse, compile and evaluate the
-function it receives to make it callable later. If this fails, the Query Server
-returns an error. CouchDB might store several functions before sending in any 
-actual documents.
-
-CouchDB sends::
-
-    [
-      "add_fun",
-      "function(doc) { if(doc.score > 50) emit(null, {'player_name': doc.name}); }"
-    ]
-
-The Query Server answers::
-
-    true
-
-
-.. _qs/map_doc:
-
-``map_doc``
-===========
-
-:Command: ``map_doc``
-:Arguments: Document object
-:Returns: Array of key-value pairs per applied :ref:`function <qs/add_fun>`
-
-When the view function is stored in the Query Server, CouchDB starts sending
-in all the documents in the database, one at a time. The Query Server calls
-each previously stored function, one after another, with the document and
-collects the results. When all functions have been called, the combined result
-is returned as a JSON string.
-
-CouchDB sends::
-
-    [
-      "map_doc",
-      {
-        "_id": "8877AFF9789988EE",
-        "_rev": "3-235256484",
-        "name": "John Smith",
-        "score": 60
-      }
-    ]
-
-If the function above is the only function stored, the Query Server answers::
-
-    [
-      [
-        [null, {"player_name": "John Smith"}]
-      ]
-    ]
-
-That is, an array with the result for every function for the given document.
-
-If a document is to be excluded from the view, the array should be empty.
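
The ``add_fun``/``map_doc`` exchange can be simulated as follows (a
simplified sketch assuming Node.js; the ``emit`` wiring and ``eval``-based
compilation are one possible implementation, not CouchDB's actual code):

```javascript
// Sketch of how a Query Server might store map functions and apply them.
const mapFuns = [];
let results = [];

// emit() appends a [key, value] pair to the current function's result list.
function emit(key, value) {
  results[results.length - 1].push([key, value]);
}

function addFun(source) {
  mapFuns.push(eval('(' + source + ')'));  // compile for later calls
  return true;
}

function mapDoc(doc) {
  results = [];
  for (const fn of mapFuns) {
    results.push([]);  // one result array per stored function
    fn(doc);
  }
  return results;      // e.g. [[[null, {...}]]], or [[]] if nothing emitted
}
```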
-
-CouchDB sends::
-
-    [
-      "map_doc",
-      {
-        "_id": "9590AEB4585637FE",
-        "_rev": "1-674684684",
-        "name": "Jane Parker",
-        "score": 43
-      }
-    ]
-
-The Query Server answers::
-
-    [[]]
-
-
-.. _qs/reduce:
-
-``reduce``
-==========
-
-:Command: ``reduce``
-:Arguments:
-  - Reduce function source
-  - Array of :ref:`map function <mapfun>` results, where each item is
-    represented in the format ``[[key, id-of-doc], value]``
-:Returns: Array with two elements: ``true`` and an array with the reduced
-  result
-
-If the view has a reduce function defined, CouchDB enters the reduce phase.
-The view server receives a list of reduce functions and some map results on
-which it can apply them.
-
-CouchDB sends::
-
-  [
-    "reduce",
-    [
-      "function(k, v) { return sum(v); }"
-    ],
-    [
-      [[1, "699b524273605d5d3e9d4fd0ff2cb272"], 10],
-      [[2, "c081d0f69c13d2ce2050d684c7ba2843"], 20],
-      [[null, "foobar"], 3]
-    ]
-  ]
-
-The Query Server answers::
-
-  [
-    true,
-    [33]
-  ]
-
-Note that even though the view server receives the map results in the form
-``[[key, id-of-doc], value]``, the function may receive them in a different
-form. For example, the JavaScript Query Server applies functions on the list of
-keys and the list of values.
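
For illustration, here is how the rows from the example above could be
reshaped into separate key and value lists (a sketch; the local ``sum``
helper mirrors the built-in used by the example reduce function):

```javascript
// Reshaping map rows [[key, id-of-doc], value] into separate lists,
// similar to what the JavaScript Query Server hands to reduce functions.
const rows = [
  [[1, "699b524273605d5d3e9d4fd0ff2cb272"], 10],
  [[2, "c081d0f69c13d2ce2050d684c7ba2843"], 20],
  [[null, "foobar"], 3],
];

const keys = rows.map(([keyAndId]) => keyAndId);  // [[1, "699b..."], ...]
const values = rows.map(([, value]) => value);    // [10, 20, 3]

// Local stand-in for the `sum` helper used by the example reduce function.
function sum(arr) { return arr.reduce((a, b) => a + b, 0); }
```

Applying the example reduce function to ``values`` yields ``33``, matching
the answer above.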
-
-.. _qs/rereduce:
-
-``rereduce``
-============
-
-:Command: ``rereduce``
-:Arguments: List of values.
-
-When building a view, CouchDB will apply the reduce step directly to the output
-of the map step and the rereduce step to the output of a previous reduce step.
-
-CouchDB will send a list of values, with no keys or document ids, to the
-rereduce step.
-
-CouchDB sends::
-
-  [
-    "rereduce",
-    [
-      "function(k, v, r) { return sum(v); }"
-    ],
-    [
-      33,
-      55,
-      66
-    ]
-  ]
-
-The Query Server answers::
-
-  [
-    true,
-    [154]
-  ]
-
-
-.. _qs/ddoc:
-
-``ddoc``
-========
-
-:Command: ``ddoc``
-:Arguments: Array of objects.
-
-  - First phase (ddoc initialization):
-
-    - ``"new"``
-    - Design document ``_id``
-    - Design document object
-
-  - Second phase (design function execution):
-
-    - Design document ``_id``
-    - Function path as an array of object keys
-    - Array of function arguments
-
-:Returns:
-
-  - First phase (ddoc initialization): ``true``
-  - Second phase (design function execution): custom object depending on
-    executed function
-
-
-
-This command acts in two phases: `ddoc` registration and `design function`
-execution.
-
-In the first phase CouchDB sends the full design document content to the Query
-Server, which caches it by its ``_id`` value for later function execution.
-
-To do this, CouchDB sends::
-
-  [
-    "ddoc",
-    "new",
-    "_design/temp",
-    {
-      "_id": "_design/temp",
-      "_rev": "8-d7379de23a751dc2a19e5638a7bbc5cc",
-      "language": "javascript",
-      "shows": {
-        "request": "function(doc,req){ return {json: req}; }",
-        "hello": "function(doc,req){ return {body: 'Hello, ' + (doc || {})._id + '!'}; }"
-      }
-    }
-  ]
-
-The Query Server answers::
-
-  true
-
-
-After that, the design document is ready to serve subsequent subcommands;
-that's the second phase.
-
-.. note::
-
-   Each ``ddoc`` `subcommand` is actually a root design document key, so they
-   are not true subcommands, but the first elements of the JSON path to the
-   function that will be handled and processed.
-
-   The pattern for subcommand execution is common:
-
-   ``["ddoc", <design_doc_id>, [<subcommand>, <funcname>], [<argument1>, <argument2>, ...]]``
-
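The two phases can be sketched with a small dispatcher (a simplified model;
the ``ddocs`` cache and ``eval``-based compilation are assumptions about one
possible implementation, not CouchDB's actual code):

```javascript
// Sketch of the two-phase ddoc command: cache the document on "new",
// then walk the JSON path to the stored function source and execute it.
const ddocs = {};

function ddoc(first, ...rest) {
  if (first === 'new') {            // phase 1: ["new", <id>, <doc>]
    const [id, doc] = rest;
    ddocs[id] = doc;
    return true;
  }
  const [path, args] = rest;        // phase 2: [<id>, <path>, <args>]
  let source = ddocs[first];
  for (const key of path) source = source[key];  // e.g. ["shows", "hello"]
  return eval('(' + source + ')')(...args);
}
```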
-
-.. _qs/ddoc/shows:
-
-``shows``
----------
-
-:Command: ``ddoc``
-:SubCommand: ``shows``
-:Arguments:
-
-  - Document object or ``null`` if document `id` wasn't specified in request
-  - :ref:`request_object`
-
-:Returns: Array with two elements:
-
-  - ``"resp"``
-  - :ref:`response_object`
-
-Executes :ref:`show function <showfun>`.
-
-CouchDB sends::
-
-  [
-    "ddoc",
-    "_design/temp",
-    [
-        "shows",
-        "doc"
-    ],
-    [
-      null,
-      {
-        "info": {
-          "db_name": "test",
-          "doc_count": 8,
-          "doc_del_count": 0,
-          "update_seq": 105,
-          "purge_seq": 0,
-          "compact_running": false,
-          "disk_size": 15818856,
-          "data_size": 1535048,
-          "instance_start_time": "1359952188595857",
-          "disk_format_version": 6,
-          "committed_update_seq": 105
-        },
-        "id": null,
-        "uuid": "169cb4cc82427cc7322cb4463d0021bb",
-        "method": "GET",
-        "requested_path": [
-          "api",
-          "_design",
-          "temp",
-          "_show",
-          "request"
-        ],
-        "path": [
-          "api",
-          "_design",
-          "temp",
-          "_show",
-          "request"
-        ],
-        "raw_path": "/api/_design/temp/_show/request",
-        "query": {},
-        "headers": {
-          "Accept": "*/*",
-          "Host": "localhost:5984",
-          "User-Agent": "curl/7.26.0"
-        },
-        "body": "undefined",
-        "peer": "127.0.0.1",
-        "form": {},
-        "cookie": {},
-        "userCtx": {
-          "db": "api",
-          "name": null,
-          "roles": [
-            "_admin"
-          ]
-        },
-        "secObj": {}
-      }
-    ]
-  ]
-
-The Query Server answers::
-
-  [
-    "resp",
-    {
-      "body": "Hello, undefined!"
-    }
-  ]
-
-
-.. _qs/ddoc/lists:
-
-``lists``
----------
-
-:Command: ``ddoc``
-:SubCommand: ``lists``
-:Arguments:
-
-  - :ref:`view_head_info_object`
-  - :ref:`request_object`
-
-:Returns: Array. See below for details.
-
-Executes :ref:`list function <listfun>`.
-
-The communication protocol for `list` functions is a bit complex, so let's use
-an example to illustrate it.
-
-Let's assume that we have a view function that emits `id-rev` pairs::
-
-  function(doc) {
-    emit(doc._id, doc._rev);
-  }
-
-And we'd like to emulate the ``_all_docs`` JSON response with a list function.
-Our *first* version of the list function looks like this::
-
-  function(head, req){
-    start({'headers': {'Content-Type': 'application/json'}});
-    var resp = head;
-    var rows = [];
-    while(row=getRow()){
-      rows.push(row);
-    }
-    resp.rows = rows;
-    return toJSON(resp);
-  }
-
-The whole communication session during list function execution can be divided
-into three parts:
-
-#. Initialization
-
-   The first object returned by the list function is an array with the
-   following structure::
-
-      ["start", <chunks>, <headers>]
-
-   Where ``<chunks>`` is an array of text chunks that will be sent to the
-   client and ``<headers>`` is an object with response HTTP headers.
-
-   This message is sent from the Query Server to CouchDB on the
-   :js:func:`start` call, which initializes the HTTP response to the client::
-
-     [
-       "start",
-       [],
-       {
-         "headers": {
-           "Content-Type": "application/json"
-         }
-       }
-     ]
-
-   After this, the list function may start to process view rows.
-
-#. View Processing
-
-   Since view results can be extremely large, it is not wise to pass all the
-   rows in a single command. Instead, CouchDB sends view rows one by one to
-   the Query Server, allowing the view to be processed and the output
-   generated in a streaming fashion.
-
-   CouchDB sends a special array that carries view row data::
-
-     [
-       "list_row",
-       {
-         "id": "0cb42c267fe32d4b56b3500bc503e030",
-         "key": "0cb42c267fe32d4b56b3500bc503e030",
-         "value": "1-967a00dff5e02add41819138abb3284d"
-       }
-     ]
-
-   If the Query Server has something to return for this row, it answers with
-   an array that has ``"chunks"`` at its head and an array of data at its
-   tail. In our case it has nothing to return, so the response will be::
-
-     [
-       "chunks",
-       []
-     ]
-
-   When there are no more view rows to process, CouchDB sends a special
-   message signaling that there is no more data to send from its side::
-
-     ["list_end"]
-
-
-#. Finalization
-
-   The last stage of the communication process is returning the *list tail*:
-   the last data chunk. After this, list function processing is complete and
-   the client receives the full response.
-
-   For our example, the last message will be::
-
-     [
-       "end",
-       [
-         "{\"total_rows\":2,\"offset\":0,\"rows\":[{\"id\":\"0cb42c267fe32d4b56b3500bc503e030\",\"key\":\"0cb42c267fe32d4b56b3500bc503e030\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"},{\"id\":\"431926a69504bde41851eb3c18a27b1f\",\"key\":\"431926a69504bde41851eb3c18a27b1f\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"}]}"
-       ]
-     ]
-
-Here we made a big mistake: we returned our entire result in a single message
-from the Query Server. That's fine when there are only a few rows in the view
-result, but it's not acceptable for millions of documents and millions of view
-rows.
-
-Let's fix our list function and see the changes in communication::
-
-  function(head, req){
-    start({'headers': {'Content-Type': 'application/json'}});
-    send('{');
-    send('"total_rows":' + toJSON(head.total_rows) + ',');
-    send('"offset":' + toJSON(head.offset) + ',');
-    send('"rows":[');
-    if (row=getRow()){
-      send(toJSON(row));
-    }
-    while(row=getRow()){
-      send(',' + toJSON(row));
-    }
-    send(']');
-    return '}';
-  }
-
-"Wait, what?" you might ask. Yes, we now build the JSON response manually from
-string chunks, but let's take a look at the logs::
-
-  [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["start",["{","\"total_rows\":2,","\"offset\":0,","\"rows\":["],{"headers":{"Content-Type":"application/json"}}]
-  [Wed, 24 Jul 2013 05:45:30 GMT] [info] [<0.18963.1>] 127.0.0.1 - - GET /blog/_design/post/_list/index/all_docs 200
-  [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Input  :: ["list_row",{"id":"0cb42c267fe32d4b56b3500bc503e030","key":"0cb42c267fe32d4b56b3500bc503e030","value":"1-967a00dff5e02add41819138abb3284d"}]
-  [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["chunks",["{\"id\":\"0cb42c267fe32d4b56b3500bc503e030\",\"key\":\"0cb42c267fe32d4b56b3500bc503e030\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"}"]]
-  [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Input  :: ["list_row",{"id":"431926a69504bde41851eb3c18a27b1f","key":"431926a69504bde41851eb3c18a27b1f","value":"1-967a00dff5e02add41819138abb3284d"}]
-  [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["chunks",[",{\"id\":\"431926a69504bde41851eb3c18a27b1f\",\"key\":\"431926a69504bde41851eb3c18a27b1f\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"}"]]
-  [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Input  :: ["list_end"]
-  [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["end",["]","}"]]
-
-Note that the Query Server now sends the response in lightweight chunks. Even
-if the communication process were extremely slow, the client would see the
-response data appear on their screen chunk by chunk, without waiting for the
-complete result as with our previous list function.
-
-.. _qs/ddoc/updates:
-
-``updates``
------------
-
-:Command: ``ddoc``
-:SubCommand: ``updates``
-:Arguments:
-
-  - Document object or ``null`` if document `id` wasn't specified in request
-  - :ref:`request_object`
-
-:Returns: Array with three elements:
-
-  - ``"up"``
-  - Document object or ``null`` if nothing should be stored
-  - :ref:`response_object`
-
-Executes :ref:`update function <updatefun>`.
-
-CouchDB sends::
-
-    [
-        "ddoc",
-        "_design/id",
-        [
-            "updates",
-            "nothing"
-        ],
-        [
-            null,
-            {
-                "info": {
-                    "db_name": "test",
-                    "doc_count": 5,
-                    "doc_del_count": 0,
-                    "update_seq": 16,
-                    "purge_seq": 0,
-                    "compact_running": false,
-                    "disk_size": 8044648,
-                    "data_size": 7979601,
-                    "instance_start_time": "1374612186131612",
-                    "disk_format_version": 6,
-                    "committed_update_seq": 16
-                },
-                "id": null,
-                "uuid": "7b695cb34a03df0316c15ab529002e69",
-                "method": "POST",
-                "requested_path": [
-                    "test",
-                    "_design",
-                    "1139",
-                    "_update",
-                    "nothing"
-                ],
-                "path": [
-                    "test",
-                    "_design",
-                    "1139",
-                    "_update",
-                    "nothing"
-                ],
-                "raw_path": "/test/_design/1139/_update/nothing",
-                "query": {},
-                "headers": {
-                    "Accept": "*/*",
-                    "Accept-Encoding": "identity, gzip, deflate, compress",
-                    "Content-Length": "0",
-                    "Host": "localhost:5984"
-                },
-                "body": "",
-                "peer": "127.0.0.1",
-                "form": {},
-                "cookie": {},
-                "userCtx": {
-                    "db": "test",
-                    "name": null,
-                    "roles": [
-                        "_admin"
-                    ]
-                },
-                "secObj": {}
-            }
-        ]
-    ]
-
-The Query Server answers::
-
-  [
-    "up",
-    null,
-    {"body": "document id wasn't provided"}
-  ]
-
-or in case of successful update::
-
-  [
-    "up",
-    {
-      "_id": "7b695cb34a03df0316c15ab529002e69",
-      "hello": "world!"
-    },
-    {"body": "document was updated"}
-  ]
-
-
-.. _qs/ddoc/filters:
-
-``filters``
------------
-
-:Command: ``ddoc``
-:SubCommand: ``filters``
-:Arguments:
-
-  - Array of document objects
-  - :ref:`request_object`
-
-:Returns: Array of two elements:
-
-  - ``true``
-  - Array of booleans in the same order as the input documents.
-
-Executes :ref:`filter function <filterfun>`.
-
-CouchDB sends::
-
-  [
-      "ddoc",
-      "_design/test",
-      [
-          "filters",
-          "random"
-      ],
-      [
-          [
-              {
-                  "_id": "431926a69504bde41851eb3c18a27b1f",
-                  "_rev": "1-967a00dff5e02add41819138abb3284d",
-                  "_revisions": {
-                      "start": 1,
-                      "ids": [
-                          "967a00dff5e02add41819138abb3284d"
-                      ]
-                  }
-              },
-              {
-                  "_id": "0cb42c267fe32d4b56b3500bc503e030",
-                  "_rev": "1-967a00dff5e02add41819138abb3284d",
-                  "_revisions": {
-                      "start": 1,
-                      "ids": [
-                          "967a00dff5e02add41819138abb3284d"
-                      ]
-                  }
-              }
-          ],
-          {
-              "info": {
-                  "db_name": "test",
-                  "doc_count": 5,
-                  "doc_del_count": 0,
-                  "update_seq": 19,
-                  "purge_seq": 0,
-                  "compact_running": false,
-                  "disk_size": 8056936,
-                  "data_size": 7979745,
-                  "instance_start_time": "1374612186131612",
-                  "disk_format_version": 6,
-                  "committed_update_seq": 19
-              },
-              "id": null,
-              "uuid": "7b695cb34a03df0316c15ab529023a81",
-              "method": "GET",
-              "requested_path": [
-                  "test",
-                  "_changes?filter=test",
-                  "random"
-              ],
-              "path": [
-                  "test",
-                  "_changes"
-              ],
-              "raw_path": "/test/_changes?filter=test/random",
-              "query": {
-                  "filter": "test/random"
-              },
-              "headers": {
-                  "Accept": "application/json",
-                  "Accept-Encoding": "identity, gzip, deflate, compress",
-                  "Content-Length": "0",
-                  "Content-Type": "application/json; charset=utf-8",
-                  "Host": "localhost:5984"
-              },
-              "body": "",
-              "peer": "127.0.0.1",
-              "form": {},
-              "cookie": {},
-              "userCtx": {
-                  "db": "test",
-                  "name": null,
-                  "roles": [
-                      "_admin"
-                  ]
-              },
-              "secObj": {}
-          }
-      ]
-  ]
-
-The Query Server answers::
-
-  [
-    true,
-    [
-      true,
-      false
-    ]
-  ]
-
-
-
-.. _qs/ddoc/views:
-
-``views``
----------
-
-:Command: ``ddoc``
-:SubCommand: ``views``
-:Arguments: Array of document objects
-:Returns: Array of two elements:
-
-  - ``true``
-  - Array of booleans in the same order as the input documents.
-
-.. versionadded:: 1.2
-
-Executes a :ref:`view function <viewfilter>` in place of the filter.
-
-Acts in the same way as the :ref:`qs/ddoc/filters` command.
-
-.. _qs/ddoc/validate_doc_update:
-
-``validate_doc_update``
------------------------
-
-:Command: ``ddoc``
-:SubCommand: ``validate_doc_update``
-:Arguments:
-
-  - Document object that will be stored
-  - Document object that will be replaced
-  - :ref:`userctx_object`
-  - :ref:`security_object`
-
-:Returns: ``1``
-
-Executes :ref:`validation function <vdufun>`.
-
-CouchDB sends::
-
-  [
-    "ddoc",
-    "_design/id",
-    ["validate_doc_update"],
-    [
-      {
-        "_id": "docid",
-        "_rev": "2-e0165f450f6c89dc6b071c075dde3c4d",
-        "score": 10
-      },
-      {
-        "_id": "docid",
-        "_rev": "1-9f798c6ad72a406afdbf470b9eea8375",
-        "score": 4
-      },
-      {
-        "name": "Mike",
-        "roles": ["player"]
-      },
-      {
-        "admins": {},
-        "members": []
-      }
-    ]
-  ]
-
-The Query Server answers::
-
-  1
-
-.. note::
-
-   While the only valid success response for this command is ``true``, to
-   prevent the document from being saved the Query Server needs to raise an
-   error: ``forbidden`` or ``unauthorized``. These errors will be turned into
-   proper ``HTTP 403`` and ``HTTP 401`` responses respectively.
-
-
-.. _qs/errors:
-
-Raising errors
-==============
-
-When something goes wrong, the Query Server can inform CouchDB by sending a
-special message in response to the received command.
-
-Error messages prevent further command execution and return an error description
-to CouchDB. All errors are logically divided into two groups:
-
-- `Common errors`. These errors only abort the current Query Server command
-  and return the error info to the CouchDB instance *without* terminating the
-  Query Server process.
-- `Fatal errors`. Fatal errors signal something really bad that hurts the
-  overall stability and productivity of the Query Server process. For
-  instance, if you're using the Python Query Server and some design function
-  is unable to import a third-party module, it's better to treat such an
-  error as fatal and terminate the whole process; otherwise you would have to
-  do the same manually after fixing the import.
-
-.. _qs/error:
-
-``error``
----------
-
-To raise an error, the Query Server has to answer::
-
-  ["error", "error_name", "reason why"]
-
-The ``"error_name"`` helps classify problems by their type: for example,
-``"value_error"`` suggests the user has probably entered invalid data,
-``"not_found"`` signals a missing resource, and ``"type_error"`` indicates
-invalid and unexpected input from the user.
-
-The ``"reason why"`` is the error message that explains why the error was
-raised and, if possible, what needs to be done to fix it.
-
-For example, calling :ref:`updatefun` against a non-existent document could
-produce the following error message::
-
-  ["error", "not_found", "Update function requires existent document"]
-
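A Query Server implementation might convert thrown exceptions into such
messages with a small wrapper (a sketch; ``safeCall`` is a hypothetical
helper, not part of the protocol):

```javascript
// Hypothetical wrapper: turn a thrown exception into a protocol error
// message of the form ["error", <error_name>, <reason why>].
function safeCall(fn, args) {
  try {
    return fn(...args);
  } catch (err) {
    return ['error', err.name || 'unknown_error', err.message];
  }
}
```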
-
-.. _qs/error/forbidden:
-
-``forbidden``
--------------
-
-The `forbidden` error is widely used by :ref:`vdufun` to stop further function
-processing and prevent the new document version from being stored on disk.
-Since this error is not actually an error, but an assertion against user
-actions, CouchDB doesn't log it at `"error"` level, but returns an
-`HTTP 403 Forbidden` response with an error information object.
-
-To raise this error, the Query Server has to answer::
-
-  {"forbidden": "reason why"}
-
-
-.. _qs/error/unauthorized:
-
-``unauthorized``
-----------------
-
-The `unauthorized` error mostly acts like the `forbidden` error, but with the
-meaning of *please authorize first*. This small difference helps end users
-understand what they can do to solve the problem. CouchDB doesn't log it at
-`"error"` level, but returns an `HTTP 401 Unauthorized` response with an error
-information object.
-
-To raise this error, the Query Server has to answer::
-
-  {"unauthorized": "reason why"}
-
-.. _qs/log:
-
-Logging
-=======
-
-At any time, the Query Server may send some information that will be saved in
-CouchDB's log file. This is done by sending a special ``log`` message on a
-separate line::
-
-  ["log", "some message"]
-
-CouchDB sends no response, but writes the received message into its log file::
-
-  [Sun, 13 Feb 2009 23:31:30 GMT] [info] [<0.72.0>] Query Server Log Message: some message
-
-These messages are only logged at :config:option:`info level <log/level>`.

http://git-wip-us.apache.org/repos/asf/couchdb/blob/cdac7299/share/doc/src/replication/conflicts.rst
----------------------------------------------------------------------
diff --git a/share/doc/src/replication/conflicts.rst b/share/doc/src/replication/conflicts.rst
deleted file mode 100644
index 2d01a8c..0000000
--- a/share/doc/src/replication/conflicts.rst
+++ /dev/null
@@ -1,793 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-
-.. _replication/conflicts:
-
-==============================
-Replication and conflict model
-==============================
-
-Let's take the following example to illustrate replication and conflict handling.
-
-- Alice has a document containing Bob's business card;
-- She synchronizes it between her desktop PC and her laptop;
-- On the desktop PC, she updates Bob's E-mail address;
-- Without syncing again, she updates Bob's mobile number on the laptop;
-- Then she replicates the two to each other again.
-
-So on the desktop the document has Bob's new E-mail address and his old mobile
-number, and on the laptop it has his old E-mail address and his new mobile
-number.
-
-The question is, what happens to these conflicting updated documents?
-
-CouchDB replication
-===================
-
-CouchDB works with JSON documents inside databases. Replication of databases
-takes place over HTTP, and can be either a "pull" or a "push", but is
-unidirectional. So the easiest way to perform a full sync is to do a "push"
-followed by a "pull" (or vice versa).
-
-So, Alice creates v1 and syncs it. She updates to v2a on one side and v2b on
-the other, and then replicates. What happens?
-
-The answer is simple: both versions exist on both sides!
-
-.. code-block:: text
-
-     DESKTOP                          LAPTOP
-   +---------+
-   | /db/bob |                                     INITIAL
-   |   v1    |                                     CREATION
-   +---------+
-
-   +---------+                      +---------+
-   | /db/bob |  ----------------->  | /db/bob |     PUSH
-   |   v1    |                      |   v1    |
-   +---------+                      +---------+
-
-   +---------+                      +---------+  INDEPENDENT
-   | /db/bob |                      | /db/bob |     LOCAL
-   |   v2a   |                      |   v2b   |     EDITS
-   +---------+                      +---------+
-
-   +---------+                      +---------+
-   | /db/bob |  ----------------->  | /db/bob |     PUSH
-   |   v2a   |                      |   v2a   |
-   +---------+                      |   v2b   |
-                                    +---------+
-
-   +---------+                      +---------+
-   | /db/bob |  <-----------------  | /db/bob |     PULL
-   |   v2a   |                      |   v2a   |
-   |   v2b   |                      |   v2b   |
-   +---------+                      +---------+
-
-After all, this is not a filesystem, so there's no restriction that only one
-document can exist with the name /db/bob. These are just "conflicting" revisions
-under the same name.
-
-Because the changes are always replicated, the data is safe. Both machines have
-identical copies of both documents, so failure of a hard drive on either side
-won't lose any of the changes.
-
-Another thing to notice is that peers do not have to be configured or tracked.
-You can do regular replications to peers, or you can do one-off, ad-hoc pushes
-or pulls. After the replication has taken place, there is no record kept of
-which peer any particular document or revision came from.
-
-So the question now is: what happens when you try to read /db/bob? By default,
-CouchDB picks one arbitrary revision as the "winner", using a deterministic
-algorithm so that the same choice will be made on all peers. The same happens
-with views: the deterministically-chosen winner is the only revision fed into
-your map function.
-
-Let's say that the winner is v2a. On the desktop, if Alice reads the document
-she'll see v2a, which is what she saved there. But on the laptop, after
-replication, she'll also see only v2a. It could look as if the changes she made
-there have been lost - but of course they have not, they have just been hidden
-away as a conflicting revision. But eventually she'll need these changes merged
-into Bob's business card, otherwise they will effectively have been lost.
-
-Any sensible business-card application will, at minimum, have to present the
-conflicting versions to Alice and allow her to create a new version
-incorporating information from them all. Ideally it would merge the updates
-itself.
-
-Conflict avoidance
-==================
-
-When working on a single node, CouchDB will avoid creating conflicting revisions
-by returning a :statuscode:`409` error. This is because, when you
-PUT a new version of a document, you must give the ``_rev`` of the previous
-version. If that ``_rev`` has already been superseded, the update is rejected
-with a :statuscode:`409` response.
-
-So imagine two users on the same node are fetching Bob's business card, updating
-it concurrently, and writing it back:
-
-.. code-block:: text
-
-  USER1    ----------->  GET /db/bob
-           <-----------  {"_rev":"1-aaa", ...}
-
-  USER2    ----------->  GET /db/bob
-           <-----------  {"_rev":"1-aaa", ...}
-
-  USER1    ----------->  PUT /db/bob?rev=1-aaa
-           <-----------  {"_rev":"2-bbb", ...}
-
-  USER2    ----------->  PUT /db/bob?rev=1-aaa
-           <-----------  409 Conflict  (not saved)
-
-User2's changes are rejected, so it's up to the app to fetch /db/bob again,
-and either:
-
-#. apply the same changes as were applied to the earlier revision, and submit
-   a new PUT
-#. redisplay the document so the user has to edit it again
-#. simply overwrite it with the version it tried to save before (which is not
-   advisable, as user1's changes will be silently lost)
-
-So when working in this mode, your application still has to be able to handle
-these conflicts and have a suitable retry strategy, but these conflicts never
-end up inside the database itself.
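-
-The retry strategy can be sketched with a small in-memory stand-in for the
-database (Ruby; ``TinyStore`` and its ``Conflict`` error are invented here to
-play the role of CouchDB and its 409 response):
-
-.. code-block:: ruby
-
-  class TinyStore
-    Conflict = Class.new(StandardError)   # stands in for a 409 response
-
-    def initialize
-      @docs = {}                          # id => {:rev => ..., :body => ...}
-    end
-
-    def get(id)
-      d = @docs[id]
-      d && {"_id" => id, "_rev" => d[:rev]}.merge(d[:body])
-    end
-
-    # A PUT must quote the current _rev, otherwise it is rejected
-    def put(id, doc)
-      current = @docs[id]
-      raise Conflict if current && doc["_rev"] != current[:rev]
-      gen = current ? current[:rev].split("-").first.to_i + 1 : 1
-      rev = "#{gen}-#{rand(16**8).to_s(16)}"
-      @docs[id] = {:rev => rev, :body => doc.reject { |k, _| k.start_with?("_") }}
-      rev
-    end
-  end
-
-  # Option 1 above: on a conflict, re-fetch and re-apply the change
-  def update_with_retry(store, id, max_tries = 5)
-    max_tries.times do
-      doc = store.get(id)
-      begin
-        return store.put(id, yield(doc))
-      rescue TinyStore::Conflict
-        next                              # someone else won the race; retry
-      end
-    end
-    raise "gave up after #{max_tries} attempts"
-  end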
-
-Conflicts in batches
-====================
-
-There are two different ways that conflicts can end up in the database:
-
-- Conflicting changes made on different databases, which are replicated to each
-  other, as shown earlier.
-- Changes are written to the database using ``_bulk_docs`` with
-  ``all_or_nothing``, which bypasses the :statuscode:`409` mechanism.
-
-The :ref:`_bulk_docs API <api/db/bulk_docs>` lets you submit multiple updates
-(and/or deletes) in a single HTTP POST. Normally, these are treated as
-independent updates; some in the batch may fail because the ``_rev`` is stale
-(just like a 409 from a PUT) whilst others are written successfully.
-The response from ``_bulk_docs`` lists the success/fail separately for each
-document in the batch.
-
-However there is another mode of working, whereby you specify
-``{"all_or_nothing":true}`` as part of the request. This is CouchDB's nearest
-equivalent of a "transaction", but it's not the same as a database transaction
-which can fail and roll back. Rather, it means that all of the changes in the
-request will be forcibly applied to the database, even if that introduces
-conflicts. The only guarantee you are given is that they will either all be
-applied to the database, or none of them (e.g. if the power is pulled out before
-the update is finished writing to disk).
-
-So this gives you a way to introduce conflicts within a single database
-instance. If you choose to do this instead of PUT, it means you don't have to
-write any code for the possibility of getting a 409 response, because you will
-never get one. Rather, you have to deal with conflicts appearing later in the
-database, which is what you'd have to do in a multi-master application anyway.
-
-.. code-block:: http
-
-  POST /db/_bulk_docs
-
-.. code-block:: javascript
-
-  {
-    "all_or_nothing": true,
-    "docs": [
-      {"_id":"x", "_rev":"1-xxx", ...},
-      {"_id":"y", "_rev":"1-yyy", ...},
-      ...
-    ]
-  }
-
-Revision tree
-=============
-
-When you update a document in CouchDB, it keeps a list of the previous
-revisions. In the case where conflicting updates are introduced, this history
-branches into a tree, where the current conflicting revisions for this document
-form the tips (leaf nodes) of this tree:
-
-.. code-block:: text
-
-      ,--> r2a
-    r1 --> r2b
-      `--> r2c
-
-Each branch can then extend its history - for example, if you read revision r2b
-and then PUT with ``?rev=r2b``, you will make a new revision along that
-particular branch.
-
-.. code-block:: text
-
-      ,--> r2a -> r3a -> r4a
-    r1 --> r2b -> r3b
-      `--> r2c -> r3c
-
-Here, (r4a, r3b, r3c) are the set of conflicting revisions. The way you resolve
-a conflict is to delete the leaf nodes along the other branches. So when you
-combine (r4a+r3b+r3c) into a single merged document, you would replace r4a and
-delete r3b and r3c.
-
-.. code-block:: text
-
-      ,--> r2a -> r3a -> r4a -> r5a
-    r1 --> r2b -> r3b -> (r4b deleted)
-      `--> r2c -> r3c -> (r4c deleted)
-
-Note that r4b and r4c still exist as leaf nodes in the history tree, but as
-deleted docs. You can retrieve them but they will be marked ``"_deleted":true``.
-
-When you compact a database, the bodies of all the non-leaf documents are
-discarded. However, the list of historical ``_revs`` is retained, for the benefit of
-later conflict resolution in case you meet any old replicas of the database at
-some time in future. There is "revision pruning" to stop this getting
-arbitrarily large.
-
-Working with conflicting documents
-==================================
-
-The basic :get:`/{db}/{docid}` operation will not show you any
-information about conflicts. You see only the deterministically-chosen winner,
-and get no indication as to whether other conflicting revisions exist or not:
-
-.. code-block:: javascript
-
-  {
-    "_id":"test",
-    "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-    "hello":"bar"
-  }
-
-If you do ``GET /db/bob?conflicts=true``, and the document is in a conflict
-state, then you will get the winner plus a _conflicts member containing an array
-of the revs of the other, conflicting revision(s). You can then fetch them
-individually using subsequent ``GET /db/bob?rev=xxxx`` operations:
-
-.. code-block:: javascript
-
-  {
-    "_id":"test",
-    "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-    "hello":"bar",
-    "_conflicts":[
-      "2-65db2a11b5172bf928e3bcf59f728970",
-      "2-5bc3c6319edf62d4c624277fdd0ae191"
-    ]
-  }
-
-If you do ``GET /db/bob?open_revs=all`` then you will get all the leaf nodes of
-the revision tree. This will give you all the current conflicts, but will also
-give you leaf nodes which have been deleted (i.e. parts of the conflict history
-which have since been resolved). You can remove these by filtering out documents
-with ``"_deleted":true``:
-
-.. code-block:: javascript
-
-  [
-    {"ok":{"_id":"test","_rev":"2-5bc3c6319edf62d4c624277fdd0ae191","hello":"foo"}},
-    {"ok":{"_id":"test","_rev":"2-65db2a11b5172bf928e3bcf59f728970","hello":"baz"}},
-    {"ok":{"_id":"test","_rev":"2-b91bb807b4685080c6a651115ff558f5","hello":"bar"}}
-  ]
-
-The ``"ok"`` tag is an artifact of ``open_revs``, which also lets you list
-explicit revisions as a JSON array, e.g. ``open_revs=["rev1","rev2","rev3"]``.
-In this form, it would be possible to request a revision which is now missing,
-because the database has been compacted.
-
-.. note::
-  The order of revisions returned by ``open_revs=all`` is **NOT** related to
-  the deterministic "winning" algorithm. In the above example, the winning
-  revision is 2-b91b... and happens to be returned last, but in other cases it
-  can be returned in a different position.
-
-Once you have retrieved all the conflicting revisions, your application can then
-choose to display them all to the user. Or it could attempt to merge them, write
-back the merged version, and delete the conflicting versions - that is, to
-resolve the conflict permanently.
-
-As described above, you need to update one revision and delete all the
-conflicting revisions explicitly. This can be done using a single `POST` to
-``_bulk_docs``, setting ``"_deleted":true`` on those revisions you wish to
-delete.
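-
-As a sketch (in Ruby; the helper name is invented), the body of that single
-``_bulk_docs`` POST can be built like this:
-
-.. code-block:: ruby
-
-  # One _bulk_docs body that resolves a conflict: the merged document
-  # replaces the winning revision, and every other conflicting revision
-  # is marked "_deleted" => true.
-  def resolution_payload(id, merged_body, winner_rev, losing_revs)
-    docs = [merged_body.merge("_id" => id, "_rev" => winner_rev)]
-    losing_revs.each do |rev|
-      docs << {"_id" => id, "_rev" => rev, "_deleted" => true}
-    end
-    {"docs" => docs}
-  end
-
-POSTing the result to ``/db/_bulk_docs`` updates one branch and deletes the
-other leaf revisions in a single request.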
-
-Multiple document API
-=====================
-
-You can fetch multiple documents at once using ``include_docs=true`` on a view.
-However, the ``conflicts=true`` parameter is ignored here; the "doc" part of the
-value never includes a ``_conflicts`` member. Hence you would need to do another
-query to determine for each document whether it is in a conflicting state:
-
-.. code-block:: bash
-
-  $ curl 'http://127.0.0.1:5984/conflict_test/_all_docs?include_docs=true&conflicts=true'
-
-.. code-block:: javascript
-
-  {
-    "total_rows":1,
-    "offset":0,
-    "rows":[
-      {
-        "id":"test",
-        "key":"test",
-        "value":{"rev":"2-b91bb807b4685080c6a651115ff558f5"},
-        "doc":{
-          "_id":"test",
-          "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-          "hello":"bar"
-        }
-      }
-    ]
-  }
-
-.. code-block:: bash
-
-  $ curl 'http://127.0.0.1:5984/conflict_test/test?conflicts=true'
-
-.. code-block:: javascript
-
-  {
-    "_id":"test",
-    "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-    "hello":"bar",
-    "_conflicts":[
-      "2-65db2a11b5172bf928e3bcf59f728970",
-      "2-5bc3c6319edf62d4c624277fdd0ae191"
-    ]
-  }
-
-View map functions
-==================
-
-Views only get the winning revision of a document. However they do also get a
-``_conflicts`` member if there are any conflicting revisions. This means you can
-write a view whose job is specifically to locate documents with conflicts.
-Here is a simple map function which achieves this:
-
-.. code-block:: javascript
-
-  function(doc) {
-    if (doc._conflicts) {
-      emit(null, [doc._rev].concat(doc._conflicts));
-    }
-  }
-
-which gives the following output:
-
-.. code-block:: javascript
-
-  {
-    "total_rows":1,
-    "offset":0,
-    "rows":[
-      {
-        "id":"test",
-        "key":null,
-        "value":[
-          "2-b91bb807b4685080c6a651115ff558f5",
-          "2-65db2a11b5172bf928e3bcf59f728970",
-          "2-5bc3c6319edf62d4c624277fdd0ae191"
-        ]
-      }
-    ]
-  }
-
-If you do this, you can have a separate "sweep" process which periodically scans
-your database, looks for documents which have conflicts, fetches the conflicting
-revisions, and resolves them.
-
-Whilst this keeps the main application simple, the problem with this approach is
-that there will be a window between a conflict being introduced and it being
-resolved. From a user's viewpoint, it may appear that the document they just
-saved successfully has suddenly lost its changes, only for them to be
-resurrected some time later. This may or may not be acceptable.
-
-Also, it's easy to forget to start the sweeper, or to implement it incorrectly,
-and this will introduce odd behaviour which will be hard to track down.
-
-CouchDB's "winning" revision algorithm may mean that information drops out of a
-view until a conflict has been resolved. Consider Bob's business card again;
-suppose Alice has a view which emits mobile numbers, so that her telephony
-application can display the caller's name based on caller ID. If there are
-conflicting documents with Bob's old and new mobile numbers, and they happen to
-be resolved in favour of Bob's old number, then the view won't be able to
-recognise his new one. In this particular case, the application might have
-preferred to put information from both the conflicting documents into the view,
-but this currently isn't possible.
-
-Suggested algorithm to fetch a document with conflict resolution:
-
-#. Get document via ``GET docid?conflicts=true`` request;
-#. For each member in the ``_conflicts`` array call ``GET docid?rev=xxx``.
-   If any errors occur at this stage, restart from step 1.
-   (There could be a race where someone else has already resolved this conflict
-   and deleted that rev)
-#. Perform application-specific merging
-#. Write ``_bulk_docs`` with an update to the first rev and deletes of the other
-   revs.
-
-This could either be done on every read (in which case you could replace all
-calls to GET in your application with calls to a library which does the above),
-or as part of your sweeper code.
-
-And here is an example of this in Ruby using the low-level `RestClient`_:
-
-.. _RestClient: https://rubygems.org/gems/rest-client
-
-.. code-block:: ruby
-
-  require 'rubygems'
-  require 'rest_client'
-  require 'json'
-  DB="http://127.0.0.1:5984/conflict_test"
-
-  # Write multiple documents as all_or_nothing, can introduce conflicts
-  def writem(docs)
-    JSON.parse(RestClient.post("#{DB}/_bulk_docs", {
-      "all_or_nothing" => true,
-      "docs" => docs,
-    }.to_json))
-  end
-
-  # Write one document, return the rev
-  def write1(doc, id=nil, rev=nil)
-    doc['_id'] = id if id
-    doc['_rev'] = rev if rev
-    writem([doc]).first['rev']
-  end
-
-  # Read a document, return *all* revs
-  def read1(id)
-    retries = 0
-    loop do
-      # FIXME: escape id
-      res = [JSON.parse(RestClient.get("#{DB}/#{id}?conflicts=true"))]
-      if revs = res.first.delete('_conflicts')
-        begin
-          revs.each do |rev|
-            res << JSON.parse(RestClient.get("#{DB}/#{id}?rev=#{rev}"))
-          end
-        rescue
-          retries += 1
-          raise if retries >= 5
-          next
-        end
-      end
-      return res
-    end
-  end
-
-  # Create DB
-  RestClient.delete DB rescue nil
-  RestClient.put DB, {}.to_json
-
-  # Write a document
-  rev1 = write1({"hello"=>"xxx"},"test")
-  p read1("test")
-
-  # Make three conflicting versions
-  write1({"hello"=>"foo"},"test",rev1)
-  write1({"hello"=>"bar"},"test",rev1)
-  write1({"hello"=>"baz"},"test",rev1)
-
-  res = read1("test")
-  p res
-
-  # Now let's replace these three with one
-  res.first['hello'] = "foo+bar+baz"
-  res.each_with_index do |r,i|
-    unless i == 0
-      r.replace({'_id'=>r['_id'], '_rev'=>r['_rev'], '_deleted'=>true})
-    end
-  end
-  writem(res)
-
-  p read1("test")
-
-An application written this way never has to deal with a ``PUT 409``, and is
-automatically multi-master capable.
-
-You can see that it's straightforward enough when you know what you're doing.
-It's just that CouchDB doesn't currently provide a convenient HTTP API for
-"fetch all conflicting revisions", nor "PUT to supersede these N revisions", so
-you need to wrap these yourself. I also don't know of any client-side libraries
-which provide support for this.
-
-Merging and revision history
-============================
-
-Actually performing the merge is an application-specific function. It depends
-on the structure of your data. Sometimes it will be easy: e.g. if a document
-contains a list which is only ever appended to, then you can perform a union of
-the two list versions.
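-
-For instance (a Ruby sketch, with an invented ``tags`` field), such a merge is
-just a union that keeps first-seen order:
-
-.. code-block:: ruby
-
-  # Union of an append-only list field across all conflicting versions
-  def merge_append_only(conflicting_docs, field)
-    conflicting_docs.flat_map { |d| d.fetch(field, []) }.uniq
-  end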
-
-Some merge strategies look at the changes made to an object, compared to its
-previous version. This is how git's merge function works.
-
-For example, to merge Bob's business card versions v2a and v2b, you could look
-at the differences between v1 and v2b, and then apply these changes to v2a as
-well.
-
-With CouchDB, you can sometimes get hold of old revisions of a document.
-For example, if you fetch ``/db/bob?rev=v2b&revs_info=true`` you'll get a list
-of the previous revision ids which ended up with revision v2b. Doing the same
-for v2a you can find their common ancestor revision. However if the database
-has been compacted, the content of that document revision will have been lost.
-``revs_info`` will still show that v1 was an ancestor, but report it as
-"missing"::
-
-  BEFORE COMPACTION           AFTER COMPACTION
-
-       ,-> v2a                     v2a
-     v1
-       `-> v2b                     v2b
-
-So if you want to work with diffs, the recommended way is to store those diffs
-within the new revision itself. That is: when you replace v1 with v2a, include
-an extra field or attachment in v2a which says which fields were changed from
-v1 to v2a. This unfortunately does mean additional book-keeping for your
-application.
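-
-That book-keeping might look like this (a Ruby sketch; documents and field
-names are illustrative):
-
-.. code-block:: ruby
-
-  # Record which non-underscore fields changed between a parent revision
-  # and its replacement, so the diff can be stored inside the new revision.
-  def field_diff(old_doc, new_doc)
-    keys = (old_doc.keys | new_doc.keys).reject { |k| k.start_with?("_") }
-    keys.each_with_object({}) do |k, diff|
-      diff[k] = {"old" => old_doc[k], "new" => new_doc[k]} if old_doc[k] != new_doc[k]
-    end
-  end
-
-Storing ``field_diff(v1, v2a)`` inside v2a means that, even after compaction has
-discarded v1's body, the changes from v1 to v2a are still known.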
-
-Comparison with other replicating data stores
-=============================================
-
-The same issues arise with other replicating systems, so it can be instructive
-to look at these and see how they compare with CouchDB. Please feel free to add
-other examples.
-
-Unison
-------
-
-`Unison`_ is a bi-directional file synchronisation tool. In this case, the
-business card would be a file, say `bob.vcf`.
-
-.. _Unison: http://www.cis.upenn.edu/~bcpierce/unison/
-
-When you run unison, changes propagate both ways. If a file has changed on one
-side but not the other, the new replaces the old. Unison maintains a local state
-file so that it knows whether a file has changed since the last successful
-replication.
-
-In our example it has changed on both sides. Only one file called `bob.vcf`
-can exist within the filesystem. Unison solves the problem by simply ducking
-out: the user can choose to replace the remote version with the local version,
-or vice versa (both of which would lose data), but the default action is to
-leave both sides unchanged.
-
-From Alice's point of view, at least this is a simple solution. Whenever she's
-on the desktop she'll see the version she last edited on the desktop, and
-whenever she's on the laptop she'll see the version she last edited there.
-
-But because no replication has actually taken place, the data is not protected.
-If her laptop hard drive dies, she'll lose all her changes made on the laptop;
-ditto if her desktop hard drive dies.
-
-It's up to her to copy across one of the versions manually (under a different
-filename), merge the two, and then finally push the merged version to the other
-side.
-
-Note also that the original file (version v1) has been lost by this point.
-So it's not going to be known from inspection alone which of v2a and v2b has the
-most up-to-date E-mail address for Bob, and which has the most up-to-date mobile
-number. Alice has to remember which she entered last.
-
-
-Git
-----
-
-`Git`_ is a well-known distributed source control system. Like Unison, git deals
-with files. However, git considers the state of a whole set of files as a single
-object, the "tree". Whenever you save an update, you create a "commit" which
-points to both the updated tree and the previous commit(s), which in turn point
-to the previous tree(s). You therefore have a full history of all the states of
-the files. This forms a branch, and a pointer is kept to the tip of the branch,
-from which you can work backwards to any previous state. The "pointer" is
-actually an SHA1 hash of the tip commit.
-
-.. _Git: http://git-scm.com/
-
-If you are replicating with one or more peers, a separate branch is made for
-each of the peers. For example, you might have::
-
-    master               -- my local branch
-    remotes/foo/master   -- branch on peer 'foo'
-    remotes/bar/master   -- branch on peer 'bar'
-
-In the normal way of working, replication is a "pull", importing changes from
-a remote peer into the local repository. A "pull" does two things: first "fetch"
-the state of the peer into the remote tracking branch for that peer; and then
-attempt to "merge" those changes into the local branch.
-
-Now let's consider the business card. Alice has created a git repo containing
-``bob.vcf``, and cloned it across to the other machine. The branches look like
-this, where ``AAAAAAAA`` is the SHA1 of the commit::
-
-  ---------- desktop ----------           ---------- laptop ----------
-  master: AAAAAAAA                        master: AAAAAAAA
-  remotes/laptop/master: AAAAAAAA         remotes/desktop/master: AAAAAAAA
-
-Now she makes a change on the desktop, and commits it into the desktop repo;
-then she makes a different change on the laptop, and commits it into the laptop
-repo::
-
-  ---------- desktop ----------           ---------- laptop ----------
-  master: BBBBBBBB                        master: CCCCCCCC
-  remotes/laptop/master: AAAAAAAA         remotes/desktop/master: AAAAAAAA
-
-Now on the desktop she does ``git pull laptop``. Firstly, the remote objects
-are copied across into the local repo and the remote tracking branch is
-updated::
-
-  ---------- desktop ----------           ---------- laptop ----------
-  master: BBBBBBBB                        master: CCCCCCCC
-  remotes/laptop/master: CCCCCCCC         remotes/desktop/master: AAAAAAAA
-
-.. note::
-  The repo still contains ``AAAAAAAA`` because commits ``BBBBBBBB`` and
-  ``CCCCCCCC`` point to it.
-
-Then git will attempt to merge the changes in. It can do this because it knows
-the parent commit to ``CCCCCCCC`` is ``AAAAAAAA``, so it takes a diff between
-``AAAAAAAA`` and ``CCCCCCCC`` and tries to apply it to ``BBBBBBBB``.
-
-If this is successful, then you'll get a new version with a merge commit::
-
-  ---------- desktop ----------           ---------- laptop ----------
-  master: DDDDDDDD                        master: CCCCCCCC
-  remotes/laptop/master: CCCCCCCC         remotes/desktop/master: AAAAAAAA
-
-Then Alice has to logon to the laptop and run ``git pull desktop``. A similar
-process occurs. The remote tracking branch is updated::
-
-  ---------- desktop ----------           ---------- laptop ----------
-  master: DDDDDDDD                        master: CCCCCCCC
-  remotes/laptop/master: CCCCCCCC         remotes/desktop/master: DDDDDDDD
-
-Then a merge takes place. This is a special case: ``CCCCCCCC`` is one of the
-parent commits of ``DDDDDDDD``, so the laptop can `fast-forward` update from
-``CCCCCCCC`` to ``DDDDDDDD`` directly without having to do any complex merging.
-This leaves the final state as::
-
-  ---------- desktop ----------           ---------- laptop ----------
-  master: DDDDDDDD                        master: DDDDDDDD
-  remotes/laptop/master: CCCCCCCC         remotes/desktop/master: DDDDDDDD
-
-Now this is all well and good, but you may wonder how this is relevant when
-thinking about CouchDB.
-
-Firstly, note what happens in the case when the merge algorithm fails.
-The changes are still propagated from the remote repo into the local one, and
-are available in the remote tracking branch; so unlike Unison, you know the data
-is protected. It's just that the local working copy may fail to update, or may
-diverge from the remote version. It's up to you to create and commit the
-combined version yourself, but you are guaranteed to have all the history you
-might need to do this.
-
-Note that whilst it's possible to build new merge algorithms into Git,
-the standard ones are focused on line-based changes to source code. They don't
-work well for XML or JSON if it's presented without any line breaks.
-
-The other interesting consideration is multiple peers. In this case you have
-multiple remote tracking branches, some of which may match your local branch,
-some of which may be behind you, and some of which may be ahead of you
-(i.e. contain changes that you haven't yet merged)::
-
-  master: AAAAAAAA
-  remotes/foo/master: BBBBBBBB
-  remotes/bar/master: CCCCCCCC
-  remotes/baz/master: AAAAAAAA
-
-Note that each peer is explicitly tracked, and therefore has to be explicitly
-created. If a peer becomes stale or is no longer needed, it's up to you to
-remove it from your configuration and delete the remote tracking branch.
-This is different to CouchDB, which doesn't keep any peer state in the database.
-
-Another difference with git is that it maintains all history back to time
-zero - git compaction keeps diffs between all those versions in order to reduce
-size, but CouchDB discards them. If you are constantly updating a document,
-the size of a git repo would grow forever. It is possible (with some effort)
-to use "history rewriting" to make git forget commits earlier than a particular
-one.
-
-
-.. _replication/conflicts/git:
-
-What is the CouchDB replication protocol? Is it like Git?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-:Author: Jason Smith
-:Date: 2011-01-29
-:Source: http://stackoverflow.com/questions/4766391/what-is-the-couchdb-replication-protocol-is-it-like-git
-
-**Key points**
-
-**If you know Git, then you know how Couch replication works.** Replicating is
-*very* similar to pushing or pulling with distributed source managers like Git.
-
-**CouchDB replication does not have its own protocol.** A replicator simply
-connects to two DBs as a client, then reads from one and writes to the other.
-Push replication is reading the local data and updating the remote DB;
-pull replication is vice versa.
-
-* **Fun fact 1**: The replicator is actually an independent Erlang application,
-  in its own process. It connects to both couches, then reads records from one
-  and writes them to the other.
-* **Fun fact 2**: CouchDB has no way of knowing who is a normal client and who
-  is a replicator (let alone whether the replication is push or pull).
-  It all looks like client connections. Some of them read records. Some of them
-  write records.
-
-**Everything flows from the data model**
-
-The replication algorithm is trivial, uninteresting. A trained monkey could
-design it. It's simple because the cleverness is the data model, which has these
-useful characteristics:
-
-#. Every record in CouchDB is completely independent of all others. That sucks
-   if you want to do a JOIN or a transaction, but it's awesome if you want to
-   write a replicator. Just figure out how to replicate one record, and then
-   repeat that for each record.
-#. Like Git, records have a linked-list revision history. A record's revision ID
-   is the checksum of its own data. Subsequent revision IDs are checksums of
-   the new data plus the revision ID of the previous revision.
-
-#. In addition to application data (``{"name": "Jason", "awesome": true}``),
-   every record stores the evolutionary timeline of all previous revision IDs
-   leading up to itself.
-
-   - Exercise: Take a moment of quiet reflection. Consider any two different
-     records, A and B. If A's revision ID appears in B's timeline, then B
-     definitely evolved from A. Now consider Git's fast-forward merges.
-     Do you hear that? That is the sound of your mind being blown.
-
-#. Git isn't really a linear list. It has forks, when one parent has multiple
-   children. CouchDB has that too.
-
-   - Exercise: Compare two different records, A and B. A's revision ID does not
-     appear in B's timeline; however, one revision ID, C, is in both A's and B's
-     timeline. Thus A didn't evolve from B. B didn't evolve from A. But rather,
-     A and B have a common ancestor C. In Git, that is a "fork." In CouchDB,
-     it's a "conflict."
-
-   - In Git, if both children go on to develop their timelines independently,
-     that's cool. Forks totally support that.
-   - In CouchDB, if both children go on to develop their timelines
-     independently, that's cool too. Conflicts totally support that.
-   - **Fun fact 3**: CouchDB "conflicts" do not correspond to Git "conflicts."
-     A Couch conflict is a divergent revision history, what Git calls a "fork."
-     For this reason the CouchDB community pronounces "conflict" with a silent
-     `n`: "co-flicked."
-
-#. Git also has merges, when one child has multiple parents. CouchDB *sort of*
-   has that too.
-
-   - **In the data model, there is no merge.** The client simply marks one
-     timeline as deleted and continues to work with the only extant timeline.
-   - **In the application, it feels like a merge.** Typically, the client merges
-     the *data* from each timeline in an application-specific way.
-     Then it writes the new data to the timeline. In Git, this is like copying
-     and pasting the changes from branch A into branch B, then committing to
-     branch B and deleting branch A. The data was merged, but there was no
-     `git merge`.
-   - These behaviors are different because, in Git, the timeline itself is
-     important; but in CouchDB, the data is important and the timeline is
-     incidental - it's just there to support replication. That is one reason why
-     CouchDB's built-in revisioning is inappropriate for storing revision data
-     like a wiki page.
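-
-The ancestry reasoning in the exercises above can be sketched in Ruby
-(timelines are oldest-to-newest lists of revision IDs; the helper is invented
-for illustration):
-
-.. code-block:: ruby
-
-  # Classify two records by their revision timelines, as in the exercises:
-  # evolution if one tip appears in the other's history, a conflict if
-  # they merely share a common ancestor.
-  def relationship(a_timeline, b_timeline)
-    return :b_evolved_from_a if b_timeline.include?(a_timeline.last)
-    return :a_evolved_from_b if a_timeline.include?(b_timeline.last)
-    (a_timeline & b_timeline).empty? ? :unrelated : :conflict
-  end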
-
-**Final notes**
-
-At least one sentence in this writeup (possibly this one) is complete BS.
-

http://git-wip-us.apache.org/repos/asf/couchdb/blob/cdac7299/share/doc/src/replication/index.rst
----------------------------------------------------------------------
diff --git a/share/doc/src/replication/index.rst b/share/doc/src/replication/index.rst
deleted file mode 100644
index 637ce31..0000000
--- a/share/doc/src/replication/index.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replication:
-
-===========
-Replication
-===========
-
-Replication is an incremental, one-way process involving two databases
-(a source and a destination).
-
-The aim of replication is that, at the end of the process, all active
-documents on the source database are also in the destination database, and all
-documents that were deleted in the source database are also deleted (if they
-existed) on the destination database.
-
-The replication process only copies the last revision of a document, so all
-previous revisions that were only on the source database are not copied to the
-destination database.
-
-.. toctree::
-   :maxdepth: 2
-
-   intro
-   protocol
-   replicator
-   conflicts

http://git-wip-us.apache.org/repos/asf/couchdb/blob/cdac7299/share/doc/src/replication/intro.rst
----------------------------------------------------------------------
diff --git a/share/doc/src/replication/intro.rst b/share/doc/src/replication/intro.rst
deleted file mode 100644
index 2d09617..0000000
--- a/share/doc/src/replication/intro.rst
+++ /dev/null
@@ -1,95 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replication/intro:
-
-Introduction to Replication
-===========================
-
-One of CouchDB's strengths is the ability to synchronize two copies of the same
-database. This enables users to distribute data across several nodes or
-datacenters, but also to move data more closely to clients.
-
-Replication involves a source and a destination database, which can be on the
-same or on different CouchDB instances. The aim of the replication is that at
-the end of the process, all active documents on the source database are also in
-the destination database and all documents that were deleted in the source
-database are also deleted on the destination database (if they even existed).
-
-
-Triggering Replication
-----------------------
-
-Replication is controlled through documents in the :ref:`_replicator <replicator>`
-database, where each document describes one replication process (see
-:ref:`replication-settings`).
-
-A replication is triggered by storing a replication document in the replicator
-database. Its status can be inspected through the active tasks API (see
-:ref:`api/server/active_tasks` and :ref:`replication-status`). A replication can be
-stopped by deleting the document, or by updating it with its `cancel` property
-set to `true`.
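-
-As a sketch (Ruby; only a few of the fields from :ref:`replication-settings`
-are shown), such a replication document can be built like this:
-
-.. code-block:: ruby
-
-  # Minimal replication document: store the result in the _replicator
-  # database to trigger the replication; add "cancel" => true to stop it.
-  def replication_doc(source, target, continuous = false, cancel = false)
-    doc = {"source" => source, "target" => target}
-    doc["continuous"] = true if continuous
-    doc["cancel"] = true if cancel
-    doc
-  end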
-
-
-Replication Procedure
----------------------
-
-During replication, CouchDB will compare the source and the destination
-database to determine which documents differ between them. It does so by
-following the :ref:`changes` feed on the source and comparing the documents
-to those on the destination. Changes are submitted to the
-destination in batches where they can introduce conflicts. Documents that
-already exist on the destination in the same revision are not transferred. As
-the deletion of documents is represented by a new revision, a document deleted
-on the source will also be deleted on the target.
-
-A replication task will finish once it reaches the end of the changes feed. If
-its `continuous` property is set to true, it will wait for new changes to
-appear until the task is cancelled. Replication tasks also create checkpoint
-documents on the destination to ensure that a restarted task can continue from
-where it stopped, for example after it has crashed.
-
-When a replication task is initiated on the sending node, it is called *push*
-replication; if it is initiated by the receiving node, it is called *pull*
-replication.
-
-
-Master - Master replication
----------------------------
-
-One replication task will only transfer changes in one direction. To achieve
-master-master replication, it is possible to set up two replication tasks in
-opposite directions. When a change is replicated from database A to B by the
-first task, the second task, from B to A, will discover that the new change on
-B already exists in A and will wait for further changes.
-
-
-Controlling which Documents to Replicate
-----------------------------------------
-
-There are two ways of controlling which documents are replicated and which
-are skipped. *Local* documents are never replicated (see :ref:`api/local`).
-
-Additionally, a :ref:`filterfun` can be used in a replication (see
-:ref:`replication-settings`). The replication task will then evaluate
-the filter function for each document in the changes feed. The document will
-only be replicated if the filter returns `true`.
-
-
-Migrating Data to Clients
--------------------------
-
-Replication can be especially useful for bringing data closer to clients.
-`PouchDB <http://pouchdb.com/>`_ implements CouchDB's replication algorithm
-in JavaScript, making it possible to use data from a CouchDB database
-in an offline browser application and to synchronize changes back to
-CouchDB.

http://git-wip-us.apache.org/repos/asf/couchdb/blob/cdac7299/share/doc/src/replication/protocol.rst
----------------------------------------------------------------------
diff --git a/share/doc/src/replication/protocol.rst b/share/doc/src/replication/protocol.rst
deleted file mode 100644
index 0f6fdfd..0000000
--- a/share/doc/src/replication/protocol.rst
+++ /dev/null
@@ -1,202 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replication/protocol:
-
-============================
-CouchDB Replication Protocol
-============================
-
-The **CouchDB Replication protocol** is a protocol for synchronizing
-documents between two peers over HTTP/1.1.
-
-Language
---------
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in :rfc:`2119`.
-
-
-Goals
------
-
-The CouchDB Replication protocol is a protocol for synchronizing
-documents between two peers over HTTP/1.1.
-
-In theory, the CouchDB protocol can be used between any products that
-implement it. However, the reference implementation, written in Erlang_, is
-provided by the couch_replicator_ module available in Apache CouchDB.
-
-
-The CouchDB_ replication protocol uses the `CouchDB REST API
-<http://wiki.apache.org/couchdb/Reference>`_ and so is based on HTTP and
-the Apache CouchDB MVCC data model. The primary goal of this
-specification is to describe the CouchDB replication algorithm.
-
-
-Definitions
------------
-
-ID
-    An identifier (possibly a UUID), as described in :rfc:`4122`
-
-Sequence
-    An ID provided by the changes feed. It may be numeric, but is not
-    necessarily so.
-
-Revision
-    (to define)
-
-Document
-    A document is a JSON entity with a unique ID and revision.
-
-Database
-    A collection of documents with a unique URI
-
-URI
-    A URI as defined in :rfc:`2396`. It can be a URL as defined
-    in :rfc:`1738`.
-
-Source
-    The database from which the Documents are replicated
-
-Target
-    The database to which the Documents are replicated
-
-Checkpoint
-    The last source sequence ID
-
-Algorithm
----------
-
-1. Get unique identifiers for the Source and Target based on their URIs if
-   a replication task ID is not available.
-
-2. Save this identifier in a special Document named `_local/<uniqueid>`
-   on the Target database. This document isn't replicated. It will
-   collect the last Source sequence ID, the Checkpoint, from the
-   previous replication process.
-
-3. Get the Source changes feed by passing it the Checkpoint using the
-   `since` parameter, by calling the `/<source>/_changes` URL. The
-   changes feed only returns a list of current revisions.
-
-
-.. note::
-
-    This step can be done continuously using the `feed=longpoll` or
-    `feed=continuous` parameters. The feed will then deliver changes
-    continuously as they occur.
-
-
-4. Collect a group of Document/Revision ID pairs from the **changes
-   feed** and send them to the target database on the
-   `/<target>/_revs_diff` URL. The result will contain the list of
-   revisions that are **NOT** in the Target.
-
-5. GET each missing revision from the source Database by calling the URL
-   `/<source>/<docid>?revs=true&open_revs=<revision>`. This
-   will get the document with its parent revisions. Also don't forget to
-   get attachments that aren't already stored at the target. As an
-   optimisation, you can use the HTTP multipart API to get them all at once.
-
-6. Collect a group of revisions fetched in the previous step and store them
-   in the target database using the `Bulk Docs
-   <http://wiki.apache.org/couchdb/HTTP_Document_API#Bulk_Docs>`_ API
-   with the `new_edits: false` JSON property to preserve their revision
-   IDs.
-
-7. After the group of revisions is stored on the Target, save
-   the new Checkpoint on the Source.
-
-
-.. note::
-
-    - Even if some revisions have been ignored, the sequence should be
-      taken into consideration for the Checkpoint.
-
-    - To compare non-numeric sequence ordering, you will have to keep an
-      ordered list of the sequence IDs as they appear in the `_changes`
-      feed and compare their indices.
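
The second point of the note can be sketched as follows. This helper is purely illustrative and not part of any CouchDB API; it assumes the replicator keeps the ordered list of sequence IDs it has seen:

```javascript
// Compare two opaque (possibly non-numeric) sequence IDs by their
// position in the order they appeared in the _changes feed.
// `seenSeqs` is the ordered list of sequence IDs collected so far.
function compareSeqs(seenSeqs, a, b) {
  var ia = seenSeqs.indexOf(a);
  var ib = seenSeqs.indexOf(b);
  if (ia === -1 || ib === -1) {
    throw new Error("unknown sequence ID");
  }
  return ia - ib; // negative: a came earlier; positive: a came later
}
```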
-
-Filter replication
-------------------
-
-The replication can be filtered by passing the `filter` parameter to the
-changes feed with a function name. The named function is called for each
-change; if it returns `true`, the document is added to the
-feed.
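
A filter function is an ordinary CouchDB JavaScript function that receives the document and the request object and returns a boolean. A minimal sketch (the ``type`` field is an assumed application-level convention, not part of the protocol):

```javascript
// Filter function as it would be stored in a design document under
// `filters`. Only documents whose `type` field is "mail" pass through
// the changes feed.
function mailFilter(doc, req) {
  return doc.type === "mail";
}
```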
-
-
-Optimisations
--------------
-
-- The system should run each step in parallel to reduce latency.
-
-- The number of revisions passed to steps 3 and 6 should be large
-  enough to reduce bandwidth overhead and latency.
-
-
-API Reference
--------------
-
-- :head:`/{db}` -- Check Database existence
-- :post:`/{db}/_ensure_full_commit` -- Ensure that all changes are stored
-  on disk
-- :get:`/{db}/_local/{id}` -- Read the last Checkpoint
-- :put:`/{db}/_local/{id}` -- Save a new Checkpoint
-
-Push Only
-~~~~~~~~~
-
-- :put:`/{db}` -- Create the Target if it does not exist and the option
-  was provided
-- :post:`/{db}/_revs_diff` -- Locate Revisions that are not known to the
-  Target
-- :post:`/{db}/_bulk_docs` -- Upload Revisions to the Target
-- :put:`/{db}/{docid}` (with ``new_edits=false``) -- Upload a single Document
-  with attachments to the Target
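
The ``_revs_diff`` exchange listed above can be sketched locally: given the revisions the Target already knows, compute which of the offered revisions are missing. This is an illustrative re-implementation of the response shape, not the server code:

```javascript
// `offered` maps docid -> array of revision IDs from the changes feed.
// `known` maps docid -> array of revision IDs the Target already has.
// Returns docid -> { missing: [...] }, mirroring the _revs_diff response.
function revsDiff(offered, known) {
  var result = {};
  Object.keys(offered).forEach(function (docid) {
    var have = known[docid] || [];
    var missing = offered[docid].filter(function (rev) {
      return have.indexOf(rev) === -1;
    });
    if (missing.length > 0) {
      result[docid] = { missing: missing };
    }
  });
  return result;
}
```

Only the revisions reported as missing need to be fetched from the Source and uploaded to the Target.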
-
-Pull Only
-~~~~~~~~~
-
-- :get:`/{db}/_changes` -- Locate changes on the Source since the last pull.
-  The request uses the following query parameters:
-
-  - ``style=all_docs``
-  - ``feed=feed``, where feed is :ref:`normal <changes/normal>` or
-    :ref:`longpoll <changes/longpoll>`
-  - ``limit=limit``
-  - ``heartbeat=heartbeat``
-
-- :get:`/{db}/{docid}` -- Retrieve a single Document from the Source with
-  attachments. The request uses the following query parameters:
-
-  - ``open_revs=revid`` - where ``revid`` is the actual Document Revision at the
-    moment of the pull request
-  - ``revs=true``
-  - ``atts_since=lastrev``
-
-Reference
----------
-
-* `TouchDB iOS wiki <https://github.com/couchbaselabs/TouchDB-iOS/wiki/Replication-Algorithm>`_
-* `CouchDB documentation
-  <http://wiki.apache.org/couchdb/Replication>`_
-* CouchDB `change notifications`_
-
-.. _CouchDB: http://couchdb.apache.org
-.. _Erlang: http://erlang.org
-.. _couch_replicator: https://github.com/apache/couchdb/tree/master/src/couch_replicator
-.. _change notifications: http://guide.couchdb.org/draft/notifications.html
-

http://git-wip-us.apache.org/repos/asf/couchdb/blob/cdac7299/share/doc/src/replication/replicator.rst
----------------------------------------------------------------------
diff --git a/share/doc/src/replication/replicator.rst b/share/doc/src/replication/replicator.rst
deleted file mode 100644
index 347b5a5..0000000
--- a/share/doc/src/replication/replicator.rst
+++ /dev/null
@@ -1,403 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replicator:
-
-Replicator Database
-===================
-
-The ``_replicator`` database works like any other in CouchDB, but documents
-added to it will trigger replications. Create (``PUT`` or ``POST``) a
-document to start a replication. ``DELETE`` a replication document to
-cancel an ongoing replication.
-
-These documents have exactly the same content as the JSON objects we use to
-``POST`` to ``_replicate`` (fields ``source``, ``target``, ``create_target``,
-``continuous``, ``doc_ids``, ``filter``, ``query_params``, ``use_checkpoints``,
-``checkpoint_interval``).
-
-Replication documents can have a user defined ``_id`` (handy for finding a
-specific replication request later). Design Documents
-(and ``_local`` documents) added to the replicator database are ignored.
-
-The default name of this database is ``_replicator``. The name can be
-changed in the ``local.ini`` configuration, section ``[replicator]``,
-parameter ``db``.
-
-Basics
-------
-
-Let's say you POST the following document into ``_replicator``:
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep",
-        "source":  "http://myserver.com:5984/foo",
-        "target":  "bar",
-        "create_target":  true
-    }
-
-In the couch log you'll see two entries like these:
-
-.. code-block:: text
-
-    [Thu, 17 Feb 2011 19:43:59 GMT] [info] [<0.291.0>] Document `my_rep` triggered replication `c0ebe9256695ff083347cbf95f93e280+create_target`
-    [Thu, 17 Feb 2011 19:44:37 GMT] [info] [<0.124.0>] Replication `c0ebe9256695ff083347cbf95f93e280+create_target` finished (triggered by document `my_rep`)
-
-As soon as the replication is triggered, the document will be updated by
-CouchDB with 3 new fields:
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep",
-        "source":  "http://myserver.com:5984/foo",
-        "target":  "bar",
-        "create_target":  true,
-        "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
-        "_replication_state":  "triggered",
-        "_replication_state_time":  1297974122
-    }
-
-Special fields set by the replicator start with the prefix
-``_replication_``.
-
--  ``_replication_id``
-
-   The ID internally assigned to the replication. This is also the ID
-   exposed by ``/_active_tasks``.
-
--  ``_replication_state``
-
-   The current state of the replication.
-
--  ``_replication_state_time``
-
-   A Unix timestamp (number of seconds since 1 Jan 1970) that tells us
-   when the current replication state (marked in ``_replication_state``)
-   was set.
-
--  ``_replication_state_reason``
-
-   If ``_replication_state`` is ``error``, this field contains the reason.
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep",
-        "_rev": "2-9f2c0d9372f4ee4dc75652ab8f8e7c70",
-        "source": "foodb",
-        "target": "bardb",
-        "_replication_state": "error",
-        "_replication_state_time": "2013-12-13T18:48:00+01:00",
-        "_replication_state_reason": "db_not_found: could not open foodb",
-        "_replication_id": "fe965cdc47b4d5f6c02811d9d351ac3d"
-    }
-
-When the replication finishes, it will update the ``_replication_state``
-field (and ``_replication_state_time``) with the value ``completed``, so
-the document will look like:
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep",
-        "source":  "http://myserver.com:5984/foo",
-        "target":  "bar",
-        "create_target":  true,
-        "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
-        "_replication_state":  "completed",
-        "_replication_state_time":  1297974122
-    }
-
-When an error happens during replication, the ``_replication_state``
-field is set to ``error`` (and ``_replication_state_reason`` and
-``_replication_state_time`` are updated).
-
-When you PUT/POST a document to the ``_replicator`` database, CouchDB
-will attempt to start the replication up to 10 times (configurable under
-``[replicator]``, parameter ``max_replication_retry_count``). If it
-fails on the first attempt, it waits 5 seconds before doing a second
-attempt. If the second attempt fails, it waits 10 seconds before doing a
-third attempt. If the third attempt fails, it waits 20 seconds before
-doing a fourth attempt (each attempt doubles the previous wait period).
-When an attempt fails, the Couch log will show you something like:
-
-.. code-block:: text
-
-    [error] [<0.149.0>] Error starting replication `67c1bb92010e7abe35d7d629635f18b6+create_target` (document `my_rep_2`): {db_not_found,<<"could not open http://myserver:5986/foo/">>
-
-.. note::
-   The ``_replication_state`` field is only set to ``error`` when all
-   the attempts were unsuccessful.
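
The wait periods described above (5 seconds before the second attempt, doubling thereafter) can be sketched as a small helper; the function name is made up for illustration:

```javascript
// Wait (in seconds) before retry attempt n, per the doubling scheme
// described above: 5s before the 2nd attempt, 10s before the 3rd,
// 20s before the 4th, and so on.
function retryWaitSeconds(attempt) {
  if (attempt <= 1) return 0;            // first attempt happens immediately
  return 5 * Math.pow(2, attempt - 2);   // 5, 10, 20, 40, ...
}
```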
-
-There are only 3 possible values for the ``_replication_state`` field:
-``triggered``, ``completed`` and ``error``. Continuous replications
-never get their state set to ``completed``.
-
-Documents describing the same replication
------------------------------------------
-
-Let's suppose 2 documents are added to the ``_replicator`` database in
-the following order:
-
-.. code-block:: javascript
-
-    {
-        "_id": "doc_A",
-        "source":  "http://myserver.com:5984/foo",
-        "target":  "bar"
-    }
-
-and
-
-.. code-block:: javascript
-
-    {
-        "_id": "doc_B",
-        "source":  "http://myserver.com:5984/foo",
-        "target":  "bar"
-    }
-
-Both describe exactly the same replication (only their ``_ids`` differ).
-In this case document ``doc_A`` triggers the replication, getting
-updated by CouchDB with the fields ``_replication_state``,
-``_replication_state_time`` and ``_replication_id``, just like it was
-described before. Document ``doc_B`` however, is only updated with one
-field, the ``_replication_id`` so it will look like this:
-
-.. code-block:: javascript
-
-    {
-        "_id": "doc_B",
-        "source":  "http://myserver.com:5984/foo",
-        "target":  "bar",
-        "_replication_id":  "c0ebe9256695ff083347cbf95f93e280"
-    }
-
-While document ``doc_A`` will look like this:
-
-.. code-block:: javascript
-
-    {
-        "_id": "doc_A",
-        "source":  "http://myserver.com:5984/foo",
-        "target":  "bar",
-        "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
-        "_replication_state":  "triggered",
-        "_replication_state_time":  1297974122
-    }
-
-Note that both documents get exactly the same value for the
-``_replication_id`` field. This way you can identify which documents
-refer to the same replication - you can, for example, define a view which
-maps replication IDs to document IDs.
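
Such a view's map function could look like the sketch below. ``emit`` is normally provided by CouchDB's view server; a stand-in is defined here only to make the sketch self-contained:

```javascript
// Stand-in for CouchDB's emit(); collects rows so the sketch can run
// outside the view server.
var rows = [];
function emit(key, value) {
  rows.push({ key: key, value: value });
}

// Map function: index replication IDs -> document IDs, so documents
// describing the same replication can be grouped under one key.
function mapReplications(doc) {
  if (doc._replication_id) {
    emit(doc._replication_id, doc._id);
  }
}
```

Querying this view with a replication ID as the key returns every replicator document that refers to that replication.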
-
-Canceling replications
-----------------------
-
-To cancel a replication simply ``DELETE`` the document which triggered
-the replication. The Couch log will show you an entry like the
-following:
-
-.. code-block:: text
-
-    [Thu, 17 Feb 2011 20:16:29 GMT] [info] [<0.125.0>] Stopped replication `c0ebe9256695ff083347cbf95f93e280+continuous+create_target` because replication document `doc_A` was deleted
-
-.. note::
-   You need to ``DELETE`` the document that triggered the replication.
-   ``DELETE``-ing another document that describes the same replication
-   but did not trigger it, will not cancel the replication.
-
-Server restart
---------------
-
-When CouchDB is restarted, it checks its ``_replicator`` database and
-restarts any replication that is described by a document that either has
-its ``_replication_state`` field set to ``triggered`` or does not yet
-have the ``_replication_state`` field set.
-
-.. note::
-   Continuous replications always have a ``_replication_state`` field
-   with the value ``triggered``, therefore they're always restarted
-   when CouchDB is restarted.
-
-Changing the Replicator Database
---------------------------------
-
-Imagine your replicator database (default name is ``_replicator``) has the
-two following documents that represent pull replications from servers A
-and B:
-
-.. code-block:: javascript
-
-    {
-        "_id": "rep_from_A",
-        "source":  "http://aserver.com:5984/foo",
-        "target":  "foo_a",
-        "continuous":  true,
-        "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
-        "_replication_state":  "triggered",
-        "_replication_state_time":  1297971311
-    }
-
-.. code-block:: javascript
-
-    {
-        "_id": "rep_from_B",
-        "source":  "http://bserver.com:5984/foo",
-        "target":  "foo_b",
-        "continuous":  true,
-        "_replication_id":  "231bb3cf9d48314eaa8d48a9170570d1",
-        "_replication_state":  "triggered",
-        "_replication_state_time":  1297974122
-    }
-
-Now without stopping and restarting CouchDB, you change the name of the
-replicator database to ``another_replicator_db``:
-
-.. code-block:: bash
-
-    $ curl -X PUT http://localhost:5984/_config/replicator/db -d '"another_replicator_db"'
-    "_replicator"
-
-As soon as this is done, both pull replications defined before are
-stopped. This is explicitly mentioned in CouchDB's log:
-
-.. code-block:: text
-
-    [Fri, 11 Mar 2011 07:44:20 GMT] [info] [<0.104.0>] Stopping all ongoing replications because the replicator database was deleted or changed
-    [Fri, 11 Mar 2011 07:44:20 GMT] [info] [<0.127.0>] 127.0.0.1 - - PUT /_config/replicator/db 200
-
-Imagine now you add a replication document to the new replicator
-database named ``another_replicator_db``:
-
-.. code-block:: javascript
-
-    {
-        "_id": "rep_from_X",
-        "source":  "http://xserver.com:5984/foo",
-        "target":  "foo_x",
-        "continuous":  true
-    }
-
-From now on you have a single replication going on in your system: a
-pull replication pulling from server X. Now you change the
-replicator database back to the original one, ``_replicator``:
-
-.. code-block:: bash
-
-    $ curl -X PUT http://localhost:5984/_config/replicator/db -d '"_replicator"'
-    "another_replicator_db"
-
-Immediately after this operation, the replication pulling from server X
-will be stopped and the replications defined in the ``_replicator``
-database (pulling from servers A and B) will be resumed.
-
-Changing the replicator database to ``another_replicator_db`` again will
-stop the pull replications pulling from servers A and B, and resume the
-pull replication pulling from server X.
-
-Replicating the replicator database
------------------------------------
-
-Imagine you have in server C a replicator database with the two
-following pull replication documents in it:
-
-.. code-block:: javascript
-
-    {
-         "_id": "rep_from_A",
-         "source":  "http://aserver.com:5984/foo",
-         "target":  "foo_a",
-         "continuous":  true,
-         "_replication_id":  "c0ebe9256695ff083347cbf95f93e280",
-         "_replication_state":  "triggered",
-         "_replication_state_time":  1297971311
-    }
-
-.. code-block:: javascript
-
-    {
-         "_id": "rep_from_B",
-         "source":  "http://bserver.com:5984/foo",
-         "target":  "foo_b",
-         "continuous":  true,
-         "_replication_id":  "231bb3cf9d48314eaa8d48a9170570d1",
-         "_replication_state":  "triggered",
-         "_replication_state_time":  1297974122
-    }
-
-Now you would like to have the same pull replications going on in server
-D, that is, you would like to have server D pull replicating from
-servers A and B. You have two options:
-
--  Explicitly add two documents to server D's replicator database
-
--  Replicate server C's replicator database into server D's replicator
-   database
-
-Both alternatives accomplish exactly the same goal.
-
-Delegations
------------
-
-Replication documents can have a custom ``user_ctx`` property. This
-property defines the user context under which a replication runs. For
-the old way of triggering a replication (POSTing to ``/_replicate/``),
-this property is not needed, because information about the
-authenticated user is readily available during the replication, which is
-not persistent in that case. With the replicator database, however,
-information about which user is starting a particular
-replication is only present when the replication document is written,
-while the information in the replication document and the replication itself
-are persistent. This implementation detail implies that in the
-case of a non-admin user, a ``user_ctx`` property containing the user's
-name and a subset of their roles must be defined in the replication
-document. This is enforced by the document update validation function
-present in the default design document of the replicator database. The
-validation function also ensures that non-admin users are unable to set
-the value of the user context's ``name`` property to anything other than
-their own user name. The same principle applies for roles.
-
-For admins, the ``user_ctx`` property is optional, and if it's missing
-it defaults to a user context with name ``null`` and an empty list of
-roles, which means design documents won't be written to local targets.
-If writing design documents to local targets is desired, the role
-``_admin`` must be present in the user context's list of roles.
-
-Also, for admins the ``user_ctx`` property can be used to trigger a
-replication on behalf of another user. This is the user context that
-will be passed to local target database document validation functions.
-
-.. note::
-   The ``user_ctx`` property only has effect for local endpoints.
-
-Example delegated replication document:
-
-.. code-block:: javascript
-
-    {
-         "_id": "my_rep",
-         "source":  "http://bserver.com:5984/foo",
-         "target":  "bar",
-         "continuous":  true,
-         "user_ctx": {
-              "name": "joe",
-              "roles": ["erlanger", "researcher"]
-         }
-    }
-
-As stated before, the ``user_ctx`` property is optional for admins, while
-being mandatory for regular (non-admin) users. When the roles property
-of ``user_ctx`` is missing, it defaults to the empty list ``[]``.

