From: confluence@apache.org
To: connectors-commits@incubator.apache.org
Date: Mon, 22 Feb 2010 14:48:00 +0000 (UTC)
Subject: [CONF] Lucene Connector Framework > Lucene Connector Framework concepts

Space: Lucene Connector Framework (http://cwiki.apache.org/confluence/display/CONNECTORS)
Page: Lucene Connector Framework concepts (http://cwiki.apache.org/confluence/display/CONNECTORS/Lucene+Connector+Framework+concepts)

Edited by Karl Wright:
---------------------------------------------------------------------
Lucene Connector Framework is a crawler framework which is designed to meet several key goals:

* It's reliable, and resilient against being shut down or restarted
* It's incremental, meaning that jobs describe a set of documents by some criteria, and are meant to be run again and again to pick up any differences
* It supports connections to multiple kinds of repositories at the same time
* It defines and fully supports a model of document security, so that each document listed in a search result from the back-end search engine is one that the current user is allowed to see
* It operates with reasonable efficiency and throughput
* Its memory usage characteristics are bounded and predictable in advance

LCF meets many of its architectural goals by being implemented on top of a relational database. The current implementation requires PostgreSQL, which is by far the richest open-source database available.

h1. Lucene Connector Framework document model

Each document in LCF consists of some opaque binary data, plus some opaque associated metadata (described by name-value pairs), and is uniquely addressed by a URI. The back-end search engines with which LCF communicates are all expected to support this model, to a greater or lesser degree. Documents may also have access tokens associated with them; these access tokens are described more fully in the next section.
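This document model is essentially a simple data structure. A minimal sketch follows, assuming invented class and field names (this is illustrative only, not the actual LCF API):

{code:java}
// Hypothetical sketch of the document model described above. Class and
// member names are invented for illustration; this is not the actual LCF API.
import java.io.InputStream;
import java.util.List;
import java.util.Map;

public class CrawledDocument {
    private final String uri;                          // unique address of the document
    private final InputStream binaryData;              // opaque binary content
    private final Map<String, List<String>> metadata;  // opaque name-value metadata
    private final List<String> accessTokens;           // connector-specific access tokens

    public CrawledDocument(String uri, InputStream binaryData,
                           Map<String, List<String>> metadata,
                           List<String> accessTokens) {
        this.uri = uri;
        this.binaryData = binaryData;
        this.metadata = metadata;
        this.accessTokens = accessTokens;
    }

    public String getUri() { return uri; }
    public InputStream getBinaryData() { return binaryData; }
    public Map<String, List<String>> getMetadata() { return metadata; }
    public List<String> getAccessTokens() { return accessTokens; }
}
{code}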
h1. Lucene Connector Framework security model

The LCF security model is based loosely on the standard authorization concepts and hierarchies found in Microsoft's Active Directory. Active Directory is quite common in the kinds of environments where data repositories that are ripe for indexing exist. Active Directory's authorization model can also readily be used in a general way to represent authorization for a huge variety of third-party content repositories.

LCF defines the concept of an _access token_. An access token, to LCF, is a string which is meaningful only to a specific connector or connectors. This string describes the ability of a user to view (or not view) some set of documents. For documents protected by Active Directory itself, an access token would be an Active Directory SID (e.g. "S-1-23-4-1-45"). For documents protected by Livelink, on the other hand, a wholly different string would be used.

In the LCF security model, it is the job of an _authority_ to provide a list of access tokens for a given searching user. Multiple authorities cooperate in that each one can add to the list of access tokens describing a given user's security. The resulting access tokens are handed to the search engine as part of every search request, so that the search engine may properly exclude documents that the user is not allowed to see. When documents are indexed, it is therefore the job of the crawler to hand access tokens to the search engine, so that it can categorize the documents properly according to their accessibility. Note that the access tokens so provided are meaningful only within the space of the governing authority.

Access tokens can be provided as "grant" tokens or as "deny" tokens. Finally, there are multiple levels of tokens, which correspond to Active Directory's concepts of "share" security, "directory" security, and "file" security. (The latter concepts are rarely used except for documents that come from Windows or Samba systems.)

Once all these documents and their access tokens are handed to the search engine, it is the search engine's job to enforce security by excluding inappropriate documents from the search results. For Lucene, this infrastructure is expected to be built on top of Lucene's generic metadata abilities, but it has not been implemented at this time.

h1. Lucene Connector Framework conceptual entities

h2. Connectors

LCF defines three different kinds of connectors:

* Authority connectors
* Repository connectors
* Output connectors

All connectors share certain characteristics. First, they are pooled: LCF keeps configured and connected instances of a connector around for a while, and can limit the total number of such instances to some upper bound. Connector implementations have specific methods for managing their existence in the pools that LCF keeps them in. Second, they are configurable. The configuration description for a connector is an XML document, whose precise format is determined by the connector implementation. By common LCF convention, a configured connector instance is called a _connection_.
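As a rough illustration of the pooling, configuration, and authority behavior described above, here is a minimal sketch. All class, method, and configuration names are invented for this example; they are not the actual LCF connector API.

{code:java}
// Hypothetical sketch of a pooled, configurable authority connector.
// All names are invented for illustration; this is not the actual LCF API.
import java.util.Arrays;
import java.util.List;

public class ExampleAuthorityConnector {
    private String serverUrl;    // would come from the connection's XML configuration
    private boolean connected;

    // LCF hands the connector its XML configuration document; the precise
    // format is up to the connector implementation.
    public void configure(String configurationXml) {
        // A real connector would parse configurationXml here.
        this.serverUrl = "https://directory.example.com";  // placeholder value
    }

    // Pool lifecycle: establish and tear down the underlying session so that
    // LCF can keep the instance around and reuse it.
    public void connect()    { this.connected = true; }
    public void disconnect() { this.connected = false; }

    // The authority's core job: map a searching user to the access tokens
    // that are meaningful for this kind of repository.
    public List<String> getAccessTokens(String userName) {
        // A real connector would query the repository's security system;
        // here a fixed "grant" token is returned purely for illustration.
        return Arrays.asList("S-1-23-4-1-45");
    }
}
{code}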
The function of each type of connector is described below.

|| Connector type || Function ||
| Authority connector | Furnishes a standard way of mapping a user name to access tokens that are meaningful for a given type of repository |
| Repository connector | Fetches documents from a specific kind of repository, such as SharePoint or the web |
| Output connector | Pushes document ingestion and deletion requests to a specific kind of back-end search engine or other entity, such as Lucene |

h2. Connections

As described above, a _connection_ is a connector implementation plus connector-specific configuration information. A user can define connections of all three types in the crawler UI.

The kind of information included in the configuration data for a connector typically describes the "how", as opposed to the "what". For example, you'd configure a Livelink connection by specifying how to talk to the Livelink server. You would *not* include information about which documents to select in such a configuration.

There is one difference between how you define a _repository connection_ and how you define an _authority connection_ or _output connection_: for a repository connection, you must also specify a governing authority connection. This is because *all* documents ingested by LCF need to include appropriate access tokens, and those access tokens are specific to the governing authority.

h2. Jobs

A _job_, in LCF parlance, is a description of some kind of synchronization that needs to occur between a specified repository connection and a specified output connection. A job includes the following (sketched below):

* A verbal description
* A repository connection (and thus, implicitly, an authority connection as well)
* An output connection
* A repository-connection-specific description of "what" documents and metadata the job applies to
* A model for crawling: either "run to completion" or "run continuously"
* A schedule for when the job will run: either within specified time windows, or on demand

Jobs are allowed to share the same repository connection, and thus they can overlap in the set of documents they describe. LCF permits this situation, although when it occurs it is probably an accident.
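To make the job description above concrete, here is a minimal sketch of the fields such a description carries. The class, field, and enum names are invented for illustration and are not the actual LCF API.

{code:java}
// Hypothetical sketch of a job description; names are invented for
// illustration and are not the actual LCF API.
public class JobDescription {
    enum CrawlModel { RUN_TO_COMPLETION, RUN_CONTINUOUSLY }

    String description;               // verbal description of the job
    String repositoryConnectionName;  // implies a governing authority connection
    String outputConnectionName;      // where ingestion/deletion requests go
    String documentSpecificationXml;  // repository-specific "what" to crawl
    CrawlModel crawlModel;            // run to completion, or run continuously
    String schedule;                  // time windows, or empty for on-demand runs
}
{code}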