h1. Building LCF

Lucene Connectors Framework consists of the framework itself, a set of connectors, and an Apache2 plug-in module. These can be built as follows.

h3. Building the framework and the connectors

To build the LCF framework code, and the particular connectors you are interested in, you currently need to do the following:

# Check out [https://svn.apache.org/repos/asf/incubator/lcf/trunk].
# cd to "modules".
# Install the desired dependent LGPL and proprietary libraries, wsdls, and xsds. See below for details.
# Run ant.

A minimal command-line version of these steps is sketched below.

If you supply *no* LGPL or proprietary libraries, only the framework itself and the following repository connectors will be built:

* Filesystem connector
* JDBC connector, with just the postgresql jdbc driver
* RSS connector
* Webcrawler connector

In addition, the following output connectors will be built:

* MetaCarta GTS output connector
* Lucene SOLR output connector
* Null output connector

The LGPL and proprietary connector dependencies are described in separate sections below.
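For reference, here is what a minimal build, with no optional libraries supplied, might look like; this sketch assumes svn, ant, and a JDK are already on the path, and the local checkout directory name is arbitrary:

{code}
# Check out the LCF trunk; the local directory name is arbitrary
svn checkout https://svn.apache.org/repos/asf/incubator/lcf/trunk lcf-trunk
cd lcf-trunk/modules

# Drop any LGPL/proprietary jars, wsdls, and xsds into place here
# (see the connector-specific sections below)

# Build the framework plus whatever connectors have their dependencies satisfied
ant
{code}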
The output of the ant build is produced in the _modules/dist_ directory, which is further broken down by process. The number of process directories produced may vary, because some optional connectors supply processes that must be run to support them. See the table below for a description of the _modules/dist_ folder.

|| _modules/dist_ directory || Meaning ||
| _tomcat_ | Web applications that should be deployed on tomcat, plus recommended tomcat -D switch names and values |
| _processes_ | classpath jars that should be included in the class path for all non-connector-specific processes, along with -D switches, using the same convention as described for _tomcat_, above |
| _wsdd_ | wsdd files that are needed by the included connectors in order to function |
| _xxx-process_ | classpath jars and -D switches needed for a required connector-specific process |

In all of the _dist_ directories above, each required -D switch is represented by a file named for the switch, where the desired value of the switch is stored as the file's contents. For example, the file "foo.bar" might have the contents "hello", which should correspond during deployment to a java switch of the form "-Dfoo.bar=hello".

When you are constructing the appropriate classpath for your LCF processes, it is important to remember that "more" is not necessarily "better". The process deployment strategy implied by the build structure has been carefully thought out to avoid jar conflicts. Indeed, several connectors are structured using multiple processes precisely for that reason.

h5. Building the Documentum connector

The Documentum connector requires EMC's DFC product in order to be built. Install DFC on the build system, and locate the jars it installs. You will need to copy at least dfc.jar, dfcbase.jar, and dctm.jar into the directory "modules/connectors/documentum/dfc".

h5. Building the FileNet connector

The FileNet connector requires IBM's FileNet P8 API jar in order to be built. Install the FileNet P8 API on the build system, and copy at least "Jace.jar" from that installation into "modules/connectors/filenet/filenet-api".

h5. Building the JDBC connector, including Oracle, SQLServer, or Sybase JDBC drivers

The JDBC connector also knows how to work with Oracle, SQLServer, and Sybase JDBC drivers. For Oracle, download the appropriate Oracle JDBC jar from the Oracle site, and copy it into the directory "modules/connectors/jdbc/jdbc-drivers". For SQLServer and Sybase, download jtds.jar, and copy it into the same directory.

h5. Building the jCIFS connector

To build this connector, you need to download jcifs.jar from http://samba.jcifs.org, and copy it into the "modules/connectors/jcifs/jcifs" directory.

h5. Building the LiveLink connector

This connector needs LAPI, a proprietary java library that provides access to OpenText's LiveLink server. Copy lapi.jar into the "modules/connectors/livelink/lapi" directory.

h5. Building the Memex connector

This connector needs the Memex API jar, usually called JavaMXIELIB.jar. Copy this jar into the "modules/connectors/memex/mxie-java" directory.

h5. Building the Meridio connector

The Meridio connector needs wsdls and xsds downloaded from an installed Meridio instance using *disco.exe*, which is installed as part of Microsoft Visual Studio, typically under "c:\Program Files\Microsoft SDKs\Windows\V6.x\bin". Obtain the preliminary wsdls and xsds by interrogating the following Meridio web services:

* http\[s\]://<server>/DMWS/MeridioDMWS.asmx
* http\[s\]://<server>/RMWS/MeridioRMWS.asmx

You should have obtained the following files in this step:

* MeridioDMWS.wsdl
* MeridioRMWS.wsdl
* DMDataSet.xsd
* RMDataSet.xsd
* RMClassificationDataSet.xsd
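As an illustration, the preliminary files might be fetched as follows from a Visual Studio command prompt; the host name and output directory are placeholders, and the same approach applies to the SharePoint wsdls later in this document:

{code}
rem Hypothetical example; "meridio.example.com" and "meridio-wsdls" are placeholders
disco.exe /out:meridio-wsdls https://meridio.example.com/DMWS/MeridioDMWS.asmx
disco.exe /out:meridio-wsdls https://meridio.example.com/RMWS/MeridioRMWS.asmx
{code}

disco.exe writes the .wsdl files and any referenced .xsd files into the output directory.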
Next, patch these using Microsoft's *xmldiffpatch* utility suite, downloadable for Windows from [http://msdn.microsoft.com/en-us/library/aa302294.aspx]. The appropriate diff files to apply as patches can be found in "modules/connectors/meridio/upstream-diffs". After patching, rename the outputs so that you have the following files:

* MeridioDMWS_axis.wsdl
* MeridioRMWS_axis.wsdl
* DMDataSet_castor.xsd
* RMDataSet_castor.xsd
* RMClassificationDataSet_castor.xsd

Finally, copy all of these to "modules/connectors/meridio/wsdls".

h5. Building the SharePoint connector

In order to build this connector, you need to download wsdls from an installed SharePoint instance. The wsdls in question are:

* Permissions.wsdl
* Lists.wsdl
* Dspsts.wsdl
* usergroup.wsdl
* versions.wsdl
* webs.wsdl

To download a wsdl, use Microsoft's *disco.exe* tool, which is part of Visual Studio, typically under "c:\Program Files\Microsoft SDKs\Windows\V6.x\bin", in the same manner as sketched above for Meridio. You'd want to interrogate the following urls:

* http\[s\]://<server>/_vti_bin/Permissions.asmx
* http\[s\]://<server>/_vti_bin/Lists.asmx
* http\[s\]://<server>/_vti_bin/Dspsts.asmx
* http\[s\]://<server>/_vti_bin/usergroup.asmx
* http\[s\]://<server>/_vti_bin/versions.asmx
* http\[s\]://<server>/_vti_bin/webs.asmx

When the wsdl files have been downloaded, copy them to "modules/connectors/sharepoint/wsdls".

h3. Building LCF's Apache2 plugin

To build the mod-authz-annotate plugin, you need to start with a Unix system that has the apache2 development tools installed on it, plus the curl development package (from [http://curl.haxx.se] or elsewhere). Then, cd to modules/mod-authz-annotate, and type "make". The build will produce a file called mod-authz-annotate.so, which should be copied to the appropriate Apache2 directory so it can be used as a plugin.

h1. Running Lucene Connectors Framework

h3. Framework and connectors

The core part of Lucene Connectors Framework consists of several pieces. These basic pieces are enumerated below:

* A Postgresql database, which is where LCF keeps all of its configuration and state information
* A synchronization directory, which is how LCF coordinates activity among its various processes
* An *agents* process, which is the process that actually crawls documents and ingests them
* A *crawler-ui* web application, which presents the UI users interact with to configure and control the crawler
* An *authority-service* web application, which responds to requests for authorization tokens, given a user name

In addition, there are a number of java classes in Lucene Connectors Framework that are intended to be called directly, to perform specific actions in the environment or in the database. These classes are usually invoked from the command line, with appropriate arguments supplied, and are thus considered to be LCF *commands*. The basic functionality supplied by these command classes is as follows:

* Create/Destroy the LCF database instance
* Start/Stop the *agents* process
* Register/Unregister an agent class (there's currently only one included)
* Register/Unregister an output connector
* Register/Unregister a repository connector
* Register/Unregister an authority connector
* Clean up synchronization directory garbage resulting from an ungraceful interruption of an LCF process
* Query for certain kinds of job-related information

Individual connectors may contribute additional command classes and processes to this picture.
A properly built connector typically consists of:

* One or more jar files meant to be included in the *agents* process and command invocation classpaths
* One or more "iar" incremental war files, which are meant to be unpacked on top of the *lcf-crawler-ui* or *lcf-authority-service* web applications
* Possibly some java commands, which are meant to support or configure the connector in some way
* Possibly a connector-specific process or two, each requiring a distinct classpath, which usually serves to isolate the *crawler-ui* web application, *authority-service* web application, *agents* process, and any commands from problematic aspects of the client environment
* A recommended set of java "define" variables, which should be used consistently with all involved processes, e.g. the *agents* process, the application server running the *authority-service* and *crawler-ui*, and any commands

An individual connector package will typically supply an output connector, a repository connector, or a repository connector together with an authority connector. The ant build script under _modules_ automatically folds each individual connector's contribution into the overall package.

h5. Configuring the Postgresql database

Despite having an internal architecture that cleanly abstracts from specific database details, Lucene Connectors Framework is currently fairly specific to Postgresql. There are a number of reasons for this:

# Lucene Connectors Framework uses the database for its document queue, which places a significant load on it. The back-end database is thus a significant factor in LCF's performance. But, in exchange, LCF benefits enormously from the underlying ACID properties of the database.
# The syntax abstraction is not perfect. Some details, such as how regular expressions are handled, have not been abstracted sufficiently at the time of this writing.
# The strategy for getting optimal query plans from the database is not abstracted. For example, Postgresql 8.3+ is very sensitive to certain statistics about a database table, and in some cases will not generate a performant plan if the statistics are even slightly inaccurate. So, for Postgresql, the tables must be analyzed very frequently to avoid catastrophically bad plans. Luckily, Postgresql is pretty good at doing analysis quickly. Oracle, on the other hand, takes a very long time to perform analysis, but its plans are much less sensitive.
# Postgresql always does a sequential scan in order to count the number of rows in a table, while other databases return this efficiently. This has affected the design of the LCF UI.
# The choice of query form influences the query plan. Ideally, this would not be true, but for both Postgresql and (say) Oracle, it is.
# Postgresql has a high degree of parallelism and little internal single-threadedness.

Lucene Connectors Framework has been tested against Postgresql 8.3.7.
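To illustrate the row-counting point: in Postgresql, an exact count requires a full scan, while the planner's statistics give only a cheap estimate. The table name below is hypothetical rather than an actual LCF schema name:

{code}
-- Exact count: Postgresql satisfies this with a sequential scan over the whole table
SELECT COUNT(*) FROM documents;

-- Cheap estimate from planner statistics; only as fresh as the last ANALYZE
SELECT reltuples::bigint AS estimated_rows FROM pg_class WHERE relname = 'documents';
{code}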
We recommend the following configuration parameter settings to work optimally with LCF:

* A default database encoding of UTF-8
* _postgresql.conf_ settings as described in the table below
* _pg_hba.conf_ settings to allow password access for TCP/IP connections from Lucene Connectors Framework
* A maintenance strategy involving cronjob-style vacuuming, rather than Postgresql autovacuum

|| _postgresql.conf_ parameter || Tested value ||
| shared_buffers | 1024MB |
| checkpoint_segments | 300 |
| maintenance_work_mem | 2MB |
| tcpip_socket | true |
| max_connections | 400 |
| checkpoint_timeout | 900 |
| datestyle | ISO,European |
| autovacuum | off |

h5. A note about maintenance

Postgresql's architecture causes it to accumulate dead tuples in its data files, which do not interfere with its performance but do bloat the database over time. The usage pattern of LCF is such that it can cause significant bloat to the underlying Postgresql database in only a few days, under sufficient load. Postgresql has a feature to address this bloat, called *vacuuming*. It comes in three varieties: autovacuum, manual vacuum, and manual full vacuum.

We have found that Postgresql's autovacuum feature is inadequate under such conditions, because it not only fights for database resources pretty much all the time, but it also falls further and further behind. Postgresql's in-place manual vacuum functionality is a bit better, but is still much, much slower than actually making a new copy of the database files, which is what happens when a manual full vacuum is performed. Dead-tuple bloat also occurs in indexes in Postgresql, so tables that have had a lot of activity may benefit from being reindexed at the time of maintenance.

We therefore recommend periodic, scheduled maintenance operations instead, consisting of the following:

* VACUUM FULL VERBOSE;
* REINDEX DATABASE <dbname>;

During maintenance, Postgresql locks tables one at a time. Nevertheless, the crawler UI may become unresponsive for some operations, such as when counting outstanding documents on the job status page. LCF therefore has the ability to check for the existence of a file prior to such sensitive operations, and will display a helpful "maintenance in progress" message if that file is found. This allows a user to set up a maintenance system that provides adequate feedback to an LCF user about the overall status of the system.
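A scheduled maintenance job might look roughly like the following shell sketch; the database name, user, and maintenance-flag path are all placeholders, and the flag-file convention assumes LCF has been configured to look for that particular file:

{code}
#!/bin/sh
# Hypothetical nightly maintenance script; "lcfdb", "lcf", and the flag path are placeholders
touch /lcf/maintenance-in-progress    # tell the crawler UI that maintenance is underway
psql -U lcf -d lcfdb -c "VACUUM FULL VERBOSE;"
psql -U lcf -d lcfdb -c "REINDEX DATABASE lcfdb;"
rm -f /lcf/maintenance-in-progress
{code}

Run from cron during a quiet period, something along these lines keeps bloat in check without relying on autovacuum.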
h5. The LCF configuration file

Currently, LCF requires two configuration files: the property file, and the logging configuration file.

The property file path can be specified by the system property "org.apache.lcf.configfile". If not specified through a -D switch, its name is presumed to be _/lcf/properties.ini_.

The property file allows several properties to be specified. One of the optional properties is the name of the logging configuration file. This property's name is "org.apache.lcf.logconfigfile". If not present, the logging configuration file is assumed to be _/lcf/logging.ini_. The logging configuration file is a standard commons-logging property file, and should be formatted accordingly.

The following table describes the configuration file properties and what they do:

|| Property || Required? || Function ||
| org.apache.lcf.synchdirectory | Yes | Specifies the path of the synchronization directory. All LCF process owners *must* have read/write privileges to this directory. |
| org.apache.lcf.database.maxhandles | No | Specifies the maximum number of database connection handles that will be pooled. The recommended value is 200. |
| org.apache.lcf.database.handletimeout | No | Specifies the maximum time a handle is to live before it is presumed dead. We recommend a value of 604800, which is the maximum allowable. |
| org.apache.lcf.logconfigfile | No | Specifies the location of the logging configuration file. |
| org.apache.lcf.database.name | No | Describes the database name for LCF; defaults to "dbname" if not specified. |
| org.apache.lcf.database.username | No | Describes the database user name for LCF; defaults to "lcf" if not specified. |
| org.apache.lcf.database.password | No | Describes the database user's password for LCF; defaults to "local_pg_password" if not specified. |
| com.metacarta.crawler.threads | No | Number of crawler worker threads created. We suggest a value of 30. |
| com.metacarta.crawler.deletethreads | No | Number of crawler delete threads created. We suggest a value of 10. |
| com.metacarta.misc | No | Miscellaneous debugging output. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.db | No | Database debugging output. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.lock | No | Lock management debugging output. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.cache | No | Cache management debugging output. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.agents | No | Agent management debugging output. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.perf | No | Performance logging debugging output. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.crawlerthreads | No | Log crawler thread activity. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.hopcount | No | Log hopcount tracking activity. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.jobs | No | Log job activity. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.connectors | No | Log connector activity. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.scheduling | No | Log document scheduling activity. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.authorityconnectors | No | Log authority connector activity. Legal values are INFO, WARN, or DEBUG. |
| com.metacarta.authorityservice | No | Log authority service activity. Legal values are INFO, WARN, or DEBUG. |
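Putting this together, a minimal _properties.ini_ might look like the following sketch. It assumes the file uses standard java-properties name=value syntax; every value shown is an example, and only the synchronization directory is strictly required:

{code}
# Hypothetical /lcf/properties.ini; all paths and values are examples
org.apache.lcf.synchdirectory=/lcf/synch
org.apache.lcf.logconfigfile=/lcf/logging.ini
org.apache.lcf.database.maxhandles=200
org.apache.lcf.database.handletimeout=604800
com.metacarta.crawler.threads=30
com.metacarta.crawler.deletethreads=10
{code}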
h5. Commands

After you have created the necessary configuration files, you will need to initialize the database, register the "pull-agent" agent, and then register your individual connectors. LCF provides a set of commands for performing these actions, and others as well. The classes implementing these commands are listed below.

|| Core Command Class || Function ||
| org.apache.lcf.core.DBCreate | Create the LCF database instance |
| org.apache.lcf.core.DBDrop | Drop the LCF database instance |
| org.apache.lcf.core.LockClean | Clean out the synchronization directory |

|| Agents Command Class || Function ||
| org.apache.lcf.agents.Install | Create the LCF agents tables |
| org.apache.lcf.agents.Uninstall | Remove the LCF agents tables |
| org.apache.lcf.agents.Register | Register an agent class |
| org.apache.lcf.agents.UnRegister | Un-register an agent class |
| org.apache.lcf.agents.UnRegisterAll | Un-register all current agent classes |
| org.apache.lcf.agents.SynchronizeAll | Un-register all registered agent classes that can't be found |
| org.apache.lcf.agents.RegisterOutput | Register an output connector class |
| org.apache.lcf.agents.UnRegisterOutput | Un-register an output connector class |
| org.apache.lcf.agents.UnRegisterAllOutputs | Un-register all current output connector classes |
| org.apache.lcf.agents.SynchronizeOutputs | Un-register all registered output connector classes that can't be found |
| org.apache.lcf.agents.AgentRun | Main *agents* process class |
| org.apache.lcf.agents.AgentStop | Stop the running *agents* process |

|| Crawler Command Class || Function ||
| org.apache.lcf.crawler.Register | Register a repository connector class |
| org.apache.lcf.crawler.UnRegister | Un-register a repository connector class |
| org.apache.lcf.crawler.UnRegisterAll | Un-register all repository connector classes |
| org.apache.lcf.crawler.SynchronizeConnectors | Un-register all registered repository connector classes that can't be found |
| org.apache.lcf.crawler.ExportConfiguration | Export crawler configuration to a file |
| org.apache.lcf.crawler.ImportConfiguration | Import crawler configuration from a file |

|| Authority Command Class || Function ||
| org.apache.lcf.authorities.RegisterAuthority | Register an authority connector class |
| org.apache.lcf.authorities.UnRegisterAuthority | Un-register an authority connector class |
| org.apache.lcf.authorities.UnRegisterAllAuthorities | Un-register all authority connector classes |
| org.apache.lcf.authorities.SynchronizeAuthorities | Un-register all registered authority connector classes that can't be found |

Remember that you need to include all the jars under _modules/dist/processes_ in the classpath whenever you run one of these commands! You also must include the corresponding -D switches, as described earlier.
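For example, on a Unix-style system a command might be invoked roughly as follows; this is a sketch only, assuming the default property file location and that every jar under _modules/dist/processes_ belongs on the classpath:

{code}
#!/bin/sh
# Sketch: build a classpath from every jar under modules/dist/processes
CP=$(echo modules/dist/processes/*.jar | tr ' ' ':')

# Supply the -D switches named by the switch files in that directory, plus
# the property file location, then run the desired command class
java -cp "$CP" \
     -Dorg.apache.lcf.configfile=/lcf/properties.ini \
     org.apache.lcf.core.DBCreate
{code}

The same pattern applies to every command class in the tables above, and to the *agents* process itself via org.apache.lcf.agents.AgentRun.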
h5. Initialization command examples

To get going with LCF, you will need to use the above commands to create the database instance, initialize the schema, and register all of the appropriate components. Your initialization sequence will therefore include at least some of the following:

|| Command || Arguments ||
| org.apache.lcf.core.DBCreate | |
| org.apache.lcf.agents.Install | |
| org.apache.lcf.agents.Register | org.apache.lcf.crawler.system.CrawlerAgent |
| org.apache.lcf.agents.RegisterOutput | org.apache.lcf.agents.output.gts.GTSConnector "GTS Connector" |
| org.apache.lcf.agents.RegisterOutput | org.apache.lcf.agents.output.solr.SolrConnector "SOLR Connector" |
| org.apache.lcf.agents.RegisterOutput | org.apache.lcf.agents.output.nullconnector.NullConnector "Null Connector" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.DCTM.DCTM "Documentum Connector" |
| org.apache.lcf.authorities.RegisterAuthority | org.apache.lcf.crawler.authorities.DCTM.AuthorityConnector "Documentum Authority" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.filenet.FilenetConnector "FileNet Connector" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.filesystem.FileConnector "Filesystem Connector" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.jdbc.JDBCConnector "Database Connector" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.sharedrive.ShareDriveConnector "Windows Share Connector" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.livelink.LivelinkConnector "LiveLink Connector" |
| org.apache.lcf.authorities.RegisterAuthority | org.apache.lcf.crawler.connectors.livelink.LivelinkAuthority "LiveLink Authority" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.memex.MemexConnector "Memex Connector" |
| org.apache.lcf.authorities.RegisterAuthority | org.apache.lcf.crawler.connectors.memex.MemexAuthority "Memex Authority" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.meridio.MeridioConnector "Meridio Connector" |
| org.apache.lcf.authorities.RegisterAuthority | org.apache.lcf.crawler.connectors.meridio.MeridioAuthority "Meridio Authority" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.rss.RSSConnector "RSS Connector" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.sharepoint.SharePointRepository "SharePoint Connector" |
| org.apache.lcf.crawler.Register | org.apache.lcf.crawler.connectors.webcrawler.WebcrawlerConnector "Web Connector" |

h5. Deploying the *lcf-crawler-ui* and *lcf-authority-service* web applications

If you built LCF using ant under the _modules_ directory, the ant build will have constructed two war files for you under _modules/dist/tomcat_. Take these war files and deploy them as web applications under one or more instances of tomcat. There is no requirement that the *lcf-crawler-ui* web application and the *lcf-authority-service* web application be deployed on the same instance of tomcat. However, with the current architecture of LCF, they must be deployed on the same server.

Under _modules/dist/tomcat_, you may also see files that are not war files. These files are meant to be used as command-line -D switches for the tomcat process. The switches may or may not be identical for the two web applications, but they will never conflict. You may need to alter environment variables or your tomcat startup scripts in order to provide these switches. (More about this in the future...)
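In the meantime, one way to supply such switches, assuming a stock tomcat installation that honors a _bin/setenv.sh_ hook, is via CATALINA_OPTS; the switch shown here is purely illustrative, and the actual names come from the non-war files under _modules/dist/tomcat_:

{code}
# $CATALINA_HOME/bin/setenv.sh (illustrative sketch; actual switch names and
# values come from the non-war files under modules/dist/tomcat)
CATALINA_OPTS="$CATALINA_OPTS -Dorg.apache.lcf.configfile=/lcf/properties.ini"
export CATALINA_OPTS
{code}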
h5. Running the *agents* process

The *agents* process is the process that actually performs the crawling for LCF. Start this process by running the command "org.apache.lcf.agents.AgentRun". This class will run until stopped by invoking the command "org.apache.lcf.agents.AgentStop". It is highly recommended that you stop the process in this way. You may also stop the process using a SIGTERM signal, but "kill -9" or the equivalent is NOT recommended, because it may leave dangling locks in the LCF synchronization directory. (If you have to, clean up these locks by shutting down all LCF processes, including the tomcat instances that are running the web applications, and invoking the command "org.apache.lcf.core.LockClean".)

h5. Running connector-specific processes

Connector-specific processes require the classpath for their invocation to include all the jars that are in the corresponding _modules/dist/xxx-process_ directory. The Documentum and FileNet connectors are currently the only two connectors that require additional processes. Start these processes using the classes listed below, and stop them with SIGTERM.

|| Connector || Process || Start class ||
| Documentum | documentum-server-process | org.apache.lcf.crawler.server.DCTM.DCTM |
| Documentum | documentum-registry-process | org.apache.lcf.crawler.registry.DCTM.DCTM |
| FileNet | filenet-server-process | org.apache.lcf.crawler.server.filenet.Filenet |
| FileNet | filenet-registry-process | org.apache.lcf.crawler.registry.filenet.Filenet |

h3. Running the LCF Apache2 plugin

The LCF Apache2 plugin, mod-authz-annotate, is designed to take an authenticated principal (e.g. from mod-auth-kerb) and query a set of authority services for access tokens using an HTTP request. These access tokens are then passed to a (not included) search engine UI, which can use them to help compose a search that properly excludes content that the user is not supposed to see. The list of authority services so queried is configured in Apache's httpd.conf file. This project includes only one such service: the java authority service, which uses authority connections defined in the crawler UI to obtain appropriate access tokens.

In order for mod-authz-annotate to be used, it must be placed into Apache2's extensions directory and configured appropriately in the httpd.conf file.

Note: The LCF project currently contains no support or assistance for converting a Kerberos principal to a list of Active Directory SIDs. This functionality is best modeled as an independent authority service that mod-authz-annotate is configured to talk to. Without such a service, the LCF security model will be effectively useless for the following connectors:

* FileNet
* Meridio
* SharePoint

The best way to construct such an authority service depends on the platform. On Windows, it is fairly easy to do using the standard Windows APIs. On Linux, custom software may need to be written to decode Kerberos security packets. One supplier of such software can be found here: [http://www.likewise.com/]
h5. Configuring the LCF Apache2 plugin

mod-authz-annotate understands the following httpd.conf commands:

|| Command || Meaning || Values ||
| AuthzAnnotateEnable | Turn the plugin on or off | "On", "Off" |
| AuthzAnnotateAuthority | Point to an authority service that supports ACL queries, but not ID queries | The authority URL |
| AuthzAnnotateACLAuthority | Point to an authority service that supports ACL queries, but not ID queries | The authority URL |
| AuthzAnnotateIDAuthority | Point to an authority service that supports ID queries, but not ACL queries | The authority URL |
| AuthzAnnotateIDACLAuthority | Point to an authority service that supports both ACL queries and ID queries | The authority URL |
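A hypothetical httpd.conf fragment tying these directives together might look like the following; the module name, module path, and authority service URL are all placeholders for whatever your installation actually uses:

{code}
# Load the plugin (module name and .so path are placeholders)
LoadModule authz_annotate_module modules/mod-authz-annotate.so

AuthzAnnotateEnable On
# Java authority service deployed under tomcat; this URL is illustrative only
AuthzAnnotateACLAuthority http://localhost:8080/lcf-authority-service/UserACLs
{code}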