incubator-connectors-commits mailing list archives

From conflue...@apache.org
Subject [CONF] Apache Connectors Framework > Programmatic Operation of ACF
Date Mon, 13 Sep 2010 22:40:00 GMT
Space: Apache Connectors Framework (https://cwiki.apache.org/confluence/display/CONNECTORS)
Page: Programmatic Operation of ACF (https://cwiki.apache.org/confluence/display/CONNECTORS/Programmatic+Operation+of+ACF)


Edited by Karl Wright:
---------------------------------------------------------------------
h1. Programmatic Operation of ACF

A certain subset of ACF users want to think of ACF as an engine that they can poke from whatever
other system they are developing.  While ACF is not precisely a document indexing engine per
se, it can certainly be controlled programmatically.  Right now, there are three principal
ways of achieving this control.

h3. Control by Servlet API

ACF provides a servlet-based JSON API that gives you the complete ability to define connections
and jobs, and control job execution.  You can read about JSON [here|http://www.json.org].
 The API is designed to be RESTful in character.  Thus, it makes full use of the HTTP verbs
GET, PUT, POST, and DELETE, and represents objects as URLs.  The basic format of the JSON
servlet resource URLs is as follows:

http\[s\]://_<server_and_port>_/acf-api-service/json/_<resource>_

The servlet ignores request data except when the PUT or POST verb is used, in which case
the request data is presumed to be a JSON object.  The servlet responds either with an error
response code (400 or 500) and an appropriate explanatory message, or with a 200 (OK),
201 (CREATED), or 404 (NOT FOUND) response code along with a response JSON object.
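As an illustration, a minimal Python helper for talking to the servlet might look like the following sketch; the base URL (host and port) and the absence of authentication are assumptions about a particular deployment, not something the API mandates:

```python
import json

def resource_url(base, resource):
    """Build a JSON servlet resource URL, e.g. .../acf-api-service/json/jobs."""
    return "%s/acf-api-service/json/%s" % (base.rstrip("/"), resource)

def interpret(response_text):
    """Decode a servlet response, raising on the {"error": ...} form."""
    obj = json.loads(response_text)
    if "error" in obj:
        raise RuntimeError(obj["error"])
    return obj

# Live usage (requires a running ACF instance; urllib is one possible client):
#   from urllib.request import urlopen
#   data = interpret(urlopen(resource_url("http://localhost:8080", "outputconnectors")).read())
```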

The actual available resources and commands are as follows:



|| Resource || Verb || What it does || Input format || Output format ||
| outputconnectors | GET | List all registered output connectors | N/A | \{"outputconnector":\[_<list_of_output_connector_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| authorityconnectors | GET | List all registered authority connectors | N/A | \{"authorityconnector":\[_<list_of_authority_connector_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| repositoryconnectors | GET | List all registered repository connectors | N/A | \{"repositoryconnector":\[_<list_of_repository_connector_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| outputconnections | GET | List all output connections | N/A | \{"outputconnection":\[_<list_of_output_connection_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| outputconnections/_<encoded_connection_name>_ | GET | Get a specific output connection
| N/A | \{"outputconnection":_<output_connection_object>_\} *OR* \{ \} *OR* \{"error":_<error_text>_\}
|
| outputconnections/_<encoded_connection_name>_ | PUT | Save or create an output connection
| \{"outputconnection":_<output_connection_object>_\} | \{"connection_name":_<connection_name>_\}
*OR* \{"error":_<error_text>_\} |
| outputconnections/_<encoded_connection_name>_ | DELETE | Delete an output connection
| N/A | \{ \} *OR* \{"error":_<error_text>_\} |
| status/outputconnections/_<encoded_connection_name>_ | GET | Check the status of an
output connection | N/A | \{"check_result":_<message>_\} *OR* \{"error":_<error_text>_\}
|
| info/outputconnections/_<encoded_connection_name>_/_<connector_specific_resource>_
| GET | Retrieve arbitrary connector-specific resource | N/A | _<response_data>_ *OR*
\{"error":_<error_text>_\} *OR* \{"service_interruption":_<error_text>_\} |
| authorityconnections | GET | List all authority connections | N/A | \{"authorityconnection":\[_<list_of_authority_connection_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| authorityconnections/_<encoded_connection_name>_ | GET | Get a specific authority
connection | N/A | \{"authorityconnection":_<authority_connection_object>_\} *OR* \{
\} *OR* \{"error":_<error_text>_\} |
| authorityconnections/_<encoded_connection_name>_ | PUT | Save or create an authority
connection | \{"authorityconnection":_<authority_connection_object>_\} | \{"connection_name":_<connection_name>_\}
*OR* \{"error":_<error_text>_\} |
| authorityconnections/_<encoded_connection_name>_ | DELETE | Delete an authority connection
| N/A | \{ \} *OR* \{"error":_<error_text>_\} |
| status/authorityconnections/_<encoded_connection_name>_ | GET | Check the status of
an authority connection | N/A | \{"check_result":_<message>_\} *OR* \{"error":_<error_text>_\}
|
| repositoryconnections | GET | List all repository connections | N/A | \{"repositoryconnection":\[_<list_of_repository_connection_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| repositoryconnections/_<encoded_connection_name>_ | GET | Get a specific repository
connection | N/A | \{"repositoryconnection":_<repository_connection_object>_\} *OR*
\{ \} *OR* \{"error":_<error_text>_\} |
| repositoryconnections/_<encoded_connection_name>_ | PUT | Save or create a repository
connection | \{"repositoryconnection":_<repository_connection_object>_\} | \{"connection_name":_<connection_name>_\}
*OR* \{"error":_<error_text>_\} |
| repositoryconnections/_<encoded_connection_name>_ | DELETE | Delete a repository connection
| N/A | \{ \} *OR* \{"error":_<error_text>_\} |
| status/repositoryconnections/_<encoded_connection_name>_ | GET | Check the status
of a repository connection | N/A | \{"check_result":_<message>_\} *OR* \{"error":_<error_text>_\}
|
| info/repositoryconnections/_<encoded_connection_name>_/_<connector_specific_resource>_
| GET | Retrieve arbitrary connector-specific resource | N/A | _<response_data>_ *OR*
\{"error":_<error_text>_\} *OR* \{"service_interruption":_<error_text>_\} |
| jobs | GET | List all job definitions | N/A | \{"job":\[_<list_of_job_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| jobs | POST | Create a job | \{"job":_<job_object>_\} | \{"job_id":_<job_identifier>_\}
*OR* \{"error":_<error_text>_\} |
| jobs/_<job_id>_ | GET | Get a specific job definition | N/A | \{"job":_<job_object>_\}
*OR* \{ \} *OR* \{"error":_<error_text>_\} |
| jobs/_<job_id>_ | PUT | Save a job definition | \{"job":_<job_object>_\} | \{"job_id":_<job_identifier>_\}
*OR* \{"error":_<error_text>_\} |
| jobs/_<job_id>_ | DELETE | Delete a job definition | N/A | \{ \} *OR* \{"error":_<error_text>_\}
|
| jobstatuses | GET | List all jobs and their status | N/A | \{"job":\[_<list_of_job_status_objects>_\]\}
*OR* \{"error":_<error_text>_\} |
| jobstatuses/_<job_id>_ | GET | Get a specific job's status | N/A | \{"jobstatus":_<job_status_object>_\}
*OR* \{ \} *OR* \{"error":_<error_text>_\} |
| start/_<job_id>_ | PUT | Start a specified job manually | N/A | \{ \} *OR* \{"error":_<error_text>_\}
|
| abort/_<job_id>_ | PUT | Abort a specified job | N/A | \{ \} *OR* \{"error":_<error_text>_\}
|
| restart/_<job_id>_ | PUT | Stop and start a specified job | N/A | \{ \} *OR* \{"error":_<error_text>_\}
|
| pause/_<job_id>_ | PUT | Pause a specified job | N/A | \{ \} *OR* \{"error":_<error_text>_\}
|
| resume/_<job_id>_ | PUT | Resume a specified job | N/A | \{ \} *OR* \{"error":_<error_text>_\}
|
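The job-control resources above compose naturally into a start-and-wait loop. The sketch below is hypothetical glue code: the http_put and http_get callables stand in for whatever HTTP client you use (each taking a resource path and returning the decoded JSON response), and the 30-second polling interval is arbitrary:

```python
def start_and_wait(job_id, http_put, http_get, sleep):
    """Start a job via start/<job_id>, then poll jobstatuses/<job_id>
    until the job status reaches a terminal state."""
    result = http_put("start/%s" % job_id, None)
    if "error" in result:
        raise RuntimeError(result["error"])
    while True:
        status = http_get("jobstatuses/%s" % job_id)
        if "error" in status:
            raise RuntimeError(status["error"])
        state = status["jobstatus"]["status"]
        if state in ("done", "error"):
            return state
        sleep(30)  # polling interval; tune to taste
```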

Other resources having to do with reports have been planned, but not yet implemented.

h5. Output connector objects

The JSON fields an output connector object has are as follows:

|| Field || Meaning ||
| "description" | The optional description of the connector |
| "class_name" | The class name of the class implementing the connector |

h5. Authority connector objects

The JSON fields an authority connector object has are as follows:

|| Field || Meaning ||
| "description" | The optional description of the connector |
| "class_name" | The class name of the class implementing the connector |

h5. Repository connector objects

The JSON fields a repository connector object has are as follows:

|| Field || Meaning ||
| "description" | The optional description of the connector |
| "class_name" | The class name of the class implementing the connector |

h5. Output connection objects

Output connection names, when they are part of a URL, should be encoded as follows:

# All instances of '.' should be replaced by '..'.
# All instances of '/' should be replaced by '.+'.
# The URL should be encoded using standard URL utf-8-based %-encoding.
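The three encoding steps above can be sketched in Python, using the standard library's quote for the %-encoding step:

```python
from urllib.parse import quote

def encode_connection_name(name):
    """Apply the dot and slash substitutions, then standard UTF-8 %-encoding."""
    return quote(name.replace(".", "..").replace("/", ".+"), safe="")

# e.g. encode_connection_name("My.Output/Conn") -> "My..Output.%2BConn"
```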

The JSON fields an output connection object has are as follows:

|| Field || Meaning ||
| "name" | The unique name of the connection |
| "description" | The description of the connection |
| "class_name" | The Java class name of the class implementing the connection |
| "max_connections" | The total number of outstanding connections allowed to exist at a time
|
| "configuration" | The configuration object for the connection, which is specific to the
connection class |

h5. Authority connection objects

Authority connection names, when they are part of a URL, should be encoded as follows:

# All instances of '.' should be replaced by '..'.
# All instances of '/' should be replaced by '.+'.
# The URL should be encoded using standard URL utf-8-based %-encoding.

The JSON fields for an authority connection object are as follows:

|| Field || Meaning ||
| "name" | The unique name of the connection |
| "description" | The description of the connection |
| "class_name" | The Java class name of the class implementing the connection |
| "max_connections" | The total number of outstanding connections allowed to exist at a time
|
| "configuration" | The configuration object for the connection, which is specific to the
connection class |

h5. Repository connection objects

Repository connection names, when they are part of a URL, should be encoded as follows:

# All instances of '.' should be replaced by '..'.
# All instances of '/' should be replaced by '.+'.
# The URL should be encoded using standard URL utf-8-based %-encoding.

The JSON fields for a repository connection object are as follows:

|| Field || Meaning ||
| "name" | The unique name of the connection |
| "description" | The description of the connection |
| "class_name" | The Java class name of the class implementing the connection |
| "max_connections" | The total number of outstanding connections allowed to exist at a time
|
| "configuration" | The configuration object for the connection, which is specific to the
connection class |
| "acl_authority" | The (optional) name of the authority that will enforce security for this
connection |
| "throttle" | An array of throttle objects, which control how quickly documents can be requested
from this connection |

Each throttle object has the following fields:

|| Field || Meaning ||
| "match" | The regular expression which is used to match a document's bins to determine if
the throttle should be applied |
| "match_description" | Optional text describing the meaning of the throttle |
| "rate" | The maximum fetch rate to use if the throttle applies, in fetches per minute |

h5. Job objects

The JSON fields for a job are as follows:

|| Field || Meaning ||
| "id" | The job's identifier, if present.  If not present, ACF will create one (and will
also create the job when saved). |
| "description" | Text describing the job |
| "repository_connection" | The name of the repository connection to use with the job |
| "output_connection" | The name of the output connection to use with the job |
| "document_specification" | The document specification object for the job, whose format is
repository-connection specific |
| "output_specification" | The output specification object for the job, whose format is output-connection
specific |
| "start_mode" | The start mode for the job, which can be one of "schedule window start",
"schedule window anytime", or "manual" |
| "run_mode" | The run mode for the job, which can be either "continuous" or "scan once" |
| "hopcount_mode" | The hopcount mode for the job, which can be one of "accurate", "no delete",
or "never delete" |
| "priority" | The job's priority, typically "5" |
| "recrawl_interval" | The default time between recrawl of documents (if the job is "continuous"),
in milliseconds, or "infinite" for infinity |
| "expiration_interval" | The time until a document expires (if the job is "continuous"),
in milliseconds, or "infinite" for infinity |
| "reseed_interval" | The time between reseeding operations (if the job is "continuous"),
in milliseconds, or "infinite" for infinity |
| "hopcount" | An array of hopcount objects, describing the link types and associated maximum
hops permitted for the job |
| "schedule" | An array of schedule objects, describing when the job should be started and
run |

Each hopcount object has the following fields:

|| Field || Meaning ||
| "link_type" | The connection-type-dependent type of a link for which a hop count restriction
is specified |
| "count" | The maximum number of hops allowed for the associated link type, starting at a
seed |

Each schedule object has the following fields:

|| Field || Meaning ||
| "timezone" | The optional time zone for the schedule object; if not present the default
server time zone is used |
| "duration" | The optional length of the described time window, in milliseconds; if not present,
duration is considered infinite |
| "dayofweek" | The optional day-of-the-week enumeration object |
| "monthofyear" | The optional month-of-the-year enumeration object |
| "dayofmonth" | The optional day-of-the-month enumeration object |
| "year" | The optional year enumeration object |
| "hourofday" | The optional hour-of-the-day enumeration object |
| "minutesofhour" | The optional minutes-of-the-hour enumeration object |

Each enumeration object describes an array of integers using the form:

\{"value":\[_<integer_list>_\]\}

Each integer is a zero-based index describing which entity is being specified.  For example,
for "dayofweek", 0 corresponds to Sunday, etc., and thus "dayofweek":\{"value":\[0,6\]\} would
describe Saturdays and Sundays.
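Putting the job and schedule fields together, a minimal job definition suitable for POSTing to the jobs resource might look like this sketch. The connection names are placeholders, and the specification objects are left empty because their real contents depend on the connection types involved:

```python
import json

# Illustrative "scan once" job: crawl weeknights at 2 AM in a two-hour window.
job = {
    "job": {
        "description": "Nightly crawl",
        "repository_connection": "MyRepo",      # placeholder name
        "output_connection": "MyOutput",        # placeholder name
        "start_mode": "schedule window start",
        "run_mode": "scan once",
        "hopcount_mode": "accurate",
        "priority": "5",
        "document_specification": {},           # repository-connection specific
        "output_specification": {},             # output-connection specific
        "schedule": [
            {
                "dayofweek": {"value": [1, 2, 3, 4, 5]},  # Mon-Fri (0 = Sunday)
                "hourofday": {"value": [2]},
                "duration": 7200000,                       # two hours, in ms
            }
        ],
    }
}

body = json.dumps(job)  # request body for POST .../json/jobs
```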

h5. Job status objects

The JSON fields of a job status object are as follows:

|| Field || Meaning ||
| "job_id" | The job identifier |
| "status" | The job status, having the possible values: "not yet run", "running", "paused",
"done", "waiting", "starting up", "cleaning up", "error", "aborting", "restarting", "running
no connector", and "terminating" |
| "error_text" | The error text, if the status is "error" |
| "start_time" | The job start time, in milliseconds since Jan 1, 1970 |
| "end_time" | The job end time, in milliseconds since Jan 1, 1970 |
| "documents_in_queue" | The total number of documents in the queue for the job |
| "documents_outstanding" | The number of documents for the job that are currently considered
'active' |
| "documents_processed" | The number of documents in the queue for the job that have
been processed at least once |

h5. Connection-type-specific objects

As you may note when trying to use the above JSON API methods, you cannot get very far in
defining connections or jobs without knowing the JSON format of a connection's configuration
information, or a job's connection-specific document specification and output specification
information.  The form of these objects is controlled by the Java implementation of the underlying
connector, and is translated directly into JSON, so if you write your own connector you should
be able to figure out what it will be in the API.  For connectors already part of ACF, documenting
these connector-specific objects remains to be done; that task has not yet been started.

Luckily, it is pretty easy to learn a lot about the objects in question by simply creating
connections and jobs in the ACF crawler UI, and then inspecting the resulting JSON objects
through the API.  In this way, it should be possible to do a decent job of coding most API-based
integrations.  The one place where difficulties will certainly occur is if you try to
completely replace the ACF crawler UI with one of your own.  This is because most connectors
have methods that communicate with their respective back-ends in order to allow the user to
select appropriate values.  For example, the path drill-down that is presented by the LiveLink
connector requires that the connector interrogate the appropriate LiveLink repository in order
to populate its path selection pull-downs.  There is, at this time, only one sanctioned way
to accomplish the same job using the API, which is to use the appropriate "_connection_type_/execute/_type-specific_command_"
command to perform the necessary functions.  Some set of useful functions has been coded for
every appropriate connector, but the exact commands for every connector, and their JSON syntax,
remains undocumented for now.

h5. File system connector

The file system connector has no configuration information, and no connector-specific commands.
 However, it does have document specification information.  The information looks something
like this:

\{"startpoint":\[\{"_attribute_path":"c:\path_to_files","include":\[\{"_attribute_type":"file","_attribute_match":"\*.txt"\},\{"_attribute_type":"file","_attribute_match":"\*.doc"\},\{"_attribute_type":"directory","_attribute_match":"\*"\}\],"exclude":\["\*.mov"\]\}\]\}

As you can see, multiple starting paths are possible, and each starting path may carry one
or more inclusion and exclusion rules.
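For clarity, the same document specification can be built as a native structure and serialized; the "_attribute_"-prefixed key names follow the example above, which is how ACF renders XML attributes in JSON:

```python
import json

# File system connector document specification, built programmatically.
doc_spec = {
    "startpoint": [
        {
            "_attribute_path": "c:\\path_to_files",
            "include": [
                {"_attribute_type": "file", "_attribute_match": "*.txt"},
                {"_attribute_type": "file", "_attribute_match": "*.doc"},
                {"_attribute_type": "directory", "_attribute_match": "*"},
            ],
            "exclude": ["*.mov"],
        }
    ]
}

body = json.dumps(doc_spec)  # embed as "document_specification" in a job object
```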


h3. Control via Commands

For script writers, there currently exist a number of ACF execution commands.  These commands
primarily cover defining connections and jobs, controlling jobs, and running reports.  The
following table lists the current suite.

|| Command || What it does ||
| org.apache.acf.agents.DefineOutputConnection | Create a new output connection |
| org.apache.acf.agents.DeleteOutputConnection | Delete an existing output connection |
| org.apache.acf.authorities.ChangeAuthSpec | Modify an authority's configuration information
|
| org.apache.acf.authorities.CheckAll | Check all authorities to be sure they are functioning
|
| org.apache.acf.authorities.DefineAuthorityConnection | Create a new authority connection
|
| org.apache.acf.authorities.DeleteAuthorityConnection | Delete an existing authority connection
|
| org.apache.acf.crawler.AbortJob | Abort a running job |
| org.apache.acf.crawler.AddScheduledTime | Add a schedule record to a job |
| org.apache.acf.crawler.ChangeJobDocSpec | Modify a job's specification information |
| org.apache.acf.crawler.DefineJob | Create a new job |
| org.apache.acf.crawler.DefineRepositoryConnection | Create a new repository connection |
| org.apache.acf.crawler.DeleteJob | Delete an existing job |
| org.apache.acf.crawler.DeleteRepositoryConnection | Delete an existing repository connection
|
| org.apache.acf.crawler.ExportConfiguration | Write the complete list of all connection definitions
and job specifications to a file |
| org.apache.acf.crawler.FindJob | Locate a job identifier given a job's name |
| org.apache.acf.crawler.GetJobSchedule | Find a job's schedule given a job's identifier |
| org.apache.acf.crawler.ImportConfiguration | Import configuration as written by a previous
ExportConfiguration command |
| org.apache.acf.crawler.ListJobStatuses | List the status of all jobs |
| org.apache.acf.crawler.ListJobs | List the identifiers for all jobs |
| org.apache.acf.crawler.PauseJob | Given a job identifier, pause the specified job |
| org.apache.acf.crawler.RestartJob | Given a job identifier, restart the specified job |
| org.apache.acf.crawler.RunDocumentStatus | Run a document status report |
| org.apache.acf.crawler.RunMaxActivityHistory | Run a maximum activity report |
| org.apache.acf.crawler.RunMaxBandwidthHistory | Run a maximum bandwidth report |
| org.apache.acf.crawler.RunQueueStatus | Run a queue status report |
| org.apache.acf.crawler.RunResultHistory | Run a result history report |
| org.apache.acf.crawler.RunSimpleHistory | Run a simple history report |
| org.apache.acf.crawler.StartJob | Start a job |
| org.apache.acf.crawler.WaitForJobDeleted | After a job has been deleted, wait until the
delete has completed |
| org.apache.acf.crawler.WaitForJobInactive | After a job has been started or aborted, wait
until the job ceases all activity |
| org.apache.acf.crawler.WaitJobPaused | After a job has been paused, wait for the pause to
take effect |
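Since each command is an ordinary Java class with a main entry point, a script can shell out to it. The classpath and argument handling in this sketch are assumptions for illustration only; consult each command's source for its actual argument list:

```python
import subprocess

def command_argv(class_name, *args, classpath="acf.jar"):
    """Build the java invocation for an ACF command class.
    The classpath default here is a placeholder, not a real jar name."""
    return ["java", "-cp", classpath, class_name] + list(args)

def run_command(class_name, *args):
    """Execute the command and capture its output."""
    return subprocess.run(command_argv(class_name, *args),
                          capture_output=True, text=True)

# e.g. run_command("org.apache.acf.crawler.ListJobs")
```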

h3. Control by direct code

Control by direct Java code is quite a reasonable thing to do.  The sources of the above commands
should give a pretty clear idea of how to proceed, if that's the route you want to take.


h3. Caveats

The above commands know nothing about the differences between connection types.  Instead,
they deal with configuration and specification information in the form of XML documents. 
Normally, these XML documents are hidden from a system integrator, unless they happen to look
into the database with a tool such as psql.  But the API commands above will often require
such XML documents to be included as part of the command execution.

This has one major consequence.  Any application that would manipulate connections and jobs
directly cannot be connection-type independent - these applications must know the proper form
of XML to submit to the command.  So, it is not possible to use these command APIs to write
one's own UI wrapper, without sacrificing some of the repository independence that ACF by
itself maintains.

