From: imaxon@apache.org
To: commits@asterixdb.apache.org
Date: Tue, 25 Apr 2017 01:55:41 -0000
Subject: [34/36] asterixdb-site git commit: Add 0.9.1 Documentation

http://git-wip-us.apache.org/repos/asf/asterixdb-site/blob/100cb803/content/docs/0.9.1/aql/externaldata.html
----------------------------------------------------------------------
diff --git a/content/docs/0.9.1/aql/externaldata.html b/content/docs/0.9.1/aql/externaldata.html
new file mode 100644
index 0000000..54c92b5
--- /dev/null
+++ b/content/docs/0.9.1/aql/externaldata.html
@@ -0,0 +1,758 @@

AsterixDB – Accessing External Data in AsterixDB
Accessing External Data in AsterixDB

+ +
+

Introduction [Back to TOC]

+

Data that needs to be processed by AsterixDB could be residing outside AsterixDB storage. Examples include data files on a distributed file system such as HDFS or on the local file system of a machine that is part of an AsterixDB cluster. For AsterixDB to process such data, an end-user may create a regular dataset in AsterixDB (a.k.a. an internal dataset) and load the dataset with the data. AsterixDB also supports ‘‘external datasets’’ so that it is not necessary to “load” all data prior to using it. This also avoids creating multiple copies of data and the need to keep the copies in sync.

+
+

Adapter for an External Dataset [Back to TOC]

+

External data is accessed using wrappers (adapters in AsterixDB) that abstract away the mechanism of connecting with an external service, receiving its data and transforming the data into ADM objects that are understood by AsterixDB. AsterixDB comes with built-in adapters for common storage systems such as HDFS or the local file system.

+
+

Builtin Adapters [Back to TOC]

+

AsterixDB offers a set of builtin adapters that can be used to query external data or for loading data into an internal dataset using a load statement or a data feed. Each adapter requires specifying the format of the data in order to be able to parse objects correctly. Using adapters with feeds, the parameter output-type must also be specified.

+

Following is a listing of existing built-in adapters and their configuration parameters:

  1. localfs: used for reading data stored in a local filesystem in one or more of the node controllers
     • path: A fully qualified path of the form host://absolute_path. Comma separated list if there are multiple directories or files
     • expression: A regular expression to match and filter against file names
  2. hdfs: used for reading data stored in an HDFS instance
     • path: A fully qualified path of the form host://absolute_path. Comma separated list if there are multiple directories or files
     • expression: A regular expression to match and filter against file names
     • input-format: A fully qualified name or an alias for a class of HDFS input format
     • hdfs: The HDFS name node URL
  3. socket: used for listening to connections that send data streams through one or more sockets
     • sockets: comma separated list of sockets to listen to
     • address-type: either IP if the list uses IP addresses, or NC if the list uses NC names
  4. socket_client: used for connecting to one or more sockets and reading data streams
     • sockets: comma separated list of sockets to connect to
  5. twitter_push: used for establishing a connection and subscribing to a Twitter feed
     • consumer.key: access parameter provided by Twitter OAuth
     • consumer.secret: access parameter provided by Twitter OAuth
     • access.token: access parameter provided by Twitter OAuth
     • access.token.secret: access parameter provided by Twitter OAuth
  6. twitter_pull: used for polling a Twitter feed for tweets based on a configurable frequency
     • consumer.key: access parameter provided by Twitter OAuth
     • consumer.secret: access parameter provided by Twitter OAuth
     • access.token: access parameter provided by Twitter OAuth
     • access.token.secret: access parameter provided by Twitter OAuth
     • query: Twitter query string
     • interval: poll interval in seconds
  7. rss: used for reading an RSS feed
     • url: a comma separated list of RSS URLs
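For instance, loading an internal dataset from a file on the local file system with the localfs adapter looks like the following (a minimal sketch; it assumes an internal dataset LineitemInternal with a suitable type already exists and that the file is present at the given path on that host):

    load dataset LineitemInternal using localfs
    (("path"="127.0.0.1:///home/joe/lineitem.tbl"),
    ("format"="delimited-text"),
    ("delimiter"="|"));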
+
+

Creating an External Dataset [Back to TOC]

+

As an example we consider the Lineitem dataset from the TPCH schema. We assume that you have successfully created an AsterixDB instance following the instructions at Installing AsterixDB Using Managix. For constructing an example, we assume a single machine setup..

+

Similar to a regular dataset, an external dataset has an associated datatype. We shall first create the datatype associated with each object in Lineitem data. Paste the following in the query textbox on the webpage at http://127.0.0.1:19001 and hit ‘Execute’.

+ +
+
+
    create dataverse ExternalFileDemo;
+    use dataverse ExternalFileDemo;
+
+    create type LineitemType as closed {
+      l_orderkey:int32,
+      l_partkey: int32,
+      l_suppkey: int32,
+      l_linenumber: int32,
+      l_quantity: double,
+      l_extendedprice: double,
+      l_discount: double,
+      l_tax: double,
+      l_returnflag: string,
+      l_linestatus: string,
+      l_shipdate: string,
+      l_commitdate: string,
+      l_receiptdate: string,
+      l_shipinstruct: string,
+      l_shipmode: string,
+      l_comment: string}
+
+

Here, we describe two scenarios.

+
+

1) Data file resides on the local file system of a host

+

Prerequisite: The host is a part of the ASTERIX cluster.

+

Earlier, we assumed a single machine ASTERIX setup. To satisfy the prerequisite, log-in to the machine running ASTERIX.

+ +
    + +
  • Download the data file to an appropriate location. We denote this location by SOURCE_PATH.
  • +
+

ASTERIX provides a built-in adapter for data residing on the local file system. The adapter is referred by its alias- ‘localfs’. We create an external dataset named Lineitem and use the ‘localfs’ adapter.

+ +
+
+
    create external dataset Lineitem(LineitemType)
+    using localfs
+
+

Above, the definition is not complete as we need to provide a set of parameters that are specific to the source file.

    Parameter   Description
    ---------   -----------
    path        A fully qualified path of the form host://<absolute path>.
                Use a comma separated list if there are multiple files,
                e.g. host1://<absolute path>, host2://<absolute path> and so forth.
    format      The format for the content. Use 'adm' for data in ADM (ASTERIX Data Model)
                or JSON format. Use 'delimited-text' if fields are separated by a
                delimiting character (e.g., CSV).
    delimiter   The delimiting character in the source file if format is 'delimited-text'.
+

As we are using a single single machine ASTERIX instance, we use 127.0.0.1 as host in the path parameter. We complete the create dataset statement as follows.

+ +
+
+
    use dataverse ExternalFileDemo;
+
+    create external dataset Lineitem(LineitemType)
+    using localfs
+    (("path"="127.0.0.1://SOURCE_PATH"),
+    ("format"="delimited-text"),
+    ("delimiter"="|"));
+
+

Please substitute SOURCE_PATH with the absolute path to the source file on the local file system.

+
+

Common source of error

+

An incorrect value for the path parameter will give the following exception message when the dataset is used in a query.

+ +
+
+
    org.apache.hyracks.algebricks.common.exceptions.AlgebricksException: org.apache.hyracks.api.exceptions.HyracksDataException: org.apache.hyracks.api.exceptions.HyracksDataException: Job failed.
+
+

Verify the correctness of the path parameter provided to the localfs adapter. Note that the path parameter must be an absolute path to the data file. For e.g. if you saved your file in your home directory (assume it to be /home/joe), then the path value should be

+ +
+
+
    127.0.0.1:///home/joe/lineitem.tbl.
+
+

In your web-browser, navigate to 127.0.0.1:19001 and paste the above to the query text box. Finally hit ‘Execute’.

+

Next we move over to the the section Writing Queries against an External Dataset and try a sample query against the external dataset.

+
+

2) Data file resides on an HDFS instance

+

rerequisite: It is required that the Namenode and HDFS Datanodes are reachable from the hosts that form the AsterixDB cluster. AsterixDB provides a built-in adapter for data residing on HDFS. The HDFS adapter can be referred (in AQL) by its alias - ‘hdfs’. We can create an external dataset named Lineitem and associate the HDFS adapter with it as follows;

+ +
+
+
    create external dataset Lineitem(LineitemType)
+    using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),...,("input-format"="rc-input-format"));
+
+

The expected parameters are described below:

    Parameter          Description
    ---------          -----------
    hdfs               The HDFS URL
    path               The absolute path to the source HDFS file or directory. Use a comma
                       separated list if there are multiple files or directories.
    input-format       The associated input format. Use 'text-input-format' for text files,
                       'sequence-input-format' for Hadoop sequence files, 'rc-input-format'
                       for Hadoop Object Columnar files, or a fully qualified name of an
                       implementation of org.apache.hadoop.mapred.InputFormat.
    format             The format of the input content. Use 'adm' for text data in ADM
                       (ASTERIX Data Model) or JSON format, 'delimited-text' for text
                       delimited data that has fields separated by a delimiting character,
                       or 'binary' for other data.
    delimiter          The delimiting character in the source file if format is 'delimited-text'.
    parser             The parser used to parse HDFS objects if the format is 'binary'. Use
                       'hive-parser' for data deserialized by a Hive SerDe (AsterixDB can
                       understand deserialized Hive objects) or a fully qualified class name
                       of a user-implemented parser that implements the interface
                       org.apache.asterix.external.input.InputParser.
    hive-serde         The Hive SerDe used to deserialize HDFS objects if the format is
                       'binary' and the parser is 'hive-parser'. Use a fully qualified name
                       of a class implementation of org.apache.hadoop.hive.serde2.SerDe.
    local-socket-path  The UNIX domain socket path if local short-circuit reads are enabled
                       in the HDFS instance.
+

Difference between ‘input-format’ and ‘format’

+

input-format: Files stored under HDFS have an associated storage format. For example, TextInputFormat represents plain text files. SequenceFileInputFormat indicates binary compressed files. RCFileInputFormat corresponds to objects stored in a object columnar fashion. The parameter ‘input-format’ is used to distinguish between these and other HDFS input formats.

+

format: The parameter ‘format’ refers to the type of the data contained in the file. For example, data contained in a file could be in json or ADM format, could be in delimited-text with fields separated by a delimiting character or could be in binary format.

+

As an example. consider the data file. The file is a text file with each line representing a object. The fields in each object are separated by the ‘|’ character.

+

We assume the HDFS URL to be hdfs://localhost:54310. We further assume that the example data file is copied to HDFS at a path denoted by “/asterix/Lineitem.tbl”.

+

The complete set of parameters for our example file are as follows. ((“hdfs”=“hdfs://localhost:54310”,(“path”=“/asterix/Lineitem.tbl”),(“input-format”=“text- input-format”),(“format”=“delimited-text”),(“delimiter”=“|”))

+
+

Using the Hive Parser

+

if a user wants to create an external dataset that uses hive-parser to parse HDFS objects, it is important that the datatype associated with the dataset matches the actual data in the Hive table for the correct initialization of the Hive SerDe. Here is the conversion from the supported Hive data types to AsterixDB data types:

    Hive                 AsterixDB
    ----                 ---------
    BOOLEAN              Boolean
    BYTE (TINY INT)      Int8
    DOUBLE               Double
    FLOAT                Float
    INT                  Int32
    LONG (BIG INT)       Int64
    SHORT (SMALL INT)    Int16
    STRING               String
    TIMESTAMP            Datetime
    DATE                 Date
    STRUCT               Nested Object
    LIST                 OrderedList or UnorderedList
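As an illustration of this mapping, a closed AsterixDB datatype matching a hypothetical Hive table with columns (id INT, name STRING, price DOUBLE, added TIMESTAMP) could be declared as follows (a sketch; the table and field names are made up for illustration):

    create type HiveProductType as closed {
      id: int32,
      name: string,
      price: double,
      added: datetime
    }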
+
+

Examples of dataset definitions for external datasets

+

Example 1: We can modify the create external dataset statement as follows:

+ +
+
+
    create external dataset Lineitem(LineitemType)
+    using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),("input-format"="text-input-format"),("format"="delimited-text"),("delimiter"="|"));
+
+

Example 2: Here, we create an external dataset of lineitem objects stored in sequence files that has content in ADM format:

+ +
+
+
    create external dataset Lineitem(LineitemType)
+    using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/SequenceLineitem.tbl"),("input-format"="sequence-input-format"),("format"="adm"));
+
+

Example 3: Here, we create an external dataset of lineitem objects stored in object-columnar files that has content in binary format parsed using hive-parser with hive ColumnarSerde:

+ +
+
+
    create external dataset Lineitem(LineitemType)
+    using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/RCLineitem.tbl"),("input-format"="rc-input-format"),("format"="binary"),("parser"="hive-parser"),("hive-serde"="org.apache.hadoop.hive.serde2.columnar.ColumnarSerde"));
+
+
+

Writing Queries against an External Dataset [Back to TOC]

+

You may write AQL queries against an external dataset in exactly the same way that queries are written against internal datasets. The following is an example of an AQL query that applies a filter and returns an ordered result.

+ +
+
+
    use dataverse ExternalFileDemo;
+
+    for $c in dataset('Lineitem')
+    where $c.l_orderkey <= 3
+    order by $c.l_orderkey, $c.l_linenumber
+    return $c
+
+
+

Building Indexes over External Datasets [Back to TOC]

+

AsterixDB supports building B-Tree and R-Tree indexes over static data stored in the Hadoop Distributed File System. To create an index, first create an external dataset over the data as follows

+ +
+
+
    create external dataset Lineitem(LineitemType)
+    using hdfs(("hdfs"="hdfs://localhost:54310"),("path"="/asterix/Lineitem.tbl"),("input-format"="text-input-format"),("format"="delimited-text"),("delimiter"="|"));
+
+

You can then create a B-Tree index on this dataset instance as if the dataset was internally stored as follows:

+ +
+
+
    create index PartkeyIdx on Lineitem(l_partkey);
+
+

You could also create an R-Tree index as follows:

+ +
+
+
    create index IndexName on DatasetName(attribute-name) type rtree;
+
+

After building the indexes, the AsterixDB query compiler can use them to access the dataset and answer queries in a more cost effective manner. AsterixDB can read all HDFS input formats, but indexes over external datasets can currently be built only for HDFS datasets with ‘text-input-format’, ‘sequence-input-format’ or ‘rc-input-format’.

+
+

External Data Snapshots [Back to TOC]

+

An external data snapshot represents the status of a dataset’s files in HDFS at a point in time. Upon creating the first index over an external dataset, AsterixDB captures and stores a snapshot of the dataset in HDFS. Only objects present at the snapshot capture time are indexed, and any additional indexes created afterwards will only contain data that was present at the snapshot capture time thus preserving consistency across all indexes of a dataset. To update all indexes of an external dataset and advance the snapshot time to be the present time, a user can use the refresh external dataset command as follows:

+ +
+
+
    refresh external dataset DatasetName;
+
+

After a refresh operation commits, all of the dataset’s indexes will reflect the status of the data as of the new snapshot capture time.

+
+

Frequently Asked Questions [Back to TOC]

+

Q. I added data to my dataset in HDFS, Will the dataset indexes in AsterixDB be updated automatically?

+

A. No, you must use the refresh external dataset statement to make the indexes aware of any changes in the dataset files in HDFS.

+

Q. Why doesn’t AsterixDB update external indexes automatically?

+

A. Since external data is managed by other users/systems with mechanisms that are system dependent, AsterixDB has no way of knowing exactly when data is added or deleted in HDFS, so the responsibility of refreshing indexes are left to the user. A user can use internal datasets for which AsterixDB manages the data and its indexes.

+

Q. I created an index over an external dataset and then added some data to my HDFS dataset. Will a query that uses the index return different results from a query that doesn’t use the index?

+

A. No, queries’ results are access path independent and the stored snapshot is used to determines which data are going to be included when processing queries.

+

Q. I created an index over an external dataset and then deleted some of my dataset’s files in HDFS, Will indexed data access still return the objects in deleted files?

+

A. No. When AsterixDB accesses external data, with or without the use of indexes, it only access files present in the file system at runtime.

+

Q. I submitted a refresh command on a an external dataset and a failure occurred, What has happened to my indexes?

+

A. External Indexes Refreshes are treated as a single transaction. In case of a failure, a rollback occurs and indexes are restored to their previous state. An error message with the cause of failure is returned to the user.

+

Q. I was trying to refresh an external dataset while some queries were accessing the data using index access method. Will the queries be affected by the refresh operation?

+

A. Queries have access to external dataset indexes state at the time where the queries are submitted. A query that was submitted before a refresh commits will only access data under the snapshot taken before the refresh; queries that are submitted after the refresh commits will access data under the snapshot taken after the refresh.

+

Q. What happens when I try to create an additional index while a refresh operation is in progress or vice versa?

+

A. The create index operation will wait until the refresh commits or aborts and then the index will be built according to the external data snapshot at the end of the refresh operation. Creating indexes and refreshing datasets are mutually exclusive operations and will not be run in parallel. Multiple indexes can be created in parallel, but not multiple refresh operations.

+
+
+
http://git-wip-us.apache.org/repos/asf/asterixdb-site/blob/100cb803/content/docs/0.9.1/aql/filters.html
----------------------------------------------------------------------
diff --git a/content/docs/0.9.1/aql/filters.html b/content/docs/0.9.1/aql/filters.html
new file mode 100644
index 0000000..42c97ed
--- /dev/null
+++ b/content/docs/0.9.1/aql/filters.html
@@ -0,0 +1,313 @@

AsterixDB – Filter-Based LSM Index Acceleration
Filter-Based LSM Index Acceleration

+ +
+

Motivation [Back to TOC]

+

Traditional relational databases usually employ conventional index structures such as B+ trees due to their low read latency. However, such traditional index structures use in-place writes to perform updates, resulting in costly random writes to disk. Today’s emerging applications often involve insert-intensive workloads for which the cost of random writes prohibits efficient ingestion of data. Consequently, popular NoSQL systems such as Cassandra, HBase, LevelDB, BigTable, etc. have adopted Log-Structured Merge (LSM) Trees as their storage structure. LSM-trees avoids the cost of random writes by batching updates into a component of the index that resides in main memory – an in-memory component. When the space occupancy of the in-memory component exceeds a specified threshold, its entries are flushed to disk forming a new component – a disk component. As disk components accumulate on disk, they are periodically merged together subject to a merge policy that decides when and what to merge. The benefit of the LSM-trees comes at the cost of possibly sacrificing read efficiency, but, it has been shown in previous studies that these inefficiencies can be mostly mitigated.

+

AsterixDB has also embraced LSM-trees, not just by using them as primary indexes, but also by using the same LSM-ification technique for all of its secondary index structures. In particular, AsterixDB adopted a generic framework for converting a class of indexes (that includes conventional B+ trees, R trees, and inverted indexes) into LSM-based secondary indexes, allowing higher data ingestion rates. In fact, for certain index structures, our results have shown that using an LSM-based version of an index can be made to significantly outperform its conventional counterpart for both ingestion and query speed (an example of such an index being the R-tree for spatial data).

+

Since an LSM-based index naturally partitions data into multiple disk components, it is possible, when answering certain queries, to exploit partitioning to only access some components and safely filter out the remaining components, thus reducing query times. For instance, referring to our TinySocial example, suppose a user always retrieves tweets from the TweetMessages dataset based on the send-time field (e.g., tweets posted in the last 24 hours). Since there is not a secondary index on the send-time field, the only available option for AsterixDB would be to scan the whole TweetMessages dataset and then apply the predicate as a post-processing step. However, if disk components of the primary index were tagged with the minimum and maximum timestamp values of the objects they contain, we could utilize the tagged information to directly access the primary index and prune comp onents that do not match the query predicate. Thus, we could save substantial cost by avoiding scanning the whole dataset and only access the relevant components. We simply call such tagging information that are associated with components, filters. (Note that even if there were a secondary index on send-time field, using filters could save substantial cost by avoiding accessing the secondary index, followed by probing the primary index for every fetched entry.) Moreover, the same filtering technique can also be used with any secondary LSM index (e.g., an LSM R-tree), in case the query contains multiple predicates (e.g., spatial and temporal predicates), to obtain similar pruning power.

+
+

Filters in AsterixDB [Back to TOC]

+

We have added support for LSM-based filters to all of AsterixDB’s index types. To enable the use of filters, the user must specify the filter’s key when creating a dataset, as shown below:

+
+
+

Creating a Dataset with a Filter

+ +
+
+
    create dataset Tweets(TweetType) primary key tweetid with filter on send-time;
+
+

Filters can be created on any totally ordered datatype (i.e., any field that can be indexed using a B+ -tree), such as integers, doubles, floats, UUIDs, datetimes, etc.

+

When a dataset with a filter is created, the name of the filter’s key field is persisted in the Metadata.Dataset dataset (which is the metadata dataset that stores the details of each dataset in an AsterixDB instance) so that DML operations against the dataset can recognize the existence of filters and can update them or utilize them accordingly. Creating a dataset with a filter in AsterixDB implies that the primary and all secondary indexes of that dataset will maintain filters on their disk components. Once a filtered dataset is created, the user can use the dataset normally (just like any other dataset). AsterixDB will automatically maintain the filters and will leverage them to efficiently answer queries whenever possible (i.e., when a query has predicates on the filter’s key).

+
+

Filters and Merge Policies [Back to TOC]

+

The AsterixDB default merge policy, the prefix merge policy, relies on component sizes and the number of components to decide which components to merge. This merge policy has proven to provide excellent performance for both ingestion and queries. However, when evaluating our filtering solution with the prefix policy, we observed a behavior that can reduce filter effectiveness. In particular, we noticed that under the prefix merge policy, the disk components of a secondary index tend to be constantly merged into a single component. This is because the prefix policy relies on a single size parameter for all of the indexes of a dataset. This parameter is typically chosen based on the sizes of the disk components of the primary index, which tend to be much larger than the sizes of the secondary indexes’ disk components. This difference caused the prefix merge policy to behave similarly to the constant merge policy (i.e., relatively poorly) when applied to secondary indexes in the sense that the secondary indexes are constantly merged into a single disk component. Consequently, the effectiveness of filters on secondary indexes was greatly reduced under the prefix-merge policy, but they were still effective when probing the primary index. Based on this behavior, we developed a new merge policy, an improved version of the prefix policy, called the correlated-prefix policy. The basic idea of this policy is that it delegates the decision of merging the disk components of all the indexes in a dataset to the primary index. When the policy decides that the primary index needs to be merged (using the same decision criteria as for the prefix policy), then it will issue successive merge requests to the I/O scheduler on behalf of all other indexes associated with the same dataset. The end result is that secondary indexes will always have the same number of disk components as their primary index under the correlated-prefix merge policy. This has improved query perfor mance, since disk components of secondary indexes now have a much better chance of being pruned.

+
+
+
+
http://git-wip-us.apache.org/repos/asf/asterixdb-site/blob/100cb803/content/docs/0.9.1/aql/fulltext.html
----------------------------------------------------------------------
diff --git a/content/docs/0.9.1/aql/fulltext.html b/content/docs/0.9.1/aql/fulltext.html
new file mode 100644
index 0000000..16ce5d0
--- /dev/null
+++ b/content/docs/0.9.1/aql/fulltext.html
@@ -0,0 +1,372 @@

AsterixDB – AsterixDB Support of Full-text search queries

AsterixDB Support of Full-text search queries

+ +
+

Motivation [Back to TOC]

+

Full-Text Search (FTS) queries are widely used in applications where users need to find records that satisfy an FTS predicate, i.e., where simple string-based matching is not sufficient. These queries are important when finding documents that contain a certain keyword is crucial. FTS queries are different from substring matching queries in that FTS queries find their query predicates as exact keywords in the given string, rather than treating a query predicate as a sequence of characters. For example, an FTS query that finds “rain” correctly returns a document when it contains “rain” as a word. However, a substring-matching query returns a document whenever it contains “rain” as a substring, for instance, a document with “brain” or “training” would be returned as well.

+
+

Syntax [Back to TOC]

+

The syntax of AsterixDB FTS follows a portion of the XQuery FullText Search syntax. Two basic forms are as follows:

+ +
+
+
    ftcontains(Expression1, Expression2, {FullTextOption})
+    ftcontains(Expression1, Expression2)
+
+

For example, we can execute the following query to find tweet messages where the message-text field includes “voice” as a word. Please note that an FTS search is case-insensitive. Thus, “Voice” or “voice” will be evaluated as the same word.

+ +
+
+
    use dataverse TinySocial;
+
+    for $msg in dataset TweetMessages
+    where ftcontains($msg.message-text, "voice", {"mode":"any"})
+    return {"id": $msg.id}
+
+

The DDL and DML of TinySocial can be found in ADM: Modeling Semistructed Data in AsterixDB.

+

The same query can be also expressed in the SQL++.

+ +
+
+
    use TinySocial;
+
+    select element {"id":msg.id}
+    from TweetMessages as msg
+    where TinySocial.ftcontains(msg.`message-text`, "voice", {"mode":"any"})
+
+

The Expression1 is an expression that should be evaluable as a string at runtime as in the above example where $msg.message-text is a string field. The Expression2 can be a string, an (un)ordered list of string value(s), or an expression. In the last case, the given expression should be evaluable into one of the first two types, i.e., into a string value or an (un)ordered list of string value(s).

+

The following examples are all valid expressions.

+ +
+
+
   ... where ftcontains($msg.message-text, "sound")
+   ... where ftcontains($msg.message-text, "sound", {"mode":"any"})
+   ... where ftcontains($msg.message-text, ["sound", "system"], {"mode":"any"})
+   ... where ftcontains($msg.message-text, {{"speed", "stand", "customization"}}, {"mode":"all"})
+   ... where ftcontains($msg.message-text, let $keyword_list := ["voice", "system"] return $keyword_list, {"mode":"all"})
+   ... where ftcontains($msg.message-text, $keyword_list, {"mode":"any"})
+
+

In the last example above, $keyword_list should evaluate to a string or an (un)ordered list of string value(s).

+

The last FullTextOption parameter clarifies the given FTS request. If you omit the FullTextOption parameter, then the default value will be set for each possible option. Currently, we only have one option named mode. And as we extend the FTS feature, more options will be added. Please note that the format of FullTextOption is a record, thus you need to put the option(s) in a record {}. The mode option indicates whether the given FTS query is a conjunctive (AND) or disjunctive (OR) search request. This option can be either “any” or “all”. The default value for mode is “all”. If one specifies “any”, a disjunctive search will be conducted. For example, the following query will find documents whose message-text field contains “sound” or “system”, so a document will be returned if it contains eit her “sound”, “system”, or both of the keywords.

+ +
+
+
   ... where ftcontains($msg.message-text, ["sound", "system"], {"mode":"any"})
+
+

The other option parameter,“all”, specifies a conjunctive search. The following examples will find the documents whose message-text field contains both “sound” and “system”. If a document contains only “sound” or “system” but not both, it will not be returned.

+ +
+
+
   ... where ftcontains($msg.message-text, ["sound", "system"], {"mode":"all"})
+   ... where ftcontains($msg.message-text, ["sound", "system"])
+
+

Currently AsterixDB doesn’t (yet) support phrase searches, so the following query will not work.

+ +
+
+
   ... where ftcontains($msg.message-text, "sound system", {"mode":"any"})
+
+

As a workaround solution, the following query can be used to achieve a roughly similar goal. The difference is that the following queries will find documents where $msg.message-text contains both “sound” and “system”, but the order and adjacency of “sound” and “system” are not checked, unlike in a phrase search. As a result, the query below would also return documents with “sound system can be installed.”, “system sound is perfect.”, or “sound is not clear. You may need to install a new system.”

+ +
+
+
   ... where ftcontains($msg.message-text, ["sound", "system"], {"mode":"all"})
+   ... where ftcontains($msg.message-text, ["sound", "system"])
+
+
+

Creating and utilizing a Full-text index [Back to TOC]

+

When there is a full-text index on the field that is being searched, rather than scanning all records, AsterixDB can utilize that index to expedite the execution of a FTS query. To create a full-text index, you need to specify the index type as fulltext in your DDL statement. For instance, the following DDL statement create a full-text index on the TweetMessages.message-text attribute.

+ +
+
+
create index messageFTSIdx on TweetMessages(message-text) type fulltext;
+
+
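With this index in place, the earlier full-text query on TweetMessages can be expedited through messageFTSIdx rather than a full dataset scan; for example, re-running the query from the Syntax section:

    use dataverse TinySocial;

    for $msg in dataset TweetMessages
    where ftcontains($msg.message-text, "voice", {"mode":"any"})
    return {"id": $msg.id}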