From: shwethags@apache.org
To: commits@atlas.incubator.apache.org
Date: Sat, 09 Jul 2016 18:42:25 -0000
Subject: [13/14] incubator-atlas-website git commit: updating site for 0.7 release

http://git-wip-us.apache.org/repos/asf/incubator-atlas-website/blob/60041d8d/0.7.0-incubating/Configuration.html
----------------------------------------------------------------------
diff --git a/0.7.0-incubating/Configuration.html b/0.7.0-incubating/Configuration.html
new file mode 100644
index 0000000..58f157a
--- /dev/null
+++ b/0.7.0-incubating/Configuration.html
@@ -0,0 +1,459 @@

Apache Atlas – Configuring Apache Atlas - Application Properties
+

Configuring Apache Atlas - Application Properties

+

All configuration in Atlas uses Java properties-style configuration. The main configuration file is atlas-application.properties, which is in the conf directory at the deployed location. It consists of the following sections:

+
+

Graph Configs

+
+

Graph persistence engine

+

This section sets up the graph db - Titan - to use a persistence engine. Please refer to the Titan documentation for more details. The example below uses BerkeleyDB JE.

+
+
+atlas.graph.storage.backend=berkeleyje
+atlas.graph.storage.directory=data/berkley
+
+
+
+
Graph persistence engine - HBase
+

Basic configuration

+
+
+atlas.graph.storage.backend=hbase
+# For standalone mode, specify localhost
+# For distributed mode, specify the ZooKeeper quorum here. For more information, refer to http://s3.thinkaurelius.com/docs/titan/current/hbase.html#_remote_server_mode_2
+atlas.graph.storage.hostname=<ZooKeeper Quorum>
+
+
+

The HBASE_CONF_DIR environment variable needs to be set to point to the HBase client configuration directory, which is added to the classpath when Atlas starts up. hbase-site.xml needs to have the following properties set according to the cluster setup:

+
+
+# Set below to /hbase-secure if the HBase server is set up in secure mode
+zookeeper.znode.parent=/hbase-unsecure
+
+
+
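As an illustration only (not part of the stock configuration), HBASE_CONF_DIR can be exported in conf/atlas-env.sh so that the HBase client configuration is on the classpath when Atlas starts; the /etc/hbase/conf path below is hypothetical and should be adjusted to wherever hbase-site.xml lives on your cluster:

# in conf/atlas-env.sh - point this at the directory containing the HBase client configs (hbase-site.xml)
export HBASE_CONF_DIR=/etc/hbase/conf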

Advanced configuration

+

# If you are planning to use any of the configs mentioned below, they need to be prefixed with "atlas.graph." to take effect in Atlas. For example, Titan's storage.hbase.table becomes atlas.graph.storage.hbase.table. Refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/titan-config-ref.html#_storage_hbase

+

Permissions

+

When Atlas is configured with HBase as the storage backend, the graph db (Titan) needs sufficient user permissions to create and access an HBase table. In a secure cluster, it may be necessary to grant permissions to the 'atlas' user for the 'titan' table.

+

With Ranger, a policy can be configured for 'titan'.

+

Without Ranger, HBase shell can be used to set the permissions.

+
+
+   su hbase
+   kinit -k -t <hbase keytab> <hbase principal>
+   echo "grant 'atlas', 'RWXCA', 'titan'" | hbase shell
+
+
+

Note that if the embedded-hbase-solr profile is used then HBase is included in the distribution so that a standalone instance of HBase can be started as the default storage backend for the graph repository. Using the embedded-hbase-solr profile will configure Atlas so that an HBase instance will be started and stopped along with the Atlas server by default. To use the embedded-hbase-solr profile, please see "Building Atlas" in the Installation Steps section.

+
+

Graph Search Index

+

This section sets up the graph db - Titan - to use a search indexing system. The example configuration below sets it up to use an embedded Elasticsearch indexing system.

+
+
+atlas.graph.index.search.backend=elasticsearch
+atlas.graph.index.search.directory=data/es
+atlas.graph.index.search.elasticsearch.client-only=false
+atlas.graph.index.search.elasticsearch.local-mode=true
+atlas.graph.index.search.elasticsearch.create.sleep=2000
+
+
+
+
Graph Search Index - Solr
+

Please note that Solr installation in cloud mode is a prerequisite before configuring Solr as the search indexing backend. Refer to the InstallationSteps section for Solr installation/configuration.

+
+
+ atlas.graph.index.search.backend=solr5
+ atlas.graph.index.search.solr.mode=cloud
+ atlas.graph.index.search.solr.zookeeper-url=<the ZK quorum setup for solr as comma separated value> eg: 10.1.6.4:2181,10.1.6.5:2181
+
+
+

Also note that if the embedded-hbase-solr profile is used then Solr is included in the distribution so that a standalone instance of Solr can be started as the default search indexing backend. Using the embedded-hbase-solr profile will configure Atlas so that the standalone Solr instance will be started and stopped along with the Atlas server by default. To use the embedded-hbase-solr profile, please see "Building Atlas" in the Installation Steps section.

+
+

Choosing between Persistence and Indexing Backends

+

Refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/bdb.html and http://s3.thinkaurelius.com/docs/titan/0.5.4/hbase.html for choosing between the persistence backends. BerkeleyDB is suitable for smaller data sets, in the range of up to 10 million vertices, with ACID guarantees. HBase, on the other hand, does not provide ACID guarantees but is able to scale for larger graphs. HBase also provides HA inherently.

+
+

Choosing between Persistence Backends

+

Refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/bdb.html and http://s3.thinkaurelius.com/docs/titan/0.5.4/hbase.html for choosing between the persistence backends. BerkeleyDB is suitable for smaller data sets, in the range of up to 10 million vertices, with ACID guarantees. HBase, on the other hand, does not provide ACID guarantees but is able to scale for larger graphs. HBase also provides HA inherently.

+
+

Choosing between Indexing Backends

+

Refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/elasticsearch.html and http://s3.thinkaurelius.com/docs/titan/0.5.4/solr.html for choosing between Elasticsearch and Solr. Solr in cloud mode is the recommended setup.

+
+

Switching Persistence Backend

+

To switch the storage backend from BerkeleyDB to HBase (or vice versa), refer to the "Graph Persistence Engine" documentation above and restart Atlas. The data in the indexing backend needs to be cleared as well; otherwise there will be discrepancies between the storage and indexing backends, which could result in errors during search. Elasticsearch runs in embedded mode by default, and its data can be cleared by deleting the ATLAS_HOME/data/es directory. For Solr, the collections created during Atlas installation - vertex_index, edge_index and fulltext_index - can be deleted, which will clean up the indexes, as sketched below.

+
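A minimal sketch of the index cleanup described above. It assumes the embedded Elasticsearch data directory and the Solr collection names mentioned in this documentation; the 'solr delete' command is available in Solr 5, and the paths should be adjusted to your installation:

# Embedded Elasticsearch: remove the index data directory
rm -rf $ATLAS_HOME/data/es

# Solr: delete the collections created during Atlas installation
$SOLR_HOME/bin/solr delete -c vertex_index
$SOLR_HOME/bin/solr delete -c edge_index
$SOLR_HOME/bin/solr delete -c fulltext_index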
+

Switching Index Backend

+

Switching the index backend requires clearing the persistence backend data as well. Otherwise there will be discrepancies between the persistence and index backends, since switching the indexing backend means the index data will be lost; this leads to "Fulltext" queries not working on the existing data. To clear the data for BerkeleyDB, delete the ATLAS_HOME/data/berkeley directory. To clear the data for HBase, run 'disable titan' and 'drop titan' in the HBase shell (see the sketch below).

+
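A minimal sketch of the persistence cleanup described above; it assumes the default data directory and the 'titan' HBase table referenced in this documentation:

# BerkeleyDB: remove the data directory
rm -rf $ATLAS_HOME/data/berkeley

# HBase: disable and drop the 'titan' table from the HBase shell
echo -e "disable 'titan'\ndrop 'titan'" | hbase shell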
+

Lineage Configs

+

The higher-layer services like lineage, schema, etc. are driven by the type system, and this section encodes the specific types for the Hive data model.

+

# This model reflects the base super types for Data and Process

+
+
+atlas.lineage.hive.table.type.name=DataSet
+atlas.lineage.hive.process.type.name=Process
+atlas.lineage.hive.process.inputs.name=inputs
+atlas.lineage.hive.process.outputs.name=outputs
+
+## Schema
+atlas.lineage.hive.table.schema.query=hive_table where name=?, columns
+
+
+
+

Notification Configs

+

Refer to http://kafka.apache.org/documentation.html#configuration for Kafka configuration. All Kafka configs should be prefixed with 'atlas.kafka.'

+
+
+atlas.notification.embedded=true
+atlas.kafka.data=${sys:atlas.home}/data/kafka
+atlas.kafka.zookeeper.connect=localhost:9026
+atlas.kafka.bootstrap.servers=localhost:9027
+atlas.kafka.zookeeper.session.timeout.ms=400
+atlas.kafka.zookeeper.sync.time.ms=20
+atlas.kafka.auto.commit.interval.ms=1000
+atlas.kafka.hook.group.id=atlas
+
+
+

Note that Kafka group ids are specified per topic. The Kafka group id configuration for entity notifications is 'atlas.kafka.entities.group.id'.

+
+
+atlas.kafka.entities.group.id=<consumer id>
+
+
+

These configuration parameters are useful for setting up Kafka topics via the Atlas-provided scripts, described in the Installation Steps page.

+
+
+# Whether to create the topics automatically, default is true.
+# Comma separated list of topics to be created, default is "ATLAS_HOOK,ATLAS_ENTITIES"
+atlas.notification.topics=ATLAS_HOOK,ATLAS_ENTITIES
+# Number of replicas for the Atlas topics, default is 1. Increase for higher resilience to Kafka failures.
+atlas.notification.replicas=1
+# Enable the below two properties if Kafka is running in Kerberized mode.
+# Set this to the service principal representing the Kafka service
+atlas.notification.kafka.service.principal=kafka/_HOST@EXAMPLE.COM
+# Set this to the location of the keytab file for Kafka
+#atlas.notification.kafka.keytab.location=/etc/security/keytabs/kafka.service.keytab
+
+
+

These configuration parameters are useful for saving messages in case there are issues in reaching Kafka for sending messages.

+
+
+# Whether to save messages that failed to be sent to Kafka, default is true
+atlas.notification.log.failed.messages=true
+# If saving messages is enabled, the file name to save them to. This file will be created under the log directory of the hook's host component - like HiveServer2
+atlas.notification.failed.messages.filename=atlas_hook_failed_messages.log
+
+
+
+

Client Configs

+
+
+atlas.client.readTimeoutMSecs=60000
+atlas.client.connectTimeoutMSecs=60000
+atlas.rest.address=<http/https>://<atlas-fqdn>:<atlas port> - default http://localhost:21000
+
+
+
+

Security Properties

+
+

SSL config

+

The following property is used to toggle the SSL feature.

+
+
+atlas.enableTLS=false
+
+
+
+

High Availability Properties

+

The following properties describe High Availability related configuration options:

+
+
+# Set the following property to true, to enable High Availability. Default = false.
+atlas.server.ha.enabled=true
+
+# Define a unique set of strings to identify each instance that should run an Atlas Web Service instance as a comma separated list.
+atlas.server.ids=id1,id2
+# For each string defined above, define the host and port on which Atlas server binds to.
+atlas.server.address.id1=host1.company.com:21000
+atlas.server.address.id2=host2.company.com:31000
+
+# Specify Zookeeper properties needed for HA.
+# Specify the list of services running Zookeeper servers as a comma separated list.
+atlas.server.ha.zookeeper.connect=zk1.company.com:2181,zk2.company.com:2181,zk3.company.com:2181
+# Specify how many times the connection to the Zookeeper cluster should be retried, in case of any connection issues.
+atlas.server.ha.zookeeper.num.retries=3
+# Specify how long the server should wait before re-attempting a connection to Zookeeper, in case of any connection issues.
+atlas.server.ha.zookeeper.retry.sleeptime.ms=1000
+# Specify how long a Zookeeper session can be inactive before the instance is deemed unreachable.
+atlas.server.ha.zookeeper.session.timeout.ms=20000
+
+# Specify the scheme and the identity to be used for setting up ACLs on nodes created in Zookeeper for HA.
+# The format of these options is <scheme>:<identity>. For more information refer to http://zookeeper.apache.org/doc/r3.2.2/zookeeperProgrammers.html#sc_ZooKeeperAccessControl.
+# The 'acl' option allows you to specify a scheme, identity pair to set up an ACL for.
+atlas.server.ha.zookeeper.acl=auth:sasl:client@company.com
+# The 'auth' option specifies the authentication that should be used for connecting to Zookeeper.
+atlas.server.ha.zookeeper.auth=sasl:client@company.com
+
+# Since Zookeeper is a shared service that is typically used by many components,
+# it is preferable for each component to set its znodes under a namespace.
+# Specify the namespace under which the znodes should be written. Default = /apache_atlas
+atlas.server.ha.zookeeper.zkroot=/apache_atlas
+
+# Specify number of times a client should retry with an instance before selecting another active instance, or failing an operation.
+atlas.client.ha.retries=4
+# Specify interval between retries for a client.
+atlas.client.ha.sleep.interval.ms=5000
+
+
+
+

Server Properties

+
+
+# Set the following property to true, to enable the setup steps to run on each server start. Default = false.
+atlas.server.run.setup.on.start=false
+
+
+
+

Performance configuration items

+

The following properties can be used to tune performance of Atlas under specific circumstances:

+
+
+# The number of times Atlas code tries to acquire a lock (to ensure consistency) while committing a transaction.
+# This should be related to the amount of concurrency expected to be supported by the server. For example, with retries set to 10, up to 100 threads can concurrently create types in the Atlas system.
+# If this is set to a low value (default is 3), concurrent operations might fail with a PermanentLockingException.
+atlas.graph.storage.lock.retries=10
+
+# Milliseconds to wait before evicting a cached entry. This should be > atlas.graph.storage.lock.wait-time x atlas.graph.storage.lock.retries
+# If this is set to a low value (default is 10000), warnings on transactions taking too long will occur in the Atlas application log.
+atlas.graph.storage.cache.db-cache-time=120000
+
+
+
+
http://git-wip-us.apache.org/repos/asf/incubator-atlas-website/blob/60041d8d/0.7.0-incubating/HighAvailability.html
----------------------------------------------------------------------
diff --git a/0.7.0-incubating/HighAvailability.html b/0.7.0-incubating/HighAvailability.html
new file mode 100644
index 0000000..f7edaf6
--- /dev/null
+++ b/0.7.0-incubating/HighAvailability.html
@@ -0,0 +1,405 @@

Apache Atlas – Fault Tolerance and High Availability Options
+ +
+

Fault Tolerance and High Availability Options

+
+

Introduction

+

Apache Atlas uses and interacts with a variety of systems to provide metadata management and data lineage to data administrators. By choosing and configuring these dependencies appropriately, it is possible to achieve a high degree of service availability with Atlas. This document describes the state of high availability support in Atlas, including its capabilities and current limitations, and also the configuration required for achieving this level of high availability.

+

The architecture page in the wiki gives an overview of the various components that make up Atlas. The options described below for the various components assume that context, and it is worthwhile to review the architecture page before proceeding.

+
+

Atlas Web Service

+

Currently, the Atlas Web Service has a limitation that it can only have one active instance at a time. In earlier releases of Atlas, a backup instance could be provisioned and kept available. However, a manual failover was required to make this backup instance active.

+

From this release, Atlas will support multiple instances of the Atlas Web service in an active/passive configuration with automated failover. This means that users can deploy and start multiple instances of the Atlas Web Service on different physical hosts at the same time. One of these instances will be automatically selected as an 'active' instance to service user requests. The others will automatically be deemed 'passive'. If the 'active' instance becomes unavailable either because it is deliberately stopped, or due to unexpected failures, one of the other instances will automatically be elected as an 'active' instance and start to service user requests.

+

An 'active' instance is the only instance that can respond to user requests correctly. It can create, delete, modify or respond to queries on metadata objects. A 'passive' instance will accept user requests, but will redirect them using HTTP redirect to the currently known 'active' instance. Specifically, a passive instance will not itself respond to any queries on metadata objects. However, all instances (both active and passive), will respond to admin requests that return information about that instance.

+

When configured in a High Availability mode, users can get the following operational benefits:

+

+
    +
  • Uninterrupted service during maintenance intervals: If an active instance of the Atlas Web Service needs to be brought down for maintenance, another instance would automatically become active and can service requests.
  • +
  • Uninterrupted service in event of unexpected failures: If an active instance of the Atlas Web Service fails due to software or hardware errors, another instance would automatically become active and can service requests.
+

In the following sub-sections, we describe the steps required to setup High Availability for the Atlas Web Service. We also describe how the deployment and client can be designed to take advantage of this capability. Finally, we describe a few details of the underlying implementation.

+
+

Setting up the High Availability feature in Atlas

+

The following pre-requisites must be met for setting up the High Availability feature.

+

+
    +
  • Ensure that you install Apache Zookeeper on a cluster of machines (a minimum of 3 servers is recommended for production).
  • +
  • Select 2 or more physical machines to run the Atlas Web Service instances on. These machines define what we refer to as a 'server ensemble' for Atlas.
+

To set up High Availability in Atlas, a few configuration options must be defined in the atlas-application.properties file. While the complete list of configuration items is defined in the Configuration Page, this section lists a few of the main options.

+

+
    +
  • High Availability is an optional feature in Atlas. Hence, it must be enabled by setting the configuration option atlas.server.ha.enabled to true.
  • +
  • Next, define a list of identifiers, one for each physical machine you have selected for the Atlas Web Service instance. These identifiers can be simple strings like id1, id2 etc. They should be unique and should not contain a comma.
  • +
  • Define a comma separated list of these identifiers as the value of the option atlas.server.ids.
  • +
  • For each physical machine, list the IP Address/hostname and port as the value of the configuration atlas.server.address.id, where id refers to the identifier string for this physical machine. +
      +
    • For example, if you have selected 2 machines with hostnames host1.company.com and host2.company.com, you can define the configuration options as below:
+
+
+      atlas.server.ids=id1,id2
+      atlas.server.address.id1=host1.company.com:21000
+      atlas.server.address.id2=host2.company.com:21000
+      
+
+

+
    +
  • Define the Zookeeper quorum which will be used by the Atlas High Availability feature.
+
+
+      atlas.server.ha.zookeeper.connect=zk1.company.com:2181,zk2.company.com:2181,zk3.company.com:2181
+      
+
+

+
    +
  • You can review other configuration options that are defined for the High Availability feature, and set them up as desired in the atlas-application.properties file.
  • +
  • For production environments, the components that Atlas depends on must also be set up in High Availability mode. This is described in detail in the following sections. Follow those instructions to setup and configure them.
  • +
  • Install the Atlas software on the selected physical machines.
  • +
  • Copy the atlas-application.properties file created using the steps above to the configuration directory of all the machines.
  • +
  • Start the dependent components.
  • +
  • Start each instance of the Atlas Web Service.
+

To verify that High Availability is working, run the following script on each of the instances where Atlas Web Service is installed.

+
+
+$ATLAS_HOME/bin/atlas_admin.py -status
+
+
+

This script can print one of the values below as response:

+

+
    +
  • ACTIVE: This instance is active and can respond to user requests.
  • +
  • PASSIVE: This instance is PASSIVE. It will redirect any user requests it receives to the current active instance.
  • +
  • BECOMING_ACTIVE: This would be printed if the server is transitioning to become an ACTIVE instance. The server cannot service any metadata user requests in this state.
  • +
  • BECOMING_PASSIVE: This would be printed if the server is transitioning to become a PASSIVE instance. The server cannot service any metadata user requests in this state.
+

Under normal operating circumstances, only one of these instances should print the value ACTIVE as response to the script, and the others would print PASSIVE.

+
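The same check can be performed over REST, which is what proxies and client applications use (see the sections below); the hostname is illustrative and 21000 is the default Atlas port:

# prints a response of the form {Status:ACTIVE} on the active instance, {Status:PASSIVE} on the others
curl http://host1.company.com:21000/api/atlas/admin/status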
+

Configuring clients to use the High Availability feature

+

The Atlas Web Service can be accessed in two ways:

+

+
    +
  • Using the Atlas Web UI: This is a browser based client that can be used to query the metadata stored in Atlas.
  • +
  • Using the Atlas REST API: As Atlas exposes a RESTful API, one can use any standard REST client including libraries in other applications. In fact, Atlas ships with a client called AtlasClient that can be used as an example to build REST client access.
+

In order to take advantage of the High Availability feature in the clients, there are two possible options.

+
+
Using an intermediate proxy
+

The simplest solution to enable highly available access to Atlas is to install and configure some intermediate proxy that has a capability to transparently switch services based on status. One such proxy solution is HAProxy.

+

Here is an example HAProxy configuration that can be used. Note this is provided for illustration only, and not as a recommended production configuration. For that, please refer to the HAProxy documentation for appropriate instructions.

+
+
+frontend atlas_fe
+  bind *:41000
+  default_backend atlas_be
+
+backend atlas_be
+  mode http
+  option httpchk get /api/atlas/admin/status
+  http-check expect string ACTIVE
+  balance roundrobin
+  server host1_21000 host1:21000 check
+  server host2_21000 host2:21000 check backup
+
+listen atlas
+  bind localhost:42000
+
+
+

The above configuration binds HAProxy to listen on port 41000 for incoming client connections. It then routes the connections to either of the hosts host1 or host2 depending on an HTTP status check. The status check is done using an HTTP GET on the REST URL /api/atlas/admin/status, and is deemed successful only if the HTTP response contains the string ACTIVE.

+
+
Using automatic detection of active instance
+

If one does not want to set up and manage a separate proxy, then the other option to use the High Availability feature is to build a client application that is capable of detecting status and retrying operations. In such a setting, the client application can be launched with the URLs of all Atlas Web Service instances that form the ensemble. The client should then call the REST URL /api/atlas/admin/status on each of these to determine which is the active instance. The response from the Active instance would be of the form {Status:ACTIVE}. Also, when the client faces any exceptions in the course of an operation, it should again determine which of the remaining URLs is active and retry the operation.

+
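A minimal shell sketch of this detection logic, assuming illustrative hostnames and the default port; a real client would implement the equivalent in its own language and also retry failed operations against the newly detected active instance:

# probe each server in the ensemble and report the first one whose status is exactly ACTIVE
# (grep -w avoids matching BECOMING_ACTIVE)
for url in http://host1.company.com:21000 http://host2.company.com:21000; do
  if curl -s "$url/api/atlas/admin/status" | grep -qw "ACTIVE"; then
    echo "Active Atlas instance: $url"
    break
  fi
done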

The AtlasClient class that ships with Atlas can be used as an example client library that implements the logic for working with an ensemble and selecting the right Active server instance.

+

Utilities in Atlas, like quick_start.py and import-hive.sh can be configured to run with multiple server URLs. When launched in this mode, the AtlasClient automatically selects and works with the current active instance. If a proxy is set up in between, then its address can be used when running quick_start.py or import-hive.sh.

+
+

Implementation Details of Atlas High Availability

+

The Atlas High Availability work is tracked under the master JIRA ATLAS-510. The JIRAs filed under it have detailed information about how the High Availability feature has been implemented. At a high level the following points can be called out:

+

+
    +
  • The automatic selection of an Active instance, as well as automatic failover to a new Active instance happen through a leader election algorithm.
  • +
  • For leader election, we use the Leader Latch Recipe of Apache Curator.
  • +
  • The Active instance is the only one which initializes, modifies or reads state in the backend stores to keep them consistent.
  • +
  • Also, when an instance is elected as Active, it refreshes any cached information from the backend stores to get up to date.
  • +
  • A servlet filter ensures that only the active instance services user requests. If a passive instance receives these requests, it automatically redirects them to the current active instance.
+
+

Metadata Store

+

As described above, Atlas uses Titan to store the metadata it manages. By default, Atlas uses a standalone HBase instance as the backing store for Titan. In order to provide HA for the metadata store, we recommend that Atlas be configured to use distributed HBase as the backing store for Titan. Doing this implies that you could benefit from the HA guarantees HBase provides. In order to configure Atlas to use HBase in HA mode, do the following:

+

+
    +
  • Choose an existing HBase cluster that is set up in HA mode to configure in Atlas (OR) Set up a new HBase cluster in HA mode. +
      +
    • If setting up HBase for Atlas, please follow the instructions listed for setting up HBase in the Installation Steps.
  • +
  • We recommend using more than one HBase master (at least 2) in the cluster, on different physical hosts, using Zookeeper for coordination, to provide redundancy and high availability of HBase. +
      +
    • Refer to the Configuration page for the options to configure in atlas.properties to setup Atlas with HBase.
+
+

Index Store

+

As described above, Atlas indexes metadata through Titan to support full text search queries. In order to provide HA for the index store, we recommend that Atlas be configured to use Solr as the backing index store for Titan. In order to configure Atlas to use Solr in HA mode, do the following:

+

+
    +
  • Choose an existing SolrCloud cluster setup in HA mode to configure in Atlas (OR) Set up a new SolrCloud cluster. +
      +
    • Ensure Solr is brought up on at least 2 physical hosts for redundancy, and each host runs a Solr node.
    • +
    • We recommend the number of replicas to be set to at least 2 for redundancy.
  • +
  • Create the SolrCloud collections required by Atlas, as described in Installation Steps
  • +
  • Refer to the Configuration page for the options to configure in atlas.properties to setup Atlas with Solr.
+
+

Notification Server

+

Metadata notification events from Hooks are sent to Atlas by writing them to a Kafka topic called ATLAS_HOOK. Similarly, events from Atlas to other integrating components, like Ranger, are written to a Kafka topic called ATLAS_ENTITIES. Since Kafka persists these messages, the events will not be lost even if the consumers are down when the events are sent. In addition, we recommend Kafka is also set up for fault tolerance so that it has higher availability guarantees. In order to configure Atlas to use Kafka in HA mode, do the following:

+

+
    +
  • Choose an existing Kafka cluster set up in HA mode to configure in Atlas (OR) Set up a new Kafka cluster.
  • +
  • We recommend more than one Kafka broker in the cluster, on different physical hosts, using Zookeeper for coordination, to provide redundancy and high availability of Kafka. +
      +
    • Setup at least 2 physical hosts for redundancy, each hosting a Kafka broker.
  • +
  • Set up Kafka topics for Atlas usage: +
      +
    • The number of partitions for the ATLAS topics should be set to 1 (numPartitions)
    • +
    • Decide number of replicas for Kafka topic: Set this to at least 2 for redundancy.
    • +
    • Run the following commands:
+
+
+      $KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper <list of zookeeper host:port entries> --topic ATLAS_HOOK --replication-factor <numReplicas> --partitions 1
+      $KAFKA_HOME/bin/kafka-topics.sh --create --zookeeper <list of zookeeper host:port entries> --topic ATLAS_ENTITIES --replication-factor <numReplicas> --partitions 1
+      Here KAFKA_HOME points to the Kafka installation directory.
+      
+
+

+
    +
  • In atlas-application.properties, set the following configuration:
+
+
+     atlas.notification.embedded=false
+     atlas.kafka.zookeeper.connect=<comma separated list of servers forming Zookeeper quorum used by Kafka>
+     atlas.kafka.bootstrap.servers=<comma separated list of Kafka broker endpoints in host:port form> - Give at least 2 for redundancy.
+     
+
+
+

Known Issues

+

+
    +
  • If the HBase region servers hosting the Atlas ‘titan’ HTable are down, Atlas would not be able to store or retrieve metadata from HBase until they are brought back online.
+
+
http://git-wip-us.apache.org/repos/asf/incubator-atlas-website/blob/60041d8d/0.7.0-incubating/InstallationSteps.html
----------------------------------------------------------------------
diff --git a/0.7.0-incubating/InstallationSteps.html b/0.7.0-incubating/InstallationSteps.html
new file mode 100644
index 0000000..63cd759
--- /dev/null
+++ b/0.7.0-incubating/InstallationSteps.html
@@ -0,0 +1,556 @@

Apache Atlas – Building & Installing Apache Atlas
+ +
+

Building & Installing Apache Atlas

+
+

Building Atlas

+
+
+git clone https://git-wip-us.apache.org/repos/asf/incubator-atlas.git atlas
+
+cd atlas
+
+export MAVEN_OPTS="-Xmx1536m -XX:MaxPermSize=512m" && mvn clean install
+
+
+

Once the build successfully completes, artifacts can be packaged for deployment.

+
+
+
+mvn clean package -Pdist
+
+
+
+

To build a distribution that configures Atlas for external HBase and Solr, build with the external-hbase-solr profile.

+
+
+
+mvn clean package -Pdist,external-hbase-solr
+
+
+
+

Note that when the external-hbase-solr profile is used, the following steps need to be completed to make Atlas functional.

+
    +
  • Configure atlas.graph.storage.hostname (see "Graph persistence engine - HBase" in the Configuration section).
  • +
  • Configure atlas.graph.index.search.solr.zookeeper-url (see "Graph Search Index - Solr" in the Configuration section).
  • +
  • Set HBASE_CONF_DIR to point to a valid HBase config directory (see "Graph persistence engine - HBase" in the Configuration section).
  • +
  • Create the SOLR indices (see "Graph Search Index - Solr" in the Configuration section).
+

To build a distribution that packages HBase and Solr, build with the embedded-hbase-solr profile.

+
+
+
+mvn clean package -Pdist,embedded-hbase-solr
+
+
+
+

Using the embedded-hbase-solr profile will configure Atlas so that an HBase instance and a Solr instance will be started and stopped along with the Atlas server by default.

+

Atlas also supports building a distribution that can use BerkeleyDB and Elasticsearch as the graph and index backends. To build a distribution that is configured for these backends, build with the berkeley-elasticsearch profile.

+
+
+
+mvn clean package -Pdist,berkeley-elasticsearch
+
+
+
+

An additional step is required for the binary built using this profile to be used along with the Atlas distribution. Due to licensing requirements, Atlas does not bundle the BerkeleyDB Java Edition in the tarball.

+

You can download the Berkeley DB jar file from the URL: http://download.oracle.com/otn/berkeley-db/je-5.0.73.zip and copy the je-5.0.73.jar to the ${atlas_home}/libext directory.

+
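A sketch of these steps, assuming the distribution has been untarred and ATLAS_HOME points at it. The lib/ path inside the archive reflects the usual BerkeleyDB JE layout, and the download may require an Oracle OTN login, so downloading the zip in a browser and copying the jar manually works just as well:

# unpack BerkeleyDB JE and drop the jar into Atlas' libext directory
unzip je-5.0.73.zip
cp je-5.0.73/lib/je-5.0.73.jar $ATLAS_HOME/libext/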

The tar can be found in atlas/distro/target/apache-atlas-${project.version}-bin.tar.gz

+

The tar is structured as follows:

+
+
+
+|- bin
+   |- atlas_start.py
+   |- atlas_stop.py
+   |- atlas_config.py
+   |- quick_start.py
+   |- cputil.py
+|- conf
+   |- atlas-application.properties
+   |- atlas-env.sh
+   |- hbase
+      |- hbase-site.xml.template
+   |- log4j.xml
+   |- solr
+      |- currency.xml
+      |- lang
+         |- stopwords_en.txt
+      |- protwords.txt
+      |- schema.xml
+      |- solrconfig.xml
+      |- stopwords.txt
+      |- synonyms.txt
+|- docs
+|- hbase
+   |- bin
+   |- conf
+   ...
+|- server
+   |- webapp
+      |- atlas.war
+|- solr
+   |- bin
+   ...
+|- README
+|- NOTICE
+|- LICENSE
+|- DISCLAIMER.txt
+|- CHANGES.txt
+
+
+
+

Note that if the embedded-hbase-solr profile is specified for the build then HBase and Solr are included in the distribution.

+

In this case, a standalone instance of HBase can be started as the default storage backend for the graph repository. During Atlas installation the conf/hbase/hbase-site.xml.template gets expanded and moved to hbase/conf/hbase-site.xml for the initial standalone HBase configuration. To configure ATLAS graph persistence for a different HBase instance, please see "Graph persistence engine - HBase" in the Configuration section.

+

Also, a standalone instance of Solr can be started as the default search indexing backend. To configure ATLAS search indexing for a different Solr instance please see "Graph Search Index - Solr" in the Configuration section.

+
+

Installing & Running Atlas

+
+
Installing Atlas
+
+
+tar -xzvf apache-atlas-${project.version}-bin.tar.gz
+
+cd atlas-${project.version}
+
+
+
+
Configuring Atlas
+

By default, the config directory used by Atlas is {package dir}/conf. To override this, set the environment variable ATLAS_CONF to the path of the conf dir.

+
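For example (the path below is hypothetical; point it at wherever your customized conf directory lives):

export ATLAS_CONF=/etc/atlas/conf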

atlas-env.sh has been added to the Atlas conf. This file can be used to set various environment variables that you need for your services. In addition, you can set any other environment variables you might need. This file will be sourced by the Atlas scripts before any commands are executed. The following environment variables are available to set.

+
+
+# The java implementation to use. If JAVA_HOME is not found we expect java and jar to be in path
+#export JAVA_HOME=
+
+# any additional java opts you want to set. This will apply to both client and server operations
+#export ATLAS_OPTS=
+
+# any additional java opts that you want to set for client only
+#export ATLAS_CLIENT_OPTS=
+
+# java heap size we want to set for the client. Default is 1024MB
+#export ATLAS_CLIENT_HEAP=
+
+# any additional opts you want to set for atlas service.
+#export ATLAS_SERVER_OPTS=
+
+# java heap size we want to set for the atlas server. Default is 1024MB
+#export ATLAS_SERVER_HEAP=
+
+# What is considered as the Atlas home dir. Default is the base location of the installed software
+#export ATLAS_HOME_DIR=
+
+# Where log files are stored. Default is logs directory under the base install location
+#export ATLAS_LOG_DIR=
+
+# Where pid files are stored. Default is logs directory under the base install location
+#export ATLAS_PID_DIR=
+
+# Where the Atlas Titan db data is stored. Default is logs/data directory under the base install location
+#export ATLAS_DATA_DIR=
+
+# Where do you want to expand the war file. By Default it is in /server/webapp dir under the base install dir.
+#export ATLAS_EXPANDED_WEBAPP_DIR=
+
+
+

Settings to support a large number of metadata objects

+

If you plan to store several tens of thousands of metadata objects, it is recommended that you use values tuned for better GC performance of the JVM.

+

The following values are common server side options:

+
+
+export ATLAS_SERVER_OPTS="-server -XX:SoftRefLRUPolicyMSPerMB=0 -XX:+CMSClassUnloadingEnabled -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+PrintTenuringDistribution -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=dumps/atlas_server.hprof -Xloggc:logs/gc-worker.log -verbose:gc -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1m -XX:+PrintGCDetails -XX:+PrintHeapAtGC -XX:+PrintGCTimeStamps"
+
+
+

The -XX:SoftRefLRUPolicyMSPerMB option was found to be particularly helpful to regulate GC performance for query heavy workloads with many concurrent users.

+

The following values are recommended for JDK 7:

+
+
+export ATLAS_SERVER_HEAP="-Xms15360m -Xmx15360m -XX:MaxNewSize=3072m -XX:PermSize=100M -XX:MaxPermSize=512m"
+
+
+

The following values are recommended for JDK 8:

+
+
+export ATLAS_SERVER_HEAP="-Xms15360m -Xmx15360m -XX:MaxNewSize=5120m -XX:MetaspaceSize=100M -XX:MaxMetaspaceSize=512m"
+
+
+

NOTE for Mac OS users: If you are using Mac OS, you will need to configure ATLAS_SERVER_OPTS (explained above).

+

In {package dir}/conf/atlas-env.sh uncomment the following line

+
+
+#export ATLAS_SERVER_OPTS=
+
+
+

and change it to look as below

+
+
+export ATLAS_SERVER_OPTS="-Djava.awt.headless=true -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
+
+
+

HBase as the Storage Backend for the Graph Repository

+

By default, Atlas uses Titan as the graph repository, which is currently the only graph repository implementation available. The HBase versions currently supported are 1.1.x. For configuring Atlas graph persistence on HBase, please see "Graph persistence engine - HBase" in the Configuration section for more details.

+

Pre-requisites for running HBase as a distributed cluster

+
    +
  • 3 or 5 ZooKeeper nodes
  • +
  • At least 3 RegionServer nodes. It would be ideal to run the DataNodes on the same hosts as the RegionServers for data locality.
+

The HBase table name used by Titan can be set using the following configuration in ATLAS_HOME/conf/atlas-application.properties:

+
+
+atlas.graph.storage.hbase.table=apache_atlas_titan
+atlas.audit.hbase.tablename=apache_atlas_entity_audit
+
+
+

Configuring SOLR as the Indexing Backend for the Graph Repository

+

By default, Atlas uses Titan as the graph repository, which is currently the only graph repository implementation available. For configuring Titan to work with Solr, please follow the instructions below.

+

+ +

+
    +
  • Start Solr in cloud mode.
SolrCloud mode uses a ZooKeeper service as a highly available, central location for cluster management. For a small cluster, running with an existing ZooKeeper quorum should be fine. For larger clusters, you would want to run a separate ZooKeeper quorum with at least 3 servers. Note: Atlas currently supports Solr in "cloud" mode only; "http" mode is not supported. For more information, refer to the Solr documentation - https://cwiki.apache.org/confluence/display/solr/SolrCloud +

+
    +
  • For example, to bring up a Solr node listening on port 8983 on a machine, you can use the command:
+
+
+      $SOLR_HOME/bin/solr start -c -z <zookeeper_host:port> -p 8983
+      
+
+

+
    +
  • Run the following commands from the SOLR_BIN directory (e.g. $SOLR_HOME/bin) to create collections in Solr corresponding to the indexes that Atlas uses. If the Atlas and Solr instances are on two different hosts,
first copy the required configuration files from ATLAS_HOME/conf/solr on the Atlas host to the Solr host. SOLR_CONF in the commands below refers to the directory where the Solr configuration files have been copied to on the Solr host: +
+
+  $SOLR_BIN/solr create -c vertex_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
+  $SOLR_BIN/solr create -c edge_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
+  $SOLR_BIN/solr create -c fulltext_index -d SOLR_CONF -shards #numShards -replicationFactor #replicationFactor
+
+
+

Note: If numShards and replicationFactor are not specified, they default to 1, which suffices if you are trying out Solr with Atlas on a single node instance. Otherwise, specify numShards according to the number of hosts in the Solr cluster and the maxShardsPerNode configuration. The number of shards cannot exceed the total number of Solr nodes in your SolrCloud cluster.

+

The number of replicas (replicationFactor) can be set according to the redundancy required.

+
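For example, on a small SolrCloud cluster one might create the collections with 2 shards and 2 replicas; the numbers are illustrative and should be sized per the guidance above:

$SOLR_BIN/solr create -c vertex_index -d SOLR_CONF -shards 2 -replicationFactor 2
$SOLR_BIN/solr create -c edge_index -d SOLR_CONF -shards 2 -replicationFactor 2
$SOLR_BIN/solr create -c fulltext_index -d SOLR_CONF -shards 2 -replicationFactor 2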

Also note that Solr will automatically be called to create the indexes when the Atlas server is started, if the SOLR_BIN and SOLR_CONF environment variables are set and the search indexing backend is set to 'solr5'.

+

+
    +
  • Change ATLAS configuration to point to the Solr instance setup. Please make sure the following configurations are set to the below values in ATLAS_HOME/conf/atlas-application.properties
+
+
+ atlas.graph.index.search.backend=solr5
+ atlas.graph.index.search.solr.mode=cloud
+ atlas.graph.index.search.solr.zookeeper-url=<the ZK quorum setup for solr as comma separated value> eg: 10.1.6.4:2181,10.1.6.5:2181
+
+
+

+
    +
  • Restart Atlas
+

For more information on Titan Solr configuration, please refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/solr.htm

+

Pre-requisites for running Solr in cloud mode:

• Memory - Solr is both memory and CPU intensive. Make sure the server running Solr has adequate memory, CPU and disk. Solr works well with 32GB RAM. Plan to provide as much memory as possible to the Solr process.
• Disk - If the number of entities that need to be stored are large, plan to have at least 500 GB free space in the volume where Solr is going to store the index data.
• SolrCloud has support for replication and sharding. It is highly recommended to use SolrCloud with at least two Solr nodes running on different servers with replication enabled. If using SolrCloud, then you also need ZooKeeper installed and configured with 3 or 5 ZooKeeper nodes.

+

Configuring Kafka Topics

+

Atlas uses Kafka to ingest metadata from other components at runtime. This is described in the Architecture page in more detail. Depending on the configuration of Kafka, sometimes you might need to set up the topics explicitly before using Atlas. To do so, Atlas provides a script bin/atlas_kafka_setup.py which can be run from the Atlas server. In some environments, the hooks might start getting used before the Atlas server itself is set up. In such cases, the topics can be created on the hosts where hooks are installed using a similar script, hook-bin/atlas_kafka_setup_hook.py. Both scripts use the configuration in atlas-application.properties for setting up the topics. Please refer to the Configuration page for these details.

+
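A typical invocation, run from the respective install directory; both scripts pick up their settings from atlas-application.properties as noted above:

# on the Atlas server host
bin/atlas_kafka_setup.py

# on a host where only a hook is installed
hook-bin/atlas_kafka_setup_hook.py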
+
Setting up Atlas
+

There are a few steps that set up dependencies of Atlas. One such example is setting up the Titan schema in the storage backend of choice. In a simple single server setup, these are automatically set up with default configuration when the server first accesses these dependencies.

+

However, there are scenarios when we may want to run setup steps explicitly as one time operations. For example, in a multiple server scenario using High Availability, it is preferable to run setup steps from one of the server instances the first time, and then start the services.

+

To run these steps one time, execute the command bin/atlas_start.py -setup from a single Atlas server instance.

+

However, the Atlas server does take care of parallel executions of the setup steps. Also, running the setup steps multiple times is idempotent. Therefore, if one chooses to run the setup steps as part of server startup, for convenience, then they should enable the configuration option atlas.server.run.setup.on.start by defining it with the value true in the atlas-application.properties file.

+
+
Starting Atlas Server
+
+
+bin/atlas_start.py [-port <port>]
+
+
+

By default, the Atlas server listens on port 21000 and reads its configuration from {package dir}/conf.

+
    +
  • To change the port, use the -port option.
  • +
  • To use a different conf directory (for example, to reuse the same conf across multiple Atlas upgrades), set the environment variable ATLAS_CONF to the path of the conf dir.
+
+

Using Atlas

+

+
    +
  • Quick start model - sample model and data
+
+
+  bin/quick_start.py [<atlas endpoint>]
+
+
+

+
    +
  • Verify if the server is up and running
+
+
+  curl -v http://localhost:21000/api/atlas/admin/version
+  {"Version":"v0.1"}
+
+
+

+
    +
  • List the types in the repository
+
+
+  curl -v http://localhost:21000/api/atlas/types
+  {"results":["Process","Infrastructure","DataSet"],"count":3,"requestId":"1867493731@qtp-262860041-0 - 82d43a27-7c34-4573-85d1-a01525705091"}
+
+
+

+
    +
  • List the instances for a given type
+
+
+  curl -v http://localhost:21000/api/atlas/entities?type=hive_table
+  {"requestId":"788558007@qtp-44808654-5","list":["cb9b5513-c672-42cb-8477-b8f3e537a162","ec985719-a794-4c98-b98f-0509bd23aac0","48998f81-f1d3-45a2-989a-223af5c1ed6e","a54b386e-c759-4651-8779-a099294244c4"]}
+
+  curl -v http://localhost:21000/api/atlas/entities/list/hive_db
+
+
+

+
    +
  • Search for entities (instances) in the repository
+
+
+  curl -v http://localhost:21000/api/atlas/discovery/search/dsl?query="from hive_table"
+
+
+

Dashboard

+

Once Atlas is started, you can view the status of Atlas entities using the web-based dashboard. You can open your browser at the corresponding port (21000 by default) to use the web UI.

+
+

Stopping Atlas Server

+
+
+bin/atlas_stop.py
+
+
+
+

Known Issues

+
+
Setup
+

If the setup of the Atlas service fails due to any reason, the next run of setup (either by an explicit invocation of atlas_start.py -setup or by enabling the configuration option atlas.server.run.setup.on.start) will fail with a message such as "A previous setup run may not have completed cleanly." In such cases, you would need to manually ensure the setup can run and delete the Zookeeper node at /apache_atlas/setup_in_progress before attempting to run setup again.

+
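A sketch of removing that marker znode with the ZooKeeper CLI; the quorum address is illustrative and the znode path assumes the default zkroot of /apache_atlas described in the High Availability configuration:

# connect to the ZooKeeper quorum and delete the in-progress marker
$ZOOKEEPER_HOME/bin/zkCli.sh -server zk1.company.com:2181 delete /apache_atlas/setup_in_progress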

If the setup failed due to HBase Titan schema setup errors, it may be necessary to repair the HBase schema. If no data has been stored, one can also disable and drop the 'titan' schema in HBase to let setup run again.

+
+