From: sarath@apache.org
To: commits@atlas.incubator.apache.org
Date: Tue, 29 Aug 2017 21:37:07 -0000
Message-Id: <447724f5fc0246c4bc240a3c7e00a811@git.apache.org>
Subject: [38/42] atlas-website git commit: ATLAS-2068: Update atlas website about 0.8.1 release

http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/StormAtlasHook.html
----------------------------------------------------------------------
diff --git a/0.8.1/StormAtlasHook.html b/0.8.1/StormAtlasHook.html
new file mode 100644
index 0000000..fee7d9d
--- /dev/null
+++ b/0.8.1/StormAtlasHook.html
@@ -0,0 +1,319 @@
+ Apache Atlas – Storm Atlas Bridge
+ + + + + + +
+ +
+

Storm Atlas Bridge

+
+

Introduction

+

Apache Storm is a distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. A Storm process is essentially a DAG of nodes, called a topology.

+

Apache Atlas is a metadata repository that enables end-to-end data lineage, search, and the association of business classifications.

+

The goal of this integration is to push the operational topology metadata along with the underlying data source(s), target(s), derivation processes and any available business context so Atlas can capture the lineage for this topology.

+

There are two parts to this process, detailed below:

+
    +
  • Data model to represent the concepts in Storm
  • +
  • Storm Atlas Hook to update metadata in Atlas
+
+

Storm Data Model

+

A data model is represented as Types in Atlas. It contains descriptions of the various nodes in the topology graph, such as spouts and bolts, and the corresponding producer and consumer types.

+

The following types are added in Atlas.

+

+
    +
  • storm_topology - represents the coarse-grained topology. A storm_topology derives from an Atlas Process type and hence can be used to inform Atlas about lineage.
  • +
  • The following data sets are added: kafka_topic, jms_topic, hbase_table, hdfs_data_set. These all derive from the Atlas Dataset type and hence form the endpoints of a lineage graph.
  • +
  • storm_spout - a data producer with outputs, typically Kafka or JMS
  • +
  • storm_bolt - a data consumer with inputs and outputs, typically Hive, HBase, HDFS, etc.
+
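As an illustration of how the types above fit together, here is a minimal sketch of the kind of entities a simple topology might map to. The dictionary layouts and names are hypothetical, invented for illustration; they are not the exact structures the hook emits.

```python
# Hypothetical sketch: a kafka_topic feeding a topology that writes to
# an hbase_table. Dict layouts are illustrative, not the hook's actual
# wire format.

def topology_entities(topology_name, source_topic, target_table):
    """Build illustrative Atlas-style entities for a simple topology."""
    source = {"typeName": "kafka_topic", "name": source_topic}   # DataSet endpoint
    target = {"typeName": "hbase_table", "name": target_table}   # DataSet endpoint
    return {
        "typeName": "storm_topology",  # derives from Process
        "name": topology_name,
        "inputs": [source],            # lineage flows from the inputs...
        "outputs": [target],           # ...through the topology to the outputs
    }
```

Because storm_topology derives from Process, Atlas can read lineage directly off the inputs and outputs of such an entity.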

The Storm Atlas hook auto-registers dependent models, like the Hive data model, if it finds that these are not already known to the Atlas server.

+

The data model for each of the types is described in the class definition at org.apache.atlas.storm.model.StormDataModel.

+
+

Storm Atlas Hook

+

Atlas is notified when a new topology is registered successfully in Storm. Storm provides a hook, backtype.storm.ISubmitterHook, invoked at the Storm client used to submit a Storm topology.

+

The Storm Atlas hook runs after topology submission; it extracts the metadata from the topology and updates Atlas using the types defined. Atlas implements the Storm client hook interface in org.apache.atlas.storm.hook.StormAtlasHook.

+
+

Limitations

+

The following limitations apply to the first version of the integration.

+

+
    +
  • Only new topology submissions are registered with Atlas; subsequent lifecycle changes are not reflected in Atlas.
  • +
  • The Atlas server needs to be online when a Storm topology is submitted for the metadata to be captured.
  • +
  • The Hook currently does not support capturing lineage for custom spouts and bolts.
+
+

Installation

+

The Storm Atlas Hook needs to be manually installed in Storm on the client side. The hook artifacts are available at: $ATLAS_PACKAGE/hook/storm

+

The Storm Atlas hook jars need to be copied to $STORM_HOME/extlib, where STORM_HOME is the Storm installation path.

+

Restart all Storm daemons after you have installed the Atlas hook.

+
+

Configuration

+
+

Storm Configuration

+

The Storm Atlas Hook needs to be configured in the Storm client configuration, $STORM_HOME/conf/storm.yaml, as:

+
+
+storm.topology.submission.notifier.plugin.class: "org.apache.atlas.storm.hook.StormAtlasHook"
+
+
+

Also set a 'cluster name' to be used as a namespace for objects registered in Atlas. This name is used for namespacing the Storm topology, spouts and bolts.

+

The other objects, like data sets, should ideally be identified with the cluster name of the components that generate them. For example, Hive tables and databases should be identified using the cluster name set in Hive. The Storm Atlas hook will pick this up if the Hive configuration is available in the Storm topology jar that is submitted on the client and the cluster name is defined there. The same applies to HBase data sets. If this configuration is not available, the cluster name set in the Storm configuration is used.

+
+
+atlas.cluster.name: "cluster_name"
+
+
+
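For illustration, namespacing by cluster name can be thought of as qualifying each object name with the cluster, as in name@cluster. This is a sketch of an assumed convention; the exact format the Storm hook uses may differ.

```python
# Sketch of an assumed namespacing convention: object names are
# qualified with the Atlas cluster name as "<name>@<cluster>".

def qualified_name(name, cluster_name):
    """Qualify an object name with the Atlas cluster name."""
    return "{}@{}".format(name, cluster_name)
```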

In $STORM_HOME/conf/storm_env.ini, set an environment variable as follows:

+
+
+STORM_JAR_JVM_OPTS:"-Datlas.conf=$ATLAS_HOME/conf/"
+
+
+

where ATLAS_HOME points to the directory where Atlas is installed.

+

You could also set this up programmatically in the Storm Config as:

+
+
+    Config stormConf = new Config();
+    ...
+    stormConf.put(Config.STORM_TOPOLOGY_SUBMISSION_NOTIFIER_PLUGIN,
+            org.apache.atlas.storm.hook.StormAtlasHook.class.getName());
+
+
+
+
+ +
http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/TypeSystem.html
----------------------------------------------------------------------
diff --git a/0.8.1/TypeSystem.html b/0.8.1/TypeSystem.html
new file mode 100644
index 0000000..d64bf52
--- /dev/null
+++ b/0.8.1/TypeSystem.html
@@ -0,0 +1,409 @@
+ Apache Atlas – Type System
+ + + + + + +
+ +
+

Type System

+
+

Overview

+

Atlas allows users to define a model for the metadata objects they want to manage. The model is composed of definitions called ‘types’. Instances of ‘types’, called ‘entities’, represent the actual metadata objects that are managed. The Type System is the component that allows users to define and manage the types and entities. All metadata objects managed by Atlas out of the box (Hive tables, for example) are modelled using types and represented as entities. To store new kinds of metadata in Atlas, one needs to understand the concepts of the type system component.

+
+

Types

+

A ‘Type’ in Atlas is a definition of how a particular kind of metadata object is stored and accessed. A type represents one attribute or a collection of attributes that define the properties of the metadata object. Users with a development background will recognize the similarity of a type to a ‘Class’ definition in object-oriented programming languages, or a ‘table schema’ in relational databases.

+

An example of a type that comes natively defined with Atlas is a Hive table. A Hive table is defined with these attributes:

+
+
+Name: hive_table
+MetaType: Class
+SuperTypes: DataSet
+Attributes:
+    name: String (name of the table)
+    db: Database object of type hive_db
+    owner: String
+    createTime: Date
+    lastAccessTime: Date
+    comment: String
+    retention: int
+    sd: Storage Description object of type hive_storagedesc
+    partitionKeys: Array of objects of type hive_column
+    aliases: Array of strings
+    columns: Array of objects of type hive_column
+    parameters: Map of String keys to String values
+    viewOriginalText: String
+    viewExpandedText: String
+    tableType: String
+    temporary: Boolean
+
+
+

The following points can be noted from the above example:

+

+
    +
  • A type in Atlas is identified uniquely by a ‘name’
  • +
  • A type has a metatype. A metatype represents the type of this model in Atlas. Atlas has the following metatypes: +
      +
    • Basic metatypes: E.g. Int, String, Boolean etc.
    • +
    • Enum metatypes
    • +
    • Collection metatypes: E.g. Array, Map
    • +
    • Composite metatypes: E.g. Class, Struct, Trait
  • +
  • A type can ‘extend’ from a parent type called a ‘supertype’ - by virtue of this, it includes the attributes defined in the supertype as well. This allows modellers to define common attributes across a set of related types, similar to how object-oriented languages define superclasses for a class. It is also possible for a type in Atlas to extend from multiple supertypes. +
      +
    • In this example, every hive table extends from a pre-defined supertype called ‘DataSet’. More details about these pre-defined types are provided later.
  • +
  • Types which have a metatype of ‘Class’, ‘Struct’ or ‘Trait’ can have a collection of attributes. Each attribute has a name (e.g. ‘name’) and some other associated properties. A property can be referred to using an expression type_name.attribute_name. It is also good to note that attributes themselves are defined using Atlas metatypes. +
      +
    • In this example, hive_table.name is a String, hive_table.aliases is an array of Strings, hive_table.db refers to an instance of a type called hive_db and so on.
  • +
  • Type references in attributes, (like hive_table.db) are particularly interesting. Note that using such an attribute, we can define arbitrary relationships between two types defined in Atlas and thus build rich models. Note that one can also collect a list of references as an attribute type (e.g. hive_table.cols which represents a list of references from hive_table to the hive_column type)
+
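The supertype mechanism described above can be sketched as follows. The toy type registry below is made up for illustration and is not Atlas's actual internal representation:

```python
# Illustrative sketch of supertype attribute inheritance: a type's
# effective attribute set is its own attributes plus those of all of
# its (possibly multiple) supertypes.

TYPES = {
    "Referenceable": {"superTypes": [], "attributes": ["qualifiedName"]},
    "Asset": {"superTypes": [], "attributes": ["name", "description", "owner"]},
    "DataSet": {"superTypes": ["Referenceable", "Asset"], "attributes": []},
    "hive_table": {"superTypes": ["DataSet"],
                   "attributes": ["db", "columns", "tableType"]},
}

def effective_attributes(type_name):
    """Collect attributes declared on the type and all its supertypes."""
    type_def = TYPES[type_name]
    attrs = []
    for super_type in type_def["superTypes"]:
        attrs.extend(effective_attributes(super_type))
    attrs.extend(type_def["attributes"])
    return attrs
```

So a hive_table entity carries qualifiedName, name, description and owner through its supertype chain, in addition to its own attributes.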
+

Entities

+

An ‘entity’ in Atlas is a specific value or instance of a Class ‘type’ and thus represents a specific metadata object in the real world. Referring back to our analogy of Object Oriented Programming languages, an ‘instance’ is an ‘Object’ of a certain ‘Class’.

+

An example of an entity would be a specific Hive table. Say Hive has a table called ‘customers’ in the ‘default’ database. This table is an ‘entity’ in Atlas of type hive_table. By virtue of being an instance of a class type, it has values for every attribute that is part of the hive_table ‘type’, such as:

+
+
+id: "9ba387dd-fa76-429c-b791-ffc338d3c91f"
+typeName: "hive_table"
+values:
+    name: "customers"
+    db: "b42c6cfc-c1e7-42fd-a9e6-890e0adf33bc"
+    owner: "admin"
+    createTime: "2016-06-20T06:13:28.000Z"
+    lastAccessTime: "2016-06-20T06:13:28.000Z"
+    comment: null
+    retention: 0
+    sd: "ff58025f-6854-4195-9f75-3a3058dd8dcf"
+    partitionKeys: null
+    aliases: null
+    columns: ["65e2204f-6a23-4130-934a-9679af6a211f", "d726de70-faca-46fb-9c99-cf04f6b579a6", ...]
+    parameters: {"transient_lastDdlTime": "1466403208"}
+    viewOriginalText: null
+    viewExpandedText: null
+    tableType: "MANAGED_TABLE"
+    temporary: false
+
+
+

The following points can be noted from the example above:

+

+
    +
  • Every entity that is an instance of a Class type is identified by a unique identifier, a GUID. This GUID is generated by the Atlas server when the object is defined, and remains constant for the entire lifetime of the entity. At any point in time, this particular entity can be accessed using its GUID. +
      +
    • In this example, the ‘customers’ table in the default database is uniquely identified by the GUID "9ba387dd-fa76-429c-b791-ffc338d3c91f"
  • +
  • An entity is of a given type, and the name of the type is provided with the entity definition. +
      +
    • In this example, the ‘customers’ table is a ‘hive_table’.
  • +
  • The values of this entity are a map of all the attribute names and their values for attributes that are defined in the hive_table type definition.
  • +
  • Attribute values will be according to the metatype of the attribute. +
      +
    • Basic metatypes: integer, String, boolean values. E.g. ‘name’ = ‘customers’, ‘temporary’ = ‘false’
    • +
    • Collection metatypes: An array or map of values of the contained metatype. E.g. parameters = { “transient_lastDdlTime”: “1466403208”}
    • +
    • Composite metatypes: For classes, the value will be an entity with which this particular entity has a relationship. E.g. the hive table “customers” is present in a database called “default”. The relationship between the table and the database is captured via the “db” attribute. Hence, the value of the “db” attribute is a GUID that uniquely identifies the hive_db entity called “default”.
+
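The GUID-based references described above can be illustrated with a small sketch. The GUIDs and entity layouts here are invented for the example, not Atlas's actual storage format:

```python
# Sketch: attribute values that reference other Class-type entities are
# stored as GUIDs; resolving a reference is a lookup in the entity store.

ENTITIES = {
    "guid-table-1": {"typeName": "hive_table", "name": "customers",
                     "db": "guid-db-1"},
    "guid-db-1": {"typeName": "hive_db", "name": "default"},
}

def resolve_reference(guid, attribute):
    """Follow a GUID-valued attribute to the referenced entity."""
    referenced_guid = ENTITIES[guid][attribute]
    return ENTITIES[referenced_guid]
```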

With this idea of entities, we can now see the difference between Class and Struct metatypes. Classes and Structs both compose attributes of other types. However, entities of Class types have the Id attribute (with a GUID value) and can be referenced from other entities (as a hive_db entity is referenced from a hive_table entity). Instances of Struct types do not have an identity of their own; the value of a Struct type is a collection of attributes that are ‘embedded’ inside the entity itself.

+
+

Attributes

+

We have already seen that attributes are defined inside composite metatypes like Class and Struct, where we simplistically referred to attributes as having a name and a metatype. Attributes in Atlas, however, have additional properties that define further type-system concepts.

+

An attribute has the following properties:

+
+
+    name: string,
+    dataTypeName: string,
+    isComposite: boolean,
+    isIndexable: boolean,
+    isUnique: boolean,
+    multiplicity: enum,
+    reverseAttributeName: string
+
+
+

The properties above have the following meanings:

+

+
    +
  • name - the name of the attribute
  • +
  • dataTypeName - the metatype name of the attribute (native, collection or composite)
  • +
  • isComposite - +
      +
    • This flag indicates an aspect of modelling. If an attribute is defined as composite, it cannot have a lifecycle independent of the entity it is contained in. A good example of this concept is the set of columns that form part of a hive table. Since the columns have no meaning outside of the hive table, they are defined as composite attributes.
    • +
    • A composite attribute must be created in Atlas along with the entity it is contained in. i.e. A hive column must be created along with the hive table.
  • +
  • isIndexable - +
      +
    • This flag indicates whether this attribute should be indexed, so that lookups using the attribute value as a predicate can be performed efficiently.
  • +
  • isUnique - +
      +
    • This flag is again related to indexing. If specified as unique, a special index is created for this attribute in Titan that allows for equality-based lookups.
    • +
  • Any attribute with a true value for this flag is treated like a primary key to distinguish this entity from other entities. Hence care should be taken to ensure that this attribute does model a unique property in the real world. +
        +
      • For example, consider the name attribute of a hive_table. In isolation, a name is not a unique attribute for a hive_table, because tables with the same name can exist in multiple databases. Even the pair (database name, table name) is not unique if Atlas stores metadata of hive tables across multiple clusters. Only the combination of cluster location, database name and table name can be deemed unique in the physical world.
  • +
  • multiplicity - indicates whether this attribute is required, optional, or could be multi-valued. If an entity’s definition of the attribute value does not match the multiplicity declaration in the type definition, this would be a constraint violation and the entity addition will fail. This field can therefore be used to define some constraints on the metadata information.
+
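A minimal sketch of how the multiplicity constraint could be checked when an entity is added. This is simplified for illustration; real Atlas validation is more involved (multi-valued attributes, collections, and so on):

```python
# Simplified sketch of multiplicity checking at entity-addition time:
# a "required" attribute must be present and non-null.

ATTRIBUTE_DEFS = {
    "db": {"multiplicity": "required"},
    "columns": {"multiplicity": "optional"},
}

def validate_multiplicity(values):
    """Raise if a required attribute is missing or null."""
    for attr_name, attr_def in ATTRIBUTE_DEFS.items():
        if attr_def["multiplicity"] == "required" and values.get(attr_name) is None:
            raise ValueError("missing required attribute: " + attr_name)
    return True
```

Under this check, a table entity without a db value would be rejected as a constraint violation, matching the "required" example below.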

Using the above, let us expand on the attribute definition of one of the attributes of the hive table below. Let us look at the attribute called ‘db’ which represents the database to which the hive table belongs:

+
+
+db:
+    "dataTypeName": "hive_db",
+    "isComposite": false,
+    "isIndexable": true,
+    "isUnique": false,
+    "multiplicity": "required",
+    "name": "db",
+    "reverseAttributeName": null
+
+
+

Note the “required” constraint on multiplicity. A table entity cannot be sent without a db reference.

+
+
+columns:
+    "dataTypeName": "array<hive_column>",
+    "isComposite": true,
+    "isIndexable": true,
+    "isUnique": false,
+    "multiplicity": "optional",
+    "name": "columns",
+    "reverseAttributeName": null
+
+
+

Note the “isComposite” true value for columns. By doing this, we are indicating that the defined column entities should always be bound to the table entity they are defined with.

+

From this description and the examples, you can see that attribute definitions are used to influence specific modelling behaviour (constraints, indexing, etc.) enforced by the Atlas system.

+
+

System specific types and their significance

+

Atlas comes with a few pre-defined system types. We saw one example (DataSet) in the preceding sections. In this section we will see all these types and understand their significance.

+

Referenceable: This type represents all entities that can be searched for using a unique attribute called qualifiedName.

+

Asset: This type contains attributes like name, description and owner. Name is a required attribute (multiplicity = required); the others are optional. The purpose of Referenceable and Asset is to provide modellers with a way to enforce consistency when defining and querying entities of their own types. Having this fixed set of attributes allows applications and user interfaces to make convention-based assumptions about what attributes they can expect of types by default.

+

Infrastructure: This type extends Referenceable and Asset and can typically serve as a common supertype for infrastructural metadata objects like clusters, hosts, etc.

+

DataSet: This type extends Referenceable and Asset. Conceptually, it can be used to represent a type that stores data. In Atlas, hive tables, Sqoop RDBMS tables, etc. are all types that extend from DataSet. Types that extend DataSet can be expected to have a schema, in the sense that they have an attribute that defines the attributes of that dataset (for example, the columns attribute of a hive_table). Entities of types that extend DataSet also participate in data transformations, and these transformations can be captured by Atlas via lineage (or provenance) graphs.

+

Process: This type extends Referenceable and Asset. Conceptually, it can be used to represent any data transformation operation. For example, an ETL process that transforms a hive table with raw data to another hive table that stores some aggregate can be a specific type that extends the Process type. A Process type has two specific attributes, inputs and outputs. Both inputs and outputs are arrays of DataSet entities. Thus an instance of a Process type can use these inputs and outputs to capture how the lineage of a DataSet evolves.

+
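The lineage idea above can be sketched as a small traversal over Process entities. The entity shapes here are simplified stand-ins for illustration, not Atlas's actual representation:

```python
# Sketch of lineage via Process entities: each Process lists input and
# output DataSets, so walking the processes whose outputs contain a
# dataset yields that dataset's direct upstream datasets.

PROCESSES = [
    {"typeName": "Process", "name": "etl_aggregate",
     "inputs": ["raw_events"], "outputs": ["daily_aggregates"]},
]

def upstream_datasets(dataset, processes):
    """DataSets that feed directly into the given dataset."""
    result = []
    for process in processes:
        if dataset in process["outputs"]:
            result.extend(process["inputs"])
    return result
```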
+
+ +
http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/api/apple-touch-icon.png
----------------------------------------------------------------------
diff --git a/0.8.1/api/apple-touch-icon.png b/0.8.1/api/apple-touch-icon.png
new file mode 100644
index 0000000..6d2fc39
Binary files /dev/null and b/0.8.1/api/apple-touch-icon.png differ

http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/api/application.wadl
----------------------------------------------------------------------
diff --git a/0.8.1/api/application.wadl b/0.8.1/api/application.wadl
new file mode 100644
index 0000000..fbace48
--- /dev/null
+++ b/0.8.1/api/application.wadl
@@ -0,0 +1,756 @@
[generated WADL; markup stripped in this archive copy, body omitted]

http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/api/atlas-webapp-php.zip
----------------------------------------------------------------------
diff --git a/0.8.1/api/atlas-webapp-php.zip b/0.8.1/api/atlas-webapp-php.zip
new file mode 100644
index 0000000..5b803a6
Binary files /dev/null and b/0.8.1/api/atlas-webapp-php.zip differ

http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/api/atlas-webapp.rb
----------------------------------------------------------------------
diff --git a/0.8.1/api/atlas-webapp.rb b/0.8.1/api/atlas-webapp.rb
new file mode 100644
index 0000000..fd0a201
--- /dev/null
+++ b/0.8.1/api/atlas-webapp.rb
@@ -0,0 +1,246 @@
+#
+# Generated by Enunciate.
+#
+require 'json'
+
+# adding necessary json serialization methods to standard classes.
+class Object + def to_jaxb_json_hash + return self + end + def self.from_json o + return o + end +end + +class String + def self.from_json o + return o + end +end + +class Boolean + def self.from_json o + return o + end +end + +class Numeric + def self.from_json o + return o + end +end + +class Time + #json time is represented as number of milliseconds since epoch + def to_jaxb_json_hash + return (to_i * 1000) + (usec / 1000) + end + def self.from_json o + if o.nil? + return nil + else + return Time.at(o / 1000, (o % 1000) * 1000) + end + end +end + +class Array + def to_jaxb_json_hash + a = Array.new + each { | _item | a.push _item.to_jaxb_json_hash } + return a + end +end + +class Hash + def to_jaxb_json_hash + h = Hash.new + each { | _key, _value | h[_key.to_jaxb_json_hash] = _value.to_jaxb_json_hash } + return h + end +end + + +module Org + +module Apache + +module Atlas + +module Web + +module Resources + + # + class ErrorBean + + # (no documentation provided) + attr_accessor :status + # (no documentation provided) + attr_accessor :message + # (no documentation provided) + attr_accessor :stackTrace + + # the json hash for this ErrorBean + def to_jaxb_json_hash + _h = {} + _h['status'] = status.to_jaxb_json_hash unless status.nil? + _h['message'] = message.to_jaxb_json_hash unless message.nil? + _h['stackTrace'] = stackTrace.to_jaxb_json_hash unless stackTrace.nil? + return _h + end + + # the json (string form) for this ErrorBean + def to_json + to_jaxb_json_hash.to_json + end + + #initializes this ErrorBean with a json hash + def init_jaxb_json_hash(_o) + @status = Fixnum.from_json(_o['status']) unless _o['status'].nil? + @message = String.from_json(_o['message']) unless _o['message'].nil? + @stackTrace = String.from_json(_o['stackTrace']) unless _o['stackTrace'].nil? + end + + # constructs a ErrorBean from a (parsed) JSON hash + def self.from_json(o) + if o.nil? 
+ return nil + else + inst = new + inst.init_jaxb_json_hash o + return inst + end + end + end + +end + +end + +end + +end + +end + +module Org + +module Apache + +module Atlas + +module Web + +module Resources + + # + class ErrorBean + + # (no documentation provided) + attr_accessor :status + # (no documentation provided) + attr_accessor :message + + # the json hash for this ErrorBean + def to_jaxb_json_hash + _h = {} + _h['status'] = status.to_jaxb_json_hash unless status.nil? + _h['message'] = message.to_jaxb_json_hash unless message.nil? + return _h + end + + # the json (string form) for this ErrorBean + def to_json + to_jaxb_json_hash.to_json + end + + #initializes this ErrorBean with a json hash + def init_jaxb_json_hash(_o) + @status = Fixnum.from_json(_o['status']) unless _o['status'].nil? + @message = String.from_json(_o['message']) unless _o['message'].nil? + end + + # constructs a ErrorBean from a (parsed) JSON hash + def self.from_json(o) + if o.nil? + return nil + else + inst = new + inst.init_jaxb_json_hash o + return inst + end + end + end + +end + +end + +end + +end + +end + +module Org + +module Apache + +module Atlas + +module Web + +module Resources + + # + class Results + + # (no documentation provided) + attr_accessor :href + # (no documentation provided) + attr_accessor :status + + # the json hash for this Results + def to_jaxb_json_hash + _h = {} + _h['href'] = href.to_jaxb_json_hash unless href.nil? + _h['status'] = status.to_jaxb_json_hash unless status.nil? + return _h + end + + # the json (string form) for this Results + def to_json + to_jaxb_json_hash.to_json + end + + #initializes this Results with a json hash + def init_jaxb_json_hash(_o) + @href = String.from_json(_o['href']) unless _o['href'].nil? + @status = Fixnum.from_json(_o['status']) unless _o['status'].nil? + end + + # constructs a Results from a (parsed) JSON hash + def self.from_json(o) + if o.nil? 
+        return nil
+      else
+        inst = new
+        inst.init_jaxb_json_hash o
+        return inst
+      end
+    end
+  end
+
+end
+
+end
+
+end
+
+end
+
+end

http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/api/crossdomain.xml
----------------------------------------------------------------------
diff --git a/0.8.1/api/crossdomain.xml b/0.8.1/api/crossdomain.xml
new file mode 100644
index 0000000..0d42929
--- /dev/null
+++ b/0.8.1/api/crossdomain.xml
@@ -0,0 +1,25 @@
[crossdomain policy XML; markup stripped in this archive copy, body omitted]
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/api/css/home.gif
----------------------------------------------------------------------
diff --git a/0.8.1/api/css/home.gif b/0.8.1/api/css/home.gif
new file mode 100644
index 0000000..49aa306
Binary files /dev/null and b/0.8.1/api/css/home.gif differ

http://git-wip-us.apache.org/repos/asf/atlas-website/blob/c5b7bdb3/0.8.1/api/css/prettify.css
----------------------------------------------------------------------
diff --git a/0.8.1/api/css/prettify.css b/0.8.1/api/css/prettify.css
new file mode 100644
index 0000000..d44b3a2
--- /dev/null
+++ b/0.8.1/api/css/prettify.css
@@ -0,0 +1 @@
+.pln{color:#000}@media screen{.str{color:#080}.kwd{color:#008}.com{color:#800}.typ{color:#606}.lit{color:#066}.pun,.opn,.clo{color:#660}.tag{color:#008}.atn{color:#606}.atv{color:#080}.dec,.var{color:#606}.fun{color:red}}@media print,projection{.str{color:#060}.kwd{color:#006;font-weight:bold}.com{color:#600;font-style:italic}.typ{color:#404;font-weight:bold}.lit{color:#044}.pun,.opn,.clo{color:#440}.tag{color:#006;font-weight:bold}.atn{color:#404}.atv{color:#060}}pre.prettyprint{padding:2px;border:1px solid #888}ol.linenums{margin-top:0;margin-bottom:0}li.L0,li.L1,li.L2,li.L3,li.L5,li.L6,li.L7,li.L8{list-style-type:none}li.L1,li.L3,li.L5,li.L7,li.L9{background:#eee}
\ No newline at end of file