Subject: svn commit: r20216 [14/18] - in /dev/metron/0.4.0-RC4: ./ site-book/ site-book/css/ site-book/images/ site-book/images/logos/ site-book/images/profiles/ site-book/img/ site-book/js/ site-book/metron-analytics/ site-book/metron-analytics/metron-maas-ser...
Date: Tue, 27 Jun 2017 18:15:56 -0000
From: mattf@apache.org
To: commits@metron.apache.org

Added: dev/metron/0.4.0-RC4/site-book/metron-platform/metron-data-management/index.html
==============================================================================

Resource Data Management

This project is a collection of classes to assist with loading various enrichment and threat intelligence sources into Metron.

Simple HBase Enrichments/Threat Intelligence

The vast majority of enrichment and threat intelligence processing tends toward the following pattern:

  • Take a field
  • Look up the field in a key/value store
  • If the key exists, then either it’s a threat to be alerted on or it should be enriched with the value associated with the key

As such, we have created this capability as a default threat intel and enrichment adapter. The basic primitive for simple enrichments and threat intelligence sources is a complex key containing the following:

  • Type : The type of threat intel or enrichment (e.g. malicious_ip)
  • Indicator : The indicator in question
  • Value : The value to associate with the (type, indicator) pair. This is a JSON map.

At present, all of the dataload utilities function by converting raw data sources to this primitive key (type, indicator) and value to be placed in HBase.

In the case of threat intel, a hit on the threat intel table will result in:

  • The is_alert field being set to true in the index
  • A field named threatintels.hbaseThreatIntel.$field.$threatintel_type being set to alert, where:
      • $field is the field in the original document that was a match (e.g. src_ip_addr)
      • $threatintel_type is the type of threat intel imported (defined in the Extractor configuration below)
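
For example, a bro document whose src_ip_addr matched a malicious_ip indicator might carry fields like the following after the threat intel phase (a sketch; the field names follow the convention above, but the values shown are illustrative):

{
  "source.type" : "bro",
  "src_ip_addr" : "123.45.123.12",
  "is_alert" : "true",
  "threatintels.hbaseThreatIntel.src_ip_addr.malicious_ip" : "alert"
}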
In the case of simple HBase enrichment, a hit on the enrichments table will result in a new field for each key in the value, named enrichments.hbaseEnrichment.$field.$enrichment_type.$key, where:

  • $field is the field in the original document that was a match (e.g. src_ip_addr)
  • $enrichment_type is the type of enrichment imported (defined in the Extractor configuration below)
  • $key is a key in the JSON map associated with the row in HBase

For instance, in the situation where we had the following very silly key/value in the enrichment table in HBase:

  • indicator : 127.0.0.1
  • type : important_addresses
  • value : { "name" : "localhost", "location" : "home" }

If we had a document whose ip_src_addr came through with a value of 127.0.0.1, the following fields would be added to the indexed document:

  • enrichments.hbaseEnrichment.ip_src_addr.important_addresses.name : localhost
  • enrichments.hbaseEnrichment.ip_src_addr.important_addresses.location : home
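
Once such a row is loaded, you can spot-check it from the Stellar REPL with ENRICHMENT_GET (a sketch; the table name enrichments and column family cf are assumptions matching the ENRICHMENT_EXISTS example later in this document, and the printed output is illustrative):

[Stellar]>>> ENRICHMENT_GET('important_addresses', '127.0.0.1', 'enrichments', 'cf')
{name=localhost, location=home}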

Extractor Framework

For the purpose of ingesting data in a variety of formats, we have created an Extractor framework which allows common data formats to be interpreted as enrichment or threat intelligence sources. The formats supported at present are:

  • CSV (both threat intel and enrichment)
  • STIX (threat intel only)
  • Custom (pass your own class)

All of the current utilities take a JSON file that configures how to interpret the input data. This JSON describes the type of the data and, if the format is not fixed (as it is in STIX, for example), its schema.

CSV Extractor

Consider the following example configuration file, which describes how to process a CSV file:

{
  "config" : {
    "columns" : {
         "ip" : 0
        ,"source" : 2
    }
    ,"indicator_column" : "ip"
    ,"type" : "malicious_ip"
    ,"separator" : ","
  }
  ,"extractor" : "CSV"
}

In this example, we have described the schema to the extractor (i.e. the columns field): two columns, ip at the first position and source at the third. We have indicated that the ip column contains the indicator and that the enrichment type is named malicious_ip. We have also indicated that the extractor to use is the CSV extractor; the other options are the STIX extractor or a fully qualified classname for your own extractor.

The source column values will show up in the value in HBase because source is called out as a non-indicator column; its key in the value map will be ‘source’. For instance, given an input line of 123.45.123.12,something,the grapevine, the following key and value would be extracted:

  • Indicator : 123.45.123.12
  • Type : malicious_ip
  • Value : { "ip" : "123.45.123.12", "source" : "the grapevine" }

STIX Extractor

Consider the following config for importing STIX documents. This is a threat intelligence interchange format, so it is particularly relevant and attractive data to import for our purposes. Because STIX is a standard format, there is no need to specify the schema or how to interpret the documents.

We support a subset of STIX messages for importation:

| STIX Type | Specific Type | Enrichment Type Name |
|-----------|---------------|----------------------|
| Address   | IPV_4_ADDR    | address:IPV_4_ADDR   |
| Address   | IPV_6_ADDR    | address:IPV_6_ADDR   |
| Address   | E_MAIL        | address:E_MAIL       |
| Address   | MAC           | address:MAC          |
| Domain    | FQDN          | domain:FQDN          |
| Hostname  |               | hostname             |

NOTE: The enrichment type will be used as the type above.

Consider the following configuration for an Extractor:

{
  "config" : {
    "stix_address_categories" : "IPV_4_ADDR"
  }
  ,"extractor" : "STIX"
}

Here, we’re configuring the STIX extractor to load from a series of STIX files; however, we only want to bring in IPv4 addresses from the set of all possible addresses. Note that if no categories are specified for import, all are assumed. Also, only address and domain types allow filtering, via the stix_address_categories and stix_domain_categories config parameters.

Common Extractor Properties

Users also have the ability to transform and filter enrichment and threat intel data using Stellar as it is loaded into HBase. This feature is available to all extractor types.

As an example, we will provide a CSV list of top domains as an enrichment, and filter the value metadata, as well as the indicator column, with Stellar expressions:

{
  "config" : {
    "zk_quorum" : "node1:2181",
    "columns" : {
       "rank" : 0,
       "domain" : 1
    },
    "value_transform" : {
       "domain" : "DOMAIN_REMOVE_TLD(domain)"
    },
    "value_filter" : "LENGTH(domain) > 0",
    "indicator_column" : "domain",
    "indicator_transform" : {
       "indicator" : "DOMAIN_REMOVE_TLD(indicator)"
    },
    "indicator_filter" : "LENGTH(indicator) > 0",
    "type" : "top_domains",
    "separator" : ","
  },
  "extractor" : "CSV"
}

There are two property maps that work with full Stellar expressions, and two properties that work with Stellar predicates:

| Property | Description |
|----------|-------------|
| value_transform | Transforms fields defined in the columns mapping with Stellar transformations. New keys introduced in the transform will be added to the value metadata. |
| value_filter | Allows additional filtering with Stellar predicates based on results from the value transformations. In this example, records whose domain property is empty after removing the TLD will be omitted. |
| indicator_transform | Transforms the indicator column independently of the value transformations. You can refer to the original indicator value by using indicator as the variable name, as shown in the example above. In addition, if you prefer to piggyback your transformations, you can refer to the variable domain, which allows your indicator transforms to inherit transformations done to this value during the value transformations. |
| indicator_filter | Allows additional filtering with Stellar predicates based on results from the indicator transformations. In this example, records whose indicator value is empty after removing the TLD will be omitted. |
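
You can preview what these expressions will do from the Stellar REPL before running an import (a sketch; the output shown is illustrative):

[Stellar]>>> domain := DOMAIN_REMOVE_TLD('google.com')
[Stellar]>>> domain
google
[Stellar]>>> LENGTH(domain) > 0
true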
top-list.csv:

1,google.com
2,youtube.com
...

Running a file import with the above data and extractor configuration would result in the following two extracted data records:

| Indicator | Type | Value |
|-----------|------|-------|
| google    | top_domains | { "rank" : "1", "domain" : "google" } |
| youtube   | top_domains | { "rank" : "2", "domain" : "youtube" } |

Similar to the parser framework, providing a Zookeeper quorum via the zk_quorum property will enable Stellar to access properties that reside in the global config. Expanding on our example above, if the global config looks as follows:

{
    "global_property" : "metron-ftw"
}

And we expand our value_transform:

...
    "value_transform" : {
       "domain" : "DOMAIN_REMOVE_TLD(domain)",
       "a-new-prop" : "global_property"
    },
...

The resulting value data would look like the following:

| Indicator | Type | Value |
|-----------|------|-------|
| google    | top_domains | { "rank" : "1", "domain" : "google", "a-new-prop" : "metron-ftw" } |
| youtube   | top_domains | { "rank" : "2", "domain" : "youtube", "a-new-prop" : "metron-ftw" } |

Enrichment Config

In order to automatically add new enrichment and threat intel types to existing, running enrichment topologies, you will need to add new fields and new types to the zookeeper configuration. A convenience parameter exists to assist with this when doing an import; namely, you can specify the enrichment configs and how they associate with the fields of the documents flowing through the enrichment topology.

Consider the following Enrichment Configuration JSON. This one is for a threat intelligence type:

{
  "zkQuorum" : "localhost:2181"
 ,"sensorToFieldList" : {
    "bro" : {
           "type" : "THREAT_INTEL"
          ,"fieldToEnrichmentTypes" : {
             "ip_src_addr" : [ "malicious_ip" ]
            ,"ip_dst_addr" : [ "malicious_ip" ]
          }
    }
  }
}

We have to specify the following:

  • The zookeeper quorum which holds the cluster configuration
  • The mapping between the fields in the enriched documents and the enrichment types

This configuration allows the ingestion tools to update zookeeper post-ingestion so that the enrichment topology can immediately take advantage of the new type.

Loading Utilities

The two configurations above are used by three separate ingestion tools:

  • Taxii Loader
  • Bulk load from HDFS via MapReduce
  • Flat file ingestion

Taxii Loader

The shell script $METRON_HOME/bin/threatintel_taxii_load.sh can be used to poll a Taxii server for STIX documents and ingest them into HBase. It is quite common for this Taxii server to be an aggregation server such as Soltra Edge.

In addition to the Enrichment and Extractor configs described above, this loader requires a configuration file describing the connection information for the Taxii server. An illustrative example of such a configuration file is:

{
   "endpoint" : "http://localhost:8282/taxii-discovery-service"
  ,"type" : "DISCOVER"
  ,"collection" : "guest.Abuse_ch"
  ,"table" : "threat_intel"
  ,"columnFamily" : "cf"
  ,"allowedIndicatorTypes" : [ "domainname:FQDN", "address:IPV_4_ADDR" ]
}

As you can see, we are specifying the following information:

  • endpoint : The URL of the endpoint
  • type : POLL or DISCOVER, depending on the endpoint
  • collection : The Taxii collection to ingest
  • table : The HBase table to import into
  • columnFamily : The column family to import into
  • allowedIndicatorTypes : An array of acceptable threat intel types (see the “Enrichment Type Name” column of the STIX table above for the possibilities)

The parameters for the utility are as follows:

| Short Code | Long Code | Is Required? | Description |
|------------|-----------|--------------|-------------|
| -h |  | No | Generate the help screen/set of options |
| -e | --extractor_config | Yes | JSON document describing the extractor for this input data source |
| -c | --taxii_connection_config | Yes | The JSON config file to configure the connection |
| -p | --time_between_polls | No | The time between polls of the Taxii server, in milliseconds (default: 1 hour) |
| -b | --begin_time | No | Start time to poll the Taxii server (all data from that point will be gathered in the first pull). The format for the date is yyyy-MM-dd HH:mm:ss |
| -l | --log4j | No | The Log4j properties file to load |
| -n | --enrichment_config | No | The JSON document describing the enrichments to configure. Unlike other loaders, this is run first if specified. |
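
Putting it together, an invocation might look like the following (a sketch; the config file names are assumptions, and -p is set to one hour in milliseconds):

$METRON_HOME/bin/threatintel_taxii_load.sh \
    -e ./extractor_config.json \
    -c ./taxii_connection_config.json \
    -p 3600000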

Flatfile Loader

+

The shell script $METRON_HOME/bin/flatfile_loader.sh will read data from local disk, HDFS or URLs and load the enrichment or threat intel data into an HBase table.
Note: This utility works for enrichment as well as threat intel due to the underlying infrastructure being the same.

+

One special thing to note here is that there is a special configuration parameter to the Extractor config that is only considered during this loader:

  • inputFormat : This specifies how to consider the data. The two implementations are BY_LINE and WHOLE_FILE.

The default is BY_LINE, which makes sense for a list of CSVs, where each line indicates a unit of information to be imported. However, if you are importing a set of STIX documents, then you want each whole document to be considered as input to the Extractor.

The parameters for the utility are as follows:

| Short Code | Long Code | Is Required? | Description |
|------------|-----------|--------------|-------------|
| -h |  | No | Generate the help screen/set of options |
| -q | --quiet | No | Do not update progress |
| -e | --extractor_config | Yes | JSON document describing the extractor for this input data source |
| -m | --import_mode | No | The import mode to use: LOCAL, MR. Default: LOCAL |
| -t | --hbase_table | Yes | The HBase table to import into |
| -c | --hbase_cf | Yes | The HBase table column family to import into |
| -i | --input | Yes | The input data location on local disk. If this is a file, then that file will be loaded. If this is a directory, then the files will be loaded recursively under that directory. |
| -l | --log4j | No | The log4j properties file to load |
| -n | --enrichment_config | No | The JSON document describing the enrichments to configure. Unlike other loaders, this is run first if specified. |
| -p | --threads | No | The number of threads to use when extracting data. The default is the number of cores. |
| -b | --batchSize | No | The batch size to use for HBase puts |
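
For example, loading the top-list.csv enrichment from the Common Extractor Properties section might look like the following (a sketch; the table name, column family, and file names are assumptions reusing names from earlier in this document):

$METRON_HOME/bin/flatfile_loader.sh -i ./top-list.csv \
    -t enrichments -c cf \
    -e ./top_domains_extractor.json -m LOCAL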

GeoLite2 Loader

+

The shell script $METRON_HOME/bin/geo_enrichment_load.sh will retrieve MaxMind GeoLite2 data and load data into HDFS, and update the configuration.

+

THIS SCRIPT WILL NOT UPDATE AMBARI’S GLOBAL.JSON, JUST THE ZK CONFIGS. CHANGES WILL GO INTO EFFECT, BUT WILL NOT PERSIST PAST AN AMBARI RESTART UNTIL UPDATED THERE.

+

The parameters for the utility are as follows:

| Short Code | Long Code | Is Required? | Description |
|------------|-----------|--------------|-------------|
| -h |  | No | Generate the help screen/set of options |
| -g | --geo_url | No | GeoIP URL - defaults to http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz |
| -r | --remote_dir | No | HDFS directory to land the formatted GeoIP file - defaults to /apps/metron/geo/<epoch millis>/ |
| -t | --tmp_dir | No | Directory for landing the temporary GeoIP data - defaults to /tmp |
| -z | --zk_quorum | Yes | Zookeeper Quorum URL (zk1:port,zk2:port,…) |
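
A typical invocation relies on the defaults for everything except the required quorum (a sketch; the quorum value is an assumption matching the examples elsewhere in this document):

$METRON_HOME/bin/geo_enrichment_load.sh -z node1:2181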

Added: dev/metron/0.4.0-RC4/site-book/metron-platform/metron-enrichment/index.html
==============================================================================

Enrichment

Introduction

The enrichment topology is a topology dedicated to taking the data from the parsing topologies, which has been normalized into the Metron data format (e.g. a JSON map structure with original_message and timestamp), and:

  • Enriching messages with external data from data stores (e.g. HBase) by adding new fields based on existing fields in the messages
  • Marking messages as threats based on data in external data stores
  • Marking threat alerts with a numeric triage level based on a set of Stellar rules

Enrichment Architecture

[Architecture diagram]

Enrichment Configuration

The configuration for the enrichment topology, the topology primarily responsible for enrichment and threat intelligence enrichment, is defined by JSON documents stored in zookeeper.

There are two types of configurations at the moment: global and sensor specific.

Global Configuration

See the “Global Configuration” section.

Sensor Enrichment Configuration

The sensor specific configuration is intended to configure the individual enrichments and threat intelligence enrichments for a given sensor type (e.g. snort).

Just like the global config, the format is a JSON stored in zookeeper. The configuration is a complex JSON object with the following top level fields:

  • enrichment : A complex JSON object representing the configuration of the enrichments
  • threatIntel : A complex JSON object representing the configuration of the threat intelligence enrichments


The enrichment Configuration

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Field Description Example
fieldToTypeMap In the case of a simple HBase enrichment (i.e. a key/value lookup), the mapping between fields and the enrichment types associated with those fields must be known. This enrichment type is used as part of the HBase key. Note: applies to hbaseEnrichment only. "fieldToTypeMap" : { "ip_src_addr" : [ "asset_enrichment" ] }
fieldMap The map of enrichment bolts names to configuration handlers which know how to split the message up. The simplest of which is just a list of fields. More complex examples would be the stellar enrichment which provides stellar statements. Each field listed in the array arg is sent to the enrichment referenced in the key. Cardinality of fields to enrichments is many-to-many. "fieldMap": {"hbaseEnrichment": ["ip_src_addr","ip_dst_addr"]}
config The general configuration for the enrichment "config": {"typeToColumnFamily": { "asset_enrichment" : "cf" } }
+

The config map is intended to house enrichment specific configuration. For instance, for the hbaseEnrichment, the mappings between the enrichment types and the column families are specified.

The fieldMap contents are of interest because they contain the routing and configuration information for the enrichments. When we say ‘routing’, we mean how the messages get split up and sent to the enrichment adapter bolts. The simplest approach, by far, is just to provide a simple list, as in

    "fieldMap": {
      "geo": [
        "ip_src_addr",
        "ip_dst_addr"
      ],
      "host": [
        "ip_src_addr",
        "ip_dst_addr"
      ],
      "hbaseEnrichment": [
        "ip_src_addr",
        "ip_dst_addr"
      ]
    }

Based on this sample config, both ip_src_addr and ip_dst_addr will go to the geo, host, and hbaseEnrichment adapter bolts.

Stellar Enrichment Configuration

For the geo, host and hbaseEnrichment adapters, this is sufficient. However, more complex enrichments may contain their own configuration. Currently, the stellar enrichment is more adaptable and thus requires a more nuanced configuration.

At its most basic, we want to take a message and apply a couple of enrichments, such as converting the hostname field to lowercase. We do this by specifying the transformation inside of the config for the stellar fieldMap. There are two supported syntaxes. The first specifies the transformations as a map, with the field as the key and the stellar expression as the value:

    "fieldMap": {
       ...
      "stellar" : {
        "config" : {
          "hostname" : "TO_LOWER(hostname)"
        }
      }
    }

The second specifies the transformations as a list, using the same var := expr syntax as is used in the Stellar REPL, such as:

    "fieldMap": {
       ...
      "stellar" : {
        "config" : [
          "hostname := TO_LOWER(hostname)"
        ]
      }
    }

Sometimes arbitrary stellar enrichments may take enough time that you would prefer to split them into groups and execute the groups of stellar enrichments in parallel. Take, for instance, the case where you want to do an HBase enrichment and a profiler call which are independent of one another. This use case is supported by splitting the enrichments up into groups.

Consider the following example:

    "fieldMap": {
       ...
      "stellar" : {
        "config" : {
          "malicious_domain_enrichment" : {
            "is_bad_domain" : "ENRICHMENT_EXISTS('malicious_domains', ip_dst_addr, 'enrichments', 'cf')"
          },
          "login_profile" : [
            "profile_window := PROFILE_WINDOW('from 6 months ago')",
            "global_login_profile := PROFILE_GET('distinct_login_attempts', 'global', profile_window)",
            "stats := STATS_MERGE(global_login_profile)",
            "auth_attempts_median := STATS_PERCENTILE(stats, 0.5)",
            "auth_attempts_sd := STATS_SD(stats)",
            "profile_window := null",
            "global_login_profile := null",
            "stats := null"
          ]
        }
      }
    }

Here we want to perform two enrichments that hit HBase, and we would rather not run them in sequence. These enrichments are entirely independent of one another (i.e. neither relies on the output of the other). In this case, we’ve created a group called malicious_domain_enrichment to inquire about whether the destination address exists in the HBase enrichment table in the malicious_domains enrichment type. This is a simple enrichment, so we can express the enrichment group as a map with the new field is_bad_domain being a key and the stellar expression associated with that operation being the associated value.

In contrast, the stellar enrichment group login_profile is interacting with the profiler and has multiple temporary expressions (i.e. profile_window, global_login_profile, and stats) that are useful only within the context of this group of stellar expressions. In this case, we need to use the list construct when specifying the group and remember to set the temporary variables to null so they are not passed along.

In general, the things to note from this section are as follows:

  • The stellar enrichments for the stellar enrichment adapter are specified in the config for the stellar enrichment adapter in the fieldMap
  • Groups of independent expressions (i.e. no expression in any group depends on the output of an expression from another group) may be executed in parallel
  • If you have the need to use temporary variables, you may use the list construct. Ensure that you assign the variables to null before the end of the group.
  • Ensure that you do not assign a field to a stellar expression which returns an object which JSON cannot represent.
  • Fields assigned to Maps as part of stellar enrichments have the maps unfolded, similar to the HBase Enrichment
      • For example, the stellar enrichment for field foo which assigns a map such as foo := { 'grok' : 1, 'bar' : 'baz'} would yield the following fields:
          • foo.grok == 1
          • foo.bar == 'baz'


The threatIntel Configuration

| Field | Description | Example |
|-------|-------------|---------|
| fieldToTypeMap | In the case of a simple HBase threat intel enrichment (i.e. a key/value lookup), the mapping between fields and the enrichment types associated with those fields must be known. This enrichment type is used as part of the HBase key. Note: applies to hbaseThreatIntel only. | "fieldToTypeMap" : { "ip_src_addr" : [ "malicious_ips" ] } |
| fieldMap | The map of threat intel enrichment bolt names to fields in the JSON messages. Each field listed in the array arg is sent to the threat intel enrichment bolt referenced in the key. Cardinality of fields to enrichments is many-to-many. | "fieldMap": {"hbaseThreatIntel": ["ip_src_addr","ip_dst_addr"]} |
| triageConfig | The configuration of the threat triage scorer. In the situation where a threat is detected, a score is assigned to the message and embedded in the indexed message. | "riskLevelRules" : { "IN_SUBNET(ip_dst_addr, '192.168.0.0/24')" : 10 } |
| config | The general configuration for the threat intel | "config": {"typeToColumnFamily": { "malicious_ips" : "cf" } } |

The config map is intended to house threat intel specific configuration. For instance, for the hbaseThreatIntel threat intel adapter, the mappings between the enrichment types and the column families are specified. The fieldMap configuration is similar to the enrichment configuration in that the available adapters are the same.

The triageConfig field is also a complex field, and it bears some description:

| Field | Description | Example |
|-------|-------------|---------|
| riskLevelRules | A list of rules (represented as Stellar expressions) associated with scores, with optional names and comments | see below |
| aggregator | An aggregation function that takes all non-zero scores representing the matching queries from riskLevelRules and aggregates them into a single score | "MAX" |

A risk level rule is of the following format:

  • name : The name of the threat triage rule
  • comment : A comment describing the rule
  • rule : The rule, represented as a Stellar statement
  • score : The associated threat triage score for the rule
  • reason : The reason the rule tripped. Can be represented as a Stellar statement.

An example of a rule is as follows:

    "riskLevelRules" : [
        {
          "name" : "is internal"
        , "comment" : "determines if the destination is internal."
        , "rule" : "IN_SUBNET(ip_dst_addr, '192.168.0.0/24')"
        , "score" : 10
        , "reason" : "FORMAT('%s is internal', ip_dst_addr)"
        }
                       ]

The supported aggregation functions are:

  • MAX : The max of all of the associated values for matching queries
  • MIN : The min of all of the associated values for matching queries
  • MEAN : The mean of all of the associated values for matching queries
  • POSITIVE_MEAN : The mean of the positive associated values for the matching queries

Example Configuration

An example configuration for the YAF sensor is as follows:

{
  "enrichment": {
    "fieldMap": {
      "geo": [
        "ip_src_addr",
        "ip_dst_addr"
      ],
      "host": [
        "ip_src_addr",
        "ip_dst_addr"
      ],
      "hbaseEnrichment": [
        "ip_src_addr",
        "ip_dst_addr"
      ]
    },
    "fieldToTypeMap": {
      "ip_src_addr": [
        "playful_classification"
      ],
      "ip_dst_addr": [
        "playful_classification"
      ]
    }
  },
  "threatIntel": {
    "fieldMap": {
      "hbaseThreatIntel": [
        "ip_src_addr",
        "ip_dst_addr"
      ]
    },
    "fieldToTypeMap": {
      "ip_src_addr": [
        "malicious_ip"
      ],
      "ip_dst_addr": [
        "malicious_ip"
      ]
    },
    "triageConfig" : {
      "riskLevelRules" : [
        {
          "rule" : "ip_src_addr == '10.0.2.3' or ip_dst_addr == '10.0.2.3'",
          "score" : 10
        }
      ],
      "aggregator" : "MAX"
    }
  }
}

ThreatIntel alert levels are emitted as a new field, threat.triage.level. So, for the example above, an incoming message that trips the ip_src_addr rule will have a new field threat.triage.level=10.
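
Such a message might look like the following after triage (a sketch; only the relevant fields are shown, and the values are illustrative):

{
  "source.type" : "yaf",
  "ip_src_addr" : "10.0.2.3",
  "is_alert" : "true",
  "threat.triage.level" : 10.0
}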

Example Enrichment via Stellar

Let’s walk through doing a simple enrichment using Stellar on your cluster, using the Squid topology.

Install Prerequisites

Now let’s install some prerequisites:

  • Squid client, via yum install squid
  • ES Head plugin, via /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head

Then start Squid via service squid start.


Adjust Enrichment Configurations for Squid to Call Stellar

Let’s adjust the configurations for the Squid topology to annotate the messages using some Stellar functions.

  • Edit the squid enrichment configuration at $METRON_HOME/config/zookeeper/enrichments/squid.json (this file will not exist, so create a new one) to add some new fields based on stellar queries:

        {
          "enrichment" : {
            "fieldMap": {
              "stellar" : {
                "config" : {
                  "numeric" : {
                    "foo": "1 + 1"
                  },
                  "ALL_CAPS" : "TO_UPPER(source.type)"
                }
              }
            }
          },
          "threatIntel" : {
            "fieldMap": {
              "stellar" : {
                "config" : {
                  "bar" : "TO_UPPER(source.type)"
                }
              }
            },
            "triageConfig" : {
            }
          }
        }

    We have added the following fields as part of the enrichment phase of the enrichment topology:

      • foo == 2
      • ALL_CAPS == SQUID

We have added the following as part of the threat intel:

  • bar == SQUID

Please note that foo and ALL_CAPS will be applied in separate workers, since they are in separate groups.

  • Upload the new configs via $METRON_HOME/bin/zk_load_configs.sh --mode PUSH -i $METRON_HOME/config/zookeeper -z node1:2181
  • Make the Squid topic in kafka via /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper node1:2181 --create --topic squid --partitions 1 --replication-factor 1

Start Topologies and Send Data

Now we need to start the topologies and send some data:

  • Start the squid topology via $METRON_HOME/bin/start_parser_topology.sh -k node1:6667 -z node1:2181 -s squid
  • Generate some data via the squid client:
      • squidclient http://yahoo.com
      • squidclient http://cnn.com
  • Send the data to kafka via cat /var/log/squid/access.log | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list node1:6667 --topic squid
  • Browse the data in elasticsearch via the ES Head plugin @ http://node1:9200/_plugin/head/ and verify that the squid index contains two documents
  • Ensure that the documents have the new fields foo, bar, and ALL_CAPS with values as described above

Note that we could have used any Stellar statements here, including calling out to HBase via ENRICHMENT_GET and ENRICHMENT_EXISTS or even calling a machine learning model via Model as a Service.
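
For instance, a stellar config entry like the following (a sketch; the new field name is hypothetical, and the enrichment type, table, and column family are assumptions reusing names from earlier in this document) would pull an entire enrichment value out of HBase during the enrichment phase:

    "stellar" : {
      "config" : {
        "src_importance" : "ENRICHMENT_GET('important_addresses', ip_src_addr, 'enrichments', 'cf')"
      }
    }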

Notes on Performance Tuning

A default Metron installation is untuned for production deployment. There are a few knobs to tune to get the most out of your system.

Kafka Queue

The enrichments kafka queue is a collection point from all of the parser topologies. As such, make sure that the number of partitions in the kafka topic is sufficient to handle the throughput that you expect from your parser topologies.
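
You can check and, if necessary, increase the partition count for the topic with the standard kafka tooling (a sketch; the broker path and quorum match the examples above, and the partition count shown is illustrative):

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper node1:2181 \
    --alter --topic enrichments --partitions 10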

Enrichment Topology

The enrichment topology as started by the $METRON_HOME/bin/start_enrichment_topology.sh script uses a default of one executor per bolt. In a real production system, this should be customized by modifying the flux file in $METRON_HOME/flux/enrichment/remote.yaml.

  • Add a parallelism field to the bolts to give Storm a parallelism hint for the various components. Give bolts which appear to be bottlenecks (e.g. the stellar enrichment bolt, or the hbase enrichment and threat intel bolts) a larger hint.
  • Add a parallelism field to the kafka spout which matches the number of partitions for the enrichment kafka queue.
  • Adjust the number of workers for the topology by adjusting the topology.workers field for the topology.
Finally, if workers and executors are new to you, or you don’t know where to modify the flux file, Storm’s documentation on understanding the parallelism of a Storm topology might be of use to you; see also the sketch below.
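
As an illustration, the relevant knobs in a flux file look like the following (a sketch; the component ids and values shown are illustrative, not the actual ids used in remote.yaml):

spouts:
  - id: "kafkaSpout"
    className: "..."
    # match the number of partitions on the enrichments topic
    parallelism: 10

bolts:
  - id: "stellarEnrichmentBolt"
    className: "..."
    # give bottleneck bolts a larger hint
    parallelism: 20

config:
  topology.workers: 4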

Added: dev/metron/0.4.0-RC4/site-book/metron-platform/metron-indexing/index.html
==============================================================================

Indexing

Introduction

The indexing topology is a topology dedicated to taking the data from the enrichment topology that has been enriched and storing it in one or more supported indices:

  • HDFS as rolled text files, one JSON blob per line
  • Elasticsearch
  • Solr

By default, this topology writes out to both HDFS and either Elasticsearch or Solr.

Indices are written in batch, and the batch size is specified in the Sensor Indexing Configuration via the batchSize parameter. This configuration is variable by sensor type.


Indexing Architecture

[Architecture diagram]

The indexing topology is extremely simple. Data is ingested into kafka and sent to:

  • An indexing bolt configured to write to either Elasticsearch or Solr
  • An indexing bolt configured to write to HDFS under /apps/metron/enrichment/indexed

By default, errors during indexing are sent back into the indexing kafka queue so that they can be indexed and archived.

Sensor Indexing Configuration

The sensor specific configuration is intended to configure the indexing used for a given sensor type (e.g. snort).

Just like the global config, the format is a JSON stored in zookeeper and on disk at $METRON_HOME/config/zookeeper/indexing. Within the sensor-specific configuration, you can configure the individual writers. The writers currently supported are:

  • elasticsearch
  • hdfs
  • solr

Depending on how you start the indexing topology, it will have either the elasticsearch or solr writer running, along with the hdfs writer.

The configuration for an individual writer is a JSON map with the following fields:

  • index : The name of the index to write to (defaults to the name of the sensor)
  • batchSize : The size of the batch that is written to the indices at once (defaults to 1)
  • enabled : Whether the writer is enabled (defaults to true)

Indexing Configuration Examples

For a given sensor, the following configurations illustrate the possible cases:

Base Case

{
}

or no file at all.

  • elasticsearch writer
      • enabled
      • batch size of 1
      • index name the same as the sensor
  • hdfs writer
      • enabled
      • batch size of 1
      • index name the same as the sensor

If a writer config is unspecified, then a warning is indicated in the Storm console, e.g.: WARNING: Default and (likely) unoptimized writer config used for hdfs writer and sensor squid

Fully specified

{
   "elasticsearch": {
      "index": "foo",
      "batchSize" : 100,
      "enabled" : true
    },
   "hdfs": {
      "index": "foo",
      "batchSize": 1,
      "enabled" : true
    }
}

  • elasticsearch writer
      • enabled
      • batch size of 100
      • index name of “foo”
  • hdfs writer
      • enabled
      • batch size of 1
      • index name of “foo”

HDFS Writer turned off

{
   "elasticsearch": {
      "index": "foo",
      "enabled" : true
    },
   "hdfs": {
      "index": "foo",
      "batchSize": 100,
      "enabled" : false
    }
}

  • elasticsearch writer
      • enabled
      • batch size of 1
      • index name of “foo”
  • hdfs writer
      • disabled


Notes on Performance Tuning

A default Metron installation is untuned for production deployment. By far, the most likely piece to require TLC from a performance perspective is the indexing layer. An index that does not keep up will back up, and you will see errors in the kafka bolt. There are a few knobs to tune to get the most out of your system.


Kafka Queue

The indexing kafka queue is a collection point from the enrichment topology. As such, make sure that the number of partitions in the kafka topic is sufficient to handle the throughput that you expect.


Indexing Topology

+

The indexing topology as started by the $METRON_HOME/bin/start_elasticsearch_topology.sh or $METRON_HOME/bin/start_solr_topology.sh script uses a default of one executor per bolt. In a real production system, this should be customized by modifying the flux file in $METRON_HOME/flux/indexing/remote.yaml.

  • Add a parallelism field to the bolts to give Storm a parallelism hint for the various components. Give bolts which appear to be bottlenecks (e.g. the indexing bolt) a larger hint.
  • Add a parallelism field to the kafka spout which matches the number of partitions for the enrichment kafka queue.
  • Adjust the number of workers for the topology by adjusting the topology.workers field for the topology.

Finally, if workers and executors are new to you, or you don’t know where to modify the flux file, Storm’s documentation on understanding the parallelism of a Storm topology (and the flux sketch in the enrichment section above) might be of use to you.

Zeppelin Notebooks

Zeppelin notebooks can be added to /src/main/config/zeppelin/ (and subdirectories can be created for organization). The placed files must be .json files and be named appropriately. These files must be added to the metron.spec file and the RPMs rebuilt in order to be available for loading into Ambari.

The notebook files will be found on the server in $METRON_HOME/config/zeppelin.

The Ambari Management Pack has a custom action, ZEPPELIN_DASHBOARD_INSTALL, to load these templates and import them into Zeppelin.