beam-commits mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Work logged] (BEAM-4065) Performance Tests Results Analysis and Regression Detection
Date Tue, 08 May 2018 14:17:00 GMT

     [ https://issues.apache.org/jira/browse/BEAM-4065?focusedWorklogId=99538&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-99538 ]

ASF GitHub Bot logged work on BEAM-4065:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 08/May/18 14:16
            Start Date: 08/May/18 14:16
    Worklog Time Spent: 10m 
      Work Description: szewi commented on a change in pull request #5180: [BEAM-4065] Basic performance tests analysis added.
URL: https://github.com/apache/beam/pull/5180#discussion_r186742639
 
 

 ##########
 File path: .test-infra/jenkins/verify_performance_test_results.py
 ##########
 @@ -0,0 +1,259 @@
+#!/usr/bin/env python
+#
+#
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#
+#       http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+#   This script performs basic analysis of performance test results.
+#   It operates in two modes:
+#   --mode=report - In this mode the script iterates over a list of BigQuery tables and
+#   analyses the data. This mode is intended to be run on a regular basis, e.g. daily.
+#   The report will contain the average execution time of the given metric, its comparison
+#   with the average calculated from historical data, the recent standard deviation, and
+#   the standard deviation calculated from historical data.
+#   --mode=validation - In this mode the script analyses a single BigQuery table and
+#   checks the recent results.
+#
+#   Other parameters are described in the script. Notification is an optional parameter:
+#   --send_notification - if present, the script sends a notification to a Slack channel.
+#   This requires setting the env variable SLACK_WEBOOK_URL, whose value can be obtained
+#   by creating an incoming webhook on Slack.
+#
+#   Example script usage:
+#   verify_performance_test_results.py \
+#     --bqtable='["beam_performance.avroioit_hdfs_pkb_results", \
+#                 "beam_performance.textioit_pkb_results"]' \
+#     --metric="run_time" --mode=report --send_notification
+#
+
+import argparse, time, calendar, json, re, os, requests
+from google.cloud import bigquery
+
+### TIME SETTINGS ###########
+TIME_PATTERN = '%d-%m-%Y_%H-%M-%S'
+NOW = int(time.time())
+#NOW = calendar.timegm(time.strptime('14-03-2018_13-30-27', TIME_PATTERN)) #left for testing
+# First analysis time interval definition - 24h before
+TIME_POINT_1 = NOW - 1 * 86400
+# Second analysis time interval definition - week before
+TIME_POINT_2 = NOW - 7 * 86400
+##############################
+
+SLACK_USER = os.getenv('SLACK_USER', "jenkins-beam")
+SLACK_WEBOOK_URL = os.getenv('SLACK_WEBOOK_URL')
+SLACK_CHANNEL = os.getenv('SLACK_CHANNEL', "beam-testing")
+
+def submit_big_query_job(sql_command, return_type):
+    query_job = client.query(sql_command)
+    results = query_job.result()
+    if return_type == "list":
+        # Queries returning multiple rows expose each value as query_result
+        result_list = []
+        for row in results:
+            result_list.append(row.query_result)
+        return result_list
+    elif return_type == "value":
+        # These queries must return a single value, exposed as query_result
+        for row in results:
+            return (row.query_result)
+    else:
+        print("Return type '{}' is not supported".format(return_type))
+        return None
+
+def count_queries(table_name, time_start, time_stop, metric):
+    # This function counts how many records were inserted in the time interval.
+    sql_command = 'select count(*) as query_result from {} where TIMESTAMP > {} and TIMESTAMP < {} and METRIC=\'{}\''.format(
+        table_name,
+        time_start,
+        time_stop,
+        metric
+        )
+    count = submit_big_query_job(sql_command, "value")
+    print("Number of records inserted into {} between {} - {}: {}".format(
+        table_name,
+        time.strftime(TIME_PATTERN, time.gmtime(time_start)),
+        time.strftime(TIME_PATTERN, time.gmtime(time_stop)),
+        count))
+    return count
+
+def get_average_from(table_name, time_start, time_stop, metric):
+    # This function returns the average value of the provided metric in the time interval.
+    sql_command = 'select avg(value) as query_result from {} where TIMESTAMP > {} and TIMESTAMP < {} and METRIC=\'{}\''.format(
+        table_name,
+        time_start,
+        time_stop,
+        metric
+    )
+    average = submit_big_query_job(sql_command, "value")
+    return average
+
+def get_stddev_from(table_name, time_start, time_stop, metric):
+    # This function returns the standard deviation of the provided metric in the time interval.
+    sql_command = 'select stddev(value) as query_result from {} where TIMESTAMP > {} and TIMESTAMP < {} and METRIC=\'{}\''.format(
+        table_name,
+        time_start,
+        time_stop,
+        metric
+    )
+    stddev = submit_big_query_job(sql_command, "value")
+    return stddev
+
+def get_records_from(table_name, time_start, time_stop, metric, number_of_records):
+    # This function returns the most recent metric values in the time interval.
+    sql_command = 'select value as query_result from {} where TIMESTAMP > {} and TIMESTAMP < {} and METRIC=\'{}\' ORDER BY TIMESTAMP DESC LIMIT {}'.format(
+        table_name,
+        time_start,
+        time_stop,
+        metric,
+        number_of_records
+        )
+    print(sql_command)
+    list_of_records = submit_big_query_job(sql_command, "list")
+    return list_of_records
+
+def create_report(bqtables):
+    # This function creates a report message from the tables.
+    report_message = ''
+    slack_report_message = ''
+
+    for bqtable in bqtables:
+        # Get raw table name
+        bq_table_name = re.sub(r'\"|\[|\]', '', bqtable).strip()
+
+        # Make sure data records were inserted
+        nb_recent_records = count_queries(bq_table_name, TIME_POINT_1, NOW, metric)
+        if nb_recent_records == 0:
 
 Review comment:
   Maybe this comment is unclear. The variable nb_recent_records was introduced for exactly this purpose: it stores the result of this check by running COUNT on the BigQuery table to verify how much data was uploaded recently. Or do you mean writing an additional Python testing method?

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 99538)
    Time Spent: 4h 50m  (was: 4h 40m)

> Performance Tests Results Analysis and Regression Detection
> -----------------------------------------------------------
>
>                 Key: BEAM-4065
>                 URL: https://issues.apache.org/jira/browse/BEAM-4065
>             Project: Beam
>          Issue Type: Improvement
>          Components: build-system
>            Reporter: Kamil Szewczyk
>            Assignee: Kamil Szewczyk
>            Priority: Major
>          Time Spent: 4h 50m
>  Remaining Estimate: 0h
>
> Performance tests run on Jenkins on a regular basis and their results are pushed to BigQuery.
> However, there is no automatic regression detection or daily reporting of results.
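The report mode discussed in this ticket compares a recent average against a historical mean and standard deviation. One plausible regression rule built from those statistics (the threshold factor k is an illustrative assumption, not something the ticket specifies):

```python
import statistics

def is_regression(recent_values, historical_values, k=2.0):
    """Flag a regression when the recent average exceeds the
    historical mean by more than k historical standard deviations."""
    recent_avg = statistics.mean(recent_values)
    hist_avg = statistics.mean(historical_values)
    hist_std = statistics.stdev(historical_values)
    return recent_avg > hist_avg + k * hist_std

historical = [100.0, 102.0, 98.0, 101.0, 99.0]  # stable run times (s)
print(is_regression([101.0, 102.0], historical))  # -> False (small drift)
print(is_regression([130.0, 128.0], historical))  # -> True (clear slowdown)
```

With the sample data, the historical mean is 100.0 and the sample standard deviation is about 1.58, so the k=2 threshold sits near 103.2 seconds.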



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
