Subject: svn commit: r993913 [5/16] - in /websites/staging/climate/trunk/content: ./ api/1.0.0/_sources/ api/1.0.0/_sources/config/ api/1.0.0/_sources/data_source/ api/1.0.0/_sources/ocw/ api/1.0.0/_sources/ui-backend/ api/1.0.0/_static/ api/1.0.0/config/ api/1...
From: buildbot@apache.org
To: commits@climate.apache.org
Date: Wed, 27 Jul 2016 17:50:57 -0000

Added: websites/staging/climate/trunk/content/api/1.0.0/ocw/overview.html
Overview¶

+

The Apache Open Climate Workbench toolkit aims to provide a suite of tools to make climate scientists' lives easier. It does this by providing tools for loading and manipulating datasets, running evaluations, and plotting results. Below is a breakdown of many of the OCW components with an explanation of how to use them. An OCW evaluation usually has the following steps:

  1. Load one or more datasets
  2. Perform dataset manipulations (subset, temporal/spatial rebin, etc.)
  3. Load various metrics
  4. Instantiate and run the evaluation
  5. Plot results
+

Common Data Abstraction¶

+

The OCW dataset.Dataset class is the primary data abstraction used throughout OCW. It facilitates the uniform handling of data throughout the toolkit and provides a few useful helper functions such as dataset.Dataset.spatial_boundaries() and dataset.Dataset.time_range(). Creating a new dataset object is straightforward, but generally you will want to use an OCW data source to load the data for you.


Data Sources¶

+

OCW data sources allow users to easily load dataset.Dataset objects from a number of places. These data sources help with step 1 of an evaluation above. In general, the primary supported file format is NetCDF. For instance, the local, dap and esgf data sources load NetCDF files from your local machine, an OpenDAP URL, and the ESGF respectively. Some data sources, such as rcmed, point to externally supported databases; the RCMED data source retrieves data from the Regional Climate Model Evaluation Database, which is run by NASA's Jet Propulsion Laboratory.

+

Adding additional data sources is quite simple. The only API requirement we place on a data source is that it return a valid dataset.Dataset object. Please feel free to send patches adding more data sources.
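To illustrate, a data source is essentially just a function that builds and returns a dataset. The sketch below uses a stand-in Dataset class and an invented load_constant_dataset function (not the real ocw.dataset.Dataset or any real OCW data source) so it runs without OCW installed:

```python
import numpy as np

class Dataset(object):
    '''Stand-in for ocw.dataset.Dataset; a real data source would
    return an actual ocw.dataset.Dataset instance instead.'''
    def __init__(self, lats, lons, times, values, variable=''):
        self.lats = lats
        self.lons = lons
        self.times = times
        self.values = values
        self.variable = variable

def load_constant_dataset(value, variable='tas'):
    '''A toy "data source": build a 1x2x2 dataset filled with `value`.'''
    lats = np.array([0.0, 1.0])
    lons = np.array([0.0, 1.0])
    times = np.array(['2016-01-01'], dtype='datetime64[D]')
    values = np.full((1, 2, 2), value)
    return Dataset(lats, lons, times, values, variable)

ds = load_constant_dataset(42.0)
print(ds.values.shape)  # (1, 2, 2)
```

A real data source would do its loading (from disk, a URL, a database) in the body of the function; the contract is only the returned object.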

+

A simple example using the local data source to load a NetCDF file from your local machine:

+
>>> import ocw.data_source.local as local
>>> ds = local.load_file('/tmp/some_dataset.nc', 'SomeVarInTheDataset')

Dataset Manipulations¶

+

All dataset.Dataset manipulations are handled by the dataset_processor module. In general, an evaluation will include calls to dataset_processor.subset(), dataset_processor.spatial_regrid(), and dataset_processor.temporal_rebin() to ensure that the datasets can actually be compared. dataset_processor functions take a dataset.Dataset object and various parameters and return a modified dataset.Dataset object. The original dataset is never manipulated in the process.

+

Subsetting is a great way to speed up your processing and keep unneeded data out of your plots. Notice that we're using a dataset.Bounds object to represent the area of interest:

+
>>> import ocw.dataset_processor as dsp
>>> from ocw.dataset import Bounds
>>> new_bounds = Bounds(min_lat, max_lat, min_lon, max_lon, start_time, end_time)
>>> knmi_dataset = dsp.subset(new_bounds, knmi_dataset)

Temporally re-binning a dataset is useful when the time step of the data is too fine-grained for the desired use. For instance, perhaps we want to see a yearly trend but we have daily data. We would need to make the following call to adjust our dataset:

+
>>> import datetime
>>> knmi_dataset = dsp.temporal_rebin(knmi_dataset, datetime.timedelta(days=365))
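Conceptually, temporal re-binning just groups values along the time axis and reduces each group. A minimal numpy sketch of the idea (this is not the OCW implementation; the function name is illustrative):

```python
import numpy as np

def temporal_rebin_mean(values, bin_size):
    '''Average a 1-D time series into consecutive bins of `bin_size` steps.

    Values beyond the last full bin are dropped.'''
    n_bins = len(values) // bin_size
    trimmed = values[:n_bins * bin_size]
    return trimmed.reshape(n_bins, bin_size).mean(axis=1)

daily = np.arange(10.0)               # ten "daily" values: 0..9
print(temporal_rebin_mean(daily, 5))  # [2. 7.]
```

OCW's dataset_processor.temporal_rebin additionally handles calendar-aware time axes and multi-dimensional values, but the reduction step is the same idea.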

It is critical that our datasets be on the same lat/lon grid before we try to compare them. That's where spatial re-gridding comes in handy. Here we re-grid our example dataset onto a 1-degree lat/lon grid within the range to which we previously subsetted the dataset:

+
>>> import numpy as np
>>> new_lons = np.arange(min_lon, max_lon, 1)
>>> new_lats = np.arange(min_lat, max_lat, 1)
>>> knmi_dataset = dsp.spatial_regrid(knmi_dataset, new_lats, new_lons)
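Under the hood, re-gridding interpolates the field onto the new grid. A toy nearest-neighbour version for a single 2-D (lat x lon) field shows the idea (illustrative only; OCW's spatial_regrid uses proper interpolation):

```python
import numpy as np

def nearest_regrid(values, lats, lons, new_lats, new_lons):
    '''Regrid a 2-D field (lat x lon) by picking the nearest source cell.'''
    # For each target coordinate, find the index of the closest source coordinate.
    lat_idx = np.abs(lats[:, None] - new_lats[None, :]).argmin(axis=0)
    lon_idx = np.abs(lons[:, None] - new_lons[None, :]).argmin(axis=0)
    return values[np.ix_(lat_idx, lon_idx)]

lats = np.array([0.0, 2.0, 4.0])
lons = np.array([0.0, 2.0])
field = np.arange(6.0).reshape(3, 2)   # [[0, 1], [2, 3], [4, 5]]
coarse = nearest_regrid(field, lats, lons, np.array([0.0, 4.0]), lons)
print(coarse)  # rows for lat 0 and lat 4: [[0, 1], [4, 5]]
```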

Metrics¶

+

Metrics are the backbone of an evaluation. You'll find a number of (hopefully) useful "default" metrics in the toolkit's metrics module. In general you are unlikely to use a metric outside of an evaluation; however, you can run a metric manually if you so desire:

+
>>> import ocw.metrics
>>> # Load 2 datasets
>>> bias = ocw.metrics.Bias()
>>> print(bias.run(dataset1, dataset2))

While this might be exactly what you need to get the job done, it is far more likely that you’ll need to run a number of metrics over a number of datasets. That’s where running an evaluation comes in, but we’ll get to that shortly.

+

There are two "types" of metrics that the toolkit supports. A unary metric acts on a single dataset and returns a result. A binary metric acts on a reference and a target dataset and returns a result. This is helpful to know if you decide that the included metrics aren't sufficient. We've attempted to make adding a new metric as simple as possible. You simply create a new class that inherits from either the unary or binary base class and override the run function. At that point your metric will behave exactly like the included metrics in the toolkit. Below is an example of how one of the included metrics is implemented. If you need further assistance with your own metrics, be sure to email the project's mailing list:

+
>>> class Bias(BinaryMetric):
>>>     '''Calculate the bias between a reference and target dataset.'''
>>>
>>>     def run(self, ref_dataset, target_dataset):
>>>         '''Calculate the bias between a reference and target dataset.
>>>
>>>         .. note::
>>>            Overrides BinaryMetric.run()
>>>
>>>         :param ref_dataset: The reference dataset to use in this metric run.
>>>         :type ref_dataset: ocw.dataset.Dataset object
>>>         :param target_dataset: The target dataset to evaluate against the
>>>             reference dataset in this metric run.
>>>         :type target_dataset: ocw.dataset.Dataset object
>>>
>>>         :returns: The difference between the reference and target datasets.
>>>         :rtype: Numpy Array
>>>         '''
>>>         return ref_dataset.values - target_dataset.values

While this might look a bit scary at first, if we take out all of the documentation you'll see that it's really quite simple:

+
>>> # Our new Bias metric inherits from the BinaryMetric base class
>>> class Bias(BinaryMetric):
>>>     # Since our new metric is a binary metric we need to override
>>>     # the run function in the BinaryMetric base class.
>>>     def run(self, ref_dataset, target_dataset):
>>>         # To implement the bias metric we simply return the difference
>>>         # between the reference and target dataset's values arrays.
>>>         return ref_dataset.values - target_dataset.values

It is very important to note that you should not change the datasets that are passed into the metric you're implementing. If you do, you may cause unexpected results later in the evaluation. If you need to manipulate the data, copy it first and operate on the copy. Leave the original dataset alone!
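The sketch below shows the pattern with a stand-in metric (numpy only; MeanAfterScaling and the tiny FakeDataset class are invented for illustration): copy the values array before transforming it, so the caller's dataset is untouched.

```python
import numpy as np

class FakeDataset(object):
    '''Minimal stand-in for ocw.dataset.Dataset.'''
    def __init__(self, values):
        self.values = values

class MeanAfterScaling(object):
    '''A toy unary metric: scale the values by 2, then take the mean.'''
    def run(self, target_dataset):
        scaled = np.copy(target_dataset.values)  # copy, don't mutate!
        scaled *= 2.0
        return scaled.mean()

ds = FakeDataset(np.array([1.0, 2.0, 3.0]))
result = MeanAfterScaling().run(ds)
print(result)     # 4.0
print(ds.values)  # the original is untouched: [1. 2. 3.]
```

Had the metric scaled target_dataset.values in place, every later metric in the evaluation would silently see doubled data.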


Handling an Evaluation¶

+

We saw above that it is easy enough to run a metric over a few datasets manually. However, when we have a lot of datasets and/or a lot of metrics to run that can become tedious and error prone. This is where the evaluation.Evaluation class comes in handy. It ensures that all the metrics that you choose are run over all combinations of the datasets that you input. Consider the following simple example:

+
>>> import ocw.evaluation as eval
>>> import ocw.data_source.local as local
>>> import ocw.metrics as metrics
>>>
>>> # Load a few datasets
>>> ref_dataset = local.load_file(...)
>>> target1 = local.load_file(...)
>>> target2 = local.load_file(...)
>>> target_datasets = [target1, target2]
>>>
>>> # Do some dataset manipulations here such as subsetting and regridding
>>>
>>> # Load a few metrics
>>> bias = metrics.Bias()
>>> tstd = metrics.TemporalStdDev()
>>> metrics = [bias, tstd]
>>>
>>> new_eval = eval.Evaluation(ref_dataset, target_datasets, metrics)
>>> new_eval.run()
>>> print(new_eval.results)
>>> print(new_eval.unary_results)

First we load all of our datasets and do any manipulations (which we leave out for brevity). Then we load the metrics that we want to run, namely Bias and TemporalStdDev. Finally, we create our evaluation object:

+
>>> new_eval = eval.Evaluation(ref_dataset, target_datasets, metrics)

Notice two things about this. First, we're splitting the datasets into a reference dataset (ref_dataset) and a list of target datasets (target_datasets). Second, one of the metrics that we loaded (metrics.TemporalStdDev) is a unary metric. The reference/target dataset split is necessary for handling binary metrics. When an evaluation is run, every binary metric is run against every (reference, target) dataset pair. So the above evaluation could be replaced with the following calls. Of course this wouldn't handle the unary metric, but we'll get to that in a moment:

+
>>> result1 = bias.run(ref_dataset, target1)
>>> result2 = bias.run(ref_dataset, target2)

Unary metrics are handled slightly differently, but they're still simple. Each unary metric passed into the evaluation is run against every dataset in the evaluation. So we could replace the above evaluation with the following calls:

+
>>> unary_result1 = tstd.run(ref_dataset)
>>> unary_result2 = tstd.run(target1)
>>> unary_result3 = tstd.run(target2)

The only other thing we need to explore to fully understand the evaluation.Evaluation class is how the results are stored internally after a run. The results list is a multidimensional array holding all the binary metric results, and unary_results is a list holding all the unary metric results. To more accurately replace the above evaluation with manual calls, we would write the following:

+
>>> results = [
>>>     # Results for target1
>>>     [
>>>         bias.run(ref_dataset, target1)
>>>         # If there were other binary metrics, the results would be here.
>>>     ],
>>>     # Results for target2
>>>     [
>>>         bias.run(ref_dataset, target2)
>>>         # If there were other binary metrics, the results would be here.
>>>     ]
>>> ]
>>>
>>> unary_results = [
>>>     # Results for TemporalStdDev
>>>     [
>>>         tstd.run(ref_dataset),
>>>         tstd.run(target1),
>>>         tstd.run(target2)
>>>     ]
>>>     # If there were other unary metrics, their results would be in a list here.
>>> ]
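To make the shape of those result structures concrete, here is a runnable toy version with stand-in metrics (plain Python; the bias and spread functions and the list-based "datasets" are invented for illustration):

```python
# Stand-in datasets: a "dataset" is just a list of numbers.
ref = [1.0, 2.0, 3.0]
targets = [[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]]

def bias(ref_ds, target_ds):
    '''Toy binary metric: element-wise difference.'''
    return [r - t for r, t in zip(ref_ds, target_ds)]

def spread(ds):
    '''Toy unary metric: max minus min.'''
    return max(ds) - min(ds)

# results[target_index][metric_index] -> binary metric result
results = [[bias(ref, target)] for target in targets]
# unary_results[metric_index][dataset_index] -> unary metric result
unary_results = [[spread(ds) for ds in [ref] + targets]]

print(results[1][0])        # bias against the second target: [-2.0, -1.0, 0.0]
print(unary_results[0][0])  # spread of the reference dataset: 2.0
```

The indexing scheme is the part to take away: binary results are grouped per target dataset, while unary results are grouped per metric.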

Plotting¶

+

Plotting can be a fairly complicated business. Luckily, we have pretty good documentation on the project wiki that can help you out. There are also fairly simple examples in the project's examples folder alongside the rest of the code, such as the following:

+
>>> import ocw.plotter as plotter
>>>
>>> # Let's grab the values returned for bias.run(ref_dataset, target1)
>>> results = bias_evaluation.results[0][0]
>>>
>>> # Here's the same lat/lons we used earlier when we were re-gridding
>>> lats = new_lats
>>> lons = new_lons
>>> fname = 'My_Test_Plot'
>>>
>>> plotter.draw_contour_map(results, lats, lons, fname)

This would give you a contour map called My_Test_Plot for the requested bias metric run.

Added: websites/staging/climate/trunk/content/api/1.0.0/ocw/plotter.html

Plotter Module¶

class plotter.TaylorDiagram(refstd, radmax=1.5, fig=None, rect=111, label='_')¶

Taylor diagram helper class

Plot model standard deviation and correlation to reference (data) sample in a single-quadrant polar plot, with r=stddev and theta=arccos(correlation).

This class was released as public domain by the original author, Yannick Copin. You can find the original Gist where it was released at: https://gist.github.com/ycopin/3342888

Set up Taylor diagram axes, i.e. a single-quadrant polar plot, using mpl_toolkits.axisartist.floating_axes. refstd is the reference standard deviation to be compared to.
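The polar mapping described above (r = stddev, theta = arccos(correlation)) is easy to compute directly. A small numpy sketch, independent of matplotlib (taylor_coords is an illustrative helper, not part of the plotter module):

```python
import numpy as np

def taylor_coords(stddev, corrcoef):
    '''Map (stddev, correlation) to polar (r, theta) for a Taylor diagram.'''
    return stddev, np.arccos(corrcoef)

# A model that matches the reference perfectly (corr=1) sits on the x-axis.
r, theta = taylor_coords(1.0, 1.0)
print(r, theta)  # r=1.0, theta=0.0

# Zero correlation puts the point on the vertical axis (theta = pi/2).
r, theta = taylor_coords(0.8, 0.0)
print(np.isclose(theta, np.pi / 2))  # True
```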

add_contours(std1, corr1, std2, corr2, **kwargs)¶

Add a line between two points [std1, corr1] and [std2, corr2].

add_rms_contours(levels=5, **kwargs)¶

Add constant centered RMS difference contours.

add_sample(stddev, corrcoef, *args, **kwargs)¶

Add a sample (stddev, corrcoef) to the Taylor diagram. args and kwargs are directly propagated to the Figure.plot command.

add_stddev_contours(std, corr1, corr2, **kwargs)¶

Add a curved line with a radius of std between two points [std, corr1] and [std, corr2].
plotter.draw_barchart(results, yvalues, fname, ptitle='', fmt='png', xlabel='', ylabel='')¶

Draw a barchart.

Parameters:
  • results (numpy.ndarray) – 1D array of data.
  • yvalues – List of y-axis labels.
  • fname (string) – Filename of the plot.
  • ptitle (string) – (Optional) plot title.
  • fmt (string) – (Optional) filetype for the output.
  • xlabel (string) – (Optional) x-axis title.
  • ylabel (string) – (Optional) y-axis title.
plotter.draw_contour_map(dataset, lats, lons, fname, fmt='png', gridshape=(1, 1), clabel='', ptitle='', subtitles=None, cmap=None, clevs=None, nlevs=10, parallels=None, meridians=None, extend='neither', aspect=3.4)¶

Draw a multiple-panel contour map plot.

Parameters:
  • dataset (numpy.ndarray) – 3D array of data to be plotted with shape (nT, nLat, nLon).
  • lats (numpy.ndarray) – Array of latitude values.
  • lons (numpy.ndarray) – Array of longitude values.
  • fname (string) – The filename of the plot.
  • fmt (string) – (Optional) filetype for the output.
  • gridshape (tuple of the form (num_rows, num_cols)) – (Optional) tuple denoting the desired grid shape (num_rows, num_cols) for arranging the subplots.
  • clabel (string) – (Optional) colorbar title.
  • ptitle (string) – (Optional) plot title.
  • subtitles (list of string) – (Optional) list of titles for each subplot.
  • cmap (string or matplotlib.colors.LinearSegmentedColormap) – (Optional) string or matplotlib.colors.LinearSegmentedColormap instance denoting the colormap. This must be recognizable by Matplotlib's get_cmap function.
  • clevs (list of int or float) – (Optional) contour level values.
  • nlevs (int) – (Optional) target number of contour levels if clevs is None.
  • parallels (list of int or float) – (Optional) list of ints or floats for the parallels to be drawn. See the Basemap documentation for additional information.
  • meridians (list of int or float) – (Optional) list of ints or floats for the meridians to be drawn. See the Basemap documentation for additional information.
  • extend (string) – (Optional) flag to toggle whether to place arrows at the colorbar boundaries. Default is 'neither', but can also be 'min', 'max', or 'both'. Will be automatically set to 'both' if clevs is None.
plotter.draw_histogram(dataset_array, data_names, fname, fmt='png', nbins=10)¶

Purpose:
  Draw histograms.

Input:
  dataset_array – a list of data values [data1, data2, ...]
  data_names – a list of data names ['name1', 'name2', ...]
  fname – a string specifying the filename of the plot
  nbins – number of bins
plotter.draw_marker_on_map(lat, lon, fname, fmt='png', location_name=' ', gridshape=(1, 1))¶

Purpose:
  Draw a marker on a map.

Input:
  lat – latitude for plotting a marker
  lon – longitude for plotting a marker
  fname – a string specifying the filename of the plot
plotter.draw_portrait_diagram(results, rowlabels, collabels, fname, fmt='png', gridshape=(1, 1), xlabel='', ylabel='', clabel='', ptitle='', subtitles=None, cmap=None, clevs=None, nlevs=10, extend='neither', aspect=None)¶

Draw a portrait diagram plot.

Parameters:
  • results (numpy.ndarray) – 3D array of the fields to be plotted. The second dimension should correspond to the number of rows in the diagram and the third should correspond to the number of columns.
  • rowlabels (list of string) – Labels for each row.
  • collabels (list of string) – Labels for each column.
  • fname (string) – Filename of the plot.
  • fmt (string) – (Optional) filetype for the output.
  • gridshape (tuple of the form (num_rows, num_cols)) – (Optional) tuple denoting the desired grid shape (num_rows, num_cols) for arranging the subplots.
  • xlabel (string) – (Optional) x-axis title.
  • ylabel (string) – (Optional) y-axis title.
  • clabel (string) – (Optional) colorbar title.
  • ptitle (string) – (Optional) plot title.
  • subtitles (list of string) – (Optional) list of titles for each subplot.
  • cmap (string or matplotlib.colors.LinearSegmentedColormap) – (Optional) string or matplotlib.colors.LinearSegmentedColormap instance denoting the colormap. This must be recognizable by Matplotlib's get_cmap function.
  • clevs (list of int or float) – (Optional) contour level values.
  • nlevs (int) – (Optional) target number of contour levels if clevs is None.
  • extend (string) – (Optional) flag to toggle whether to place arrows at the colorbar boundaries. Default is 'neither', but can also be 'min', 'max', or 'both'. Will be automatically set to 'both' if clevs is None.
  • aspect (float) – (Optional) approximate aspect ratio of each subplot (width / height). Default is 8.5 / 5.5.
plotter.draw_subregions(subregions, lats, lons, fname, fmt='png', ptitle='', parallels=None, meridians=None, subregion_masks=None)¶

Draw subregion domain(s) on a map.

Parameters:
  • subregions (list of subregion (Bounds) objects) – The subregion objects to plot on the map.
  • lats (numpy.ndarray) – Array of latitude values.
  • lons (numpy.ndarray) – Array of longitude values.
  • fname (string) – The filename of the plot.
  • fmt (string) – (Optional) filetype for the output.
  • ptitle (string) – (Optional) plot title.
  • parallels (list of int or float) – (Optional) list of ints or floats for the parallels to be drawn. See the Basemap documentation for additional information.
  • meridians (list of int or float) – (Optional) list of ints or floats for the meridians to be drawn. See the Basemap documentation for additional information.
  • subregion_masks (dict of bool arrays) – (Optional) dict of bool arrays for each subregion, giving finer control of the domain to be drawn. By default the entire domain is drawn.
plotter.draw_taylor_diagram(results, names, refname, fname, fmt='png', gridshape=(1, 1), ptitle='', subtitles=None, pos='upper right', frameon=True, radmax=1.5)¶

Draw a Taylor diagram.

Parameters:
  • results (numpy.ndarray) – An Nx2 array containing the normalized standard deviations and correlation coefficients of the evaluation results.
  • names (list of string) – A list of names for each evaluated dataset.
  • refname (string) – The name of the reference dataset.
  • fname (string) – The filename of the plot.
  • fmt (string) – (Optional) filetype for the output plot.
  • gridshape (tuple of the form (num_rows, num_cols)) – (Optional) tuple denoting the desired grid shape (num_rows, num_cols) for arranging the subplots.
  • ptitle (string) – (Optional) plot title.
  • subtitles (list of string) – (Optional) list of strings specifying the title for each subplot.
  • pos (string or tuple of float) – (Optional) string or tuple of floats used to set the position of the legend. Check the Matplotlib docs for additional information.
  • frameon (bool) – (Optional) boolean specifying whether to draw a frame around the legend box.
  • radmax (float) – (Optional) float to adjust the extent of the axes in terms of standard deviation.
plotter.draw_time_series(results, times, labels, fname, fmt='png', gridshape=(1, 1), xlabel='', ylabel='', ptitle='', subtitles=None, label_month=False, yscale='linear', aspect=None)¶

Draw a time series plot.

Parameters:
  • results (numpy.ndarray) – 3D array of time series data.
  • times (list of datetime.datetime) – List of Python datetime objects used by Matplotlib to handle axis formatting.
  • labels (list of string) – List of names for each data series being plotted.
  • fname (string) – Filename of the plot.
  • fmt (string) – (Optional) filetype for the output.
  • gridshape (tuple of the form (num_rows, num_cols)) – (Optional) tuple denoting the desired grid shape (num_rows, num_cols) for arranging the subplots.
  • xlabel (string) – (Optional) x-axis title.
  • ylabel (string) – (Optional) y-axis title.
  • ptitle (string) – (Optional) plot title.
  • subtitles (list of string) – (Optional) list of titles for each subplot.
  • label_month (bool) – (Optional) flag to toggle drawing month labels on the x-axis.
  • yscale (string) – (Optional) y-axis scale value, 'linear' for linear and 'log' for log base 10.
  • aspect (float) – (Optional) approximate aspect ratio of each subplot (width / height). Default is 8.5 / 5.5.
plotter.set_cmap(name)¶

Sets the default colormap (e.g. when cmap=None is passed to a plotting function). See http://matplotlib.org/examples/pylab_examples/show_colormaps.html for a list of possible colormaps. Appending '_r' to a Matplotlib colormap name will give you a reversed version of it.

Parameters:
  • name (string) – The name of the colormap.