Modified: websites/staging/climate/trunk/content/api/current/ocw/dataset_loader.html
==============================================================================
--- websites/staging/climate/trunk/content/api/current/ocw/dataset_loader.html (original)
+++ websites/staging/climate/trunk/content/api/current/ocw/dataset_loader.html Wed May 2 18:42:25 2018
-Dataset Loader Module — Apache Open Climate Workbench 1.2.0 documentation
+Dataset Loader Module — Apache Open Climate Workbench 1.3.0 documentation
@@ -55,79 +52,76 @@

Generate a list of OCW Dataset objects from a variety of sources.


Each keyword argument can be information for a dataset in dictionary
form. For example:

>>> loader_opt1 = {'loader_name': 'rcmed', 'name': 'cru',
...                'dataset_id': 10, 'parameter_id': 34}
>>> loader_opt2 = {'path': './data/TRMM_v7_3B43_1980-2010.nc',
...                'variable': 'pcp'}
>>> loader = DatasetLoader(loader_opt1, loader_opt2)
 

Or more conveniently if the loader configuration is defined in a
yaml file named config_file (see RCMES examples):

>>> import yaml
>>> config = yaml.load(open(config_file))
>>> obs_loader_config = config['datasets']['reference']
>>> loader = DatasetLoader(*obs_loader_config)

As shown in the first example, the dictionary for each argument should
contain a loader name and parameters specific to the particular loader.
Once the configuration is entered, the datasets may be loaded using:

>>> loader.load_datasets()
>>> obs_datasets = loader.datasets

Additionally, each dataset must have a loader_name keyword. This may
be one of the following (a short 'local' example follows the list):

* 'local' - One or multiple dataset files in a local directory
* 'local_split' - A single dataset split across multiple files in a
  local directory
* 'esgf' - Download the dataset from the Earth System Grid Federation
* 'rcmed' - Download the dataset from the Regional Climate Model
  Evaluation System Database
* 'dap' - Download the dataset from an OPeNDAP URL
* 'podaac' - Download the dataset from the Physical Oceanography
  Distributed Active Archive Center
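
For instance, a minimal configuration for the default 'local' loader might
look like the following sketch; the path and variable name are placeholders,
and add_loader_opts is the method documented below for appending
configurations to an existing loader:

>>> local_opts = {'loader_name': 'local',
...               'path': './data/model_output.nc',   # hypothetical file
...               'variable': 'tas'}                  # hypothetical variable name
>>> loader.add_loader_opts(local_opts)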

Users who wish to load datasets from loaders not described above may
define their own custom dataset loader function and incorporate it as
follows:

>>> loader.add_source_loader('my_loader_name', my_loader_func)
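
The my_loader_func referenced above might be sketched as follows. This is
only a rough sketch under the assumption that the ocw.dataset.Dataset
constructor accepts lats, lons, times, values plus variable and origin
keywords; the file handling is omitted and all names are hypothetical:

>>> import datetime as dt
>>> import numpy as np
>>> from ocw.dataset import Dataset
>>> def my_loader_func(path, variable, **kwargs):
...     # Placeholder grid and values; a real loader would read them from `path`.
...     lats = np.arange(-90., 90., 1.)
...     lons = np.arange(-180., 180., 1.)
...     times = np.array([dt.datetime(2000, 1, 1)])
...     values = np.zeros((len(times), len(lats), len(lons)))
...     return Dataset(lats, lons, times, values, variable=variable,
...                    origin={'source': 'my_loader_name', 'path': path})
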
Parameters: loader_opts (dict) – Dictionaries containing each dataset loader
            configuration, representing the keyword arguments of
            the loader function specified by an additional key
            called ‘loader_name’. If not specified by the user,
            this defaults to local.
Raises:     KeyError – If an invalid argument is passed to a data source
            loader function.

    add_loader_opts(*loader_opts)¶
    @@ -137,10 +131,10 @@ loader.

Parameters: loader_opts (dict) – Dictionaries containing each dataset loader
            configuration, representing the keyword arguments of
            the loader function specified by an additional key
            called ‘loader_name’. If not specified by the user,
            this defaults to local.

@@ -156,16 +150,15 @@ this defaults to local.

Parameters:
* loader_name (string) – The name of the data source.
* loader_func (callable) – Reference to a custom defined function. This should
  return an OCW Dataset object, and have an origin which satisfies
  origin['source'] == loader_name.

@@ -182,10 +175,10 @@ origin['source'] == loader_n

Parameters: loader_opts (dict) – Dictionaries containing each dataset loader
            configuration, representing the keyword arguments of
            the loader function specified by an additional key
            called ‘loader_name’. If not specified by the user,
            this defaults to local.

@@ -216,7 +209,7 @@ this defaults to local.


@@ -235,14 +228,14 @@ this defaults to local.
Modified: websites/staging/climate/trunk/content/api/current/ocw/dataset_processor.html
==============================================================================
--- websites/staging/climate/trunk/content/api/current/ocw/dataset_processor.html (original)
+++ websites/staging/climate/trunk/content/api/current/ocw/dataset_processor.html Wed May 2 18:42:25 2018
-Dataset Processor Module — Apache Open Climate Workbench 1.2.0 documentation
+Dataset Processor Module — Apache Open Climate Workbench 1.3.0 documentation
@@ -54,7 +51,7 @@ the input dataset

Parameters: dataset (dataset.Dataset) – The dataset to convert.
Returns:    A Dataset with values converted to new units.

@@ -74,7 +71,7 @@ similar shape, dimensions, and units.

Parameters:  datasets – Datasets to be used to compose the ensemble dataset from.
             All Datasets must be the same shape.
Return type: dataset.Dataset

@@ -103,9 +100,9 @@ Force monthly data to the first of the m

Parameters:

@@ -131,9 +128,9 @@ overlap of the subregion and dataset wit

Parameters:

@@ -156,11 +153,11 @@ overlap of the subregion and dataset wit

Parameters:

@@ -183,11 +180,11 @@ are outside target_dataset’s domai

Parameters:

@@ -212,7 +209,7 @@ are outside target_dataset’s domai

Parameters:  dataset (dataset.Dataset) – The dataset for which units should be updated.
Returns:     The dataset with (potentially) updated units.
Return type: dataset.Dataset

@@ -229,8 +226,8 @@ are outside target_dataset’s domai

Parameters:

@@ -253,8 +250,8 @@ are outside target_dataset’s domai

Parameters:
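
As a rough illustration of how these processing helpers are usually chained
before an evaluation, here is a sketch. The function names (temporal_rebin,
variable_unit_conversion, ensemble) and the resolution string are assumptions
based on the parameter tables above rather than verbatim from this page, and
the dataset variables are placeholders:

>>> import ocw.dataset_processor as dsp
>>> # ref_dataset and model_datasets are assumed to come from a DatasetLoader.
>>> datasets = [ref_dataset] + model_datasets
>>> datasets = [dsp.temporal_rebin(d, 'monthly') for d in datasets]   # monthly means
>>> datasets = [dsp.variable_unit_conversion(d) for d in datasets]    # normalize units
>>> ens = dsp.ensemble(datasets[1:])   # multi-model ensemble; all must share a shape
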
Modified: websites/staging/climate/trunk/content/api/current/ocw/evaluation.html
==============================================================================
--- websites/staging/climate/trunk/content/api/current/ocw/evaluation.html (original)
+++ websites/staging/climate/trunk/content/api/current/ocw/evaluation.html Wed May 2 18:42:25 2018
-Evaluation Module — Apache Open Climate Workbench 1.2.0 documentation
+Evaluation Module — Apache Open Climate Workbench 1.3.0 documentation
@@ -70,12 +67,12 @@ Evaluation.

Parameters:

@@ -95,9 +92,9 @@ Evaluation is run with one or more metri

Parameters: target_dataset (dataset.Dataset) – The target Dataset to add to the Evaluation.
Raises:     ValueError – If a dataset to add isn’t an instance of Dataset.

@@ -111,10 +108,10 @@ Evaluation is run with one or more metri

Parameters: target_datasets (list of dataset.Dataset) – The list of datasets that should be added to
            the Evaluation.
Raises:     ValueError – If a dataset to add isn’t an instance of Dataset.

@@ -129,9 +126,9 @@ the Evaluation.

Parameters: metric (metrics) – The metric instance to add to the Evaluation.
Raises:     ValueError – If the metric to add isn’t a class that inherits
            from metrics.Metric.

@@ -147,9 +144,9 @@ from metrics.Metric.

Parameters: metrics (list of metrics) – The list of metric instances to add to the Evaluation.
Raises:     ValueError – If a metric to add isn’t a class that inherits
            from metrics.Metric.

@@ -159,7 +156,7 @@ from metrics.Metric.
metrics = None¶
The list of “binary” metrics (a metric which takes two Datasets) that
the Evaluation should use.

@@ -168,7 +165,7 @@ that the Evaluation should use.

results = None¶

A list containing the results of running regular metric evaluations.
The shape of results is (num_target_datasets, num_metrics) if
the user doesn’t specify subregion information. Otherwise the shape
is (num_target_datasets, num_metrics, num_subregions).

@@ -177,13 +174,13 @@ is

run()¶

Run the evaluation.

There are two phases to a run of the Evaluation. First, if there are
any “binary” metrics they are run through the evaluation. Binary
metrics are only run if there is a reference dataset and at least
one target dataset.

If there is subregion information provided then each dataset is subset before being run through the binary metrics.

.. note:: Only the binary metrics are subset with subregion information.

Next, if there are any “unary” metrics they are run. Unary metrics are
only run if there is at least one target dataset or a reference dataset.
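
Putting the pieces together, a compact sketch of a complete run, assuming the
Evaluation constructor takes a reference dataset, a list of target datasets,
and a list of metrics (as in the RCMES examples); the dataset variables are
placeholders created earlier, e.g. by a DatasetLoader:

>>> from ocw.evaluation import Evaluation
>>> import ocw.metrics as metrics
>>> bias = metrics.Bias()                                 # a "binary" metric
>>> evaluation = Evaluation(ref_dataset, [model_dataset], [bias])
>>> evaluation.run()
>>> evaluation.results   # shape: (num_target_datasets, num_metrics)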

@@ -197,7 +194,7 @@ the reference dataset when the evaluatio
unary_metrics = None¶
The list of “unary” metrics (a metric which takes one Dataset) that
the Evaluation should use.

@@ -234,7 +231,7 @@ evaluations. The shape of unary_results


@@ -253,14 +250,14 @@ evaluations. The shape of unary_results
Modified: websites/staging/climate/trunk/content/api/current/ocw/metrics.html
==============================================================================
--- websites/staging/climate/trunk/content/api/current/ocw/metrics.html (original)
+++ websites/staging/climate/trunk/content/api/current/ocw/metrics.html Wed May 2 18:42:25 2018
-Metrics Module — Apache Open Climate Workbench 1.2.0 documentation
+Metrics Module — Apache Open Climate Workbench 1.3.0 documentation
@@ -66,8 +63,8 @@ Parameters:
@@ -75,7 +72,7 @@ reference dataset in this metric run.

Returns:

The difference between the reference and target datasets.

Return type: numpy.ndarray
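
For example, a binary metric such as Bias can also be run directly on two
Dataset objects outside of an Evaluation; the dataset variables below are
placeholders:

>>> import ocw.metrics as metrics
>>> bias_metric = metrics.Bias()
>>> diff = bias_metric.run(ref_dataset, target_dataset)   # numpy.ndarray of differences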

@@ -97,9 +94,9 @@ reference dataset in this metric run.

Parameters:

@@ -137,8 +134,8 @@ target dataset.

Parameters:

@@ -173,9 +170,9 @@ calculated over time and space.

Parameters:

@@ -206,8 +203,8 @@ reference dataset in this metric run

Parameters:

@@ -215,7 +212,7 @@ reference dataset in this metric run.

Returns:

standard deviation ratio, pattern correlation coefficient

Return type: float, float

@@ -241,8 +238,8 @@ reference dataset in this metric run.

Parameters:

@@ -260,13 +257,13 @@ reference dataset in this metric run.

class metrics.TemporalCorrelation¶

Calculate the temporal correlation coefficients and associated
confidence levels between two datasets, using Pearson’s correlation.

run(reference_dataset, target_dataset)¶
Calculate the temporal correlation coefficients and associated
confidence levels between two datasets, using Pearson’s correlation.

Note

@@ -277,9 +274,9 @@ confidence levels between two datasets,

Parameters:
* reference_dataset (dataset.Dataset) – The reference dataset to use in this metric run
* target_dataset (dataset.Dataset) – The target dataset to evaluate against the
  reference dataset in this metric run
@@ -312,8 +309,8 @@ coefficients

Parameters:
* ref_dataset (dataset.Dataset) – The reference dataset to use in this metric run.
* target_dataset (dataset.Dataset) – The target dataset to evaluate against the
  reference dataset in this metric run.
@@ -343,7 +340,7 @@ reference dataset in this metric run.

Parameters: target_dataset (dataset.Dataset) – The target_dataset on which to calculate the
            temporal standard deviation.
Returns:    The temporal standard deviation of the target dataset

@@ -368,7 +365,7 @@ temporal standard deviation.

Parameters: target_dataset (dataset.Dataset) – The dataset on which the current metric will
            be run.
Returns:    The result of evaluating the metric on the target_dataset.

@@ -388,16 +385,16 @@ be run.

Parameters:
* target_array (numpy.ma.core.MaskedArray) – an array to be evaluated, as model output
* reference_array (numpy.ma.core.MaskedArray) – an array of reference dataset
* average_over_time (bool) – if True, calculated bias is averaged for the axis=0
Returns:

Biases array of the target dataset

Return type: numpy.ma.core.MaskedArray

@@ -413,15 +410,15 @@ be run.

Parameters:
* target_array (numpy.ma.core.MaskedArray) – an array to be evaluated, as model output
* reference_array (numpy.ma.core.MaskedArray) – an array of reference dataset
Returns:     Pearson’s correlation coefficient between the two input arrays
Return type: numpy.ma.core.MaskedArray

@@ -433,9 +430,9 @@ be run.

metrics.calc_histogram_overlap(hist1, hist2)¶

From Lee et al. (2014).

Parameters:
* hist1 (numpy.ndarray) – a histogram array
* hist2 (numpy.ndarray) – a histogram array with the same size as hist1

@@ -443,13 +440,13 @@ be run.

metrics.calc_joint_histogram(data_array1, data_array2, bins_for_data1, bins_for_data2)¶

Calculate a joint histogram of two variables in data_array1 and data_array2.

Parameters:
* data_array1 (numpy.ma.core.MaskedArray) – the first variable
* data_array2 (numpy.ma.core.MaskedArray) – the second variable
* bins_for_data1 (numpy.ndarray) – histogram bin edges for data_array1
* bins_for_data2 (numpy.ndarray) – histogram bin edges for data_array2
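
A minimal synthetic-data sketch of the two histogram helpers documented above
(the bin edges and random values are illustrative only):

>>> import numpy as np
>>> import numpy.ma as ma
>>> import ocw.metrics as metrics
>>> x = ma.masked_invalid(np.random.rand(1000) * 10.)   # e.g. modeled rain rates
>>> y = ma.masked_invalid(np.random.rand(1000) * 10.)   # e.g. observed rain rates
>>> bins = np.linspace(0., 10., 11)                     # shared bin edges
>>> joint = metrics.calc_joint_histogram(x, y, bins, bins)
>>> h1, _ = np.histogram(x.compressed(), bins)
>>> h2, _ = np.histogram(y.compressed(), bins)
>>> overlap = metrics.calc_histogram_overlap(h1, h2)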

@@ -461,16 +458,16 @@ be run.

Parameters:
* target_array (numpy.ma.core.MaskedArray) – an array to be evaluated, as model output
* reference_array (numpy.ma.core.MaskedArray) – an array of reference dataset
* average_over_time (bool) – if True, calculated bias is averaged for the axis=0
Returns:

root mean square error

Return type: float
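
The array-level helpers can be called on plain masked arrays as well; the
function names calc_bias and calc_rmse below are assumed from the parameter
tables rather than quoted from this page, and the arrays are synthetic:

>>> import numpy as np
>>> import numpy.ma as ma
>>> import ocw.metrics as metrics
>>> target = ma.masked_invalid(np.random.rand(12, 10, 10))      # (time, lat, lon) model output
>>> reference = ma.masked_invalid(np.random.rand(12, 10, 10))   # reference data
>>> bias = metrics.calc_bias(target, reference, average_over_time=True)
>>> rmse = metrics.calc_rmse(target, reference)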

@@ -486,15 +483,15 @@ be run.

Parameters:
* array (numpy.ma.core.MaskedArray) – an array to calculate sample standard deviation
* axis (int) – Axis along which the sample standard deviation is computed.
Returns:

sample standard deviation of array

Return type: numpy.ma.core.MaskedArray

@@ -510,16 +507,16 @@ be run.

Parameters:
* target_array (numpy.ma.core.MaskedArray) – an array to be evaluated, as model output
* reference_array (numpy.ma.core.MaskedArray) – an array of reference dataset
* average_over_time (bool) – if True, calculated bias is averaged for the axis=0
Returns:

(standard deviation of target_array)/(standard deviation of reference array)

Return type: float

@@ -535,10 +532,10 @@ be run.

Parameters:
* reference_array (numpy.ma.core.MaskedArray) – an array to be analyzed
* threshold (float) – the minimum amount of rainfall [mm/hour]
* nyear (int) – the number of discontinuous periods
* dt (float) – the temporal resolution of reference_array
@@ -568,7 +565,7 @@ be run.


@@ -587,14 +584,14 @@ be run.