climate-commits mailing list archives

From jo...@apache.org
Subject svn commit: r1506804 - /incubator/climate/branches/RefactorInput/ocw/evaluation.py
Date Thu, 25 Jul 2013 02:24:57 GMT
Author: joyce
Date: Thu Jul 25 02:24:57 2013
New Revision: 1506804

URL: http://svn.apache.org/r1506804
Log:
CLIMATE-214 - Update Evaluation run

- Flesh out the run method. Unary metrics are now handled in the run method
  separately from the regular ("binary") metrics, which are also run. The
  results for unary and regular metrics are kept in separate lists (see the
  sketch of the expected metric interfaces below).
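
For reference, a minimal sketch of the metric calling convention the reworked
run method assumes. The class and variable names below are illustrative
stand-ins, not OCW classes: regular metrics expose run(ref_dataset,
target_dataset), while unary metrics expose run(dataset).

# Minimal sketch of the metric interfaces assumed by Evaluation.run().
# These classes are hypothetical stand-ins for illustration only.

class HypotheticalBinaryMetric(object):
    '''Regular ("binary") metric: compares a reference and a target dataset.'''
    def run(self, ref_dataset, target_dataset):
        # A real metric would return e.g. a numpy array of results.
        return 'binary(%s, %s)' % (ref_dataset, target_dataset)

class HypotheticalUnaryMetric(object):
    '''Unary metric: evaluates a single dataset on its own.'''
    def run(self, dataset):
        return 'unary(%s)' % dataset

ref = 'ref_dataset'
targets = ['target_1', 'target_2']

# Regular metrics are run once per (reference, target) pair ...
print([HypotheticalBinaryMetric().run(ref, t) for t in targets])
# ... while unary metrics are run over the reference and every target.
print([HypotheticalUnaryMetric().run(d) for d in [ref] + targets])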

Modified:
    incubator/climate/branches/RefactorInput/ocw/evaluation.py

Modified: incubator/climate/branches/RefactorInput/ocw/evaluation.py
URL: http://svn.apache.org/viewvc/incubator/climate/branches/RefactorInput/ocw/evaluation.py?rev=1506804&r1=1506803&r2=1506804&view=diff
==============================================================================
--- incubator/climate/branches/RefactorInput/ocw/evaluation.py (original)
+++ incubator/climate/branches/RefactorInput/ocw/evaluation.py Thu Jul 25 02:24:57 2013
@@ -133,24 +133,39 @@ class Evaluation:
             logging.warning(error)
             return
 
-        # All pairs of (ref_dataset, target_dataset) must be run with
-        # each available metric.
-        for target in self.target_datasets:
-            for metric in self.metrics:
-                # Should we pass the calling of the metric off to another 
-                # function? This might make dataset access cleaner instead
-                # of doing it inline.
-                #result += _apply_metric(metric, self.ref_dataset, target)
-                #self.results += result
-
-                # Should a metric expect to take Dataset objects or should
-                # it expect to take data (aka numpy) arrays?
-                #self.results += metric(self.ref_dataset, target)
-
-                # If it expects the actual data
-                #self.results += metric(self.ref_dataset.value,
-                #                       target.value)
-                pass
+        # Results are saved as a list of lists of the form
+        # [
+        #   [ // The results for the first target dataset
+        #     The result of the first metric,
+        #     The result of the second metric,
+        #     The result of the third metric
+        #   ]
+        #   [ // The results for the second target dataset
+        #     The result of the first metric,
+        #     The result of the second metric,
+        #     The result of the third metric
+        #   ]
+        # ]
+        if shouldRunRegularMetrics():
+            self.results = []
+            for target in self.target_datasets:
+                self.results.append([])
+                for metric in self.metrics:
+                    run_result = metric.run(self.ref_dataset, target)
+                    self.results[-1].append(run_result)
+
+        if shouldRunUnaryMetrics():
+            self.unary_results = []
+
+            for metric in self.unary_metrics:
+                self.unary_results.append([])
+                # Unary metrics should be run over the reference Dataset also
+                if self.ref_dataset:
+                    self.unary_results[-1].append(metric.run(self.ref_dataset))
+
+                for target in self.target_datasets:
+                    self.unary_results[-1].append(metric.run(target))
+
 
     def _evaluation_is_valid(self):
         '''Check if the evaluation is well-formed.
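
A note on the result layout, as a sketch assuming the loops land as written
in the hunk above (target datasets as the outer loop): regular results are
indexed as results[target_index][metric_index], and unary results as
unary_results[metric_index][dataset_index], where dataset index 0 is the
reference dataset when the evaluation has one. The values below are
placeholders, not real metric output.

# Hypothetical layout for two target datasets, two regular metrics and
# one unary metric.
results = [
    ['metric_1(ref, target_1)', 'metric_2(ref, target_1)'],  # first target
    ['metric_1(ref, target_2)', 'metric_2(ref, target_2)'],  # second target
]

unary_results = [
    # One inner list per unary metric; the reference dataset (when present)
    # comes first, followed by each target dataset.
    ['unary_1(ref)', 'unary_1(target_1)', 'unary_1(target_2)'],
]

# Second regular metric applied to the first target dataset:
print(results[0][1])
# First unary metric applied to the second target dataset:
print(unary_results[0][2])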


