Documentation of the Evaluation of CALPUFF and Other Long ...


EVALUATION OF THE MM5 AND CALMET METEOROLOGICAL MODELS USING THE CAPTEX CTEX5 FIELD EXPERIMENT DATA

A statistical evaluation of the prognostic (MM5) and diagnostic (CALMET) meteorological model applications for the CTEX5 CAPTEX release was conducted using surface meteorological measurements. For the MM5 datasets, performance was examined for wind (speed and direction), temperature, and humidity (mixing ratio). For the CALMET experiments, only the CALMET-estimated winds (speed and direction) were examined, because the two-dimensional temperature and relative humidity fields output by CALMET are simple interpolations of the observations. The evaluation of CALMET was therefore restricted to winds, where most of the change can be induced both by diagnostic terrain adjustments and by varying the objective analysis (OA) strategy. Note that except for the NOOBS = 2 CALMET sensitivity tests (i.e., the "D" series of CALMET sensitivity tests), surface meteorological observations are blended into the wind fields in the CALMET STEP2 objective analysis (OA) procedure. Thus, the evaluation of the CALMET wind fields is not a truly independent evaluation, because the surface meteorological observations used in the evaluation are also used as input to CALMET. We therefore expect the CALMET wind fields to compare better with the observations than MM5, but that does not mean that CALMET is producing better meteorological fields. As clearly shown by EPA (2009a,b), the CALMET diagnostic (STEP1) adjustments and the blending of observations in the STEP2 OA procedure can introduce discontinuities and artifacts into the wind fields generated by the MM5/WRF prognostic meteorological model used as input to CALMET. The fact that the CALMET winds may match the observed surface winds at the monitoring sites therefore does not necessarily mean that CALMET is performing better than MM5/WRF.
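The surface wind statistics described above can be sketched as follows. This is a minimal illustration of the kind of paired model-observation metrics such an evaluation uses, not the actual METSTAT or MMIFStat code; the function names and sample arrays are invented for the example. Note that wind direction errors must be wrapped so that, e.g., 350° vs. 10° counts as a 20° error, not 340°.

```python
import math

def wind_speed_stats(mod, obs):
    """Bias, RMSE, and mean gross error for paired wind speeds (m/s)."""
    n = len(obs)
    bias = sum(m - o for m, o in zip(mod, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(mod, obs)) / n)
    mge = sum(abs(m - o) for m, o in zip(mod, obs)) / n
    return bias, rmse, mge

def wind_dir_mnge(mod_deg, obs_deg):
    """Mean gross error for wind direction, wrapping differences to <= 180 deg."""
    errs = []
    for m, o in zip(mod_deg, obs_deg):
        d = abs(m - o) % 360.0
        errs.append(min(d, 360.0 - d))  # 350 vs 10 -> 20 deg, not 340
    return sum(errs) / len(errs)

# Illustrative paired values (hypothetical, not CAPTEX data):
bias, rmse, mge = wind_speed_stats([3.2, 4.1, 2.8], [2.9, 4.6, 3.0])
dir_err = wind_dir_mnge([350.0, 182.0], [10.0, 175.0])
```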

The METSTAT software (Emery et al., 2001) was used to match MM5 output with observation data, and the MMIFStat software (McNally, 2010) was used to match CALMET output with observation data. Emery and co-workers (2001) developed a set of "benchmarks" for comparing prognostic meteorological model performance statistics. These benchmarks were developed after examining the performance of the MM5 and RAMS prognostic meteorological models for over 30 applications. The purpose of the benchmarks is not to assign a passing or failing grade; rather, it is to put prognostic meteorological model performance in context. The surface meteorological model performance benchmarks from Emery et al. (2001) are displayed in Table A-1. Note that the wind speed RMSE benchmark was also used for wind speed MNGE, given the similarity of the RMSE and MNGE performance statistics. These benchmarks are not applicable to diagnostic model evaluations.

