
Documentation of the Evaluation of CALPUFF and Other Long ...


The main conclusion of the SRL75 CALPUFF evaluation is that the fitted Gaussian plume evaluation approach can be a poor and misleading indicator of LRT dispersion model performance. In fact, the whole concept of a well-defined Gaussian plume at far downwind distances (e.g., > 50 km) is questionable, since wind variations and shear can destroy the Gaussian distribution. Thus, we recommend that future studies no longer use the fitted Gaussian plume evaluation methodology for evaluating LRT dispersion models and instead adopt alternate evaluation approaches that are free from a priori assumptions regarding the distribution of the observed tracer concentrations.
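The fitted-plume methodology criticized above can be sketched with a moments-based fit: the centerline and lateral spread (sigma-y) of an observed crosswind concentration arc are estimated and compared against the model's plume. The sampler positions and concentrations below are illustrative values, not data from the SRL75 study; the point of the sketch is that the fit implicitly assumes a single coherent Gaussian lobe, which wind shear destroys at long range.

```python
import math

# Hypothetical crosswind sampling arc: sampler offsets (km) from the nominal
# plume axis and measured tracer concentrations (arbitrary units).
y = [-40.0, -30.0, -20.0, -10.0, 0.0, 10.0, 20.0, 30.0, 40.0]
c = [0.1, 0.7, 2.8, 6.5, 8.9, 6.1, 3.0, 0.8, 0.2]

# Fit a Gaussian by the method of moments: the concentration-weighted mean
# gives the plume centerline, and the weighted variance gives sigma-y.
total = sum(c)
y_c = sum(ci * yi for ci, yi in zip(c, y)) / total
variance = sum(ci * (yi - y_c) ** 2 for ci, yi in zip(c, y)) / total
sigma_y = math.sqrt(variance)

print(f"centerline offset = {y_c:.2f} km, fitted sigma_y = {sigma_y:.2f} km")
```

When the observed arc is multi-lobed or sheared, the same two moments still return numbers, but they no longer describe any real plume — which is why the fit can mislead.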

Cross Appalachian Tracer Experiment (CAPTEX)

The Cross Appalachian Tracer Experiment (CAPTEX) performed five tracer releases from either Dayton, Ohio or Sudbury, Ontario, with tracer concentrations measured at hundreds of monitoring sites deployed in the northeastern U.S. and southeastern Canada out to distances of 1000 km downwind of the release sites. Numerous CALPUFF sensitivity tests were performed for the third (CTEX3) and fifth (CTEX5) CAPTEX tracer releases from, respectively, Dayton and Sudbury. The performance of the six LRT models was also intercompared using the CTEX3 and CTEX5 field experiments.

CAPTEX Meteorological Modeling

MM5 meteorological modeling was conducted for the CTEX3 and CTEX5 periods using modeling approaches prevalent in the 1980s (e.g., one 80 km grid with 16 vertical layers) that were sequentially updated to a more current MM5 modeling approach (e.g., 108/36/12/4 km nested grids with 43 vertical layers). The MM5 experiments also employed various levels of four-dimensional data assimilation (FDDA), from none (i.e., forecast mode) to increasingly aggressive use of FDDA.
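The experimental design above can be viewed as a matrix crossing grid configuration with FDDA level. The grid spacings and layer counts below come from the text; the FDDA level names are assumed labels for illustration, not terminology from the report.

```python
# Illustrative encoding of the MM5 sensitivity matrix. The grid spacings and
# vertical layer counts are from the text; FDDA labels are assumptions.
grids = {
    "1980s-style": {"nests_km": [80], "vertical_layers": 16},
    "current":     {"nests_km": [108, 36, 12, 4], "vertical_layers": 43},
}
fdda_levels = [
    "none (forecast mode)",
    "analysis nudging",
    "analysis plus observation nudging",
]

# Each experiment crosses one grid configuration with one FDDA level.
experiments = [(name, fdda) for name in grids for fdda in fdda_levels]
```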

CALMET sensitivity tests were conducted using 80, 36 and 12 km MM5 data as input and using CALMET grid resolutions of 18, 12 and 4 km. For each MM5 and CALMET grid resolution combination, additional CALMET sensitivity tests were performed to investigate the effects of different options for blending the meteorological observations into the CALMET STEP1 wind fields using the STEP2 objective analysis (OA) procedures to produce the wind field that is provided as input to CALPUFF:

• A – RMAX1/RMAX2 = 500/1000
• B – RMAX1/RMAX2 = 100/200
• C – RMAX1/RMAX2 = 10/100
• D – no meteorological observations (NOOBS = 2)
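The role the RMAX radii play in the STEP2 blending can be sketched with a single pass of a Cressman-style objective analysis at one grid point: observations within the radius of influence pull the first-guess wind toward themselves, and shrinking the radius excludes distant observations. This is a minimal sketch of the general technique, with illustrative numbers; the actual CALMET STEP2 weighting scheme differs in detail.

```python
def cressman_blend(first_guess, observations, rmax):
    """One Cressman-style objective-analysis pass at a single grid point:
    correct the first-guess wind component toward nearby observations,
    ignoring any observation farther away than rmax (km).
    Sketch only -- the real CALMET STEP2 procedure is more elaborate."""
    num = den = 0.0
    for dist_km, value in observations:
        if dist_km <= rmax:
            # Classic Cressman weight: 1 at the obs location, 0 at rmax.
            w = (rmax**2 - dist_km**2) / (rmax**2 + dist_km**2)
            num += w * (value - first_guess)  # innovation: obs minus first guess
            den += w
    return first_guess if den == 0.0 else first_guess + num / den

# (distance from the grid point in km, observed u-component in m/s) -- illustrative
obs = [(8.0, 2.0), (90.0, 10.0)]

small = cressman_blend(4.0, obs, rmax=10.0)   # only the 8 km obs is inside the radius
large = cressman_blend(4.0, obs, rmax=200.0)  # the 90 km obs now pulls the field too
print(small, large)
```

This is the behavior the A/B/C options probe: large RMAX values let far-away observations reshape the wind field, while small values keep the STEP1 (MM5-derived) field except very near the monitors.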

Wind fields estimated by the MM5 and CALMET CTEX3 and CTEX5 sensitivity tests were paired with surface wind observations in space and time, then aggregated by day and then over the modeling period. The surface wind comparison is not an independent evaluation, since many of the surface wind observations in the evaluation database are also provided as input to CALMET. Since the CALMET STEP2 OA procedure is designed to make the CALMET winds at the monitoring sites better match the observed values, one would expect CALMET simulations using observations to perform better than those that do not. However, as EPA points out in their 2009 IWAQM reassessment report, CALMET's OA procedure can also produce discontinuities and artifacts in the wind fields, resulting in a degradation of the wind fields even though they may match the observed winds better at the locations of the observations (EPA, 2009).
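The pair-then-aggregate procedure described above can be sketched as follows. The paired samples are hypothetical placeholders, and the bias/RMSE statistics shown are standard choices, not necessarily the exact metrics used in the study.

```python
import math
from collections import defaultdict

# Hypothetical (day, observed wind speed, modeled wind speed) pairs, already
# matched in space and time -- illustrative values only.
pairs = [
    ("day1", 3.2, 3.9), ("day1", 5.1, 4.4), ("day1", 2.0, 2.6),
    ("day2", 6.3, 7.8), ("day2", 4.0, 5.1),
]

# Step 1: aggregate paired errors by day.
by_day = defaultdict(list)
for day, obs, mod in pairs:
    by_day[day].append(mod - obs)

daily_stats = {}
for day, errs in by_day.items():
    bias = sum(errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    daily_stats[day] = (bias, rmse)

# Step 2: aggregate over the full modeling period.
all_errs = [e for errs in by_day.values() for e in errs]
period_bias = sum(all_errs) / len(all_errs)
```

Because many of the observation sites also feed CALMET's STEP2 OA step, low errors at these sites reward the analysis matching its own inputs rather than demonstrating skill, which is why the text flags the comparison as non-independent.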
