
Rank  Sensitivity Test  RANK  Met. Grid (km)  CALMET Grid (km)  RMAX1/RMAX2 (km)  Met. Obs.
15    BASED             1.79  80              18                --                No
16    EXP3A             1.79  36              12                10/100            Yes
17    EXP3B             1.79  36              12                100/200           Yes
18    EXP3C             1.79  36              12                500/1000          Yes
19    EXP3D             1.79  36              12                --                No
20    4KM_MMIF          1.78  4               --                --                No
21    EXP4C             1.72  12              12                10/100            Yes
22    36KM_MMIF         1.42  36              --                --                No
23    80KM_MMIF         1.42  80              --                --                No
24    12KM_MMIF         1.28  12              --                --                No

Conclusions of the CAPTEX CALPUFF Tracer Sensitivity Tests

There are some differences and similarities in CALPUFF's ability to simulate the observed tracer concentrations in the CTEX3 and CTEX5 field experiments. The overall conclusions of the evaluation of the CALPUFF model using the CAPTEX tracer test field experiment data can be summarized as follows:

• There is noticeable variability in CALPUFF model performance depending on the selected input options to CALMET.
  - By varying CALMET inputs and options through their range of plausibility, CALPUFF can produce a wide range of concentration estimates.

• Regarding the effects of the RMAX1/RMAX2 parameters on CALPUFF/CALMET model performance, the "A" series (500/1000) performed best for CTEX3 while the "C" series (10/100) performed best for CTEX5; both CTEX3 and CTEX5 agree that the "B" series (100/200) is the worst-performing setting for RMAX1/RMAX2 (an illustrative control-file fragment follows this list).
  - This contrasts with the CALMET wind evaluation, which found the "B" series to be the CALMET configuration that most closely matched observed surface winds.
  - The CALMET wind evaluation was not an independent evaluation, since some of the wind observations used in the model evaluation database were also used as input to CALMET.
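For orientation, RMAX1 and RMAX2 are the CALMET maximum radius-of-influence parameters (in km) governing how far surface-layer and aloft wind observations, respectively, are allowed to influence the diagnostic wind field. As a minimal illustration only (not a complete input group), the "A" series (500/1000) values would appear in a CALMET.INP control file roughly as follows:

```text
! RMAX1 = 500. !   Maximum radius of influence over land, surface layer (km)
! RMAX2 = 1000. !  Maximum radius of influence over land, layers aloft (km)
```

The "B" and "C" series tests discussed above correspond to substituting 100/200 and 10/100, respectively, for these two values.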

Evaluation of Six LRT Dispersion Models using the CTEX3 Database

Six LRT dispersion models were applied for the CTEX3 experiment using common meteorological inputs based solely on MM5. Figure ES-4 displays the RANK model performance statistic for the six LRT dispersion models. The RANK statistical performance metric was proposed by Draxler (2001) as a single model performance metric that equally weights the component performance metrics for correlation (PCC or R²), bias (FB), spatial analysis (FMS), and unpaired distribution comparisons (KS). The RANK metric ranges from 0.0 to 4.0, with a perfect model receiving a score of 4.0.
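Draxler's composite is commonly written as RANK = R² + (1 − |FB|/2) + FMS/100 + (1 − KS/100), so that each of the four terms is normalized to the range 0 to 1. The short Python sketch below (function name and example values are hypothetical, assuming the component statistics have already been computed in their usual units) shows how the terms combine:

```python
def rank_score(r2: float, fb: float, fms: float, ks: float) -> float:
    """Composite RANK score per Draxler (2001).

    r2  - squared correlation coefficient (0 to 1)
    fb  - fractional bias, 2*(M - O)/(M + O), ranging -2 to 2
    fms - figure of merit in space, in percent (0 to 100)
    ks  - Kolmogorov-Smirnov parameter, in percent (0 to 100)
    Each term is normalized to [0, 1], so a perfect model scores 4.0.
    """
    return (
        r2                        # correlation term
        + (1.0 - abs(fb) / 2.0)   # bias term
        + fms / 100.0             # spatial overlap term
        + (1.0 - ks / 100.0)      # unpaired-distribution term
    )

# Hypothetical example: R^2 = 0.25, FB = -0.4, FMS = 45%, KS = 30%
print(rank_score(0.25, -0.4, 45.0, 30.0))  # -> 2.2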

