The main objective of a modelling study is usually to determine the significance of the effects of pollutants discharged from a particular source. The results must therefore be reported effectively and concisely in a manner suitable for the purpose for which they were produced. This means the results must be communicated in a way that can be understood by other people who may not be experienced in interpreting model output. There are two elements to this: first, to report the modelling results themselves in an easy-to-understand manner; and second, to evaluate the implications of the results in terms of the potential effects of the predicted ground-level concentrations on people's health and the environment (also in an easy-to-understand manner).
This section focuses primarily on the first part - making modelling results easy to understand. The second aspect - how to evaluate modelling results in terms of potential environmental effects and the national environmental standards - will be covered in the upcoming Good Practice Guide for Assessing Discharges to Air.
The key factors involved in reporting modelling results are:
- do not include large sections of data in a report, except as an appendix or electronic attachment
- always include information about the input data and how variations may affect the results
- discuss the accuracy of the modelling results
- prepare maps of the pollution contours, where useful
- indicate which factors are most influential in determining the peak ground-level concentrations.
Most models allow results to be compiled and reported in a variety of formats to support statistical analysis. These include the maximum predicted concentration at each or any receptor, or up to the nth-highest predictions, where n is defined by the user. n is chosen to provide commonly used percentile predictions (such as the 99.5 percentile, which is the highest ground-level concentration at each receptor after the highest 0.5% of predictions have been discarded). Tables of the 50 (ISCST3 and CALPUFF) or 100 (AUSPLUME) highest predictions for all receptors can also be generated.
Some models also allow files to be generated that record the number of exceedances of a user-specified threshold value at each receptor. This function allows, for example, the production of graphs or tables showing the percentage of time that model results exceed the evaluation criteria.
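The two statistics described above can be sketched in a few lines. This is a minimal illustration using hypothetical hourly predictions; in a real study the values would be read from the model's output file, and the function names here are illustrative only.

```python
# Sketch: per-receptor statistics from a year of hourly predictions.
# The hourly series below is hypothetical; a real study would read
# these values from the model's results file.

def nth_highest(values, n):
    """Return the nth-highest value (n=1 is the maximum)."""
    return sorted(values, reverse=True)[n - 1]

def exceedance_count(values, threshold):
    """Number of periods in which the prediction exceeds the threshold."""
    return sum(1 for v in values if v > threshold)

# Hypothetical hourly predictions (concentration units) at one receptor.
hourly = [0.1 * (i % 37) for i in range(8760)]

# For 8760 hourly values, the 99.9 percentile corresponds roughly to the
# 9th-highest value (0.1% of 8760 is about 9 hours).
p999 = nth_highest(hourly, 9)

criterion = 2.95  # hypothetical evaluation criterion
n_exceed = exceedance_count(hourly, criterion)
percent_time = 100.0 * n_exceed / len(hourly)
```

The `percent_time` figure is exactly the "percentage of time that model results exceed the evaluation criteria" described above.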
AUSPLUME also provides for the generation of a binary file containing all results for all receptors, and in the current version (5.4) of AUSPLUME these data can be processed from within the GUI using the 'statistics utility' to produce percentile data files for the highest and second-highest values, and for percentiles from the 99.9th down to the 90th.
ISCST3 allows the generation of a 'POSTFILE', which contains all results for all receptors. The postfile can be read by post-processing subroutines such as 'Percent View' by Lakes Environmental (free to download from www.lakes-environmental.com), and used to generate statistical data for each receptor.
For the purpose of comparing modelling results to an evaluation criterion:
a) run the model for the minimum period of one full year of meteorological data where possible (i.e. 8760 hours)
b) identify the receptor(s) that are most highly impacted and those that are most sensitive
c) for the receptor(s), report the 99.9 percentile value of the predicted ground-level concentration as the maximum ground-level concentration likely to occur.
Provide an indication of the representativeness of the 99.9 percentile ground-level concentration by also presenting a number of other percentile values (e.g. the maximum, 99.5th and 99th percentile values).
Use the frequency of exceedances to indicate the frequency of 'pollution events' that exceed the evaluation criterion being used.
Reporting the 99.9% predicted value is not simply a case of listing the highest 100 predictions over all receptors and then taking the ninth value on that list. This is a common mistake. The 99.9% value reported must be with reference to a specific receptor, which must be located at the point of highest impact. To ensure that the area of highest impact is identified, it may be helpful to plot contours of both the maximum and 99.9% values. An alternative is to list the ninth-highest value for every receptor and report the highest value identified.
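The difference between the correct per-receptor statistic and the common mistake can be shown with a small sketch. The receptor series below are invented to make the contrast obvious; the "9th-highest" rank stands in for the 99.9 percentile of 8760 hourly values.

```python
# Sketch: the per-receptor rule for the 99.9 percentile value.
# Receptor series are hypothetical and deliberately extreme.

def ninth_highest(values):
    """9th-highest value, approximating the 99.9 percentile of 8760 hours."""
    return sorted(values, reverse=True)[8]

receptors = {
    "R1": [10.0] * 5 + [1.0] * 8755,   # five extreme hours, then low values
    "R2": [9.0] * 5 + [0.5] * 8755,
}

# Correct: take the 9th-highest at each receptor, then report the largest.
per_receptor = {name: ninth_highest(vals) for name, vals in receptors.items()}
correct = max(per_receptor.values())   # 1.0 (at receptor R1)

# Wrong: pool all predictions over all receptors and take the 9th value of
# the combined list. This mixes receptors and inflates the statistic.
pooled = sorted((v for vals in receptors.values() for v in vals), reverse=True)
wrong = pooled[8]                      # 9.0: not the 99.9 percentile anywhere
```

Here the pooled approach reports 9.0 even though no single receptor has a 99.9 percentile above 1.0, which is exactly the mistake described above.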
6.1.2 Tabulated results
An example of tabulated results from AUSPLUME for the highest ground-level concentration at each receptor is shown in Table 6.1. ISCST3 and CALPUFF files have a similar format. This table shows that the highest ground-level concentration (in these grid locations) was located 10 m East, 70 m North, was 0.809 (concentration units), and occurred at 8 pm on 18 May 1997. Such data can be imported into spreadsheets like Excel or Lotus and sorted to analyse for seasonal or daily trends (Figure 6.1).
Table 6.1: Example of tabulated results in AUSPLUME
Highest recordings for each receptor (in concentration units); averaging time = 1 hour.
6.1.3 Graphical results
Models can generate data files for importing into a graphics programme. ISCST3, AUSPLUME, and CALPUFF (via the post-processing programme CALPOST) all produce data files summarising the results in an 'x, y, z' three-column ASCII format (x co-ordinate, y co-ordinate, concentration) suitable for importing into SURFER for graphical analysis. SURFER is the most commonly used plotting programme with dispersion models. AUSPLUME links directly to SURFER for graphical utilities within the AUSPLUME GUI.
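The three-column 'x, y, z' ASCII format described above is simple enough to write (or repair) directly. The following sketch uses a hypothetical receptor grid and concentrations; the file layout is the generic whitespace-separated form, and any model-specific header requirements would need to be checked against the plotting programme's documentation.

```python
# Sketch: write gridded results as three-column 'x y z' ASCII text
# suitable for a contouring package. Grid and values are hypothetical.

def write_xyz(path, rows):
    """rows: iterable of (x, y, concentration) tuples."""
    with open(path, "w") as f:
        for x, y, z in rows:
            f.write(f"{x:.1f} {y:.1f} {z:.6f}\n")

# Hypothetical 3 x 3 receptor grid at 100 m spacing.
rows = [(100.0 * i, 100.0 * j, 0.01 * (i + j))
        for j in range(3) for i in range(3)]
write_xyz("results.xyz", rows)
```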
Spreadsheets such as EXCEL are also used for graphing one-dimensional data from screening analyses, such as that shown in Table 6.1.
Following are some suggestions for preparing graphs in SURFER from modelling simulations.
Set the number of points in the SURFER grid to be the same as your number of receptors (section 4.2.2).
Overlay the graphed concentration contours with a base map and terrain map (if appropriate) to allow people viewing the graphs to understand perspective, scale, and context of the results (e.g. 'Where's my house in relation to this?'). If you do this, make sure that your scale is the same on both the contour map and the base map, and if possible overlay the two maps onto the same axes using the 'Overlay Maps' function.
Remember that SURFER is simply a mathematical interpolation programme that draws contours of best fit between your data points. If your number of data points is low, the interpolation may look poor. Pockets of concentric circles often indicate an anomalous data point which is out of place compared to neighbouring receptors, and the data file used to create the graph should be checked.
If you have multiple source groups in your model, then AUSPLUME lists the results for each group one after the other in the same plot data file. These must be divided into individual plot files using a text processor before importing into SURFER.
SURFER will allow you to calculate the area of a receptor grid that is impacted by concentrations above a user-defined level. This can be a useful tool if you want to explore the extent of impact as well as the magnitude.
'Percent View' by Lakes Environmental (free to download from www.lakes-environmental.com) can be used to generate percentile plots (up to 99.0%) for whole ISCST3 grids (Figure 6.2 and Figure 6.3).
Results of the top 100 (say) predictions can also be used to generate detailed percentile statistics at any given receptor by making the grid so small that it only includes one receptor at the location of interest. The results table then shows the top 100 results for that receptor, which can be used to calculate the 99.9, 99.0, 98.0 and 95.0 percentiles, etc., and graphed (Figure 6.4).
Similar statistical post-processing options to those in ISCST3 and AUSPLUME are available in CALPUFF's post-processing program CALPOST.
Present modelling results graphically whenever it is helpful and appropriate.
Use sufficient labelling and include legends to allow people without expert training or experience in dispersion modelling to understand the data.
If presenting contour plots:
a) indicate the location of sources, site property boundary and potentially sensitive receptors
b) keep the number of concentration contours to the minimum necessary for conveying the information
c) include the relevant evaluation criteria
d) paste the contours over a map or photograph of the impacted area
e) calculate the area of a receptor grid that is impacted by concentrations above the evaluation criteria. This is a useful tool if you want to explore the extent of impact as well as the magnitude.
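For a regular receptor grid, the impacted-area calculation in (e) reduces to counting cells above the criterion and multiplying by the cell area. This is a minimal sketch with hypothetical grid spacing and concentrations, not a substitute for the equivalent function in a plotting package.

```python
# Sketch: area of a regular receptor grid impacted by concentrations
# above an evaluation criterion. Spacing and values are hypothetical.

def impacted_area(concentrations, cell_area, criterion):
    """Total area (same units as cell_area) of cells above the criterion."""
    return cell_area * sum(1 for c in concentrations if c > criterion)

# Hypothetical 100 m x 100 m cells (10,000 m2 each).
grid = [0.2, 0.8, 1.4, 2.1, 0.6, 1.9, 0.3, 2.5, 1.1]
area_m2 = impacted_area(grid, 10_000.0, 1.5)  # three cells exceed 1.5
```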
Present a percentage occurrence analysis for sensitive receptors.
Present graphs showing the daily and seasonal variation of the ground-level concentrations caused by the contaminants discharged from the source.
Include the plot file data as an electronic appendix to the report.
6.2 Accounting for and reporting of model error and uncertainty
One of the most common criticisms of dispersion modelling is, "It's not at all accurate - it's only a model". To avoid such criticisms it is important to follow some simple principles, as listed in the recommendation box below.
Design all modelling studies to be as accurate as possible for the purpose of the study.
Allow the accuracy of the modelling study to be easily assessed by:
a) stating the objectives of the study
b) demonstrating that the model inputs are as correct as possible
c) knowing and stating the model performance limitations
d) demonstrating (via the methodology) that the modelling process has been conducted appropriately
e) including any validating information from monitoring that might be available.
If corners are cut on any of these, the results can be at best meaningless, and at worst dangerous, especially if they are used to justify an important decision. There are three main general sources of error and uncertainty in dispersion modelling:
- inaccurate input data
- inappropriate use of the model (or expecting too much from it)
- poor performance of the model itself.
The total uncertainty contained in the model results is the cumulative effect of these sources. It is useful here to distinguish between 'reducible' and inherent uncertainty. Reducible uncertainty includes the accuracy of the input data (sections 6.2.1 and 6.2.4), and the way in which the model is run (sections 4 and 6.2.3). The inherent uncertainty is the fundamental limitations in the way a model works. This is beyond the control of the model user but is an issue they must be aware of (section 6.2.2).
6.2.1 Input data uncertainty
Any model is only as good as the input data. But of course the question is always: how good does it need to be?
There are three sets of data needed for dispersion modelling:
(a) source, or emissions characteristics,
(b) meteorological data, and
(c) terrain and local features data.
a Source characteristics
The critical factor is to know the rate of emissions, in mass units (grams per second or kilograms per hour or tonnes per day), of the contaminant of interest. This needs to be known for each time period of the model run, usually hourly for a year. Only in very special cases is this constant and known accurately. There are several possible approaches.
- The most common method, which is usually easy to achieve and justify, is to use the maximum emission rate. This occurs when an appliance is operating at its upper limit (e.g. a coal boiler consuming the maximum amount of fuel for which it is designed). If the emissions are measured by an 'approved' method, this is ideal. Guidance on emissions monitoring methods can be found in the Ministry's Compliance Monitoring and Emissions Testing of Discharges to Air (MfE, 1998). If actual emissions measurements are not available, then either a manufacturer's design specification or an emission factor (refer section 4.1.2) can be used.
- Another method, applicable in many circumstances, is to use a percentile discharge rate - either 99.9%, 99.5% or even 95%. This is common in processes that can have occasional upset conditions, such as a wastewater plant malfunctioning. Using the upset rate can bias model results severely, leading to predicted concentrations that might be far higher than are ever likely to occur because the particular combination of discharge and meteorology leading to these concentrations might be very rare. This should be investigated and the use of a percentile discharge rate should be clearly justified.
- A method occasionally used is to measure rates that vary by time of day, day of week, or season. Some processes do not discharge all the time, and modelling that takes account of this is more realistic.
- For processes where there is a known hourly discharge rate, in theory these can be directly input into the model, along with the concurrent meteorological information, to produce a very accurate assessment. In practice this is almost never done. This level of accuracy in emissions rates is usually not warranted, as the uncertainties in other factors (meteorology, terrain, model performance) take over.
The overriding feature is that peak modelled ground-level concentrations will be directly related to the emission rate, so it is important:
- to use a rate that is sufficiently large to cover the worst-case discharge of concern
- that the period the maximum emission lasts for matches the averaging period of the relevant evaluation criteria.
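Because predicted ground-level concentrations from a single source scale linearly with its emission rate (for a non-reacting contaminant with unchanged stack parameters), results modelled at one rate can be rescaled to another. A hedged sketch with hypothetical numbers:

```python
# Sketch: linear rescaling of modelled concentrations with emission rate.
# Valid only for a single, non-reacting source with unchanged source
# configuration; all values here are hypothetical.

def rescale(concentration, modelled_rate_gps, new_rate_gps):
    """Scale a predicted concentration from one emission rate (g/s) to another."""
    return concentration * new_rate_gps / modelled_rate_gps

# Model run used a unit rate of 1.0 g/s; the worst-case discharge is 2.5 g/s.
peak_modelled = 40.0                                 # concentration units
peak_worst_case = rescale(peak_modelled, 1.0, 2.5)   # 100.0
```

Running the model at a unit emission rate and rescaling is a common way to test several discharge scenarios without repeating the full model run.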
Clearly state the value and the origin of the source characteristics data that have been put into the model.
Include a copy of the model input file as an (electronic) appendix to the report.
Justify your choice of a particular value of a parameter, or run the model with a range of possible input values.
Preferentially use measured source characteristic values over estimated rates or emission factors.
If using calculated source characteristic values, clearly state the method used to calculate the value. Provide detailed calculations in an appendix to the main report and explain potential uncertainty with the values.
Pay particular attention to emission rate data by:
a) using a rate that is sufficiently large to cover the worst-case discharge of concern
b) ensuring the period the emission lasts for matches the averaging period of the relevant assessment criteria.
Provide a sensitivity analysis of model results to variation in source characteristics. This can be done by running the model with the two extreme values of a particular characteristic (e.g. low and high efflux velocities).
Facilitate an independent review of the source data and avoid requests for further information by reporting all sources of data and assumptions made.
b Meteorological data
Lack of appropriate meteorological information is often the single most important limiting factor in modelling accuracy. It is also the most subjective in deciding just how many data are needed, from which location and how accurate they must be.
The ideal is to have at least one year of data, with at least hourly resolution, at the site of interest (usually within a few hundred metres). The minimum measurement requirements are for wind speed and direction, but some method of estimating stability and mixing height is also required as an input for steady-state modelling. A full description of the meteorological detail is contained in section 5.
Often there are no suitable meteorological data at all. In this case, a 'screening' modelling study using a theoretical meteorological data set can be done. This will uncover the worst-case situation, and show the highest concentration that might occur. However, it gives no information on the frequency or location of the peak concentration, nor on the percentile statistics. When the predicted maximum ground-level concentration is well within the evaluation criteria, the use of a screening model may be sufficient. However, where the predicted ground-level concentration is higher than the evaluation criteria, a more thorough modelling study may be required and more accurate input data (including meteorological and emissions data) will be needed.
For each step in improving the meteorological data, the accuracy and reliability (and 'modelling believability') of modelling results improves. Possible improvements include:
- a simple mast with basic monitoring equipment in the general vicinity
- a simple mast at the site
- a well instrumented mast
- an array of masts
- full vertical sounding data
- model-generated data (using mast and/or sounding data)
- periods longer than one year
- previously used data sets (with accuracy confirmed in previous studies).
Further problems can occur when conditions across the modelling domain are not uniform, as frequently happens in New Zealand. For instance, the plume may be influenced by meteorological conditions that are not the same as those measured at the site, and winds at plume height may differ from those at the surface, sometimes substantially. There are also more subtle problems with conditions changing during the modelling period. Some models (especially puff models) can handle this, but additional detail in the input data is required.
The required accuracy of modelling results and input data is guided by national guidance in this document and the Guide to Assessing Discharges to Air (currently under development), requirements in regional plans, recommendations from council staff and reviewers, and legal/council precedents. A key component of this system is often the use of independent reviewers of modelling, particularly in cases where there is an indication that some contaminant concentration is close to, or exceeding, the evaluation criteria. To assist councils, reviewers and modellers, some key principles should be followed when deciding on and reporting information about the level of detail in the meteorological input data. These are given in the recommendation box below.
Clearly state the origin of the meteorological data that have been put into the model.
Minimise the meteorological input data uncertainty by following (as far as practicable) the recommendations made in this document in section 5.
Facilitate an independent review of the meteorological data by reporting all sources of data, assumptions made and any guideline recommendations not followed.
Assess the sensitivity of the model's prediction of the magnitude of the maximum ground-level concentration to meteorological input data. Do this by running the model with data from a number of years, or data from a site with similar climate and meteorology. A comparison with results obtained using screening data can also be useful.
Include a copy of the meteorological data file(s) used as an (electronic) appendix to the report.
c Terrain and other local features
As discussed in section 4.3.4, dispersion modelling requires information about the terrain features surrounding the site that affect dispersion and plume behaviour. These include:
- terrain descriptions
- the location and size of hills
- building features
- surface features such as roughness length
- heat flux (for some models).
Determining the required accuracy for terrain and other local features is quite subjective. In many cases the decision is determined by what is available rather than what is required. It is also very dependent on the application; for instance, for mildly buoyant sources with low stacks, the building downwash issue can be critical, and building dimensions and orientations will determine the accuracy of the model prediction. At the other extreme, for hot, buoyant sources, discharged through tall stacks with final plume heights above 100 m, the building dimensions are irrelevant.
Similar arguments exist for each of the other parameters, and so the effect of terrain information on the accuracy of the model will vary between different applications.
Clearly state the origin of the terrain data that have been put into the model.
Justify your choice of a particular value of a parameter, or run the model with a range of possible input values.
Quantify the influence of terrain information on the model results in any particular application by performing an analysis of the sensitivity of the model results to each terrain parameter (section 6.2.4c).
6.2.2 Model performance
After input data uncertainty, the fundamental limitation for dispersion model accuracy is the way the model works. This includes the structure, physics and chemistry, and the way these are all parameterised and computed. There is considerable debate over this, as can be attested by anyone who has attended a technical meeting of model authors, and watched them defend their model's features!
In theory, it should be possible to evaluate any model's performance by a formalised evaluation scheme, whereby it is compared with actual monitoring results (with all other things being equal - emissions rates, meteorology and terrain). Indeed this is done to compare different models. However, in practice this is a complex and expensive process, and virtually impossible for all circumstances. The issues associated with evaluating model performance are outlined in detail by Hanna (1988) and Weil et al. (1992). More recently, model validation has been addressed by the initiative on Harmonisation within Atmospheric Dispersion Modelling for Regulatory Purposes (http://www.harmo.org/). One of the outcomes of this initiative has been the so-called 'Model Validation Kit': a collection of three experimental data sets accompanied by software for model evaluation.
Most of the commonly used models have undergone some form of validation of their performance. It is recommended that model users should familiarise themselves with the relevant literature before using and presenting results from a particular model. Table 6.2 contains examples of the validation studies that have been undertaken.
Table 6.2: Model validation studies
- Hall et al. (2002)
- Riswadkar and Kumar (1994)
- Luhar and Hurley (2003)
- Luhar and Hurley (2002)
- Strimaitis and Chang (1998)
One of the most commonly applied models in New Zealand, AUSPLUME, does not have an extensive series of formalised evaluations, instead relying on its similarity to standard Gaussian-plume models, such as ISCST3, which have been validated. One of the validation studies of AUSPLUME that has been completed (Bluett, 1998) shows that the model's performance in New Zealand is generally within a factor of two and similar to that observed in overseas studies.
A further complication exists in New Zealand, where many cases have complex terrain features. Complex terrain is handled poorly by Gaussian-plume models, and where it is an issue advanced models should be used. In theory, advanced models should give very accurate results provided adequate input data are available.
It is generally accepted that, when supported by good input data, dispersion modelling may be used to predict concentrations to within a factor of two.
The 'factor of two' performance guideline is probably still applicable to Gaussian-plume models. Until a particular model is validated for the situation at hand, it is probably a safe estimate of likely model accuracy.
If the model shows that the peak concentration is less than half the evaluation criteria, then it can be accepted with a good degree of confidence that the criteria will not be exceeded.
A result showing, say, just 20% under the evaluation criteria is not enough evidence to show that the relevant criteria will not be exceeded. Further evidence, such as conservative inputs or validation of model results against monitoring data, should be used to demonstrate the robustness of results that are relatively close to the guideline (or national environmental standard) value.
Greater confidence can be placed in the results of well-validated and well-executed plume and puff models that have accurate input data.
Until greater general experience is gained or some further formal validation studies are completed, it is not possible to say how much more confidence can be given to well-executed plume and puff models.
Model performance should be regarded as better in simple compared to complex situations (e.g. flat compared to hilly terrain).
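The factor-of-two rule of thumb in the recommendations above can be expressed as a simple screening check. This is a sketch, not a formal acceptance test: the three-way outcome (and the upper branch in particular) is an illustrative reading of the guideline, and borderline results always need the further evidence described above.

```python
# Sketch: applying the factor-of-two performance guideline when comparing
# a predicted peak concentration with an evaluation criterion.
# The three outcomes are illustrative, not a formal decision rule.

def confidence_screen(predicted_peak, criterion):
    """Rough screening outcome under a factor-of-two model uncertainty."""
    if predicted_peak < 0.5 * criterion:
        return "unlikely to exceed criterion"
    if predicted_peak > 2.0 * criterion:
        return "likely to exceed criterion"
    return "inconclusive: further evidence needed"

# A result 20% under the criterion (e.g. 280 vs 350) is inconclusive,
# as noted above; only results below half the criterion give confidence.
outcome = confidence_screen(280.0, 350.0)
```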
6.2.3 Misapplication of models
A common, but largely avoidable, source of modelling uncertainty is a model being used inappropriately. Some cases of this are:
- using a Gaussian-plume model to predict effects on a steep hill
- ignoring building downwash for a short stack on a large building
- using the output from screening modelling to produce a percentage exceedance (yes it has been done!)
- using a default meteorological data set that comes with the model (and is from the other side of the world to New Zealand)
- having the wrong default values in the user settings (such as a 0.1 m roughness length over an urban area when it should be 1 m or even 2 m)
- editing input data sets (particularly meteorological files) to remove conditions that lead to high concentrations
- assigning too much accuracy to the model output (e.g. "The modelled peak is 348, which is less than the 350 guideline, so it's fine").
There are no specific recommendations to avoid these problems, except to approach all modelling results with caution and to seek further information where anything is not clear. However, provided modellers have reasonable experience and clearly document the model development and analysis of results, any misapplication of models should be avoided or picked up by the council assessor. Misapplication can also be avoided by discussing modelling options with the council assessment officer before commencing the modelling exercise and submitting the assessment of environmental effects.
Avoid misapplication of models by clearly documenting the development of the model and the analysis of its results.
6.2.4 Minimising errors
Despite the limitations discussed above, there are several practical steps that can be taken to minimise uncertainty in modelling results.
a Check, check and check
It is remarkably easy to get one or more inputs wrong. Figures get transposed, formatting is not right, there is poor quality control on input files, revised output gets overlaid on old outputs, plotted results end up in the wrong place on maps - even things like using northern hemisphere co-ordinates because they are the default. Many of these errors can propagate into the final results. There is no substitute for checking. As a general guide, it is worthwhile spending almost as much time checking all the inputs and data used as setting up and running the model. Methods that can be used to check input files and output data are provided in section 6.3.
b Sensitivity analysis
Another more formalised way to assess model result uncertainty is to conduct a few extra runs with slightly changed parameters. What if we make the stack slightly higher? What if we restrict minimum mixing heights to 50 m instead of 30 m? What if we 'move' the source 100 m further out? What if we change the roughness length from 0.5 m to 1 m? Each of these actions should have a broadly predictable effect on the results. If this isn't as expected, something may be wrong. This analysis determines which are the important parameters; that is, those to which the model results are most sensitive. These are the parameters that need to be known with the most certainty.
Model results are increasingly presented in terms of percentile exceedances, rather than absolute maximum results. This makes the results more robust, and probably more realistic for what people want out of the modelling assessment. Ground-level concentrations at any particular receptor may be highly skewed. The absolute worst hour may have a concentration twice that of the second-worst hour, and 10 times that of the ninth-highest. However, the ninth-highest may only be fractionally above the tenth-highest. This means the modelling result which is taken out and used (often just a single figure) is highly sensitive to modelling uncertainty when it is the peak, but much less so when it is the 99.9 percentile.
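The skewness described above can be illustrated numerically. The ten highest hourly concentrations below are invented for one hypothetical receptor, but show the pattern: the peak stands alone, while neighbouring ranked values sit close together.

```python
# Sketch: why a percentile is more robust than the absolute maximum.
# Hypothetical ten highest hourly concentrations at one receptor.

top_ten = [200.0, 100.0, 60.0, 45.0, 35.0, 30.0, 26.0, 23.0, 21.0, 20.5]

peak = top_ten[0]     # dominated by a single extreme hour
ninth = top_ten[8]    # approximates the 99.9 percentile of 8760 hours

# The peak is 2x the second-highest and roughly 10x the ninth-highest,
# but the ninth-highest is only fractionally above the tenth.
ratio_peak_to_second = peak / top_ten[1]    # 2.0
ratio_ninth_to_tenth = ninth / top_ten[9]   # about 1.02
```

A small modelling error that shifts one extreme hour changes `peak` drastically but leaves `ninth` almost unchanged, which is why the 99.9 percentile is the preferred reporting statistic.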
Have an independent person review (and perhaps cross-check) all of the model inputs and outputs. It is not sufficient for the reviewer to consider only the final hard copy of the report.
Check model results for 'realism' (e.g. diurnal or seasonal variation).
Where appropriate, perform a sensitivity analysis by conducting extra model runs with parameters changed to reflect the extremes of any particular parameter (e.g. high and low efflux velocities).
Present results for the maximum concentration and a range of percentile statistics to provide an indication of the sensitivity of the maximum ground-level concentration to the model inputs.
6.3 Analysis and interpretation of model results
Once the modelling has been carried out, the results should be analysed to ensure they are believable - at this stage there may still be errors in the model configuration that have not been found (or could not have been predicted). Although the user is often guided by experience, there are several checks that should always be carried out.
Are the highest concentrations in the right location?
- Expect peak concentrations very near the source for low-level emissions.
- Expect peaks further downwind of tall stacks.
- Expect peaks on terrain features as plumes impinge on them (although these may not be realistic in a Gaussian-plume model if the hill is too distant).
Are the highest concentrations consistent with the meteorological conditions?
- Expect peak concentrations from tall stacks during convective/fumigation conditions.
- Expect peaks from low-level emissions during stable conditions (e.g. night time).
- Check how the concentrations vary with wind speed, taking care with calm periods.
- Check whether the highest-ranked concentrations occur at the same time, but at different locations (receptors), and are therefore occurring under the same meteorological conditions.
- Group the highest-ranked concentrations according to location, time of day and meteorological conditions to determine whether they are clustered into pollution 'events'.
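The grouping check above can be sketched in a few lines. The ranked records here are hypothetical (hour index, receptor, concentration); in practice they would be parsed from the model's ranked-output tables.

```python
# Sketch: cluster the highest-ranked predictions into pollution 'events'
# by hour of occurrence. Records are hypothetical; a real study would
# parse them from the model's ranked output.

from collections import defaultdict

ranked = [
    (4101, "R7", 92.0), (4101, "R8", 88.0), (4102, "R7", 85.0),
    (612, "R2", 80.0), (4101, "R9", 78.0), (613, "R2", 75.0),
]

events = defaultdict(list)
for hour, receptor, conc in ranked:
    events[hour].append((receptor, conc))

# Hours with several high-ranked receptors suggest a single meteorological
# event affecting a wide area, rather than isolated receptor peaks.
multi_receptor_hours = [h for h, hits in events.items() if len(hits) > 1]
```

The same grouping could then be cross-referenced against the meteorological file to identify the conditions driving each event.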
Do the highest concentrations coincide with the maximum emissions?
- If the emissions are time-dependent, look at the relationship between times of maximum emissions and times of highest concentrations.
Are the highest (and lowest) concentrations consistent with air quality observations?
- If air quality observations are available, and the model results provide a good match at the monitoring site, then confidence in the model to simulate pollution levels elsewhere is increased.
When using non-steady-state meteorology: are the important conditions simulated well by the meteorological model?
- Quantify the extent to which the dispersion model results are affected by meteorological model performance.
- If high concentrations are expected during, say, sea-breeze conditions, slow valley-drainage flows or pooling of still air, check that the meteorological model gives a realistic representation of such conditions.
- Check whether peak concentrations occur during these conditions, both in the model and in the observations (if any).
- If the model performs poorly in these conditions, take steps to improve the meteorological simulation (through changes in the meteorological model configuration).
These considerations will help the interpretation and provide information that can be used to validate the model results. They will also help to determine the relationships between pollution levels, meteorology and emissions. Finally, if required, the above considerations will enable predictions of what would happen under alternative scenarios. Any predictions should be tested through further model runs, which might incorporate changes in or redesign of the emitters; for instance:
- restriction of operation times
- changes in stack height, stack location or fuel type.
Most emission options will probably have been specified in advance, but the modelling may be used to indicate other options. These tests are in addition to the sensitivity studies described above.
To provide a full interpretation of the results provided by any dispersion model:
a) carry out an analysis of the dispersion model results, ensuring that periods of extreme concentrations are consistent with the meteorological conditions, geographical situations, source configuration and emission rates
b) examine the relationships between concentrations, meteorology and emissions
c) compare the dispersion and meteorological model results with observations (if available).
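As an illustration of step (a), the top-ranked concentrations can be grouped by time of day and wind conditions to see whether they cluster into pollution 'events'. The following Python sketch uses entirely made-up hourly values; the record layout, thresholds and classification rules are assumptions for illustration only, not part of any model's output format.

```python
from collections import Counter

# Hypothetical hourly model output: (hour_of_day, wind_speed_m_s, conc_ug_m3).
# All values and thresholds below are invented for illustration.
records = [
    (7, 0.8, 95.0), (8, 1.0, 90.0), (7, 0.9, 88.0),
    (14, 4.5, 40.0), (22, 1.2, 70.0), (6, 0.7, 85.0),
    (15, 5.0, 35.0), (23, 1.1, 65.0), (8, 0.9, 92.0),
    (13, 4.0, 30.0),
]

# Rank by concentration and keep the top five to look for clustering.
top = sorted(records, key=lambda r: r[2], reverse=True)[:5]

# Classify each top-ranked hour into a crude 'event' category.
def classify(hour, wind):
    period = "morning" if 5 <= hour <= 9 else ("night" if hour >= 21 or hour <= 4 else "day")
    stability = "light wind" if wind < 2.0 else "windy"
    return f"{period}/{stability}"

events = Counter(classify(h, u) for h, u, _ in top)
print(events.most_common())  # [('morning/light wind', 5)]
```

With these invented values all five peaks fall in one morning, light-wind class, which is exactly the kind of clustering that points to a repeatable pollution 'event' worth examining against emissions and meteorology.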
6.4 Accounting for background concentrations
While there is usually a case for assessing the effects of a particular discharge, people are more interested in the overall end result - the cumulative effect. The Resource Management Act 1991 also requires this, and it is spelt out in most regional plans.
This means that modelling results must be added to current background concentrations discharged by other sources. It sounds simple, but there are many issues to deal with, including:
- If background concentration data are available, how should they be used?
- What if there are no background data?
- Do maximum predicted and monitored concentrations occur at the same time of the day and under the same meteorological conditions?
- Should the concentrations just be added?
- Should we use peak values or average values, or something else?
More detailed guidance on dealing with background concentrations will be provided in the upcoming Good Practice Guide for Assessing Discharges to Air.
Modelling assessments must take into account the potential cumulative effects caused by the addition of the discharge being modelled to the current background concentrations.
6.4.1 When local air quality data are available
Having suitable data on background concentrations is an ideal, but uncommon, circumstance. However, the general rule is that anything is better than nothing, and it is worth obtaining whatever data are available from a monitoring site as close as possible to the discharge. Typical sources of data include:
- the National Air Quality Database (http://aqdb.niwa.cri.nz/)
- the Ministry for the Environment's Global Environmental Monitoring programme (Auckland & Christchurch only)
- the Ministry for the Environment's Air Indicators web pages (http://www.environment.govt.nz/indicators/air/)
- regional, district or city council state-of-the-environment reports
- regional, district or city council monitoring programmes
- reports on specific monitoring programmes
- research data (universities and Crown Research Institutes)
- published papers
- consultants' reports (on consent applications)
- industry monitoring programmes
- airshed modelling.
The type, quality and representativeness of these data sets vary enormously, and it is very important to understand what has been measured. It is also important to obtain any meteorological monitoring data from the site, in conjunction with the air quality monitoring data. This information can help to determine whether the peak background concentrations occur under the same conditions as the peak modelled predictions.
When available, use locally recorded air quality data to assess background levels.
The use of background data for cumulative effects assessments should be accompanied by a discussion of their applicability for the intended purpose.
If there is any doubt as to the validity of the information, it should not be used without specific justification.
Meteorological data from the monitoring site should also be examined when assessing the background monitoring results.
6.4.2 When local air quality data are not available
In most cases, an assessment of cumulative effects is required, so background concentrations need to be estimated. Options for estimating background concentrations are discussed below.
a Model other sources
In some cases it is viable to explicitly model the likely cumulative ground-level concentrations caused by other sources in the area. For instance, if the issue is how a particular plant's emissions affect an area that only has one or two other sources (even if these are complex, such as a roadway), then the modelling can include these sources.
b Compare the location with somewhere similar
If the area does not have significant large sources, and does not have any complex geographical or meteorological features, then it can be assumed that the air quality will be similar to another area of similar population density, emission sources and meteorology. This method requires that such an area can be identified, and that monitoring data are available.
c Make a worst-case assumption
In the absence of any of the above it might be necessary to simply 'guess' the existing air quality. The safest guess is to assume a concentration that is at the upper end of what might be feasible, based on what is monitored in, say, Auckland or Christchurch. As an example, it is almost inconceivable that summer background PM10 concentrations in a small town would be greater than those found in the middle of Auckland, so it is reasonably safe to use the monitored values from Auckland. However, the fact that this approach is potentially overly conservative should be taken into account in the assessment.
d Start a new monitoring programme
If all else fails, or if the issue is likely to be of significant importance, start a new monitoring programme as soon as possible. This need not be expensive, as useful information can be gained from relatively short-term surveys, or from passive monitoring. Comprehensive guidance on setting up ambient air quality monitoring stations is provided by the Ministry for the Environment in the Guide to Air Quality Monitoring and Data Management (Ministry for the Environment, 2000b).
When locally recorded air quality data are not available, one or more of the following methods should be used to estimate background concentrations:
a) Model other sources to provide an estimate of background concentrations
b) Use data from a similar location affected by similar discharges and meteorology
c) Make a worst-case assumption of background concentrations
d) Start a new monitoring programme to accurately determine background concentrations.
6.4.3 How to incorporate background data
Once background air quality data and model results are available, adding the two together to provide an estimate of the cumulative impact of the discharge provides the most conservative result. However, there are a number of issues with this approach, and in some circumstances a different method is preferable.
a Spatial co-incidence problems
It is often difficult to know whether the background data are representative of the point at which the modelled peak occurs. In general they will not be, leading to an overestimate of the cumulative effect. However, provided the overestimate is within the evaluation criteria, the effects of the discharge are likely to be minor.
b Time co-incidence problems
Both the modelled and the background concentrations vary with time of day. In most cases the peak due to a point source emission does not occur at the same time as the background peak, which in many parts of New Zealand occurs during rush-hour traffic or, where wintertime domestic burning is carried out, under the inversion layers that form overnight. High background concentrations therefore almost always occur in calm to light-wind conditions, when plumes from point sources may not reach the ground. Point source peaks, on the other hand, usually occur:
a) in highly unstable daytime conditions
b) in stable, light-wind night-time conditions, or
c) during the transition from night to morning, when fumigation may occur.
c Peak vs average
Should modelled peaks be added to measured peaks? Or averages to averages? Or peaks to averages? Each can give very different results. The most sensible approach is to add a peak (or 99.9 percentile) modelled result to an average background, since it is highly unlikely that the peaks in the two cases will ever be co-incident. However, if the peak background concentrations do occur under the same conditions as the peak concentrations from the discharge then the two peaks should be added together.
A study on how to add peak predicted concentrations to background values was recently undertaken by the UK Environment Agency (Environment Agency, 2000). The study concluded that simply adding peak model concentrations to background concentrations can result in severe overestimation of the source contribution, and that a more realistic method is to add twice the annual mean background concentration to the peak (or 99.9th percentile) modelled concentration. This method has not been reviewed or trialled in New Zealand, and it is not possible to comment on its relevance to the New Zealand situation.
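The two approaches can be set side by side in a simple calculation. All concentration values below are invented for illustration; they are not drawn from any real assessment, and the second method is shown only as described in the UK study, without any endorsement of its applicability to New Zealand.

```python
# Illustrative values only (micrograms/m3); not from any real assessment.
peak_modelled = 180.0          # 99.9th percentile modelled concentration
peak_background = 60.0         # peak monitored background
annual_mean_background = 15.0  # annual mean monitored background

# Most conservative: add the two peaks. Appropriate only if the peaks
# occur under the same meteorological conditions.
cumulative_peaks = peak_modelled + peak_background

# UK Environment Agency (2000) approach: add twice the annual mean
# background to the peak (99.9th percentile) modelled concentration.
cumulative_ea = peak_modelled + 2 * annual_mean_background

print(cumulative_peaks, cumulative_ea)  # 240.0 210.0
```

The gap between the two results (240 versus 210 in this made-up case) shows why the choice of method needs to be stated and justified in the assessment.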
When assessing the cumulative effects, use available background concentrations and account for the:
a) spatial co-incidence
b) time co-incidence
c) peak versus average concentrations
d) issues that may exist between the modelled and monitored (or estimated) background concentrations.
6.5 Assessment of effects
The final part in the process of deciding whether the model uncertainty is acceptable is to use the modelling result to assess some effect of the contaminant on people or the environment. Even when a lot is known about the effects, there are large uncertainties in the actual individual effect. Formaldehyde is a good example: some people are sensitive to quite low values, whereas others can easily tolerate concentrations 10 to 100 times higher. Which value should be chosen?
Before undertaking modelling and preparing an assessment of effects, consult the relevant environmental authority and check out the Good Practice Guide for Assessing Discharges to Air (currently under development by the Ministry) to determine:
a) the contaminants of greatest concern
b) the potential adverse effects that need to be assessed
c) the sensitivity of the receiving environment
d) the assessment criteria that will be used to assess the modelling results.
6.5.1 Evaluation criteria
There are a number of ways to assess the environmental and health effects of discharges to air once modelling results are available. The first step is evaluation against the national environmental standards. More information on how to do this will be included in the Good Practice Guide for Assessing Discharges to Air (currently under development by the Ministry). This new guide will cover:
- information required to undertake an assessment
- guidance on the level of assessment required depending on the scale and significance of the discharge
- guidance on when modelling is required
- interpretation of results against national environmental standards
- recommended evaluation criteria for pollutants not covered in the national environmental standards
- guidance on when a full health risk assessment is required.
6.6 Unresolved issues
Despite the vast amount of research that has been conducted on dispersion modelling and the fact that it is used hundreds of times a day all over the world, there are several issues that remain essentially unresolved. These include issues relating to missing data, calms, extreme weather, trends and synergistic effects.
6.6.1 Missing data
There are often missing data periods, in both emissions and meteorological data sets. Since most models will not tolerate missing data, various techniques are used to fill these holes simply to get the model to run. But what if a critical period is missing, such as the peak emission rate or a particularly awkward spell of weather? When this is noticed it can be accounted for in some way, but often it will not be noticed at all.
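One simple precaution is to locate the gaps explicitly before filling them, so that any gap coinciding with a critical period can be spotted. The following Python sketch uses an invented hourly wind-speed record; the linear-interpolation fill shown is only one common technique, and it assumes valid values exist on both sides of each gap.

```python
# Hypothetical hourly wind-speed record (m/s) with gaps marked as None.
wind = [2.1, 2.3, None, None, 1.8, 2.0, None, 2.2]

# Locate contiguous missing periods so they can be reviewed rather than
# silently filled before the model run.
gaps, start = [], None
for i, v in enumerate(wind):
    if v is None and start is None:
        start = i
    elif v is not None and start is not None:
        gaps.append((start, i - 1))
        start = None
if start is not None:
    gaps.append((start, len(wind) - 1))

print(gaps)  # [(2, 3), (6, 6)]

# A simple linear-interpolation fill. Before filling, check whether any
# gap coincides with a critical emission or weather period.
filled = wind[:]
for s, e in gaps:
    lo, hi = filled[s - 1], filled[e + 1]
    for k in range(s, e + 1):
        filled[k] = lo + (hi - lo) * (k - s + 1) / (e - s + 2)
```

Listing the gaps first, rather than letting a pre-processor fill them invisibly, is the practical form of the 'reality check' recommended below.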
Carefully review the model and all of the input data for potential occurrences of missing data.
This is the modelling 'reality check', and its value should not be underestimated.
6.6.2 Calms
When the wind speed drops below about 0.5 m/s the wind direction becomes undefined, and the plume can end up going anywhere, or simply pooling. Unfortunately, these are exactly the circumstances that can lead to the highest ground-level concentrations, yet they cause the steady-state Gaussian equations to fail completely (wind speed appears in the denominator of the equations, and cannot be zero). To handle this, the model forces a minimum wind speed of typically 0.5 m/s (it used to be 1 m/s, and in future it may be less). Puff models are a little better and in theory allow for very light winds: under these conditions the puffs are able to diffuse and grow without being advected. Fortunately, for most locations and most discharges, this is a rare circumstance.
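The effect of the wind-speed floor can be seen directly in the steady-state Gaussian ground-level centreline equation, C = Q/(π u σy σz) · exp(−H²/2σz²). The sketch below is illustrative only: the input values are invented, the function name is not from any real model, and real models use stability-dependent dispersion curves and further terms.

```python
import math

def glc_centreline(Q, u, sigma_y, sigma_z, H, u_min=0.5):
    """Ground-level centreline concentration from the steady-state Gaussian
    plume equation, with a minimum wind-speed floor so the u in the
    denominator can never be zero. Units: g/s, m/s, m -> g/m3."""
    u = max(u, u_min)  # models typically force u >= ~0.5 m/s in calms
    return (Q / (math.pi * u * sigma_y * sigma_z)) * math.exp(-H**2 / (2 * sigma_z**2))

# In a reported calm (u = 0), the floor keeps the result finite but large.
calm = glc_centreline(Q=10.0, u=0.0, sigma_y=50.0, sigma_z=25.0, H=40.0)
windy = glc_centreline(Q=10.0, u=5.0, sigma_y=50.0, sigma_z=25.0, H=40.0)
print(calm > windy)  # True: the forced light wind gives the higher value
```

Because concentration scales with 1/u, the choice of floor (0.5 m/s versus 1 m/s) directly changes the predicted worst-case concentration in calms, which is why this assumption should be reported with the results.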
If calms are identified as a potential concern, a more complete risk analysis should be completed. This analysis should at least consider the frequency and the potential consequences of calm conditions.
6.6.3 Extreme weather
With a one-year meteorological data set it is entirely conceivable that the worst-case meteorological conditions are not identified, and thus not modelled. This is a common criticism of modelling, and in many cases needs to be addressed with a specific study on the representativeness of the period of data used. This is done by comparing some statistics of the modelling data set, such as average wind speed, with those from the closest long-term climate station in order to assess the representativeness of that particular period.
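Such a representativeness check can be as simple as comparing summary statistics between the modelling year and the long-term record. The figures below are invented seasonal mean wind speeds, purely to illustrate the comparison; any real check should use the actual station records.

```python
from statistics import mean

# Hypothetical seasonal mean wind speeds (m/s): the single modelled year
# versus a long-term record from the nearest climate station.
modelled_year = [3.2, 3.4, 2.9, 3.1]  # modelling year
long_term = [3.3, 3.6, 3.1, 3.4]      # e.g. 10-year record

bias = mean(modelled_year) - mean(long_term)
pct = 100 * bias / mean(long_term)
print(round(pct, 1))  # -6.0: the modelled year was calmer than usual
```

A modelled year that is systematically calmer (or windier) than the long-term record is a warning that the worst-case conditions may be under- or over-represented in the results.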
The potential effects of extreme weather on pollutant dispersion should be identified.
The meteorological data set that is being used should be checked to ensure it contains conditions that allow for the effects of extreme weather to be assessed.
6.6.4 Trends
Similar comparisons to those outlined above should be made for long-term trends, such as climate change, changing land-use patterns, new buildings, or even drifts in emission rates, that could potentially alter the modelling results.
Modelling studies should identify and address any long-term trends that may affect the conclusions of that particular study (e.g. increasing background levels over time).
6.6.5 Synergistic effects
It is well known that some contaminants have worse effects in the presence of others than they do on their own. This is a very specialised subject, and not addressed by any current dispersion models, nor most common guidelines.