Offshore Wind Ornithological Impact Assessment - Review of Digital Aerial Survey Methods
Published January 2023: Version without Confidential Annexes
1. Glossary
Term | Definition
---|---
Baseline surveys | Pre-application surveys of proposed offshore windfarm sites to gather data on the numbers, distribution and behaviours of birds on the water and in the air. This information is used to inform impact assessments (EIA and HRA). May also refer to strategic surveys to help inform site selection during a plan-led process.
Detection rates | The proportion (%) of objects (e.g. birds) present in the area covered by DAS (see below) that are subsequently recognised as objects in the imagery captured during the survey.
Digital Aerial Survey(s) (DAS) | Surveys carried out from an aircraft capturing either still or video imagery of the sea surface, and a section of the airspace between the sea surface and the aircraft. DAS for offshore wind farm environmental assessments and monitoring typically sample 10-20% of the relevant sea surface area.
Environmental Impact Assessment (EIA) and Environmental Impact Assessment Report (EIAR) | An EIA is a tool used to assess the significant effects of a project or development proposal on the environment. EIAs make sure that developers think about the likely effects on the environment at the earliest possible time and aim to avoid, reduce or offset those effects. An EIAR is a report of the impacts (positive or negative) which the proposed project, if carried out, would have on the environment. It is prepared by the developer to inform the EIA process.
Ground sampling distance (GSD) | The GSD is the distance between two consecutive pixel centres measured on the ground. The bigger the value of the image GSD, the lower the spatial resolution of the image and the less visible the details.
Habitat Regulations Appraisal (HRA) | HRA is an assessment process required under the Habitats Regulations to consider the impacts of plans or projects on European sites.
Identification rates | The proportion (%) of objects detected in the survey imagery (see Detection rates, above) that can be identified with a defined level of confidence to a given level. This can be to species (e.g. herring gull or common guillemot), species group (e.g. large gull or auk), or broader grouping (e.g. unidentified bird).
NatureScot Scientific Advisory Committee (SAC) | The NatureScot SAC advises the NatureScot Board and staff on scientific and technical matters, reviews the quality of our research, advises on the scientific basis for the notification of any Site of Special Scientific Interest (SSSI) in Scotland, and contributes specialist knowledge and wider advice via working groups and individually.
Pre- and post-construction monitoring | Pre- and post-construction monitoring occurs after projects are consented (see also Baseline surveys). Specifically, DAS may be used to gather data on the number and distribution of birds and marine mammals before and after construction occurs, in order to assess potential impacts from a project.
Section 36 | Section 36 of the Electricity Act 1989 (“the 1989 Act”) applies to proposals for any offshore generating station whose capacity exceeds 1 MW within Scottish territorial waters or the Scottish Renewable Energy Zone (REZ). Offshore generating stations also require a marine licence under the Marine (Scotland) Act 2010 (between 0 and 12 nm) or under the Marine and Coastal Access Act 2009 (between 12 and 200 nm).
Spatial auto-correlation | Spatial auto-correlation refers to the measure of association between observations of variables that are close to each other in space. Spatial auto-correlation means that the samples may not be statistically independent – see Pseudoreplication.
Pseudoreplication | Pseudoreplication occurs when observations are not statistically independent, but are treated as if they are. This can occur when there are multiple observations on the same subjects, when samples are nested or hierarchically organised, or when measurements are correlated in time or space.
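For reference, the GSD defined above follows the standard photogrammetric relation between sensor pixel pitch, flying height and lens focal length. A minimal sketch with illustrative values (not taken from either provider's equipment):

```python
def ground_sampling_distance(pixel_pitch_m: float, altitude_m: float,
                             focal_length_m: float) -> float:
    """Nadir GSD (metres per pixel) from the standard relation:
    GSD = sensor pixel pitch x flying height / focal length."""
    return pixel_pitch_m * altitude_m / focal_length_m

# Illustrative values only: 4.6 micron pixel pitch, 100 mm lens,
# 450 m survey altitude.
gsd = ground_sampling_distance(4.6e-6, 450.0, 0.100)
print(f"GSD = {gsd * 100:.2f} cm")  # -> GSD = 2.07 cm
```

The same relation shows why altitude, camera and lens choices trade off directly against image resolution, a theme that recurs throughout this review.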
2. Acknowledgments
The sub-group is extremely grateful to APEM and HiDef for providing their time and input into this review process.
3. Summary and recommendations
A sub-group of NatureScot’s Scientific Advisory Committee (SAC) was established to undertake a focused and time-limited review of Digital Aerial Surveys (DAS) methodologies as applied to impact assessment and monitoring for marine birds at offshore windfarms. The sub-group was asked to focus on responses to a set of questions (see Annex 1 & Annex 2 – commercially sensitive in confidence) sent by NatureScot and Marine Scotland Science (MSS) to the two commercial DAS providers currently operating in the UK. This led to an additional set of questions (see Section 6), which were the focus of presentations and discussion at a workshop convened by the sub-group and attended by both providers (Annex 4 – commercially sensitive in confidence).
The review focussed in particular on survey design and survey altitudes and implications for species detection, risk of incidental disturbance, and comparability of baseline, pre- and post-construction monitoring surveys. Comments from the sub-group on both sets of questions can be found in Sections 5 & 6 and associated recommendations are summarised below.
It is important to note, however, that this advice may need further review and updating as survey methodologies and technologies develop.
Recommendations:
1. Survey reports should provide specific quantification and detail surrounding the design and analysis of individual surveys to ensure that NatureScot and MS-LOT are able to review the findings fully.
2. Thought should be given to the provision of a statistical analysis plan (SAP) as part of the survey submission.
3. All survey reports should explicitly provide the analysis of the data showing whether spatial auto-correlation is present or not. If it is present, the providers should include details of how such autocorrelation is accounted for within subsequent analysis.
4. Without evidenced demonstration, including formal statistical analysis indicating that two systems provide comparable data, results from surveys with different technologies or resolutions ideally should not be mixed.
5. Providers should be explicit in which technologies are used and when changes are made. MS-LOT and NatureScot should be updated on any changes to survey equipment or survey design.
6. Providers should, where appropriate, give consideration to experimentally demonstrating comparability between different kits and platforms. For example, through the use of twin deployments.
7. In addition to the details currently provided with respect to survey methodologies, providers should also include detailed information on the following:
- camera / rig specifications,
- resolution (GSD),
- plane altitude and how this impacts the volume of air surveyed (which will change with altitude) and, therefore, calculations of collision risk, and
- statistical methods, e.g. those used for the estimation of densities and populations.
8. An independent validation to verify and quantify detection and identification rates would add confidence to the survey methods and outputs.
9. A clear definition of detection and identification rates needs to be explicitly stated in all survey reports.
10. Following from 9), the methods of estimating abundance, detection and identification rates and their uncertainties need to be clearly stated. This is required to ensure that any comparison of surveys can be undertaken.
11. Any assumptions underpinning the calculations for detection and identification rates should be stated, including potential biases.
12. Clear justification should be provided for the method of apportioning birds that have only been identified to species groups (e.g. auks, terns) to individual species. Potential associated biases should be clearly described.
13. Clear statements supporting the survey altitude choice should be given, and the evidence supporting the choice provided quantitatively.
14. The statistical principles of survey design would typically be expressed in terms of specific objectives (including power and achieving specified levels of confidence), and these should be included in any report.
15. Quantitative evidence should be provided of the extent of disturbance effects or lack thereof in the form of changes to key behaviours (flight take-off rates, changes in flight direction, changes in flight height, diving rates, rates of freezing on sea surface), during surveys in comparison to appropriate controls.
16. DAS providers should make clear any assumptions being made in their assessment of disturbance.
17. As more DAS suppliers enter the market, NatureScot and/or Marine Scotland should consider creating a generic set of operating standards which suppliers would be required to comply with.
4. Background
4.1 Use of digital aerial surveys for offshore wind farm ornithological assessments and monitoring
Offshore renewables have the potential to make a significant contribution to the Scottish Government’s targets to generate 50% of Scotland's overall energy consumption from renewable sources by 2030 and to achieve net zero by 2045. The offshore wind industry is set to expand substantially in Scotland over the next decade and beyond as the Scottish Government strives to meet clean energy and climate change targets and support a green economic recovery.
Consenting processes for proposed offshore wind developments include Habitat Regulations Appraisal (HRA) and Environmental Impact Assessment (EIA). These assessments require quantitative assessments of impacts on a wide range of receptors, including marine birds, at both protected site and wider population levels.
Assessment and monitoring of these impacts requires robust baseline and pre- and post- construction data on the spatial distributions of birds within the proposed development areas and surrounding buffer zones. For various reasons, including increasing distance from shore of the proposed wind farm sites, digital aerial surveys (DAS) have become the preferred method for such data collection.
There are currently only two commercial providers of these DAS regularly operating in the UK (APEM Limited and HiDef Aerial Surveying Limited – shortened hereafter to APEM and HiDef). The companies use different, and evolving, technologies for data capture and analysis. There are numerous aspects of DAS that may impact the quality and applicability of the results. These include: camera technologies, including use of still or video methods; survey altitudes; sample design (e.g. grid or transect and percentage area sampled); post-survey image identification; and data analyses (interpolation and/or modelling). The two companies use approaches that differ in all of these aspects. Due to commercial sensitivities between the two providers, aspects of this report are commercially sensitive - in confidence, to respect each provider’s proprietary interest in their technology and analytical techniques.
4.2 Drivers for this review process
Under Section 36 of the Electricity Act 1989, prospective developers of offshore wind farms (OWFs) are required to submit an application with an accompanying Environmental Impact Assessment Report (EIAR) to the Marine Scotland Licencing and Operations Team (MS-LOT). To support the information provided with an application, baseline surveys are required to help inform the various tools and methods used to predict impacts. MS-LOT, acting on behalf of Scottish Ministers, will determine all applications after formally consulting on applications received. NatureScot is a statutory consultee and both RSPB and Marine Scotland Science (MSS) are consultees, or advisors, to this process.
In March 2021, NatureScot and MSS marine energy and ornithology advisers sent a set of technical questions (see Annex 1) to both DAS providers (APEM and HiDef). The intention was to obtain additional information on the survey methods adopted by the providers to assist NatureScot and MSS in:
a) assessing the DAS results being submitted in support of consent applications and to support pre- and post-construction monitoring requirements; and consequently,
b) providing robust associated advice to developers and MS-LOT.
These questions focussed in particular on survey design and survey altitudes and implications for species detection, risk of incidental disturbance, and comparability of baseline, pre- and post- construction monitoring surveys.
Both providers submitted short written responses to these questions in April 2021. However, these responses were felt to lack detail and supporting evidence on some key aspects. As such, they did not fully address the underlying purpose of providing confidence to NatureScot and MSS in the robustness of survey outputs. It was recognised by NatureScot and MSS that independent scientific review of the responses, including further engagement with the providers, would be helpful. Given the high level of commercial sensitivities involved, it was determined that the most appropriate approach was to establish a sub-group of NatureScot’s Scientific Advisory Committee to undertake a focused and time-limited review.
4.3 Review Process
The sub-group was formed in November 2021, with the Terms of Reference (Annex 3) finalised and agreed shortly afterwards. The sub-group members are:
- Professor Marian Scott (University of Glasgow - SAC member and chair)
- Dr Ruth Mitchell (The James Hutton Institute and SAC member)
- Professor Rob Marrs (University of Liverpool Emeritus Prof. – SAC expert panel)
- Dr Francis Daunt (UKCEH – specialist adviser)
- Dr Aly McCluskie (RSPB – specialist adviser)
- Dr Tom Evans (Marine Scotland Science – specialist adviser)
- Dr Kate Thompson (NatureScot – specialist adviser)
- Dr Chris Eastham (NatureScot – secretariat)
The sub-group were tasked with the review of the DAS providers’ responses to the original questions. Specifically, the sub-group were asked to consider and address the following questions:
- Have each of the eleven questions posed been fully answered?
- Have any potential adverse impacts of the survey methods been adequately considered (questions 4, 5 and 6)?
- Is the information and evidence provided to support the responses adequate, accurate and relevant?
- Is any further evidence required to support evaluation of the responses?
- Are there any key aspects of the methodologies that NatureScot and/or MSS should be informing the providers they need to address, in order to provide full confidence in quality and applicability of outputs to consenting and monitoring requirements for offshore windfarms?
The sub-group were also invited to provide any additional associated advice to assist NatureScot and MSS in the evaluation of future DAS results, which will inform guidance and advice to the sector.
4.4 Timeline
The process agreed included the requirement for several virtual meetings of the sub-group. An introductory meeting on 21st January 2022 was followed by a meeting on 4th February 2022 to review responses to the original questions and agree next steps.
A virtual workshop, attended separately by representatives of both DAS providers, was held on 11th March 2022. Prior to this workshop the sub-group sent a list of additional questions (see Section 6) to both DAS providers. At the workshop, each provider was asked to provide a 30 minute presentation, focused on these additional questions, followed by an hour for questions and answers. Representatives attended from each company separately.
Following the workshop, two additional meetings on 13th April 2022 and 6th May 2022 were held for the sub-group to review comments and finalise the report. Also following the workshop, HiDef provided a white paper on the comparison of the performance of the HiDef GEN 2.5 with the GEN 2 rigs. However, this white paper was not considered as it fell outside the time period of this review.
This report provides the results of the sub-group’s technical review of the information provided by the DAS providers, and provides advice on the implications of the review for evaluation of future DAS results and for implementation of new DAS methodologies. It is important to note, however, that this advice may need to be updated as survey methodologies and technologies further develop. The full report will be provided to NatureScot and MSS and the main text, excluding the confidential Annexes, will be shared with the DAS providers. No recommendations will be made to the NatureScot Board.
5. Sub-group initial review
The sub-group undertook a preliminary review of the sets of responses from the DAS providers to the original questions posed by NatureScot and MSS. This initial review considered four of the questions posed to the sub-group (see section 4.3 - Review process). The sub-group also reflected that in some instances the original questions posed were phrased too generally and could have been more specific in detail (e.g. making clearer that NatureScot and MSS wished to understand potential sources of bias and the basis for quantification of detection and identification rates). The sub-group considered that this might in part explain some of the lack of detail/evidence provided. Given the lack of detail in the responses to these initial questions, the sub-group composed additional, more focused questions to the DAS providers (see Section 6) to be addressed in the workshop presentations.
The conclusions of this initial review are summarised as follows:
1. Have each of the eleven questions posed [by NatureScot and MSS to the DAS providers] been fully answered?
- In general the sub-group consider that some answers from the DAS providers were incomplete and lacked sufficient detail and supporting evidence. For example, questions around potential disturbance of birds (questions 4, 5 & 6), reasons for choice of different survey methods (questions 3 & 11), validation of detection and species identification rates reported (question 7), and handling of auto-correlation (question 11) were considered to be incomplete.
- Evidence provided was often anecdotal and qualitative.
2. Have any potential adverse impacts of the survey methods been adequately considered (questions 4, 5 and 6)?
- The sub-group were concerned about the lack of evidence around disturbance due to surveys, which was the main adverse impact considered. There was considered to be a lack of clarity on how disturbance is monitored and recorded: it was also unclear how data on disturbance are analysed. For example, it was felt that there was a lack of quantitative evidence including the assessment of whether key metrics, such as proportion of birds in flight, were different in the survey area to a control group of birds outside the survey area.
- Different behaviours resulting from disturbance were not addressed, in particular freezing, which is a common response of birds when a threat is a certain distance from them (but not so close as to induce a flight response). This could lead to an inflated proportion of individuals of a diving species on the sea surface compared to that seen with undisturbed birds, which in turn, would affect the accuracy of the estimates of number of birds having accounted for availability bias.
- As a result, although the welfare aspect of bird disturbance was considered in the providers’ original responses, the implications of disturbance on survey results was in the view of the sub-group not fully addressed.
- A further source of potential disturbance not considered in detail was the potential for disturbance of birds in neighbouring transects or sampling points and resulting potential for introduction of bias to the survey results.
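The kind of quantitative comparison the sub-group calls for here, e.g. the proportion of birds recorded in flight inside the surveyed swath versus a control area, could be assessed with a standard two-proportion z-test. A minimal sketch with hypothetical counts (the function and figures are illustrative, not any provider's method):

```python
import math

def two_proportion_z(x1: int, n1: int, x2: int, n2: int):
    """Two-sided z-test for a difference in two proportions, e.g. the
    proportion of birds in flight in the survey area (x1 of n1) versus
    a control area (x2 of n2). Returns (z statistic, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p

# Hypothetical counts: 120/400 birds in flight under the aircraft
# versus 90/400 in an undisturbed control.
z, p = two_proportion_z(120, 400, 90, 400)
```

A small p-value here would indicate a disturbance signal; the point is that such a comparison requires counts from an appropriate control, which is exactly what the sub-group found lacking.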
3. Is the information and evidence provided to support the responses adequate, accurate and relevant?
Although relevant, the sub-group considered that overall the responses were not adequate. The sub-group were not able to assess the accuracy of the responses as there is no independent evidence base against which to do so.
4. Is any further evidence required to support evaluation of the responses?
It was concluded that further evidence was required. Consequently, as described under section 4.3, both DAS providers were sent additional questions (see Section 6) to be addressed in the presentations at the workshop on 11th March 2022.
The final question posed to the sub-group (see section 4.3) is addressed through the final recommendations (see section 3).
6. Review of DAS provider responses to additional questions
Following the workshop on 11th March 2022, the sub-group reviewed the additional information provided by the DAS providers in both their presentations and subsequent discussions. The focus was on the responses to the further questions posed to the DAS providers. These are considered in turn, below.
1) With respect to potential disturbance of both target and non-target species, please provide details of the data and evidence that support the conclusions in your responses to original questions 4 to 6 with respect to impacts (or lack thereof) of plane presence on bird behaviours and how these may vary with survey altitude.
Please include consideration of both birds on the sea surface and in flight and specifically address the following points:
- If and how is the potential for birds to dive underwater due to disturbance assessed and what is the control against which such an assessment is made?
- What evidence is there for how birds respond to aircraft shadow and whether this varies by time of day/season (depending on light conditions)?
- Whether disturbance from the aircraft could change the ratio of birds in the air to those on the water (in either current or neighbouring survey transects) and the associated potential to affect calculations dependent on this ratio (collision risk modelling).
Sub-group comments
Both providers presented additional information in their presentations on these specific points. However, the sub-group still considered that:
- While recognising that there are fundamental challenges in quantifying disturbance, and that both providers provided additional information, this information was still limited and in some parts anecdotal.
- In the evidence provided, there were assumptions made regarding which behaviours are classed as disturbance behaviours, and whether these are species specific. The sub-group considers that an explicit statement and recognition of the assumptions is an important aspect of the evidence required.
Recommendations:
- Quantitative evidence should be provided of the extent of disturbance effects or lack thereof in the form of changes to key behaviours (flight take-off rates, changes in flight direction, changes in flight height, diving rates, rates of freezing on sea surface), during surveys in comparison to appropriate controls.
- DAS providers should make clear any assumptions being made in their assessment of disturbance.
2) Following on from the above question, and relating to your responses to original questions 2, 3 and 7, please describe the data used to justify your preferred flight height, considering both disturbance and other potential constraints (such as object detection and identification rates, survey height restrictions over built offshore windfarms etc.) Please include consideration of any trade-offs you have had to make where the optimal flight height differs between different factors.
Sub-group comments
Responses to the original questions 2, 3 and 7 gave an explanation of how the survey operators choose preferred survey altitudes. It was noted that this choice may be prescribed by the client. The additional information provided in the presentations supplemented the responses to the original questions, and allowed the sub-group to reflect on survey comparability, specifically as operating characteristics change and as new technology is deployed (which is addressed in question 4 below). However, survey altitude and lens selection may also affect detection and identification rates, which were the subject of a separate question (3), addressed below. The sub-group noted there were some differing views expressed by the two providers in terms of survey altitude and disturbance of birds.
Recommendations:
- Clear statements supporting the survey altitude choice should be given, and the evidence supporting the choice provided quantitatively.
- The statistical principles of survey design would typically be expressed in terms of specific objectives (including power and achieving specified levels of confidence), and these should be included in any report.
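The power element of the second recommendation can be illustrated with a simple Monte Carlo sketch. The normal model, transect counts and effect sizes below are hypothetical and are not drawn from any provider's data or methods:

```python
import random
import statistics

def simulated_power(n_transects: int, mean_a: float, mean_b: float,
                    sd: float, n_sims: int = 2000, seed: int = 1) -> float:
    """Monte Carlo power of a two-sample comparison of mean transect
    densities under an illustrative normal model: the proportion of
    simulated surveys in which a true difference is detected at ~5%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(mean_a, sd) for _ in range(n_transects)]
        b = [rng.gauss(mean_b, sd) for _ in range(n_transects)]
        se = ((statistics.variance(a) + statistics.variance(b))
              / n_transects) ** 0.5
        # 1.96 critical value; adequate for moderate n_transects
        if abs(statistics.mean(a) - statistics.mean(b)) > 1.96 * se:
            hits += 1
    return hits / n_sims
```

Stating survey objectives in this form ("with n transects we have X% power to detect a change of Y birds/km²") is what would allow NatureScot and MS-LOT to judge whether a design is adequate.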
3) Please provide the data and evidence to support your stated:
a) detection rates; and
b) identification rates
Please also provide associated confidence intervals or measures of precision for these rates.
Please provide details here for both your GEN 2.0 and GEN 2.5 rigs (HiDef only). Please provide details here for both 1.5 and 2.0 cm GSD surveys (APEM only).
Sub-group comments
The providers’ evidence around assumed object detection and identification rates was based primarily on their internal QA processes. These processes measure and check levels of agreement between trained analysts in both initial detection of objects within the digital images collected and subsequent identification of these objects (to species or higher levels). Both providers presented evidence of their procedures, however the sub-group felt that further information was needed around more general aspects of detection and identification.
The detection and identification processes can be broken down into different stages, each with its own potential errors and include:
a) Detection on camera of the object on/in the sea/air – the sub-group felt that evidence concerning this stage of the process was not presented. It was not clear to the sub-group from the evidence provided what the underpinning assumptions were in the calculation of detection rates, and indeed there was some potential confusion around the definition of the detection rates that were quoted since they appeared different for the two providers.
b) Detection by surveyor of objects on the digital image - this is captured through the providers’ QA process. The sub-group considered that both providers described the internal QA process clearly, but that the information could be provided more quantitatively, including false positive and false negative rates.
c) Identification process – the sub-group felt this was again captured by the providers’ QA processes, but differences in how identification rates are calculated, and associated measures of the levels of confidence attributed to final identification of objects, were not clear.
d) The sub-group were unclear concerning justification of the process of apportioning birds that have only been identified to species groups (e.g. auks, terns) to individual species for further analysis. Apportioning based on the ratio of birds identified to species may not be appropriate if there is variation in the ability to identify different species, neither would it be appropriate simply to exclude these records.
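Point d) can be made concrete with a minimal sketch of pro-rata apportioning. The species names and counts are hypothetical, and the function deliberately encodes the equal-identifiability assumption the sub-group asks providers to justify:

```python
def apportion(group_count: int, identified: dict) -> dict:
    """Apportion birds identified only to a species group (e.g. 'auk')
    to species, pro rata to the counts identified to species level.
    NOTE: this assumes all species in the group are equally
    identifiable; if one species is harder to identify, pro-rata
    apportioning biases its estimate downwards."""
    total = sum(identified.values())
    return {sp: n + group_count * n / total for sp, n in identified.items()}

# Hypothetical survey: 80 unidentified auks alongside 150 birds
# identified as common guillemot and 50 as razorbill.
est = apportion(80, {"common guillemot": 150, "razorbill": 50})
# -> {'common guillemot': 210.0, 'razorbill': 70.0}
```

If, say, razorbills were systematically harder to identify than guillemots, the 150:50 ratio would understate their true share and the apportioned totals would inherit that bias, which is why the sub-group asks for the associated biases to be described.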
Recommendations:
- An independent validation to verify and quantify detection and identification rates would add confidence to the survey methods and outputs.
- A clear definition of detection and identification rates needs to be explicitly stated in all survey reports.
- Following from the recommendation above, the methods of estimating abundance, detection and identification rates and their uncertainties need to be clearly stated. This is required to ensure that any comparison of surveys can be undertaken.
- Any assumptions underpinning the calculations for detection and identification rates should be stated, including potential biases.
- Clear justification should be provided for the method of apportioning birds that have only been identified to species groups (e.g. auks, terns) to individual species. Potential associated biases should be clearly described.
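One conventional way to attach the confidence intervals requested in question 3 to a detection or identification rate is the Wilson score interval. A minimal sketch with hypothetical validation counts (illustrative only, not either provider's QA procedure):

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96):
    """Wilson score interval for a proportion, e.g. a detection rate
    estimated from a validation sample of known objects."""
    p = successes / trials
    denom = 1 + z * z / trials
    centre = (p + z * z / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / trials
                                   + z * z / (4 * trials * trials))
    return centre - half, centre + half

# Hypothetical validation exercise: 180 of 200 known objects detected.
lo, hi = wilson_interval(180, 200)  # roughly (0.85, 0.93)
```

Reporting rates with an interval of this kind, rather than as a bare percentage, is what would allow surveys and providers to be compared on a like-for-like basis.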
4) As your image capture and data processing systems evolve, what processes and approaches do you use to assess comparability of results among surveys utilising different generations of your technologies and to address any associated potential issues?
Sub-group comments
Both providers recognise comparability will be important. Some data from the different systems used by the providers were given in the presentations. The sub-group noted that comparability will need to be formally evidenced (including as appropriate through designed experiments and statistical analysis).
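A twin deployment, in which two rigs survey the same transects simultaneously, could be summarised with a simple paired comparison. The per-transect counts below are hypothetical, and the normal-approximation interval is an illustrative sketch rather than a prescribed analysis:

```python
import math
import statistics

def paired_comparison(counts_a: list, counts_b: list):
    """Summarise a twin deployment: mean per-transect difference in
    counts between two rigs flown side by side, with an approximate
    95% confidence interval (normal approximation)."""
    diffs = [a - b for a, b in zip(counts_a, counts_b)]
    mean_d = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(len(diffs))
    return mean_d, (mean_d - 1.96 * se, mean_d + 1.96 * se)

# Hypothetical per-transect counts from two rigs on the same transects.
mean_d, ci = paired_comparison([42, 35, 51, 60, 38, 47],
                               [40, 37, 48, 59, 36, 45])
```

An interval that comfortably includes zero would be evidence of comparability; an interval excluding zero would indicate a systematic difference between the systems that must be accounted for before results are pooled.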
Recommendations:
- Without evidenced demonstration, including formal statistical analysis indicating that two systems provide comparable data, results from surveys with different technologies or resolutions ideally should not be mixed.
- Providers should be explicit in which technologies are used and when changes are made. MS-LOT and NatureScot should be updated on any changes to survey equipment or survey design.
- Providers should, where appropriate, give consideration to experimentally demonstrating comparability between different kits and platforms. For example, through the use of twin deployments.
- In addition to the details currently provided with respect to survey methodologies, providers should also include detailed information on the following:
a) camera / rig specifications,
b) resolution (GSD),
c) plane altitude and how this impacts the volume of air surveyed (which will change with altitude) and, therefore, calculations of collision risk, and
d) statistical methods, e.g. those used for the estimation of densities and populations.
5) Relating to your response to original question 11, please describe the data and statistical considerations used to justify your assessment of the pros and cons of flying surveys as grid or transect (see also responses to original question 6).
Sub-group comments
The two providers prefer different sampling techniques, either transect or grid, and provided some justification for their preferences (which may also be determined by their customers). The sub-group then sought clarification about the different methods of statistical analysis used to analyse the resulting data. One particular aspect that was addressed concerned the evaluation of any spatial auto-correlation. Both providers assured the sub-group that they did examine the survey results for autocorrelation but that using their preferred survey methodology spatial auto-correlation was not an issue. However, neither provider presented data and analysis to evidence this.
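The auto-correlation check discussed here is commonly quantified with Moran's I. A minimal pure-Python sketch (the sample layout, values and adjacency weights are illustrative, not a prescribed analysis):

```python
def morans_I(values: list, weights: list) -> float:
    """Moran's I for values observed at n locations, where
    weights[i][j] is the spatial weight between locations i and j
    (zero on the diagonal). I near 0 suggests no spatial
    auto-correlation; positive I suggests clustering, so samples may
    not be independent (see pseudoreplication)."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    w_sum = sum(sum(row) for row in weights)
    num = sum(weights[i][j] * dev[i] * dev[j]
              for i in range(n) for j in range(n))
    den = sum(d * d for d in dev)
    return (n / w_sum) * (num / den)

# Six hypothetical sample points along a transect; chain adjacency.
w = [[1 if abs(i - j) == 1 else 0 for j in range(6)] for i in range(6)]
clustered = morans_I([1, 1, 1, 5, 5, 5], w)  # -> 0.6 (clustered)
```

Presenting this kind of statistic (with an associated significance test) in survey reports, rather than an assurance alone, is what the recommendation below asks for.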
Recommendation
All survey reports should explicitly provide the analysis of the data showing whether spatial auto-correlation is present or not. If it is present, the providers should include details of how such autocorrelation is accounted for within subsequent analysis.
6) When planning specific surveys, please describe the considerations and criteria that inform your approach to the main elements and stages. Please specifically address:
a) survey design (sampling strategy);
b) data processing to derive georeferenced data sets for objects detected and identified to varying levels of precision; and,
c) choice of analytical methods to derive associated population estimates and distribution maps across the wider target survey area, including how any auto-correlation (temporal and spatial) is taken into account.
Please clearly specify and describe the biases that may arise at each of these three stages and how these may interact.
Sub-group comments:
These specific aspects have been addressed in the comments to the previous questions. It was noted that both providers generally use the common software package MRSea, but there are instances when this may not be appropriate and other approaches are used.
Recommendations:
- Survey reports should provide specific quantification and detail surrounding the design and analysis of individual surveys to ensure that NatureScot and MS-LOT are able to review the findings fully.
- Thought should be given to the provision of a statistical analysis plan (SAP) as part of the survey submission.
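For context on the design-based side of question 6c, a design-based abundance estimate simply scales the sampled count by the coverage fraction (the glossary notes that DAS typically sample 10-20% of the relevant area). The sketch below, using hypothetical numbers, shows this ratio estimator with a transect-level bootstrap interval; it is not the model-based MRSea approach referred to above.

```python
import random

def design_based_abundance(transect_counts, covered_km2, total_km2):
    """Scale the total sampled count by the fraction of area surveyed."""
    coverage = covered_km2 / total_km2
    return sum(transect_counts) / coverage

def bootstrap_ci(transect_counts, covered_km2, total_km2,
                 n_boot=1000, seed=0):
    """Percentile 95% CI by resampling transects with replacement."""
    rng = random.Random(seed)
    n = len(transect_counts)
    reps = sorted(
        design_based_abundance(
            [rng.choice(transect_counts) for _ in range(n)],
            covered_km2, total_km2)
        for _ in range(n_boot))
    return reps[int(0.025 * n_boot)], reps[int(0.975 * n_boot)]
```

Note that resampling transects treats them as independent; if spatial autocorrelation is present, this interval will be too narrow, which is one reason the reports should quantify autocorrelation explicitly.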
7. Additional technical points
In addition to the responses to the specific questions, the sub-group also noted a small number of additional technical points that could be expanded for further consideration in subsequent reviews. These include:
- the inter-relationship between survey altitude, sensing equipment and GSD;
- the interplay between survey altitude, choice of grid or transect design and the volume of airspace sampled above the sea surface;
- that higher survey altitudes reduce the number of potential flying days (because of cloud bases);
- that DAS survey altitudes are typically too high to enable simultaneous visual observation of bird responses from the same platform;
- that both providers currently rely on manual processing to both detect and identify objects in images (looking forward, this is an area that would lend itself to automation using image-processing techniques); and
- that data analysis of DAS may be done by third parties, and that, while the MRSea methodology is widely used, there is again potential for the development of new analytical methods.
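The first two bullets can be made concrete with the standard photogrammetric relations: for a fixed sensor and lens, both the GSD and the frame footprint scale linearly with altitude. The sketch below uses hypothetical camera parameters for illustration.

```python
def gsd_cm(altitude_m, focal_length_mm, pixel_pitch_um):
    """Ground sampling distance (cm per pixel) for a nadir-pointing camera.

    Standard relation: GSD = pixel pitch x altitude / focal length.
    """
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_length_mm * 1e-3) * 100.0

def swath_width_m(altitude_m, focal_length_mm, sensor_width_mm):
    """Across-track footprint of a single frame on the sea surface."""
    return altitude_m * sensor_width_mm / focal_length_mm
```

So doubling the survey altitude doubles the GSD (halving spatial resolution) while also doubling the swath width, which is the trade-off underlying the altitude, equipment, and sampled-airspace points above.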
Annex 1: First set of questions to digital aerial survey providers from NatureScot and Marine Scotland Science
1. Have you updated any of your standard operating procedures/survey methods since 2018?
2. What altitudes are typically flown during your aerial surveys and what GSDs are you able to achieve at these heights?
3. What are your primary objectives for flying at different altitudes or collecting different GSD?
4. The literature suggests that no disturbance to birds occurs when aircraft fly above 450-500m (depending on reference, e.g. Thaxter et al. 2009; Thaxter et al. 2016; Perrow 2019). Can you provide any justification for a lower threshold minimum flight height where birds won’t be disturbed? Are you aware of any additional evidence for either no disturbance or disturbance (e.g. considering flushing, changes in flight behaviour, and diving) to birds in relation to plane altitude?
5. What species would you consider to be at most risk of disturbance from aircraft flying at ~400m asl?
6. How do you detect disturbance if it occurs over a greater distance than the transect width given the field of view from the camera?
7. What improvements to species detection rates and species identification would be achieved with higher resolution imagery?
8. How compatible will data be if collected at different altitudes, or different GSD (i.e. pre-construction and post construction)? Are there any statistical issues in comparing data collected using different survey methods?
9. If the lower altitude is below aviation limits for a built-out windfarm, how realistic is it that new technology (e.g. higher resolution cameras allowing equivalent surveys at a higher survey height) will be available in time for post-construction surveys? Our understanding is that aircraft should maintain a ~152m clearance from wind turbines. As new turbines are being designed with blade tip heights of 275m, this means the minimum (legal) flight height will be approx. 430m.
10. Disturbance issues notwithstanding, does the aircraft altitude affect either the survey effort or flight height calculations?
11. Could you describe the advantages and disadvantages to the use of survey transects vs grid? Does the optimal survey design vary depending on the species/species groups of most interest, and if so how? Do the apparent benefits/disadvantages of each method differ depending on whether design based or model based abundances are calculated (ensuring that spatio-temporal autocorrelation is accounted for to avoid issues with pseudoreplication)?
Annex 2: Commercially sensitive in confidence - Responses from APEM and HiDef to first set of questions
Annex 3: Terms of Reference
Review of digital aerial survey methodologies
SAC sub-group - Terms of Reference
13 December 2021
Background
Offshore renewables have the potential to make a significant contribution to the Scottish Government’s targets of generating 50% of Scotland's overall energy consumption from renewable sources by 2030 and reaching net zero by 2045. The offshore wind industry is set to expand substantially in Scotland over the next decade and beyond as the Scottish Government strives to meet clean energy and climate change targets and deliver a green economic recovery.
Consenting processes for proposed offshore wind developments include Habitat Regulation Appraisal (HRA) and Environmental Impact Assessment (EIA). These assessments require quantitative assessments of impacts on a wide range of receptors, including marine birds, at both protected site and ecosystem levels.
Assessment of these impacts requires robust baseline information on the spatial distributions of birds within and above proposed development areas and surrounding buffer zones. Due to the increasing distance from shore of proposed wind farm sites, digital aerial surveys (DAS) have become the preferred method of data collection.
There are currently only two commercial providers of these DAS operating in the UK. The companies use different, and evolving, technologies for data capture and analysis. There are numerous aspects of DAS that may impact quality and applicability of the results. These include: camera technologies including use of still or video methods; survey altitudes; sample design (e.g. grid or transect and percentage area sampled); post-survey image identification; and, data analyses (interpolation and or modelling).
In March 2021, NatureScot and Marine Scotland Science (MSS) sent a set of technical questions to both DAS providers. These focussed in particular on survey design and survey altitudes, and their implications for species detection, risk of incidental disturbance, and comparability of baseline and post-consent monitoring surveys.
The role of the sub-group
The role of the sub-group is to provide a technical review of the responses from the DAS providers to these questions, and provide advice on the implications of the review for future DAS methodologies. This advice will be provided to NatureScot and MSS staff, and will not involve making recommendations to the NatureScot Board.
Confidentiality
Given the commercial sensitivity regarding the two DAS providers, and high levels of sensitivity around NatureScot’s and MSS’s interpretation of the information when providing future advice to developers and regulators, all information provided for the review and the review report should be treated as confidential.
Membership
Marian Scott (chair - SAC member)
Ruth Mitchell (SAC member)
Rob Marrs (expert panel)
Francis Daunt (UKCEH – expert panel)
Aly McCluskie (RSPB – expert panel)
Tom Evans (Marine Scotland Science – specialist advisor)
Kate Thompson (NatureScot – specialist advisor)
Chris Eastham (NatureScot – secretary)
Meetings & timeline
A meeting in January will provide the context and details of the review. This will be followed by a second meeting in early February and then a workshop in mid-February to discuss the review findings. The workshop will form the basis of the report. Doodle polls will be issued for the meetings and workshop.
Timeline for completion: now to April 2022
Reporting
Following the workshop, the secretary will provide a draft report to the chair for initial comments. The draft report will then be sent to the members of the sub-group for comments. The final report will be provided to NatureScot and MSS.