This appendix provides an overview and best practices for conducting MSE. As with the other appendices, it is anticipated this overview will continue to be expanded and refined as part of Master Plan implementation so it can serve as an effective resource for managers and stakeholders.
Management Strategy Evaluation
The fisheries management cycle functions best when each of the components is chosen with the other components in mind. For many fisheries, the data collection protocol is designed with an understanding of the species’ biology and what data can be collected given the available resources. The stock assessment should provide indicators and reference points that can be used in the HCR, and the HCR should recommend regulations that are appropriate given biological constraints, management capacity, and objectives for the stock. To make these choices, it is necessary to consider the performance of the fisheries management cycle as a unit. Each component of the strategy should be chosen to maximize the likelihood of achieving management objectives given the current level of uncertainty, as well as the management agency’s capacity for governance. MSE has been successfully employed around the globe to aid managers in making decisions and achieving their goals. This appendix describes MSE and provides guidance on conducting an MSE.
What is Management Strategy Evaluation?
MSE is a simulation technique for evaluating the expected performance of management strategies prior to selection and implementation. The two main elements of an MSE are an operating model of the ecosystem and a management model of the management system. During an MSE, everything that is known about the fishery, including the population dynamics of the stock and the behavior of the fishing fleet, is simulated in the operating model. The management model incorporates the four components that make up a management strategy: data collection, stock assessment, harvest control, and the implementation of management measures to control fishing.
The operating and management models are separate but pass information back and forth during each simulated management cycle. The information passed between them is simulated data generated according to the fishery's actual data collection protocols. The simulated data are then analyzed by a stock assessment component, which produces an indicator. The indicator is passed to the HCR, which dictates a management action to be applied during the following simulated fishing season. The management measure is then passed from the management model back to the operating model, and the following fishing season is simulated with that management control in place. This process is repeated for a pre-specified number of management cycles (e.g., 50 years), and performance metrics such as fishery yield and population status are tracked to understand how the management strategy is likely to perform in the short and long term.
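As a rough illustration of this closed loop, the sketch below steps a hypothetical logistic stock through repeated cycles of data collection, assessment, and harvest control. The population model, survey error, and HCR are illustrative assumptions, not drawn from any particular fishery.

```python
import random

def run_mse(n_years=50, seed=1):
    """One closed-loop MSE run for a hypothetical logistic stock (sketch)."""
    rng = random.Random(seed)
    K, r = 1000.0, 0.3            # assumed carrying capacity and growth rate
    biomass, tac = 0.5 * K, 50.0  # assumed starting stock and initial TAC
    history = []
    for year in range(n_years):
        # Operating model: apply logistic growth and remove the catch.
        catch = min(tac, biomass)
        biomass += r * biomass * (1 - biomass / K) - catch
        # Data collection: survey index with lognormal observation error.
        index = biomass * rng.lognormvariate(0, 0.2)
        # Assessment: crude depletion indicator from the noisy index.
        indicator = index / K
        # Harvest control rule: scale the TAC down when the stock looks low.
        tac = 0.1 * K * min(1.0, indicator / 0.4)
        history.append((year, biomass, catch))
    return history

hist = run_mse()
```

Tracking biomass and catch each simulated year is what later allows performance metrics (yield, population status) to be computed for the whole run.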
The separation between the operating and management models is one of the strengths of MSE because it allows managers to test how well a management strategy performs when aspects of the ecological system are unknown or misunderstood. For example, MSE makes it possible to assess how management performance is affected when the value of a factor in the management model's assessment methodology differs from the value actually governing the population biology in the operating model. Another strength of MSE is that the process is repeated many times with randomly drawn parameter values to simulate the natural variation of the system, lack of knowledge about a particular biological process, or imperfect implementation of management measures. For these reasons MSE is widely considered the best way to quantify the impacts of uncertainty inherent in the system being managed, and to evaluate the trade-offs in the performance of alternative management strategies.
How does Management Strategy Evaluation differ from traditional (assessment-focused) management?
The traditional approach to providing fisheries management advice has involved conducting a stock assessment using all available information to estimate the status of the resource. Uncertainty in stock status was evaluated using confidence intervals, sensitivity tests, and projection models, in which a static management policy (such as a fixed harvest rate or quota) was used to assess the risk associated with that policy. MSE overcomes many of the shortcomings of this approach: it simulates data collection during each management cycle, and the management advice resulting from those data is fed back into the system and used to update the stock and fleet dynamics in the next time step (Walters and Martell 2004).
Best practices for Management Strategy Evaluation
While MSE is useful for creating adaptive management strategies, the analyses can be complex and require time and resources. In the past, significant quantitative expertise was required to build and run simulation models, though recent advances have made MSE faster, more affordable, and more accessible to a wider range of fisheries, including those with limited data. The behavior of the fishery must be modeled as accurately as possible, which usually requires gathering information from stakeholders, biologists, and managers who know the fishery best. This may require an iterative process to accurately and comprehensively characterize the fishery and its management goals, determine which performance metrics are most informative, interpret results, and evaluate tradeoffs.
This section discusses the steps required to conduct an MSE (Figure L1) and provides guidance on each step. This process should typically be re-applied every five years.
Step 1: Identify management objectives and develop quantitative performance metrics that reflect those objectives.
The first step of any MSE process is to identify the management goals and objectives of the fishery. This discussion should involve managers and stakeholders, and include biological, ecological, and socioeconomic objectives, because different user groups may have different goals. Once a suite of management objectives is agreed upon, quantitative performance metrics that reflect those objectives should be defined. This is a very important part of the MSE process because simulation models can track a large amount of information about the health of the stock and fishery yield for every management strategy and scenario tested. Performance metrics condense this vast amount of information into a manageable suite of meaningful metrics and provide a means for comparing potential harvest strategies directly against one another. However, translating generic, high-level policy goals and conceptual definitions of sustainability into concrete, quantifiable performance metrics can be difficult.
One method for translating goals into quantitative performance metrics is to ensure that for each management objective, three elements are defined: 1) the objective to be achieved; 2) a time frame for achieving the objective; and 3) an acceptable rate of failure for achieving the objective (also known as an acceptable risk level). For example, a high-level policy goal for a fishery may include maintaining sustainable stock levels. Unsustainable levels are usually defined as those where recruitment may be impaired. For rockfish along the west coast of North America, PFMC has defined this to be 25% of unfished biomass. Managers who are translating the goal of "maintaining sustainable stock levels" into a performance metric may decide that they want their management strategy to achieve biomass levels >25% of unfished biomass over a 50-year time period with 90% probability. This performance metric clearly defines the objective (biomass above 25% of unfished), the time frame (50 years), and the acceptable rate of failure (above the objective 90% of the time or more).
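To make this concrete, a metric like the example above (biomass above 25% of unfished biomass over 50 years with 90% probability) can be computed from simulated biomass trajectories roughly as follows. The function name and the toy replicate data are hypothetical.

```python
def meets_objective(trajectories, b0, threshold=0.25, target_prob=0.90):
    """Evaluate a 'biomass > 25% of B0 with 90% probability' metric (sketch).

    trajectories: replicate biomass time series from an MSE simulation.
    b0: unfished biomass. Returns the achieved probability and whether it
    meets the acceptable-risk target.
    """
    successes = sum(
        all(b > threshold * b0 for b in traj) for traj in trajectories
    )
    prob = successes / len(trajectories)
    return prob, prob >= target_prob

# Toy example: 10 replicate 50-year runs, one of which dips below 25% of B0.
runs = [[300.0] * 50 for _ in range(9)] + [[300.0] * 49 + [200.0]]
prob, ok = meets_objective(runs, b0=1000.0)  # prob = 0.9, ok = True
```

In a real MSE the trajectories would come from hundreds or thousands of replicate simulations per management strategy.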
Common management objectives for fisheries include maximizing economic benefits while minimizing risk to stocks (Punt 2015). As a result, performance measures for MSEs usually focus on three dimensions of performance: catch, biomass of the target species, and variability of catch. However, there are many ways that performance within these categories can be tracked, and Table L1 provides examples of the different kinds of performance metrics that have been used.
Careful consideration should be given when choosing performance metrics. The appropriate number of metrics will depend on the fishery's objectives, but in general it is difficult to compare more than six metrics simultaneously. Performance metrics should be chosen so that they are easy for decision-makers and stakeholders to understand. For example, a common fishery objective is minimizing large swings in the TAC from year to year. Performance metric design should be an iterative process and involve stakeholders to determine which metrics are best for each situation.
Guidance:
- Performance metrics should reflect management objectives. For each management objective, define the objective, time frame, and acceptable failure rate.
- Involve stakeholders in the process to clarify management objectives and define performance metrics.
- Keep the number of performance metrics as small as possible.
- Choose performance metrics that are easily understood by a wide audience.
Table L1. Types of management objectives and example performance metrics.
Population health (target species)
- Biomass
- Biomass relative to unfished biomass (B0)
- Biomass relative to reference biomass (such as BMSY)
- Biomass relative to initial/historical biomass
- Lowest biomass
- Lowest biomass relative to B0
- Probability of local depletion
- Probability biomass is above or below a threshold
- Number of consecutive years biomass is above or below a threshold
- Percent of older/larger individuals in catch
- Average age of catch

Catch and catch variability
- Catch (total, average, or median)
- Catch variability
- Catch relative to a reference value
- Probability catch is below a threshold value
- Lowest catch
- Probability of catching fish above a certain size
- Number of consecutive years catch is above a threshold value
- CPUE, or catch rate
- Catch rate relative to a reference catch rate
- Catch composition (percent of each species)

Socioeconomic performance
- Discounted revenue
- Costs (monitoring, enforcement)
- Profit
- Profit variability
- Profit per ton or per unit of effort
- Access and distribution equity among sectors and ports
- Conflict among sectors
- Effort
- Displaced effort
- Amount of quota trading
- Employment

Ecosystem impacts
- Biomass of non-target species
- Catch composition of non-target species
- Percentage of discards (by weight or number)
- Number or biomass of at-risk species
- Probability of interaction with at-risk/threatened species
- Proportion of total habitat fished
Step 2: Identify what information is known about the fishery, including major uncertainties.
The next step in conducting an MSE is to gather all the available data and information for the fishery and identify gaps in information. This should include all available data on catch, effort, biological parameters, fishery management, ecological impacts, and any other information that has been collected via monitoring.
This step serves two important purposes. First, this information will be used to develop the operating model (Step 3). Second, by collecting what is known, it will be possible to identify where the major areas of uncertainty lie in terms of the biology, environment, fishery, and management system. This is important because part of the MSE process involves determining which management strategies are robust to these uncertainties. For data-rich stocks, this step usually coincides with a stock assessment, which analyzes all the available data to estimate stock status as well as other biologically important parameters. Stock assessments also quantify where the major uncertainties lie. However, MSEs can be conducted for fisheries that are too data-poor to have a formal stock assessment. For these fisheries, the process of gathering information may be more qualitative, but it is no less important. This can be done through consultations among stakeholders, biologists, and other experts, by borrowing biological information from closely-related stocks, or through a more formal risk assessment process such as a PSA, in which participants score how certain they are about each piece of information.
Guidance:
- The best available information for the fishery should be considered, and key areas of uncertainty should be identified.
- Many different forms of uncertainty should be considered, including process uncertainty, parameter uncertainty, model uncertainty, assessment uncertainty, and implementation uncertainty.
- Uncertainty scenarios should be ranked based on the participants’ assessment of plausibility, and high and medium plausibility scenarios should form the basis for operating models.
Step 3: Develop a set of operating models representing the fishery.
An operating model is a mathematical representation of all the biological components of the system to be managed, as well as the fishery that targets the modeled population. Multiple operating models are usually required to cover the range of ever-present uncertainties. The most plausible hypothesis about how the system functions may be considered the reference (or base case) operating model, and a set of "uncertainty scenario" operating models are also developed to represent the major uncertainties (Rademeyer et al. 2007). The reference operating model is typically based on the stock assessment model that best fits the data. The operating models should be developed in a widely-available programming language so that the analysis is repeatable and the results reproducible. In addition, the mathematical structure of each operating model should be well documented.
Guidance:
- Operating models should be created to represent all high and medium plausibility scenarios.
- The most plausible scenario is considered the reference operating model.
- All models should be developed in a commonly-used, widely-available programming language, and should be well documented and reproducible.
Step 4: Develop candidate management strategies and create implementation models to simulate the application of those management strategies.
An implementation model that reflects how management regulations are applied in practice must also be developed for each candidate management strategy. This model describes how data are collected from the managed system (including the effect of measurement noise), how those data are analyzed during the assessment phase, and how the HCR changes fishing activities in the following simulated time step. Ultimately, the choice of candidate management strategies should reflect the governance and scientific capacity of the managing agency and should be realistic and implementable.
MSE developers should strive to simulate data collection as realistically as possible, with careful consideration given to the current and future sampling effort the management agency can employ. In addition, multiple error structures for the sampled data should be considered. Commonly, MSEs generate age/length composition data from the survey or fishery catch in a way that exactly matches the modeled distributions, which can underestimate the number of samples needed when sampling is carried out in the real world. As with the operating models, implementation models should be developed in a widely-available programming language so that the analysis is repeatable and the results are easily reproducible.
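One simple way to add realistic observation error when simulating data collection is to apply mean-corrected lognormal noise to a survey index, as sketched below. The function name, catchability, and CV values are hypothetical, not taken from any actual survey design.

```python
import math
import random

def sample_survey(true_biomass, catchability=0.001, cv=0.3, seed=None):
    """Generate one survey index observation with lognormal error (sketch).

    true_biomass comes from the operating model; catchability and cv are
    assumed values describing a hypothetical survey.
    """
    rng = random.Random(seed)
    # Convert the desired CV into the lognormal sigma parameter.
    sigma = math.sqrt(math.log(1 + cv ** 2))
    # The -sigma^2/2 mean correction keeps the index unbiased on average.
    return catchability * true_biomass * rng.lognormvariate(-sigma ** 2 / 2, sigma)

# Repeated surveys of the same true biomass scatter around the unbiased value.
draws = [sample_survey(1000.0, seed=i) for i in range(2000)]
mean_index = sum(draws) / len(draws)
```

The same pattern extends to other error structures (e.g., additive noise or autocorrelated error), which is why testing multiple error structures is recommended.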
Guidance:
- The choice of candidate management strategies should reflect the capacity of the managing agency.
- The implementation models should attempt to capture the various aspects of each management strategy as realistically as possible.
Step 5: Run simulations.
In this step, all the candidate management strategies (implementation models) are applied to all the uncertainty scenarios (operating models). This means that an MSE that tests six candidate management strategies on six different uncertainty scenarios will produce results from 36 different combinations. In addition, because each test simulates management over many years (usually at least 20) and includes repeated runs to understand how random variability affects performance (frequently 1,000 individual trials), considerable time, computing power, and an organized approach to storing and summarizing results are required. The calculation of the performance metrics selected in Step 1 should be coded into the MSE so that these statistics are readily available. Running simulations is often an iterative process, as learning during the simulation process can lead developers to alter the candidate management strategies, the operating models, or both.
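The bookkeeping for the full cross of strategies and scenarios can be organized along these lines; the strategy/scenario names and placeholder results below are hypothetical.

```python
import itertools

def run_grid(strategies, scenarios, n_replicates=3):
    """Pair every candidate strategy with every operating-model scenario.

    Placeholder strings stand in for full closed-loop simulation output;
    a real MSE would store performance metrics for each replicate run
    (often 1,000 or more per combination).
    """
    results = {}
    for strat, scen in itertools.product(strategies, scenarios):
        results[(strat, scen)] = [f"{strat}-{scen}-rep{i}"
                                  for i in range(n_replicates)]
    return results

# Six strategies crossed with six scenarios yields 36 combinations.
out = run_grid([f"MS{i}" for i in range(1, 7)],
               [f"OM{j}" for j in range(1, 7)])
```

Keying results by (strategy, scenario) makes it straightforward to summarize performance metrics per combination in Step 6.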
Step 6: Compare performance, evaluate tradeoffs, and select a management strategy.
Once the simulations are run, it is necessary to examine the results and select a management strategy that best meets the management objectives and is robust to the various types of uncertainty in the fishery. The analyst who conducted the MSE should participate in the evaluation process by explaining results and facilitating discussion, but the ultimate choice of which management strategy is "best" should be made by the managing agency. Stakeholders and decision-makers should be fully involved in selecting among management strategies. This will likely be an iterative process in which the analyst interacts with and responds to the needs of decision-makers. Consequently, time must be invested in working with decision-makers to ensure they understand the information presented.
When comparing the performance metrics for each candidate management strategy, it is necessary to determine a process for deciding on the best option. Occasionally a single management strategy will clearly dominate the others in all performance categories, but more likely there will be tradeoffs between the performance metrics (e.g., a strategy that results in high yield, but also higher risk to the population). The ideal way to select among management strategies is to define a utility function that puts an a priori weight on each performance metric (essentially, a numeric factor reflecting how important it is) and choose the management strategy that achieves maximum utility. However, this method is very difficult to implement in the real world because stakeholder groups often place different values on different performance metrics, and those values are difficult to quantify objectively. Instead, the most commonly used method for selecting a management strategy involves the following steps:
- The analyst explains all the options and presents the relative results.
- Those management strategies that do not meet the minimum sustainability criteria are eliminated, as these strategies often cannot legally be implemented, and would likely be considered unviable by all stakeholder groups.
- Any management strategies that are outperformed in all performance metrics are eliminated to reduce the number of options as quickly as possible.
- Decision-makers use either a satisficing or trading-off approach to select from the remaining candidates. Satisficing involves specifying minimum performance standards for all performance measures and only considering management strategies that satisfy those standards. In contrast, trading-off acknowledges that any minimum performance standards will always be somewhat arbitrary, and that decision-makers should attempt to find management strategies that achieve the best balance among performance measures.
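The elimination steps above can be sketched as a simple filter over a table of performance results: drop strategies that fail the sustainability minima, then drop strategies outperformed on every metric. The strategy names, metrics, and minimum values below are hypothetical.

```python
def dominated(a, b):
    """True if strategy b outperforms strategy a on every metric (higher = better)."""
    return all(b[k] > a[k] for k in a)

def shortlist(strategies, minima):
    """Apply the elimination steps: sustainability minima, then dominance."""
    # Step: eliminate strategies failing the minimum sustainability criteria.
    viable = {
        name: perf for name, perf in strategies.items()
        if all(perf[k] >= v for k, v in minima.items())
    }
    # Step: eliminate strategies outperformed in all performance metrics.
    return {
        name: perf for name, perf in viable.items()
        if not any(dominated(perf, other)
                   for oname, other in viable.items() if oname != name)
    }

# Hypothetical performance table: probability stock stays healthy, average yield.
table = {
    "MS1": {"p_healthy": 0.95, "yield": 40.0},
    "MS2": {"p_healthy": 0.96, "yield": 55.0},  # outperforms MS1 on both metrics
    "MS3": {"p_healthy": 0.80, "yield": 70.0},  # fails the sustainability minimum
}
final = shortlist(table, minima={"p_healthy": 0.90})  # only MS2 remains
```

Decision-makers would then apply a satisficing or trading-off approach to whatever candidates survive this filter.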
Climate change and Management Strategy Evaluation
Climate change and environmental variation can drive changes in a wide array of biological processes affecting fishery management, including spawning, spatial distributions, migratory patterns, gear selectivity, and diet, as well as growth, survival, mortality, and recruitment rates. Changes in any one of these parameters can profoundly affect the estimated value of fishery reference points such as B0, MSY, OY, etc. MSEs provide an opportunity to examine how those types of changes are likely to affect the performance of a given management strategy by modeling environmental and climate impacts on population dynamics. These simulations can be used to evaluate the benefits of adopting a management strategy that explicitly accounts for environmental and climate impacts.
Two approaches have been developed to apply MSE to evaluate the impact of environmental variation on the performance of management strategies: the mechanistic approach and the empirical approach. The mechanistic approach estimates the relationship between the environment and elements of the population dynamics of the fished species and makes predictions of population trends using the outputs from global climate models (Punt 2015). This approach can be very difficult, especially in data-poor fisheries. A key step when applying this approach is to represent uncertainty appropriately, because fishery models estimate how populations will respond to changing conditions by looking at past performance, which is not necessarily representative of changes under future climate scenarios (Reifen and Toumi 2009).
The empirical approach examines broad impacts of climate change, environmental variation, and ecosystem shifts without explicitly specifying a mechanism (Punt 2015). This is done by imposing trends on the values of key parameters of the operating model to simulate plausible changes that might occur at the stock level under climate change, without attempting to link the operating model explicitly to global climate models. The empirical approach can be used to understand how robust a management strategy is to changing conditions even when no environmental data are available to relate to future changes in the operating model's parameters. It has been recommended as the more appropriate approach for the majority of fisheries (Szuwalski and Punt 2013).
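A minimal sketch of the empirical approach is to impose a trend on one key operating-model parameter, for example a gradual decline in the stock's intrinsic growth rate. The baseline value and rate of decline below are illustrative assumptions only, not estimates for any real stock.

```python
def productivity_trend(r0=0.3, annual_change=-0.005, n_years=50):
    """Empirical climate scenario: a linear trend in the growth rate (sketch).

    No mechanism or climate model is specified; r0 and annual_change are
    hypothetical values representing a plausible decline in productivity.
    The rate is floored at zero so productivity never goes negative.
    """
    return [max(0.0, r0 + annual_change * t) for t in range(n_years)]

# Year-by-year growth rates to feed into an operating model projection.
rs = productivity_trend()
```

An operating model would use `rs[t]` in place of a constant growth rate in year t, allowing performance metrics to be recomputed under the declining-productivity scenario.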
Guidance:
- Stakeholders should be involved in the decision-making process, which usually requires some investment in explaining the process along the way.
- The analyst should refrain from deciding which management strategy is “best”; the decision should be made by the management agency and reflect their objectives.
- A four-step approach is usually used to eliminate unviable candidate management strategies. Decision-makers will need to use either a trading-off or a satisficing approach to decide on a management strategy.