DISCUSSION & CONCLUSIONS


      RMLANDS is a stochastic landscape model that simulates disturbance and post-disturbance recovery of vegetation within a heterogeneous landscape. The landscape structures produced by this model were analyzed with FRAGSTATS, which summarizes landscape structure by means of numerous quantitative metrics, and HABIT@, which summarizes wildlife habitat capability by means of species-specific models for selected indicator species. We applied these models to the problem of characterizing the range of variability in landscape structure and wildlife habitat within the San Juan National Forest in the southern Rocky Mountains, USA. The model was parameterized on the basis of our best empirical understanding of the pre-1900 disturbance regime in this region. The period of several centuries prior to 1900 represents a time when broad-scale climatic conditions were generally similar to those of today, but Euro-American settlers had not yet introduced the sweeping ecological changes that now have greatly altered many Rocky Mountain landscapes -- through fire suppression, grazing, road-building, timber cutting, recreation, and other activities (Knight et al. 2000). Thus, the pre-1900 period provides a suitable reference condition against which we can compare current landscape structure and dynamics (Swetnam et al. 1999, Landres et al. 1999). In addition, an understanding of natural landscape structures and variability during this reference period provides a basis for forest management policies that seek to mimic natural disturbance patterns in our logging, grazing, and other activities involving commodity production from public forest lands (Romme et al. 2000, Buse and Perera 2002).


Disturbance Processes & Dynamics


      Wildfire.--Numerous types of natural disturbance occurred in the study area during the reference period: fire, snow avalanches, windthrow, and a variety of tree-killing insects, fungi and other pathogens (Veblen et al. 1989, Lertzman and Krebs 1991, Veblen et al. 1991a, b, Roovers and Rebertus 1993, Veblen 2000, Romme et al. 2003). However, the most important and coarsest in scale of these natural disturbances was fire. Based on a number of fire history studies and relatively extensive local empirical data, Romme et al. (2003) concluded that the median fire return interval varied dramatically across the landscape along an elevational gradient in relation to fuels and moisture conditions, ranging from 10-30 years in the lower elevation ponderosa pine type, 20-50 years in the dry mixed-conifer type, 60-120 years in the aspen type, and 200-350 years in the spruce-fir type. They also noted that many individual stands escaped fire for far longer than the median return interval and some burned at shorter intervals, creating a complex vegetation mosaic at the landscape scale. They further hypothesized that under these reference period conditions, stand replacement fires initiated stand development and maintained a coarse-grain mosaic of successional stages and cover types across the landscape, while non-replacement fires functioned to maintain communities in a particular condition (e.g., open canopy ponderosa pine forest) or accelerate the successional process of stand development.


      Our simulations largely confirmed these observations and provided a detailed quantitative summary of the wildfire disturbance regime. In particular, we recorded a similar elevational gradient in return intervals (or rotation periods), ranging from 40-46 years in the lower elevation ponderosa pine (without and with aspen) type, 53-63 years in the warm dry mixed-conifer (without and with aspen) type, 109 years in the pure aspen type, 138-146 years in the cool moist mixed-conifer (with and without aspen) type, and 218-267 years in the spruce-fir (with and without aspen) type (Table-rotation). The minor differences between our findings and those of Romme et al. (2003) largely reflect biases associated with the approaches used to estimate return intervals in each study. For example, our estimate of a 40-year mean return interval in ponderosa pine forests was inclusive of high- and low-mortality fires and was averaged over every cell classified as ponderosa pine, whereas the estimate of Romme et al. (2003) was based on intervals between recorded fire scars (and therefore limited to low-mortality fires only) from a sample of trees in large homogeneous stands of ponderosa pine (which generally have the shortest return intervals). When we computed the return interval between low-mortality fires using a comparable approach - treating each recorded interval between low-mortality fires in a cell as an independent observation to approximate the method of Romme et al. (2003) - the mean return interval was somewhat shorter (30 years) and within the range reported by Romme et al. (2003). The difference in these estimates reflects the fact that we averaged over all ponderosa pine cells instead of restricting our estimate to only large homogeneous stands (with shorter return intervals). Similarly, our method of computing the mean return interval in the higher elevation spruce-fir forest included both low- and high-mortality fires, whereas the estimate of Romme et al. (2003) was based on stand-replacement (i.e., high mortality) fires only. When we restricted our computation to high-mortality fires, the mean return interval increased to 266-329 years and was consistent with the previous study. Overall, our results were remarkably consistent with those determined by Romme et al. (2003) based on empirical field studies.
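      The sensitivity of the estimated return interval to the method of averaging can be illustrated with a minimal sketch (Python). The fire histories below are hypothetical, and the two functions are illustrative rather than a reproduction of the RMLANDS bookkeeping: the first averages each cell's mean fire interval (so rarely burned cells count as much as frequently burned ones), while the second pools every recorded interval as an independent observation, which weights frequently burned stands more heavily, much as a scar-based composite record does.

    import numpy as np

    def per_cell_mean(fire_years_by_cell):
        # Mean of per-cell mean intervals: every cell contributes equally,
        # including cells that burn rarely and thus have long intervals.
        cell_means = []
        for years in fire_years_by_cell:
            years = sorted(years)
            if len(years) >= 2:
                cell_means.append(np.mean(np.diff(years)))
        return float(np.mean(cell_means))

    def pooled_interval_mean(fire_years_by_cell):
        # Every recorded interval treated as an independent observation:
        # frequently burned cells (short intervals) dominate the average.
        intervals = []
        for years in fire_years_by_cell:
            intervals.extend(np.diff(sorted(years)))
        return float(np.mean(intervals))

    # Hypothetical fire histories (simulation years) for three ponderosa pine cells.
    cells = [[20, 40, 60, 80, 100, 120],   # frequently burned stand
             [30, 90, 150],                # intermediate
             [10, 130]]                    # rarely burned stand

    print(per_cell_mean(cells))        # cell-weighted estimate (longer, ~67 years)
    print(pooled_interval_mean(cells)) # interval-weighted estimate (shorter, ~43 years)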


      We also noted considerable variability in return intervals among locations within a single cover type. For example, even in ponderosa pine-oak forest - the cover type with the shortest mean return interval (~40 years) - the mean return interval between fires (of any mortality level) varied widely, from 19 years to >800 years (Figure-return), and varied spatially across the forest (Figure-map). In general, return intervals increased with elevation, reflecting the moister, cooler conditions at higher elevations, and increased for stands embedded in a neighborhood containing cover types with longer return intervals (e.g., aspen, cool moist mixed-conifer forest). These patterns of variation were remarkably consistent among all cover types, highlighting the strong influence of landscape context on fire regimes. They also demonstrate that no single statistic, such as the mean fire interval (MFI), is adequate to characterize historical fire regimes; indeed, the widely used MFI may be quite misleading if taken literally, because it can connote homogeneity and/or consistency when in fact spatial and temporal variability is the trademark of these regimes.


      Perhaps the single greatest insight gained from our simulations with regard to wildfire stems from the sheer magnitude of wildfire disturbance required to produce the widely accepted return intervals for the reference period. On average, once every two decades, >10% of the area (~80,000 ha) burned, and roughly once every 120 years, >20% (~160,000 ha) burned (Figure-recurrence), inclusive of both high- and low-mortality affected areas. This is a tremendous amount of burning, and perhaps a magnitude of burning that is poorly appreciated by land managers and the general public, judging from public reactions to recent “large” fires in the West. Note that the largest recorded fire in the project area, the Missionary Ridge fire of 2002, burned a mere 20,000 ha, yet it prompted significant reaction among managers and the public.


      Insects & Disease.--Hundreds of species of insects, fungi, and other pathogens that cause tree death or damage also inhabited these forests during the reference period (Furniss and Carolin 1977). Any of them may have been locally important on occasion (Schmid and Mata 1996). However, it was not feasible to explicitly simulate more than a handful of insects and diseases in a complex landscape model like RMLANDS. Therefore, we identified four insect species and one insect/disease (pathogen) complex that have had the most frequent and widespread impact on vegetation in the SJNF region (Romme et al. 2003). The insects included mountain pine beetle (Dendroctonus ponderosae) and its affiliates, Douglas-fir bark beetle (Dendroctonus pseudotsugae), spruce bark beetle (Dendroctonus rufipennis), and western spruce budworm (Choristoneura occidentalis). The insect/disease complex that we treated is referred to as "pinyon decline" and includes a combination of black stain root rot (Leptographium wagneri) and the pinyon ips beetle (Ips confusus).


      In contrast to wildfire, there were comparatively few empirical data on insect/disease disturbance regimes for the reference period, and almost no local data. Consequently, we were forced to draw heavily on contemporary observations of outbreaks from throughout the Rocky Mountain region (Schmid and Mata 1996), in combination with local and regional expert opinion. In addition, because of the paucity of empirical data available for model verification, we calibrated the simulations against the user-specified disturbance regimes. Not surprisingly, therefore, our simulations produced disturbance regimes consistent with our parameterization. While this may seem somewhat circular, it was a necessary step for a complex model such as RMLANDS; moreover, our real emphasis was on quantifying the vegetation patterns and dynamics resulting from these disturbance processes. Hence, we gained few true insights from our simulations regarding the insect/disease disturbance regimes themselves. Nevertheless, a couple of important observations are worth noting.


      First, the overall rotation periods for insect/disease disturbances were generally much longer than that of wildfire, which had an overall rotation period of 94 years (Table-rotation). Spruce budworm had the shortest rotation period of any insect/disease agent at 103 years, followed by spruce beetle at 273 years, pine beetle at 306 years, pinyon decline at 466 years, and Douglas-fir beetle at almost 1,200 years (Table-rotation). Hence, taken individually, and with the exception of spruce budworm, insect and disease disturbances had much less overall impact on the landscape than wildfire. Taken collectively, however, insects and diseases clearly impacted more area per unit time than wildfire. We might conclude, therefore, that too little attention has been given to the potentially important role of insects and diseases compared to wildfire.
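      The collective impact is easy to verify from the individual rotation periods: the proportion of the landscape disturbed per year by each agent scales as the reciprocal of its rotation period, so the combined rotation period is the harmonic combination of the individual periods. A minimal check (Python), using the rotation periods from Table-rotation and assuming the agents act independently with negligible spatial overlap, gives a combined rotation of roughly 51 years, well below the 94-year wildfire rotation.

    # Combined rotation period for the five insect/disease agents, assuming the
    # agents act independently and their disturbed areas do not overlap appreciably.
    rotations = {
        "spruce budworm": 103,
        "spruce beetle": 273,
        "pine beetle": 306,
        "pinyon decline": 466,
        "Douglas-fir beetle": 1200,
    }
    fraction_per_year = sum(1.0 / r for r in rotations.values())  # landscape fraction disturbed per year
    combined_rotation = 1.0 / fraction_per_year
    print(round(combined_rotation))  # ~51 years, vs. 94 years for wildfire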


      Second, the ecological impacts of insect/disease outbreaks were fundamentally different from those of wildfire in a couple of important ways with regard to vegetation patterns and dynamics. With the exception of spruce beetle outbreaks, insect/disease outbreaks resulted in proportionately very little stand replacement. Most disturbances were low mortality and either promoted successional development of younger stands by thinning the tree canopy and facilitating understory development, or acted within older stands as a gap-scale disturbance process that facilitated the development of the true old-growth, shifting mosaic stand condition. In contrast, wildfire in most cover types resulted in proportionately more stand replacement and was therefore principally responsible for the maintenance of the coarse-grained mosaic of successional stages across the landscape. There were exceptions to this generalization. For example, spruce beetle outbreaks were principally high-mortality disturbances and functioned much the same way as wildfire in high-elevation conifer forests. Similarly, pine beetle outbreaks and wildfire operated in similar fashion in low-elevation ponderosa pine forests, in both cases producing mostly low-mortality impacts. Insect/disease disturbances in most cases also exhibited notably different spatial patterns. In general, wildfires produced relatively contagious disturbance patches (i.e., large, contiguous patches containing relatively few gaps - a spatial property known as low lacunarity; Plotnick et al. 1993), whereas most insect/disease outbreaks produced relatively non-contagious patterns characterized by a great deal of internal fine-grained heterogeneity (i.e., high lacunarity)(Figure-lacunarity). Although not universally true, in general, insect/disease agents were responsible for creating much of the fine-scale heterogeneity in vegetation patterns in our simulations.


Vegetation Patterns & Dynamics


      It is widely accepted that during the reference period natural disturbance processes, notably wildfire and a variety of insects/diseases, operated to create and maintain a complex vegetation mosaic of successional stages and cover types (Romme et al. 2003). It is less clear whether this vegetation mosaic was stable in structure (composition and configuration), or to what degree it varied over time. With our simulations, we sought to quantify the range of variability in landscape structure during the reference period to help ascertain the degree of dynamism in landscape structure and to provide a benchmark for comparison with alternative future land management scenarios. To this end, our simulations produced several important findings.

 

Click on the links below to view a movie depicting vegetation changes on the San Juan National Forest, the Columbine District, or the Hermosa Watershed over an 800-year (10-year time steps) simulation representing the reference period disturbance regime. NOTE, these are large (40-60 Mb) Microsoft Media files (.avi) that require appropriate movie viewing software (e.g., QuickTime Player).

                San Juan National Forest Movie

                Columbine District Movie

                Hermosa Watershed Movie


      Overall, the vegetation mosaic was remarkably variable in structure over time (see movie links above). For example, across the 57 dynamic patch types (i.e., unique combinations of cover type and stand condition), the average coefficient of variation in the percentage of the landscape occupied by each patch type was 131% (range 32-746%). Hence, while the landscape could be characterized as a shifting mosaic of successional stages and cover types, it was not a steady-state shifting mosaic (sensu Bormann and Likens 1979); in other words, the composition of the mosaic was not constant. This finding is consistent with evidence from a wide variety of other coniferous forest landscapes in North America, including boreal forests of western Canada (Johnson 1992) and the Great Lakes region (Baker 1992a,b), and subalpine forests of the Yellowstone Plateau (Turner et al. 1993). The dynamism we noted can be attributed to two major sources of variation in the model: (1) stochasticity in the disturbance parameters associated with initiation, spread, and mortality, and (2) the climate modifier. The stochastic nature of disturbance initiation (e.g., representing random lightning strikes associated with local storm systems) and subsequent spread (e.g., representing uncertain weather conditions during the hours and days following a wildfire initiation) introduced uncertainty into which stands were disturbed, when they were disturbed, and how severely they were disturbed. Thus, as a matter of chance, a large stand-replacing disturbance might have affected a large proportion of a cover type and thereby altered its seral-stage distribution. The climate modifier had a similar destabilizing effect on vegetation patterns by influencing the rate of disturbance in each timestep (decade) of the simulation. For example, drought cycles (as implemented via the climate modifier) had a substantial influence on the frequency and extent of wildfire (Figure-initiations, Figure-extent). This pattern of variation in climate and fire is consistent with findings from tree-ring studies in Colorado and throughout the Southwest (e.g., Swetnam and Betancourt 1998, Veblen 2000). In combination, varying climate conditions and the stochastic occurrence of disturbances acted to keep the system in a constant state of change.
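      For readers who wish to reproduce this kind of summary, the sketch below (Python) shows one way to compute a coefficient of variation for each dynamic patch type from a time series of class-level percentage-of-landscape values. The input array here is a random placeholder standing in for FRAGSTATS output; only the calculation itself is the point.

    import numpy as np

    def cv_percent(series):
        # Coefficient of variation (%) of a metric's time series.
        series = np.asarray(series, dtype=float)
        return 100.0 * series.std(ddof=1) / series.mean()

    # pland[t, k]: percentage of the landscape in patch type k at timestep t
    # (hypothetical stand-in for class-level output over 80 decades and 57 types).
    rng = np.random.default_rng(0)
    pland = rng.gamma(shape=2.0, scale=1.5, size=(80, 57))

    cvs = np.array([cv_percent(pland[:, k]) for k in range(pland.shape[1])])
    print(cvs.mean(), cvs.min(), cvs.max())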


      Although the vegetation mosaic was not in a steady-state equilibrium, the mosaic was generally in dynamic (or bounded) equilibrium (sensu Turner et al. 1993). That is to say, while the structure of the landscape varied over time, it generally fluctuated within bounds about a stable mean (e.g., Figure-equilibrium). This behavior is essential to our objective of describing the range of variability in landscape structure, because the concept of a “range of variability” implies that the range is stable. If the landscape is not in dynamic equilibrium - if, for example, it exhibits a trend - then the measured range of variation will depend on the specific period of measurement (Figure-equilibration scale). In our simulations, most (but not all) metrics achieved a stable, bounded equilibrium within a 100-300 year simulation period - although it took twice that length of time to verify that the range of variation was in fact stable. Not surprisingly, the period required for equilibration in landscape composition varied somewhat among cover types. In general, cover types that experienced shorter disturbance return intervals and/or faster rates of succession equilibrated in the shortest period. For example, the seral-stage distribution in mountain shrubland, which experienced an 81-year rotation period for stand-replacement wildfire disturbances and succeeded to the latest seral stage in as little as 50 years following stand replacement, reached stable bounds within a 100-year period (Figure-mts condition). Conversely, the seral-stage distribution in spruce-fir forest, which experienced a 321-year rotation period for stand-replacement wildfire and took at least 300 years to succeed, reached relatively stable bounds within roughly a 300-year period (Figure-sf condition). Depending on the criteria used to define stability, there is even evidence that the spruce-fir seral-stage distribution did not completely stabilize over the entire 800-year simulation period (Figure-equilibrate).
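      Judging when a metric has reached stable bounds requires an explicit criterion. The sketch below (Python) gives one simple, illustrative formulation - the first timestep after which the series never leaves the min-max envelope of a reference window, padded by a small tolerance - applied to a hypothetical series that drifts for roughly 30 decades and then fluctuates within bounds. The criterion actually used in our analysis may differ; the point is only that stability must be defined operationally before an equilibration period can be reported.

    import numpy as np

    def equilibration_timestep(series, window=20, tol=0.10):
        # First timestep after which the metric stays within the min-max envelope
        # of a reference window (padded by a tolerance). One plausible criterion,
        # not necessarily the one used in the analysis.
        series = np.asarray(series, dtype=float)
        for start in range(len(series) - 2 * window):
            ref = series[start:start + window]
            lo, hi = ref.min(), ref.max()
            pad = tol * (hi - lo if hi > lo else abs(hi) + 1e-9)
            rest = series[start + window:]
            if rest.min() >= lo - pad and rest.max() <= hi + pad:
                return start + window
        return None  # never stabilized over the simulated period

    # Hypothetical metric: drifts upward for ~30 decades, then cycles within bounds.
    decades = np.arange(50)
    series = np.concatenate([np.linspace(10, 25, 30), 25 + 2.0 * np.sin(decades / 3.0)])
    print(equilibration_timestep(series))  # an index near the end of the drift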


      Our results demonstrate that the range of variability in landscape structure cannot be expressed effectively in a single metric, because the metrics associated with different aspects of landscape composition and configuration exhibit varying degrees and patterns of dynamism. In our simulations, landscape composition metrics exhibited, on average, five times greater dynamism than landscape configuration metrics (Table-hrv). Thus, while the composition of the vegetation mosaic - specifically, the seral-stage distributions - fluctuated dramatically over time, the spatial pattern of the mosaic was relatively stable. The variability in configuration was principally associated with changes in the size and continuity of the large patches in the landscape. Overall, this suggests that large, severe disturbance events - those that occurred relatively infrequently but that substantially altered the seral-stage distribution and created a coarse-grained mosaic of vegetation patches - were disproportionately important in regulating the dynamism in landscape structure. In contrast, the relatively frequent small disturbances had little impact on overall landscape pattern or change through time. This finding is consistent with increasing evidence from a wide variety of disturbance-dominated landscapes in North America that large fires are often more severe than smaller fires and have a stronger influence on long-term ecological structure and function, for example, by introducing successional trajectories that differ from those expected (Moritz 1997, Romme et al. 1998, Turner et al. 1997). In particular, landscapes resulting from large fires are often complex mosaics composed of low-severity surface burns where soils are largely intact, high-severity crown-fire areas with extensive tree mortality and consumption of soil organic layers, areas of moderate or mixed effects, and islands of unburned vegetation. Important consequences of the spatial patterns of burn severity include the resulting patterns of surviving organisms, which dictate initial succession patterns (Turner et al. 1998), and the differential responses of animal species in relation to post-burn habitats (Kotliar et al. 2002).


      The role of large disturbances is especially noteworthy when considered in relation to climate change. Small climatic changes have potentially significant effects on disturbance regimes, and dramatic changes in ecological communities related to fire regime dynamics are possible (Clark 1988, Swetnam et al. 1990, Sprugel 1991, Turner et al. 1997, Meyer and Pierce 2003). One predicted outcome of global climate change includes the increased frequency of severe disturbance events, including large fires (Ryan 1991, Torn and Fried 1992). Indeed, the possibility that recent high-severity large fires are part of a longer-term trend has raised alarm in many areas, including the Rocky Mountain West, because the implications for key environmental processes and biological responses are highly uncertain (McCarthy and Yanoff 2003). Our results suggest that if climate change results in an alteration in the frequency and extent of large disturbances, then one of the principal impacts may be to alter the dynamics in landscape structure.


      Based on these findings, it is easy to conclude that the landscape was in equilibrium, albeit a dynamic one, during the reference period. However, this conclusion warrants careful consideration given its important ramifications. In particular, our simulation model treated disturbance and succession as stationary processes (i.e., stochastic processes whose distributions do not change over time or space; Loucks 1970) with random perturbation. For example, all other things being equal, the rate (probability) of successional transition from one stand condition to another was held constant for the entire simulation. Given stationary parameters, the model is certain to reach equilibrium eventually, if given enough time. Conversely, given the stochastic implementation of these stationary processes, the landscape is equally certain not to exhibit equilibrium over a very short period of time (e.g., a few timesteps). Hence, the equilibrium concept is ultimately a matter of scale (Turner et al. 1993). The more relevant question is: given the length of time it takes for the landscape to demonstrate equilibrium dynamics, is it likely that the factors governing disturbance and succession processes (and hence vegetation dynamics), such as climate, were stationary for that length of time during the reference period? As stated above, most metrics exhibited a stable, bounded equilibrium within a 100- to 300-year period, which is well within the length of our reference period (1500 to late 1800's). Moreover, while we recognize that Rocky Mountain climates have varied during the last 600 years at scales of decades and centuries (e.g., Millspaugh et al. 2000, Veblen 2000) and that the reference period was not a time of complete stasis, climatically, ecologically, or culturally (e.g., Petersen 1981, Whitney 1994), the period of several centuries prior to 1900 was a time of relatively consistent environmental and cultural conditions in the region (Romme et al. 2003). Thus, we believe it likely that the landscape exhibited dynamic equilibrium conditions during the reference period.


      Lastly, the existence of dynamic equilibrium does not imply that the mean condition of the landscape (with respect to any particular metric) is an adequate descriptor of the reference period - only that it is stable. Indeed, the actual mean condition of the landscape, as given by any metric, was rarely, if ever, realized. Instead, given the high variability over time in landscape structure, the range (or bounds) of variation should be emphasized, especially when using these results as a benchmark for comparison with other disturbance scenarios.


Wildlife Habitat Patterns & Dynamics


      Policy mandates require National Forest managers to maintain viable populations of all native wildlife species found on National Forest lands. Additional research is urgently needed to evaluate the current status of wildlife species, and to identify management strategies that will help ensure their long-term persistence. A combination of empirical, theoretical, and modeling studies can best contribute to our understanding of past, present, and future status of overall biodiversity and individual species in National Forests. In this study, we have used a modeling approach to characterize the baseline range of variability in habitat capability for a suite of species representing a diversity of habitat requirements. An important next step will be to compare the range of variation that we have described with potential habitat under alternative land management scenarios. It is important to note that we did not simulate wildlife populations per se. Rather, we simulated habitat conditions, and thereby implied potential population distributions and dynamics as a function of habitat conditions. Empirical studies are urgently needed to test the predictions of population distribution and dynamics generated by this study.


      We demonstrated that habitat capability varies over time and space for all species – something that has long been recognized intuitively but rarely quantified. Not surprisingly, the magnitude and pattern of variation differed somewhat among species (Table-LC index). We initially predicted that habitat characteristics for generalist species (represented by elk in this study) would exhibit the least variation over time. Our reasoning was that as one kind of suitable habitat became reduced in availability through normal landscape dynamics, other kinds would become more available. Thus, long-term variability in the total amount and configuration of suitable habitat would be small. In contrast, we predicted that habitat for specialist species (pine marten, three-toed woodpecker, and olive-sided flycatcher in this study) would fluctuate more widely over time, because alternative kinds of habitat did not exist to compensate for natural fluctuations in the preferred habitat.


      As predicted, at the scale of the Columbine District (the largest extent considered for all four species), the long-term range of variability in habitat capability was less for elk than for two of the three habitat specialist species (three-toed woodpecker and olive-sided flycatcher)(Table-LC index). To our surprise, the pine marten also exhibited relatively low variability in habitat capability. Apparently, the contagious spatial pattern in vegetation successional stages produced by coarse-scale, high-mortality disturbances in high-elevation conifer forests (i.e., wildfires and spruce beetle outbreaks) functioned to maintain the large patches of interior forest needed by this species. In contrast, the relatively high variability in three-toed woodpecker and olive-sided flycatcher habitat reflected the episodic pulses of high-quality habitat that followed large-scale disturbance events in the mid- and high-elevation conifer forests. These patterns generally held at the finer watershed scale, although elk habitat was comparatively much more variable than expected. This was almost certainly due to the relatively large ratio of home range size to landscape extent (~1:40); at this smaller extent, habitat fluctuations caused by large disturbance events influenced a much larger proportion of the potential home ranges.


      We selected these four wildlife indicator species based on differences in life history and habitat associations. Not surprisingly, therefore, each species exhibited somewhat unique patterns of variation through time and space that made generalizations difficult. For example, olive-sided flycatcher habitat was well-distributed throughout the landscape at all times and consisted of a mixture of relatively persistent high-contrast edges bordering permanent openings such as meadows, barren areas, and water bodies, in addition to transient edges associated with the heterogeneous pattern of tree mortality following both wildfire and insect outbreaks. Thus, there were both persistent local sources of high-quality habitat and transient but extensive sources of high-quality habitat that followed episodic disturbances. Overall, the spatial distribution of high-quality olive-sided flycatcher habitat was quite distinct (Figure-osfl) when compared to, say, the more coarse-grained and contagious distribution of pine marten habitat (Figure-marten). In addition, in contrast to the very transient nature (i.e., 10-20 years) of three-toed woodpecker habitat following disturbances, high-quality olive-sided flycatcher habitat tended to degrade much more slowly over several decades in response to gradual succession processes operating in the disturbance opening. Ultimately, as these few examples illustrate, each indicator species exhibited a unique spatial and temporal pattern of variability in capable habitat that reflected differences in life history (e.g., home range size) and habitat affinities (e.g., preferences for edges, forest interiors, or post-disturbance environments).


HRV Departure


      One of the principal purposes of gaining a better quantitative understanding of the historic reference period is to know whether recent human activities have caused landscapes to move outside their historic range of variability (Landres et al. 1999, Swetnam et al. 1999). To this end, land managers have largely adopted a single approach based on Fire Regime Condition Class (FRCC) determination. FRCC is a categorical classification of the degree to which the current fire regime and the composition and structure of a vegetation community deviate from the natural or historic range of variability (HRV) under a designated reference period (FRCC website). FRCC has become a driving force behind current land management activities and is widely used as the primary (or even sole) basis for identifying and prioritizing areas for ecological restoration, including the reduction of wildfire risks associated with hazardous fuels.


      Our simulations were designed to provide a quantitative description of landscape structure dynamics for the pre-1900 reference period against which to compare landscape trajectories under alternative future land management scenarios. We believe that the most appropriate and defensible use of our quantitative findings is in the context of evaluating the relative impacts of alternative scenarios (Romme et al. 2000, Buse and Perera 2002), and therefore it was not our original intent to compare the HRV in landscape structure against the current landscape. However, as noted above, HRV departure has become a critical - arguably the critical - management issue on this and other national forests. Thus, in order to examine the magnitude of departure of the current landscape condition from the simulated HRV, we modified the approach for FRCC determination into one better suited to our modeling environment and possessing other distinct advantages over FRCC (see Methods for a detailed description of our approach). Briefly, our approach is based on a spatially-explicit model of disturbance and succession (instead of a nonspatial model), incorporates multiple disturbance processes (not just fire), explicitly incorporates the measured range of variation in each metric (instead of using the mean), results in a continuously-scaled departure index (instead of a 3-class categorization of departure level)(Figure-hrv-departure), adopts a truly multivariate perspective on vegetation departure by incorporating multiple composition and configuration metrics (instead of a bivariate summary), and allows for an explicit assessment of the effects of scale on departure. Like the FRCC approach, however, our approach is not without limitations; therefore, the findings discussed below must be considered within the scope and limitations that follow.
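      For readers unfamiliar with continuously-scaled departure indices, the sketch below (Python) gives one illustrative, percentile-based formulation: the index is 0 when the current value of a metric sits at the median of its simulated distribution and 100 when it lies at or beyond the simulated extremes (i.e., completely outside the HRV). The exact formulation used in this study is given in the Methods and may differ; the simulated values here are hypothetical.

    import numpy as np

    def departure_index(current_value, simulated_values):
        # Percentile-based departure on a 0-100 scale: 0 at the simulated median,
        # 100 at or beyond the simulated range. Illustrative formulation only.
        sims = np.sort(np.asarray(simulated_values, dtype=float))
        pct = 100.0 * np.searchsorted(sims, current_value, side="right") / len(sims)
        return min(100.0, 2.0 * abs(pct - 50.0))

    # Hypothetical simulated HRV of a metric (70 decades) vs. two current values.
    rng = np.random.default_rng(2)
    hrv = rng.normal(30.0, 5.0, size=70)
    print(departure_index(30.0, hrv))  # near 0: close to the simulated median
    print(departure_index(55.0, hrv))  # 100: completely outside the simulated range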


      The current landscape structure appears to deviate substantially from the simulated HRV (Table-hrv-summary-combined), and this seems to be the case despite difficulties in classifying and mapping the structure of the current landscape (see limitations below). Note that for our purposes the “current” condition refers to the landscape in 2003, after the Missionary Ridge fire. Many characteristics appear far outside the simulated range of variability. Indeed, one-third of the landscape composition metrics (16/48) and most of the landscape configuration metrics (15/19) are completely outside their HRVs (i.e., 100% departure index)(Table-hrv-combined). In general, the current landscape has fewer, larger, more extensive, and less isolated patches with less edge habitat than existed under the simulated HRV. The larger patches tend to be geometrically less complex and contain proportionately more core area than existed under the simulated HRV. Overall, the current landscape is more contagious and less structurally diverse than ever existed under the simulated HRV. This can be interpreted as a more homogeneous landscape, in which the lack of any extensive disturbance during the past 100 years has led to large, mostly late-seral patches, with low contrast due to the paucity of younger seral stages.


      The patterns of departure were generally similar at the class level for each of the cover types with reliable data on current conditions (i.e., forest types). In particular, the current high-elevation landscape is dominated by large patches of late-seral conifer forest and an almost total absence of early-seral forest due to the lack of extensive disturbance. Clearcutting produced a small amount of early-seral forest during the mid-1900s, but this practice was discontinued by 1980 in all but aspen forests because of problems in regenerating clearcut stands. The remaining early-seral patches are small, geometrically simple, and relatively isolated. The story is similar for pure aspen forests. However, aspen-dominated forest, which includes aspen in the early- and mid-seral stages of the mixed conifer-aspen forest types, exhibits a notable deviation from this general pattern. While the current landscape has less aspen in the early- and mid-seral stages, due to the lack of disturbances, and more in the late-seral stage - similar to the other high-elevation forest types - the coarse spatial configuration of aspen-dominated forest appears to be generally within the simulated range of variation. The low-elevation forests, especially ponderosa pine, also contain an overabundance of stands in the late-seral stages, but the most notable departure is the complete absence of stands in the fire-maintained open canopy condition. Historically, this condition was quite prevalent in our project area (Romme et al. 2003) and throughout the Southwest (Swetnam and Baisan 1996). In our simulations, low-elevation fire-maintained open canopy forest comprised anywhere from roughly 9% to 16% of the landscape over time and was therefore a dominant feature of the landscape at all times. The current departure is clearly due to the dearth of wildfires over the past century, which has had a couple of notable consequences. First, the absence of fire following a pulse of widespread regeneration in the early twentieth century has allowed many young stands to become overstocked through the successful establishment and growth of new stems, resulting in a preponderance of stands in the stem exclusion stage of development. Second, the absence of fire has allowed many stands to succeed to the understory reinitiation or shifting mosaic (i.e., late-seral) stages, instead of transitioning to the fire-maintained open canopy condition.


      Due to the above vegetation conditions and patterns, the current landscape appears to deviate substantially from the HRV in its susceptibility to at least four of the simulated insect/pathogen disturbances (Table-hrv-susceptibility). The current landscape appears especially vulnerable to pine beetle outbreaks, due to the preponderance of pine stands in the dense stem exclusion stage of development, and to spruce budworm and spruce beetle outbreaks, due to the preponderance of mixed-conifer and spruce-fir stands in the late-seral stages of development. The high susceptibility of the current landscape to pinyon decline is eminently clear from observations of the ongoing region-wide epidemic.


      Given the direct link between vegetation patterns and wildlife habitat, it is not surprising that the wildlife indicator species we analyzed also exhibited substantial departure from their simulated HRVs (Table-LC index). Of the species considered, the pine marten is the principal beneficiary of the current landscape departure. Extensive late-seral conifer forest in the higher elevations is likely providing ideal habitat conditions for this species (Buskirk and Powell 1994, Hargis et al. 1999). Three-toed woodpeckers also benefit from these conditions, even though this species is better adapted to exploit post-disturbance environments (Harris 1982, Hitchcox 1988, Hutto 1995). The three-toed woodpecker is the only species not exhibiting significant departure. Elk and olive-sided flycatchers are both disadvantaged in the current landscape. Both species benefit from edges between early- and late-seral vegetation patches. Specifically, elk benefit from the juxtaposition of forage (found in early-seral openings) and hiding cover (found in closed-canopy stands)(Reynolds 1966, Boyce and Hayden-Wing 1980, Thill et al. 1983), while olive-sided flycatchers use open areas as foraging habitat and use edges as nesting habitat (Finch and Reynolds 1988, Altman 1997). The paucity of disturbances over the past century has left the current landscape rather deprived of edge habitat and has reduced the overall interspersion and juxtaposition of the vegetation mosaic, with negative consequences on habitat capability for these two indicator species.

 

      Limitations.--Managers need to be cognizant of two important considerations when interpreting our HRV departure results. First, although it is clear that the current landscape structure is not within the modeled range of variability, the magnitude of the deviation is less clear. Specifically, inconsistencies in the spatial resolution of the initial cover type map (e.g., failure to delineate all small vegetation patches) may impose an artificial coarseness on the landscape structure that affects the computed values of most landscape metrics - at least the configuration metrics. In other words, the fine-grained heterogeneity in vegetation created by the disturbance processes in RMLANDS was probably not comparably represented in the Forest Service database, owing to human inconsistencies in mapping small vegetation patches. We took two precautions to guard against this problem. (1) In addition to analyzing the 25-m resolution maps of cover type and stand condition, we rescaled the output maps from RMLANDS to a 0.5-ha minimum mapping unit and analyzed these coarser-grained maps as well (i.e., the reclass maps). (2) We included a number of area-weighted configuration metrics that are relatively insensitive to small patches (see the sketch below). The rescaling was partially but not completely effective in accounting for these discrepancies; thus, we emphasized the area-weighted metrics when evaluating configuration departure. Note that the composition metrics (i.e., the percentage of the landscape in each class) are relatively immune to this issue. Given our reliance on the composition metrics and the emphasis we placed on the area-weighted configuration metrics, we feel that it is safe to conclude that the current landscape structure is well outside the modeled range of variability. However, it is important to be aware that our reported HRV departure indices, except for the seral-stage departure index (class level) and landscape composition departure index (landscape level), are probably biased high (i.e., inflated).
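      The insensitivity of area-weighted metrics to small patches follows directly from how they are computed: each patch is weighted by its proportional area, so many small patches contribute little to the class- or landscape-level value. The sketch below (Python) illustrates the idea with hypothetical patch shape-index values; it is a generic area-weighted mean, not a reproduction of any particular FRAGSTATS metric.

    def area_weighted_mean(metric_values, patch_areas):
        # Area-weighted mean of a patch-level metric: each patch is weighted by
        # its proportional area, so numerous small patches contribute little.
        total_area = float(sum(patch_areas))
        return sum(m * (a / total_area) for m, a in zip(metric_values, patch_areas))

    # Hypothetical shape-index values and areas (ha): one large patch plus many
    # small patches of the kind a mapping inconsistency might add or drop.
    shape_index = [2.8] + [1.1] * 20
    areas_ha    = [500.0] + [0.4] * 20

    simple_mean = sum(shape_index) / len(shape_index)  # pulled down by the small patches
    print(simple_mean, area_weighted_mean(shape_index, areas_ha))  # ~1.2 vs. ~2.8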


      Second, any conclusions regarding HRV departure depend on an accurate mapping of stand conditions in the current landscape. We noted two related problems in this respect. First, in the forested cover types, stand inventory data did not allow us to consistently and reliably discriminate between stands in the understory reinitiation and shifting mosaic stages. In particular, recorded stand ages were based on the age of the oldest trees in the stands, not the age since the last stand-replacing disturbance. By definition, the age since stand origin is always greater than the age of the oldest trees once the stand reaches the true shifting mosaic stage of development, but the size of the oldest (largest) trees may not be appreciably different between these stages. Thus, it is likely that a significant portion of the stands classified as being in the understory reinitiation condition are actually in the shifting mosaic stage of development. This bias would result in an inflated seral-stage departure index at the class level and an inflated landscape composition departure index at the landscape level. We took one precaution to guard against this problem: for purposes of HRV departure calculations, where appropriate, we combined the understory reinitiation and shifting mosaic stages into a single late-seral stage and reported these results in addition to the original results. Second, we altogether lack reliable age and stand condition data for several cover types. In particular, we have inadequate data for most non-forested types (e.g., mountain shrublands, mesic sagebrush, pinyon-juniper woodlands). Consequently, our initial assignment of stands to condition classes (seral stages) was based on interpolation from sparse data or on a random assignment based on seral-stage distributions estimated by local experts. In either case, we are not confident that our current condition estimates are accurate. Unfortunately, there was no way to guard against potentially spurious results in these cover types; hence, until more complete data on current stand age and condition are obtained, our results for these cover types must be viewed with extreme caution. In this regard, in the HRV summary table reported above we subjectively assigned a confidence level to each of our departure estimates. Note that this discrepancy does not affect our simulated HRV distribution, as we accounted for an equilibration period in the model.


      Management Implications.--Our simulations indicate that the current landscape structure deviates substantially from its historic range of variability and that the level of “departure” varies spatially across the forest in relation to differences among cover types (Figure-hrv-departure-map). In general, the current landscape is dominated by mid- to late-successional forest and lacks the fire-dependent stand conditions and spatial heterogeneity in vegetation that were maintained by natural disturbances during the reference period. This landscape condition appears to be largely a legacy of the last century of land management practices, in particular fire exclusion (Romme et al. 2003). Indeed, Euro-American activities have altered the disturbance regime of many western forest landscapes, resulting in substantial changes in landscape structure and function (e.g., Baker 1992; Wallin et al. 1996; Baisan and Swetnam 1997; Agee 1999; McGarigal et al. 2001). In the southern Rocky Mountains these effects have been less ubiquitous and less straightforward in high-elevation landscapes than in low-elevation landscapes (Romme et al. 2003). Lower elevations have been subject to substantially altered disturbance regimes for more than a century (Romme et al. 2003, Swetnam and Baisan 1996). Despite apparently little change in the natural disturbance regime in the high-elevation landscapes (e.g., Romme and Despain 1989; Bessie and Johnson 1995; Weir et al. 1995; Schmid and Mata 1996), other human activities since the late 1800s have clearly altered disturbance regimes and landscape structure (Hejl et al. 1995; Miller et al. 1996; Reed et al. 1996a,b; Tinker et al. 1997). These activities are related mainly to timber harvest and to the extensive network of roads constructed to support timber harvest, fire control, and recreation. In addition to these ubiquitous human impacts, the generally benign climate of the twentieth century was also a significant reason for the lack of large, stand-replacing disturbances, whether by fire or spruce beetle (Romme et al. 2003).


      Our findings are particularly interesting in light of increasing concern over anthropogenic habitat loss and fragmentation (Rochelle et al. 1999; Knight et al. 2000). Forest fragmentation has received considerable research attention in many regions of North America (e.g., Whitcomb et al. 1981; Robbins et al. 1989; Lehmkuhl and Ruggiero 1991; McGarigal and McComb 1995; Schmiegelow et al. 1997; Trzcinski et al. 1999; Villard et al. 1999). However, we are in the earliest stages of understanding the patterns, processes, and ecological significance of forest fragmentation in the southern Rocky Mountain region (Knight et al. 2000). It is not clear, for example, how the native biota responds to anthropogenic changes in landscape patterns caused by logging and road-building and disruption of natural disturbance regimes (e.g., fire suppression). This difficulty is exacerbated because Rocky Mountain landscapes are inherently very heterogeneous – a result of steep natural gradients in elevation, topography, and substrate – and forests in this region tend to be somewhat patchy even in the absence of human alterations (Hejl 1992).


      Based on our results, it might be tempting for managers to reach the simple conclusion that the landscape is less fragmented today than during the reference period. However, this conclusion is not as straightforward as it might seem, for the following reasons. First, fragmentation is a landscape-level process in which a specific habitat is progressively subdivided into smaller, geometrically altered, and more isolated fragments as a result of both natural and human activities (McGarigal and McComb 1999). This process involves changes in landscape composition, structure, and function at many scales, and it occurs against a backdrop of a natural patch mosaic created by changing landforms and natural disturbances. Of critical importance is the fact that fragmentation happens to a specific habitat type, not to the entire landscape mosaic, even though it operates at the landscape scale. Thus, landscapes do not get fragmented; specific habitats do. In our study, we evaluated the spatial pattern - and, by implication, the fragmentation - of many different patch types (defined by unique combinations of cover type and stand condition). Many of these patch types are indeed less fragmented in the current landscape than they were under the simulated HRV. This is true in general for most of the late-seral forest patch types. However, not all patch types are less fragmented in the current landscape. For example, many of the early-seral forest patch types are in fact much more fragmented in the current landscape than they were under the simulated HRV. Thus, conclusions about habitat fragmentation in the current landscape must be qualified with specific reference to one or more well-defined habitats.


      Second, we evaluated vegetation patterns in the current landscape after excluding roads (i.e., we removed roads from the land cover map by filling in those areas with the abutting cover type), in order to be consistent with our simulation of landscape structure changes during the reference period. Yet, of all the novel kinds of disturbances that humans have introduced into the forests of the southern Rocky Mountains during the last century, roads may be the most ubiquitous and significant long-term legacy of our activities (Romme et al. 2003). Roads are unprecedented features in the ecological history of these landscapes (Forman 1995) and potentially affect many ecological processes (Forman and Alexander 1998; Trombulak and Frissell 2000). In particular, roads are linear landscape features that can create high-contrast edges and bisect patches. Consequently, roads can cause greater fragmentation of habitats than the direct loss of habitat from associated land use activities (Reed et al. 1996b; Tinker et al. 1997; McGarigal et al. 2001). Given the ubiquitous nature of roads and their disproportionate influence on landscape structure and function, any conclusions regarding departure in relation to habitat fragmentation that do not consider road impacts should be viewed with extreme caution. Note that the impacts of roads on landscape structure will be addressed in our evaluation of alternative land management scenarios in the next phase of this project.


      Our simulations indicate that returning the landscape structure to a condition that falls within the simulated HRV would likely be a difficult and long-term undertaking, were it deemed desirable. We deduced this from the time it took the current landscape to equilibrate to the reference-period disturbance regime. The model equilibration period in many ways provides a direct measure of HRV departure; it is defined as the period required to return the initial landscape condition to a stable range of variation. It is a function not only of how far outside the stable range of variation the current landscape is, but also of the speed at which disturbance and succession processes interact to effect a change in the landscape trajectory. Thus, we can infer that if management activities were designed to emulate natural disturbance processes, it would take a length of time equal to the equilibration period to return the landscape to its HRV. In our simulations, most landscape structure metrics equilibrated within 100 years, although some metrics equilibrated faster and others slower. In particular, the configuration of the high-elevation conifer forest mosaic took considerably longer (up to 300 years) to equilibrate, owing to the long return interval between disturbances and the relatively slow rate of stand development. It must be emphasized, however, that this does not imply that our management goal should be to recreate all of the ecological conditions and dynamics of the reference period. Complete achievement of such a goal would be impossible, given the climatic, cultural, and ecological changes that have occurred in the last century. Moreover, the extent and intensity of disturbance required to emulate the natural disturbance regime would be unacceptable socially, economically, and politically.


Effects of Scale and Context


      The pattern detected in any ecological mosaic is a function of scale, and the ecological concept of scale encompasses both extent and grain (Forman and Godron 1986; Turner et al. 1989; Wiens 1989; Moody and Woodcock 1995). Extent and grain define the upper and lower limits of resolution of a study and any inferences about scale-dependency in a system are constrained by the extent and grain of investigation (Wiens 1989). In the analysis of landscape change, spatial scale is defined by the minimum patch size (grain) and the geographic extent of the landscape; temporal scale is defined by the minimum (grain) and total (extent) period over which landscape change is assessed. We cannot detect patterns or changes in patterns beyond the extent or below the resolution of the grain. This has important implications pertaining to the interpretation of our findings.


      First, we chose to examine landscape structure dynamics at two spatial resolutions: (1) 0.0625-ha (25 m cell size) minimum mapping unit, and (2) 0.5-ha minimum mapping unit. Note, in the coarse-grained representation, the cell size was maintained at 25 m - only the minimum mapping unit was increased. The 0.5-ha resolution was also used to reclassify cover types and stand conditions into a smaller set of aggregated classes in order to highlight habitats of special interest. We expected landscape composition estimates to be insensitive to spatial resolution and indeed this was the case. Surprisingly, the results pertaining to landscape configuration were largely insensitive to spatial resolution as well. In retrospect, it was apparent that small patches had a trivial impact on most configuration metrics and virtually no impact on the metrics selected for interpretation (i.e., area-weighted metrics). This is not to say that the fine-grained patterns of heterogeneity are not important ecologically, only that at the scale of the large landscape extents (10s-100s of thousands of hectares) we examined, the quantitative importance of the fine-grained patterns was dwarfed by the coarse-grained patterns created by the larger patches.


      Second, we designed RMLANDS to operate with a 10-year timestep. Thus, the minimum temporal resolution was fixed, and we saw no reason to examine coarser resolutions. In addition, we established the temporal extent of our simulations based on our desire to capture and describe a stable range of variation in landscape structure. Preliminary trials determined that an 800-year simulation, after accounting for a 100-year equilibration period, was necessary to reliably estimate the range of variation in each landscape metric. Again, we deemed it inappropriate for our purposes to examine shorter simulations, and it was statistically unnecessary to examine longer simulations. Thus, we did not vary the length of our simulations. However, it is important to recognize (and easy to demonstrate) that the measured range of variation in most metrics is sensitive to simulations shorter than some critical length (Figure-equilibration scale; see also the sketch below). The critical length varies among metrics and patch types, but it is safe to conclude that a minimum of 100-300 years is needed to capture the full range of variation in most metrics, and twice that long to confirm that it is stable. Thus, under a management strategy designed to emulate the natural disturbance regime, it would take 100-300 years for the landscape to fluctuate through its full range of conditions. This is a humbling thought given that most professional careers last no more than 30 years - a blip on the scale of landscape dynamics - and that most policies are geared toward 10- to 20-year planning horizons.
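      The sensitivity of the measured range of variation to simulation length can be demonstrated with a simple windowing exercise (Python). The metric series below is hypothetical, with slow, quasi-cyclic fluctuations of the kind driven by the climate modifier; the measured range grows as the analysis window lengthens from 100 to 800 years, which is the behavior referred to above.

    import numpy as np

    def measured_range(series, n_timesteps):
        # Range (max - min) of a metric over the first n timesteps after burn-in.
        window = np.asarray(series[:n_timesteps], dtype=float)
        return float(window.max() - window.min())

    # Hypothetical post-equilibration metric: 80 decades (800 years) of slow,
    # quasi-cyclic fluctuation plus noise.
    decades = np.arange(80)
    rng = np.random.default_rng(3)
    metric = 25 + 4 * np.sin(decades / 8.0) + rng.normal(0, 0.8, size=80)

    for years, steps in [(100, 10), (300, 30), (800, 80)]:
        print(years, round(measured_range(metric, steps), 1))  # range widens with length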


      Third, when we examined progressively smaller spatial units of the entire simulated landscape (Figure-map), temporal variability increased - as would be expected (Turner et al. 1993) - and there was an apparent threshold in the relationship between landscape extent and temporal variability (Figure-scale-land). Specifically, the magnitude of variability in landscape structure increased only modestly as the landscape extent decreased from the forest scale (847,638 ha) to the district scale (average = 282,546 ha), but increased dramatically as the landscape extent decreased to the watershed scale (average = 38,469 ha). A similar relationship was evident for three-toed woodpecker habitat capability, the only species for which we were able to complete the habitat capability analysis at all three scales (Figure-scale-wildlife). In addition, each of the sub-landscapes we examined was somewhat unique in its absolute range of variability in landscape structure. Not surprisingly, uniqueness was greatest for the smaller landscape extents (watersheds). Interestingly, despite the importance of landscape extent and context to the measured range of variability, the degree of departure of the current landscape from the simulated HRV was relatively invariant to scale and context.


      These results have important management implications. First, they demonstrate that no two landscapes in this mountainous region are identical; each has more or less unique characteristics of topography, vegetation, etc. that affect its dynamical behavior. Consequently, there is no one correct scale for assessing HRV. Second, while no one scale is necessarily more correct than another, our results do suggest that some scales may be more appropriate than others for characterizing HRV. Specifically, our results show that at extents larger than the district scale, the relative variability in landscape structure does not change much, but that at smaller extents, the variability increases dramatically. We interpret this to mean that at the district extent (and larger), the landscape is large enough to fully incorporate the disturbance regime and exhibit stable dynamical behavior. At this scale, our simulated system falls within the ‘stable, high variance’ portion of the state-space model developed by Turner et al. (1993). At increasingly smaller extents, the size of the largest disturbance events approaches and eventually exceeds the size of the landscape, producing major fluctuations in landscape structure. Ultimately, at very small extents the range of variability in landscape structure becomes so great as to be meaningless. Thus, we conclude that under the simulated disturbance regime, characterizing HRV is best done at the district or forest scale. Lastly, although the relative degree of dynamism is apparently stable at extents larger than the district scale, the absolute range of variation in landscape structure can vary among districts by more than 50%. Thus, each district exhibits a slightly different absolute range of variation in landscape structure. Most of these differences can be attributed to differences in landscape composition (Table-areal coverage). The challenge to managers is in deciding whether to give explicit recognition to these differences when establishing management direction, or to subsume them at the forest level on pragmatic grounds. We believe that it is probably sufficient to characterize HRV at the forest scale for purposes of general communication, but that it would be wise, where possible, to use the district-specific HRV results when setting management targets. In this manner, the spatial variation in ecological patterns and dynamics across the forest is given explicit consideration.


Concluding Remarks


      In closing, it is important to remember that our simulation study was intended to complement the detailed landscape condition analysis completed for the South Central Highlands Section of southwestern Colorado and northwestern New Mexico (Romme et al. 2003). Our study provides a detailed quantitative analysis of the simulated vegetation dynamics under the historic reference period that complements the detailed, but qualitative, landscape condition assessment of the previous report. Overall, our findings are in complete qualitative agreement with the previous assessment. In addition to enhancing our general understanding of landscape dynamics, our HRV results are of paramount use as a reference or benchmark for comparison with alternative future land management scenarios - the focus of the next phase of this project.


      As with any study, our results and conclusions must be interpreted within the scope and limitations of this study. In particular, our analyses were designed to simulate vegetation dynamics under a specific historic reference period. We chose as our reference period the interval from about 1300 to the late 1800s - the period from Anasazi abandonment to Euro-American settlement, often referred to as the period of indigenous settlement. Thus, our results pertain to landscape conditions during that period. More importantly, our results are based on a simulation model (RMLANDS), and this model, like any model, is an abstract and simplified representation of reality. Given the design limits of this model and the challenges of parameterizing such a complex model, our results should not be interpreted as “golden”. Rather, they should be used to help identify the most influential factors driving landscape change, identify critical empirical information needs, identify interesting system behavior (e.g., thresholds), identify the limits of our understanding, and explore “what if” scenarios.


Literature Cited