US Long-Term Ecological Research Network

Historical Plat Maps of Dane County Digitized and Converted to GIS (1962-2005)

Abstract
We constructed a time-series spatial dataset of parcel boundaries for the period 1962-2005, in roughly 4-year intervals, by digitizing historical plat maps for Dane County and combining them with the 2005 GIS digital parcel dataset. The resulting datasets enable the consistent tracking of subdivision and development for all parcels over a given time frame. The process involved 1) dissolving and merging the 2005 digital Dane County parcel dataset based on contiguity and name, 2) further merging 2005 parcels based on the hard copy 2005 Plat book, and then 3) the reverse chronological merging of parcels to reconstruct previous years, at 4-year intervals, based on historical plat books. Additional land use information such as 1) whether a structure was actually constructed (using the companion digitized aerial photo dataset), 2) cover crop, and 3) permeable surface area, can be added to these datasets at a later date.
Dataset ID
291
Date Range
-
Maintenance
Completed
Metadata Provider
Methods
Overview: Hard copy historical plat maps of Dane County in four-year intervals from 1962 to 2005 were digitized and converted to a GIS format using a process known as rectification, whereby control points are set such that a point placed on the scanned image takes on the coordinates of the point chosen from the earliest GIS dataset, which for Dane County is from 2005. After a number of control points are set, the map is assigned the coordinates of the 2005 GIS dataset. In this way, the scanned plat map is now an image file with a distinct spatial location. Since the scanned plat maps do not have any attributes associated with the parcels, the third step is to assign attributes by working backwards from the 2005 GIS dataset. This process begins by making a copy of the 2005 GIS dataset, then overlaying this new layer with the rectified scanned image. A subdivision choice is identified where the parcel lines on the GIS layer are not in agreement with the scanned plat maps. The last step is to modify the copy of the 2005 GIS layer so that it matches the underlying plat map, in effect creating a historical GIS layer corresponding to the year of the plat map. When the lines that delineate a parcel appear in the GIS file but not the plat map, the multiple small parcels in the 2005 GIS layer are merged together to represent the pre-subdivision parcel. This process is repeated for each historical year that plat maps are available. In the end, each time period (1974 through 2000 in 4-year intervals) has a GIS file with all of the spatial attributes of the parcels.

Land Atlas - Plat Books: The Land Atlas plat books were obtained for Dane County from the Madison Public Library, Stoughton Public Library and Robinson Map Library. With these materials on loan, the pages were scanned at 150 ppi in grayscale format; this process took place at the Robinson Map Library. 
Once scanned, these images were georeferenced based on the 2000 digital parcel map. This process of rectification was done in Russell Labs using ESRI ArcMap 9.3. Control points, such as road intersections, were chosen to accurately georeference the 1997 scanned parcel map (1973 was done in this way as well). This process was done using a specific ArcGIS tool (View/Toolbars/Georeferencing). For the other years, the scanned images were georeferenced based off the four corners of the 1997 georeferenced scanned images. Georeferencing off the 1997 rectified image allows for easier and quicker rectification but also facilitated detection of differences between the scanned plats. The scanned image of the land ownership could be turned on and off for easy comparison to the previous time set; these differences are the changes which were made on the digital ownership map. We scanned and digitized the following years:

Scanned plats: 1958, 1962, 1968, 1973, 1978, 1981, 1985, 1989, 1993, 1997, 2001, 2005
Digitized plats: 1962, 1968, 1973, 1978, 1981, 1985, 1989, 1993, 1997, 2001, 2005

Prepping the Parcel Map: Digital parcel shapefiles for the years 2000 and 2005 were provided by the Dane County Land Information Office (http://www.countyofdane.com/lio/metadata/Parcels.htm) and were used as the starting reference. These datasets needed to be prepared for use. Many single parcels were represented by multiple contiguous polygons. These were dissolved. (Multi-part, or non-contiguous, polygons were not dissolved.) Here is the process to dissolve by NAME_CONT (contact name): Many polygons do not have a contact name. The majority of Madison and other towns do not have NAME_CONT, but most large parcels do. In order not to dissolve all of the parcels for which NAME_CONT is blank, we did the following: Open the digital parcel shapefile and go to Selection/Select by Attributes. 
In this window choose the correct layer, choose the method "Create new selection", scroll and double-click NAME_CONT, then in the bottom box make sure it says [ "NAME_CONT" <> '' ] (without brackets). This will select all polygons which do not have an empty Name Contact attribute (empty value). The selected polygons were then aggregated based on the Name Contact field (parcels with the same Name Contact were combined) where borders were contiguous. To do this, the Dissolve tool in Data Management Tools/Generalization/Dissolve was used: dissolve on the field NAME_CONT and enter every other field into the statistics fields menu. This was done without the multipart feature option checked, resulting in parcels only being combined when they share a border. Keep these dissolved polygons highlighted. Once the dissolve process is complete, use the Select by Attributes tool again, but this time choose the method "Add to current selection" and enter [ "NAME_CONT" = '' ]. This will provide a digital layer of polygons aggregated by name as well as nameless polygons to be manually manipulated.

Parcel Map Manipulation: The goal from here was to, as accurately as possible, recreate a digital replica of the scanned parcel map, and aggregate up parcels with the same owner. This goal of replication is in regard to the linework, as opposed to the owner name or any other information, in order to accurately capture the correct area as parcel size changed. This process of moving boundaries was independent of merging parcels. If individual scanned parcel boundaries are different from the overlaid digital parcel shapefile, then the digital parcel linework must be changed. As this project utilizes both parcel shape and area, the parcels must be accurate. When merging parcels, parcels with the same owner name, the same owner connected on the plat map with an arrow, the same owner but separated by a road, or the same owner sharing a single point (two lots share a single point at the corner) were merged to create a multi-part feature. 
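The dissolve rule described above (combine parcels only when they are contiguous AND share a non-empty contact name, leaving blank-name parcels for manual handling) is essentially a connected-components computation. The following is a minimal, library-free sketch of that logic, not the ArcMap workflow actually used; the parcel IDs, names, and adjacency list are invented for illustration:

```python
def merge_parcels(names, adjacency):
    """Group parcel ids into merged parcels: two parcels join the same
    group only if they are adjacent AND share a non-empty contact name.

    names:     dict parcel_id -> contact name ('' if NAME_CONT is blank)
    adjacency: iterable of (id_a, id_b) pairs for parcels sharing a border
    """
    # Union-find over parcel ids.
    parent = {p: p for p in names}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path halving
            p = parent[p]
        return p

    for a, b in adjacency:
        # Blank NAME_CONT parcels are never dissolved (handled manually).
        if names[a] and names[a] == names[b]:
            parent[find(a)] = find(b)

    groups = {}
    for p in names:
        groups.setdefault(find(p), []).append(p)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical example: parcels 1 and 2 are adjacent with the same owner,
# parcel 3 has the same owner but is not adjacent, parcel 4 has no name.
names = {1: "SMITH", 2: "SMITH", 3: "SMITH", 4: ""}
adjacency = [(1, 2), (2, 4), (3, 4)]
print(merge_parcels(names, adjacency))  # [[1, 2], [3], [4]]
```

Note that, as in the ArcMap procedure, same-owner parcels that do not touch (parcel 3) remain separate at this stage.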
Parcels with the same owner separated by another parcel of a different owner, with no points touching, were not merged. This process of reverse digitization was done using ArcMap. The already dissolved shapefile was copied to create one file that was a historical record and one file to be edited to become the previous year (the next year back in time). With the digital parcel shapefile loaded, the rectified scanned plat maps were then added. Once open, turn on the Editor toolbar and Start Editing. The tools to use are the sketch tool and the merge tool. Quick keys were used (Editor toolbar/Customize) to speed this process. To edit, zoom to a comfortable level (1:12,000) and slowly move across the townships in a pattern which allows no areas to be missed (easiest to go township by township). When polygons needed to be reconstructed (the process of redrawing the parcel boundary linework), this was done using the sketch tool with either the "create new polygon" option or the "cut polygon" option in the Editor toolbar. Using the sketch tool, with the area highlighted, you can redraw the boundaries by cutting the polygons. Areas can be merged, then recut to depict the underlying parcel map. If, for example, a new development has gone in, many small parcels can be merged together to create a big parcel, and then that large parcel can be broken into the parcels that were originally combined to form the subdivision. We can do this because the names in the attribute table are not being preserved. This is a key note: THE OWNER NAME IS NOT A VARIABLE WE ARE CREATING, PRESERVING, OR OTHERWISE REPRESENTING. Once you merge the parcels, they will only maintain one of the names (and which name is maintained is essentially random). After the entire county is complete, go through again to check the new parcel shapefile; there will be mistakes. Snake through, going across the bottom one row of squares at a time. 
Examples of mistakes primarily include multi-part features that were exploded to change one part, where the other parts would need to be re-merged. Another common correction arose because we typically worked on one township at a time, whereas ownership often crossed townships; during this second pass, we corrected cross-township ownership at the edges of the two scanned parcel maps. Finally, some roads which had been built into parcels (driveways) needed to be removed, and these were not always caught during the first pass. Once the second run-through is complete, copy this shapefile so that it also has a backup.
Purpose
<p>Our purpose was to forecast detailed empirical distributions of the spatial pattern of land-use and ecosystem change and to test hypotheses about how economic variables affect land development in the Yahara watershed.</p>
Quality Assurance
<p>Accuracy was double-checked by visually comparing against the corresponding plat book twice.</p>
Short Name
Historical Plat Maps of Dane County
Version Number
14

WSC 2006 Spatial interactions among ecosystem services in the Yahara Watershed

Abstract
Understanding spatial distributions, synergies and tradeoffs of multiple ecosystem services (benefits people derive from ecosystems) remains challenging. We analyzed the supply of 10 ecosystem services for 2006 across a large urbanizing agricultural watershed in the Upper Midwest of the United States, and asked: (i) Where are areas of high and low supply of individual ecosystem services, and are these areas spatially concordant across services? (ii) Where on the landscape are the strongest tradeoffs and synergies among ecosystem services located? (iii) For ecosystem service pairs that experience tradeoffs, what distinguishes locations that are win-win exceptions from other locations? Spatial patterns of high supply for multiple ecosystem services often were not coincident: locations where six or more services were produced at high levels (upper 20th percentile) occupied only 3.3 percent of the landscape. Most relationships among ecosystem services were synergies, but tradeoffs occurred between crop production and water quality. Ecosystem services related to water quality and quantity separated into three different groups, indicating that management to sustain freshwater services along with other ecosystem services will not be simple. Despite overall tradeoffs between crop production and water quality, some locations were positive for both, suggesting that tradeoffs are not inevitable everywhere and might be ameliorated in some locations. Overall, we found that different areas of the landscape supplied different suites of ecosystem services, and their lack of spatial concordance suggests the importance of managing over large areas to sustain multiple ecosystem services. <u>Documentation</u>: Refer to the supporting information of the following paper for full details on data sources, methods and accuracy assessment: Qiu, Jiangxiao, and Monica G. Turner. 
"Spatial interactions among ecosystem services in an urbanizing agricultural watershed." <em>Proceedings of the National Academy of Sciences</em> 110.29 (2013): 12149-12154.
Contact
Dataset ID
290
Date Range
-
Maintenance
Completed
Metadata Provider
Methods
Each ecosystem service was quantified and mapped by using empirical estimates and spatially explicit models for the terrestrial landscape of the Yahara Watershed for 2006.

Crop production (expected annual crop yield, bu per yr): Crop yield was estimated for the four major crop types (corn, soybean, winter wheat and oats), which account for 98.5 percent of the cultivated land in the watershed, by overlaying maps of crop types and soil-specific crop yield estimates. The spatial distribution of each crop was obtained from the 2006 Cropland Data Layer (CDL) from the National Agricultural Statistics Service (NASS), and soil productivity data were extracted from the Soil Survey Geographic (SSURGO) database. Crop and soil data were converted to 30 m resolution and the two maps were overlain to estimate crop yield in each cell. For each crop-soil combination, crop area was multiplied by the estimated yield per unit area. Estimates for each crop type were summed to map estimated crop yield for 2006.

Pasture production (expected annual forage yield, animal-unit-months per year): As for crop production, forage yield was estimated by overlaying the distribution of all forage crops (alfalfa, hay and pasture/grass) and soil-specific yield estimates. The spatial distribution of each forage crop was also derived from the 2006 CDL and rescaled to a 30 m grid prior to calculation. The SSURGO soil productivity layer provided estimates of potential annual yield per unit area for each forage crop. Overlay analyses were performed for each forage-soil combination, as done for crops, and summed to obtain the total expected forage yield in the watershed for 2006.

Freshwater supply (annual groundwater recharge, cm per year): Groundwater recharge was quantified and mapped using the modified Thornthwaite-Mather Soil-Water-Balance (SWB) model. 
SWB is a deterministic, physically based and quasi-three-dimensional model that accounts for precipitation, evaporation, interception, surface runoff, soil moisture storage and snowmelt. Groundwater recharge was calculated on a grid-cell basis at a daily time step with the following mass balance equation:

Recharge = (precipitation + snowmelt + inflow) - (interception + outflow + evapotranspiration) - delta soil moisture

We ran the model for three years (2004 to 2006) at 30 m resolution, with the first two years as spin-up of antecedent conditions (e.g. soil moisture and snow cover) that influence groundwater recharge for the focal year of 2006.

Carbon storage (metric tons per ha): We estimated the amount of carbon stored in each 30 m cell in the Yahara Watershed by summing four major carbon pools: aboveground biomass, belowground biomass, soil carbon and deadwood/litter. Our quantification for each pool was based mainly on carbon estimates from the IPCC tier-I approach and other published field studies of carbon density, and was estimated by land-use/cover type.

Groundwater quality (probability of groundwater nitrate concentration greater than 3.0 mg per liter, unitless, 0 to 1): Groundwater nitrate data were obtained from the Groundwater Retrieval Network (GRN), Wisconsin Department of Natural Resources (DNR). A total of 528 shallow groundwater well (well depth less than the depth from surface to Eau Claire shale) nitrate samples collected in 2006 were used for our study. We performed kriging analysis to interpolate the spatial distribution of the probability of groundwater nitrate concentration greater than 3 mg per liter. We mapped the interpolation results at a 30 m spatial resolution using the Geostatistical Analyst extension in ArcGIS (ESRI). In this map, areas with lower probability values provided more groundwater quality service, and vice versa. 
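The daily mass-balance bookkeeping behind the SWB recharge estimate can be illustrated with a toy calculation; the values below are invented, and the actual model tracks these terms per 30 m cell per day:

```python
def daily_recharge(precip, snowmelt, inflow, interception, outflow,
                   evapotranspiration, delta_soil_moisture):
    """One day's recharge (cm) for one grid cell, following
    Recharge = (P + snowmelt + inflow)
             - (interception + outflow + ET) - delta soil moisture."""
    return ((precip + snowmelt + inflow)
            - (interception + outflow + evapotranspiration)
            - delta_soil_moisture)

# Invented example values (cm) for a single wet day:
r = daily_recharge(precip=2.0, snowmelt=0.5, inflow=0.2,
                   interception=0.3, outflow=0.6,
                   evapotranspiration=0.4, delta_soil_moisture=0.9)
print(round(r, 2))  # 0.5
```

Annual recharge for 2006 is then just this quantity accumulated over the focal year's daily time steps.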
Surface water quality (annual phosphorus loading, kg per hectare): We adapted a spatially explicit, scenario-driven modeling tool, Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST), to simulate discharge of nonpoint-source phosphorus. A grid cell's phosphorus contribution was quantified as a function of water yield index, land use/cover, export coefficient, and downslope retention ability with the following equation:

Exp_x = ALV_x * product from y = x+1 to X of (1 - E_y)

where ALV_x is the adjusted phosphorus export from pixel x, E_y is the filtration efficiency of each downstream pixel y, and X represents the phosphorus transport route from where it originated to the downstream water bodies. Filtration efficiency was assigned by cover type: natural vegetation was assigned a high value, semi-natural vegetation an intermediate value, and developed or impervious covers were assigned low values. We ran the model for 2006 and mapped estimated phosphorus loading across the watershed. The ecosystem service of providing high-quality surface water was the inverse of phosphorus loading: areas with lower phosphorus loading values delivered more surface water quality, and areas with higher phosphorus loading values supplied less.

Soil retention (annual sediment yield, metric tons per hectare): We quantified annual sediment yield as the (inverse) indicator for soil retention by using the Modified Universal Soil Loss Equation (MUSLE). MUSLE is a storm-event-based model that estimates sediment yield as a function of runoff factor, soil erodibility, geomorphology, land use/cover and land management. 
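The export equation can be sketched numerically: the phosphorus leaving a pixel is its adjusted loading attenuated by the filtration efficiency of every pixel on its downstream flow path (reading the equation's product notation as a cumulative product along that path; the numbers below are invented for illustration):

```python
def exported_phosphorus(alv_x, downstream_efficiencies):
    """Exp_x = ALV_x * product over downstream pixels y of (1 - E_y).

    alv_x: adjusted phosphorus export (loading) from pixel x
    downstream_efficiencies: filtration efficiency E_y (0-1) of each
        pixel along the flow path from x to the receiving water body
    """
    exported = alv_x
    for e_y in downstream_efficiencies:
        exported *= (1.0 - e_y)  # each downstream cell retains a fraction
    return exported

# Invented example: a cropland pixel draining through natural vegetation
# (high retention, E=0.5), then developed land (low retention, E=0.1).
print(exported_phosphorus(1.0, [0.5, 0.1]))  # 0.45
```

This makes the cover-type assignments concrete: routing the same loading through high-efficiency natural vegetation removes far more phosphorus than routing it across impervious cover.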
Specifically, a grid cell's contribution of sediment for a given storm event is calculated as:

Sed = 11.8 * (Q * q_p)^0.56 * K * LS * C * P

where Sed represents the amount of sediment transported to the downstream network (metric tons), Q is the surface runoff volume (m^3), q_p is the peak flow rate (m^3 per s), K is soil erodibility, which is based on organic matter content, soil texture, permeability and profiles, LS is the combined slope length and steepness factor, and C * P is the product of the plant cover factor and its associated management practice factor. We used the ArcSWAT interface of the Soil and Water Assessment Tool (SWAT) to perform all the simulations. We ran this model at a daily time step from 2004 to 2006, with the first two years as spin-up, then mapped total sediment yield for 2006 across the watershed. Similar to surface water quality, the ecosystem service of soil retention was the inverse of sediment yield. In this map, areas with lower sediment yield provided more of this service, and areas with higher sediment yield delivered less.

Flood regulation (flood regulation capacity, unitless, 0 to 100): We used the capacity assessment approach to quantify the flood regulation service based on four hydrological parameters: interception, infiltration, surface runoff and peak flow. We first applied the Kinematic Runoff and Erosion (KINEROS) model to derive estimates of three parameters (infiltration, surface runoff and peak flow) for six sampled sub-basins in this watershed. KINEROS is an event-oriented, physically based, distributed model that simulates interception, infiltration, surface runoff and erosion at sub-basin scales. In each simulation, a sub-basin was first divided into smaller hydrological units. For the given pre-defined storm event, the model then calculated the amount of infiltration, surface runoff and peak flow for each unit. 
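The MUSLE relation is straightforward to evaluate once its factors are known; a sketch with invented inputs (the actual runs were performed through ArcSWAT, not by hand):

```python
def musle_sediment_yield(runoff_volume_m3, peak_flow_m3s, k, ls, c, p):
    """Sed = 11.8 * (Q * q_p)**0.56 * K * LS * C * P  (metric tons).

    Q in m^3, q_p in m^3/s; K, LS, C and P are the dimensionless
    erodibility, slope, cover and practice factors described above.
    """
    return 11.8 * (runoff_volume_m3 * peak_flow_m3s) ** 0.56 * k * ls * c * p

# Invented storm event for a single hydrologic response unit:
sed = musle_sediment_yield(runoff_volume_m3=500.0, peak_flow_m3s=0.8,
                           k=0.28, ls=1.2, c=0.2, p=0.9)
print(round(sed, 2))
```

Because the factors multiply, halving the cover factor C (e.g. denser vegetation) halves the predicted event sediment yield, which is why land use/cover dominates the spatial pattern of this service.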
Second, we classified these estimates into 10 discrete capacity classes ranging from 0 to 10 (0 indicates no capacity and 10 the highest capacity), united units with the same capacity values, and overlaid them with the land cover map. Third, we calculated the distribution of all land use/cover classes within every spatial unit (with a particular capacity). We then assigned each land use/cover a capacity parameter based on its dominance (in percentage) within all capacity classes. As a result, every land use/cover was assigned a 0 to 10 capacity value for infiltration, surface runoff and peak flow. This procedure was repeated for the six sub-basins, and derived capacity values were averaged by cover type. We applied the same procedure to soil data and derived averaged capacity values for each soil type with the same set of three parameters. In addition, we obtained interception values from published studies for each land use/cover and standardized them to the same 0 to 10 range. Finally, the flood regulation capacity (FRC) for each 30 m cell was calculated with the equation below:

FRC = sum over land use/cover classes of (interception + infiltration + runoff + peak flow) + sum over soil classes of (infiltration + runoff + peak flow)

To simplify interpretation, we rescaled the original flood regulation capacity values to a range of 0 to 100, with 0 representing the lowest regulation capacity and 100 the highest.

Forest recreation (recreation score, unitless, 0 to 100): We quantified the forest recreation service as a function of the amount of forest habitat, recreational opportunities provided, proximity to population centers, and accessibility of the area for each 30 m grid cell with the equation below:

FRS_i = A_i * (Oppt_i + Pop_i + Road_i)

where FRS is the forest recreation score, A is the area of forest habitat, Oppt represents the recreation opportunities, Pop is the proximity to population centers, and Road stands for the distance to major roads. 
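The final FRC sum amounts to a lookup-and-add over a cell's cover and soil classes. A minimal sketch follows; every capacity score in the tables below is an invented placeholder on the 0-10 scale described above, not a value from the study:

```python
# Hypothetical 0-10 capacity scores per land use/cover class:
# (interception, infiltration, runoff, peak flow)
COVER_CAPACITY = {
    "forest":    (9, 8, 9, 9),
    "cropland":  (4, 5, 4, 4),
    "developed": (1, 1, 1, 1),
}
# Hypothetical 0-10 capacity scores per soil class:
# (infiltration, runoff, peak flow)
SOIL_CAPACITY = {
    "loam": (7, 6, 6),
    "clay": (3, 2, 2),
}

def flood_regulation_capacity(cover, soil):
    """FRC = sum of cover-class capacities + sum of soil-class capacities."""
    return sum(COVER_CAPACITY[cover]) + sum(SOIL_CAPACITY[soil])

raw = flood_regulation_capacity("forest", "loam")  # 35 + 19 = 54
# Rescale to 0-100: the raw maximum on these scales is 4*10 + 3*10 = 70.
print(round(100 * raw / 70))  # 77
```

The rescaling step mirrors the 0-100 normalization applied to the study's own FRC values.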
To simplify interpretation, we rescaled the original forest recreation score (ranging from 0 to 5200) to a range of 0 to 100, with 0 representing no forest recreation service and 100 representing the highest service. Several assumptions were made for this assessment approach: larger areas and places with more recreational opportunities would provide more recreational service, areas near large population centers would be visited and used more than remote areas, and proximity to major roads would increase access and thus recreational use of an area.

Hunting recreation (recreation score, unitless, 0 to 100): We applied the same procedure used for forest recreation to quantify the hunting service. Due to limited access to information regarding private land used for hunting, we only included public lands, mainly state parks, in this assessment. The hunting recreation service was estimated as a function of the extent of wildlife areas open for hunting, the number of game species, proximity to population centers, and accessibility for each 30 m grid cell with the following equation:

HRS_i = A_i * (Spe_i + Pop_i + Road_i)

where HRS is the hunting recreation score, A is the area of public wild areas open for hunting and fishing, Spe represents the number of game species, Pop stands for the proximity to population centers, and Road is the distance to major roads. To simplify interpretation, we rescaled the original hunting recreation score (ranging from 0 to 28000) to a range of 0 to 100, with 0 representing no hunting recreation service and 100 representing the highest service. Similar assumptions were made for this assessment: larger areas and places with more game species would support more hunting, areas closer to large population centers would be used more than remote areas, and proximity to major roads would increase access and use of an area.
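Both recreation scores share the same multiplicative form (area times a sum of component scores) followed by a linear rescale to 0-100. A toy evaluation, with all component values invented for illustration:

```python
def recreation_score(area, *components):
    """FRS_i / HRS_i = A_i * (sum of component scores)."""
    return area * sum(components)

def rescale(score, original_max, new_max=100.0):
    """Linearly map a raw score from [0, original_max] onto [0, new_max]."""
    return new_max * score / original_max

# Invented forest cell: habitat-area weight 10; opportunity, population-
# proximity, and road-proximity components of 8, 6, and 12.
raw = recreation_score(10, 8, 6, 12)               # 260
print(round(rescale(raw, original_max=5200), 1))   # 5.0
```

The multiplicative area term means a cell with no qualifying habitat (A = 0) scores zero regardless of how accessible or opportunity-rich it is, matching the stated assumptions.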
Short Name
Ecosystem services in the Yahara Watershed
Version Number
20

LTREB Biological Limnology at Lake Myvatn 2012-current

Abstract
These data are part of a long-term monitoring program in the central part of Myvatn that represents the dominant habitat, with benthos consisting of diatomaceous ooze. The program was designed to characterize important benthic and pelagic variables across years as midge populations varied in abundance. Starting in 2012, samples were taken at roughly weekly intervals during June, July, and August, which corresponds to the summer generation of the dominant midge, <em>Tanytarsus gracilentus</em>.
Creator
Dataset ID
296
Date Range
-
Maintenance
Ongoing
Metadata Provider
Methods
Benthic Chlorophyll Field sampling (5 samples) (2012, 2013)1. Take 5 cores from the lake2. Cut the first 0.75 cm (1 chip) of the core with the extruder and place in deli container. Label with date and core number.3. Place deli containers into opaque container (cooler) and return to lab. This is the same sample that is used for the organic matter analysis.In 2014, the method for sampling benthic chlorophyll changed. The calculation of chlorophyll was changed to reflect the different area sampled. Below is the pertinent section from the methods protocols. Processing after the collection of the sample was not changed.Take sediment samples from the 5 cores collected for sediment characteristics. Take 4 syringes of sediment with 10mL syringe (15.96mm diameter). Take 4-5cm of sediment. Then, remove bottom 2cm and place top 2cm in the film canister.Filtering1. Measure volume of material in deli container with 60mL syringe and record.2. Homogenize and take 1mL sample with micropipette. The tip on the micropipette should be cut to avoid clogging with diatoms. Place the 1mL sample in a labeled film canister. Freeze sample at negative 20 degrees Celsius unless starting methanol extraction immediately.3. Add 20mL methanol. This methanol can be kept cool in the fridge, although then you will need a second bottle of methanol for the fluorometer. Shake for 5 sec.4. After 6-18 hours, shake container for 5 sec.Fluorometer1. Allow the film canisters to sit at room temperature for approximately 15 min to avoid excessive condensation on the glass tubes. Shake tubes for 5 sec after removing from fridge but then be careful to let them settle before removing sample.2. Record the sample information for all of the film canisters on the data sheet.3. Add 4mL of sample to a 13x100mL glass tube.4. Insert the sample into the fluorometer and record the reading in the Fluor Before Acid column. 
The sample reading should be close to one of the secondary solid standards (42ug/L or 230ug/L), if not, dilute the sample to within 25 per cent of the secondary solid standards (30-54ug/L or 180-280ug/L). It is a good idea to quickly check 2mL of a sample that is suspected to be too high to get an idea if other samples may need to be diluted. If possible, read the samples undiluted.5. If a sample needs to be diluted, use a 1000 microLiter pipette and add 2mL of methanol to a tube followed by 2mL of undiluted sample. Gently invert the tube twice and clean the bottom with a paper towel before inserting it into the fluorometer. If the sample is still outside of the ranges above, combine 1 mL of undiluted sample with 3 mL of methanol. Be sure to record the dilution information on the data sheet.6. Acidify the sample by adding 120microLiters of 0.1 N HCl (30microLiters for every one mL of sample). Then gently invert the sample and wait 90 seconds (we used 60 seconds in 2012, the protocol said 90) before putting the sample into the fluorometer and recording the reading in the Fluor After Acid column. Be sure to have acid in each tube for exactly the same amount of time. This means doing one tube at a time or spacing them 30-60 seconds apart.7. Double check the results and redo samples, which have suspicious numbers. Make sure that the after-acidification values make sense when compared to the before acidification value (the before acid/after acid ratio should be approximately the same for all samples).Clean up1. Methanol can be disposed of down the drain as long as at least 50 times as much water is flushed.2. Rinse the film canisters and lids well with tap water and scrub them out with a bottle brush making sure to remove any remaining filter paper. Give a final rinse with distilled water. Pelagic Chlorophyll Field sampling (5 samples)1. Take 2 samples at each of three depths, 1, 2, and 3m with Arni&rsquo;s zooplankton trap. 
For the 1m sample, drop the trap to the top of the chain. Each trap contains about 2.5L of water when full. 2. Empty into bucket by opening the bottom flap with your hand.3. Take bucket to lab.Filtering1. Filter 1L water from integrated water sample (or until the filter is clogged) through the 47 mm GF/F filter. The pressure used during filtering should be low ( less than 5 mm Hg) to prevent cell breakage. Filtering and handling of filters should be performed under dimmed lighting.2. Remove the filter with forceps, fold it in half (pigment side in), and put it in the film canister. Take care to not touch the pigments with the forceps.3. Add 20mL methanol. This methanol can be kept cool in the fridge, although then you will need a second bottle of methanol for the fluorometer. Shake for 5 sec. and place in fridge.4. After 6-18 hours, shake container for 5 sec.5. Analyze sample in fluorometer after 24 hours.Fluorometer1. Allow the film canisters to sit at room temperature for approximately 15 min to avoid excessive condensation on the glass tubes. Shake tubes for 5 sec after removing from fridge but then be careful to let them settle before removing sample.2. Record the sample information for all of the film canisters on the data sheet.3. Add 4mL of sample to a 13x100mL glass tube.4. Insert the sample into the fluorometer and record the reading in the Fluor Before Acid column. The sample reading should be close to one of the secondary solid standards (42ug/L or 230ug/L), if not, dilute the sample to within 25 percent of the secondary solid standards (30-54ug/L or 180-280ug/L). It is a good idea to quickly check 2mL of a sample that is suspected to be too high to get an idea if other samples may need to be diluted. If possible, read the samples undiluted.5. If a sample needs to be diluted, use a 1000uL pipette and add 2mL of methanol to a tube followed by 2mL of undiluted sample. 
Gently invert the tube twice and clean the bottom with a paper towel before inserting it into the fluorometer. If the sample is still outside of the ranges above, combine 1 mL of undiluted sample with 3 mL of methanol. Be sure to record the dilution information on the data sheet.6. Acidify the sample by adding 120 microLiters of 0.1 N HCl (30 microLiters for every one mL of sample). Then gently invert the sample and wait 90 seconds (we used 60 seconds in 2012, the protocol said 90) before putting the sample into the fluorometer and recording the reading in the Fluor After Acid column. Be sure to have acid in each tube for exactly the same amount of time. This means doing one tube at a time or spacing them 30-60 seconds apart.7. Double check the results and redo samples, which have suspicious numbers. Make sure that the after-acidification values make sense when compared to the before acidification value (the before acid/after acid ratio should be approximately the same for all samples).Clean up1. Methanol can be disposed of down the drain as long as at least 50 times as much water is flushed.2. Rinse the film canisters and lids well with tap water and scrub them out with a bottle brush making sure to remove any remaining filter paper. Give a final rinse with distilled water. Pelagic Zooplankton Counts Field samplingUse Arni&rsquo;s zooplankton trap (modified Schindler) to take 2 samples at each of 1, 2, and 3m (6 total). For the 1m sample, drop the trap to the top of the chain. Each trap contains about 2.5L of water when full. Integrate samples in bucket and bring back to lab for further processing.Sample preparation in lab1. Sieve integrated plankton tows through 63&micro;m mesh and record volume of full sample2. Collect in Nalgene bottles and make total volume to 50mL3. Add 8 drops of lugol to fix zooplankton.4. Label bottle with sample date, benthic or pelagic zooplankton, and total volume sieved. 
Samples can be stored in the fridge until time of countingCounting1. Remove sample from fridge2. Sieve sample with 63 micro meter mesh over lab sink to remove Lugol&rsquo;s solution (which vaporizes under light)3. Suspend sample in water in sieve and flush from the back with squirt bottle into counting tray4. Homogenize sample with forceps or plastic pipette with tip cut off5. Identify (see zooplankton identification guide) using backlit microscope and count with multiple-tally counter. i. Set magnification so that you can see both top and bottom walls of the tray. ii. Change focus depth to check for floating zooplankton that must be counted as well.6. Pipette sample back into Nalgene bottle, add water to 50mL, add 8 drops Lugol&rsquo;s solution, and return to fridgeSubsamplingIf homogenized original sample contains more than 500 individuals in the first line of counting tray, you may subsample under the following procedure.1. Return original sample to Nalgene bottle and add water to 50mL2. Homogenize sample by swirling Nalgene bottle3. Collect 10mL of zooplankton sample with Hensen-Stempel pipette4. Empty contents of Hensen-Stempel pipette into large Bogorov tray5. Homogenize sample in tray with forceps or plastic pipette with tip cut off6. Identify (see zooplankton identification guide) using backlit microscope and count with multiple-tally counter. i. Set magnification so that you can see both top and bottom walls of the tray. ii. Change focus depth to check for floating zooplankton that must be counted, too! 7. Pipette sample back into Nalgene bottle, add water to 50mL, add 8 drops Lugol&rsquo;s solution, and return to fridge Benthic Microcrustacean Counts Field samplingLeave benthic zooplankton sampler for 24h. Benthic sampler consists of 10 inverted jars with funnel traps in metal grid with 4 feet. Set up on bench using feet (on side) to get a uniform height of the collection jars (lip of jar = 5cm above frame). 
Upon collection, pull the sampler STRAIGHT up, remove the jars, homogenize in a bucket, and bring back to the lab. Move the boat slightly to avoid placing the sampler directly over cored sediment.
Sample preparation in lab
1. Sieve integrated samples through 63 µm mesh and record the volume of the full sample.
2. Collect in Nalgene bottles and bring the total volume to 50 mL.
3. Add 8 drops of Lugol's solution to fix zooplankton.
4. Label the bottle with sample date, benthic or pelagic zooplankton, and total volume sieved. Samples can be stored in the fridge until time of counting.
Counting and subsampling
Counting and subsampling follow the same procedure as for pelagic zooplankton (above).
Chironomid Counts (2012, 2013)
For first instar chironomids in the top 1.5 cm of sediment only (5 samples):
1. Use the sink hose to sieve sediment through 63 µm mesh. You may use moderate pressure to break up tubes.
2. Back flush sieve contents into a small deli container.
3. Return the label to the deli cup (sticking it to the underside of the lid works well).
For later instar chironomids in the 1.5-11.5 cm section (5 samples):
4. Sieve with 125 µm mesh in the field.
5. Sieve through 125 µm mesh again in the lab to reduce the volume of the sample.
6. Transfer sample to a deli container or pitfall counting tray.
For all chironomid samples:
7. Under a dissecting scope, pick through sieved contents for midge larvae. You may have to open tubes with forceps in order to check for larvae inside.
8. Remove larvae with forceps while counting, and place them into a vial containing 70 percent ethanol. Larvae will eventually be sorted into taxonomic groups (see key). You may sort them into taxonomic groups as you pick the larvae, or you can identify the larvae while measuring head capsules if chironomid densities are low (under 50 individuals per taxonomic group).
9. For a random sample of up to 50 individuals of each taxonomic group, measure the head capsule; see Chironomid Size (head capsule width).
10. Archive samples from each sampling date together in a single 20 mL glass vial with screw cap in 70 percent ethanol and label with sample contents, Chir, sample date, lake ID, station ID, and number of cores.
Chironomid Counts (2014)
In 2014, the method for sampling chironomid larvae changed starting with the sample on 2014-06-27; the variable "top_bottom" is coded as a 2. In contrast to previous measurements, the top and bottom core samples were combined and then subsampled.
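Step 9 of the 2012-2013 chironomid counts above draws a random sample of up to 50 individuals per taxonomic group for head-capsule measurement; groups with 50 or fewer individuals are measured in full. A minimal sketch of that draw, with an illustrative function name that is not part of the protocol:

```python
import random

def measurement_subsample(larvae, n=50, seed=0):
    """Random sample of up to n larvae from one taxonomic group for
    head-capsule measurement; smaller groups are returned whole."""
    larvae = list(larvae)
    if len(larvae) <= n:
        return larvae
    # Fixed seed only so the draw is reproducible in this sketch.
    return random.Random(seed).sample(larvae, n)

# A group of 120 larvae -> 50 measured; a group of 30 -> all 30 measured.
print(len(measurement_subsample(range(120))))  # 50
print(len(measurement_subsample(range(30))))   # 30
```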
Below is the pertinent section of the protocol. Chironomid samples should be counted within 24 hours of collection. This ensures that larvae are as active and easily identified as possible, and also prevents predatory chironomids from consuming other larvae. Samples should be refrigerated upon returning from the field.
For first instar chironomids in the top 1.5 cm of sediment only (5 samples):
1. Use the sink hose to sieve sediment through 63 µm mesh. You may use moderate pressure to break up tubes.
2. Back flush sieve contents into a small deli container using a water bottle.
3. Return the label to the deli cup (sticking it to the underside of the lid works well).
For later instar chironomids in the 1.5-11.5 cm section (5 samples):
4. Sieve with 125 µm mesh in the field.
5. Sieve through 125 µm mesh again in the lab to reduce the volume of the sample and break up tubes.
6. Transfer sample to a deli container with the appropriate label.
Subsample if necessary
If necessary, subsample with the following protocol.
a. Combine top and bottom samples from each core (1-5) in the midge sample splitter.
b. Homogenize the sample thoroughly, collect one half in a deli container, and label the container with the core number and "1/2".
c. If necessary, split the half that remains in the splitter into quarters, and collect each in deli containers labeled with the core number, "1/4", and replicate 1 or 2.
d. Store all deli containers in the fridge until counted, and save them until all counting is complete.
Chironomid Size (head capsule width)
1. Obtain picked samples preserved in ethanol and empty onto a petri dish.
2. Sort larvae by family groups, arranging them in the same orientation for easy measurement.
3. Set magnification to 20x (x50 with diopter).
4. Take measurements for up to 50 individuals of each taxon. Round to the nearest optical micrometer unit.
5. Fill out the data sheet with the number of larvae in each taxon, the chironomid measurements for each taxon, the date of the sample, the station the sample was taken from, which core the sample came from, who picked the core, and your name as the measurer.
6. Enter data into the shared sheet.
See "Chironomid Counts (2014)" for changes in sampling chironomid larvae in 2014.
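Both subsampling schemes above scale counts the same way: divide the counted number by the fraction of the sample actually counted. For zooplankton, a 10 mL Hensen-Stempel subsample of a 50 mL bottle is a fraction of 1/5; for chironomids, the labeled splits are "1/2" or "1/4". A minimal sketch of this arithmetic, with an illustrative function name not taken from the protocol:

```python
def scale_to_whole_sample(count, fraction_counted):
    """Estimate the whole-sample count from the fraction that was counted."""
    return count / fraction_counted

# Zooplankton: 120 individuals in a 10 mL subsample of a 50 mL bottle.
print(scale_to_whole_sample(120, 10 / 50))  # 600.0
# Chironomids: 35 larvae counted in a "1/4" split of the combined core.
print(scale_to_whole_sample(35, 0.25))      # 140.0
```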
Version Number
17