US Long-Term Ecological Research Network

Historical Plat Maps of Dane County Digitized and Converted to GIS (1962-2005)

Abstract
We constructed a time-series spatial dataset of parcel boundaries for the period 1962-2005, in roughly 4-year intervals, by digitizing historical plat maps for Dane County and combining them with the 2005 GIS digital parcel dataset. The resulting datasets enable the consistent tracking of subdivision and development for all parcels over a given time frame. The process involved 1) dissolving and merging the 2005 digital Dane County parcel dataset based on contiguity and name, 2) further merging 2005 parcels based on the hard copy 2005 Plat book, and then 3) the reverse chronological merging of parcels to reconstruct previous years, at 4-year intervals, based on historical plat books. Additional land use information such as 1) whether a structure was actually constructed (using the companion digitized aerial photo dataset), 2) cover crop, and 3) permeable surface area, can be added to these datasets at a later date.
Dataset ID
291
Date Range
-
Maintenance
Completed
Metadata Provider
Methods
Overview: Hard copy historical plat maps of Dane County in four-year intervals from 1962 to 2005 were digitized and converted to a GIS format using a process known as rectification, whereby control points are set such that a point placed on the scanned image takes on the coordinates of the point chosen from the earliest GIS dataset, which for Dane County is from 2005. After a number of control points are set, the map is assigned the coordinates of the 2005 GIS dataset. In this way, the scanned plat map is now an image file with a distinct spatial location. Since the scanned plat maps do not have any attributes associated with the parcels, the third step is to assign attributes by working backwards from the 2005 GIS dataset. This process begins by making a copy of the 2005 GIS dataset, then overlaying this new layer with the rectified scanned image. A subdivision choice is identified where the parcel lines on the GIS layer are not in agreement with the scanned plat maps. The last step is to modify the copy of the 2005 GIS layer so that it matches the underlying plat map, in effect creating a historical GIS layer corresponding to the year of the plat map. When the lines that delineate a parcel appear in the GIS file but not the plat map, the multiple small parcels in the 2005 GIS layer are merged together to represent the pre-subdivision parcel. This process is repeated for each historical year that plat maps are available. In the end, each time period (1974 through 2000 in 4-year intervals) has a GIS file with all of the spatial attributes of the parcels.

Land Atlas - Plat Books: The Land Atlas plat books were obtained for Dane County from the Madison Public Library, Stoughton Public Library, and Robinson Map Library. With these materials on loan, the pages were scanned at 150 ppi in grayscale format; this process took place at the Robinson Map Library.
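The rectification step described above can be sketched numerically: given pairs of control points (pixel coordinates on the scanned plat and the matching coordinates chosen from the 2005 GIS dataset), a least-squares affine transform assigns every pixel a map coordinate. This is a minimal illustration of the idea, not the ArcMap implementation; the control-point values below are invented.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src (pixel) to dst (map) coords.

    Solves [x, y, 1] @ A = [X, Y] for the 3x2 matrix A.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    M = np.hstack([src, np.ones((len(src), 1))])  # n x 3 design matrix
    A, *_ = np.linalg.lstsq(M, dst, rcond=None)
    return A  # 3 x 2

def apply_affine(A, pts):
    pts = np.asarray(pts, dtype=float)
    return np.hstack([pts, np.ones((len(pts), 1))]) @ A

# Hypothetical control points: scanned-image pixels -> 2005 map coordinates
# (e.g. road intersections identified in both layers).
pixels = [(0, 0), (100, 0), (100, 100), (0, 100)]
mapxy = [(500000, 300000), (501000, 300000), (501000, 301000), (500000, 301000)]

A = fit_affine(pixels, mapxy)
print(apply_affine(A, [(50, 50)]))  # center of the image -> center of the map extent
```

With more than three control points the fit is over-determined, so errors in individual points (a slightly misplaced intersection) average out, which is why several control points are set before the map "takes on" the 2005 coordinates.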
Once scanned, these images were georeferenced based on the 2000 digital parcel map. This process of rectification was done in Russell Labs using ESRI ArcMap 9.3. Control points, such as road intersections, were chosen to accurately georeference the 1997 scanned parcel map (1973 was done in this way as well). This process was done using a specific ArcGIS tool (View/Toolbars/Georeferencing). For the other years, the scanned images were georeferenced based off the four corners of the 1997 georeferenced scanned images. Georeferencing off the 1997 rectified image allows for easier and quicker rectification but also facilitated detection of differences between the scanned plats. The scanned image of the land ownership could be turned on and off for easy comparison to the previous time set; these differences are the changes which were made on the digital ownership map. We scanned and digitized the following years:

Scanned plats: 1958, 1962, 1968, 1973, 1978, 1981, 1985, 1989, 1993, 1997, 2001, 2005
Digitized plats: 1962, 1968, 1973, 1978, 1981, 1985, 1989, 1993, 1997, 2001, 2005

Prepping the Parcel Map: Digital parcel shapefiles for the years 2000 and 2005 were provided by the Dane County Land Information Office (http://www.countyofdane.com/lio/metadata/Parcels.htm) and were used as the starting reference. These datasets needed to be prepared for use. Many single parcels were represented by multiple contiguous polygons. These were dissolved. (Multi-part, or non-contiguous, polygons were not dissolved.) Here is the process to dissolve by NAME_CONT (contact name): Many polygons do not have a contact name. The majority of Madison and other towns do not have NAME_CONT, but most large parcels do. In order not to dissolve all of the parcels for which NAME_CONT is blank, we did the following: Open the digital parcel shapefile and go to Selection/Select by Attributes.
In this window choose the correct layer, choose the method "Create new selection", scroll and double-click NAME_CONT, then in the box at the bottom make sure it says [ "NAME_CONT" <> '' ] (without brackets). This will select all polygons which do not have an empty Name Contact attribute (empty value). The selected polygons were then aggregated based on the Name Contact field (parcels with the same Name Contact were combined), where borders were contiguous. To do this, the Dissolve tool in Data Management Tools/Generalization/Dissolve was used. Dissolve on field NAME_CONT and enter every other field into the statistical fields menu. This was done without the multipart feature option checked, resulting in parcels only being combined when they share a border. Keep these dissolved polygons highlighted. Once the dissolve process is complete, use the Select by Attributes tool again, but this time choose the method "Add to current selection" and enter [ "NAME_CONT" = '' ]. This provides a digital layer of polygons aggregated by name as well as nameless polygons to be manually manipulated.

Parcel Map Manipulation: The goal from here was to, as accurately as possible, recreate a digital replica of the scanned parcel map, and aggregate up parcels with the same owner. This goal of replication is in regard to the linework, as opposed to the owner name or any other information, in order to accurately capture the correct area as parcel size changed. This process of moving boundaries was independent of merging parcels. If individual scanned parcel boundaries are different from the overlaid digital parcel shapefile, then the digital parcel linework must be changed. As this project utilizes both parcel shape and area, the parcels must be accurate. When merging parcels, parcels with the same owner name, the same owner connected on the plat map with an arrow, the same owner but separated by a road, or the same owner sharing a single point (two lots sharing a single point at the corner) were merged to create a multi-part feature.
Parcels with the same owner separated by another parcel of a different owner, with no points touching, were not merged. This process of reverse digitization was done using ArcMap. The already-dissolved shapefile was copied to create one file that was a historical record and one file to be edited to become the previous year (the next year back in time). With the digital parcel shapefile loaded, the rectified scanned plat maps were then added. Once open, turn on the Editor toolbar and Start Editing. The tools to use are the Sketch tool and the Merge tool. Quick keys were used (Editor toolbar/Customize) to speed this process. To edit, zoom to a comfortable level (1:12,000) and slowly move across the townships in a pattern which allows no areas to be missed (easiest to go township by township). When polygons needed to be reconstructed (the process of redrawing the parcel boundary linework), this was done using the Sketch tool with either the "create new polygon" option or the "cut polygon" option in the Editor toolbar. Using the Sketch tool, with the area highlighted, you can redraw the boundaries by cutting the polygons. Areas can be merged then recut to depict the underlying parcel map. If, for example, a new development has gone in, many small parcels can be merged together to create a big parcel, and then that large parcel can be broken into the parcels that were originally combined to form the subdivision. We can do this because the names in the attribute table are not being preserved. This is a key note: THE OWNER NAME IS NOT A VARIABLE WE ARE CREATING, PRESERVING, OR OTHERWISE REPRESENTING. Once you merge the parcels, they will only maintain one of the names (and which name is maintained is essentially random). After the entire county is complete, go through again to check the new parcel shapefile; there will be mistakes. Snake through, going across the bottom one row of squares at a time.
Examples of mistakes primarily include multi-part features that were exploded to change one part, where the other parts would need to be re-merged. Another common correction arose because we typically worked on one township at a time, whereas ownership often crossed townships; during this second pass, we corrected cross-township ownership at the edges of the two scanned parcel maps. Finally, some roads which had been built into parcels (driveways) needed to be removed, and these were not always caught during the first pass. Once the second run-through is complete, copy this shapefile so that it also has a backup.
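The dissolve-by-owner logic above (merge polygons only when they share a border and a non-blank NAME_CONT) can be sketched as a union-find over parcel adjacency. This is a minimal sketch with invented parcel IDs and owner names, not the ArcGIS Dissolve tool itself:

```python
class UnionFind:
    """Disjoint-set structure for grouping contiguous same-owner parcels."""
    def __init__(self, items):
        self.parent = {i: i for i in items}
    def find(self, i):
        while self.parent[i] != i:
            self.parent[i] = self.parent[self.parent[i]]  # path halving
            i = self.parent[i]
        return i
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def dissolve_by_name(parcels, adjacency):
    """Group parcel IDs that are contiguous and share the same non-blank NAME_CONT.

    parcels: {parcel_id: name_cont}; adjacency: iterable of (id, id) border pairs.
    Parcels with a blank NAME_CONT are never merged, mirroring the selection step
    that excludes empty Name Contact values.
    """
    uf = UnionFind(parcels)
    for a, b in adjacency:
        if parcels[a] and parcels[a] == parcels[b]:  # same non-blank owner, shared border
            uf.union(a, b)
    groups = {}
    for pid in parcels:
        groups.setdefault(uf.find(pid), []).append(pid)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical example: parcels 1 and 2 share a border and an owner; parcel 3
# is blank-named, so parcel 4 (same owner as 1-2 but touching only 3) stays separate.
parcels = {1: "SMITH", 2: "SMITH", 3: "", 4: "SMITH"}
adjacency = [(1, 2), (2, 3), (3, 4)]
print(dissolve_by_name(parcels, adjacency))  # [[1, 2], [3], [4]]
```

Note that, as in the methods, this groups by contiguity only; the later manual pass (merging same-owner parcels across roads or at shared corner points) relaxes the adjacency rule and is not represented here.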
Purpose
<p>Our purpose was to forecast detailed empirical distributions of the spatial pattern of land-use and ecosystem change and to test hypotheses about how economic variables affect land development in the Yahara watershed.</p>
Quality Assurance
<p>Accuracy was double-checked by visually comparing against the corresponding plat book twice.</p>
Short Name
Historical Plat Maps of Dane County
Version Number
14

LTREB Biological Limnology at Lake Myvatn 2012-current

Abstract
These data are part of a long-term monitoring program in the central part of Myvatn that represents the dominant habitat, with benthos consisting of diatomaceous ooze. The program was designed to characterize important benthic and pelagic variables across years as midge populations varied in abundance. Starting in 2012, samples were taken at roughly weekly intervals during June, July, and August, which corresponds to the summer generation of the dominant midge, <em>Tanytarsus gracilentus</em>.
Creator
Dataset ID
296
Date Range
-
Maintenance
Ongoing
Metadata Provider
Methods
Benthic Chlorophyll

Field sampling (5 samples) (2012, 2013)
1. Take 5 cores from the lake.
2. Cut the first 0.75 cm (1 chip) of the core with the extruder and place in a deli container. Label with date and core number.
3. Place deli containers into an opaque container (cooler) and return to lab. This is the same sample that is used for the organic matter analysis.

In 2014, the method for sampling benthic chlorophyll changed. The calculation of chlorophyll was changed to reflect the different area sampled. Below is the pertinent section from the methods protocols; processing after the collection of the sample was not changed. Take sediment samples from the 5 cores collected for sediment characteristics. Take 4 syringes of sediment with a 10 mL syringe (15.96 mm diameter). Take 4-5 cm of sediment. Then remove the bottom 2 cm and place the top 2 cm in the film canister.

Filtering
1. Measure the volume of material in the deli container with a 60 mL syringe and record.
2. Homogenize and take a 1 mL sample with a micropipette. The tip of the micropipette should be cut to avoid clogging with diatoms. Place the 1 mL sample in a labeled film canister. Freeze the sample at -20 degrees Celsius unless starting methanol extraction immediately.
3. Add 20 mL methanol. This methanol can be kept cool in the fridge, although then you will need a second bottle of methanol for the fluorometer. Shake for 5 sec.
4. After 6-18 hours, shake the container for 5 sec.

Fluorometer
1. Allow the film canisters to sit at room temperature for approximately 15 min to avoid excessive condensation on the glass tubes. Shake tubes for 5 sec after removing from the fridge, but then be careful to let them settle before removing sample.
2. Record the sample information for all of the film canisters on the data sheet.
3. Add 4 mL of sample to a 13x100 mm glass tube.
4. Insert the sample into the fluorometer and record the reading in the Fluor Before Acid column.
The sample reading should be close to one of the secondary solid standards (42 ug/L or 230 ug/L); if not, dilute the sample to within 25 percent of the secondary solid standards (30-54 ug/L or 180-280 ug/L). It is a good idea to quickly check 2 mL of a sample that is suspected to be too high to get an idea if other samples may need to be diluted. If possible, read the samples undiluted.
5. If a sample needs to be diluted, use a 1000 µL pipette and add 2 mL of methanol to a tube followed by 2 mL of undiluted sample. Gently invert the tube twice and clean the bottom with a paper towel before inserting it into the fluorometer. If the sample is still outside of the ranges above, combine 1 mL of undiluted sample with 3 mL of methanol. Be sure to record the dilution information on the data sheet.
6. Acidify the sample by adding 120 µL of 0.1 N HCl (30 µL for every 1 mL of sample). Then gently invert the sample and wait 90 seconds (we used 60 seconds in 2012; the protocol said 90) before putting the sample into the fluorometer and recording the reading in the Fluor After Acid column. Be sure to have acid in each tube for exactly the same amount of time. This means doing one tube at a time or spacing them 30-60 seconds apart.
7. Double-check the results and redo samples which have suspicious numbers. Make sure that the after-acidification values make sense when compared to the before-acidification values (the before acid/after acid ratio should be approximately the same for all samples).

Clean up
1. Methanol can be disposed of down the drain as long as at least 50 times as much water is flushed.
2. Rinse the film canisters and lids well with tap water and scrub them out with a bottle brush, making sure to remove any remaining filter paper. Give a final rinse with distilled water.

Pelagic Chlorophyll

Field sampling (5 samples)
1. Take 2 samples at each of three depths (1, 2, and 3 m) with Arni's zooplankton trap.
For the 1 m sample, drop the trap to the top of the chain. Each trap contains about 2.5 L of water when full.
2. Empty into a bucket by opening the bottom flap with your hand.
3. Take the bucket to the lab.

Filtering
1. Filter 1 L of water from the integrated water sample (or until the filter is clogged) through the 47 mm GF/F filter. The pressure used during filtering should be low (less than 5 mm Hg) to prevent cell breakage. Filtering and handling of filters should be performed under dimmed lighting.
2. Remove the filter with forceps, fold it in half (pigment side in), and put it in the film canister. Take care not to touch the pigments with the forceps.
3. Add 20 mL methanol. This methanol can be kept cool in the fridge, although then you will need a second bottle of methanol for the fluorometer. Shake for 5 sec and place in fridge.
4. After 6-18 hours, shake the container for 5 sec.
5. Analyze the sample in the fluorometer after 24 hours.

Fluorometer
1. Allow the film canisters to sit at room temperature for approximately 15 min to avoid excessive condensation on the glass tubes. Shake tubes for 5 sec after removing from the fridge, but then be careful to let them settle before removing sample.
2. Record the sample information for all of the film canisters on the data sheet.
3. Add 4 mL of sample to a 13x100 mm glass tube.
4. Insert the sample into the fluorometer and record the reading in the Fluor Before Acid column. The sample reading should be close to one of the secondary solid standards (42 ug/L or 230 ug/L); if not, dilute the sample to within 25 percent of the secondary solid standards (30-54 ug/L or 180-280 ug/L). It is a good idea to quickly check 2 mL of a sample that is suspected to be too high to get an idea if other samples may need to be diluted. If possible, read the samples undiluted.
5. If a sample needs to be diluted, use a 1000 µL pipette and add 2 mL of methanol to a tube followed by 2 mL of undiluted sample.
Gently invert the tube twice and clean the bottom with a paper towel before inserting it into the fluorometer. If the sample is still outside of the ranges above, combine 1 mL of undiluted sample with 3 mL of methanol. Be sure to record the dilution information on the data sheet.
6. Acidify the sample by adding 120 µL of 0.1 N HCl (30 µL for every 1 mL of sample). Then gently invert the sample and wait 90 seconds (we used 60 seconds in 2012; the protocol said 90) before putting the sample into the fluorometer and recording the reading in the Fluor After Acid column. Be sure to have acid in each tube for exactly the same amount of time. This means doing one tube at a time or spacing them 30-60 seconds apart.
7. Double-check the results and redo samples which have suspicious numbers. Make sure that the after-acidification values make sense when compared to the before-acidification values (the before acid/after acid ratio should be approximately the same for all samples).

Clean up
1. Methanol can be disposed of down the drain as long as at least 50 times as much water is flushed.
2. Rinse the film canisters and lids well with tap water and scrub them out with a bottle brush, making sure to remove any remaining filter paper. Give a final rinse with distilled water.

Pelagic Zooplankton Counts

Field sampling
Use Arni's zooplankton trap (modified Schindler) to take 2 samples at each of 1, 2, and 3 m (6 total). For the 1 m sample, drop the trap to the top of the chain. Each trap contains about 2.5 L of water when full. Integrate samples in a bucket and bring back to the lab for further processing.

Sample preparation in lab
1. Sieve integrated plankton tows through 63 µm mesh and record the volume of the full sample.
2. Collect in Nalgene bottles and bring the total volume to 50 mL.
3. Add 8 drops of Lugol's solution to fix zooplankton.
4. Label the bottle with sample date, benthic or pelagic zooplankton, and total volume sieved.
Samples can be stored in the fridge until time of counting.

Counting
1. Remove sample from fridge.
2. Sieve sample with 63 µm mesh over the lab sink to remove Lugol's solution (which vaporizes under light).
3. Suspend sample in water in the sieve and flush from the back with a squirt bottle into the counting tray.
4. Homogenize sample with forceps or a plastic pipette with the tip cut off.
5. Identify (see zooplankton identification guide) using a backlit microscope and count with a multiple-tally counter. i. Set magnification so that you can see both top and bottom walls of the tray. ii. Change focus depth to check for floating zooplankton that must be counted as well.
6. Pipette sample back into the Nalgene bottle, add water to 50 mL, add 8 drops Lugol's solution, and return to fridge.

Subsampling
If the homogenized original sample contains more than 500 individuals in the first line of the counting tray, you may subsample under the following procedure.
1. Return original sample to the Nalgene bottle and add water to 50 mL.
2. Homogenize sample by swirling the Nalgene bottle.
3. Collect 10 mL of zooplankton sample with a Hensen-Stempel pipette.
4. Empty contents of the Hensen-Stempel pipette into a large Bogorov tray.
5. Homogenize sample in the tray with forceps or a plastic pipette with the tip cut off.
6. Identify (see zooplankton identification guide) using a backlit microscope and count with a multiple-tally counter. i. Set magnification so that you can see both top and bottom walls of the tray. ii. Change focus depth to check for floating zooplankton that must be counted as well.
7. Pipette sample back into the Nalgene bottle, add water to 50 mL, add 8 drops Lugol's solution, and return to fridge.

Benthic Microcrustacean Counts

Field sampling
Leave the benthic zooplankton sampler for 24 h. The benthic sampler consists of 10 inverted jars with funnel traps in a metal grid with 4 feet. Set up on a bench using the feet (on side) to get a uniform height of the collection jars (lip of jar = 5 cm above frame).
Upon collection, pull the sampler STRAIGHT up, remove the jars, homogenize in a bucket, and bring back to the lab. Move the boat slightly to avoid placing the sampler directly over cored sediment.

Sample preparation in lab
1. Sieve integrated samples through 63 µm mesh and record the volume of the full sample.
2. Collect in Nalgene bottles and bring the total volume to 50 mL.
3. Add 8 drops of Lugol's solution to fix zooplankton.
4. Label the bottle with sample date, benthic or pelagic zooplankton, and total volume sieved. Samples can be stored in the fridge until time of counting.

Counting
1. Remove sample from fridge.
2. Sieve sample with 63 µm mesh over the lab sink to remove Lugol's solution (which vaporizes under light).
3. Suspend sample in water in the sieve and flush from the back with a squirt bottle into the counting tray.
4. Homogenize sample with forceps or a plastic pipette with the tip cut off.
5. Identify (see zooplankton identification guide) using a backlit microscope and count with a multiple-tally counter. i. Set magnification so that you can see both top and bottom walls of the tray. ii. Change focus depth to check for floating zooplankton that must be counted as well.
6. Pipette sample back into the Nalgene bottle, add water to 50 mL, add 8 drops Lugol's solution, and return to fridge.

Subsampling
If the homogenized original sample contains more than 500 individuals in the first line of the counting tray, you may subsample under the following procedure.
1. Return original sample to the Nalgene bottle and add water to 50 mL.
2. Homogenize sample by swirling the Nalgene bottle.
3. Collect 10 mL of zooplankton sample with a Hensen-Stempel pipette.
4. Empty contents of the Hensen-Stempel pipette into a large Bogorov tray.
5. Homogenize sample in the tray with forceps or a plastic pipette with the tip cut off.
6. Identify (see zooplankton identification guide) using a backlit microscope and count with a multiple-tally counter. i. Set magnification so that you can see both top and bottom walls of the tray. ii. Change focus depth to check for floating zooplankton that must be counted as well.
7. Pipette sample back into the Nalgene bottle, add water to 50 mL, add 8 drops Lugol's solution, and return to fridge.

Chironomid Counts (2012, 2013)

For first instar chironomids in the top 1.5 cm of sediment only (5 samples)
1. Use the sink hose to sieve sediment through 63 µm mesh. You may use moderate pressure to break up tubes.
2. Back flush sieve contents into a small deli container.
3. Return label to deli cup (sticking to the underside of the lid works well).

For later instar chironomids in the 1.5-11.5 cm section (5 samples)
4. Sieve with 125 µm mesh in the field.
5. Sieve through 125 µm mesh again in the lab to reduce the volume of the sample.
6. Transfer sample to a deli container or pitfall counting tray.

For all chironomid samples
7. Under a dissecting scope, pick through sieved contents for midge larvae. You may have to open tubes with forceps in order to check for larvae inside.
8. Remove larvae with forceps while counting, and place into a vial containing 70 percent ethanol. Larvae will eventually be sorted into taxonomic groups (see key). You may sort them into taxonomic groups as you pick the larvae, or you can identify the larvae while measuring head capsules if chironomid densities are low (under 50 individuals per taxonomic group).
9. For a random sample of up to 50 individuals of each taxonomic group, measure the head capsule; see Chironomid Size (head capsule width).
10. Archive samples from each sampling date together in a single 20 mL glass vial with screw cap in 70 percent ethanol and label with sample contents, Chir, sample date, lake ID, station ID, and number of cores.

Chironomid Counts (2014)
In 2014, the method for sampling chironomid larvae changed starting with the sample on 2014-06-27; the variable "top_bottom" is coded as a 2. In contrast to previous measurements, the top and bottom core samples were combined and then subsampled.
Below is the pertinent section of the protocols. Chironomid samples should be counted within 24 hours of collection. This ensures that larvae are as active and easily identified as possible, and also prevents predatory chironomids from consuming other larvae. Samples should be refrigerated upon returning from the field.

For first instar chironomids in the top 1.5 cm of sediment only (5 samples)
1. Use the sink hose to sieve sediment through 63 µm mesh. You may use moderate pressure to break up tubes.
2. Back flush sieve contents using a water bottle into a small deli container.
3. Return label to deli cup (sticking to the underside of the lid works well).

For larger instar chironomids in the 1.5-11.5 cm section (5 samples)
4. Sieve with 125 µm mesh in the field.
5. Sieve through 125 µm mesh again in the lab to reduce the volume of the sample and break up tubes.
6. Transfer sample to a deli container with the appropriate label.

Subsample if necessary
If necessary, subsample with the following protocol.
a. Combine top and bottom samples from each core (1-5) in the midge sample splitter.
b. Homogenize the sample thoroughly, collect one half in a deli container, and label the container with the core number and "1/2".
c. If necessary, split the half that remains in the sampler into quarters, and collect each in deli containers labeled with the core number, "1/4", and replicate 1 or 2.
d. Store all deli containers in the fridge until counted, and save until all counting is complete.

Chironomid Size (head capsule width)
1. Obtain picked samples preserved in ethanol and empty onto a petri dish.
2. Sort larvae by family groups, arranging in the same orientation for easy measurement.
3. Set magnification to 20, diopter, x 50.
4. Take measurements for up to 50 or more individuals of each taxon. Round to the nearest optical micrometer unit.
5. Fill out the data sheet with the number of larvae in each taxon, chironomid measurements for each taxon, date of sample, station the sample was taken from, which core the sample came from, who picked the core, and your name as the measurer.
6. Enter data into the shared sheet.

See "Chironomid Counts" for changes in sampling chironomid larvae in 2014.
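The before- and after-acid fluorometer readings recorded above feed the standard acidification equation for chlorophyll-a. The protocols record the readings and dilution factors but not the calculation itself, so the equation below (the common fluorometric acidification form, as in EPA Method 445.0) and its calibration constants `fs` and `r` are assumptions for illustration, not values from this dataset.

```python
def chlorophyll_a(fluor_before, fluor_after, extract_ml, sample_ml,
                  dilution=1.0, fs=1.0, r=1.7):
    """Acidification-corrected chlorophyll-a (standard fluorometric form).

    fs : fluorometer response (calibration) factor -- instrument-specific (assumed).
    r  : maximum before/after-acid ratio for pure chl-a -- assumed 1.7 here.
    The extract/sample volume ratio and dilution factor scale the extract
    concentration back to the original sample.
    """
    extract_conc = fs * (r / (r - 1.0)) * (fluor_before - fluor_after)
    return extract_conc * (extract_ml / sample_ml) * dilution

# Hypothetical readings: a 20 mL methanol extract of a 1 mL benthic subsample,
# read at 2x dilution (2 mL sample + 2 mL methanol, as in fluorometer step 5).
print(chlorophyll_a(fluor_before=45.0, fluor_after=30.0,
                    extract_ml=20.0, sample_ml=1.0, dilution=2.0))
```

This is also why the protocol insists the before/after-acid ratio be roughly constant across samples: the correction assumes a fixed `r`, so an anomalous ratio flags a sample worth redoing.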
Version Number
17

Additional Daily Meteorological Data for Madison Wisconsin (1884-2010)

Abstract
These data are in addition to "Madison Wisconsin Daily Meteorological Data 1869-current." Additional variables added include: daily cloud cover, wind, solar radiation, vapor pressure, dew point temperature, total atmospheric pressure, and average relative humidity for Madison, Wisconsin. In addition, the adjustment factors which were applied on a given date to calculate the adjusted parameters in "Madison Wisconsin Daily Meteorological Data 1869-current" are also included in these data. Raw data, in English units, were assembled by Douglas Clark, Wisconsin State Climatologist. Data were converted to metric units and adjusted for temporal biases by Dale M. Robertson. For adjustments applied to various parameters see Robertson, 1989, Ph.D. Thesis, UW-Madison. Adjusted data represent the BEST estimated daily data and may be raw data. Data collected at Washburn Observatory, 8-1-1883 to 9-30-1904. Data collected at North Hall, 10-1-1904 to 12-31-1947. Data collected at Truax Field (Admin Bldg), 1-1-1948 to 12-31-1959. Data collected at Truax Field, center of field, 1-1-1960 to present. Much of the data after 1990 were obtained in digital form from Ed Hopkins, UW-Meteorology. Data starting in 2002-2005 were obtained from Sullivan at http://www.weather.gov/climate/index.php?wfo=mkx, then go to CF6 and download monthly data to Madison_sullivan_conversion. Relative humidity data from 1986 to 1995 were obtained from CDs at the State Climatologist's Office. Since Robertson (1989) adjusted all historical data to that collected prior to 1989, no adjustments were applied to the recent data except for wind and estimated vapor pressure. Wind after January 1997, and only wind from the southwest after November 2007, was extended by Dale M. Robertson and Yi-Fang "Yvonne" Hsieh; see methods. Estimated vapor pressure after April 2002 was updated by Yvonne Hsieh; see methods.
Dataset ID
282
Date Range
-
Metadata Provider
Methods
Raw data (in English units) were assembled by Douglas Clark, Wisconsin State Climatologist. Data were converted to metric units and adjusted for temporal biases by Dale M. Robertson. For adjustments applied to various parameters see Robertson, 1989, Ph.D. Thesis, UW-Madison. Adjusted data represent the BEST estimated daily data and may be raw data. Data collected at Washburn Observatory, 8-1-1883 to 9-30-1904. Data collected at North Hall, 10-1-1904 to 12-31-1947. Data collected at Truax Field (Admin Bldg), 1-1-1948 to 12-31-1959. Data collected at Truax Field (center of field), 1-1-1960 to present. Much of the data after 1990 were obtained in digital form from Ed Hopkins, UW-Meteorology. Data starting in 2002-05 were obtained from Sullivan at <a href="http://www.weather.gov/climate/index.php?wfo=mkx%20">http://www.weather.gov/climate/index.php?wfo=mkx</a>, then go to CF6 and download monthly data to Madison_sullivan_conversion. Since Robertson (1989) adjusted all historical data to that collected from 1884-1989, no adjustments were applied to the recent data except for (1) wind and (2) estimated vapor pressure.

(1) Wind after January 1997, and only wind from the southwest after November 2007, was extended by Dale M. Robertson and Yvonne Hsieh. In 1996, a discontinuity in the wind record was caused by a change in observational techniques and sensor locations (McKee et al. 2000). To address the non-climatic changes in wind speed, data from MSN were carefully compared with those collected from the tower of the Atmospheric and Oceanic Science Building at the University of Wisconsin-Madison; see http://ginsea.aos.wisc.edu/labs/mendota/index.htm. Hourly data from both sites (U_MSN,hourly and U_AOS,hourly) during 2003-2010 were used to form a 4x12 (four components of wind direction x 12 months) matrix (K_4,12) of wind correction factors, yielding U_AOS,daily = K_i,j x U_MSN,daily.
The comparison results indicated that the MSN weather station reported a higher magnitude in winds out of the east by 5% and a lower magnitude in winds out of the west and south by 30% and 10%, respectively. The adjusted wind data (= K_i,j x U_MSN,daily) were therefore employed and used in the model simulation. After adjustments, there was a decrease in wind velocities starting shortly before 1996. Overall, the adjusted wind data had a decline in wind velocities of 16% (from 1988-93 to 1994-2009) compared to a 7% decline at a nearby weather station with no known observational changes (St. Charles, Illinois; 150 km southeast of Lake Mendota). (2) Estimated vapor pressure was updated (after April 2002) by using the equation from DYRESM for estimation of vapor pressure (a function of both air temperature and dew point temperature), where a = 7.5, b = 237.3, and c = 0.7858.
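The coefficients quoted above (a = 7.5, b = 237.3, c = 0.7858) match the common Magnus-type form e = 10^(a·Td/(Td + b) + c) for vapor pressure in hPa from dew point Td in degrees Celsius. The source gives only the coefficients, so the base-10 form and the hPa units are assumptions here; a sketch under those assumptions:

```python
import math

def vapor_pressure_hpa(dewpoint_c, a=7.5, b=237.3, c=0.7858):
    """Magnus-type vapor pressure (hPa, assumed units) from dew point (deg C).

    Equivalent to 10**(a*Td/(Td + b) + c); written with exp and ln(10) ~= 2.3026
    so the quoted coefficients appear directly.
    """
    return math.exp(2.3026 * (a * dewpoint_c / (dewpoint_c + b) + c))

# At a dew point of 20 deg C this gives roughly the saturation vapor pressure
# at 20 deg C (~23 hPa), since air at its dew point is saturated.
print(round(vapor_pressure_hpa(20.0), 1))
```

Using the dew point rather than air temperature yields the actual (not saturation) vapor pressure, which is why the methods describe the estimate as a function of both air and dew point temperature.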
Version Number
23

WDNR Yahara Lakes Fisheries: Fish Lengths and Weights 1987-1998

Abstract
These data were collected by the Wisconsin Department of Natural Resources (WDNR) from 1987-1998. Most of these data (1987-1993) precede 1995, the year that the University of Wisconsin NTL-LTER program took over sampling of the Yahara Lakes. However, WDNR data collected from 1997-1998 (unrelated to LTER sampling) are also included. In 1987 a joint project by the WDNR and the University of Wisconsin-Madison Center for Limnology (CFL) was initiated on Lake Mendota. The project involved biomanipulation of fish communities within the lake, which was achieved by stocking game fish species (northern pike and walleye). The goal was to induce a trophic cascade that would improve the water clarity of Lake Mendota. See Lathrop et al. 2002. Stocking piscivores to improve fishing and water clarity: a synthesis of the Lake Mendota biomanipulation project. Freshwater Biology 47, 2410-2424. In collecting these data, the objective was to gather population data and monitor populations to track the progress of the biomanipulation. The data are dominated by an assessment of the game fishery in Lake Mendota; however, other Yahara lakes and non-game fish species are also represented. A combination of gear types was used to gather the population data, including boom shocking, fyke netting, mini-fyke netting, seining, and gill netting. Not every sampling year includes length and weight data from all gear types. The WDNR also carried out randomized, access-point creel surveys to estimate fishing pressure, catch rates, harvest, and exploitation rates. Five data files each include length-weight data, and are organized by the type of gear or method which was used to collect the data: 1) fyke, mini-fyke, and seine netting, 2) boom shocking, 3) gill netting (1993 only), 4) walleye age as determined by scale and spine analysis (1987 only), and 5) creel survey.
The final data file contains creel survey information: number of anglers fishing the shoreline, and number of anglers that started and completed trips from public and private access points.
Core Areas
Dataset ID
279
Date Range
-
Metadata Provider
Methods
BOOM SHOCKING
1987: A standard WDNR electrofishing boat was used on Lake Mendota, set at 300 volts and 2.5 amps (mean) DC, with a 20% duty cycle and 60 pulses per second. On all sampling dates two people netted fish; the total electrofishing crew was three people. Shocking was divided into stations. For each station, the actual starting and ending time was recorded. Starting and ending points of each station were plotted on a map. A 7.5-minute topographic map (published 1983) and a cartometer were used to develop a standardized shoreline mileage numbering scheme. Starting at the Yahara River outlet at Tenney Park and measuring counterclockwise, the shoreline was numbered according to the number of miles from the outlet. The length of shoreline shocked for each station was determined using the same maps. The objectives of the fall 1987 electrofishing were: to gather CPE data for comparison with previous surveys of the lake; develop a database for relating fall electroshocker CPE to predator density; collect fall predator diet data; make mark-recapture population estimates of YOY predators; and determine year-class strength of some nonpredators (yellow perch, yellow bass, and white bass).
1993: Electrofishing was used to continue marking largemouth and smallmouth bass (because of low CPE in fyke nets), to recapture fish marked in fyke netting, and to mark and recapture walleyes (less than 11.0 in.) on Lake Mendota. Four-person crews electrofished after sunset from May 05 to June 03, 1993. A standard WDNR electrofishing boat was used, set at about 300 volts and 15.0 amps (mean) DC, with a 20% duty cycle at 60 pulses per second. On all sampling dates two people netted fish; thus, CPE data are given as catch per two-netter hour or mile. Shocking was divided into stations. For each station the actual starting and ending times and the generator's meter times were recorded. Starting and ending points of each station were plotted on a map.
7.5-minute topographic maps (published in 1983) were used in addition to a cartometer to develop a standardized shoreline mileage numbering scheme. Starting at the Yahara River outlet at Tenney Park and measuring counterclockwise, the shoreline was numbered according to the number of miles from the outlet. The length of shoreline shocked for each station was determined using these maps. The four-person electroshocker crews were used again from September 20 to October 19. Fall shocking had several objectives: to gather CPE data for comparison with previous surveys of the lake; develop a database for relating fall electroshocker CPE to piscivore density; and make mark-recapture population estimates of young of year (YOY) piscivores.
1997: 5/13/1997-5/20/1997: Electrofishing was completed at night on lakes Mendota, Monona, and Waubesa. A standard WDNR electrofishing boat was used, set from 320-420 volts and 16-22 amps DC, with a 20% duty cycle at 50 pulses per second. Two netters were used for each shocking event. At a particular station, starting and ending times where shocking took place were recorded. The location of the designated shocking stations is unknown. 9/23/1997-10/14/1997: Electrofishing was completed at night on Mendota, Monona, Waubesa, and Wingra. A standard WDNR electrofishing boat was used, set from 315-400 volts and 16-24 amps DC, with a 20% duty cycle at 60 pulses per second. Two netters were used for each shocking event. Starting and ending time at each shocking station was listed. The location of the designated shocking stations is unknown.
1998: Electrofishing was completed at night on Mendota, Monona, Wingra, and Waubesa from 5/12/1998-10/28/1998. A standard WDNR electrofishing boat was used, set from 240-410 volts and 15-22 amps DC, with a 20% duty cycle at 50-100 pulses per second. Two netters were used for each shocking event. Starting and ending time at each shocking station was listed. The location of the designated shocking stations is unknown.
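As a minimal illustration of the two CPE units described above (catch per two-netter hour and catch per mile of shoreline shocked); the function and parameter names are hypothetical, and the WDNR's actual calculations are not documented in this dataset:

```python
def cpe_per_two_netter_hour(total_catch, hours_shocked, netters=2):
    # Catch per two-netter hour: elapsed shocking time scaled by the
    # number of netters relative to the standard crew of two.
    effort = hours_shocked * (netters / 2)
    return total_catch / effort

def cpe_per_mile(total_catch, shoreline_miles):
    # Catch per mile of shoreline shocked for a station, using the
    # standardized shoreline mileage scheme described above.
    return total_catch / shoreline_miles
```

For example, 30 fish netted in a 1.5-hour station with two netters gives a CPE of 20 fish per two-netter hour.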
FYKE NETTING
1987: Fyke nets were fished daily from March 17 to April 24, 1987 on Lake Mendota. The nets were constructed of 1.25 inch (stretch) mesh with a lead length of 50 ft. (a few 25 ft. leads were used). The hoop diameter was 3 ft. and the frame measured 3 ft. by 6 ft. Total length of the net was 28 ft. plus the lead length. Nets were set in 48 unknown locations. Initially, effort was concentrated around traditional northern pike spawning sites (Cherokee Marsh, Sixmile Creek, Pheasant Branch Creek, and University Bay). As northern pike catch-per-effort (CPE) declined, some nets were moved onto rocky shorelines of the lake to capture walleyes. All adult predators (northern pike, hybrid muskie, largemouth and smallmouth bass, walleye, gar, bowfin, and channel catfish) captured were tagged and scale sampled. Measurements on non-predator species captured in fyke nets were made one day per week. This sampling was used to index size structure and abundance, and to collect age and growth data. In each net, total length and weight of 20 fish of each species caught were measured, and the remaining fish caught were counted.
1993: Same methods as 1987, except fyke nets were fished from 4/8/1993-4/29/1993 on Lake Mendota. The 1993 fyke net data also specify the "mile" at which the fyke net was set. This is defined as the number of miles from the outlet of the Yahara River at Tenney Park, moving counterclockwise around the lake. In addition, abundance and lengths of non-gamefish species captured in fyke nets were recorded one day per week. Six nets were randomly selected to sample for non-gamefish data. This sampling was used to index size structure and abundance, and to collect age and growth data. In each randomly selected net, total length and weight were measured for 20 fish of each species, and the remaining fish caught were counted.
1998: There is no formal documentation for the exact methods used for fyke netting from 3/3/1998-8/12/1998 on Lake Mendota.
However, given that the data are similar to data collected in 1987 and 1993, it is speculated that the same methods were used.
MINI-FYKE NETTING
1989: There is no formal documentation for the exact methods used for mini-fyke netting on Lake Mendota and Lake Monona from 7/26/1989-8/25/1989. However, given that the data are similar to data collected from 1990-1993, it is speculated that the same methods were used. In the sampling year of 1989, mini-fyke nets were placed at 22 different unknown stations.
1990-1993: Mini-fyke nets were fished on Lake Mendota and Lake Monona during July-September at 20, 29, 13, and 15 sites per month during 1990, 1991, 1992, and 1993, respectively, to estimate year-class strength, relative abundance, and size structure of fishes in the littoral zone. Nets were constructed with 3/16 in. mesh, 2 ft. diameter hoops, a 2 ft. x 3 ft. frame, and a 25 ft. lead. Sites were comparable to seine sites used in previous surveys. Sites included a variety of substrate types and macrophyte densities. To exclude turtles and large piscivores from mini-fyke nets, some nets were constructed with approximately 2 in. by 2 in. mesh at the entrance to the net. Thus, mini-fyke net data are most accurate for YOY fishes, and should not be used to make inferences about fishes larger than the exclusion mesh size.
1997: There is no formal documentation for the mini-fyke methods used on Lake Waubesa and Lake Wingra from 9/16/1997-9/18/1997. However, given that the data are similar to data collected in 1989 and 1990-1993, it is speculated that the methods used during 1997 are the same.
SEINE NETTING
1989, 1993: Monthly shoreline seining surveys were conducted on Lake Mendota and Lake Monona during June through September to estimate year-class strength, relative abundance, and size structure of the littoral zone fish community. Twenty sites were identified based on previous studies. Sites included a variety of substrate types and macrophyte densities.
Seine hauls were made with a 25 ft. bag seine with 1/8 inch mesh, pulled perpendicular to shore starting from a depth of 1 m. Twenty fish of each species were measured from each haul, and any additional fish were counted.
GILL NETTING (1993)
Experimental gill nets were fished in weekly periods during June through August, 1993. Gill nets were used to capture piscivores for population estimates of fish marked in fyke nets. All nets were constructed of five 2.5-4.5 in. mesh panels, and were 125 ft. long. Nets set in water shallower than 10 ft. were 3 ft. high or less; all others were 6 ft. high or less. Sampling locations were selected randomly from up to three strata: 1) offshore reef sets, 2) inshore sets, 6.0-9.9 ft. deep, and 3) mid-depth sets, 10-29.9 ft. deep. The exact locations at which the gill nets were set on the lake are unknown because the latitude and longitude values recorded by the WDNR are invalid. Temperature and dissolved oxygen profiles were used to monitor the development of the thermocline and guide net placement during July and August. After the thermocline was established, nets were set out to the 30 ft. contour or to the maximum depth with dissolved oxygen greater than 2 ppm.
WALLEYE AGE: SCALE AND SPINE ANALYSIS (1987)
Scales were taken from walleye that were shocked during the fall 1987 electrofishing events on Lake Mendota. Scales were taken from 10 fish per one-inch length increment. The scales were removed from behind the left pectoral fin, and from the nape on the left side on esocids. In addition, the second dorsal spine was removed from 10 walleyes per sex and inch increment (to age and compare with scale ages for fish over 20 inches).
CREEL SURVEYS
1989: Fishing pressure, catch rates, harvest, and exploitation rates were estimated from a randomized, access-point creel survey. The schedule was stratified into weekday and weekend/holiday day types. Shifts were selected randomly and were either 07:00-15:00 h or 15:00-23:00 h.
In addition, two 23:00-03:00 h shifts and two 03:00-07:00 h shifts were sampled per month to estimate the same parameters during nighttime hours. During the ice fishing season (January-February), 22 access points around Lake Mendota and upstream to the Highway 113 bridge were sampled. The clerk counted the number of anglers starting and completing trips during the scheduled stop at each access point. During open water (March-December), 13 access points were sampled; 10 were boat ramps and 3 were popular shore fishing sites. At each of these sites, an instantaneous count of shore anglers was made upon arrival at the site, and continuous counts of anglers starting and completing trips at public and private access points were made. Boat occupants and ice fishing anglers were only interviewed if they were completing a trip. Both complete and incomplete interviews were made of shore anglers. The number caught and number kept of each species, and percent of time seeking a particular species, were recorded. All predators possessed by anglers were measured, weighed, and inspected for fin clips and tags. We measured a random sample of at least 20 fish of each non-predator species per day.
1990-1993: Same as 1989, except 23 access points were used during the ice fishing season. In addition, 13 access points were sampled during the open water (May-December) season; 9 sites were boat ramps and 4 sites were popular shore fishing sites.
1994-1999: No formal documentation exists, but given the similarity in the data and consistency through the years, it is speculated that the methods are the same.
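The basic relationship between the creel quantities above (catch rate from completed-trip interviews, harvest as pressure times catch rate) can be sketched as follows. This is a simplified illustration only, not the WDNR's actual stratified access-point estimator, and all names are hypothetical:

```python
def catch_rate(interviews):
    # Mean catch per angler-hour pooled over completed-trip interviews.
    # interviews: list of (fish_caught, hours_fished) tuples.
    total_fish = sum(fish for fish, _ in interviews)
    total_hours = sum(hours for _, hours in interviews)
    return total_fish / total_hours

def harvest_estimate(pressure_angler_hours, rate):
    # Harvest = estimated fishing pressure (angler-hours) x catch rate.
    return pressure_angler_hours * rate
```

For example, three interviews totaling 6 fish over 8 angler-hours give a catch rate of 0.75 fish per angler-hour.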
Version Number
19

Trout Lake USGS Water, Energy, and Biogeochemical Budgets (WEBB) Stream Data 1975-current

Abstract
These data were collected by the United States Geological Survey (USGS) for the Water, Energy, and Biogeochemical Budgets (WEBB) project. The data set is primarily composed of water chemistry variables and was collected from four USGS stream gauge stations in the Northern Highland Lake District of Wisconsin, near Trout Lake. The four USGS stream gauge stations are Allequash Creek at County Highway M (USGS-05357215), Stevenson Creek at County Highway M (USGS-05357225), North Creek at Trout Lake (USGS-05357230), and the Trout River at Trout Lake (USGS-05357245), all near Boulder Junction, Wisconsin. The project has collected stream water chemistry data for a maximum of 36 different chemical parameters and three different physical stream parameters: temperature, discharge, and gauge height. All water chemistry samples are collected as grab samples and sent to the USGS National Water Quality Lab in Denver, Colorado. There is historic data for Stevenson Creek from 1975-1977, and then beginning again in 1991. The Trout Lake WEBB project began during the summer of 1991, and sampling of all four sites continues to date.
Creator
Dataset ID
276
Date Range
-
Maintenance
Completed.
Metadata Provider
Methods
DL is used to represent "detection limit" where known.
NOTE (1): Each method listed below corresponds with a USGS parameter code, which is listed after the variable name.
NOTE (2): If the NEMI method # is known, it is also specified at the end of each method description.
NOTE (3): Some of the variables are calculated using algorithms within QWDATA. If this is the case, see Appendix D of the NWIS User's Manual for additional information. However, Appendix D does not list the algorithm used by the USGS. If a variable is calculated with an algorithm, the term "algor" is listed after the variable name.
anc: 99431, Alkalinity is determined in the field using the Gran function plot method, see TWRI Book 9, Chapter A6.
anc_1: 90410 and 00410, Alkalinity is determined by titrating the water sample with a standard solution of a strong acid. The end point of the titration is selected as pH 4.5. See USGS TWRI 5-A1/1989, p 57, NEMI method #: I-2030-89.
c13_c12_ratio: 82081, Exact method unknown. The following method is suspected: automated dual inlet isotope ratio analysis with sample preparation by precipitation with ammoniacal strontium chloride solution, filtration, purification, and acidification of strontium carbonate; sample size is greater than 25 micromoles of carbon; one-sigma uncertainty is approximately ±0.1‰. See USGS Determination of the delta13C of Dissolved Inorganic Carbon in Water, RSIL Lab Code 1710, Chapter 18 of Section C, Stable Isotope-Ratio Methods, Book 10, Methods of the Reston Stable Isotope Laboratory.
ca, mg, mn, na, and sr all share the same method; their USGS parameter codes are ca- 00915, mg- 00925, mn- 01056, na- 00930, sr- 01080. All metals are determined simultaneously on a single sample by a direct-reading emission spectrometric method using an inductively coupled argon plasma as an excitation source.
Samples are pumped into a crossflow pneumatic nebulizer and introduced into the plasma through a spray chamber and torch assembly. Each analysis is determined on the basis of the average of three replicate integrations, each of which is background corrected by a spectrum shifting technique except for lithium (670.7 nm) and sodium (589.0 nm). A series of five mixed-element standards and a blank are used for calibration. The method requires an autosampler and emission spectrometry system. See USGS OF 93-125, p 101, NEMI method #: I-1472-87. DLs: ca- .02 mg/l, mg- .01 mg/l, mn- 1.0 ug/l, na- .2 mg/l, sr- .5 ug/l
cl, f, and so4 all share the same method; their USGS parameter codes are cl- 00940, f- 00950, so4- 00945. All three anions (chloride, fluoride, and sulfate) are separated chromatographically following a single sample injection on an ion exchange column. Ions are separated on the basis of their affinity for the exchange sites of the resin. The separated anions in their acid form are measured using an electrical conductivity cell. Anions are identified on the basis of their retention times compared with known standards. The peak height or area is measured and compared with an analytical curve generated from known standards to quantify the results. See USGS OF 93-125, p 19, NEMI method #: I-2057. DLs: cl- .2 mg/l, f- .1 mg/l, so4- .2 mg/l
co2: 00405, algor, see NWIS User's Manual, QW System, Appendix D, page 285.
co3: 00445, algor.
color: 00080, The color of the water is compared to that of colored glass disks that have been calibrated to correspond to the platinum-cobalt scale of Hazen (1892), see USGS TWRI 5-A1/1989, p 191, NEMI method #: I-1250. DL: 1 Pt-Co color unit
conductance_field: 00094 and 00095, Specific conductance is determined in the field using a standard YSI multimeter, see USGS TWRI 9, 6.3.3.A, p
13, NEMI method #: NFM 6.3.3.A-SW.
conductance_lab: 90095, Specific conductance is determined using a Wheatstone bridge in which a variable resistance is adjusted so that it is equal to the resistance of the unknown solution between platinized electrodes of a standardized conductivity cell, sample at 25 degrees Celsius, see USGS TWRI 5-A1/1989, p 461, NEMI method #: I-1780-85.
dic: 00691, This test method can be used to make independent measurements of IC and TC, and can also determine TOC as the difference of TC and IC. The basic steps of the procedure are as follows: (1) removal of IC, if desired, by vacuum degassing; (2) conversion of remaining inorganic carbon to CO2 by action of acid in both channels, and oxidation of total carbon to CO2 by action of ultraviolet (UV) radiation in the TC channel. For further information, see ASTM Standards, NEMI method #: D6317. DL: n/a
dkn: 00623 and 99894, Organic nitrogen compounds are reduced to the ammonium ion by digestion with sulfuric acid in the presence of mercuric sulfate, which acts as a catalyst, and potassium sulfate. The ammonium ion produced by this digestion, as well as the ammonium ion originally present, is determined by reaction with sodium salicylate, sodium nitroprusside, and sodium hypochlorite in an alkaline medium. The resulting color is directly proportional to the concentration of ammonia present, see USGS TWRI 5-A1/1989, p 327, NEMI method #: 351.2. DL: .10 mg/L
do: 00300, Dissolved oxygen is measured in the field with a standard YSI multimeter, NEMI method #: NFM 6.2.1-Lum. DL: 1 mg/L.
doc: 00681, The sample is acidified, purged to remove carbonates and bicarbonates, and the organic carbon is oxidized to carbon dioxide with persulfate in the presence of ultraviolet light. The carbon dioxide is measured by nondispersive infrared spectrometry, see USGS OF 92-480, NEMI method #: O-1122-92.
DL: .10 mg/L.
don: 00607, algor, see NWIS User's Manual, QW System, Appendix D, page 291.
dp: 00666 and 99893, All forms of phosphorus, including organic phosphorus, are converted to orthophosphate ions using reagents and reaction parameters identical to those used in the block digester procedure for determination of organic nitrogen plus ammonia, that is, sulfuric acid, potassium sulfate, and mercury (II) at a temperature of 370 deg C, see USGS OF Report 92-146, or USGS TWRI 5-A1/1979, p 453, NEMI method #: I-2610-91. DL: .012 mg/L.
fe: 01046, Iron is determined by atomic absorption spectrometry by direct aspiration of the sample solution into an air-acetylene flame, see USGS TWRI 5-A1/1985, NEMI method #: I-1381. DL: 10 µg/L.
h_ion: 00191, algor.
h20_hardness: 00900, algor.
h20_hardness_2: 00902, algor.
hco3: 00440, algor.
k: 00935, Potassium is determined by atomic absorption spectrometry by direct aspiration of the sample solution into an air-acetylene flame, see USGS TWRI 5-A1/1989, p 393, NEMI method #: I-1630-85. DL: .01 mg/L.
n_mixed: 00600, algor.
n_mixed_1: 00602, algor.
n_mixed_2: 71887, algor.
nh3_nh4: 00608, Ammonia reacts with salicylate and hypochlorite ions in the presence of ferricyanide ions to form the salicylic acid analog of indophenol blue (Reardon and others, 1966; Patton and Crouch, 1977; Harfmann and Crouch, 1989). The resulting color is directly proportional to the concentration of ammonia present, see USGS OF 93-125, p 125/1986 (mg/l as N), NEMI method #: I-2525. DL: .01 mg/L.
nh3_nh4_1: 71846, algor.
nh3_nh4_2: 00610, Same method as 00608, except see USGS TWRI 5-A1/1989, p 321. DL: .01 mg/L.
nh3_nh4_3: 71845, algor.
no2: 00613, Nitrite ion reacts with sulfanilamide under acidic conditions to form a diazo compound which then couples with N-1-naphthylethylenediamine dihydrochloride to form a red compound, the absorbance of which is measured colorimetrically, see USGS TWRI 5-A1/1989, p 343, NEMI method #: I-2540-90.
DL: .01 mg/L.
no2_2: 71856, algor.
no3: 00618, Nitrate is determined sequentially with six other anions by ion-exchange chromatography, see USGS TWRI 5-A1/1989, p 339, NEMI method #: I-2057. DL: .05 mg/L.
no3_2: 71851, algor.
no32: 00630, An acidified sodium chloride extraction procedure is used to extract nitrate and nitrite from samples of bottom material for this determination (Jackson, 1958). Nitrate is reduced to nitrite by cadmium metal. Imidazole is used to buffer the analytical stream. The sample stream then is treated with sulfanilamide to yield a diazo compound, which couples with N-1-naphthylethylenediamine dihydrochloride to form an azo dye, the absorbance of which is measured colorimetrically, see USGS TWRI 5-A1/1989, p 351. DL: .1 mg/L
no32_2: 00631, Same as description for no32, except see USGS OF 93-125, p 157. DL: .1 mg/L.
o18_o16_ratio: 82085, Sample preparation by equilibration with carbon dioxide and automated analysis; sample size is 0.1 to 2.0 milliliters of water. For 2-mL samples, the 2-sigma uncertainties of oxygen isotopic measurement results are 0.2‰. This means that if the same sample were resubmitted for isotopic analysis, the newly measured value would lie within the uncertainty bounds 95 percent of the time. Water is extracted from soils and plants by distillation with toluene; recommended sample size is 1-5 ml of water per analysis, see USGS Determination of the delta(18O/16O) of Water, RSIL Lab Code 489.
o2sat: Dissolved oxygen is measured in the field with a standard YSI multimeter, which also measures percent oxygen saturation, NEMI method #: NFM 6.2.1-Lum.
ph_field: 00400, pH determined in situ using a standard YSI multimeter, see USGS Techniques of Water-Resources Investigations, Book 9, Chaps. A1-A9, Chap. A6.4 "pH," NEMI method #: NFM 6.4.3.A-SW.
DL: .01 pH.
ph_lab: 00403, Involves use of a laboratory pH meter, see USGS TWRI 5-A1/1989, p 363, NEMI method #: I-1586.
po4: 00660, algor, see NWIS User's Manual, QW System, Appendix D, page 286.
po4_2: 00671, see USGS TWRI 5-A1/1989, NEMI method #: I-2602. DL: .01 mg/L.
s: 63719, Cannot determine exact method used. USGS method code 7704-34-9 is typically used to measure sulfur as a percentage, with a DL of .01 µg/L. It is known that the units for sulfur measurements in this data set are micrograms per liter.
sar: 00931, algor, see NWIS User's Manual, QW System, Appendix D, page 288.
si: 00955, Silica reacts with molybdate reagent in acid media to form a yellow silicomolybdate complex. This complex is reduced by ascorbic acid to form the molybdate blue color. The silicomolybdate complex may form either as an alpha or beta polymorph, or as a mixture of both. Because the two polymorphic forms have absorbance maxima at different wavelengths, the pH of the mixture is kept below 2.5, a condition that favors formation of the beta polymorph (Govett, 1961; Mullen and Riley, 1955; Strickland, 1952), see USGS TWRI 5-A1/1989, p 417, NEMI method #: I-2700-85. DL: .10 mg/L.
spc: 00932, algor, see NWIS User's Manual, QW System, Appendix D, page 289.
tds: 70300 and 70301, A well-mixed sample is filtered through a standard glass fiber filter. The filtrate is evaporated and dried to constant weight at 180 deg C, see "Filterable Residue by Drying Oven," NEMI method #: 160.1. DL: 10 mg/l. Note: despite this DL, values less than 10 mg/l occur in the data set.
tds_1: 70301, algor, see NWIS User's Manual, QW System, Appendix D, page 289.
tds_2: 70303, algor, see NWIS User's Manual, QW System, Appendix D, page 290.
tkn: 00625 and 99892, Block digester procedure for determination of organic nitrogen plus ammonia, that is, sulfuric acid, potassium sulfate, and mercury (II) at a temperature of 370°C. See USGS Open File Report 92-146 for further details.
DL: .10 mg/L.
toc: 00680, The sample is acidified, purged to remove carbonates and bicarbonates, and the organic carbon is oxidized to carbon dioxide with persulfate in the presence of ultraviolet light. The carbon dioxide is measured by nondispersive infrared spectrometry, see USGS TWRI 5-A3/1987, p 15, NEMI method #: O-1122-92. DL: .10 mg/L.
ton: 00605, algor, see NWIS User's Manual, QW System, Appendix D, page 286.
tp: 00665 and 99891, This method may be used to analyze most water, wastewater, brines, and water-suspended sediment containing from 0.01 to 1.0 mg/L of phosphorus. Samples containing greater concentrations need to be diluted, see USGS TWRI 5-A1/1989, p 367, NEMI method #: I-4607.
tp_2: 71886, algor.
tpc: 00694, The basic steps of this test method are: 1) conversion of remaining IC to CO2 by action of acid; 2) removal of IC, if desired, by vacuum degassing; 3) split of flow into two streams to provide for separate IC and TC measurements; 4) oxidation of TC to CO2 by action of acid-persulfate aided by ultraviolet (UV) radiation in the TC channel; 5) detection of CO2 by passing each liquid stream over membranes that allow the specific passage of CO2 to high-purity water, where change in conductivity is measured; and 6) conversion of the conductivity detector signal to a display of carbon concentration in parts per million (ppm = mg/L) or parts per billion (ppb = ug/L). The IC channel reading is subtracted from the TC channel reading to give a TOC reading, see ASTM Standards, NEMI method #: D5997. DL: .06 µg/L.
tpn: 49570, A weighed amount of dried particulate (from water) or sediment is combusted at a high temperature using an elemental analyzer. The combustion products are passed over a copper reduction tube to convert nitrogen oxides to molecular nitrogen. Carbon dioxide, nitrogen, and water vapor are mixed at a known volume, temperature, and pressure.
The concentrations of nitrogen and carbon are determined using a series of thermal conductivity detectors/traps, measuring in turn, by difference, hydrogen (as water vapor), carbon (as carbon dioxide), and nitrogen (as molecular nitrogen). Procedures also are provided to differentiate between organic and inorganic carbon, if desired, see USEPA Method 440, NEMI method #: 440. DL: .01 mg/L.
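Two of the derived quantities can be illustrated with simple formulas. TOC by difference (TC minus IC) is stated explicitly in the dic and tpc method descriptions above; the hardness formula is the widely used convention (hardness as CaCO3 = 2.497·Ca + 4.118·Mg, concentrations in mg/L) and is only an assumption here, since the NWIS "algor" algorithms are not documented:

```python
def toc_from_tc_ic(tc_mg_l, ic_mg_l):
    # Total organic carbon by difference, as described for the
    # dic and tpc methods: TOC = TC - IC (mg/L).
    return tc_mg_l - ic_mg_l

def hardness_mg_caco3(ca_mg_l, mg_mg_l):
    # Conventional hardness as CaCO3 from calcium and magnesium
    # concentrations (mg/L). Illustrative only; may not match the
    # undocumented NWIS algorithm for parameter 00900.
    return 2.497 * ca_mg_l + 4.118 * mg_mg_l
```

For example, Ca = 30 mg/L and Mg = 10 mg/L give a hardness of about 116 mg/L as CaCO3.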
Short Name
TL-USGS-WEBB Data
Version Number
15