Monday, May 9, 2016

Spectral Signature Analysis & Resource Modeling

Introduction

The purpose of this lab was to have us create images showing spectral reflectance and then interpret and understand those images.

Methods

To get started, I created a Lab 8 folder within my personal folder to stay organized. Part 1 was spectral signature analysis. We worked with a Landsat ETM+ image covering Eau Claire, Wisconsin, and measured and plotted the spectral reflectance of 12 materials and surfaces from the image, including:


  • Standing Water
  • Moving Water
  • Vegetation
  • Riparian Vegetation
  • Crops
  • Urban Grass
  • Dry Soil (uncultivated)
  • Moist Soil (uncultivated)
  • Rock
  • Asphalt Highway
  • Airport Runway
  • Concrete Surface (Parking Lot)

To do this we used an image called Eau_Claire_2000.img and digitized the various surfaces listed above. This proved to be rather difficult because the image became very pixelated as you zoomed in; I actually had to use Google Maps to help me identify some features. To "digitize" the surfaces we opened Erdas Imagine with the Eau Claire image, clicked the Home tab, then Drawing, and then the Polygon tool. We used the polygon tool to digitize each individual surface. Then, to show the mean plot of each surface, I clicked the Raster tab, then the Supervised button, and then Signature Editor. For each digitized surface you click "Create New Signature(s) from AOI" and then rename the class after the feature you just digitized. To show the graph of that surface's reflectance, I clicked "Display Mean Plot Window." This same process was repeated for all 12 surfaces. Below is an image of the mean plot window of each surface.

Figure 1: Signature Mean Plot of all 12 surfaces

Each surface reflects electromagnetic energy in its own way. For example, the plots for the vegetation classes tend to be very similar, with low reflectance in the red band and very high reflectance in the near infrared band. The low red reflectance occurs because plants absorb red light to drive photosynthesis. The high infrared reflectance occurs because, if that energy were absorbed, it would cause a great deal of damage to the plant's cells.
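The "Mean Plot" shown by the Signature Editor is essentially the per-band average of the pixels inside each digitized AOI. A minimal sketch of that calculation in Python/NumPy (the toy image and AOI mask here are made-up stand-ins, not the lab data):

```python
import numpy as np

def signature_mean(image, aoi_mask):
    """image: (bands, rows, cols) array; aoi_mask: (rows, cols) boolean AOI.
    Returns the mean pixel value in each band over the AOI pixels."""
    pixels = image[:, aoi_mask]    # shape (bands, n_aoi_pixels)
    return pixels.mean(axis=1)     # one mean per band -> the "mean plot"

# Tiny 3-band, 4x4 toy image (stand-in for a Landsat ETM+ subset)
image = np.arange(48, dtype=float).reshape(3, 4, 4)

# A 2x2 "digitized polygon" rasterized to a boolean mask
aoi = np.zeros((4, 4), dtype=bool)
aoi[1:3, 1:3] = True

means = signature_mean(image, aoi)
```

Plotting `means` against band number gives the same kind of curve the Mean Plot Window displays for each class.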

Part 2 of the lab covered resource monitoring, such as vegetation health monitoring and soil health monitoring. To set up a Normalized Difference Vegetation Index (NDVI) we first added the ec_cpw_2000.img image. I then clicked the Raster tab, then Unsupervised, then NDVI. I set up the parameters and named the output image ec_cpw_2000ndvi.img. The process was run, and below is the output I obtained.

Figure 2: Image showing difference of healthy vegetation to unhealthy vegetation

The darker parts of the image (dark gray and black) should be the healthier vegetation due to the strong absorption by the water content of healthy vegetation. The lighter areas should then be the unhealthy, dry vegetation, which reflects much more light than healthy vegetation.
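The index itself is the standard normalized band ratio NDVI = (NIR - Red) / (NIR + Red); vegetation with high NIR and low red reflectance scores toward +1. A small sketch of the computation with made-up band arrays (this is the formula the tool applies, not the tool's own code):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), with divide-by-zero guarded."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Made-up reflectance values for two bands of a 2x2 scene
nir = np.array([[0.5, 0.4], [0.1, 0.0]])
red = np.array([[0.1, 0.1], [0.1, 0.0]])
result = ndvi(nir, red)
```

High-NIR/low-red pixels (top row) come out strongly positive, while the bare or zero-signal pixels sit near 0.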

The process for soil health monitoring was much the same as the process explained above. Using the image ec_cpw_2000.img, I clicked the Raster tab, then Unsupervised, then Indices. After setting the parameters to "Ferrous Minerals," I set the output image to be named ec_cpw_2000fm.img. After running the process, the image below is the output I obtained.

Figure 3: Image of Ferrous minerals in Eau Claire area

The ferrous mineral values seem to change more dramatically as you leave the city of Eau Claire and head out into more open areas.
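As far as I know, the Ferrous Minerals option in the Indices tool is a simple band ratio of middle-infrared to near-infrared reflectance (SWIR / NIR), with higher values suggesting more iron-bearing soil or rock. A sketch of that ratio with made-up band arrays:

```python
import numpy as np

def ferrous_minerals(swir, nir):
    """Ferrous minerals band ratio: SWIR / NIR, guarding against divide-by-zero."""
    swir = swir.astype(float)
    nir = nir.astype(float)
    out = np.zeros_like(swir)
    np.divide(swir, nir, out=out, where=nir != 0)
    return out

# Made-up reflectance values for a 2x2 scene
swir = np.array([[2.0, 1.0], [0.5, 1.0]])
nir = np.array([[1.0, 2.0], [1.0, 0.0]])
fm = ferrous_minerals(swir, nir)
```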


Monday, May 2, 2016

Photogrammetry

Goals and Objectives

The purpose of this lab was to have us understand the calculation processes involved in correcting an image, including:
  • photographic scales
  • measurement of areas and perimeters of features
  • calculating relief displacement
It also introduced us to stereoscopy and to performing orthorectification on satellite images.

Part 1: Scales, Measurement and relief displacement

Section 1: Calculating scale of nearly vertical aerial photographs

The first question of this lab was "What is the scale of the aerial photograph (Figure 1)?" We were told that the real-life distance between points A and B was 8,822.47 feet. With this information we then had to measure the distance between points A and B with a ruler. Using the equation S = pd/gd (Scale = photo distance / ground distance) and my measurement of 2.625 inches, I found the scale of this picture to be around 1:40,333 (this could differ slightly for someone else depending on how accurately they measured the distance between the two points with their ruler).
Figure 1: Image of Eau Claire, with points A and B
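The arithmetic behind that answer is just converting the ground distance into the same units as the photo distance and dividing. A quick check, using my 2.625-inch ruler measurement (the one input that could vary from person to person):

```python
# Given ground distance between points A and B, and my measured photo distance
ground_ft = 8822.47   # real-world distance (feet), given in the lab
photo_in = 2.625      # my ruler measurement on the photo (inches)

ground_in = ground_ft * 12              # put both distances in inches
scale_denominator = ground_in / photo_in
print(f"Scale is about 1:{scale_denominator:,.0f}")
```

This comes out near 1:40,331; the small difference from my reported 1:40,333 is just rounding in the hand calculation.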

Question 2 also asked "What is the scale of the photograph?" but gave different information. We were told that the photo was taken at an altitude of 20,000 feet above sea level with a lens focal length of 152 mm, and that the elevation of Eau Claire County is 796 feet. This time, using the equation S = f/(H - h) (Scale = focal length / (altitude above sea level (ASL) - elevation of terrain)), I plugged the given numbers into the equation and solved it. I found the scale to be 1:38,515.
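The same calculation can be checked numerically; the only subtlety is converting the flying height above terrain into millimeters to match the focal length:

```python
# Given values from the lab
focal_mm = 152.0        # lens focal length (mm)
altitude_ft = 20000.0   # flying altitude above sea level (ft)
terrain_ft = 796.0      # terrain elevation of Eau Claire County (ft)

# S = f / (H - h); convert (H - h) to mm so the units cancel (1 ft = 304.8 mm)
flying_height_mm = (altitude_ft - terrain_ft) * 304.8
scale_denominator = flying_height_mm / focal_mm
```

This works out to roughly 1:38,509; the gap from my 1:38,515 answer comes from rounding during the unit conversion in the hand calculation.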

Section 2: Measurement of areas of features on aerial photographs

For this section we displayed the image ec_west-se.img and calculated the area and perimeter of the lagoon in the image. We did this by using a polygon measuring tool and clicking around the border of the lagoon. I double-clicked to complete the "digitizing" of the lagoon, and the tool calculated the area and perimeter all at once.
Figure 2: Image showing lagoon with a red X.
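Conceptually, what the measuring tool computes when you close the polygon is a shoelace-formula area plus the summed lengths of the edges. A sketch of that math (the square below is a stand-in for the lagoon's actual vertex coordinates):

```python
import math

def area_perimeter(vertices):
    """vertices: list of (x, y) map coordinates of a closed polygon.
    Returns (area, perimeter) via the shoelace formula and edge lengths."""
    n = len(vertices)
    twice_area = 0.0
    perimeter = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]   # wrap around to close the polygon
        twice_area += x1 * y2 - x2 * y1  # shoelace cross terms
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(twice_area) / 2.0, perimeter

# A made-up 10x10 square "digitized" polygon
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
area, perim = area_perimeter(square)
```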

Section 3: Calculating relief displacement from object Height

Relief displacement is an issue in some images that makes taller objects appear to lean away from the principal point. We opened the image Relief displacement_1.jpg and were told that the height of the aerial camera above the datum was 3,980 feet at the time the photo was taken, and that the scale of the aerial photograph is 1:3,209. To find the displacement of the smoke stack in the image, I used a ruler to measure the height of the smoke stack and then found its real-world height. I then measured the radial distance between the principal point and the top of the smoke stack to figure out how much to correct the displacement.

Figure 3: Image used to correct the displacement of the smoke stack (labeled A)
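The standard relief displacement formula behind this correction is d = h × r / H, where h is the object's real-world height, r is the radial distance from the principal point to the object's top, and H is the flying height above the datum. The stack height and radial distance below are hypothetical stand-ins, since only H = 3,980 ft was given:

```python
# Relief displacement: d = h * r / H
H_ft = 3980.0   # flying height above the datum (given in the lab)
h_ft = 160.0    # HYPOTHETICAL smoke stack height (real value came from my ruler)
r_in = 2.0      # HYPOTHETICAL radial distance on the photo (inches)

d_in = h_ft * r_in / H_ft   # displacement on the photo, in photo units (inches)
```

The object is then shifted by `d_in` back toward the principal point to correct the lean.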


Part 2: Stereoscopy

During this part of the lab we generated three-dimensional images using an elevation model and then visually evaluated aerial photographs for relief displacement. Using Erdas, I brought the image ec_city.img, which has a spatial resolution of 1 meter, into one viewer, and eau_clair_quad.img, which also has a spatial resolution of 1 meter, into the other viewer. Looking closely at both images and comparing the two, you could clearly tell which one had relief displacement: ec_city.img. I could tell by looking at the smoke stack near the hospital and at Hibbard Hall. Both of these structures are very tall and were therefore displaced more than other objects.

Section 1: Creation of anaglyph image with the use of a digital elevation model (DEM)

An anaglyph is a neat image that displays data in three dimensions as long as you have red/cyan anaglyph glasses. I brought the image ec_city.img into Erdas Imagine; again, it has a 1-meter spatial resolution. In a second viewer I brought in ec_dem2.img, which has a spatial resolution of 10 meters; both images are of the Eau Claire area. To create an anaglyph I clicked Terrain > Anaglyph to open the Anaglyph Generation tool. The input image was ec_city.img, and the output anaglyph was named ec_anaglyph_sec1.img. All other parameters were accepted. After the process finished running, I opened the image I had just created. You could see some elevation changes in the Eau Claire area, but it wasn't entirely correct or accurate.
Figure 4: Small section of the ec_anaglyph_sect1 image I created.
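Conceptually, an anaglyph pairs the image with a horizontally shifted copy whose shift is proportional to elevation, putting one in the red channel and the other in green/blue so the glasses deliver a different view to each eye. This is only a rough stand-in for what the Erdas tool actually does, with made-up arrays:

```python
import numpy as np

def simple_anaglyph(gray, dem, max_shift=3):
    """gray: (rows, cols) grayscale image; dem: (rows, cols) elevations.
    Builds a crude red/cyan anaglyph by shifting pixels by a DEM-scaled
    parallax. Ignores occlusion; purely illustrative."""
    rows, cols = gray.shape
    dem_norm = (dem - dem.min()) / max(np.ptp(dem), 1e-9)  # scale to 0..1
    shifts = np.round(dem_norm * max_shift).astype(int)
    shifted = np.zeros_like(gray)
    for r in range(rows):
        for c in range(cols):
            c2 = min(cols - 1, c + shifts[r, c])  # parallax shift
            shifted[r, c2] = gray[r, c]
    # red channel = original view, green/blue (cyan) = shifted view
    return np.dstack([gray, shifted, shifted])

# Made-up flat scene: flat terrain produces zero parallax
gray = np.ones((4, 4))
dem = np.zeros((4, 4))
rgb = simple_anaglyph(gray, dem)
```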

Section 2: Creation of anaglyph image with the use of a LiDAR derived surface model (DSM)

For this section of the lab we essentially did the same thing as above but used eau_claire_quad.img, which has a 1-meter spatial resolution, and EC_DSM2m.img, which has a spatial resolution of 2 meters. These two images were the inputs, and I named the output ec_anaglyph_sec2.img.
Figure 5: Small section of the ec_anaglyph_sec2 image I created

Part 3: Orthorectification

Orthorectification is the process of removing the effects of image perspective (tilt) and relief (terrain) for the purpose of creating a planimetrically correct image.

Section 1: Create a new project

With Erdas Imagine open, the images spot_pan.img and spot_panb.img are in the same viewer. These images need to be orthorectified. To do this I open the LPS Project Manager and create a new block file. The output is saved as Sat_ortho in our own folder. When the Model Setup opens I choose Polynomial-based Pushbroom and select SPOT Pushbroom. Once the Block Property Setup opens I select the Horizontal Reference Coordinate System and set it to UTM Zone 11.


Section 2: Add imagery to the Block and Define Sensor Model

The spot_pan image is brought in and verified.

Section 3: Activate Point Measurement tool and collect GCPs

First I clicked "Start point measurement tool" to activate the tool. Both images, xs_ortho and spot_pan, are displayed. A GCP is first added to the xs_ortho image and then at the exact same location on the spot_pan image. Making sure each point is where it is supposed to be, this process continues for the next 11 GCPs.
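What these GCP pairs ultimately feed is a least-squares estimate of the transformation from raw image coordinates to reference coordinates. LPS uses a rigorous pushbroom sensor model, but a simple 2-D affine fit illustrates the idea; every coordinate below is invented for the example:

```python
import numpy as np

# HYPOTHETICAL GCP pairs: (col, row) in the raw image vs. the reference image
src = np.array([(10, 10), (200, 15), (20, 180), (210, 190)], dtype=float)
dst = np.array([(70, 120), (450, 130), (90, 460), (470, 480)], dtype=float)

# Design matrix [x, y, 1]; solve dst ~ A @ coeffs by least squares
A = np.hstack([src, np.ones((len(src), 1))])
coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3x2 affine coefficients

# Residuals at the GCPs summarize fit quality (what LPS reports as RMSE)
pred = A @ coeffs
rmse = np.sqrt(np.mean(np.sum((pred - dst) ** 2, axis=1)))
```

With the fit in hand, any raw-image pixel can be projected into the reference frame, which is the essence of rectification; the real sensor model just replaces the affine with physically meaningful orbit and scan-geometry parameters.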

Section 4: Set Type and Usage, add a 2nd image to the block and collect its GCPs

Another image is added to collect tie points. The Type column is updated to "Full" and the Usage column is updated to "Control." Now points are collected in the image spot_panb based on where points were already collected in spot_pan: Point IDs 1, 2, 5, 6, 8, 9, and 12. The other Point IDs are left out because they were not located on spot_panb.

Section 5: Automatic tie point collection, triangulation and ortho resample

This section completes the orthorectification process for the two images in the block, spot_pan and spot_panb. Each point is checked and corrected through a number of different parameters. Once orthorectification is complete, both images are brought into Erdas Imagine in the same viewer and are now perfectly overlaid on each other.

Figure 6: Orthorectified images