In this tutorial, we'll learn how to use xROI, an interactive open-source toolkit
that facilitates the process of time series extraction and improves the quality
of the final data. The xROI package provides a responsive environment for
scientists to interactively:
a) delineate regions of interest (ROIs),
b) handle field of view (FOV) shifts, and
c) extract and export time series data characterizing color-based metrics.
Using the xROI R package, the user can detect FOV shifts with minimal difficulty.
The software gives the user the opportunity to readjust mask files or draw new
ones whenever an FOV shift occurs.
xROI Design
The R language and the Shiny package were used as the main development tools for xROI,
while Markdown, HTML, CSS, and JavaScript were used to improve interactivity. While
Shiny apps are primarily used for web-based applications deployed online, the package
authors used Shiny for its graphical user interface capabilities. In other words, both
the User Interface (UI) and server modules run locally from the same machine, so no
internet connection is required (after installation). xROI's UI element presents a
side-panel for data entry and
three main tab-pages, each responsible for a specific task. The server-side
element consists of R and bash scripts. Image processing and geospatial operations
are performed using the Geospatial Data Abstraction Library (GDAL) and the
rgdal and raster R packages.
Install xROI
The latest release of xROI can be directly downloaded and installed from the development GitHub repository.
# install devtools first
utils::install.packages('devtools', repos = "http://cran.us.r-project.org" )
# use devtools to install from GitHub
devtools::install_github("bnasr/xROI")
xROI depends on many R packages including: raster, rgdal, sp, jpeg,
tiff, shiny, shinyjs, shinyBS, shinyAce, shinyTime, shinyFiles,
shinydashboard, shinythemes, colourpicker, rjson, stringr, data.table,
lubridate, plotly, moments, and RCurl. All required libraries and
packages are installed automatically along with xROI. The package
offers a fully interactive high-level interface as well as a set of low-level
functions for ROI processing.
Launch xROI
A comprehensive user manual for low-level image processing using xROI is available from
GitHub xROI.
While the user manual includes a set of examples for each function, here we
will learn to use the graphical interactive mode.
Calling the Launch() function, as we'll do below, opens up the interactive
mode in your operating system's default web browser. The landing page offers an
example dataset for exploring the different modules, or you can upload a new
dataset of images. The interactive mode can be launched from an interactive R environment.
# load xROI
library(xROI)
# launch xROI
Launch()
Or from the command line (e.g., bash in Linux, Terminal in macOS, or Command
Prompt in Windows) where an R engine is already installed:
Rscript -e "xROI::Launch(Interactive = TRUE)"
End xROI
When you are done with the xROI interface you can close the tab in your browser
and end the session in R by using one of the following options:
In RStudio: Press the <Esc> key on your keyboard.
In R Terminal: Press <Ctrl + C> on your keyboard.
Use xROI
To get some hands-on experience with xROI, we can analyze images from the
dukehw camera
of the PhenoCam network.
First, save and extract (unzip) the file on your computer.
Second, open the dataset in xROI by setting the file path to your data:
# launch xROI with the data
# first edit the path below to the downloaded directory you just extracted
xROI::Launch('/path/to/extracted/directory')
# alternatively, you can run without specifying a path and use the interface to browse
Now, draw an ROI and fill in the corresponding metadata.
Then, save the metadata and explore its content.
Now we can explore whether there is any FOV shift in the dataset using the CLI processor tab.
Finally, we can go to the Time series extraction tab. Extract the time-series. Save the output and explore the dataset in R.
Challenge: Use xROI
Let's use xROI on a somewhat more challenging site with field of view shifts.
Before we start the image processing steps, let's read in and plot an image. This
image is an example image that comes with the hazer package.
# read the path to the example image
jpeg_file <- system.file(package = 'hazer', 'pointreyes.jpg')
# read the image as an array
rgb_array <- jpeg::readJPEG(jpeg_file)
# plot the RGB array on the active device panel
# first set the margin in this order:(bottom, left, top, right)
par(mar=c(0,0,3,0))
plotRGBArray(rgb_array, bty = 'n', main = 'Point Reyes National Seashore')
When we work with images, the data are generally handled at the scale of each
individual pixel. Therefore, for large images we will be working with
large matrices that hold the value for each pixel. Keep this in mind before opening
some of the matrices we'll be creating in this tutorial, as they can take a while
to load.
Histogram of RGB channels
A histogram of the colors can be useful for understanding what our image is made
up of. Using the density() function from the base stats package, we can extract the
density distribution of each color channel.
# color channels can be extracted from the matrix
red_vector <- rgb_array[,,1]
green_vector <- rgb_array[,,2]
blue_vector <- rgb_array[,,3]
# plotting
par(mar=c(5,4,4,2))
plot(density(red_vector), col = 'red', lwd = 2,
main = 'Density function of the RGB channels', ylim = c(0,5))
lines(density(green_vector), col = 'green', lwd = 2)
lines(density(blue_vector), col = 'blue', lwd = 2)
Using hazer we can also extract three basic elements of an RGB image:
Brightness
Darkness
Contrast
Brightness
The brightness matrix comes from the maximum value of the R, G, or B channel. We
can extract and show the brightness matrix using the getBrightness() function.
# extracting the brightness matrix
brightness_mat <- getBrightness(rgb_array)
# unlike the RGB array, which has three dimensions, the brightness matrix has only
# two dimensions and can be shown as a grayscale image;
# we can do this using the same plotRGBArray function
par(mar=c(0,0,3,0))
plotRGBArray(brightness_mat, bty = 'n', main = 'Brightness matrix')
Here the grayscale is used to show the value of each pixel's maximum brightness
of the R, G or B color channel.
To extract a single brightness value for the image, we can compute summary
statistics or simply take the mean of this matrix, depending on our needs.
Why are we getting so many pixels in the high range of the brightness? Where
does this correspond to on the RGB image?
Darkness
Darkness is determined by the minimum of the R, G or B color channel.
Similarly, we can extract and show the darkness matrix using the getDarkness() function.
# extracting the darkness matrix
darkness_mat <- getDarkness(rgb_array)
# the darkness matrix also has two dimensions and can be shown as a grayscale image
par(mar=c(0,0,3,0))
plotRGBArray(darkness_mat, bty = 'n', main = 'Darkness matrix')
# main quantiles
quantile(darkness_mat)
#> 0% 25% 50% 75% 100%
#> 0.03529412 0.23137255 0.36470588 0.47843137 0.83529412
# histogram
par(mar=c(5,4,4,2))
hist(darkness_mat)
Contrast
The contrast of an image is the difference between its darkness and brightness.
The contrast matrix is calculated as the difference between the brightness and
darkness matrices.
The contrast of the image can quickly be extracted using the getContrast() function.
# extracting the contrast matrix
contrast_mat <- getContrast(rgb_array)
# the contrast matrix is also two-dimensional and can be shown as a grayscale image
par(mar=c(0,0,3,0))
plotRGBArray(contrast_mat, bty = 'n', main = 'Contrast matrix')
# main quantiles
quantile(contrast_mat)
#> 0% 25% 50% 75% 100%
#> 0.0000000 0.1450980 0.2470588 0.3333333 0.4509804
# histogram
par(mar=c(5,4,4,2))
hist(contrast_mat)
Image fogginess & haziness
Haziness of an image can be estimated using the getHazeFactor() function. This
function is based on the method described in
Mao et al. (2014).
The technique was originally developed for "detecting foggy images and
estimating the haze degree factor" for a wide range of outdoor conditions.
The function returns a vector of two numeric values:
haze as the haze degree and
A0 as the global atmospheric light, as explained in the original paper.
The PhenoCam standards classify any image with a haze degree greater
than 0.4 as significantly foggy.
Download and extract the zip file to be used as input data for the following step.
# to download via R
dir.create('data')
#> Warning in dir.create("data"): 'data' already exists
destfile = 'data/pointreyes.zip'
download.file(destfile = destfile, mode = 'wb', url = 'http://bit.ly/2F8w2Ia')
unzip(destfile, exdir = 'data')
# set up the input image directory
#pointreyes_dir <- '/path/to/image/directory/'
pointreyes_dir <- 'data/pointreyes/'
# get a list of all .jpg files in the directory
pointreyes_images <- dir(path = pointreyes_dir,
pattern = '*.jpg',
ignore.case = TRUE,
full.names = TRUE)
Now we can use a for loop to process all of the images to get the haze and A0
values.
(Note: this loop may take a while to run.)
# number of images
n <- length(pointreyes_images)
# create an empty matrix to fill with haze and A0 values
haze_mat <- data.table()
# the process takes a while, so a progress bar lets us know it is working
pb <- txtProgressBar(0, n, style = 3)
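The loop body is not shown above; a minimal sketch (assuming the 0.4 haze threshold from the PhenoCam standard flags an image as foggy) might look like this:
for(i in 1:n) {
  image_path <- pointreyes_images[i]
  img <- jpeg::readJPEG(image_path)
  hz <- getHazeFactor(img)
  # append the file name, haze, A0, and a foggy flag to the table
  haze_mat <- rbind(haze_mat,
                    data.table(file = image_path,
                               haze = as.numeric(hz[1]),
                               A0 = as.numeric(hz[2]),
                               foggy = as.numeric(hz[1]) > 0.4))
  setTxtProgressBar(pb, i)
}
close(pb)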
Now we can copy the foggy images to a new folder, keeping them separate from
the non-foggy images that we want to analyze.
# identify directory to move the foggy images to
foggy_dir <- paste0(pointreyes_dir, 'foggy')
clear_dir <- paste0(pointreyes_dir, 'clear')
# create the new directories if they do not already exist
dir.create(foggy_dir, showWarnings = FALSE)
dir.create(clear_dir, showWarnings = FALSE)
# copy the files to the new directories
file.copy(haze_mat[foggy==TRUE,file], to = foggy_dir)
#> [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [15] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [29] FALSE FALSE
file.copy(haze_mat[foggy==FALSE,file], to = clear_dir)
#> [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [15] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [29] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
Now that we have our images separated, we can get the full list of haze
values only for those images that are not classified as "foggy".
# this is an alternative approach instead of a for loop
# loading all the images as a list of arrays
pointreyes_clear_images <- dir(path = clear_dir,
pattern = '*.jpg',
ignore.case = TRUE,
full.names = TRUE)
img_list <- lapply(pointreyes_clear_images, FUN = jpeg::readJPEG)
# getting the haze value for the list
# patience - this takes a bit of time
haze_list <- t(sapply(img_list, FUN = getHazeFactor))
# view first few entries
head(haze_list)
#> haze A0
#> [1,] 0.224981 0.6970257
#> [2,] 0.2339372 0.6826148
#> [3,] 0.231294 0.7009978
#> [4,] 0.2297961 0.6813884
#> [5,] 0.2152078 0.6949932
#> [6,] 0.345584 0.6789334
We can then use these values for further analyses and data correction.
This tutorial covers downloading NEON data, using the Data Portal and
either the neonUtilities R package or the neonutilities Python package,
as well as basic instruction in beginning to explore and work with the
downloaded data, including guidance in navigating data documentation. We
will explore data of 3 different types, and make a simple figure from
each.
NEON data
There are 3 basic categories of NEON data:
Remote sensing (AOP) - Data collected by the airborne observation
platform, e.g. LIDAR, surface reflectance
Observational (OS) - Data collected by a human in the field, or in
an analytical laboratory, e.g. beetle identification, foliar
isotopes
Instrumentation (IS) - Data collected by an automated, streaming
sensor, e.g. net radiation, soil carbon dioxide. This category also
includes the surface-atmosphere exchange (SAE) data, which are processed
and structured in a unique way, distinct from other instrumentation data
(see the introductory
eddy
flux data tutorial for details).
This lesson covers all three types of data. The download procedures
are similar for all types, but data navigation differs significantly by
type.
Objectives
After completing this activity, you will be able to:
Download NEON data using the neonUtilities package.
Understand downloaded data sets and load them into R or Python for
analyses.
Things You’ll Need To Complete This Tutorial
You can follow either the R or Python code throughout this tutorial.
For R users, we recommend using R version 4+ and RStudio.
For Python users, we recommend using Python 3.9+.
Set up: Install Packages
Packages only need to be installed once; you can skip this step after
the first time:
R
neonUtilities: Basic functions for accessing NEON data.
neonOS: Functions for common data wrangling needs for NEON observational data.
terra: Spatial data package; needed for working with remote sensing data.
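The R packages can be installed from CRAN; a typical one-time setup might look like this (a sketch, assuming the default CRAN mirror):
# install the R packages used in this tutorial
install.packages("neonUtilities")
install.packages("neonOS")
install.packages("terra")
Python
The neonutilities Python package can be installed from PyPI (pip install neonutilities). Once installed, import the packages used in this tutorial: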
import neonutilities as nu
import os
import rasterio
import pandas as pd
import matplotlib.pyplot as plt
Getting started: Download data from the Portal
Go to the NEON
Data Portal and download some data! To follow the tutorial exactly,
download Photosynthetically active radiation (PAR) (DP1.00024.001) data
from September-November 2019 at Wind River Experimental Forest (WREF).
The downloaded file should be a zip file named NEON_par.zip.
If you prefer to explore a different data product, you can still
follow this tutorial. But it will be easier to understand the steps in
the tutorial, particularly the data navigation, if you choose a sensor
data product for this section.
Once you’ve downloaded a zip file of data from the portal, switch
over to R or Python to proceed with coding.
Stack the downloaded data files: stackByTable()
The stackByTable() (or stack_by_table())
function will unzip and join the files in the downloaded zip file.
R
# Modify the file path to match the path to your zip file
stackByTable("~/Downloads/NEON_par.zip")
Python
# Modify the file path to match the path to your zip file
nu.stack_by_table(os.path.expanduser("~/Downloads/NEON_par.zip"))
In the directory where the zipped file was saved, you should now have
an unzipped folder of the same name. When you open this you will see a
new folder called stackedFiles, which should contain at
least seven files: PARPAR_30min.csv,
PARPAR_1min.csv, sensor_positions.csv,
variables_00024.csv, readme_00024.txt,
issueLog_00024.csv, and
citation_00024_RELEASE-202X.txt.
Navigate data downloads: IS
Let’s start with a brief description of each file. This set of files
is typical of a NEON IS data product.
PARPAR_30min.csv: PAR data at 30-minute averaging
intervals
PARPAR_1min.csv: PAR data at 1-minute averaging
intervals
sensor_positions.csv: The physical location of each
sensor collecting PAR measurements. There is a PAR sensor at each level
of the WREF tower, and this table lets you connect the tower level index
to the height of the sensor in meters.
variables_00024.csv: Definitions and units for each
data field in the PARPAR_#min tables.
readme_00024.txt: Basic information about the PAR
data product.
issueLog_00024.csv: A record of known issues
associated with PAR data.
citation_00024_RELEASE-202X.txt: The citation to
use when you publish a paper using these data, in BibTeX format.
We’ll explore the 30-minute data. To read the file, use the function
readTableNEON() or read_table_neon(), which
uses the variables file to assign data types to each column of data:
Python
par30 = nu.read_table_neon(
data_file=os.path.expanduser(
"~/Downloads/NEON_par/stackedFiles/PARPAR_30min.csv"),
var_file=os.path.expanduser(
"~/Downloads/NEON_par/stackedFiles/variables_00024.csv"))
# Open the par30 table in the table viewer of your choice
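For reference, the equivalent call in R is along these lines (a sketch, assuming the same download location):
R
# read the 30-minute PAR table, using the variables file to set column types
par30 <- readTableNEON(
  dataFile="~/Downloads/NEON_par/stackedFiles/PARPAR_30min.csv",
  varFile="~/Downloads/NEON_par/stackedFiles/variables_00024.csv")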
The first four columns are added by stackByTable() when
it merges files across sites, months, and tower heights. The column
publicationDate is the date-time stamp indicating when the
data were published, and the release column indicates which
NEON data release the data belong to. For more information about NEON
data releases, see the
Data
Product Revisions and Releases page.
Information about each data column can be found in the variables
file, where you can see definitions and units for each column of
data.
Plot PAR data
Now that we know what we’re looking at, let’s plot PAR from the top
tower level. We’ll use the mean PAR from each averaging interval, and we
can see from the sensor positions file that the vertical index 080
corresponds to the highest tower level. To explore the sensor positions
data in more depth, see the
spatial
data tutorial.
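A sketch of such a plot in Python, assuming the par30 table from above (verticalPosition is stored as a character code, and the column names follow the variables file):
# subset to the top tower level and plot mean PAR over time
par_top = par30[par30.verticalPosition == "080"]
fig, ax = plt.subplots()
ax.plot(par_top.startDateTime, par_top.PARMean)
ax.set_xlabel("Date")
ax.set_ylabel("Mean PAR")
plt.show()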
We can see there is a lot of light attenuation through the
canopy.
Download files and load directly to R:
loadByProduct()
At the start of this tutorial, we downloaded data from the NEON data
portal. NEON also provides an API, and the neonUtilities (R) and
neonutilities (Python) packages provide methods for downloading programmatically.
The steps we carried out above - downloading from the portal,
stacking the downloaded files, and reading in to R or Python - can all
be carried out in one step by the neonUtilities function
loadByProduct().
To get the same PAR data we worked with above, we would run this line
of code using loadByProduct():
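A call along these lines retrieves the same data (a sketch; adjust the arguments as needed):
R
parlist <- loadByProduct(dpID="DP1.00024.001",
                         site="WREF",
                         startdate="2019-09",
                         enddate="2019-11")
Python
parlist = nu.load_by_product(dpid="DP1.00024.001",
                             site="WREF",
                             startdate="2019-09",
                             enddate="2019-11")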
The object returned by loadByProduct() in R is a named
list, and the object returned by load_by_product() in
Python is a dictionary. The objects contained in the list or dictionary
are the same set of tables we ended with after stacking the data from
the portal above. You can see this by checking the names of the tables
in parlist:
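R
# list the tables contained in the returned list
names(parlist)
Python
# list the tables contained in the returned dictionary
list(parlist.keys())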
Now let’s walk through the details of the inputs and options in
loadByProduct().
This function downloads data from the NEON API, merges the
site-by-month files, and loads the resulting data tables into the
programming environment, assigning each data type to the appropriate
class. This is a popular choice for NEON data users because it ensures
you’re always working with the latest data, and it ends with
ready-to-use tables. However, if you use it in a workflow you run
repeatedly, keep in mind it will re-download the data every time. See
below for suggestions on saving the data locally to save time and
compute resources.
loadByProduct() works on most observational (OS) and
sensor (IS) data, but not on surface-atmosphere exchange (SAE) data,
remote sensing (AOP) data, and some of the data tables in the microbial
data products. For functions that download AOP data, see the final
section in this tutorial. For functions that work with SAE data, see the
NEON
eddy flux data tutorial.
The inputs to loadByProduct() control which data to
download and how to manage the processing:
dpID: the data product ID, e.g. DP1.00002.001
site: defaults to “all”, meaning all sites with
available data; can be a vector of 4-letter NEON site codes, e.g.
c("HARV","CPER","ABBY") (or
["HARV","CPER","ABBY"] in Python)
startdate and enddate: defaults to NA,
meaning all dates with available data; or a date in the form YYYY-MM,
e.g. 2017-06. Since NEON data are provided in month packages, finer
scale querying is not available. Both start and end date are
inclusive.
package: either basic or expanded data package.
Expanded data packages generally include additional information about
data quality, such as chemical standards and quality flags. Not every
data product has an expanded package; if the expanded package is
requested but there isn’t one, the basic package will be
downloaded.
timeIndex: defaults to “all”, to download all data; or
the number of minutes in the averaging interval. Only applicable to IS
data.
release: Specify a NEON data release to download.
Defaults to the most recent release plus provisional data. See the
release
tutorial for more information.
include.provisional: T or F: should Provisional data be
included in the download? Defaults to F to return only Released data,
which are citable by a DOI and do not change over time. Provisional data
are subject to change.
check.size: T or F: should the function pause before
downloading data and warn you about the size of your download? Defaults
to T; if you are using this function within a script or batch process
you will want to set it to F.
token: Optional NEON API token for faster downloads.
See
this
tutorial for instructions on using a token.
nCores: Number of cores to use for parallel processing.
Defaults to 1, i.e. no parallelization. Only available in R.
forceParallel: If the data volume to be processed does
not meet the minimum requirements to run in parallel, setting this to TRUE
forces parallel processing anyway. Only available in R.
progress: Set to False to turn off the progress bar.
Only available in Python.
cloud_mode: Can be set to True if you are working in a
cloud environment; enables more efficient data transfer from NEON’s
cloud storage. Only available in Python.
The dpID is the data product identifier of the data you
want to download. The DPID can be found on the
Explore Data Products page. It will be in the form DP#.#####.###.
Download observational data
To explore observational data, we’ll download aquatic plant chemistry
data (DP1.20063.001) from three lake sites: Prairie Lake (PRLA), Suggs
Lake (SUGG), and Toolik Lake (TOOK).
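A call along these lines retrieves the data (a sketch; the expanded package includes the QA tables described below):
R
apchem <- loadByProduct(dpID="DP1.20063.001",
                        site=c("PRLA","SUGG","TOOK"),
                        package="expanded",
                        check.size=F)
Python
apchem = nu.load_by_product(dpid="DP1.20063.001",
                            site=["PRLA","SUGG","TOOK"],
                            package="expanded",
                            check_size=False)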
As we saw above, the object returned by loadByProduct()
is a named list of data frames. Let’s check out what’s the same and
what’s different from the IS data tables.
As with the sensor data, we have some data tables and some metadata
tables. Most of the metadata files are the same as the sensor data:
readme, variables,
issueLog, and citation. These files
contain the same type of metadata here that they did in the IS data
product. Let’s look at the other files:
apl_clipHarvest: Data from the clip harvest
collection of aquatic plants
apl_biomass: Biomass data from the collected
plants
apl_plantExternalLabDataPerSample: Chemistry data
from the collected plants
apl_plantExternalLabQA: Quality assurance data from
the chemistry analyses
asi_externalLabPOMSummaryData: Quality metrics from
the chemistry lab
validation_20063: For observational data, a major
method for ensuring data quality is to control data entry. This file
contains information about the data ingest rules applied to each input
data field.
categoricalCodes_20063: Definitions of each value
for categorical data, such as growth form and sample condition
You can work with these tables from the named list object, but many
people find it easier to extract each table from the list and work with
it as an independent object. To do this, use the list2env()
function in R or globals().update() in Python:
R
list2env(apchem, .GlobalEnv)
## <environment: R_GlobalEnv>
Python
globals().update(apchem)
Save data locally
Keep in mind that using loadByProduct() will re-download
the data every time you run your code. In some cases this may be
desirable, but it can be a waste of time and compute resources. To come
back to these data without re-downloading, you’ll want to save the
tables locally. The most efficient option is to save the named list in
total.
R
saveRDS(apchem,
"~/Downloads/aqu_plant_chem.rds")
Python
# There are a variety of ways to do this in Python; NEON
# doesn't currently have a specific recommendation. If
# you don't have a data-saving workflow you already use,
# we suggest you check out the pickle module.
Then you can re-load the object to a programming environment any
time.
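R
# re-load the saved object in a later session
apchem <- readRDS("~/Downloads/aqu_plant_chem.rds")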
Other options for saving data locally:
Similar to the workflow we started this tutorial with, but using
neonUtilities to download instead of the Portal: Use
zipsByProduct() and stackByTable() instead of
loadByProduct(). With this option, use the function
readTableNEON() to read the files, to get the same column
type assignment that loadByProduct() carries out. Details
can be found in our
neonUtilities
tutorial.
Try out the community-developed neonstore package,
which is designed for maintaining a local store of the NEON data you
use. The neonUtilities function
stackFromStore() works with files downloaded by
neonstore. See the
neonstore
tutorial for more information.
Now let’s explore the aquatic plant data. OS data products are simple
in that the data generally tabular, and data volumes are lower than the
other NEON data types, but they are complex in that almost all consist
of multiple tables containing information collected at different times
in different ways. For example, samples collected in the field may be
shipped to a laboratory for analysis. Data associated with the field
collection will appear in one data table, and the analytical results
will appear in another. Complexity in working with OS data usually
involves bringing data together from multiple measurements or scales of
analysis.
As with the IS data, the variables file can tell you more about the
data tables.
OS data products each come with a Data Product User Guide, which can
be downloaded with the data, or accessed from the document library on
the Data Portal, or the
Product
Details page for the data product. The User Guide is designed to
give a basic introduction to the data product, including a brief summary
of the protocol and descriptions of data format and structure.
Explore isotope data
To get started with the aquatic plant chemistry data, let’s take a
look at carbon isotope ratios in plants across the three sites we
downloaded. The chemical analytes are reported in the
apl_plantExternalLabDataPerSample table, and the table is
in long format, with one record per sample per analyte, so we’ll subset
to only the carbon isotope analyte:
apl13C = apl_plantExternalLabDataPerSample[
apl_plantExternalLabDataPerSample.analyte=="d13C"]
grouped = apl13C.groupby("siteID")["analyteConcentration"]
fig, ax = plt.subplots()
ax.boxplot(x=[group.values for name, group in grouped],
tick_labels=grouped.groups.keys())
plt.show()
We see plants at Suggs and Toolik are quite low in 13C, with more
spread at Toolik than Suggs, and plants at Prairie Lake are relatively
enriched. Clearly the next question is what species these data
represent. But taxonomic data aren’t present in the
apl_plantExternalLabDataPerSample table, they’re in the
apl_biomass table. We’ll need to join the two tables to get
chemistry by taxon.
Every NEON data product has a Quick Start Guide (QSG), and for OS
products it includes a section describing how to join the tables in the
data product. Since it’s a pdf file, loadByProduct()
doesn’t bring it in, but you can view the Aquatic plant chemistry QSG on
the
Product
Details page. The neonOS package uses the information
from the QSGs to provide an automated table-joining function,
joinTableNEON().
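For the aquatic plant tables, the join is a single call; a sketch, assigning the merged table to apct as used in the plotting code below:
R
# join the biomass and chemistry tables following the Quick Start Guide rules
apct <- joinTableNEON(apl_biomass,
                      apl_plantExternalLabDataPerSample)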
Using the merged data, now we can plot carbon isotope ratio for each
taxon.
apl13Cspp = apct[apct.analyte=="d13C"]
grouped = apl13Cspp.groupby("scientificName")["analyteConcentration"]
fig, ax = plt.subplots()
ax.boxplot(x=[group.values for name, group in grouped],
tick_labels=grouped.groups.keys())
ax.tick_params(axis='x', labelrotation=90)
plt.show()
And now we can see most of the sampled plants have carbon isotope
ratios around -30, with just a few species accounting for most of the
more enriched samples.
Download remote sensing data: byFileAOP() and
byTileAOP()
Remote sensing data files are very large, so downloading them can
take a long time. byFileAOP() and byTileAOP()
enable easier programmatic downloads, but be aware it can take a very
long time to download large amounts of data.
Input options for the AOP functions are:
dpID: the data product ID, e.g. DP1.00002.001
site: the 4-letter code of a single site,
e.g. HARV
year: the 4-digit year to download
savepath: the file path you want to download to;
defaults to the working directory
check.size: T or F: should the function pause before
downloading data and warn you about the size of your download? Defaults
to T; if you are using this function within a script or batch process
you will want to set it to F.
easting: byTileAOP() only. Vector of
easting UTM coordinates whose corresponding tiles you want to
download
northing: byTileAOP() only. Vector of
northing UTM coordinates whose corresponding tiles you want to
download
buffer: byTileAOP() only. Size in meters
of buffer to include around coordinates when deciding which tiles to
download
token: Optional NEON API token for faster
downloads.
chunk_size: Only in Python. Sets the chunk size of
chunked downloads, which can improve efficiency in some cases. Defaults to 1
MB.
Here, we’ll download one tile of Ecosystem structure (Canopy Height
Model) (DP3.30015.001) from WREF in 2017.
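In R, the download is a single call; a sketch, where the easting and northing values below are placeholders for the UTM coordinates of the tile you want:
byTileAOP(dpID="DP3.30015.001",
          site="WREF",
          year=2017,
          easting=580000,    # placeholder UTM easting
          northing=5075000,  # placeholder UTM northing
          savepath="~/Downloads")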
In the directory indicated in savepath, you should now
have a folder named DP3.30015.001 with several nested
subfolders, leading to a tif file of a canopy height model tile.
Navigate data downloads: AOP
To work with AOP data, the best bet in R is the terra
package. It has functionality for most analyses you might want to do. In
Python, we’ll use the rasterio package here; explore NEON remote sensing
tutorials for more guidance.
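A minimal sketch of reading and plotting the tile in Python with rasterio (the file path below is a placeholder for the tif nested in your download):
import os
import rasterio
from rasterio.plot import show

# open and display the canopy height model tile (path is a placeholder)
chm_path = os.path.expanduser("~/Downloads/DP3.30015.001/path/to/tile_CHM.tif")
chm = rasterio.open(chm_path)
show(chm, title="WREF Canopy Height Model (m)")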
Now we can see canopy height across the downloaded tile; the tallest
trees are over 60 meters, not surprising in the Pacific Northwest. There
is a clearing or clear cut in the lower right quadrant.
Next steps
Now that you’ve learned the basics of downloading and understanding
NEON data, where should you go to learn more? There are many more NEON
tutorials to explore, including how to align remote sensing and
ground-based measurements, a deep dive into the data quality flagging in
the sensor data products, and much more. For a recommended suite of
tutorials for new users, check out the
Getting
Started with NEON Data tutorial series.
KMeans is an iterative clustering algorithm used to classify unsupervised data (e.g., data without a training set) into a specified number of groups. The algorithm begins with an initial set of randomly determined cluster centers. Each pixel in the image is then assigned to the nearest cluster center (using distance in N-space as the distance metric), and each cluster center is then re-computed as the centroid of all pixels assigned to the cluster. This process repeats until a desired stopping criterion is reached (e.g., a maximum number of iterations).
To visualize how the algorithm works, it's easier to look at a 2D data set. In the example below, watch how the cluster centers shift with progressive iterations.
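As a toy illustration (a sketch using scikit-learn rather than the spectral package used later in this lesson), k-means on a 2D point cloud looks like this:
import numpy as np
from sklearn.cluster import KMeans

# two loose blobs of 2D points
rng = np.random.default_rng(0)
points = np.vstack([rng.normal(0, 1, (50, 2)),
                    rng.normal(5, 1, (50, 2))])

# fit k-means with 2 clusters; labels_ gives each point's cluster
# and cluster_centers_ the final centroids
km = KMeans(n_clusters=2, n_init=10).fit(points)
print(km.cluster_centers_)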
Principal Component Analysis (PCA) - Dimensionality Reduction
Many of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).
To run this notebook, the following Python packages need to be installed. You can install the required packages from the command line: pip install spectral scikit-learn cvxopt,
or if already in a Jupyter Notebook, run the following code in a Notebook code cell.
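# shell escape to install packages from within a notebook cell
!pip install spectral scikit-learn cvxopt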
In order to make use of the interactive graphics capabilities of spectralpython, such as the N-Dimensional Feature Display, you must work in a Python 3.6 environment (as of July 2018).
First, import the required packages and set display preferences:
from spectral import *
import spectral.io.envi as envi
import numpy as np
import matplotlib
# for clean output, suppress warnings; don't use this when developing a script
import warnings
warnings.filterwarnings('ignore')
For this example, we will read in a reflectance tile in ENVI format. NEON provides an h5 plugin for ENVI.
# You will need to download the example dataset above,
# extract the files therein,
# and update the filepaths below per your local machine
img = envi.open('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.hdr',
'/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.dat')
Note that the information is stored differently when read in with envi.open. We can find the wavelength information in img.bands.centers. Let's take a look at the first and last wavelength values:
print('First 3 Band Center Wavelengths:',img.bands.centers[:3])
print('Last 3 Band Center Wavelengths:',img.bands.centers[-3:])
First 3 Band Center Wavelengths: [383.534302, 388.542206, 393.55011]
Last 3 Band Center Wavelengths: [2501.878906, 2506.886719, 2511.894531]
To get a quick look at the img data, use the params method:
img.params
<bound method SpyFile.params of Data Source: '/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.dat'
# Rows: 1000
# Samples: 1000
# Bands: 426
Interleave: BIP
Quantization: 16 bits
Data format: int16>
Metadata information is stored in img.metadata, a dictionary. Let's look at the metadata contents:
md = img.metadata
print('Metadata Contents:')
for item in md:
print('\t',item)
Metadata Contents:
description
samples
lines
bands
data type
interleave
file type
header offset
byte order
map info
coordinate system string
wavelength
fwhm
wavelength units
reflectance scale factor
data ignore value
dataset names
To access any of these metadata items, use the syntax md['description'] or md['map info']:
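For example:
print(md['description'])
print(md['map info'])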
When dealing with NEON hyperspectral data, we first want to remove the water vapor and noisy bands, keeping only the valid bands. To speed up the classification algorithms for demonstration purposes, we'll look at a subset of the data using read_subimage, a built-in method to subset by area and bands. Type help(img.read_subimage) to see how it works.
valid_band_range = [i for j in (range(0,191), range(212, 281), range(315,415)) for i in j] #remove water vapor bands
img_subset = img.read_subimage(range(400,600),range(400,600),bands=valid_band_range) #subset image by area and bands
Plot the subsetted image for reference:
view = imshow(img_subset,bands=(58,34,19),stretch=0.01,title="RGB Image of 2017 SERC Tile Subset")
Now that we have the image subsetted, lets run the k-means algorithm. Type help(kmeans) to show how the function works. To run the k-means algorithm on the image and create 5 clusters, using a maximum of 50 iterations, use the following syntax:
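A sketch of the call (kmeans here is the function imported from the spectral package):
# run k-means with 5 clusters and at most 50 iterations;
# m is the resulting class map and c holds the cluster centers
(m, c) = kmeans(img_subset, 5, 50)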
Note that the algorithm terminated after 14 iterations, when the pixels stopped being reassigned.
Data Tip: You can interrupt the algorithm with a keyboard interrupt (CTRL-C) if you notice that the number of reassigned pixels drops off. Kmeans catches the KeyboardInterrupt exception and returns the clusters generated at the end of the previous iteration. If you are running the algorithm interactively, this feature allows you to set the max number of iterations to an arbitrarily high number and then stop the algorithm when the clusters have converged to an acceptable level. If you happen to set the max number of iterations too small (many pixels are still migrating at the end of the final iteration), you can simply call kmeans again to resume processing by passing the cluster centers generated by the previous call as the optional start_clusters argument to the function.
Let's take a look at the cluster centers c. In this case, these represent the mean reflectance spectra of the five clusters that the data were grouped into.
print(c.shape)
(5, 360)
c contains 5 groups of spectral curves with 360 bands (the number of bands we've kept after removing the water vapor windows and the last 10 noisy bands). Let's plot these spectral classes:
%matplotlib inline
import pylab
pylab.figure()
for i in range(c.shape[0]):
    pylab.plot(c[i])
pylab.title('Spectral Classes from K-Means Clustering')
pylab.xlabel('Bands (with Water Vapor Windows Removed)')
pylab.ylabel('Reflectance')
pylab.show()
What do you think the spectral classes in the figure you just created represent?
Try using a different number of clusters in the kmeans algorithm (e.g., 3 or 10) to see what spectral classes and classifications result.
Principal Component Analysis (PCA)
As described above, the principal components transformation maps the original image bands to a set of new, uncorrelated features, with a large percentage of the image variance captured in a relatively small number of components.
pc = principal_components(img_subset)
pc_view = imshow(pc.cov)
xdata = pc.transform(img_subset)
In the covariance matrix display, lighter values indicate strong positive covariance, darker values indicate strong negative covariance, and grey values indicate covariance near zero.
# reduce dimensionality: transform using only the first 10 principal components
pcdata = pc.reduce(num=10).transform(img_subset)
# alternatively, keep enough components to retain 99.9% of the image variance
pc_0999 = pc.reduce(fraction=0.999)
# How many eigenvalues are left?
print(len(pc_0999.eigenvalues))
img_pc = pc_0999.transform(img_subset)
print(img_pc.shape)
# display the first 5 principal-component bands as an image
v = imshow(img_pc[:,:,:5], stretch_all=True)
The Normalized Difference Vegetation Index (NDVI) is a standard band-ratio calculation frequently used to analyze ecological remote sensing data. NDVI indicates whether the remotely-sensed target contains live green vegetation. When sunlight strikes objects, certain wavelengths of the electromagnetic spectrum are absorbed and other wavelengths are reflected. The pigment chlorophyll in plant leaves strongly absorbs visible light (with wavelengths in the range of 400-700 nm) for use in photosynthesis. The cell structure of the leaves, however, strongly reflects near-infrared light (wavelengths ranging from 700 - 1100 nm). Plants reflect up to 60% more light in the near infrared portion of the spectrum than they do in the green portion of the spectrum. By calculating the ratio of Near Infrared (NIR) to Visible (VIS) bands in hyperspectral data, we can obtain a metric of vegetation density and health.
The formula for NDVI is: $$NDVI = \frac{(NIR - VIS)}{(NIR+ VIS)}$$
NDVI is calculated from the visible and near-infrared light reflected by vegetation. Healthy vegetation (left) absorbs most of the
visible light that hits it, and reflects a large portion of near-infrared light. Unhealthy or sparse vegetation (right) reflects more
visible light and less near-infrared light. Source: Figure 1 in Wu et al. 2014. PLOS.
Start by setting plot preferences and loading the neon_aop_hyperspectral.py module:
import os, sys
from copy import copy
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
This next function is a handy way to download the Python module and data that we will be using for this lesson. This uses the requests package.
# function to download data stored on the internet in a public url to a local file
def download_url(url, download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # write the content to a local file, closing the handle when done
    with open(os.path.join(download_dir, filename), 'wb') as file_object:
        file_object.write(r.content)
Download the module from its location on GitHub, add the python_modules to the path and import the neon_aop_hyperspectral.py module as neon_hs.
# download the neon_aop_hyperspectral.py module from GitHub
module_url = "https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/AOP/aop_python_modules/neon_aop_hyperspectral.py"
download_url(module_url,'../python_modules')
# add the python_modules to the path and import the python neon download and hyperspectral functions
sys.path.insert(0, '../python_modules')
# import the neon_aop_hyperspectral module; the semicolon suppresses an empty plot from displaying
import neon_aop_hyperspectral as neon_hs;
# define the data_url to point to the cloud storage location of the hyperspectral hdf5 data file
data_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D02/2021_SERC_5/L3/Spectrometer/Reflectance/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5"
# download the h5 data and display how much time it took to download (uncomment 1st and 3rd lines)
# start_time = time.time()
download_url(data_url,'.\data')
# print("--- It took %s seconds to download the data ---" % round((time.time() - start_time),1))
Read in SERC Reflectance Tile
# read the h5 reflectance file (including the full path) to the variable h5_file_name
h5_file_name = data_url.split('/')[-1]
h5_tile = os.path.join(".\data",h5_file_name)
print(f'h5_tile: {h5_tile}')
# Note you will need to update this filepath for your local machine
serc_refl, serc_refl_md, wavelengths = neon_hs.aop_h5refl2array(h5_tile,'Reflectance')
Reading in .\data\NEON_D02_SERC_DP3_368000_4306000_reflectance.h5
Extract NIR and VIS bands
Now that we have loaded all the required functions and data, we can calculate NDVI and plot it.
Below we print the center wavelengths that bands 58 and 90 correspond to:
print('band 58 center wavelength (nm): ', wavelengths[57])
print('band 90 center wavelength (nm) : ', wavelengths[89])
band 58 center wavelength (nm): 669.3261
band 90 center wavelength (nm) : 829.5743
Calculate & Plot NDVI
Here we see that band 58 represents red visible light, while band 90 is in the NIR portion of the spectrum. Let's extract these two bands from the reflectance array and calculate the ratio using numpy.true_divide, which divides arrays element-wise. Combined with the errstate settings below, this handles the case where the denominator is 0, which would otherwise throw a warning or error.
vis = serc_refl[:,:,57]
nir = serc_refl[:,:,89]
# handle a divide by zero by setting the numpy errstate as follows
with np.errstate(divide='ignore', invalid='ignore'):
ndvi = np.true_divide((nir-vis),(nir+vis))
ndvi[ndvi == np.inf] = 0
ndvi = np.nan_to_num(ndvi)
Let's take a look at the min, mean, and max values of NDVI that we calculated:
We can use the function plot_aop_refl to plot this, and choose the seismic color palette to highlight the difference between positive and negative NDVI values. Since this is a normalized index, the values should range from -1 to +1.
neon_hs.plot_aop_refl(ndvi,serc_refl_md['extent'],
colorlimit = (np.min(ndvi),np.max(ndvi)),
title='SERC Subset NDVI \n (VIS = Band 58, NIR = Band 90)',
cmap_title='NDVI',
colormap='seismic')
Extract Spectra Using Masks
In the second part of this tutorial, we will learn how to extract the average spectra of pixels whose NDVI exceeds a specified threshold value. There are several ways to do this using numpy, including the masked-array module numpy.ma, numpy.where, and boolean indexing.
To start, let's copy the NDVI calculated above and use booleans to create an array containing only NDVI > 0.6.
# make a copy of ndvi
ndvi_gtpt6 = ndvi.copy()
#set all pixels with NDVI < 0.6 to nan, keeping only values > 0.6
ndvi_gtpt6[ndvi<0.6] = np.nan
print('Mean NDVI > 0.6:',round(np.nanmean(ndvi_gtpt6),2))
Mean NDVI > 0.6: 0.87
Now let's plot the values of NDVI after masking out values < 0.6.
neon_hs.plot_aop_refl(ndvi_gtpt6,
serc_refl_md['extent'],
colorlimit=(0.6,1),
title='SERC Subset NDVI > 0.6 \n (VIS = Band 58, NIR = Band 90)',
cmap_title='NDVI',
colormap='RdYlGn')
Calculate the mean spectra, thresholded by NDVI
Below we demonstrate how to calculate statistics on arrays where you have applied a mask with numpy.ma. In this example, the function calculates the mean spectrum of the pixels that remain after masking out values by a specified NDVI threshold.
import numpy.ma as ma
def calculate_mean_masked_spectra(refl_array,ndvi,ndvi_threshold,ineq='>'):
mean_masked_refl = np.zeros(refl_array.shape[2])
for i in np.arange(refl_array.shape[2]):
refl_band = refl_array[:,:,i]
if ineq == '>':
ndvi_mask = ma.masked_where((ndvi<=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
elif ineq == '<':
ndvi_mask = ma.masked_where((ndvi>=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
else:
print('ERROR: Invalid inequality. Enter < or >')
masked_refl = ma.MaskedArray(refl_band,mask=ndvi_mask.mask)
mean_masked_refl[i] = ma.mean(masked_refl)
return mean_masked_refl
We can test out this function for various NDVI thresholds. We'll test two together, and you can try out different values on your own. Let's look at the average spectra for healthy vegetation (NDVI > 0.6), and for a lower threshold (NDVI < 0.3).
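A sketch of the two calls, using the function defined above (the result names match those used in the dataframe step below):
# mean spectra of pixels with NDVI > 0.6 (healthy vegetation) and NDVI < 0.3
serc_ndvi_gtpt6 = calculate_mean_masked_spectra(serc_refl, ndvi, 0.6)
serc_ndvi_ltpt3 = calculate_mean_masked_spectra(serc_refl, ndvi, 0.3, ineq='<')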
Finally, we can create a pandas dataframe to plot the mean spectra.
#Remove water vapor bad band windows & last 10 bands
w = wavelengths.copy()
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan
w[-10:]=np.nan;
nan_ind = np.argwhere(np.isnan(w))
serc_ndvi_gtpt6[nan_ind] = np.nan
serc_ndvi_ltpt3[nan_ind] = np.nan
#Create dataframe with masked NDVI mean spectra, scale by the reflectance scale factor
serc_ndvi_df = pd.DataFrame()
serc_ndvi_df['wavelength'] = w
serc_ndvi_df['mean_refl_ndvi_gtpt6'] = serc_ndvi_gtpt6/serc_refl_md['scale_factor']
serc_ndvi_df['mean_refl_ndvi_ltpt3'] = serc_ndvi_ltpt3/serc_refl_md['scale_factor']
Let's take a look at the first 5 values of this new dataframe:
serc_ndvi_df.head()
   wavelength  mean_refl_ndvi_gtpt6  mean_refl_ndvi_ltpt3
0  383.884003              0.055741              0.119835
1  388.891693              0.036432              0.090972
2  393.899506              0.027002              0.076867
3  398.907196              0.022841              0.072207
4  403.915009              0.018748              0.065984
Plot the masked NDVI dataframe to display the mean spectra for NDVI values that exceed 0.6 and that are less than 0.3:
ax = plt.gca();
serc_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_gtpt6',color='green',
edgecolor='none',kind='scatter',label='Mean Spectra where NDVI > 0.6',legend=True);
serc_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_ltpt3',color='red',
edgecolor='none',kind='scatter',label='Mean Spectra where NDVI < 0.3',legend=True);
ax.set_title('Mean Spectra of Reflectance Masked by NDVI')
ax.set_xlim([np.nanmin(w),np.nanmax(w)]);
ax.set_xlabel("Wavelength, nm"); ax.set_ylabel("Reflectance")
ax.grid('on');
This tutorial covers how to read in a NEON lidar Canopy Height Model (CHM) geotiff file into a Python rasterio object, shows some basic information about the raster data, and then ends with classifying the CHM into height bins.
Objectives
After completing this tutorial, you will be able to:
Use rasterio to read in a NEON lidar raster geotiff file
Plot a raster tile and histogram of the data values
Create a classified raster object using thresholds
Install Python Packages
gdal
rasterio
requests
Download Data
For this lesson, we will read in Canopy Height Model data collected at NEON's Lower Teakettle (TEAK) site in California. This data is downloaded in the first part of the tutorial, using the Python requests package.
In this tutorial, we will work with the NEON AOP L3 lidar ecosystem structure (Canopy Height Model) data product. For more information about NEON data products and the CHM product DP3.30015.001, see the Ecosystem structure data product page on NEON's Data Portal.
First, let's import the required packages and set our plot display to be in-line:
import os
import copy
import requests
import numpy as np
import rasterio as rio
from rasterio.plot import show, show_hist
import matplotlib.pyplot as plt
%matplotlib inline
Next, let's download a file. For this tutorial, we will use the requests package to download a raster file from the public link where the data is stored. For simplicity, we will show how to download to a data folder in the working directory. You can move the data to a different folder, but be sure to update the path to your data accordingly.
# function to download data stored on the internet in a public url to a local file
def download_url(url, download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # write the content to a local file, closing the handle when done
    with open(os.path.join(download_dir, filename), 'wb') as file_object:
        file_object.write(r.content)
# public url where the CHM tile is stored
chm_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D17/2021_TEAK_5/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D17_TEAK_DP3_320000_4092000_CHM.tif"
# download the CHM tile
download_url(chm_url,'.\data')
# display the contents in the ./data folder to confirm the download completed
os.listdir('./data')
Open a GeoTIFF with rasterio
Let's look at the TEAK Canopy Height Model (CHM) to start. We can open and read this in Python using the rasterio.open function:
# read the chm file (including the full path) to the variable chm_dataset
chm_name = chm_url.split('/')[-1]
chm_file = os.path.join(".\data",chm_name)
chm_dataset = rio.open(chm_file)
Now we can look at a few properties of this dataset to start to get a feel for the rasterio object:
print('chm_dataset:\n',chm_dataset)
print('\nshape:\n',chm_dataset.shape)
print('\nno data value:\n',chm_dataset.nodata)
print('\nspatial extent:\n',chm_dataset.bounds)
print('\ncoordinate information (crs):\n',chm_dataset.crs)
Plot the Canopy Height Map and Histogram
We can use rasterio's built-in functions show and show_hist to plot and visualize the CHM tile. It is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values.
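A sketch of the two plots, using the functions imported above:
# plot the CHM tile and a histogram of its values side by side
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
show(chm_dataset, ax=ax1, title='TEAK Canopy Height Model')
show_hist(chm_dataset, bins=50, ax=ax2, title='Distribution of CHM values')
plt.show()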
On your own, adjust the number of bins and the range of the y-axis to get a better sense of the distribution of the canopy height values. We can see that a large portion of the values are zero. These correspond to bare ground. Let's look at a histogram and plot the data without these zero values; to do this, we'll keep only the values > 1 m.
Due to the vertical range resolution of the lidar sensor, data collected with the Optech Gemini sensor can only resolve the ground to within 2 m, so anything below that height will be rounded down to zero. Our newer sensors (Riegl Q780 and Optech Galaxy) have a higher resolution, so the ground can be resolved to within ~0.7 m.
From the histogram we can see that the majority of the trees are < 60 m, and the taller trees are less common in this tile.
Threshold Based Raster Classification
Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based on boolean classifications. Let's classify the canopy height into five groups:
Class 1: CHM = 0 m
Class 2: 0 m < CHM <= 15 m
Class 3: 15 m < CHM <= 30 m
Class 4: 30 m < CHM <= 45 m
Class 5: CHM > 45 m
We can use np.where to find the indices where each specified criterion is met, as in the sketch below.
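A sketch of the classification, using the class boundaries above:
# read the CHM into a numpy array and reclassify into the five height classes
chm_data = chm_dataset.read(1)
chm_reclass = chm_data.copy()
chm_reclass[np.where(chm_data == 0)] = 1                       # CHM = 0 -> class 1
chm_reclass[np.where((chm_data > 0) & (chm_data <= 15))] = 2   # 0-15 m -> class 2
chm_reclass[np.where((chm_data > 15) & (chm_data <= 30))] = 3  # 15-30 m -> class 3
chm_reclass[np.where((chm_data > 30) & (chm_data <= 45))] = 4  # 30-45 m -> class 4
chm_reclass[np.where(chm_data > 45)] = 5                       # > 45 m -> class 5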
When we look at this variable, we can see that it is now populated with values between 1-5:
chm_reclass
Lastly we can use matplotlib to display this re-classified CHM. We will define our own colormap to plot these discrete classifications, and create a custom legend to label the classes. First, to include the spatial information in the plot, create a new variable called ext that pulls from the rasterio "bounds" field to create the extent in the expected format for plotting.
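A sketch of the extent variable, built from the dataset's bounds:
# matplotlib expects the extent as (left, right, bottom, top)
ext = [chm_dataset.bounds.left, chm_dataset.bounds.right,
       chm_dataset.bounds.bottom, chm_dataset.bounds.top]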
In this introductory tutorial, we demonstrate how to read NEON AOP hyperspectral reflectance (Level 3, tiled) data in Python. This starts with the fundamental steps of reading in and exploring the HDF5 (h5) format that the reflectance data is delivered in. Then you will develop and practice skills to explore and visualize the spectral data.
Learning Objectives
After completing this tutorial, you will be able to:
Use the package h5py and the visititems method to read a reflectance HDF5 file and view data attributes.
Read the data ignore value and scaling factor and apply these values to produce a "cleaned" reflectance array.
Plot a histogram of reflectance values to visualize the range and distribution of values.
Extract and plot a single band of reflectance data.
Apply a histogram stretch and adaptive equalization to improve the contrast of an image.
Install Python Packages
If you don't already have these packages installed, you will need to do so, using pip or conda install:
requests
gdal
h5py
Standard packages used in this tutorial include:
numpy
pandas
matplotlib
Download Data
To complete this tutorial, you will download and read in surface directional reflectance data collected at the NEON Smithsonian Environmental Research Center (SERC) site in Maryland. This data can be downloaded by clicking the link below.
In addition,
NEON's Airborne Observation Platform provides Algorithm Theoretical Basis Documents (ATBDs) for all of their data products. Please refer to the ATBDs below for a more in-depth understanding of the reflectance data.
Hyperspectral remote sensing data is a useful tool for measuring changes to our environment at the Earth’s surface. In this tutorial we explore how to extract
information from a tile (1000m x 1000m x 426 bands) of NEON AOP orthorectified surface reflectance data, stored in hdf5 format. For more information on this data product, refer to the NEON Data Product Catalog.
Mapping the Invisible: Introduction to Spectral Remote Sensing
For more information on spectral remote sensing, watch this video.
Set up
First let's import the required packages:
import os
import requests
import numpy as np
import h5py
from osgeo import gdal
import matplotlib.pyplot as plt
Read in the datasets
To start, we download the NEON surface directional reflectance data (DP3.30006.001), which is provided in hdf5 (.h5) format. You can download the file by clicking on the download link at the top of this lesson. Place the file inside a "data" folder in your working directory, and double check the file is located in the correct location, as follows:
# display the contents in the ./data folder to confirm the download completed
os.listdir('./data')
f = h5py.File('file.h5','r') reads in an h5 file to the variable f.
Using the help
We will be using a number of built-in and user-defined functions and methods throughout the tutorial. If you are uncertain what a function does, or how to call it, you can type help() or type a
? at the end of the function or method and run the cell (either select Cell > Run Cells or press Shift+Enter with your cursor in the cell you want to run). The ? will pop up a window at the bottom of the notebook displaying the function's docstrings, which include information about the function and its usage. We encourage you to use help and ? throughout the tutorial as you come across functions you are unfamiliar with. Try out these commands:
help(h5py)
or
h5py.File?
Now that we have an idea of how to use h5py to read in an h5 file, let's use this to explore the hyperspectral reflectance data. Note that if the h5 file is stored in a different directory than where you are running your notebook, you need to include the path (either relative or absolute) to the directory where that data file is stored. Use os.path.join to create the full path of the file.
# Note that you may need to update this filepath for your local machine
f = h5py.File('./data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5','r')
Explore NEON AOP HDF5 Reflectance Files
We can look inside the HDF5 dataset with the h5py visititems function. The list_dataset function defined below displays all datasets stored in the hdf5 file and their locations within the hdf5 file:
#list_dataset lists the names of datasets in an hdf5 file
def list_dataset(name,node):
if isinstance(node, h5py.Dataset):
print(name)
f.visititems(list_dataset)
You can see that there is a lot of information stored inside this reflectance hdf5 file. Most of this information is metadata (data about the reflectance data), for example, this file stores input parameters used in the atmospheric correction. For this introductory lesson, we will only work with two of these datasets, the reflectance data (hyperspectral cube), and the corresponding geospatial information, stored in Metadata/Coordinate_System:
SERC/Reflectance/Reflectance_Data
SERC/Reflectance/Metadata/Coordinate_System/
We can also display the name, shape, and type of each of these datasets using the ls_dataset function defined below, which is also called with the visititems method:
#ls_dataset displays the name, shape, and type of datasets in hdf5 file
def ls_dataset(name,node):
if isinstance(node, h5py.Dataset):
print(node)
Data Tip: To see what the visititems method (or any method) does, type ? at the end, e.g.
f.visititems?
f.visititems(ls_dataset)
<HDF5 dataset "Aerosol_Optical_Depth": shape (1000, 1000), type "<i2">
<HDF5 dataset "Aspect": shape (1000, 1000), type "<f4">
<HDF5 dataset "Cast_Shadow": shape (1000, 1000), type "|u1">
<HDF5 dataset "Dark_Dense_Vegetation_Classification": shape (1000, 1000), type "|u1">
<HDF5 dataset "Data_Selection_Index": shape (1000, 1000), type "<i4">
<HDF5 dataset "Haze_Cloud_Water_Map": shape (1000, 1000), type "|u1">
<HDF5 dataset "Illumination_Factor": shape (1000, 1000), type "|u1">
<HDF5 dataset "Path_Length": shape (1000, 1000), type "<f4">
<HDF5 dataset "Sky_View_Factor": shape (1000, 1000), type "|u1">
<HDF5 dataset "Slope": shape (1000, 1000), type "<f4">
<HDF5 dataset "Smooth_Surface_Elevation": shape (1000, 1000), type "<f4">
<HDF5 dataset "Visibility_Index_Map": shape (1000, 1000), type "|u1">
<HDF5 dataset "Water_Vapor_Column": shape (1000, 1000), type "<f4">
<HDF5 dataset "Weather_Quality_Indicator": shape (1000, 1000, 3), type "|u1">
<HDF5 dataset "Coordinate_System_String": shape (), type "|O">
<HDF5 dataset "EPSG Code": shape (), type "|O">
<HDF5 dataset "Map_Info": shape (), type "|O">
<HDF5 dataset "Proj4": shape (), type "|O">
<HDF5 dataset "ATCOR_Input_file": shape (), type "|O">
<HDF5 dataset "ATCOR_Processing_Log": shape (), type "|O">
<HDF5 dataset "Shadow_Processing_Log": shape (), type "|O">
<HDF5 dataset "Skyview_Processing_Log": shape (), type "|O">
<HDF5 dataset "Solar_Azimuth_Angle": shape (), type "<f4">
<HDF5 dataset "Solar_Zenith_Angle": shape (), type "<f4">
<HDF5 dataset "ATCOR_Input_file": shape (), type "|O">
<HDF5 dataset "ATCOR_Processing_Log": shape (), type "|O">
<HDF5 dataset "Shadow_Processing_Log": shape (), type "|O">
<HDF5 dataset "Skyview_Processing_Log": shape (), type "|O">
<HDF5 dataset "Solar_Azimuth_Angle": shape (), type "<f4">
<HDF5 dataset "Solar_Zenith_Angle": shape (), type "<f4">
<HDF5 dataset "ATCOR_Input_file": shape (), type "|O">
<HDF5 dataset "ATCOR_Processing_Log": shape (), type "|O">
<HDF5 dataset "Shadow_Processing_Log": shape (), type "|O">
<HDF5 dataset "Skyview_Processing_Log": shape (), type "|O">
<HDF5 dataset "Solar_Azimuth_Angle": shape (), type "<f4">
<HDF5 dataset "Solar_Zenith_Angle": shape (), type "<f4">
<HDF5 dataset "FWHM": shape (426,), type "<f4">
<HDF5 dataset "Wavelength": shape (426,), type "<f4">
<HDF5 dataset "to-sensor_azimuth_angle": shape (1000, 1000), type "<f4">
<HDF5 dataset "to-sensor_zenith_angle": shape (1000, 1000), type "<f4">
<HDF5 dataset "Reflectance_Data": shape (1000, 1000, 426), type "<i2">
Now that we can see the structure of the hdf5 file, let's take a look at some of the information that is stored inside. Let's start by extracting the reflectance data, which is nested under SERC/Reflectance/Reflectance_Data:
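A minimal sketch of that extraction, assuming the file handle f opened above (the variable names serc_refl and serc_reflArray are used throughout the rest of this lesson):
# assign the SERC/Reflectance group to a variable, then pull out the reflectance dataset
serc_refl = f['SERC']['Reflectance']
serc_reflArray = serc_refl['Reflectance_Data']
print(serc_reflArray)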
<HDF5 dataset "Reflectance_Data": shape (1000, 1000, 426), type "<i2">
We can check the size of this reflectance array using the shape attribute:
refl_shape = serc_reflArray.shape
print('SERC Reflectance Data Dimensions:',refl_shape)
SERC Reflectance Data Dimensions: (1000, 1000, 426)
This 3-D shape (1000,1000,426) corresponds to (y,x,bands), where (x,y) are the dimensions of the reflectance array in pixels. Hyperspectral data sets are often called "cubes" to reflect this 3-dimensional shape.
A "cube" showing a hyperspectral data set. Source: National Ecological Observatory Network
(NEON)
NEON hyperspectral data contain around 426 spectral bands, and when working with tiled data, the spatial dimensions are 1000 x 1000, where each pixel represents 1 meter. Now let's take a look at the wavelength values. First, we will extract wavelength information from the serc_refl variable that we created:
#define the wavelengths variable
wavelengths = serc_refl['Metadata']['Spectral_Data']['Wavelength']
#View wavelength information and values
print('wavelengths:',wavelengths)
wavelengths: <HDF5 dataset "Wavelength": shape (426,), type "<f4">
We can then use numpy (imported as np) to see the minimum and maximum wavelength values:
# Display min & max wavelengths
print('min wavelength:', np.amin(wavelengths),'nm')
print('max wavelength:', np.amax(wavelengths),'nm')
min wavelength: 383.884 nm
max wavelength: 2512.1804 nm
Finally, we can determine the band widths (the distance between the center wavelengths of two adjacent bands). Let's try this for the first two bands and the last two bands. Remember that Python uses 0-based indexing ([0] represents the first value in an array), and note that you can also use negative numbers to index from the end of an array ([-1] represents the last value in an array).
#show the band widths between the first 2 bands and last 2 bands
print('band width between first 2 bands =',(wavelengths[1]-wavelengths[0]),'nm')
print('band width between last 2 bands =',(wavelengths[-1]-wavelengths[-2]),'nm')
band width between first 2 bands = 5.0076904 nm
band width between last 2 bands = 5.0078125 nm
The center wavelengths recorded in this hyperspectral cube range from 383.88 - 2512.18 nm, and each band covers a range of ~5 nm. Now let's extract spatial information, which is stored under SERC/Reflectance/Metadata/Coordinate_System/Map_Info:
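A minimal sketch of reading that dataset (the variable serc_mapInfo is used below; printing it shows the raw byte string):
# pull the Map_Info metadata out of the coordinate system group
serc_mapInfo = serc_refl['Metadata']['Coordinate_System']['Map_Info']
print('SERC Map Info:', serc_mapInfo[()])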
Here we can see spatial information about the reflectance data. Below is a breakdown of what each of these values means:
UTM - coordinate system (Universal Transverse Mercator)
1.000, 1.000 - pixel coordinates (x, y) of the tie point; (1, 1) refers to the upper-left pixel of the image
368000.000, 4307000.0 - UTM coordinates (meters) of the map origin, which refers to the upper-left corner of the image (xMin, yMax).
1.0000000, 1.0000000 - pixel resolution (meters)
18 - UTM zone
N - UTM hemisphere (North for all NEON sites)
WGS-84 - reference ellipsoid
Note that the letter b that appears before UTM signifies that the variable-length string data is stored in binary format when it is written to the hdf5 file. Don't worry about it for now, as we will convert the numerical data we need into floating point numbers. For more information on hdf5 strings read the h5py documentation.
You can display this as a string as follows:
serc_mapInfo[()].decode("utf-8")
Let's extract relevant information from the Map_Info metadata to define the spatial extent of this dataset. To do this, we can use the split method to break up this string into separate values:
#First convert mapInfo to a string
mapInfo_string = serc_mapInfo[()].decode("utf-8") # read in as string
#split the strings using the separator ","
mapInfo_split = mapInfo_string.split(",")
print(mapInfo_split)
Now we can extract the spatial information we need from the map info values, convert them to the appropriate data type (float) and store it in a way that will enable us to access and apply it later when we want to plot the data:
#Extract the resolution & convert to floating decimal number
res = float(mapInfo_split[5]),float(mapInfo_split[6])
print('Resolution:',res)
Resolution: (1.0, 1.0)
#Extract the upper left-hand corner coordinates from mapInfo
xMin = float(mapInfo_split[3])
yMax = float(mapInfo_split[4])
#Calculate the xMax and yMin values from the dimensions
xMax = xMin + (refl_shape[1]*res[0]) #xMax = left edge + (# of columns * x pixel resolution)
yMin = yMax - (refl_shape[0]*res[1]) #yMin = top edge - (# of rows * y pixel resolution)
Now we can define the spatial extent as the tuple (xMin, xMax, yMin, yMax). This is the format required for applying the spatial extent when plotting with matplotlib.pyplot.
#Define extent as a tuple:
serc_ext = (xMin, xMax, yMin, yMax)
print('serc_ext:',serc_ext)
print('serc_ext type:',type(serc_ext))
While it is useful to have all the data contained in a hyperspectral cube, it is difficult to visualize all this information at once. We can extract a single band (representing a ~5 nm slice of the spectrum, approximating a single wavelength) from the cube by slicing as follows. Note that we have to cast the reflectance data into the type float. Recall that since Python indexing starts at 0 instead of 1, in order to extract band 56, we need to use the index 55.
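A minimal sketch of that extraction (the variable b56 is used in the cleanup steps below):
# slice out band 56 (index 55) and cast the scaled integers to float
b56 = serc_reflArray[:,:,55].astype(float)
print('b56 shape:', b56.shape)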
Here we can see that we extracted a 2-D array (1000 x 1000) of the scaled reflectance data corresponding to the wavelength band 56. Before we can use the data, we need to clean it up a little. We'll show how to do this below.
Scale factor and No Data Value
This array represents the scaled reflectance for band 56. Recall from exploring the HDF5 structure above that NEON AOP reflectance data uses a Data_Ignore_Value of -9999 to represent missing data (often called NaN), and a reflectance Scale_Factor of 10000.0; storing reflectance as scaled integers rather than floating point values saves disk space.
Screenshot of the NEON HDF5 file format. Source: National Ecological Observatory Network (NEON)
We can extract and apply the Data_Ignore_Value and Scale_Factor as follows:
#View and apply scale factor and data ignore value
scaleFactor = serc_reflArray.attrs['Scale_Factor']
noDataValue = serc_reflArray.attrs['Data_Ignore_Value']
print('Scale Factor:',scaleFactor)
print('Data Ignore Value:',noDataValue)
b56[b56==int(noDataValue)]=np.nan
b56 = b56/scaleFactor
print('Cleaned Band 56 Reflectance:\n',b56)
Now we can plot this band using the Python package matplotlib.pyplot, which we imported at the beginning of the lesson as plt. Note that the default colormap (viridis in current matplotlib versions) is used unless otherwise specified. You can explore using different colormaps on your own; see the matplotlib colormaps documentation for other options.
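A minimal sketch of the plot call (the grayscale version with adjusted color limits appears below):
# plot the single band using the spatial extent we defined earlier
serc_plot = plt.imshow(b56, extent=serc_ext)
plt.title('SERC Band 56 Reflectance');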
We can see that this image looks pretty washed out. To see why this is, it helps to look at the range and distribution of reflectance values that we are plotting. We can do this by making a histogram.
Plot histogram
We can plot a histogram using the matplotlib.pyplot.hist function. Note that this function won't work if there are any NaN values, so we can ensure we are only plotting the real data values using the call below. You can also specify the # of bins you want to divide the data into.
plt.hist(b56[~np.isnan(b56)],50); #50 signifies the # of bins
We can see that most of the reflectance values are < 0.4. In order to show more contrast in the image, we can adjust the colorlimit (clim) to 0-0.4:
serc_plot = plt.imshow(b56,extent=serc_ext,cmap='Greys',clim=(0,0.4))
plt.title('SERC Band 56 Reflectance');
Here you can see that adjusting the colorlimit displays features (e.g., roads, buildings) much better than when we set the colormap limits to the entire range of reflectance values.
We can also try out some basic image processing to better visualize the reflectance data using the scikit-image package.
Histogram equalization is an image processing method that adjusts contrast using the image's histogram. Stretching the histogram can improve the contrast of a displayed image, as shown below.
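A minimal sketch using scikit-image's exposure module (assuming the scikit-image package is installed; equalize_hist expects finite values, so we replace NaNs first):
from skimage import exposure
# replace NaN pixels with 0 so equalize_hist receives finite input
b56_finite = np.nan_to_num(b56, nan=0.0)
# spread the reflectance values evenly across the display range
b56_eq = exposure.equalize_hist(b56_finite)
plt.imshow(b56_eq, extent=serc_ext, cmap='Greys')
plt.title('SERC Band 56 Reflectance, Histogram Equalized');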
In this tutorial, we will learn how to extract and plot a spectral profile from a single pixel of a reflectance band in a NEON hyperspectral HDF5 file.
After completing this tutorial, you will be able to:
Plot the spectral signature of a single pixel
Remove bad band windows from a spectra
Use an IPython widget to interactively view spectra of various pixels
Install Python Packages
gdal
h5py
requests
IPython
Data
Data and additional scripts required for this lesson are downloaded programmatically as part of the tutorial.
The hyperspectral imagery file used in this lesson was collected over NEON's Smithsonian Environmental Research Center field site in 2021 and processed at NEON headquarters.
In this exercise, we will learn how to extract and plot a spectral profile from a single pixel of a reflectance band in a NEON hyperspectral hdf5 file. To do this, we will use the aop_h5refl2array function to read in and clean our h5 reflectance data, and the Python package pandas to create a dataframe for the reflectance and associated wavelength data.
Spectral Signatures
A spectral signature is a plot of the amount of light energy reflected by an object throughout the range of wavelengths in the electromagnetic spectrum. The spectral signature of an object conveys useful information about its structural and chemical composition. We can use these signatures to identify and classify different objects from a spectral image.
For example, vegetation has a distinct spectral signature.
Spectral signature of vegetation. Source: Roman, Anamaria & Ursu, Tudor. (2016). Multispectral satellite imagery and airborne laser scanning techniques for the detection of archaeological vegetation marks.
Vegetation has a unique spectral signature characterized by high reflectance in the near infrared wavelengths, and much lower reflectance in the green portion of the visible spectrum. For more details, please see Vegetation Analysis: Using Vegetation Indices in ENVI. We can extract reflectance values in the NIR and visible spectrums from hyperspectral data in order to map vegetation on the earth's surface. You can also use spectral curves as a proxy for vegetation health. We will explore this concept more in the next lesson, where we will calculate vegetation indices.
Example spectra of water, green grass, dry grass, and soil. Source: National Ecological Observatory Network (NEON)
import os, sys
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
This next function is a handy way to download the Python module and data that we will be using for this lesson. This uses the requests package.
# function to download data stored on the internet in a public url to a local file
def download_url(url, download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # write the downloaded content to disk, closing the file handle when done
    with open(os.path.join(download_dir, filename), 'wb') as file_object:
        file_object.write(r.content)
Download the module from its location on GitHub, add the python_modules to the path and import the neon_aop_hyperspectral.py module.
module_url = "https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/AOP/aop_python_modules/neon_aop_hyperspectral.py"
download_url(module_url,'../python_modules')
# os.listdir('../python_modules') #optionally show the contents of this directory to confirm the file downloaded
sys.path.insert(0, '../python_modules')
# import the neon_aop_hyperspectral module; the semicolon suppresses an empty plot from displaying
import neon_aop_hyperspectral as neon_hs;
# define the data_url to point to the cloud storage location of the hyperspectral hdf5 data file
data_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D02/2021_SERC_5/L3/Spectrometer/Reflectance/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5"
# download the h5 data to a ./data subfolder in your working directory
download_url(data_url,'./data')
# extract the file name from the url and build the full path to the downloaded file
h5_file_name = data_url.split('/')[-1]
h5_tile = os.path.join('./data',h5_file_name)
# read in the data using the neon_hs module
serc_refl, serc_refl_md, wavelengths = neon_hs.aop_h5refl2array(h5_tile,'Reflectance')
Reading in ./data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5
Optionally, you can view the data stored in the metadata dictionary, and print the minimum, maximum, and mean reflectance values in the tile. In order to ignore NaN values, use numpy.nanmin/nanmax/nanmean.
for item in sorted(serc_refl_md):
    print(item + ':',serc_refl_md[item])
print('\nSERC Tile Reflectance Stats:')
print('min:',np.nanmin(serc_refl))
print('max:',round(np.nanmax(serc_refl),2))
print('mean:',round(np.nanmean(serc_refl),2))
We can use pandas to create a dataframe containing the wavelength and reflectance values for a single pixel - in this example, we'll look at the center pixel of the tile (500,500). To extract all reflectance values from a single pixel, use slicing as we did before to select a single band, but now we need to specify (y,x) and select all bands (using :).
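A minimal sketch of building that dataframe (the name serc_pixel_df is used in the plots below):
# pair each band's wavelength with the reflectance values of pixel (500,500)
serc_pixel_df = pd.DataFrame()
serc_pixel_df['reflectance'] = serc_refl[500,500,:]
serc_pixel_df['wavelengths'] = wavelengths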
We can now plot the spectra stored in this dataframe structure. pandas has a built-in plotting routine, which can be called by appending .plot to the dataframe.
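A minimal example of the plot call (the same call is repeated with annotations further below):
serc_pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
plt.title('Spectral Signature for SERC Pixel (500,500)');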
We can see from the spectral profile above that there are spikes in reflectance around ~1400 nm and ~1800 nm. These result from water vapor, which absorbs light between wavelengths 1340-1445 nm and 1790-1955 nm; the atmospheric correction that converts radiance to reflectance produces spikes at these two band windows. The wavelengths of these water vapor bands are stored in the reflectance attributes, which are saved in the reflectance metadata dictionary created with aop_h5refl2array:
bbw1 = serc_refl_md['bad_band_window1'];
bbw2 = serc_refl_md['bad_band_window2'];
print('Bad Band Window 1:',bbw1)
print('Bad Band Window 2:',bbw2)
Bad Band Window 1: [1340 1445]
Bad Band Window 2: [1790 1955]
Below we repeat the plot we made above, but this time draw in the edges of the water vapor band windows that we need to remove.
serc_pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
plt.title('Spectral Signature for SERC Pixel (500,500)')
ax1 = plt.gca(); ax1.grid('on')
ax1.set_xlim([np.min(serc_pixel_df['wavelengths']),np.max(serc_pixel_df['wavelengths'])]);
ax1.set_ylim(0,0.5)
ax1.set_xlabel("Wavelength, nm"); ax1.set_ylabel("Reflectance")
#Add in red dotted lines to show boundaries of bad band windows:
ax1.plot((1340,1340),(0,1.5), 'r--');
ax1.plot((1445,1445),(0,1.5), 'r--');
ax1.plot((1790,1790),(0,1.5), 'r--');
ax1.plot((1955,1955),(0,1.5), 'r--');
We can now set these bad band windows to nan, along with the last 10 bands, which are also often noisy (as seen in the spectral profile plotted above). First make a copy of the wavelengths so that the original metadata doesn't change.
w = wavelengths.copy() #make a copy to deal with the mutable data type
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan #can also use bbw1[0] or bbw1[1] to avoid hard-coding in
w[-10:]=np.nan; # the last 10 bands sometimes have noise - best to eliminate
#print(w) #optionally print wavelength values to show that -9999 values are replaced with nan
Interactive Spectra Visualization
Finally, we can create a widget to interactively view the spectra of different pixels along the reflectance tile. Run the cell below, and select different pixel_x and pixel_y values to gain a sense of what the spectra look like for different materials on the ground.
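Note that the refl_band variable below uses sercb56, a single band extracted for the pixel-location map. A minimal sketch of that extraction, assuming band 56 of the cleaned serc_refl array as earlier in this lesson:
# band 56 (index 55) of the cleaned reflectance array, used for the pixel-location map
sercb56 = serc_refl[:,:,55]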
#define refl_band, refl, and metadata, as copies of the original serc_refl data
refl_band = sercb56
refl = serc_refl.copy()
metadata = serc_refl_md.copy()
# interact now lives in the ipywidgets package (IPython.html.widgets is deprecated)
from ipywidgets import interact
def interactive_spectra_plot(pixel_x,pixel_y):
    reflectance = refl[pixel_y,pixel_x,:]
    pixel_df = pd.DataFrame()
    pixel_df['reflectance'] = reflectance
    pixel_df['wavelengths'] = w
    fig = plt.figure(figsize=(15,5))
    ax1 = fig.add_subplot(1,2,1)
    pixel_df.plot(ax=ax1,x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
    ax1.set_title('Spectra of Pixel (' + str(pixel_x) + ',' + str(pixel_y) + ')')
    ax1.set_xlim([np.min(wavelengths),np.max(wavelengths)]);
    ax1.set_ylim([np.min(pixel_df['reflectance']),np.max(pixel_df['reflectance']*1.1)])
    ax1.set_xlabel("Wavelength, nm"); ax1.set_ylabel("Reflectance")
    ax1.grid('on')
    ax2 = fig.add_subplot(1,2,2)
    plot = plt.imshow(refl_band,extent=metadata['extent'],clim=(0,0.1));
    plt.title('Pixel Location');
    cbar = plt.colorbar(plot,aspect=20); plt.set_cmap('gist_earth');
    cbar.set_label('Reflectance',rotation=90,labelpad=20);
    ax2.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation
    rotatexlabels = plt.setp(ax2.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees
    ax2.plot(metadata['extent'][0]+pixel_x,metadata['extent'][3]-pixel_y,'s',markersize=5,color='red')
    ax2.set_xlim(metadata['extent'][0],metadata['extent'][1])
    ax2.set_ylim(metadata['extent'][2],metadata['extent'][3])

interact(interactive_spectra_plot, pixel_x = (0,refl.shape[1]-1,1),pixel_y=(0,refl.shape[0]-1,1));
This tutorial introduces NEON RGB camera images (Data Product DP3.30010.001) and uses the Python package rasterio to read in and plot the camera data in Python. In this lesson, we will read in an RGB camera tile collected over the NEON Smithsonian Environmental Research Center (SERC) site and plot the multiband image, as well as the individual bands. This lesson was adapted from the rasterio plotting documentation.
Objectives
After completing this tutorial, you will be able to:
Plot a NEON RGB camera geotiff tile in Python using rasterio
Package Requirements
This tutorial was run in Python version 3.9, using the following packages:
rasterio
matplotlib
Download the Data
Download the NEON
camera (RGB) imagery tile
collected over the Smithsonian Environmental Research Center (SERC) NEON field site in 2021. Move this data to a desired folder on your local workstation. You will need to know the file path to this data.
You don't have to download from the link above; the tutorial will demonstrate how to download the data directly from Python into your working directory, but we recommend re-organizing in a way that makes sense for you.
Background
As part of the
NEON Airborne Operation Platform's
suite of remote sensing instruments, the digital camera produces high-resolution (<= 10 cm) photographs of the earth's surface. The camera records light energy that has reflected off the ground in the visible portion (red, green and blue) of the electromagnetic spectrum. Often the camera images are used to provide context for the hyperspectral and LiDAR data, but they can also be used for research purposes in their own right. One such example is the tree-crown mapping work by Weinstein et al. - see the links below for more information!
Reference: Ben G Weinstein, Sergio Marconi, Stephanie A Bohlman, Alina Zare, Aditya Singh, Sarah J Graves, Ethan P White (2021) A remote sensing derived data set of 100 million individual tree crowns for the National Ecological Observatory Network eLife 10:e62922. https://doi.org/10.7554/eLife.62922
In this lesson we will keep it simple and show how to read in and plot a single camera file (1km x 1km ortho-mosaicked tile) - a first step in any research incorporating the AOP camera data (in Python).
Import required packages
First let's import the packages that we'll be using in this lesson.
import os
import requests
import rasterio as rio
from rasterio.plot import show, show_hist
import matplotlib.pyplot as plt
Next, let's download a camera file. For this tutorial, we will use the requests package to download a raster file from the public link where the data is stored. For simplicity, we will show how to download to a data folder in the working directory. You can move the data to a different folder, but if you do that, be sure to update the path to your data accordingly.
def download_url(url, download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # write the downloaded content to disk, closing the file handle when done
    with open(os.path.join(download_dir, filename), 'wb') as file_object:
        file_object.write(r.content)
# public url where the RGB camera tile is stored
rgb_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D02/2021_SERC_5/L3/Camera/Mosaic/2021_SERC_5_368000_4306000_image.tif"
# download the camera tile to a ./data subfolder in your working directory
download_url(rgb_url,'./data')
# display the contents in the ./data folder to confirm the download completed
os.listdir('./data')
Open the Camera RGB data with rasterio
We can open and read this RGB data that we downloaded in Python using the rasterio.open function:
# extract the file name from the url and build the full path to the downloaded file
rgb_name = rgb_url.split('/')[-1]
rgb_file = os.path.join('./data',rgb_name)
rgb_dataset = rio.open(rgb_file)
Let's look at a few properties of this dataset to get a sense of the information stored in the rasterio object:
print('rgb_dataset:\n',rgb_dataset)
print('\nshape:\n',rgb_dataset.shape)
print('\nspatial extent:\n',rgb_dataset.bounds)
print('\ncoordinate information (crs):\n',rgb_dataset.crs)
Unlike the other AOP data products, camera imagery is generated at 10 cm resolution, so each 1 km x 1 km tile contains 10,000 x 10,000 pixels (other 1 m resolution data products have 1000 x 1000 pixels per tile, where each pixel represents 1 meter).
Plot the RGB multiband image
We can use rasterio's built-in function show to plot the RGB tile.
show(rgb_dataset);
Plot each band of the RGB image
We can also plot each band (red, green, and blue) individually as follows:
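A minimal sketch adapted from the rasterio plotting documentation (rasterio indexes bands starting from 1, so red, green, and blue are bands 1, 2, and 3):
# plot each band in its own panel with a matching single-hue colormap
fig, (axr, axg, axb) = plt.subplots(1, 3, figsize=(21, 7))
show((rgb_dataset, 1), ax=axr, cmap='Reds', title='red channel')
show((rgb_dataset, 2), ax=axg, cmap='Greens', title='green channel')
show((rgb_dataset, 3), ax=axb, cmap='Blues', title='blue channel')
plt.show()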
That's all for this example! Most of the other AOP raster data products are single-band images, but rasterio is a handy Python package for working with any geotiff files. You can download and visualize the lidar and spectrometer derived raster images in a similar way.