Access and Work with NEON Geolocation Data

This tutorial explores NEON geolocation data. The focus is on the locations of NEON observational sampling and sensor data; NEON remote sensing data are inherently spatial and have dedicated tutorials. If you are interested in connecting remote sensing with ground-based measurements, the methods in the vegetation structure and canopy height model tutorial can be generalized to other data products.

In planning your analyses, consider what level of spatial resolution is required. There is no reason to carefully map each measurement if precise spatial locations aren't required to address your hypothesis! For example, if you want to use the Vegetation structure data product to calculate a site-scale estimate of biomass and production, the spatial coordinates of each tree are probably not needed. If you want to explore relationships between vegetation and beetle communities, you will need to identify the sampling plots where NEON measures both beetles and vegetation, but finer-scale coordinates may not be needed. Finally, if you want to relate vegetation measurements to airborne remote sensing data, you will need very accurate coordinates for each measurement on the ground.

Learning Objectives

After completing this tutorial you will be able to:

  • access NEON spatial data through data downloaded with the neonUtilities package.
  • access and plot specific sampling locations for TOS data products.
  • access and use sensor location data.

Things You’ll Need To Complete This Tutorial

R Programming Language

You will need a current version of R to complete this tutorial. We also recommend the RStudio IDE to work with R.

Setup R Environment

We'll need several R packages in this tutorial. Install the packages, if not already installed, and load the libraries for each.

# run once to get the package, and re-run if you need to get updates

install.packages("ggplot2")  # plotting

install.packages("neonUtilities")  # work with NEON data

install.packages("neonOS")  # work with NEON observational data

install.packages("devtools")  # to use the install_github() function

devtools::install_github("NEONScience/NEON-geolocation/geoNEON")  # work with NEON spatial data



# run every time you start a script

library(ggplot2)

library(neonUtilities)

library(neonOS)

library(geoNEON)



options(stringsAsFactors=F)

Locations for observational data

Plot level locations

Both aquatic and terrestrial observational data downloads include spatial data in the downloaded files. The spatial data in the aquatic data files are the most precise locations available for the sampling events. The spatial data in the terrestrial data downloads represent the locations of the sampling plots. In some cases, the plot is the most precise location available, but for many terrestrial data products, more precise locations can be calculated for specific sampling events.

Here, we'll download the Vegetation structure (DP1.10098.001) data product, examine the plot location data in the download, then calculate the locations of individual trees. These steps can be extrapolated to other terrestrial observational data products; the specific sampling layout varies from data product to data product, but the methods for working with the data are similar.

First, let's download the vegetation structure data from one site, Wind River Experimental Forest (WREF).

If downloading data using the neonUtilities package is new to you, check out the Download and Explore tutorial.

# load veg structure data

vst <- loadByProduct(dpID="DP1.10098.001", 
                     site="WREF",
                     check.size=F)

Data downloaded this way are stored in R as a large list. For this tutorial, we'll work with the individual dataframes within this large list. Alternatively, each dataframe can be assigned as its own object.
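If you prefer standalone objects, one quick option (using base R's list2env(); this step is optional) is to unlist the download into the global environment:

# optional: assign each table in the downloaded list as its own object
list2env(vst, .GlobalEnv)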

To find the spatial data for any given data product, view the variables file to see which data table contains the spatial data.

View(vst$variables_10098)

Looking through the variables, we can see that the spatial data (decimalLatitude, decimalLongitude, etc.) are in the vst_perplotperyear table. Let's take a look at the table.

View(vst$vst_perplotperyear)

As noted above, the spatial data here are at the plot level; the latitude and longitude represent the centroid of the sampling plot. We can map these plots on the landscape using the easting and northing variables; these are the UTM coordinates. At this site, tower plots are 40 m x 40 m, and distributed plots are 20 m x 20 m; we can use the symbols() function to draw boxes of the correct size.

We'll also use the treesPresent variable to subset to only those plots where trees were found and measured.

# start by subsetting data to plots with trees

vst.trees <- vst$vst_perplotperyear[which(
        vst$vst_perplotperyear$treesPresent=="Y"),]



# make variable for plot sizes

plot.size <- numeric(nrow(vst.trees))



# populate plot sizes in new variable

plot.size[which(vst.trees$plotType=="tower")] <- 40

plot.size[which(vst.trees$plotType=="distributed")] <- 20



# create map of plots

symbols(vst.trees$easting,
        vst.trees$northing,
        squares=plot.size, inches=F,
        xlab="Easting", ylab="Northing")

All vegetation structure plots at WREF

We can see where the plots are located across the landscape, and we can see the denser cluster of plots in the area near the micrometeorology tower.

For many analyses, this level of spatial data may be sufficient. Calculating the precise location of each tree is only required for certain hypotheses; consider whether you need these data when working with a data product with plot-level spatial data.

Looking back at the variables_10098 table, notice that there is a table in this data product called vst_mappingandtagging, suggesting we can find mapping data there. Let's take a look.

View(vst$vst_mappingandtagging)

Here we see data fields for stemDistance and stemAzimuth. Looking back at the variables_10098 file, we see these fields contain the distance and azimuth from a pointID to a specific stem. To calculate the precise coordinates of each tree, we would need to get the locations of the pointIDs, and then adjust the coordinates based on distance and azimuth. The Data Product User Guide describes how to carry out these steps, and can be downloaded from the Data Product Details page.
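As a rough sketch of the underlying geometry (simplified; this skips the point-location lookup and the uncertainty handling described in the User Guide, and the reference point coordinates below are hypothetical placeholders):

# simplified geometry only: stemAzimuth is in degrees clockwise from north,
# stemDistance is in meters
pointEasting <- 580000   # hypothetical UTM easting of a reference point
pointNorthing <- 5075000 # hypothetical UTM northing of a reference point
az <- vst$vst_mappingandtagging$stemAzimuth * pi/180  # degrees -> radians
treeEasting  <- pointEasting  + vst$vst_mappingandtagging$stemDistance * sin(az)
treeNorthing <- pointNorthing + vst$vst_mappingandtagging$stemDistance * cos(az)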

However, carrying out these calculations yourself is not the only option! The geoNEON package contains a function that can do this for you, for the TOS data products with location data more precise than the plot level.

Sampling locations

The getLocTOS() function in the geoNEON package uses the NEON API to access NEON location data and then makes protocol-specific calculations to return precise locations for each sampling effort. This function works for a subset of NEON TOS data products. The list of tables and data products that can be entered is in the package documentation on GitHub.

For more information about the NEON API, see the API tutorial and the API web page. For more information about the location calculations used in each data product, see the Data Product User Guide for each product.

The getLocTOS() function requires two inputs:

  • A data table that contains spatial data from a NEON TOS data product
  • The NEON table name of that data table

For vegetation structure locations, the function call looks like this. This function may take a while to download all the location data. For faster downloads, use an API token.

# calculate individual tree locations

vst.loc <- getLocTOS(data=vst$vst_mappingandtagging,
                     dataProd="vst_mappingandtagging")

What additional data are now available in the data obtained by getLocTOS()?

# print variable names that are new

names(vst.loc)[which(!names(vst.loc) %in% 
                      names(vst$vst_mappingandtagging))]

## [1] "utmZone"                  "adjNorthing"              "adjEasting"              
## [4] "adjCoordinateUncertainty" "adjDecimalLatitude"       "adjDecimalLongitude"     
## [7] "adjElevation"             "adjElevationUncertainty"

Now we have adjusted latitude, longitude, and elevation, and the corresponding easting and northing UTM data. We also have coordinate uncertainty data for these coordinates.
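To get a quick sense of the precision of these locations, we can summarize the uncertainty field (values are in meters):

# distribution of coordinate uncertainty for mapped stems, in meters
summary(vst.loc$adjCoordinateUncertainty)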

As we did with the plots above, we can use the easting and northing data to plot the locations of the individual trees.

plot(vst.loc$adjEasting, vst.loc$adjNorthing, 
     pch=".", xlab="Easting", ylab="Northing")

All mapped tree locations at WREF

We can see the mapped trees in the same plots we mapped above. We've plotted each individual tree as a single point (pch="."), so all we can see at this scale is the cluster of dots that makes up each plot.

Let's zoom in on a single plot:

plot(vst.loc$adjEasting[which(vst.loc$plotID=="WREF_085")], 
     vst.loc$adjNorthing[which(vst.loc$plotID=="WREF_085")], 
     pch=20, xlab="Easting", ylab="Northing")

Tree locations in plot WREF_085

Now we can see the location of each tree within the sampling plot WREF_085. This is interesting, but it would be more interesting if we could see more information about each tree. How are species distributed across the plot, for instance?

We can plot the tree species at each location using the text() function and the vst.loc$taxonID field.

plot(vst.loc$adjEasting[which(vst.loc$plotID=="WREF_085")], 
     vst.loc$adjNorthing[which(vst.loc$plotID=="WREF_085")], 
     type="n", xlab="Easting", ylab="Northing")

text(vst.loc$adjEasting[which(vst.loc$plotID=="WREF_085")], 
     vst.loc$adjNorthing[which(vst.loc$plotID=="WREF_085")],
     labels=vst.loc$taxonID[which(vst.loc$plotID=="WREF_085")],
     cex=0.5)

Tree species and their locations in plot WREF_085

Almost all of the mapped trees in this plot are either Pseudotsuga menziesii or Tsuga heterophylla (Douglas fir and western hemlock), which is not surprising at Wind River.

But suppose we want to map the diameter of each tree? This is a very common way to present a stem map; it gives a view as if we were looking down on the plot from overhead and had cut off each tree at its measurement height.

Other than taxon, the attributes of the trees, such as diameter, height, growth form, and canopy position, are found in the vst_apparentindividual table, not in the vst_mappingandtagging table. We'll need to join the two tables to get the tree attributes together with their mapped locations.

The neonOS package contains the function joinTableNEON(), which can be used to do this. See the tutorial for the neonOS package for more details about this function.

veg <- joinTableNEON(vst.loc, 
                     vst$vst_apparentindividual,
                     name1="vst_mappingandtagging",
                     name2="vst_apparentindividual")

Now we can use the symbols() function to plot the diameter of each tree, at its spatial coordinates, to create a correctly scaled map of boles in the plot. Note that stemDiameter is in centimeters, while the easting and northing UTM coordinates are in meters, so we divide by 100 to convert the units, and by 2 to convert diameter to radius, since symbols() draws circles by radius.

symbols(veg$adjEasting[which(veg$plotID=="WREF_085")], 
        veg$adjNorthing[which(veg$plotID=="WREF_085")], 
        circles=veg$stemDiameter[which(veg$plotID=="WREF_085")]/100/2, 
        inches=F, xlab="Easting", ylab="Northing")

Tree bole diameters in plot WREF_085

If you are interested in taking the vegetation structure data a step further, and connecting measurements of trees on the ground to remotely sensed Lidar data, check out the Vegetation Structure and Canopy Height Model tutorial.

If you are interested in working with other terrestrial observational (TOS) data products, the basic techniques used here to find precise sampling locations and join data tables can be adapted to other TOS data products. Consult the Data Product User Guide for each data product to find details specific to that data product.

Locations for sensor data

Downloads of instrument system (IS) data include a file called sensor_positions.csv. The sensor positions file contains information about the coordinates of each sensor, relative to a reference location.

While the specifics vary, the techniques for working with sensor data and the sensor_positions.csv file are generalizable. For this tutorial, let's look at the sensor locations for soil temperature (DP1.00041.001) at the NEON Treehaven site (TREE) in July 2018. To reduce the file size, we'll use the 30 minute averaging interval. Our goal in this section is to create a depth profile of soil temperature in one soil plot.

If downloading data using the neonUtilities package is new to you, check out the neonUtilities tutorial.

As written, this function call will download about 7 MB of data, so we've set check.size=F for ease of running the code.

# load soil temperature data of interest 

soilT <- loadByProduct(dpID="DP1.00041.001", site="TREE",
                    startdate="2018-07", enddate="2018-07",
                    timeIndex=30, check.size=F)

## Attempting to stack soil sensor data. Note that due to the number of soil sensors at each site, data volume is very high for these data. Consider dividing data processing into chunks, using the nCores= parameter to parallelize stacking, and/or using a high-performance system.

Sensor positions file

Now we can specifically look at the sensor positions file.

# create object for sensor positions file

pos <- soilT$sensor_positions_00041



# view column names

names(pos)

##  [1] "siteID"                           "HOR.VER"                         
##  [3] "sensorLocationID"                 "sensorLocationDescription"       
##  [5] "positionStartDateTime"            "positionEndDateTime"             
##  [7] "referenceLocationID"              "referenceLocationIDDescription"  
##  [9] "referenceLocationIDStartDateTime" "referenceLocationIDEndDateTime"  
## [11] "xOffset"                          "yOffset"                         
## [13] "zOffset"                          "pitch"                           
## [15] "roll"                             "azimuth"                         
## [17] "locationReferenceLatitude"        "locationReferenceLongitude"      
## [19] "locationReferenceElevation"       "eastOffset"                      
## [21] "northOffset"                      "xAzimuth"                        
## [23] "yAzimuth"                         "publicationDate"

# view table

View(pos)

The sensor locations are indexed by the HOR.VER variable - see the file naming conventions page for more details.

Using unique() we can view all the location indices in this file.

unique(pos$HOR.VER)

##  [1] "001.501" "001.502" "001.503" "001.504" "001.505" "001.506" "001.507" "001.508" "001.509" "002.501"
## [11] "002.502" "002.503" "002.504" "002.505" "002.506" "002.507" "002.508" "002.509" "003.501" "003.502"
## [21] "003.503" "003.504" "003.505" "003.506" "003.507" "003.508" "003.509" "004.501" "004.502" "004.503"
## [31] "004.504" "004.505" "004.506" "004.507" "004.508" "004.509" "005.501" "005.502" "005.503" "005.504"
## [41] "005.505" "005.506" "005.507" "005.508" "005.509"

Soil temperature data are collected in 5 instrumented soil plots inside the tower footprint. We see this reflected in the data where HOR = 001 to 005. Within each plot, temperature is measured at 9 depths, seen in VER = 501 to 509. At some sites, the number of depths may differ slightly.
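A quick sanity check on that layout:

# 5 soil plots x 9 depths = 45 unique location indices
length(unique(pos$HOR.VER))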

The x, y, and z offsets in the sensor positions file are the relative distance, in meters, to the reference latitude, longitude, and elevation in the file.
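For example, to estimate the absolute elevation of each sensor, add the vertical offset to the reference elevation (a minimal sketch, assuming these columns were read in as numeric):

# approximate absolute sensor elevation, in meters:
# reference elevation plus vertical offset (negative offsets are below ground)
pos$sensorElevation <- pos$locationReferenceElevation + pos$zOffset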

The HOR and VER indices in the sensor positions file correspond to the horizontalPosition and verticalPosition fields in soilT$ST_30_minute.

Note that there are two sets of position data for soil plot 001, and that one set has a positionEndDateTime date in the file. This indicates sensors either moved or were relocated; in this case there was a frost heave incident. You can read about it in the issue log, which is displayed on the Data Product Details page, and also included as a table in the data download:

soilT$issueLog_00041[grep("TREE soil plot 1", 
                     soilT$issueLog_00041$locationAffected),]

##      id parentIssueID            issueDate         resolvedDate       dateRangeStart         dateRangeEnd
## 1: 9328            NA 2019-05-23T00:00:00Z 2019-05-23T00:00:00Z 2018-11-07T00:00:00Z 2019-04-19T00:00:00Z
##                                                                                                                          locationAffected
## 1: D05 TREE soil plot 1 measurement levels 1-9 (HOR.VER: 001.501, 001.502, 001.503, 001.504, 001.505, 001.506, 001.507, 001.508, 001.509)
##                                                                                                                                                                                                                           issue
## 1: Soil temperature sensors were pushed or pulled out of the ground by 3 cm over winter, presumably due to freeze-thaw action. The exact timing of this is unknown, but it occurred sometime between 2018-11-07 and 2019-04-19.
##                                                                                        resolution
## 1: Sensor depths were updated in the database with a start date of 2018-11-07 for the new depths.

Since we're working with data from July 2018, and the change in sensor locations is dated Nov 2018, we'll use the original locations. There are a number of ways to drop the later locations from the table; here, we find the rows in which the positionEndDateTime field is empty, indicating no end date, and the rows corresponding to soil plot 001, and drop all the rows that meet both criteria.

pos <- pos[-intersect(grep("001.", pos$HOR.VER),
                      which(pos$positionEndDateTime=="")),]

Our goal is to plot a time series of temperature, stratified by depth, so let's start by joining the data file and sensor positions file, to bring the depth measurements into the same data frame with the data.

# paste horizontalPosition and verticalPosition together

# to match HOR.VER in the sensor positions file

soilT$ST_30_minute$HOR.VER <- paste(soilT$ST_30_minute$horizontalPosition,
                                    soilT$ST_30_minute$verticalPosition,
                                    sep=".")



# left join to keep all temperature records

soilTHV <- merge(soilT$ST_30_minute, pos, 
                 by="HOR.VER", all.x=T)

And now we can plot soil temperature over time for each depth. We'll use ggplot since it's well suited to this kind of stratification. Each soil plot is its own panel, and each depth is its own line:

gg <- ggplot(soilTHV, 
             aes(endDateTime, soilTempMean, 
                 group=zOffset, color=zOffset)) +
             geom_line() + 
        facet_wrap(~horizontalPosition)

gg

## Warning: Removed 1488 rows containing missing values (`geom_line()`).

Tiled figure of temperature by depth in each plot

We can see that as soil depth increases, temperatures become much more stable, while the shallowest measurement has a clear diurnal cycle. We can also see that something has gone wrong with one of the sensors in plot 002. To remove those data, use only values where the final quality flag passed, i.e., finalQF = 0.

gg <- ggplot(subset(soilTHV, finalQF==0), 
             aes(endDateTime, soilTempMean, 
                 group=zOffset, color=zOffset)) +
             geom_line() + 
        facet_wrap(~horizontalPosition)

gg

Tiled figure of temperature by depth in each plot with only passing quality flags

Introduction to working with NEON eddy flux data

This data tutorial provides an introduction to working with NEON eddy flux data, using the neonUtilities R package or the neonutilities Python package. If you are new to NEON data, we recommend starting with a more general tutorial, such as the neonUtilities tutorial or the Download and Explore tutorial. Some of the functions and techniques described in those tutorials will be used here, as well as functions and data formats that are unique to the eddy flux system.

This tutorial assumes general familiarity with eddy flux data and associated concepts.

1. Setup and data download

Start by installing and loading packages and setting options.

R

To work with the NEON flux data, we need the rhdf5 package, which is hosted on Bioconductor, and requires a different installation process than CRAN packages:

install.packages('BiocManager')
BiocManager::install('rhdf5')
install.packages('neonUtilities')
install.packages('ggplot2')

Now load packages.

library(neonUtilities)
library(ggplot2)

Python

The necessary helper packages should be installed automatically when we install neonutilities. We'll also use matplotlib, pandas, and numpy, which are common scientific Python packages, along with modules from the Python standard library.

pip install neonutilities

Now load packages.


import neonutilities as nu
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from datetime import datetime
import os

Download data

R

Use the zipsByProduct() function from the neonUtilities package to download flux data from two sites and two months. The transformations and functions below will work on any time range and site(s), but two sites and two months allows us to see all the available functionality while minimizing download size.

Inputs to the zipsByProduct() function for this tutorial:

  • dpID: DP4.00200.001, the bundled eddy covariance product
  • package: basic (the expanded package is not covered in this tutorial)
  • site: NIWO = Niwot Ridge and HARV = Harvard Forest
  • startdate: 2018-06 (both dates are inclusive)
  • enddate: 2018-07 (both dates are inclusive)
  • savepath: path to download to; here we use the working directory
  • check.size: T if you want to see file size before downloading, otherwise F

The download may take a while, especially if you’re on a slow network. For faster downloads, consider using an API token.

zipsByProduct(dpID="DP4.00200.001", package="basic", 
              site=c("NIWO", "HARV"), 
              startdate="2018-06", enddate="2018-07",
              savepath=getwd(), 
              check.size=F)
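If you have an API token, you can pass it via the token argument (a sketch; here we assume the token is stored in an environment variable we've named NEON_TOKEN):

# optional: include an API token for faster downloads
zipsByProduct(dpID="DP4.00200.001", package="basic", 
              site=c("NIWO", "HARV"), 
              startdate="2018-06", enddate="2018-07",
              savepath=getwd(), 
              check.size=F,
              token=Sys.getenv("NEON_TOKEN"))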

Python

Use the zips_by_product() function from the neonutilities package to download flux data from two sites and two months. The transformations and functions below will work on any time range and site(s), but two sites and two months allows us to see all the available functionality while minimizing download size.

Inputs to the zips_by_product() function for this tutorial:

  • dpid: DP4.00200.001, the bundled eddy covariance product
  • package: basic (the expanded package is not covered in this tutorial)
  • site: NIWO = Niwot Ridge and HARV = Harvard Forest
  • startdate: 2018-06 (both dates are inclusive)
  • enddate: 2018-07 (both dates are inclusive)
  • savepath: path to download to; here we use the working directory
  • check_size: True if you want to see file size before downloading, otherwise False

The download may take a while, especially if you’re on a slow network. For faster downloads, consider using an API token.

nu.zips_by_product(dpid="DP4.00200.001", package="basic", 
              site=["NIWO","HARV"], 
              startdate="2018-06", enddate="2018-07",
              savepath=os.getcwd(), 
              check_size=False)

2. Data Levels

There are five levels of data contained in the eddy flux bundle. For full details, refer to the NEON algorithm document.

Briefly, the data levels are:

  • Level 0’ (dp0p): Calibrated raw observations
  • Level 1 (dp01): Time-aggregated observations, e.g. 30-minute mean gas concentrations
  • Level 2 (dp02): Time-interpolated data, e.g. rate of change of a gas concentration
  • Level 3 (dp03): Spatially interpolated data, i.e. vertical profiles
  • Level 4 (dp04): Fluxes

The dp0p data are available in the expanded data package and are beyond the scope of this tutorial.

The dp02 and dp03 data are used in storage calculations, and the dp04 data include both the storage and turbulent components. Since many users will want to focus on the net flux data, we’ll start there.

3. Extract Level 4 data (Fluxes!)

R

To extract the Level 4 data from the HDF5 files and merge them into a single table, we’ll use the stackEddy() function from the neonUtilities package.

stackEddy() requires two inputs:

  • filepath: Path to a file or folder, which can be any one of:
    1. A zip file of eddy flux data downloaded from the NEON data portal
    2. A folder of eddy flux data downloaded by the zipsByProduct() function
    3. The folder of files resulting from unzipping either of 1 or 2
    4. One or more HDF5 files of NEON eddy flux data
  • level: dp01-4

Input the filepath you downloaded to using zipsByProduct() earlier, including the filesToStack00200 folder created by the function, and dp04:

flux <- stackEddy(filepath=paste0(getwd(), "/filesToStack00200"),
                 level="dp04")
## Extracting data
## Joining data variables by file
## Stacking files by month
## Getting metadata tables

We now have an object called flux. It’s a named list containing one data table for each site, along with variables, objDesc, issueLog, and citation tables.

names(flux)
## [1] "HARV"                        "NIWO"                       
## [3] "variables"                   "objDesc"                    
## [5] "issueLog"                    "citation_00200_RELEASE-2026"

Let’s look at the contents of one of the site data files:

head(flux$NIWO)
##               timeBgn             timeEnd data.fluxCo2.nsae.flux
## 1 2018-06-01 00:00:00 2018-06-01 00:29:59              0.1895604
## 2 2018-06-01 00:30:00 2018-06-01 00:59:59              0.9429929
## 3 2018-06-01 01:00:00 2018-06-01 01:29:59              0.5101887
## 4 2018-06-01 01:30:00 2018-06-01 01:59:59              0.7980803
## 5 2018-06-01 02:00:00 2018-06-01 02:29:59              0.4345524
## 6 2018-06-01 02:30:00 2018-06-01 02:59:59              0.8440311
##   data.fluxCo2.stor.flux data.fluxCo2.turb.flux data.fluxH2o.nsae.flux
## 1            -0.06349451              0.2530549              16.428558
## 2             0.09749592              0.8454970               7.965026
## 3             0.02525844              0.4849303               5.387009
## 4             0.23341588              0.5646644               9.933795
## 5             0.17415575              0.2603966               2.963931
## 6            -0.01496958              0.8590007               4.488221
##   data.fluxH2o.stor.flux data.fluxH2o.turb.flux data.fluxMome.turb.veloFric
## 1             3.34458774              13.083971                   0.2025301
## 2            -1.51509573               9.480122                   0.1922362
## 3            -4.26233930               9.649348                   0.1197608
## 4             0.15083070               9.782964                   0.1190342
## 5            -0.01668207               2.980613                   0.1589262
## 6            -0.52273792               5.010959                   0.1115558
##   data.fluxTemp.nsae.flux data.fluxTemp.stor.flux data.fluxTemp.turb.flux
## 1               4.7823169              -1.4575674               6.2398843
## 2              -0.5981044               0.3404014              -0.9385058
## 3              -4.3428949               0.1870749              -4.5299698
## 4             -12.6516137              -2.4904875             -10.1611262
## 5              -5.0544306              -0.7514722              -4.3029584
## 6              -7.6999704              -1.9047428              -5.7952276
##   data.foot.stat.angZaxsErth data.foot.stat.distReso
## 1                    94.2262                    8.34
## 2                   355.4252                    8.34
## 3                   359.8013                    8.34
## 4                   137.7743                    8.34
## 5                   188.4800                    8.34
## 6                   183.1920                    8.34
##   data.foot.stat.veloYaxsHorSd data.foot.stat.veloZaxsHorSd
## 1                    0.7955893                    0.2713232
## 2                    0.8590177                    0.2300000
## 3                    1.2601763                    0.2300000
## 4                    0.7332641                    0.2300000
## 5                    0.7096286                    0.2300000
## 6                    0.3789859                    0.2300000
##   data.foot.stat.veloFric data.foot.stat.distZaxsMeasDisp
## 1               0.2025428                            8.34
## 2               0.2000000                            8.34
## 3               0.2000000                            8.34
## 4               0.2000000                            8.34
## 5               0.2000000                            8.34
## 6               0.2000000                            8.34
##   data.foot.stat.distZaxsRgh data.foot.stat.distObkv data.foot.stat.paraStbl
## 1                 0.04083968               -72.43385            -0.115139545
## 2                 0.29601180               965.24689             0.008640276
## 3                 0.23002973                27.47032             0.303600430
## 4                 0.83400000                10.29068             0.810442357
## 5                 0.83400000                58.07739             0.143601482
## 6                 0.83400000                15.03590             0.554672317
##   data.foot.stat.distZaxsAbl data.foot.stat.distXaxs90
## 1                       1000                    325.26
## 2                       1000                    258.54
## 3                       1000                    275.22
## 4                       1000                    208.50
## 5                       1000                    208.50
## 6                       1000                    208.50
##   data.foot.stat.distXaxsMax data.foot.stat.distYaxs90 qfqm.fluxCo2.nsae.qfFinl
## 1                     133.44                     25.02                        1
## 2                     108.42                     50.04                        1
## 3                     108.42                     75.06                        1
## 4                      83.40                     75.06                        1
## 5                      83.40                     66.72                        1
## 6                      83.40                     41.70                        1
##   qfqm.fluxCo2.stor.qfFinl qfqm.fluxCo2.turb.qfFinl qfqm.fluxH2o.nsae.qfFinl
## 1                        1                        1                        1
## 2                        1                        1                        1
## 3                        1                        1                        1
## 4                        1                        1                        1
## 5                        1                        1                        1
## 6                        1                        1                        1
##   qfqm.fluxH2o.stor.qfFinl qfqm.fluxH2o.turb.qfFinl qfqm.fluxMome.turb.qfFinl
## 1                        1                        1                         0
## 2                        0                        1                         0
## 3                        0                        1                         1
## 4                        0                        1                         1
## 5                        0                        1                         0
## 6                        0                        1                         0
##   qfqm.fluxTemp.nsae.qfFinl qfqm.fluxTemp.stor.qfFinl qfqm.fluxTemp.turb.qfFinl
## 1                         1                         0                         1
## 2                         1                         0                         1
## 3                         1                         0                         1
## 4                         1                         0                         1
## 5                         1                         0                         1
## 6                         1                         0                         1
##   qfqm.foot.turb.qfFinl
## 1                     0
## 2                     0
## 3                     0
## 4                     0
## 5                     0
## 6                     0

The variables and objDesc tables can help you interpret the column headers in the data table. The objDesc table contains definitions for many of the terms used in the eddy flux data product, but it isn’t complete. To get the terms of interest, we’ll break up the column headers into individual terms and look for them in the objDesc table:

term <- unlist(strsplit(names(flux$NIWO), split=".", fixed=T))
flux$objDesc[which(flux$objDesc$X...Object %in% term),]
##           X...Object
## 22       angZaxsErth
## 84              data
## 91          distReso
## 92        distXaxs90
## 93       distXaxsMax
## 94        distYaxs90
## 95       distZaxsAbl
## 100 distZaxsMeasDisp
## 101      distZaxsRgh
## 125             flux
## 126          fluxCo2
## 128          fluxH2o
## 130         fluxMome
## 132         fluxTemp
## 133             foot
## 200             nsae
## 244           qfFinl
## 274             qfqm
## 451             stat
## 453             stor
## 474          timeBgn
## 475          timeEnd
## 477             turb
## 505         veloFric
## 512    veloYaxsHorSd
## 515    veloZaxsHorSd
##                                                                                                           Description
## 22                                                                                                     Wind direction
## 84                                                                                             Represents data fields
## 91                                                                      Resolution of vector of distances (units = m)
## 92                                     Distance of 90% contriubtion of footprint aligned with dominant wind direction
## 93                                        Max distance of footprint contriubtion aligned with dominant wind direction
## 94  Distance of 90% contriubtion of footprint aligned with direction that is perpendicular to dominant wind direction
## 95                                              Atmospheric boundary layer height used in flux footprint calculations
## 100                                                                     Measurement height minus displacement height 
## 101                                                               Roughness length used in flux footprint calculation
## 125                                                                                             General term for flux
## 126                                                                     CO2 flux (can be turbulent, storage, or nsae)
## 128                                      H2O flux (can be turbulent, storage, or nsae)  - relates to latent heat flux
## 130                                                                Momentum flux (can be turbulent, storage, or nsae)
## 132                             Temperature flux (can be turbulent, storage, or nsae) - relates to sensible heat flux
## 133                                                                                                         Footprint
## 200                                                       Net surface atmospher exchange (turbulent + storage fluxes)
## 244         The final quality flag indicating if the data are valid for the given aggregation period (1=fail, 0=pass)
## 274   Quality flag and quality metrics, represents quality flags and quality metrics that accompany the provided data
## 451                                                                                                        Statistics
## 453                                                                                                           Storage
## 474                                                                      The beginning time of the aggregation period
## 475                                                                            The end time of the aggregation period
## 477                                                                                                         Turbulent
## 505                                                                           Friction velocity/ustar (units = m s-1)
## 512                                        Standard deviation of horizontal wind speed in Y direction (units = m s-1)
## 515                                        Standard deviation of horizontal wind speed in X direction (units = m s-1)

For the terms that aren’t captured here, fluxCo2, fluxH2o, and fluxTemp are self-explanatory. The flux components are

  • turb: Turbulent flux
  • stor: Storage
  • nsae: Net surface-atmosphere exchange
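Since nsae is defined as the sum of the turbulent and storage components, we can verify this relationship directly in the data (a quick check):

# nsae should equal turb + stor, within numerical precision
all.equal(flux$NIWO$data.fluxCo2.nsae.flux,
          flux$NIWO$data.fluxCo2.turb.flux +
            flux$NIWO$data.fluxCo2.stor.flux)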

The variables table contains the units for each field:

flux$variables
##    category   system variable             stat           units
## 1      data  fluxCo2     nsae          timeBgn              NA
## 2      data  fluxCo2     nsae          timeEnd              NA
## 3      data  fluxCo2     nsae             flux umolCo2 m-2 s-1
## 4      data  fluxCo2     stor          timeBgn              NA
## 5      data  fluxCo2     stor          timeEnd              NA
## 6      data  fluxCo2     stor             flux umolCo2 m-2 s-1
## 7      data  fluxCo2     turb          timeBgn              NA
## 8      data  fluxCo2     turb          timeEnd              NA
## 9      data  fluxCo2     turb             flux umolCo2 m-2 s-1
## 10     data  fluxH2o     nsae          timeBgn              NA
## 11     data  fluxH2o     nsae          timeEnd              NA
## 12     data  fluxH2o     nsae             flux           W m-2
## 13     data  fluxH2o     stor          timeBgn              NA
## 14     data  fluxH2o     stor          timeEnd              NA
## 15     data  fluxH2o     stor             flux           W m-2
## 16     data  fluxH2o     turb          timeBgn              NA
## 17     data  fluxH2o     turb          timeEnd              NA
## 18     data  fluxH2o     turb             flux           W m-2
## 19     data fluxMome     turb          timeBgn              NA
## 20     data fluxMome     turb          timeEnd              NA
## 21     data fluxMome     turb         veloFric           m s-1
## 22     data fluxTemp     nsae          timeBgn              NA
## 23     data fluxTemp     nsae          timeEnd              NA
## 24     data fluxTemp     nsae             flux           W m-2
## 25     data fluxTemp     stor          timeBgn              NA
## 26     data fluxTemp     stor          timeEnd              NA
## 27     data fluxTemp     stor             flux           W m-2
## 28     data fluxTemp     turb          timeBgn              NA
## 29     data fluxTemp     turb          timeEnd              NA
## 30     data fluxTemp     turb             flux           W m-2
## 31     data     foot     stat          timeBgn              NA
## 32     data     foot     stat          timeEnd              NA
## 33     data     foot     stat      angZaxsErth             deg
## 34     data     foot     stat         distReso               m
## 35     data     foot     stat    veloYaxsHorSd           m s-1
## 36     data     foot     stat    veloZaxsHorSd           m s-1
## 37     data     foot     stat         veloFric           m s-1
## 38     data     foot     stat distZaxsMeasDisp               m
## 39     data     foot     stat      distZaxsRgh               m
## 40     data     foot     stat         distObkv               m
## 41     data     foot     stat         paraStbl               -
## 42     data     foot     stat      distZaxsAbl               m
## 43     data     foot     stat       distXaxs90               m
## 44     data     foot     stat      distXaxsMax               m
## 45     data     foot     stat       distYaxs90               m
## 46     qfqm  fluxCo2     nsae          timeBgn              NA
## 47     qfqm  fluxCo2     nsae          timeEnd              NA
## 48     qfqm  fluxCo2     nsae           qfFinl              NA
## 49     qfqm  fluxCo2     stor           qfFinl              NA
## 50     qfqm  fluxCo2     stor          timeBgn              NA
## 51     qfqm  fluxCo2     stor          timeEnd              NA
## 52     qfqm  fluxCo2     turb          timeBgn              NA
## 53     qfqm  fluxCo2     turb          timeEnd              NA
## 54     qfqm  fluxCo2     turb           qfFinl              NA
## 55     qfqm  fluxH2o     nsae          timeBgn              NA
## 56     qfqm  fluxH2o     nsae          timeEnd              NA
## 57     qfqm  fluxH2o     nsae           qfFinl              NA
## 58     qfqm  fluxH2o     stor           qfFinl              NA
## 59     qfqm  fluxH2o     stor          timeBgn              NA
## 60     qfqm  fluxH2o     stor          timeEnd              NA
## 61     qfqm  fluxH2o     turb          timeBgn              NA
## 62     qfqm  fluxH2o     turb          timeEnd              NA
## 63     qfqm  fluxH2o     turb           qfFinl              NA
## 64     qfqm fluxMome     turb          timeBgn              NA
## 65     qfqm fluxMome     turb          timeEnd              NA
## 66     qfqm fluxMome     turb           qfFinl              NA
## 67     qfqm fluxTemp     nsae          timeBgn              NA
## 68     qfqm fluxTemp     nsae          timeEnd              NA
## 69     qfqm fluxTemp     nsae           qfFinl              NA
## 70     qfqm fluxTemp     stor           qfFinl              NA
## 71     qfqm fluxTemp     stor          timeBgn              NA
## 72     qfqm fluxTemp     stor          timeEnd              NA
## 73     qfqm fluxTemp     turb          timeBgn              NA
## 74     qfqm fluxTemp     turb          timeEnd              NA
## 75     qfqm fluxTemp     turb           qfFinl              NA
## 76     qfqm     foot     turb          timeBgn              NA
## 77     qfqm     foot     turb          timeEnd              NA
## 78     qfqm     foot     turb           qfFinl              NA

Python

To extract the Level 4 data from the HDF5 files and merge them into a single table, we’ll use the stack_eddy() function from the neonutilities package.

stack_eddy() requires two inputs:

  • filepath: Path to a file or folder, which can be any one of:
    1. A zip file of eddy flux data downloaded from the NEON data portal
    2. A folder of eddy flux data downloaded by the zips_by_product() function
    3. The folder of files resulting from unzipping either of 1 or 2
    4. One or more HDF5 files of NEON eddy flux data
  • level: dp01-4

Input the filepath you downloaded to using zips_by_product() earlier, including the filesToStack00200 folder created by the function, and dp04:

flux = nu.stack_eddy(filepath=os.getcwd() + "/filesToStack00200",
                 level="dp04")

We now have an object called flux. It’s a dictionary containing one data table for each site, along with variables, objDesc, issueLog, and citation tables.

flux.keys()
## dict_keys(['HARV', 'NIWO', 'variables', 'objDesc', 'issueLog', 'citation_00200_RELEASE-2026'])

Let’s look at the contents of one of the site data files:

flux['NIWO'].head()
##                      timeBgn  ... qfqm.foot.turb.qfFinl
## 0  2018-06-01 00:00:00+00:00  ...                     0
## 1  2018-06-01 00:30:00+00:00  ...                     0
## 2  2018-06-01 01:00:00+00:00  ...                     0
## 3  2018-06-01 01:30:00+00:00  ...                     0
## 4  2018-06-01 02:00:00+00:00  ...                     0
## 
## [5 rows x 36 columns]

The variables and objDesc tables can help you interpret the column headers in the data table. The objDesc table contains definitions for many of the terms used in the eddy flux data product, but it isn’t complete. To get the terms of interest, we’ll break up the column headers into individual terms and look for them in the objDesc table:

termlist = [f.split('.') for f in flux['NIWO'].keys()]
term = [t for sublist in termlist for t in sublist]

objDesc = flux['objDesc']
objDesc[objDesc['Object'].isin(term)]

##                Object                                        Description
## 21        angZaxsErth                                     Wind direction
## 83               data                             Represents data fields
## 90           distReso      Resolution of vector of distances (units = m)
## 91         distXaxs90  Distance of 90% contriubtion of footprint alig...
## 92        distXaxsMax  Max distance of footprint contriubtion aligned...
## 93         distYaxs90  Distance of 90% contriubtion of footprint alig...
## 94        distZaxsAbl  Atmospheric boundary layer height used in flux...
## 99   distZaxsMeasDisp      Measurement height minus displacement height 
## 100       distZaxsRgh  Roughness length used in flux footprint calcul...
## 124              flux                              General term for flux
## 125           fluxCo2      CO2 flux (can be turbulent, storage, or nsae)
## 127           fluxH2o  H2O flux (can be turbulent, storage, or nsae) ...
## 129          fluxMome  Momentum flux (can be turbulent, storage, or n...
## 131          fluxTemp  Temperature flux (can be turbulent, storage, o...
## 132              foot                                          Footprint
## 199              nsae  Net surface atmospher exchange (turbulent + st...
## 243            qfFinl  The final quality flag indicating if the data ...
## 273              qfqm  Quality flag and quality metrics, represents q...
## 450              stat                                         Statistics
## 452              stor                                            Storage
## 473           timeBgn       The beginning time of the aggregation period
## 474           timeEnd             The end time of the aggregation period
## 476              turb                                          Turbulent
## 504          veloFric            Friction velocity/ustar (units = m s-1)
## 511     veloYaxsHorSd  Standard deviation of horizontal wind speed in...
## 514     veloZaxsHorSd  Standard deviation of horizontal wind speed in...

For the terms that aren’t captured here, fluxCo2, fluxH2o, and fluxTemp are self-explanatory. The flux components are

  • turb: Turbulent flux
  • stor: Storage
  • nsae: Net surface-atmosphere exchange

The variables table contains the units for each field:

flux['variables']
##    category    system variable     stat            units
## 0      data   fluxCo2     nsae  timeBgn               NA
## 1      data   fluxCo2     nsae  timeEnd               NA
## 2      data   fluxCo2     nsae     flux  umolCo2 m-2 s-1
## 3      data   fluxCo2     stor  timeBgn               NA
## 4      data   fluxCo2     stor  timeEnd               NA
## ..      ...       ...      ...      ...              ...
## 73     qfqm  fluxTemp     turb  timeEnd               NA
## 74     qfqm  fluxTemp     turb   qfFinl               NA
## 75     qfqm      foot     turb  timeBgn               NA
## 76     qfqm      foot     turb  timeEnd               NA
## 77     qfqm      foot     turb   qfFinl               NA
## 
## [78 rows x 5 columns]

Plot fluxes

Let’s plot some data! First, a brief aside about time stamps, since these are time series data.

Time stamps

NEON sensor data come with time stamps for both the start and end of the averaging period. Depending on the analysis you’re doing, you may want to use one or the other; for general plotting, re-formatting, and transformations, I prefer to use the start time, because there are some small inconsistencies between data products in a few of the end time stamps.

Note that all NEON data use UTC time, aka Greenwich Mean Time. This is true across NEON’s instrumented, observational, and airborne measurements. When working with NEON data, it’s best to keep everything in UTC as much as possible, otherwise it’s very easy to end up with data in mismatched times, which can cause insidious and hard-to-detect problems. In the code below, time stamps and time zones have been handled by stackEddy() and loadByProduct(), so we don’t need to do anything additional. But if you’re writing your own code and need to convert times, remember that if the time zone isn’t specified, R will default to the local time zone it detects on your operating system.
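For example, in R, if you do need to display times in a local time zone (a minimal sketch; keep the underlying data in UTC for analysis):

# convert a UTC time stamp to US Mountain time for display only
utc <- as.POSIXct("2018-07-07 18:00:00", tz="GMT")
format(utc, tz="US/Mountain", usetz=TRUE)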

R

plot(flux$NIWO$data.fluxCo2.nsae.flux~flux$NIWO$timeBgn, 
     pch=".", xlab="Date", ylab="CO2 flux",
     ylim=c(-20,20))

There is a clear diurnal pattern, and an increase in daily carbon uptake as the growing season progresses.

Let’s trim down to just two days of data to see a few other details.

plot(flux$NIWO$data.fluxCo2.nsae.flux~flux$NIWO$timeBgn, 
     pch=20, xlab="Date", ylab="CO2 flux",
     xlim=c(as.POSIXct("2018-07-07", tz="GMT"),
            as.POSIXct("2018-07-09", tz="GMT")),
    ylim=c(-20,20), xaxt="n")
axis.POSIXct(1, x=flux$NIWO$timeBgn, 
             format="%Y-%m-%d %H:%M:%S")

Python

plt.scatter(flux['NIWO'].timeBgn, 
            flux['NIWO']['data.fluxCo2.nsae.flux'],
           marker='.')
plt.ylim((-20,20))
## (-20.0, 20.0)
plt.show()

There is a clear diurnal pattern, and an increase in daily carbon uptake as the growing season progresses.

Let’s trim down to just two days of data to see a few other details.

plt.scatter(flux['NIWO'].timeBgn, 
            flux['NIWO']['data.fluxCo2.nsae.flux'],
           marker='.')
plt.ylim((-20,20))
## (-20.0, 20.0)
plt.xlim((datetime.strptime('2018-07-07 00:00:00', '%Y-%m-%d %H:%M:%S'),
          datetime.strptime('2018-07-09 00:00:00', '%Y-%m-%d %H:%M:%S')))
## (np.float64(17719.0), np.float64(17721.0))
plt.show()

Note the timing of C uptake; the UTC time zone is clear here, where uptake occurs at times that appear to be during the night.

4. Merge flux data with other sensor data

Many of the data sets we would use to interpret and model flux data are measured as part of the NEON project, but are not present in the eddy flux data product bundle. In this section, we’ll download PAR data and merge them with the flux data; the steps taken here can be applied to any of the NEON instrumented (IS) data products.

Download PAR data

R

To get NEON PAR data, use the loadByProduct() function from the neonUtilities package. loadByProduct() takes the same inputs as zipsByProduct(), but it loads the downloaded data directly into the current R environment.

Let’s download PAR data matching the Niwot Ridge flux data. The inputs needed are:

  • dpID: DP1.00024.001
  • site: NIWO
  • startdate: 2018-06
  • enddate: 2018-07
  • package: basic
  • timeIndex: 30

The new input here is timeIndex=30, which downloads only the 30-minute data. Since the flux data are at a 30-minute resolution, we can save on download time by disregarding the 1-minute data files (which are of course 30 times larger). The timeIndex input can be left off if you want to download all available averaging intervals.

pr <- loadByProduct("DP1.00024.001", site="NIWO", 
                    timeIndex=30, package="basic", 
                    startdate="2018-06", enddate="2018-07",
                    check.size=F)

Python

To get NEON PAR data, use the load_by_product() function from the neonutilities package. load_by_product() takes the same inputs as zips_by_product(), but it loads the downloaded data directly into the current Python environment.

Let’s download PAR data matching the Niwot Ridge flux data. The inputs needed are:

  • dpid: DP1.00024.001
  • site: NIWO
  • startdate: 2018-06
  • enddate: 2018-07
  • package: basic
  • timeindex: 30

The new input here is timeindex=30, which downloads only the 30-minute data. Since the flux data are at a 30-minute resolution, we can save on download time by disregarding the 1-minute data files (which are of course 30 times larger). The timeindex input can be left off if you want to download all available averaging intervals.

pr = nu.load_by_product("DP1.00024.001", site="NIWO", 
                    timeindex=30, package="basic", 
                    startdate="2018-06", enddate="2018-07",
                    check_size=False)

pr is another named list, and again, metadata and units can be found in the variables table. The PARPAR_30min table contains a verticalPosition field. This field indicates the position on the tower, with 010 being the first tower level, and 020, 030, etc. going up the tower.
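A quick way to see which tower levels are available in this download (in R):

# tower measurement levels available for PAR at this site
unique(pr$PARPAR_30min$verticalPosition)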

Join PAR to flux data

We’ll connect PAR data from the tower top to the flux data.

R

pr.top <- pr$PARPAR_30min[which(pr$PARPAR_30min$verticalPosition==
                                max(pr$PARPAR_30min$verticalPosition)),]

As noted above, loadByProduct() automatically converts time stamps to a recognized date-time format when it reads the data. However, the field names for the time stamps differ between the flux data and the other meteorological data: the start of the averaging interval is timeBgn in the flux data and startDateTime in the PAR data.

Let’s create a new variable in the PAR data:

pr.top$timeBgn <- pr.top$startDateTime

And now use the matching time stamp fields to merge the flux and PAR data.

fx.pr <- merge(pr.top, flux$NIWO, by="timeBgn")

And now we can plot net carbon exchange as a function of light availability:

plot(fx.pr$data.fluxCo2.nsae.flux~fx.pr$PARMean,
     pch=".", ylim=c(-20,20),
     xlab="PAR", ylab="CO2 flux")

Python

As noted above, load_by_product() automatically converts time stamps to a recognized date-time format when it reads the data. However, the field names for the time stamps differ between the flux data and the other meteorological data: the start of the averaging interval is timeBgn in the flux data and startDateTime in the PAR data.

Let’s create a new variable in the PAR data and subset to the top of the tower.


par30 = pr['PARPAR_30min']
par30['timeBgn'] = par30['startDateTime']
prtop = par30[par30['verticalPosition']=='040']  # 040 is the top tower level at NIWO

And now use the matching time stamp fields to merge the flux and PAR data.


fxpr = pd.merge(flux['NIWO'], prtop, how='outer', on='timeBgn')

And now we can plot net carbon exchange as a function of light availability:

plt.scatter(fxpr.PARMean, 
            fxpr['data.fluxCo2.nsae.flux'],
           marker='.')
plt.ylim((-20,20))
## (-20.0, 20.0)
plt.show()

If you’re interested in data in the eddy covariance bundle besides the net flux data, the rest of this tutorial will guide you through how to get those data out of the bundle.

5. Vertical profile data (Level 3)

The Level 3 (dp03) data are the spatially interpolated profiles of the rates of change of CO2, H2O, and temperature. Extract the Level 3 data from the HDF5 file using stackEddy() with the same syntax as for the Level 4 data.

R

prof <- stackEddy(filepath=paste0(getwd(), "/filesToStack00200"),
                 level="dp03")

As with the Level 4 data, the result is a named list with data tables for each site.

head(prof$NIWO[,1:7])
##               timeBgn             timeEnd data.co2Stor.rateRtioMoleDryCo2.0.1 m
## 1 2018-06-01 00:00:00 2018-06-01 00:29:59                         -2.681938e-04
## 2 2018-06-01 00:30:00 2018-06-01 00:59:59                          1.912094e-05
## 3 2018-06-01 01:00:00 2018-06-01 01:29:59                          3.709964e-04
## 4 2018-06-01 01:30:00 2018-06-01 01:59:59                          2.374054e-03
## 5 2018-06-01 02:00:00 2018-06-01 02:29:59                          3.397046e-03
## 6 2018-06-01 02:30:00 2018-06-01 02:59:59                          3.156952e-03
##   data.co2Stor.rateRtioMoleDryCo2.0.2 m data.co2Stor.rateRtioMoleDryCo2.0.3 m
## 1                         -2.681938e-04                         -2.681938e-04
## 2                          1.912094e-05                          1.912094e-05
## 3                          3.709964e-04                          3.709964e-04
## 4                          2.374054e-03                          2.374054e-03
## 5                          3.397046e-03                          3.397046e-03
## 6                          3.156952e-03                          3.156952e-03
##   data.co2Stor.rateRtioMoleDryCo2.0.4 m data.co2Stor.rateRtioMoleDryCo2.0.5 m
## 1                         -2.681938e-04                         -2.681938e-04
## 2                          1.912094e-05                          1.912094e-05
## 3                          3.709964e-04                          3.709964e-04
## 4                          2.374054e-03                          2.374054e-03
## 5                          3.397046e-03                          3.397046e-03
## 6                          3.156952e-03                          3.156952e-03

Python

Note (Feb 17, 2026): There is a bug in the Python implementation of the Level 3 code. This section may not run correctly; we will update it as soon as possible.

prof = nu.stack_eddy(filepath=os.getcwd() + "/filesToStack00200",
                 level="dp03")

As with the Level 4 data, the result is a named list with data tables for each site.

prof['NIWO'].head()

6. Un-interpolated vertical profile data (Level 2)

The Level 2 data are interpolated in time but not in space. They contain the rates of change at each of the measurement heights.

R

Again, they can be extracted from the HDF5 files using stackEddy() with the same syntax:

prof.l2 <- stackEddy(filepath=paste0(getwd(), "/filesToStack00200"),
                 level="dp02")
head(prof.l2$HARV)
##   horizontalPosition verticalPosition             timeBgn             timeEnd
## 1                000              010 2018-06-01 00:00:00 2018-06-01 00:29:59
## 2                000              010 2018-06-01 00:30:00 2018-06-01 00:59:59
## 3                000              010 2018-06-01 01:00:00 2018-06-01 01:29:59
## 4                000              010 2018-06-01 01:30:00 2018-06-01 01:59:59
## 5                000              010 2018-06-01 02:00:00 2018-06-01 02:29:59
## 6                000              010 2018-06-01 02:30:00 2018-06-01 02:59:59
##   data.co2Stor.rateRtioMoleDryCo2.mean data.h2oStor.rateRtioMoleDryH2o.mean
## 1                                  NaN                                  NaN
## 2                          0.002666576                                  NaN
## 3                         -0.011224223                                  NaN
## 4                          0.006133056                                  NaN
## 5                         -0.019554655                                  NaN
## 6                         -0.007855632                                  NaN
##   data.tempStor.rateTemp.mean qfqm.co2Stor.rateRtioMoleDryCo2.qfFinl
## 1                2.583333e-05                                      1
## 2               -2.008056e-04                                      1
## 3               -1.901111e-04                                      1
## 4               -7.419444e-05                                      1
## 5               -1.537083e-04                                      1
## 6               -1.874861e-04                                      1
##   qfqm.h2oStor.rateRtioMoleDryH2o.qfFinl qfqm.tempStor.rateTemp.qfFinl
## 1                                      1                             0
## 2                                      1                             0
## 3                                      1                             0
## 4                                      1                             0
## 5                                      1                             0
## 6                                      1                             0

Python

Again, they can be extracted from the HDF5 files using stack_eddy() with the same syntax:

profl2 = nu.stack_eddy(filepath=os.getcwd() + "/filesToStack00200",
                 level="dp02")
profl2['HARV'].head()
##   horizontalPosition  ... qfqm.tempStor.rateTemp.qfFinl
## 0                000  ...                             0
## 1                000  ...                             0
## 2                000  ...                             0
## 3                000  ...                             0
## 4                000  ...                             0
## 
## [5 rows x 10 columns]

Note that here, as in the PAR data, there is a verticalPosition field, indicating the tower level of the measurement.
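
For example, to look at the rates of change at a single measurement height, you might subset by verticalPosition; a minimal sketch in Python, assuming the values are stored as strings, as in the output above:

profl2['HARV'][profl2['HARV']['verticalPosition']=='010'].head()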

7. Calibrated raw data (Level 1)

Level 1 (dp01) data are calibrated, and aggregated in time, but otherwise untransformed. Use Level 1 data for raw gas concentrations and atmospheric stable isotopes.

Extract Level 1 data

R

Using stackEddy() to extract Level 1 data requires additional inputs. The Level 1 files are too large to simply pull out all the variables by default, and they include multiple averaging intervals, which can’t be merged. So two additional inputs are needed:

  • avg: The averaging interval to extract
  • var: One or more variables to extract

What variables are available, at what averaging intervals? Another function in the neonUtilities package, getVarsEddy(), returns a list of HDF5 file contents. It requires only one input, a filepath to a single NEON HDF5 file:

vars <- getVarsEddy(paste0(getwd(),
                           "/filesToStack00200/NEON.D01.HARV.DP4.00200.001.nsae.2018-07.basic.20260115T064348Z.h5"))
head(vars)
##    site level category system hor ver tmi subsys       name       otype
## 5  HARV  dp01     data   amrs 000 060 01m   <NA> angNedXaxs H5I_DATASET
## 6  HARV  dp01     data   amrs 000 060 01m   <NA> angNedYaxs H5I_DATASET
## 7  HARV  dp01     data   amrs 000 060 01m   <NA> angNedZaxs H5I_DATASET
## 9  HARV  dp01     data   amrs 000 060 30m   <NA> angNedXaxs H5I_DATASET
## 10 HARV  dp01     data   amrs 000 060 30m   <NA> angNedYaxs H5I_DATASET
## 11 HARV  dp01     data   amrs 000 060 30m   <NA> angNedZaxs H5I_DATASET
##      dclass   dim  oth
## 5  COMPOUND 44640 <NA>
## 6  COMPOUND 44640 <NA>
## 7  COMPOUND 44640 <NA>
## 9  COMPOUND  1488 <NA>
## 10 COMPOUND  1488 <NA>
## 11 COMPOUND  1488 <NA>

Inputs to var can be any values from the name field in the table returned by getVarsEddy(). Let’s take a look at CO2 and H2O, 13C in CO2 and 18O in H2O, at 30-minute aggregation. Let’s look at Harvard Forest for these data, since deeper canopies generally have more interesting profiles:

iso <- stackEddy(filepath=paste0(getwd(), "/filesToStack00200"),
               level="dp01", var=c("rtioMoleDryCo2","rtioMoleDryH2o",
                                   "dlta13CCo2","dlta18OH2o"), avg=30)
head(iso$HARV[,1:8])
##   horizontalPosition verticalPosition             timeBgn             timeEnd
## 1                000              010 2018-06-01 00:00:00 2018-06-01 00:29:59
## 2                000              010 2018-06-01 00:30:00 2018-06-01 00:59:59
## 3                000              010 2018-06-01 01:00:00 2018-06-01 01:29:59
## 4                000              010 2018-06-01 01:30:00 2018-06-01 01:59:59
## 5                000              010 2018-06-01 02:00:00 2018-06-01 02:29:59
## 6                000              010 2018-06-01 02:30:00 2018-06-01 02:59:59
##   data.co2Stor.rtioMoleDryCo2.mean data.co2Stor.rtioMoleDryCo2.min
## 1                         509.3375                        451.4786
## 2                         502.2736                        463.5470
## 3                         521.6139                        442.8649
## 4                         469.6317                        432.6588
## 5                         484.7725                        436.2842
## 6                         476.8554                        443.7055
##   data.co2Stor.rtioMoleDryCo2.max data.co2Stor.rtioMoleDryCo2.vari
## 1                        579.3518                         845.0795
## 2                        533.6622                         161.3652
## 3                        563.0518                         547.9924
## 4                        508.7463                         396.8379
## 5                        537.4641                         662.9449
## 6                        515.6598                         246.6969

Python

Using stack_eddy() to extract Level 1 data requires additional inputs. The Level 1 files are too large to simply pull out all the variables by default, and they include multiple averaging intervals, which can’t be merged. So two additional inputs are needed:

  • avg: The averaging interval to extract
  • var: One or more variables to extract
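
To see what variables are available at what averaging intervals, the R section above used the getVarsEddy() function; assuming the Python package mirrors it with a get_vars_eddy() function taking a single file path (the file name below is the example file from the R section), a sketch would be:

vars = nu.get_vars_eddy(os.getcwd() + "/filesToStack00200/NEON.D01.HARV.DP4.00200.001.nsae.2018-07.basic.20260115T064348Z.h5")
vars.head()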

Inputs to var can be any values from the name field in the table returned by get_vars_eddy(). Let’s take a look at CO2 and H2O, 13C in CO2 and 18O in H2O, at 30-minute aggregation. Let’s look at Harvard Forest for these data, since deeper canopies generally have more interesting profiles:

iso = nu.stack_eddy(filepath=os.getcwd() + "/filesToStack00200",
                 level="dp01", avg=30, 
                    var=["rtioMoleDryCo2","rtioMoleDryH2o",
                          "dlta13CCo2","dlta18OH2o"])
iso['HARV'].head()
##   horizontalPosition  ... ucrt.h2oTurb.rtioMoleDryH2o.se
## 0                000  ...                            NaN
## 1                000  ...                            NaN
## 2                000  ...                            NaN
## 3                000  ...                            NaN
## 4                000  ...                            NaN
## 
## [5 rows x 85 columns]

Plot vertical profiles

Let’s plot vertical profiles of CO2 and 13C in CO2 on a single day.

Here we’ll use the time stamps to select all of the records for a single day, and discard the verticalPosition values that are character strings rather than numbers; those are the calibration gases.

R

iso.d <- iso$HARV[grep("2018-06-25", iso$HARV$timeBgn, fixed=T),]
iso.d <- iso.d[-which(is.na(as.numeric(iso.d$verticalPosition))),]

ggplot is well suited to these types of data. We can plot CO2 relative to height on the tower, with separate lines for each time interval.

g <- ggplot(iso.d, aes(y=verticalPosition)) + 
  geom_path(aes(x=data.co2Stor.rtioMoleDryCo2.mean, 
                group=timeBgn, col=timeBgn)) + 
  theme(legend.position="none") + 
  xlab("CO2") + ylab("Tower level")
g

And the same plot for 13C in CO2:

g <- ggplot(iso.d, aes(y=verticalPosition)) + 
  geom_path(aes(x=data.isoCo2.dlta13CCo2.mean, 
                group=timeBgn, col=timeBgn)) + 
  theme(legend.position="none") + 
  xlab("d13C") + ylab("Tower level")
g

Python


isoH = iso['HARV']
isoH6 = isoH[(isoH['timeBgn'] >= '2018-06-25') & (isoH['timeBgn'] < '2018-06-26')]
isod = isoH6[isoH6['horizontalPosition'] == '000']

Let’s plot CO2 relative to height on the tower, with separate lines for each time interval.

fig, ax = plt.subplots()

groups = isod["timeBgn"].unique()

cmap = plt.cm.viridis
colors = cmap(np.linspace(0, 1, len(groups)))

for color, group_name in zip(colors, groups):
    group = isod[isod["timeBgn"] == group_name]
    ax.plot(group['data.co2Stor.rtioMoleDryCo2.mean'], 
            group['verticalPosition'], color=color)

ax.set_xlabel("CO2") ax.set_ylabel("Tower level")

plt.show()

And the same plot for 13C in CO2:

fig, ax = plt.subplots()

for color, group_name in zip(colors, groups):
    group = isod[isod["timeBgn"] == group_name]
    ax.plot(group["data.isoCo2.dlta13CCo2.mean"], 
            group["verticalPosition"], color=color)

ax.set_xlabel("13C") ax.set_ylabel("Tower level")

plt.show()

The legends are omitted to save space; see if you can use the concentration and isotope ratio buildup and drawdown below the canopy to work out which times of day the different colors represent.

Compare tree height measured from the ground to a Lidar-based Canopy Height Model

An extremely common task for remote sensing researchers is to connect remote sensing data with data collected on the ground. This might be a research question in itself (what reflectance wavelengths are most closely correlated with a particular ground feature?), a ground-truthing exercise for extrapolation and prediction, or the development or testing of a model.

This tutorial explores the relationship between a Lidar-derived canopy height model and tree heights measured from the ground, because these two datasets provide a straightforward introduction to thinking about how to relate different sources of data. They are a good exemplar for the two major components of connecting airborne and ground data:

  1. The mechanics: Finding the remotely sensed pixels that correspond to a given ground measurement
  2. The science: Considering biases in each data source, and the ways the measurements might differ even if neither is incorrect

We will explore these issues between these two datasets, and you can use what you learn here as a roadmap for making similar comparisons between different datasets.

The two NEON data products that estimate tree height are:

  • DP3.30015.001, Ecosystem structure, aka Canopy Height Model (CHM)
  • DP1.10098.001, Vegetation structure

The CHM data are derived from the Lidar point cloud data collected by the remote sensing platform. The vegetation structure data are collected by field staff on the ground.

We will be using data from the Wind River Experimental Forest (WREF) NEON field site located in Washington state. The predominant vegetation there is tall evergreen conifers.

Things You’ll Need To Complete This Tutorial

You will need a recent version of R (4+) or Python (3.9+) installed on your computer to complete this tutorial.

1. Setup

Start by installing and loading packages (if necessary) and setting options. Installation can be run once, then periodically to get package updates.

R

One of the packages we’ll be using, geoNEON, is only available via GitHub, so it’s installed using the devtools package. The other packages can be installed directly from CRAN.

install.packages("neonUtilities")
install.packages("neonOS")
install.packages("terra")
install.packages("devtools")
devtools::install_github("NEONScience/NEON-geolocation/geoNEON")

Now load packages. This needs to be done every time you run code. We’ll also set a working directory for data downloads (adapt the working directory path for your system).

library(terra)
library(neonUtilities)
library(neonOS)
library(geoNEON)

options(stringsAsFactors=F)

wd <- "~/data"

Python

There are a variety of spatial packages available in Python; we’ll use rasterio and rioxarray. We’ll also use several general-purpose modules: os and time are part of the Python standard library, and the others (pandas, numpy, matplotlib, requests) are commonly installed in scientific Python environments.

pip install neonutilities
pip install geopandas
pip install rasterio
pip install rioxarray
pip install rasterstats

Now load packages.


import neonutilities as nu
import pandas as pd
import numpy as np
import rasterstats as rs
import geopandas as gpd
import rioxarray as rxr
import matplotlib.pyplot as plt
import matplotlib.collections
import rasterio
from rasterio import sample
from rasterio.enums import Resampling
import rasterio.warp
import requests
import time
import os

2. Vegetation structure data

In this section we’ll download the vegetation structure data, find the locations of the mapped trees, and join to the species and size data.

R

Download the vegetation structure data using the loadByProduct() function in the neonUtilities package. Inputs to the function are:

  • dpID: data product ID; woody vegetation structure = DP1.10098.001
  • site: (vector of) 4-letter site codes; Wind River = WREF
  • package: basic or expanded; we’ll download basic here
  • release: which data release to download; we’ll use RELEASE-2024
  • check.size: should this function prompt the user with an estimated download size? Set to FALSE here for ease of processing as a script, but good to leave as default TRUE when downloading a dataset for the first time.

Refer to the cheat sheet for the neonUtilities package for more details and the complete index of possible function inputs.

veglist <- loadByProduct(dpID="DP1.10098.001", 
                         site="WREF", 
                         package="basic", 
                         release="RELEASE-2024",
                         check.size = FALSE)

Python

Download the vegetation structure data using the load_by_product() function in the neonutilities package. Inputs to the function are:

  • dpid: data product ID; woody vegetation structure = DP1.10098.001
  • site: (vector of) 4-letter site codes; Wind River = WREF
  • package: basic or expanded; we’ll download basic here
  • release: which data release to download; we’ll use RELEASE-2024
  • check_size: should this function prompt the user with an estimated download size? Set to False here for ease of processing as a script, but good to leave as default True when downloading a dataset for the first time.

Refer to the cheat sheet for the neonUtilities package for more details and the complete index of possible function inputs. The cheat sheet is focused on the R package, but nearly all the inputs are the same.

veglist = nu.load_by_product(dpid="DP1.10098.001", 
                         site="WREF", 
                         package="basic", 
                         release="RELEASE-2024",
                         check_size = False)

Get tree locations

R

Use the getLocTOS() function in the geoNEON package to get precise locations for the tagged plants. Refer to the package documentation for more details.

vegmap <- getLocTOS(veglist$vst_mappingandtagging, 
                          "vst_mappingandtagging")
## Please note locations will be calculated only for mapped woody individuals. To find subplot locations for unmapped individuals, use this function with the vst_apparentindividual, vst_non-woody, and/or vst_shrubgroup tables.

Python

NEON doesn’t currently offer a Python equivalent to the geoNEON R package, so we’ll calculate the tree locations step-by-step. The trees are mapped as distance and azimuth from a reference location. The reference location data are accessible on the NEON API. The steps in this calculation are described in the Data Product User Guide.

First, get the names of the reference locations, and query the NEON API to retrieve their location data:


vegmapall = veglist["vst_mappingandtagging"]
vegmap = vegmapall.loc[vegmapall["pointID"] != ""]
vegmap = vegmap.reindex()
vegmap["points"] = vegmap["namedLocation"] + "." + vegmap["pointID"]
veg_points = list(set(list(vegmap["points"])))

easting = []
northing = []
coord_uncertainty = []
elev_uncertainty = []
drop_points = []


for i in veg_points:
    time.sleep(1)  # brief pause between requests, to avoid overloading the API
    vres = requests.get("https://data.neonscience.org/api/v0/locations/"+i)
    vres_json = vres.json()
    if vres_json["data"] is not None:
        easting.append(vres_json["data"]["locationUtmEasting"])
        northing.append(vres_json["data"]["locationUtmNorthing"])
        props = pd.DataFrame.from_dict(vres_json["data"]["locationProperties"])
        cu = props.loc[props["locationPropertyName"] == "Value for Coordinate uncertainty"]["locationPropertyValue"]
        coord_uncertainty.append(cu[cu.index[0]])
        eu = props.loc[props["locationPropertyName"] == "Value for Elevation uncertainty"]["locationPropertyValue"]
        elev_uncertainty.append(eu[eu.index[0]])
    else:
        drop_points.append(i)

veg_points_clean = [v for v in veg_points if v not in drop_points]

ptdct = dict(points=veg_points_clean, 
             easting=easting, 
             northing=northing,
             coordinateUncertainty=coord_uncertainty,
             elevationUncertainty=elev_uncertainty)
ptfrm = pd.DataFrame.from_dict(ptdct)
ptfrm.set_index("points", inplace=True)

vegmap = vegmap.join(ptfrm, 
                     on="points", 
                     how="inner")

Next, use the distance and azimuth data to calculate the precise locations of individuals, relative to the reference locations.


vegmap["adjEasting"] = (vegmap["easting"]
                        + vegmap["stemDistance"]
                        * np.sin(vegmap["stemAzimuth"]
                                   * np.pi / 180))

vegmap["adjNorthing"] = (vegmap["northing"] + vegmap["stemDistance"] * np.cos(vegmap["stemAzimuth"] * np.pi / 180))

Add to the uncertainty estimate to account for error in measurement from the reference location, as described in the User Guide.


vegmap["adjCoordinateUncertainty"] = vegmap["coordinateUncertainty"] + 0.6
vegmap["adjElevationUncertainty"] = vegmap["elevationUncertainty"] + 1

Combine location with tree traits

Now we have the mapped locations of individuals in the vst_mappingandtagging table, and the annual measurements of tree dimensions such as height and diameter in the vst_apparentindividual table. To bring these measurements together, join the two tables, using the joinTableNEON() function from the neonOS package. Refer to the Quick Start Guide for Vegetation structure for more information about the data tables and the joining instructions joinTableNEON() is using.

R

veg <- joinTableNEON(veglist$vst_apparentindividual, 
                     vegmap, 
                     name1="vst_apparentindividual",
                     name2="vst_mappingandtagging")

Python

As with the geoNEON package, there is not currently a Python equivalent to the neonOS package. Refer to the Quick Start Guide for Vegetation structure to find the data field to use to join the two tables.


veglist["vst_apparentindividual"].set_index("individualID", inplace=True)
veg = vegmap.join(veglist["vst_apparentindividual"],
                  on="individualID",
                  how="inner",
                  lsuffix="_MAT",
                  rsuffix="_AI")

Make a stem map

Let’s see what the data look like! Make a stem map, where each tree is mapped by a circle matching its size. This won’t look informative at the scale of the entire site, so we’ll subset to a single plot, WREF_075.

In addition to looking at only one plot, we’ll also target a single year. We want to match height measurements from the ground to remote sensing flights, so we need to pick a year when WREF was flown. We’ll use 2017. Use the eventID field from vst_apparentindividual to find the 2017 measurements. We use the eventID rather than the date because sampling bouts for vegetation structure are carried out in the winter, to avoid the growing season, and can sometimes extend into the following calendar year.

Note that in both languages the input to the function that draws a circle is a radius, but stemDiameter is just that, a diameter, so we will need to divide by two. And stemDiameter is in centimeters, but the mapping scale is in meters, so we also need to divide by 100 to get the scale right.

R

veg2017 <- veg[which(veg$eventID.y=="vst_WREF_2017"),]

symbols(veg2017$adjEasting[which(veg2017$plotID=="WREF_075")], 
        veg2017$adjNorthing[which(veg2017$plotID=="WREF_075")], 
        circles=veg2017$stemDiameter[which(veg2017$plotID=="WREF_075")]/100/2, 
        inches=F, xlab="Easting", ylab="Northing")

And now overlay the estimated uncertainty in the location of each stem, in blue:

symbols(veg2017$adjEasting[which(veg2017$plotID=="WREF_075")], 
        veg2017$adjNorthing[which(veg2017$plotID=="WREF_075")], 
        circles=veg2017$stemDiameter[which(veg2017$plotID=="WREF_075")]/100/2, 
        inches=F, xlab="Easting", ylab="Northing")
symbols(veg2017$adjEasting[which(veg2017$plotID=="WREF_075")], 
        veg2017$adjNorthing[which(veg2017$plotID=="WREF_075")], 
        circles=veg2017$adjCoordinateUncertainty[which(veg2017$plotID=="WREF_075")], 
        inches=F, add=T, fg="lightblue")

Python

veg2017 = veg.loc[veg["eventID_AI"]=="vst_WREF_2017"]
veg75 = veg2017.loc[veg2017["plotID_AI"]=="WREF_075"]

fig, ax = plt.subplots()

xy = np.array(tuple(zip(veg75.adjEasting, veg75.adjNorthing)))
srad = veg75.stemDiameter/100/2
patches = [plt.Circle(center, size) for center, size in zip(xy, srad)]

coll = matplotlib.collections.PatchCollection(patches, facecolors="white", 
                                              edgecolors="black")
ax.add_collection(coll)

ax.margins(0.1)
plt.show()

And now overlay the estimated uncertainty in the location of each stem, in blue:

fig, ax = plt.subplots()

sunc = veg75.adjCoordinateUncertainty
patchunc = [plt.Circle(center, size) for center, size in zip(xy, sunc)]

coll = matplotlib.collections.PatchCollection(patches, facecolors="None", 
                                              edgecolors="black")
collunc = matplotlib.collections.PatchCollection(patchunc, facecolors="None", 
                                                 edgecolors="lightblue")
ax.add_collection(coll)
ax.add_collection(collunc)

ax.margins(0.1)
plt.show()

3. Canopy height model data

Now we’ll download the CHM tile covering plot WREF_075. Several other plots are also covered by this tile. We could download all tiles that contain vegetation structure plots, but in this exercise we’re sticking to one tile to limit download size and processing time.

The byTileAOP() function in the neonUtilities package allows for download of remote sensing tiles based on easting and northing coordinates, so we’ll give it the coordinates of all the trees in plot WREF_075 and the data product ID, DP3.30015.001 (note that if WREF_075 crossed tile boundaries, this code would download all relevant tiles).

The download will include several metadata files as well as the data tile. Load the data tile into the environment using the terra package in R and the rasterio and rioxarray packages in Python.

R

byTileAOP(dpID="DP3.30015.001", site="WREF", year=2017, 
          easting=veg2017$adjEasting[which(veg2017$plotID=="WREF_075")], 
          northing=veg2017$adjNorthing[which(veg2017$plotID=="WREF_075")],
          check.size=FALSE, savepath=wd)

chm <- rast(paste0(wd, "/DP3.30015.001/neon-aop-products/2017/FullSite/D16/2017_WREF_1/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D16_WREF_DP3_580000_5075000_CHM.tif"))

Let’s view the tile.

plot(chm, col=topo.colors(5))

Python

nu.by_tile_aop(dpid="DP3.30015.001", site="WREF", year="2017", 
          easting=list(veg75.adjEasting), 
          northing=list(veg75.adjNorthing),
          check_size=False, savepath=os.getcwd())

We’ll read in versions of the tile in both rasterio and rioxarray to enable the different data extractions we’ll need to do later in the tutorial.


chm = rasterio.open(os.getcwd() + "/DP3.30015.001/neon-aop-products/2017/FullSite/D16/2017_WREF_1/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D16_WREF_DP3_580000_5075000_CHM.tif")

chmx = rxr.open_rasterio(os.getcwd() + "/DP3.30015.001/neon-aop-products/2017/FullSite/D16/2017_WREF_1/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D16_WREF_DP3_580000_5075000_CHM.tif").squeeze()

Let’s view the tile.

plt.imshow(chm.read(1))
plt.show()

4. Comparing the two datasets

Now we have the heights of individual trees measured from the ground, and the height of the top surface of the canopy, measured from the air. There are many different ways to make a comparison between these two datasets! This section will walk through three different approaches.

Subset the data

First, subset the vegetation structure data to only the trees that fall within this tile. This step isn’t strictly necessary, but it will make the processing faster.

Note that although we downloaded this tile by targeting plot WREF_075, there are other plots in the area covered by this tile - from here forward, we’re working with all measured trees within the tile area.

R

vegsub <- veg2017[which(veg2017$adjEasting >= ext(chm)[1] &
                        veg2017$adjEasting <= ext(chm)[2] &
                        veg2017$adjNorthing >= ext(chm)[3] & 
                        veg2017$adjNorthing <= ext(chm)[4]),]

Python


# note that chm.bounds is ordered (left, bottom, right, top)
vegsub = veg2017.loc[(veg2017["adjEasting"] >= chm.bounds[0]) &
                 (veg2017["adjEasting"] <= chm.bounds[2]) &
                 (veg2017["adjNorthing"] >= chm.bounds[1]) & 
                 (veg2017["adjNorthing"] <= chm.bounds[3])]
vegsub = vegsub.reset_index(drop=True)

Canopy height at mapped tree locations

Starting with a very simple first pass: get the CHM value matching the coordinates of each mapped plant. Then make a scatter plot of each tree’s height vs. the CHM value at its location.

R

The extract() function from the terra package gets the values from the tile at the given coordinates.

valCHM <- extract(chm, 
                  cbind(vegsub$adjEasting,
                  vegsub$adjNorthing))

plot(valCHM$NEON_D16_WREF_DP3_580000_5075000_CHM~
       vegsub$height, pch=20, 
     xlab="Height", ylab="Canopy height model")
lines(c(0,50), c(0,50), col="grey")

How strong is the correlation between the ground and lidar measurements?

cor(valCHM$NEON_D16_WREF_DP3_580000_5075000_CHM, 
    vegsub$height, use="complete")
## [1] 0.3910897

Python

The sample_gen() function from the rasterio.sample module gets the values from the tile at the given coordinates.

valCHM = list(sample.sample_gen(chm, 
                                tuple(zip(vegsub["adjEasting"], 
                                          vegsub["adjNorthing"])),
                                masked=True))

fig, ax = plt.subplots()

ax.plot((0,50), (0,50), linewidth=1, color="black")
ax.scatter(vegsub.height, valCHM, s=1)

ax.set_xlabel("Height") ax.set_ylabel("Canopy height model")

plt.show()

How strong is the correlation between the ground and lidar measurements?

CHMlist = np.array([c.tolist()[0] for c in valCHM])
idx = np.intersect1d(np.where(np.isfinite(vegsub.height)), 
                     np.where(CHMlist != None))
np.corrcoef(vegsub.height[idx], list(CHMlist[idx]))[0,1]
## np.float64(0.3913279146475449)

Canopy height within a buffer of mapped tree locations

Now we remember there is uncertainty in the location of each tree, so the precise pixel it corresponds to might not be the right one. Let’s try adding a buffer to the extraction function, to get the tallest tree within the uncertainty of the location of each tree.

R

valCHMbuff <- extract(chm, 
                  buffer(vect(cbind(vegsub$adjEasting,
                  vegsub$adjNorthing)),
                  width=vegsub$adjCoordinateUncertainty),
                  fun=max)

plot(valCHMbuff$NEON_D16_WREF_DP3_580000_5075000_CHM~
       vegsub$height, pch=20, 
     xlab="Height", ylab="Canopy height model")
lines(c(0,50), c(0,50), col="grey")

cor(valCHMbuff$NEON_D16_WREF_DP3_580000_5075000_CHM, 
    vegsub$height, use="complete")
## [1] 0.3846956

Python

To extract values using a buffer in Python, we need to create a shapefile of the buffered locations, and then extract the maximum value for each area in the shapefile.

vegloc = vegsub[["individualID","adjEasting","adjNorthing","adjCoordinateUncertainty"]]
v = vegloc.rename(columns={"individualID": "indID", "adjEasting": "easting",
               "adjNorthing": "northing", "adjCoordinateUncertainty": "coordUnc"},
               inplace=False)

gdf = gpd.GeoDataFrame(v, geometry=gpd.points_from_xy(v.easting, v.northing))
gdf["geometry"] = gdf["geometry"].buffer(distance=gdf["coordUnc"])
gdf.to_file(os.getcwd() + "/trees_with_buffer.shp")

## /Users/clunch/.virtualenvs/r-reticulate/lib/python3.13/site-packages/pyogrio/geopandas.py:917: UserWarning: 'crs' was not provided.  The output dataset will not have projection information defined and may not be usable in other systems.
##   write(

chm_height = rs.zonal_stats(os.getcwd() + "/trees_with_buffer.shp", chmx.values, 
                            affine=chmx.rio.transform(), 
                            nodata=-9999, stats="max")

valCHMbuff = [h["max"] for h in chm_height]

And plot the results:

fig, ax = plt.subplots()

ax.plot((0,50), (0,50), linewidth=1, color="black")
ax.scatter(vegsub.height, valCHMbuff, s=1)

ax.set_xlabel("Height") ax.set_ylabel("Canopy height model")

plt.show()

CHMbufflist = np.array(valCHMbuff)
idx = np.intersect1d(np.where(np.isfinite(vegsub.height)), 
                     np.where(CHMbufflist != None))
np.corrcoef(vegsub.height[idx], list(CHMbufflist[idx]))[0,1]
## np.float64(0.3848610594340662)

Adding the buffer has actually made our correlation slightly weaker. Let’s think about the data.

There are a lot of points clustered on the 1-1 line, but there is also a cloud of points above the line, where the measured height is lower than the canopy height model at the same coordinates. This makes sense, because the tree height data include the understory. There are many plants measured in the vegetation structure data that are not at the top of the canopy, and the CHM sees only the top surface of the canopy.

This also explains why the buffer didn’t improve things. Finding the highest CHM value within the uncertainty of a tree should improve the fit for the tallest trees, but it’s likely to make the fit worse for the understory trees.
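
One way to see this in the numbers is to check how often the CHM value sits well above the measured height. A rough sketch in Python, reusing the point-extraction values from above (the 5 m threshold is arbitrary):

# fraction of measured trees more than 5 m below the CHM surface at their
# location; these are likely understory plants
idxp = np.intersect1d(np.where(np.isfinite(vegsub.height)), 
                      np.where(CHMlist != None))
diff = CHMlist[idxp].astype(float) - np.array(vegsub.height[idxp], dtype=float)
print(np.mean(diff > 5))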

How to exclude understory plants from this analysis? Again, there are many possible approaches. We’ll try out two, one map-centric and one tree-centric.

Compare maximum height within 10 meter pixels

Starting with the map-centric approach: select a pixel size, and aggregate both the vegetation structure data and the CHM data to find the tallest point in each pixel. Let’s try this with 10m pixels.

Start by rounding the coordinates of the vegetation structure data, to create 10m bins. Use floor() instead of round() so each tree ends up in the pixel with the same numbering as the raster pixels (the rasters/pixels are numbered by their southwest corners).

R

easting10 <- 10*floor(vegsub$adjEasting/10)
northing10 <- 10*floor(vegsub$adjNorthing/10)
vegsub <- cbind(vegsub, easting10, northing10)

Use the aggregate() function to get the tallest tree in each 10m bin.

vegbin <- stats::aggregate(vegsub, 
                           by=list(vegsub$easting10, 
                                   vegsub$northing10), 
                           FUN=max)

To get the CHM values for the 10m bins, use the terra package version of the aggregate() function. Let’s take a look at the lower-resolution image we get as a result.

CHM10 <- terra::aggregate(chm, fact=10, fun=max)
plot(CHM10, col=topo.colors(5))

Use the extract() function again to get the values from each pixel. Our grids are numbered by the corners, so add 5 to each tree coordinate to make sure it’s in the correct pixel.

vegbin$easting10 <- vegbin$easting10 + 5
vegbin$northing10 <- vegbin$northing10 + 5
binCHM <- extract(CHM10, cbind(vegbin$easting10, 
                               vegbin$northing10))
plot(binCHM$NEON_D16_WREF_DP3_580000_5075000_CHM~
       vegbin$height, pch=20, 
     xlab="Height", ylab="Canopy height model")
lines(c(0,50), c(0,50), col="grey")

cor(binCHM$NEON_D16_WREF_DP3_580000_5075000_CHM, 
    vegbin$height, use="complete")
## [1] 0.3593519

Python


vegsub["easting10"] = 10*np.floor(vegsub.adjEasting/10)
vegsub["northing10"] = 10*np.floor(vegsub.adjNorthing/10)
vegsubloc = vegsub[["height","easting10","northing10"]]

Use the groupby() function to get the tallest tree in each 10m bin.


vegbin = vegsubloc.groupby(["easting10", "northing10"]).max().add_suffix('_max').reset_index()

To get the CHM values for the 10m bins, use the reproject() function from the rasterio.warp module.


target_res = (10, 10)

with rasterio.open(os.getcwd() + "/DP3.30015.001/neon-aop-products/2017/FullSite/D16/2017_WREF_1/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D16_WREF_DP3_580000_5075000_CHM.tif") as src:
    data, transform = rasterio.warp.reproject(source=src.read(),
                                              src_transform=src.transform,
                                              src_crs=src.crs,
                                              dst_crs=src.crs,
                                              dst_nodata=src.nodata,
                                              dst_resolution=target_res,
                                              resampling=Resampling.max)
    profile = src.profile
    profile.update(transform=transform, driver='GTiff',
                   height=data.shape[1], width=data.shape[2])

with rasterio.open(os.getcwd() + '/CHM_10m.tif', 'w', **profile) as dst:
    dst.write(data)

chm10 = rasterio.open(os.getcwd() + '/CHM_10m.tif')

Let’s take a look at the lower-resolution image we get as a result.

plt.imshow(chm10.read(1))
plt.show()

Use the sample_gen() function again to get the values from the pixel corresponding to each maximum tree height estimate. Our grids are numbered by the corners, so add 5 to each tree coordinate to make sure it’s in the correct pixel.

valCHM10 = list(sample.sample_gen(chm10, tuple(zip(vegbin["easting10"]+5,
                                                   vegbin["northing10"]+5)),
                                                   masked=True))

fig, ax = plt.subplots()

ax.plot((0,50), (0,50), linewidth=1, color="black")
ax.scatter(vegbin.height_max, valCHM10, s=1)

ax.set_xlabel("Height") ax.set_ylabel("Canopy height model")

plt.show()

CHM10list = np.array([c.tolist()[0] for c in valCHM10])
idx = np.intersect1d(np.where(np.isfinite(vegbin.height_max)), 
                     np.where(CHM10list != None))
np.corrcoef(vegbin.height_max[idx], list(CHM10list[idx]))[0,1]
## np.float64(0.35904092842414137)

The understory points are thinned out substantially, but so are the rest, and many of the shorter points are gone entirely. By moving to a lower resolution, we’ve lost a lot of data overall.

Let’s see if we can identify the tallest trees by another approach, using the trees as the starting point instead of the map area.

Find the top-of-canopy trees and compare to model

Start by sorting the veg structure data by height.

R

vegsub <- vegsub[order(vegsub$height, 
                       decreasing=T),]

Now, for each tree, let’s estimate which nearby trees might be beneath its canopy, and discard those points. To do this:

  1. Calculate the distance of each tree from the target tree.
  2. Pick a reasonable estimate for canopy size, and discard shorter trees within that radius. The radius I used is 0.3 times the height, based on some rudimentary googling about Douglas fir allometry. It could definitely be improved on!
  3. Iterate over all trees.

We’ll use a simple for loop to do this:

vegfil <- vegsub
for(i in 1:nrow(vegsub)) {
    if(is.na(vegfil$height[i]))
        next
    dist <- sqrt((vegsub$adjEasting[i]-vegsub$adjEasting)^2 + 
                (vegsub$adjNorthing[i]-vegsub$adjNorthing)^2)
    vegfil$height[which(dist<0.3*vegsub$height[i] & 
                        vegsub$height<vegsub$height[i])] <- NA
}

vegfil <- vegfil[which(!is.na(vegfil$height)),]

Now extract the raster values, as above.

filterCHM <- extract(chm, 
                     cbind(vegfil$adjEasting, 
                           vegfil$adjNorthing))
plot(filterCHM$NEON_D16_WREF_DP3_580000_5075000_CHM~
       vegfil$height, pch=20, 
     xlab="Height", ylab="Canopy height model")
lines(c(0,50), c(0,50), col="grey")

cor(filterCHM$NEON_D16_WREF_DP3_580000_5075000_CHM,
    vegfil$height)
## [1] 0.8170048

Python


vegfil = vegsub.sort_values(by="height", ascending=False, ignore_index=True)

Now, for each tree, let’s estimate which nearby trees might be beneath its canopy, and discard those points. To do this:

  1. Calculate the distance of each tree from the target tree.
  2. Pick a reasonable estimate for canopy size, and discard shorter trees within that radius. The radius I used is 0.3 times the height, based on some rudimentary googling about Douglas fir allometry. It could definitely be improved on!
  3. Iterate over all trees.

We’ll use a simple for loop to do this:


height = vegfil.height.reset_index()
for i in vegfil.index:
    if pd.isna(height.height[i]):  # skip trees already flagged as understory
        continue
    dist = np.sqrt(np.square(vegfil.adjEasting[i]-vegfil.adjEasting) + 
                   np.square(vegfil.adjNorthing[i]-vegfil.adjNorthing))
    idx = vegfil.index[(vegfil.height<height.height[i]) & (dist<0.3*height.height[i])]
    height.loc[idx, "height"] = None

Now extract the raster values, as above.

filterCHM = list(sample.sample_gen(chm, tuple(zip(vegfil["adjEasting"],
                                                  vegfil["adjNorthing"])),
                                                  masked=True))

fig, ax = plt.subplots()

ax.plot((0,50), (0,50), linewidth=1, color="black")
ax.scatter(height.height, filterCHM, s=1)

ax.set_xlabel("Height") ax.set_ylabel("Canopy height model")

plt.show()

filCHMlist = np.array([c.tolist()[0] for c in filterCHM])
idx = np.intersect1d(np.where(np.isfinite(height.height)), 
                     np.where(filCHMlist != None))
np.corrcoef(height.height[idx], list(filCHMlist[idx]))[0,1]
## np.float64(0.8393670222898226)

This is quite a bit better! There are still several understory points we failed to exclude, but we were able to filter out most of the understory without losing so many overstory points.

Remove dead trees

Let’s try one more thing. The plantStatus field in the veg structure data indicates whether a plant is dead, broken, or otherwise damaged. In theory, a dead or broken tree can still be the tallest thing around, but it’s less likely, and it’s also less likely to get a good Lidar return. Exclude all trees that aren’t alive:

R

vegfil <- vegfil[which(vegfil$plantStatus=="Live"),]

filterCHM <- extract(chm, cbind(vegfil$adjEasting, vegfil$adjNorthing))

plot(filterCHM$NEON_D16_WREF_DP3_580000_5075000_CHM~ vegfil$height, pch=20, xlab="Height", ylab="Canopy height model")

lines(c(0,50), c(0,50), col="grey")

cor(filterCHM$NEON_D16_WREF_DP3_580000_5075000_CHM,
    vegfil$height)
## [1] 0.9113093

Python

idx = vegfil.index[vegfil.plantStatus!="Live"]
height.loc[idx, "height"] = None

fig, ax = plt.subplots()

ax.plot((0,50), (0,50), linewidth=1, color="black")
ax.scatter(height.height, filterCHM, s=1)

ax.set_xlabel("Height") ax.set_ylabel("Canopy height model")

plt.show()

idx = np.intersect1d(np.where(np.isfinite(height.height)), 
                     np.where(filCHMlist != None))
np.corrcoef(height.height[idx], list(filCHMlist[idx]))[0,1]
## np.float64(0.9113648148226839)

Nice!

Final thoughts on intercomparison

This tutorial has explored different ways of relating remotely sensed to ground-based data. Although some of the options we tried resulted in stronger correlations than others, the approach you choose will probably depend most on the research questions you are trying to answer. The goal of this tutorial has been to help you think through the possibilities, and identify some of the pitfalls and biases.

Speaking of biases: however we slice the data, there is a noticeable bias even in the strongly correlated values. The CHM heights are generally a bit shorter than the ground-based estimates of tree height. There are two biases in the CHM data that contribute to this. (1) Lidar returns from short-stature vegetation are difficult to distinguish from returns from the ground itself, so the “ground” estimated by Lidar is often a bit higher than the true ground surface, and (2) the height estimate from Lidar represents the highest return, but the highest return may slightly miss the actual tallest point on a given tree. This is especially likely to happen with conifers, which are the top-of-canopy trees at Wind River.
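
You can put a rough number on this bias using the final filtered comparison from the Python section above; a quick sketch:

# mean difference between the CHM values and the ground-measured heights
# for the final filtered set of live, top-of-canopy trees
chmvals = filCHMlist[idx].astype(float)
htvals = np.array(height.height[idx], dtype=float)
print(np.mean(chmvals - htvals))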

Finally, as you explore other types of both remote sensing and ground data, keep in mind that the two datasets we examined here, tree height and canopy height model, are an unusual pair in that both are measuring the same quantity in the same units. Attempting to relate remote sensing and ground data can be much more complicated in other scenarios, such as the relationships between leaf chemistry and reflectance indices.

Interacting with the PhenoCam Server using phenocamapi R Package

The phenocamapi R package is designed to simplify interacting with the PhenoCam network dataset and to perform data wrangling steps on PhenoCam sites' data and metadata.

This tutorial will show you the basic commands for accessing PhenoCam data through the PhenoCam API. The phenocamapi R package is developed and maintained by the PhenoCam team. The most recent release is available on GitHub (PhenocamAPI). Additional vignettes describe how to merge external time series (e.g. flux data) with PhenoCam time series.

We begin with several useful skills and tools for extracting PhenoCam data directly from the server:

  • Exploring the PhenoCam metadata
  • Filtering the dataset by site attributes
  • Downloading PhenoCam time-series data
  • Extracting the list of midday images
  • Downloading midday images for a given time range

Exploring PhenoCam metadata

Each PhenoCam site has specific metadata including but not limited to how a site is set up and where it is located, what vegetation type is visible from the camera, and its meteorological regime. Each PhenoCam may have zero to several Regions of Interest (ROIs) per vegetation type. The phenocamapi package is an interface to interact with the PhenoCam server to extract those data and process them in an R environment.

To explore the PhenoCam data, we'll use several packages for this tutorial.

library(data.table)  #provides data.tables, a data frame format for visualizing data in row-column tables

library(phenocamapi) #provides time series and phenocam data from the Phenology Network; loads required packages rjson, bitops and RCurl

library(lubridate)   #tools for working with dates and time series

library(jpeg)        #tools for reading and displaying JPEG images

We can obtain an up-to-date data frame of the metadata of the entire PhenoCam network using the get_phenos() function. The returned value is a data.table, which simplifies further data exploration.

#Obtain phenocam metadata from the Phenology Network in form of a data.table

phenos <- get_phenos()



#Explore metadata table

head(phenos$site) #preview first six rows of the table. These are the first six phenocam sites in the Phenology Network

#> [1] "aafcottawacfiaf14e" "aafcottawacfiaf14n" "aafcottawacfiaf14w" "acadia"            
#> [5] "admixpasture"       "adrycpasture"



colnames(phenos)  #view all column names. 

#>  [1] "site"                      "lat"                       "lon"                      
#>  [4] "elev"                      "active"                    "utc_offset"               
#>  [7] "date_first"                "date_last"                 "infrared"                 
#> [10] "contact1"                  "contact2"                  "site_description"         
#> [13] "site_type"                 "group"                     "camera_description"       
#> [16] "camera_orientation"        "flux_data"                 "flux_networks"            
#> [19] "flux_sitenames"            "dominant_species"          "primary_veg_type"         
#> [22] "secondary_veg_type"        "site_meteorology"          "MAT_site"                 
#> [25] "MAP_site"                  "MAT_daymet"                "MAP_daymet"               
#> [28] "MAT_worldclim"             "MAP_worldclim"             "koeppen_geiger"           
#> [31] "ecoregion"                 "landcover_igbp"            "dataset_version1"         
#> [34] "site_acknowledgements"     "modified"                  "flux_networks_name"       
#> [37] "flux_networks_url"         "flux_networks_description"

#This is all the metadata we have for the phenocams in the Phenology Network

Now we have a better idea of the types of metadata that are available for the Phenocams.

Remove null values

We may want to explore some of the patterns in the metadata before we jump into specific locations. Let's look at Mean Annual Precipitation (MAP) and Mean Annual Temperature (MAT) across the different field sites, and classify those by the primary vegetation type ('primary_veg_type') for each site. The table below lists the vegetation type abbreviations.

| Abbreviation | Description |
|--------------|-------------|
| AG | agriculture |
| DB | deciduous broadleaf |
| DN | deciduous needleleaf |
| EB | evergreen broadleaf |
| EN | evergreen needleleaf |
| GR | grassland |
| MX | mixed vegetation (generally EN/DN, DB/EN, or DB/EB) |
| SH | shrubs |
| TN | tundra (includes sedges, lichens, mosses, etc.) |
| WT | wetland |
| NV | non-vegetated |
| RF | reference panel |
| XX | unspecified |

To do this, we first want to remove the sites with no data, and then plot the remaining data.

# Some sites do not have data on Mean Annual Precipitation (MAP) and Mean Annual Temperature (MAT).



# removing the sites with unknown MAT and MAP values

phenos <- phenos[!((MAT_worldclim == -9999)|(MAP_worldclim == -9999))]



# Making a plot showing all sites by their vegetation type (represented as different symbols and colors) plotting across meteorology (MAT and MAP) space. Refer to table to identify vegetation type acronyms.

phenos[primary_veg_type=='DB', plot(MAT_worldclim, MAP_worldclim, pch = 19, col = 'green', xlim = c(-5, 27), ylim = c(0, 4000))]

#> NULL

phenos[primary_veg_type=='DN', points(MAT_worldclim, MAP_worldclim, pch = 1, col = 'darkgreen')]

#> NULL

phenos[primary_veg_type=='EN', points(MAT_worldclim, MAP_worldclim, pch = 17, col = 'brown')]

#> NULL

phenos[primary_veg_type=='EB', points(MAT_worldclim, MAP_worldclim, pch = 25, col = 'orange')]

#> NULL

phenos[primary_veg_type=='AG', points(MAT_worldclim, MAP_worldclim, pch = 12, col = 'yellow')]

#> NULL

phenos[primary_veg_type=='SH', points(MAT_worldclim, MAP_worldclim, pch = 23, col = 'red')]

#> NULL



legend('topleft', legend = c('DB','DN', 'EN','EB','AG', 'SH'), 
       pch = c(19, 1, 17, 25, 12, 23), 
       col =  c('green', 'darkgreen', 'brown',  'orange',  'yellow',  'red' ))

Filtering using attributes

Alternatively, we may want to only include Phenocams with certain attributes in our datasets. For example, we may be interested only in sites with a co-located flux tower. For this, we'd want to filter for those with a flux tower using the flux_sitenames attribute in the metadata.

# Create a data table only including the sites that have flux_data available and where the FLUX site name is specified

phenofluxsites <- phenos[flux_data==TRUE&!is.na(flux_sitenames)&flux_sitenames!='', 
                         .(PhenoCam=site, Flux=flux_sitenames)] # return as table



#Specify to retain variables of Phenocam site and their flux tower name

phenofluxsites <- phenofluxsites[Flux!='']



# view the first few rows of the data table

head(phenofluxsites)

#>               PhenoCam                               Flux
#>                 <char>                             <char>
#> 1:        admixpasture                             NZ-ADw
#> 2: alercecosteroforest                             CL-ACF
#> 3:      alligatorriver                             US-NC4
#> 4:            amtsvenn                                 No
#> 5:    arkansaswhitaker                             US-RGW
#> 6:         arsbrooks10 US-Br1: Brooks Field Site 10- Ames

We could further identify which of those PhenoCams have a flux tower and are in deciduous broadleaf forests (primary_veg_type=='DB').

#list deciduous broadleaf sites with a flux tower

DB.flux <- phenos[flux_data==TRUE&primary_veg_type=='DB', 
                  site]  # return just the site names as a list



# see the first few rows

head(DB.flux)

#> [1] "alligatorriver" "bartlett"       "bartlettir"     "bbc1"           "bbc2"          
#> [6] "bbc3"

PhenoCam time series

PhenoCam time series are data extracted from regions of interest (ROIs) for a given site.

Obtain ROIs

To download the phenological time series from the PhenoCam, we need to know the site name, vegetation type and ROI ID. This information can be obtained from each specific PhenoCam page on the PhenoCam website or by using the get_rois() function.

# Obtaining the list of all the available regions of interest (ROIs) on the PhenoCam server and producing a data table

rois <- get_rois()



# view the data variables in the data table

colnames(rois)

#>  [1] "roi_name"          "site"              "lat"               "lon"              
#>  [5] "roitype"           "active"            "show_link"         "show_data_link"   
#>  [9] "sequence_number"   "description"       "first_date"        "last_date"        
#> [13] "site_years"        "missing_data_pct"  "roi_page"          "roi_stats_file"   
#> [17] "one_day_summary"   "three_day_summary" "data_release"



# view the first few regions of interest (ROI) locations

head(rois$roi_name)

#> [1] "aafcottawacfiaf14n_AG_1000"  "admixpasture_AG_1000"        "adrycpasture_AG_1000"       
#> [4] "alercecosteroforest_EN_1000" "alligatorriver_DB_1000"      "almondifapa_AG_1000"

Download time series

The get_pheno_ts() function can download a time series and return the result as a data.table. To work with the Duke Forest Hardwood Stand (dukehw) PhenoCam, and specifically its ROI DB_1000, we can run the following code.

# list ROIs for dukehw

rois[site=='dukehw',]

#>          roi_name   site      lat       lon roitype active show_link show_data_link
#>            <char> <char>    <num>     <num>  <char> <lgcl>    <lgcl>         <lgcl>
#> 1: dukehw_DB_1000 dukehw 35.97358 -79.10037      DB   TRUE      TRUE           TRUE
#>    sequence_number                                   description first_date  last_date site_years
#>              <num>                                        <char>     <char>     <char>     <char>
#> 1:            1000 canopy level DB forest at awesome Duke forest 2013-06-01 2024-12-30       10.7
#>    missing_data_pct                                            roi_page
#>              <char>                                              <char>
#> 1:              8.0 https://phenocam.nau.edu/webcam/roi/dukehw/DB_1000/
#>                                                                  roi_stats_file
#>                                                                          <char>
#> 1: https://phenocam.nau.edu/data/archive/dukehw/ROI/dukehw_DB_1000_roistats.csv
#>                                                             one_day_summary
#>                                                                      <char>
#> 1: https://phenocam.nau.edu/data/archive/dukehw/ROI/dukehw_DB_1000_1day.csv
#>                                                           three_day_summary data_release
#>                                                                      <char>       <lgcl>
#> 1: https://phenocam.nau.edu/data/archive/dukehw/ROI/dukehw_DB_1000_3day.csv           NA



# Obtain the deciduous broadleaf, ROI ID 1000 data from the dukehw phenocam

dukehw_DB_1000 <- get_pheno_ts(site = 'dukehw', vegType = 'DB', roiID = 1000, type = '3day')



# Produces a list of the dukehw data variables

str(dukehw_DB_1000)

#> Classes 'data.table' and 'data.frame':	1414 obs. of  35 variables:
#>  $ date                : chr  "2013-06-01" "2013-06-04" "2013-06-07" "2013-06-10" ...
#>  $ year                : int  2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
#>  $ doy                 : int  152 155 158 161 164 167 170 173 176 179 ...
#>  $ image_count         : int  57 76 77 77 77 78 21 0 0 0 ...
#>  $ midday_filename     : chr  "dukehw_2013_06_01_120111.jpg" "dukehw_2013_06_04_120119.jpg" "dukehw_2013_06_07_120112.jpg" "dukehw_2013_06_10_120108.jpg" ...
#>  $ midday_r            : num  91.3 76.4 60.6 76.5 88.9 ...
#>  $ midday_g            : num  97.9 85 73.2 82.2 95.7 ...
#>  $ midday_b            : num  47.4 33.6 35.6 37.1 51.4 ...
#>  $ midday_gcc          : num  0.414 0.436 0.432 0.42 0.406 ...
#>  $ midday_rcc          : num  0.386 0.392 0.358 0.391 0.377 ...
#>  $ r_mean              : num  87.6 79.9 72.7 80.9 83.8 ...
#>  $ r_std               : num  5.9 6 9.5 8.23 5.89 ...
#>  $ g_mean              : num  92.1 86.9 84 88 89.7 ...
#>  $ g_std               : num  6.34 5.26 7.71 7.77 6.47 ...
#>  $ b_mean              : num  46.1 38 39.6 43.1 46.7 ...
#>  $ b_std               : num  4.48 3.42 5.29 4.73 4.01 ...
#>  $ gcc_mean            : num  0.408 0.425 0.429 0.415 0.407 ...
#>  $ gcc_std             : num  0.00859 0.0089 0.01318 0.01243 0.01072 ...
#>  $ gcc_50              : num  0.408 0.427 0.431 0.416 0.407 ...
#>  $ gcc_75              : num  0.414 0.431 0.435 0.424 0.415 ...
#>  $ gcc_90              : num  0.417 0.434 0.44 0.428 0.421 ...
#>  $ rcc_mean            : num  0.388 0.39 0.37 0.381 0.38 ...
#>  $ rcc_std             : num  0.01176 0.01032 0.01326 0.00881 0.00995 ...
#>  $ rcc_50              : num  0.387 0.391 0.373 0.383 0.382 ...
#>  $ rcc_75              : num  0.391 0.396 0.378 0.388 0.385 ...
#>  $ rcc_90              : num  0.397 0.399 0.382 0.391 0.389 ...
#>  $ max_solar_elev      : num  76 76.3 76.6 76.8 76.9 ...
#>  $ snow_flag           : logi  NA NA NA NA NA NA ...
#>  $ outlierflag_gcc_mean: logi  NA NA NA NA NA NA ...
#>  $ outlierflag_gcc_50  : logi  NA NA NA NA NA NA ...
#>  $ outlierflag_gcc_75  : logi  NA NA NA NA NA NA ...
#>  $ outlierflag_gcc_90  : logi  NA NA NA NA NA NA ...
#>  $ YEAR                : int  2013 2013 2013 2013 2013 2013 2013 2013 2013 2013 ...
#>  $ DOY                 : int  152 155 158 161 164 167 170 173 176 179 ...
#>  $ YYYYMMDD            : chr  "2013-06-01" "2013-06-04" "2013-06-07" "2013-06-10" ...
#>  - attr(*, ".internal.selfref")=<externalptr>

We now have a variety of data related to this ROI from the Hardwood Stand at Duke Forest.

Green Chromatic Coordinate (GCC) is a measure of "greenness" of an area and is widely used with PhenoCam images as an indicator of the green pigment in vegetation. Let's use this measure to look at changes in GCC over time at this site. Looking back at the available data, we have several options for GCC. gcc90 is the 90th quantile of GCC across the pixels in the ROI (for more details, see the PhenoCam v1 description). We'll use this as it tracks the upper greenness values while not including many outliers.

Before we can plot gcc-90 we need to fix our dates, converting them from character strings to Date objects so they plot correctly.

# Convert date variable into date format

dukehw_DB_1000[,date:=as.Date(date)]



# plot gcc_90

dukehw_DB_1000[,plot(date, gcc_90, col = 'green', type = 'b')]

#> NULL

mtext('Duke Forest, Hardwood', font = 2)

Download midday images

While PhenoCam sites may have many images in a given day, many simple analyses can use just the midday image, when the sun is closest to directly overhead the canopy. Therefore, extracting a list of midday images (only one image per day) can be useful.

# obtaining midday_images for dukehw

duke_middays <- get_midday_list('dukehw')



# see the first few rows

head(duke_middays)

#> [1] "http://phenocam.nau.edu/data/archive/dukehw/2013/05/dukehw_2013_05_31_150331.jpg"
#> [2] "http://phenocam.nau.edu/data/archive/dukehw/2013/06/dukehw_2013_06_01_120111.jpg"
#> [3] "http://phenocam.nau.edu/data/archive/dukehw/2013/06/dukehw_2013_06_02_120109.jpg"
#> [4] "http://phenocam.nau.edu/data/archive/dukehw/2013/06/dukehw_2013_06_03_120110.jpg"
#> [5] "http://phenocam.nau.edu/data/archive/dukehw/2013/06/dukehw_2013_06_04_120119.jpg"
#> [6] "http://phenocam.nau.edu/data/archive/dukehw/2013/06/dukehw_2013_06_05_120110.jpg"

Now we have a list of all the midday images from this PhenoCam. Let's download one and plot it.

# download a file

destfile <- tempfile(fileext = '.jpg')



# download only the first available file

# modify the `[1]` to download other images

download.file(duke_middays[1], destfile = destfile, mode = 'wb')



# plot the image

img <- try(readJPEG(destfile))

if(!inherits(img, 'try-error')){
  par(mar= c(0,0,0,0))
  plot(0:1,0:1, type='n', axes= FALSE, xlab= '', ylab = '')
  rasterImage(img, 0, 0, 1, 1)
}

Download midday images for a given time range

Now we can access all the midday images and download them one at a time. However, we frequently want all the images within a specific time range of interest. We'll learn how to do that next.

# open a temporary directory

tmp_dir <- tempdir()



# download a subset. Example dukehw 2017

download_midday_images(site = 'dukehw', # which site
                       y = 2017, # which year(s)
                       months = 1:12, # which month(s)
                       days = 15, # which days on month(s)
                       download_dir = tmp_dir) # where on your computer



# list of downloaded files

duke_middays_path <- dir(tmp_dir, pattern = '^dukehw', full.names = TRUE)



head(duke_middays_path)

We can demonstrate the seasonality of Duke forest observed from the camera. (Note this code may take a while to run through the loop).

n <- length(duke_middays_path)

par(mar= c(0,0,0,0), mfrow=c(4,3), oma=c(0,0,3,0))



for(i in 1:n){
  img <- readJPEG(duke_middays_path[i])
  plot(0:1,0:1, type='n', axes= FALSE, xlab= '', ylab = '')
  rasterImage(img, 0, 0, 1, 1)
  mtext(month.name[i], line = -2)
}

mtext('Seasonal variation of forest at Duke Hardwood Forest', font = 2, outer = TRUE)

The goal of this section was to show how to download a limited number of midday images from the PhenoCam server. More extensive datasets should be downloaded directly from the PhenoCam data archive.


The most recent release of the phenocamapi R package is available on GitHub: https://github.com/PhenoCamNetwork/phenocamapi.

Extracting Timeseries from Images using the xROI R Package

In this tutorial, we'll learn how to use an interactive open-source toolkit, the xROI R package, that facilitates the process of time series extraction and improves the quality of the final data. The xROI package provides a responsive environment for scientists to interactively:

a) delineate regions of interest (ROIs), b) handle field of view (FOV) shifts, and c) extract and export time series data characterizing color-based metrics.

Using the xROI R package, the user can detect FOV shifts with minimal difficulty. The software gives the user the opportunity to re-adjust mask files or draw new ones whenever an FOV shift occurs.

xROI Design

The R language and the Shiny package were used as the main development tools for xROI, while Markdown, HTML, CSS, and JavaScript were used to improve the interactivity. While Shiny apps are primarily used for web-based applications, the package authors used Shiny for its graphical user interface capabilities: both the User Interface (UI) and server modules run locally on the same machine, so no internet connection is required (after installation). xROI's UI presents a side panel for data entry and three main tab pages, each responsible for a specific task. The server-side element consists of R and bash scripts. Image processing and geospatial operations are performed using the Geospatial Data Abstraction Library (GDAL) and the rgdal and raster R packages.

Install xROI

The latest release of xROI can be directly downloaded and installed from the development GitHub repository.

# install devtools first
utils::install.packages('devtools', repos = "http://cran.us.r-project.org" )

# use devtools to install from GitHub
devtools::install_github("bnasr/xROI")

xROI depends on many R packages including: raster, rgdal, sp, jpeg, tiff, shiny, shinyjs, shinyBS, shinyAce, shinyTime, shinyFiles, shinydashboard, shinythemes, colourpicker, rjson, stringr, data.table, lubridate, plotly, moments, and RCurl. All the required libraries and packages will be automatically installed with installation of xROI. The package offers a fully interactive high-level interface as well as a set of low-level functions for ROI processing.

Launch xROI

A comprehensive user manual for low-level image processing with xROI is available from the xROI GitHub repository. While the user manual includes a set of examples for each function, here we will learn to use the graphical interactive mode.

Calling the Launch() function, as we'll do below, opens up the interactive mode in your operating system’s default web browser. The landing page offers an example dataset to explore different modules or upload a new dataset of images.

The interactive mode can be launched from an interactive R environment:

# load xROI
library(xROI)

# launch xROI 
Launch()

Or from the command line (e.g. bash on Linux, Terminal on macOS, or Command Prompt on Windows) where an R engine is already installed:

Rscript -e "xROI::Launch(Interactive = TRUE)"

End xROI

When you are done with the xROI interface, close the tab in your browser and end the session in R using one of the following options:

In RStudio: Press the <Esc> key on your keyboard. In the R terminal: Press <Ctrl + C> on your keyboard.

Use xROI

To get some hands-on experience with xROI, we can analyze images from the dukehw site of the PhenoCam network.

You can download the data set from this link (direct download).

Follow the steps below:

First, save and extract (unzip) the file on your computer.

Second, open the data set in xROI by setting the file path to your data:

# launch data in ROI
# first edit the path below to the downloaded directory you just extracted
xROI::Launch('/path/to/extracted/directory')

# alternatively, you can run without specifying a path and use the interface to browse 

Now, draw an ROI and fill in the metadata.

Then, save the metadata and explore its content.

Now we can explore whether there is any FOV shift in the dataset using the CLI processor tab.

Finally, go to the Time series extraction tab, extract the time series, save the output, and explore the dataset in R.

Challenge: Use xROI

Let's use xROI on a more challenging site with field of view shifts.

Download and extract the data set from this link (direct download, 218 MB) and follow the above steps to extract the time-series.


The xROI R package is developed and maintained by Bijan Seyednarollah. The most recent release is available from https://github.com/bnasr/xROI.

Detecting Foggy Images using the hazer Package

In this tutorial, you will learn how to

  1. perform basic image processing and
  2. estimate image haziness as an indication of fog, cloud cover, or other natural or artificial factors, using the hazer R package.

Read & Plot Image

We will use several packages in this tutorial. All are available from CRAN.

# load packages
library(hazer)
library(jpeg)
library(data.table)

Before we start the image processing steps, let's read in and plot an image. This image is an example image that comes with the hazer package.

# read the path to the example image
jpeg_file <- system.file(package = 'hazer', 'pointreyes.jpg')

# read the image as an array
rgb_array <- jpeg::readJPEG(jpeg_file)

# plot the RGB array on the active device panel


# first set the margin in this order:(bottom, left, top, right)
par(mar=c(0,0,3,0))  
plotRGBArray(rgb_array, bty = 'n', main = 'Point Reyes National Seashore')

When we work with images, the data are generally at the scale of the individual pixel. Therefore, for large images we will be working with large matrices that hold a value for every pixel. Keep this in mind before opening some of the matrices we'll be creating in this tutorial, as they can take a while to load.

Histogram of RGB channels

A histogram of the colors can be useful for understanding what our image is made up of. Using the density() function from the base stats package, we can extract the density distribution of each color channel.

# color channels can be extracted from the matrix
red_vector <- rgb_array[,,1]
green_vector <- rgb_array[,,2]
blue_vector <- rgb_array[,,3]

# plotting 
par(mar=c(5,4,4,2)) 
plot(density(red_vector), col = 'red', lwd = 2, 
		 main = 'Density function of the RGB channels', ylim = c(0,5))
lines(density(green_vector), col = 'green', lwd = 2)
lines(density(blue_vector), col = 'blue', lwd = 2)

In hazer we can also extract three basic elements of an RGB image:

  1. Brightness
  2. Darkness
  3. Contrast

Brightness

The brightness matrix holds, for each pixel, the maximum value across the R, G, and B channels. We can extract and show the brightness matrix using the getBrightness() function.

# extracting the brightness matrix
brightness_mat <- getBrightness(rgb_array)

# unlike the RGB array, which has 3 dimensions, the brightness matrix has only two
# dimensions and can be shown as a grayscale image;
# we can do this using the same plotRGBArray() function
par(mar=c(0,0,3,0))
plotRGBArray(brightness_mat, bty = 'n', main = 'Brightness matrix')

Here the grayscale shows, for each pixel, the maximum brightness across the R, G, and B color channels.

To extract a single brightness value for the image, we can compute summary statistics or simply take the mean of this matrix, depending on our needs.

# the main quantiles
quantile(brightness_mat)

#>         0%        25%        50%        75%       100% 
#> 0.09019608 0.43529412 0.62745098 0.80000000 0.91764706


# create histogram
par(mar=c(5,4,4,2))
hist(brightness_mat)

Why are so many pixels in the high range of brightness? Where do these pixels fall on the RGB image?

Darkness

Darkness is determined by the per-pixel minimum across the R, G, and B color channels. Similarly, we can extract and show the darkness matrix using the getDarkness() function.

# extracting the darkness matrix
darkness_mat <- getDarkness(rgb_array)

# the darkness matrix also has two dimensions and can be shown as a grayscale image
par(mar=c(0,0,3,0))
plotRGBArray(darkness_mat, bty = 'n', main = 'Darkness matrix')

# main quantiles
quantile(darkness_mat)

#>         0%        25%        50%        75%       100% 
#> 0.03529412 0.23137255 0.36470588 0.47843137 0.83529412


# histogram
par(mar=c(5,4,4,2))
hist(darkness_mat)

Contrast

The contrast of an image is the difference between its brightness and darkness. The contrast matrix is calculated as the difference between the brightness and darkness matrices.

The contrast of the image can quickly be extracted using the getContrast() function.

# extracting the contrast matrix
contrast_mat <- getContrast(rgb_array)

# the contrast matrix is also 2D and can be shown as a grayscale image
par(mar=c(0,0,3,0))
plotRGBArray(contrast_mat, bty = 'n', main = 'Contrast matrix')

# main quantiles
quantile(contrast_mat)

#>        0%       25%       50%       75%      100% 
#> 0.0000000 0.1450980 0.2470588 0.3333333 0.4509804


# histogram
par(mar=c(5,4,4,2))
hist(contrast_mat)

Image fogginess & haziness

Haziness of an image can be estimated using the getHazeFactor() function. This function is based on the method described in Mao et al. (2014). The technique was originally developed for "detecting foggy images and estimating the haze degree factor" for a wide range of outdoor conditions.

The function returns a list of two numeric values:

  1. haze as the haze degree and
  2. A0 as the global atmospheric light, as it is explained in the original paper.

The PhenoCam standards classify any image with a haze degree greater than 0.4 as significantly foggy.

# estimating the haze factor for the image
haze_degree <- getHazeFactor(rgb_array)

print(haze_degree)

#> $haze
#> [1] 0.2251633
#> 
#> $A0
#> [1] 0.7105258

Here we have the haze values for our image. Note that the values might be slightly different due to rounding errors on different platforms.

Process sets of images

We can use for loops or the lapply function to extract haze values for a stack of images.

You can download the related datasets from here (direct download).

Download and extract the zip file to be used as input data for the following step.

# to download via R
dir.create('data')

#> Warning in dir.create("data"): 'data' already exists

destfile = 'data/pointreyes.zip'
download.file(destfile = destfile, mode = 'wb', url = 'http://bit.ly/2F8w2Ia')
unzip(destfile, exdir = 'data')  


# set up the input image directory
#pointreyes_dir <- '/path/to/image/directory/'
pointreyes_dir <- 'data/pointreyes/'

# get a list of all .jpg files in the directory
pointreyes_images <- dir(path = pointreyes_dir, 
                         pattern = '\\.jpg$',
                         ignore.case = TRUE, 
                         full.names = TRUE)

Now we can use a for loop to process all of the images to get the haze and A0 values.

(Note, this loop may take a while to process.)

# number of images
n <- length(pointreyes_images)

# create an empty data.table to fill with haze and A0 values
haze_mat <- data.table()

# the process takes a while; a progress bar lets us know it is working
pb <- txtProgressBar(0, n, style = 3)

#>   |                                                                                  |   0%

for(i in 1:n) {
  image_path <- pointreyes_images[i]
  img <- jpeg::readJPEG(image_path)
  haze <- getHazeFactor(img)
  
  haze_mat <- rbind(haze_mat, 
                    data.table(file = image_path, 
                               haze = haze[1], 
                               A0 = haze[2]))
  
  setTxtProgressBar(pb, i)
}

#>   |==================================================================================| 100%

Now we have a data table with haze and A0 values for all our images. Let's compare the five images with the lowest haze values against the five with the highest.

haze_mat[,haze:=unlist(haze)]

low_haze <-  haze_mat[order(haze), file][1:5]   # five least hazy images
high_haze <-  haze_mat[order(-haze), file][1:5] # five haziest images

par(mar= c(0,0,0,0), mfrow=c(5,2), oma=c(0,0,3,0))

for(i in 1:5){
  img <- readJPEG(low_haze[i])
  plot(0:1,0:1, type='n', axes= FALSE, xlab= '', ylab = '')
  rasterImage(img, 0, 0, 1, 1)
  
  img <- readJPEG(high_haze[i])
  plot(0:1,0:1, type='n', axes= FALSE, xlab= '', ylab = '')
  rasterImage(img, 0, 0, 1, 1)
}

mtext('Separating out foggy images of Point Reyes, CA', font = 2, outer = TRUE)

Let's classify those into hazy and non-hazy as per the PhenoCam standard of 0.4.

# classify image as hazy: T/F
haze_mat[haze>0.4,foggy:=TRUE]
haze_mat[haze<=0.4,foggy:=FALSE]

head(haze_mat)

#>                                                 file      haze        A0 foggy
#> 1: data/pointreyes//pointreyes_2017_01_01_120056.jpg 0.2249810 0.6970257 FALSE
#> 2: data/pointreyes//pointreyes_2017_01_06_120210.jpg 0.2339372 0.6826148 FALSE
#> 3: data/pointreyes//pointreyes_2017_01_16_120105.jpg 0.2312940 0.7009978 FALSE
#> 4: data/pointreyes//pointreyes_2017_01_21_120105.jpg 0.4536108 0.6209055  TRUE
#> 5: data/pointreyes//pointreyes_2017_01_26_120106.jpg 0.2297961 0.6813884 FALSE
#> 6: data/pointreyes//pointreyes_2017_01_31_120125.jpg 0.4206842 0.6315728  TRUE

Now we can copy the foggy images to a new folder, keeping them separate from the clear images that we want to analyze.

# directories to copy the foggy and clear images to
foggy_dir <- paste0(pointreyes_dir, 'foggy')
clear_dir <- paste0(pointreyes_dir, 'clear')

# create the directories if they don't already exist
dir.create(foggy_dir,  showWarnings = FALSE)
dir.create(clear_dir,  showWarnings = FALSE)

# copy the files to the new directories
file.copy(haze_mat[foggy==TRUE,file], to = foggy_dir)

#>  [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [15] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [29] FALSE FALSE


file.copy(haze_mat[foggy==FALSE,file], to = clear_dir)

#>  [1] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [15] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
#> [29] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE

Now that we have our images separated, we can get the full list of haze values only for those images that are not classified as "foggy".

# this is an alternative approach instead of a for loop

# loading all the images as a list of arrays
pointreyes_clear_images <- dir(path = clear_dir, 
                         pattern = '\\.jpg$',
                         ignore.case = TRUE, 
                         full.names = TRUE)

img_list <- lapply(pointreyes_clear_images, FUN = jpeg::readJPEG)

# getting the haze value for the list
# patience - this takes a bit of time
haze_list <- t(sapply(img_list, FUN = getHazeFactor))

# view first few entries
head(haze_list)

#>      haze      A0       
#> [1,] 0.224981  0.6970257
#> [2,] 0.2339372 0.6826148
#> [3,] 0.231294  0.7009978
#> [4,] 0.2297961 0.6813884
#> [5,] 0.2152078 0.6949932
#> [6,] 0.345584  0.6789334

We can then use these values for further analyses and data correction.


The hazer R package is developed and maintained by Bijan Seyednarollah. The most recent release is available from https://github.com/bnasr/hazer.

Download and Explore NEON Data

This tutorial covers downloading NEON data, using the Data Portal and either the neonUtilities R package or the neonutilities Python package, as well as basic guidance on exploring and working with the downloaded data, including how to navigate the data documentation. We will explore data of 3 different types, and make a simple figure from each.

NEON data

There are 3 basic categories of NEON data:

  1. Remote sensing (AOP) - Data collected by the airborne observation platform, e.g. LIDAR, surface reflectance
  2. Observational (OS) - Data collected by a human in the field, or in an analytical laboratory, e.g. beetle identification, foliar isotopes
  3. Instrumentation (IS) - Data collected by an automated, streaming sensor, e.g.  net radiation, soil carbon dioxide. This category also includes the surface-atmosphere exchange (SAE) data, which are processed and structured in a unique way, distinct from other instrumentation data (see the introductory eddy flux data tutorial for details).

This lesson covers all three types of data. The download procedures are similar for all types, but data navigation differs significantly by type.

Objectives

After completing this activity, you will be able to:

  • Download NEON data using the neonUtilities package.
  • Understand downloaded data sets and load them into R or Python for analyses.

Things You’ll Need To Complete This Tutorial

You can follow either the R or Python code throughout this tutorial.

  • For R users, we recommend using R version 4+ and RStudio.
  • For Python users, we recommend using Python 3.9+.

Set up: Install Packages

Packages only need to be installed once; you can skip this step after the first time:

R

  • neonUtilities: Basic functions for accessing NEON data
  • neonOS: Functions for common data wrangling needs for NEON observational data.
  • terra: Spatial data package; needed for working with remote sensing data.
install.packages("neonUtilities")
install.packages("neonOS")
install.packages("terra")

Python

  • neonutilities: Basic functions for accessing NEON data
  • rasterio: Spatial data package; needed for working with remote sensing data.
pip install neonutilities
pip install rasterio

Additional Resources

  • GitHub repository for neonUtilities R package
  • GitHub repository for neonutilities Python package
  • neonUtilities cheat sheet. A quick reference guide for users. Focuses on the R package, but applicable to Python as well.

Set up: Load packages

R

library(neonUtilities)
library(neonOS)
library(terra)

Python


import neonutilities as nu
import os
import rasterio
import pandas as pd
import matplotlib.pyplot as plt

Getting started: Download data from the Portal

Go to the NEON Data Portal and download some data! To follow the tutorial exactly, download Photosynthetically active radiation (PAR) (DP1.00024.001) data from September-November 2019 at Wind River Experimental Forest (WREF). The downloaded file should be a zip file named NEON_par.zip.

If you prefer to explore a different data product, you can still follow this tutorial. But it will be easier to understand the steps in the tutorial, particularly the data navigation, if you choose a sensor data product for this section.

Once you’ve downloaded a zip file of data from the portal, switch over to R or Python to proceed with coding.

Stack the downloaded data files: stackByTable()

The stackByTable() (or stack_by_table()) function will unzip and join the files in the downloaded zip file.

R

# Modify the file path to match the path to your zip file
stackByTable("~/Downloads/NEON_par.zip")

Python

# Modify the file path to match the path to your zip file
nu.stack_by_table(os.path.expanduser("~/Downloads/NEON_par.zip"))

In the directory where the zipped file was saved, you should now have an unzipped folder of the same name. When you open this you will see a new folder called stackedFiles, which should contain at least seven files: PARPAR_30min.csv, PARPAR_1min.csv, sensor_positions.csv, variables_00024.csv, readme_00024.txt, issueLog_00024.csv, and citation_00024_RELEASE-202X.txt.

Navigate data downloads: IS

Let’s start with a brief description of each file. This set of files is typical of a NEON IS data product.

  • PARPAR_30min.csv: PAR data at 30-minute averaging intervals
  • PARPAR_1min.csv: PAR data at 1-minute averaging intervals
  • sensor_positions.csv: The physical location of each sensor collecting PAR measurements. There is a PAR sensor at each level of the WREF tower, and this table lets you connect the tower level index to the height of the sensor in meters.
  • variables_00024.csv: Definitions and units for each data field in the PARPAR_#min tables.
  • readme_00024.txt: Basic information about the PAR data product.
  • issueLog_00024.csv: A record of known issues associated with PAR data.
  • citation_00024_RELEASE-202X.txt: The citation to use when you publish a paper using these data, in BibTeX format.

We’ll explore the 30-minute data. To read the file, use the function readTableNEON() or read_table_neon(), which uses the variables file to assign data types to each column of data:

R

par30 <- readTableNEON(
  dataFile="~/Downloads/NEON_par/stackedFiles/PARPAR_30min.csv", 
  varFile="~/Downloads/NEON_par/stackedFiles/variables_00024.csv")
head(par30)

Python


par30 = nu.read_table_neon(
  data_file=os.path.expanduser(
    "~/Downloads/NEON_par/stackedFiles/PARPAR_30min.csv"), 
  var_file=os.path.expanduser(
    "~/Downloads/NEON_par/stackedFiles/variables_00024.csv"))
# Open the par30 table in the table viewer of your choice

The first four columns are added by stackByTable() when it merges files across sites, months, and tower heights. The column publicationDate is the date-time stamp indicating when the data were published, and the release column indicates which NEON data release the data belong to. For more information about NEON data releases, see the Data Product Revisions and Releases page.

Information about each data column can be found in the variables file, where you can see definitions and units for each column of data.

Plot PAR data

Now that we know what we’re looking at, let’s plot PAR from the top tower level. We’ll use the mean PAR from each averaging interval, and we can see from the sensor positions file that the vertical index 080 corresponds to the highest tower level. To explore the sensor positions data in more depth, see the spatial data tutorial.

R

plot(PARMean~endDateTime, 
     data=par30[which(par30$verticalPosition=="080"),],
     type="l")

Python

par30top = par30[par30.verticalPosition=="080"]
fig, ax = plt.subplots()
ax.plot(par30top.endDateTime, par30top.PARMean)
plt.show()

Looks good! The sun comes up and goes down every day, and some days are cloudy.

Plot more PAR data

To see another layer of data, add PAR from a lower tower level to the plot.

R

plot(PARMean~endDateTime, 
     data=par30[which(par30$verticalPosition=="080"),],
     type="l")

lines(PARMean~endDateTime, data=par30[which(par30$verticalPosition=="020"),], col="orange")

Python

par30low = par30[par30.verticalPosition=="020"]
fig, ax = plt.subplots()
ax.plot(par30top.endDateTime, par30top.PARMean)
ax.plot(par30low.endDateTime, par30low.PARMean)
plt.show()

We can see there is a lot of light attenuation through the canopy.

Download files and load directly to R: loadByProduct()

At the start of this tutorial, we downloaded data from the NEON data portal. NEON also provides an API, and the neonUtilities packages provide methods for downloading programmatically.

The steps we carried out above - downloading from the portal, stacking the downloaded files, and reading in to R or Python - can all be carried out in one step by the neonUtilities function loadByProduct().

To get the same PAR data we worked with above, we would run the following code using loadByProduct():

R

parlist <- loadByProduct(dpID="DP1.00024.001", 
                         site="WREF", 
                         startdate="2019-09",
                         enddate="2019-11")

Python

parlist = nu.load_by_product(dpid="DP1.00024.001", 
                site="WREF", 
                startdate="2019-09",
                enddate="2019-11")

Explore loaded data

The object returned by loadByProduct() in R is a named list, and the object returned by load_by_product() in Python is a dictionary. The objects contained in the list or dictionary are the same set of tables we ended with after stacking the data from the portal above. You can see this by checking the names of the tables in parlist:

R

names(parlist)
## [1] "citation_00024_RELEASE-2024" "issueLog_00024"             
## [3] "PARPAR_1min"                 "PARPAR_30min"               
## [5] "readme_00024"                "sensor_positions_00024"     
## [7] "variables_00024"

Python

parlist.keys()
## dict_keys(['PARPAR_1min', 'PARPAR_30min', 'citation_00024_RELEASE-2024', 'issueLog_00024', 'readme_00024', 'sensor_positions_00024', 'variables_00024'])

Now let’s walk through the details of the inputs and options in loadByProduct().

This function downloads data from the NEON API, merges the site-by-month files, and loads the resulting data tables into the programming environment, assigning each data type to the appropriate class. This is a popular choice for NEON data users because it ensures you’re always working with the latest data, and it ends with ready-to-use tables. However, if you use it in a workflow you run repeatedly, keep in mind it will re-download the data every time. See below for suggestions on saving the data locally to save time and compute resources.

loadByProduct() works on most observational (OS) and sensor (IS) data, but not on surface-atmosphere exchange (SAE) data and remote sensing (AOP) data. For functions that download AOP data, see the final section in this tutorial. For functions that work with SAE data, see the NEON eddy flux data tutorial.

The inputs to loadByProduct() control which data to download and how to manage the processing. The list below shows the R syntax; in Python, the inputs are the same but all lowercase (e.g. `dpid` instead of `dpID`) and `.` is replaced by `_`.

  • dpID: the data product ID, e.g. DP1.00002.001
  • site: defaults to “all”, meaning all sites with available data; can be a vector of 4-letter NEON site codes, e.g.  c("HARV","CPER","ABBY") (or ["HARV","CPER","ABBY"] in Python)
  • startdate and enddate: defaults to NA, meaning all dates with available data; or a date in the form YYYY-MM, e.g.  2017-06. Since NEON data are provided in month packages, finer scale querying is not available. Both start and end date are inclusive.
  • package: either basic or expanded data package. Expanded data packages generally include additional information about data quality, such as chemical standards and quality flags. Not every data product has an expanded package; if the expanded package is requested but there isn’t one, the basic package will be downloaded.
  • timeIndex: defaults to “all”, to download all data; or the number of minutes in the averaging interval. Only applicable to IS data.
  • release: Specify a NEON data release to download. Defaults to the most recent release plus provisional data. See the release tutorial for more information.
  • include.provisional: T or F: should Provisional data be included in the download? Defaults to F to return only Released data, which are citable by a DOI and do not change over time. Provisional data are subject to change.
  • check.size: T or F: should the function pause before downloading data and warn you about the size of your download? Defaults to T; if you are using this function within a script or batch process you will want to set it to F.
  • token: Optional NEON API token for faster downloads. See this tutorial for instructions on using a token.
  • progress: Set to F to turn off progress bars.
  • cloud.mode: Can be set to T if you are working in a cloud environment; enables more efficient data transfer from NEON’s cloud storage.

The dpID is the data product identifier of the data you want to download. The DPID can be found on the Explore Data Products page. It will be in the form DP#.#####.###.

Download observational data

To explore observational data, we’ll download aquatic plant chemistry data (DP1.20063.001) from three lake sites: Prairie Lake (PRLA), Suggs Lake (SUGG), and Toolik Lake (TOOK).

R

apchem <- loadByProduct(dpID="DP1.20063.001", 
                  site=c("PRLA","SUGG","TOOK"), 
                  package="expanded",
                  release="RELEASE-2024",
                  check.size=F)

Python

apchem = nu.load_by_product(dpid="DP1.20063.001", 
                  site=["PRLA", "SUGG", "TOOK"], 
                  package="expanded",
                  release="RELEASE-2024",
                  check_size=False)

Navigate data downloads: OS

As we saw above, the object returned by loadByProduct() is a named list of data frames. Let’s check out what’s the same and what’s different from the IS data tables.

R

names(apchem)
##  [1] "apl_biomass"                       "apl_clipHarvest"                  
##  [3] "apl_plantExternalLabDataPerSample" "apl_plantExternalLabQA"           
##  [5] "asi_externalLabPOMSummaryData"     "categoricalCodes_20063"           
##  [7] "citation_20063_RELEASE-2024"       "issueLog_20063"                   
##  [9] "readme_20063"                      "validation_20063"                 
## [11] "variables_20063"

Python

apchem.keys()
## dict_keys(['apl_biomass', 'apl_clipHarvest', 'apl_plantExternalLabDataPerSample', 'apl_plantExternalLabQA', 'asi_externalLabPOMSummaryData', 'categoricalCodes_20063', 'citation_20063_RELEASE-2024', 'issueLog_20063', 'readme_20063', 'validation_20063', 'variables_20063'])

Explore tables

As with the sensor data, we have some data tables and some metadata tables. Most of the metadata files are the same as the sensor data: readme, variables, issueLog, and citation. These files contain the same type of metadata here that they did in the IS data product. Let’s look at the other files:

  • apl_clipHarvest: Data from the clip harvest collection of aquatic plants
  • apl_biomass: Biomass data from the collected plants
  • apl_plantExternalLabDataPerSample: Chemistry data from the collected plants
  • apl_plantExternalLabQA: Quality assurance data from the chemistry analyses
  • asi_externalLabPOMSummaryData: Quality metrics from the chemistry lab
  • validation_20063: For observational data, a major method for ensuring data quality is to control data entry. This file contains information about the data ingest rules applied to each input data field.
  • categoricalCodes_20063: Definitions of each value for categorical data, such as growth form and sample condition

You can work with these tables from the named list object, but many people find it easier to extract each table from the list and work with it as an independent object. To do this, use the list2env() function in R or globals().update() in Python:

R

list2env(apchem, .GlobalEnv)
## <environment: R_GlobalEnv>

Python


globals().update(apchem)

Save data locally

Keep in mind that using loadByProduct() will re-download the data every time you run your code. In some cases this may be desirable, but it can be a waste of time and compute resources. To come back to these data without re-downloading, you’ll want to save the tables locally. The most efficient option is to save the named list in total - this will also preserve the data types.

R

saveRDS(apchem, 
        "~/Downloads/aqu_plant_chem.rds")

Python


# There are a variety of ways to do this in Python; NEON
# doesn't currently have a specific recommendation. If 
# you don't have a data-saving workflow you already use, 
# we suggest you check out the pickle module.

Then you can re-load the object to a programming environment any time.

Other options for saving data locally:

  1. Similar to the workflow we started this tutorial with, but using neonUtilities to download instead of the Portal: Use zipsByProduct() and stackByTable() instead of loadByProduct(). With this option, use the function readTableNEON() to read the files, to get the same column type assignment that loadByProduct() carries out. Details can be found in our neonUtilities tutorial.
  2. Try out the community-developed neonstore package, which is designed for maintaining a local store of the NEON data you use. The neonUtilities function stackFromStore() works with files downloaded by neonstore. See the neonstore tutorial for more information.

Now let’s explore the aquatic plant data. OS data products are simple in that the data are generally tabular, and data volumes are lower than the other NEON data types, but they are complex in that almost all consist of multiple tables containing information collected at different times in different ways. For example, samples collected in the field may be shipped to a laboratory for analysis. Data associated with the field collection will appear in one data table, and the analytical results will appear in another. Complexity in working with OS data usually involves bringing data together from multiple measurements or scales of analysis.

As with the IS data, the variables file can tell you more about the data tables.

OS data products each come with a Data Product User Guide, which can be downloaded with the data, or accessed from the document library on the Data Portal, or the Product Details page for the data product. The User Guide is designed to give a basic introduction to the data product, including a brief summary of the protocol and descriptions of data format and structure.

Explore isotope data

To get started with the aquatic plant chemistry data, let’s take a look at carbon isotope ratios in plants across the three sites we downloaded. The chemical analytes are reported in the apl_plantExternalLabDataPerSample table, and the table is in long format, with one record per sample per analyte, so we’ll subset to only the carbon isotope analyte:

R

boxplot(analyteConcentration~siteID, 
        data=apl_plantExternalLabDataPerSample, 
        subset=analyte=="d13C",
        xlab="Site", ylab="d13C")

Python

apl13C = apl_plantExternalLabDataPerSample[
         apl_plantExternalLabDataPerSample.analyte=="d13C"]
grouped = apl13C.groupby("siteID")["analyteConcentration"]
fig, ax = plt.subplots()
ax.boxplot(x=[group.values for name, group in grouped],
           tick_labels=grouped.groups.keys())
plt.show()

We see plants at Suggs and Toolik are quite low in 13C, with more spread at Toolik than Suggs, and plants at Prairie Lake are relatively enriched. Clearly the next question is what species these data represent. But taxonomic data aren’t present in the apl_plantExternalLabDataPerSample table, they’re in the apl_biomass table. We’ll need to join the two tables to get chemistry by taxon.

Every NEON data product has a Quick Start Guide (QSG), and for OS products it includes a section describing how to join the tables in the data product. Since it’s a pdf file, loadByProduct() doesn’t bring it in, but you can view the Aquatic plant chemistry QSG on the Product Details page. In R, the neonOS package uses the information from the QSGs to provide an automated table-joining function, joinTableNEON().

Explore isotope data by species

R

apct <- joinTableNEON(apl_biomass, 
            apl_plantExternalLabDataPerSample)

Using the merged data, now we can plot carbon isotope ratio for each taxon.

boxplot(analyteConcentration~scientificName, 
        data=apct, subset=analyte=="d13C", 
        xlab=NA, ylab="d13C", 
        las=2, cex.axis=0.7)

Python

There is not yet an equivalent to the neonOS package in Python, so we will code the table join manually, based on the info in the Quick Start Guide:


apct = pd.merge(apl_biomass, 
            apl_plantExternalLabDataPerSample,
            left_on=["siteID", "chemSubsampleID"],
            right_on=["siteID", "sampleID"],
            how="outer")

Using the merged data, now we can plot carbon isotope ratio for each taxon.

apl13Cspp = apct[apct.analyte=="d13C"]
grouped = apl13Cspp.groupby("scientificName")["analyteConcentration"]
fig, ax = plt.subplots()
ax.boxplot(x=[group.values for name, group in grouped],
           tick_labels=grouped.groups.keys())
ax.tick_params(axis='x', labelrotation=90)
plt.show()

And now we can see most of the sampled plants have carbon isotope ratios around -30, with just a few species accounting for most of the more enriched samples.

Download remote sensing data: byFileAOP() and byTileAOP()

Remote sensing data files are very large, so downloading them can take a long time. byFileAOP() and byTileAOP() enable easier programmatic downloads, but be aware it can take a very long time to download large amounts of data.

Input options for the AOP functions are:

  • dpID: the data product ID, e.g. DP1.00002.001
  • site: the 4-letter code of a single site, e.g. HARV
  • year: the 4-digit year to download
  • savepath: the file path you want to download to; defaults to the working directory
  • check.size: T or F: should the function pause before downloading data and warn you about the size of your download? Defaults to T; if you are using this function within a script or batch process you will want to set it to F.
  • easting: byTileAOP() only. Vector of easting UTM coordinates whose corresponding tiles you want to download
  • northing: byTileAOP() only. Vector of northing UTM coordinates whose corresponding tiles you want to download
  • buffer: byTileAOP() only. Size in meters of buffer to include around coordinates when deciding which tiles to download
  • token: Optional NEON API token for faster downloads.
  • chunk_size: Only in Python. Set the chunk size of chunked downloads, can improve efficiency in some cases. Defaults to 1 MB.

Here, we’ll download one tile of Ecosystem structure (Canopy Height Model) (DP3.30015.001) from WREF in 2017.

R

byTileAOP(dpID="DP3.30015.001", site="WREF", 
          year=2017,easting=580000, 
          northing=5075000, 
          savepath="~/Downloads")

Python

nu.by_tile_aop(dpid="DP3.30015.001", site="WREF", 
               year=2017,easting=580000, 
               northing=5075000, 
               savepath=os.path.expanduser(
                 "~/Downloads"))

In the directory indicated in savepath, you should now have a folder named DP3.30015.001 with several nested subfolders, leading to a tif file of a canopy height model tile.

Navigate data downloads: AOP

To work with AOP data, the best bet in R is the terra package. It has functionality for most analyses you might want to do. In Python, we’ll use the rasterio package here; explore NEON remote sensing tutorials for more guidance.

First let’s read in the tile we downloaded:

R

chm <- rast("~/Downloads/DP3.30015.001/neon-aop-products/2017/FullSite/D16/2017_WREF_1/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D16_WREF_DP3_580000_5075000_CHM.tif")

Python


chm = rasterio.open(os.path.expanduser("~/Downloads/DP3.30015.001/neon-aop-products/2017/FullSite/D16/2017_WREF_1/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D16_WREF_DP3_580000_5075000_CHM.tif"))

Plot canopy height model

R

plot(chm, col=topo.colors(6))

Python

plt.imshow(chm.read(1))
plt.show()

Now we can see canopy height across the downloaded tile; the tallest trees are over 60 meters, not surprising in the Pacific Northwest. There is a clearing or clear cut in the lower right quadrant.

Next steps

Now that you’ve learned the basics of downloading and understanding NEON data, where should you go to learn more? There are many more NEON tutorials to explore, including how to align remote sensing and ground-based measurements, a deep dive into the data quality flagging in the sensor data products, and much more. For a recommended suite of tutorials for new users, check out the Getting Started with NEON Data tutorial series.

Unsupervised Spectral Classification in Python: KMeans & PCA

In this tutorial, we will use the Spectral Python (SPy) package to run a KMeans unsupervised classification algorithm and then we will run Principal Component Analysis to reduce data dimensionality.

Objectives

After completing this tutorial, you will be able to:

  • Run kmeans unsupervised classification on AOP hyperspectral data
  • Reduce data dimensionality using Principal Component Analysis (PCA)

Install Python Packages

To run this notebook, the following Python packages need to be installed. You can install required packages from the command line (prior to opening your notebook), e.g. pip install gdal h5py neonutilities scikit-image spectral requests. If you are already in a Jupyter Notebook, run the same command in a Code cell, but start with !pip install.

  • gdal
  • h5py
  • neonutilities
  • scikit-image
  • spectral
  • requests

For visualization (optional)

In order to make use of the interactive graphics capabilities of Spectral Python, such as N-Dimensional Feature Display, you will need the additional packages below. These are not required to complete this lesson.

For more information, refer to Spectral Python Graphics.

  • pip install wxPython
  • pip install PyOpenGL PyOpenGL_accelerate

Download Data

This tutorial uses an AOP Hyperspectral Surface Bidirectional Reflectance tile (1 km x 1 km) from the NEON Smithsonian Environmental Research Center (SERC) site.

The data required for this lesson will be downloaded at the beginning of the tutorial using the Python neonutilities package.

To learn more about the Spectral Python package, read:

  • Spectral Python User Guide.
  • Spectral Python Unsupervised Classification.

KMeans Clustering

KMeans is an iterative clustering algorithm used to classify unsupervised data (e.g. data without a training set) into a specified number of groups. The algorithm begins with an initial set of randomly determined cluster centers. Each pixel in the image is then assigned to the nearest cluster center (using distance in N-space as the distance metric), and each cluster center is then re-computed as the centroid of all pixels assigned to the cluster. This process repeats until a desired stopping criterion is reached (e.g. a maximum number of iterations).

Read more on KMeans clustering from Spectral Python.

To visualize how the algorithm works, it's easier to look at a 2D data set. In the example below, watch how the cluster centers shift with progressive iterations.

KMeans clustering demonstration Source: Sandipan Deyn

Principal Component Analysis (PCA) - Dimensionality Reduction

Many of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).

Read more about PCA with Spectral Python.

Let's get started! First, import the required packages.

import h5py
import matplotlib
import neonutilities as nu
import numpy as np
import os
import requests
import sys  # needed later to add the downloaded module's directory to the path
from spectral import *
from time import time
home_dir = os.path.expanduser("~")
data_dir = os.path.join(home_dir,'data')

For this example, we will download a bidirectional surface reflectance data cube at the SERC site, collected in 2022.

nu.by_tile_aop(dpid='DP3.30006.002',
               site='SERC',
               year='2022',
               easting=368005,
               northing=4306005,
               include_provisional=True,
               #token=your_token_here
               savepath=os.path.join(data_dir)) # save to the home directory under a 'data' subfolder
Provisional data are included. To exclude provisional data, use input parameter include_provisional=False.


Continuing will download 2 files totaling approximately 692.0 MB. Do you want to proceed? (y/n)  y

Let's see what data were downloaded.

# iterating over directory and subdirectory to get desired result
for root, dirs, files in os.walk(data_dir):
    for name in files:
        if name.endswith('.h5'):
            print(os.path.join(root, name))  # printing file name
~\data\DP3.30006.002\neon-aop-provisional-products\2022\FullSite\D02\2022_SERC_6\L3\Spectrometer\Reflectance\NEON_D02_SERC_DP3_368000_4306000_bidirectional_reflectance.h5
# expand the ~ so the path resolves correctly when the file is opened
h5_tile = os.path.expanduser(r'~\data\DP3.30006.002\neon-aop-provisional-products\2022\FullSite\D02\2022_SERC_6\L3\Spectrometer\Reflectance\NEON_D02_SERC_DP3_368000_4306000_bidirectional_reflectance.h5')
# function to download data stored on the internet in a public url to a local file
def download_url(url,download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    file_object = open(os.path.join(download_dir,filename),'wb')
    file_object.write(r.content)
module_url = "https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/AOP/aop_python_modules/neon_aop_hyperspectral.py"
download_url(module_url,'../python_modules')
# os.listdir('../python_modules') #optionally show the contents of this directory to confirm the file downloaded
sys.path.insert(0, '../python_modules')
# import the neon_aop_hyperspectral module; the semicolon suppresses an empty plot from displaying
import neon_aop_hyperspectral as neon_hs;
# read in the reflectance data using the aop_h5refl2array function, this may also take a bit of time
start_time = time()
refl, refl_metadata, wavelengths = neon_hs.aop_h5refl2array(h5_tile,'Reflectance')
print("--- It took %s seconds to read in the data ---" % round((time() - start_time),0))
Reading in  C:\Users\bhass\data\DP3.30006.002\neon-aop-provisional-products\2022\FullSite\D02\2022_SERC_6\L3\Spectrometer\Reflectance\NEON_D02_SERC_DP3_368000_4306000_bidirectional_reflectance.h5
--- It took 27.0 seconds to read in the data ---

The next few cells show how you can look at the contents, values, and dimensions of the refl_metadata, wavelengths, and refl variables, respectively.

refl_metadata
{'shape': (1000, 1000, 426),
 'no_data_value': -9999.0,
 'scale_factor': 10000.0,
 'bad_band_window1': array([1340, 1445]),
 'bad_band_window2': array([1790, 1955]),
 'projection': b'+proj=UTM +zone=18 +ellps=WGS84 +datum=WGS84 +units=m +no_defs',
 'EPSG': 32618,
 'res': {'pixelWidth': 1.0, 'pixelHeight': 1.0},
 'extent': (368000.0, 369000.0, 4306000.0, 4307000.0),
 'ext_dict': {'xMin': 368000.0,
  'xMax': 369000.0,
  'yMin': 4306000.0,
  'yMax': 4307000.0},
 'source': 'C:\\Users\\bhass\\data\\DP3.30006.002\\neon-aop-provisional-products\\2022\\FullSite\\D02\\2022_SERC_6\\L3\\Spectrometer\\Reflectance\\NEON_D02_SERC_DP3_368000_4306000_bidirectional_reflectance.h5'}
print('First and last 5 center wavelengths, in nm:')
print(wavelengths[:5])
print(wavelengths[-5:])
First and last 5 center wavelengths, in nm:
[383.884003 388.891693 393.899506 398.907196 403.915009]
[2492.149414 2497.157227 2502.165039 2507.172607 2512.18042 ]
refl.shape
(1000, 1000, 426)

Next let's define a function to clean and subset the data.

def clean_neon_refl_data(data, metadata, wavelengths, subset_factor=1):
    """Clean h5 reflectance data and metadata
    1. set data ignore value (-9999) to NaN
    2. apply reflectance scale factor (10000)
    3. remove bad bands (water vapor band windows + last 10 bands): 
        Band_Window_1_Nanometers = 1340, 1445
        Band_Window_2_Nanometers = 1790, 1955
    4. if subset_factor, subset by that factor
    """
    
    # use a copy so the original data array doesn't change
    data_clean = data.copy().astype(float)
    
    #set data ignore value (-9999) to NaN:
    if metadata['no_data_value'] in data:
        nodata_ind = np.where(data_clean==metadata['no_data_value'])
        data_clean[nodata_ind]=np.nan 
    
    #apply reflectance scale factor (divide by 10000)
    data_clean = data_clean/metadata['scale_factor']
    
    #remove bad bands 
    #1. define indices corresponding to min/max center wavelength for each bad band window:
    bb1_ind0 = np.max(np.where(np.asarray(wavelengths<float(metadata['bad_band_window1'][0]))))
    bb1_ind1 = np.min(np.where(np.asarray(wavelengths>float(metadata['bad_band_window1'][1]))))

    bb2_ind0 = np.max(np.where(np.asarray(wavelengths<float(metadata['bad_band_window2'][0]))))
    bb2_ind1 = np.min(np.where(np.asarray(wavelengths>float(metadata['bad_band_window2'][1]))))
    bb3_ind0 = len(wavelengths)-15
    
    #define valid band ranges from indices:
    vb1 = list(range(10,bb1_ind0)); 
    vb2 = list(range(bb1_ind1,bb2_ind0))
    vb3 = list(range(bb2_ind1,bb3_ind0))
    # combine them to get a list of the valid bands
    vbs = vb1 + vb2 + vb3
    # subset by subset_factor (if subset_factor = 1 this will return the original valid_bands list)
    valid_bands_subset = vbs[::subset_factor]

    # subset the reflectance data by the valid_bands_subset
    data_clean = data_clean[:,:,valid_bands_subset]

    # subset the wavelengths by the same valid_bands_subset
    wavelengths_clean =[wavelengths[i] for i in valid_bands_subset]
    
    return data_clean, wavelengths_clean
# clean the data - remove the bad bands and subset
start_time = time()
refl_clean, wavelengths_clean = clean_neon_refl_data(refl, refl_metadata, wavelengths, subset_factor=2)
print("--- It took %s seconds to clean and subset the reflectance data ---" % round((time() - start_time),0))
--- It took 12.0 seconds to clean and subset the reflectance data ---
# Look at the dimensions of the data after cleaning:
print('Cleaned Data Dimensions:',refl_clean.shape)
print('Cleaned Wavelengths:',len(wavelengths_clean))
Cleaned Data Dimensions: (1000, 1000, 173)
Cleaned Wavelengths: 173
start_time = time()
# run kmeans with 5 clusters and 50 iterations
(m,c) = kmeans(refl_clean, 5, 50) 
print("--- It took %s minutes to run kmeans on the reflectance data ---" % round((time() - start_time)/60,1))
spectral:INFO: k-means iteration 1 - 373101 pixels reassigned.
k-means iteration 1 - 373101 pixels reassigned.
spectral:INFO: k-means iteration 2 - 135441 pixels reassigned.
k-means iteration 2 - 135441 pixels reassigned.
spectral:INFO: k-means iteration 3 - 54918 pixels reassigned.
k-means iteration 3 - 54918 pixels reassigned.
...
spectral:INFO: k-means iteration 49 - 12934 pixels reassigned.
k-means iteration 49 - 12934 pixels reassigned.
spectral:INFO: k-means iteration 50 - 10783 pixels reassigned.
k-means iteration 50 - 10783 pixels reassigned.
spectral:INFO: kmeans terminated with 5 clusters after 50 iterations.
kmeans terminated with 5 clusters after 50 iterations.


--- It took 3.7 minutes to run kmeans on the reflectance data ---

Note that the algorithm was still reassigning on the order of 10,000 pixels per iteration when the 50-iteration limit was reached. You may wish to increase the number of iterations.

Data Tip: You can interrupt the algorithm with a keyboard interrupt (CTRL-C) if you notice that the number of reassigned pixels drops off. kmeans catches the KeyboardInterrupt exception and returns the clusters generated at the end of the previous iteration. If you are running the algorithm interactively, this feature allows you to set the max number of iterations to an arbitrarily high number and then stop the algorithm when the clusters have converged to an acceptable level. If you happen to set the max number of iterations too small (many pixels are still migrating at the end of the final iteration), you can call kmeans again to resume processing by passing the cluster centers generated by the previous call as the optional start_clusters argument to the function.

Let's try that now:

start_time = time()
# resume kmeans for another 50 iterations, starting from the previous cluster centers
(m, c) = kmeans(refl_clean, 5, 50, start_clusters=c) 
print("--- It took %s minutes to run kmeans on the reflectance data ---" % round((time() - start_time)/60,1))
spectral:INFO: k-means iteration 1 - 787247 pixels reassigned.
k-means iteration 1 - 787247 pixels reassigned.
spectral:INFO: k-means iteration 2 - 7684 pixels reassigned.
k-means iteration 2 - 7684 pixels reassigned.
spectral:INFO: k-means iteration 3 - 6552 pixels reassigned.
k-means iteration 3 - 6552 pixels reassigned.
...
k-means iteration 49 - 11 pixels reassigned.
spectral:INFO: k-means iteration 50 - 13 pixels reassigned.
k-means iteration 50 - 13 pixels reassigned.
spectral:INFO: kmeans terminated with 5 clusters after 50 iterations.
kmeans terminated with 5 clusters after 50 iterations.


--- It took 3.6 minutes to run kmeans on the reflectance data ---

Passing in the cluster centers from the previous run sped up convergence considerably the second time around.

Let's take a look at the new cluster centers c. In this case, these represent spectral signatures of the five clusters (classes) that the data were grouped into. First we can take a look at the shape:

print(c.shape)
(5, 173)

c contains the spectral curves of the 5 clusters, each with 173 bands (the number of bands we kept after removing the water vapor windows, the first 10 and last 15 noisy bands, and subsetting by a factor of 2). We can plot these spectral classes as follows:

import pylab
pylab.figure()
for i in range(c.shape[0]):
    pylab.plot(wavelengths_clean, c[i],'.')
pylab.title('Spectral Classes from K-Means Clustering')
pylab.xlabel('Wavelength (nm)')
pylab.ylabel('Reflectance')
pylab.show()

[Figure: Spectral classes from k-means clustering]

Next, we can look at the classes in map view, as well as a true color image.

view = imshow(refl_clean, bands=(58,34,19),stretch=0.01, classes=m, extent=refl_metadata['extent'])
view.set_display_mode('overlay')
view.class_alpha = 1 # set the class overlay opacity (1 = fully opaque)
view.show_data;

[Figure: K-means classes overlaid on the reflectance tile]

view = imshow(refl_clean, bands=(24,12,4), stretch=0.03, extent=refl_metadata['extent'])
view.show_data;

[Figure: True color image of the SERC reflectance tile]

Challenge Questions: K-Means

  1. What do you think the spectral classes in the figure you just created represent?
  2. Try using a different number of clusters in the kmeans algorithm (e.g., 3 or 10) to see what spectral classes and classifications result.
  3. Try using a different (higher) subset_factor in the clean_neon_refl_data function, such as 3 or 5. Does this factor change the final classes created by the kmeans algorithm? By how much can you subset the data and still achieve similar classification results?

Principal Component Analysis (PCA)

This next section follows the Spectral Python Dimensionality Reduction section closely.

Many of the bands within hyperspectral images are often strongly correlated. The principal components transformation represents a linear transformation of the original image bands to a set of new, uncorrelated features. These new features correspond to the eigenvectors of the image covariance matrix, where the associated eigenvalue represents the variance in the direction of the eigenvector. A very large percentage of the image variance can be captured in a relatively small number of principal components (compared to the original number of bands).
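Before calling the library function below, here is a minimal numpy sketch of what the principal components transform computes, assuming X is the reflectance cube reshaped to a (pixels, bands) float array with NaN pixels removed (the function name pca_transform is ours, introduced just for illustration):

import numpy as np

def pca_transform(X, fraction=0.999):
    # center each band, then eigendecompose the band covariance matrix
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)          # (bands x bands) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigh: covariance is symmetric
    order = np.argsort(eigvals)[::-1]               # sort descending by variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    cumvar = np.cumsum(eigvals) / eigvals.sum()     # cumulative variance fraction
    k = int(np.searchsorted(cumvar, fraction)) + 1  # components needed for `fraction`
    return X_centered @ eigvecs[:, :k]              # (pixels, k) component scores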

pc = principal_components(refl_clean)
pc_view = imshow(pc.cov, extent=refl_metadata['extent'])
xdata = pc.transform(refl_clean)

[Figure: Display of the image covariance matrix]

In the covariance matrix display, lighter values indicate strong positive covariance, darker values indicate strong negative covariance, and grey values indicate covariance near zero.

To reduce dimensionality using principal components, we can sort the eigenvalues in descending order and then retain enough eigenvalues (and corresponding eigenvectors) to capture a desired fraction of the total image variance. We then reduce the dimensionality of the image pixels by projecting them onto the remaining eigenvectors. We will choose to retain a minimum of 99.9% of the total image variance.

pc_999 = pc.reduce(fraction=0.999)

# How many eigenvalues are left?
print('# of eigenvalues:',len(pc_999.eigenvalues))

img_pc = pc_999.transform(refl_clean)
print(img_pc.shape)

v = imshow(img_pc[:,:,:3], stretch_all=True, extent=refl_metadata['extent']);
# of eigenvalues: 9
(1000, 1000, 9)

[Figure: First three principal component bands]

You can see that even though we've only retained a subset of the bands, a lot of the details about the scene are still visible.

If you had training data, you could use a Gaussian maximum likelihood classifier (GMLC) for the reduced principal components to train and classify against the training data.
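As a rough sketch of what that workflow could look like with the Spectral Python classes (the ground-truth array gt below is hypothetical; NEON does not provide one with this tile):

# hypothetical supervised classification on the reduced principal components;
# gt is a (rows, cols) integer array of training labels, 0 = unlabeled
from spectral import create_training_classes, GaussianClassifier

training_classes = create_training_classes(img_pc, gt)  # per-class statistics
gmlc = GaussianClassifier(training_classes)             # Gaussian maximum likelihood classifier
class_map = gmlc.classify_image(img_pc)                 # per-pixel class labels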

Challenge Question: PCA

  1. Run the k-means classification after running PCA and see if you get similar results. Does reducing the data dimensionality affect the classification results, and if so, how?

Calculate NDVI & Extract Spectra Using Masks in Python

In this tutorial, we will calculate the Normalized Difference Vegetation Index (NDVI) from hyperspectral reflectance data using Python functions.

This tutorial uses the Level 3 Spectrometer orthorectified surface directional reflectance - mosaic.

Objectives

After completing this tutorial, you will be able to:

  • Calculate NDVI from hyperspectral data in Python.

  • Calculate the mean spectra of all pixels whose NDVI is greater than or less than a specified threshold value.

Install Python Packages

  • requests
  • pandas
  • gdal
  • h5py

Data

Data and additional scripts required for this lesson are downloaded programmatically as part of the tutorial.

The hyperspectral imagery file used in this lesson was collected over the National Ecological Observatory Network's Smithsonian Environmental Research Center field site in 2021 and processed at NEON headquarters.

The entire dataset can be accessed on the NEON Data Portal.

Calculate NDVI & Extract Spectra with Masks

Background:

The Normalized Difference Vegetation Index (NDVI) is a standard band-ratio calculation frequently used to analyze ecological remote sensing data. NDVI indicates whether the remotely-sensed target contains live green vegetation. When sunlight strikes objects, certain wavelengths of the electromagnetic spectrum are absorbed and other wavelengths are reflected. The pigment chlorophyll in plant leaves strongly absorbs visible light (with wavelengths in the range of 400-700 nm) for use in photosynthesis. The cell structure of the leaves, however, strongly reflects near-infrared light (wavelengths ranging from 700 - 1100 nm). Plants reflect up to 60% more light in the near infrared portion of the spectrum than they do in the green portion of the spectrum. By calculating the ratio of Near Infrared (NIR) to Visible (VIS) bands in hyperspectral data, we can obtain a metric of vegetation density and health.

The formula for NDVI is: $$NDVI = \frac{NIR - VIS}{NIR + VIS}$$

NDVI is calculated from the visible and near-infrared light reflected by vegetation. Healthy vegetation (left) absorbs most of the visible light that hits it, and reflects a large portion of near-infrared light. Unhealthy or sparse vegetation (right) reflects more visible light and less near-infrared light. Source: Figure 1 in Wu et al. 2014. PLOS.
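For example, using illustrative values, a well-vegetated pixel with NIR = 0.50 and VIS = 0.08 gives $NDVI = (0.50 - 0.08)/(0.50 + 0.08) \approx 0.72$, while a sparsely vegetated pixel with NIR = 0.30 and VIS = 0.25 gives $NDVI \approx 0.09$.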

Start by importing the required packages; in the next few cells we will download and load the neon_aop_hyperspectral.py module:

import os, sys
from copy import copy
import requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

This next function is a handy way to download the Python module and data that we will be using for this lesson. This uses the requests package.

# function to download data stored on the internet in a public url to a local file
def download_url(url,download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # use a context manager so the file is closed after writing
    with open(os.path.join(download_dir,filename),'wb') as file_object:
        file_object.write(r.content)

Download the module from its location on GitHub, add the python_modules to the path and import the neon_aop_hyperspectral.py module as neon_hs.

# download the neon_aop_hyperspectral.py module from GitHub
module_url = "https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/AOP/aop_python_modules/neon_aop_hyperspectral.py"
download_url(module_url,'../python_modules')

# add the python_modules to the path and import the python neon download and hyperspectral functions
sys.path.insert(0, '../python_modules')
# import the neon_aop_hyperspectral module, the semicolon suppresses an empty plot from displaying
import neon_aop_hyperspectral as neon_hs;
# define the data_url to point to the cloud storage location of the hyperspectral hdf5 data file
data_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D02/2021_SERC_5/L3/Spectrometer/Reflectance/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5"
# download the h5 data and display how much time it took to download (uncomment 1st and 3rd lines)
# start_time = time.time()
download_url(data_url,'.\data')
# print("--- It took %s seconds to download the data ---" % round((time.time() - start_time),1))

Read in SERC Reflectance Tile

# read the h5 reflectance file (including the full path) to the variable h5_file_name
h5_file_name = data_url.split('/')[-1]
h5_tile = os.path.join(".\data",h5_file_name)
print(f'h5_tile: {h5_tile}')
h5_tile: .\data\NEON_D02_SERC_DP3_368000_4306000_reflectance.h5
# Note you will need to update this filepath for your local machine
serc_refl, serc_refl_md, wavelengths = neon_hs.aop_h5refl2array(h5_tile,'Reflectance')
Reading in  .\data\NEON_D02_SERC_DP3_368000_4306000_reflectance.h5

Extract NIR and VIS bands

Now that we have imported all the required functions, we can calculate NDVI and plot it. We will use band 58 (red) and band 90 (near-infrared); below we print the center wavelengths that these bands correspond to:

print('band 58 center wavelength (nm): ', wavelengths[57])
print('band 90 center wavelength (nm) : ', wavelengths[89])
band 58 center wavelength (nm):  669.3261
band 90 center wavelength (nm) :  829.5743

Calculate & Plot NDVI

Here we see that band 58 represents red visible light, while band 90 is in the NIR portion of the spectrum. Let's extract these two bands from the reflectance array and calculate the ratio using numpy.true_divide, which divides arrays element-wise. Wrapping the division in np.errstate suppresses the warnings that would otherwise be raised where the denominator is zero.

vis = serc_refl[:,:,57]
nir = serc_refl[:,:,89]

# handle a divide by zero by setting the numpy errstate as follows
with np.errstate(divide='ignore', invalid='ignore'):
    ndvi = np.true_divide((nir-vis),(nir+vis))
    ndvi[ndvi == np.inf] = 0
    ndvi = np.nan_to_num(ndvi)
Let's take a look at the min, mean, and max values of NDVI that we calculated:
print(f'NDVI Min: {ndvi.min()}')
print(f'NDVI Mean: {round(ndvi.mean(),2)}')
print(f'NDVI Max: {ndvi.max()}')
NDVI Min: -1.0
NDVI Mean: 0.63
NDVI Max: 1.0

We can use the function plot_aop_refl to plot this, and choose the seismic color palette to highlight the difference between positive and negative NDVI values. Since this is a normalized index, the values should range from -1 to +1.

neon_hs.plot_aop_refl(ndvi,serc_refl_md['extent'],
                      colorlimit = (np.min(ndvi),np.max(ndvi)),
                      title='SERC Subset NDVI \n (VIS = Band 58, NIR = Band 90)',
                      cmap_title='NDVI',
                      colormap='seismic')

[Figure: SERC subset NDVI]

Extract Spectra Using Masks

In the second part of this tutorial, we will learn how to extract the average spectra of pixels whose NDVI exceeds a specified threshold value. There are several ways to do this using numpy, including the masked-array module numpy.ma, numpy.where, and boolean indexing.

To start, let's copy the NDVI calculated above and use boolean indexing to create an array containing only NDVI values > 0.6.

# make a copy of ndvi
ndvi_gtpt6 = ndvi.copy()
#set all pixels with NDVI < 0.6 to nan, keeping only values > 0.6
ndvi_gtpt6[ndvi<0.6] = np.nan  
print('Mean NDVI > 0.6:',round(np.nanmean(ndvi_gtpt6),2))
Mean NDVI > 0.6: 0.87
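For comparison, here is a minimal sketch of the same masking using np.where (the variable name ndvi_gtpt6_w is ours, introduced just for this illustration):

# equivalent masking with np.where: keep NDVI > 0.6, set everything else to NaN
ndvi_gtpt6_w = np.where(ndvi > 0.6, ndvi, np.nan)
print('Mean NDVI > 0.6 (np.where):', round(np.nanmean(ndvi_gtpt6_w), 2))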

Now let's plot the values of NDVI after masking out values < 0.6.

neon_hs.plot_aop_refl(ndvi_gtpt6,
                      serc_refl_md['extent'],
                      colorlimit=(0.6,1),
                      title='SERC Subset NDVI > 0.6 \n (VIS = Band 58, NIR = Band 90)',
                      cmap_title='NDVI',
                      colormap='RdYlGn')

[Figure: SERC subset NDVI masked to values > 0.6]

Calculate the mean spectra, thresholded by NDVI

Below we will demonstrate how to calculate statistics on arrays that have been masked using numpy.ma. In this example, the function calculates the mean spectra of the pixels that remain after masking out values by a specified NDVI threshold.

import numpy.ma as ma
def calculate_mean_masked_spectra(refl_array,ndvi,ndvi_threshold,ineq='>'):
    # build the NDVI mask once; it is the same for every band
    if ineq == '>':
        ndvi_mask = ma.masked_where((ndvi<=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
    elif ineq == '<':
        ndvi_mask = ma.masked_where((ndvi>=ndvi_threshold) | (np.isnan(ndvi)),ndvi)
    else:
        raise ValueError('Invalid inequality. Enter < or >')
    mean_masked_refl = np.zeros(refl_array.shape[2])
    for i in np.arange(refl_array.shape[2]):
        refl_band = refl_array[:,:,i]
        masked_refl = ma.MaskedArray(refl_band,mask=ndvi_mask.mask)
        mean_masked_refl[i] = ma.mean(masked_refl)
    return mean_masked_refl

We can test out this function for various NDVI thresholds. We'll test two together, and you can try out different values on your own. Let's look at the average spectra for healthy vegetation (NDVI > 0.6), and for a lower threshold (NDVI < 0.3).

serc_ndvi_gtpt6 = calculate_mean_masked_spectra(serc_refl,ndvi,0.6)
serc_ndvi_ltpt3 = calculate_mean_masked_spectra(serc_refl,ndvi,0.3,ineq='<') 

Finally, we can create a pandas dataframe to plot the mean spectra.

#Remove water vapor bad band windows & last 10 bands 
w = wavelengths.copy()
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan
w[-10:]=np.nan;  

nan_ind = np.argwhere(np.isnan(w))

serc_ndvi_gtpt6[nan_ind] = np.nan
serc_ndvi_ltpt3[nan_ind] = np.nan

#Create dataframe with masked NDVI mean spectra, scale by the reflectance scale factor
serc_ndvi_df = pd.DataFrame()
serc_ndvi_df['wavelength'] = w
serc_ndvi_df['mean_refl_ndvi_gtpt6'] = serc_ndvi_gtpt6/serc_refl_md['scale_factor']
serc_ndvi_df['mean_refl_ndvi_ltpt3'] = serc_ndvi_ltpt3/serc_refl_md['scale_factor']

Let's take a look at the first 5 values of this new dataframe:

serc_ndvi_df.head()
   wavelength  mean_refl_ndvi_gtpt6  mean_refl_ndvi_ltpt3
0  383.884003              0.055741              0.119835
1  388.891693              0.036432              0.090972
2  393.899506              0.027002              0.076867
3  398.907196              0.022841              0.072207
4  403.915009              0.018748              0.065984

Plot the masked NDVI dataframe to display the mean spectra for NDVI values that exceed 0.6 and that are less than 0.3:

ax = plt.gca();
serc_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_gtpt6',color='green',
                  edgecolor='none',kind='scatter',label='Mean Spectra where NDVI > 0.6',legend=True);
serc_ndvi_df.plot(ax=ax,x='wavelength',y='mean_refl_ndvi_ltpt3',color='red',
                  edgecolor='none',kind='scatter',label='Mean Spectra where NDVI < 0.3',legend=True);
ax.set_title('Mean Spectra of Reflectance Masked by NDVI')
ax.set_xlim([np.nanmin(w),np.nanmax(w)]);
ax.set_xlabel("Wavelength, nm"); ax.set_ylabel("Reflectance")
ax.grid('on'); 

[Figure: Mean spectra of reflectance masked by NDVI]

Classify a Lidar Raster in Python

This tutorial covers how to read a NEON lidar Canopy Height Model (CHM) geotiff file into a Python rasterio object, shows some basic information about the raster data, and ends by classifying the CHM into height bins.

Objectives

After completing this tutorial, you will be able to:

  • Use rasterio to read in a NEON lidar raster geotiff file
  • Plot a raster tile and histogram of the data values
  • Create a classified raster object using thresholds

Install Python Packages

  • gdal
  • rasterio
  • requests

Download Data

For this lesson, we will read in Canopy Height Model data collected at NEON's Lower Teakettle (TEAK) site in California. These data are downloaded in the first part of the tutorial using the Python requests package.

In this tutorial, we will work with the NEON AOP L3 lidar ecosystem structure (Canopy Height Model) data product. For more information about NEON data products and the CHM product DP3.30015.001, see the Ecosystem structure data product page on NEON's Data Portal.

First, let's import the required packages and set our plot display to be in-line:

import os
import copy
import requests
import numpy as np
import rasterio as rio
from rasterio.plot import show, show_hist
import matplotlib.pyplot as plt
%matplotlib inline

Next, let's download a file. For this tutorial, we will use the requests package to download a raster file from the public link where the data is stored. For simplicity, we will show how to download to a data folder in the working directory. You can move the data to a different folder, but be sure to update the path to your data accordingly.

# function to download data stored on the internet in a public url to a local file
def download_url(url,download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # use a context manager so the file is closed after writing
    with open(os.path.join(download_dir,filename),'wb') as file_object:
        file_object.write(r.content)
# public url where the CHM tile is stored
chm_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D17/2021_TEAK_5/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D17_TEAK_DP3_320000_4092000_CHM.tif"

# download the CHM tile
download_url(chm_url,'.\data')

# display the contents in the ./data folder to confirm the download completed
os.listdir('./data')

Open a GeoTIFF with rasterio

Let's look at the TEAK Canopy Height Model (CHM) to start. We can open and read this in Python using the rasterio.open function:

# read the chm file (including the full path) to the variable chm_dataset
chm_name = chm_url.split('/')[-1]
chm_file = os.path.join(".\data",chm_name)
chm_dataset = rio.open(chm_file)

Now we can look at a few properties of this dataset to start to get a feel for the rasterio object:

print('chm_dataset:\n',chm_dataset)
print('\nshape:\n',chm_dataset.shape)
print('\nno data value:\n',chm_dataset.nodata)
print('\nspatial extent:\n',chm_dataset.bounds)
print('\ncoordinate information (crs):\n',chm_dataset.crs)

Plot the Canopy Height Map and Histogram

We can use rasterio's built-in functions show and show_hist to plot and visualize the CHM tile. It is often useful to plot a histogram of the geotiff data in order to get a sense of the range and distribution of values.

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12,5))
show(chm_dataset, ax=ax1);

show_hist(chm_dataset, bins=50, histtype='stepfilled',
          lw=0.0, stacked=False, alpha=0.3, ax=ax2);
ax2.set_xlabel("Canopy Height (meters)");
ax2.get_legend().remove()

plt.show();

On your own, adjust the number of bins and the range of the y-axis to get a better sense of the distribution of the canopy height values. We can see that a large portion of the values are zero; these correspond to bare ground. Let's look at a histogram and plot the data without these zero values. To do this, we'll keep only the pixels with canopy height greater than 2 m. Due to the vertical range resolution of the lidar sensor, data collected with the Optech Gemini sensor can only resolve the ground to within 2 m, so anything below that height is rounded down to zero. Our newer sensors (Riegl Q780 and Optech Galaxy) have a higher resolution, so the ground can be resolved to within ~0.7 m.

chm_data = chm_dataset.read(1)
valid_data = chm_data[chm_data>2]

plt.hist(valid_data.flatten(),bins=30);

From the histogram we can see that the majority of the trees are shorter than 60 m, and that the tallest trees are relatively rare in this tile.

Threshold Based Raster Classification

Next, we will create a classified raster object. To do this, we will use the numpy.where function to create a new raster based on boolean classifications. Let's classify the canopy height into five groups:

  • Class 1: CHM = 0 m
  • Class 2: 0 m < CHM <= 10 m
  • Class 3: 10 m < CHM <= 20 m
  • Class 4: 20 m < CHM <= 30 m
  • Class 5: CHM > 30 m

We can use np.where to find the indices where each classification criterion is met.

chm_reclass = chm_data.copy()
chm_reclass[np.where(chm_data==0)] = 1 # CHM = 0 : Class 1
chm_reclass[np.where((chm_data>0) & (chm_data<=10))] = 2 # 0m < CHM <= 10m - Class 2
chm_reclass[np.where((chm_data>10) & (chm_data<=20))] = 3 # 10m < CHM <= 20m - Class 3
chm_reclass[np.where((chm_data>20) & (chm_data<=30))] = 4 # 20m < CHM <= 30m - Class 4
chm_reclass[np.where(chm_data>30)] = 5 # CHM > 30m - Class 5

When we look at this variable, we can see that it is now populated with values from 1 to 5:

chm_reclass

Lastly we can use matplotlib to display this re-classified CHM. We will define our own colormap to plot these discrete classifications, and create a custom legend to label the classes. First, to include the spatial information in the plot, create a new variable called ext that pulls from the rasterio "bounds" field to create the extent in the expected format for plotting.

ext = [chm_dataset.bounds.left,
       chm_dataset.bounds.right,
       chm_dataset.bounds.bottom,
       chm_dataset.bounds.top]
ext
import matplotlib.colors as colors
plt.figure(); 
cmap_chm = colors.ListedColormap(['lightblue','yellow','orange','green','red'])
plt.imshow(chm_reclass,extent=ext,cmap=cmap_chm)
plt.title('TEAK CHM Classification')
ax=plt.gca(); ax.ticklabel_format(useOffset=False, style='plain') #do not use scientific notation 
rotatexlabels = plt.setp(ax.get_xticklabels(),rotation=90) #rotate x tick labels 90 degrees

# Create a custom legend to label the five canopy height classes:
import matplotlib.patches as mpatches
class1 = mpatches.Patch(color='lightblue', label='0 m')
class2 = mpatches.Patch(color='yellow', label='0-10 m')
class3 = mpatches.Patch(color='orange', label='10-20 m')
class4 = mpatches.Patch(color='green', label='20-30 m')
class5 = mpatches.Patch(color='red', label='>30 m')

ax.legend(handles=[class1,class2,class3,class4,class5],
          handlelength=0.7,bbox_to_anchor=(1.05, 0.4),loc='lower left',borderaxespad=0.);

Challenge: Try Another Classification

Create the following threshold classified outputs (a starter sketch follows the lists below):

An NDVI raster where values are classified into the following categories:

  • Low greenness: NDVI < 0.3
  • Medium greenness: 0.3 < NDVI < 0.6
  • High greenness: NDVI > 0.6

A classified aspect raster where the data is grouped into North and South facing slopes (or all four cardinal directions):

  • North: 0-45 & 315-360 degrees
  • South: 135-225 degrees
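
If you want a starting point, here is one possible sketch. The ndvi array is from the earlier tutorial; the aspect raster is hypothetical (e.g., derived from a NEON slope/aspect product):

# classify NDVI into three greenness categories
ndvi_class = np.full(ndvi.shape, np.nan)
ndvi_class[ndvi < 0.3] = 1                       # low greenness
ndvi_class[(ndvi >= 0.3) & (ndvi < 0.6)] = 2     # medium greenness
ndvi_class[ndvi >= 0.6] = 3                      # high greenness

# classify a hypothetical aspect raster (degrees, 0-360) into N and S slopes
aspect_class = np.full(aspect.shape, np.nan)
aspect_class[(aspect <= 45) | (aspect >= 315)] = 1   # North-facing
aspect_class[(aspect >= 135) & (aspect <= 225)] = 2  # South-facing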
