Plot NEON RGB Camera Imagery in Python

This tutorial introduces NEON RGB camera images (Data Product DP3.30010.001) and uses the Python package rasterio to read in and plot the camera data. In this lesson, we will read in an RGB camera tile collected over the NEON Smithsonian Environmental Research Center (SERC) site and plot the multiband image, as well as the individual bands. This lesson was adapted from the rasterio plotting documentation.

Objectives

After completing this tutorial, you will be able to:

  • Plot a NEON RGB camera geotiff tile in Python using rasterio

Package Requirements

This tutorial was run in Python version 3.9, using the following packages:

  • rasterio
  • matplotlib
  • requests (used to download the data)
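If you don't have these packages installed already, you can typically install them from the command line, for example:

pip install rasterio matplotlib requests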

Download the Data

Download the NEON camera (RGB) imagery tile collected over the Smithsonian Environmental Research Center (SERC) NEON field site in 2021. Move this data to a desired folder on your local workstation. You will need to know the file path to this data.

You don't have to download from the link above; the tutorial will demonstrate how to download the data directly from Python into your working directory. We do recommend organizing your downloads in a way that makes sense for you.

Background

As part of the NEON Airborne Observation Platform (AOP) suite of remote sensing instruments, the digital camera produces high-resolution (<= 10 cm) photographs of the earth's surface. The camera records light energy that has reflected off the ground in the visible portion (red, green, and blue) of the electromagnetic spectrum. The camera images are often used to provide context for the hyperspectral and lidar data, but they can also be used for research purposes in their own right. One such example is the tree-crown mapping work by Weinstein et al.; see the links below for more information!

  • Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks
  • A remote sensing derived data set of 100 million individual tree crowns for the National Ecological Observatory Network
  • DeepForest: A Python package for RGB deep learning tree crown delineation
Figure: Locations of the 37 NEON sites included in the NEON crowns data set and examples of tree predictions shown with RGB imagery for six sites (Weinstein et al. 2021).

Reference: Weinstein, B.G., S. Marconi, S.A. Bohlman, A. Zare, A. Singh, S.J. Graves, and E.P. White (2021). A remote sensing derived data set of 100 million individual tree crowns for the National Ecological Observatory Network. eLife 10:e62922. https://doi.org/10.7554/eLife.62922

In this lesson we will keep it simple and show how to read in and plot a single camera file (a 1 km x 1 km ortho-mosaicked tile), a first step in any research incorporating AOP camera data in Python.

Import required packages

First let's import the packages that we'll be using in this lesson.

import os
import requests
import rasterio as rio
from rasterio.plot import show, show_hist
import matplotlib.pyplot as plt

Next, let's download a camera file. For this tutorial, we will use the requests package to download a raster file from the public link where the data is stored. For simplicity, we will show how to download to a data folder in the working directory. You can move the data to a different folder, but if you do that, be sure to update the path to your data accordingly.

def download_url(url, download_dir):
    """Download a file from a url into download_dir, creating the directory if needed."""
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # writing inside a context manager ensures the file is closed properly
    with open(os.path.join(download_dir, filename), 'wb') as file_object:
        file_object.write(r.content)
# public url where the RGB camera tile is stored
rgb_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D02/2021_SERC_5/L3/Camera/Mosaic/2021_SERC_5_368000_4306000_image.tif"

# download the camera tile to a ./data subfolder in your working directory
download_url(rgb_url, './data')

# display the contents in the ./data folder to confirm the download completed
os.listdir('./data')

Open the Camera RGB data with rasterio

We can open and read this RGB data that we downloaded in Python using the rasterio.open function:

# read the RGB file (including the full path) into the variable rgb_dataset
rgb_name = rgb_url.split('/')[-1]
rgb_file = os.path.join('./data', rgb_name)
rgb_dataset = rio.open(rgb_file)

Let's look at a few properties of this dataset to get a sense of the information stored in the rasterio object:

print('rgb_dataset:\n',rgb_dataset)
print('\nshape:\n',rgb_dataset.shape)
print('\nspatial extent:\n',rgb_dataset.bounds)
print('\ncoordinate information (crs):\n',rgb_dataset.crs)

Unlike most other AOP data products, camera imagery is generated at 10 cm resolution, so each 1 km x 1 km tile contains 10,000 x 10,000 pixels (the 1 m resolution data products have 1,000 x 1,000 pixels per tile, where each pixel represents 1 meter).
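If you want to double-check this, the resolution and dimensions are available directly from the rasterio dataset object:

# pixel size in map units (meters, for NEON tiles) and raster dimensions
print('resolution (x, y):', rgb_dataset.res)    # expect (0.1, 0.1), i.e. 10 cm
print('width x height:', rgb_dataset.width, 'x', rgb_dataset.height)
print('number of bands:', rgb_dataset.count)    # 3 for an RGB image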

Plot the RGB multiband image

We can use rasterio's built-in function show to plot the RGB tile.

show(rgb_dataset);


Plot each band of the RGB image

We can also plot each band (red, green, and blue) individually as follows:

fig, (axr, axg, axb) = plt.subplots(1,3, figsize=(21,7))
show((rgb_dataset, 1), ax=axr, cmap='Reds', title='red channel')
show((rgb_dataset, 2), ax=axg, cmap='Greens', title='green channel')
show((rgb_dataset, 3), ax=axb, cmap='Blues', title='blue channel')
plt.show()

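We imported show_hist alongside show earlier but haven't used it yet; as an optional extra step, it plots the distribution of pixel values in each band (the keyword arguments other than bins and title are passed through to matplotlib's hist):

# plot histograms of the pixel values in each of the three bands
show_hist(rgb_dataset, bins=50, histtype='stepfilled',
          lw=0.0, stacked=False, alpha=0.3,
          title='Histogram of RGB camera tile values')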

That's all for this example! Most of the other AOP raster data products are single-band images, but rasterio is a handy Python package for working with any geotiff files. You can download and visualize the lidar- and spectrometer-derived raster images similarly.

Using neonUtilities in Python

The instructions below will guide you through using the neonUtilities R package in Python, via the rpy2 package. rpy2 creates an R environment you can interact with from Python.

The assumption in this tutorial is that you want to work with NEON data in Python, but you want to use the handy download and merge functions provided by the neonUtilities R package to access and format the data for analysis. If you want to do your analyses in R, use one of the R-based tutorials linked below.

For more information about the neonUtilities package, and instructions for running it in R directly, see the Download and Explore tutorial and/or the neonUtilities tutorial.

Install and set up

Before starting, you will need:

  1. Python 3 installed. It is probably possible to use this workflow in Python 2, but these instructions were developed and tested using 3.7.4.
  2. R installed. You don't need to have ever used it directly. We wrote this tutorial using R 4.1.1, but most other recent versions should also work.
  3. rpy2 installed. Run the line below from the command line; it won't run within a Python script. See the Python documentation for more information on how to install packages. rpy2 often has install problems on Windows; see the "Windows Users" section below if you are running Windows.
  4. You may need to install pip before installing rpy2, if you don't have it installed already.

From the command line, run pip install rpy2

Windows users

The rpy2 package was built for Mac, and doesn't always work smoothly on Windows. If you have trouble with the install, try these steps.

  1. Add C:\Program Files\R\R-3.3.1\bin\x64 (adjusted to your R version) to the Windows Environment Variable “Path”
  2. Install rpy2 manually from https://www.lfd.uci.edu/~gohlke/pythonlibs/#rpy2
    1. Pick the correct version. On the download page, the cp## portion of the file name indicates the Python version; e.g., rpy2-2.9.2-cp36-cp36m-win_amd64.whl is the correct download when 2.9.2 is the latest version of rpy2 and you are running Python 3.6 on 64-bit Windows (amd64).
    2. Save the whl file, navigate to it in Windows, then run pip directly on the file: pip install rpy2-2.9.2-cp36-cp36m-win_amd64.whl
  3. Add an R_HOME Windows environment variable with the path C:\Program Files\R\R-3.4.3 (or whichever version you are running)
  4. Add an R_USER Windows environment variable with the path C:\Users\yourUserName\AppData\Local\Continuum\Anaconda3\Lib\site-packages\rpy2

Additional troubleshooting

If you're still having trouble getting R to communicate with Python, you can try pointing Python directly to your R installation path.

  1. Run R.home() in R.
  2. Run import os in Python.
  3. Run os.environ['R_HOME'] = '/Library/Frameworks/R.framework/Resources' in Python, substituting the file path you found in step 1.
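Put together in a Python session, those steps look like this (the path shown is the macOS example above; substitute the output of R.home() on your machine):

import os
# point Python at the R installation before importing rpy2
os.environ['R_HOME'] = '/Library/Frameworks/R.framework/Resources'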

Load packages

Now open up your Python interface of choice (Jupyter notebook, Spyder, etc) and import rpy2 into your session.

import rpy2
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr

Load the base R functionality, using the rpy2 function importr().

base = importr('base')
utils = importr('utils')
stats = importr('stats')

The basic syntax for running R code via rpy2 is package.function(inputs), where package is the R package in use, function is the name of the function within the R package, and inputs are the inputs to the function. In other words, it's very similar to running code in R as package::function(inputs). For example:

stats.rnorm(6, 0, 1)

FloatVector with 6 elements.

 0.920298  -0.318376   0.906473  -1.004184  -0.267872  -0.470278

Suppress R warnings. This step can be skipped, but if you skip it, messages from R will get passed through that Python will interpret as warnings.

from rpy2.rinterface_lib.callbacks import logger as rpy2_logger
import logging
rpy2_logger.setLevel(logging.ERROR)

Install the neonUtilities R package. Here I've specified the RStudio CRAN mirror as the source, but you can use a different one if you prefer.

You only need to do this step once to use the package, but we update the neonUtilities package every few months, so reinstalling periodically is recommended.

This installation step carries out the same steps in the same places on your hard drive that it would if run in R directly, so if you use R regularly and have already installed neonUtilities on your machine, you can skip this step. And be aware, this also means if you install other packages, or new versions of packages, via rpy2, they'll be updated the next time you use R, too.

The semicolon at the end of the line (here, and in some other function calls below) can be omitted. It suppresses a note indicating the output of the function is null. The output is null because these functions download or modify files on your local drive, but none of the data are read into the Python or R environments.

utils.install_packages('neonUtilities', repos='https://cran.rstudio.com/');
The downloaded binary packages are in
	/var/folders/_k/gbjn452j1h3fk7880d5ppkx1_9xf6m/T//Rtmpl5OpMA/downloaded_packages

Now load the neonUtilities package. This does need to be run every time you use the code; if you're familiar with R, importr() is roughly equivalent to the library() function in R.

neonUtilities = importr('neonUtilities')

Join data files: stackByTable()

The function stackByTable() in neonUtilities merges the monthly, site-level files the NEON Data Portal provides. Start by downloading the dataset you're interested in from the Portal. Here, we'll assume you've downloaded IR Biological Temperature. It will download as a single zip file named NEON_temp-bio.zip. Note the file path it's saved to and proceed.

Run the stackByTable() function to stack the data. It requires only one input, the path to the zip file you downloaded from the NEON Data Portal. Modify the file path in the code below to match the path on your machine.

For additional, optional inputs to stackByTable(), see the R tutorial for neonUtilities.

neonUtilities.stackByTable(filepath='/Users/Shared/NEON_temp-bio.zip');
Stacking operation across a single core.
Stacking table IRBT_1_minute
Stacking table IRBT_30_minute
Merged the most recent publication of sensor position files for each site and saved to /stackedFiles
Copied the most recent publication of variable definition file to /stackedFiles
Finished: Stacked 2 data tables and 3 metadata tables!
Stacking took 2.019079 secs
All unzipped monthly data folders have been removed.

Check the folder containing the original zip file from the Data Portal; you should now have a subfolder containing the unzipped and stacked files called stackedFiles. To import these data to Python, skip ahead to the "Read downloaded and stacked files into Python" section; to learn how to use neonUtilities to download data, proceed to the next section.

Download files to be stacked: zipsByProduct()

The function zipsByProduct() uses the NEON API to programmatically download data files for a given product. The files downloaded by zipsByProduct() can then be fed into stackByTable().

Run the downloader with these inputs: a data product ID (DPID), a set of 4-letter site IDs (or "all" for all sites), a download package (either basic or expanded), the filepath to download the data to, and an indicator (TRUE/FALSE) of whether to check the size of your download before proceeding.

The DPID is the data product identifier, and can be found in the data product box on the NEON Explore Data page. Here we'll download Breeding landbird point counts, DP1.10003.001.

There are two differences relative to running zipsByProduct() in R directly:

  1. check.size becomes check_size, because dots have programmatic meaning in Python
  2. TRUE (or T) becomes 'TRUE' because the values TRUE and FALSE don't have special meaning in Python the way they do in R, so it interprets them as variables if they're unquoted.

check_size='TRUE' does not work correctly in the Python environment. In R, it estimates the size of the download and asks you to confirm before proceeding, and the interactive question and answer don't work correctly outside R. Set check_size='FALSE' to avoid this problem, but be thoughtful about the size of your query since it will proceed to download without checking.

neonUtilities.zipsByProduct(dpID='DP1.10003.001', 
                            site=base.c('HARV','BART'), 
                            savepath='/Users/Shared',
                            package='basic', 
                            check_size='FALSE');
Finding available files
  |======================================================================| 100%

Downloading files totaling approximately 4.217543 MB
Downloading 18 files
  |======================================================================| 100%
18 files successfully downloaded to /Users/Shared/filesToStack10003

The message output by zipsByProduct() indicates the file path where the files have been downloaded.

Now take that file path and pass it to stackByTable().

neonUtilities.stackByTable(filepath='/Users/Shared/filesToStack10003');
Unpacking zip files using 1 cores.
Stacking operation across a single core.
Stacking table brd_countdata
Stacking table brd_perpoint
Copied the most recent publication of validation file to /stackedFiles
Copied the most recent publication of categoricalCodes file to /stackedFiles
Copied the most recent publication of variable definition file to /stackedFiles
Finished: Stacked 2 data tables and 4 metadata tables!
Stacking took 0.4586661 secs
All unzipped monthly data folders have been removed.

Read downloaded and stacked files into Python

We've downloaded biological temperature and bird data, and merged the site-by-month files. Now let's read those data into Python so you can proceed with analyses.

First let's take a look at what's in the output folders.

import os
os.listdir('/Users/Shared/filesToStack10003/stackedFiles/')
['categoricalCodes_10003.csv',
 'issueLog_10003.csv',
 'brd_countdata.csv',
 'brd_perpoint.csv',
 'readme_10003.txt',
 'variables_10003.csv',
 'validation_10003.csv']
os.listdir('/Users/Shared/NEON_temp-bio/stackedFiles/')
['IRBT_1_minute.csv',
 'sensor_positions_00005.csv',
 'issueLog_00005.csv',
 'IRBT_30_minute.csv',
 'variables_00005.csv',
 'readme_00005.txt']

Each data product folder contains a set of data files and metadata files. Here, we'll read in the data files and take a look at the contents; for more details about the contents of NEON data files and how to interpret them, see the Download and Explore tutorial.

There are a variety of modules and methods for reading tabular data into Python; here we'll use the pandas module, but feel free to use your own preferred method.

First, let's read in the two data tables in the bird data: brd_countdata and brd_perpoint.

import pandas
brd_perpoint = pandas.read_csv('/Users/Shared/filesToStack10003/stackedFiles/brd_perpoint.csv')
brd_countdata = pandas.read_csv('/Users/Shared/filesToStack10003/stackedFiles/brd_countdata.csv')

And take a look at the contents of each file. For a description and the units of each column, see the variables_10003 file.

brd_perpoint
uid namedLocation domainID siteID plotID plotType pointID nlcdClass decimalLatitude decimalLongitude ... endRH observedHabitat observedAirTemp kmPerHourObservedWindSpeed laboratoryName samplingProtocolVersion remarks measuredBy publicationDate release
0 32ab1419-b087-47e1-829d-b1a67a223a01 BART_025.birdGrid.brd D01 BART BART_025 distributed C1 evergreenForest 44.060146 -71.315479 ... 56.0 evergreen forest 18.0 1.0 Bird Conservancy of the Rockies NEON.DOC.014041vG NaN JRUEB 20211222T013942Z RELEASE-2022
1 f02e2458-caab-44d8-a21a-b3b210b71006 BART_025.birdGrid.brd D01 BART BART_025 distributed B1 evergreenForest 44.060146 -71.315479 ... 56.0 deciduous forest 19.0 3.0 Bird Conservancy of the Rockies NEON.DOC.014041vG NaN JRUEB 20211222T013942Z RELEASE-2022
2 58ccefb8-7904-4aa6-8447-d6f6590ccdae BART_025.birdGrid.brd D01 BART BART_025 distributed A1 evergreenForest 44.060146 -71.315479 ... 56.0 mixed deciduous/evergreen forest 17.0 0.0 Bird Conservancy of the Rockies NEON.DOC.014041vG NaN JRUEB 20211222T013942Z RELEASE-2022
3 1b14ead4-03fc-4d47-bd00-2f6e31cfe971 BART_025.birdGrid.brd D01 BART BART_025 distributed A2 evergreenForest 44.060146 -71.315479 ... 56.0 deciduous forest 19.0 0.0 Bird Conservancy of the Rockies NEON.DOC.014041vG NaN JRUEB 20211222T013942Z RELEASE-2022
4 3055a0a5-57ae-4e56-9415-eeb7704fab02 BART_025.birdGrid.brd D01 BART BART_025 distributed B2 evergreenForest 44.060146 -71.315479 ... 56.0 deciduous forest 16.0 0.0 Bird Conservancy of the Rockies NEON.DOC.014041vG NaN JRUEB 20211222T013942Z RELEASE-2022
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
1405 56d2f3b3-3ee5-41b9-ae22-e78a814d83e4 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed A2 evergreenForest 42.451400 -72.250100 ... 71.0 mixed deciduous/evergreen forest 16.0 1.0 Bird Conservancy of the Rockies NEON.DOC.014041vK NaN KKLAP 20221129T224415Z PROVISIONAL
1406 8f61949b-d0cc-49c2-8b59-4e2938286da0 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed A3 evergreenForest 42.451400 -72.250100 ... 71.0 mixed deciduous/evergreen forest 17.0 0.0 Bird Conservancy of the Rockies NEON.DOC.014041vK NaN KKLAP 20221129T224415Z PROVISIONAL
1407 36574bab-3725-44d4-b96c-3fc6dcea0765 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed B3 evergreenForest 42.451400 -72.250100 ... 71.0 mixed deciduous/evergreen forest 19.0 0.0 Bird Conservancy of the Rockies NEON.DOC.014041vK NaN KKLAP 20221129T224415Z PROVISIONAL
1408 eb6dcb4a-cc6c-4ec1-9ee2-6932b7aefc54 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed A1 evergreenForest 42.451400 -72.250100 ... 71.0 deciduous forest 19.0 2.0 Bird Conservancy of the Rockies NEON.DOC.014041vK NaN KKLAP 20221129T224415Z PROVISIONAL
1409 51ff3c20-397f-4c88-84e9-f34c2f52d6a8 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed B2 evergreenForest 42.451400 -72.250100 ... 71.0 evergreen forest 19.0 3.0 Bird Conservancy of the Rockies NEON.DOC.014041vK NaN KKLAP 20221129T224415Z PROVISIONAL

1410 rows × 31 columns

brd_countdata
uid namedLocation domainID siteID plotID plotType pointID startDate eventID pointCountMinute ... vernacularName observerDistance detectionMethod visualConfirmation sexOrAge clusterSize clusterCode identifiedBy publicationDate release
0 4e22256f-5e86-4a2c-99be-dd1c7da7af28 BART_025.birdGrid.brd D01 BART BART_025 distributed C1 2015-06-14T09:23Z BART_025.C1.2015-06-14 1 ... Black-capped Chickadee 42.0 singing No Male 1.0 NaN JRUEB 20211222T013942Z RELEASE-2022
1 93106c0d-06d8-4816-9892-15c99de03c91 BART_025.birdGrid.brd D01 BART BART_025 distributed C1 2015-06-14T09:23Z BART_025.C1.2015-06-14 1 ... Red-eyed Vireo 9.0 singing No Male 1.0 NaN JRUEB 20211222T013942Z RELEASE-2022
2 5eb23904-9ae9-45bf-af27-a4fa1efd4e8a BART_025.birdGrid.brd D01 BART BART_025 distributed C1 2015-06-14T09:23Z BART_025.C1.2015-06-14 2 ... Black-and-white Warbler 17.0 singing No Male 1.0 NaN JRUEB 20211222T013942Z RELEASE-2022
3 99592c6c-4cf7-4de8-9502-b321e925684d BART_025.birdGrid.brd D01 BART BART_025 distributed C1 2015-06-14T09:23Z BART_025.C1.2015-06-14 2 ... Black-throated Green Warbler 50.0 singing No Male 1.0 NaN JRUEB 20211222T013942Z RELEASE-2022
4 6c07d9fb-8813-452b-8182-3bc5e139d920 BART_025.birdGrid.brd D01 BART BART_025 distributed C1 2015-06-14T09:23Z BART_025.C1.2015-06-14 1 ... Black-throated Green Warbler 12.0 singing No Male 1.0 NaN JRUEB 20211222T013942Z RELEASE-2022
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
15378 cffdd5e4-f664-411b-9aea-e6265071332a HARV_021.birdGrid.brd D01 HARV HARV_021 distributed B2 2022-06-12T13:31Z HARV_021.B2.2022-06-12 3 ... Belted Kingfisher 37.0 calling No Unknown 1.0 NaN KKLAP 20221129T224415Z PROVISIONAL
15379 92b58b34-077f-420a-871d-116ac5b1c98a HARV_021.birdGrid.brd D01 HARV HARV_021 distributed B2 2022-06-12T13:31Z HARV_021.B2.2022-06-12 5 ... Common Yellowthroat 8.0 calling Yes Male 1.0 NaN KKLAP 20221129T224415Z PROVISIONAL
15380 06ccb684-da77-4cdf-a8f7-b0d9ac106847 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed B2 2022-06-12T13:31Z HARV_021.B2.2022-06-12 1 ... Ovenbird 28.0 singing No Unknown 1.0 NaN KKLAP 20221129T224415Z PROVISIONAL
15381 0254f165-0052-406e-b9ae-b76ef4109df1 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed B2 2022-06-12T13:31Z HARV_021.B2.2022-06-12 2 ... Veery 50.0 calling No Unknown 1.0 NaN KKLAP 20221129T224415Z PROVISIONAL
15382 432c797d-c4ea-4bfd-901c-5c2481b845c4 HARV_021.birdGrid.brd D01 HARV HARV_021 distributed B2 2022-06-12T13:31Z HARV_021.B2.2022-06-12 4 ... Pine Warbler 29.0 singing No Unknown 1.0 NaN KKLAP 20221129T224415Z PROVISIONAL

15383 rows × 24 columns
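The variables_10003 file mentioned above is itself a csv, so you can keep the column descriptions close at hand in Python. A quick optional sketch (the table, fieldName, description, and units columns are standard in NEON variables files):

var_10003 = pandas.read_csv('/Users/Shared/filesToStack10003/stackedFiles/variables_10003.csv')
# look up the description and units of each column in brd_perpoint
var_10003[var_10003['table'] == 'brd_perpoint'][['fieldName', 'description', 'units']]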

And now let's do the same with the 30-minute data table for biological temperature.

IRBT30 = pandas.read_csv('/Users/Shared/NEON_temp-bio/stackedFiles/IRBT_30_minute.csv')
IRBT30
domainID siteID horizontalPosition verticalPosition startDateTime endDateTime bioTempMean bioTempMinimum bioTempMaximum bioTempVariance bioTempNumPts bioTempExpUncert bioTempStdErMean finalQF publicationDate release
0 D18 BARR 0 10 2021-09-01T00:00:00Z 2021-09-01T00:30:00Z 7.82 7.43 8.39 0.03 1800.0 0.60 0.00 0 20211219T025212Z PROVISIONAL
1 D18 BARR 0 10 2021-09-01T00:30:00Z 2021-09-01T01:00:00Z 7.47 7.16 7.75 0.01 1800.0 0.60 0.00 0 20211219T025212Z PROVISIONAL
2 D18 BARR 0 10 2021-09-01T01:00:00Z 2021-09-01T01:30:00Z 7.43 6.89 8.11 0.07 1800.0 0.60 0.01 0 20211219T025212Z PROVISIONAL
3 D18 BARR 0 10 2021-09-01T01:30:00Z 2021-09-01T02:00:00Z 7.36 6.78 8.15 0.06 1800.0 0.60 0.01 0 20211219T025212Z PROVISIONAL
4 D18 BARR 0 10 2021-09-01T02:00:00Z 2021-09-01T02:30:00Z 6.91 6.50 7.27 0.03 1800.0 0.60 0.00 0 20211219T025212Z PROVISIONAL
... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ... ...
13099 D18 BARR 3 0 2021-11-30T21:30:00Z 2021-11-30T22:00:00Z -14.62 -14.78 -14.46 0.00 1800.0 0.57 0.00 0 20211206T221914Z PROVISIONAL
13100 D18 BARR 3 0 2021-11-30T22:00:00Z 2021-11-30T22:30:00Z -14.59 -14.72 -14.50 0.00 1800.0 0.57 0.00 0 20211206T221914Z PROVISIONAL
13101 D18 BARR 3 0 2021-11-30T22:30:00Z 2021-11-30T23:00:00Z -14.56 -14.65 -14.45 0.00 1800.0 0.57 0.00 0 20211206T221914Z PROVISIONAL
13102 D18 BARR 3 0 2021-11-30T23:00:00Z 2021-11-30T23:30:00Z -14.50 -14.60 -14.39 0.00 1800.0 0.57 0.00 0 20211206T221914Z PROVISIONAL
13103 D18 BARR 3 0 2021-11-30T23:30:00Z 2021-12-01T00:00:00Z -14.45 -14.57 -14.32 0.00 1800.0 0.57 0.00 0 20211206T221914Z PROVISIONAL

13104 rows × 16 columns

Download remote sensing files: byFileAOP()

The function byFileAOP() uses the NEON API to programmatically download data files for remote sensing (AOP) data products. These files cannot be stacked by stackByTable() because they are not tabular data. Instead, the function creates a folder at the download location and writes the files there, preserving the folder structure of the subproducts.

The inputs to byFileAOP() are a data product ID, a site, a year, a filepath to save to, and an indicator to check the size of the download before proceeding, or not. As above, set check_size="FALSE" when working in Python. Be especially cautious about download size when downloading AOP data, since the files are very large.

Here, we'll download Ecosystem structure (Canopy Height Model) data from Hopbrook (HOPB) in 2017.

neonUtilities.byFileAOP(dpID='DP3.30015.001', site='HOPB',
                        year='2017', check_size='FALSE',
                        savepath='/Users/Shared');
Downloading files totaling approximately 147.930656 MB 
Downloading 217 files
  |======================================================================| 100%
Successfully downloaded 217 files to /Users/Shared/DP3.30015.001

Let's read one tile of data into Python and view it. We'll use the rasterio and matplotlib modules here, but as with tabular data, there are other options available.

import rasterio
CHMtile = rasterio.open('/Users/Shared/DP3.30015.001/neon-aop-products/2017/FullSite/D01/2017_HOPB_2/L3/DiscreteLidar/CanopyHeightModelGtif/NEON_D01_HOPB_DP3_718000_4709000_CHM.tif')
import matplotlib.pyplot as plt
from rasterio.plot import show
fig, ax = plt.subplots(figsize=(8,3))
show(CHMtile, ax=ax)



Resources for Learning R

There are myriad resources out there for learning to program in R. Below, we link to a tutorial on installing R and RStudio on your computer, outline a few different paths for learning R basics depending on how you enjoy learning, and include a few resources for intermediate and advanced learning.

Setting Up your Computer

Start out by installing R and, we recommend, RStudio, on your computer. RStudio is an Integrated Development Environment (IDE) for R. It is optional, but recommended when working with R. Directions for installing both can be found within the tutorial Install Git, Bash Shell, R & RStudio. You will need administrator permissions on your computer.

Pathways to Learning the Basics of R

In-person trainings

If you prefer to learn through in-person trainings, consider local workshops from The Carpentries (Software Carpentry or Data Carpentry; generally ~$25 for a 2-day workshop), courses offered by a local college or university (prices vary), or organizing your colleagues to meet regularly and learn R together (free!).

Online interactive courses

If you prefer to learn in a semi-structured online environment, there are a wide variety of online courses for learning R including Data Camp, Coursera, edX, and Lynda.com. Many of these options include free introductory lessons or trial periods as well as paid courses. We do not have personal experience with these courses and do not recommend or specifically promote any course.

In program interactive course

Swirl is a guided introduction to R in which you code along with the instructions in R, getting direct feedback when you type a command incorrectly. To use this package, once you have R or RStudio open and running, use the following commands to start the first lesson.

install.packages("swirl")

library(swirl)

swirl()

Online tutorials

If you prefer a less structured online environment, these tutorial series may be better suited for you.

  • Software Carpentry’s Programming with R
    • Learn R with a focus on tools needed for effective programming. Beyond the basics, it covers functions, loops, command line, and other key skills
  • Data Carpentry’s R for data analysis and visualization of Ecological Data
    • Learn R with a focus on data analysis. Beyond the basics, it covers dplyr for data aggregation & manipulation, ggplot2 for plotting, and touches on interacting with an SQL database. Designed to be taught by an instructor, but the materials also work for independent learning online.
  • Ethan White’s Data Carpentry for Biologists Semester Course (online content)
    • This comprehensive course contains an R section. While the overall focus is on data science skills, learning R is a portion of it (note, this is an extensive course).
  • RStudio’s list
    • RStudio links to many other learning opportunities. Start with the 'Beginners' learning path.

Video tutorials

A blend of instructor-led and self-paced learning, video tutorials may also be of interest. New stand-alone video tutorials are out each day, so we aren't going to recommend a specific series. Find what works for you by searching “R Programming video tutorials” on YouTube.

Books

Books are still a great way to learn R (and other languages). Many books are available at local libraries (university or community) or online, if you want to try them out before buying. Below are a few of the many, many books that data scientists working on the NEON project have found useful.

  • Michael Crawley’s The R Book is a classic that takes you from beginning steps to analyses and modelling.
  • Grolemund and Wickham’s R for Data Science focuses on using R in data science applications using Hadley Wickham’s “tidyverse”. It does assume some basic familiarity with R. Bonus: it is available online or in book format! (If you are completely new, they recommend starting with Hands-on Programming with R.)

Beyond the Basics

There are many intermediate and advanced courses, lessons, and tutorials linked in the above resources. For example, the Swirl package offers intermediate and advanced courses on specific topics, as does RStudio's list. See courses here; development is ongoing so new courses may be added.

However, once the basics are handled, you will find that much of your learning happens through solving the individual problems you encounter. To solve these problems, your favorite search engine is your friend. Paste the error (minus anything specific to your file/data) into the search bar and find answers from those who have had similar questions.

For more on working with NEON data in particular, be sure to check out the other NEON data tutorials.

Install & Set Up Docker For Use With eddy4R

This tutorial provides the basics of setting up Docker on your local computer and then connecting to an eddy4R Docker container in order to use the eddy4R R package. There are no specific skills needed for this tutorial; however, you will need to know how to access the command line tool for your operating system (basic instructions are given).

Learning Objectives

After completing this tutorial, you will be able to:

  • Access Docker on your local computer.
  • Access the eddy4R package in a RStudio Docker environment.

Things You’ll Need To Complete This Tutorial

You will need internet access and an up-to-date browser.

Sources

The directions on how to install Docker are heavily borrowed from the authors of CyVerse's Container Camp's Intro to Docker, and we thank them for providing the information.

The directions for how to access eddy4R come from

Metzger, S., D. Durden, C. Sturtevant, H. Luo, N. Pingintha-durden, and T. Sachs (2017). eddy4R 0.2.0: a DevOps model for community-extensible processing and analysis of eddy-covariance data based on R, Git, Docker, and HDF5. Geoscientific Model Development 10:3189–3206. doi: 10.5194/gmd-10-3189-2017.

The eddy4R versions within the tutorial have been updated to the 1.0.0 release that accompanied the following manuscript:

Metzger, S., E. Ayres, D. Durden, C. Florian, R. Lee, C. Lunch, H. Luo, N. Pingintha-Durden, J.A. Roberti, M. SanClements, C. Sturtevant, K. Xu, and R.C. Zulueta, 2019: From NEON Field Sites to Data Portal: A Community Resource for Surface–Atmosphere Research Comes Online. Bull. Amer. Meteor. Soc., 100, 2305–2325, https://doi.org/10.1175/BAMS-D-17-0307.1.

In the tutorial below, we give the very barest of information to get Docker set up for use with the NEON R package eddy4R. For more information on using Docker, consider reading through the content from CyVerse's Container Camp's Intro to Docker.

Install Docker

To work with the eddy4R–Docker image, you first need to sign up for an account at DockerHub.

Once logged in, getting Docker up and running on your favorite operating system (Mac/Windows/Linux) is very easy. The "getting started" guide on Docker has detailed instructions for setting up Docker. Unless you plan on being a very active Docker user and developer, we recommend starting with the stable channel (not the edge channel), as you may encounter fewer problems.

  • Mac
  • Windows
  • Linux

If you're using Docker for Windows, make sure you have shared your drive.

If you're using an older version of Windows or MacOS, you may need to use Docker Machine instead.

Test Docker installation

Once you are done installing Docker, test your installation by running the following command to make sure you are using version 1.13 or higher.

You will need an open shell window (Linux; Mac=Terminal) or the Docker Quickstart Terminal (Windows).

docker --version

When run, you will see which version of Docker you are currently running.

Note: If you run just the word docker, you should see a whole bunch of lines showing the different options available with docker. Alternatively, you can test your installation by running the following:

docker run hello-world

Notice that the first line states that the image can't be found locally. The next few lines are pulling the image, so if you were to run the hello-world prompt again, it would already be local and you'd see the message start at "Hello from Docker!".

If these steps work, you are ready to go on to access the eddy4R-Docker image that houses the suite of eddy4R R packages. If these steps have not worked, follow the installation instructions a second time.

Accessing eddy4R

Download of the eddy4R–Docker image and subsequent creation of a local container can be performed by two simple commands in an open shell (Linux; Mac = Terminal) or the Docker Quickstart Terminal (Windows).

The first command docker login will prompt you for your DockerHub ID and password.

The second command docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD stefanmet/eddy4r:1.0.0 will download the latest eddy4R–Docker image and start a Docker container that utilizes port 8787 for establishing a graphical interface via web browser.

  • docker run: docker will perform some process in an isolated container
  • -d: the container will start in detached mode, which means the container runs in the background and prints the container ID
  • -p: publish the container to a specified port (which follows)
  • 8787:8787: specify which ports to use. The default 8787:8787 is great if you are running locally. The first 4 digits are the port on your machine, the last 4 digits are the port communicating with RStudio in Docker. You can change the first 4 digits if you want to use a different port on your machine, or if you are running many containers or are on a shared network, but the last 4 digits need to be 8787.
  • -e PASSWORD=YOURPASSWORD: define a password environment variable to use upon login to the RStudio instance. YOURPASSWORD can be anything you want.
  • stefanmet/eddy4r:1.0.0: finally, the image you want to run.

Now try it.

docker login 

docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD stefanmet/eddy4r:1.0.0

This last command will run a specified release version (eddy4r:1.0.0) of the Docker image. Alternatively you can use eddy4r:latest to get the most up-to-date development image of eddy4r.

If you are using data stored on your local machine, rather than cloud hosting, a physical file system location on the host computer (local/dir) can be mounted to a file system location inside the Docker container (docker/dir). This is achieved with the Docker run option -v local/dir:docker/dir.
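For example, a complete command with a mounted directory might look like the following; the two paths are placeholders, so substitute your own local directory and a target directory inside the container:

docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD -v /users/me/eddyData:/home/rstudio/data stefanmet/eddy4r:1.0.0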

Access RStudio session

Now you can access the interactive RStudio session for using eddy4r by using any web browser and going to http://host-ip-address:8787 where host-ip-address is the internal IP address of the Docker host. For example, if your host IP address is 10.100.90.169 then you should type http://10.100.90.169:8787 into your browser.

To determine the IP address of your Docker host, follow the instructions below for your operating system.


Windows

Depending on your version of Docker (the older Docker Toolbox versus the newer Docker Desktop for Windows), there are different ways to get the Docker machine IP address:

  • Docker Toolbox - Type docker-machine ip default into a cmd.exe window. The output will be the local IP address of your Docker machine.
  • Docker Desktop for Windows - Type ipconfig into a cmd.exe window. The output will include either a DockerNAT IPv4 address or a vEthernet IPv4 address that Docker uses to communicate with the internet, which in most cases will be 10.0.75.1.

Mac

Type ifconfig | grep "inet " | grep -v 127.0.0.1 into your Terminal window. The output will be one or more local IP addresses for the docker machine. Use the numbers after the first inet output.

Linux

On Linux, Docker runs natively, so the container is typically reachable at localhost; browse to http://localhost:8787.


Once in the web browser, you can log into this instance of the RStudio session with the username rstudio and the password you defined with YOURPASSWORD. You are now in an RStudio user interface with eddy4R installed and ready to use.

Additional information about the use of RStudio and eddy4R packages in Docker containers can be found on the rocker-org/rocker website and the eddy4RWiki pages.

Using eddy4R

To learn to use the eddy4R package to calculate fluxes, please visit the eddy4R vignette (link pending).

Teaching Modules

  • Macrosystems Ecology Teaching Modules from Macrosystems EDDIE
  • Data Management using NEON Small Mammal Data
  • Quantifying The Drivers and Impacts of Natural Disturbance Events – The 2013 Colorado Floods

Use the neonUtilities Package to Access NEON Data

This tutorial provides an overview of functions in the neonUtilities package in R and the neonutilities package in Python. These packages provide a toolbox of basic functionality for working with NEON data.

This tutorial is primarily an index of functions and their inputs; for more in-depth guidance in using these functions to work with NEON data, see the Download and Explore tutorial. If you are already familiar with the neonUtilities package, and need a quick reference guide to function inputs and notation, see the neonUtilities cheat sheet.

Function index

The neonUtilities/neonutilities package contains several functions (use the R and Python tabs to see the syntax in each language):

R

  • stackByTable(): Takes zip files downloaded from the Data Portal or downloaded by zipsByProduct(), unzips them, and joins the monthly files by data table to create a single file per table.
  • zipsByProduct(): A wrapper for the NEON API; downloads data based on data product and site criteria. Stores downloaded data in a format that can then be joined by stackByTable().
  • loadByProduct(): Combines the functionality of zipsByProduct(), stackByTable(), and readTableNEON(). Downloads the specified data, stacks the files, and loads the files to the R environment.
  • byFileAOP(): A wrapper for the NEON API; downloads remote sensing data based on data product, site, and year criteria. Preserves the file structure of the original data.
  • byTileAOP(): Downloads remote sensing data for the specified data product, subset to tiles that intersect a list of coordinates.
  • readTableNEON(): Reads NEON data tables into R, using the variables file to assign R classes to each column.
  • getCitation(): Get a BibTeX citation for a particular data product and release.

Python

  • stack_by_table(): Takes zip files downloaded from the Data Portal or downloaded by zips_by_product(), unzips them, and joins the monthly files by data table to create a single file per table.
  • zips_by_product(): A wrapper for the NEON API; downloads data based on data product and site criteria. Stores downloaded data in a format that can then be joined by stack_by_table().
  • load_by_product(): Combines the functionality of zips_by_product(), stack_by_table(), and read_table_neon(). Downloads the specified data, stacks the files, and loads the files to the Python environment.
  • by_file_aop(): A wrapper for the NEON API; downloads remote sensing data based on data product, site, and year criteria. Preserves the file structure of the original data.
  • by_tile_aop(): Downloads remote sensing data for the specified data product, subset to tiles that intersect a list of coordinates.
  • read_table_neon(): Reads NEON data tables into Python, using the variables file to assign data types to each column.
  • get_citation(): Get a BibTeX citation for a particular data product and release.
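Most of these functions are demonstrated in the sections below; the tile-subsetting functions (byTileAOP()/by_tile_aop()) are not, so here is a minimal sketch of a Python call, under the assumption that the easting and northing inputs are lists of UTM coordinates and buffer is in meters (all values shown are placeholders):

# download only the AOP tiles intersecting the given coordinates (sketch)
nu.by_tile_aop(dpid="DP3.30015.001", site="HOPB", year="2017",
               easting=[718000], northing=[4709000],
               buffer=100, check_size=False,
               savepath="/Users/Shared")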

If you are only interested in joining data files downloaded from the NEON Data Portal, you will only need to use stackByTable(). Follow the instructions in the first section of the Download and Explore tutorial.

Install and load packages

First, install and load the package. The installation step only needs to be run once, and then periodically to update when new package versions are released. The load step needs to be run every time you run your code.

R

# install neonUtilities - can skip if already installed
# install.packages("neonUtilities")

# load neonUtilities
library(neonUtilities)

Python

# install neonutilities - can skip if already installed
# do this in the command line
pip install neonutilities

# load neonutilities in working environment
import neonutilities as nu

Download files and load to working environment

The most popular function in neonUtilities is loadByProduct() (or load_by_product() in neonutilities). This function downloads data from the NEON API, merges the site-by-month files, and loads the resulting data tables into the programming environment, classifying each variable’s data type appropriately. It combines the actions of the zipsByProduct(), stackByTable(), and readTableNEON() functions, described below.

This is a popular choice because it ensures you’re always working with the latest data, and it ends with ready-to-use tables. However, if you use it in a workflow you run repeatedly, keep in mind it will re-download the data every time.
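If the re-downloading is a concern, one option is to cache the returned object locally and reload it on subsequent runs. Here is a minimal sketch using pickle; the cache file name is arbitrary, and this is a general Python pattern rather than a neonutilities feature:

import os
import pickle
import neonutilities as nu

cache_file = "triptemp.pickle"  # hypothetical local cache file

if os.path.exists(cache_file):
    # reuse the previously downloaded tables
    with open(cache_file, "rb") as f:
        triptemp = pickle.load(f)
else:
    # download once, then save for future runs
    triptemp = nu.load_by_product(dpid="DP1.00003.001",
                                  site=["MOAB", "ONAQ"],
                                  startdate="2018-05",
                                  enddate="2018-08",
                                  check_size=False)
    with open(cache_file, "wb") as f:
        pickle.dump(triptemp, f)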

loadByProduct() works on most observational (OS) and sensor (IS) data, but not on surface-atmosphere exchange (SAE) data, remote sensing (AOP) data, or some of the data tables in the microbial data products. For functions that download AOP data, see the byFileAOP() and byTileAOP() sections in this tutorial. For functions that work with SAE data, see the NEON eddy flux data tutorial. SAE functions are not yet available in Python.

The inputs to loadByProduct() control which data to download and how to manage the processing:

R

  • dpID: The data product ID, e.g. DP1.00002.001
  • site: Defaults to “all”, meaning all sites with available data; can be a vector of 4-letter NEON site codes, e.g.  c("HARV","CPER","ABBY").
  • startdate and enddate: Defaults to NA, meaning all dates with available data; or a date in the form YYYY-MM, e.g.  2017-06. Since NEON data are provided in month packages, finer scale querying is not available. Both start and end date are inclusive.
  • package: Either basic or expanded data package. Expanded data packages generally include additional information about data quality, such as chemical standards and quality flags. Not every data product has an expanded package; if the expanded package is requested but there isn’t one, the basic package will be downloaded.
  • timeIndex: Defaults to “all”, to download all data; or the number of minutes in the averaging interval. See example below; only applicable to IS data.
  • release: Specify a particular data Release, e.g.  "RELEASE-2024". Defaults to the most recent Release. For more details and guidance, see the Release and Provisional tutorial.
  • include.provisional: T or F: Should provisional data be downloaded? If release is not specified, set to T to include provisional data in the download. Defaults to F.
  • savepath: the file path you want to download to; defaults to the working directory.
  • check.size: T or F: should the function pause before downloading data and warn you about the size of your download? Defaults to T; if you are using this function within a script or batch process you will want to set it to F.
  • token: Optional API token for faster downloads. See the API token tutorial.
  • nCores: Number of cores to use for parallel processing. Defaults to 1, i.e. no parallelization.

Python

  • dpid: the data product ID, e.g. DP1.00002.001
  • site: defaults to “all”, meaning all sites with available data; can be a list of 4-letter NEON site codes, e.g.  ["HARV","CPER","ABBY"].
  • startdate and enddate: defaults to NA, meaning all dates with available data; or a date in the form YYYY-MM, e.g.  2017-06. Since NEON data are provided in month packages, finer scale querying is not available. Both start and end date are inclusive.
  • package: either basic or expanded data package. Expanded data packages generally include additional information about data quality, such as chemical standards and quality flags. Not every data product has an expanded package; if the expanded package is requested but there isn’t one, the basic package will be downloaded.
  • timeindex: defaults to “all”, to download all data; or the number of minutes in the averaging interval. See example below; only applicable to IS data.
  • release: Specify a particular data Release, e.g.  "RELEASE-2024". Defaults to the most recent Release. For more details and guidance, see the Release and Provisional tutorial.
  • include_provisional: True or False: Should provisional data be downloaded? If release is not specified, set to True to include provisional data in the download. Defaults to False.
  • savepath: the file path you want to download to; defaults to the working directory.
  • check_size: True or False: should the function pause before downloading data and warn you about the size of your download? Defaults to True; if you are using this function within a script or batch process you will want to set it to False.
  • token: Optional API token for faster downloads. See the API token tutorial.
  • cloud_mode: Can be set to True if you are working in a cloud environment; provides more efficient data transfer from NEON cloud storage to other cloud environments.
  • progress: Set to False to omit the progress bar during download and stacking.

The dpID (dpid) is the data product identifier of the data you want to download. The DPID can be found on the Explore Data Products page. It will be in the form DP#.#####.###

Demo data download and read

Let’s get triple-aspirated air temperature data (DP1.00003.001) from Moab and Onaqui (MOAB and ONAQ), from May–August 2018, and name the data object triptemp:

R

triptemp <- loadByProduct(dpID="DP1.00003.001", 
                          site=c("MOAB","ONAQ"),
                          startdate="2018-05", 
                          enddate="2018-08")

Python

triptemp = nu.load_by_product(dpid="DP1.00003.001", 
                              site=["MOAB","ONAQ"],
                              startdate="2018-05", 
                              enddate="2018-08")

View downloaded data

The object returned by loadByProduct() is a named list of data tables in R; load_by_product() returns a dictionary of data tables in Python. To work with each of them, select them from the list or dictionary.

R

names(triptemp)
## [1] "citation_00003_RELEASE-2024" "issueLog_00003"             
## [3] "readme_00003"                "sensor_positions_00003"     
## [5] "TAAT_1min"                   "TAAT_30min"                 
## [7] "variables_00003"
temp30 <- triptemp$TAAT_30min

If you prefer to extract each table from the list and work with it as an independent object, you can use the list2env() function:

list2env(triptemp, .GlobalEnv)

Python

triptemp.keys()
## dict_keys(['TAAT_1min', 'TAAT_30min', 'citation_00003_RELEASE-2024', 'issueLog_00003', 'readme_00003', 'sensor_positions_00003', 'variables_00003'])
temp30 = triptemp["TAAT_30min"]

If you prefer to extract each table from the dictionary and work with it as an independent object, you can use globals().update():

globals().update(triptemp)

For more details about the contents of the data tables and metadata tables, check out the Download and Explore tutorial.

Join data files: stackByTable()

The function stackByTable() joins the month-by-site files from a data download. The output will yield data grouped into new files by table name. For example, the single aspirated air temperature data product contains 1 minute and 30 minute interval data. The output from this function is one .csv with 1 minute data and one .csv with 30 minute data.

Depending on the size of your download, this function may run for a while. For example, in testing for this tutorial, 124 MB of temperature data took about 4 minutes to stack. A progress bar will display while the stacking is in progress.

Download the Data

To stack data from the Portal, first download the data of interest from the NEON Data Portal. To stack data downloaded from the API, see the zipsByProduct() section below.

Your data will download from the Portal in a single zipped file.

The stacking function will only work on zipped Comma Separated Value (.csv) files, not on NEON data stored in other formats (HDF5, etc.).

Run stackByTable()

The example data below are single-aspirated air temperature.

To run the stackByTable() function, input the file path to the downloaded and zipped file.

R

# Modify the file path to the file location on your computer
stackByTable(filepath="~neon/data/NEON_temp-air-single.zip")

Python

# Modify the file path to the file location on your computer
nu.stack_by_table(filepath="/neon/data/NEON_temp-air-single.zip")

In the same directory as the zipped file, you should now have an unzipped directory of the same name. When you open this you will see a new directory called stackedFiles. This directory contains one or more .csv files (depending on the data product you are working with) with all the data from the months and sites you downloaded. There will also be a single copy of the associated variables, validation, and sensor_positions files, if applicable (validation files are only available for observational data products, and sensor position files are only available for instrument data products).

These .csv files are now ready for use with the program of your choice.

To read the data tables, we recommend using readTableNEON(), which will assign each column to the appropriate data type, based on the metadata in the variables file. This ensures time stamps and missing data are interpreted correctly.

Load data to environment

R

SAAT30 <- readTableNEON(
  dataFile='~/stackedFiles/SAAT_30min.csv',
  varFile='~/stackedFiles/variables_00002.csv'
)

Python

SAAT30 = nu.read_table_neon(
  dataFile='/stackedFiles/SAAT_30min.csv',
  varFile='/stackedFiles/variables_00002.csv'
)

Other function inputs

Other input options in stackByTable() are:

  • savepath : allows you to specify the file path where you want the stacked files to go, overriding the default. Set to "envt" to load the files to the working environment.
  • saveUnzippedFiles : allows you to keep the unzipped, unstacked files from an intermediate stage of the process; by default they are discarded.

Example usage:

R

stackByTable(filepath="~/neon/data/NEON_temp-air-single.zip", 
             savepath="~/data/allTemperature", saveUnzippedFiles=T)

tempsing <- stackByTable(filepath="~/neon/data/NEON_temp-air-single.zip", savepath="envt", saveUnzippedFiles=F)

Python

nu.stack_by_table(filepath="/neon/data/NEON_temp-air-single.zip", 
                  savepath="/data/allTemperature", 
                  save_unzipped_files=True)

tempsing = nu.stack_by_table(filepath="/neon/data/NEON_temp-air-single.zip", savepath="envt", save_unzipped_files=False)

Download files to be stacked: zipsByProduct()

The function zipsByProduct() is a wrapper for the NEON API; it downloads zip files for the specified data product and stores them in a format that can then be passed to stackByTable().

Input options for zipsByProduct() are the same as those for loadByProduct() described above.

Here, we’ll download single-aspirated air temperature (DP1.00002.001) data from Wind River Experimental Forest (WREF) for April and May of 2019.

R

zipsByProduct(dpID="DP1.00002.001", site="WREF", 
              startdate="2019-04", enddate="2019-05",
              package="basic", check.size=T)

Downloaded files can now be passed to stackByTable() to be stacked.

stackByTable(filepath=paste(getwd(), 
                            "/filesToStack00002", 
                            sep=""))

Python

nu.zips_by_product(dpid="DP1.00002.001", site="WREF", 
                   startdate="2019-04", enddate="2019-05",
                   package="basic", check_size=True)

Downloaded files can now be passed to stackByTable() to be stacked.

nu.stack_by_table(filepath=os.getcwd()+
                  "/filesToStack00002")

For many sensor data products, download sizes can get very large, and stackByTable() takes a long time. The 1-minute or 2-minute files are much larger than the longer averaging intervals, so if you don't need high-frequency data, the timeIndex input option lets you choose which averaging interval to download.

This option is only applicable to sensor (IS) data, since OS data are not averaged.

Download by averaging interval

Download only the 30-minute data for single-aspirated air temperature at WREF:

R

zipsByProduct(dpID="DP1.00002.001", site="WREF", 
              startdate="2019-04", enddate="2019-05",
              package="basic", timeIndex=30, 
              check.size=T)

Python

nu.zips_by_product(dpid="DP1.00002.001", site="WREF", 
                   startdate="2019-04", 
                   enddate="2019-05", package="basic", 
                   timeindex=30, check_size=True)

The 30-minute files can be stacked and loaded as usual.
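For example, a sketch of the full sequence in R, following the same pattern as above (paths assume the download landed in your working directory):

# stack the 30-minute files downloaded above
stackByTable(filepath=paste(getwd(), "/filesToStack00002", sep=""))

# read the stacked table with the correct data types assigned
SAAT30 <- readTableNEON(
  dataFile=paste(getwd(), "/filesToStack00002/stackedFiles/SAAT_30min.csv", sep=""),
  varFile=paste(getwd(), "/filesToStack00002/stackedFiles/variables_00002.csv", sep="")
)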

Download remote sensing files

Remote sensing data files can be very large, and NEON remote sensing (AOP) data are stored in a directory structure that makes them easier to navigate. byFileAOP() downloads AOP files from the API while preserving their directory structure. This provides a convenient way to access AOP data programmatically.

Be aware that downloads from byFileAOP() can take a VERY long time, depending on the data you request and your connection speed. You may need to run the function and then leave your machine on and downloading for an extended period of time.

Here the example download is the Ecosystem Structure data product at Hop Brook (HOPB) in 2017; we use this as the example because it’s a relatively small year-site-product combination.

R

byFileAOP("DP3.30015.001", site="HOPB", 
          year=2017, check.size=T)

Python

nu.by_file_aop(dpid="DP3.30015.001", 
               site="HOPB", year=2017, 
               check_size=True)

The files should now be downloaded to a new folder in your working directory.

Download remote sensing files for specific coordinates

Often when using remote sensing data, we only want data covering a certain area - usually the area where we have coordinated ground sampling. byTileAOP() queries for data tiles containing a specified list of coordinates. It only works for the tiled, AKA mosaicked, versions of the remote sensing data, i.e. the ones with data product IDs beginning with “DP3”.

Here, we’ll download tiles of vegetation indices data (DP3.30026.001) corresponding to select observational sampling plots. For more information about accessing NEON spatial data, see the API tutorial and the in-development geoNEON package.

For now, assume we’ve used the API to look up the plot centroids of plots SOAP_009 and SOAP_011 at the Soaproot Saddle site. You can also look these up in the Spatial Data folder of the document library. The coordinates of the two plots in UTMs are 298755,4101405 and 299296,4101461. These are 40x40m plots, so in looking for tiles that contain the plots, we want to include a 20m buffer. The “buffer” is actually a square: a delta applied equally to both the easting and northing coordinates.

R

byTileAOP(dpID="DP3.30026.001", site="SOAP", 
          year=2018, easting=c(298755,299296),
          northing=c(4101405,4101461),
          buffer=20)

Python

nu.by_tile_aop(dpid="DP3.30026.001", 
               site="SOAP", year=2018, 
               easting=[298755,299296],
               northing=[4101405,4101461],
               buffer=20)

The 2 tiles covering the SOAP_009 and SOAP_011 plots have been downloaded.

Work with NEON's Single-Aspirated Air Temperature Data

In this tutorial, we explore the NEON single-aspirated air temperature data. We then discuss how to interpret the variables, how to work with date-time and date formats, and finally how to plot the data.

This tutorial is part of a series on how to work with both discrete and continuous time series data with NEON plant phenology and temperature data products.

Objectives

After completing this activity, you will be able to:

  • work with "stacked" NEON Single-Aspirated Air Temperature data.
  • correctly format date-time data.
  • use dplyr functions to filter data.
  • plot time series data in scatter plots using the ggplot() function.

Things You’ll Need To Complete This Tutorial

You will need the most current version of R and, preferably, RStudio loaded on your computer to complete this tutorial.

Install R Packages

  • neonUtilities: install.packages("neonUtilities")
  • ggplot2: install.packages("ggplot2")
  • dplyr: install.packages("dplyr")
  • tidyr: install.packages("tidyr")

More on Packages in R – Adapted from Software Carpentry.

Additional Resources

  • NEON data portal
  • RStudio's data wrangling (dplyr/tidyr) cheatsheet
  • NEONScience GitHub Organization
  • nneo API wrapper on CRAN
  • Hadley Wickham's documentation on the ggplot2 package.
  • Winston Chang's *Cookbook for R* site, based on his *R Graphics Cookbook* text.

Background Information About NEON Air Temperature Data

Air temperature is continuously monitored by NEON using two methods. At terrestrial sites, temperature at the top of the tower is derived from a triple-redundant aspirated air temperature sensor. This is provided as NEON data product DP1.00003.001. Single Aspirated Air Temperature sensors (SAAT) are deployed at multiple levels on the tower at NEON terrestrial sites, and on the meteorological stations at NEON aquatic sites, to develop temperature profiles. This is provided as NEON data product DP1.00002.001.

When designing a research project using this data, consult the Data Product Details Page for more detailed documentation.

Single-aspirated Air Temperature

Air temperature profiles are measured by deploying SAATs at various heights on NEON tower infrastructure. Air temperature at aquatic sites is measured using a single SAAT at a standard height of 3m above ground level. Air temperature for this data product is provided as one- and thirty-minute averages of 1 Hz observations. Temperature observations are made using platinum resistance thermometers, which are housed in a fan-aspirated shield to reduce radiative heating. The sensor resistance is measured in ohms and subsequently converted to degrees Celsius during data processing. Details on the conversion can be found in the associated Algorithm Theoretical Basis Document (ATBD; see Product Details page linked above).

Available Data Tables

The SAAT data product contains two data tables for each site and month selected, consisting of the 1-minute and 30-minute averaging intervals. In addition, there are several metadata files that provide additional useful information.

  • readme with information on the data product and the download
  • variables file that defines the terms, data types, and units
  • EML file with machine readable metadata in standardized Ecological Metadata Language

Access NEON Data

There are several ways to access NEON data: directly from the NEON data portal, through a data partner (select data products only), by writing code to pull data directly from the NEON API, or, as we'll do here, using the neonUtilities package, which is a wrapper for the API that makes working with the data easier.

Downloading from the Data Portal

If you prefer to download data from the data portal, please review the Getting started and Stack the downloaded data sections of the Download and Explore NEON Data tutorial. This will get you to the point where you can download data from sites or dates of interest and resume this tutorial.

Downloading Data Using neonUtilities

First, we need to set up our environment with the packages needed for this tutorial.

# Install needed package (only uncomment & run if not already installed)
#install.packages("neonUtilities")
#install.packages("ggplot2")
#install.packages("dplyr")
#install.packages("tidyr")

# Load required libraries
library(neonUtilities)  # for accessing NEON data
library(ggplot2)  # for plotting
library(dplyr)  # for data munging
library(tidyr)  # for data munging

# set working directory
# this step is optional, only needed if you plan to save the 
# data files at the end of the tutorial
wd <- "~/data" # enter your working directory here
setwd(wd)

This tutorial is part of a series working with discrete plant phenology data and (nearly) continuous temperature data. Our overall "research" question is whether there is any correlation between plant phenology and temperature. Therefore, we will want to work with data that align with the plant phenology data that we worked with in the first tutorial. If you are only interested in working with the temperature data, you do not need to complete the previous tutorial.

Our data of interest will be the temperature data from 2018 from NEON's Smithsonian Conservation Biology Institute (SCBI) field site located in Virginia near the northern terminus of the Blue Ridge Mountains.

NEON single-aspirated air temperature data are available at two averaging intervals, 1 minute and 30 minutes. Which data you want to work with depends on your research questions. Here, we'll only download and work with the 30-minute interval data, as we're primarily interested in longer-term (daily, weekly, annual) patterns.

This will download 7.7 MB of data. check.size is set to false (F) to improve the flow of the script, but it is always a good idea to view the size with true (T) before downloading a new dataset.

# download data of interest - Single Aspirated Air Temperature
saat <- loadByProduct(dpID="DP1.00002.001", site="SCBI", 
                      startdate="2018-01", enddate="2018-12", 
                      package="basic", timeIndex="30",
                      check.size = F)

Explore Temperature Data

Now that you have the data, let's take a look at the structure and understand what's in the data. The data (saat) come in as a large list containing the data table(s) and several metadata tables.

View(saat)

So what exactly are these files and why would you want to use them?

  • data file(s): There will always be one or more dataframes that include the primary data of the data product you downloaded. Since we downloaded only the 30 minute averaged data we only have one data table SAAT_30min.
  • readme_xxxxx: The readme file, with the corresponding 5 digits from the data product number, provides you with important information relevant to the data product and the specific instance of downloading the data.
  • sensor_positions_xxxxx: This table contains the spatial coordinates of each sensor, relative to a reference location.
  • variables_xxxxx: This table contains all the variables found in the associated data table(s). This includes full definitions, units, and rounding.
  • issueLog_xxxxx: This table contains records of any known issues with the data product, such as sensor malfunctions.
  • scienceReviewFlags_xxxxx: This table may or may not be present. It contains descriptions of adverse events that led to manual flagging of the data, and is usually more detailed than the issue log. It only contains records relevant to the sites and dates of data downloaded.

Since we want to work with the individual files, let's make the elements of the list into independent objects.

list2env(saat, .GlobalEnv)

## <environment: R_GlobalEnv>

Now let's take a look at the data table.

str(SAAT_30min)

## 'data.frame':	87600 obs. of  16 variables:
##  $ domainID           : chr  "D02" "D02" "D02" "D02" ...
##  $ siteID             : chr  "SCBI" "SCBI" "SCBI" "SCBI" ...
##  $ horizontalPosition : chr  "000" "000" "000" "000" ...
##  $ verticalPosition   : chr  "010" "010" "010" "010" ...
##  $ startDateTime      : POSIXct, format: "2018-01-01 00:00:00" "2018-01-01 00:30:00" "2018-01-01 01:00:00" ...
##  $ endDateTime        : POSIXct, format: "2018-01-01 00:30:00" "2018-01-01 01:00:00" "2018-01-01 01:30:00" ...
##  $ tempSingleMean     : num  -11.8 -11.8 -12 -12.2 -12.4 ...
##  $ tempSingleMinimum  : num  -12.1 -12.2 -12.3 -12.6 -12.8 ...
##  $ tempSingleMaximum  : num  -11.4 -11.3 -11.3 -11.7 -12.1 ...
##  $ tempSingleVariance : num  0.0208 0.0315 0.0412 0.0393 0.0361 0.0289 0.0126 0.0211 0.0115 0.0022 ...
##  $ tempSingleNumPts   : num  1800 1800 1800 1800 1800 1800 1800 1800 1800 1800 ...
##  $ tempSingleExpUncert: num  0.13 0.13 0.13 0.13 0.129 ...
##  $ tempSingleStdErMean: num  0.0034 0.0042 0.0048 0.0047 0.0045 0.004 0.0026 0.0034 0.0025 0.0011 ...
##  $ finalQF            : num  0 0 0 0 0 0 0 0 0 0 ...
##  $ publicationDate    : chr  "20221210T185420Z" "20221210T185420Z" "20221210T185420Z" "20221210T185420Z" ...
##  $ release            : chr  "undetermined" "undetermined" "undetermined" "undetermined" ...

Quality Flags

The sensor data undergo a variety of automated quality assurance and quality control checks. You can read about them in detail in the Quality Flags and Quality Metrics ATBD, in the Documentation section of the product details page. The expanded data package includes all of these quality flags, which allows you to decide whether failing a particular check would significantly hamper your research, and whether you should therefore remove those data from your analysis. Here, we're using the basic data package, which only includes the final quality flag (finalQF), which is aggregated from the full set of quality flags.
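If you do want the full set of individual quality flags, the only change needed is to request the expanded package in the download. A sketch mirroring the download call above (saat_exp is just an example object name):

# same download as above, but with all individual quality flags included
# saat_exp <- loadByProduct(dpID="DP1.00002.001", site="SCBI",
#                           startdate="2018-01", enddate="2018-12",
#                           package="expanded", timeIndex="30",
#                           check.size = F)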

A pass of the check is 0, while a fail is 1. Let's see what percentage of the data we downloaded passed the quality checks.

sum(SAAT_30min$finalQF==1)/nrow(SAAT_30min)

## [1] 0.2340297

What should we do with the 23% of the data that are flagged? This may depend on why it is flagged and what questions you are asking, and the expanded data package would be useful for determining this.

For now, for demonstration purposes, we'll keep the flagged data.

What about null (NA) data?

sum(is.na(SAAT_30min$tempSingleMean))/nrow(SAAT_30min)

## [1] 0.2239269

mean(SAAT_30min$tempSingleMean)

## [1] NA

22% of the mean temperature values are NA. Note that this is not additive with the flagged data! Empty data records are flagged, so this indicates nearly all of the flagged data in our download are empty records.
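We can verify this overlap directly by counting the records that are both flagged and empty:

# fraction of records that are both flagged and missing a mean temperature
sum(SAAT_30min$finalQF==1 & is.na(SAAT_30min$tempSingleMean))/nrow(SAAT_30min)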

Why was there no output from the calculation of mean temperature?

The R programming language, by default, won't calculate a mean (or many other summary statistics) on data that contain NA values. We could override this using the input parameter na.rm=TRUE in the mean() function, or just remove the empty values from our analysis.
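For example:

# override the default NA handling directly in the call
mean(SAAT_30min$tempSingleMean, na.rm=TRUE)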

# create new dataframe without NAs
SAAT_30min_noNA <- SAAT_30min %>%
	drop_na(tempSingleMean)  # tidyr function

# alternate base R
# SAAT_30min_noNA <- SAAT_30min[!is.na(SAAT_30min$tempSingleMean),]

# did it work?
sum(is.na(SAAT_30min_noNA$tempSingleMean))

## [1] 0

Scatterplots with ggplot

We can use ggplot to create scatter plots. We have several options for which variable to plot:

  • tempSingleMean: the mean temperature for the interval
  • tempSingleMinimum: the minimum temperature during the interval
  • tempSingleMaximum: the maximum temperature for the interval

Depending on exactly what question you are asking you may prefer to use one over the other. For many applications, the mean temperature of the 1- or 30-minute interval will provide the best representation of the data.

Let's plot it. (This is a plot of a large amount of data, so it can take 1-2 minutes to process. If it takes too much of your computer's memory, you can skip it; it is not essential for completing the next steps.)

# plot temp data
tempPlot <- ggplot(SAAT_30min, aes(startDateTime, tempSingleMean)) +
    geom_point(size=0.3) +
    ggtitle("Single Aspirated Air Temperature") +
    xlab("Date") + ylab("Temp (C)") +
    theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
    theme(text = element_text(size=18))

tempPlot

## Warning: Removed 19616 rows containing missing values (`geom_point()`).

Scatter plot of mean temperatures for the year 2018 at the Smithsonian Conservation Biology Institute (SCBI). The plotted data show erroneous sensor readings that occurred during late April/May 2018.

What patterns can you see in the data?

Something odd seems to have happened in late April/May 2018. Since it is unlikely Virginia experienced -50C during this time, these are probably erroneous sensor readings, and a good example of why we should remove data carrying those quality flags.
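A quick check on the unfiltered data confirms how extreme these readings are:

# lowest mean temperature in the raw (unfiltered) data
min(SAAT_30min$tempSingleMean, na.rm=TRUE)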

Right now we are also looking at all the data points in the dataset. However, we may want to view or aggregate the data differently:

  • aggregated data: min, mean, or max over some duration
  • the number of days since a freezing temperature
  • or some other segregation of the data.

Given that in the previous tutorial, Work With NEON's Plant Phenology Data, we were working with phenology data collected on a daily scale, let's aggregate to that level.

To make this plot better, let's do two things:

  1. Remove flagged data
  2. Aggregate to a daily mean.

Subset to remove quality flagged data

We already removed the empty records. Now we'll subset the data to remove the remaining flagged data.

# subset and add C to name for "clean"
SAAT_30minC <- filter(SAAT_30min_noNA, finalQF==0)

# Do any quality flags remain?
sum(SAAT_30minC$finalQF==1)

## [1] 0

Now we can plot only the unflagged data.

# plot temp data
tempPlot <- ggplot(SAAT_30minC, aes(startDateTime, tempSingleMean)) +
    geom_point(size=0.3) +
    ggtitle("Single Aspirated Air Temperature") +
    xlab("Date") + ylab("Temp (C)") +
    theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
    theme(text = element_text(size=18))

tempPlot

Scatter plot of mean temperatures for the year 2018 at the Smithsonian Conservation Biology Institute (SCBI). Plotted data now has been cleaned of the erroneous sensor readings by filtering out flagged data.

That looks better! But we're still working with the 30-minute data.

Aggregate Data by Day

We can use the dplyr package functions to aggregate the data. However, we have to choose which data we want to aggregate. Again, you might want daily minimum, mean, or maximum temperatures depending on your question.

In the context of phenology, minimum temperatures might be very important if you are interested in a species that is very frost susceptible. Any days with a minimum temperature below 0C could dramatically change the phenophase. For other species or meteorological zones, maximum thresholds may be very important. Or you might be most interested in the daily mean.

And note that you can combine different input values with different aggregation functions - for example, you could calculate the minimum of the half-hourly average temperature, or the average of the half-hourly maximum temperature (see the sketch after the next code chunk).

For this tutorial, let's use maximum daily temperature, i.e. the maximum of the tempSingleMax values for the day.

# convert to date, easier to work with
SAAT_30minC$Date <- as.Date(SAAT_30minC$startDateTime)

# maximum of the half-hourly maxima for each day
temp_day <- SAAT_30minC %>%
	group_by(Date) %>%
	summarise(dayMax=max(tempSingleMaximum))
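As noted above, other combinations of input variable and aggregation function are a one-line change. A sketch of one alternative, using the Date column created in the chunk above (temp_day_min is just an example name):

# daily minimum of the half-hourly mean temperature
temp_day_min <- SAAT_30minC %>%
	group_by(Date) %>%
	summarise(dayMinOfMean=min(tempSingleMean))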

Now we can plot the cleaned up daily temperature.

# plot Air Temperature Data across 2018 using daily data
tempPlot_dayMax <- ggplot(temp_day, aes(Date, dayMax)) +
    geom_point(size=0.5) +
    ggtitle("Daily Max Air Temperature") +
    xlab("") + ylab("Temp (C)") +
    theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
    theme(text = element_text(size=18))

tempPlot_dayMax

Scatter plot of daily maximum temperatures (of 30-minute interval maxima) for the year 2018 at the Smithsonian Conservation Biology Institute (SCBI).

Thought questions:

  • What do we gain by this visualization?
  • What do we lose relative to the 30 minute intervals?

ggplot - Subset by Time

Sometimes we want to scale the x- or y-axis to a particular time subset without subsetting the entire data_frame. To do this, we can define start and end times, and then set these limits in scale_x_date() as follows:

scale_x_date(limits=start.end) +

Let's plot just the first three months of the year.

# Define Start and end times for the subset as R objects that are the time class
startTime <- as.Date("2018-01-01")
endTime <- as.Date("2018-03-31")

# create a start and end time R object
start.end <- c(startTime,endTime)
str(start.end)

##  Date[1:2], format: "2018-01-01" "2018-03-31"

# View data for first 3 months only
# And we'll add some color for a change. 
tempPlot_dayMax3m <- ggplot(temp_day, aes(Date, dayMax)) +
           geom_point(color="blue", size=0.5) +  
           ggtitle("Air Temperature\n Jan - March") +
           xlab("Date") + ylab("Air Temperature (C)")+ 
           (scale_x_date(limits=start.end, 
                date_breaks="1 week",
                date_labels="%b %d"))

tempPlot_dayMax3m

## Warning: Removed 268 rows containing missing values (`geom_point()`).

Scatter plot showing daily maximum temperatures (of 30-minute interval maxima) from the beginning of January 2018 through the end of March 2018 at the Smithsonian Conservation Biology Institute (SCBI).

Now that we have the temperature data matching our phenology data from the previous tutorial, we can save it to our computer to use in future analyses (or the next tutorial). This step is optional if you are continuing directly to the next tutorial, as you already have the data in R.

# Write .csv - this step is optional 
# This will write to the working directory we set at the start of the tutorial
write.csv(temp_day, file="NEONsaat_daily_SCBI_2018.csv", row.names=F)

Work With NEON's Plant Phenology Data

Many organisms, including plants, show patterns of change across seasons - the different stages of this observable change are called phenophases. In this tutorial we explore how to work with NEON plant phenophase data.

Objectives

After completing this activity, you will be able to:

  • work with NEON Plant Phenology Observation data.
  • use dplyr functions to filter data.
  • plot time series data in a bar plot using the ggplot() function.

Things You’ll Need To Complete This Tutorial

You will need the most current version of R and, preferably, RStudio loaded on your computer to complete this tutorial.

Install R Packages

  • neonUtilities: install.packages("neonUtilities")
  • ggplot2: install.packages("ggplot2")
  • dplyr: install.packages("dplyr")

More on Packages in R – Adapted from Software Carpentry.

Download Data

This tutorial is designed to have you download data directly from the NEON portal API using the neonUtilities package. However, you can also directly download this data, prepackaged, from FigShare. This data set includes all the files needed for the Work with NEON OS & IS Data - Plant Phenology & Temperature tutorial series. The data are in the format you would receive if downloading them using the zipsByProduct() function in the neonUtilities package.

Direct Download: NEON Phenology & Temp Time Series Teaching Data Subset (v2 - 2017-2019 data) (12 MB)


Additional Resources

  • NEON data portal
  • NEON Plant Phenology Observations data product user guide
  • RStudio's data wrangling (dplyr/tidyr) cheatsheet
  • NEONScience GitHub Organization
  • nneo API wrapper on CRAN

Plants change throughout the year, moving through a series of phenophases. Why do they change?

Explore Phenology Data

The following sections provide a brief overview of the NEON plant phenology observation data. When designing a research project using this data, you need to consult the documents associated with this data product and not rely solely on this summary.

The following description of the NEON Plant Phenology Observation data is modified from the data product user guide.

NEON Plant Phenology Observation Data

NEON collects plant phenology data and provides it as NEON data product DP1.10055.001.

The plant phenology observations data product provides in-situ observations of the phenological status and intensity of tagged plants (or patches) during discrete observation events.

Sampling occurs at all terrestrial field sites at site- and season-specific intervals. During Phase I (dominant species) sampling (pre-2021), three species with 30 individuals each are sampled. Beginning in 2021, Phase II (community) sampling samples up to 20 species, with 5 or more individuals per species.

Status-based Monitoring

NEON employs status-based monitoring, in which the phenological condition of an individual is reported any time that individual is observed. At every observation bout, records are generated for every phenophase that is occurring and for every phenophase not occurring. With this approach, events that may occur multiple times during a single year (such as leaf emergence in Mediterranean zones, or flowering in many desert species) can be captured. Continuous reporting of phenophase status enables quantification of the duration of phenophases, rather than just their date of onset, while also enabling explicit quantification of the uncertainty in phenophase transition dates introduced by monitoring in discrete temporal bouts.

Specific products derived from this sampling include the observed phenophase status (whether or not a phenophase is occurring) and the intensity of phenophases for individuals in which phenophase status = ‘yes’. Phenophases reported are derived from the USA National Phenology Network (USA-NPN) categories. The number of phenophases observed varies by growth form and ranges from 1 phenophase (cactus) to 7 phenophases (semi-evergreen broadleaf). In this tutorial we will focus only on the state of the phenophase, not the phenophase intensity data.

Phenology Transects

Plant phenology observations occur at all terrestrial NEON sites along an 800 meter square loop transect (primary) and within a 200 m x 200 m plot located within view of a canopy-level, tower-mounted phenology camera.

Diagram of a phenology transect layout, with meter layout marked. Point-level geolocations are recorded at eight reference points along the perimeter, plot-level geolocation at the plot centroid (star). Source: National Ecological Observatory Network (NEON)

Timing of Observations

At each site, there are:

  • ~50 observation bouts per year.
  • no more than 100 sampling points per phenology transect.
  • no more than 9 sampling points per phenocam plot.
  • 1 bout per year to collect annual size and disease status measurements from each sampling point.

Available Data Tables

In the downloaded data package, data are available in three main files:

  • phe_statusintensity: Plant phenophase status and intensity data
  • phe_perindividual: Geolocation and taxonomic identification for phenology plants
  • phe_perindividualperyear: recorded once a year, essentially the "metadata" about the plant: DBH, height, etc.

There are other files in each download, including a readme with information on the data product and the download; a variables file that defines the term descriptions, data types, and units; a validation file with data entry validation and parsing rules; and an XML file with machine-readable metadata.

Stack NEON Data

NEON data are delivered in a site and year-month format. When you download data, you will get a single zipped file containing a directory for each month and site that you've requested data for. Dealing with these separate tables from even one or two sites over a 12-month period can be a bit overwhelming. Luckily, NEON provides an R package, neonUtilities, that takes the unzipped download and joins the data files. The teaching data downloaded with this tutorial are already stacked. If you are working with other NEON data, please go through the tutorial to stack the data in R or in Python and then return to this tutorial.

Work with NEON Data

When we do this for phenology data we get three files, one for each data table, with all the data from your site and date range of interest.

First, we need to set up our R environment.

# install needed package (only uncomment & run if not already installed)
#install.packages("neonUtilities")
#install.packages("dplyr")
#install.packages("ggplot2")

# load needed packages
library(neonUtilities)
library(dplyr)
library(ggplot2)


options(stringsAsFactors=F) #keep strings as character type not factors

# set working directory to ensure R can find the file we wish to import and where
# we want to save our files. Be sure to move the download into your working directory!
wd <- "~/Git/data/" # Change this to match your local environment
setwd(wd)

Let's start by loading our data of interest. For this series, we'll work with data from the NEON Domain 02 sites:

  • Blandy Farm (BLAN)
  • Smithsonian Conservation Biology Institute (SCBI)
  • Smithsonian Environmental Research Center (SERC)

And we'll use data from January 2017 to December 2019. This downloads over 9MB of data. If this is too large, use a smaller date range. If you opt to do this, your figures and some output may look different later in the tutorial.

With this information, we can download our data using the neonUtilities package. If you are not using a NEON token to download your data, remove the token = Sys.getenv("NEON_TOKEN") line of code (learn more about NEON API tokens in the Using an API Token when Accessing NEON Data with neonUtilities tutorial).

If you are using the data downloaded at the start of the tutorial, use the commented out code in the second half of this code chunk.

## Two options for accessing data - programmatic or from the example dataset
# Read data from data portal 

phe <- loadByProduct(dpID = "DP1.10055.001", site=c("BLAN","SCBI","SERC"), 
										 startdate = "2017-01", enddate="2019-12", 
										 token = Sys.getenv("NEON_TOKEN"),
										 check.size = F) 

## API token was not recognized. Public rate limit applied.
## Finding available files
## Downloading files totaling approximately 7.985319 MB
## Downloading 95 files
## Unpacking zip files using 1 cores.
## Stacking operation across a single core.
## Stacking table phe_perindividual
## Stacking table phe_statusintensity
## Stacking table phe_perindividualperyear
## Copied the most recent publication of validation file to /stackedFiles
## Copied the most recent publication of categoricalCodes file to /stackedFiles
## Copied the most recent publication of variable definition file to /stackedFiles
## Finished: Stacked 3 data tables and 3 metadata tables!
## Stacking took 1.46806 secs

# if you aren't sure you can handle the data file size use check.size = T. 

# save dataframes from the downloaded list
ind <- phe$phe_perindividual  #individual information
status <- phe$phe_statusintensity  #status & intensity info


##If choosing to use example dataset downloaded from this tutorial: 

# Stack multiple files within the downloaded phenology data
#stackByTable("NEON-pheno-temp-timeseries_v2/filesToStack10055", folder = T)

# read in data - readTableNEON uses the variables file to assign the correct
# data type for each variable
#ind <- readTableNEON('NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/phe_perindividual.csv', 'NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/variables_10055.csv')

#status <- readTableNEON('NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/phe_statusintensity.csv', 'NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/variables_10055.csv')

Let's explore the data, starting with what the ind dataframe looks like.

# What are the fieldnames in this dataset?
names(ind)

##  [1] "uid"                         "namedLocation"              
##  [3] "domainID"                    "siteID"                     
##  [5] "plotID"                      "decimalLatitude"            
##  [7] "decimalLongitude"            "geodeticDatum"              
##  [9] "coordinateUncertainty"       "elevation"                  
## [11] "elevationUncertainty"        "subtypeSpecification"       
## [13] "transectMeter"               "directionFromTransect"      
## [15] "ninetyDegreeDistance"        "sampleLatitude"             
## [17] "sampleLongitude"             "sampleGeodeticDatum"        
## [19] "sampleCoordinateUncertainty" "sampleElevation"            
## [21] "sampleElevationUncertainty"  "date"                       
## [23] "editedDate"                  "individualID"               
## [25] "taxonID"                     "scientificName"             
## [27] "identificationQualifier"     "taxonRank"                  
## [29] "nativeStatusCode"            "growthForm"                 
## [31] "vstTag"                      "samplingProtocolVersion"    
## [33] "measuredBy"                  "identifiedBy"               
## [35] "recordedBy"                  "remarks"                    
## [37] "dataQF"                      "publicationDate"            
## [39] "release"

# Unsure of what some of the variables are? Look at the variables table!
View(phe$variables_10055)
# if using the pre-downloaded data, you need to read in the variables file 
# or open and look at it on your desktop
#var <- read.csv('NEON-pheno-temp-timeseries_v2/filesToStack10055/stackedFiles/variables_10055.csv')
#View(var)

# how many rows are in the data?
nrow(ind)

## [1] 433

# look at the first six rows of data.
#head(ind) #this is a good function to use but looks messy so not rendering it 

# look at the structure of the dataframe.
str(ind)

## 'data.frame':	433 obs. of  39 variables:
##  $ uid                        : chr  "76bf37d9-c834-43fc-a430-83d87e4b9289" "cf0239bb-2953-44a8-8fd2-051539be5727" "833e5f41-d5cb-4550-ba60-e6f000a2b1b6" "6c2e348d-d19e-4543-9d22-0527819ee964" ...
##  $ namedLocation              : chr  "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" ...
##  $ domainID                   : chr  "D02" "D02" "D02" "D02" ...
##  $ siteID                     : chr  "BLAN" "BLAN" "BLAN" "BLAN" ...
##  $ plotID                     : chr  "BLAN_061" "BLAN_061" "BLAN_061" "BLAN_061" ...
##  $ decimalLatitude            : num  39.1 39.1 39.1 39.1 39.1 ...
##  $ decimalLongitude           : num  -78.1 -78.1 -78.1 -78.1 -78.1 ...
##  $ geodeticDatum              : chr  NA NA NA NA ...
##  $ coordinateUncertainty      : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ elevation                  : num  183 183 183 183 183 183 183 183 183 183 ...
##  $ elevationUncertainty       : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ subtypeSpecification       : chr  "primary" "primary" "primary" "primary" ...
##  $ transectMeter              : num  491 464 537 15 753 506 527 305 627 501 ...
##  $ directionFromTransect      : chr  "Left" "Right" "Left" "Left" ...
##  $ ninetyDegreeDistance       : num  0.5 4 2 3 2 1 2 3 2 3 ...
##  $ sampleLatitude             : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ sampleLongitude            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ sampleGeodeticDatum        : chr  "WGS84" "WGS84" "WGS84" "WGS84" ...
##  $ sampleCoordinateUncertainty: num  NA NA NA NA NA NA NA NA NA NA ...
##  $ sampleElevation            : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ sampleElevationUncertainty : num  NA NA NA NA NA NA NA NA NA NA ...
##  $ date                       : POSIXct, format: "2016-04-20" ...
##  $ editedDate                 : POSIXct, format: "2016-05-09" ...
##  $ individualID               : chr  "NEON.PLA.D02.BLAN.06290" "NEON.PLA.D02.BLAN.06501" "NEON.PLA.D02.BLAN.06204" "NEON.PLA.D02.BLAN.06223" ...
##  $ taxonID                    : chr  "RHDA" "SOAL6" "RHDA" "LOMA6" ...
##  $ scientificName             : chr  "Rhamnus davurica Pall." "Solidago altissima L." "Rhamnus davurica Pall." "Lonicera maackii (Rupr.) Herder" ...
##  $ identificationQualifier    : chr  NA NA NA NA ...
##  $ taxonRank                  : chr  "species" "species" "species" "species" ...
##  $ nativeStatusCode           : chr  "I" "N" "I" "I" ...
##  $ growthForm                 : chr  "Deciduous broadleaf" "Forb" "Deciduous broadleaf" "Deciduous broadleaf" ...
##  $ vstTag                     : chr  NA NA NA NA ...
##  $ samplingProtocolVersion    : chr  NA "NEON.DOC.014040vJ" "NEON.DOC.014040vJ" "NEON.DOC.014040vJ" ...
##  $ measuredBy                 : chr  "jcoloso@neoninc.org" "jward@battelleecology.org" "alandes@field-ops.org" "alandes@field-ops.org" ...
##  $ identifiedBy               : chr  "shackley@neoninc.org" "llemmon@field-ops.org" "llemmon@field-ops.org" "llemmon@field-ops.org" ...
##  $ recordedBy                 : chr  "shackley@neoninc.org" NA NA NA ...
##  $ remarks                    : chr  "Nearly dead shaded out" "no entry" "no entry" "no entry" ...
##  $ dataQF                     : chr  NA NA NA NA ...
##  $ publicationDate            : chr  "20201218T103411Z" "20201218T103411Z" "20201218T103411Z" "20201218T103411Z" ...
##  $ release                    : chr  "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" ...

Notice that the neonUtilities package reads the data type from the variables file and then automatically converts the data to the correct data type in R, including date fields.

(Note that if you first opened your data file in Excel, you might see 06/14/2014 as the format instead of 2014-06-14. Excel can do some ~~weird~~ interesting things to dates.)

Phenology status

Now let's look at the status data.

# What variables are included in this dataset?
names(status)

##  [1] "uid"                           "namedLocation"                
##  [3] "domainID"                      "siteID"                       
##  [5] "plotID"                        "date"                         
##  [7] "editedDate"                    "dayOfYear"                    
##  [9] "individualID"                  "phenophaseName"               
## [11] "phenophaseStatus"              "phenophaseIntensityDefinition"
## [13] "phenophaseIntensity"           "samplingProtocolVersion"      
## [15] "measuredBy"                    "recordedBy"                   
## [17] "remarks"                       "dataQF"                       
## [19] "publicationDate"               "release"

nrow(status)

## [1] 219357

#head(status)   #this is a good function to use but looks messy so not rendering it 
str(status)

## 'data.frame':	219357 obs. of  20 variables:
##  $ uid                          : chr  "b69ada55-41d1-41c7-9031-149c54de51f9" "9be6f7ad-4422-40ac-ba7f-e32e0184782d" "58e7aeaf-163c-4ea2-ad75-db79a580f2f8" "efe7ca02-d09e-4964-b35d-aebdac8f3efb" ...
##  $ namedLocation                : chr  "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" "BLAN_061.phenology.phe" ...
##  $ domainID                     : chr  "D02" "D02" "D02" "D02" ...
##  $ siteID                       : chr  "BLAN" "BLAN" "BLAN" "BLAN" ...
##  $ plotID                       : chr  "BLAN_061" "BLAN_061" "BLAN_061" "BLAN_061" ...
##  $ date                         : POSIXct, format: "2017-02-24" ...
##  $ editedDate                   : POSIXct, format: "2017-03-31" ...
##  $ dayOfYear                    : num  55 55 55 55 55 55 55 55 55 55 ...
##  $ individualID                 : chr  "NEON.PLA.D02.BLAN.06229" "NEON.PLA.D02.BLAN.06226" "NEON.PLA.D02.BLAN.06222" "NEON.PLA.D02.BLAN.06223" ...
##  $ phenophaseName               : chr  "Leaves" "Leaves" "Leaves" "Leaves" ...
##  $ phenophaseStatus             : chr  "no" "no" "no" "no" ...
##  $ phenophaseIntensityDefinition: chr  NA NA NA NA ...
##  $ phenophaseIntensity          : chr  NA NA NA NA ...
##  $ samplingProtocolVersion      : chr  NA NA NA NA ...
##  $ measuredBy                   : chr  "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" ...
##  $ recordedBy                   : chr  "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" "llemmon@neoninc.org" ...
##  $ remarks                      : chr  NA NA NA NA ...
##  $ dataQF                       : chr  "legacyData" "legacyData" "legacyData" "legacyData" ...
##  $ publicationDate              : chr  "20201217T203824Z" "20201217T203824Z" "20201217T203824Z" "20201217T203824Z" ...
##  $ release                      : chr  "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" "RELEASE-2021" ...

# date range
min(status$date)

## [1] "2017-02-24 GMT"

max(status$date)

## [1] "2019-12-12 GMT"

Clean up the Data

  • remove duplicates (full rows)
  • convert to date format
  • retain only the most recent editedDate in the perIndividual and status table.

Remove Duplicates

The individual table (ind) is included in each site-by-month file. As a result, when all the tables are stacked, there are many duplicates.

Let's remove any duplicates that exist.

# drop UID as that will be unique for duplicate records
ind_noUID <- select(ind, -(uid))

status_noUID <- select(status, -(uid))

# remove duplicates
## expect many

ind_noD <- distinct(ind_noUID)
nrow(ind_noD)

## [1] 433

status_noD<-distinct(status_noUID)
nrow(status_noD)

## [1] 216837

Variable Overlap between Tables

From the initial inspection of the data we can see there is overlap in variable names between the two tables.

Let's see what they are.

# where is there an intersection of names
intersect(names(status_noD), names(ind_noD))

##  [1] "namedLocation"           "domainID"               
##  [3] "siteID"                  "plotID"                 
##  [5] "date"                    "editedDate"             
##  [7] "individualID"            "samplingProtocolVersion"
##  [9] "measuredBy"              "recordedBy"             
## [11] "remarks"                 "dataQF"                 
## [13] "publicationDate"         "release"

There are several fields that overlap between the datasets. Some of these are expected to be the same and will be what we join on.

However, some of these will have different values in each table. We want to keep those distinct values and not join on them. Therefore, we can rename these fields before joining:

  • date
  • editedDate
  • measuredBy
  • recordedBy
  • samplingProtocolVersion
  • remarks
  • dataQF
  • publicationDate

Now we want to rename the variables that would have duplicate names. We can rename all the variables in the status object to have "Stat" at the end of the variable name.

# in Status table rename like columns 
status_noD <- rename(status_noD, dateStat=date, 
										 editedDateStat=editedDate, measuredByStat=measuredBy, 
										 recordedByStat=recordedBy, 
										 samplingProtocolVersionStat=samplingProtocolVersion, 
										 remarksStat=remarks, dataQFStat=dataQF, 
										 publicationDateStat=publicationDate)

Filter to last editedDate

The individual (ind) table contains a record for every instance in which any of the location or taxonomy data of an individual was updated. Therefore, there are many rows for some individuals. We only want the most recent editedDate for each individual in ind.

# retain only the max of the date for each individualID
ind_last <- ind_noD %>%
	group_by(individualID) %>%
	filter(editedDate==max(editedDate))

# oh wait, duplicate dates, retain only the most recent editedDate
ind_lastnoD <- ind_last %>%
	group_by(editedDate, individualID) %>%
	filter(row_number()==1)

Join Dataframes

Now we can join the two data frames on all the variables with the same name. We use a left_join() from the dplyr package because we want to match all the rows from the "left" (first) dataframe to any rows that also occur in the "right" (second) dataframe.

Check out RStudio's data wrangling (dplyr/tidyr) cheatsheet for other types of joins.

# Create a new dataframe "phe_ind" with all the data from status and some from ind_lastnoD
phe_ind <- left_join(status_noD, ind_lastnoD)

## Joining, by = c("namedLocation", "domainID", "siteID", "plotID", "individualID", "release")

Now that we have clean datasets we can begin looking into our particular data to address our research question: do plants show patterns of change in phenophase across seasons?

Patterns in Phenophase

From our larger dataset (several sites, species, phenophases), let's create a dataframe with only the data from a single site, species, and phenophase and call it phe_1sp.

Select Site(s) of Interest

To do this, we'll first select our site of interest. Note how we set this up with an object that is our site of interest. This will allow us to more easily change which site or sites if we want to adapt our code later.

# set site of interest
siteOfInterest <- "SCBI"

# use filter to select only the site of Interest 
## using %in% allows one to add a vector if you want more than one site. 
## could also do it with == instead of %in%, but == won't work with vectors

phe_1st <- filter(phe_ind, siteID %in% siteOfInterest)

Select Species of Interest

Now we may only want to view a single species or a set of species. Let's first look at the species that are present in our data. We could do this just by looking at the taxonID field, which gives the four-letter USDA plant code for each species. But if we don't know all the plant codes, we can get a bit fancier and view both the code and the scientific name together.

# see which species are present - taxon ID only
unique(phe_1st$taxonID)

## [1] "JUNI" "MIVI" "LITU"

# or see which species are present with taxon ID + species name
unique(paste(phe_1st$taxonID, phe_1st$scientificName, sep=' - ')) 

## [1] "JUNI - Juglans nigra L."                      
## [2] "MIVI - Microstegium vimineum (Trin.) A. Camus"
## [3] "LITU - Liriodendron tulipifera L."

For now, let's choose only the flowering tree Liriodendron tulipifera (LITU). By writing it this way, we could also add a list of species to the speciesOfInterest object to select for multiple species.

speciesOfInterest <- "LITU"

#subset to just "LITU"
# here just use == but could also use %in%
phe_1sp <- filter(phe_1st, taxonID==speciesOfInterest)

# check that it worked
unique(phe_1sp$taxonID)

## [1] "LITU"

Select Phenophase of Interest

And, perhaps a single phenophase.

# see which phenophases are present
unique(phe_1sp$phenophaseName)

## [1] "Open flowers"         "Breaking leaf buds"  
## [3] "Colored leaves"       "Increasing leaf size"
## [5] "Falling leaves"       "Leaves"

phenophaseOfInterest <- "Leaves"

#subset to just the phenosphase of interest 
phe_1sp <- filter(phe_1sp, phenophaseName %in% phenophaseOfInterest)

# check that it worked
unique(phe_1sp$phenophaseName)

## [1] "Leaves"

Select only Primary Plots

NEON plant phenology observations are collected along two types of plots.

  • Primary plots: an 800 meter square phenology loop transect
  • Phenocam plots: a 200 m x 200 m plot located within view of a canopy level, tower-mounted, phenology camera

In the data, these plots are differentiated by the subtypeSpecification. Depending on your question you may want to use only one or both of these plot types. For this activity, we're going to only look at the primary plots.

**Data Tip:** How do I learn this on my own? Read the Data Product User Guide and use the variables files included with the data download to find the corresponding variable names.

# what plots are present?
unique(phe_1sp$subtypeSpecification)

## [1] "primary"  "phenocam"

# filter
phe_1spPrimary <- filter(phe_1sp, subtypeSpecification == 'primary')

# check that it worked
unique(phe_1spPrimary$subtypeSpecification)

## [1] "primary"

Total in Phenophase of Interest

The phenophaseStatus is recorded as "yes" or "no": whether or not the individual is in that phenophase. The phenophaseIntensity values are categories for how much of the individual is in that state. For now, we will stick with phenophaseStatus.

We can now calculate the total number of individuals with that status. We use n_distinct(individualID) to count the individuals (and not the records) in case there are duplicate records for an individual.

But later on we'll also want to calculate the percent of the observed individuals in the "Leaves" status; therefore, we also add a step here to retain the sample size so that we can calculate percentages later.

Here we use pipes %>% from the dplyr package to "pass" objects on to the next function.

# Calculate sample size for later use
sampSize <- phe_1spPrimary %>%
  group_by(dateStat) %>%
  summarise(numInd = n_distinct(individualID))

# Total in status by day for distinct individuals
inStat <- phe_1spPrimary %>%
  group_by(dateStat, phenophaseStatus) %>%
  summarise(countYes = n_distinct(individualID))

## `summarise()` has grouped output by 'dateStat'. You can override using the `.groups` argument.
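The message above is informational, not an error. If your dplyr version supports the `.groups` argument (dplyr 1.0.0 or later), you can set it yourself to drop the grouping and suppress the message; a sketch producing the same summary:

# same summary with the grouping dropped explicitly (no message)
inStat <- phe_1spPrimary %>%
  group_by(dateStat, phenophaseStatus) %>%
  summarise(countYes = n_distinct(individualID), .groups = "drop")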

inStat <- full_join(sampSize, inStat, by="dateStat")

# Retain only Yes
inStat_T <- filter(inStat, phenophaseStatus %in% "yes")

# check that it worked
unique(inStat_T$phenophaseStatus)

## [1] "yes"

Now that we have the data we can plot it.

Plot with ggplot

The ggplot() function within the ggplot2 package gives us considerable control over plot appearance. Three basic elements are needed for ggplot() to work:

  1. The data_frame: containing the variables that we wish to plot,
  2. aes (aesthetics): which denotes which variables will map to the x-, y- (and other) axes,
  3. geom_XXXX (geometry): which defines the data's graphical representation (e.g. points (geom_point), bars (geom_bar), lines (geom_line), etc).

The syntax begins with the base statement that includes the data_frame (inStat_T) and associated x (dateStat) and y (countYes) variables to be plotted:

ggplot(inStat_T, aes(dateStat, countYes))

**Data Tip:** For a more detailed introduction to using `ggplot()`, visit *Time Series 05: Plot Time Series with ggplot2 in R* tutorial.

Bar Plots with ggplot

To successfully plot, the last piece that is needed is the geometry type. To create a bar plot, we set the geom element to geom_bar().

The default setting for a ggplot bar plot - geom_bar() - is a histogram designated by stat="bin". However, in this case, we want to plot count values. We can use geom_bar(stat="identity") to force ggplot to plot actual values.

# plot number of individuals in leaf
phenoPlot <- ggplot(inStat_T, aes(dateStat, countYes)) +
    geom_bar(stat="identity", na.rm = TRUE) 

phenoPlot

Bar plot showing the count of Liriodendron tulipifera (LITU) individuals from January 2017 through December 2019 at the Smithsonian Conservation Biology Institute (SCBI). Counts represent individuals that were recorded as a 'yes' for the phenophase of interest, 'Leaves', and were from the primary plots.
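Note that ggplot2 also provides geom_col(), which is shorthand for geom_bar(stat = "identity"). A sketch producing the same bars:

# equivalent plot using geom_col(), shorthand for geom_bar(stat = "identity")
phenoPlot <- ggplot(inStat_T, aes(dateStat, countYes)) +
    geom_col(na.rm = TRUE)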

# Now let's make the plot look a bit more presentable
phenoPlot <- ggplot(inStat_T, aes(dateStat, countYes)) +
    geom_bar(stat="identity", na.rm = TRUE) +
    ggtitle("Total Individuals in Leaf") +
    xlab("Date") + ylab("Number of Individuals") +
    theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
    theme(text = element_text(size=18))

phenoPlot

Bar plot showing the count of Liriodendron tulipifera (LITU) individuals from January 2017 through December 2019 at the Smithsonian Conservation Biology Institute (SCBI). Counts represent individuals that were recorded as a 'yes' for the phenophase of interest, 'Leaves', and were from the primary plots. Axis labels and a title have been added to make the graph more presentable.

We could also convert this to a percentage and plot that.

# convert to percent
inStat_T$percent <- (inStat_T$countYes / inStat_T$numInd) * 100

# plot percent of leaves
phenoPlot_P <- ggplot(inStat_T, aes(dateStat, percent)) +
    geom_bar(stat="identity", na.rm = TRUE) +
    ggtitle("Proportion in Leaf") +
    xlab("Date") + ylab("% of Individuals") +
    theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
    theme(text = element_text(size=18))

phenoPlot_P

It can be useful to visualize the data in different ways while exploring it. Here, before plotting, we converted our count data to percentages by dividing the number of individuals with a 'yes' for the phenophase of interest, 'Leaves', by the total number of individuals observed, then multiplying by 100. The resulting bar plot shows the proportion of Liriodendron tulipifera (LITU) individuals in leaf from January 2017 through December 2019 at the Smithsonian Conservation Biology Institute (SCBI); the y-axis now ranges from 0 to 100 to reflect the percentage data.

The plots demonstrate the expected seasonal pattern: increasing leaf-out, a peak, and then a drop-off.

Drivers of Phenology

Now that we have seen that there are differences and shifts in phenophases, what drives them?

The NEON phenology measurements track sensitive and easily observed indicators of biotic responses to meteorological variability by monitoring the timing and duration of phenological stages in plant communities. Plant phenology is affected by forces such as temperature, timing and duration of pest infestations and disease outbreaks, water fluxes, nutrient budgets, carbon dynamics, and food availability and has feedbacks to trophic interactions, carbon sequestration, community composition and ecosystem function. (Quoted from the Plant Phenology Observations user guide.)

Filter by Date

In the next part of this series, we will explore temperature as a driver of phenology. Temperature data are quite large (NEON provides them in 1-minute or 30-minute intervals), so let's trim our phenology data down to a single year so that we aren't working with as large a dataset.

Let's filter to just 2018 data.

# use filter to select only the date of interest 
phe_1sp_2018 <- filter(inStat_T, dateStat >= "2018-01-01" & dateStat <= "2018-12-31")

# did it work?
range(phe_1sp_2018$dateStat)

## [1] "2018-04-13 GMT" "2018-11-20 GMT"

How does that look?

# Now let's make the plot look a bit more presentable
phenoPlot18 <- ggplot(phe_1sp_2018, aes(dateStat, countYes)) +
    geom_bar(stat="identity", na.rm = TRUE) +
    ggtitle("Total Individuals in Leaf") +
    xlab("Date") + ylab("Number of Individuals") +
    theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
    theme(text = element_text(size=18))

phenoPlot18

In the previous step, we filtered our data by date to include only observations from 2018. Plotting the filtered data gives a bar plot showing the count of Liriodendron tulipifera (LITU) individuals in leaf at the Smithsonian Conservation Biology Institute (SCBI) for 2018. Counts represent individuals that were recorded as a 'yes' for the phenophase of interest, 'Leaves', and were from the primary plots.

Now that we've filtered down to just the 2018 data from SCBI for LITU in leaf, we may want to save that subsetted data for another use. To do that you can write the data frame to a .csv file.

You do not need to follow this step if you are continuing on to the next tutorials in this series, as you already have the data frame in your environment. Of course, if you close R and then come back to it, you will need to re-load this data; instructions for that are provided in the relevant tutorials.

# Write .csv - this step is optional 
# This will write to your current working directory; change the path as desired.
write.csv(phe_1sp_2018, file="NEONpheno_LITU_Leaves_SCBI_2018.csv", row.names=F)

# If you are using the downloaded example data, this code will write it to the 
# pheno data folder. Note - this file is already a part of the download.

#write.csv(phe_1sp_2018, file="NEON-pheno-temp-timeseries_v2/NEONpheno_LITU_Leaves_SCBI_2018.csv", row.names=F)
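For a later session, a minimal sketch for re-loading the saved file; note that read.csv() reads dateStat back in as character, so we convert it to a date class (adjust the path as needed):

# re-load the saved file in a later session
# phe_1sp_2018 <- read.csv("NEONpheno_LITU_Leaves_SCBI_2018.csv")
# dates come back as character, so convert:
# phe_1sp_2018$dateStat <- as.Date(phe_1sp_2018$dateStat)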
