In this exercise, we will learn how to extract and plot a spectral profile from
a single pixel of a reflectance band in a NEON hyperspectral hdf5 file. To do
this, we will use the aop_h5refl2array function to read in and clean our h5
reflectance data, and the Python package pandas to create a dataframe for the
reflectance and associated wavelength data.
Spectral Signatures
A spectral signature is a plot of the amount of light energy reflected by an
object throughout the range of wavelengths in the electromagnetic spectrum. The
spectral signature of an object conveys useful information about its structural
and chemical composition. We can use these signatures to identify and classify
different objects from a spectral image.
Vegetation has a unique spectral signature characterized by high reflectance in
the near infrared wavelengths, and much lower reflectance in the green portion
of the visible spectrum. We can extract reflectance values in the NIR and visible
regions from hyperspectral data in order to map vegetation on the earth's
surface. You can also use spectral curves as a proxy for vegetation health. We
will explore this concept more in the next lesson, where we will calculate
vegetation indices.
Example spectra of water, green grass, dry grass, and soil. Source: National Ecological Observatory Network (NEON)
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings('ignore') #don't display warnings
Import the hyperspectral functions file that you downloaded into the variable neon_hs (for neon hyperspectral):
import os
# Note: you will need to update this filepath according to your local machine
os.chdir("/Users/olearyd/Git/data/")
import neon_aop_hyperspectral as neon_hs
# Note: you will need to update this filepath according to your local machine
sercRefl, sercRefl_md = neon_hs.aop_h5refl2array('/Users/olearyd/Git/data/NEON_D02_SERC_DP3_368000_4306000_reflectance.h5')
Optionally, you can view the data stored in the metadata dictionary, and print the minimum, maximum, and mean reflectance values in the tile. To handle any nan values, use NumPy's nanmin, nanmax, and nanmean functions.
for item in sorted(sercRefl_md):
    print(item + ':', sercRefl_md[item])
print('SERC Tile Reflectance Stats:')
print('min:',np.nanmin(sercRefl))
print('max:',round(np.nanmax(sercRefl),2))
print('mean:',round(np.nanmean(sercRefl),2))
For reference, plot the red band of the tile, using slicing to pull out a single band, and the plot_aop_refl function:
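The plotting call itself is not shown above; the following is a minimal sketch. The band index (56, roughly in the red region of the spectrum), the 'spatial extent' metadata key, and the plot_aop_refl keyword arguments are assumptions about the downloaded neon_aop_hyperspectral module, so adjust them to match your copy.
# select a band in the red region of the spectrum (band choice is an assumption)
sercb56 = sercRefl[:,:,55]
# plot it; the keyword arguments below assume plot_aop_refl's signature
neon_hs.plot_aop_refl(sercb56,
                      sercRefl_md['spatial extent'],
                      colorlimit=(0,0.3),
                      title='SERC Tile Band 56 (Red)',
                      cmap_title='Reflectance',
                      colormap='Greys_r')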
We can use pandas to create a dataframe containing the wavelength and reflectance values for a single pixel - in this example, we'll look at the center pixel of the tile (500,500).
import pandas as pd
To extract all reflectance values from a single pixel, use slicing as we did before to select a single band, but now specify the (y,x) pixel location and select all bands (using :).
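A minimal sketch of that extraction, assuming the cleaned reflectance array is indexed as (y,x,band) and the wavelength values are stored under the 'wavelength' metadata key (both consistent with the code later in this tutorial):
# reflectance spectrum for the center pixel (y=500, x=500), all bands
serc_pixel_df = pd.DataFrame()
serc_pixel_df['reflectance'] = sercRefl[500,500,:]
serc_pixel_df['wavelengths'] = sercRefl_md['wavelength']
print(serc_pixel_df.head())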
We can now plot the spectra stored in this dataframe. pandas has a built-in plotting routine, which can be called by appending .plot() to the dataframe.
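For example (this mirrors the scatter-plot call used again further down):
serc_pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none')
plt.title('Spectral Signature for SERC Pixel (500,500)')
plt.xlabel("Wavelength, nm"); plt.ylabel("Reflectance")
plt.grid(True)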
We can see from the spectral profile above that there are spikes in reflectance around ~1400nm and ~1800nm. These result from water vapor, which absorbs light between wavelengths 1340-1445 nm and 1790-1955 nm. The atmospheric correction that converts radiance to reflectance subsequently results in a spike at these two bands. The wavelengths of these water vapor bands are stored in the reflectance attributes, which are saved in the reflectance metadata dictionary created with aop_h5refl2array:
bbw1 = sercRefl_md['bad band window1'];
bbw2 = sercRefl_md['bad band window2'];
print('Bad Band Window 1:',bbw1)
print('Bad Band Window 2:',bbw2)
Bad Band Window 1: [1340 1445]
Bad Band Window 2: [1790 1955]
Below we repeat the plot we made above, but this time draw in the edges of the water vapor band windows that we need to remove.
serc_pixel_df.plot(x='wavelengths',y='reflectance',kind='scatter',edgecolor='none');
plt.title('Spectral Signature for SERC Pixel (500,500)')
ax1 = plt.gca(); ax1.grid(True)
ax1.set_xlim([np.min(serc_pixel_df['wavelengths']),np.max(serc_pixel_df['wavelengths'])]);
ax1.set_ylim(0,0.5)
ax1.set_xlabel("Wavelength, nm"); ax1.set_ylabel("Reflectance")
#Add in red dotted lines to show boundaries of bad band windows:
ax1.plot((1340,1340),(0,1.5), 'r--')
ax1.plot((1445,1445),(0,1.5), 'r--')
ax1.plot((1790,1790),(0,1.5), 'r--')
ax1.plot((1955,1955),(0,1.5), 'r--')
We can now set these bad band windows to nan, along with the last 10 bands, which are also often noisy (as seen in the spectral profile plotted above). First make a copy of the wavelengths so that the original metadata doesn't change.
import copy
w = copy.copy(sercRefl_md['wavelength']) #make a copy to deal with the mutable data type
w[((w >= 1340) & (w <= 1445)) | ((w >= 1790) & (w <= 1955))]=np.nan #can also use bbw1[0] or bbw1[1] to avoid hard-coding in
w[-10:]=np.nan; # the last 10 bands sometimes have noise - best to eliminate
#print(w) #optionally print wavelength values to show that -9999 values are replaced with nan
Interactive Spectra Visualization
Finally, we can create a widget to interactively view the spectra of different pixels along the reflectance tile. Run the two cells below, and interact with them to gain a better sense of what the spectra look like for different materials on the ground.
#define index corresponding to nan values:
nan_ind = np.argwhere(np.isnan(w))
#define refl_band, refl, and metadata
refl_band = sercb56
refl = copy.copy(sercRefl)
metadata = copy.copy(sercRefl_md)
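The widget cell itself is not shown above; the sketch below is a simplified version that assumes the ipywidgets package is installed. It masks the bad bands defined by nan_ind and plots the spectrum for whichever pixel the sliders select.
from ipywidgets import interact

def plot_spectra(x, y):
    '''Plot the spectral signature of the pixel at array indices (y, x),
    masking the water-vapor bands and the noisy last 10 bands.'''
    reflectance = copy.copy(refl[y, x, :]).astype(float)
    reflectance[nan_ind] = np.nan  # mask the bad bands defined above
    plt.figure(figsize=(8, 5))
    plt.plot(w, reflectance)
    plt.xlabel('Wavelength, nm'); plt.ylabel('Reflectance')
    plt.ylim(0, 0.5)
    plt.title(f'Spectral Signature for Pixel ({y},{x})')

interact(plot_spectra, x=(0, refl.shape[1]-1, 10), y=(0, refl.shape[0]-1, 10));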
This tutorial introduces NEON RGB camera images (Data Product DP3.30010.001) and uses the Python package rasterio to read in and plot the camera data in Python. In this lesson, we will read in an RGB camera tile collected over the NEON Smithsonian Environmental Research Center (SERC) site and plot the multiband image, as well as the individual bands. This lesson was adapted from the rasterio plotting documentation.
Objectives
After completing this tutorial, you will be able to:
Plot a NEON RGB camera geotiff tile in Python using rasterio
Package Requirements
This tutorial was run in Python version 3.9, using the following packages:
rasterio
matplotlib
Download the Data
Download the NEON
camera (RGB) imagery tile
collected over the Smithsonian Environmental Research Station (SERC) NEON field site in 2021. Move this data to a desired folder on your local workstation. You will need to know the file path to this data.
You don't have to download from the link above; the tutorial will demonstrate how to download the data directly from Python into your working directory, but we recommend re-organizing in a way that makes sense for you.
Background
As part of the
NEON Airborne Observation Platform's
suite of remote sensing instruments, the digital camera produces high-resolution (<= 10 cm) photographs of the earth’s surface. The camera records light energy that has reflected off the ground in the visible portion (red, green and blue) of the electromagnetic spectrum. Often the camera images are used to provide context for the hyperspectral and LiDAR data, but they can also be used for research purposes in their own right. One such example is the tree-crown mapping work by Weinstein et al. - see the links below for more information!
Reference: Ben G Weinstein, Sergio Marconi, Stephanie A Bohlman, Alina Zare, Aditya Singh, Sarah J Graves, Ethan P White (2021) A remote sensing derived data set of 100 million individual tree crowns for the National Ecological Observatory Network eLife 10:e62922. https://doi.org/10.7554/eLife.62922
In this lesson we will keep it simple and show how to read in and plot a single camera file (1km x 1km ortho-mosaicked tile) - a first step in any research incorporating the AOP camera data (in Python).
Import required packages
First let's import the packages that we'll be using in this lesson.
import os
import requests
import rasterio as rio
from rasterio.plot import show, show_hist
import matplotlib.pyplot as plt
Next, let's download a camera file. For this tutorial, we will use the requests package to download a raster file from the public link where the data is stored. For simplicity, we will show how to download to a data folder in the working directory. You can move the data to a different folder, but if you do that, be sure to update the path to your data accordingly.
def download_url(url, download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    # write the downloaded content to disk, closing the file when done
    with open(os.path.join(download_dir, filename), 'wb') as file_object:
        file_object.write(r.content)
# public url where the RGB camera tile is stored
rgb_url = "https://storage.googleapis.com/neon-aop-products/2021/FullSite/D02/2021_SERC_5/L3/Camera/Mosaic/2021_SERC_5_368000_4306000_image.tif"
# download the camera tile to a ./data subfolder in your working directory
download_url(rgb_url,'./data')
# display the contents in the ./data folder to confirm the download completed
os.listdir('./data')
Open the Camera RGB data with rasterio
We can open and read this RGB data that we downloaded in Python using the rasterio.open function:
# read the RGB file (including the full path) to the variable rgb_dataset
rgb_name = rgb_url.split('/')[-1]
rgb_file = os.path.join("./data",rgb_name)
rgb_dataset = rio.open(rgb_file)
Let's look at a few properties of this dataset to get a sense of the information stored in the rasterio object:
print('rgb_dataset:\n',rgb_dataset)
print('\nshape:\n',rgb_dataset.shape)
print('\nspatial extent:\n',rgb_dataset.bounds)
print('\ncoordinate information (crs):\n',rgb_dataset.crs)
Unlike most other AOP data products, camera imagery is generated at 10cm resolution, so each 1km x 1km tile contains 10,000 x 10,000 pixels (other 1m resolution data products have 1000 x 1000 pixels per tile, where each pixel represents 1 meter).
Plot the RGB multiband image
We can use rasterio's built-in function show to plot the RGB tile.
show(rgb_dataset);
Plot each band of the RGB image
We can also plot each band (red, green, and blue) individually as follows:
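A minimal sketch of one way to do this, passing a (dataset, band index) tuple to show, as in the rasterio plotting documentation:
# plot the red, green, and blue bands side by side
fig, (ax_r, ax_g, ax_b) = plt.subplots(1, 3, figsize=(18, 6))
show((rgb_dataset, 1), ax=ax_r, cmap='Reds', title='Red band')
show((rgb_dataset, 2), ax=ax_g, cmap='Greens', title='Green band')
show((rgb_dataset, 3), ax=ax_b, cmap='Blues', title='Blue band')
plt.show()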
That's all for this example! Most of the other AOP raster data products are single-band images, but rasterio is a handy Python package for working with any geotiff files. You can download and visualize the lidar- and spectrometer-derived raster images similarly.
The instructions below will guide you through using the neonUtilities R package
in Python, via the rpy2 package. rpy2 creates an R environment you can interact
with from Python.
The assumption in this tutorial is that you want to work with NEON data in
Python, but you want to use the handy download and merge functions provided by
the neonUtilities R package to access and format the data for analysis. If
you want to do your analyses in R, use one of the R-based tutorials linked
below.
For more information about the neonUtilities package, and instructions for
running it in R directly, see the Download and Explore tutorial
and/or the neonUtilities tutorial.
Install and set up
Before starting, you will need:
Python 3 installed. It is probably possible to use this workflow in Python 2,
but these instructions were developed and tested using 3.7.4.
R installed. You don't need to have ever used it directly. We wrote this
tutorial using R 4.1.1, but most other recent versions should also work.
rpy2 installed. Run the line below from the command line; it won't run within
a Python script. See Python documentation for more information on how to install packages.
rpy2 often has install problems on Windows, see "Windows Users" section below if
you are running Windows.
You may need to install pip before installing rpy2, if you don't have it
installed already.
From the command line, run pip install rpy2
Windows users
The rpy2 package was built for Mac, and doesn't always work smoothly on Windows.
If you have trouble with the install, try these steps.
Add C:\Program Files\R\R-3.3.1\bin\x64 (adjusting for your installed R version) to the Windows Environment Variable “Path”
Pick the correct version. At the download page the portion of the file name
with cp## relates to the Python version. e.g., rpy2-2.9.2-cp36-cp36m-win_amd64.whl
is the correct download when 2.9.2 is the latest version of rpy2 and you are
running Python 3.6 on 64-bit Windows (amd64).
Save the whl file, navigate to it in Windows, then run pip directly on the file
as follows: pip install rpy2-2.9.2-cp36-cp36m-win_amd64.whl
Add an R_HOME Windows environment variable with the path C:\Program Files\R\R-3.4.3
(or whichever version you are running)
Add an R_USER Windows environment variable with the path C:\Users\yourUserName\AppData\Local\Continuum\Anaconda3\Lib\site-packages\rpy2
Additional troubleshooting
If you're still having trouble getting R to communicate with Python, you can try
pointing Python directly to your R installation path.
Run R.home() in R.
Run import os in Python.
Run os.environ['R_HOME'] = '/Library/Frameworks/R.framework/Resources' in Python, substituting the file path you found in step 1.
Load packages
Now open up your Python interface of choice (Jupyter notebook, Spyder, etc) and import rpy2 into your session.
import rpy2
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr
Load the base R functionality, using the rpy2 function importr().
base = importr('base')
utils = importr('utils')
stats = importr('stats')
The basic syntax for running R code via rpy2 is package.function(inputs),
where package is the R package in use, function is the name of the function
within the R package, and inputs are the inputs to the function. In other
words, it's very similar to running code in R as package::function(inputs).
For example:
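A minimal illustration, calling R's base round() function from Python (the values here are arbitrary):
# round an R numeric vector to 2 decimal places using base::round()
vec = robjects.FloatVector([2.71828, 3.14159])
print(base.round(vec, 2))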
Suppress R warnings. This step can be skipped, but will result in messages
getting passed through from R that Python will interpret as warnings.
from rpy2.rinterface_lib.callbacks import logger as rpy2_logger
import logging
rpy2_logger.setLevel(logging.ERROR)
Install the neonUtilities R package. Here I've specified the RStudio
CRAN mirror as the source, but you can use a different one if you
prefer.
You only need to do this step once to use the package, but we update
the neonUtilities package every few months, so reinstalling
periodically is recommended.
This installation step carries out the same steps in the same places on
your hard drive that it would if run in R directly, so if you use R
regularly and have already installed neonUtilities on your machine,
you can skip this step. And be aware, this also means if you install
other packages, or new versions of packages, via rpy2, they'll
be updated the next time you use R, too.
The semicolon at the end of the line (here, and in some other function
calls below) can be omitted. It suppresses a note indicating the output
of the function is null. The output is null because these functions download
or modify files on your local drive, but none of the data are read into the
Python or R environments.
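The install call itself is not shown above; a minimal sketch, using the RStudio CRAN mirror (the download path in the output below will differ on your machine):
# install the neonUtilities R package from within Python
utils.install_packages('neonUtilities', repos='https://cran.rstudio.com/');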
The downloaded binary packages are in
/var/folders/_k/gbjn452j1h3fk7880d5ppkx1_9xf6m/T//Rtmpl5OpMA/downloaded_packages
Now load the neonUtilities package. This does need to be run every time
you use the code; if you're familiar with R, importr() is roughly
equivalent to the library() function in R.
neonUtilities = importr('neonUtilities')
Join data files: stackByTable()
The function stackByTable() in neonUtilities merges the monthly,
site-level files the NEON Data Portal
provides. Start by downloading the dataset you're interested in from the
Portal. Here, we'll assume you've downloaded IR Biological Temperature.
It will download as a single zip file named NEON_temp-bio.zip. Note the
file path it's saved to and proceed.
Run the stackByTable() function to stack the data. It requires only one
input, the path to the zip file you downloaded from the NEON Data Portal.
Modify the file path in the code below to match the path on your machine.
For additional, optional inputs to stackByTable(), see the R tutorial
for neonUtilities.
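A sketch of the stacking call via rpy2; the file path is a placeholder, so substitute the location of NEON_temp-bio.zip on your machine:
# stack the monthly files inside the downloaded zip (path is hypothetical)
neonUtilities.stackByTable(filepath='/Users/Shared/NEON_temp-bio.zip');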
Stacking operation across a single core.
Stacking table IRBT_1_minute
Stacking table IRBT_30_minute
Merged the most recent publication of sensor position files for each site and saved to /stackedFiles
Copied the most recent publication of variable definition file to /stackedFiles
Finished: Stacked 2 data tables and 3 metadata tables!
Stacking took 2.019079 secs
All unzipped monthly data folders have been removed.
Check the folder containing the original zip file from the Data Portal;
you should now have a subfolder containing the unzipped and stacked files
called stackedFiles. To import these data to Python, skip ahead to the
"Read downloaded and stacked files into Python" section; to learn how to
use neonUtilities to download data, proceed to the next section.
Download files to be stacked: zipsByProduct()
The function zipsByProduct() uses the NEON API to programmatically download
data files for a given product. The files downloaded by zipsByProduct()
can then be fed into stackByTable().
Run the downloader with these inputs: a data product ID (DPID), a set of
4-letter site IDs (or "all" for all sites), a download package (either
basic or expanded), the filepath to download the data to, and an
indicator to check the size of your download before proceeding or not
(TRUE/FALSE).
The DPID is the data product identifier, and can be found in the data product
box on the NEON Explore Data page.
Here we'll download Breeding landbird point counts, DP1.10003.001.
There are two differences relative to running zipsByProduct() in R directly:
check.size becomes check_size, because dots have programmatic meaning
in Python
TRUE (or T) becomes 'TRUE' because the values TRUE and FALSE don't
have special meaning in Python the way they do in R, so it interprets them
as variables if they're unquoted.
check_size='TRUE' does not work correctly in the Python environment. In R,
it estimates the size of the download and asks you to confirm before
proceeding, and the interactive question and answer don't work correctly
outside R. Set check_size='FALSE' to avoid this problem, but be thoughtful
about the size of your query since it will proceed to download without checking.
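A sketch of the download and stacking calls; the site ('HARV') is a hypothetical example, and the savepath matches the filesToStack10003 folder read in later in this tutorial:
# download bird point count data (DP1.10003.001), then stack the files
neonUtilities.zipsByProduct(dpID='DP1.10003.001',
                            site='HARV',
                            package='basic',
                            check_size='FALSE',
                            savepath='/Users/Shared');
neonUtilities.stackByTable(filepath='/Users/Shared/filesToStack10003');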
Unpacking zip files using 1 cores.
Stacking operation across a single core.
Stacking table brd_countdata
Stacking table brd_perpoint
Copied the most recent publication of validation file to /stackedFiles
Copied the most recent publication of categoricalCodes file to /stackedFiles
Copied the most recent publication of variable definition file to /stackedFiles
Finished: Stacked 2 data tables and 4 metadata tables!
Stacking took 0.4586661 secs
All unzipped monthly data folders have been removed.
Read downloaded and stacked files into Python
We've downloaded biological temperature and bird data, and merged
the site by month files. Now let's read those data into Python so you
can proceed with analyses.
First let's take a look at what's in the output folders.
import os
os.listdir('/Users/Shared/filesToStack10003/stackedFiles/')
Each data product folder contains a set of data files and metadata files.
Here, we'll read in the data files and take a look at the contents; for
more details about the contents of NEON data files and how to interpret them,
see the Download and Explore tutorial.
There are a variety of modules and methods for reading tabular data into
Python; here we'll use the pandas module, but feel free to use your own
preferred method.
First, let's read in the two data tables in the bird data:
brd_countdata and brd_perpoint.
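For example, using pandas and the stackedFiles folder listed above:
import pandas as pd

brd_perpoint = pd.read_csv('/Users/Shared/filesToStack10003/stackedFiles/brd_perpoint.csv')
brd_countdata = pd.read_csv('/Users/Shared/filesToStack10003/stackedFiles/brd_countdata.csv')

print(brd_countdata.head())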
The function byFileAOP() uses the NEON API
to programmatically download data files for remote sensing (AOP) data
products. These files cannot be stacked by stackByTable() because they
are not tabular data. The function simply creates a folder in your working
directory and writes the files there. It preserves the folder structure
for the subproducts.
The inputs to byFileAOP() are a data product ID, a site, a year,
a filepath to save to, and an indicator to check the size of the
download before proceeding, or not. As above, set check_size="FALSE"
when working in Python. Be especially cautious about download size
when downloading AOP data, since the files are very large.
Here, we'll download Ecosystem structure (Canopy Height Model) data from
Hopbrook (HOPB) in 2017.
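A sketch of the download call; the savepath matches the folder shown in the output below:
# download the 2017 Ecosystem structure (CHM) files for Hopbrook
neonUtilities.byFileAOP(dpID='DP3.30015.001',
                        site='HOPB',
                        year='2017',
                        check_size='FALSE',
                        savepath='/Users/Shared');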
Downloading files totaling approximately 147.930656 MB
Downloading 217 files
|======================================================================| 100%
Successfully downloaded 217 files to /Users/Shared/DP3.30015.001
Let's read one tile of data into Python and view it. We'll use the
rasterio and matplotlib modules here, but as with tabular data,
there are other options available.
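A minimal sketch; the tile path below is hypothetical, so substitute one of the CHM tiles downloaded above:
import rasterio
from rasterio.plot import show

chm_file = '/Users/Shared/DP3.30015.001/NEON_D01_HOPB_DP3_718000_4709000_CHM.tif'  # hypothetical path
chm_tile = rasterio.open(chm_file)
show(chm_tile, cmap='viridis', title='HOPB Canopy Height Model tile')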
There are myriad resources out there to learn programming in R. After linking to
a tutorial on how to install R and RStudio on your computer, we then outline a
few different paths to learn R basics depending on how you enjoy learning, and
finally we include a few resources for intermediate and advanced learning.
Setting Up your Computer
Start out by installing R and, we recommend, RStudio, on your computer. RStudio
is an Interactive Development Environment (IDE) for the R program. It
is optional, but recommended when working with R. Directions
for installing can be found within the tutorial Install Git, Bash Shell, R & RStudio.
You will need administrator permissions on your computer.
Pathways to Learning the Basics of R
In-person trainings
If you prefer to learn through in-person trainings, consider local workshops
from The Carpentries Software Carpentry or Data Carpentry (generally ~$25 for a
2-day workshop), courses offered by a local college or university (prices vary),
or organize your colleagues to meet regularly to learn R together (free!).
Online interactive courses
If you prefer to learn in a semi-structured online environment, there are a wide
variety of online courses for learning R including Data Camp, Coursera, edX, and
Lynda.com. Many of these options include free introductory lessons or trial
periods as well as paid courses. We do not have personal experience with
these courses and do not recommend or specifically promote any course.
In program interactive course
Swirl
is a guided introduction to R where you code along with the instructions in R. You
get direct feedback when you type a command incorrectly. To use this package,
once you have R or RStudio open and running, use the following commands to start
the first lesson.
install.packages("swirl")
library(swirl)
swirl()
Online tutorials
If you prefer a less structured online environment, these tutorial series may be
better suited for you.
Learn R with a focus on data analysis. Beyond the basics, it covers dplyr for
data aggregation & manipulation, ggplot2 for plotting, and touches on
interacting with an SQL database. Designed to be taught by an instructor but the
materials also work for independent learning online.
This comprehensive course contains an R section. While the overall focus is on
data science skills, learning R is a portion of it (note, this is an extensive
course).
RStudio links to many other learning opportunities. Start with the 'Beginners'
learning path.
Video tutorials
A blend of having an instructor and self-paced, video tutorials may also be of
interest. New stand-alone video tutorials are out each day, so we aren’t going
to recommend a specific series. Find what works for you by searching
“R Programming video tutorials” on YouTube.
Books
Books are still a great way to learn R (and other languages). Many books are
available at local libraries (university or community) or online, if you want to
try them out before buying. Below are a few of the many, many books that data
scientists working on the NEON project have found useful.
Michael Crawley’s The R Book
is a classic that takes you from beginning steps to analyses and modelling.
Grolemund and Wickham’s R for Data Science
focuses on using R in data science applications using Hadley Wickham’s
“tidyverse”. It does assume some basic familiarity with R. Bonus: it is available
online or in book format!
(If you are completely new, they recommend starting with
Hands-on Programming with R).
Beyond the Basics
There are many intermediate and advanced courses, lessons, and tutorials linked
in the above resources. For example, the Swirl package offers intermediate and
advanced courses on specific topics, as does RStudio's list. See courses here;
development is ongoing so new courses may be added.
However, once the basics are handled, you will find that much of your learning
will happen through solving individual problems you encounter. To solve these
problems, your favorite search engine is your friend. Paste the error (without
specifics to your file/data) into the search menu and find answers from those
who have had similar questions.
For more on working with NEON data in particular, be sure to check out the other
NEON data tutorials.
This tutorial provides the basics on how to set up Docker on one's local computer
and then connect to an eddy4R Docker container in order to use the eddy4R R package.
There are no specific skills needed for this tutorial, however, you will need to
know how to access the command line tool for your operating system
(basic instructions given).
Learning Objectives
After completing this tutorial, you will be able to:
Access Docker on your local computer.
Access the eddy4R package in a RStudio Docker environment.
Things You’ll Need To Complete This Tutorial
You will need internet access and an up to date browser.
Sources
The directions on how to install Docker are heavily borrowed from the authors
of CyVerse's Container Camp's
Intro to Docker and we thank them for providing the information.
The directions for how to access eddy4R come from
Metzger, S., D. Durden, C. Sturtevant, H. Luo, N. Pingintha-durden, and T. Sachs (2017). eddy4R 0.2.0: a DevOps model for community-extensible processing and analysis of eddy-covariance data based on R, Git, Docker, and HDF5. Geoscientific Model Development 10:3189–3206. doi:
10.5194/gmd-10-3189-2017.
The eddy4R versions within the tutorial have been updated to the 1.0.0 release that accompanied the following manuscript:
Metzger, S., E. Ayres, D. Durden, C. Florian, R. Lee, C. Lunch, H. Luo, N. Pingintha-Durden, J.A. Roberti, M. SanClements, C. Sturtevant, K. Xu, and R.C. Zulueta, 2019: From NEON Field Sites to Data Portal: A Community Resource for Surface–Atmosphere Research Comes Online. Bull. Amer. Meteor. Soc., 100, 2305–2325, https://doi.org/10.1175/BAMS-D-17-0307.1.
In the tutorial below, we give the very barest of information to get Docker set
up for use with the NEON R package eddy4R. For more information on using Docker,
consider reading through the content from CyVerse's Container Camp's
Intro to Docker.
Install Docker
To work with the eddy4R–Docker image, you first need to sign up for an
account at DockerHub.
Once logged in, getting Docker up and running on your favorite operating system
(Mac/Windows/Linux) is very easy. The "getting started" guide on Docker has
detailed instructions for setting up Docker. Unless you plan on being a very
active user and developer of Docker, we recommend starting with the stable channel
(not the edge channel), as you may encounter fewer problems.
If you're using Docker for Windows make sure you have
shared your drive.
If you're using an older version of Windows or MacOS, you may need to use
Docker Machine
instead.
Test Docker installation
Once you are done installing Docker, test your Docker installation by running
the following command to make sure you are using version 1.13 or higher.
You will need an open shell window (Linux; Mac=Terminal) or the Docker
Quickstart Terminal (Windows).
docker --version
When run, you will see which version of Docker you are currently running.
Note: If you run just the word docker you should see a whole bunch of
lines showing the different options available with docker. Alternatively
you can test your installation by running the following:
docker run hello-world
Notice that the first line states that the image can't be found locally. The
next few lines are pulling the image, so if you were to run the hello-world
prompt again, it would already be local and you'd see the message start at
"Hello from Docker!".
If these steps work, you are ready to go on to access the
eddy4R-Docker image that houses the suite of eddy4R R
packages. If these steps have not worked, follow the installation
instructions a second time.
Accessing eddy4R
Download of the eddy4R–Docker image and subsequent creation of a local container
can be performed by two simple commands in an open shell (Linux; Mac = Terminal)
or the Docker Quickstart Terminal (Windows).
The first command docker login will prompt you for your DockerHub ID and password.
The second command docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD stefanmet/eddy4r:1.0.0 will
download the latest eddy4R–Docker image and start a Docker container that
utilizes port 8787 for establishing a graphical interface via web browser.
docker run: docker will perform some process on an isolated container
-d: the container will start in detached mode, which means the container
runs in the background and will print the container ID
-p: publish a container to a specified port (which follows)
8787:8787: specify which port you want to use. The default 8787:8787
is great if you are running locally. The first 4 digits are the
port on your machine, the last 4 digits are the port communicating with
RStudio on Docker. You can change the first 4 digits if you want to use a
different port on your machine, or if you are running many containers or
are on a shared network, but the last 4 digits need to be 8787.
-e PASSWORD=YOURPASSWORD: define a password environmental variable to use upon login to the Rstudio instance. YOURPASSWORD can be anything you want.
stefanmet/eddy4r:1.0.0: finally, the image (and tag) that you want to run.
Now try it.
docker login
docker run -d -p 8787:8787 -e PASSWORD=YOURPASSWORD stefanmet/eddy4r:1.0.0
This last command will run a specified release version (eddy4r:1.0.0) of the
Docker image. Alternatively you can use eddy4r:latest to get the most up-to-date
development image of eddy4r.
If you are using data stored on your local machine, rather than cloud hosting, a
physical file system location on the host computer (local/dir) can be mounted
to a file system location inside the Docker container (docker/dir). This is
achieved with the Docker run option -v local/dir:docker/dir.
Access RStudio session
Now you can access the interactive RStudio session for using eddy4r by using any
web browser and going to http://host-ip-address:8787 where host-ip-address
is the internal IP address of the Docker host. For example, if your host IP address
is 10.100.90.169 then you should type http://10.100.90.169:8787 into your browser.
To determine the IP address of your Docker host, follow the instructions below
for your operating system.
Windows
Depending on the version of Docker, older Docker Toolbox versus the newer Docker Desktop for Windows, there are different ways to get the docker machine IP address:
Docker Toolbox - Type docker-machine ip default into cmd.exe window. The output will be your local IP address for the docker machine.
Docker Desktop for Windows - Type ipconfig into cmd.exe window. The output will include either DockerNAT IPv4 address or vEthernet IPv4 address that docker uses to communicate to the internet, which in most cases will be 10.0.75.1.
Mac
Type ifconfig | grep "inet " | grep -v 127.0.0.1 into your Terminal window.
The output will be one or more local IP addresses for the docker machine. Use
the numbers after the first inet output.
Linux
Type localhost in a shell session and the local IP will be the output.
Once in the web browser you can log into this instance of the RStudio session
with the username rstudio and the password you defined as YOURPASSWORD. You are now in
an RStudio user interface with eddy4R installed and ready to use.
This tutorial provides an overview of functions in the
neonUtilities package in R and the
neonutilities package in Python. These packages provide a
toolbox of basic functionality for working with NEON data.
This tutorial is primarily an index of functions and their inputs;
for more in-depth guidance in using these functions to work with NEON
data, see the
Download
and Explore tutorial. If you are already familiar with the
neonUtilities package, and need a quick reference guide to
function inputs and notation, see the
neonUtilities
cheat sheet.
Function index
The neonUtilities/neonutilities package
contains several functions (use the R and Python tabs to see the syntax
in each language):
R
stackByTable(): Takes zip files downloaded from the
Data Portal or
downloaded by zipsByProduct(), unzips them, and joins the
monthly files by data table to create a single file per table.
zipsByProduct(): A wrapper for the
NEON
API; downloads data based on data product and site criteria. Stores
downloaded data in a format that can then be joined by
stackByTable().
loadByProduct(): Combines the functionality of
zipsByProduct(), stackByTable(), and readTableNEON(): Downloads
the specified data, stacks the files, and loads the files to the R
environment.
byFileAOP(): A wrapper for the NEON API; downloads
remote sensing data based on data product, site, and year criteria.
Preserves the file structure of the original data.
byTileAOP(): Downloads remote sensing data for the
specified data product, subset to tiles that intersect a list of
coordinates.
readTableNEON(): Reads NEON data tables into R, using
the variables file to assign R classes to each column.
getCitation(): Get a BibTeX citation for a particular
data product and release.
Python
stack_by_table(): Takes zip files downloaded from the
Data Portal or
downloaded by zips_by_product(), unzips them, and joins the
monthly files by data table to create a single file per table.
zips_by_product(): A wrapper for the
NEON
API; downloads data based on data product and site criteria. Stores
downloaded data in a format that can then be joined by
stack_by_table().
load_by_product(): Combines the functionality of
zips_by_product(), stack_by_table(), and read_table_neon():
Downloads the specified data, stacks the files, and loads the files to
the R environment.
by_file_aop(): A wrapper for the NEON API; downloads
remote sensing data based on data product, site, and year criteria.
Preserves the file structure of the original data.
by_tile_aop(): Downloads remote sensing data for the
specified data product, subset to tiles that intersect a list of
coordinates.
read_table_neon(): Reads NEON data tables into R, using
the variables file to assign R classes to each column.
get_citation(): Get a BibTeX citation for a particular
data product and release.
If you are only interested in joining data
files downloaded from the NEON Data Portal, you will only need to use
stackByTable(). Follow the instructions in the first
section of the
Download
and Explore tutorial.
Install and load packages
First, install and load the package. The installation step only needs
to be run once, and then periodically to update when new package
versions are released. The load step needs to be run every time you run
your code.
R
# install neonUtilities - can skip if already installed
install.packages("neonUtilities")

# load neonUtilities
library(neonUtilities)
Python
# install neonutilities - can skip if already installed
# do this in the command line
pip install neonutilities
# load neonutilities in working environment
import neonutilities as nu
Download files and load to working environment
The most popular function in neonUtilities is
loadByProduct() (or load_by_product() in
neonutilities). This function downloads data from the NEON
API, merges the site-by-month files, and loads the resulting data tables
into the programming environment, classifying each variable’s data type
appropriately. It combines the actions of the
zipsByProduct(), stackByTable(), and
readTableNEON() functions, described below.
This is a popular choice because it ensures you’re always working
with the latest data, and it ends with ready-to-use tables. However, if
you use it in a workflow you run repeatedly, keep in mind it will
re-download the data every time.
loadByProduct() works on most observational (OS) and
sensor (IS) data, but not on surface-atmosphere exchange (SAE) data,
remote sensing (AOP) data, and some of the data tables in the microbial
data products. For functions that download AOP data, see the
byFileAOP() and byTileAOP() sections in this
tutorial. For functions that work with SAE data, see the
NEON
eddy flux data tutorial. SAE functions are not yet available in
Python.
The inputs to loadByProduct() control which data to
download and how to manage the processing:
R
dpID: The data product ID, e.g. DP1.00002.001
site: Defaults to “all”, meaning all sites with
available data; can be a vector of 4-letter NEON site codes, e.g.
c("HARV","CPER","ABBY").
startdate and enddate: Defaults to NA,
meaning all dates with available data; or a date in the form YYYY-MM,
e.g. 2017-06. Since NEON data are provided in month packages, finer
scale querying is not available. Both start and end date are
inclusive.
package: Either basic or expanded data package.
Expanded data packages generally include additional information about
data quality, such as chemical standards and quality flags. Not every
data product has an expanded package; if the expanded package is
requested but there isn’t one, the basic package will be
downloaded.
timeIndex: Defaults to “all”, to download all data; or
the number of minutes in the averaging interval. See example below; only
applicable to IS data.
release: Specify a particular data Release, e.g.
"RELEASE-2024". Defaults to the most recent Release. For
more details and guidance, see the
Release and Provisional tutorial.
include.provisional: T or F: Should provisional data be
downloaded? If release is not specified, set to T to
include provisional data in the download. Defaults to F.
savepath: the file path you want to download to;
defaults to the working directory.
check.size: T or F: should the function pause before
downloading data and warn you about the size of your download? Defaults
to T; if you are using this function within a script or batch process
you will want to set it to F.
token: Optional API token for faster downloads. See the
API token tutorial.
nCores: Number of cores to use for parallel processing.
Defaults to 1, i.e. no parallelization.
Python
dpid: the data product ID, e.g. DP1.00002.001
site: defaults to “all”, meaning all sites with
available data; can be a list of 4-letter NEON site codes, e.g.
["HARV","CPER","ABBY"].
startdate and enddate: defaults to NA,
meaning all dates with available data; or a date in the form YYYY-MM,
e.g. 2017-06. Since NEON data are provided in month packages, finer
scale querying is not available. Both start and end date are
inclusive.
package: either basic or expanded data package.
Expanded data packages generally include additional information about
data quality, such as chemical standards and quality flags. Not every
data product has an expanded package; if the expanded package is
requested but there isn’t one, the basic package will be
downloaded.
timeindex: defaults to “all”, to download all data; or
the number of minutes in the averaging interval. See example below; only
applicable to IS data.
release: Specify a particular data Release, e.g.
"RELEASE-2024". Defaults to the most recent Release. For
more details and guidance, see the
Release and Provisional tutorial.
include_provisional: True or False: Should provisional
data be downloaded? If release is not specified, set to True
to include provisional data in the download. Defaults to False.
savepath: the file path you want to download to;
defaults to the working directory.
check_size: True or False: should the function pause
before downloading data and warn you about the size of your download?
Defaults to True; if you are using this function within a script or
batch process you will want to set it to False.
token: Optional API token for faster downloads. See the
API token tutorial.
cloud_mode: Can be set to True if you are working in a
cloud environment; provides more efficient data transfer from NEON cloud
storage to other cloud environments.
progress: Set to False to omit the progress bar during
download and stacking.
The dpID (dpid) is the data product
identifier of the data you want to download. The DPID can be found on
the
Explore Data Products page. It will be in the form DP#.#####.###
Demo data download and read
Let’s get triple-aspirated air temperature data (DP1.00003.001) from
Moab and Onaqui (MOAB and ONAQ), from May–August 2018, and name the data
object triptemp:
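In Python, a sketch of the call looks like this (the R version is analogous, using loadByProduct() with check.size=F):
triptemp = nu.load_by_product(dpid='DP1.00003.001',
                              site=['MOAB','ONAQ'],
                              startdate='2018-05',
                              enddate='2018-08',
                              package='basic',
                              check_size=False)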
The object returned by loadByProduct() is a named list
of data tables, or a dictionary of data tables in Python. To work with
each of them, select them from the list.
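For example, in Python the returned dictionary can be inspected and indexed by table name (the table name below, TAAT_30min, is an assumption; check the keys of your own download):
print(triptemp.keys())            # list the tables in the download
temp30 = triptemp['TAAT_30min']   # pull out one table by name (name is an assumption)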
If you prefer to extract each table from the list and work with it as
an independent object, you can use globals().update():
globals().update(triptemp)
For more details about the contents of the data tables and metadata
tables, check out the
Download
and Explore tutorial.
Join data files: stackByTable()
The function stackByTable() joins the month-by-site
files from a data download. The output will yield data grouped into new
files by table name. For example, the single aspirated air temperature
data product contains 1 minute and 30 minute interval data. The output
from this function is one .csv with 1 minute data and one .csv with 30
minute data.
Depending on your file size this function may run for a while. For
example, in testing for this tutorial, 124 MB of temperature data took
about 4 minutes to stack. A progress bar will display while the stacking
is in progress.
Download the Data
To stack data from the Portal, first download the data of interest
from the NEON
Data Portal. To stack data downloaded from the API, see the
zipsByProduct() section below.
Your data will download from the Portal in a single zipped file.
The stacking function will only work on zipped Comma Separated Value
(.csv) files and not the NEON data stored in other formats (HDF5,
etc).
Run stackByTable()
The example data below are single-aspirated air temperature.
To run the stackByTable() function, input the file path
to the downloaded and zipped file.
R
# Modify the file path to the file location on your computer
stackByTable(filepath="~neon/data/NEON_temp-air-single.zip")
Python
# Modify the file path to the file location on your computer
nu.stack_by_table(filepath="/neon/data/NEON_temp-air-single.zip")
In the same directory as the zipped file, you should now have an
unzipped directory of the same name. When you open this you will see a
new directory called stackedFiles. This directory
contains one or more .csv files (depends on the data product you are
working with) with all the data from the months & sites you
downloaded. There will also be a single copy of the associated
variables, validation, and sensor_positions files, if applicable
(validation files are only available for observational data products,
and sensor position files are only available for instrument data
products).
These .csv files are now ready for use with the program of your
choice.
To read the data tables, we recommend using
readTableNEON(), which will assign each column to the
appropriate data type, based on the metadata in the variables file. This
ensures time stamps and missing data are interpreted correctly.
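In Python, a sketch using read_table_neon(); the argument names and file paths here are assumptions, so check the function help for the exact signature and point it at your own stackedFiles folder:
temp30 = nu.read_table_neon(
    data_file='NEON_temp-air-single/stackedFiles/SAAT_30min.csv',       # hypothetical path
    var_file='NEON_temp-air-single/stackedFiles/variables_00002.csv')   # hypothetical path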
stackByTable() also takes two additional optional inputs:
savepath : allows you to specify the file path where
you want the stacked files to go, overriding the default. Set to
"envt" to load the files to the working environment.
saveUnzippedFiles : allows you to keep the unzipped,
unstacked files from an intermediate stage of the process; by default
they are discarded.
The function zipsByProduct() is a wrapper for the NEON
API; it downloads zip files for the data product specified and stores
them in a format that can then be passed on to
stackByTable().
Input options for zipsByProduct() are the same as those
for loadByProduct() described above.
Here, we’ll download single-aspirated air temperature (DP1.00002.001)
data from Wind River Experimental Forest (WREF) for April and May of
2019.
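In Python, a sketch of the download call (the savepath is a hypothetical location):
nu.zips_by_product(dpid='DP1.00002.001',
                   site='WREF',
                   startdate='2019-04',
                   enddate='2019-05',
                   package='basic',
                   savepath='./data',      # hypothetical download location
                   check_size=False)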
For many sensor data products, download sizes can get very large, and
stackByTable() takes a long time. The 1-minute or 2-minute
files are much larger than the longer averaging intervals, so if you
don’t need high- frequency data, the timeIndex input option
lets you choose which averaging interval to download.
This option is only applicable to sensor (IS) data, since OS data are
not averaged.
Download by averaging interval
Download only the 30-minute data for single-aspirated air temperature
at WREF:
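A sketch of the same download restricted to the 30-minute files (the value passed to timeindex is assumed to be the averaging interval in minutes):
nu.zips_by_product(dpid='DP1.00002.001',
                   site='WREF',
                   startdate='2019-04',
                   enddate='2019-05',
                   timeindex='30',         # only the 30-minute files
                   package='basic',
                   savepath='./data',      # hypothetical download location
                   check_size=False)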
The 30-minute files can be stacked and loaded as usual.
Download remote sensing files
Remote sensing data files can be very large, and NEON remote sensing
(AOP) data are stored in a directory structure that makes them easier to
navigate. byFileAOP() downloads AOP files from the API
while preserving their directory structure. This provides a convenient
way to access AOP data programmatically.
Be aware that downloads from byFileAOP() can take a VERY
long time, depending on the data you request and your connection speed.
You may need to run the function and then leave your machine on and
downloading for an extended period of time.
Here the example download is the Ecosystem Structure data product at
Hop Brook (HOPB) in 2017; we use this as the example because it’s a
relatively small year-site-product combination.
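In Python, a sketch of the call (savepath is a hypothetical location):
nu.by_file_aop(dpid='DP3.30015.001',
               site='HOPB',
               year=2017,
               savepath='./data',       # hypothetical download location
               check_size=False)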
The files should now be downloaded to a new folder in your working
directory.
Download remote sensing files for specific
coordinates
Often when using remote sensing data, we only want data covering a
certain area - usually the area where we have coordinated ground
sampling. byTileAOP() queries for data tiles containing a
specified list of coordinates. It only works for the tiled, AKA
mosaicked, versions of the remote sensing data, i.e. the ones with data
product IDs beginning with “DP3”.
Here, we’ll download tiles of vegetation indices data (DP3.30026.001)
corresponding to select observational sampling plots. For more
information about accessing NEON spatial data, see the
API tutorial and the in-development
geoNEON package.
For now, assume we’ve used the API to look up the plot centroids of
plots SOAP_009 and SOAP_011 at the Soaproot Saddle site. You can also
look these up in the Spatial Data folder of the
document library. The coordinates of the two plots in UTMs are
298755,4101405 and 299296,4101461. These are 40x40m plots, so in looking
for tiles that contain the plots, we want to include a 20m buffer. The
“buffer” is actually a square; it’s a delta applied equally to both the
easting and northing coordinates.
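In Python, a sketch of the call; the coordinates and 20m buffer come from the plot centroids above, while the year and savepath are assumptions to adjust for your own query:
nu.by_tile_aop(dpid='DP3.30026.001',
               site='SOAP',
               year=2018,                       # year is an assumption
               easting=[298755, 299296],
               northing=[4101405, 4101461],
               buffer=20,                       # 20m buffer around each plot centroid
               savepath='./data',               # hypothetical download location
               check_size=False)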
In this tutorial, we explore the NEON single-aspirated air temperature data.
We then discuss how to interpret the variables, how to work with date-time and
date formats, and finally how to plot the data.
This tutorial is part of a series on how to work with both discrete and continuous
time series data with NEON plant phenology and temperature data products.
Objectives
After completing this activity, you will be able to:
work with "stacked" NEON Single-Aspirated Air Temperature data.
correctly format date-time data.
use dplyr functions to filter data.
plot time series data in scatter plots using the ggplot() function.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
Background Information About NEON Air Temperature Data
Air temperature is continuously monitored by NEON using two methods. At terrestrial
sites, temperature at the top of the tower is derived from a triple-redundant
aspirated air temperature sensor. This is provided as NEON data
product DP1.00003.001. Single Aspirated Air Temperature sensors (SAAT) are
deployed to develop temperature profiles at multiple levels on the tower at NEON
terrestrial sites and on the meteorological stations at NEON aquatic sites. This
is provided as NEON data product DP1.00002.001.
When designing a research project using this data, consult the
Data Product Details Page
for more detailed documentation.
Single-aspirated Air Temperature
Air temperature profiles are ascertained by deploying SAATs at various heights
on NEON tower infrastructure. Air temperature at aquatic sites is measured
using a single SAAT at a standard height of 3m above ground level. Air temperature
for this data product is provided as one- and thirty-minute averages of 1 Hz
observations. Temperature observations are made using platinum resistance
thermometers, which are housed in a fan aspirated shield to reduce radiative
heating. The temperature is measured in Ohms and subsequently converted to degrees
Celsius during data processing. Details on the conversion can be found in the
associated Algorithm Theoretic Basis Document (ATBD; see Product Details page
linked above).
Available Data Tables
The SAAT data product contains two data tables for each site and month selected,
consisting of the 1-minute and 30-minute averaging intervals. In addition, there
are several metadata files that provide additional useful information.
readme with information on the data product and the download
variables file that defines the terms, data types, and units
EML file with machine readable metadata in standardized Ecological Metadata Language
Access NEON Data
There are several ways to access NEON data, directly from the NEON data portal,
access through a data partner (select data products only), writing code to
directly pull data from the NEON API, or, as we'll do here, using the neonUtilities
package which is a wrapper for the API to make working with the data easier.
Downloading from the Data Portal
If you prefer to download data from the data portal, please
review the Getting started and Stack the downloaded data sections of the
Download and Explore NEON Data tutorial.
This will get you to the point where you can download data from sites or dates
of interest and resume this tutorial.
Downloading Data Using neonUtilities
First, we need to set up our environment with the packages needed for this tutorial.
# Install needed package (only uncomment & run if not already installed)
#install.packages("neonUtilities")
#install.packages("ggplot2")
#install.packages("dplyr")
#install.packages("tidyr")
# Load required libraries
library(neonUtilities) # for accessing NEON data
library(ggplot2) # for plotting
library(dplyr) # for data munging
library(tidyr) # for data munging
# set working directory
# this step is optional, only needed if you plan to save the
# data files at the end of the tutorial
wd <- "~/data" # enter your working directory here
setwd(wd)
This tutorial is part of a series working with discrete plant phenology data and
(nearly) continuous temperature data. Our overall "research" question is whether
there is any correlation between plant phenology and temperature.
Therefore, we will want to work with data that
align with the plant phenology data that we worked with in the first tutorial.
If you are only interested in working with the temperature data, you do not need
to complete the previous tutorial.
Our data of interest will be the temperature data from 2018 from NEON's
Smithsonian Conservation Biology Institute (SCBI) field site located in Virginia
near the northern terminus of the Blue Ridge Mountains.
NEON single aspirated air temperature data is available in two averaging intervals,
1 minute and 30 minute intervals. Which data you want to work with is going to
depend on your research questions. Here, we're going to only download and work
with the 30 minute interval data, as we're primarily interested in longer-term (daily,
weekly, annual) patterns.
This will download 7.7 MB of data. check.size is set to false (F) to improve the flow
of the script, but it is always a good idea to view the size with true (T) before
downloading a new dataset.
# download data of interest - Single Aspirated Air Temperature
saat <- loadByProduct(dpID="DP1.00002.001", site="SCBI",
startdate="2018-01", enddate="2018-12",
package="basic", timeIndex="30",
check.size = F)
Explore Temperature Data
Now that you have the data, let's take a look at the structure and understand
what's in the data. The data (saat) come in as a large list of data tables and metadata files.
View(saat)
So what exactly are these files and why would you want to use them?
data file(s): There will always be one or more dataframes that include the
primary data of the data product you downloaded. Since we downloaded only the 30
minute averaged data we only have one data table SAAT_30min.
readme_xxxxx: The readme file, with the corresponding 5 digits from the data
product number, provides you with important information relevant to the data
product and the specific instance of downloading the data.
sensor_positions_xxxxx: This table contains the spatial coordinates
of each sensor, relative to a reference location.
variables_xxxxx: This table contains all the variables found in the associated
data table(s). This includes full definitions, units, and rounding.
issueLog_xxxxx: This table contains records of any known issues with the
data product, such as sensor malfunctions.
scienceReviewFlags_xxxxx: This table may or may not be present. It contains
descriptions of adverse events that led to manual flagging of the data, and is
usually more detailed than the issue log. It only contains records relevant to
the sites and dates of data downloaded.
Since we want to work with the individual files, let's make the elements of the
list into independent objects.
The sensor data undergo a variety of automated quality assurance and quality control
checks. You can read about them in detail in the Quality Flags and Quality Metrics ATBD, in the Documentation section of the product details page.
The expanded data package
includes all of these quality flags, which can allow you to decide if not passing
one of the checks will significantly hamper your research and if you should
therefore remove the data from your analysis. Here, we're using the
basic data package, which only includes the final quality flag (finalQF),
which is aggregated from the full set of quality flags.
A pass of the check is 0, while a fail is 1. Let's see what percentage
of the data we downloaded passed the quality checks.
What should we do with the 23% of the data that are flagged?
This may depend on why it is flagged and what questions you are asking,
and the expanded data package would be useful for determining this.
For now, for demonstration purposes, we'll keep the flagged data.
What about null (NA) data?
sum(is.na(SAAT_30min$tempSingleMean))/nrow(SAAT_30min)
## [1] 0.2239269
mean(SAAT_30min$tempSingleMean)
## [1] NA
22% of the mean temperature values are NA. Note that this is not
additive with the flagged data! Empty data records are flagged, so this
indicates nearly all of the flagged data in our download are empty records.
Why was there no output from the calculation of mean temperature?
The R programming language, by default, won't calculate a mean (and many other
summary statistics) in data that contain NA values. We could override this
using the input parameter na.rm=TRUE in the mean() function, or just
remove the empty values from our analysis.
# create new dataframe without NAs
SAAT_30min_noNA <- SAAT_30min %>%
drop_na(tempSingleMean) # tidyr function
# alternate base R
# SAAT_30min_noNA <- SAAT_30min[!is.na(SAAT_30min$tempSingleMean),]
# did it work?
sum(is.na(SAAT_30min_noNA$tempSingleMean))
## [1] 0
Scatterplots with ggplot
We can use ggplot to create scatter plots. Which data should we plot? We have
several options:
tempSingleMean: the mean temperature for the interval
tempSingleMinimum: the minimum temperature during the interval
tempSingleMaximum: the maximum temperature for the interval
Depending on exactly what question you are asking you may prefer to use one over
the other. For many applications, the mean temperature of the 1- or 30-minute
interval will provide the best representation of the data.
Let's plot it. (This is a plot of a large amount of data. It can take 1-2 mins
to process. It is not essential for completing the next steps if this takes too
much of your computer memory.)
Something odd seems to have happened in late April/May 2018. Since it is unlikely
Virginia experienced -50C during this time, these are probably erroneous sensor
readings and why we should probably remove data that are flagged with those quality
flags.
Right now we are also looking at all the data points in the dataset. However, we may
want to view or aggregate the data differently:
aggregated data: min, mean, or max over some duration
the number of days since a freezing temperature
or some other segregation of the data.
Given that in the previous tutorial,
Work With NEON's Plant Phenology Data,
we were working with phenology data collected on a daily scale let's aggregate
to that level.
To make this plot better, let's do two things:
Remove flagged data
Aggregate to a daily mean.
Subset to remove quality flagged data
We already removed the empty records. Now we'll
subset the data to remove the remaining flagged data.
# subset and add C to name for "clean"
SAAT_30minC <- filter(SAAT_30min_noNA, SAAT_30min_noNA$finalQF==0)
# Do any quality flags remain?
sum(SAAT_30minC$finalQF==1)
## [1] 0
That looks better! But we're still working with the 30-minute data.
Aggregate Data by Day
We can use the dplyr package functions to aggregate the data. However, we have to
choose which data we want to aggregate. Again, you might want daily
minimum temps, mean temperature or maximum temps depending on your question.
In the context of phenology, minimum temperatures might be very important if you
are interested in a species that is very frost susceptible. Any days with a
minimum temperature below 0C could dramatically change the phenophase. For other
species or meteorological zones, maximum thresholds may be very important. Or you
might be most interested in the daily mean.
And note that you can combine different input values with different aggregation
functions - for example, you could calculate the minimum of the half-hourly
average temperature, or the average of the half-hourly maximum temperature.
For this tutorial, let's use maximum daily temperature, i.e. the maximum of the
tempSingleMax values for the day.
# convert to date, easier to work with
SAAT_30minC$Date <- as.Date(SAAT_30minC$startDateTime)
# maximum of the 30-minute maximum temps for each day
temp_day <- SAAT_30minC %>%
    group_by(Date) %>%
    mutate(dayMax=max(tempSingleMaximum)) %>%
    distinct(Date, .keep_all=T)
Now we can plot the cleaned up daily temperature.
# plot Air Temperature Data across 2018 using daily data
tempPlot_dayMax <- ggplot(temp_day, aes(Date, dayMax)) +
geom_point(size=0.5) +
ggtitle("Daily Max Air Temperature") +
xlab("") + ylab("Temp (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold", size = 20)) +
theme(text = element_text(size=18))
tempPlot_dayMax
Thought questions:
What do we gain by this visualization?
What do we lose relative to the 30 minute intervals?
ggplot - Subset by Time
Sometimes we want to scale the x- or y-axis to a particular time subset without
subsetting the entire data_frame. To do this, we can define start and end
times. We can then define these limits in the scale_x_date object as
follows:
scale_x_date(limits=start.end) +
Let's plot just the first three months of the year.
# Define Start and end times for the subset as R objects that are the time class
startTime <- as.Date("2018-01-01")
endTime <- as.Date("2018-03-31")
# create a start and end time R object
start.end <- c(startTime,endTime)
str(start.end)
## Date[1:2], format: "2018-01-01" "2018-03-31"
# View data for first 3 months only
# And we'll add some color for a change.
tempPlot_dayMax3m <- ggplot(temp_day, aes(Date, dayMax)) +
geom_point(color="blue", size=0.5) +
ggtitle("Air Temperature\n Jan - March") +
xlab("Date") + ylab("Air Temperature (C)")+
(scale_x_date(limits=start.end,
date_breaks="1 week",
date_labels="%b %d"))
tempPlot_dayMax3m
## Warning: Removed 268 rows containing missing values (`geom_point()`).
Now that we have temperature data matching the phenology data from the previous
tutorial, we want to save it to our computer to use in future analyses (or the
next tutorial). This step is optional if you are continuing directly to the next
tutorial, as you already have the data in R.
# Write .csv - this step is optional
# This will write to the working directory we set at the start of the tutorial
write.csv(temp_day , file="NEONsaat_daily_SCBI_2018.csv", row.names=F)