In this tutorial, we will review the fundamental principles, packages and
metadata/raster attributes that are needed to work with raster data in R.
We discuss the three core metadata elements that we need to understand to work
with rasters in R: CRS, extent and resolution. We also explore
missing and bad data values as stored in a raster and how R handles these
elements. Finally, we introduce the GeoTIFF file format.
Learning Objectives
After completing this tutorial, you will be able to:
Understand what a raster dataset is and its fundamental attributes.
Know how to explore raster attributes in R.
Be able to import rasters into R using the terra package.
Be able to quickly plot a raster file in R.
Understand the difference between single- and multi-band rasters.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
Set Working Directory: This lesson will explain how to set the working directory. You may wish to set your working directory to some other location, depending on how you prefer to organize your data.
Raster or "gridded" data are stored as a grid of values which are rendered on a map as pixels. Each pixel value represents an area on the Earth's surface.
Source: National Ecological Observatory Network (NEON)
Raster Data in R
Let's first import a raster dataset into R and explore its metadata. To open rasters in R, we will use the terra package.
library(terra)
library(neonUtilities) # needed for the byTileAOP downloads below
# set working directory, you can change this if desired
wd <- "~/data/"
setwd(wd)
Download LiDAR Raster Data
We can use the neonUtilities function byTileAOP to download single elevation tiles (DSM and DTM). You can run help(byTileAOP) to see more details on the various inputs. For this exercise, we'll specify the UTM Easting and Northing as (732000, 4713500), which will download the tile with the lower-left corner (732000, 4713000). By default, the function checks the total size of the download and asks whether you wish to proceed (y/n). This file is ~8 MB, so make sure you have enough space on your local drive. Below we set check.size=FALSE to skip this prompt; set it to TRUE if you want to confirm the file size before downloading.
byTileAOP(dpID='DP3.30024.001', # lidar elevation
site='HARV',
year='2022',
easting=732000,
northing=4713500,
check.size=FALSE, # set to TRUE or remove if you want to check the size before downloading
savepath = wd)
This file will be downloaded into a nested subdirectory under the ~/data folder, inside a folder named DP3.30024.001 (the Data Product ID). The file should show up in this location: ~/data/DP3.30024.001/neon-aop-products/2022/FullSite/D01/2022_HARV_7/L3/DiscreteLidar/DSMGtif/NEON_D01_HARV_DP3_732000_4713000_DSM.tif.
Open a Raster in R
We can use terra's rast("path-to-raster-here") function to open a raster in R.
Data Tip: VARIABLE NAMES! To improve code
readability, use file and object names that make it clear what is in
the file. The data for this tutorial were collected at Harvard Forest (HARV),
so we'll use the naming convention DATATYPE_HARV.
# Load raster into R
dsm_harv_file <- paste0(wd, "DP3.30024.001/neon-aop-products/2022/FullSite/D01/2022_HARV_7/L3/DiscreteLidar/DSMGtif/NEON_D01_HARV_DP3_732000_4713000_DSM.tif")
DSM_HARV <- rast(dsm_harv_file)
# View raster structure
DSM_HARV
## class : SpatRaster
## dimensions : 1000, 1000, 1 (nrow, ncol, nlyr)
## resolution : 1, 1 (x, y)
## extent : 732000, 733000, 4713000, 4714000 (xmin, xmax, ymin, ymax)
## coord. ref. : WGS 84 / UTM zone 18N (EPSG:32618)
## source : NEON_D01_HARV_DP3_732000_4713000_DSM.tif
## name : NEON_D01_HARV_DP3_732000_4713000_DSM
## min value : 317.91
## max value : 433.94
# plot raster
plot(DSM_HARV, main="Digital Surface Model - HARV")
Types of Data Stored in Raster Format
Raster data can be continuous or categorical. Continuous rasters can have a
range of quantitative values. Some examples of continuous rasters include:
Precipitation maps.
Maps of tree height derived from LiDAR data.
Elevation values for a region.
The raster we loaded and plotted earlier was a digital surface model, or a map of the elevation for Harvard Forest derived from the
NEON AOP LiDAR sensor. Elevation is represented as a continuous numeric variable in this map.
The legend shows the continuous range of values in the data, from approximately 320 to 430 meters.
Some rasters contain categorical data where each pixel represents a discrete
class such as a landcover type (e.g., "forest" or "grassland") rather than a
continuous value such as elevation or temperature. Some examples of classified
maps include:
Landcover/land-use maps.
Tree height maps classified as short, medium, and tall trees.
Elevation maps classified as low, medium and high elevation.
Categorical Elevation Map of the NEON Harvard Forest Site
The legend of this map shows the colors representing each discrete class.
# create a color map with 3 colors
col <- terrain.colors(3)
# add breaks to the colormap (4 breaks = 3 segments)
brk <- c(250, 350, 380, 500)
# widen the right margin of the plot to make room for the legend
par(xpd = FALSE, mar = c(5.1, 4.1, 4.1, 4.5))
# DEM with a custom legend
plot(DSM_HARV,
col=col,
breaks=brk,
main="Classified Elevation Map - HARV",
legend = FALSE
)
# turn xpd back on to force the legend to fit next to the plot.
par(xpd = TRUE)
# add a legend - but make it appear outside of the plot
legend(733100, 4713700,
       legend = c("High Elevation", "Middle Elevation", "Low Elevation"),
       fill = rev(col))
What is a GeoTIFF?
Raster data can come in many different formats. In this tutorial, we will use the
GeoTIFF format, which has the extension .tif. A .tif file stores metadata
or attributes about the file as embedded tif tags. For instance, your camera
might store a tag that describes the make and model of the camera or the date the
photo was taken when it saves a .tif. A GeoTIFF is a standard .tif image
format with additional spatial (georeferencing) information embedded in the file
as tags. These tags can include the following raster metadata:
A Coordinate Reference System (CRS)
Spatial Extent (extent)
Values that represent missing data (NoDataValue)
The resolution of the data
In this tutorial we will discuss all of these metadata tags.
The Coordinate Reference System or CRS tells R where the raster is located
in geographic space. It also tells R what method should be used to "flatten"
or project the raster in geographic space.
Maps of the United States in different projections. Notice the
differences in shape associated with each different projection. These
differences are a direct result of the calculations used to "flatten" the
data onto a 2-dimensional map. Source: M. Corey, opennews.org
What Makes Spatial Data Line Up On A Map?
There are many great resources that describe coordinate reference systems and
projections in greater detail (read more, below). For the purposes of this
activity, it is important to understand that data from the same location
but saved in different projections will not line up in any GIS or other
program. Thus, it's important when working with spatial data in a program like
R to identify the coordinate reference system applied to the data and retain
it throughout data processing and analysis.
Check out this short video, from
Buzzfeed,
highlighting how map projections can make continents seem proportionally larger or smaller than they actually are!
View Raster Coordinate Reference System (CRS) in R
We can view the CRS string associated with our R object using the crs()
function. We can assign this string to an R object, too.
# view crs description
crs(DSM_HARV,describe=TRUE)
## name authority code
## 1 WGS 84 / UTM zone 18N EPSG 32618
## area
## 1 Between 78°W and 72°W, northern hemisphere between equator and 84°N, onshore and offshore. Bahamas. Canada - Nunavut; Ontario; Quebec. Colombia. Cuba. Ecuador. Greenland. Haiti. Jamaica. Panama. Turks and Caicos Islands. United States (USA). Venezuela
## extent
## 1 -78, -72, 84, 0
# assign crs to an object (class) to use for reprojection and other tasks
harvCRS <- crs(DSM_HARV)
The CRS of our DSM_HARV object tells us that our data are in the UTM projection, in zone 18N.
The UTM zones across the continental United States. Source: Chrismurf, wikimedia.org.
The CRS in this case is returned in a character (string) format. This means that
the projection information is strung together as a series of text elements.
We'll focus on the first few components of the CRS, as described above.
name: The projection of the dataset. Our data are in WGS84 (World Geodetic System 1984) / UTM (Universal Transverse Mercator) zone 18N. WGS84 is the datum. The UTM projection divides the world into zones; this element tells you which zone the data are in. Harvard Forest is in zone 18.
authority: EPSG (European Petroleum Survey Group) - organization that maintains a geodetic parameter database with standard codes
code: The EPSG code. For more details, see EPSG 32618.
Extent
The spatial extent is the geographic area that the raster data covers.
Image Source: National Ecological Observatory Network (NEON)
The spatial extent of an R spatial object represents the geographic "edge" or
location that is the furthest north, south, east and west. In other words, extent
represents the overall geographic coverage of the spatial object.
Resolution
A raster has horizontal (x and y) resolution. This resolution represents the
area on the ground that each pixel covers. The units for our data are meters.
Given that our data resolution is 1 x 1, each pixel represents a
1 x 1 meter area on the ground.
Source: National Ecological Observatory Network (NEON)
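Resolution, extent, and dimensions are tied together: the x resolution is the extent width divided by the number of columns. As a quick sanity check, here is a small base-R sketch using the extent and dimensions reported for DSM_HARV above (no raster package required):

```r
# extent and dimensions reported for DSM_HARV above
xmin <- 732000; xmax <- 733000
ymin <- 4713000; ymax <- 4714000
n_col <- 1000; n_row <- 1000

# resolution = extent width (or height) divided by number of columns (or rows)
x_res <- (xmax - xmin) / n_col
y_res <- (ymax - ymin) / n_row
c(x_res, y_res)   # 1 m x 1 m pixels
```

This matches the `resolution : 1, 1 (x, y)` line in the SpatRaster summary.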
The best way to view the resolution units is to look at the coordinate reference system string, e.g. crs(DSM_HARV, proj=TRUE). Notice that our data contain +units=m.
It can be useful to know the minimum or maximum values of a raster dataset. In
this case, given we are working with elevation data, these values represent the
min/max elevation range at our site.
Raster statistics are often calculated and embedded in a GeoTIFF for us.
However, if they weren't already calculated, we can calculate them using the
min() or max() functions.
# view the min and max values
min(DSM_HARV)
## class : SpatRaster
## dimensions : 1000, 1000, 1 (nrow, ncol, nlyr)
## resolution : 1, 1 (x, y)
## extent : 732000, 733000, 4713000, 4714000 (xmin, xmax, ymin, ymax)
## coord. ref. : WGS 84 / UTM zone 18N (EPSG:32618)
## source(s) : memory
## varname : NEON_D01_HARV_DP3_732000_4713000_DSM
## name : min
## min value : 317.91
## max value : 433.94
We can see that the elevation at our site ranges from 317.91 m to 433.94 m.
NoData Values in Rasters
Raster data often has a NoDataValue associated with it. This is a value
assigned to pixels where data are missing or no data were collected.
The shape of a raster is always square or rectangular. So if we have a dataset
whose shape isn't square or rectangular, some pixels at the edge of the raster
will have NoDataValues. This often happens when the data were collected by an
airplane that only flew over part of a defined region.
Let's take a look at some of the RGB Camera data over HARV, this time downloading a tile at the edge of the flight box.
byTileAOP(dpID='DP3.30010.001',
site='HARV',
year='2022',
easting=737500,
northing=4701500,
check.size=FALSE, # set to TRUE or remove if you want to check the size before downloading
savepath = wd)
This file will be downloaded into a nested subdirectory under the ~/data folder, inside a folder named DP3.30010.001 (the Camera Data Product ID). The file should show up in this location: ~/data/DP3.30010.001/neon-aop-products/2022/FullSite/D01/2022_HARV_7/L3/Camera/Mosaic/2022_HARV_7_737000_4701000_image.tif.
In the image below, the pixels that are black have NoDataValues. The camera did not collect data in these areas.
# Use rast function to read in all bands
RGB_HARV <-
rast(paste0(wd,"DP3.30010.001/neon-aop-products/2022/FullSite/D01/2022_HARV_7/L3/Camera/Mosaic/2022_HARV_7_737000_4701000_image.tif"))
# Create an RGB image from the raster
par(col.axis="white",col.lab="white",tck=0)
plotRGB(RGB_HARV, r = 1, g = 2, b = 3, axes=TRUE)
In the next image, the black edges have been assigned a NoDataValue. R doesn't
render pixels that contain a specified NoDataValue; R reads such missing data
in as NA.
# reassign cells where all three bands equal 0 (RGB black) to NA
func <- function(x) {
  x[rowSums(x == 0) == 3, ] <- NA
  x
}
newRGBImage <- app(RGB_HARV, func)
par(col.axis="white", col.lab="white", tck=0)
# Create an RGB image from the raster stack
plotRGB(newRGBImage, r = 1, g = 2, b = 3, axes=TRUE)
NoData Value Standard
The assigned NoDataValue varies across disciplines; -9999 is a common value
used in both the remote sensing and atmospheric science fields. It is also
the standard used by the
National Ecological Observatory Network (NEON).
If we are lucky, our GeoTIFF file has a tag that tells us what the
NoDataValue is. If we are less lucky, we can find that information in the
raster's metadata. If a NoDataValue was stored in the GeoTIFF tag, R will
assign each instance of that value to NA when it opens the raster. Values
of NA are ignored by R, as demonstrated above.
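If the NoDataValue tag is missing from the file, terra also lets us declare it ourselves after loading, using NAflag(). A minimal sketch with a small in-memory raster (assuming terra is installed; the values are made up):

```r
library(terra)

# a 2 x 2 in-memory raster where -9999 encodes missing pixels
r <- rast(nrows = 2, ncols = 2, vals = c(10, -9999, 30, -9999))

# declare the NoDataValue; those cells become NA
NAflag(r) <- -9999

# NA cells are now ignored in summary statistics
global(r, "mean", na.rm = TRUE)   # mean of the two valid pixels
```

The mean is computed from the two valid pixels (10 and 30) only, rather than being dragged far negative by the -9999 fill values.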
Bad Data Values in Rasters
Bad data values are different from NoDataValues. Bad data values are values
that fall outside of the applicable range of a dataset.
Examples of Bad Data Values:
The normalized difference vegetation index (NDVI), which is a measure of
greenness, has a valid range of -1 to 1. Any value outside of that range would
be considered a "bad" value.
Reflectance data in an image should range from 0-1 (or 0-10,000 depending
upon how the data are scaled). Thus a value greater than 1 or greater than 10,000
is likely caused by an error in either data collection or processing. These
erroneous values can occur, for example, in water vapor absorption bands, which
contain invalid data, and are meant to be disregarded.
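Flagging bad values usually amounts to a simple range check. A base-R sketch using made-up NDVI values (valid range -1 to 1):

```r
# made-up NDVI values, two of which fall outside the valid -1 to 1 range
ndvi <- c(0.2, 0.85, -0.1, 1.7, 0.6, -2.4)

# flag values outside the valid range as suspect
bad <- ndvi < -1 | ndvi > 1
sum(bad)          # number of suspect values

# treat the suspect values as missing before analysis
ndvi[bad] <- NA
mean(ndvi, na.rm = TRUE)
```

The same logic applies cell-by-cell to a raster, e.g. with terra's clamp() or by indexing the raster values directly.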
Find Bad Data Values
Sometimes a raster's metadata will tell us the range of expected values for a
raster. Values outside of this range are suspect and we need to consider them
when we analyze the data. Sometimes, we need to use some common sense and
scientific insight as we examine the data - just as we would for field data to
identify questionable values.
Create A Histogram of Raster Values
We can explore the distribution of values contained within our raster using the
hist() function which produces a histogram. Histograms are often useful in
identifying outliers and bad data values in our raster data.
# view histogram of data
hist(DSM_HARV,
main="Distribution of Digital Surface Model Values\n NEON Harvard Forest (HARV)",
xlab="DSM Elevation Value (m)",
ylab="Frequency",
col="lightblue")
The distribution of elevation values for our Digital Surface Model (DSM) looks
reasonable. It is likely there are no bad data values in this particular raster.
Raster Bands
The Digital Surface Model object (DSM_HARV) that we've been working with
is a single band raster. This means that there is only one dataset stored in
the raster: surface elevation in meters for one time period.
Source: National Ecological Observatory Network (NEON).
A raster dataset can contain one or more bands. We can use the rast() function to import all bands from a single or multi-band raster. We can view the number of bands in a raster using the nlyr() function.
# view number of bands in the Lidar DSM raster
nlyr(DSM_HARV)
## [1] 1
# view number of bands in the RGB Camera raster
nlyr(RGB_HARV)
## [1] 3
As we see from the RGB camera raster, raster data can also be multi-band,
meaning one raster file contains data for more than one variable or time period
for each cell. By default the terra::rast() function imports all bands of a
multi-band raster. You can set lyrs = 1 if you only want to read in the first
layer, for example.
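A small in-memory example of the single- vs multi-band distinction (assuming terra is installed; the cell values are arbitrary):

```r
library(terra)

# a single-band raster, analogous to our DSM
single <- rast(nrows = 10, ncols = 10, vals = runif(100))

# a three-band raster, analogous to our RGB camera image
multi <- rast(nrows = 10, ncols = 10, nlyrs = 3, vals = runif(300))

nlyr(single)   # 1
nlyr(multi)    # 3
```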
Remember that a GeoTIFF contains a set of embedded tags that contain
metadata about the raster. So far, we've explored raster metadata after
importing it in R. However, we can use the describe("path-to-raster-here")
function to view raster information (such as metadata) before we open a file in
R. Use help(describe) to see other options for exploring the file contents.
# view metadata attributes before opening the file
describe(path.expand(dsm_harv_file),meta=TRUE)
## [1] "AREA_OR_POINT=Area"
## [2] "TIFFTAG_ARTIST=Created by the National Ecological Observatory Network (NEON)"
## [3] "TIFFTAG_COPYRIGHT=The National Ecological Observatory Network is a project sponsored by the National Science Foundation and managed under cooperative agreement by Battelle. This material is based in part upon work supported by the National Science Foundation under Grant No. DBI-0752017."
## [4] "TIFFTAG_DATETIME=Flown on 2022080312, 2022080412, 2022081213, 2022081413"
## [5] "TIFFTAG_IMAGEDESCRIPTION=Elevation LiDAR - NEON.DP3.30024 acquired at HARV by RIEGL LASER MEASUREMENT SYSTEMS Q780 2220855 as part of 2022-P3C1"
## [6] "TIFFTAG_MAXSAMPLEVALUE=434"
## [7] "TIFFTAG_MINSAMPLEVALUE=318"
## [8] "TIFFTAG_RESOLUTIONUNIT=2 (pixels/inch)"
## [9] "TIFFTAG_SOFTWARE=Tif file created with a Matlab script (write_gtiff.m) written by Tristan Goulden (tgoulden@battelleecology.org) with data processed from the following scripts: create_tiles_from_mosaic.m, combine_dtm_dsm_gtif.m, lastools_workflow.csh which implemented LAStools version 210418."
## [10] "TIFFTAG_XRESOLUTION=1"
## [11] "TIFFTAG_YRESOLUTION=1"
Specifying options=c("stats") will show some summary statistics:
# view summary statistics before opening the file
describe(path.expand(dsm_harv_file),options=c("stats"))
## [1] "Driver: GTiff/GeoTIFF"
## [2] "Files: C:/Users/bhass/Documents/data/DP3.30024.001/neon-aop-products/2022/FullSite/D01/2022_HARV_7/L3/DiscreteLidar/DSMGtif/NEON_D01_HARV_DP3_732000_4713000_DSM.tif"
## [3] " C:/Users/bhass/Documents/data/DP3.30024.001/neon-aop-products/2022/FullSite/D01/2022_HARV_7/L3/DiscreteLidar/DSMGtif/NEON_D01_HARV_DP3_732000_4713000_DSM.tif.aux.xml"
## [4] "Size is 1000, 1000"
## [5] "Coordinate System is:"
## [6] "PROJCRS[\"WGS 84 / UTM zone 18N\","
## [7] " BASEGEOGCRS[\"WGS 84\","
## [8] " ENSEMBLE[\"World Geodetic System 1984 ensemble\","
## [9] " MEMBER[\"World Geodetic System 1984 (Transit)\"],"
## [10] " MEMBER[\"World Geodetic System 1984 (G730)\"],"
## [11] " MEMBER[\"World Geodetic System 1984 (G873)\"],"
## [12] " MEMBER[\"World Geodetic System 1984 (G1150)\"],"
## [13] " MEMBER[\"World Geodetic System 1984 (G1674)\"],"
## [14] " MEMBER[\"World Geodetic System 1984 (G1762)\"],"
## [15] " MEMBER[\"World Geodetic System 1984 (G2139)\"],"
## [16] " ELLIPSOID[\"WGS 84\",6378137,298.257223563,"
## [17] " LENGTHUNIT[\"metre\",1]],"
## [18] " ENSEMBLEACCURACY[2.0]],"
## [19] " PRIMEM[\"Greenwich\",0,"
## [20] " ANGLEUNIT[\"degree\",0.0174532925199433]],"
## [21] " ID[\"EPSG\",4326]],"
## [22] " CONVERSION[\"UTM zone 18N\","
## [23] " METHOD[\"Transverse Mercator\","
## [24] " ID[\"EPSG\",9807]],"
## [25] " PARAMETER[\"Latitude of natural origin\",0,"
## [26] " ANGLEUNIT[\"degree\",0.0174532925199433],"
## [27] " ID[\"EPSG\",8801]],"
## [28] " PARAMETER[\"Longitude of natural origin\",-75,"
## [29] " ANGLEUNIT[\"degree\",0.0174532925199433],"
## [30] " ID[\"EPSG\",8802]],"
## [31] " PARAMETER[\"Scale factor at natural origin\",0.9996,"
## [32] " SCALEUNIT[\"unity\",1],"
## [33] " ID[\"EPSG\",8805]],"
## [34] " PARAMETER[\"False easting\",500000,"
## [35] " LENGTHUNIT[\"metre\",1],"
## [36] " ID[\"EPSG\",8806]],"
## [37] " PARAMETER[\"False northing\",0,"
## [38] " LENGTHUNIT[\"metre\",1],"
## [39] " ID[\"EPSG\",8807]]],"
## [40] " CS[Cartesian,2],"
## [41] " AXIS[\"(E)\",east,"
## [42] " ORDER[1],"
## [43] " LENGTHUNIT[\"metre\",1]],"
## [44] " AXIS[\"(N)\",north,"
## [45] " ORDER[2],"
## [46] " LENGTHUNIT[\"metre\",1]],"
## [47] " USAGE["
## [48] " SCOPE[\"Navigation and medium accuracy spatial referencing.\"],"
## [49] " AREA[\"Between 78°W and 72°W, northern hemisphere between equator and 84°N, onshore and offshore. Bahamas. Canada - Nunavut; Ontario; Quebec. Colombia. Cuba. Ecuador. Greenland. Haiti. Jamaica. Panama. Turks and Caicos Islands. United States (USA). Venezuela.\"],"
## [50] " BBOX[0,-78,84,-72]],"
## [51] " ID[\"EPSG\",32618]]"
## [52] "Data axis to CRS axis mapping: 1,2"
## [53] "Origin = (732000.000000000000000,4714000.000000000000000)"
## [54] "Pixel Size = (1.000000000000000,-1.000000000000000)"
## [55] "Metadata:"
## [56] " AREA_OR_POINT=Area"
## [57] " TIFFTAG_ARTIST=Created by the National Ecological Observatory Network (NEON)"
## [58] " TIFFTAG_COPYRIGHT=The National Ecological Observatory Network is a project sponsored by the National Science Foundation and managed under cooperative agreement by Battelle. This material is based in part upon work supported by the National Science Foundation under Grant No. DBI-0752017."
## [59] " TIFFTAG_DATETIME=Flown on 2022080312, 2022080412, 2022081213, 2022081413"
## [60] " TIFFTAG_IMAGEDESCRIPTION=Elevation LiDAR - NEON.DP3.30024 acquired at HARV by RIEGL LASER MEASUREMENT SYSTEMS Q780 2220855 as part of 2022-P3C1"
## [61] " TIFFTAG_MAXSAMPLEVALUE=434"
## [62] " TIFFTAG_MINSAMPLEVALUE=318"
## [63] " TIFFTAG_RESOLUTIONUNIT=2 (pixels/inch)"
## [64] " TIFFTAG_SOFTWARE=Tif file created with a Matlab script (write_gtiff.m) written by Tristan Goulden (tgoulden@battelleecology.org) with data processed from the following scripts: create_tiles_from_mosaic.m, combine_dtm_dsm_gtif.m, lastools_workflow.csh which implemented LAStools version 210418."
## [65] " TIFFTAG_XRESOLUTION=1"
## [66] " TIFFTAG_YRESOLUTION=1"
## [67] "Image Structure Metadata:"
## [68] " INTERLEAVE=BAND"
## [69] "Corner Coordinates:"
## [70] "Upper Left ( 732000.000, 4714000.000) ( 72d10'28.52\"W, 42d32'36.84\"N)"
## [71] "Lower Left ( 732000.000, 4713000.000) ( 72d10'29.98\"W, 42d32' 4.46\"N)"
## [72] "Upper Right ( 733000.000, 4714000.000) ( 72d 9'44.73\"W, 42d32'35.75\"N)"
## [73] "Lower Right ( 733000.000, 4713000.000) ( 72d 9'46.20\"W, 42d32' 3.37\"N)"
## [74] "Center ( 732500.000, 4713500.000) ( 72d10' 7.36\"W, 42d32'20.11\"N)"
## [75] "Band 1 Block=1000x1 Type=Float32, ColorInterp=Gray"
## [76] " Min=317.910 Max=433.940 "
## [77] " Minimum=317.910, Maximum=433.940, Mean=358.584, StdDev=17.156"
## [78] " NoData Value=-9999"
## [79] " Metadata:"
## [80] " STATISTICS_MAXIMUM=433.94000244141"
## [81] " STATISTICS_MEAN=358.58371301653"
## [82] " STATISTICS_MINIMUM=317.91000366211"
## [83] " STATISTICS_STDDEV=17.156044149253"
## [84] " STATISTICS_VALID_PERCENT=100"
It can be useful to use describe to explore your file before reading it into R.
Challenge: Explore Raster Metadata
Without using the terra function to read the file into R, determine the following information about the DTM file. This was downloaded at the same time as the DSM file, and as long as you didn't move the data, it should be located here: ~/data/DP3.30024.001/neon-aop-products/2022/FullSite/D01/2022_HARV_7/L3/DiscreteLidar/DTMGtif/NEON_D01_HARV_DP3_732000_4713000_DTM.tif.
This tutorial explains how to crop a raster using the extent of a vector
shapefile. We will also cover how to extract values from a raster that occur
within a set of polygons, or in a buffer (surrounding) region around a set of
points.
Learning Objectives
After completing this tutorial, you will be able to:
Crop a raster to the extent of a vector layer.
Extract values from a raster that correspond to a vector file overlay.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Crop a Raster to Vector Extent
We often work with spatial layers that have different spatial extents.
The spatial extent of a shapefile or R spatial object represents
the geographic "edge" or location that is the furthest north, south, east, and
west. Thus it represents the overall geographic coverage of the spatial
object. Image Source: National Ecological Observatory Network (NEON)
The graphic below illustrates the extent of several of the spatial layers that
we have worked with in this vector data tutorial series:
Area of interest (AOI) -- blue
Roads and trails -- purple
Vegetation plot locations -- black
and a raster file, that we will introduce this tutorial:
A canopy height model (CHM) in GeoTIFF format -- green
Frequent use cases of cropping a raster file include reducing file size and
creating maps.
Sometimes we have a raster file that is much larger than our study area or area
of interest. In this case, it is often most efficient to crop the raster to the extent of our
study area to reduce file sizes as we process our data.
Cropping a raster can also be useful when creating visually appealing maps so that the
raster layer matches the extent of the desired vector layers.
Import Data
We will begin by importing four vector shapefiles (field site boundary,
roads/trails, tower location, and veg study plot locations) and one raster
GeoTIFF file, a Canopy Height Model for the Harvard Forest, Massachusetts.
These data can be used to create maps that characterize our study location.
# load necessary packages
library(rgdal)   # for vector work; the sp package loads with rgdal
library(raster)  # for raster work
# set working directory to data folder
# setwd("pathToDirHere")
# Imported in Vector 00: Vector Data in R - Open & Plot Data
# shapefile
aoiBoundary_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV/",
"HarClip_UTMZ18")
# Import a line shapefile
lines_HARV <- readOGR( "NEON-DS-Site-Layout-Files/HARV/",
"HARV_roads")
# Import a point shapefile
point_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV/",
"HARVtower_UTM18N")
# Imported in Vector 02: .csv to Shapefile in R
# import raster Canopy Height Model (CHM)
chm_HARV <-
raster("NEON-DS-Airborne-Remote-Sensing/HARV/CHM/HARV_chmCrop.tif")
Crop a Raster Using Vector Extent
We can use the crop function to crop a raster to the extent of another spatial
object. To do this, we need to specify the raster to be cropped and the spatial
object that will be used to crop the raster. R will use the extent of the
spatial object as the cropping boundary.
# plot full CHM
plot(chm_HARV,
main="LiDAR CHM - Not Cropped\nNEON Harvard Forest Field Site")
# crop the chm
chm_HARV_Crop <- crop(x = chm_HARV, y = aoiBoundary_HARV)
# plot full CHM
plot(extent(chm_HARV),
lwd=4,col="springgreen",
main="LiDAR CHM - Cropped\nNEON Harvard Forest Field Site",
xlab="easting", ylab="northing")
plot(chm_HARV_Crop,
add=TRUE)
We can see from the plot above that the full CHM extent (plotted in green) is
much larger than the resulting cropped raster. Our new cropped CHM now has the
same extent as the aoiBoundary_HARV object that was used as the crop extent
(blue border below).
We can look at the extent of all the other objects.
# lets look at the extent of all of our objects
extent(chm_HARV)
## class : Extent
## xmin : 731453
## xmax : 733150
## ymin : 4712471
## ymax : 4713838
extent(chm_HARV_Crop)
## class : Extent
## xmin : 732128
## xmax : 732251
## ymin : 4713209
## ymax : 4713359
extent(aoiBoundary_HARV)
## class : Extent
## xmin : 732128
## xmax : 732251.1
## ymin : 4713209
## ymax : 4713359
Which object has the largest extent? Our plot location extent is not the
largest but it is larger than the AOI Boundary. It would be nice to see our
vegetation plot locations with the Canopy Height Model information.
### Challenge: Crop to Vector Points Extent
Crop the Canopy Height Model to the extent of the study plot locations.
Plot the vegetation plot location points on top of the Canopy Height Model.
If you completed the
.csv to Shapefile in R tutorial
you have these plot locations as the R spatial object
plot.locationsSp_HARV. Otherwise, import the locations from the
\HARV\PlotLocations_HARV.shp shapefile in the downloaded data.
In the plot above, created in the challenge, all the vegetation plot locations
(blue) appear on the Canopy Height Model raster layer except for one. One is
situated on the white space. Why?
A modification of the first figure in this tutorial is below, showing the
relative extents of all the spatial objects. Notice that the extent for our
vegetation plot layer (black) extends further west than the extent of our CHM
raster (bright green). The crop function will make a raster extent smaller; it
will not expand the extent in areas where there are no data. Thus, the extent
of our vegetation plot layer will still extend further west than the extent of
our (cropped) raster data (dark green).
Define an Extent
We can also use the extent() function to define an extent to be used as a cropping
boundary. This creates an object of class Extent.
Once we have defined the extent, we can use the crop function to crop our
raster.
# define a new extent to crop to (xmin, xmax, ymin, ymax)
# these coordinates are an example; pick any region inside the CHM
new.extent <- extent(732161, 732232, 4713249, 4713333)
# crop raster
CHM_HARV_manualCrop <- crop(x = chm_HARV, y = new.extent)
# plot extent boundary and newly cropped raster
plot(aoiBoundary_HARV,
main = "Manually Cropped Raster\n NEON Harvard Forest Field Site")
plot(new.extent,
col="brown",
lwd=4,
add = TRUE)
plot(CHM_HARV_manualCrop,
add = TRUE)
Notice that our manually set new.extent (in brown) is smaller than the
aoiBoundary_HARV, and that the cropped raster now matches the new.extent
object.
See the documentation for the extent() function (??raster::extent) for more
ways to create an extent object.
Extract Raster Pixel Values Using Vector Polygons
Often we want to extract values from a raster layer for particular locations -
for example, plot locations that we are sampling on the ground.
Extract raster information using a polygon boundary. We can
extract all pixel values within 20m of our x,y point of interest. These can
then be summarized into some value of interest (e.g. mean, maximum, total).
Source: National Ecological Observatory Network (NEON).
To do this in R, we use the extract() function. The extract() function
requires:
The raster that we wish to extract values from
The vector layer containing the polygons that we wish to use as a boundary or
boundaries
NOTE: We can tell extract() to return the output values in a data.frame by
setting df=TRUE (optional; the default is to return a list).
We will begin by extracting all canopy height pixel values located within our
aoiBoundary polygon which surrounds the tower located at the NEON Harvard
Forest field site.
# extract tree height for AOI
# set df=TRUE to return a data.frame rather than a list of values
tree_height <- raster::extract(x = chm_HARV,
y = aoiBoundary_HARV,
df = TRUE)
# view the object
head(tree_height)
## ID HARV_chmCrop
## 1 1 21.20
## 2 1 23.85
## 3 1 23.83
## 4 1 22.36
## 5 1 23.95
## 6 1 23.89
nrow(tree_height)
## [1] 18450
When we use the extract() function, R extracts the value for each pixel located
within the boundary of the polygon being used to perform the extraction, in
this case the aoiBoundary object (a single polygon). Using the aoiBoundary as
the boundary polygon, the function extracted values from 18,450 pixels.
The extract() function returns a list of values by default, but you can tell R
to summarize the data in some way or to return the data as a data.frame
(df=TRUE).
We can create a histogram of tree height values within the boundary to better
understand the structure or height distribution of trees. We can also use the
summary() function to view descriptive statistics including min, max and mean
height values to help us better understand vegetation at our field
site.
# view histogram of tree heights in study area
hist(tree_height$HARV_chmCrop,
     main="Histogram of CHM Height Values (m) \nNEON Harvard Forest Field Site",
     col="springgreen",
     xlab="Tree Height", ylab="Frequency of Pixels")
# view summary of values
summary(tree_height$HARV_chmCrop)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 2.03 21.36 22.81 22.43 23.97 38.17
Check out the documentation for the extract() function for more details
(??raster::extract).
Summarize Extracted Raster Values
We often want to extract summary values from a raster. We can tell R the type
of summary statistic we are interested in using the fun= argument. Let's extract
a mean height value for our AOI.
# extract the average tree height (calculated using the raster pixels)
# located within the AOI polygon
av_tree_height_AOI <- raster::extract(x = chm_HARV,
                                      y = aoiBoundary_HARV,
                                      fun=mean,
                                      df=TRUE)
# view output
av_tree_height_AOI
## ID HARV_chmCrop
## 1 1 22.43018
It appears that the mean height value, extracted from our LiDAR-derived
canopy height model, is 22.43 meters.
Extract Data using x,y Locations
We can also extract pixel values from a raster by defining a buffer or area
surrounding individual point locations using the extract() function. To do this
we define the summary method (fun=mean) and the buffer distance (buffer=20),
which represents the radius of a circular region around each point.
The buffer units are the same as the units of the data's CRS.
Extract raster information using a buffer region. All pixels
that are touched by the buffer region are included in the extract.
Source: National Ecological Observatory Network (NEON).
Let's put this into practice by figuring out the average tree height in the
20m around the tower location.
# what are the units of our buffer
crs(point_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# extract the average tree height (height is given by the raster pixel value)
# at the tower location
# use a buffer of 20 meters and mean function (fun)
av_tree_height_tower <- raster::extract(x = chm_HARV,
                                        y = point_HARV,
                                        buffer=20,
                                        fun=mean,
                                        df=TRUE)
# view data
head(av_tree_height_tower)
## ID HARV_chmCrop
## 1 1 22.38812
# how many rows are returned (one summary value per point)
nrow(av_tree_height_tower)
## [1] 1
### Challenge: Extract Raster Height Values For Plot Locations
Use the plot location points shapefile HARV/plot.locations_HARV.shp or spatial
object plot.locationsSp_HARV to extract an average tree height value for the
area within 20m of each vegetation plot location in the study area.
Create a simple plot showing the mean tree height of each plot using the plot()
function in base-R.
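One possible approach to this challenge, sketched below, assumes the plot.locationsSp_HARV object from the next tutorial in this series is already loaded; the column name HARV_chmCrop in the plotting step is taken from the extract output shown earlier.

```r
# extract the mean tree height within 20m of each plot location
mean_tree_height_plots <- raster::extract(x = chm_HARV,
                                          y = plot.locationsSp_HARV,
                                          buffer = 20,
                                          fun = mean,
                                          df = TRUE)

# plot the mean tree height of each plot using base-R
plot(mean_tree_height_plots$ID,
     mean_tree_height_plots$HARV_chmCrop,
     pch = 16,
     xlab = "Plot ID",
     ylab = "Mean Tree Height (m)",
     main = "Mean Tree Height at Each Plot\nNEON Harvard Forest Field Site")
```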
This tutorial will review how to import spatial points stored in .csv (Comma
Separated Value) format into
R as a spatial object - a SpatialPointsDataFrame. We will also
reproject data imported in a shapefile format, export a shapefile from an
R spatial object, and plot raster and vector data as
layers in the same plot.
Learning Objectives
After completing this tutorial, you will be able to:
Import .csv files containing x,y coordinate locations into R.
Convert a .csv to a spatial object.
Project coordinate locations provided in a Geographic
Coordinate System (Latitude, Longitude) to a projected coordinate system (UTM).
Plot raster and vector data in the same plot to create a map.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Spatial Data in Text Format
The HARV_PlotLocations.csv file contains x, y (point) locations for study
plots where NEON collects data on
vegetation and other ecological metrics.
We would like to:
Create a map of these plot locations.
Export the data in a shapefile format to share with our colleagues. This
shapefile can be imported into any GIS software.
Create a map showing vegetation height with plot locations layered on top.
Spatial data are sometimes stored in a text file format (.txt or .csv). If
the text file has an associated x and y location column, then we can
convert it into an R spatial object, which, in the case of point data,
will be a SpatialPointsDataFrame. The SpatialPointsDataFrame
allows us to store both the x,y values that represent the coordinate location
of each point and the associated attribute data, or columns describing each
feature in the spatial object.
**Data Tip:** There is a `SpatialPoints` object (not
`SpatialPointsDataFrame`) in R that does not allow you to store associated
attributes.
We will use the rgdal and raster libraries in this tutorial.
# load packages
library(rgdal) # for vector work; sp package should always load with rgdal
library(raster) # for metadata/attributes - vectors or rasters
# set working directory to data folder
# setwd("pathToDirHere")
Import .csv
To begin let's import the .csv file that contains plot coordinate x, y
locations at the NEON Harvard Forest Field Site (HARV_PlotLocations.csv) into
R. Note that we set stringsAsFactors=FALSE so our data imports as a
character rather than a factor class.
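The import call itself isn't shown above; a minimal version might look like the sketch below, assuming the file sits in the NEON-DS-Site-Layout-Files/HARV/ directory under your working directory.

```r
# read the .csv file; stringsAsFactors=FALSE keeps text columns as characters
plot.locations_HARV <- read.csv(
  "NEON-DS-Site-Layout-Files/HARV/HARV_PlotLocations.csv",
  stringsAsFactors = FALSE)

# look at the data structure
str(plot.locations_HARV)
```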
Also note that plot.locations_HARV is a data.frame that contains 21
locations (rows) and 15 variables (attributes).
Next, let's explore the data.frame to determine whether it contains
columns with coordinate values. If we are lucky, our .csv will contain columns
labeled:
"X" and "Y" OR
Latitude and Longitude OR
easting and northing (UTM coordinates)
Let's check out the column names of our file to look for these.
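A quick sketch of that check, assuming the data.frame was imported as plot.locations_HARV:

```r
# view the column names of the data.frame
names(plot.locations_HARV)
```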
Viewing the column names, we can see that our data.frame contains several
fields that might hold spatial information. The plot.locations_HARV$easting
and plot.locations_HARV$northing columns contain these coordinate values.
# view first 6 rows of the X and Y columns
head(plot.locations_HARV$easting)
## [1] 731405.3 731934.3 731754.3 731724.3 732125.3 731634.3
head(plot.locations_HARV$northing)
## [1] 4713456 4713415 4713115 4713595 4713846 4713295
# note that you can also call the same two columns using their COLUMN NUMBER
# view first 6 rows of the X and Y columns
head(plot.locations_HARV[,1])
## [1] 731405.3 731934.3 731754.3 731724.3 732125.3 731634.3
head(plot.locations_HARV[,2])
## [1] 4713456 4713415 4713115 4713595 4713846 4713295
So, we have coordinate values in our data.frame but in order to convert our
data.frame to a SpatialPointsDataFrame, we also need to know the CRS
associated with these coordinate values.
There are several ways to figure out the CRS of spatial data in text format.
We can explore the file itself to see if CRS information is embedded in the
file header or somewhere in the data columns.
Following the easting and northing columns, there is a geodeticDa and a
utmZone column. These appear to contain CRS information
(datum and projection), so let's view those next.
# view first 6 rows of the geodeticDa and utmZone columns
head(plot.locations_HARV$geodeticDa)
## [1] "WGS84" "WGS84" "WGS84" "WGS84" "WGS84" "WGS84"
head(plot.locations_HARV$utmZone)
## [1] "18N" "18N" "18N" "18N" "18N" "18N"
It is not typical to store CRS information in a column, but this particular
file contains CRS information this way. The geodeticDa and utmZone columns
contain the information that helps us determine the CRS:
To create the proj4 string associated with UTM Zone 18 WGS84, we could look up
the projection on the
spatial reference website,
which contains a list of CRS formats for each projection.
However, if we have other data in the UTM Zone 18N projection, it's much
easier to simply assign the crs() in proj4 format from that object to our
new spatial object. Let's import the roads layer from Harvard forest and check
out its CRS.
Note: if you do not have a CRS to borrow from another raster, see Option 2 in
the next section for how to convert to a spatial object and assign a
CRS.
# Import the line shapefile
lines_HARV <- readOGR( "NEON-DS-Site-Layout-Files/HARV/", "HARV_roads")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARV_roads"
## with 13 features
## It has 15 fields
# view CRS
crs(lines_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view extent
extent(lines_HARV)
## class : Extent
## xmin : 730741.2
## xmax : 733295.5
## ymin : 4711942
## ymax : 4714260
Exploring the data above, we can see that the lines shapefile is in
UTM zone 18N. We can thus use the CRS from that spatial object to convert our
non-spatial data.frame into a SpatialPointsDataFrame.
Next, let's create a crs object that we can use to define the CRS of our
SpatialPointsDataFrame when we create it.
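That step isn't shown above; a minimal sketch, borrowing the CRS from the lines_HARV object imported earlier, might be:

```r
# create a crs object from the roads layer, which is already in UTM 18N
utm18nCRS <- crs(lines_HARV)
utm18nCRS

# confirm the class of the new object
class(utm18nCRS)
```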
Let's convert our data.frame into a SpatialPointsDataFrame. To do
this, we need to specify:
The columns containing X (easting) and Y (northing) coordinate values
The CRS that the coordinates represent (units are included in the CRS).
Optional: the other columns stored in the data frame that you wish to append
as attributes to your spatial object.
We can add the CRS in two ways: borrow the CRS from another spatial object that
already has it assigned (Option 1), or add it directly using the proj4string
(Option 2).
Option 1: Borrow CRS
We will use the SpatialPointsDataFrame() function to perform the conversion
and add the CRS from our utm18nCRS object.
# note that the easting and northing columns are in columns 1 and 2
plot.locationsSp_HARV <- SpatialPointsDataFrame(plot.locations_HARV[,1:2],
                                                plot.locations_HARV, # the R object to convert
                                                proj4string = utm18nCRS) # assign a CRS
# look at CRS
crs(plot.locationsSp_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
Option 2: Assigning CRS
If we don't have another spatial object from which to borrow the CRS, we can
assign it directly using either of two equivalent, but slightly different, syntaxes.
# first, convert the data.frame to spdf
r <- SpatialPointsDataFrame(plot.locations_HARV[,1:2],
                            plot.locations_HARV)
# second, assign the CRS in one of two ways
crs(r) <- "+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
+ellps=WGS84 +towgs84=0,0,0"
# or
crs(r) <- CRS("+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
+ellps=WGS84 +towgs84=0,0,0")
Plot Spatial Object
Now that we have a spatial R object, we can plot our newly created points.
# plot spatial object
plot(plot.locationsSp_HARV,
     main="Map of Plot Locations")
Define Plot Extent
In
Open and Plot Shapefiles in R
we learned about spatial object extent. When we plot several spatial layers in
R, the first layer that is plotted becomes the extent of the plot. If we add
additional layers that are outside of that extent, then the data will not be
visible in our plot. It is thus useful to know how to set the spatial extent of
a plot using xlim and ylim.
Let's first create a SpatialPolygon object from the
NEON-DS-Site-Layout-Files/HarClip_UTMZ18 shapefile. (If you have completed
Vector 00-02 tutorials in this
Introduction to Working with Vector Data in R
series, you can skip this code as you have already created this object.)
# create boundary object
aoiBoundary_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV/",
"HarClip_UTMZ18")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HarClip_UTMZ18"
## with 1 features
## It has 1 fields
## Integer64 fields read as strings: id
To begin, let's plot our aoiBoundary object with our vegetation plots.
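That attempted plot isn't shown above; it might look something like this sketch:

```r
# plot the AOI boundary first, then try to add the plot locations
plot(aoiBoundary_HARV,
     main = "AOI Boundary with Plot Locations")
plot(plot.locationsSp_HARV,
     pch = 8,
     add = TRUE)
```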
When we attempt to plot the two layers together, we can see that the plot
locations are not rendered. Our data are in the same projection,
so what is going on?
# view extent of each
extent(aoiBoundary_HARV)
## class : Extent
## xmin : 732128
## xmax : 732251.1
## ymin : 4713209
## ymax : 4713359
extent(plot.locationsSp_HARV)
## class : Extent
## xmin : 731405.3
## xmax : 732275.3
## ymin : 4712845
## ymax : 4713846
# add extra space to right of plot area;
# par(mar=c(5.1, 4.1, 4.1, 8.1), xpd=TRUE)
plot(extent(plot.locationsSp_HARV),
     col="purple",
     xlab="easting",
     ylab="northing", lwd=8,
     main="Extent Boundary of Plot Locations \nCompared to the AOI Spatial Object",
     ylim=c(4712400,4714000)) # extend the y axis to make room for the legend

plot(extent(aoiBoundary_HARV),
     add=TRUE,
     lwd=6,
     col="springgreen")

legend("bottomright",
       #inset=c(-0.5,0),
       legend=c("Layer One Extent", "Layer Two Extent"),
       bty="n",
       col=c("purple","springgreen"),
       cex=.8,
       lty=c(1,1),
       lwd=6)
The extents of our two objects are different. plot.locationsSp_HARV is
much larger than aoiBoundary_HARV. When we plot aoiBoundary_HARV first, R
uses the extent of that object as the plot extent. Thus the points in the
plot.locationsSp_HARV object are not rendered. To fix this, we can manually
assign the plot extent using xlim and ylim. We can grab the extent
values from the spatial object that has a larger extent. Let's try it.
The spatial extent of a shapefile or R spatial object
represents the geographic edge or location that is the furthest
north, south, east and west. Thus it represents the overall geographic
coverage of the spatial object. Source: National Ecological Observatory
Network (NEON)
plotLoc.extent <- extent(plot.locationsSp_HARV)
plotLoc.extent
## class : Extent
## xmin : 731405.3
## xmax : 732275.3
## ymin : 4712845
## ymax : 4713846
# grab the x and y min and max values from the spatial plot locations layer
xmin <- plotLoc.extent@xmin
xmax <- plotLoc.extent@xmax
ymin <- plotLoc.extent@ymin
ymax <- plotLoc.extent@ymax
# adjust the plot extent using x and ylim
plot(aoiBoundary_HARV,
     main="NEON Harvard Forest Field Site\nModified Extent",
     border="darkgreen",
     xlim=c(xmin,xmax),
     ylim=c(ymin,ymax))

plot(plot.locationsSp_HARV,
     pch=8,
     col="purple",
     add=TRUE)

# add a legend
legend("bottomright",
       legend=c("Plots", "AOI Boundary"),
       pch=c(8,NA),
       lty=c(NA,1),
       bty="n",
       col=c("purple","darkgreen"),
       cex=.8)
## Challenge - Import & Plot Additional Points
We want to add two phenology plots to our existing map of vegetation plot
locations.
Import the .csv: HARV/HARV_2NewPhenPlots.csv into R and do the following:
Find the X and Y coordinate locations. Which value is X and which value is Y?
These data were collected in a geographic coordinate system (WGS84). Convert
the data.frame into an R spatialPointsDataFrame.
Plot the new points with the plot location points from above. Be sure to add
a legend. Use a different symbol for the 2 new points! You may need to adjust
the X and Y limits of your plot to ensure that both points are rendered by R!
If you have extra time, feel free to add roads and other layers to your map!
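One possible approach is sketched below. The coordinate column names (decimalLon, decimalLat) are assumptions for illustration; check names() on your imported data.frame, since the actual .csv columns may differ.

```r
# import the new phenology plot locations
newPlots <- read.csv("NEON-DS-Site-Layout-Files/HARV/HARV_2NewPhenPlots.csv",
                     stringsAsFactors = FALSE)

# the data are in geographic WGS84; X is longitude, Y is latitude
# (column names below are assumed -- check names(newPlots) first)
geogCRS <- CRS("+proj=longlat +datum=WGS84 +no_defs")
newPlotsSp <- SpatialPointsDataFrame(newPlots[, c("decimalLon", "decimalLat")],
                                     newPlots,
                                     proj4string = geogCRS)

# reproject to UTM 18N so the points line up with the existing map
newPlotsSp_UTM <- spTransform(newPlotsSp, crs(plot.locationsSp_HARV))

# plot both point layers with different symbols and add a legend
plot(plot.locationsSp_HARV, pch = 8, col = "purple",
     main = "Vegetation & Phenology Plots")
plot(newPlotsSp_UTM, pch = 17, col = "darkorange", add = TRUE)
legend("bottomright",
       legend = c("Vegetation Plots", "Phenology Plots"),
       pch = c(8, 17),
       col = c("purple", "darkorange"),
       bty = "n")
```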
In this tutorial, we will create a base map of our study site using a United States
state and country boundary accessed from the
United States Census Bureau.
We will learn how to map vector data that are in different CRS and thus
don't line up on a map.
Learning Objectives
After completing this tutorial, you will be able to:
Identify the CRS of a spatial dataset.
Differentiate between geographic vs. projected coordinate reference systems.
Use the proj4 string format, which is one format used to
store & reference the CRS of a spatial object.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Working With Spatial Data From Different Sources
To support a project, we often need to gather spatial datasets from
different sources and/or data that cover different spatial extents. Spatial
data from different sources and that cover different extents are often in
different Coordinate Reference Systems (CRS).
Some reasons for data being in different CRS include:
The data are stored in a particular CRS convention used by the data
provider; perhaps a federal agency or a state planning office.
The data are stored in a particular CRS that is customized to a region.
For instance, many states prefer to use a State Plane projection customized
for that state.
Maps of the United States using data in different projections.
Notice the differences in shape associated with each different projection.
These differences are a direct result of the calculations used to "flatten"
the data onto a 2-dimensional map. Often data are stored purposefully in a
particular projection that optimizes the relative shape and size of
surrounding geographic boundaries (states, counties, countries, etc).
Source: M. Corey,
opennews.org
Check out this short video from
Buzzfeed
highlighting how map projections can make continents
seem proportionally larger or smaller than they actually are!
In this tutorial we will learn how to identify and manage spatial data
in different projections. We will learn how to reproject the data so that they
are in the same projection to support plotting / mapping. Note that these skills
are also required for any geoprocessing / spatial analysis, as data need to be in
the same CRS to ensure accurate results.
We will use the rgdal and raster libraries in this tutorial.
# load packages
library(rgdal) # for vector work; sp package should always load with rgdal.
library(raster) # for metadata/attributes - vectors or rasters
# set working directory to data folder
# setwd("pathToDirHere")
Import US Boundaries - Census Data
There are many good sources of boundary base layers that we can use to create a
basemap. Some R packages even have these base layers built in to support quick
and efficient mapping. In this tutorial, we will use boundary layers for the
United States, provided by the
United States Census Bureau.
It is useful to have shapefiles to work with because we can add additional
attributes to them if need be - for project specific mapping.
Read US Boundary File
We will use the readOGR() function to import the
/US-Boundary-Layers/US-State-Boundaries-Census-2014 layer into R. This layer
contains the boundaries of all continental states in the U.S. Please note that
these data have been modified and reprojected from the original data downloaded
from the Census website to support the learning goals of this tutorial.
# Import the state boundary shapefile
State.Boundary.US <- readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers",
                             "US-State-Boundaries-Census-2014")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/US-Boundary-Layers", layer: "US-State-Boundaries-Census-2014"
## with 58 features
## It has 10 fields
## Integer64 fields read as strings: ALAND AWATER
## Warning in readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers", "US-
## State-Boundaries-Census-2014"): Z-dimension discarded
# look at the data structure
class(State.Boundary.US)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
Note: the Z-dimension warning is normal. The readOGR() function doesn't import
z (vertical dimension or height) data by default. This is because not all
shapefiles contain z dimension data.
Now, let's plot the U.S. states data.
# plot the state boundaries
plot(State.Boundary.US,
     main="Map of Continental US State Boundaries\n US Census Bureau Data")
U.S. Boundary Layer
We can add a boundary layer of the United States to our map to make it look
nicer. We will import
NEON-DS-Site-Layout-Files/US-Boundary-Layers/US-Boundary-Dissolved-States.
If we specify a thicker line width using lwd=4 for the border layer, it will
make our map pop!
# Import the dissolved country boundary shapefile
Country.Boundary.US <- readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers",
                               "US-Boundary-Dissolved-States")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/US-Boundary-Layers", layer: "US-Boundary-Dissolved-States"
## with 1 features
## It has 9 fields
## Integer64 fields read as strings: ALAND AWATER
## Warning in readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers", "US-
## Boundary-Dissolved-States"): Z-dimension discarded
# look at the data structure
class(Country.Boundary.US)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
# plot the state boundaries
plot(State.Boundary.US,
     main="Map of Continental US State Boundaries\n US Census Bureau Data",
     border="gray40")

# add the dissolved country boundary on top
plot(Country.Boundary.US,
     lwd=4,
     border="gray18",
     add=TRUE)
Next, let's add the location of a flux tower where our study area is.
As we are adding these layers, take note of the class of each object.
# Import a point shapefile
point_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV/",
"HARVtower_UTM18N")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARVtower_UTM18N"
## with 1 features
## It has 14 fields
class(point_HARV)
## [1] "SpatialPointsDataFrame"
## attr(,"package")
## [1] "sp"
# plot point - looks ok?
plot(point_HARV,
     pch = 19,
     col = "purple",
     main="Harvard Fisher Tower Location")
The plot above demonstrates that the tower point location data are readable and
will plot! Let's next add it as a layer on top of the U.S. states and boundary
layers in our basemap plot.
# plot state boundaries
plot(State.Boundary.US,
     main="Map of Continental US State Boundaries \n with Tower Location",
     border="gray40")

# add US border outline
plot(Country.Boundary.US,
     lwd=4,
     border="gray18",
     add=TRUE)

# add point tower location
plot(point_HARV,
     pch = 19,
     col = "purple",
     add=TRUE)
What do you notice about the resultant plot? Do you see the tower location in
purple in the Massachusetts area? No! So what went wrong?
Let's check out the CRS (crs()) of both datasets to see if we can identify any
issues that might cause the point location to not plot properly on top of our
U.S. boundary layers.
# view CRS of our site data
crs(point_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view crs of census data
crs(State.Boundary.US)
## CRS arguments: +proj=longlat +datum=WGS84 +no_defs
crs(Country.Boundary.US)
## CRS arguments: +proj=longlat +datum=WGS84 +no_defs
It looks like our data are in different CRS. We can tell this by looking at
the CRS strings in proj4 format.
Understanding CRS in Proj4 Format
The CRS for our data are given to us by R in proj4 format. Let's break
down the pieces of proj4 string. The string contains all of the individual
CRS elements that R or another GIS might need. Each element is specified
with a + sign, similar to how a .csv file is delimited or broken up by
a ,. After each + we see the CRS element being defined. For example
projection (proj=) and datum (datum=).
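As a small illustration of that delimiting (nothing more than base-R string handling on an example string), splitting a proj4 string on spaces exposes each of its +element pieces:

```r
# split a proj4 string on spaces to see its individual +key=value elements
proj4 <- "+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs"
unlist(strsplit(proj4, " "))
```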
Geographic (Lat/Long) Proj4 String
The proj4 string for our State.Boundary.US object specifies a geographic
coordinate system as follows:
proj=longlat: the data are in a geographic (latitude and longitude)
coordinate system
datum=WGS84: the datum is WGS84
ellps=WGS84: the ellipsoid is WGS84
Note that there are no specified units above. This is because this geographic
coordinate reference system is in latitude and longitude which is most
often recorded in Decimal Degrees.
**Data Tip:** the last portion of each `proj4` string
is `+towgs84=0,0,0 `. This is a conversion factor that is used if a datum
conversion is required. We will not deal with datums in this tutorial series.
CRS Units - View Object Extent
Next, let's view the extent or spatial coverage for the point_HARV spatial
object compared to the State.Boundary.US object.
# extent for HARV in UTM
extent(point_HARV)
## class : Extent
## xmin : 732183.2
## xmax : 732183.2
## ymin : 4713265
## ymax : 4713265
# extent for object in geographic
extent(State.Boundary.US)
## class : Extent
## xmin : -124.7258
## xmax : -66.94989
## ymin : 24.49813
## ymax : 49.38436
Note the difference in the units for each object. The extent for
State.Boundary.US is in latitude and longitude, which yields smaller numbers
representing decimal degree units; however, our tower location point
is in UTM, which is represented in meters.
To view a list of datum conversion factors, type projInfo(type = "datum")
into the R console.
Reproject Vector Data
Now we know our data are in different CRS. To address this, we have to modify
or reproject the data so they are all in the same CRS. We can use the
spTransform() function to reproject our data. When we reproject the data, we
specify the CRS that we wish to transform our data to. This CRS contains
the datum, units and other information that R needs to reproject our data.
The spTransform() function requires two inputs:
The name of the object that you wish to transform
The CRS that you wish to transform that object to. In this case, we can
use the crs() of the State.Boundary.US object as follows:
crs(State.Boundary.US)
**Data Tip:** `spTransform()` will only work if your
original spatial object has a CRS assigned to it AND if that CRS is the
correct CRS!
Next, let's reproject our point layer into the geographic latitude and
longitude WGS84 coordinate reference system (CRS).
# reproject data
point_HARV_WGS84 <- spTransform(point_HARV,
                                crs(State.Boundary.US))
# what is the CRS of the new object
crs(point_HARV_WGS84)
## CRS arguments: +proj=longlat +datum=WGS84 +no_defs
# does the extent look like decimal degrees?
extent(point_HARV_WGS84)
## class : Extent
## xmin : -72.17266
## xmax : -72.17266
## ymin : 42.5369
## ymax : 42.5369
Once our data are reprojected, we can try to plot again.
# plot state boundaries
plot(State.Boundary.US,
     main="Map of Continental US State Boundaries\n With Fisher Tower Location",
     border="gray40")

# add US border outline
plot(Country.Boundary.US,
     lwd=4,
     border="gray18",
     add=TRUE)

# add point tower location
plot(point_HARV_WGS84,
     pch = 19,
     col = "purple",
     add=TRUE)
Reprojecting our data ensured that things line up on our map! It will also
allow us to perform any required geoprocessing (spatial calculations /
transformations) on our data.
## Challenge - Reproject Spatial Data
Create a map of the North Eastern United States as follows:
Import and plot Boundary-US-State-NEast.shp. Adjust line width as necessary.
Reproject the layer into UTM zone 18 north.
Layer the Fisher Tower point location point_HARV on top of the above plot.
Add a title to your plot.
Add a legend to your plot that shows both the state boundary (line) and
the Tower location point.
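A possible solution sketch follows; the directory path for the shapefile is an assumption based on the other boundary layers used above, so adjust it to your own data layout.

```r
# import the North Eastern states boundary (path is assumed)
NE.States <- readOGR("NEON-DS-Site-Layout-Files/US-Boundary-Layers",
                     "Boundary-US-State-NEast")

# reproject into UTM zone 18 north, borrowing the CRS from the tower point
NE.States.UTM <- spTransform(NE.States, crs(point_HARV))

# plot with the tower location layered on top, plus a title and legend
plot(NE.States.UTM,
     main = "North Eastern US States\nwith Fisher Tower Location",
     border = "gray40", lwd = 2)
plot(point_HARV, pch = 19, col = "purple", add = TRUE)
legend("bottomright",
       legend = c("State Boundary", "Fisher Tower"),
       lty = c(1, NA), pch = c(NA, 19),
       col = c("gray40", "purple"), bty = "n")
```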
This tutorial explains what shapefile attributes are and how to work with
shapefile attributes in R. It also covers how to identify and query shapefile
attributes, as well as subset shapefiles by specific attribute values.
Finally, we will review how to plot a shapefile according to a set of attribute
values.
Learning Objectives
After completing this tutorial, you will be able to:
Query shapefile attributes.
Subset shapefiles using specific attribute values.
Plot a shapefile, colored by unique attribute values.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
Shapefile Metadata & Attributes
When we import a shapefile into R, the readOGR() function automatically
stores metadata and attributes associated with the file.
Load the Data
To work with vector data in R, we can use the rgdal library. The raster
package also allows us to explore metadata using similar commands for both
raster and vector files.
We will import three shapefiles. The first is our AOI or area of
interest boundary polygon that we worked with in
Open and Plot Shapefiles in R.
The second is a shapefile containing the location of roads and trails within the
field site. The third is a file containing the Fisher tower location.
# load packages
# rgdal: for vector work; sp package should always load with rgdal.
library(rgdal)
# raster: for metadata/attributes- vectors or rasters
library(raster)
# set working directory to data folder
# setwd("pathToDirHere")
# Import a polygon shapefile
aoiBoundary_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV",
"HarClip_UTMZ18", stringsAsFactors = T)
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HarClip_UTMZ18"
## with 1 features
## It has 1 fields
## Integer64 fields read as strings: id
# Import a line shapefile
lines_HARV <- readOGR( "NEON-DS-Site-Layout-Files/HARV", "HARV_roads", stringsAsFactors = T)
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARV_roads"
## with 13 features
## It has 15 fields
# Import a point shapefile
point_HARV <- readOGR("NEON-DS-Site-Layout-Files/HARV",
"HARVtower_UTM18N", stringsAsFactors = T)
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HARVtower_UTM18N"
## with 1 features
## It has 14 fields
We can view key shapefile metadata using the following functions:
class() - Describes the type of vector data stored in the object.
length() - How many features are in this spatial object?
extent() - The spatial extent (the geographic area covered by the features)
of the object.
coordinate reference system (crs()) - The spatial projection that the data are
in.
Let's explore the metadata for our point_HARV object.
# view class
class(x = point_HARV)
## [1] "SpatialPointsDataFrame"
## attr(,"package")
## [1] "sp"
# x= isn't actually needed; it just specifies which object
# view features count
length(point_HARV)
## [1] 1
# view crs - note - this only works with the raster package loaded
crs(point_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view extent- note - this only works with the raster package loaded
extent(point_HARV)
## class : Extent
## xmin : 732183.2
## xmax : 732183.2
## ymin : 4713265
## ymax : 4713265
# view metadata summary
point_HARV
## class : SpatialPointsDataFrame
## features : 1
## extent : 732183.2, 732183.2, 4713265, 4713265 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 14
## names : Un_ID, Domain, DomainName, SiteName, Type, Sub_Type, Lat, Long, Zone, Easting, Northing, Ownership, County, annotation
## value : A, 1, Northeast, Harvard Forest, Core, Advanced Tower, 42.5369, -72.17266, 18, 732183.193774, 4713265.041137, Harvard University, LTER, Worcester, C1
About Shapefile Attributes
Shapefiles often contain an associated database or spreadsheet of values called
attributes that describe the vector features in the shapefile. You can think
of this like a spreadsheet with rows and columns. Each column in the spreadsheet
is an individual attribute that describes an object. Shapefile attributes
include measurements that correspond to the geometry of the shapefile features.
For example, the HARV_Roads shapefile (lines_HARV object) contains an
attribute called TYPE. Each line in the shapefile has an associated TYPE
which describes the type of road (woods road, footpath, boardwalk, or
stone wall).
The shapefile format allows us to store attributes for each
feature (vector object) stored in the shapefile. The attribute table is
similar to a spreadsheet. There is a row for each feature. The first column
contains the unique ID of the feature. We can add additional columns that
describe the feature. Image Source: National Ecological Observatory Network
(NEON)
We can look at all of the associated data attributes by printing the contents of
the data slot with objectName@data. We can use the base R length
function to count the number of attributes associated with a spatial object too.
# just view the attributes & first 6 attribute values of the data
head(lines_HARV@data)
## OBJECTID_1 OBJECTID TYPE NOTES MISCNOTES RULEID
## 0 14 48 woods road Locust Opening Rd <NA> 5
## 1 40 91 footpath <NA> <NA> 6
## 2 41 106 footpath <NA> <NA> 6
## 3 211 279 stone wall <NA> <NA> 1
## 4 212 280 stone wall <NA> <NA> 1
## 5 213 281 stone wall <NA> <NA> 1
## MAPLABEL SHAPE_LENG LABEL BIKEHORSE RESVEHICLE
## 0 Locust Opening Rd 1297.35706 Locust Opening Rd Y R1
## 1 <NA> 146.29984 <NA> Y R1
## 2 <NA> 676.71804 <NA> Y R2
## 3 <NA> 231.78957 <NA> <NA> <NA>
## 4 <NA> 45.50864 <NA> <NA> <NA>
## 5 <NA> 198.39043 <NA> <NA> <NA>
## RECMAP Shape_Le_1 ResVehic_1
## 0 Y 1297.10617 R1 - All Research Vehicles Allowed
## 1 Y 146.29983 R1 - All Research Vehicles Allowed
## 2 Y 676.71807 R2 - 4WD/High Clearance Vehicles Only
## 3 <NA> 231.78962 <NA>
## 4 <NA> 45.50859 <NA>
## 5 <NA> 198.39041 <NA>
## BicyclesHo
## 0 Bicycles and Horses Allowed
## 1 Bicycles and Horses Allowed
## 2 Bicycles and Horses Allowed
## 3 <NA>
## 4 <NA>
## 5 <NA>
# how many attributes are in our vector data object?
length(lines_HARV@data)
## [1] 15
We can view the individual name of each attribute using the
names(lines_HARV@data) method in R. We could also view just the first 6 rows
of attribute values using head(lines_HARV@data).
Let's give it a try.
# view just the attribute names for the lines_HARV spatial object
names(lines_HARV@data)
## [1] "OBJECTID_1" "OBJECTID" "TYPE" "NOTES" "MISCNOTES"
## [6] "RULEID" "MAPLABEL" "SHAPE_LENG" "LABEL" "BIKEHORSE"
## [11] "RESVEHICLE" "RECMAP" "Shape_Le_1" "ResVehic_1" "BicyclesHo"
### Challenge: Attributes for Different Spatial Classes
Explore the attributes associated with the `point_HARV` and `aoiBoundary_HARV`
spatial objects.
How many attributes do each have?
Who owns the site in the point_HARV data object?
Which of the following is NOT an attribute of the point data object?
A) Latitude B) County C) Country
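One way to start exploring (a sketch; it assumes the `point_HARV` and `aoiBoundary_HARV` objects were imported as shown elsewhere in this lesson):

```r
# count the attributes in each spatial object
length(point_HARV@data)        # number of attributes for the tower point
length(aoiBoundary_HARV@data)  # number of attributes for the AOI polygon

# who owns the site? inspect the Ownership attribute
point_HARV$Ownership
```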
Explore Values within One Attribute
We can explore the individual values stored within a particular attribute.
Keeping the spreadsheet or data.frame comparison in mind, this is similar
to exploring values in a column. We can do this using the $ and the name of
the attribute: objectName$attributeName.
# view all attributes in the lines shapefile within the TYPE field
lines_HARV$TYPE
## [1] woods road footpath footpath stone wall stone wall stone wall
## [7] stone wall stone wall stone wall boardwalk woods road woods road
## [13] woods road
## Levels: boardwalk footpath stone wall woods road
# view unique values within the "TYPE" attributes
levels(lines_HARV@data$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
Notice that two of our TYPE attribute values consist of two separate words:
stone wall and woods road. If we counted individual words we might think there
were six values; there are really only four unique TYPE values.
Subset Shapefiles
We can use the objectName$attributeName syntax to select a subset of features
from a spatial object in R.
# select features that are of TYPE "footpath"
# could put this code into other function to only have that function work on
# "footpath" lines
lines_HARV[lines_HARV$TYPE == "footpath",]
## class : SpatialLinesDataFrame
## features : 2
## extent : 731954.5, 732232.3, 4713131, 4713726 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 15
## names : OBJECTID_1, OBJECTID, TYPE, NOTES, MISCNOTES, RULEID, MAPLABEL, SHAPE_LENG, LABEL, BIKEHORSE, RESVEHICLE, RECMAP, Shape_Le_1, ResVehic_1, BicyclesHo
## min values : 40, 91, footpath, NA, NA, 6, NA, 146.299844868, NA, Y, R1, Y, 146.299831389, R1 - All Research Vehicles Allowed, Bicycles and Horses Allowed
## max values : 41, 106, footpath, NA, NA, 6, NA, 676.71804248, NA, Y, R2, Y, 676.718065323, R2 - 4WD/High Clearance Vehicles Only, Bicycles and Horses Allowed
# save an object with only footpath lines
footpath_HARV <- lines_HARV[lines_HARV$TYPE == "footpath",]
footpath_HARV
## class : SpatialLinesDataFrame
## features : 2
## extent : 731954.5, 732232.3, 4713131, 4713726 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 15
## names : OBJECTID_1, OBJECTID, TYPE, NOTES, MISCNOTES, RULEID, MAPLABEL, SHAPE_LENG, LABEL, BIKEHORSE, RESVEHICLE, RECMAP, Shape_Le_1, ResVehic_1, BicyclesHo
## min values : 40, 91, footpath, NA, NA, 6, NA, 146.299844868, NA, Y, R1, Y, 146.299831389, R1 - All Research Vehicles Allowed, Bicycles and Horses Allowed
## max values : 41, 106, footpath, NA, NA, 6, NA, 676.71804248, NA, Y, R2, Y, 676.718065323, R2 - 4WD/High Clearance Vehicles Only, Bicycles and Horses Allowed
# how many features are in our new object
length(footpath_HARV)
## [1] 2
Our subsetting operation reduces the features count from 13 to 2. This means
that only two feature lines in our spatial object have the attribute
"TYPE=footpath".
We can plot our subsetted shapefiles.
# plot just footpaths
plot(footpath_HARV,
lwd=6,
main="NEON Harvard Forest Field Site\n Footpaths")
Interesting! Above, it appeared as if we had 2 features in our footpaths subset.
Why does the plot look like there is only one feature?
Let's adjust the colors used in our plot. If we have 2 features in our vector
object, we can plot each using a unique color by assigning unique colors (col=)
to our features. We use the syntax
col=c("colorOne","colorTwo")
to do this.
# plot just footpaths
plot(footpath_HARV,
col=c("green","blue"), # set color for each feature
lwd=6,
main="NEON Harvard Forest Field Site\n Footpaths \n Feature one = green, Feature two = blue")
Now, we see that there are in fact two features in our plot!
### Challenge: Subset Spatial Line Objects
Subset out all:
boardwalk from the lines layer and plot it.
stone wall features from the lines layer and plot it.
For each plot, color each feature using a unique color.
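One possible approach (a sketch that follows the footpath example above; recall from the summary of TYPE that there is 1 boardwalk feature and 6 stone wall features):

```r
# subset the boardwalk lines and plot them
boardwalk_HARV <- lines_HARV[lines_HARV$TYPE == "boardwalk",]
plot(boardwalk_HARV,
     col="green",
     lwd=6,
     main="NEON Harvard Forest Field Site\n Boardwalks")

# subset the stone wall lines; with 6 features we need 6 colors
stonewall_HARV <- lines_HARV[lines_HARV$TYPE == "stone wall",]
plot(stonewall_HARV,
     col=rainbow(length(stonewall_HARV)),  # one unique color per feature
     lwd=6,
     main="NEON Harvard Forest Field Site\n Stone Walls")
```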
Plot Lines by Attribute Value
To plot vector data with the color determined by a set of attribute values, the
attribute values must be class = factor. A factor is similar to a category:
you can group vector objects by a particular category value - for example, you
can group all lines of TYPE=footpath. However, in R, a factor can also have
a determined order.
By default, R will import spatial object attributes as factors.
**Data Tip:** If our data attribute values are not
read in as factors, we can convert the categorical
attribute values using `as.factor()`.
# view the original class of the TYPE column
class(lines_HARV$TYPE)
## [1] "factor"
# view the levels or categories of the TYPE attribute
# each unique value in the attribute becomes a factor level
levels(lines_HARV$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
# Convert the TYPE attribute into a factor
# Only do this IF the data do not import as a factor!
# lines_HARV$TYPE <- as.factor(lines_HARV$TYPE)
# class(lines_HARV$TYPE)
# levels(lines_HARV$TYPE)
# how many features are in each category or level?
summary(lines_HARV$TYPE)
## boardwalk footpath stone wall woods road
## 1 2 6 4
When we use plot(), we can specify the colors to use for each attribute using
the col= element. To ensure that R renders each feature by its associated
factor / attribute value, we need to create a vector of colors - one for each
feature, according to its associated attribute value / factor value.
To create this vector we can use the following syntax:
a vector of colors - one for each factor value (unique attribute value)
the attribute itself ([object$factor]) of class factor
Let's give this a try.
# Check the class of the attribute - is it a factor?
class(lines_HARV$TYPE)
## [1] "factor"
# how many "levels" or unique values does the factor have?
# view factor values
levels(lines_HARV$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
# count the number of unique values or levels
length(levels(lines_HARV$TYPE))
## [1] 4
# create a color palette of 4 colors - one for each factor level
roadPalette <- c("blue","green","grey","purple")
roadPalette
## [1] "blue" "green" "grey" "purple"
# create a vector of colors - one for each feature in our vector object
# according to its attribute value
roadColors <- c("blue","green","grey","purple")[lines_HARV$TYPE]
roadColors
## [1] "purple" "green" "green" "grey" "grey" "grey" "grey"
## [8] "grey" "grey" "blue" "purple" "purple" "purple"
# plot the lines data, applying a different color to each factor level
plot(lines_HARV,
col=roadColors,
lwd=3,
main="NEON Harvard Forest Field Site\n Roads & Trails")
Adjust Line Width
We can also adjust the width of our plot lines using lwd. We can set all lines
to be thicker or thinner using lwd=.
# make all lines thicker
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails\n All Lines Thickness=6",
lwd=6)
Adjust Line Width by Attribute
If we want a unique line width for each factor level or attribute category
in our spatial object, we can use the same syntax that we used for colors, above:
Note that this requires the attribute to be of class factor. Let's give it a
try.
class(lines_HARV$TYPE)
## [1] "factor"
levels(lines_HARV$TYPE)
## [1] "boardwalk" "footpath" "stone wall" "woods road"
# create vector of line widths
lineWidths <- (c(1,2,3,4))[lines_HARV$TYPE]
# adjust line width by level
# in this case, woods road (the last level) is the widest.
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails \n Line width varies by TYPE Attribute Value",
lwd=lineWidths)
### Challenge: Plot Line Width by Attribute
We can customize the width of each line, according to specific attribute value,
too. To do this, we create a vector of line width values, and map that vector
to the factor levels - using the same syntax that we used above for colors.
HINT: `lwd=(vector of line width thicknesses)[spatialObject$factorAttribute]`
Create a plot of roads using the following line thicknesses:
woods road lwd=8
Boardwalks lwd = 2
footpath lwd=4
stone wall lwd=3
**Data Tip:** Given we have a factor with 4 levels,
we can create a vector of numbers, each of which specifies the thickness of each
feature in our `SpatialLinesDataFrame` by factor level (category): `c(6,4,1,2)[lines_HARV$TYPE]`
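One way to do this (a sketch): recall that factor levels are stored in alphabetical order (boardwalk, footpath, stone wall, woods road), so the width vector must follow that level order, not the order the widths are listed in the challenge.

```r
# widths in factor-level order: boardwalk=2, footpath=4, stone wall=3, woods road=8
lineWidth <- c(2,4,3,8)[lines_HARV$TYPE]

plot(lines_HARV,
     col=roadColors,
     main="NEON Harvard Forest Field Site\n Roads & Trails \n Line width varies by TYPE",
     lwd=lineWidth)
```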
Add Plot Legend
We can add a legend to our plot too. When we add a legend, we use the following
elements to specify labels and colors:
bottomright: We specify the location of our legend by using a default
keyword. We could also use top, topright, etc.
levels(objectName$attributeName): Label the legend elements using the
categories of levels in an attribute (e.g., levels(lines_HARV$TYPE) means use
the levels boardwalk, footpath, etc).
fill=: apply unique colors to the boxes in our legend. palette() is
the default set of colors that R applies to all plots.
Let's add a legend to our plot.
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails\n Default Legend")
# we can use the color object that we created above to color the legend objects
roadPalette
## [1] "blue" "green" "grey" "purple"
# add a legend to our map
legend("bottomright", # location of legend
legend=levels(lines_HARV$TYPE), # categories or elements to render in
# the legend
fill=roadPalette) # color palette to use to fill objects in legend.
We can tweak the appearance of our legend too.
bty="n": turn off the legend border
cex: change the font size
Let's try it out.
plot(lines_HARV,
col=roadColors,
main="NEON Harvard Forest Field Site\n Roads & Trails \n Modified Legend")
# add a legend to our map
legend("bottomright",
legend=levels(lines_HARV$TYPE),
fill=roadPalette,
bty="n", # turn off the legend border
cex=.8) # decrease the font / legend size
We can modify the colors used to plot our lines by creating a new color vector
directly in the plot code rather than creating a separate object.
col=(newColors)[lines_HARV$TYPE]
Let's try it!
# manually set the colors for the plot!
newColors <- c("springgreen", "blue", "magenta", "orange")
newColors
## [1] "springgreen" "blue" "magenta" "orange"
# plot using new colors
plot(lines_HARV,
col=(newColors)[lines_HARV$TYPE],
main="NEON Harvard Forest Field Site\n Roads & Trails \n Pretty Colors")
# add a legend to our map
legend("bottomright",
levels(lines_HARV$TYPE),
fill=newColors,
bty="n", cex=.8)
**Data Tip:** You can modify the default R color palette
using the palette method. For example `palette(rainbow(6))` or
`palette(terrain.colors(6))`. You can reset the palette colors using
`palette("default")`!
### Challenge: Plot Lines by Attribute
Create a plot that emphasizes only roads where bicycles and horses are allowed.
To emphasize this, make the lines where bicycles are not allowed THINNER than
the roads where bicycles are allowed.
NOTE: this attribute information is located in the `lines_HARV$BicyclesHo`
attribute.
Be sure to add a title and legend to your map! You might consider a color
palette that has all bike/horse-friendly roads displayed in a bright color. All
other lines can be grey.
### Challenge: Plot Polygon by Attribute
Create a map of the State boundaries in the United States using the data
located in your downloaded data folder: NEON-DS-Site-Layout-Files/US-Boundary-Layers/US-State-Boundaries-Census-2014.
Apply a fill color to each state using its region value. Add a legend.
Using the NEON-DS-Site-Layout-Files/HARV/PlotLocations_HARV.shp shapefile,
create a map of study plot locations, with each point colored by the soil type
(soilTypeOr). Question: How many different soil types are there at this particular field site?
BONUS -- modify the field site plot above. Plot each point,
using a different symbol. HINT: you can assign the symbol using pch= value.
You can create a vector object of symbols by factor level using the same
syntax that we used above to create a vector of line widths and colors:
pch=c(15,17)[objectName$soilTypeOr], where objectName is your plot-locations
object. Type ?pch to learn more about pch or
use google to find a list of pch symbols that you can use in R.
R Script & Challenge Code: NEON data lessons often contain challenges that reinforce
learned skills. If available, the code for challenge solutions is found in the
downloadable R script of the entire lesson, available in the footer of each lesson page.
About Vector Data
Vector data are composed of discrete geometric locations (x,y values) known as
vertices that define the "shape" of the spatial object. The organization
of the vertices determines the type of vector that we are working
with: point, line or polygon.
There are 3 types of vector objects: points, lines or
polygons. Each object type has a different structure.
Image Source: National Ecological Observatory Network (NEON)
Points: Each individual point is defined by a single x, y coordinate.
There can be many points in a vector point file. Examples of point data include:
sampling locations, the location of individual trees or the location of plots.
Lines: Lines are composed of many (at least 2) vertices, or points, that
are connected. For instance, a road or a stream may be represented by a line. This
line is composed of a series of segments; each "bend" in the road or stream
represents a vertex that has a defined x, y location.
Polygons: A polygon consists of 3 or more vertices that are connected and
"closed". Thus the outlines of plot boundaries, lakes, oceans, and states or
countries are often represented by polygons. Occasionally, a polygon can have a
hole in the middle of it (like a doughnut); this is something to be aware of but
not an issue we will deal with in this tutorial.
**Data Tip:** Sometimes, boundary layers such as
states and countries, are stored as lines rather than polygons. However, these
boundaries, when represented as a line, will not create a closed object with a defined "area" that can be "filled".
Shapefiles: Points, Lines, and Polygons
Geospatial data in vector format are often stored in a shapefile format.
Because the structure of points, lines, and polygons are different, each
individual shapefile can only contain one vector type (all points, all lines
or all polygons). You will not find a mixture of point, line and polygon
objects in a single shapefile.
Objects stored in a shapefile often have a set of associated attributes that
describe the data. For example, a line shapefile that contains the locations of
streams, might contain the associated stream name, stream "order" and other
information about each stream line object.
We will use the rgdal package to work with vector data in R. Notice that the
sp package automatically loads when rgdal is loaded. We will also load the
raster package so we can explore raster and vector spatial metadata using similar commands.
# load required libraries
# for vector work; sp package will load with rgdal.
library(rgdal)
# for metadata/attributes- vectors or rasters
library(raster)
# set working directory to the directory location on your computer where
# you downloaded and unzipped the data files for the tutorial
# setwd("pathToDirHere")
The shapefiles that we will import are:
A polygon shapefile representing our field site boundary.
The first shapefile that we will open contains the boundary of our study area
(or our Area Of Interest or AOI, hence the name aoiBoundary). To import
shapefiles we use the R function readOGR().
readOGR() requires two components:
The directory where our shapefile lives: NEON-DS-Site-Layout-Files/HARV
The name of the shapefile (without the extension): HarClip_UTMZ18
Let's import our AOI.
# Import a polygon shapefile: readOGR("path","fileName")
# no extension needed as readOGR only imports shapefiles
aoiBoundary_HARV <- readOGR(dsn=path.expand("NEON-DS-Site-Layout-Files/HARV"),
layer="HarClip_UTMZ18")
## OGR data source with driver: ESRI Shapefile
## Source: "/Users/olearyd/Git/data/NEON-DS-Site-Layout-Files/HARV", layer: "HarClip_UTMZ18"
## with 1 features
## It has 1 fields
## Integer64 fields read as strings: id
**Data Tip:** The acronym, OGR, refers to the
OpenGIS Simple Features Reference Implementation.
Learn more about OGR.
Shapefile Metadata & Attributes
When we import the HarClip_UTMZ18 shapefile layer into R (as our
aoiBoundary_HARV object), the readOGR() function automatically stores
information about the data. We are particularly interested in the geospatial
metadata, describing the format, CRS, extent, and other components of
the vector data, and the attributes which describe properties associated
with each individual vector object.
**Data Tip:** The
*Shapefile Metadata & Attributes in R*
tutorial provides more information on both metadata and attributes
and using attributes to subset and plot data.
Spatial Metadata
Key metadata for all shapefiles include:
Object Type: the class of the imported object.
Coordinate Reference System (CRS): the projection of the data.
Extent: the spatial extent (geographic area that the shapefile covers) of
the shapefile. Note that the spatial extent for a shapefile represents the
extent for ALL spatial objects in the shapefile.
We can view shapefile metadata using the class, crs and extent methods:
# view just the class for the shapefile
class(aoiBoundary_HARV)
## [1] "SpatialPolygonsDataFrame"
## attr(,"package")
## [1] "sp"
# view just the crs for the shapefile
crs(aoiBoundary_HARV)
## CRS arguments:
## +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
# view just the extent for the shapefile
extent(aoiBoundary_HARV)
## class : Extent
## xmin : 732128
## xmax : 732251.1
## ymin : 4713209
## ymax : 4713359
# view all metadata at same time
aoiBoundary_HARV
## class : SpatialPolygonsDataFrame
## features : 1
## extent : 732128, 732251.1, 4713209, 4713359 (xmin, xmax, ymin, ymax)
## crs : +proj=utm +zone=18 +datum=WGS84 +units=m +no_defs
## variables : 1
## names : id
## value : 1
Our aoiBoundary_HARV object is a polygon of class SpatialPolygonsDataFrame,
in the CRS UTM zone 18N. The CRS is critical to interpreting the object
extent values as it specifies units.
The spatial extent of a shapefile or R spatial object represents
the geographic "edge" or location that is the furthest north, south, east and
west. Thus it represents the overall geographic coverage of the spatial object.
Image Source: National Ecological Observatory Network (NEON)
Spatial Data Attributes
Each object in a shapefile has one or more attributes associated with it.
Shapefile attributes are similar to fields or columns in a spreadsheet. Each row
in the spreadsheet has a set of columns associated with it that describe the row
element. In the case of a shapefile, each row represents a spatial object - for
example, a road, represented as a line in a line shapefile, will have one "row"
of attributes associated with it. These attributes can include different types
of information that describe objects stored within a shapefile. Thus, our road,
may have a name, length, number of lanes, speed limit, type of road and other
attributes stored with it.
Each spatial feature in an R spatial object has the same set of
associated attributes that describe or characterize the feature.
Attribute data are stored in a separate *.dbf file. Attribute data can be
compared to a spreadsheet. Each row in a spreadsheet represents one feature
in the spatial object.
Image Source: National Ecological Observatory Network (NEON)
We view the attributes of a SpatialPolygonsDataFrame using objectName@data
(e.g., aoiBoundary_HARV@data).
# alternate way to view attributes
aoiBoundary_HARV@data
## id
## 0 1
In this case, our polygon object only has one attribute: id.
Metadata & Attribute Summary
We can view a metadata & attribute summary of each shapefile by entering
the name of the R object in the console. Note that the metadata output
includes the class, the number of features, the extent, and the
coordinate reference system (crs) of the R object. The last two lines of
summary show a preview of the R object attributes.
# view a summary of metadata & attributes associated with the spatial object
summary(aoiBoundary_HARV)
## Object of class SpatialPolygonsDataFrame
## Coordinates:
## min max
## x 732128 732251.1
## y 4713209 4713359.2
## Is projected: TRUE
## proj4string :
## [+proj=utm +zone=18 +datum=WGS84 +units=m +no_defs]
## Data attributes:
## id
## Length:1
## Class :character
## Mode :character
Plot a Shapefile
Next, let's visualize the data in our SpatialPolygonsDataFrame object using
plot().
# create a plot of the shapefile
# 'lwd' sets the line width
# 'col' sets internal color
# 'border' sets line color
plot(aoiBoundary_HARV, col="cyan1", border="black", lwd=3,
main="AOI Boundary Plot")
### Challenge: Import Line and Point Shapefiles
Using the steps above, import the HARV_roads and HARVtower_UTM18N layers into
R. Call the Harv_roads object `lines_HARV` and the HARVtower_UTM18N
`point_HARV`.
Answer the following questions:
What type of R spatial object is created when you import each layer?
What is the CRS and extent for each object?
Do the files contain points, lines or polygons?
How many spatial objects are in each file?
Plot Multiple Shapefiles
The plot() function can be used for basic plotting of spatial objects.
We use the add = TRUE argument to overlay shapefiles on top of each other, as
we would when creating a map in a typical GIS application like QGIS.
We can use main="" to give our plot a title. If we want the title to span two
lines, we use \n where the line should break.
# Plot multiple shapefiles
plot(aoiBoundary_HARV, col = "lightgreen",
main="NEON Harvard Forest\nField Site")
plot(lines_HARV, add = TRUE)
# use the pch element to adjust the symbology of the points
plot(point_HARV, add = TRUE, pch = 19, col = "purple")
### Challenge: Plot Raster & Vector Data Together
You can plot vector data layered on top of raster data using the add=TRUE
plot attribute. Create a plot that uses the NEON AOP Canopy Height Model NEON_RemoteSensing/HARV/CHM/HARV_chmCrop.tif as a base layer. On top of the
CHM, please add:
Sometimes we want to perform a calculation, or a set of calculations, multiple
times in our code. We could write out the equation over and over in our code --
OR -- we could chose to build a function that allows us to repeat several
operations with a single command. This tutorial will focus on creating functions
in R.
Learning Objectives
After completing this tutorial, you will be able to:
Explain why we should divide programs into small, single-purpose functions.
Use a function that takes parameters (input values).
Return a value from a function.
Set default values for function parameters.
Write, or define, a function.
Test and debug a function. (This section in construction).
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded
on your computer to complete this tutorial.
Set Working Directory: This lesson assumes that you have set your working
directory to the location of the downloaded and unzipped data subsets.
R Script & Challenge Code: NEON data lessons often contain challenges that
reinforce learned skills. If available, the code for challenge solutions is found
in the downloadable R script of the entire lesson, available in the footer of
each lesson page.
Creating Functions
Sometimes we want to perform a calculation, or a set of calculations, multiple
times in our code. For example, we might need to convert units from Celsius to
Kelvin, across multiple datasets and save it for future use.
We could write out the equation over and over in our code -- OR -- we could chose
to build a function that allows us to repeat several operations with a single
command. This tutorial will focus on creating functions in R.
Getting Started
Let's start by defining a function fahr_to_kelvin that converts temperature
values from Fahrenheit to Kelvin:
The definition begins with the name of your new function. Use a name that
describes what the function does and make sure it isn't the same as
a commonly used R function!
This is followed by the call to make it a function and a parenthesized list of
parameter names. The parameters are the input values that the function will use
to perform any calculations. In the case of fahr_to_kelvin, the input will be
the temperature value that we wish to convert from fahrenheit to kelvin. You can
have as many input parameters as you would like (but too many are poor style).
The body, or implementation, is surrounded by curly braces { }. Leaving the
initial curly bracket at the end of the first line and the final one on its own
line makes functions easier to read (for the human, the machine doesn't care).
In many languages, the body of the function - the statements that are executed
when it runs - must be indented, typically using 4 spaces.
**Data Tip:** While it is not mandatory in R to indent
your code 4 spaces within a function, it is strongly recommended as good
practice!
When we call the function, the values we pass to it are assigned to those
variables so that we can use them inside the function.
The last line within the function is what R will evaluate as the returned value.
If the last line is an assignment rather than a value, the result is returned
invisibly - the function will work, but it will print no output. In our example
we print the value of the object kelvin.
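Putting these pieces together, a definition consistent with the description above (a sketch; the conversion is K = (F - 32) * 5/9 + 273.15):

```r
# convert a temperature from Fahrenheit to Kelvin
fahr_to_kelvin <- function(temp) {
  kelvin <- ((temp - 32) * (5/9)) + 273.15
  kelvin  # the last line is evaluated and returned
}
```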
Calling our own function is no different from calling any other built in R
function that you are familiar with. Let's try running our function.
# call function for F=32 degrees
fahr_to_kelvin(32)
## [1] 273.15
# We could use `paste()` to create a sentence with the answer
paste('The boiling point of water (212 Fahrenheit) is',
fahr_to_kelvin(212),
'degrees Kelvin.')
## [1] "The boiling point of water (212 Fahrenheit) is 373.15 degrees Kelvin."
We've successfully called the function that we defined, and we have access to
the value that we returned.
Question: What would happen if we instead wrote our function as:
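For example, a version whose final line is an assignment rather than a value (a sketch):

```r
# same conversion, but the last line assigns instead of returning a value
fahr_to_kelvin_test <- function(temp) {
  kelvin <- ((temp - 32) * (5/9)) + 273.15
}
```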
Nothing is printed! This is because the final line of the function is an
assignment, so the value is returned invisibly rather than printed.
However, we can see that the function still worked by assigning the function to
object "a" and calling "a".
# assign to a
a <- fahr_to_kelvin_test(32)
# value of a
a
## [1] 273.15
We can see that even though there was no output from the function, the function
was still operational.
### Variable Scope
In R, variables assigned a value within a function do not retain their values
outside of the function.
x <- 1:3
x
## [1] 1 2 3
# define a function to add 1 to the temporary variable 'input'
plus_one <- function(input) {
input <- input + 1
}
# run our function
plus_one(x)
# x has not actually changed outside of the function
x
## [1] 1 2 3
To change a variable outside of a function you must assign the function's output
to that variable.
plus_one <- function(input) {
output <- input + 1 # store results to output variable
output # return output variable
}
# assign the results of our function to x
x <- plus_one(x)
x
## [1] 2 3 4
### Challenge: Writing Functions
Now that we've seen how to turn Fahrenheit into Kelvin, try your hand at
converting Kelvin to Celsius. Remember, for the same temperature, the value
in Celsius is 273.15 degrees less than the value in Kelvin.
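One possible solution (a sketch following the same pattern as fahr_to_kelvin):

```r
# convert a temperature from Kelvin to Celsius
kelvin_to_celsius <- function(temp) {
  celsius <- temp - 273.15
  celsius
}

kelvin_to_celsius(273.15)  # freezing point of water: 0
```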
Compound Functions
What about converting Fahrenheit to Celsius? We could write out the formula as a
new function or we can combine the two functions we have already created. It
might seem a bit silly to do this just for converting from Fahrenheit to Celcius
but think about the other applications where you will use functions!
# use two functions (F->K & K->C) to create a new one (F->C)
fahr_to_celsius <- function(temp) {
temp_k <- fahr_to_kelvin(temp)
temp_c <- kelvin_to_celsius(temp_k)
temp_c
}
paste('freezing point of water (32 Fahrenheit) in Celsius:',
fahr_to_celsius(32.0))
## [1] "freezing point of water (32 Fahrenheit) in Celsius: 0"
This is our first taste of how larger programs are built: we define basic
operations, then combine them in ever-larger chunks to get the effect we want.
Real-life functions will usually be larger than the ones shown here—typically
half a dozen to a few dozen lines—but they shouldn't ever be much longer than
that, or the next person who reads it won't be able to understand what's going
on.
Recommended Tutorials
This tutorial uses both dplyr and ggplot2. If you are new to either of these
R packages, we recommend the following NEON Data Skills tutorials before
working through this one.
Normalized Difference Vegetation Index (NDVI) is an indicator of how green
vegetation is.
Watch this two and a half minute video from
Karen Joyce
that explains what NDVI is and why it is used.
NDVI is derived from remote sensing data based on a ratio of the
reflectance of visible red light and near-infrared light. NDVI values
vary from -1.0 to 1.0.
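The underlying formula is NDVI = (NIR - Red) / (NIR + Red). As a quick sketch with illustrative reflectance values (not taken from the Harvard Forest data):

```r
# NDVI for a single pixel from near-infrared and red reflectance
nir <- 0.50  # hypothetical near-infrared reflectance
red <- 0.08  # hypothetical red reflectance
ndvi <- (nir - red) / (nir + red)
ndvi  # values near 1 indicate dense green vegetation
```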
The imagery data used to create this NDVI data were collected over the National
Ecological Observatory Network's
Harvard Forest
field site.
We need to read in two datasets: the 2009-2011 micrometeorological data and the
2011 NDVI data for the Harvard Forest.
# Remember it is good coding technique to add additional libraries to the top of
# your script
library(lubridate) # for working with dates
library(ggplot2) # for creating graphs
library(scales) # to access breaks/formatting functions
library(gridExtra) # for arranging plots
library(grid) # for arranging plots
library(dplyr) # for subsetting by season
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# read in the Harvard micro-meteorological data
harMetDaily.09.11 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv"),
stringsAsFactors = FALSE
)
#check out the data
str(harMetDaily.09.11)
## 'data.frame': 1095 obs. of 47 variables:
## $ X : int 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 ...
## $ date : chr "2009-01-01" "2009-01-02" "2009-01-03" "2009-01-04" ...
## $ jd : int 1 2 3 4 5 6 7 8 9 10 ...
## $ airt : num -15.1 -9.1 -5.5 -6.4 -2.4 -4.9 -2.6 -3.2 -9.9 -11.1 ...
## $ f.airt : chr "" "" "" "" ...
## $ airtmax : num -9.2 -3.7 -1.6 0 0.7 0 -0.2 -0.5 -6.1 -8 ...
## $ f.airtmax: chr "" "" "" "" ...
## $ airtmin : num -19.1 -15.8 -9.5 -11.4 -6.4 -10.1 -5.1 -9.9 -12.5 -15.9 ...
## $ f.airtmin: chr "" "" "" "" ...
## $ rh : int 58 75 69 59 77 65 97 93 78 77 ...
## $ f.rh : chr "" "" "" "" ...
## $ rhmax : int 76 97 97 78 97 88 100 100 89 92 ...
## $ f.rhmax : chr "" "" "" "" ...
## $ rhmin : int 33 47 41 40 45 38 77 76 63 54 ...
## $ f.rhmin : chr "" "" "" "" ...
## $ dewp : num -21.9 -12.9 -10.9 -13.3 -6.2 -10.9 -3 -4.2 -13.1 -14.5 ...
## $ f.dewp : chr "" "" "" "" ...
## $ dewpmax : num -20.4 -6.2 -6.4 -9.1 -1.7 -7.5 -0.5 -0.6 -11.2 -10.5 ...
## $ f.dewpmax: chr "" "" "" "" ...
## $ dewpmin : num -23.5 -21 -14.3 -16.3 -12.1 -13 -7.6 -11.8 -15.2 -18 ...
## $ f.dewpmin: chr "" "" "" "" ...
## $ prec : num 0 0 0 0 1 0 26.2 0.8 0 1 ...
## $ f.prec : chr "" "" "" "" ...
## $ slrt : num 8.4 3.7 8.1 8.3 2.9 6.3 0.8 2.8 8.8 5.7 ...
## $ f.slrt : chr "" "" "" "" ...
## $ part : num 16.7 7.3 14.8 16.2 5.4 11.7 1.8 7 18.2 11.4 ...
## $ f.part : chr "" "" "" "" ...
## $ netr : num -39.4 -16.6 -35.3 -24.7 -19.4 -18.9 5.6 -21.7 -31.1 -16 ...
## $ f.netr : chr "" "" "" "" ...
## $ bar : int 1011 1005 1004 1008 1006 1009 991 987 1005 1015 ...
## $ f.bar : chr "" "" "" "" ...
## $ wspd : num 2.4 1.4 2.7 1.9 2.1 1 1.4 0 1.3 1 ...
## $ f.wspd : chr "" "" "" "" ...
## $ wres : num 2.1 1 2.5 1.6 1.9 0.7 1.3 0 1.1 0.6 ...
## $ f.wres : chr "" "" "" "" ...
## $ wdir : int 294 237 278 292 268 257 86 0 273 321 ...
## $ f.wdir : chr "" "" "" "" ...
## $ wdev : int 29 42 24 31 26 44 24 0 20 50 ...
## $ f.wdev : chr "" "" "" "" ...
## $ gspd : num 13.4 8.1 13.9 8 11.6 5.1 9.1 0 10.1 5 ...
## $ f.gspd : chr "" "" "" "" ...
## $ s10t : num 1 1 1 1 1 1 1.1 1.2 1.4 1.3 ...
## $ f.s10t : logi NA NA NA NA NA NA ...
## $ s10tmax : num 1.1 1 1 1 1.1 1.1 1.1 1.3 1.4 1.4 ...
## $ f.s10tmax: logi NA NA NA NA NA NA ...
## $ s10tmin : num 1 1 1 1 1 1 1 1.1 1.3 1.2 ...
## $ f.s10tmin: logi NA NA NA NA NA NA ...
# read in the NDVI CSV data; if you don't already have it
NDVI.2011 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/NDVI/meanNDVI_HARV_2011.csv"),
stringsAsFactors = FALSE
)
# check out the data
str(NDVI.2011)
## 'data.frame': 11 obs. of 6 variables:
## $ X : int 1 2 3 4 5 6 7 8 9 10 ...
## $ meanNDVI : num 0.365 0.243 0.251 0.599 0.879 ...
## $ site : chr "HARV" "HARV" "HARV" "HARV" ...
## $ year : int 2011 2011 2011 2011 2011 2011 2011 2011 2011 2011 ...
## $ julianDay: int 5 37 85 133 181 197 213 229 245 261 ...
## $ Date : chr "2011-01-05" "2011-02-06" "2011-03-26" "2011-05-13" ...
In the NDVI dataset, we have the following variables:
'X': an integer identifying each row
meanNDVI: the mean NDVI for that date (a mean of all pixels in
the original raster)
site: "HARV", meaning all NDVI values are from the Harvard Forest
year: "2011", as all values are from 2011
julianDay: the numeric day of the year
Date: a date in "YYYY-MM-DD" format; currently in chr class
### Challenge: Class Conversion & Subset by Time
The goal of this challenge is to get our datasets ready so that we can work
with data from each, within the same plots or analyses.
Ensure that date fields within both datasets are in the Date class. If not,
convert the data to the Date class.
The NDVI data are limited to 2011; however, the meteorological data span
2009-2011. Subset and retain only the 2011 meteorological data. Name it
harMet.daily2011.
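One possible approach to this challenge (a sketch; the official solution is in the downloadable R script):

```r
# convert date fields in both datasets to the Date class
harMetDaily.09.11$date <- as.Date(harMetDaily.09.11$date)
NDVI.2011$Date <- as.Date(NDVI.2011$Date)

# subset the met data to 2011 only, using dplyr and lubridate
harMet.daily2011 <- harMetDaily.09.11 %>%
  filter(year(date) == 2011)
```

Both `filter()` (dplyr) and `year()` (lubridate) come from packages loaded at the top of this lesson.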
Now that we have our datasets with Date class dates and limited to 2011, we can
begin working with both.
Plot NDVI Data from a .csv
These NDVI data were derived from a raster and are now numeric values in a
data.frame, so we can plot them like any of our other values using
ggplot(). Here we plot meanNDVI by Date.
# plot NDVI by date
ggplot(NDVI.2011, aes(Date, meanNDVI))+
geom_point(colour = "forestgreen", size = 4) +
ggtitle("Daily NDVI at Harvard Forest, 2011")+
theme(legend.position = "none",
plot.title = element_text(lineheight=.8, face="bold",size = 20),
text = element_text(size=20))
Two y-axes or Side-by-Side Plots?
When we have different types of data like NDVI (scale: 0-1 index units),
Photosynthetically Active Radiation (PAR, scale: 0-65.8 mole per meter squared),
or temperature (scale: -20 to 30 C) that we want to plot over time, we cannot
simply plot them on the same plot as they have different y-axes.
One option would be to plot both data types in the same plot space, each
with its own y-axis (one on the left of the plot and one on the right). However,
one school of thought in graphical representation holds that this is not a good
practice. The creator of ggplot2 subscribes to this dislike of dual y-axes,
and so ggplot2 only supports a secondary axis that is a direct transformation
of the primary axis.
Instead, plots of different types of data can be plotted next to each other to
allow for comparison. Depending on how the plots are being viewed, they can
have a vertical or horizontal arrangement.
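With the gridExtra package (loaded above), two plots can be stacked for comparison. A sketch, where `plot1` and `plot2` stand in for any two ggplot objects you have created:

```r
# arrange two ggplot objects vertically in one viewer
# plot1 and plot2 are placeholder names for your own plot objects
grid.arrange(plot1, plot2, ncol = 1)
```

Use `ncol = 2` instead for a horizontal, side-by-side arrangement.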
### Challenge: Plot PAR and NDVI
Plot the NDVI vs Date (previous plot) and PAR vs Date (create a new plot) in the
same viewer so we can more easily compare them.
The figures from this Challenge are nice but a bit confusing as the dates on the
x-axis don't exactly line up. To fix this we can assign the same min and max
to both x-axes so that they align. The syntax for this, inside scale_x_date(), is:
limits=c(VALUE, VALUE).
In our case we want the min and max values to
be based on the min and max of the NDVI.2011$Date so we'll use a function
specifying this instead of a single value.
We can also assign the date format for the x-axis and clearly label both axes.
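Putting those pieces together, the x-axis scale might look like this (a sketch; the plot object name `plot.NDVI` is illustrative):

```r
# align the x-axis to the NDVI date range and format the tick labels
plot.NDVI <- ggplot(NDVI.2011, aes(Date, meanNDVI)) +
  geom_point(colour = "forestgreen", size = 4) +
  scale_x_date(limits = c(min(NDVI.2011$Date), max(NDVI.2011$Date)),
               labels = date_format("%b %y")) +
  xlab("Date") + ylab("Mean NDVI")
plot.NDVI
```

`date_format()` comes from the scales package loaded earlier; applying the same `limits` to the PAR plot makes the two x-axes line up.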
### Challenge: Plot Air Temperature and NDVI
Create a plot, complementary to those above, showing air temperature (`airt`)
throughout 2011. Choose colors and symbols that show the data well.
Second, plot PAR, air temperature and NDVI in a single pane for ease of
comparison.
In this tutorial we will learn how to create a panel of individual plots - known
as facets in ggplot2. Each plot represents a particular data_frame
time-series subset, for example a year or a season.
Load the Data
We will use the daily micro-meteorology data for 2009-2011 from the Harvard
Forest. If you do not have this data loaded into an R data_frame, please
load them and convert date-time columns to a date-time class now.
# Remember it is good coding technique to add additional libraries to the top of
# your script
library(lubridate) # for working with dates
library(ggplot2) # for creating graphs
library(scales) # to access breaks/formatting functions
library(gridExtra) # for arranging plots
library(grid) # for arranging plots
library(dplyr) # for subsetting by season
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# daily HARV met data, 2009-2011
harMetDaily.09.11 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv"),
stringsAsFactors = FALSE
)
# convert date to Date class
harMetDaily.09.11$date <- as.Date(harMetDaily.09.11$date)
ggplot2 Facets
Facets allow us to plot subsets of data in one cleanly organized panel. We use
facet_grid() to create a plot of a particular variable subsetted by a
particular group.
Let's plot air temperature as we did previously. We will name the ggplot
object AirTempDaily.
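That plot object might be built as follows (a sketch; color and title choices are illustrative):

```r
# plot daily air temperature, 2009-2011, and store it as an object
AirTempDaily <- ggplot(harMetDaily.09.11, aes(date, airt)) +
  geom_point(na.rm = TRUE, color = "purple", size = 1) +
  ggtitle("Air Temperature\n 2009-2011\n NEON Harvard Forest") +
  xlab("Date") + ylab("Air Temperature (C)")
# render the plot
AirTempDaily
```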
**Data Tip:** If you are working with a date & time
class (e.g. POSIXct), you can use `scale_x_datetime` instead of `scale_x_date`.
This plot tells us a lot about the annual increase and decrease of temperature
at the NEON Harvard Forest field site. However, what if we want to plot each
year's worth of data individually?
We can use the facet_grid() or facet_wrap() elements in ggplot2 to create facets - a
panel of plots that are grouped by a particular category or time period. To create a
plot for each year, we will first need a year column in our data to use as a
subset factor. We created a year column using the year function in the
lubridate package in the
Subset and Manipulate Time Series Data with dplyr tutorial.
# add year column to daily values
harMetDaily.09.11$year <- year(harMetDaily.09.11$date)
# view year column head and tail
head(harMetDaily.09.11$year)
## [1] 2009 2009 2009 2009 2009 2009
tail(harMetDaily.09.11$year)
## [1] 2011 2011 2011 2011 2011 2011
Facet by Year
Once we have a column that can be used to group or subset our data, we can
create a faceted plot - plotting each year's worth of data in an individual,
labelled panel.
# run this code to plot the same plot as before but with one plot per year
AirTempDaily + facet_grid(. ~ year)
## Error: At least one layer must contain all faceting variables: `year`.
## * Plot is missing `year`
## * Layer 1 is missing `year`
Oops - what happened? The plot did not render because we added the year column
after creating the ggplot object AirTempDaily. Let's rerun the plotting code
to ensure our newly added column is recognized.
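Rerunning the plotting code might look like this (a sketch, reusing the same aesthetics as before):

```r
# rebuild the plot object now that harMetDaily.09.11 has a year column
AirTempDaily <- ggplot(harMetDaily.09.11, aes(date, airt)) +
  geom_point(na.rm = TRUE, color = "purple", size = 1) +
  ggtitle("Air Temperature\n NEON Harvard Forest") +
  xlab("Date") + ylab("Air Temperature (C)")
# facet into one panel per year
AirTempDaily + facet_grid(. ~ year)
```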
The faceted plot is interesting, however the x-axis on each plot is formatted
as: month-day-year starting in 2009 and ending in 2011. This means that the data
for 2009 is on the left end of the x-axis and the data for 2011 is on the right
end of the x-axis of the 2011 plot.
Our plots would be easier to visually compare if the days were formatted in
Julian or year days rather than date. We have Julian days stored in our
data_frame (harMetDaily.09.11) as jd.
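A faceted plot using Julian day on the x-axis might be built as follows (a sketch; the title and styling are illustrative):

```r
# plot air temperature against Julian day, one panel per year
AirTempDaily_jd <- ggplot(harMetDaily.09.11, aes(jd, airt)) +
  geom_point(na.rm = TRUE) +
  ggtitle("Air Temperature by Julian Day\n NEON Harvard Forest") +
  xlab("Julian Day") + ylab("Air Temperature (C)")
AirTempDaily_jd + facet_grid(. ~ year)
```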
Using Julian day, our plots are easier to visually compare. Arranging our plots
this way, side by side, allows us to quickly scan for differences along the
y-axis. Notice any differences in min vs max air temperature across the three
years?
Arrange Facets
We can rearrange the facets in different ways, too.
# move labels to the RIGHT and stack all plots
AirTempDaily_jd + facet_grid(year ~ .)
If we use facet_wrap we can specify the number of columns.
# display in two columns
AirTempDaily_jd + facet_wrap(~year, ncol = 2)
Graph Two Variables on One Plot
Next, let's explore the relationship between two variables - air temperature
and soil temperature. We might expect soil temperature to fluctuate with changes
in air temperature over time.
We will use ggplot() to plot airt and s10t (soil temperature 10 cm below
the ground).
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature\n NEON Harvard Forest Field Site\n 2009-2011") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
airSoilTemp_Plot
The plot above suggests a relationship between the air and soil temperature as
we might expect. However, it clumps all three years worth of data into one plot.
Let's create a stacked faceted plot of air vs. soil temperature grouped by year.
Lucky for us, we can do this quickly with one line of code while reusing the
plot we created above.
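Reusing the plot object from above, that single line might be:

```r
# stack one air vs. soil temperature panel per year
airSoilTemp_Plot + facet_grid(year ~ .)
```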
Have a close look at the data. Are there any noticeable min/max temperature
differences between the three years?
### Challenge: Faceted Plot
Create a faceted plot of air temperature vs soil temperature by month rather
than year.
HINT: To create this plot, you will want to add a month column to our
data_frame. We can use lubridate month in the same way we used year to add
a year column.
Faceted Plots & Categorical Groups
In the challenge above, we grouped our data by month - specified by a numeric
value between 1 (January) and 12 (December). However, what if we wanted to
organize our plots using a categorical (character) group such as month name?
Let's do that next.
If we want to group our data by month name, we first need to create a month
name column in our data_frame. We can create this column using the following
syntax:
format(harMetDaily.09.11$date,"%B"),
which tells R to extract the month name (%B) from the date field.
# add text month name column
harMetDaily.09.11$month_name <- format(harMetDaily.09.11$date,"%B")
# view head and tail
head(harMetDaily.09.11$month_name)
## [1] "January" "January" "January" "January" "January" "January"
tail(harMetDaily.09.11$month_name)
## [1] "December" "December" "December" "December" "December" "December"
# recreate plot
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature \n NEON Harvard Forest Field Site\n 2009-2011") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
# create faceted panel
airSoilTemp_Plot + facet_wrap(~month_name, ncol = 3)
Great! We've created a nice set of plots by month. However, how are the plots
ordered? It looks like R is ordering things alphabetically, yet we know
that months are ordinal not character strings. To account for order, we can
reassign the month_name field to a factor. This will allow us to specify
an order to each factor "level" (each month is a level).
The syntax for this operation is:
Turn the field into a factor: factor(fieldName).
Designate the levels using a list: c(level1, level2, level3).
In our case, each level will be a month.
# order the factors
harMetDaily.09.11$month_name = factor(harMetDaily.09.11$month_name,
levels=c('January','February','March',
'April','May','June','July',
'August','September','October',
'November','December'))
Once we have specified the factor column and its associated levels, we can plot
again. Remember, that because we have modified a column in our data_frame, we
need to rerun our ggplot code.
# recreate plot
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature \n NEON Harvard Forest Field Site\n 2009-2011") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
# create faceted panel
airSoilTemp_Plot + facet_wrap(~month_name, ncol = 3)
Subset by Season - Advanced Topic
Sometimes we want to group data by custom time periods. For example, we might
want to group by season. However, the definition of various seasons may vary by
region which means we need to manually define each time period.
In the next coding section, we will add a season column to our data using a
manually defined query. Our field site is Harvard Forest (Massachusetts),
located in the northeastern portion of the United States. Based on the weather
of this region, we can divide the year into 4 seasons as follows:
Winter: December - February
Spring: March - May
Summer: June - August
Fall: September - November
In order to subset the data by season we will use the dplyr package. We
can use the numeric month column that we added to our data earlier in this
tutorial.
# add month to data_frame - note we already performed this step above.
harMetDaily.09.11$month <- month(harMetDaily.09.11$date)
# view head and tail of column
head(harMetDaily.09.11$month)
## [1] 1 1 1 1 1 1
tail(harMetDaily.09.11$month)
## [1] 12 12 12 12 12 12
We can use mutate() and a set of ifelse statements to create a new
categorical variable called season by grouping three months together.
Within dplyr, %in% is shorthand for "contained within". So the syntax
ifelse(month %in% c(12, 1, 2), "Winter",
can be read as "if the month column value is 12, 1, or 2, then assign the
value "Winter"".
Our ifelse statement ends with
ifelse(month %in% c(9, 10, 11), "Fall", "Error")
which we can translate as "if the month column value is 9, 10, or 11,
then assign the value "Fall"."
The last portion, "Error", tells R that if a month column value does not
fall within any of the criteria laid out in the previous ifelse statements,
it should assign the column the value "Error".
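Putting those pieces together, the season column can be created as follows (a sketch of the approach described above):

```r
# create a season column by grouping months, using dplyr's mutate()
harMetDaily.09.11 <- harMetDaily.09.11 %>%
  mutate(season =
           ifelse(month %in% c(12, 1, 2), "Winter",
           ifelse(month %in% c(3, 4, 5), "Spring",
           ifelse(month %in% c(6, 7, 8), "Summer",
           ifelse(month %in% c(9, 10, 11), "Fall", "Error")))))
```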
Now that we have a season column, we can plot our data by season!
# recreate plot
airSoilTemp_Plot <- ggplot(harMetDaily.09.11, aes(airt, s10t)) +
geom_point() +
ggtitle("Air vs. Soil Temperature\n 2009-2011\n NEON Harvard Forest Field Site") +
xlab("Air Temperature (C)") + ylab("Soil Temperature (C)") +
theme(plot.title = element_text(lineheight=.8, face="bold",
size = 20)) +
theme(text = element_text(size=18))
# run this code to plot the same plot as before but with one plot per season
airSoilTemp_Plot + facet_grid(. ~ season)
Note, that once again, we re-ran our ggplot code to make sure our new column
is recognized by R. We can experiment with various facet layouts next.
# for a landscape orientation of the plots we change the order of arguments in
# facet_grid():
airSoilTemp_Plot + facet_grid(season ~ .)
Once again, R is arranging the plots in an alphabetical order not an order
relevant to the data.
### Challenge: Create Plots by Season
The goal of this challenge is to create plots that show the relationship between
air and soil temperature across the different seasons with seasons arranged in
an ecologically meaningful order.
Create a factor class season variable by converting the season column that
we just created to a factor, then organize the seasons chronologically as
follows: Winter, Spring, Summer, Fall.
Create a new faceted plot that is 2 x 2 (2 columns of plots). HINT: One can
neatly plot multiple variables using facets as follows:
facet_grid(variable1 ~ variable2).
Create a plot of air vs soil temperature grouped by year and season.
Work with Year-Month Data: base R and zoo Package
Some data will have month formatted in Year-Month
(e.g. met_monthly_HARV$date).
(Note: You will load this file in the Challenge below)
For many analyses, we might want to summarize this data into a yearly total.
Base R does NOT have a distinct year-month date class. Instead, to work with a
year-month field in base R, we need to convert it to a Date class, which
necessitates adding an associated day value. The syntax would be:
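(A sketch reconstructed from the description that follows; met_monthly_HARV is loaded in the Challenge below, and the column name date_base matches that Challenge.)

```r
# append an arbitrary day ("-01") so base R can parse a full Date
met_monthly_HARV$date_base <- as.Date(
  paste(met_monthly_HARV$date, "-01", sep = ""))
```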
The syntax above creates a Date column from the met_monthly_HARV$date column.
We then add an arbitrary day - the first ("-01"). The final bit of code
(sep="") designates the character string used to separate the month, day,
and year portions of the returned string (in our case nothing, as we have
included the hyphen with our inserted date value).
Alternatively, to work directly with a year-month data we could use the zoo
package and its included year-month date class - as.yearmon. With zoo the
syntax would be:
as.Date(as.yearmon(met_monthly_HARV$date))
### Challenge: Convert Year-Month Data
The goal of this challenge is to use both the base R and the `zoo` package
methods for working with year-month data.
Load the NEON-DS-Met-Time-Series/HARV/FisherTower-Met/hf001-04-monthly-m.csv
file and give it the name met_monthly_HARV. Then:
Convert the date field into a date/time class using both base R and the
zoo package. Name the new fields date_base and ymon_zoo respectively.
Look at the format and check the class of both new date fields.
Convert the ymon_zoo field into a new Date class field (date_zoo) so it
can be used in base R, ggplot, etc.
HINT: be sure to load the zoo package, if you have not already.
Do you prefer to use base R or zoo to convert these data to a date/time
class?
**Data Tip:** `zoo` date/time classes cannot be used
directly with ggplot2. If your dates are best handled by
`zoo` date/time classes, you can still use ggplot2 with the addition of
other functions. For details see the
ggplot2.zoo documentation.
### Challenge: Plot Year-Month Data
Using the date field `date_base` (or `date_zoo`) that you created in the
previous challenge, create a faceted plot of annual air temperature for each
year (2001-2015) with month as the x-axis for the NEON Harvard Forest field
site.
This tutorial uses ggplot2 to create customized plots of time
series data. We will learn how to adjust x- and y-axis ticks using the scales
package, how to add trend lines to a scatter plot and how to customize plot
labels, colors and overall plot appearance using ggthemes.
Learning Objectives
After completing this tutorial, you will be able to:
Create basic time series plots using ggplot() in R.
Explain the syntax of ggplot() and know how to find out more about the
package.
Plot data using scatter and bar plots.
Things You’ll Need To Complete This Tutorial
You will need the most current version of R and, preferably, RStudio loaded on
your computer to complete this tutorial.
Plotting our data allows us to quickly see general patterns including
outlier points and trends. Plots are also a useful way to communicate the
results of our research. ggplot2 is a powerful R package that we use to
create customized, professional plots.
Load the Data
We will use the lubridate, ggplot2, scales and gridExtra packages in
this tutorial.
Our data subset will be the daily meteorology data for 2009-2011 for the NEON
Harvard Forest field site
(NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv).
If this subset is not already loaded, please load it now.
# Remember it is good coding technique to add additional packages to the top of
# your script
library(lubridate) # for working with dates
library(ggplot2) # for creating graphs
library(scales) # to access breaks/formatting functions
library(gridExtra) # for arranging plots
# set working directory to ensure R can find the file we wish to import
wd <- "~/Git/data/"
# daily HARV met data, 2009-2011
harMetDaily.09.11 <- read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Met_HARV_Daily_2009_2011.csv"),
stringsAsFactors = FALSE)
# convert date to Date class
harMetDaily.09.11$date <- as.Date(harMetDaily.09.11$date)
# monthly HARV temperature data, 2009-2011
harTemp.monthly.09.11<-read.csv(
file=paste0(wd,"NEON-DS-Met-Time-Series/HARV/FisherTower-Met/Temp_HARV_Monthly_09_11.csv"),
stringsAsFactors=FALSE
)
# datetime field is actually just a date
#str(harTemp.monthly.09.11)
# convert datetime from chr to date class & rename date for clarification
harTemp.monthly.09.11$date <- as.Date(harTemp.monthly.09.11$datetime)
Plot with qplot
We can use the qplot() function in the ggplot2 package to quickly plot a
variable such as air temperature (airt) across all three years of our daily
average time series data.
# plot air temp
qplot(x=date, y=airt,
data=harMetDaily.09.11, na.rm=TRUE,
main="Air temperature Harvard Forest\n 2009-2011",
xlab="Date", ylab="Temperature (°C)")
The resulting plot displays the pattern of air temperature increasing and
decreasing over three years. While qplot() is a quick way to plot data, our
ability to customize the output is limited.
Plot with ggplot
The ggplot() function within the ggplot2 package gives us more control
over plot appearance. However, to use ggplot we need to learn a slightly
different syntax. Three basic elements are needed for ggplot() to work:
The data_frame: containing the variables that we wish to plot,
aes (aesthetics): which denotes which variables will map to the x-, y-
(and other) axes,
geom_XXXX (geometry): which defines the data's graphical representation
(e.g. points (geom_point), bars (geom_bar), lines (geom_line), etc).
The syntax begins with the base statement that includes the data_frame
(harMetDaily.09.11) and associated x (date) and y (airt) variables to be
plotted:
ggplot(harMetDaily.09.11, aes(date, airt))
To successfully plot, the last piece that is needed is the geometry type. In
this case, we want to create a scatterplot so we can add + geom_point().
Let's create an air temperature scatterplot.
# plot Air Temperature Data across 2009-2011 using daily data
ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE)
Customize A Scatterplot
We can customize our plot in many ways. For instance, we can change the size,
shape, and color of the points using size=, pch=, and color= in the geom_point
element.
geom_point(na.rm=TRUE, color="blue", size=1)
# plot Air Temperature Data across 2009-2011 using daily data
ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="blue", size=3, pch=18)
Modify Title & Axis Labels
We can modify plot attributes by adding elements using the + symbol.
For example, we can add a title using + ggtitle("TEXT"), and axis
labels using + xlab("TEXT") + ylab("TEXT").
# plot Air Temperature Data across 2009-2011 using daily data
ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="blue", size=1) +
ggtitle("Air Temperature 2009-2011\n NEON Harvard Forest Field Site") +
xlab("Date") + ylab("Air Temperature (C)")
**Data Tip:** Use `help(ggplot2)` to review the many
elements that can be defined and added to a `ggplot2` plot.
Name Plot Objects
We can create a ggplot object by assigning our plot to an object name.
When we do this, the plot will not render automatically. To render the plot, we
need to call it in the code.
Assigning plots to an R object allows us to effectively add on to,
and modify the plot later. Let's create a new plot and call it AirTempDaily.
# plot Air Temperature Data across 2009-2011 using daily data
AirTempDaily <- ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="purple", size=1) +
ggtitle("Air Temperature\n 2009-2011\n NEON Harvard Forest") +
xlab("Date") + ylab("Air Temperature (C)")
# render the plot
AirTempDaily
Format Dates in Axis Labels
We can adjust the date display format (e.g. 2009-07 vs. Jul 09) and the number
of major and minor ticks for axis date values using scale_x_date. Let's
format the axis ticks so they read "month year" (%b %y). To do this, we will
use the syntax:
scale_x_date(labels=date_format("%b %y"))
Rather than re-coding the entire plot, we can add the scale_x_date element
to the plot object AirTempDaily that we just created.
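Adding that element to the existing object might look like this (a sketch; the object name AirTempDailyb is illustrative):

```r
# format x-axis tick labels as "month year", e.g. "Jul 09"
AirTempDailyb <- AirTempDaily +
  scale_x_date(labels = date_format("%b %y"))
AirTempDailyb
```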
**Data Tip:** You can type `?strptime` into the R
console to find a list of date format conversion specifications (e.g. %b = month).
Type `?scale_x_date` for a list of parameters that allow you to format dates
on the x-axis.
**Data Tip:** If you are working with a date & time
class (e.g. POSIXct), you can use `scale_x_datetime` instead of `scale_x_date`.
Adjust Date Ticks
We can adjust the date ticks too. In this instance, having 1 tick per year may
be enough. If we have the scales package loaded, we can use
breaks=date_breaks("1 year") within the scale_x_date element to create
a tick for every year. We can adjust this as needed (e.g. 10 days, 30 days, 1
month).
From R help (?date_breaks): width is an interval specification, one of "sec",
"min", "hour", "day", "week", "month", "year", optionally preceded by an integer
and a space, or followed by "s".
# format x-axis: dates
AirTempDaily_1y <- AirTempDaily +
(scale_x_date(breaks=date_breaks("1 year"),
labels=date_format("%b %y")))
AirTempDaily_1y
**Data Tip:** We can adjust the tick spacing and
format for x- and y-axes using `scale_x_continuous` or `scale_y_continuous` to
format a continuous variable. Check out `?scale_x_` (tab complete to view the
various x and y scale options)
ggplot - Subset by Time
Sometimes we want to scale the x- or y-axis to a particular time subset without
subsetting the entire data_frame. To do this, we can define start and end
times. We can then define the limits in the scale_x_date object as
follows:
scale_x_date(limits=start.end) +
# define start and end times for the subset as Date class R objects
startTime <- as.Date("2011-01-01")
endTime <- as.Date("2012-01-01")
# create a start and end time R object
start.end <- c(startTime,endTime)
start.end
## [1] "2011-01-01" "2012-01-01"
# View data for 2011 only
# We will replot the entire plot as the title has now changed.
AirTempDaily_2011 <- ggplot(harMetDaily.09.11, aes(date, airt)) +
geom_point(na.rm=TRUE, color="purple", size=1) +
ggtitle("Air Temperature\n 2011\n NEON Harvard Forest") +
xlab("Date") + ylab("Air Temperature (C)")+
(scale_x_date(limits=start.end,
breaks=date_breaks("1 year"),
labels=date_format("%b %y")))
AirTempDaily_2011
ggplot() Themes
We can use the theme() element to adjust figure elements.
There are some nice pre-defined themes that we can use as a starting place.
# Apply a black and white stock ggplot theme
AirTempDaily_bw<-AirTempDaily_1y +
theme_bw()
AirTempDaily_bw
Using theme_bw(), we now have a white background rather than grey.
Import New Themes (Bonus Topic)
There are externally developed themes built by the R community that are worth
mentioning. Feel free to experiment with the code below to install ggthemes.
A list of themes loaded in the ggthemes library is found here.
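A sketch of installing and applying one of those themes (the particular theme chosen here is illustrative):

```r
# install the ggthemes package (run once), then load it
# install.packages("ggthemes")
library(ggthemes)

# apply an externally developed theme to the existing plot object
AirTempDaily_1y + theme_economist()
```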
Customize ggplot Themes
We can customize theme elements manually too. Let's customize the font size and
style.
# format x axis with dates
AirTempDaily_custom<-AirTempDaily_1y +
# theme(plot.title) allows us to format the Title separately from other text
theme(plot.title = element_text(lineheight=.8, face="bold",size = 20)) +
# theme(text) will format all text that isn't specifically formatted elsewhere
theme(text = element_text(size=18))
AirTempDaily_custom
### Challenge: Plot Total Daily Precipitation
Create a plot of total daily precipitation using data in the `harMetDaily.09.11`
`data_frame`.
Format the dates on the x-axis: Month-Year.
Create a plot object called PrecipDaily.
Be sure to add an appropriate title in addition to x and y axis labels.
Increase the font size of the plot text and adjust the number of ticks on the
x-axis.
Bar Plots with ggplot
We can use ggplot to create bar plots too. Let's create a bar plot of total
daily precipitation next. A bar plot might be a better way to represent a total
daily value. To create a bar plot, we change the geom element from
geom_point() to geom_bar().
The default setting for a ggplot bar plot - geom_bar() - is a histogram
designated by stat="bin". However, in this case, we want to plot actual
precipitation values. We can use geom_bar(stat="identity") to force ggplot to
plot actual values.
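The initial bar plot, whose object name PrecipDailyBarA matches the snippet further below, might be sketched as (title and axis labels are illustrative):

```r
# plot daily precipitation as bars; stat="identity" plots the actual prec values
PrecipDailyBarA <- ggplot(harMetDaily.09.11, aes(date, prec)) +
  geom_bar(stat = "identity") +
  xlab("Date") + ylab("Precipitation (mm)") +
  ggtitle("Daily Precipitation\n NEON Harvard Forest")
PrecipDailyBarA
```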
Note that some of the bars in the resulting plot appear grey rather than black.
This is because R will do its best to adjust colors of bars that are closely
spaced to improve readability. If we zoom into the plot, all of the bars are
black.
### Challenge: Plot with scale_x_date()
Without creating a subsetted dataframe, plot the precipitation data for
*2010 only*. Customize the plot with:
a descriptive title and axis labels,
breaks every 4 months, and
x-axis labels as only the full month (spelled out).
HINT: you will need to rebuild the precipitation plot as you will have to
specify a new scale_x_date() element.
We can change the bar outline color within the
geom_bar(colour="blue") element. We can also specify a separate fill and outline
color using fill= and colour=. Colors can be specified by name (e.g.,
"blue") or by hexadecimal color codes (e.g., #FF9999).
There are many color cheatsheets out there to help with color selection!
# specifying color by name
PrecipDailyBarB <- PrecipDailyBarA+
geom_bar(stat="identity", colour="darkblue")
PrecipDailyBarB
**Data Tip:** For more information on color,
including color blind friendly color palettes, checkout the
ggplot2 color information from Winston Chang's *Cookbook* *for* *R* site
based on the _R_ _Graphics_ _Cookbook_ text.
Figures with Lines
We can create line plots too using ggplot. To do this, we use geom_line()
instead of bar or point.
Note that lines may not be the best way to represent air temperature data given
lines suggest that the connecting points are directly related. It is important
to consider what type of plot best represents the type of data that you are
presenting.
### Challenge: Combine Points & Lines
You can combine geometries within one plot. For example, you can have both a
`geom_line()` and a `geom_point()` element in a plot. Add `geom_line(na.rm=TRUE)` to
`AirTempDaily`, a `geom_point()` plot. What happens?
Trend Lines
We can add a trend line, which is a statistical transformation of our data to
represent general patterns, using stat_smooth(). stat_smooth() requires a
statistical method as follows:
For data with < 1000 observations: the default model is a loess model
(a non-parametric regression model)
For data with > 1,000 observations: the default model is a GAM (a generalized
additive model)
A specific model/method can also be specified: for example, a linear regression (method="lm").
For this tutorial, we will use the default trend line model. The gam method will
be used since we have 1,095 measurements.
**Data Tip:** Remember a trend line is a statistical
transformation of the data, so prior to adding the line one must understand if a
particular statistical transformation is appropriate for the data.
# add a trend line using the default smoothing method
AirTempDaily_trend <- AirTempDaily + stat_smooth(colour="green")
AirTempDaily_trend
## `geom_smooth()` using method = 'gam' and formula 'y ~ s(x, bs = "cs")'
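If you instead want a linear trend, you can pass the method explicitly. A sketch building on the AirTempDaily plot created earlier in the tutorial:

```r
# force a linear regression trend line instead of the default smoother
AirTempDaily_lm <- AirTempDaily +
  stat_smooth(method = "lm", colour = "green")

AirTempDaily_lm
```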
### Challenge: A Trend in Precipitation?
Create a bar plot of total daily precipitation. Add a:
Trend line for total daily precipitation.
Make the bars purple (or your favorite color!).
Make the trend line grey (or your other favorite color).
Adjust the tick spacing and format the dates to appear as "Jan 2009".
Render the title in italics.
## `geom_smooth()` using formula 'y ~ x'
### Challenge: Plot Monthly Air Temperature
Plot the monthly air temperature across 2009-2011 using the
harTemp.monthly.09.11 data_frame. Name your plot "AirTempMonthly". Be sure to
label axes and adjust the plot ticks as you see fit.
Display Multiple Figures in Same Panel
It is often useful to arrange plots in a panel rather than displaying them
individually. In base R, we use par(mfrow=c(rows, cols)) to accomplish this.
However, the grid.arrange() function from the gridExtra package provides a more
efficient approach!
grid.arrange requires 2 things:
the names of the plots that you wish to render in the panel.
the number of columns (ncol).
grid.arrange(plotOne, plotTwo, ncol=1)
Let's plot AirTempMonthly and AirTempDaily on top of each other. To do this,
we'll specify one column.
# note - be sure library(gridExtra) is loaded!
# stack plots in one column
grid.arrange(AirTempDaily, AirTempMonthly, ncol=1)
### Challenge: Create Panel of Plots
Plot AirTempMonthly and AirTempDaily next to each other rather than stacked
on top of each other.
Additional ggplot2 Resources
In this tutorial, we've covered the basics of ggplot. There are many great
resources that cover refining ggplot figures. A few are below: