
Hyperspectral Variation Uncertainty Analysis in Python

This tutorial teaches how to open a NEON AOP HDF5 file with a function, batch process several HDF5 files, make relative comparisons between several NIS observations of the same target from different view angles, and check the data for errors.

Objectives

After completing this tutorial, you will be able to:

  • Open NEON AOP HDF5 files using a function
  • Batch process several HDF5 files
  • Complete relative comparisons between several imaging spectrometer observations of the same target from different view angles
  • Error check the data

Install Python Packages

  • numpy
  • csv
  • gdal
  • matplotlib.pyplot
  • h5py
  • time

Download Data

To complete this tutorial, you will use data from the NEON 2017 Data Institute teaching dataset, which is available for download.

This tutorial will use the files contained in the 'F07A' directory of this ShareFile directory. You will want to download the entire directory as a single ZIP file, then extract that file into a location where you store your data.

Download Dataset

Caution: This dataset includes all the data for the 2017 Data Institute, including hyperspectral and lidar datasets and is therefore a large file (12 GB). Ensure that you have sufficient space on your hard drive before you begin the download. If not, download to an external hard drive and make sure to correct for the change in file path when working through the tutorial.

The LiDAR and imagery data used to create this raster teaching data subset were collected over the National Ecological Observatory Network's field sites and processed at NEON headquarters. The entire dataset can be accessed on the NEON data portal.

These data are a part of the NEON 2017 Remote Sensing Data Institute. The complete archive may be found here - NEON Teaching Data Subset: Data Institute 2017 Data Set

Recommended Prerequisites

We recommend you complete the following tutorials prior to this tutorial to have the necessary background.

  1. NEON AOP Hyperspectral Data in HDF5 format with Python
  2. Band Stacking, RGB & False Color Images, and Interactive Widgets in Python
  3. Plot a Spectral Signature in Python

The NEON AOP has flown several special flight plans called BRDF (bi-directional reflectance distribution function) flights. These flights were designed to quantify the effect of observing targets from a variety of different look angles and with varying surface roughness, allowing an assessment of the sensitivity of the NEON imaging spectrometer (NIS) results to these parameters. The BRDF flight plan takes the form of a star pattern with repeating overlapping flight lines in each direction. In the center of the pattern is an area where nearly all the flight lines overlap. This area allows us to retrieve a reflectance curve of the same target from the many different flight lines and visualize how the curves change for each acquisition. The following figure displays a BRDF flight plan as well as the number of flight lines (samples) which overlap.

Top: Flight lines from a bi-directional reflectance distribution function flight at ORNL. Bottom: A graphical representation of the number of samples in each area of the sampling. Source: National Ecological Observatory Network (NEON)

To date (June 2017), the NEON AOP has flown BRDF flights at SJER and SOAP (D17) and ORNL (D07). We will work with the ORNL BRDF flight, retrieve reflectance curves from up to 18 flight lines, and compare them to visualize the differences in the resulting curves. To reduce the file size, each BRDF flight line has been cropped to a rectangular area covering where all the lines overlap; several of the ancillary rasters normally included have also been removed.

We'll start off by again adding necessary libraries and our NEON AOP HDF5 reader function.

import h5py
import csv
import numpy as np
import os
import gdal
import matplotlib.pyplot as plt
import sys
from math import floor
import time
import warnings
warnings.filterwarnings('ignore')

def h5refl2array(h5_filename):
    hdf5_file = h5py.File(h5_filename,'r')

    #Get the site name
    file_attrs_string = str(list(hdf5_file.items()))
    file_attrs_string_split = file_attrs_string.split("'")
    sitename = file_attrs_string_split[1]
    refl = hdf5_file[sitename]['Reflectance']
    reflArray = refl['Reflectance_Data']
    refl_shape = reflArray.shape
    wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
    #Create dictionary containing relevant metadata information
    metadata = {}
    metadata['shape'] = reflArray.shape
    metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
    #Extract the no data value & scale factor
    metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
    metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
    metadata['bad_band_window1'] = refl.attrs['Band_Window_1_Nanometers']
    metadata['bad_band_window2'] = refl.attrs['Band_Window_2_Nanometers']
    metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'][()]
    metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'][()])
    mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'][()]
    mapInfo_string = str(mapInfo)
    mapInfo_split = mapInfo_string.split(",")
    #Extract the resolution & convert to a floating point number
    metadata['res'] = {}
    metadata['res']['pixelWidth'] = mapInfo_split[5]
    metadata['res']['pixelHeight'] = mapInfo_split[6]
    #Extract the upper left-hand corner coordinates from mapInfo
    xMin = float(mapInfo_split[3]) #convert from string to floating point number
    yMax = float(mapInfo_split[4])
    #Calculate the xMax and yMin values from the dimensions
    xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)
    yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)
    metadata['extent'] = (xMin,xMax,yMin,yMax)
    metadata['ext_dict'] = {}
    metadata['ext_dict']['xMin'] = xMin
    metadata['ext_dict']['xMax'] = xMax
    metadata['ext_dict']['yMin'] = yMin
    metadata['ext_dict']['yMax'] = yMax
    #Note: the file is left open because reflArray and wavelengths are h5py
    #datasets that are read lazily; closing the file here would invalidate them
    return reflArray, metadata, wavelengths
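
As a quick sanity check, you can call the reader on any one of the subset files and inspect the returned metadata (the path below is a placeholder; substitute one of your own downloaded files):

# quick check of the reader on a single file (placeholder path)
# reflArray, metadata, wavelengths = h5refl2array('/path/to/F07A/NEON_D07_F07A_DP1_20160611_160444_reflectance_modify.h5')
# print(metadata['shape'])    # (rows, columns, bands)
# print(metadata['ext_dict']) # UTM bounding coordinates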

print('Starting BRDF Analysis')
Starting BRDF Analysis

First we will define the extents of the rectangular array containing the section from each BRDF flightline.

BRDF_rectangle = np.array([[740315,3982265],[740928,3981839]],float)

Next we will define the coordinates of the target of interest. These can be set as any coordinate pair that falls within the rectangle above; note that the rectangle is defined in UTM Zone 16 N, so the coordinates must be as well.

x_coord = 740600
y_coord = 3982000

To prevent the function from failing, we will first check that the coordinates fall within the rectangular bounding box. If they do not, we print an error message and exit the script.

if BRDF_rectangle[0,0] <= x_coord <= BRDF_rectangle[1,0] and BRDF_rectangle[1,1] <= y_coord <= BRDF_rectangle[0,1]:
    print('Point in bounding area')
    y_index = floor(x_coord - BRDF_rectangle[0,0])
    x_index = floor(BRDF_rectangle[0,1] - y_coord)
else:
    print('Point not in bounding area, exiting')
    raise Exception('exit')
Point in bounding area

Now we will define the location of all the subset NEON AOP HDF5 files from the BRDF flight.

## You will need to update this filepath for your local data directory
h5_directory = "/Users/olearyd/Git/data/F07A/"

Now we will grab all of the files and folders within the defined directory, then cycle through them and retain only the h5 files.

files = os.listdir(h5_directory)
h5_files = [i for i in files if i.endswith('.h5')]

Now we will print the h5 files to make sure they have all been found, and set up a figure for plotting all of the reflectance curves.

print(h5_files)

fig=plt.figure()
ax = plt.subplot(111)
['NEON_D07_F07A_DP1_20160611_162007_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_172430_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_170118_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_164259_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_171403_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_160846_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_170922_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_162514_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_160444_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_170538_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_171852_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_163945_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_163424_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_165240_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_161228_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_162951_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_161532_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_165711_reflectance_modify.h5', 'NEON_D07_F07A_DP1_20160611_164809_reflectance_modify.h5']


Now we will begin cycling through all of the h5 files, retrieving the information we need and printing the name of the file that is currently being processed.

Inside the for loop we will:

  1. Read in the reflectance data and the associated metadata, constructing the file name from the generated file list

  2. Determine the indexes of the water vapor bands (bad band windows) in order to mask out all of the bad bands

  3. Read in the reflectance dataset using the NEON AOP H5 reader function

  4. Check the first value of the reflectance curve (any value would work). If it is equivalent to the NO DATA value, the chosen coordinate did not intersect a pixel in that flight line, and we continue on to the next line

  5. Apply NaN values to the areas containing the bad bands

  6. Split the contents of the file name so we can get the line number for labelling in the plot

  7. Plot the curve

for file in h5_files:

    print('Working on ' + file)
    
    [reflArray,metadata,wavelengths] = h5refl2array(h5_directory+file)
    bad_band_window1 = (metadata['bad_band_window1'])
    bad_band_window2 = (metadata['bad_band_window2'])

    index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
    index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]
    
    index_bad_windows = index_bad_window1+index_bad_window2
    reflectance_curve = np.asarray(reflArray[y_index,x_index,:], dtype=np.float32)
    if reflectance_curve[0] == metadata['noDataVal']:
        continue
    reflectance_curve[index_bad_windows] = np.nan
    filename_split = file.split("_")
    ax.plot(wavelengths,reflectance_curve/metadata['scaleFactor'],label = filename_split[5]+' Reflectance')
Working on NEON_D07_F07A_DP1_20160611_162007_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_172430_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_170118_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_164259_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_171403_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_160846_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_170922_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_162514_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_160444_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_170538_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_171852_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_163945_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_163424_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_165240_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_161228_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_162951_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_161532_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_165711_reflectance_modify.h5
Working on NEON_D07_F07A_DP1_20160611_164809_reflectance_modify.h5

This plots the reflectance curves from all lines onto the same plot. Now we will add the appropriate legend and plot labels, then display and save the plot. Including the coordinates in the file name lets us keep track of the target position.

box = ax.get_position()
ax.set_position([box.x0, box.y0, box.width * 0.8, box.height])
ax.legend(loc='center left',bbox_to_anchor=(1,0.5))
plt.title('BRDF Reflectance Curves at ' + str(x_coord) +' '+ str(y_coord))
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
fig.savefig('BRDF_uncertainty_at_' + str(x_coord) +'_'+ str(y_coord)+'.png',dpi=500,orientation='landscape',bbox_inches='tight',pad_inches=0.1)
plt.show()

[Figure: BRDF reflectance curves from all overlapping flight lines at the chosen coordinate]

It is possible that the figure above does not display properly, which is why we use the fig.savefig() method above to store the resulting figure as its own PNG file in the same directory as this Jupyter Notebook file.

The result is a plot with all the curves, in which we can visualize how the observations change simply due to the flight direction with respect to the ground target.

Experiment with changing the coordinates to analyze different targets within the rectangle.
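
One convenient way to experiment is to wrap the bounds check, index computation, and plotting loop into a single function, as in this sketch (it assumes the variables and the h5refl2array function defined above):

def plot_brdf_curves(x_coord, y_coord):
    #Sketch: plot reflectance curves from every BRDF line for one UTM coordinate
    if not (BRDF_rectangle[0,0] <= x_coord <= BRDF_rectangle[1,0] and
            BRDF_rectangle[1,1] <= y_coord <= BRDF_rectangle[0,1]):
        raise ValueError('Point not in bounding area')
    y_index = floor(x_coord - BRDF_rectangle[0,0])
    x_index = floor(BRDF_rectangle[0,1] - y_coord)
    fig = plt.figure()
    ax = plt.subplot(111)
    for file in h5_files:
        reflArray, metadata, wavelengths = h5refl2array(h5_directory + file)
        curve = np.asarray(reflArray[y_index, x_index, :], dtype=np.float32)
        if curve[0] == metadata['noDataVal']:
            continue #the coordinate missed this flight line
        #mask the water vapor (bad band) windows
        bad1 = metadata['bad_band_window1']
        bad2 = metadata['bad_band_window2']
        bad = [i for i, w in enumerate(wavelengths)
               if bad1[0] < w < bad1[1] or bad2[0] < w < bad2[1]]
        curve[bad] = np.nan
        ax.plot(wavelengths, curve/metadata['scaleFactor'], label=file.split('_')[5])
    ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
    plt.title('BRDF Reflectance Curves at ' + str(x_coord) + ' ' + str(y_coord))
    plt.show()

plot_brdf_curves(740500, 3982100)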

Assessing Spectrometer Accuracy using Validation Tarps with Python

In this tutorial we will learn how to retrieve reflectance curves from a pre-specified coordinate in a NEON AOP HDF5 file, retrieve bad band window indexes and mask portions of a reflectance curve, plot reflectance curves, and gain an understanding of some sources of uncertainty in NEON Imaging Spectrometer (NIS) data.

Objectives

After completing this tutorial, you will be able to:

  • Retrieve reflectance curves from a pre-specified coordinate in a NEON AOP HDF5 file
  • Retrieve bad band window indexes and mask these invalid portions of the reflectance curves
  • Plot reflectance curves on a graph and save the file
  • Explain some sources of uncertainty in NEON Imaging Spectrometer (NIS) data

Install Python Packages

  • gdal
  • h5py
  • requests

Download Data

To complete this tutorial, you will use data from the NEON 2017 Data Institute. You can download all the required data for this lesson as described below.

This tutorial uses the following files:

  • CHEQ_Tarp_03_02_refl_bavg.txt (9 KB)
  • CHEQ_Tarp_48_01_refl_bavg.txt (9 KB)
  • NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5 (2.7 GB)

The first two files can be downloaded from: NEON-Data-Skills GitHub.

Download the CHEQ Reflectance File: NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5

Note: The imagery data used to create this raster teaching data subset were collected over the National Ecological Observatory Network's field sites and processed at NEON headquarters. The entire dataset can be accessed on the NEON Data Portal.

Recommended prerequisites

We recommend you complete the following tutorials prior to this lesson:

  1. NEON AOP Hyperspectral Data in HDF5 format with Python
  2. Band Stacking, RGB & False Color Images, and Interactive Widgets in Python
  3. Plot a Spectral Signature in Python

In this tutorial we will be examining the accuracy of the NEON Imaging Spectrometer (NIS) against targets with known reflectance values. The targets consist of two 10 x 10 m tarps which have been specially designed to have 3% reflectance (black tarp) and 48% reflectance (white tarp) across all of the wavelengths collected by the NIS (see images below). During the September 12, 2016 flight over the Chequamegon-Nicolet National Forest (CHEQ), an area in D05 which is part of NEON's Steigerwaldt (STEI) site, these tarps were deployed in a gravel pit. During the airborne overflight, observations were also taken over the tarps with an Analytical Spectral Device (ASD), a hand-held field spectrometer. The ASD measurements provide a validation source for the airborne measurements.

The validation tarps, 3% reflectance (black tarp) and 48% reflectance (white tarp), laid out in the field. Source: National Ecological Observatory Network (NEON)

To test the accuracy, we will plot reflectance curves from the ASD measurements over the spectral tarps as well as reflectance curves from the NIS over the associated flight line. We can then carry out absolute and relative comparisons. The major error sources in the NIS can be generally categorized into the following components:

  1. Calibration of the sensor
  2. Quality of ortho-rectification
  3. Accuracy of radiative transfer code and subsequent ATCOR interpolation
  4. Selection of atmospheric input parameters
  5. Terrain relief
  6. Terrain cover

Note that ATCOR (the atmospheric correction software used by AOP) specifies the accuracy of reflectance retrievals to be between 3 and 5% of total reflectance. The tarps are located in a flat area, so influences from terrain relief should be minimal. We will have to keep the remaining error sources in mind as we analyze the data.

Get Started

We'll start by importing all of the necessary packages to run the Python script.

import os, sys
import numpy as np
import requests
import h5py
import csv
import gdal
import matplotlib.pyplot as plt
%matplotlib inline

Define a function to read the hdf5 reflectance files and associated metadata into Python:

def h5refl2array(h5_filename):
    hdf5_file = h5py.File(h5_filename,'r')

    #Get the site name
    file_attrs_string = str(list(hdf5_file.items()))
    file_attrs_string_split = file_attrs_string.split("'")
    sitename = file_attrs_string_split[1]
    refl = hdf5_file[sitename]['Reflectance']
    reflArray = refl['Reflectance_Data']
    refl_shape = reflArray.shape
    wavelengths = refl['Metadata']['Spectral_Data']['Wavelength']
    #Create dictionary containing relevant metadata information
    metadata = {}
    metadata['shape'] = reflArray.shape
    metadata['mapInfo'] = refl['Metadata']['Coordinate_System']['Map_Info']
    #Extract the no data value & scale factor
    metadata['scaleFactor'] = float(reflArray.attrs['Scale_Factor'])
    metadata['noDataVal'] = float(reflArray.attrs['Data_Ignore_Value'])
    metadata['bad_band_window1'] = refl.attrs['Band_Window_1_Nanometers']
    metadata['bad_band_window2'] = refl.attrs['Band_Window_2_Nanometers']
    metadata['projection'] = refl['Metadata']['Coordinate_System']['Proj4'][()]
    metadata['EPSG'] = int(refl['Metadata']['Coordinate_System']['EPSG Code'][()])
    mapInfo = refl['Metadata']['Coordinate_System']['Map_Info'][()]
    mapInfo_string = str(mapInfo)
    mapInfo_split = mapInfo_string.split(",")
    #Extract the resolution & convert to a floating point number
    metadata['res'] = {}
    metadata['res']['pixelWidth'] = mapInfo_split[5]
    metadata['res']['pixelHeight'] = mapInfo_split[6]
    #Extract the upper left-hand corner coordinates from mapInfo
    xMin = float(mapInfo_split[3]) #convert from string to floating point number
    yMax = float(mapInfo_split[4])
    #Calculate the xMax and yMin values from the dimensions
    xMax = xMin + (refl_shape[1]*float(metadata['res']['pixelWidth'])) #xMax = left edge + (# of columns * resolution)
    yMin = yMax - (refl_shape[0]*float(metadata['res']['pixelHeight'])) #yMin = top edge - (# of rows * resolution)
    metadata['extent'] = (xMin,xMax,yMin,yMax)
    metadata['ext_dict'] = {}
    metadata['ext_dict']['xMin'] = xMin
    metadata['ext_dict']['xMax'] = xMax
    metadata['ext_dict']['yMin'] = yMin
    metadata['ext_dict']['yMax'] = yMax
    #Note: the file is left open because reflArray and wavelengths are h5py
    #datasets that are read lazily; closing the file here would invalidate them
    return reflArray, metadata, wavelengths

Set up the directory where you are storing the data for this lesson. The variable h5_filename is the flightline which covers the tarps. Save the h5 file which you downloaded (see the Download Data instructions at the beginning of the tutorial) to your working directory. For this lesson we've set up a subfolder './data' in the current working directory to save all the data. You can save it elsewhere; you will just need to update your code to point to the correct directory.

## You will need to change these filepaths according to how you've set up your directory
## As you can see here, I saved the files downloaded above into a sub-directory named "./data"
h5_filename = r'./data/NEON_D05_CHEQ_DP1_20160912_160540_reflectance.h5'

Define a function that will read in the contents of a url and write it out to a file:

def url_to_file(url,outfile):
    response = requests.get(url)
    response.raise_for_status() #stop here if the download failed
    with open(outfile,"wb") as f: #the context manager closes the file for us
        f.write(response.content)

Run the function on the two urls where the ASD reflectance data is saved in the NEON-Data-Skills GitHub repository for this lesson.

tarp_03_url = "https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/uncertainty-and-validation/hyperspectral_validation_py/data/CHEQ_Tarp_03_02_refl_bavg.txt"
tarp_48_url = "https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/Hyperspectral/uncertainty-and-validation/hyperspectral_validation_py/data/CHEQ_Tarp_48_01_refl_bavg.txt"

url_to_file(tarp_48_url,'./data/CHEQ_Tarp_48_01_refl_bavg.txt')
url_to_file(tarp_03_url,'./data/CHEQ_Tarp_03_02_refl_bavg.txt')

We can now set the path to these files. The variables tarp_48_filename and tarp_03_filename point to the field-validated spectra for the white and black tarps respectively, organized by wavelength and reflectance.

tarp_48_filename = r'./data/CHEQ_Tarp_48_01_refl_bavg.txt'
tarp_03_filename = r'./data/CHEQ_Tarp_03_02_refl_bavg.txt'

We want to pull the spectra from the airborne data at the center of each tarp to minimize any errors introduced by light from adjacent pixels, or by errors in ortho-rectification (error source 2). We have pre-determined the coordinates for the center of each tarp, which are as follows:

48% reflectance tarp UTMx: 727487, UTMy: 5078970

3% reflectance tarp UTMx: 727497, UTMy: 5078970


Let's define these coordinates:

tarp_48_center = np.array([727487,5078970])
tarp_03_center = np.array([727497,5078970])

Now we'll use our function designed for NEON AOP's HDF5 files to access the hyperspectral data.

[reflArray,metadata,wavelengths] = h5refl2array(h5_filename)

Within the reflectance curves there are areas with noisy data due to water vapor absorption bands in the atmosphere. For this exercise we do not want to plot these areas, as their anomalous values obscure details in the plots. The locations of these bands are contained in the metadata gathered by our function. We will pull out these 'bad band windows' and determine which indexes in the reflectance curves fall within them.

bad_band_window1 = (metadata['bad_band_window1'])
bad_band_window2 = (metadata['bad_band_window2'])

index_bad_window1 = [i for i, x in enumerate(wavelengths) if x > bad_band_window1[0] and x < bad_band_window1[1]]
index_bad_window2 = [i for i, x in enumerate(wavelengths) if x > bad_band_window2[0] and x < bad_band_window2[1]]

# join the lists of indexes into a single variable
index_bad_windows = index_bad_window1 + index_bad_window2

The reflectance data is saved in tab-delimited files. We will use the numpy function np.genfromtxt to read in the tarp reflectance data observed with the ASD, using the tab ('\t') delimiter.

tarp_48_data = np.genfromtxt(tarp_48_filename, delimiter = '\t')
tarp_03_data = np.genfromtxt(tarp_03_filename, delimiter = '\t')

Now we'll set all the data inside of the bad band windows to NaNs (not a number) so they will not be included in the plots.

tarp_48_data[index_bad_windows] = np.nan
tarp_03_data[index_bad_windows] = np.nan

The next step is to determine which pixel in the reflectance data corresponds to the center of each tarp. To do this, we compare the tarp center coordinates to the upper-left corner coordinates from the map info of the H5 file, which are saved in the metadata dictionary output by our function. The difference between these coordinates, divided by the pixel resolution, gives us the x and y indexes of the reflectance curve.

x_tarp_48_index = int((tarp_48_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_48_index = int((metadata['ext_dict']['yMax'] - tarp_48_center[1])/float(metadata['res']['pixelHeight']))

x_tarp_03_index = int((tarp_03_center[0] - metadata['ext_dict']['xMin'])/float(metadata['res']['pixelWidth']))
y_tarp_03_index = int((metadata['ext_dict']['yMax'] - tarp_03_center[1])/float(metadata['res']['pixelHeight']))

Next, we will plot both the curve from the airborne data taken at the center of the tarps as well as the curves obtained from the ASD data to provide a visualization of their consistency for both tarps. Once generated, we will also save the figure to a pre-determined location.

plt.figure(1)
tarp_48_reflectance = np.asarray(reflArray[y_tarp_48_index,x_tarp_48_index,:], dtype=np.float32)/metadata['scaleFactor']
tarp_48_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_48_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_48_data[:,1], label = 'ASD Reflectance')
plt.title('CHEQ 20160912 48% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
plt.legend()
#plt.savefig('CHEQ_20160912_48_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)

plt.figure(2)
tarp_03_reflectance = np.asarray(reflArray[y_tarp_03_index,x_tarp_03_index,:], dtype=np.float32)/ metadata['scaleFactor']
tarp_03_reflectance[index_bad_windows] = np.nan
plt.plot(wavelengths,tarp_03_reflectance,label = 'Airborne Reflectance')
plt.plot(wavelengths,tarp_03_data[:,1],label = 'ASD Reflectance')
plt.title('CHEQ 20160912 3% tarp')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Reflectance (%)')
plt.legend();
#plt.savefig('CHEQ_20160912_3_tarp.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)

[Figure: airborne vs. ASD reflectance curves for the 48% tarp]

[Figure: airborne vs. ASD reflectance curves for the 3% tarp]

This produces plots showing the ASD and airborne measurements over each tarp. Visually, the comparison over the 48% tarp appears to be fairly good. However, over the 3% tarp we appear to be over-estimating the reflectance. Large absolute differences could be associated with ATCOR input parameters (error source 4). For example, the user must input the local visibility, which is related to aerosol optical thickness (AOT). We don't measure this at every site, so a standard input parameter is used for all sites.

Given that the 3% reflectance tarp has much lower overall reflectance, it may be more informative to determine the absolute difference between the two curves and plot that as well.

plt.figure(3)
plt.plot(wavelengths,tarp_48_reflectance-tarp_48_data[:,1])
plt.title('CHEQ 20160912 48% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Reflectance Difference (%)')
#plt.savefig('CHEQ_20160912_48_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)

plt.figure(4)
plt.plot(wavelengths,tarp_03_reflectance-tarp_03_data[:,1])
plt.title('CHEQ 20160912 3% tarp absolute difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Absolute Reflectance Difference (%)');
#plt.savefig('CHEQ_20160912_3_tarp_absolute_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)

[Figure: absolute reflectance difference for the 48% tarp]

[Figure: absolute reflectance difference for the 3% tarp]

From this we are able to see that the 48% tarp actually has larger absolute differences than the 3% tarp. The 48% tarp performs poorly at the shortest and longest wavelengths, as well as near the edges of the bad band windows. This is related to the difficulty of calibrating the sensor in these sensitive areas.

Let's now determine the percent difference, which is the metric used by ATCOR to report accuracy. We can do this by calculating the ratio of the absolute difference between the curves to the total reflectance.

plt.figure(5)
relative_diff_48 = 100*(np.divide(tarp_48_reflectance-tarp_48_data[:,1],tarp_48_data[:,1]))
plt.plot(wavelengths,relative_diff_48);
plt.title('CHEQ 20160912 48% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Reflectance Difference')
plt.ylim((-100,100))
#plt.savefig('CHEQ_20160912_48_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)

plt.figure(6)
plt.plot(wavelengths,100*np.divide(tarp_03_reflectance-tarp_03_data[:,1],tarp_03_data[:,1]));
plt.title('CHEQ 20160912 3% tarp percent difference')
plt.xlabel('Wavelength (nm)'); plt.ylabel('Percent Reflectance Difference')
plt.ylim((-100,150));
#plt.savefig('CHEQ_20160912_3_tarp_relative_diff.png',dpi=300,orientation='landscape',bbox_inches='tight',pad_inches=0.1)

[Figure: percent reflectance difference for the 48% tarp]

[Figure: percent reflectance difference for the 3% tarp]

From these plots we can see that even though the absolute error on the 48% tarp was larger, its relative error is generally much smaller. The relative error on the 3% tarp exceeds 50% across much of the spectrum. This indicates that targets with low reflectance values may have higher relative errors.
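
To put a single number on this comparison, you can summarize the percent differences while ignoring the NaN values in the masked bad band windows, as in this sketch (it assumes the variables computed above):

relative_diff_48 = 100*np.divide(tarp_48_reflectance-tarp_48_data[:,1],tarp_48_data[:,1])
relative_diff_03 = 100*np.divide(tarp_03_reflectance-tarp_03_data[:,1],tarp_03_data[:,1])
#nanmean skips the NaNs inserted over the bad band windows
print(f'48% tarp mean absolute percent difference: {np.nanmean(np.abs(relative_diff_48)):.1f}%')
print(f'3% tarp mean absolute percent difference: {np.nanmean(np.abs(relative_diff_03)):.1f}%')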

Git 07: Updating Your Repo by Setting Up a Remote

This tutorial covers how to set up the central repo as a remote to your local repo, so that you can update your local repo with changes from the central repo. You will want to do this every time before starting new edits in your local repo.

Learning Objectives

At the end of this activity, you will be able to:

  • Explain why it is important to update a local repo before beginning edits.
  • Update your local repository from a remote (upstream) central repo.

Additional Resources

  • Diagram of Git Commands: this diagram includes more commands than we will learn in this series.
  • GitHub Help Learning Git resources

We now have done the following:

  1. We've forked (made an individual copy of) the NEONScience/DI-NEON-participants repo to our github.com account.
  2. We've cloned the forked repo - making a copy of it on our local computers.
  3. We've added files and content to our local copy of the repo and committed the changes.
  4. We've pushed those changes back up to our forked repo on github.com.
  5. We've completed a Pull Request to update the central repository with our changes.

Once you're all set up to work on your project, you won't need to repeat the fork and clone steps. But you do want to update your local repository with any changes others may have added to the central repository. How do we do this?

We will do this by directly pulling the updates from the central repo to our local repo, which requires setting up the central repo as a "remote". A "remote" repo is any repo which is not the repo that you are currently working in.

LEFT: You will fork and clone a repo only once. RIGHT: After that, you will update your fork from the central repository by setting it up as a remote and pulling from it with git pull. Source: National Ecological Observatory Network (NEON)

Update, then Work

Once you've established working in your repo, you should follow these steps when starting to work each time in the repo:

  1. Update your local repo from the central repo (git pull upstream master).
  2. Make edits, save, git add, and git commit all in your local repo.
  3. Push changes from your local repo to your fork on github.com (git push origin master).
  4. Update the central repo from your fork (Pull Request).
  5. Repeat.
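
Taken together, a typical working session looks like the following shell sketch (the file name and commit message are placeholders; the Pull Request in step 4 is made on github.com):

$ git pull upstream master
$ # ...edit and save your files...
$ git add my-file.md
$ git commit -m "describe your change"
$ git push origin master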

Notice that we've already learned how to do steps 2-4; now we are completing the circle by learning to update our local repo directly with any changes from the central repo.

The order of steps above is important as it ensures that you incorporate any changes that have been made to the NEON central repository into your forked & local repos prior to adding changes to the central repo. If you do not sync in this order, you are at greater risk of creating a merge conflict.

What's A Merge Conflict?

A merge conflict occurs when two users edit the same part of a file at the same time. Git cannot decide which edit was first and which was last, and therefore which edit should be in the most current copy. Hence the conflict.

Merge conflicts occur when the same part of a script or document has been changed simultaneously and Git can't determine which change should be applied. Source: Atlassian
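
When a conflict does occur, Git marks the disputed lines in the affected file with conflict markers, which look like this (the text between the markers is illustrative):

<<<<<<< HEAD
the version of the line from your current branch
=======
the version of the same line from the branch being merged in
>>>>>>> upstream/master

To resolve the conflict, edit the file to keep the version you want (or a combination of both), delete the marker lines, then git add and git commit the result.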

Set up Upstream Remote

We want to directly update our local repo with any changes made in the central repo prior to starting our next edits or additions. To do this we need to set up the central repository as an upstream remote for our repo.

Step 1: Get Central Repository URL

First, we need the URL of the central repository. Navigate to the central repository in GitHub NEONScience/DI-NEON-participants. Select the green Clone or Download button (just like we did when we cloned the repo) to copy the URL of the repo.

Step 2: Add the Remote

Second, we need to connect the upstream remote (the central repository) to our local repo.

Make sure you are still in your local repository in bash.

First, navigate to the desired directory.

$ cd ~/Documents/GitHub/DI-NEON-participants

and then type:

$ git remote add upstream https://github.com/NEONScience/DI-NEON-participants.git

Here you are identifying that this is a git command with git, and then that you are adding an upstream remote with the given URL.

Step 3: Update Local Repo

Use git pull to sync your local repo with the central GitHub.com repo.

Update the local repo using git pull with the added arguments upstream, indicating the central repository, and master, specifying which branch you are pulling down. (Remember, branches are a great tool to look into once you're comfortable with Git and GitHub, but we aren't going to focus on them here. Just use master.)

$ git pull upstream master

remote: Counting objects: 25, done.
remote: Compressing objects: 100% (15/15), done.
remote: Total 25 (delta 16), reused 19 (delta 10), pack-reused 0
Unpacking objects: 100% (25/25), done.
From https://github.com/NEONScience/DI-NEON-participants
    74d9b7b..463e6f0  master   -> origin/master
Auto-merging _posts/institute-materials/example.md

Understand the output: the output will change with every update, but there are several things to look for:

  • remote: …: tells you how many items have changed.
  • From https:URL: tells you which remote repository the data is being pulled from. We set up the central repository as the remote, but it could be many other repos too.
  • Section with + and -: this visually shows you which documents were updated and the types of edits (additions/deletions) that were made.

Now that you've synced your local repo, let's check the status of the repo.

$ git status

Step 4: Complete the Cycle

Now that you are set up with the additions, you will need to add and commit those changes. Once you've done that, you can push the changes back up to your fork on github.com.

$ git push origin master

Now your commits are added to your forked repo on github.com and you're ready to repeat the loop with a Pull Request.

Workflow Summary

Syncing Central Repo with Local Repo

Setting It Up (only do this the initial time)

  • Find & copy Central Repo URL
  • git remote add upstream https://github.com/NEONScience/DI-NEON-participants.git

After Initial Set Up

  • Update your Local Repo & Push Changes

    • git pull upstream master - pull down any changes and sync the local repo with the central repo
    • make changes, git add and git commit
    • git push origin master - push your changes up to your fork
    • Repeat


Read in and visualize hyperspectral data in Python using functions

In this tutorial, you will learn how to efficiently read in hdf5 reflectance data and metadata, plot a single band and Red-Green-Blue (RGB) band combinations of a reflectance data tile using Python functions created for working with and visualizing NEON AOP hyperspectral data.

This tutorial uses the Level 3 Spectrometer orthorectified surface bidirectional reflectance - mosaic data product.

Learning Objectives

After completing this tutorial, you will be able to:

  • Work with custom Python modules and functions for AOP data
  • Download and read in tiled NEON AOP reflectance hdf5 data and associated metadata
  • Plot a single band of reflectance data
  • Stack and plot 3-band combinations to visualize true color and false color images

Things you'll need to complete this tutorial

Python

You will need a current version of Python (3.9+) to complete this tutorial. We also recommend the Jupyter Notebook IDE to follow along with this notebook.

Install Python Packages

  • h5py
  • gdal
  • neonutilities
  • pandas
  • python-dotenv
  • requests
  • scikit-image

Data

Data and additional scripts required for this lesson are downloaded programmatically as part of the tutorial. The data used in this lesson were collected over NEON's Disney Wilderness Preserve (DSNY) field site in 2023. The dataset can also be downloaded from the NEON Data Portal.

Other Set-Up Requirements

Set up a NEON user account and token, if you haven't already done so. Follow the tutorial Using an API Token when Accessing NEON Data with neonUtilities to learn how to do this (check the Python tabs in the code cells for the Python syntax).

Note: for this lesson, we have set up the token as an environment variable, following "Option 2: Set token as environment variable" in the linked tutorial above.

Additional Resources

If you are new to AOP hyperspectral data, we recommend exploring the following tutorial series:

Introduction to Hyperspectral Remote Sensing Data in Python

Background

We can combine any three bands from the NEON reflectance data to make an RGB image that will depict different information about the Earth's surface. A natural color image, made with bands from the red, green, and blue wavelengths, looks close to what we would see with the naked eye. We can also choose band combinations from other wavelengths and map them to the red, green, and blue colors to highlight different features. A false color image is made with one or more bands from a non-visible portion of the electromagnetic spectrum that are mapped to red, green, and blue colors. These images can display other information about the landscape that is not easily seen with a natural color image.

The NASA Goddard Media Studio video "Peeling Back Landsat's Layers of Data" gives a good quick overview of natural and false color band combinations. Note that the Landsat multispectral sensor collects information from 11 bands, while NEON AOP hyperspectral data captures information spanning 426 bands!

Peeling Back Landsat's Layers of Data Video

Further Reading

  • Check out the NASA Earth Observatory article How to Interpret a False-Color Satellite Image.
  • Read the supporting article for the video above, Landsat 8 Onion Skin.

Load Function Module

First, import the required packages and the neon_aop_hyperspectral module, which includes functions that we will use to read in and visualize the hyperspectral hdf5 data.

import dotenv
import h5py
import matplotlib.pyplot as plt
import neonutilities as nu
import numpy as np
import os
import requests
import sys
import time

This next function is a handy way to download the Python module that we will be using in this lesson. It uses the requests package.

# function to download data stored at a public url to a local file
def download_url(url,download_dir):
    if not os.path.isdir(download_dir):
        os.makedirs(download_dir)
    filename = url.split('/')[-1]
    r = requests.get(url, allow_redirects=True)
    #use a context manager so the file is closed after writing
    with open(os.path.join(download_dir,filename),'wb') as file_object:
        file_object.write(r.content)

Download the module from its location on GitHub, add the ../python_modules directory to the path and import the neon_aop_hyperspectral.py module as neon_hs.

module_url = "https://raw.githubusercontent.com/NEONScience/NEON-Data-Skills/main/tutorials/Python/AOP/aop_python_modules/neon_aop_hyperspectral.py"
download_url(module_url,'../python_modules')
# os.listdir('../python_modules') #optionally show the contents of this directory to confirm the file downloaded

sys.path.insert(0, '../python_modules')
# import the neon_aop_hyperspectral module
import neon_aop_hyperspectral as neon_hs

The first function we will use is aop_h5refl2array. We encourage you to look through the code to understand what it is doing behind the scenes. This function automates the steps required to read AOP hdf5 reflectance files into a Python numpy array. It also returns the information needed to clean the data: the no data value, so those pixels can be set to nan (not a number), and the reflectance scale factor, which converts the stored values to unitless scaled reflectance ranging between 0 and 1 (0-100%).
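
For reference, applying that cleaning yourself looks something like this minimal sketch (a standalone function; the argument names are illustrative, and the no data value and scale factor come from the returned metadata):

import numpy as np

def clean_reflectance(raw_array, no_data_value, scale_factor):
    #cast to float so no-data pixels can be replaced with NaN
    refl = np.asarray(raw_array).astype(np.float32)
    refl[refl == no_data_value] = np.nan
    #divide by the scale factor to get unitless reflectance between 0 and 1
    return refl / scale_factor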

Data Tip: If you forget the inputs to a function or want to see more details on what the function does, you can use help() or ? to display the associated docstrings.

help(neon_hs.aop_h5refl2array)
# neon_hs.aop_h5refl2array? #uncomment for an alternate way to show the help
Help on function aop_h5refl2array in module neon_aop_hyperspectral:

aop_h5refl2array(h5_filename, raster_type_: Literal['Cast_Shadow', 'Data_Selection_Index', 'GLT_Data', 'Haze_Cloud_Water_Map', 'IGM_Data', 'Illumination_Factor', 'OBS_Data', 'Radiance', 'Reflectance', 'Sky_View_Factor', 'to-sensor_Azimuth_Angle', 'to-sensor_Zenith_Angle', 'Visibility_Index_Map', 'Weather_Quality_Indicator'], only_metadata=False)
    read in NEON AOP reflectance hdf5 file and return the un-scaled 
    reflectance array, associated metadata, and wavelengths
           
    Parameters
    ----------
        h5_filename : string
            reflectance hdf5 file name, including full or relative path
        raster : string
            name of raster value to read in; this will typically be the reflectance data, 
            but other data stored in the h5 file can be accessed as well
            valid options: 
                Cast_Shadow (ATCOR input)
                Data_Selection_Index
                GLT_Data
                Haze_Cloud_Water_Map (ATCOR output)
                IGM_Data
                Illumination_Factor (ATCOR input)
                OBS_Data 
                Reflectance
                Radiance
                Sky_View_Factor (ATCOR input)
                to-sensor_Azimuth_Angle
                to-sensor_Zenith_Angle
                Visibility_Index_Map: sea level values of visibility index / total optical thickeness
                Weather_Quality_Indicator: estimated percentage of overhead cloud cover during acquisition
    
    Returns 
    --------
    raster_array : ndarray
        array of reflectance values
    metadata: dictionary 
        associated metadata containing
            bad_band_window1 (tuple)
            bad_band_window2 (tuple)
            bands: # of bands (float)
            data ignore value: value corresponding to no data (float)
            epsg: coordinate system code (float)
            map info: coordinate system, datum & ellipsoid, pixel dimensions, and origin coordinates (string)
            reflectance scale factor: factor by which reflectance is scaled (float)
    wavelengths: array
            wavelength values, in nm
    --------
    Example Execution:
    --------
    refl, refl_metadata = aop_h5refl2array('NEON_D02_SERC_DP3_368000_4306000_reflectance.h5','Reflectance')

Now that we have an idea of how this function works, let's try it out. First, we need to download a reflectance file. We can download a single 1 km x 1 km reflectance data tile for the DSNY site using the neonutilities by_tile_aop function, as shown below. This downloads the data to a folder specified by savepath. Before downloading a tile, let's take a quick look at when data were collected (and are available) at this site using the list_available_dates function.

# display dates of available data for the directional and bidirectional reflectance data at DSNY
print('Directional reflectance data availability:')
nu.list_available_dates('DP3.30006.001','DSNY') # directional reflectance data ends with .001
print('\nBidirectional reflectance data availability:')
nu.list_available_dates('DP3.30006.002','DSNY') # BRDF and topographic corrected reflectance data ends with .002
Directional reflectance data availability:

RELEASE-2025 Available Dates: 2014-05, 2016-09, 2017-09, 2018-10, 2019-04, 2021-09

    
Bidirectional reflectance data availability:

PROVISIONAL Available Dates: 2023-04

Next, we can look at the tile extents to roughly determine the valid values for the easting and northing input parameters to the by_tile_aop function. First, let's set our NEON token as follows:

dotenv.set_key(dotenv_path=".env", key_to_set="NEON_TOKEN", value_to_set="YOUR_TOKEN_HERE")

dotenv.load_dotenv()
my_token=os.environ.get("NEON_TOKEN")
# optionally display the token to double check
# print('my token: ',my_token)
dsny_bounds = nu.get_aop_tile_extents('DP3.30006.002','DSNY','2023',token=my_token)
Easting Bounds: (451000, 464000)
Northing Bounds: (3099000, 3114000)
# display the first and last UTM coordinates of the DSNY site:
print('First 3 coordinates:\n',dsny_bounds[:3])
print('Last 3 coordinates:\n',dsny_bounds[-3:])
First 3 coordinates:
 [(451000, 3103000), (451000, 3104000), (451000, 3105000)]
Last 3 coordinates:
 [(463000, 3112000), (464000, 3108000), (464000, 3111000)]

Set up the data directory where we want to download our data.

Data Tip: If you are working from a Windows Operating System (OS), there may be a path length limitation which can cause an error when downloading, since the neon download function maintains the full folder structure of the data as it is stored on Google Cloud Storage (GCS). If you see the following warning: "UserWarning: Filepaths on Windows are limited to 260 characters. Attempting to download a filepath that is 291 characters long. Set the working or savepath directory to be closer to the root directory or enable long path support in Windows.", you will either need to enable long path support in Windows (a quick online search will show you how to do this) or set the savepath directory so that it is shorter. You can use os.path.abspath to see the full path if you have specified a relative path. For this example, we will set a short savepath by creating a neon_data directory directly under the home directory as follows:

home_dir = os.path.expanduser('~')
data_dir = os.path.join(home_dir,'neon_data')
# optionally display the full path to the data_dir as follows:
# os.path.abspath(data_dir)
nu.by_tile_aop(dpid='DP3.30006.002',
               site='DSNY',
               year='2023',
               easting=454000,
               northing=3113000,
               include_provisional=True,
               savepath=data_dir,
               token=my_token)
Provisional NEON data are included. To exclude provisional data, use input parameter include_provisional=False.


Continuing will download 2 NEON data files totaling approximately 713.3 MB. Do you want to proceed? (y/n)  y

Define some functions that will help us explore the files that were downloaded. You can also look in the File Explorer (Windows) or Finder (Mac) to check out the contents more interactively.

def list_data_subfolders(data_dir):
    """
    Recursively finds and lists subfolders within a directory that contain data (files)
    and excludes subfolders that only contain other subfolders.

    Args:
        data_dir: The path to the root directory to search.

    Returns:
        A list of paths to the subfolders containing data.
    """
    data_subfolders = []
    for root, dirs, files in os.walk(data_dir):
        # Check if the current directory has both subdirectories and files
        if dirs and files:
            # Iterate through subdirectories to find those that contain files
            for dir_name in dirs:
                dir_path = os.path.join(root, dir_name)
                if any(os.path.isfile(os.path.join(dir_path, f)) for f in os.listdir(dir_path)):
                    data_subfolders.append(dir_path)
        # If the current directory has no subdirectories, but has files, we still want to keep the directory.
        elif files:
            if root != data_dir:  # Avoid adding the root directory itself if it has files
                data_subfolders.append(root)

    return data_subfolders

def list_data_files(data_dir):
    """
    Lists all files within a specified directory and its subdirectories.

    Args:
        data_dir (str): The path to the data directory to start the search from.

    Returns:
        list: A list of full paths to all files found.
    """
    all_files = []
    for root, _, files in os.walk(data_dir):
        for file in files:
            full_path = os.path.join(root, file)
            all_files.append(full_path)
    return all_files

We can use these functions to explore the contents of the data that were downloaded.

neon_data_subfolders = list_data_subfolders(data_dir)
# display the paths starting with `neon_data` to shorten:
neon_subfolders_short = [f.replace(home_dir,'') for f in neon_data_subfolders]
neon_subfolders_short
['\\neon_data\\DP3.30006.002\\neon-aop-provisional-products\\2023\\FullSite\\D03\\2023_DSNY_7\\L3\\Spectrometer\\Reflectance',
 '\\neon_data\\DP3.30006.002\\neon-publication\\NEON.DOM.SITE.DP3.30006.002\\DSNY\\20230401T000000--20230501T000000\\basic']

Data were downloaded into two nested subfolders. The reflectance data are saved in the path 2023\FullSite\D03\2023_DSNY_7\L3\Spectrometer\Reflectance. This is the standard format in which you can expect to find L3 data. Note that the folder preceding 2023 in the path is neon-aop-provisional-products. This is because the DSNY data from 2023 are available provisionally. If the data were released, they would be found under neon-aop-products.

Next let's use the list_data_files function to see the actual files that we downloaded. If you included a larger range of points in the Easting and Northing, or used by_file_aop, this list could be much longer.

downloaded_refl_files = list_data_files(data_dir)
# display the files starting with `neon_data` to shorten:
downloaded_refl_files_short = [f.replace(home_dir,'') for f in downloaded_refl_files]
downloaded_refl_files_short
['\\neon_data\\DP3.30006.002\\citation_DP3.30006.002_PROVISIONAL.txt',
 '\\neon_data\\DP3.30006.002\\issueLog_DP3.30006.002.csv',
 '\\neon_data\\DP3.30006.002\\neon-aop-provisional-products\\2023\\FullSite\\D03\\2023_DSNY_7\\L3\\Spectrometer\\Reflectance\\NEON_D03_DSNY_DP3_454000_3113000_bidirectional_reflectance.h5',
 '\\neon_data\\DP3.30006.002\\neon-publication\\NEON.DOM.SITE.DP3.30006.002\\DSNY\\20230401T000000--20230501T000000\\basic\\NEON.D03.DSNY.DP3.30006.002.readme.20241220T001434Z.txt']
# find the h5 reflectance file(s) among the downloaded files and select the first tile
h5_tiles = [f for f in downloaded_refl_files if f.endswith('.h5')]
h5_tile = h5_tiles[0]

Now that we've specified our reflectance tile, we can call aop_h5refl2array to read the reflectance tile into a Python array called refl, the metadata into a dictionary called refl_metadata, and the wavelengths into an array. Let's read it in and then take a quick look at the metadata and the first 5 wavelength values.

# read in the reflectance data using the aop_h5refl2array function, this may also take a bit of time
start_time = time.time()
refl, refl_metadata, wavelengths = neon_hs.aop_h5refl2array(h5_tile,'Reflectance')
print("--- It took %s seconds to read in the data ---" % round((time.time() - start_time),0))
Reading in  C:\Users\bhass\neon_data\DP3.30006.002\neon-aop-provisional-products\2023\FullSite\D03\2023_DSNY_7\L3\Spectrometer\Reflectance\NEON_D03_DSNY_DP3_454000_3113000_bidirectional_reflectance.h5
--- It took 23.0 seconds to read in the data ---
# display the reflectance metadata dictionary contents
refl_metadata
{'shape': (1000, 1000, 426),
 'no_data_value': -9999.0,
 'scale_factor': 10000.0,
 'bad_band_window1': array([1340, 1445]),
 'bad_band_window2': array([1790, 1955]),
 'projection': b'+proj=UTM +zone=17 +ellps=WGS84 +datum=WGS84 +units=m +no_defs',
 'EPSG': 32617,
 'res': {'pixelWidth': 1.0, 'pixelHeight': 1.0},
 'extent': (454000.0, 455000.0, 3113000.0, 3114000.0),
 'ext_dict': {'xMin': 454000.0,
  'xMax': 455000.0,
  'yMin': 3113000.0,
  'yMax': 3114000.0},
 'source': 'C:\\Users\\bhass\\neon_data\\DP3.30006.002\\neon-aop-provisional-products\\2023\\FullSite\\D03\\2023_DSNY_7\\L3\\Spectrometer\\Reflectance\\NEON_D03_DSNY_DP3_454000_3113000_bidirectional_reflectance.h5'}
# display the first 5 values of the wavelengths
wavelengths[:5]
array([383.884003, 388.891693, 393.899506, 398.907196, 403.915009])

We can use the shape attribute to see the dimensions of the array we read in. Use it to confirm that the size of the reflectance array makes sense given the hyperspectral data cube, which is 1000 meters x 1000 meters x 426 bands.

refl.shape
(1000, 1000, 426)

Plot a single band

Next we'll use the function plot_aop_refl to plot a single band of the reflectance data. You can use help to understand the required inputs and data types; only the band and spatial extent are required inputs, the rest are optional. If specified, the optional inputs allow you to set the color range, specify the axis, add a title, colorbar, and colorbar title, and change the colormap (the default is to plot in greyscale).
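
For example (a quick check, assuming the neon_hs module imported earlier in this series), you can display the function's documentation with:

# display the docstring for plot_aop_refl to see required and optional inputs
help(neon_hs.plot_aop_refl)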

band56 = refl[:,:,55]
neon_hs.plot_aop_refl(band56/refl_metadata['scale_factor'],
                      refl_metadata['extent'],
                      colorlimit=(0,0.3),
                      title='DSNY Tile Band 56',
                      cmap_title='Reflectance',
                      colormap='gist_earth')

[Figure: DSNY Tile Band 56 reflectance plot]

RGB Plots - Band Stacking

It is often useful to look at several bands together. We can extract and stack three reflectance bands from the red, green, and blue (RGB) portions of the spectrum to produce a color image that looks like what we see with our eyes; this is your typical camera image. In the next part of this tutorial, we will learn to stack multiple bands and make a GeoTIFF raster from the compilation of these bands. Different combinations of bands allow for different visualizations of the remotely sensed objects and also convey useful information about the chemical makeup of the Earth's surface.

We will select bands that fall within the visible range of the electromagnetic spectrum (400-700 nm) and at specific points that correspond to what we see as red, green, and blue.

NEON Imaging Spectrometer bands and their respective wavelengths. Source: National Ecological Observatory Network (NEON)

For this exercise, we'll first use the function stack_rgb to extract the bands we want to stack. This function uses slicing to extract the nth band from the reflectance array, and then uses the numpy function stack to create a new 3D array (1000 x 1000 x 3) consisting of only the three bands we want.
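
Conceptually, stack_rgb does something like the following (a minimal sketch; the actual neon_hs implementation may differ in details such as its indexing convention):

import numpy as np

def stack_rgb_sketch(refl_array, bands):
    # slice each (1-based) band out of the reflectance cube and stack
    # the three 2D slices into a single (rows, cols, 3) array
    return np.stack([refl_array[:, :, b - 1] for b in bands], axis=2)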

# pull out the true-color band combinations
rgb_bands = (58,34,19) # set the red, green, and blue bands

# stack the 3-band (rgb) combination using the stack_rgb function
rgb_unscaled = neon_hs.stack_rgb(refl,rgb_bands)

# apply the reflectance scale factor
rgb = rgb_unscaled/refl_metadata['scale_factor']

We can display the center wavelengths of the red, green, and blue bands whose indices were defined above. To confirm that these band indices correspond to wavelengths in the expected portions of the spectrum, we can print the wavelength values in nanometers.

print('Center wavelengths:')
print('Band 58: %.1f' %(wavelengths[57]),'nm')
print('Band 34: %.1f' %(wavelengths[33]),'nm')
print('Band 19: %.1f' %(wavelengths[18]),'nm')
Center wavelengths:
Band 58: 669.3 nm
Band 34: 549.1 nm
Band 19: 474.0 nm

Plot an RGB band combination

Next, we can use the function plot_aop_rgb to plot the band stack as follows:

# plot the true color image (rgb)
neon_hs.plot_aop_rgb(rgb,
                     refl_metadata['extent'],
                     plot_title='DSNY Reflectance RGB Image')

[Figure: DSNY Reflectance RGB Image]

False Color Image - Color Infrared (CIR)

We can also create an image from bands outside of the visible spectrum. An image containing one or more bands outside of the visible range is called a false-color image. Here we'll use the green and blue bands as before, but we replace the red band with a near-infrared (NIR) band.

For more information about non-visible wavelengths, false color images, and some frequently used false-color band combinations, refer to NASA's Earth Observatory page.

cir_bands = (90,34,19)
print('Band 90 Center Wavelength = %.1f' %(wavelengths[89]),'nm')
print('Band 34 Center Wavelength = %.1f' %(wavelengths[33]),'nm')
print('Band 19 Center Wavelength = %.1f' %(wavelengths[18]),'nm')

cir = neon_hs.stack_rgb(refl,cir_bands)
neon_hs.plot_aop_rgb(cir,
                     refl_metadata['extent'],
                     ls_pct=20,
                     plot_title='DSNY Color Infrared Image')
Band 90 Center Wavelength = 829.6 nm
Band 34 Center Wavelength = 549.1 nm
Band 19 Center Wavelength = 474.0 nm

[Figure: DSNY Color Infrared Image]

Recap

Congratulations! You have successfully downloaded a NEON reflectance tile using the neonutilities by_tile_aop function. You have also pulled in some pre-defined functions and used them to read in and visualize the reflectance data. You are now well positioned to start carrying out more in-depth analysis of the hyperspectral data with Python.

References

Kekesi, Alex, et al. "NASA | Peeling Back Landsat's Layers of Data." Published Feb 24, 2014.

Riebeek, Holli. "Why is that Forest Red and that Cloud Blue? How to Interpret a False-Color Satellite Image." Published March 4, 2014.

Assignment: Reproducible Workflows with Jupyter Notebooks

In this tutorial you will learn how to open a .tiff file in Jupyter Notebook and learn about kernels.

The goal of the activity is simply to ensure that you have basic familiarity with Jupyter Notebooks and that the environment, especially the gdal package, is correctly set up before you pursue more programming tutorials. If you are already familiar with Jupyter Notebooks using Python, you may be able to complete the assignment without working through the instructions.

This will be accomplished by:

  • Creating a new Jupyter kernel
  • Downloading a GeoTIFF file
  • Importing the file into a Jupyter Notebook
  • Checking the raster size

Assignment: Open a Tiff File in Jupyter Notebook

Set up Environment

First, we will set up the environment as you would need for each of the live coding sections of the Data Institute. The following directions are copied over from the Data Institute Set up Materials.

In your terminal application, navigate to the directory (cd) where you want the Jupyter Notebooks to be saved (or where they already exist).

We need to create a new Jupyter kernel for the Python 3.8 conda environment (py38) that Jupyter Notebooks will use.

In your Command Prompt/Terminal, type:

python -m ipykernel install --user --name py38 --display-name "Python 3.8 NEON-RSDI"

In your Command Prompt/Terminal, navigate to the directory (cd) that you created last week in the GitHub materials. This is where the Jupyter Notebook will be saved and the easiest way to access existing notebooks.

Open Jupyter Notebook

Open Jupyter Notebook by typing into a command terminal:

jupyter notebook

Once the notebook is open, check which version of Python you are in.

 # Check what version of Python.  Should be 3.8. 
 import sys
 sys.version

To ensure that the correct kernel will operate, navigate to Kernel in the menu and select Restart & Clear Output.

[Figure: Navigate to 'Kernel' in the top navigation bar, then select 'Restart & Clear Output'. Source: National Ecological Observatory Network (NEON)]

You should now be able to work in the notebook.

Download the digital terrain model (GeoTIFF file)

Download the NEON GeoTIFF file of a digital terrain model (DTM) of the San Joaquin Experimental Range. Click this link to download the DTM data: https://ndownloader.figshare.com/articles/2009586/versions/10. This will download a zipped folder of data originally from a NEON Data Carpentry tutorial (https://datacarpentry.org/geospatial-workshop/data/).

Once downloaded, navigate through the zipped folder to NEON-DS-Airborne-Remote-Sensing\SJER\DTM and save the DTM file to your own personal working directory.

Open the GeoTIFF file in Jupyter Notebooks using gdal

The gdal package occasionally has problems with some versions of Python, so test loading it using:

import gdal

If you have trouble, ensure that gdal is installed in your current environment. (Note that in newer versions of the package, the import is from osgeo import gdal.)

Establish your directory

Place the downloaded DTM file in a repository of your choice (or your current working directory), and navigate to that directory.

wd = '/your-file-path-here'  # input the directory where you saved the .tif file

Import the TIFF

Import the NEON GeoTIFF file of the digital terrain model (DTM) from the San Joaquin Experimental Range. Open the file using the gdal.Open command, then determine the size of the raster and (optionally) plot the raster.

# Use GDAL to open the GeoTIFF file stored in your directory
SJER_DTM = gdal.Open(wd + 'SJER_dtmCrop.tif')

# Determine the raster size
SJER_DTM.RasterXSize

Add in both code chunks and text (markdown) chunks to fully explain what is done. If you would like to also plot the file, feel free to do so.
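
If you do plot the raster, a minimal sketch might look like this (assuming matplotlib is installed; the colormap choice is illustrative):

# read the DTM into a numpy array and plot it
import matplotlib.pyplot as plt

dtm_array = SJER_DTM.ReadAsArray()  # gdal Dataset -> 2D numpy array
plt.imshow(dtm_array, cmap='terrain')
plt.colorbar(label='Elevation (m)')
plt.title('SJER Digital Terrain Model')
plt.show()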

Push .ipynb to GitHub.

When finished, save your work as a .ipynb file and push it to your GitHub repo.

Introduction to using Jupyter Notebooks

Setting up Jupyter Notebooks

You can set up your notebook in several ways. Here we present the Anaconda Python distribution method so as to follow the Data Institute set up instructions.

Browser

First, make sure you have an updated browser on which to run the app. Both Mozilla Firefox and Google Chrome work well.

Installation

Data Institute participants should have already installed Jupyter Notebooks through the Anaconda installation during the Data Institute set up instructions.

If you installed Python using pip, you can install the Jupyter package with the following commands.

 
# Python2
pip install jupyter
# Python 3
pip3 install jupyter

Set up Environment

We need to set up the Python environment that we will be working in for the Notebook. This allows us to have different Python environments for different projects. The following directions pertain directly to the setup for the 2018 Data Institute on Remote Sensing with Reproducible Workflows; however, you can adapt them to the specific Python version and packages you wish to work with.

If you haven't yet created a Python 3.8 environment (Python 3.8 was released in October 2019), you'll need to do that now. You can use the single line provided below, or refer back to the Python section of the installation instructions for more details. To create this Python 3.8 environment, you must first install Anaconda Navigator onto your computer, then open the Anaconda Prompt application (or your terminal) and type the following into the prompt window:

conda create -n p38 python=3.8 anaconda

And activate the Python 3.8 environment:

On Mac:

source activate p38

On Windows:

activate p38

In the terminal application, navigate to the directory (cd) where you want the Jupyter Notebooks to be saved (or where they already exist).

Once here, we want to create a new Jupyter kernel for the Python 3.8 conda environment (p38) that we'll be using with Jupyter Notebooks.

With the p38 environment activated, in your Command Prompt/Terminal, type:

python -m ipykernel install --user --name p38 --display-name "Python 3.8 NEON-RSDI"

This command tells Python to create a new ipy (aka Jupyter Notebook) kernel using the Python environment we set up and named "p38". We then tell it to use "Python 3.8 NEON-RSDI" as the display name for this new kernel. You will use this name to identify the specific kernel you want to work with in the Notebook space, so name it descriptively, especially if you think you'll be using several different kernels.
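
To confirm that the new kernel was registered, you can list the installed kernels from the same terminal:

# list all Jupyter kernels installed for this user
jupyter kernelspec list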

Using Jupyter Notebooks

Launching the Application

To launch the application, either open it from the Anaconda Navigator or type jupyter notebook into your terminal or command window.

 
# Launch Jupyter
jupyter notebook

More information can be found in the Read the Docs Running the Jupyter Notebook.

Navigating the Jupyter Python Interface

The following information is adapted from Griffin Chure's Tutorial 0b: Using Jupyter Notebooks

If everything launched correctly, you should be able to see a screen which looks something like this. Note that the home directory will be whatever directory you have navigated to in your terminal before launching Jupyter Notebooks.

Upon opening the application, you should see a screen similar to this one. Source: Griffin Chure's Tutorial 0b: Using Jupyter Notebooks

To start a new Python notebook, click on the right-hand side of the application window and select New (the expanded menu is shown in the screenshot above). This will give you several options for new notebook kernels, depending on what is installed on your computer. In the above screenshot, there are two available Python kernels and one Matlab kernel. When starting a notebook, you should choose Python 3 if it is available, or conda(root).

Once you start a new notebook, you will be brought to the following screen.

Upon opening a new Python notebook, you should see a screen similar to this one. Source: Griffin Chure's Tutorial 0b: Using Jupyter Notebooks

Welcome to your first look at a Jupyter notebook!

There are many available buttons for you to click. The three most important components of the notebook are highlighted in colored boxes.

  • In blue is the name of the notebook. By clicking this, you can rename the notebook.
  • In red is the cell formatting assignment. By default, it is registered as code, but it can also be set to markdown as described later.
  • In purple is the code cell. In this cell, you can type and execute Python code, as well as text that will be formatted in a nicely readable format.

Selecting a Kernel

A kernel is the process that runs the commands you enter within a Jupyter Notebook. It is visible via a prompt window that logs all your actions in the notebook, which is helpful to refer to when you encounter errors. You'll be prompted to select a kernel when you open a new notebook; if you are opening an existing notebook, you will want to ensure that you are using the correct kernel. The commands for selecting and changing kernels are in the Kernel menu.

When you select or switch a kernel, you may want to navigate to Kernel in the menu and select Restart & Clear Output. This option ensures that the correct kernel will operate.

You can always check what version of Python you are running by typing the following into a code cell.

 # Check what version of Python.  Should be 3.8. 
 import sys
 sys.version

Writing & running code

The following information is adapted from Griffin Chure's Tutorial 0b: Using Jupyter Notebooks

All code you write in the notebook will be in the code cell. You can write anything from single lines to entire loops to complete functions. As an example, we can write and evaluate a print statement in a code cell, as shown below.

If you would like to write several lines of code, hit Enter to continue entering code into another line. To execute the code, we can simply hit Shift + Enter while our cursor is in the code cell.

 # This is a comment and is not read by Python
 print('Hello! This is the print function. Python will print this line below')

 Hello! This is the print function. Python will print this line below

We can also write a 'for' loop as an example of executing multiple lines of code at once.

 # Write a basic for loop. In this case a range of numbers 0-4.
 for i in range(5):
     # Multiply the value of i by two and assign it to a variable.
     temp_variable = 2 * i

     # Print the value of the temp variable.
     print(temp_variable)

 0
 2
 4
 6
 8

There are two other useful keyboard shortcuts for running code:

  • Alt + Enter runs the current cell and inserts a new one below
  • Ctrl + Enter runs the current cell and enters command mode.

For more keyboard shortcuts, check out weidadeyue's Shortcut cheatsheet.

**Data Tip:** Code cells can be executed in any order. This means that you can overwrite your current variables by running things out of order. When coding in notebooks, be cautious of the order in which you run cells.
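
As a hypothetical illustration, consider a notebook with these two cells:

 # Cell 1
 x = 1

 # Cell 2: each run of this cell adds 10 to x
 x = x + 10

Running Cell 2 twice in a row leaves x at 21 rather than 11, and re-running Cell 1 afterwards resets x to 1, so the saved outputs may no longer reflect the current state of your variables.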

If you would like more details on running code in Jupyter Notebooks, please go through the short tutorial Running Code by contributors to the Jupyter project. This tutorial touches on starting and stopping the kernel and using multiple kernels (e.g., Python and R) in one notebook.

Writing Text

The following information is adapted from Griffin Chure's Tutorial 0b: Using Jupyter Notebooks

Arguably the most useful component of the Jupyter Notebook is the ability to interweave code and explanatory text into a single, coherent document. Throughout the Data Institute (and in one's everyday workflow), we encourage that all code and plots be accompanied by explanatory text.

Each cell in a notebook can exist either as a code cell or as a text-formatting cell called a markdown cell. Markdown is a mark-up language that very easily converts to other type-setting formats such as HTML and PDF.

Whenever you make a new cell, its default assignment will be a code cell. This means that when you want to write text, you will need to specifically change it to a markdown cell. You can do this by clicking on the drop-down menu that reads 'Code' (highlighted in red in the second figure of this page) and selecting 'Markdown'. You can then type in the cell, and all Python syntax highlighting will be removed.
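
For example, typing the following into a markdown cell (an illustrative snippet) renders as a heading, emphasized text, and a bulleted list when the cell is run:

 # A heading
 Some *italic* and **bold** text, plus a list:
 - first item
 - second item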

Resources for Learning Markdown

  • Review the NEON tutorial Git 04: Markdown Files
  • Adam Pritchard's Markdown Cheatsheet

Saving & Quitting

The following information is adapted from Griffin Chure's Tutorial 0b: Using Jupyter Notebooks

Jupyter notebooks are set up to autosave your work every 15 or so minutes. However, you should not rely on the autosave feature! Save your work frequently by clicking on the floppy disk icon located in the upper left-hand corner of the toolbar.

To navigate back to the root of your Jupyter notebook server, you can click on the Jupyter logo at any time.

To quit your Jupyter notebook, simply close the browser window and shut down the Jupyter notebook server running in your terminal (e.g., with Ctrl + C).

Converting to HTML and PDF

In addition to sharing notebooks in the .ipynb format, it may be useful to convert these notebooks to highly portable formats such as HTML and PDF.

To convert, you can either use the dropdown menu option

File -> download as -> ...

or via the command line by using the following lines:

 jupyter nbconvert --to pdf notebook_name.ipynb 

Where "notebook_name.ipynb" matches the name of the notebook you want to convert. Prior to converting the notebook, you must be in the same working directory as your notebook or use the correct file path from your current working directory.

Converting to PDF requires both Pandoc and LaTeX to be installed. You can find out more in the ReadTheDoc for nbconvert.

If you prefer to convert to a different format, like HTML, you simply change the file type:

 jupyter nbconvert --to html notebook_name.ipynb 

Read more on what formats you can convert to and more about the nbconvert package.

Additional Resources

Using Jupyter Notebooks

  • Jupyter Documentation on ReadTheDocs
  • Griffin Chure's multi-part course on Using Jupyter Notebooks for Scientific Computing. Much of the material above is adapted from Tutorial 0b: Using Jupyter Notebooks.
  • Jupyter Project's Running Code

Using Python

  • Software Carpentry's Programming with Python workshop
  • Data Carpentry's Python for Ecologists workshop
  • Many, many others that a simple web search will bring up...

Document & Publish Your Workflow: Jupyter Notebooks

In this tutorial we learn how to effectively and efficiently document and publish our workflows online.

Learning Objectives

At the end of this activity, you will be able to:

  • Explain why documenting and publishing one's code is important.
  • Describe two tools that enable ease of publishing code & output: Jupyter Notebooks with the Python kernel.

Documentation Is Important

As we read in the Reproducible Science overview, the four facets of reproducible science are:

  • Documentation
  • Organization
  • Automation and
  • Dissemination.

This week we will learn about the Jupyter Notebook as a tool to document and publish (disseminate) your code and code output.

View Slideshow: Share, Publish & Archive - from the Reproducible Science Curriculum

Jupyter Notebook

“The Jupyter Notebook is an open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text. Uses include: data cleaning and transformation, numerical simulation, statistical modeling, machine learning and much more.” -- Jupyter Notebook documentation.

We use markdown syntax in Notebook documents to document workflows and to share data processing, analysis, and visualization outputs. We can also use notebooks to create documents that combine code in your language of choice, output, and text.

Jupyter Notebooks grew out of iPython. Jupyter is a loose acronym derived from Julia, Python, and R, the first languages the Jupyter application was designed for, and Jupyter Notebooks now support over 40 coding languages. You may still find some references to iPython in materials related to Jupyter Notebooks. This series focuses on using Jupyter Notebooks with Python, but the information presented can apply to other languages as well.

The Jupyter Notebook application is browser-based, so you need an updated browser (the Jupyter programmers recommend Mozilla Firefox or Google Chrome, but not Internet Explorer). Once installed on your computer, you can access the app even without internet access. You can also use Jupyter installed on a remote server; for example, the Jupyter project runs a temporary, server-based training version.

Why Jupyter Notebooks?

There are many advantages to using Jupyter Notebooks in your work:

  • Human readable syntax.
  • Simple syntax - it can be learned quickly.
  • All components of your work are clearly documented. You don't have to remember what steps, assumptions, or tests were used.
  • You can easily extend or refine analyses by modifying existing or adding new code blocks.
  • Analysis results can be disseminated in various formats including HTML, PDF, slideshows and more.
  • Code and data can be shared with a colleague to replicate the workflow.

Explore Examples of Notebooks

Before we jump into how to work with notebooks, check out a few shared notebooks. As you look at these different notebooks, what aspects of the layout do you like, what don't you like? Is there a place in your current workflow that these notebooks would be useful?

  • Jupyter's GitHub Wiki: A gallery of interesting Jupyter Notebooks. Not only is this a great collection of example notebooks, but it is also a valuable resource for learning other skills associated with using Python and Jupyter Notebooks.
  • Fabian Pedregosa's Notebook Gallery

In the next tutorial, Introduction to using Jupyter Notebooks, we will learn more about working with Jupyter Notebooks.

Data Institute Activity: Calculate Index of Interest

Remote Sensing Indices

There are many different indices you might want in your research. NEON provides several indices as data products that have already been calculated and are available for download from the NEON data portal.

NEON Remote Sensing Vegetation Indices, Data Products, and Uncertainty

In this 20-minute video, David Hulslander describes NEON data products, including several remote sensing vegetation indices.

Activity Steps

  1. Choose an index of interest. You may want to check out Verena Henrich & Katharina Brüser's Index Database for ideas: www.indexdatabase.de.

  2. Work with your small group to create a script to calculate this index from the NEON data. Be sure to add comments so that the script is useful to others.

  3. Add your script to the GitHub repo DI-NEON-participants to share with your colleagues. Save scripts to the DI-NEON-participants/2018-RemoteSensing/rs-indices directory.
    Be sure to provide a clear file name reflecting the contents. If you are comfortable, we recommend you put your names in the script, as others may want to contact you about it. As a starting point, see the NDVI sketch below.
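
Here is a minimal sketch of one common index, NDVI, computed from the refl, refl_metadata, and wavelengths variables read in earlier in this series (the ~670 nm red and ~850 nm near-infrared band choices are illustrative):

import numpy as np

# find the band indices closest to red (~670 nm) and near-infrared (~850 nm)
red_band = np.argmin(np.abs(wavelengths - 670))
nir_band = np.argmin(np.abs(wavelengths - 850))

red = refl[:, :, red_band].astype(float)
nir = refl[:, :, nir_band].astype(float)

# mask the no-data values before calculating
red[red == refl_metadata['no_data_value']] = np.nan
nir[nir == refl_metadata['no_data_value']] = np.nan

# NDVI = (NIR - Red) / (NIR + Red); the reflectance scale factor cancels in the ratio
ndvi = (nir - red) / (nir + red)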


Install Git, Bash Shell, Python

This page outlines the tools and resources that you will need to install Git, Bash and Python applications onto your computer as the first step of our Python skills tutorial series.

Checklist

Detailed directions to accomplish each objective are below.

  • Install Bash shell (or shell of preference)
  • Install Git
  • Install Python 3.x

Bash/Shell Setup

Install Bash for Windows

  1. Download the Git for Windows installer.
  2. Run the installer and follow the steps below:
    1. Welcome to the Git Setup Wizard: Click on "Next".
    2. Information: Click on "Next".
    3. Select Destination Location: Click on "Next".
    4. Select Components: Click on "Next".
    5. Select Start Menu Folder: Click on "Next".
    6. Adjusting your PATH environment: Select "Use Git from the Windows Command Prompt" and click on "Next". If you forget to do this, programs that you need for the event will not work properly. If this happens, rerun the installer and select the appropriate option.
    7. Configuring the line ending conversions: Click on "Next". Keep "Checkout Windows-style, commit Unix-style line endings" selected.
    8. Configuring the terminal emulator to use with Git Bash: Select "Use Windows' default console window" and click on "Next".
    9. Configuring experimental performance tweaks: Click on "Next".
    10. Completing the Git Setup Wizard: Click on "Finish".

This will provide you with both Git and Bash in the Git Bash program.

Install Bash for Mac OS X

The default shell in all versions of Mac OS X is bash, so no need to install anything. You access bash from the Terminal (found in /Applications/Utilities). You may want to keep Terminal in your dock for this workshop.

Install Bash for Linux

The default shell is usually Bash, but if your machine is set up differently you can run it by opening a terminal and typing bash. There is no need to install anything.

Git Setup

Git is a version control system that lets you track who made which changes and when, and it has options for easily updating a shared or public version of your code on GitHub. You will need a supported web browser (current versions of Chrome, Firefox or Safari, or Internet Explorer version 9 or above).

Git installation instructions borrowed and modified from Software Carpentry.

Git for Windows

Git should be installed on your computer as part of your Bash install.

Git on Mac OS X

Video Tutorial

Install Git on Macs by downloading and running the most recent installer from this list: use the "mavericks" installer if you are running OS X 10.9 or higher, or the most recent "snow leopard" installer if you are using an earlier version of OS X. After installing Git, there will not be anything in your /Applications folder, as Git is a command line program.

**Data Tip:** If you are running Mac OS X El Capitan, you might encounter errors when trying to use git. Make sure you update Xcode. Read more in this Stack Overflow issue.

Git on Linux

If Git is not already available on your machine you can try to install it via your distro's package manager. For Debian/Ubuntu run sudo apt-get install git and for Fedora run sudo yum install git.

Setting Up Python

Python is a popular language for scientific computing and data science, as well as a great language for general-purpose programming. Installing all of the scientific packages individually can be a bit difficult, so we recommend using an all-in-one installer like Anaconda.

Regardless of how you choose to install it, please make sure your environment is set up with Python version 3.7 (at the time of writing, the gdal package did not work with the newest Python version). Python 2.x is quite different from Python 3.x, so you do need to install Python 3.x and set up the 3.7 environment.

We will teach using Python in the Jupyter Notebook environment, a programming environment that runs in a web browser. For this to work you will need a reasonably up-to-date browser. The current versions of the Chrome, Safari and Firefox browsers are all supported (some older browsers, including Internet Explorer version 9 and below, are not). You can choose not to use notebooks in the course; however, we do recommend you download and install the library so that you can explore this tool.

Windows

Download and install Anaconda. Download the default Python 3 installer (3.7). Use all of the defaults for installation except make sure to check Make Anaconda the default Python.

Mac OS X

Download and install Anaconda. Download the Python 3.x installer, choosing either the graphical installer or the command-line installer (3.7). For the graphical installer, use all of the defaults for installation. For the command-line installer open Terminal, navigate to the directory with the download then enter:

bash Anaconda3-2020.11-MacOSX-x86_64.sh (or whatever your file name is)

Linux

Download and install Anaconda. Download the installer that matches your operating system and save it in your home folder. Download the default Python 3 installer.

Open a terminal window and navigate to your downloads folder. Type

bash Anaconda3-2020.11-Linux-ppc64le.sh

and then press tab. The name of the file you just downloaded should appear.

Press enter. You will follow the text-only prompts. When there is a colon at the bottom of the screen press the down arrow to move down through the text. Type yes and press enter to approve the license. Press enter to approve the default location for the files. Type yes and press enter to prepend Anaconda to your PATH (this makes the Anaconda distribution the default Python).

Install Python packages

We need to install several packages into the Python environment to be able to work with the remote sensing data:

  • gdal
  • h5py

If you are new to working with the command line, you may wish to complete the next set of setup instructions, which provides an intro to the command line (bash), prior to completing these package installation instructions.

Windows

Create a new Python 3.7 environment by opening Windows Command Prompt and typing

conda create -n py37 python=3.7 anaconda

When prompted, activate the py37 environment in Command Prompt by typing

activate py37

You should see (py37) at the beginning of the command line. You can also test that you are using the correct version by typing python --version.

Install Python package(s):

  • gdal: conda install gdal
  • h5py: conda install h5py

Note: You may only need to install gdal, as the other packages may be included in the default Anaconda distribution.

Mac OS X

Create a new Python 3.7 environment by opening Terminal and typing

conda create -n py37 python=3.7 anaconda

This may take a minute or two.

When prompted, activate the py37 environment in Terminal by typing

source activate py37

You should see (py37) at the beginning of the command line. You can also test that you are using the correct version by typing python --version.

Install Python package(s):

  • gdal: conda install gdal
  • h5py: conda install h5py

Linux

Open the default terminal application (on Ubuntu, that will be gnome-terminal).

Create and activate the py37 environment as described above.

Install Python package(s):

  • gdal: conda install gdal
  • h5py: conda install h5py

Set up Jupyter Notebook Environment

In your terminal application, navigate to the directory (cd) where you want the Jupyter Notebooks to be saved (or where they already exist).

Open Jupyter Notebook with

jupyter notebook

Once the notebook is open, check which version of Python you are in by running:

# check what version of Python you are using.
import sys
sys.version

You should now be able to work in the notebook.

The gdal package occasionally has problems with some versions of Python, so test loading it using:

import gdal

Additional Resources

  • Setting up the Python Environment section from the Python Bootcamp
  • Conda Help: setting up an environment
  • iPython documentation: Kernels
