neonUtilities 1.3.2 is now available on CRAN! There are several updates in this version, primarily around improving performance and retaining more complete metadata from the original download.
stackByTable has optional parallel processing. Use the input nCores to specify the number of cores to use in stacking the data, for speedier processing. This is helpful primarily in dealing with long time series of sensor data. The function may override nCores if it determines parallelization will not significantly improve performance.
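As a rough sketch of the new input (the zip file path here is hypothetical; point it at your own download):

```r
library(neonUtilities)

# Stack a downloaded zip of sensor data using 4 cores.
# The file path below is illustrative, not a real download.
stackByTable(filepath = "~/Downloads/NEON_temp-air-single.zip",
             nCores = 4)
```

If the data volume is small, the function may fall back to serial processing even when nCores is greater than 1.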
stackByTable retains a version of the readme file. In downloads, readme files are generated for each site-by-month combination, so the readme file provided by stackByTable has the site- and date-specific information removed.
stackByTable also retains the sensor_positions files, which are provided with downloads of streaming sensor data. sensor_positions files are specific to a site, so the version provided by stackByTable is stacked across sites, in the same way as the data tables.
stackByTable no longer requires the ‘folder’ input to distinguish between portal and zipsByProduct downloads. It won’t fail if ‘folder’ is included, so any existing code using it will continue to work.
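To illustrate, a zipsByProduct download can now be stacked without the ‘folder’ input. The product ID, site, and save path below are illustrative, and the filesToStack folder name is an assumption based on typical zipsByProduct output:

```r
library(neonUtilities)

# Download one product for one site via the API, then stack it.
# No 'folder' input is needed in the stackByTable call.
zipsByProduct(dpID = "DP1.10098.001", site = "WREF",
              savepath = "~/Downloads", check.size = FALSE)
stackByTable(filepath = "~/Downloads/filesToStack10098")
```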
stackByTable adds a new column to each data table, called publicationDate, populated with the date-time stamp of publication for each record of data. These dates are provided with the raw downloaded files, but were not tracked during stacking in previous versions.
Publication date is also taken into consideration in the variables and validation files: the file kept by stackByTable is now reliably the most recently published version.
A new function, readTableNEON, takes a NEON data table and a NEON variables file as inputs and assigns classes to data frame columns based on the data types specified in the variables file. Dates are stored as POSIXct, and URLs are stored as strings.
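A minimal sketch of using readTableNEON on stacked files; the file paths below are hypothetical, and the stackedFiles folder layout is assumed from a typical stackByTable run:

```r
library(neonUtilities)

# Read a stacked table, with column classes assigned from the
# matching variables file (dates as POSIXct, URLs as strings).
# Both paths are illustrative.
dat <- readTableNEON(
  dataFile = "~/Downloads/filesToStack10098/stackedFiles/vst_apparentindividual.csv",
  varFile = "~/Downloads/filesToStack10098/stackedFiles/variables_10098.csv")
```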
loadByProduct now uses readTableNEON, so data accessed by loadByProduct are read into R according to data type.
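For example, a single loadByProduct call downloads, stacks, and reads the data into R with classes already assigned (the product ID, site, and date range here are illustrative):

```r
library(neonUtilities)

# Download and stack breeding bird point count data for one site,
# returned as a named list of correctly typed data frames.
birds <- loadByProduct(dpID = "DP1.10003.001", site = "HARV",
                       startdate = "2018-01", enddate = "2018-12",
                       check.size = FALSE)
```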
If you have any issues or questions about this release, please visit our issues log on GitHub.