Title: Facilitates 'PhenoCam' Data Access and Time Series Post-Processing
Description: Programmatic interface to the 'PhenoCam' web services (<https://phenocam.nau.edu/webcam>). Allows for easy downloading of 'PhenoCam' data directly to your R workspace or your computer and provides post-processing routines for consistent and easy time series outlier detection, smoothing and estimation of phenological transition dates. Methods for this package are described in detail in Hufkens et al. (2018) <doi:10.1111/2041-210X.12970>.
Authors: Koen Hufkens [aut, cre], BlueGreen Labs [cph, fnd]
Maintainer: Koen Hufkens <[email protected]>
License: AGPL-3
Version: 1.1.5
Built: 2024-10-29 03:17:09 UTC
Source: https://github.com/bluegreen-labs/phenocamr
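As a quick orientation, a minimal workflow sketch (assuming a standard installation from CRAN) that mirrors the examples in the function entries below: download a 3-day Gcc time series and derive phenophase estimates in one call.

## Not run: 
# install from CRAN and load the package
install.packages("phenocamr")
library(phenocamr)

# download a 3-day Harvard forest time series and
# calculate phenological transition dates
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3",
                  phenophase = TRUE,
                  out_dir = tempdir())
## End(Not run)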
Reverts the effect of the 'expand_phenocam()' function in order to save space and generate files as outlined in the cited data paper. This routine is used as a post-production measure.
contract_phenocam( data, internal = TRUE, no_padding = FALSE, out_dir = tempdir() )
data | a PhenoCam data file with a 1 or 3 day time step
internal | return a data structure if given a file on disk (default = TRUE)
no_padding | allow padding to remain or not (default = FALSE)
out_dir | output directory where to store data (default = tempdir())
A PhenoCam 3-day time series contracted back to its original 3-day time step (if provided at a 1-day interval); padding introduced by processing 1-day data is also removed.
## Not run: 
# download demo data
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# Overwrites the original file, increasing
# its file size.
expand_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))

# Contracts the file to its original size, skipping
# two days.
contract_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
## End(Not run)
This routine calculates day length and solar elevation using the model of Forsythe et al. (1995).
daylength(doy, latitude)
doy | a vector with doy values 1 - 365(6)
latitude | a given latitude
nested list with daylength (daylength) and solar elevation (solar_elev) elements
## Not run: 
# calculate the hours of sunlight and solar elevation on day of year 1
# and latitude 51
ephem <- daylength(1, 51)
print(ephem)
## End(Not run)
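For reference, a stand-alone sketch of the Forsythe et al. (1995) CBM day-length model on which this routine is based. The constants are those published in the paper; the function is illustrative only and is not the package's internal implementation (which additionally returns solar elevation).

# Illustrative sketch of the Forsythe et al. (1995) CBM day-length model.
cbm_daylength <- function(doy, latitude, p = 0) {
  # revolution angle and solar declination
  theta <- 0.2163108 + 2 * atan(0.9671396 * tan(0.00860 * (doy - 186)))
  phi   <- asin(0.39795 * cos(theta))
  # hours of daylight for daylight coefficient p
  # (p = 0: centre of the sun at the horizon, no twilight)
  24 - (24 / pi) * acos(
    (sin(p * pi / 180) + sin(latitude * pi / 180) * sin(phi)) /
      (cos(latitude * pi / 180) * cos(phi))
  )
}

cbm_daylength(doy = 1, latitude = 51)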
The function fills in the existing column to hold outlier flags, and either overwrites the original file or outputs a data structure.
detect_outliers( data, iterations = 20, sigma = 2, grvi = FALSE, snowflag = FALSE, plot = FALSE, internal = TRUE, out_dir = tempdir() )
data | a PhenoCam data structure or filename
iterations | number of iterations used to detect outliers (default = 20)
sigma | number of deviations at which to exclude outliers (default = 2)
grvi | reverse the direction of the screening intervals to accommodate GRVI outliers (default = FALSE)
snowflag | use manual snow flag labels as outliers (default = FALSE)
plot | visualize the process, mostly for debugging (default = FALSE)
internal | return a data structure if given a file on disk (default = TRUE)
out_dir | output directory where to store data (default = tempdir())
## Not run: 
# download demo data (do not detect outliers)
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3",
                  outlier_detection = FALSE)

# detect outliers in the downloaded file
detect_outliers(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
## End(Not run)
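Following on from the example above, the function also accepts a PhenoCam data structure read into the workspace, in which case the flagged data are returned (internal = TRUE by default).

## Not run: 
# read the downloaded file and flag outliers on the data structure
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
df <- detect_outliers(df)
## End(Not run)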
This function is a wrapper around most of the other functions. It downloads a time series and extracts relevant phenological transition dates or phenophases.
download_phenocam( site = "harvard$", veg_type = NULL, frequency = "3", roi_id = NULL, outlier_detection = TRUE, smooth = TRUE, contract = FALSE, daymet = FALSE, trim_daymet = TRUE, trim = NULL, phenophase = FALSE, out_dir = tempdir(), internal = FALSE )
site | the site name, as mentioned on the PhenoCam web page, expressed as a regular expression ("harvard$" == exact match)
veg_type | vegetation type (DB, EN, ... default = ALL)
frequency | frequency of the time series product (1, 3, "roistats")
roi_id | the id of the ROI to download (default = ALL)
outlier_detection | TRUE or FALSE, detect outliers
smooth | smooth data (logical, default is TRUE)
contract | contract 3-day data (logical, default is FALSE)
daymet | TRUE or FALSE, merge Daymet data
trim_daymet | TRUE or FALSE, trim the Daymet data to match the PhenoCam data
trim | year (numeric) to which to constrain the output (default = NULL)
phenophase | logical, calculate transition dates (default = FALSE)
out_dir | output directory where to store downloaded data (default = tempdir())
internal | allow for the data element to be returned to the workspace (default = FALSE)
Downloaded files of the requested time series products in out_dir, as well as derived phenophase estimates based upon these time series.
## Not run: 
# download the first ROI time series for the Harvard PhenoCam site
# at an aggregation frequency of 3-days.
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# read phenocam data into phenocamr data structure
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
## End(Not run)
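Alternatively, a sketch assuming internal = TRUE returns the data element directly to the workspace (as described for the internal argument) rather than only writing files to out_dir.

## Not run: 
# return the data directly to the R workspace
df <- download_phenocam(site = "harvard$",
                        veg_type = "DB",
                        roi_id = "1000",
                        frequency = "3",
                        internal = TRUE)
## End(Not run)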
Necessary step to guarantee consistent data processing between 1 and 3-day data products. Should rarely be used independent of 'download_phenocam()'.
expand_phenocam(data, truncate = NULL, internal = TRUE, out_dir = tempdir())
data | a PhenoCam file
truncate | year (numerical), limit the time series to a particular year (default = NULL)
internal | return a data structure if given a file on disk (default = TRUE)
out_dir | output directory where to store data (default = tempdir())
Expanded PhenoCam data structure or file, including 90 day padding if requested.
## Not run: 
# download demo data
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# Overwrites the original file, increasing
# its file size.
expand_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))

# Contracts the file to its original size, skipping
# two days.
contract_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
## End(Not run)
The GRVI is defined as the normalized difference between the green and red channels of an RGB image or digital number (DN) triplet. However, the blue channel can be included as well using a weighting factor. A parameter vector is therefore provided so the different channels / DN values can be weighted separately.
grvi(data, par = c(1, 1, 1), internal = TRUE, out_dir = tempdir())
data | a PhenoCam data file or data frame (when using a file provide a full path if not in the current working directory)
par | GRVI parameters (digital number weights; default = c(1, 1, 1))
internal | return a data structure if given a file on disk (default = TRUE)
out_dir | output directory where to store data (default = tempdir())
Inserts a GRVI data column into the provided PhenoCam data structure or file.
## Not run: 
# with defaults, outputting a data frame
# with smoothed values, overwriting the original

# download demo data
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# calculate and append the GRVI for a file (overwrites the original)
grvi(file.path(tempdir(), "harvard_DB_1000_3day.csv"))

# as with all functions, this also works on a PhenoCam data structure
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
df <- grvi(df, par = c(1, 1, 0))
## End(Not run)
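To make the weighting explicit, a minimal sketch of a weighted GRVI calculation from DN triplets. The column-free helper, its name and the exact weighted form are assumptions for illustration only, not the package's internal implementation.

## Not run: 
# Illustrative only: weighted GRVI from digital number (DN) values,
# with par ordered as (red, green, blue) weights (an assumption).
weighted_grvi <- function(r, g, b, par = c(1, 1, 0)) {
  (par[2] * g - par[1] * r - par[3] * b) /
    (par[2] * g + par[1] * r + par[3] * b)
}

# with the blue weight set to zero this reduces to the
# classic GRVI = (G - R) / (G + R)
weighted_grvi(r = 120, g = 150, b = 90)
## End(Not run)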
The ROI list can be helpful in determining which time series to download using 'download_phenocam()'.
list_rois(out_dir = tempdir(), internal = TRUE)
out_dir | output directory (default = tempdir())
internal | TRUE or FALSE (default = TRUE)
A data frame with ROIs for all available cameras
## Not run: 
# download the ROI list
df <- list_rois()
## End(Not run)
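As an illustration of using the ROI list to decide what to download, a sketch that filters the list for deciduous broadleaf (DB) ROIs. The column name veg_type is an assumption about the returned data frame and may differ.

## Not run: 
# list all ROIs, then keep only deciduous broadleaf (DB) entries
rois <- list_rois()
db_rois <- subset(rois, veg_type == "DB")  # column name is an assumption
head(db_rois)
## End(Not run)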
The site list can be helpful in determining which time series to download using 'download_phenocam()'. The site list also includes meta-data on plant functional types, general climatological conditions such as mean annual temperature, and geographic location.
list_sites(out_dir = tempdir(), internal = TRUE)
out_dir | output directory (default = tempdir())
internal | TRUE or FALSE (default = TRUE)
A data frame with meta-data for all available sites.
## Not run: 
# download the site meta-data
df <- list_sites()
## End(Not run)
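A short sketch of inspecting the site meta-data and looking up a single site. The site column name used in the subset is an assumption about the returned data frame.

## Not run: 
# inspect the available meta-data columns and look up one site
sites <- list_sites()
str(sites)
subset(sites, site == "harvard")  # column name is an assumption
## End(Not run)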
Combine PhenoCam time series with matching climatological variables from Daymet.
merge_daymet(data, trim = FALSE, internal = TRUE, out_dir = tempdir())
data | a PhenoCam data file or data structure
trim | logical, trim the Daymet data to the length of the PhenoCam time series or include the whole Daymet time series (1980-current) (default = FALSE)
internal | return a data structure if given a file on disk (default = TRUE)
out_dir | output directory where to store data (default = tempdir())
A PhenoCam data structure or file which combines PhenoCam time series data with Daymet based climate values (columns will be added).
## Not run: 
# download demo data
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# merge data with daymet data
merge_daymet(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
## End(Not run)
Combine PhenoCam time series with MODIS data for matching dates.
merge_modis( data, product, band, trim = FALSE, internal = TRUE, out_dir = tempdir() )
data | a PhenoCam data file or data structure
product | which MODIS product to query (character vector)
band | which MODIS band(s) to include (character vector)
trim | logical, trim the MODIS data to the length of the PhenoCam time series or include the whole MODIS time series (default = FALSE)
internal | return a data structure if given a file on disk (default = TRUE)
out_dir | output directory where to store data (default = tempdir())
A PhenoCam data structure or file which combines PhenoCam time series data with MODIS values (columns will be added). Data are queried from the ORNL MODIS subsets service using the 'MODISTools' package; please consult either source for valid product and band names.
## Not run: 
# download demo data
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# merge data with MODIS data
df <- merge_modis(file.path(tempdir(), "harvard_DB_1000_3day.csv"),
                  product = "MOD13Q1",
                  band = "250m_16_days_NDVI")
## End(Not run)
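As hinted at above, the 'MODISTools' helpers mt_products() and mt_bands() can be used to look up valid product and band names before calling merge_modis().

## Not run: 
# list available MODIS products and the bands of one product
products <- MODISTools::mt_products()
bands    <- MODISTools::mt_bands(product = "MOD13Q1")
head(bands)
## End(Not run)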
Normalizes PhenoCam data between 0 and 1 to standardize further processing, independent of the relative amplitude of the time series (works on vectors, not data frames). For internal use only.
normalize_ts(df, percentile = 90)
df | a PhenoCam data frame
percentile | percentile value to interpret (default = 90)
A normalized PhenoCam time series.
# Internal function only, should not be used stand-alone.
# As such no documentation is provided.
The optimal span is calculated based upon the Bayesian information criterion (BIC).
optimal_span( y, x = NULL, weights = NULL, step = 0.01, label = NULL, plot = FALSE )
y | a vector with measurement values to smooth
x | a vector with dates / time steps
weights | optional values to weigh the loess fit with
step | span increment size (default = 0.01)
label | title to be used when plotting function output
plot | plot visual output of the optimization routine (default = FALSE)
Returns an optimal span to smooth a provided vector using the 'loess()' smoother.
## Not run: 
# Internal function only, should not be used stand-alone.
l <- sin(seq(1, 10, 0.01))
l <- l + runif(length(l))
optimal_span(l, plot = TRUE)
## End(Not run)
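To make the idea concrete, a rough sketch of span optimization: fit loess() over a range of spans and keep the span minimizing a BIC-style criterion. This is illustrative only; the exact criterion and bookkeeping inside optimal_span() may differ.

## Not run: 
# Illustrative only: choose a loess span by minimising a BIC-style score,
# using the trace of the smoother matrix as the effective number of parameters.
bic_span <- function(y, x = seq_along(y), spans = seq(0.1, 1, by = 0.01)) {
  bic <- sapply(spans, function(s) {
    fit <- suppressWarnings(loess(y ~ x, span = s))
    n   <- length(y)
    k   <- fit$trace.hat
    n * log(mean(fit$residuals ^ 2)) + k * log(n)
  })
  spans[which.min(bic)]
}

l <- sin(seq(1, 10, 0.01)) + runif(901)
bic_span(l)
## End(Not run)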
The GUI allows you to interactively download data and visualize time series.
phenocam_explorer()
## Not run: 
# Starts the PhenoCam explorer GUI in a browser
phenocam_explorer()
## End(Not run)
This routine combines a forward and a backward run of the transition_dates() function to calculate the phenophases in both the rising and falling parts of a PhenoCam time series.
phenophases(data, mat, internal = TRUE, out_dir = tempdir(), ...)
data | a PhenoCam data file (or data frame)
mat | mean annual temperature
internal | return a PhenoCam data file or data frame (default = TRUE)
out_dir | output directory (default = tempdir())
... | pass parameters to the transition_dates() function
Estimates of transition dates for both rising and falling parts of a PhenoCam time series. All time series are evaluated (gcc_90, gcc_75, etc.). The function returns a nested list with UNIX time based values, including uncertainties on these estimates and their associated thresholds. When written to disk, UNIX dates are converted to YYYY-MM-DD. The nested list has named elements rising and falling, at locations 1 and 2 of the list respectively.
## Not run: 
# downloads a time series
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# read in data as data frame and calculate phenophases
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
my_dates <- phenophases(df, internal = TRUE)

# print results
print(my_dates)
## End(Not run)
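Building on the example above, the rising and falling estimates can be pulled from the returned nested list by name, as described under the Value section.

## Not run: 
# extract the rising and falling transition date estimates
rising_dates  <- my_dates$rising
falling_dates <- my_dates$falling
## End(Not run)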
Wrapper around the other, more basic, functions in order to generate PhenoCam data products.
process_phenocam( file, outlier_detection = TRUE, smooth = TRUE, contract = FALSE, expand = TRUE, truncate, phenophase = TRUE, snow_flag = FALSE, penalty = 0.5, out_dir = tempdir(), internal = FALSE, ... )
file | 1 or 3-day PhenoCam time series file path
outlier_detection | TRUE or FALSE, detect outliers
smooth | smooth data (logical, default is TRUE)
contract | contract 3-day data upon output (logical, default is FALSE)
expand | expand 3-day data upon input (logical, default is TRUE)
truncate | year (numeric) to which to constrain the output
phenophase | logical, calculate transition dates (default = TRUE)
snow_flag | integrate snow flags? (default = FALSE)
penalty | sensitivity of the change point algorithm; lower values are more sensitive (< 1, default = 0.5)
out_dir | output directory where to store downloaded data (default = tempdir())
internal | allow for the data element to be returned to the workspace (default = FALSE)
... | additional parameters forwarded to the phenophases() function, used internally by this routine
Downloaded files of the requested time series products in out_dir, as well as derived phenophase estimates based upon these time series.
## Not run: 
# download the first ROI time series for the Harvard PhenoCam site
# at an aggregation frequency of 3-days.
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# process the downloaded phenocam file
df <- process_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
## End(Not run)
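A further sketch using explicit processing options from the argument list above; the year value passed to truncate is illustrative only.

## Not run: 
# process with explicit options: contract the 3-day output
# and limit the series to a given (illustrative) year
process_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"),
                 contract = TRUE,
                 truncate = 2015)
## End(Not run)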
Reads PhenoCam data into a nested list, preserving header data and critical file name information.
read_phenocam(filename)
filename | a PhenoCam data file
A nested data structure including site meta-data, the full header and the data as a 'data.frame()'.
## Not run: 
# download demo data (do not smooth)
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3",
                  smooth = FALSE)

# read the phenocam data file
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))

# print data structure
print(summary(df))

# write the phenocam data file
write_phenocam(df, out_dir = tempdir())
## End(Not run)
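For orientation, a sketch of inspecting the nested structure returned by read_phenocam(), which bundles site meta-data, the original header and the measurements. The element name data used below is an assumption about the structure's layout.

## Not run: 
# inspect the top-level elements of the nested structure
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
str(df, max.level = 1)
measurements <- df$data  # assumed name of the data.frame element
head(measurements)
## End(Not run)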
Smooths time series iteratively using an Akaike information criterion (AIC) to find an optimal smoothing parameter and curve.
smooth_ts( data, metrics = c("gcc_mean", "gcc_50", "gcc_75", "gcc_90", "rcc_mean", "rcc_50", "rcc_75", "rcc_90"), force = TRUE, internal = TRUE, out_dir = tempdir() )
data | a PhenoCam data file or data structure
metrics | which metrics to process, normally all default ones
force | logical (default = TRUE)
internal | return a data structure if given a file on disk (default = TRUE)
out_dir | output directory where to store data (default = tempdir())
A PhenoCam data structure or file with optimally smoothed time series objects added to the original file. Smoothing is required for the 'phenophases()' and 'transition_dates()' functions.
## Not run: 
# with defaults, outputting a data frame
# with smoothed values, overwriting the original

# download demo data (do not smooth)
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3",
                  smooth = FALSE)

# smooth the downloaded file (and overwrite the original)
smooth_ts(file.path(tempdir(), "harvard_DB_1000_3day.csv"))

# the function also works on a PhenoCam data frame
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
df <- smooth_ts(df)
## End(Not run)
Segments a PhenoCam time series and calculates threshold-based transition dates for all segments. This function is rarely called stand-alone and 'phenophases()' should be preferred when evaluating PhenoCam time series.
transition_dates( data, lower_thresh = 0.1, middle_thresh = 0.25, upper_thresh = 0.5, percentile = 90, penalty = 0.5, seg_length = 14, reverse = FALSE, plot = FALSE )
data | a PhenoCam data file or data structure
lower_thresh | the minimum threshold used (default = 0.1)
middle_thresh | the middle threshold used (default = 0.25)
upper_thresh | the maximum threshold used (default = 0.5)
percentile | time series percentile to process (mean, 50, 75, 90; default = 90)
penalty | sensitivity of the algorithm; lower values are more sensitive (< 1, default = 0.5)
seg_length | minimum length of a segment to be evaluated (default = 14)
reverse | flip the direction of the processing (default = FALSE)
plot | plot for debugging purposes (default = FALSE)
Transition date estimates in UNIX time, including uncertainties and the threshold values estimated for each section of a time series.
## Not run: 
# download demo data
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# read the data and calculate transition dates
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))
my_dates <- transition_dates(df,
                             lower_thresh = 0.1,
                             middle_thresh = 0.25,
                             upper_thresh = 0.5,
                             percentile = 90,
                             reverse = FALSE,
                             plot = FALSE)
## End(Not run)
The 'expand_phenocam()' function provides similar functionality and is preferred. This function remains as it might still serve a purpose for some users, but it might be deprecated in the future.
truncate_phenocam(data, year = 2015, internal = TRUE, out_dir = tempdir())
data | a PhenoCam file or data frame
year | the last valid year, discard the rest (default = 2015)
internal | return a data structure if given a file on disk (default = TRUE)
out_dir | output directory where to store data (default = tempdir())
A truncated PhenoCam data structure or file, with data limited to the year specified.
## Not run: 
# download demo data
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3")

# overwrites the original file, decreasing
# the file size, with the given year as maximum
truncate_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"),
                  year = 2015)
## End(Not run)
Writes a nested data structure of class phenocamr to file, reconstructing the original data structure from included headers and data components.
write_phenocam(df = NULL, out_dir = tempdir())
df | a nested data structure of class phenocamr
out_dir | output directory where to store data (default = tempdir())
Writes a PhenoCam data structure to file, retaining proper header info and inserting a processing time stamp.
## Not run: 
# download demo data (do not smooth)
download_phenocam(site = "harvard$",
                  veg_type = "DB",
                  roi_id = "1000",
                  frequency = "3",
                  smooth = FALSE)

# read the phenocam data file
df <- read_phenocam(file.path(tempdir(), "harvard_DB_1000_3day.csv"))

# print data structure
print(summary(df))

# write the phenocam data file
write_phenocam(df, out_dir = tempdir())
## End(Not run)