API Reference
Core Pipeline
EddyPipeline
Peddy.EddyPipeline — Type
EddyPipeline(; sensor, quality_control, gas_analyzer, despiking, make_continuous, gap_filling,
             double_rotation, mrd, output, logger)

High-level orchestrator for the Peddy.jl processing pipeline.
Each step is a pluggable component that implements its respective abstract interface. Any step can be set to nothing to skip it. The typical order is:
- Quality control (quality_control!)
- Gas analyzer correction (correct_gas_analyzer!)
- Despiking (despike!)
- Make continuous (make_continuous!) – insert missing timestamps with NaNs
- Gap filling / interpolation (fill_gaps!)
- Double rotation (rotate!)
- MRD decomposition (decompose!)
- Output writing (write_data)
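Wired together, a pipeline might look like the sketch below. The component choices and keyword values are illustrative (each component is documented later in this reference), and any step can be replaced with nothing:

```julia
using Peddy

# Illustrative wiring of the steps listed above (component choices
# and keyword values are examples, not recommended defaults)
pipeline = EddyPipeline(
    sensor          = CSAT3(),
    quality_control = PhysicsBoundsCheck(),
    gas_analyzer    = nothing,                  # skip this step
    despiking       = SimpleSigmundDespiking(),
    make_continuous = MakeContinuous(),
    gap_filling     = GeneralInterpolation(max_gap_size=10,
                                           variables=[:Ux, :Uy, :Uz, :Ts],
                                           method=Linear()),
    double_rotation = WindDoubleRotation(),
    mrd             = nothing,
    output          = MemoryOutput(),
)

process!(pipeline, hf, lf)   # hf/lf as returned by read_data
```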
Fields
- sensor::AbstractSensor: Sensor providing metadata (and coefficients)
- quality_control::OptionalPipelineStep
- gas_analyzer::OptionalPipelineStep
- despiking::OptionalPipelineStep
- make_continuous::OptionalPipelineStep
- gap_filling::OptionalPipelineStep
- double_rotation::OptionalPipelineStep
- mrd::OptionalPipelineStep
- output::AbstractOutput: Writer implementation
- logger::AbstractProcessingLogger: Logger for events and timing (default: NoOpLogger())
Peddy.process! — Function
process!(pipeline::EddyPipeline, high_frequency_data::DimArray,
         low_frequency_data::Union{Nothing,DimArray}; kwargs...) -> Nothing

Run the configured pipeline over the provided data. Steps set to nothing are skipped. Progress is shown via a spinner with status messages.
Arguments
- pipeline: An EddyPipeline instance
- high_frequency_data: DimArray with fast measurements (must have Var and Ti dims)
- low_frequency_data: DimArray with slow data, or nothing
- kwargs...: Forwarded to step implementations
Peddy.check_data — Function
check_data(high_frequency_data::DimArray, low_frequency_data::Union{Nothing,DimArray},
           sensor::AbstractSensor) -> Nothing

Validate that the high-frequency data has the required dimensions and variables for the specified sensor. Throws an ArgumentError if a requirement is not met. Currently only the high-frequency data is validated.
Quality Control (QC)
Abstract Types
Peddy.AbstractQC — Type
AbstractQC

Abstract supertype for quality control (QC) pipeline steps.
Peddy.quality_control! — Function
quality_control!(qc::AbstractQC, high_frequency_data, low_frequency_data, sensor; kwargs...)

Apply quality control to the data in-place.
Implementations
PhysicsBoundsCheck
Validates that measurements fall within physically plausible ranges. This is a critical first step to remove obviously erroneous data.
qc = PhysicsBoundsCheck()
# Or with custom limits:
qc = PhysicsBoundsCheck(
Ux = Limit(-50, 50),
Uy = Limit(-50, 50),
Uz = Limit(-30, 30),
Ts = Limit(-40, 50)
)

Default Physical Limits:
- Ux, Uy: [-100, 100] m/s
- Uz: [-50, 50] m/s
- Ts: [-50, 50] °C
- CO2: [0, ∞] ppm
- H2O: [0, ∞] mmol/mol
- T: [-50, 50] °C
- P: [0, ∞] Pa
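As a plain-Julia illustration of what a bounds check does (this helper is not part of the Peddy API; PhysicsBoundsCheck applies the same idea to DimArrays internally), out-of-range samples are simply replaced with NaN:

```julia
# Sketch: mark samples outside a physical range as NaN
# (illustrative helper, not part of the Peddy API)
apply_bounds!(x, lo, hi) = (x[(x .< lo) .| (x .> hi)] .= NaN; x)

uz = [1.2, -60.0, 3.4, 55.0]
apply_bounds!(uz, -50.0, 50.0)   # -60.0 and 55.0 become NaN
```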
OnlyDiagnostics
Checks only sensor diagnostic flags without applying physical bounds. Useful when you trust your data range but want to verify sensor health.
qc = OnlyDiagnostics()

Gas Analyzer Correction
H2O Calibration
Peddy.H2OCalibration — Type
H2OCalibration

Gas analyzer correction for H2O measurements using calibration coefficients. Implements bias correction based on relative humidity reference measurements. Calibration coefficients are extracted from the sensor during correction.
Fields:
- h2o_variable: H2O variable name; must be in high-frequency data (default: :H2O)
- pressure_var: Pressure variable name; must be in high-frequency data (default: :P)
- temp_var: Temperature variable name in slow data (default: :TA)
- rh_var: Relative humidity variable name in slow data (default: :RH)
Peddy.correct_gas_analyzer! — Function
correct_gas_analyzer!(step::AbstractGasAnalyzer, high_frequency_data, low_frequency_data, sensor; kwargs...)

Apply gas analyzer corrections (e.g. H2O correction).
Peddy.get_calibration_coefficients — Function
get_calibration_coefficients(sensor)

Extract H2O calibration coefficients from sensor if available. Returns nothing if the sensor doesn't have calibration coefficients.
How it works:
- Extracts calibration coefficients from the sensor
- Resamples high-frequency H2O and pressure to low-frequency grid
- Computes reference H2O concentration from temperature and relative humidity
- Solves cubic polynomial to find absorptance
- Applies correction back to high-frequency data
Required variables:
- High-frequency: :H2O, :P (pressure)
- Low-frequency: :TA (temperature), :RH (relative humidity)
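The reference concentration in step 3 can be sketched as follows. This uses the Magnus formula for saturation vapor pressure; Peddy's exact calibration formulation (and its cubic absorptance solve) may differ, so treat this purely as an illustration:

```julia
# Saturation vapor pressure in kPa (Magnus formula), T in °C
esat(T) = 0.61094 * exp(17.625 * T / (T + 243.04))

# Reference H2O mole fraction in mmol/mol from air temperature (°C),
# relative humidity (%), and pressure (kPa)
h2o_ref(T, RH, P) = (RH / 100) * esat(T) / P * 1000

h2o_ref(20.0, 50.0, 101.325)   # ≈ 11.5 mmol/mol
```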
Despiking
SimpleSigmundDespiking
Peddy.SimpleSigmundDespiking — Type
SimpleSigmundDespiking(; window_minutes=5.0, variable_groups=VariableGroup[])

Implements the modified MAD (Median Absolute Deviation) filter for spike detection based on Sigmund et al. (2022). This despiking method:
- Calculates rolling median and MAD over a specified window
- Computes deviation patterns using neighboring points
- Identifies spikes based on variable group thresholds (each group can have its own threshold)
- Sets detected spikes to NaN
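The core thresholding idea can be sketched for a single window in plain Julia (illustrative only; Peddy computes these statistics on a rolling window, per variable group):

```julia
using Statistics

# Minimal sketch of MAD-based spike flagging on one window
# (mirrors the thresholding idea, not Peddy's exact implementation)
function mad_spikes(x::AbstractVector; threshold=6.0)
    med = median(x)
    mad = median(abs.(x .- med))           # median absolute deviation
    # Flag points whose deviation exceeds threshold * MAD / 0.6745
    abs.(x .- med) .> threshold * mad / 0.6745
end

x = [1.0, 1.1, 0.9, 1.0, 25.0, 1.05, 0.95]   # one obvious spike
mad_spikes(x)   # flags only the 5th sample
```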
Parameters
- window_minutes: Window size in minutes for rolling statistics (default: 5.0)
- variable_groups: Vector of VariableGroup structs, each with its own threshold
- use_mad_floor: When true, clamp MAD values to the per-variable floors in mad_floor
- mad_floor: Dict of variable => minimum MAD (default 0.02 for Ux, Uy, Uz, Ts)
Examples
# Default: wind components and temperature combined
SimpleSigmundDespiking()
# Custom groups with different thresholds
SimpleSigmundDespiking(
variable_groups=[
VariableGroup("Wind Components", [:Ux, :Uy, :Uz], spike_threshold=6.0),
VariableGroup("Sonic Temperature", [:Ts], spike_threshold=6.0),
VariableGroup("Gas Analyzer", [:CO2, :H2O], spike_threshold=6.0)
]
)
# Single variables with individual thresholds
SimpleSigmundDespiking(
variable_groups=[
VariableGroup("Wind U", [:Ux], spike_threshold=6.0),
VariableGroup("Wind V", [:Uy], spike_threshold=6.0),
VariableGroup("Wind W", [:Uz], spike_threshold=7.0)
]
)

Peddy.VariableGroup — Type
VariableGroup(name::String, variables::Vector{Symbol}; spike_threshold::Real=6.0)

Defines a group of variables that are combined for spike detection.
Parameters
- name: Descriptive name for the variable group
- variables: Vector of variable symbols to include in this group
- spike_threshold: Spike detection threshold for this group (default: 6.0, normalized by 0.6745)
Examples
# Wind components group
wind_group = VariableGroup("Wind Components", [:Ux, :Uy, :Uz], spike_threshold=6.0)
# Gas analyzer group with different threshold
gas_group = VariableGroup("Gas Analyzer", [:CO2, :H2O], spike_threshold=6.0)

Peddy.despike! — Function
despike!(desp::AbstractDespiking, high_frequency_data, low_frequency_data; kwargs...)

Apply despiking to the data in-place.
Algorithm: Modified Median Absolute Deviation (MAD) based on Sigmund et al. (2022)
The method:
- Computes rolling median and MAD over a specified window
- Identifies spikes as deviations exceeding threshold × MAD / 0.6745
- Sets detected spikes to NaN
- Supports per-group thresholds for different variable types
Example with multiple groups:
wind = VariableGroup("Wind", [:Ux, :Uy, :Uz], spike_threshold=6.0)
temp = VariableGroup("Sonic T", [:Ts], spike_threshold=6.0)
gas = VariableGroup("Gas", [:H2O], spike_threshold=5.0)
desp = SimpleSigmundDespiking(
window_minutes=5.0,
variable_groups=[wind, temp, gas]
)

Make Continuous
MakeContinuous
Peddy.MakeContinuous — Type
MakeContinuous(; step_size_ms=50, max_gap_minutes=5.0)

Pipeline step that ensures a continuous high-frequency time axis by inserting missing timestamps (up to max_gap_minutes) at a fixed resolution given by step_size_ms. All non-time variables for inserted rows are filled with NaN.
Gaps larger than max_gap_minutes are left untouched (a warning is emitted).
Fields
- step_size_ms::Int: Expected sampling interval in milliseconds (default 50 ms ⇒ 20 Hz).
- max_gap_minutes::Real: Maximum gap length to fill (default 5 minutes).
Peddy.make_continuous! — Function
make_continuous!(step::AbstractMakeContinuous, high_frequency_data, low_frequency_data; kwargs...)

Insert missing timestamps and fill inserted rows with NaN.
Purpose: Ensures a continuous time axis by inserting missing timestamps.
Behavior:
- Inserts timestamps for gaps up to max_gap_minutes
- Fills inserted rows with NaN for all variables
- Warns about gaps larger than max_gap_minutes
- Returns a new DimArray with expanded time dimension
Example:
mc = MakeContinuous(step_size_ms=50, max_gap_minutes=5.0)
hf_continuous = make_continuous!(mc, hf, lf)

Gap Filling / Interpolation
InterpolationMethod
Peddy.Linear — Type
Linear <: InterpolationMethod

Linear interpolation method.
Peddy.Quadratic — Type
Quadratic <: InterpolationMethod

Quadratic interpolation method.
Peddy.Cubic — Type
Cubic <: InterpolationMethod

Cubic spline interpolation method.
GeneralInterpolation
Peddy.GeneralInterpolation — Type
GeneralInterpolation{T, M} <: AbstractGapFilling

General interpolation gap-filling method that interpolates small gaps (≤ max_gap_size consecutive missing values) in time series data using various interpolation methods via Interpolations.jl.
Fields
- max_gap_size::Int: Maximum number of consecutive missing values to interpolate (default: 10)
- variables::Vector{Symbol}: Variables to apply gap filling to
- method::InterpolationMethod: Interpolation method to use (Linear, Quadratic, Cubic)
Peddy.fill_gaps! — Function
fill_gaps!(gap::AbstractGapFilling, high_frequency_data, low_frequency_data; kwargs...)

Fill small gaps (represented as NaN) in-place.
Features:
- Interpolates only gaps of ≤ max_gap_size consecutive missing values
- Supports multiple interpolation methods
- Applies to specified variables only
- Larger gaps are left as NaN
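The gap-length rule can be sketched in plain Julia for the linear case (illustrative only; Peddy delegates the actual interpolation to Interpolations.jl):

```julia
# Sketch: linearly fill interior NaN runs of length ≤ max_gap_size
# (illustrative; boundary gaps and long gaps are left as NaN)
function fill_small_gaps!(x::Vector{Float64}; max_gap_size::Int=10)
    i = 1
    while i <= length(x)
        if isnan(x[i])
            j = i
            while j <= length(x) && isnan(x[j]); j += 1; end
            # fill only interior gaps that are short enough
            if i > 1 && j <= length(x) && (j - i) <= max_gap_size
                lo, hi = x[i-1], x[j]
                for k in i:j-1
                    t = (k - (i - 1)) / (j - (i - 1))
                    x[k] = lo + t * (hi - lo)
                end
            end
            i = j
        else
            i += 1
        end
    end
    return x
end

fill_small_gaps!([1.0, NaN, NaN, 4.0])   # → [1.0, 2.0, 3.0, 4.0]
```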
Example:
gap = GeneralInterpolation(
max_gap_size=10,
variables=[:Ux, :Uy, :Uz, :Ts, :H2O],
method=Cubic()
)

Double Rotation
WindDoubleRotation
Peddy.WindDoubleRotation — Type
WindDoubleRotation(; block_duration_minutes=30.0, variables=[:Ux, :Uy, :Uz])

Implements double rotation coordinate transformation to align wind measurements with the mean streamline coordinate system.
The double rotation method:
- Divides data into blocks of specified duration
- First rotation: sets mean(v) = 0 by rotating around z-axis
- Second rotation: sets mean(w) = 0 by rotating around y-axis
- Applies rotations to transform wind components in-place
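The two rotations can be written out directly. This is a self-contained sketch of the standard angles for one block (Peddy applies the same transformation block-by-block and in-place on the DimArray):

```julia
using Statistics

# Standard double rotation for one block (illustrative sketch)
function double_rotate(u, v, w)
    θ = atan(mean(v), mean(u))             # first rotation: mean(v) → 0
    u1 =  u .* cos(θ) .+ v .* sin(θ)
    v1 = -u .* sin(θ) .+ v .* cos(θ)
    ϕ = atan(mean(w), mean(u1))            # second rotation: mean(w) → 0
    u2 =  u1 .* cos(ϕ) .+ w .* sin(ϕ)
    w2 = -u1 .* sin(ϕ) .+ w .* cos(ϕ)
    return u2, v1, w2
end

u, v, w = randn(100) .+ 3, randn(100) .+ 1, randn(100) .+ 0.2
u2, v2, w2 = double_rotate(u, v, w)
# mean(v2) and mean(w2) are ≈ 0 after rotation
```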
Parameters
- block_duration_minutes: Duration of each averaging block in minutes (default: 30.0)
- variables: Wind component variables to rotate (default: [:Ux, :Uy, :Uz])
Examples
# Default 30-minute blocks
double_rotation = WindDoubleRotation()
# Custom 15-minute blocks
double_rotation = WindDoubleRotation(block_duration_minutes=15.0)
# Custom variables
double_rotation = WindDoubleRotation(variables=[:u, :v, :w])

References
Standard eddy covariance double rotation method for coordinate transformation.
Peddy.rotate! — Function
rotate!(step::AbstractDoubleRotation, high_frequency_data, low_frequency_data; kwargs...)

Apply coordinate rotation to wind components in-place.
Algorithm: Aligns wind measurements with the mean streamline coordinate system
- First rotation (θ): Rotates around z-axis to set mean(v) = 0
- Second rotation (φ): Rotates around y-axis to set mean(w) = 0
Block-based processing:
- Divides data into blocks of specified duration
- Computes rotation angles per block
- Applies rotations to all wind components
Example:
rot = WindDoubleRotation(block_duration_minutes=30.0)
rotate!(rot, hf, lf)

Multi-Resolution Decomposition (MRD)
OrthogonalMRD
Peddy.OrthogonalMRD — Type
OrthogonalMRD(; M=11, Mx=0, shift=256, a=:Uz, b=:Ts,
              gap_threshold_seconds=10.0, normalize=false, regular_grid=false)

Multi-Resolution Decomposition (MRD) step adapted from the pepy project (Vickers & Mahrt 2003; Howell & Mahrt 1997).
Computes an orthogonal multiresolution covariance between variables a and b over sliding, gap-aware blocks of length 2^M samples, stepped by shift samples.
Minimal results are stored in m.results and can be retrieved with get_mrd_results.
Peddy.decompose! — Function
decompose!(step::AbstractMRD, high_frequency_data, low_frequency_data; kwargs...)

Run a multi-resolution decomposition. Implementations store results inside step.
Peddy.MRDResults — Type
MRDResults

Strongly-typed MRD output container.
Fields:
- scales::Vector{T}: time scales (seconds) for indices 1..M
- mrd::Matrix{T}: mean of window-mean products per scale and block (M × N)
- mrd_std::Matrix{T}: sample std (ddof=1) across window-mean products (M × N)
- times::AbstractVector{<:Dates.TimeType}: midpoint times per block
Peddy.get_mrd_results — Function
get_mrd_results(m::OrthogonalMRD)

Return MRD results stored in the step, or nothing if not computed.
Algorithm: Orthogonal multiresolution covariance analysis (Vickers & Mahrt 2003; Howell & Mahrt 1997)
Key concepts:
- Decomposes covariance into multiple scales (2^1, 2^2, ..., 2^M samples)
- Computes mean of window-mean products per scale
- Handles gaps intelligently (skips blocks with large gaps)
- Optionally normalizes by centered moving average
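For intuition, here is a toy version of the orthogonal decomposition for a single gap-free block of length 2^M (illustrative sketch; Peddy's implementation adds sliding blocks, gap handling, and optional normalization). A useful property of the orthogonal decomposition is that the per-scale contributions sum to the block covariance:

```julia
using Statistics

# Toy orthogonal MRD for one block of length 2^M
# (sketch of the Howell & Mahrt decomposition; gap handling omitted)
function simple_mrd(a, b, M)
    a = copy(a); b = copy(b)
    D = zeros(M)
    for m in M:-1:0
        L  = 2^m            # window length at this scale
        nw = 2^(M - m)      # number of windows
        for k in 1:nw
            idx = (k-1)*L+1 : k*L
            ma, mb = mean(a[idx]), mean(b[idx])
            m < M && (D[m+1] += ma * mb / nw)  # cospectrum at scale 2^m
            a[idx] .-= ma                      # remove this scale's means
            b[idx] .-= mb
        end
    end
    return D   # D[m+1] holds the contribution of averaging scale 2^m
end

a0 = [1.0, 3.0, 2.0, 5.0, 4.0, 0.0, 2.0, 1.0]
b0 = [2.0, 1.0, 4.0, 3.0, 0.0, 2.0, 5.0, 1.0]
D = simple_mrd(a0, b0, 3)
# sum(D) equals the (biased) covariance of the demeaned block
```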
Parameters:
- M: Maximum scale exponent (block length = 2^M samples)
- shift: Step size between blocks (samples)
- a, b: Variables to correlate (e.g., :Uz and :Ts)
- gap_threshold_seconds: Maximum allowed gap within a block
- normalize: Apply normalization
- regular_grid: Backfill invalid blocks with NaN to maintain a regular grid
Example:
mrd = OrthogonalMRD(
M=11,
shift=256,
a=:Uz,
b=:Ts,
gap_threshold_seconds=10.0,
regular_grid=false
)
decompose!(mrd, hf, lf)
results = get_mrd_results(mrd)
if results !== nothing
@show results.scales # Time scales in seconds
@show size(results.mrd) # (M, nblocks)
@show results.times # Block midpoint times
end

Plotting MRD results:
using Plots
plot(results) # Heatmap of MRD values across scales and time

Input/Output
Input
Peddy.AbstractInput — Type
AbstractInput

Abstract supertype for data inputs. Implementations must provide a read_data(input::AbstractInput, sensor::AbstractSensor; kwargs...) method that returns a tuple (high_frequency_data, low_frequency_data), where the second element may be nothing if not available.
Peddy.read_data — Function
read_data(input::AbstractInput, sensor::AbstractSensor; kwargs...) -> (hf, lf)

Generic interface for reading data for the pipeline. Returns a high-frequency DimArray and optionally a low-frequency DimArray (or nothing). Concrete inputs such as DotDatDirectory implement this method.
Peddy.DotDatDirectory — Type
Input that reads high-frequency (required) and optional low-frequency .dat files from a directory using glob patterns.
Fields:
- directory: Root directory to search
- high_frequency_file_glob: Glob to find high-frequency files (e.g. "fast")
- high_frequency_file_options: FileOptions for the high-frequency files
- low_frequency_file_glob: Optional glob for low-frequency files
- low_frequency_file_options: Optional FileOptions for low-frequency files
Peddy.FileOptions — Type
Options describing how to read a single .dat (CSV-like) file.
Fields:
- header: Header row index (0 means no header provided in file)
- delimiter: Field delimiter
- comment: Comment marker
- timestamp_column: Name of the timestamp column in the file
- time_format: Dates.DateFormat used to parse timestamps
- nodata: Numeric sentinel in files that should be replaced with NaN
Reading from .dat files:
sensor = CSAT3()
input = DotDatDirectory(
directory="/path/to/data",
high_frequency_file_glob="*fast*",
high_frequency_file_options=FileOptions(
timestamp_column=:TIMESTAMP,
time_format=dateformat"yyyy-mm-dd HH:MM:SS.s"
),
low_frequency_file_glob="*slow*",
low_frequency_file_options=FileOptions(
timestamp_column=:TIMESTAMP,
time_format=dateformat"yyyy-mm-dd HH:MM:SS"
)
)
hf, lf = read_data(input, sensor)

Output
Peddy.AbstractOutput — Type
AbstractOutput

Abstract supertype for output writers.
Peddy.write_data — Function
write_data(output::AbstractOutput, high_frequency_data, low_frequency_data=nothing; kwargs...)

Write processed data to an output sink (files, memory, etc.).
Peddy.MemoryOutput — Type
MemoryOutput{T} <: AbstractOutput

Output struct that stores processed data in memory instead of writing to files. Useful for testing and when you want to keep results in memory for further processing.
Fields
- high_frequency_data::T: Processed high-frequency data
- low_frequency_data::T: Processed low-frequency data
Peddy.ICSVOutput — Type
ICSVOutput(; base_filename, location, fields=DEFAULT_VARIABLE_METADATA,
           field_delimiter=",", nodata=-9999.0, other_metadata=Dict())

Output backend that writes data in the .icsv format using the InteroperableCSV.jl package.
Fields:
- base_filename::String: Base path without suffix; _hf/_lf and .icsv are appended.
- location: Site location metadata (accepts LocationMetadata or InteroperableCSV.Loc).
- fields::Dict{Symbol,VariableMetadata}: Per-variable metadata written to the Fields section.
- field_delimiter::String: Delimiter for the data section.
- nodata::Float64: Fill value used for NaN/missing.
- other_metadata::Dict{Symbol,String}: Additional key-value pairs for the MetaData section.
Writes one file for high-frequency data and, if provided, one for low-frequency data.
Peddy.NetCDFOutput — Type
NetCDFOutput(; base_filename, location, fields=DEFAULT_VARIABLE_METADATA, fill_value=-9999.0)

Output backend that writes CF-compliant NetCDF files using NCDatasets.jl.
Fields:
- base_filename::String: Base path (without suffix); _hf/_lf and .nc are appended.
- location::LocationMetadata: Site coordinates/elevation are stored as scalar coordinates.
- fields::Dict{Symbol,VariableMetadata}: Per-variable metadata for CF attributes.
- fill_value::Float64: Fill value for missing/NaN entries.
Peddy.OutputSplitter — Type
Writes the same processed data to multiple output backends simultaneously; see the combined ICSVOutput/NetCDFOutput example under Output options below.
Output options:
MemoryOutput: Keep results in memory (for exploration)
out = MemoryOutput()
process!(pipeline, hf, lf)
hf_res, lf_res = Peddy.get_results(out)

ICSVOutput: Write to CSV files
out = ICSVOutput("/path/to/output")

NetCDFOutput: Write to NetCDF format
out = NetCDFOutput("/path/to/output")

OutputSplitter: Write to multiple formats simultaneously
out = OutputSplitter(
ICSVOutput("/path/csv"),
NetCDFOutput("/path/nc")
)

Metadata
Peddy.LocationMetadata — Type
LocationMetadata(; latitude, longitude, elevation=nothing, instrument_height=nothing)

Geospatial metadata for a site/instrument location.
Fields:
- latitude::Float64
- longitude::Float64
- elevation::Union{Float64,Nothing}: meters (optional)
- instrument_height::Union{Float64,Nothing}: meters above ground (optional)
Peddy.VariableMetadata — Type
VariableMetadata(; standard_name, unit="", long_name="", description="")

Lightweight container for per-variable metadata used by output backends.
Fields:
- standard_name::String: CF-style standard name or canonical name.
- unit::String: Physical unit (free text, e.g. "m s^-1").
- long_name::String: Human-readable name.
- description: Free-form description.
Peddy.get_default_metadata — Function
get_default_metadata() -> Dict{Symbol,VariableMetadata}

Return the default metadata dictionary used by output backends.
Peddy.metadata_for — Function
metadata_for(name::Union{Symbol,AbstractString}) -> VariableMetadata

Return metadata for a variable. If the variable is not present in the DEFAULT_VARIABLE_METADATA registry, a generic VariableMetadata is returned using the variable name for both standard_name and long_name, with empty unit/description.
Sensors
Supported Sensors
Peddy.CSAT3 — Type
CSAT3(; diag_sonic=63)

Campbell Scientific CSAT3 sonic anemometer.
The diag_sonic threshold controls which diagnostic values are considered invalid. During check_diagnostics!, records exceeding this threshold are set to NaN for wind components and sonic temperature.
Peddy.CSAT3B — Type
CSAT3B(; diag_sonic=0)

Campbell Scientific CSAT3B sonic anemometer.
The diag_sonic threshold controls which diagnostic values are considered invalid. During check_diagnostics!, records exceeding this threshold are set to NaN for wind components and sonic temperature.
Peddy.IRGASON — Type
IRGASON(; diag_sonic=0, diag_gas=0)

Campbell Scientific IRGASON integrated sonic anemometer + gas analyzer.
During check_diagnostics!, records with diagnostics exceeding the configured thresholds are set to NaN for the affected variables.
Peddy.LICOR — Type
LICOR(; number_type=Float64, diag_sonic=0, diag_gas=240, calibration_coefficients=nothing)

LI-COR gas analyzer / sonic configuration with optional H2O calibration coefficients.
If calibration_coefficients is provided, it can be used by gas analyzer correction steps (e.g. H2OCalibration).
Sensor selection:
# Campbell CSAT3 sonic anemometer
sensor = CSAT3()
# Campbell CSAT3B (updated version)
sensor = CSAT3B()
# Campbell Scientific IRGASON (sonic + CO2/H2O)
sensor = IRGASON()
# LI-COR with H2O calibration coefficients
sensor = LICOR(
calibration_coefficients=H2OCalibrationCoefficients(
A=4.82004e3,
B=3.79290e6,
C=-1.15477e8,
H2O_Zero=0.7087,
        H2O_Span=0.9885
)
)

Logging
AbstractProcessingLogger
Peddy.AbstractProcessingLogger — Type
AbstractProcessingLogger

Abstract base type for processing loggers. Enables type-stable dispatch and zero-cost abstraction when logging is disabled.
Subtypes:
- ProcessingLogger: Active logger that records events
- NoOpLogger: Singleton that compiles away to no-ops
Peddy.ProcessingLogger — Type
ProcessingLogger()

Mutable logger that accumulates processing events and stage durations. Events are stored in memory and can be written to CSV via write_processing_log.
Example
logger = ProcessingLogger()
log_event!(logger, :qc, :bounds; variable=:Ux, start_time=now())
record_stage_time!(logger, :qc, 1.5)
write_processing_log(logger, "processing.csv")

Peddy.NoOpLogger — Type
NoOpLogger()

Singleton logger that performs no operations. All logging calls compile to no-ops, providing zero runtime overhead when logging is disabled.
Example
logger = NoOpLogger()
log_event!(logger, :qc, :bounds) # Compiles to nothing

Peddy.log_event! — Function
log_event!(logger, stage, category; variable=nothing, start_time=nothing, end_time=nothing, kwargs...)

Record a processing event. Duration is automatically computed if both timestamps are provided.
Peddy.record_stage_time! — Function
record_stage_time!(logger, stage, seconds)

Accumulate runtime for a pipeline stage. Multiple calls for the same stage are summed.
Peddy.write_processing_log — Function
write_processing_log(logger, filepath)

Write all logged events and stage durations to a CSV file.
Peddy.log_index_runs! — Function
log_index_runs!(logger, stage, category, variable, timestamps, indices; include_run_length=false, kwargs...)

Log contiguous runs of indices as separate events. Useful for logging flagged data points.
Peddy.log_mask_runs! — Function
log_mask_runs!(logger, stage, category, variable, timestamps, mask; kwargs...)

Log contiguous runs of true values in a boolean mask as separate events.
Peddy.is_logging_enabled — Function
is_logging_enabled(logger::AbstractProcessingLogger) -> Bool

Return true if logger is an active logger that records events, and false for NoOpLogger.
Usage:
# Active logging
logger = ProcessingLogger()
pipeline = EddyPipeline(
sensor=sensor,
output=output,
logger=logger
)
process!(pipeline, hf, lf)
# Write log to file
write_processing_log(logger, "/path/to/log.csv")
# Or use no-op logger (zero overhead)
logger = NoOpLogger()

Utility Functions
mean_skipnan
Compute mean while ignoring NaN values. Returns NaN if all values are NaN.
result = Peddy.mean_skipnan(arr)

Data Format
All data is represented using DimArray from DimensionalData.jl:
using DimensionalData
# High-frequency data with Var and Ti dimensions
hf = DimArray(
data_matrix,
(Var([:Ux, :Uy, :Uz, :Ts, :H2O]), Ti(times))
)
# Access variables
ux = hf[Var=At(:Ux)]
ts = hf[Var=At(:Ts)]
# Access time slice
slice = hf[Ti=At(DateTime(2024, 1, 1, 12, 0, 0))]

Abstract Types for Extension
All pipeline steps inherit from PipelineStep and implement specific abstract interfaces:
abstract type PipelineStep end
abstract type AbstractQC <: PipelineStep end
abstract type AbstractDespiking <: PipelineStep end
abstract type AbstractGapFilling <: PipelineStep end
abstract type AbstractMakeContinuous <: PipelineStep end
abstract type AbstractGasAnalyzer <: PipelineStep end
abstract type AbstractDoubleRotation <: PipelineStep end
abstract type AbstractMRD <: PipelineStep end
abstract type AbstractOutput <: PipelineStep end

See the Extension Guide for implementing custom steps.