API Reference

Core Pipeline

EddyPipeline

Peddy.EddyPipelineType
EddyPipeline(; sensor, quality_control, gas_analyzer, despiking, make_continuous, gap_filling,
              double_rotation, mrd, output, logger)

High-level orchestrator for the Peddy.jl processing pipeline.

Each step is a pluggable component that implements its respective abstract interface. Any step can be set to nothing to skip it. The typical order is:

  1. Quality control (quality_control!)
  2. Gas analyzer correction (correct_gas_analyzer!)
  3. Despiking (despike!)
  4. Make continuous (make_continuous!) – insert missing timestamps with NaNs
  5. Gap filling / interpolation (fill_gaps!)
  6. Double rotation (rotate!)
  7. MRD decomposition (decompose!)
  8. Output writing (write_data)

Fields

  • sensor::AbstractSensor: Sensor providing metadata (and coefficients)
  • quality_control::OptionalPipelineStep
  • gas_analyzer::OptionalPipelineStep
  • despiking::OptionalPipelineStep
  • make_continuous::OptionalPipelineStep
  • gap_filling::OptionalPipelineStep
  • double_rotation::OptionalPipelineStep
  • mrd::OptionalPipelineStep
  • output::AbstractOutput: Writer implementation
  • logger::AbstractProcessingLogger: Logger for events and timing (default: NoOpLogger())
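
Example

A minimal construction sketch (the step choices below are illustrative; any step can be nothing):

pipeline = EddyPipeline(
    sensor=CSAT3(),
    quality_control=PhysicsBoundsCheck(),
    gas_analyzer=nothing,               # skip gas analyzer correction
    despiking=SimpleSigmundDespiking(),
    make_continuous=MakeContinuous(),
    gap_filling=GeneralInterpolation(max_gap_size=10, variables=[:Ux, :Uy, :Uz, :Ts], method=Linear()),
    double_rotation=WindDoubleRotation(),
    mrd=nothing,                        # skip MRD decomposition
    output=MemoryOutput(),
    logger=NoOpLogger()
)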
Peddy.process!Function
process!(pipeline::EddyPipeline, high_frequency_data::DimArray,
         low_frequency_data::Union{Nothing,DimArray}; kwargs...) -> Nothing

Run the configured pipeline over the provided data. Steps that are nothing are skipped. Progress is shown via a spinner with status messages.

Arguments

  • pipeline: An EddyPipeline instance
  • high_frequency_data: DimArray with fast measurements (must have Var and Ti dims)
  • low_frequency_data: DimArray with slow data or nothing
  • kwargs...: Forwarded to step implementations
Peddy.check_dataFunction
check_data(high_frequency_data::DimArray, low_frequency_data::Union{Nothing,DimArray},
           sensor::AbstractSensor) -> Nothing

Validate that the high-frequency data has the required dimensions and variables for the specified sensor. Throws an ArgumentError if a requirement is not met. Currently only the high-frequency data is validated.
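
Usage sketch (assumes input and sensor are configured as in the Input and Sensors sections):

hf, lf = read_data(input, sensor)
check_data(hf, lf, sensor)  # throws ArgumentError on missing dimensions or variables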

Quality Control (QC)

Abstract Types

Peddy.AbstractQCType
AbstractQC

Abstract supertype for quality control (QC) pipeline steps.

Peddy.quality_control!Function
quality_control!(qc::AbstractQC, high_frequency_data, low_frequency_data, sensor; kwargs...)

Apply quality control to the data in-place.

Implementations

PhysicsBoundsCheck

Validates that measurements fall within physically plausible ranges. This is a critical first step to remove obviously erroneous data.

qc = PhysicsBoundsCheck()
# Or with custom limits:
qc = PhysicsBoundsCheck(
    Ux = Limit(-50, 50),
    Uy = Limit(-50, 50),
    Uz = Limit(-30, 30),
    Ts = Limit(-40, 50)
)

Default Physical Limits:

  • Ux, Uy: [-100, 100] m/s
  • Uz: [-50, 50] m/s
  • Ts: [-50, 50] °C
  • CO2: [0, ∞) ppm
  • H2O: [0, ∞) mmol/mol
  • T: [-50, 50] °C
  • P: [0, ∞) Pa

OnlyDiagnostics

Checks only sensor diagnostic flags without applying physical bounds. Useful when you trust your data range but want to verify sensor health.

qc = OnlyDiagnostics()

Gas Analyzer Correction

H2O Calibration

Peddy.H2OCalibrationType
H2OCalibration

Gas analyzer correction for H2O measurements using calibration coefficients. Implements bias correction based on relative humidity reference measurements. Calibration coefficients are extracted from the sensor during correction.

Fields:

  • h2o_variable: H2O variable name; must be present in the high-frequency data (default: :H2O)
  • pressure_var: Pressure variable name; must be present in the high-frequency data (default: :P)
  • temp_var: Temperature variable name in the low-frequency data (default: :TA)
  • rh_var: Relative humidity variable name in the low-frequency data (default: :RH)
Peddy.correct_gas_analyzer!Function
correct_gas_analyzer!(step::AbstractGasAnalyzer, high_frequency_data, low_frequency_data, sensor; kwargs...)

Apply gas analyzer corrections (e.g. H2O correction).

Peddy.get_calibration_coefficientsFunction
get_calibration_coefficients(sensor)

Extract H2O calibration coefficients from sensor if available. Returns nothing if sensor doesn't have calibration coefficients.

How it works:

  1. Extracts calibration coefficients from the sensor
  2. Resamples high-frequency H2O and pressure to low-frequency grid
  3. Computes reference H2O concentration from temperature and relative humidity
  4. Solves cubic polynomial to find absorptance
  5. Applies correction back to high-frequency data

Required variables:

  • High-frequency: :H2O, :P (pressure)
  • Low-frequency: :TA (temperature), :RH (relative humidity)
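
A sketch of applying the correction outside the pipeline (assumes hf and lf carry the variables above and sensor is a LICOR constructed with calibration coefficients):

corr = H2OCalibration()                      # default variable names
correct_gas_analyzer!(corr, hf, lf, sensor)  # corrects :H2O in-place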

Despiking

SimpleSigmundDespiking

Peddy.SimpleSigmundDespikingType
SimpleSigmundDespiking(; window_minutes=5.0, variable_groups=VariableGroup[])

Implements the modified MAD (Median Absolute Deviation) filter for spike detection based on Sigmund et al. (2022). This despiking method:

  1. Calculates rolling median and MAD over a specified window
  2. Computes deviation patterns using neighboring points
  3. Identifies spikes based on variable group thresholds (each group can have its own threshold)
  4. Sets detected spikes to NaN

Parameters

  • window_minutes: Window size in minutes for rolling statistics (default: 5.0)
  • variable_groups: Vector of VariableGroup structs, each with its own threshold
  • use_mad_floor: When true, clamp MAD values to the per-variable floors in mad_floor
  • mad_floor: Dict of variable => minimum MAD (default 0.02 for Ux, Uy, Uz, Ts)

Examples

# Default: wind components and temperature combined
SimpleSigmundDespiking()

# Custom groups with different thresholds
SimpleSigmundDespiking(
    variable_groups=[
        VariableGroup("Wind Components", [:Ux, :Uy, :Uz], spike_threshold=6.0),
        VariableGroup("Sonic Temperature", [:Ts], spike_threshold=6.0),
        VariableGroup("Gas Analyzer", [:CO2, :H2O], spike_threshold=6.0)
    ]
)

# Single variables with individual thresholds
SimpleSigmundDespiking(
    variable_groups=[
        VariableGroup("Wind U", [:Ux], spike_threshold=6.0),
        VariableGroup("Wind V", [:Uy], spike_threshold=6.0),
        VariableGroup("Wind W", [:Uz], spike_threshold=7.0)
    ]
)
Peddy.VariableGroupType
VariableGroup(name::String, variables::Vector{Symbol}; spike_threshold::Real=6.0)

Defines a group of variables that are combined for spike detection.

Parameters

  • name: Descriptive name for the variable group
  • variables: Vector of variable symbols to include in this group
  • spike_threshold: Spike detection threshold for this group (default: 6.0, normalized by 0.6745)

Examples

# Wind components group
wind_group = VariableGroup("Wind Components", [:Ux, :Uy, :Uz], spike_threshold=6.0)

# Gas analyzer group with different threshold
gas_group = VariableGroup("Gas Analyzer", [:CO2, :H2O], spike_threshold=6.0)
Peddy.despike!Function
despike!(desp::AbstractDespiking, high_frequency_data, low_frequency_data; kwargs...)

Apply despiking to the data in-place.

Algorithm: Modified Median Absolute Deviation (MAD) based on Sigmund et al. (2022)

The method:

  1. Computes rolling median and MAD over a specified window
  2. Identifies spikes as deviations exceeding threshold × 0.6745 × MAD
  3. Sets detected spikes to NaN
  4. Supports per-group thresholds for different variable types

Example with multiple groups:

wind = VariableGroup("Wind", [:Ux, :Uy, :Uz], spike_threshold=6.0)
temp = VariableGroup("Sonic T", [:Ts], spike_threshold=6.0)
gas = VariableGroup("Gas", [:H2O], spike_threshold=5.0)

desp = SimpleSigmundDespiking(
    window_minutes=5.0,
    variable_groups=[wind, temp, gas]
)

Make Continuous

MakeContinuous

Peddy.MakeContinuousType
MakeContinuous(; step_size_ms=50, max_gap_minutes=5.0)

Pipeline step that ensures a continuous high-frequency time axis by inserting missing timestamps (up to max_gap_minutes) at a fixed resolution given by step_size_ms. All non-time variables for inserted rows are filled with NaN.

Gaps larger than max_gap_minutes are left untouched (a warning is emitted).

Fields

  • step_size_ms::Int: Expected sampling interval in milliseconds (default 50 ms ⇒ 20 Hz).
  • max_gap_minutes::Real: Maximum gap length to fill (default 5 minutes).
Peddy.make_continuous!Function
make_continuous!(step::AbstractMakeContinuous, high_frequency_data, low_frequency_data; kwargs...)

Insert missing timestamps and fill inserted rows with NaN.

Purpose: Ensures a continuous time axis by inserting missing timestamps.

Behavior:

  • Inserts timestamps for gaps up to max_gap_minutes
  • Fills inserted rows with NaN for all variables
  • Warns about gaps larger than max_gap_minutes
  • Returns a new DimArray with expanded time dimension

Example:

mc = MakeContinuous(step_size_ms=50, max_gap_minutes=5.0)
hf_continuous = make_continuous!(mc, hf, lf)

Gap Filling / Interpolation

InterpolationMethod

Peddy.LinearType
Linear <: InterpolationMethod

Linear interpolation method.

Peddy.QuadraticType
Quadratic <: InterpolationMethod

Quadratic interpolation method.

Peddy.CubicType
Cubic <: InterpolationMethod

Cubic spline interpolation method.

GeneralInterpolation

Peddy.GeneralInterpolationType
GeneralInterpolation{T, M} <: AbstractGapFilling

General interpolation gap-filling method that interpolates small gaps (≤ max_gap_size consecutive missing values) in time series data using various interpolation methods via Interpolations.jl.

Fields

  • max_gap_size::Int: Maximum number of consecutive missing values to interpolate (default: 10)
  • variables::Vector{Symbol}: Variables to apply gap filling to
  • method::InterpolationMethod: Interpolation method to use (Linear, Quadratic, Cubic)
Peddy.fill_gaps!Function
fill_gaps!(gap::AbstractGapFilling, high_frequency_data, low_frequency_data; kwargs...)

Fill small gaps (represented as NaN) in-place.

Features:

  • Interpolates only gaps ≤ max_gap_size consecutive missing values
  • Supports multiple interpolation methods
  • Applies to specified variables only
  • Larger gaps are left as NaN

Example:

gap = GeneralInterpolation(
    max_gap_size=10,
    variables=[:Ux, :Uy, :Uz, :Ts, :H2O],
    method=Cubic()
)
fill_gaps!(gap, hf, lf)

Double Rotation

WindDoubleRotation

Peddy.WindDoubleRotationType
WindDoubleRotation(; block_duration_minutes=30.0, variables=[:Ux, :Uy, :Uz])

Implements double rotation coordinate transformation to align wind measurements with the mean streamline coordinate system.

The double rotation method:

  1. Divides data into blocks of specified duration
  2. First rotation: sets mean(v) = 0 by rotating around z-axis
  3. Second rotation: sets mean(w) = 0 by rotating around y-axis
  4. Applies rotations to transform wind components in-place

Parameters

  • block_duration_minutes: Duration of each averaging block in minutes (default: 30.0)
  • variables: Wind component variables to rotate (default: [:Ux, :Uy, :Uz])

Examples

# Default 30-minute blocks
double_rotation = WindDoubleRotation()

# Custom 15-minute blocks
double_rotation = WindDoubleRotation(block_duration_minutes=15.0)

# Custom variable names
double_rotation = WindDoubleRotation(variables=[:u, :v, :w])

References

Standard eddy covariance double rotation method for coordinate transformation.

Peddy.rotate!Function
rotate!(step::AbstractDoubleRotation, high_frequency_data, low_frequency_data; kwargs...)

Apply coordinate rotation to wind components in-place.

Algorithm: Aligns wind measurements with the mean streamline coordinate system

  1. First rotation (θ): Rotates around z-axis to set mean(v) = 0
  2. Second rotation (φ): Rotates around y-axis to set mean(w) = 0

Block-based processing:

  • Divides data into blocks of specified duration
  • Computes rotation angles per block
  • Applies rotations to all wind components

Example:

rot = WindDoubleRotation(block_duration_minutes=30.0)
rotate!(rot, hf, lf)

Multi-Resolution Decomposition (MRD)

OrthogonalMRD

Peddy.OrthogonalMRDType
OrthogonalMRD(; M=11, Mx=0, shift=256, a=:Uz, b=:Ts,
                gap_threshold_seconds=10.0, normalize=false, regular_grid=false)

Multi-Resolution Decomposition (MRD) step adapted from the pepy project (Vickers & Mahrt 2003; Howell & Mahrt 1997).

Computes an orthogonal multiresolution covariance between variables a and b over sliding, gap-aware blocks of length 2^M samples, stepped by shift samples.

Minimal results are stored in m.results and can be retrieved with get_mrd_results.

Peddy.decompose!Function
decompose!(step::AbstractMRD, high_frequency_data, low_frequency_data; kwargs...)

Run a multi-resolution decomposition. Implementations store results inside step.

Peddy.MRDResultsType
MRDResults

Strongly-typed MRD output container.

Fields:

  • scales::Vector{T}: time scales (seconds) for indices 1..M
  • mrd::Matrix{T}: mean of window-mean products per scale and block (M × N)
  • mrd_std::Matrix{T}: sample std (ddof=1) across window-mean products (M × N)
  • times::AbstractVector{<:Dates.TimeType}: midpoint times per block
Peddy.get_mrd_resultsFunction
get_mrd_results(m::OrthogonalMRD)

Return MRD results stored in the step, or nothing if not computed.

Algorithm: Orthogonal multiresolution covariance analysis (Vickers & Mahrt 2003; Howell & Mahrt 1997)

Key concepts:

  • Decomposes covariance into multiple scales (2^1, 2^2, ..., 2^M samples)
  • Computes mean of window-mean products per scale
  • Handles gaps intelligently (skips blocks with large gaps)
  • Optionally normalizes by centered moving average

Parameters:

  • M: Maximum scale exponent (block length = 2^M samples)
  • shift: Step size between blocks (samples)
  • a, b: Variables to correlate (e.g., :Uz and :Ts)
  • gap_threshold_seconds: Maximum allowed gap within a block
  • normalize: Apply normalization
  • regular_grid: Backfill invalid blocks with NaN to maintain regular grid

Example:

mrd = OrthogonalMRD(
    M=11,
    shift=256,
    a=:Uz,
    b=:Ts,
    gap_threshold_seconds=10.0,
    regular_grid=false
)

decompose!(mrd, hf, lf)
results = get_mrd_results(mrd)

if results !== nothing
    @show results.scales       # Time scales in seconds
    @show size(results.mrd)    # (M, nblocks)
    @show results.times        # Block midpoint times
end

Plotting MRD results:

using Plots
plot(results)  # Heatmap of MRD values across scales and time

Input/Output

Input

Peddy.AbstractInputType
AbstractInput

Abstract supertype for data inputs. Implementations must provide a read_data(input::AbstractInput, sensor::AbstractSensor; kwargs...) method that returns a tuple (high_frequency_data, low_frequency_data) where the second element may be nothing if not available.

Peddy.read_dataFunction
read_data(input::AbstractInput, sensor::AbstractSensor; kwargs...) -> (hf, lf)

Generic interface for reading data for the pipeline. Returns a high-frequency DimArray and optionally a low-frequency DimArray (or nothing). Concrete inputs such as DotDatDirectory implement this method.

Peddy.DotDatDirectoryType

Input that reads high-frequency (required) and optional low-frequency .dat files from a directory using glob patterns.

Fields:

  • directory: Root directory to search
  • high_frequency_file_glob: Glob to find high-frequency files (e.g. "*fast*")
  • high_frequency_file_options: FileOptions for the high-frequency files
  • low_frequency_file_glob: Optional glob for low-frequency files
  • low_frequency_file_options: Optional FileOptions for low-frequency files
Peddy.FileOptionsType

Options describing how to read a single .dat (CSV-like) file.

Fields:

  • header: Header row index (0 means no header provided in file)
  • delimiter: Field delimiter
  • comment: Comment marker
  • timestamp_column: Name of the timestamp column in the file
  • time_format: Dates.DateFormat used to parse timestamps
  • nodata: Numeric sentinel in files that should be replaced with NaN

Reading from .dat files:

sensor = CSAT3()

input = DotDatDirectory(
    directory="/path/to/data",
    high_frequency_file_glob="*fast*",
    high_frequency_file_options=FileOptions(
        timestamp_column=:TIMESTAMP,
        time_format=dateformat"yyyy-mm-dd HH:MM:SS.s"
    ),
    low_frequency_file_glob="*slow*",
    low_frequency_file_options=FileOptions(
        timestamp_column=:TIMESTAMP,
        time_format=dateformat"yyyy-mm-dd HH:MM:SS"
    )
)

hf, lf = read_data(input, sensor)

Output

Peddy.write_dataFunction
write_data(output::AbstractOutput, high_frequency_data, low_frequency_data=nothing; kwargs...)

Write processed data to an output sink (files, memory, etc.).

Peddy.MemoryOutputType
MemoryOutput{T} <: AbstractOutput

Output struct that stores processed data in memory instead of writing to files. Useful for testing and when you want to keep results in memory for further processing.

Fields

  • high_frequency_data::T: Processed high frequency data
  • low_frequency_data::T: Processed low frequency data
Peddy.ICSVOutputType
ICSVOutput(; base_filename, location, fields=DEFAULT_VARIABLE_METADATA,
            field_delimiter=",", nodata=-9999.0, other_metadata=Dict())

Output backend that writes data in the .icsv format using the InteroperableCSV.jl package.

Fields:

  • base_filename::String: Base path without suffix; _hf/_lf and .icsv are appended.
  • location: Site location metadata (accepts LocationMetadata or InteroperableCSV.Loc).
  • fields::Dict{Symbol,VariableMetadata}: Per-variable metadata written to the Fields section.
  • field_delimiter::String: Delimiter for the data section.
  • nodata::Float64: Fill value used for NaN/missing.
  • other_metadata::Dict{Symbol,String}: Additional key-value pairs for the MetaData section.

Writes one file for high-frequency data and, if provided, one for low-frequency data.

Peddy.NetCDFOutputType
NetCDFOutput(; base_filename, location, fields=DEFAULT_VARIABLE_METADATA, fill_value=-9999.0)

Output backend that writes CF-compliant NetCDF files using NCDatasets.jl.

Fields:

  • base_filename::String: Base path (without suffix); _hf/_lf and .nc are appended.
  • location::LocationMetadata: Site coordinates/elevation are stored as scalar coordinates.
  • fields::Dict{Symbol,VariableMetadata}: Per-variable metadata for CF attributes.
  • fill_value::Float64: Fill value for missing/NaN entries.

Output options:

MemoryOutput: Keep results in memory (for exploration)

out = MemoryOutput()
pipeline = EddyPipeline(sensor=sensor, output=out)
process!(pipeline, hf, lf)
hf_res, lf_res = Peddy.get_results(out)

ICSVOutput: Write to .icsv files

loc = LocationMetadata(latitude=46.8, longitude=9.8)
out = ICSVOutput(base_filename="/path/to/output", location=loc)

NetCDFOutput: Write to NetCDF files

out = NetCDFOutput(base_filename="/path/to/output", location=loc)

OutputSplitter: Write to multiple formats simultaneously

out = OutputSplitter(
    ICSVOutput("/path/csv"),
    NetCDFOutput("/path/nc")
)

Metadata

Peddy.LocationMetadataType
LocationMetadata(; latitude, longitude, elevation=nothing, instrument_height=nothing)

Geospatial metadata for a site/instrument location.

Fields:

  • latitude::Float64
  • longitude::Float64
  • elevation::Union{Float64,Nothing}: meters above sea level (optional)
  • instrument_height::Union{Float64,Nothing}: meters above ground (optional)
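
Example (the coordinate values are placeholders):

loc = LocationMetadata(
    latitude=46.8,
    longitude=9.8,
    elevation=1600.0,
    instrument_height=2.0
)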
Peddy.VariableMetadataType
VariableMetadata(; standard_name, unit="", long_name="", description="")

Lightweight container for per-variable metadata used by output backends.

Fields:

  • standard_name::String: CF-style standard name or canonical name.
  • unit::String: Physical unit (free text, e.g. "m s^-1").
  • long_name::String: Human-readable name.
  • description: Free-form description.
Peddy.get_default_metadataFunction
get_default_metadata() -> Dict{Symbol,VariableMetadata}

Return the default metadata dictionary used by output backends.

Peddy.metadata_forFunction
metadata_for(name::Union{Symbol,AbstractString}) -> VariableMetadata

Return metadata for a variable. If the variable is not present in the DEFAULT_VARIABLE_METADATA registry, a generic VariableMetadata is returned using the variable name for both standard_name and long_name and empty unit/description.
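
For example (assuming :Ux is in the default registry and :MyVar is not):

md = metadata_for(:Ux)           # entry from DEFAULT_VARIABLE_METADATA
fallback = metadata_for(:MyVar)  # generic metadata built from the variable name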

Sensors

Supported Sensors

Peddy.CSAT3Type
CSAT3(; diag_sonic=63)

Campbell Scientific CSAT3 sonic anemometer.

The diag_sonic threshold controls which diagnostic values are considered invalid. During check_diagnostics!, records exceeding this threshold are set to NaN for wind components and sonic temperature.

Peddy.CSAT3BType
CSAT3B(; diag_sonic=0)

Campbell Scientific CSAT3B sonic anemometer.

The diag_sonic threshold controls which diagnostic values are considered invalid. During check_diagnostics!, records exceeding this threshold are set to NaN for wind components and sonic temperature.

Peddy.IRGASONType
IRGASON(; diag_sonic=0, diag_gas=0)

Campbell Scientific IRGASON integrated sonic anemometer and gas analyzer.

During check_diagnostics!, records with diagnostics exceeding the configured thresholds are set to NaN for the affected variables.

Peddy.LICORType
LICOR(; number_type=Float64, diag_sonic=0, diag_gas=240, calibration_coefficients=nothing)

LI-COR gas analyzer / sonic configuration with optional H2O calibration coefficients.

If calibration_coefficients is provided, it can be used by gas analyzer correction steps (e.g. H2OCalibration).

Sensor selection:

# Campbell CSAT3 sonic anemometer
sensor = CSAT3()

# Campbell CSAT3B (updated version)
sensor = CSAT3B()

# Campbell Scientific IRGASON (sonic + CO2/H2O)
sensor = IRGASON()

# LI-COR with H2O calibration coefficients
sensor = LICOR(
    calibration_coefficients=H2OCalibrationCoefficients(
        A=4.82004e3,
        B=3.79290e6,
        C=-1.15477e8,
        H2O_Zero=0.7087,
        H2O_Span=0.9885
    )
)

Logging

AbstractProcessingLogger

Peddy.AbstractProcessingLoggerType
AbstractProcessingLogger

Abstract base type for processing loggers. Enables type-stable dispatch and zero-cost abstraction when logging is disabled.

Subtypes:

  • ProcessingLogger: Active logger that records events
  • NoOpLogger: Singleton that compiles away to no-ops
Peddy.ProcessingLoggerType
ProcessingLogger()

Mutable logger that accumulates processing events and stage durations. Events are stored in memory and can be written to CSV via write_processing_log.

Example

logger = ProcessingLogger()
log_event!(logger, :qc, :bounds; variable=:Ux, start_time=now())
record_stage_time!(logger, :qc, 1.5)
write_processing_log(logger, "processing.csv")
Peddy.NoOpLoggerType
NoOpLogger()

Singleton logger that performs no operations. All logging calls compile to no-ops, providing zero runtime overhead when logging is disabled.

Example

logger = NoOpLogger()
log_event!(logger, :qc, :bounds)  # Compiles to nothing
Peddy.log_event!Function
log_event!(logger, stage, category; variable=nothing, start_time=nothing, end_time=nothing, kwargs...)

Record a processing event. Duration is automatically computed if both timestamps are provided.

Peddy.record_stage_time!Function
record_stage_time!(logger, stage, seconds)

Accumulate runtime for a pipeline stage. Multiple calls for the same stage are summed.

Peddy.write_processing_logFunction
write_processing_log(logger, filepath)

Write all logged events and stage durations to a CSV file.

Peddy.log_index_runs!Function
log_index_runs!(logger, stage, category, variable, timestamps, indices; include_run_length=false, kwargs...)

Log contiguous runs of indices as separate events. Useful for logging flagged data points.

Peddy.log_mask_runs!Function
log_mask_runs!(logger, stage, category, variable, timestamps, mask; kwargs...)

Log contiguous runs of true values in a boolean mask as separate events.
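
A sketch (the stage/category symbols and data are illustrative):

using Dates
timestamps = DateTime(2024, 1, 1):Second(1):DateTime(2024, 1, 1, 0, 0, 4)
mask = [false, true, true, false, true]  # two contiguous runs of true
log_mask_runs!(logger, :despiking, :spike, :Ux, timestamps, mask)
# One event is logged per contiguous run (here: indices 2:3 and 5:5)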

Peddy.is_logging_enabledFunction
is_logging_enabled(logger::AbstractProcessingLogger) -> Bool

Return true if logger is an active logger that records events, and false for NoOpLogger.

Usage:

# Active logging
logger = ProcessingLogger()

pipeline = EddyPipeline(
    sensor=sensor,
    output=output,
    logger=logger
)

process!(pipeline, hf, lf)

# Write log to file
write_processing_log(logger, "/path/to/log.csv")

# Or use no-op logger (zero overhead)
logger = NoOpLogger()

Utility Functions

mean_skipnan

Compute mean while ignoring NaN values. Returns NaN if all values are NaN.

result = Peddy.mean_skipnan(arr)

Data Format

All data is represented using DimArray from DimensionalData.jl:

using DimensionalData

# High-frequency data with Var and Ti dimensions
hf = DimArray(
    data_matrix,
    (Var([:Ux, :Uy, :Uz, :Ts, :H2O]), Ti(times))
)

# Access variables
ux = hf[Var=At(:Ux)]
ts = hf[Var=At(:Ts)]

# Access time slice
slice = hf[Ti=At(DateTime(2024, 1, 1, 12, 0, 0))]

Abstract Types for Extension

All pipeline steps inherit from PipelineStep and implement specific abstract interfaces:

abstract type PipelineStep end

abstract type AbstractQC <: PipelineStep end
abstract type AbstractDespiking <: PipelineStep end
abstract type AbstractGapFilling <: PipelineStep end
abstract type AbstractMakeContinuous <: PipelineStep end
abstract type AbstractGasAnalyzer <: PipelineStep end
abstract type AbstractDoubleRotation <: PipelineStep end
abstract type AbstractMRD <: PipelineStep end
abstract type AbstractOutput <: PipelineStep end

See the Extension Guide for implementing custom steps.
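
As an illustration of the extension pattern (the struct below is hypothetical, not part of Peddy):

struct ClampNegatives <: AbstractQC
    variables::Vector{Symbol}
end

# Implement the AbstractQC interface method documented above.
function Peddy.quality_control!(qc::ClampNegatives, hf, lf, sensor; kwargs...)
    for v in qc.variables
        col = view(hf, Var=At(v))  # mutable view of one variable's series
        col[col .< 0] .= NaN       # flag negative values as invalid
    end
    return nothing
end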