Timeseries Database
The Influx class provides a client for interacting with an InfluxDB database. This database is specifically designed for storing and querying time-series data, such as sensor readings (IoT), application metrics, or financial data.
A key feature of this class is its support for downsampling. It can automatically manage data resolution, keeping high-frequency data for recent timeframes while aggregating older data into lower-resolution buckets (e.g., keeping raw data for 24 hours, but only weekly averages for data older than a year).
Constructor and Member Functions
create
Creates an instance of the InfluxDB client, configuring it to connect to a specific database URL with the necessary credentials.
Parameters
* `url`: The URL of your InfluxDB instance (e.g., `http://localhost:8086`).
* `token`: The authentication token with the required permissions for your organization and buckets.
* `org`: The name of the organization in InfluxDB.
Example
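A minimal sketch of creating a client. The `Influx` object below is a stand-in so the snippet runs on its own, and passing the parameters as a single options object is an assumption; the real class opens an actual connection to your instance.

```javascript
// Stand-in for the real Influx class so this sketch is self-contained.
const Influx = {
  create: ({ url, token, org }) => ({ url, token, org }),
};

// Connection details are placeholders; substitute your own instance values.
const db = Influx.create({
  url: 'http://localhost:8086',
  token: 'my-secret-token',
  org: 'my-org',
});
```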
writePoint
Writes a single data point to a specific bucket and measurement. This is the standard way to record a piece of data.
Parameters
* `bucket`: The name of the bucket to write to.
* `measurement`: The name of the measurement (e.g., `"temperature"`, `"cpu_load"`).
* `data`: The value to record. Can be a number, string, boolean, or a JSON object.
* `tags`: An optional object of key-value pairs to tag the data (e.g., `{ sensorId: "s1" }`).
Example
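A sketch of a single write, following the documented parameter order. The `db` object is a stand-in that only records what it was asked to write; the real client buffers the point and resolves to `true` on success.

```javascript
// Stand-in client that records each write instead of sending it.
const written = [];
const db = {
  async writePoint(bucket, measurement, data, tags) {
    written.push({ bucket, measurement, data, tags });
    return true;
  },
};

// Record one temperature reading, tagged with the sensor that produced it.
db.writePoint('H+', 'temperature', 21.5, { sensorId: 's1' });
```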
Output
Returns true on success.
writePoints
Writes multiple data points at once to a specific bucket and measurement. This is more efficient than calling writePoint multiple times in a loop.
Parameters
* `bucket`: The name of the bucket.
* `measurement`: The name of the measurement.
* `data`: An array of values to record.
* `tags`: Optional tags. If provided as an array, it must match the length of the `data` array (one tag object per data point). If provided as a single object, it applies to all points.
Example
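A sketch of a batched write. The `db` object is a stand-in that only records the call; the real client batches the array into a single request.

```javascript
// Stand-in client that records each batch instead of sending it.
const batches = [];
const db = {
  async writePoints(bucket, measurement, data, tags) {
    batches.push({ bucket, measurement, data, tags });
    return true;
  },
};

// One batched call instead of three writePoint calls. A single tags object
// applies to every point; an array of tag objects would be matched per point.
db.writePoints('H+', 'temperature', [21.5, 21.7, 21.6], { sensorId: 's1' });
```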
Output
Returns true on success.
writeDownsampled
Writes numeric data to the high-frequency bucket (H+) specifically for the purpose of automatic downsampling. The system will automatically aggregate this data into lower-resolution buckets (daily, weekly, etc.) over time based on the internal pipeline.
Parameters
* `measurement`: The name of the measurement.
* `data`: The numeric value or array of numbers to store.
* `tags`: Optional tags to associate with the data.
Example
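A sketch of writing into the downsampling pipeline. The `db` object is a stand-in; the real call lands the values in the raw H+ bucket, from where the background pipeline aggregates them into the lower-resolution buckets.

```javascript
// Stand-in client that records each write instead of sending it.
const written = [];
const db = {
  async writeDownsampled(measurement, data, tags) {
    written.push({ measurement, data, tags });
    return true;
  },
};

// Numeric readings only -- downsampling computes mean/min/max/count.
db.writeDownsampled('temperature', [21.5, 21.7], { sensorId: 's1' });
```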
Output
Returns true on success.
read
Reads time-series data from a specific bucket and measurement. It offers powerful options for filtering by time, limiting results, and aggregating data (e.g., calculating averages) into time windows.
{% hint style="info" %}
**Internal Bucket Names:** When using the internal database, bucket names indicate retention: F (Forever), Y (Year), M (Month), W (Week), D (Day), H (Hour).
{% endhint %}
Parameters
* `bucket`: The name of the bucket to query.
* `measurement`: The name of the measurement.
* `options`: An optional object to refine the query.
  * `start`: The earliest time to include (e.g., `'-12h'`, `'-7d'`, `'2025-01-01T00:00:00Z'`). Defaults to `'-1y'`.
  * `stop`: The latest time to include. Defaults to `'now()'`.
  * `limit`: Limits the result to the first `n` data points.
  * `tail`: Limits the result to the last `n` data points.
  * `every`: Duration of time windows for aggregation (e.g., `'15m'`). Requires `func`.
  * `func`: The aggregation function (e.g., `'mean'`, `'sum'`, `'count'`, `'last'`).
  * `tags`: An object of tags to filter by.
Example 1: Get raw data from the last hour
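A sketch of a raw-data query. The `db` object is a stand-in that only records the query; the real method resolves to an array of `{ date, value }` objects. The `H+` bucket name here is illustrative.

```javascript
// Stand-in client that records the query instead of running it.
const queries = [];
const db = {
  async read(bucket, measurement, options) {
    queries.push({ bucket, measurement, options });
    return [];
  },
};

// Raw points from the last hour, filtered to a single sensor.
db.read('H+', 'temperature', {
  start: '-1h',
  tags: { sensorId: 's1' },
});
```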
Example 2: Get average temperature every 15 minutes for the last day
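A sketch of an aggregated query using `every` and `func` together (the bucket name is illustrative, and the `db` stand-in only records the call):

```javascript
// Stand-in client that records the query instead of running it.
const queries = [];
const db = {
  async read(bucket, measurement, options) {
    queries.push({ bucket, measurement, options });
    return [];
  },
};

// 15-minute windows over the last day, each reduced to its mean.
// Note that `every` requires `func`.
db.read('H+', 'temperature', {
  start: '-1d',
  every: '15m',
  func: 'mean',
});
```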
Output
An array of objects, where each object has a date (ISO timestamp) and a value.
readDownsampled
A "smart" query function that automatically stitches together data from different aggregated buckets. It retrieves high-resolution data for recent timeframes and lower-resolution (aggregated) data for older timeframes, providing an optimized view of long-term history without processing millions of raw data points.
Parameters
* `measurement`: The name of the measurement.
* `options`: Query options.
  * `start`: Earliest time (default `'-1y'`).
  * `stop`: Latest time (default `'now()'`).
  * `limit`: Limit to first `n` results.
  * `tail`: Limit to last `n` results.
  * `tags`: Filter by tags.
Example
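A sketch of a stitched long-range query. The `db` object is a stand-in; note that no bucket is passed, because the broker picks the right bucket(s) per time slice and merges the results.

```javascript
// Stand-in client that records the query instead of running it.
const queries = [];
const db = {
  async readDownsampled(measurement, options) {
    queries.push({ measurement, options });
    return [];
  },
};

// One call covers the whole year; recent slices come back at high
// resolution, older slices as pre-aggregated summaries.
db.readDownsampled('temperature', { start: '-1y', tags: { sensorId: 's1' } });
```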
Output
An array of objects containing statistical data (mean, min, max, count) for each time point.
query
Allows you to execute a raw Flux query string. This provides maximum flexibility for complex database operations not covered by the helper functions.
Parameters
* `flux`: The raw Flux query string.
Example
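A sketch of a raw Flux query. The `db` object is a stand-in that records the string; the real method sends it to InfluxDB and resolves to the raw result rows. The bucket and measurement names are examples.

```javascript
// Stand-in client that records the Flux string instead of sending it.
const sent = [];
const db = {
  async query(flux) {
    sent.push(flux);
    return [];
  },
};

// Anything the helper functions don't cover can be expressed directly.
const flux = `
from(bucket: "H+")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> mean()
`;
db.query(flux);
```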
Output
The raw result rows from InfluxDB.
delete
Deletes data from a measurement. You can delete the entire measurement or specify a time range to delete only a portion of the data.
Parameters
* `bucket`: The name of the bucket.
* `measurement`: The name of the measurement to delete.
* `options`: Time range options.
  * `start`: Start time (defaults to `1970-01-01...`).
  * `stop`: End time (defaults to now).
Example
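A sketch of a partial delete. The `db` object is a stand-in that records the call; the real call removes the matching rows from InfluxDB.

```javascript
// Stand-in client that records the deletion instead of performing it.
const deletions = [];
const db = {
  async delete(bucket, measurement, options) {
    deletions.push({ bucket, measurement, options });
    return true;
  },
};

// Delete only the last hour of data; omitting options would clear the
// whole measurement (start then defaults to 1970, stop to now).
db.delete('H+', 'temperature', { start: '-1h' });
```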
Output
Returns true on success.
flush
Manually forces any buffered data (pending writes) to be sent to the database immediately. This is useful during testing or before shutting down a process to ensure no data is lost.
Output
Resolves to `true` once all pending writes have been flushed.
reset
Danger Zone. This function deletes all data from all measurements in all buckets connected to this instance. The buckets themselves are preserved, but they will be empty.
Output
Returns true when the reset is complete.
listBuckets
Retrieves a list of all available buckets in the connected organization.
Parameters
None.
Output
An array of bucket objects.
listMeasurements
Lists detailed information about all measurements across all buckets.
Parameters
* `options`:
  * `includeStats`: If `true`, calculates row count and cardinality (Warning: this can be slow).
  * `statsRangeStart`: Time range for stats (default `'-1y'`).
Example
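A sketch of listing measurements with statistics enabled. The `db` object is a stand-in that records the options; the real call resolves to an array of measurement details.

```javascript
// Stand-in client that records the options instead of querying.
const calls = [];
const db = {
  async listMeasurements(options) {
    calls.push(options);
    return [];
  },
};

// Include row counts and cardinality, computed over the last 30 days
// instead of the 1-year default. includeStats can be slow on large data.
db.listMeasurements({ includeStats: true, statsRangeStart: '-30d' });
```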
Output
An array of measurement details.
getMeasurementDetails
Retrieves the schema (fields and tags) for a specific measurement in a specific bucket.
Parameters
* `bucket`: The bucket name.
* `measurement`: The measurement name.
Example
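A sketch of a schema lookup. The `db` object is a stand-in, and the shape of its return value is an assumption based on the description above (fields and tags).

```javascript
// Stand-in client that records the call; the result shape is assumed.
const calls = [];
const db = {
  async getMeasurementDetails(bucket, measurement) {
    calls.push({ bucket, measurement });
    return { fields: [], tags: [] };
  },
};

// Inspect which fields and tag keys 'temperature' uses in the H+ bucket.
db.getMeasurementDetails('H+', 'temperature');
```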
Output
getMeasurementStats
Calculates statistics (row count and cardinality) for a specific measurement over a given time range.
Parameters
* `bucket`: The bucket name.
* `measurement`: The measurement name.
* `options`:
  * `start`: Start time (default `'-30d'`).
  * `stop`: End time (default `'now()'`).
Example
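A sketch of a stats lookup over a narrower range than the 30-day default. The `db` object is a stand-in, and the shape of its return value is an assumption.

```javascript
// Stand-in client that records the call; the result shape is assumed.
const calls = [];
const db = {
  async getMeasurementStats(bucket, measurement, options) {
    calls.push({ bucket, measurement, options });
    return { rows: 0, cardinality: 0 };
  },
};

// Row count and cardinality for just the last 7 days.
db.getMeasurementStats('H+', 'temperature', { start: '-7d' });
```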
Output
Understanding Downsampling & Data Retention
This class uses an advanced Downsampling Pipeline to store your data efficiently. This system allows you to write high-frequency data (like sensor readings every second) without running out of storage or slowing down your dashboards when querying long time ranges (like "last year").
The Concept: "Hot" vs. "Cold" Data
Think of your data like news.
"Hot" Data (Recent): You care about every detail. Example: "What is the temperature right now? Did it spike 5 seconds ago?"
"Cold" Data (Historical): You care about trends, not microseconds. Example: "What was the average temperature last month?"
Our system automatically moves data through "buckets" as it ages, reducing its resolution (granularity) to save space while keeping the statistical accuracy you need.
1. The Pipeline Structure
Data flows automatically through a series of stages. You only write to the start of the pipeline; the system handles the rest.
| Bucket Name | Resolution (Granularity) | Retention (How long it stays) | Used For |
| --- | --- | --- | --- |
| `H+` | Raw (every point) | 1 Day | Real-time monitoring, debugging recent events. |
| `D+` | 5 Minutes | 1 Week | Zooming into last week's performance. |
| `W+` | 1 Hour | 1 Month | Weekly trends and patterns. |
| `M+` | 1 Day | 1 Year | Monthly analysis and seasonal trends. |
| `Y+` | 1 Week | Forever | Long-term historical archiving. |
2. How Writing Works
As a developer, you don't need to choose which bucket to write to. You simply send data to the system, and it lands in the Raw (H+) bucket automatically.
Background Tasks:
Behind the scenes, scheduled tasks wake up periodically to process this data. For example, every 5 minutes, a task takes all the raw data from H+, calculates the Mean, Min, Max, and Count, and saves a single summary point into D+.
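The aggregation step can be sketched as a pure function: for each window of raw values, one summary point is produced. (The actual scheduled task lives inside the platform; this is only an illustration of the statistics it computes.)

```javascript
// What one downsampling pass computes for a window of raw points.
function summarize(values) {
  const count = values.length;
  const min = Math.min(...values);
  const max = Math.max(...values);
  const mean = values.reduce((a, b) => a + b, 0) / count;
  return { mean, min, max, count };
}

// Five minutes of raw H+ readings collapse into one D+ summary point:
const summary = summarize([21.0, 21.5, 22.0, 21.5]);
// → { mean: 21.5, min: 21, max: 22, count: 4 }
```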
3. How Reading Works (The "Smart Stitching")
This is the most powerful feature. When you ask for data, you don't need to know which bucket it is in. You just provide a time range, and the readDownsampled function acts as a smart broker:
1. It looks at your requested `start` and `stop` times.
2. It automatically selects the highest-resolution bucket available for that period.
3. If your request spans across boundaries (e.g., "last 2 hours" to "last 2 weeks"), it stitches data together seamlessly.
Example Scenario:
If you ask for "The last 2 days", the system might return:
* The last 24 hours from the Raw (`H+`) bucket (high detail).
* The 24 hours before that from the 5-minute (`D+`) bucket (medium detail).
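The broker's choice can be sketched as a simple age-based rule derived from the retention table above. This is a simplification: the real `readDownsampled` also merges the results from each bucket into one series.

```javascript
// Which bucket still holds data of a given age, per the retention table.
function bucketFor(ageInDays) {
  if (ageInDays <= 1) return 'H+';   // raw, kept 1 day
  if (ageInDays <= 7) return 'D+';   // 5-minute, kept 1 week
  if (ageInDays <= 30) return 'W+';  // hourly, kept 1 month
  if (ageInDays <= 365) return 'M+'; // daily, kept 1 year
  return 'Y+';                       // weekly, kept forever
}

// A "last 2 days" request crosses the H+/D+ boundary:
const sources = [bucketFor(0.5), bucketFor(1.5)];
// → ['H+', 'D+']
```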
4. Configuration Examples
Here is how you use this in your low-code functions.
Scenario A: Real-Time Debugging
You want to see exactly what happened in the last 15 minutes. The system will pull from the Raw (H+) bucket.
Scenario B: Monthly Reporting
You want to visualize the trend over the last 30 days. Loading raw data for 30 days would be millions of points (slow!). The system automatically pulls from the Hourly (W+) or Daily (M+) buckets, making the query instant.
Scenario C: The "Stitched" View
You want the last 50 data points, regardless of how old they are. The system will look at the newest data first, and if it doesn't find enough, it will automatically look further back in time into older buckets.
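The three scenarios above might look like this in code. This is a sketch: the `db` object is a stand-in that only records the queries, and the option values mirror the scenarios.

```javascript
// Stand-in client that records each query instead of running it.
const queries = [];
const db = {
  async readDownsampled(measurement, options) {
    queries.push(options);
    return [];
  },
};

// Scenario A: real-time debugging -- recent enough to come from raw H+.
db.readDownsampled('temperature', { start: '-15m' });

// Scenario B: monthly reporting -- served from hourly/daily buckets.
db.readDownsampled('temperature', { start: '-30d' });

// Scenario C: the stitched view -- just the newest 50 points, any age.
db.readDownsampled('temperature', { tail: 50 });
```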
Summary of Aggregated Fields
When data is downsampled, we don't just keep the average. We preserve four key statistics for every window, so you never lose the context of what happened:
* `mean`: The average value (good for smooth lines).
* `max`: The highest value seen (good for detecting spikes).
* `min`: The lowest value seen.
* `count`: How many raw data points went into this period (good for understanding data density).