Timeseries Database

The Timeseries Database class provides a specialized client for interacting with InfluxDB. It is designed for high-frequency data where the time of the recording is just as important as the value itself—such as sensor readings (IoT), machine performance metrics, or energy consumption.


Key feature: Intelligent downsampling

Quick start: The internal instance

Heisenware provides a pre-initialized instance called internal-influx. It is globally available and ready for use. You do not need to call a "create" function; simply select internal-influx in your function's instance field to start logging time-series data immediately.


Direct data recording with the recorder

For the fastest way to log data, you can use the Recorder extension. Simply click the + icon on any function's output or modifier and select the Recorder. By default, it is configured to log data directly into the internal-influx instance on the fly, without needing extra function blocks in your flow.

Connecting an external database

To connect to an external InfluxDB instance, use the create function. As with all Heisenware integrations, consider your location:

  • Cloud connection: Connect directly if your InfluxDB server is accessible via the internet.

  • Local connection (via Agent): If your InfluxDB is hosted on a private network, deploy an Edge Agent in that network and create your instance within that agent.


One set of functions: Whether you use the managed internal-influx or a custom connection via an Agent, the functions for writing and querying data remain identical.

Constructor and Member Functions

create

Creates an instance of the InfluxDB client, configuring it to connect to a specific database URL with the necessary credentials.


Parameters

  • url: The URL of your InfluxDB instance (e.g., http://localhost:8086).

  • token: The authentication token with the required permissions for your organization and buckets.

  • org: The name of the organization in InfluxDB.

Example
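
The exact wiring happens in the low-code editor, but as an illustration the create parameters might look like this (the URL, token, and organization values are placeholders):

```typescript
// Illustrative `create` parameters (all values are placeholders).
const createParams = {
  url: "http://localhost:8086", // URL of your InfluxDB instance
  token: "my-api-token",        // token with permissions for your org and buckets
  org: "my-org",                // InfluxDB organization name
};

console.log(createParams.url); // "http://localhost:8086"
```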

writePoint

Writes a single data point to a specific bucket and measurement. This is the standard way to record a piece of data.

Parameters

  • bucket: The name of the bucket to write to.

  • measurement: The name of the measurement (e.g., "temperature", "cpu_load").

  • data: The value to record. Can be a number, string, boolean, or a JSON object.

  • tags: An optional object of key-value pairs to tag the data (e.g., { sensorId: "s1" }).

Example
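
As an illustration (the bucket, measurement, and tag names below are made up), the parameters for a single write might be:

```typescript
// Illustrative `writePoint` parameters (names are placeholders).
const writePointParams = {
  bucket: "sensors",            // bucket to write to
  measurement: "temperature",   // measurement name
  data: 21.5,                   // number, string, boolean, or JSON object
  tags: { sensorId: "s1" },     // optional key-value tags
};

console.log(writePointParams.data); // 21.5
```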

Output

Returns true on success.

writePoints

Writes multiple data points at once to a specific bucket and measurement. This is more efficient than calling writePoint multiple times in a loop.

Parameters

  • bucket: The name of the bucket.

  • measurement: The name of the measurement.

  • data: An array of values to record.

  • tags: Optional tags. If provided as an array, it must match the length of the data array (one tag object per data point). If provided as a single object, it applies to all points.

Example
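
The tags length rule can be sketched like this; `tagsAreValid` is a hypothetical helper that only illustrates the rule described above, not the platform's actual validation code:

```typescript
// Three data points with one tag object per point (names are placeholders).
const data = [20.1, 20.4, 20.9];
const tags = [{ sensorId: "s1" }, { sensorId: "s2" }, { sensorId: "s3" }];

// Sketch of the rule: an array of tags must match the data length;
// a single object (or no tags at all) is always valid.
function tagsAreValid(data: unknown[], tags?: object | object[]): boolean {
  if (tags === undefined || !Array.isArray(tags)) return true;
  return tags.length === data.length;
}

console.log(tagsAreValid(data, tags)); // true: lengths match
```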

Output

Returns true on success.

writeDownsampled

Writes numeric data to the high-frequency bucket (H+) specifically for the purpose of automatic downsampling. The system will automatically aggregate this data into lower-resolution buckets (daily, weekly, etc.) over time based on the internal pipeline.

Parameters

  • measurement: The name of the measurement.

  • data: The numeric value or array of numbers to store.

  • tags: Optional tags to associate with the data.

Example
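
Only numeric data belongs here. An illustrative parameter set (measurement and tag names are placeholders):

```typescript
// Illustrative `writeDownsampled` parameters; `data` must be numeric.
const downsampledParams = {
  measurement: "temperature",   // measurement name (placeholder)
  data: [21.5, 21.7, 21.4],     // numeric value or array of numbers
  tags: { sensorId: "s1" },     // optional tags
};

console.log(downsampledParams.data.length); // 3
```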

Output

Returns true on success.

Understanding Downsampling

This class uses an advanced Downsampling Pipeline to store your data efficiently. This system allows you to write high-frequency data (like sensor readings every second) without running out of storage or slowing down your dashboards when querying long time ranges (like "last year").

The Concept: "Hot" vs. "Cold" Data

Think of your data like news.

  • "Hot" Data (Recent): You care about every detail. Example: "What is the temperature right now? Did it spike 5 seconds ago?"

  • "Cold" Data (Historical): You care about trends, not microseconds. Example: "What was the average temperature last month?"

Our system automatically moves data through "buckets" as it ages, reducing its resolution (granularity) to save space while keeping the statistical accuracy you need.

1. The Pipeline Structure

Data flows automatically through a series of stages. You only write to the start of the pipeline; the system handles the rest.

Each stage is defined by its bucket name, resolution (granularity), retention (how long data stays), and typical use:

  • H+: Raw resolution (every point), retained for 1 day. Used for real-time monitoring and debugging recent events.

  • D+: 5-minute resolution, retained for 1 week. Used for zooming into last week's performance.

  • W+: 1-hour resolution, retained for 1 month. Used for weekly trends and patterns.

  • M+: 1-day resolution, retained for 1 year. Used for monthly analysis and seasonal trends.

  • Y+: 1-week resolution, retained forever. Used for long-term historical archiving.

2. How Writing Works

As a developer, you don't need to choose which bucket to write to. You simply send data to the system, and it lands in the Raw (H+) bucket automatically.

Background Tasks:

Behind the scenes, scheduled tasks wake up periodically to process this data. For example, every 5 minutes, a task takes all the raw data from H+, calculates the Mean, Min, Max, and Count, and saves a single summary point into D+.
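
The aggregation step can be sketched as follows. This is a simplified model of the Mean, Min, Max, and Count computation described above, not the platform's actual code:

```typescript
// Sketch: what a downsampling task computes for one 5-minute window.
function summarize(points: number[]) {
  const count = points.length;
  const sum = points.reduce((a, b) => a + b, 0);
  return {
    mean: sum / count,        // average of all raw points in the window
    min: Math.min(...points), // lowest value seen
    max: Math.max(...points), // highest value seen
    count,                    // number of raw points summarized
  };
}

console.log(summarize([10, 12, 11, 13]));
// { mean: 11.5, min: 10, max: 13, count: 4 }
```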

3. How Reading Works (The "Smart Stitching")

This is the most powerful feature. When you ask for data, you don't need to know which bucket it is in. You just provide a time range, and the readDownsampled function acts as a smart broker:

  1. It looks at your requested start and stop times.

  2. It automatically selects the highest-resolution bucket available for that period.

  3. If your request spans across boundaries (e.g., "last 2 hours" to "last 2 weeks"), it stitches data together seamlessly.

Example Scenario:

If you ask for "The last 2 days", the system might return:

  • The last 24 hours from the Raw (H+) bucket (high detail).

  • The 24 hours before that from the 5-minute (D+) bucket (medium detail).
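
The broker's bucket choice can be modeled with the retention periods from the pipeline table. This is illustrative only; the real selection logic is internal to readDownsampled:

```typescript
// Sketch: pick the highest-resolution bucket that still holds data of a
// given age, using the retention periods from the pipeline table.
const DAY = 24 * 60 * 60 * 1000;
const retentions: Array<[string, number]> = [
  ["H+", 1 * DAY],   // raw points, kept 1 day
  ["D+", 7 * DAY],   // 5-minute points, kept 1 week
  ["W+", 30 * DAY],  // hourly points, kept 1 month
  ["M+", 365 * DAY], // daily points, kept 1 year
  ["Y+", Infinity],  // weekly points, kept forever
];

function bucketForAge(ageMs: number): string {
  for (const [name, retention] of retentions) {
    if (ageMs <= retention) return name; // first bucket still covering this age
  }
  return "Y+";
}

console.log(bucketForAge(2 * 60 * 60 * 1000)); // "H+" (2 hours old)
console.log(bucketForAge(3 * DAY));            // "D+" (3 days old)
```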

4. Configuration Examples

Here is how you use this in your low-code functions.

Scenario A: Real-Time Debugging

You want to see exactly what happened in the last 15 minutes. The system will pull from the Raw (H+) bucket.

Scenario B: Monthly Reporting

You want to visualize the trend over the last 30 days. Loading raw data for 30 days would be millions of points (slow!). The system automatically pulls from the Hourly (W+) or Daily (M+) buckets, making the query instant.

Scenario C: The "Stitched" View

You want the last 50 data points, regardless of how old they are. The system will look at the newest data first, and if it doesn't find enough, it will automatically look further back in time into older buckets.

Summary of Aggregated Fields

When data is downsampled, we don't just keep the average. We preserve four key statistics for every window, so you never lose the context of what happened:

  • mean: The average value (good for smooth lines).

  • max: The highest value seen (good for detecting spikes).

  • min: The lowest value seen.

  • count: How many raw data points went into this period (good for understanding data density).

read

Reads time-series data from a specific bucket and measurement. It offers powerful options for filtering by time, limiting results, and aggregating data (e.g., calculating averages) into time windows.


Internal bucket names: When using the internal database, bucket names indicate the retention period:

F (Forever), Y (Year), M (Month), W (Week), D (Day), H (Hour).

Parameters

  • bucket: The name of the bucket to query.

  • measurement: The name of the measurement.

  • options: An optional object to refine the query.

    • start: The earliest time to include (e.g., '-12h', '-7d', '2025-01-01T00:00:00Z'). Defaults to '-1y'.

    • stop: The latest time to include. Defaults to 'now()'.

    • limit: Limits the result to the first n data points.

    • tail: Limits the result to the last n data points.

    • every: Duration of time windows for aggregation (e.g., '15m'). Requires func.

    • func: The aggregation function (e.g., 'mean', 'sum', 'count', 'last').

    • tags: An object of tags to filter by.

Example 1: Get raw data from the last hour
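
An illustrative options object; with no every/func, the raw points are returned:

```typescript
// Example 1: raw data from the last hour (illustrative values).
const lastHourOptions = { start: "-1h" };

console.log(lastHourOptions.start); // "-1h"
```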

Example 2: Get average temperature every 15 minutes for the last day
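
An illustrative options object combining a time range with windowed aggregation:

```typescript
// Example 2: 15-minute averages over the last day (illustrative values).
// `every` defines the window size and requires `func`.
const averagedOptions = { start: "-1d", every: "15m", func: "mean" };

console.log(averagedOptions.func); // "mean"
```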

Output

An array of objects, where each object has a date (ISO timestamp) and a value.

Understanding Aggregation

When working with time-series data, you often have thousands of individual data points (like sensor readings every second). To make sense of this data or to visualize it effectively, you typically want to group these points into larger time windows and summarize them. This process is called aggregation.

In the read function, this is controlled by two parameters:

  • every: Defines the size of the time window (e.g., '1h' for one hour, '15m' for 15 minutes, '1d' for one day).

  • func: Defines the mathematical calculation to apply to the data points within each window.

Available Functions

The following functions can be used in the func parameter to summarize your data:

  • mean: Calculates the average value. Typical use: smoothing out noisy sensor data (e.g., average temperature per hour).

  • median: Finds the middle value. Typical use: finding the "typical" value while ignoring extreme outliers.

  • min: Finds the lowest value. Typical use: detecting the coldest temperature or lowest battery level in a period.

  • max: Finds the highest value. Typical use: detecting peak power usage or maximum pressure in a day.

  • sum: Adds up all values. Typical use: calculating total energy consumption (kWh) or total volume flowed.

  • count: Counts the number of data points. Typical use: counting how many times a machine cycled or how many error logs occurred.

  • last: Takes the very last value in the window. Typical use: seeing the final state of a system at the end of each period.

  • first: Takes the very first value in the window. Typical use: seeing the starting state of a system at the beginning of each period.

Examples

The following examples show how to configure the read function options to perform different types of analysis.

1. Smoothing Data (Hourly Average)

If you have a sensor sending data every second, a chart might look very "spiky." To see a smooth trend, you can calculate the average (mean) for every hour (1h).

2. Peak Detection (Daily Maximum)

To monitor a motor's performance, you might want to know the highest speed or temperature it reached each day over the past month.

3. Usage Totals (Monthly Sum)

If you are tracking water flow where each data point represents liters used since the last reading, you can sum them up to get the total monthly consumption.

4. Incident Counting

If your application writes a value of 1 every time an error occurs, you can count how many errors happened in 15-minute intervals.
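
The four scenarios above map to options objects like these; the time ranges and window sizes are illustrative choices, not fixed defaults:

```typescript
// 1. Smoothing: hourly averages over the last week.
const hourlyAverage = { start: "-7d", every: "1h", func: "mean" };

// 2. Peak detection: daily maximums over the past month.
const dailyMaximum = { start: "-30d", every: "1d", func: "max" };

// 3. Usage totals: monthly sums (approximated as 30-day windows here).
const monthlySum = { start: "-1y", every: "30d", func: "sum" };

// 4. Incident counting: error counts per 15-minute interval.
const errorCounts = { start: "-1d", every: "15m", func: "count" };

console.log(dailyMaximum.func); // "max"
```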

readDownsampled

A "smart" query function that automatically stitches together data from different aggregated buckets. It retrieves high-resolution data for recent timeframes and lower-resolution (aggregated) data for older timeframes, providing an optimized view of long-term history without processing millions of raw data points.

Parameters

  • measurement: The name of the measurement.

  • options: Query options.

    • start: Earliest time (default '-1y').

    • stop: Latest time (default 'now()').

    • limit: Limit to first n results.

    • tail: Limit to last n results.

    • tags: Filter by tags.

Example
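
An illustrative configuration matching Scenario C above, the last 50 points regardless of age (the measurement name is a placeholder):

```typescript
// Illustrative `readDownsampled` inputs: the last 50 points for a
// measurement, stitched from whichever buckets hold them.
const measurement = "temperature";          // placeholder name
const options = { start: "-1y", tail: 50 }; // last 50 points, any bucket

console.log(options.tail); // 50
```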

Output

An array of objects containing statistical data (mean, min, max, count) for each time point.

query

Allows you to execute a raw Flux query string. This provides maximum flexibility for complex database operations not covered by the helper functions.

Parameters

  • flux: The raw Flux query string.

Example
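
A minimal Flux string as an illustration; it assumes a bucket named "production" exists and averages the last hour of a "temperature" measurement:

```flux
from(bucket: "production")
  |> range(start: -1h)
  |> filter(fn: (r) => r._measurement == "temperature")
  |> mean()
```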

Output

The raw result rows from InfluxDB.

delete

Deletes data from a measurement. You can delete the entire measurement or specify a time range to delete only a portion of the data.

Parameters

  • bucket: The name of the bucket.

  • measurement: The name of the measurement to delete.

  • options: Time range options.

    • start: Start time (defaults to 1970-01-01...).

    • stop: End time (defaults to now).

Example
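
An illustrative parameter set, assuming ISO timestamps are accepted for the range (bucket and measurement names are placeholders):

```typescript
// Illustrative `delete` parameters: remove one measurement's data from a
// given date onward (`stop` defaults to now).
const deleteParams = {
  bucket: "sensors",
  measurement: "temperature",
  options: { start: "2025-01-01T00:00:00Z" },
};

console.log(deleteParams.measurement); // "temperature"
```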

Output

Returns true on success.

flush

Manually forces any buffered data (pending writes) to be sent to the database immediately. This is useful during testing or before shutting down a process to ensure no data is lost.

Output

Returns true once all buffered data has been written.

reset

Danger Zone. This function deletes all data from all measurements in all buckets connected to this instance. The buckets themselves are preserved, but they will be empty.

Output

Returns true when the reset is complete.

listBuckets

Retrieves a list of all available buckets in the connected organization.

Parameters

None.

Output

An array of bucket objects.

listMeasurements

Lists detailed information about all measurements across all buckets.

Parameters

  • options:

    • includeStats: If true, calculates row count and cardinality (Warning: this can be slow).

    • statsRangeStart: Time range for stats (default '-1y').

Example
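
An illustrative options object that requests the (potentially slow) statistics for the last 30 days:

```typescript
// Illustrative `listMeasurements` options: include row-count and
// cardinality stats, computed over the last 30 days.
const listOptions = { includeStats: true, statsRangeStart: "-30d" };

console.log(listOptions.includeStats); // true
```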

Output

An array of measurement details.

getMeasurementDetails

Retrieves the schema (fields and tags) for a specific measurement in a specific bucket.

Parameters

  • bucket: The bucket name.

  • measurement: The measurement name.

Example

Output

An object describing the measurement's fields and tags.

getMeasurementStats

Calculates statistics (row count and cardinality) for a specific measurement over a given time range.

Parameters

  • bucket: The bucket name.

  • measurement: The measurement name.

  • options:

    • start: Start time (default '-30d').

    • stop: End time (default 'now()').

Example

Output

An object containing the row count and cardinality for the requested range.
