Forecasting workbench / self-hosted
Public release coming soon

Serious time series forecasting,
built on a canvas.

Compose end-to-end forecasting pipelines from typed nodes. Inspect data, split safely, engineer features, train statistical, ML, and deep learning models with prediction interval controls, compare results, and export artifacts, all running on infrastructure you own.

15 core node types · 3 model families · Walk-forward CV · Self-hosted deployment
Powered by Polars
[Screenshot: TempusFlow canvas showing a weekly employment forecasting pipeline with data source, transform, split, N-HiTS, and XGBoost model nodes]
Why teams pick TempusFlow
A workbench, not a notebook graveyard.
15 core node types · 3 stat / ML / DL families · 10 inspection tabs · 0 lines of glue code required
The platform

A workbench that thinks like a forecaster.

Every node is typed. Every edge enforces a contract. The canvas blocks invalid wiring, and the executor applies leakage-safe ordering for split-aware transforms and feature engineering.
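The ordering guarantee can be illustrated with a minimal plain-Python sketch (`fit_scaler` / `apply_scaler` are hypothetical names, not TempusFlow's API): statistics are fit on the training slice only, then applied to both slices, so nothing from the holdout ever influences the fit.

```python
def fit_scaler(train):
    # Fit standardization statistics on the training slice only
    mean = sum(train) / len(train)
    std = (sum((x - mean) ** 2 for x in train) / len(train)) ** 0.5
    return mean, std or 1.0  # guard against a constant series

def apply_scaler(series, mean, std):
    # Apply previously fitted statistics; never re-fit on test data
    return [(x - mean) / std for x in series]

train, test = [10.0, 12.0, 11.0, 13.0], [14.0, 15.0]
mean, std = fit_scaler(train)
train_scaled = apply_scaler(train, mean, std)
test_scaled = apply_scaler(test, mean, std)  # test never touches the fit
```

A split-aware executor enforces exactly this order: the split runs before any statistic-bearing transform, and downstream nodes only ever see fitted parameters, not future data.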

[Screenshot: TempusFlow evaluation screen showing forecast results, metrics, prediction interval coverage, and a forecast focus chart]
01

Typed nodes, enforced contracts

Data sources, inspections, transforms, splits, features, models, and evaluations are first-class building blocks. Invalid edges are blocked before they become confusing runs.

02

Three model families, one canvas

Statistical forecasting, sktime-powered ML, and Darts deep learning live in the same graph. Prediction interval controls are available across the model families, with clear warnings when a method cannot produce usable bounds.

03

Proper evaluation, not eyeballing

Single-holdout evaluation and walk-forward CV. SMAPE, MASE, RMSSE, mean error, residual diagnostics, PI coverage, Winkler score, horizon breakdowns, and directional accuracy.
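For readers unfamiliar with these metrics, here is a minimal plain-Python sketch of four of them (SMAPE, MASE, PI coverage, and the Winkler score). The function names are illustrative, not TempusFlow's API.

```python
def smape(y, f):
    # Symmetric MAPE in percent (assumes y and f are never both zero)
    return 100 * sum(2 * abs(fi - yi) / (abs(yi) + abs(fi))
                     for yi, fi in zip(y, f)) / len(y)

def mase(y, f, insample, m=1):
    # Scale out-of-sample MAE by the in-sample seasonal-naive MAE
    scale = sum(abs(insample[i] - insample[i - m])
                for i in range(m, len(insample))) / (len(insample) - m)
    return sum(abs(fi - yi) for yi, fi in zip(y, f)) / len(y) / scale

def pi_coverage(y, lo, hi):
    # Fraction of actuals that fall inside the prediction interval
    return sum(l <= yi <= u for yi, l, u in zip(y, lo, hi)) / len(y)

def winkler(y, lo, hi, alpha=0.05):
    # Winkler interval score: interval width plus a penalty for misses
    score = 0.0
    for yi, l, u in zip(y, lo, hi):
        score += u - l
        if yi < l:
            score += 2 / alpha * (l - yi)
        elif yi > u:
            score += 2 / alpha * (yi - u)
    return score / len(y)
```

A MASE below 1.0 means the model beats the in-sample seasonal-naive baseline; the Winkler score rewards intervals that are both narrow and well calibrated.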

04

Compare side-by-side

Train competing models on the same data with identical metrics. Focused versus full-history views help teams compare forecasts without losing temporal context.

05

Export. Reproduce. Govern.

Export readable Python or notebooks, save trained models to the registry, track versions and model cards, then run inference-only or retraining workflows from the same pipeline.

15 core node types

The full taxonomy. Nothing hidden.

Every primitive you need to take raw data to a versioned, evaluated, exported forecast. Drag any node onto the canvas, configure it, and keep the whole workflow visible.

01D

Data Source

Files, demo data, Snowflake, DuckDB/MotherDuck, Postgres.

02I

Data Inspection

10 tabs: quality, stats, shape, gaps, lags, outliers, ACF/PACF, seasonality, tests, decomp.

03T

Transformation

13 transforms for cleaning, filtering, aggregating, resampling, filling, and differencing.

04S

Split

Split by proportion or by recent periods, always chronological.
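Both split rules can be sketched in a few lines of plain Python (illustrative names, not the product API): order is preserved and the holdout always comes from the end of the series.

```python
def chronological_split(rows, test_fraction=0.2):
    # rows are assumed sorted by time; no shuffling, ever
    cut = int(len(rows) * (1 - test_fraction))
    return rows[:cut], rows[cut:]

def split_recent(rows, n_test):
    # Hold out the most recent n_test periods
    return rows[:-n_test], rows[-n_test:]

train, test = chronological_split(list(range(10)), test_fraction=0.3)
# train = [0..6], test = [7, 8, 9]
```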

05F

Feature Engineering

Calendar, cyclic encoding, lags, rolling stats, holidays, interactions.
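Two of these feature types can be sketched in plain Python (hypothetical helper names, not TempusFlow's API): a cyclic month encoding that keeps December adjacent to January, and strictly backward-looking lags that cannot leak future values.

```python
import math
from datetime import date

def cyclic_month(d):
    # Encode month-of-year on the unit circle so Dec and Jan stay adjacent
    angle = 2 * math.pi * (d.month - 1) / 12
    return math.sin(angle), math.cos(angle)

def lag_features(series, lags=(1, 7)):
    # Each lag looks strictly backwards; early rows get None, never a peek ahead
    return [{f"lag_{k}": series[i - k] if i >= k else None for k in lags}
            for i in range(len(series))]

s, c = cyclic_month(date(2024, 1, 15))   # January → (0.0, 1.0)
feats = lag_features([1, 2, 3], lags=(1,))
```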

06A

Statistical Model

AutoARIMA, Prophet, AutoETS, AutoTheta, TBATS, DOT (Dynamic Optimized Theta), Random Walk, Drift, Seasonal Naive.

07M

ML Model

Ridge, Huber, SVR, RF, GBM, HistGBM, XGBoost, LightGBM with tuning and coverage controls.

08L

DL Model

RNN, LSTM, GRU, BlockRNN, TCN, N-HiTS, N-BEATS with Optuna and probabilistic options.

09B

Backtesting

Walk-forward CV across multiple time windows.
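The windowing behind walk-forward CV can be sketched as follows (plain Python, illustrative names): each fold trains on an expanding prefix of the series and tests on the next horizon, so every evaluation respects temporal order.

```python
def walk_forward_windows(n, initial, horizon, step=None):
    # Expanding-window walk-forward: each fold trains on everything
    # before the test window and tests on the next `horizon` points.
    step = step or horizon
    folds = []
    train_end = initial
    while train_end + horizon <= n:
        folds.append((list(range(train_end)),
                      list(range(train_end, train_end + horizon))))
        train_end += step
    return folds

folds = walk_forward_windows(n=10, initial=6, horizon=2)
for train_idx, test_idx in folds:
    pass  # fit on train_idx, forecast test_idx, score the fold
```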

10C

Compare Baselines

Train selected models on the same data and rank them with identical metrics.

11E

Compare Evaluations

Metric matrix, radar chart, and focused or full-history forecast overlay.

12R

Reconciliation

Hierarchical coherence for panel and aggregate forecasts.
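One simple coherence strategy, bottom-up aggregation, can be sketched in plain Python (illustrative names; TempusFlow's reconciliation options may differ): leaf forecasts are summed per step so the aggregate always equals the sum of its parts.

```python
def bottom_up(child_forecasts):
    # Sum leaf-level forecasts at each horizon step so the
    # aggregate series is coherent with its children by construction.
    return [sum(step) for step in zip(*child_forecasts.values())]

children = {"store_a": [10.0, 11.0], "store_b": [4.0, 5.0]}
total = bottom_up(children)  # → [14.0, 16.0]
```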

13N

Ensemble

Combine multiple forecasts with average, median, trimmed mean, or accuracy weighting.
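These combination rules are straightforward to sketch in plain Python (illustrative function, not the product API); forecasts are aligned by horizon step and combined pointwise.

```python
import statistics

def combine(forecasts, method="mean", weights=None, trim=0.2):
    # forecasts: list of per-model forecast lists, aligned by horizon step
    combined = []
    for step in zip(*forecasts):
        if method == "mean":
            combined.append(statistics.fmean(step))
        elif method == "median":
            combined.append(statistics.median(step))
        elif method == "trimmed":
            # Drop the top and bottom `trim` fraction before averaging
            vals = sorted(step)
            k = int(len(vals) * trim)
            kept = vals[k:len(vals) - k] if k else vals
            combined.append(statistics.fmean(kept))
        elif method == "weighted":
            # Accuracy weighting: better models get larger weights
            combined.append(sum(w * v for w, v in zip(weights, step))
                            / sum(weights))
    return combined
```

The median and trimmed mean are robust to one badly wrong model, while accuracy weighting lets validation scores decide how much each model contributes.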

14V

Evaluation

Forecasts, prediction intervals, residuals, metrics, tuning history, artifacts.

15O

Output

Export models, forecasts, metrics, manifests, and run artifacts.

Self-hosted deployment

Your infrastructure.
Your forecasts. Your data.

Deploy the v1.0.0 stack on a customer-owned VM with Docker Compose. PostgreSQL, Redis, FastAPI, Celery, and S3-compatible storage stay under your control.

Application tier (stateless)
  • Web UI: React / TypeScript / React Flow
  • API Gateway: Node / Express / auth + RBAC
  • Forecasting Engine: Python / FastAPI / Celery
  • Polars Data Plane: columnar / typed / execution-ready

Modeling libraries (open-source)
  • StatsForecast: AutoARIMA / AutoETS / AutoTheta
  • Darts: N-HiTS / N-BEATS / TCN / RNN
  • sktime: reduction strategies / ML pipelines
  • Optuna: hyperparameter tuning

Data tier (stateful)
  • PostgreSQL: metadata / registry / audit
  • Redis: queue / cache / session
  • S3-Compatible: artifacts / datasets / exports
  • Connectors: Snowflake / DuckDB / Postgres
Deploys with docker compose up · PostgreSQL + Redis · S3-compatible artifact storage · Hetzner demo track / customer single-VM track
Data engine

The pipeline engine is built around Polars.

TempusFlow uses Polars throughout the Python execution path for loading, transforming, inspecting, feature engineering, connector materialization, and artifact generation.

  • Columnar dataframes. Uploaded files, demo datasets, database query snapshots, transforms, and feature pipelines move through a fast columnar execution path.
  • Forecasting-aware transforms. Split handling, lag features, rolling windows, holiday features, and interactions are implemented with leakage-conscious Polars operations.
  • Parquet-first snapshots. Database query results and uploads are materialized into reproducible datasets before pipeline execution, so reruns know exactly what they trained on.
  • Clean handoff to model libraries. Polars data feeds StatsForecast, Prophet, sktime, and Darts through normalized contracts while preserving time, target, entity, and covariate metadata.
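As a library-free sketch of what such a contract looks like: StatsForecast's documented input is a long format with one row per unique_id / ds / y. The helper name below is hypothetical, and the real handoff goes through Polars rather than plain dicts.

```python
from datetime import date, timedelta

def to_long_format(entity_series, start, freq_days=7):
    # Normalize per-entity series into StatsForecast-style long rows:
    # one row per (unique_id, ds, y), with a regular weekly spacing here.
    rows = []
    for entity, values in entity_series.items():
        for i, y in enumerate(values):
            rows.append({"unique_id": entity,
                         "ds": start + timedelta(days=freq_days * i),
                         "y": y})
    return rows

rows = to_long_format({"sku_1": [3.0, 4.0]}, start=date(2024, 1, 1))
```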
execution path / v1.0.0

From source to forecast artifacts
files / database snapshots / model results

01 Load and snapshot: CSV / Parquet / DB query
02 Transform and feature: split-safe Polars ops
03 Train and export: models / metrics / manifests

The goal is not a synthetic speed chart. It is a repeatable execution path: source data becomes a governed dataset, the graph runs in order, and artifacts are saved with lineage.

Enterprise

Built for governed forecasting teams.

RBAC, connection management, audit logs, a model registry, and a plugin system. The operational pieces are visible, configurable, and owned by the customer.

Full RBAC

Manage users, groups, project membership, owner/editor/viewer roles, and per-user capability flags from the admin console.

  • project-scoped access
  • node and capability controls
  • admin-visible effective permissions
Model Registry

Save trained models, track versions, inspect scores and dataset labels, and run pipelines in inference-only or retraining mode.

  • latest version badges
  • archive and promote actions
  • model cards with artifacts
Plugin System

Add your own node types. Two reference plugins ship with the platform: Hello World and a PyOD anomaly detector.

  • typed node contracts
  • manifest-driven UI
  • custom result viewers
Audit Log

Track pipeline runs, model registry actions, connection changes, and permission events with actor, resource, timestamp, and details.

  • admin filters
  • CSV export
  • resource-level history
Connections & Storage

Admins manage Snowflake, Postgres, and DuckDB/MotherDuck database connections, plus AWS S3 or S3-compatible storage for datasets and artifacts.

  • connection test flow
  • query preview and snapshots
  • Azure Blob and GCS planned
Customer-Owned Deployment

Run TempusFlow on infrastructure your team controls, with data, artifacts, credentials, and operational policy kept inside your environment.

  • customer-owned binaries
  • offline-friendly operation
  • no usage-based pricing
From raw data to evaluated forecast

Five steps. Same canvas.

STEP 01

Connect data

Start from files, demo datasets, or approved database queries from Snowflake, DuckDB/MotherDuck, and Postgres.

STEP 02

Inspect & transform

Review quality warnings, ACF/PACF, stationarity tests, seasonality detection, then split and engineer features.

STEP 03

Pick models

Choose statistical, ML, or DL models, or run several families on one canvas with tuned hyperparameters and PI settings.

STEP 04

Compare & backtest

Use walk-forward validation, residual diagnostics, side-by-side metrics, and comparison views to choose the strongest candidate.

STEP 05

Export & govern

Export Python or notebook code, register trained models, download forecasts and metrics, and preserve run artifacts.

TempusFlow v1.0.0 / self-hosted forecasting

Build forecast pipelines
your team can trust.

Public release coming soon. TempusFlow is being prepared as a customer-owned forecasting platform for teams that need evaluated, reproducible, governable forecasts without turning every workflow into bespoke notebook glue.