One platform to generate, test, and evaluate your synthetic time-series data
Rockfish is a modular platform for generating, testing, and evaluating synthetic time-series data. Each module does one job — and they compose into end-to-end workflows for ML training and agent evaluation.
The Rockfish Workflow
You generate data or train a model, check its quality, then branch into two downstream tracks depending on what you need.
Input
Data or Schema
Production time-series, a schema definition, or plain-language intent
Step 1
Rockfish Platform
Train a Rockfish Model or Generate Data with DataFuel or SchemaFuel
Step 2
Eval Studio
Measure data fidelity, privacy, and agent correctness — with actionable scores
Output
Model or Data
Trained custom Model or Baseline Data ready for downstream use
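The core loop above (train or generate, then evaluate before downstream use) can be sketched in Python. This is a minimal, hypothetical illustration of the workflow shape only: the function names, the `EvalReport` fields, and the scores are all invented for this sketch and are not the Rockfish SDK.

```python
# Hypothetical sketch of the core Rockfish workflow.
# All names here are illustrative, not the actual Rockfish API.

from dataclasses import dataclass


@dataclass
class EvalReport:
    fidelity: float  # statistical similarity to the source data
    privacy: float   # resistance to re-identification


def train_or_generate(source: str) -> dict:
    """Step 1: train a model on production data, or generate from a schema."""
    return {"artifact": f"model({source})"}


def evaluate(artifact: dict) -> EvalReport:
    """Step 2: score the result for fidelity and privacy (placeholder scores)."""
    return EvalReport(fidelity=0.97, privacy=0.99)


artifact = train_or_generate("production_timeseries.parquet")
report = evaluate(artifact)

# Gate downstream use on quality before branching into a track.
ready_for_downstream = report.fidelity > 0.9 and report.privacy > 0.9
```

The key design point the sketch captures is the quality gate: evaluation sits between generation and downstream use, so only data or models that clear the fidelity and privacy bar move on to a track.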
Then choose your track
Track A - Generate More Data
Scenario Studio
Takes your model or dataset and generates scenario-specific variations, injecting edge cases, rare events, and incident patterns, then blends them with your baseline to expand your training data.
Track B - Evaluate Agents
Scenario Studio + AgentFuel
Scenario Studio injects patterns into your data, then AgentFuel generates prompts, queries, and expected responses from those scenarios, ready for AgentEval to score.
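The branching between the two tracks can also be sketched. Again, every name here (`scenario_studio`, `agent_fuel`, the pattern strings) is illustrative only; the real Rockfish interfaces are not shown in this document.

```python
# Hypothetical sketch of the two downstream tracks.
# Function and pattern names are illustrative, not the Rockfish API.


def scenario_studio(data: str, patterns: list[str]) -> list[str]:
    """Shared step: inject edge cases and incident patterns into a dataset."""
    return [f"{data}+{p}" for p in patterns]


def agent_fuel(scenarios: list[str]) -> list[tuple[str, str]]:
    """Track B only: turn scenarios into (prompt, expected response) pairs."""
    return [(f"query about {s}", f"expected answer for {s}") for s in scenarios]


base = "baseline_timeseries"
variants = scenario_studio(base, ["traffic_spike", "sensor_dropout"])

# Track A: blend variants with the baseline to expand training data.
training_set = [base] + variants

# Track B: convert the same variants into agent-evaluation cases
# that a scorer (AgentEval in the text above) can grade.
eval_cases = agent_fuel(variants)
```

Note that both tracks start from the same Scenario Studio step; they differ only in what consumes its output, which is why the platform presents it as one shared module.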