30–40% of a performance engineer's week is data entry.

From Recording to Production-Ready Script

Six phases turn a browser recording into a fully correlated, validated performance test. Each phase uses the right tool for the job — deterministic scanning where speed matters, AI where judgment is needed, runtime proof where nothing else will do.

01

Record

Capture the User Journey

Upload a HAR file from your browser's developer tools, or use LoadMagic's browser transactions plugin to record directly. The recording captures every HTTP request, response, header, and cookie in the user flow — the raw material for everything that follows.

Supports any web application. No agents to install, no proxy to configure, no application changes required.
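The HAR format a browser exports is plain JSON, so the raw material is easy to picture. A minimal sketch of reading one (following the standard HAR 1.2 structure; the function name and returned fields are illustrative, not LoadMagic's API):

```python
import json

def load_entries(har_path):
    """Yield each recorded request's method, URL, and headers from a HAR file."""
    with open(har_path) as f:
        har = json.load(f)
    for entry in har["log"]["entries"]:
        req = entry["request"]
        yield {
            "method": req["method"],
            "url": req["url"],
            # HAR stores headers as a list of {"name": ..., "value": ...} pairs
            "headers": {h["name"]: h["value"] for h in req["headers"]},
        }
```

Everything downstream — scanning, correlation, validation — works from entries like these.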

02

Scan

Deterministic Observation

Every value in every response is checked against every subsequent request. Session tokens, CSRF parameters, dynamic IDs, timestamps, auth headers — the scanner identifies them all mechanically, exhaustively, in seconds.

This is the work that machines are unambiguously better at than humans. No AI is needed here — just thoroughness. The output feeds the World View: a structured map of how dynamic data flows through the application.

The scanner also detects framework signatures, token formats (JWT, UUID, Base64), and propagation patterns. This context-rich observation gives downstream phases exactly the data they need to make good decisions.
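As a rough sketch of what "every value against every subsequent request" means (the data structures and pattern set here are illustrative, not LoadMagic's internals):

```python
import re

# Known token formats the scanner can tag; a real pattern library is larger.
TOKEN_PATTERNS = {
    "jwt":  re.compile(r"^eyJ[\w-]+\.[\w-]+\.[\w-]*$"),
    "uuid": re.compile(r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-"
                       r"[0-9a-f]{4}-[0-9a-f]{12}$", re.I),
}

def classify(value):
    """Tag a value with a recognised token format, if any."""
    for name, pattern in TOKEN_PATTERNS.items():
        if pattern.match(value):
            return name
    return "opaque"

def scan(exchanges):
    """exchanges: ordered dicts with 'response_values' and 'request_values'.
    Returns every response value that reappears in a later request."""
    findings = []
    for i, source in enumerate(exchanges):
        for value in source["response_values"]:
            for j, target in enumerate(exchanges[i + 1:], start=i + 1):
                if value in target["request_values"]:
                    findings.append({
                        "value": value,
                        "format": classify(value),
                        "produced_by": i,   # response that emitted the value
                        "consumed_by": j,   # request that reused it
                    })
    return findings

recording = [
    {"response_values": ["abc123-session", "2024-01-01T00:00:00Z"],
     "request_values": []},
    {"response_values": [], "request_values": ["abc123-session"]},
]
print(scan(recording))
```

Note the timestamp never reappears, so it produces no finding — separating "dynamic" from "dynamic and consumed" is exactly what this pass is for.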

03

Correlate

Intelligent Extraction

This is where judgment matters. Not every dynamic value needs extraction. Some are cosmetic. Some are timestamps that change on every request but never break anything. Some are session tokens that are critical to the entire flow.

Specialist AI agents evaluate the scanner's observations and make extraction decisions — what to correlate, what type of extractor to use, where to place it, and how it interacts with other correlations. Each agent is purpose-built for a specific type of work: regex patterns, JSON path extraction, Groovy scripting, or end-to-end correlation strategy.

AI is used here because it's the right tool — but it works from the high-quality, pre-filtered data that Phase 2 already prepared, keeping inference efficient and focused.
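One slice of that judgment — choosing an extractor type for a value — can be pictured as follows. This is a hypothetical sketch under heavy simplification: a real agent weighs far more context, and the names here are not LoadMagic's.

```python
import json
import re

def choose_extractor(response_body, content_type, value):
    """Return a (type, expression) pair for extracting `value`."""
    if "json" in content_type:
        doc = json.loads(response_body)
        for key, v in doc.items():  # flat search; real traversal is recursive
            if v == value:
                return ("json_path", f"$.{key}")
    # Fall back to a boundary-anchored regex, in the style of JMeter's
    # Regular Expression Extractor: escape the text just before the value.
    left = response_body.split(value)[0][-10:]
    return ("regex", re.escape(left) + r"(.+?)")

body = '{"csrf_token": "9f8e7d", "server_time": "12:00"}'
print(choose_extractor(body, "application/json", "9f8e7d"))
# → ('json_path', '$.csrf_token')
```

The harder decisions — which values to skip, where extractors interact — are what the specialist agents exist for.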

04

Validate

Runtime Proof

Decisions that aren't tested are assumptions. The platform runs the correlated script against the live application and checks what actually happened. Did each extractor capture the right value? Did substitutions work? Did the server accept the requests?

TRACE records a chronological log of the execution — which extractors fired, what they captured, where failures occurred. This turns opaque test results into diagnosable events with clear resolution paths.

When validation succeeds, the extraction strategy becomes proven. When it fails, the system knows exactly whether the observation was wrong, the decision was wrong, or the application changed.
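A minimal sketch of that replay-and-check loop, with a TRACE-style chronological log (the log shape and names are assumptions for illustration, not the actual TRACE format):

```python
def validate(steps, run_step):
    """steps: dicts with 'name' and 'extractors'.
    run_step: callable returning {extractor_name: captured_value_or_None}."""
    trace = []
    for step in steps:
        captured = run_step(step)
        for ext in step["extractors"]:
            value = captured.get(ext)
            trace.append({
                "step": step["name"],
                "extractor": ext,
                "status": "captured" if value else "FAILED",
                "value": value,
            })
    return trace

def fake_run(step):
    # Stand-in for executing the request against the live application.
    return {"csrf_token": "9f8e7d"} if step["name"] == "login" else {"order_id": None}

steps = [
    {"name": "login", "extractors": ["csrf_token"]},
    {"name": "checkout", "extractors": ["order_id"]},
]
for entry in validate(steps, fake_run):
    print(entry)
```

Each entry pins a failure to a specific step and extractor, which is what makes the result diagnosable rather than just red.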

05

Assess

Quality Gate

SCOUT analyses the correlated script against the original recording and produces a structured quality report — coverage grades for each request, risk flags for uncorrelated dynamic values, and specific recommendations for anything that needs attention.

It answers "is this script production-ready?" with evidence, not hope. When something needs work, SCOUT tells you exactly what and where.
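The shape of such a report can be sketched like this. The grading thresholds and field names below are invented for illustration — SCOUT's real criteria are richer:

```python
def assess(requests):
    """requests: dicts with 'url', 'dynamic_values', 'correlated'.
    Grades each request by its correlation coverage and flags the gaps."""
    report = []
    for req in requests:
        total = len(req["dynamic_values"])
        covered = len(req["correlated"])
        ratio = covered / total if total else 1.0
        grade = "A" if ratio == 1.0 else "B" if ratio >= 0.8 else "C"
        report.append({
            "url": req["url"],
            "grade": grade,
            # Every dynamic value with no extractor is a named risk, not a vague worry.
            "risk_flags": sorted(set(req["dynamic_values"]) - set(req["correlated"])),
        })
    return report

print(assess([{"url": "/checkout",
               "dynamic_values": ["csrf", "cart_id"],
               "correlated": ["csrf"]}]))
# → [{'url': '/checkout', 'grade': 'C', 'risk_flags': ['cart_id']}]
```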

06

Maintain

Self-Healing & Learning

Applications change. Authentication flows get updated. Token formats shift. Endpoints are restructured. In traditional approaches, every change means manual rework.

LoadMagic maintains a golden baseline — the set of validated extraction strategies that represent the known-good state of the script. When something breaks, the platform doesn't just re-run blindly. It diagnoses: what changed in the traffic, whether the strategy still applies, and what repair is needed.
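In simplified form, baseline-driven diagnosis is a comparison between the known-good state and the failed run. This sketch uses hypothetical field names to show the distinction that matters — stale strategy versus changed application:

```python
def diagnose(baseline, observed):
    """baseline/observed: {extractor_name: {'pattern_matched': bool,
    'endpoint_present': bool}} for one run each."""
    report = {}
    for name in baseline:
        got = observed.get(name, {})
        if not got.get("endpoint_present", False):
            report[name] = "application changed: endpoint missing"
        elif not got.get("pattern_matched", False):
            report[name] = "strategy stale: pattern no longer matches"
        else:
            report[name] = "ok"
    return report

baseline = {"session_id": {"pattern_matched": True, "endpoint_present": True}}
observed = {"session_id": {"pattern_matched": False, "endpoint_present": True}}
print(diagnose(baseline, observed))
# → {'session_id': 'strategy stale: pattern no longer matches'}
```

The point of the baseline is that repair becomes targeted: only the extractors whose diagnosis is not "ok" need new decisions.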

M.I.N.D., the persistent learning system, carries forward what has worked across sessions. Validated knowledge earns trust over time. Approaches that prove reliable are promoted. The system compounds its understanding with every run — so the next correlation is better than the last.

Meet the Agents

Five specialists, each purpose-built for one part of the pipeline.

George — JMeter expert and error debugger

George

JMeter Genius

Encyclopaedic JMeter knowledge. Scans for errors in real time, identifies root causes, and suggests precise fixes.

Rupert — regex and JSON path extraction specialist

Rupert

Regex Researcher

Generates tailored regex and JSON path extractors that work in JMeter's specific engine. Fast and precise.

Suzy — Groovy scripting and code generation specialist

Suzy

Script Scholar

Groovy and JSR223 specialist. Converts between languages, generates scripts from plain text, handles complex logic.

Carrie — lead AI correlation agent

Carrie

The Correlator

Leads the correlation pipeline. Orchestrates the full extraction strategy from scan through to validated proof.

Quinn — QA assessment and quality gate specialist

Quinn

QA Assessor

Last line of defence. Scans for missing assertions, correlation gaps, and configuration issues before scripts go live.

You Control the Autonomy

Choose how much the AI does on its own. Trust builds gradually.

👁

Passive

Observes and logs everything. Shows what it found and what it would do — but makes no changes. You review, learn, and decide.

Best for: learning, audit, first run

💬

Suggest

Recommends specific actions with explanations. You approve, modify, or reject each one. The AI learns from your decisions.

Best for: standard workflow

Agent

Executes the full pipeline autonomously with guardrails. Scans, correlates, validates, and reports — you review the results.

Best for: experienced users, CI pipelines

See the pipeline in action on your own application.

Book a Demo
Try SCOUT Free