Phase 7.5 Design: Proactive Playout Timing
This document summarizes the timing-specific findings from ARCHITECTURE_RESILIENCE_REVIEW.md and turns them into a focused bridge phase after PHASE_7_BACKEND_LIFECYCLE_PLAYOUT_DESIGN.md.
Phase 7 made backend lifecycle, playout policy, ready-frame queueing, late/drop recovery, and backend playout health explicit. Phase 7.5 should use those foundations to move output production from demand-filled scheduling toward proactive, deadline-aware playout.
Status
- Phase 7.5 design package: proposed.
- Phase 7.5 implementation: Step 1 in progress.
- Current alignment: Phase 7 is complete. RenderOutputQueue, VideoPlayoutPolicy, VideoPlayoutScheduler, VideoBackendLifecycle, and backend playout telemetry exist. The backend worker fills the ready queue on completion demand, but render production is not yet proactively driven by queue pressure or video cadence.
Current footholds:
- RenderEngine owns normal GL work on the render thread.
- VideoBackend owns backend lifecycle, completion processing, ready-frame queue use, and backend playout health reporting.
- RenderOutputQueue reports depth, capacity, pushed, popped, dropped, and underrun counts.
- VideoPlayoutPolicy names ready-frame headroom and catch-up policy.
- HealthTelemetry::BackendPlayoutSnapshot exposes queue depth, underruns, late/drop streaks, and recovery decisions.
- Step 1 adds baseline timing fields for ready-queue min/max/zero-depth samples and output render duration.
Timing Review Findings
The resilience review highlights several timing risks that remain after basic render-thread and backend ownership cleanup:
- playout is still effectively filled on demand instead of continuously produced ahead
- output buffering is named, but queue depth is not yet tuned against measured render/readback cost
- GPU readback has an asynchronous path, but the miss path can still fall back to synchronous readback
- preview presentation is best-effort, but it still shares render-thread budget with playout
- telemetry is improving, but render timing is still too coarse to distinguish draw, pack, fence wait, readback copy, and preview cost
The practical concern is not average frame time. It is what happens during a short spike. A single slow render, readback wait, preview present, or callback scheduling delay can drain playout headroom and cause late or dropped output frames.
Why Phase 7.5 Exists
Phase 7 made the backend safer and observable, but Step 5 intentionally stopped at demand-filled queue behavior:
- a completion arrives
- the backend worker fills the ready queue to target depth
- the backend schedules one ready frame
That is better than callback-thread rendering, but it still couples frame production to output completion pressure. Phase 7.5 should make render production proactive:
- keep the ready queue near target depth before the device asks for the next frame
- let DeckLink consume already-prepared frames
- treat queue depth as the pressure signal between render and backend
- make preview and readback fallback subordinate to output deadlines
Goals
Phase 7.5 should establish:
- a proactive output producer that fills RenderOutputQueue based on queue pressure
- a clear trigger model for output production: queue-low, cadence tick, or both
- a bounded sleep/yield strategy when the ready queue is full
- explicit priority rules between playout, preview, screenshots, shader work, and background render requests
- readback miss behavior that does not blindly return to the most timing-sensitive synchronous path
- telemetry that can explain why the queue drains: render cost, readback wait, preview cost, or scheduling pressure
- pure tests for producer pressure policy where possible
Non-Goals
Phase 7.5 should not require:
- replacing the renderer
- replacing DeckLink support
- a full telemetry subsystem rewrite
- perfect adaptive latency
- a new UI
- changing live-state layering or persistence semantics
This phase is about output timing behavior, not broad subsystem redesign.
Target Timing Model
The target model is:
Video cadence / queue pressure
-> proactive output producer request
-> RenderEngine renders and reads back output frame
-> RenderOutputQueue stores ready frame
-> VideoBackend consumes ready frame for DeckLink scheduling
The important difference from Phase 7 is that output production should not wait until a completion has already created demand. The queue should usually have headroom before the completion worker needs to schedule.
Suggested pressure rules:
- if ready depth is below targetReadyFrames, request output production immediately
- if ready depth is at or above maxReadyFrames, the producer sleeps or yields
- if the late/drop streak grows, temporarily bias toward output production over preview
- if readback is late, prefer stale/black underrun policy over blocking the deadline path
- if preview is due but output queue is below target, skip or delay preview
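The pressure rules above can be sketched as a pure decision function. This is a minimal illustration only: ProducerAction, PressureInputs, and decideProducerAction are hypothetical names, not existing project symbols, and the real policy will likely weigh more inputs.

```cpp
#include <cstddef>

// Hypothetical names for illustration; not existing project symbols.
enum class ProducerAction { Produce, Yield, Sleep };

struct PressureInputs {
    std::size_t readyDepth = 0;        // frames currently in RenderOutputQueue
    std::size_t targetReadyFrames = 0; // desired headroom
    std::size_t maxReadyFrames = 0;    // hard queue cap
    int lateDropStreak = 0;            // consecutive late/dropped output frames
};

// Applies the suggested pressure rules: sleep at max, produce below target,
// and bias toward production while a late/drop streak is active.
inline ProducerAction decideProducerAction(const PressureInputs& in) {
    if (in.readyDepth >= in.maxReadyFrames)
        return ProducerAction::Sleep;      // queue full: bounded wait, no spin
    if (in.readyDepth < in.targetReadyFrames)
        return ProducerAction::Produce;    // below target: fill immediately
    if (in.lateDropStreak > 0)
        return ProducerAction::Produce;    // late/drop pressure: keep producing
    return ProducerAction::Yield;          // at target, no pressure
}
```

Because the function takes only plain values, it can be covered by the pure tests Step 2 calls for, without DeckLink or GL.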
Proposed Collaborators
OutputProductionController
Small policy owner that decides when to request another output frame.
Responsibilities:
- evaluate ready queue depth and capacity
- evaluate late/drop/underrun pressure
- decide whether to produce, sleep, or yield
- keep policy testable without DeckLink or GL
Non-responsibilities:
- GL rendering
- DeckLink scheduling
- live-state composition
OutputProducerWorker
Worker or render-thread-adjacent loop that keeps output frames ready.
Responsibilities:
- wake on queue-low pressure
- request render-thread output production
- push completed frames into RenderOutputQueue
- stop cleanly before render/backend teardown
Non-responsibilities:
- device callback handling
- hardware scheduling
- persistent state mutation
RenderTimingBreakdown
Lightweight render timing sample for the output path.
Initial fields:
- total output render time
- draw/composite time
- output pack time
- readback fence wait time
- readback copy time
- synchronous readback fallback count
- preview present cost
- preview skipped count
This can be reported into existing telemetry first, then Phase 8 can fold it into the broader health model.
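One possible shape for the sample, with the fields listed above; the struct and field names are assumptions, not an existing project type.

```cpp
#include <chrono>
#include <cstdint>

// Illustrative shape for the proposed timing sample; names are assumptions.
struct RenderTimingBreakdown {
    using Micros = std::chrono::microseconds;
    Micros totalOutputRender{0}; // end-to-end output render time
    Micros drawComposite{0};     // draw/composite time
    Micros outputPack{0};        // output pack time
    Micros readbackFenceWait{0}; // time waiting on the readback fence
    Micros readbackCopy{0};      // readback copy time
    Micros previewPresent{0};    // preview present cost
    std::uint64_t syncReadbackFallbacks = 0; // synchronous fallback count
    std::uint64_t previewSkips = 0;          // preview skipped count
};
```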
Migration Plan
Step 1. Snapshot Current Timing Behavior
Use existing Phase 7 telemetry to capture baseline behavior before changing production cadence.
Initial target:
- record ready queue depth over time while running
- record underrun count, late/drop streaks, and catch-up frames
- record output render duration and completion interval
- identify whether queue depth regularly falls to zero
Exit criteria:
- there is a clear before/after baseline for proactive production
- runtime-state output exposes enough values to diagnose whether queue starvation is happening
Implementation notes:
- HealthTelemetry::BackendPlayoutSnapshot exposes current, min, max, and zero-depth ready-queue samples.
- VideoBackend samples ready-queue depth before demand-fill, after queue fill, and after scheduling from the queue.
- VideoBackend records last, smoothed, and max output render duration for demand-produced output frames.
- Runtime-state JSON exposes the baseline under backendPlayout.readyQueue and backendPlayout.outputRender.
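Given the two documented paths, the runtime-state baseline might look something like the fragment below; the individual key names and values are illustrative assumptions, only backendPlayout.readyQueue and backendPlayout.outputRender are documented.

```json
{
  "backendPlayout": {
    "readyQueue": { "current": 2, "min": 0, "max": 3, "zeroDepthSamples": 4 },
    "outputRender": { "lastMs": 3.1, "smoothedMs": 2.8, "maxMs": 9.6 }
  }
}
```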
Step 2. Extract Output Production Policy
Introduce a pure policy helper for queue-pressure decisions.
Initial target:
- input: ready depth, capacity, target depth, late/drop streaks, underrun count
- output: produce, wait, or throttle
- tests cover low queue, full queue, late/drop pressure, and normalized policy values
Exit criteria:
- production cadence policy can evolve without touching DeckLink or GL code
Step 3. Add A Proactive Producer Loop
Move from demand-filled output production to queue-pressure production.
Initial target:
- producer wakes when queue depth is below target
- producer requests render-thread output production until target depth is reached
- producer stops when backend stops or render thread shuts down
- completion worker mostly schedules from already-ready frames
Exit criteria:
- normal playback does not depend on completion processing to fill the queue from empty
- callback/completion pressure and render production pressure are separate
Step 4. Prioritize Playout Over Preview
Make preview explicitly subordinate to output playout deadlines.
Initial target:
- skip or delay preview when ready queue depth is below target
- count skipped previews
- record preview present cost separately from output render cost
Exit criteria:
- preview cannot drain output headroom invisibly
- runtime telemetry shows preview skips and preview present cost
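The gating rule for Step 4 is small enough to sketch directly; shouldPresentPreview is a hypothetical name used for illustration.

```cpp
#include <cstddef>

// Skip preview whenever the ready queue is below target, and count the skip
// so telemetry can show preview pressure. Illustrative, not a project API.
inline bool shouldPresentPreview(std::size_t readyDepth,
                                 std::size_t targetReadyFrames,
                                 std::size_t& previewSkips) {
    if (readyDepth < targetReadyFrames) {
        ++previewSkips; // visible in runtime telemetry
        return false;   // playout headroom comes first
    }
    return true;
}
```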
Step 5. Make Readback Miss Policy Deadline-Aware
Avoid turning a late async readback fence into synchronous deadline pressure by default.
Initial target:
- count async readback misses
- count synchronous fallback uses
- allow policy to prefer stale/black output over synchronous fallback when queue pressure is high
- keep current fallback available while behavior is measured
Exit criteria:
- readback fallback is an explicit policy decision
- late GPU fences do not automatically block the most timing-sensitive path
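A deadline-aware miss policy could be expressed as another pure decision, in the same spirit as the producer policy. All names here are hypothetical; the key point is that synchronous fallback stays available but becomes a choice gated on queue pressure.

```cpp
#include <cstddef>

// Hypothetical names. Decides what to do when the async readback fence is
// not signaled by the output deadline.
enum class ReadbackMissAction { SyncFallback, RepeatLastFrame, EmitBlack };

struct ReadbackMissInputs {
    std::size_t readyDepth = 0;
    std::size_t targetReadyFrames = 0;
    bool haveLastGoodFrame = false;
};

inline ReadbackMissAction decideReadbackMiss(const ReadbackMissInputs& in) {
    // With headroom still in the queue, a synchronous wait is affordable,
    // preserving current fallback behavior while it is measured.
    if (in.readyDepth >= in.targetReadyFrames)
        return ReadbackMissAction::SyncFallback;
    // Under queue pressure, do not block the deadline path: prefer stale
    // output, then black, per the underrun policy.
    return in.haveLastGoodFrame ? ReadbackMissAction::RepeatLastFrame
                                : ReadbackMissAction::EmitBlack;
}
```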
Step 6. Tune Headroom Policy
Use measured behavior to choose default queue depth and latency tradeoffs.
Initial target:
- compare 30fps and 60fps behavior
- tune targetReadyFrames and maxReadyFrames
- document expected latency cost of each default
- keep the setting centralized in VideoPlayoutPolicy
Exit criteria:
- default headroom values are based on observed timing, not guesswork
- latency versus resilience tradeoff is documented
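The latency cost of a headroom default is roughly targetReadyFrames multiplied by the frame period, which is easy to document per frame rate. A quick arithmetic helper (illustrative name):

```cpp
// Approximate added output latency from ready-frame headroom, in ms.
// For example: 3 frames at 60fps is about 50 ms; at 30fps, about 100 ms.
constexpr double headroomLatencyMs(int targetReadyFrames, double fps) {
    return targetReadyFrames * (1000.0 / fps);
}
```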
Testing Strategy
Recommended tests:
- production policy requests work when queue is below target
- production policy throttles when queue is full
- late/drop pressure biases toward production
- preview policy skips when output queue is below target
- readback miss policy selects stale/black versus synchronous fallback according to pressure
- producer shutdown drains or cancels work without touching destroyed render/backend state
Useful homes:
- a new OutputProductionControllerTests
- RenderOutputQueueTests for pressure-adjacent queue behavior
- VideoPlayoutSchedulerTests for recovery/pressure interactions
- non-GL fakes for producer loop wake/stop behavior
Risks
Latency Risk
More ready frames means more latency. Phase 7.5 should make that latency a visible, measured policy choice.
Producer Runaway Risk
A proactive producer must not spin when the queue is full or when output is stopped.
Buffer Ownership Risk
Ready frames must not be reused while DeckLink or the render path still owns their buffers.
Readback Policy Risk
Stale or black output may be preferable to a missed deadline, but it can be visually obvious. External keying may make stale/black fallback more sensitive.
Preview Regression Risk
Treating preview as subordinate may make desktop preview less smooth. That is acceptable only if playout quality improves and preview skips are visible.
Phase 7.5 Exit Criteria
Phase 7.5 can be considered complete once the project can say:
- output production is driven by queue pressure or cadence, not only by completion demand
- completion handling normally schedules already-ready frames
- preview work is explicitly lower priority than playout
- readback miss behavior is explicit and deadline-aware
- queue depth, underruns, render timing, readback misses, and preview skips are visible
- default ready-frame headroom is documented for target frame rates
- production policy has non-DeckLink tests
Open Questions
- Should proactive production be driven by a timer, queue-low notifications, or both?
- Should the producer live inside VideoBackend, RenderEngine, or a small playout controller between them?
- Should underrun default to black, last scheduled, or newest completed output once proactive production exists?
- How much latency is acceptable at 30fps and 60fps?
- Should preview have a hard minimum frame rate, or be fully opportunistic under playout pressure?
- Should synchronous readback fallback be disabled automatically after repeated late/drop pressure?
Short Version
Phase 7 made playout observable and safer. Phase 7.5 should make it proactive.
The render side should keep the output queue warm before DeckLink needs the next frame. DeckLink should consume ready frames. Preview and synchronous readback fallback should never quietly steal the budget needed to hit output deadlines.