95 Commits

Author SHA1 Message Date
Aiden
c38c22834d Preroll udpate
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m29s
CI / Windows Release Package (push) Successful in 2m30s
2026-05-10 22:30:47 +10:00
Aiden
c8a4bd4c7b adjustments to control and stack saving
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m28s
CI / Windows Release Package (push) Successful in 2m44s
2026-05-10 22:10:54 +10:00
Aiden
46129a6044 UI fix
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m26s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-10 21:27:13 +10:00
Aiden
8fcb51d140 example data store
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m24s
CI / Windows Release Package (push) Successful in 2m34s
2026-05-10 21:11:17 +10:00
Aiden
944773c248 added new layer input pass
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m25s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-10 21:00:34 +10:00
Aiden
7777cfc194 data storage
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m23s
CI / Windows Release Package (push) Successful in 2m46s
2026-05-10 20:39:28 +10:00
Aiden
198639ae3f OSC sync back
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m22s
CI / Windows Release Package (push) Successful in 2m29s
2026-05-10 18:58:26 +10:00
Aiden
d7ca42b51b OSC fixes
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m22s
CI / Windows Release Package (push) Successful in 2m43s
2026-05-10 18:37:30 +10:00
Aiden
f11d531e0c OSC bind address
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m41s
CI / Windows Release Package (push) Successful in 2m30s
2026-05-10 17:23:28 +10:00
Aiden
a3635b5d31 Revert "preview changes"
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m23s
CI / Windows Release Package (push) Successful in 2m28s
This reverts commit 98f5cbe309.
2026-05-09 16:47:45 +10:00
Aiden
bc9aa6fbad Revert "Video backend"
This reverts commit 4ffbb97abf.
2026-05-09 16:47:43 +10:00
Aiden
0c16665610 Revert "Decklink separation"
Some checks failed
CI / Windows Release Package (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Native Windows Build And Tests (push) Has been cancelled
This reverts commit 46f2f1ece5.
2026-05-09 16:47:33 +10:00
Aiden
46f2f1ece5 Decklink separation 2026-05-09 14:42:11 +10:00
Aiden
4ffbb97abf Video backend
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m43s
CI / Windows Release Package (push) Successful in 2m54s
2026-05-09 14:15:49 +10:00
Aiden
98f5cbe309 preview changes
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m23s
CI / Windows Release Package (push) Successful in 2m41s
2026-05-09 13:53:00 +10:00
Aiden
93d856b3b6 CPU optimisations
Some checks failed
CI / React UI Build (push) Successful in 37s
CI / Windows Release Package (push) Has been cancelled
CI / Native Windows Build And Tests (push) Has been cancelled
2026-05-09 13:50:27 +10:00
6ea6971dd6 more shaders and updates/changes
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m22s
CI / Windows Release Package (push) Successful in 2m35s
2026-05-08 20:32:19 +10:00
163d70e9bd Annotations
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m22s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-08 20:01:22 +10:00
8afef5065a Update README.md
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m20s
CI / Windows Release Package (push) Successful in 2m34s
2026-05-08 19:14:31 +10:00
27bf2ae45c doc updates
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m22s
CI / Windows Release Package (push) Successful in 2m27s
2026-05-08 18:49:27 +10:00
1ea44ba3ae fix typo
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m19s
CI / Windows Release Package (push) Successful in 2m31s
2026-05-08 18:43:48 +10:00
0af9a72937 removed redundant code
Some checks failed
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m22s
CI / Windows Release Package (push) Has been cancelled
2026-05-08 18:40:56 +10:00
d650cac857 control layout updates
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m20s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-08 18:28:28 +10:00
a0cc86f189 description updates
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m20s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-08 18:11:26 +10:00
f322abf79a updates
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m22s
CI / Windows Release Package (push) Has been cancelled
2026-05-08 18:07:45 +10:00
eede6938cb Update multipass shader test
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m17s
CI / Windows Release Package (push) Successful in 2m27s
2026-05-08 17:41:53 +10:00
ad24a20fdb Multi pass test
Some checks failed
CI / React UI Build (push) Successful in 10s
CI / Windows Release Package (push) Has been cancelled
CI / Native Windows Build And Tests (push) Has been cancelled
2026-05-08 17:40:09 +10:00
5ae43513a7 annotations
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m16s
CI / Windows Release Package (push) Has been cancelled
2026-05-08 17:35:48 +10:00
cc23e73d51 Removed uneeded code 2026-05-08 17:33:57 +10:00
f85abef237 Multi pass
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m16s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-08 17:28:48 +10:00
596d370f43 Add manifest support for pass declarations
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m17s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-08 17:19:30 +10:00
87cb55b80b Layer program split
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m14s
CI / Windows Release Package (push) Successful in 2m26s
2026-05-08 17:10:29 +10:00
f458eb0130 Texture binding 2026-05-08 17:04:28 +10:00
7d8f9a39d1 render target pool
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 2m15s
CI / Windows Release Package (push) Successful in 2m31s
2026-05-08 16:59:43 +10:00
5b6e30ad13 Render class 2026-05-08 16:55:16 +10:00
07a5c91427 shader validation checks
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m16s
CI / Windows Release Package (push) Successful in 2m27s
2026-05-08 16:46:03 +10:00
53b980913b docs update 2026-05-08 16:42:23 +10:00
4e2ac4a091 re organisation
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m42s
CI / Windows Release Package (push) Successful in 2m31s
2026-05-08 16:38:47 +10:00
3eb5bb5de3 Splitting out rendering 2026-05-08 16:33:55 +10:00
ebbc11bb34 Decklink abstraction
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m41s
CI / Windows Release Package (push) Successful in 2m20s
2026-05-08 16:27:40 +10:00
6d5a606107 Greenscreen adjsutments
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 1m35s
CI / Windows Release Package (push) Successful in 2m22s
2026-05-08 16:11:43 +10:00
0831e18c2d Updated shader and fixed PNG output
All checks were successful
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 2m15s
CI / Windows Release Package (push) Successful in 2m10s
2026-05-08 15:52:58 +10:00
05d0bcbedd PNG writer
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m35s
CI / Windows Release Package (push) Successful in 2m17s
2026-05-08 15:33:40 +10:00
6ea70d9497 shader adjustment
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m33s
CI / Windows Release Package (push) Successful in 2m11s
2026-05-08 15:12:48 +10:00
bc536bd751 Control ui adjsutments
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m54s
CI / Windows Release Package (push) Successful in 2m9s
2026-05-08 13:54:02 +10:00
7035cde8c8 added random seed
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m33s
CI / Windows Release Package (push) Successful in 2m11s
2026-05-08 13:38:27 +10:00
5eff189bbf random float
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m45s
CI / Windows Release Package (push) Has been cancelled
2026-05-08 13:35:15 +10:00
c9fed70a60 16bit processing
All checks were successful
CI / React UI Build (push) Successful in 38s
CI / Native Windows Build And Tests (push) Successful in 1m48s
CI / Windows Release Package (push) Successful in 2m16s
2026-05-08 13:27:41 +10:00
fb9122ecdc Update README.md
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m33s
CI / Windows Release Package (push) Successful in 2m4s
2026-05-07 06:54:59 +00:00
bff27c42a7 Update README.md
All checks were successful
CI / React UI Build (push) Successful in 16s
CI / Native Windows Build And Tests (push) Successful in 1m45s
CI / Windows Release Package (push) Successful in 2m28s
2026-05-07 06:11:53 +00:00
cea435b609 shader tweak for LUT application
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m33s
CI / Windows Release Package (push) Successful in 2m21s
2026-05-06 16:53:54 +10:00
f9ea2d6900 LUT interpolation
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m47s
CI / Windows Release Package (push) Successful in 2m31s
2026-05-06 16:44:39 +10:00
96e7e66b0d Install step
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m35s
CI / Windows Release Package (push) Successful in 2m6s
2026-05-06 14:51:19 +10:00
e5221b329f Added xyla shader
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Windows Release Package (push) Has been cancelled
CI / Native Windows Build And Tests (push) Has been cancelled
2026-05-06 14:50:00 +10:00
70be7312b8 Timing and saftey pass
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m33s
CI / Windows Release Package (push) Successful in 2m23s
2026-05-06 14:35:41 +10:00
b2f4d6677c Footer
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m33s
CI / Windows Release Package (push) Successful in 2m14s
2026-05-06 14:15:57 +10:00
08e039aebe Shader compile thread seperation
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m31s
CI / Windows Release Package (push) Successful in 2m6s
2026-05-06 14:11:18 +10:00
6502344d0a Added trigger
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m33s
CI / Windows Release Package (push) Successful in 2m4s
2026-05-06 14:01:23 +10:00
e59677c212 Typography improvements
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m31s
CI / Windows Release Package (push) Successful in 2m20s
2026-05-06 13:16:02 +10:00
3dc7af6fc0 control
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m32s
CI / Windows Release Package (push) Successful in 2m4s
2026-05-06 13:03:35 +10:00
ef829bf3ef Added control interface
All checks were successful
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m32s
CI / Windows Release Package (push) Successful in 2m20s
2026-05-06 12:55:36 +10:00
ff1b7519a0 Added bad shader warning instead of hard fail
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m32s
CI / Windows Release Package (push) Failing after 2m15s
2026-05-06 12:44:22 +10:00
414ef62479 Added clock time
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Successful in 1m32s
CI / Windows Release Package (push) Failing after 2m7s
2026-05-06 12:38:23 +10:00
d2cf852eb2 Update ci.yml
Some checks failed
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Successful in 1m30s
CI / Windows Release Package (push) Failing after 2m29s
2026-05-06 12:15:34 +10:00
73e0af5d2e Update ci.yml
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Failing after 17s
CI / Windows Release Package (push) Has been skipped
2026-05-06 12:12:08 +10:00
99e8fb4681 Updated runner
Some checks failed
CI / React UI Build (push) Successful in 11s
CI / Native Windows Build And Tests (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 12:11:51 +10:00
a58f8aaf43 Start up procedures
Some checks failed
CI / React UI Build (push) Successful in 10s
CI / Native Windows Build And Tests (push) Failing after 19s
CI / Windows Release Package (push) Has been skipped
2026-05-06 11:56:02 +10:00
515f58b848 Video format refactor
Some checks failed
CI / Native Windows Build And Tests (push) Failing after 4s
CI / React UI Build (push) Successful in 11s
CI / Windows Release Package (push) Has been skipped
2026-05-06 11:51:08 +10:00
02a8a64360 com updates
Some checks failed
CI / Native Windows Build And Tests (push) Failing after 4s
CI / React UI Build (push) Successful in 11s
CI / Windows Release Package (push) Has been skipped
2026-05-06 11:41:27 +10:00
a526887ff6 temporal manifest tests
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 11:34:53 +10:00
d2ac369fdc Pacing problems
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 11:31:48 +10:00
2317a80ce5 stutter fix
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 11:23:40 +10:00
3cb8d3cfad added tests
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 11:09:15 +10:00
8b9e2916df Shader ownership
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 11:03:16 +10:00
bbbc678c83 Simplify ownership/lifetime
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 10:57:59 +10:00
1b67777c4a Extract frame transfer callbacks
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 10:53:53 +10:00
5fd24b3f06 Hide renderer internals
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 10:48:50 +10:00
35f5a024fd Decklink helper
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 10:44:55 +10:00
6918306336 decklink separation
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 10:31:21 +10:00
8ec87685b8 Shader clean up
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 10:26:38 +10:00
8c8028dd1f Separate history
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 10:14:55 +10:00
9e480db31c Further refactor
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 09:31:44 +10:00
0bfffa6552 Refactor
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 09:28:46 +10:00
437199f3f0 Additional shaders
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-06 00:23:20 +10:00
cf31c91831 Merge pull request 'Text-and-Fonts' (#1) from Text-and-Fonts into main
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
Reviewed-on: #1
2026-05-05 13:57:23 +00:00
7e4ab5cbd8 V1 text, needs improvements
Some checks failed
CI / Native Windows Build And Tests (pull_request) Failing after 18s
CI / React UI Build (pull_request) Has been cancelled
CI / Windows Release Package (pull_request) Has been cancelled
2026-05-05 23:57:02 +10:00
6ce09c0e9c making text pretty 2026-05-05 23:51:02 +10:00
62c3ded1f8 Font working 2026-05-05 23:47:08 +10:00
3e8b472f74 Initial font work 2026-05-05 23:18:50 +10:00
fd0ebb8d40 Update README.md
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-05 22:56:56 +10:00
fcdc5bac6e Update README.md
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-05 22:52:53 +10:00
fecc936a14 Input optional
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-05 22:52:41 +10:00
536f65bf88 Todo
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-05 22:50:46 +10:00
ce5905373a Added new shaders
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-05 22:36:52 +10:00
119e49aec1 Updated build steps
Some checks failed
CI / Native Windows Build And Tests (push) Has been cancelled
CI / React UI Build (push) Has been cancelled
CI / Windows Release Package (push) Has been cancelled
2026-05-05 21:39:33 +10:00
193 changed files with 53512 additions and 5044 deletions


@@ -12,27 +12,60 @@ on:
jobs:
native-windows:
name: Native Windows Build And Tests
-runs-on: windows-latest
+runs-on: windows-2022
steps:
- name: Checkout
uses: actions/checkout@v4
+- name: Verify Visual Studio ATL
+  shell: powershell
+  run: |
+    $atlHeaders = @(Get-ChildItem -Path "${env:ProgramFiles(x86)}\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC" -Filter atlbase.h -Recurse -ErrorAction SilentlyContinue)
+    if ($atlHeaders.Count -eq 0) {
+      Write-Error "Visual Studio Build Tools is missing ATL. Install the 'C++ ATL for latest v143 build tools (x86 & x64)' component, component ID Microsoft.VisualStudio.Component.VC.ATL, then restart the runner service."
+      exit 1
+    }
+    Write-Host "Found ATL header: $($atlHeaders[0].FullName)"
- name: Configure Debug
shell: powershell
-run: cmake --preset vs2022-x64-debug
+run: |
+  $slangRoot = "${{ vars.SLANG_ROOT }}"
+  if ([string]::IsNullOrWhiteSpace($slangRoot)) {
+    $slangRoot = $env:SLANG_ROOT
+  }
+  if ([string]::IsNullOrWhiteSpace($slangRoot)) {
+    $slangRoot = Join-Path $PWD "3rdParty\slang-2026.8-windows-x86_64"
+  }
+  $requiredFiles = @(
+    (Join-Path $slangRoot "bin\slangc.exe"),
+    (Join-Path $slangRoot "bin\slang-compiler.dll"),
+    (Join-Path $slangRoot "bin\slang-glslang.dll"),
+    (Join-Path $slangRoot "LICENSE")
+  )
+  $missingFiles = @($requiredFiles | Where-Object { -not (Test-Path -LiteralPath $_) })
+  if ($missingFiles.Count -gt 0) {
+    Write-Error "Missing native third-party dependencies. Set Gitea repository variable SLANG_ROOT, or pre-populate the repo-local 3rdParty folder on the Windows runner. Missing: $($missingFiles -join ', ')"
+    exit 1
+  }
+  Write-Host "Using SLANG_ROOT=$slangRoot"
+  cmake --preset vs2022-x64-debug -DSLANG_ROOT="$slangRoot"
- name: Build Debug
shell: powershell
run: cmake --build --preset build-debug
-- name: Run Native Tests
+- name: Run Native Tests And Shader Validation
shell: powershell
run: cmake --build --preset build-debug --target RUN_TESTS
ui-ubuntu:
name: React UI Build
-runs-on: nubuntu-latest
+runs-on: ubuntu-latest
steps:
- name: Checkout
@@ -48,7 +81,7 @@ jobs:
package-windows:
name: Windows Release Package
-runs-on: windows-latest
+runs-on: windows-2022
needs:
- native-windows
- ui-ubuntu
@@ -57,6 +90,16 @@ jobs:
- name: Checkout
uses: actions/checkout@v4
+- name: Verify Visual Studio ATL
+  shell: powershell
+  run: |
+    $atlHeaders = @(Get-ChildItem -Path "${env:ProgramFiles(x86)}\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC" -Filter atlbase.h -Recurse -ErrorAction SilentlyContinue)
+    if ($atlHeaders.Count -eq 0) {
+      Write-Error "Visual Studio Build Tools is missing ATL. Install the 'C++ ATL for latest v143 build tools (x86 & x64)' component, component ID Microsoft.VisualStudio.Component.VC.ATL, then restart the runner service."
+      exit 1
+    }
+    Write-Host "Found ATL header: $($atlHeaders[0].FullName)"
- name: Build UI
shell: powershell
working-directory: ui
@@ -66,7 +109,30 @@ jobs:
- name: Configure Release
shell: powershell
-run: cmake --preset vs2022-x64-release
+run: |
+  $slangRoot = "${{ vars.SLANG_ROOT }}"
+  if ([string]::IsNullOrWhiteSpace($slangRoot)) {
+    $slangRoot = $env:SLANG_ROOT
+  }
+  if ([string]::IsNullOrWhiteSpace($slangRoot)) {
+    $slangRoot = Join-Path $PWD "3rdParty\slang-2026.8-windows-x86_64"
+  }
+  $requiredFiles = @(
+    (Join-Path $slangRoot "bin\slangc.exe"),
+    (Join-Path $slangRoot "bin\slang-compiler.dll"),
+    (Join-Path $slangRoot "bin\slang-glslang.dll"),
+    (Join-Path $slangRoot "LICENSE")
+  )
+  $missingFiles = @($requiredFiles | Where-Object { -not (Test-Path -LiteralPath $_) })
+  if ($missingFiles.Count -gt 0) {
+    Write-Error "Missing native third-party dependencies. Set Gitea repository variable SLANG_ROOT, or pre-populate the repo-local 3rdParty folder on the Windows runner. Missing: $($missingFiles -join ', ')"
+    exit 1
+  }
+  Write-Host "Using SLANG_ROOT=$slangRoot"
+  cmake --preset vs2022-x64-release -DSLANG_ROOT="$slangRoot"
- name: Build Release
shell: powershell
@@ -81,7 +147,8 @@ jobs:
run: Compress-Archive -Path dist/VideoShader/* -DestinationPath dist/VideoShader.zip -Force
- name: Upload Runtime Package
-uses: actions/upload-artifact@v4
+# Gitea/GHES-compatible runners do not support the v4 artifact backend yet.
+uses: actions/upload-artifact@v3
with:
name: VideoShader-windows-release
path: dist/VideoShader.zip
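The fallback chain used by both Configure steps above (Gitea repository variable, then the runner's `SLANG_ROOT` environment variable, then the repo-local `3rdParty` checkout, with every required runtime file reported in one error) can be sketched outside the workflow. This is a minimal Python illustration of that resolution logic, not part of the CI itself; the function names are hypothetical, and the file list mirrors the one the workflow checks.

```python
import os

# Required files relative to the Slang root, mirroring the workflow's checks.
REQUIRED = [
    os.path.join("bin", "slangc.exe"),
    os.path.join("bin", "slang-compiler.dll"),
    os.path.join("bin", "slang-glslang.dll"),
    "LICENSE",
]

def resolve_slang_root(repo_var, env, cwd):
    """Return the first non-blank candidate: repo variable, env var, repo-local default."""
    candidates = (
        repo_var,
        env.get("SLANG_ROOT", ""),
        os.path.join(cwd, "3rdParty", "slang-2026.8-windows-x86_64"),
    )
    for candidate in candidates:
        if candidate and candidate.strip():
            return candidate
    raise RuntimeError("no SLANG_ROOT candidate available")

def missing_files(slang_root, exists=os.path.exists):
    """List every absent required file, so the failure message can name them all at once."""
    return [
        os.path.join(slang_root, rel)
        for rel in REQUIRED
        if not exists(os.path.join(slang_root, rel))
    ]
```

Collecting all missing paths before failing (rather than stopping at the first) matches the workflow's behaviour: the runner operator sees the complete list in a single CI run.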


@@ -7,68 +7,143 @@ set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_CXX_EXTENSIONS OFF)
set(APP_DIR "${CMAKE_CURRENT_SOURCE_DIR}/apps/LoopThroughWithOpenGLCompositing")
set(GPUDIRECT_DIR "${CMAKE_CURRENT_SOURCE_DIR}/3rdParty/Blackmagic DeckLink SDK 16.0/Win/Samples/NVIDIA_GPUDirect" CACHE PATH "Path to the NVIDIA_GPUDirect sample directory from the Blackmagic DeckLink SDK")
set(SLANG_ROOT "${CMAKE_CURRENT_SOURCE_DIR}/3rdParty/slang-2026.8-windows-x86_64" CACHE PATH "Path to a Slang binary release containing bin/slangc.exe")
if(NOT EXISTS "${APP_DIR}/LoopThroughWithOpenGLCompositing.cpp")
message(FATAL_ERROR "Imported app sources were not found under ${APP_DIR}")
endif()
if(NOT EXISTS "${GPUDIRECT_DIR}/lib/x64/dvp.lib")
message(FATAL_ERROR "NVIDIA GPUDirect library not found under ${GPUDIRECT_DIR}")
set(SLANG_RUNTIME_FILES
"${SLANG_ROOT}/bin/slangc.exe"
"${SLANG_ROOT}/bin/slang-compiler.dll"
"${SLANG_ROOT}/bin/slang-glslang.dll"
)
foreach(SLANG_RUNTIME_FILE IN LISTS SLANG_RUNTIME_FILES)
if(NOT EXISTS "${SLANG_RUNTIME_FILE}")
message(FATAL_ERROR "Required Slang runtime file not found: ${SLANG_RUNTIME_FILE}")
endif()
endforeach()
set(SLANG_LICENSE_FILE "${SLANG_ROOT}/LICENSE")
if(NOT EXISTS "${SLANG_LICENSE_FILE}")
message(FATAL_ERROR "Slang license file not found: ${SLANG_LICENSE_FILE}")
endif()
set(APP_SOURCES
"${APP_DIR}/ControlServer.cpp"
"${APP_DIR}/ControlServer.h"
"${APP_DIR}/DeckLinkAPI_i.c"
"${APP_DIR}/GLExtensions.cpp"
"${APP_DIR}/GLExtensions.h"
"${APP_DIR}/videoio/decklink/DeckLinkAPI_i.c"
"${APP_DIR}/control/ControlServer.cpp"
"${APP_DIR}/control/ControlServer.h"
"${APP_DIR}/control/OscServer.cpp"
"${APP_DIR}/control/OscServer.h"
"${APP_DIR}/control/RuntimeControlBridge.cpp"
"${APP_DIR}/control/RuntimeControlBridge.h"
"${APP_DIR}/control/RuntimeServices.cpp"
"${APP_DIR}/control/RuntimeServices.h"
"${APP_DIR}/videoio/decklink/DeckLinkAPI_h.h"
"${APP_DIR}/videoio/decklink/DeckLinkDisplayMode.cpp"
"${APP_DIR}/videoio/decklink/DeckLinkDisplayMode.h"
"${APP_DIR}/videoio/decklink/DeckLinkFrameTransfer.cpp"
"${APP_DIR}/videoio/decklink/DeckLinkFrameTransfer.h"
"${APP_DIR}/videoio/decklink/DeckLinkSession.cpp"
"${APP_DIR}/videoio/decklink/DeckLinkSession.h"
"${APP_DIR}/videoio/decklink/DeckLinkVideoIOFormat.cpp"
"${APP_DIR}/videoio/decklink/DeckLinkVideoIOFormat.h"
"${APP_DIR}/gl/renderer/GLExtensions.cpp"
"${APP_DIR}/gl/renderer/GLExtensions.h"
"${APP_DIR}/gl/shader/GlobalParamsBuffer.cpp"
"${APP_DIR}/gl/shader/GlobalParamsBuffer.h"
"${APP_DIR}/gl/renderer/GlRenderConstants.h"
"${APP_DIR}/gl/renderer/GlScopedObjects.h"
"${APP_DIR}/gl/shader/GlShaderSources.cpp"
"${APP_DIR}/gl/shader/GlShaderSources.h"
"${APP_DIR}/gl/OpenGLComposite.cpp"
"${APP_DIR}/gl/OpenGLComposite.h"
"${APP_DIR}/gl/OpenGLCompositeRuntimeControls.cpp"
"${APP_DIR}/gl/pipeline/OpenGLRenderPass.cpp"
"${APP_DIR}/gl/pipeline/OpenGLRenderPass.h"
"${APP_DIR}/gl/pipeline/OpenGLRenderPipeline.cpp"
"${APP_DIR}/gl/pipeline/OpenGLRenderPipeline.h"
"${APP_DIR}/gl/pipeline/RenderPassDescriptor.h"
"${APP_DIR}/gl/pipeline/ShaderFeedbackBuffers.cpp"
"${APP_DIR}/gl/pipeline/ShaderFeedbackBuffers.h"
"${APP_DIR}/gl/renderer/OpenGLRenderer.cpp"
"${APP_DIR}/gl/renderer/OpenGLRenderer.h"
"${APP_DIR}/gl/renderer/RenderTargetPool.cpp"
"${APP_DIR}/gl/renderer/RenderTargetPool.h"
"${APP_DIR}/gl/pipeline/OpenGLVideoIOBridge.cpp"
"${APP_DIR}/gl/pipeline/OpenGLVideoIOBridge.h"
"${APP_DIR}/gl/shader/OpenGLShaderPrograms.cpp"
"${APP_DIR}/gl/shader/OpenGLShaderPrograms.h"
"${APP_DIR}/gl/pipeline/PngScreenshotWriter.cpp"
"${APP_DIR}/gl/pipeline/PngScreenshotWriter.h"
"${APP_DIR}/gl/shader/ShaderProgramCompiler.cpp"
"${APP_DIR}/gl/shader/ShaderProgramCompiler.h"
"${APP_DIR}/gl/shader/ShaderBuildQueue.cpp"
"${APP_DIR}/gl/shader/ShaderBuildQueue.h"
"${APP_DIR}/gl/shader/ShaderTextureBindings.cpp"
"${APP_DIR}/gl/shader/ShaderTextureBindings.h"
"${APP_DIR}/gl/shader/Std140Buffer.h"
"${APP_DIR}/gl/shader/TextRasterizer.cpp"
"${APP_DIR}/gl/shader/TextRasterizer.h"
"${APP_DIR}/gl/shader/TextureAssetLoader.cpp"
"${APP_DIR}/gl/shader/TextureAssetLoader.h"
"${APP_DIR}/gl/pipeline/TemporalHistoryBuffers.cpp"
"${APP_DIR}/gl/pipeline/TemporalHistoryBuffers.h"
"${APP_DIR}/LoopThroughWithOpenGLCompositing.cpp"
"${APP_DIR}/LoopThroughWithOpenGLCompositing.h"
"${APP_DIR}/LoopThroughWithOpenGLCompositing.rc"
"${APP_DIR}/NativeHandles.h"
"${APP_DIR}/NativeSockets.h"
"${APP_DIR}/OpenGLComposite.cpp"
"${APP_DIR}/OpenGLComposite.h"
"${APP_DIR}/OscServer.cpp"
"${APP_DIR}/OscServer.h"
"${APP_DIR}/platform/NativeHandles.h"
"${APP_DIR}/platform/NativeSockets.h"
"${APP_DIR}/resource.h"
"${APP_DIR}/RuntimeHost.cpp"
"${APP_DIR}/RuntimeHost.h"
"${APP_DIR}/RuntimeJson.cpp"
"${APP_DIR}/RuntimeJson.h"
"${APP_DIR}/RuntimeParameterUtils.cpp"
"${APP_DIR}/RuntimeParameterUtils.h"
"${APP_DIR}/ShaderCompiler.cpp"
"${APP_DIR}/ShaderCompiler.h"
"${APP_DIR}/ShaderPackageRegistry.cpp"
"${APP_DIR}/ShaderPackageRegistry.h"
"${APP_DIR}/ShaderTypes.h"
"${APP_DIR}/runtime/RuntimeHost.cpp"
"${APP_DIR}/runtime/RuntimeHost.h"
"${APP_DIR}/runtime/RuntimeClock.cpp"
"${APP_DIR}/runtime/RuntimeClock.h"
"${APP_DIR}/runtime/RuntimeJson.cpp"
"${APP_DIR}/runtime/RuntimeJson.h"
"${APP_DIR}/runtime/RuntimeParameterUtils.cpp"
"${APP_DIR}/runtime/RuntimeParameterUtils.h"
"${APP_DIR}/shader/ShaderCompiler.cpp"
"${APP_DIR}/shader/ShaderCompiler.h"
"${APP_DIR}/shader/ShaderPackageRegistry.cpp"
"${APP_DIR}/shader/ShaderPackageRegistry.h"
"${APP_DIR}/shader/ShaderTypes.h"
"${APP_DIR}/stdafx.cpp"
"${APP_DIR}/stdafx.h"
"${APP_DIR}/targetver.h"
"${APP_DIR}/VideoFrameTransfer.cpp"
"${APP_DIR}/VideoFrameTransfer.h"
"${APP_DIR}/videoio/VideoIOFormat.cpp"
"${APP_DIR}/videoio/VideoIOFormat.h"
"${APP_DIR}/videoio/VideoIOTypes.h"
"${APP_DIR}/videoio/VideoPlayoutScheduler.cpp"
"${APP_DIR}/videoio/VideoPlayoutScheduler.h"
)
add_executable(LoopThroughWithOpenGLCompositing WIN32 ${APP_SOURCES})
target_include_directories(LoopThroughWithOpenGLCompositing PRIVATE
"${APP_DIR}"
"${GPUDIRECT_DIR}/include"
)
target_link_directories(LoopThroughWithOpenGLCompositing PRIVATE
"${GPUDIRECT_DIR}/lib/x64"
"${APP_DIR}/control"
"${APP_DIR}/gl"
"${APP_DIR}/gl/pipeline"
"${APP_DIR}/gl/renderer"
"${APP_DIR}/gl/shader"
"${APP_DIR}/platform"
"${APP_DIR}/runtime"
"${APP_DIR}/shader"
"${APP_DIR}/videoio"
"${APP_DIR}/videoio/decklink"
)
target_link_libraries(LoopThroughWithOpenGLCompositing PRIVATE
dvp.lib
opengl32
glu32
Ws2_32
Crypt32
Advapi32
Gdiplus
Ole32
Windowscodecs
)
target_compile_definitions(LoopThroughWithOpenGLCompositing PRIVATE
@@ -81,12 +156,13 @@ if(MSVC)
endif()
add_executable(RuntimeJsonTests
"${APP_DIR}/RuntimeJson.cpp"
"${APP_DIR}/runtime/RuntimeJson.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/RuntimeJsonTests.cpp"
)
target_include_directories(RuntimeJsonTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/runtime"
)
if(MSVC)
@@ -96,14 +172,32 @@ endif()
enable_testing()
add_test(NAME RuntimeJsonTests COMMAND RuntimeJsonTests)
add_executable(RuntimeClockTests
"${APP_DIR}/runtime/RuntimeClock.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/RuntimeClockTests.cpp"
)
target_include_directories(RuntimeClockTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/runtime"
)
if(MSVC)
target_compile_options(RuntimeClockTests PRIVATE /W3)
endif()
add_test(NAME RuntimeClockTests COMMAND RuntimeClockTests)
add_executable(RuntimeParameterUtilsTests
"${APP_DIR}/RuntimeJson.cpp"
"${APP_DIR}/RuntimeParameterUtils.cpp"
"${APP_DIR}/runtime/RuntimeJson.cpp"
"${APP_DIR}/runtime/RuntimeParameterUtils.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/RuntimeParameterUtilsTests.cpp"
)
target_include_directories(RuntimeParameterUtilsTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/runtime"
"${APP_DIR}/shader"
)
if(MSVC)
@@ -112,14 +206,32 @@ endif()
add_test(NAME RuntimeParameterUtilsTests COMMAND RuntimeParameterUtilsTests)
add_executable(Std140BufferTests
"${CMAKE_CURRENT_SOURCE_DIR}/tests/Std140BufferTests.cpp"
)
target_include_directories(Std140BufferTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/gl"
"${APP_DIR}/gl/shader"
)
if(MSVC)
target_compile_options(Std140BufferTests PRIVATE /W3)
endif()
add_test(NAME Std140BufferTests COMMAND Std140BufferTests)
add_executable(ShaderPackageRegistryTests
"${APP_DIR}/RuntimeJson.cpp"
"${APP_DIR}/ShaderPackageRegistry.cpp"
"${APP_DIR}/runtime/RuntimeJson.cpp"
"${APP_DIR}/shader/ShaderPackageRegistry.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/ShaderPackageRegistryTests.cpp"
)
target_include_directories(ShaderPackageRegistryTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/runtime"
"${APP_DIR}/shader"
)
if(MSVC)
@@ -128,13 +240,38 @@ endif()
add_test(NAME ShaderPackageRegistryTests COMMAND ShaderPackageRegistryTests)
add_executable(ShaderSlangValidationTests
"${APP_DIR}/runtime/RuntimeJson.cpp"
"${APP_DIR}/shader/ShaderCompiler.cpp"
"${APP_DIR}/shader/ShaderPackageRegistry.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/ShaderSlangValidationTests.cpp"
)
target_include_directories(ShaderSlangValidationTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/platform"
"${APP_DIR}/runtime"
"${APP_DIR}/shader"
)
if(MSVC)
target_compile_options(ShaderSlangValidationTests PRIVATE /W3)
endif()
add_test(NAME ShaderSlangValidationTests COMMAND ShaderSlangValidationTests)
set_tests_properties(ShaderSlangValidationTests PROPERTIES
ENVIRONMENT "SLANG_ROOT=${SLANG_ROOT}"
)
add_executable(OscServerTests
"${APP_DIR}/OscServer.cpp"
"${APP_DIR}/control/OscServer.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/OscServerTests.cpp"
)
target_include_directories(OscServerTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/control"
"${APP_DIR}/platform"
)
target_link_libraries(OscServerTests PRIVATE
@@ -147,17 +284,72 @@ endif()
add_test(NAME OscServerTests COMMAND OscServerTests)
add_custom_command(TARGET LoopThroughWithOpenGLCompositing POST_BUILD
COMMAND ${CMAKE_COMMAND} -E copy_if_different
"${GPUDIRECT_DIR}/bin/x64/dvp.dll"
"$<TARGET_FILE_DIR:LoopThroughWithOpenGLCompositing>/dvp.dll"
add_executable(VideoIOFormatTests
"${APP_DIR}/videoio/decklink/DeckLinkVideoIOFormat.cpp"
"${APP_DIR}/videoio/VideoIOFormat.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/VideoIOFormatTests.cpp"
)
target_include_directories(VideoIOFormatTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/videoio"
"${APP_DIR}/videoio/decklink"
)
if(MSVC)
target_compile_options(VideoIOFormatTests PRIVATE /W3)
endif()
add_test(NAME VideoIOFormatTests COMMAND VideoIOFormatTests)
add_executable(VideoPlayoutSchedulerTests
"${APP_DIR}/videoio/VideoPlayoutScheduler.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/VideoPlayoutSchedulerTests.cpp"
)
target_include_directories(VideoPlayoutSchedulerTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/videoio"
"${APP_DIR}/videoio/decklink"
)
if(MSVC)
target_compile_options(VideoPlayoutSchedulerTests PRIVATE /W3)
endif()
add_test(NAME VideoPlayoutSchedulerTests COMMAND VideoPlayoutSchedulerTests)
add_executable(VideoIODeviceFakeTests
"${APP_DIR}/videoio/VideoIOFormat.cpp"
"${CMAKE_CURRENT_SOURCE_DIR}/tests/VideoIODeviceFakeTests.cpp"
)
target_include_directories(VideoIODeviceFakeTests PRIVATE
"${APP_DIR}"
"${APP_DIR}/videoio"
"${APP_DIR}/videoio/decklink"
)
if(MSVC)
target_compile_options(VideoIODeviceFakeTests PRIVATE /W3)
endif()
add_test(NAME VideoIODeviceFakeTests COMMAND VideoIODeviceFakeTests)
install(TARGETS LoopThroughWithOpenGLCompositing
RUNTIME DESTINATION "."
)
install(FILES "${GPUDIRECT_DIR}/bin/x64/dvp.dll"
install(FILES ${SLANG_RUNTIME_FILES}
DESTINATION "3rdParty/slang/bin"
)
install(FILES "${SLANG_LICENSE_FILE}"
DESTINATION "third_party_notices"
RENAME "SLANG_LICENSE.txt"
)
install(FILES "${CMAKE_CURRENT_SOURCE_DIR}/shaders/SHADER_CONTRACT.md"
DESTINATION "."
)


@@ -36,7 +36,7 @@
"preArgs": "",
"typeTags": "",
"decimals": 2,
"target": "127.0.0.1:9000",
"target": "192.168.1.46:9000",
"ignoreDefaults": false,
"bypass": false,
"onCreate": "",
@@ -53,8 +53,8 @@
"visible": true,
"interaction": true,
"comments": "XY control for Fisheye Reproject pan and tilt.",
"width": 420,
"height": 420,
"width": 460,
"height": 250,
"expand": false,
"colorText": "auto",
"colorWidget": "auto",
@@ -70,14 +70,14 @@
"css": "",
"pips": true,
"snap": false,
"spring": false,
"spring": true,
"rangeX": {
"min": -60,
"max": 60
"min": -1,
"max": 1
},
"rangeY": {
"min": 45,
"max": -45
"min": 1,
"max": -1
},
"logScaleX": false,
"logScaleY": false,
@@ -94,13 +94,13 @@
"address": "/VideoShaderToys/fisheye-reproject/xy",
"preArgs": "",
"typeTags": "",
"decimals": "2f",
"target": "127.0.0.1:9000",
"decimals": "3f",
"target": "192.168.1.46:9000",
"ignoreDefaults": false,
"bypass": true,
"onCreate": "",
"onValue": "var pan = Array.isArray(value) ? Number(value[0]) : 0;\nvar tilt = Array.isArray(value) ? Number(value[1]) : 0;\nsend('127.0.0.1:9000', '/VideoShaderToys/fisheye-reproject/panDegrees', {type: 'f', value: pan});\nsend('127.0.0.1:9000', '/VideoShaderToys/fisheye-reproject/tiltDegrees', {type: 'f', value: tilt});",
"onTouch": "",
"onCreate": "var state = globalThis.__fisheyePanTiltStick = globalThis.__fisheyePanTiltStick || {};\nstate.target = '192.168.1.46:9000';\nstate.panAddress = '/VideoShaderToys/fisheye-reproject/panDegrees';\nstate.tiltAddress = '/VideoShaderToys/fisheye-reproject/tiltDegrees';\nstate.minPan = -60;\nstate.maxPan = 60;\nstate.minTilt = -45;\nstate.maxTilt = 45;\nstate.pan = 0;\nstate.tilt = 0;\nstate.stickX = 0;\nstate.stickY = 0;\nstate.tickMs = 16;\nstate.stepPan = 0.75;\nstate.stepTilt = 0.75;\nstate.deadzone = 0.14;\nstate.applyCurve = function(input) {\n var amount = Math.abs(input);\n if (amount <= state.deadzone) {\n return 0;\n }\n var normalized = (amount - state.deadzone) / (1 - state.deadzone);\n var softened = normalized * normalized * (3 - (2 * normalized));\n return (input < 0 ? -1 : 1) * softened;\n};\nif (state.timer) {\n clearInterval(state.timer);\n state.timer = null;\n}",
"onValue": "var state = globalThis.__fisheyePanTiltStick = globalThis.__fisheyePanTiltStick || {};\nvar stickX = Array.isArray(value) ? Number(value[0]) : 0;\nvar stickY = Array.isArray(value) ? Number(value[1]) : 0;\nstate.stickX = isFinite(stickX) ? state.applyCurve(stickX) : 0;\nstate.stickY = isFinite(stickY) ? state.applyCurve(stickY) : 0;",
"onTouch": "var state = globalThis.__fisheyePanTiltStick = globalThis.__fisheyePanTiltStick || {};\nif (value) {\n if (!state.timer) {\n state.timer = setInterval(function() {\n if (!state.stickX && !state.stickY) {\n return;\n }\n state.pan = Math.max(state.minPan, Math.min(state.maxPan, state.pan + (state.stickX * state.stepPan)));\n state.tilt = Math.max(state.minTilt, Math.min(state.maxTilt, state.tilt + (state.stickY * state.stepTilt)));\n send(state.target, state.panAddress, {type: 'f', value: state.pan});\n send(state.target, state.tiltAddress, {type: 'f', value: state.tilt});\n }, state.tickMs);\n }\n} else {\n state.stickX = 0;\n state.stickY = 0;\n if (state.timer) {\n clearInterval(state.timer);\n state.timer = null;\n }\n}",
"pointSize": 20,
"ephemeral": false,
"label": "",
@@ -121,7 +121,7 @@
"interaction": true,
"comments": "",
"width": 90,
"height": 420,
"height": 250,
"expand": false,
"colorText": "auto",
"colorWidget": "auto",
@@ -144,90 +144,29 @@
"gradient": [],
"snap": false,
"touchZone": "all",
"spring": false,
"spring": true,
"doubleTap": false,
"range": {
"min": 100,
"max": 10
"min": -1,
"max": 1
},
"logScale": false,
"sensitivity": 1,
"steps": "",
"origin": "auto",
"value": "",
"default": 90,
"value": 0,
"default": 0,
"linkId": "",
"address": "/VideoShaderToys/fisheye-reproject/virtualFovDegrees",
"preArgs": "",
"typeTags": "",
"decimals": 2,
"target": "127.0.0.1:9000",
"ignoreDefaults": false,
"bypass": false,
"onCreate": "",
"onValue": "",
"onTouch": ""
},
{
"type": "xy",
"top": 700,
"left": 190,
"lock": false,
"id": "Pan Pad",
"visible": true,
"interaction": true,
"comments": "",
"width": "auto",
"height": "auto",
"expand": false,
"colorText": "auto",
"colorWidget": "auto",
"colorStroke": "auto",
"colorFill": "auto",
"alphaStroke": "auto",
"alphaFillOff": "auto",
"alphaFillOn": "auto",
"lineWidth": "auto",
"borderRadius": "auto",
"padding": "auto",
"html": "",
"css": "",
"pointSize": 20,
"ephemeral": false,
"pips": true,
"label": "",
"snap": false,
"spring": false,
"rangeX": {
"min": -1,
"max": 1
},
"rangeY": {
"min": -1,
"max": 1
},
"logScaleX": false,
"logScaleY": false,
"stepsX": false,
"stepsY": false,
"clipX": "",
"clipY": "",
"axisLock": "",
"doubleTap": false,
"sensitivity": 1,
"value": "",
"default": "",
"linkId": "",
"address": "/VideoShaderToys/video-transform/pan",
"preArgs": "",
"typeTags": "",
"decimals": 2,
"target": "",
"decimals": "3f",
"target": "192.168.1.46:9000",
"ignoreDefaults": false,
"bypass": true,
"onCreate": "",
"onValue": "var x = Array.isArray(value) ? Number(value[0]) : 0;\nvar y = Array.isArray(value) ? Number(value[1]) : 0;\nsend('127.0.0.1:9000', '/VideoShaderToys/video-transform/pan', {type: 'f', value: x}, {type: 'f', value: y});",
"onTouch": ""
"onCreate": "var state = globalThis.__fisheyeFovStick = globalThis.__fisheyeFovStick || {};\nstate.target = '192.168.1.46:9000';\nstate.address = '/VideoShaderToys/fisheye-reproject/virtualFovDegrees';\nstate.minFov = 10;\nstate.maxFov = 100;\nstate.fov = 90;\nstate.stick = 0;\nstate.tickMs = 16;\nstate.stepFov = 0.6;\nstate.deadzone = 0.14;\nstate.applyCurve = function(input) {\n var amount = Math.abs(input);\n if (amount <= state.deadzone) {\n return 0;\n }\n var normalized = (amount - state.deadzone) / (1 - state.deadzone);\n var softened = normalized * normalized * (3 - (2 * normalized));\n return (input < 0 ? -1 : 1) * softened;\n};\nif (state.timer) {\n clearInterval(state.timer);\n state.timer = null;\n}",
"onValue": "var state = globalThis.__fisheyeFovStick = globalThis.__fisheyeFovStick || {};\nvar stick = Number(value);\nstate.stick = isFinite(stick) ? state.applyCurve(stick) : 0;",
"onTouch": "var state = globalThis.__fisheyeFovStick = globalThis.__fisheyeFovStick || {};\nif (value) {\n if (!state.timer) {\n state.timer = setInterval(function() {\n if (!state.stick) {\n return;\n }\n state.fov = Math.max(state.minFov, Math.min(state.maxFov, state.fov - (state.stick * state.stepFov)));\n send(state.target, state.address, {type: 'f', value: state.fov});\n }, state.tickMs);\n }\n} else {\n state.stick = 0;\n if (state.timer) {\n clearInterval(state.timer);\n state.timer = null;\n }\n}"
}
],
"tabs": []

README.md

@@ -1,8 +1,8 @@
# Video Shader
Native video shader host with an OpenGL/DeckLink render path, Slang shader packages, and a local React control UI.
Native video shader host with an OpenGL render path, pluggable video I/O boundary, DeckLink backend, Slang shader packages, and a local React control UI.
The app loads shader packages from `shaders/`, compiles Slang to GLSL at runtime, renders a configurable layer stack, and exposes a browser-based control surface over a local HTTP/WebSocket server.
The app loads shader packages from `shaders/`, compiles Slang to GLSL at runtime, renders a configurable layer stack, and exposes a browser-based control surface over a local HTTP/WebSocket server. Shader compilation is prepared off the frame path where possible, then committed on the render thread so editing shader files does not block video output for the whole compile.
## Repository Layout
@@ -15,26 +15,32 @@ The app loads shader packages from `shaders/`, compiles Slang to GLSL at runtime
- `tests/`: focused native tests for pure runtime logic.
- `.gitea/workflows/ci.yml`: Gitea Actions CI for Windows native tests and Ubuntu UI build.
Native app internals are grouped by boundary:
- `videoio/`: backend-neutral video I/O contracts, formats, and playout timing.
- `videoio/decklink/`: DeckLink-specific device adapter, callbacks, and SDK bindings.
- `gl/renderer/`: low-level OpenGL resources and extension helpers.
- `gl/pipeline/`: frame pipeline, render passes, video I/O bridge, preview/readback, and screenshots.
- `gl/shader/`: shader compilation, texture/text assets, UBO packing, and shader program ownership.
## Requirements
- Windows with Visual Studio 2022 C++ tooling.
- CMake 3.24 or newer.
- Node.js and npm for the control UI.
- Blackmagic DeckLink SDK 16.0 with the NVIDIA GPUDirect sample files available locally.
- Slang compiler available under the repo/tooling paths expected by the runtime, or otherwise discoverable by the existing app setup.
- Blackmagic Desktop Video drivers and a DeckLink device for the current production video I/O backend.
- Slang binary release with `slangc.exe`, `slang-compiler.dll`, `slang-glslang.dll`, and `LICENSE`.
The Blackmagic/GPUDirect SDK should not be committed to this repository. `CMakeLists.txt` exposes `GPUDIRECT_DIR` as a cache path so local machines and CI runners can point at their installed SDK location.
Default expected SDK path:
Default expected Slang path:
```text
3rdParty/Blackmagic DeckLink SDK 16.0/Win/Samples/NVIDIA_GPUDirect
3rdParty/slang-2026.8-windows-x86_64
```
Override example:
```powershell
cmake --preset vs2022-x64-debug -DGPUDIRECT_DIR="D:/SDKs/Blackmagic DeckLink SDK 16.0/Win/Samples/NVIDIA_GPUDirect"
cmake --preset vs2022-x64-debug -DSLANG_ROOT="D:/SDKs/slang-2026.8-windows-x86_64"
```
## Build
@@ -56,6 +62,14 @@ npm run build
The native app serves `ui/dist` when it exists, otherwise it falls back to the source UI directory during development.
The control UI provides:
- A searchable shader library for adding layers.
- Compact parameter rows with inline descriptions and OSC copy controls.
- Stack save/recall presets.
- Manual shader reload.
- Screenshot capture from the final output render target.
## Package
Build the UI, build the native Release target, then install into a portable runtime folder:
@@ -75,14 +89,19 @@ The package folder will contain:
```text
dist/VideoShader/
LoopThroughWithOpenGLCompositing.exe
dvp.dll
config/
shaders/
3rdParty/slang/bin/
ui/dist/
docs/
SHADER_CONTRACT.md
runtime/templates/
third_party_notices/
```
You can run `LoopThroughWithOpenGLCompositing.exe` directly from that folder. In packaged mode, the app resolves `config/`, `shaders/`, `ui/dist/`, and `runtime/templates/` relative to the exe folder. In development mode, it still falls back to repo-root discovery.
You can run `LoopThroughWithOpenGLCompositing.exe` directly from that folder. In packaged mode, the app resolves `config/`, `shaders/`, `3rdParty/slang/bin/slangc.exe`, `ui/dist/`, and `runtime/templates/` relative to the exe folder. In development mode, it still falls back to repo-root discovery.
The install step copies only the Slang runtime files required by the shader compiler (`slangc.exe`, `slang-compiler.dll`, and `slang-glslang.dll`) plus `third_party_notices/SLANG_LICENSE.txt`. It does not copy the full Slang release folder.
Create a zip for distribution:
@@ -109,7 +128,10 @@ Current native test coverage includes:
- JSON parsing and serialization.
- Parameter normalization and preset filename safety.
- Shader manifest parsing and package registry scanning.
- Shader manifest parsing, temporal manifest validation, and package registry scanning.
- Video I/O format helpers, v210/Ay10 row-byte math, v210 pack/unpack math, playout scheduler timing, and fake backend contract coverage.
- OSC packet parsing.
- Slang validation for every available shader package.
## Runtime Configuration
@@ -119,7 +141,9 @@ Current native test coverage includes:
{
"shaderLibrary": "shaders",
"serverPort": 8080,
"oscBindAddress": "127.0.0.1",
"oscPort": 9000,
"oscSmoothing": 0.18,
"inputVideoFormat": "1080p",
"inputFrameRate": "59.94",
"outputVideoFormat": "1080p",
@@ -130,7 +154,7 @@ Current native test coverage includes:
}
```
`inputVideoFormat`/`inputFrameRate` select the DeckLink capture mode. `outputVideoFormat`/`outputFrameRate` select the playout mode. The shader stack runs at input resolution and the final rendered frame is scaled once into the configured output mode. Common examples include `720p`/`50`, `720p`/`59.94`, `1080i`/`50`, `1080i`/`59.94`, `1080p`/`25`, `1080p`/`50`, `1080p`/`59.94`, and `2160p`/`59.94`, depending on card support.
`inputVideoFormat`/`inputFrameRate` select the video capture mode. `outputVideoFormat`/`outputFrameRate` select the playout mode. With the current DeckLink backend, supported modes depend on the installed card and driver. The shader stack runs at input resolution and the final rendered frame is scaled once into the configured output mode. Common examples include `720p`/`50`, `720p`/`59.94`, `1080i`/`50`, `1080i`/`59.94`, `1080p`/`25`, `1080p`/`50`, `1080p`/`59.94`, and `2160p`/`59.94`.
Legacy `videoFormat` and `frameRate` keys are still accepted and apply to both input and output unless the explicit input/output keys are present.
@@ -169,15 +193,25 @@ http://127.0.0.1:<serverPort>/docs
Use those docs to inspect the `/api/state`, layer control, stack preset, and reload endpoints. Live state updates are also sent over the `/ws` WebSocket.
The control UI has a **Reload shaders** button. It rescans `shaders/`, re-reads manifests, queues shader compilation, refreshes shader availability/errors, and keeps the previous working shader stack running if a changed shader fails to compile.
Each parameter row also includes a small **OSC** button. Clicking it copies that parameter's OSC route to the clipboard.
The control UI also has a **Screenshot** button. It queues a capture of the final output render target and writes a PNG under:
```text
runtime/screenshots/
```
## OSC Control
The native host also listens for local OSC parameter control on the configured `oscPort`:
The native host also listens for OSC parameter control on the configured `oscBindAddress` and `oscPort`:
```text
/VideoShaderToys/{LayerNameOrID}/{ParameterNameOrID}
```
For example, `/VideoShaderToys/VHS/intensity` updates the `intensity` parameter on the first matching `VHS` layer. The listener accepts float, integer, string, and boolean OSC values, and validates them through the same shader parameter path as the REST API. See `docs/OSC_CONTROL.md` for details.
For example, `/VideoShaderToys/VHS/intensity` updates the `intensity` parameter on the first matching `VHS` layer. The listener accepts float, integer, string, and boolean OSC values, and validates them through the same shader parameter path as the REST API. OSC updates are coalesced and applied once per render tick, UI state broadcasts are throttled, and OSC-driven parameter changes are not autosaved to `runtime/runtime_state.json`. `oscSmoothing` adds a small per-frame easing amount for numeric OSC controls such as floats, `vec2`, and `color`, while booleans, enums, text, and triggers stay immediate. The default bind address is `127.0.0.1`; set `oscBindAddress` to `0.0.0.0` to accept OSC on all IPv4 interfaces. See `docs/OSC_CONTROL.md` for details.
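The `oscSmoothing` easing can be pictured as a simple per-tick exponential approach toward the most recent OSC value. A minimal C++ sketch of that model (the function name and the exact weighting are illustrative assumptions, not the runtime's actual code):

```cpp
#include <cmath>

// Hypothetical model of oscSmoothing: each render tick the effective
// parameter keeps `smoothing` of its previous value and takes the rest
// from the latest OSC target, so larger values ease more slowly.
float easeTowardsTarget(float current, float target, float smoothing)
{
    if (smoothing <= 0.0f)
        return target; // no smoothing configured: snap immediately
    return (current * smoothing) + (target * (1.0f - smoothing));
}
```

Under this model, with the default `oscSmoothing` of `0.18`, a parameter jumping from `0` to `1` would land at about `0.82` after one tick. Booleans, enums, text, and triggers bypass the ease entirely.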
## Shader Packages
@@ -187,9 +221,11 @@ Each shader package lives under:
shaders/<id>/
shader.json
shader.slang
optional-extra-pass.slang
optional-font-or-texture-assets
```
See `SHADER_CONTRACT.md` for the manifest schema, parameter types, texture assets, temporal history support, and the Slang entry point contract.
See `SHADER_CONTRACT.md` for the manifest schema, parameter types, texture assets, font/text assets, temporal history support, optional render-pass declarations, and the Slang entry point contract. `shaders/text-overlay/` is the reference live text package and bundles Roboto Regular with its OFL license. Broken shader packages are shown as unavailable in the selector with their error text instead of preventing the app from launching.
## Generated Files
@@ -200,6 +236,7 @@ Runtime-generated files are intentionally ignored:
- `runtime/shader_cache/active_shader.frag`
- `runtime/runtime_state.json` autosaved latest stack and parameter state.
- `runtime/stack_presets/*.json`
- `runtime/screenshots/*.png` screenshots captured from the final output render target.
Only `runtime/templates/` and `runtime/README.md` are tracked.
@@ -207,7 +244,37 @@ Only `runtime/templates/` and `runtime/README.md` are tracked.
The Gitea workflow expects two act runners:
- `windows-latest`: builds the native app and runs native tests.
- `windows-2022`: builds the native app and runs native tests.
- `ubuntu-latest`: installs UI dependencies and runs the Vite build.
If your Windows runner stores the Blackmagic SDK outside the repo, configure `GPUDIRECT_DIR` in the runner environment or adjust the workflow configure command to pass `-DGPUDIRECT_DIR=...`.
The Windows jobs validate native third-party dependencies before configuring CMake. Because `3rdParty/` is ignored, configure this path on the runner or in a Gitea repository variable:
- `SLANG_ROOT`: path to the Slang binary release folder containing `bin/slangc.exe`.
The Windows runner also needs the Visual Studio ATL component installed. In Visual Studio Build Tools 2022, add `C++ ATL for latest v143 build tools (x86 & x64)`, component ID `Microsoft.VisualStudio.Component.VC.ATL`.
Example runner paths:
```text
D:\SDKs\slang-2026.8-windows-x86_64
```
If `SLANG_ROOT` is not set, the workflow falls back to the repo-local default under `3rdParty/`.
## Still Todo
- Audio.
- Genlock.
- Logs.
- Add more video I/O backends now that the DeckLink path is behind `videoio/`.
- Support a separate sound shader `.slang` file in shader packages. (https://www.shadertoy.com/view/XsBXWt)
- Add WebView2 for an embedded native control surface.
- MSDF typography rasterisation.
- More shader-library organisation and filtering as the built-in library grows.
- Optional linear-light compositing mode.
- Compute shaders or a small 1x1 or Nx1 RGBA16f render target for arbitrary data storage.
- Allow shaders to read another shader's data store by name, or output it over OSC.
- Mipmapping for shader-declared textures.
- Annotate included shaders.
- Allow exposed three-component (`vec3`) parameter controls.
- Add nearest-neighbour sampling to the extra shader pass.


@@ -1,419 +0,0 @@
# Shader Package Contract
This document explains how to create shaders for the Video Shader runtime.
Each shader is a small package under `shaders/<id>/`:
```text
shaders/my-effect/
shader.json
shader.slang
optional-texture.png
```
The runtime reads `shader.json`, generates a Slang wrapper from `runtime/templates/shader_wrapper.slang.in`, includes your `shader.slang`, compiles the result to GLSL, and exposes the shader in the local control UI.
## Quick Start
Create a folder:
```text
shaders/my-effect/
```
Add `shader.json`:
```json
{
"id": "my-effect",
"name": "My Effect",
"description": "A simple starter shader.",
"category": "Custom",
"entryPoint": "shadeVideo",
"parameters": [
{
"id": "strength",
"label": "Strength",
"type": "float",
"default": 0.5,
"min": 0.0,
"max": 1.0,
"step": 0.01
}
]
}
```
Add `shader.slang`:
```slang
float4 shadeVideo(ShaderContext context)
{
float4 color = context.sourceColor;
color.rgb = lerp(color.rgb, 1.0 - color.rgb, strength);
return saturate(color);
}
```
With `autoReload` enabled in `config/runtime-host.json`, edits to shader source, manifests, and declared texture assets are picked up automatically.
## Manifest Fields
`shader.json` is the runtime-facing description of the shader.
Required fields:
- `id`: package ID used by state/presets. Hyphenated names are OK here, for example `my-effect`.
- `name`: display name in the UI.
- `parameters`: array of exposed controls. Use `[]` if there are no user parameters.
Optional fields:
- `description`: display/help text for the shader library.
- `category`: UI grouping label.
- `entryPoint`: Slang function to call. Defaults to `shadeVideo`.
- `textures`: texture assets to load and expose as samplers.
- `temporal`: history-buffer requirements.
Shader-visible identifiers must be valid Slang-style identifiers:
- `entryPoint`
- parameter `id`
- texture `id`
Use letters, numbers, and underscores only, and start with a letter or underscore. For example, `logoTexture` is valid; `logo-texture` is not valid as a shader-visible texture ID.
## Slang Entry Point
Your shader file must implement the manifest `entryPoint`.
Default:
```slang
float4 shadeVideo(ShaderContext context)
{
return context.sourceColor;
}
```
The runtime owns the real fragment shader entry point. Your function is called from the wrapper, and the runtime handles final bypass/mix behavior:
```slang
return lerp(context.sourceColor, effectedColor, mixValue);
```
That means:
- Return the fully effected color from your function.
- Respect alpha if your shader produces an overlay or sprite.
- The runtime will blend your result with the source according to `mixAmount` and bypass state.
## ShaderContext
Your entry point receives:
```slang
struct ShaderContext
{
float2 uv;
float4 sourceColor;
float2 inputResolution;
float2 outputResolution;
float time;
float frameCount;
float mixAmount;
float bypass;
int sourceHistoryLength;
int temporalHistoryLength;
};
```
Fields:
- `uv`: normalized texture coordinates, usually `0..1`.
- `sourceColor`: decoded RGBA source video at `uv`.
- `inputResolution`: decoded input video resolution in pixels.
- `outputResolution`: shader render resolution in pixels. The current pipeline renders the shader stack at input resolution, then scales the final frame to the configured DeckLink output mode.
- `time`: elapsed runtime time in seconds.
- `frameCount`: incrementing frame counter.
- `mixAmount`: runtime mix amount.
- `bypass`: `1.0` when the layer is bypassed, otherwise `0.0`.
- `sourceHistoryLength`: number of usable source-history frames currently available.
- `temporalHistoryLength`: number of usable temporal frames currently available for this layer.
## Helper Functions
The wrapper provides:
```slang
float4 sampleVideo(float2 uv);
float4 sampleSourceHistory(int framesAgo, float2 uv);
float4 sampleTemporalHistory(int framesAgo, float2 uv);
```
`sampleVideo` samples the live decoded source video.
`sampleSourceHistory` samples previous decoded source frames. `framesAgo` is clamped into the available range. If no history is available, it falls back to `sampleVideo`.
`sampleTemporalHistory` samples previous pre-layer input frames for temporal shaders that request `preLayerInput` history. `framesAgo` is clamped into the available range. If no temporal history is available, it falls back to `sampleVideo`.
Example:
```slang
float4 shadeVideo(ShaderContext context)
{
float4 previous = sampleSourceHistory(1, context.uv);
return lerp(context.sourceColor, previous, 0.35);
}
```
## Parameters
Manifest parameters are exposed to Slang as global values with the same `id`.
Supported types:
| Manifest type | Slang type | JSON value |
| --- | --- | --- |
| `float` | `float` | number |
| `vec2` | `float2` | `[x, y]` |
| `color` | `float4` | `[r, g, b, a]` |
| `bool` | `bool` | `true` or `false` |
| `enum` | `int` | selected option index |
Float example:
```json
{
"id": "brightness",
"label": "Brightness",
"type": "float",
"default": 1.0,
"min": 0.0,
"max": 2.0,
"step": 0.01
}
```
```slang
color.rgb *= brightness;
```
Vector example:
```json
{
"id": "offset",
"label": "Offset",
"type": "vec2",
"default": [0.0, 0.0],
"min": [-0.2, -0.2],
"max": [0.2, 0.2],
"step": [0.001, 0.001]
}
```
```slang
float2 uv = clamp(context.uv + offset, float2(0.0), float2(1.0));
```
Color example:
```json
{
"id": "tint",
"label": "Tint",
"type": "color",
"default": [1.0, 1.0, 1.0, 1.0]
}
```
```slang
color *= tint;
```
Boolean example:
```json
{
"id": "invert",
"label": "Invert",
"type": "bool",
"default": false
}
```
```slang
if (invert)
color.rgb = 1.0 - color.rgb;
```
Enum example:
```json
{
"id": "mode",
"label": "Mode",
"type": "enum",
"default": "normal",
"options": [
{ "value": "normal", "label": "Normal" },
{ "value": "luma", "label": "Luma" },
{ "value": "posterize", "label": "Posterize" }
]
}
```
Enums are stored in presets/state by their string `value`, but exposed to Slang as a zero-based integer index in option order:
```slang
if (mode == 1)
{
float luma = dot(color.rgb, float3(0.2126, 0.7152, 0.0722));
color.rgb = float3(luma);
}
else if (mode == 2)
{
color.rgb = floor(color.rgb * 4.0) / 4.0;
}
```
Parameter validation:
- Float values are clamped to `min`/`max` if provided.
- `vec2` must have exactly 2 numbers.
- `color` must have exactly 4 numbers.
- Enum defaults must match one of the declared option values.
- Non-finite numeric values are rejected.
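As a concrete illustration of these rules, a manifest containing this hypothetical parameter would be rejected at load time, because the `vec2` default carries three components instead of two:

```json
{
  "id": "offset",
  "label": "Offset",
  "type": "vec2",
  "default": [0.0, 0.0, 0.0]
}
```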
## Texture Assets
Declare texture assets in the manifest:
```json
{
"textures": [
{
"id": "logoTexture",
"path": "logo.png"
}
]
}
```
Rules:
- `id` must be a valid shader identifier.
- `path` is relative to the shader package directory.
- The file must exist when the manifest is loaded.
- Texture asset changes trigger shader reload.
Texture IDs become `Sampler2D<float4>` globals:
```slang
float4 logo = logoTexture.Sample(logoUv);
```
For sprite or overlay shaders, return premultiplied-alpha output if you want clean composition:
```slang
float alpha = logo.a;
return float4(logo.rgb * alpha, alpha);
```
See `shaders/dvd-bounce/` for a complete texture-driven example.
## Temporal Shaders
Temporal shaders can request access to previous frames.
Manifest example:
```json
{
"temporal": {
"enabled": true,
"historySource": "preLayerInput",
"historyLength": 12
}
}
```
Supported `historySource` values:
- `source`: decoded source-video history from previous frames.
- `preLayerInput`: history of the input arriving at this layer before the shader runs.
`historyLength` is the requested frame count. The runtime clamps it by `maxTemporalHistoryFrames` in `config/runtime-host.json`.
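For example, a host config entry capping temporal history might look like this (the key name comes from this contract; the value shown is illustrative):

```json
{
  "maxTemporalHistoryFrames": 16
}
```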
Temporal history resets when:
- layers are added, removed, or reordered
- a layer bypass state changes
- a layer changes shader
- a shader is reloaded or recompiled
- render dimensions change
Use the available history lengths to avoid assuming history is ready on the first frame:
```slang
float4 shadeVideo(ShaderContext context)
{
if (context.temporalHistoryLength <= 0)
return context.sourceColor;
float4 oldFrame = sampleTemporalHistory(3, context.uv);
return lerp(context.sourceColor, oldFrame, 0.4);
}
```
See `shaders/temporal-ghost-trail/` and `shaders/temporal-low-fps/` for examples.
## Coordinate And Color Notes
- `uv` is normalized.
- Use `context.outputResolution` for pixel-sized effects.
- Use `context.inputResolution` when sampling source video by input pixel size.
- `sourceColor` and `sampleVideo` return RGBA values in normalized `0..1` range.
- Prefer `saturate(color)` or explicit `clamp` before returning if your math can overshoot.
Pixel-size example:
```slang
float2 pixel = 1.0 / max(context.outputResolution, float2(1.0));
float4 right = sampleVideo(context.uv + float2(pixel.x, 0.0));
```
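If your math can overshoot, clamp before returning, as noted above. A sketch (the `1.8` gain is arbitrary, for illustration only):
```slang
float4 boosted = context.sourceColor * 1.8; // arbitrary gain that can exceed 1.0
return saturate(boosted);                   // keep all channels in 0..1
```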
## Reload And Generated Files
When a shader compiles, the runtime writes generated files under `runtime/shader_cache/`:
- `active_shader_wrapper.slang`
- `active_shader.raw.frag`
- `active_shader.frag`
These files are ignored by git and are useful for debugging compiler output. If a shader fails to compile, inspect the wrapper first; it shows the exact generated Slang code including your included shader.
## Common Pitfalls
- Do not use hyphens in parameter IDs, texture IDs, or entry point names.
- Do not declare your own `ShaderContext`, `GlobalParams`, `sampleVideo`, `sampleSourceHistory`, or `sampleTemporalHistory`.
- Do not write a `[shader("fragment")]` entry point in `shader.slang`; the runtime provides it.
- Remember enum globals are integer indexes, not strings.
- Declare every texture in `shader.json`; undeclared texture samplers will not be bound.
- Keep temporal history requests modest. They consume texture units and memory and are capped by runtime config.
- If a parameter appears in the UI but not in Slang, the shader may still compile, but the control has no effect.
- If a Slang name collides with a generated global, rename your parameter or local symbol.
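As an example of the enum pitfall above: an enum parameter declared with `options` arrives in Slang as an integer index, so compare against indexes, never option strings. A sketch using a hypothetical `blendMode` parameter:
```slang
// `blendMode` is a hypothetical enum parameter:
// 0 = first entry in its `options` array, 1 = second, and so on.
float4 color = context.sourceColor;
if (blendMode == 1) // e.g. an "invert" option
    color.rgb = 1.0 - color.rgb;
return color;
```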
## Minimal Package Checklist
Before committing a new shader package:
- `shader.json` is valid JSON.
- `id` is unique across `shaders/`.
- `entryPoint`, parameter IDs, and texture IDs are valid identifiers.
- `shader.slang` implements the configured entry point.
- Texture files referenced by `textures` exist.
- Enum defaults are present in their `options`.
- Temporal shaders handle short or empty history gracefully.
- The app can reload and compile the shader without errors.


@@ -46,6 +46,10 @@
#include "resource.h"
#include "OpenGLComposite.h"
#include <algorithm>
#include <shellapi.h>
#include <string>
#ifndef WGL_CONTEXT_MAJOR_VERSION_ARB
#define WGL_CONTEXT_MAJOR_VERSION_ARB 0x2091
#endif
@@ -65,6 +69,169 @@
LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam);
typedef HGLRC (WINAPI* PFNWGLCREATECONTEXTATTRIBSARBPROC)(HDC hdc, HGLRC hShareContext, const int* attribList);
namespace
{
const int kStatusPanelWidth = 680;
const int kStatusPanelHeight = 92;
const int kStatusPadding = 8;
const int kStatusLabelWidth = 58;
const int kStatusButtonWidth = 86;
const int kStatusRowHeight = 24;
const int kStatusGap = 6;
const UINT kCreateStatusStripMessage = WM_APP + 1;
enum StatusControlId
{
kControlUrlEditId = 2001,
kDocsUrlEditId = 2002,
kOscAddressEditId = 2003,
kOpenControlButtonId = 2004,
kOpenDocsButtonId = 2005
};
struct StatusStripControls
{
HWND panel = NULL;
HWND controlLabel = NULL;
HWND controlUrl = NULL;
HWND openControl = NULL;
HWND docsLabel = NULL;
HWND docsUrl = NULL;
HWND openDocs = NULL;
HWND oscLabel = NULL;
HWND oscAddress = NULL;
};
bool StatusStripCreated(const StatusStripControls& controls)
{
return controls.panel != NULL;
}
HWND CreateStatusChild(HWND parent, const char* className, const char* text, DWORD style, DWORD exStyle, int controlId)
{
return CreateWindowExA(
exStyle,
className,
text,
WS_CHILD | WS_VISIBLE | WS_CLIPSIBLINGS | style,
0,
0,
0,
0,
parent,
reinterpret_cast<HMENU>(static_cast<INT_PTR>(controlId)),
reinterpret_cast<HINSTANCE>(GetWindowLongPtr(parent, GWLP_HINSTANCE)),
NULL);
}
void CreateStatusStrip(HWND hWnd, StatusStripControls& controls)
{
controls.panel = CreateStatusChild(hWnd, "STATIC", "", SS_LEFT, WS_EX_CLIENTEDGE, 0);
controls.controlLabel = CreateStatusChild(hWnd, "STATIC", "Control", SS_LEFT, 0, 0);
controls.controlUrl = CreateStatusChild(hWnd, "EDIT", "", ES_AUTOHSCROLL | ES_READONLY | WS_TABSTOP, WS_EX_CLIENTEDGE, kControlUrlEditId);
controls.openControl = CreateStatusChild(hWnd, "BUTTON", "Open", BS_PUSHBUTTON | WS_TABSTOP, 0, kOpenControlButtonId);
controls.docsLabel = CreateStatusChild(hWnd, "STATIC", "Docs", SS_LEFT, 0, 0);
controls.docsUrl = CreateStatusChild(hWnd, "EDIT", "", ES_AUTOHSCROLL | ES_READONLY | WS_TABSTOP, WS_EX_CLIENTEDGE, kDocsUrlEditId);
controls.openDocs = CreateStatusChild(hWnd, "BUTTON", "Open", BS_PUSHBUTTON | WS_TABSTOP, 0, kOpenDocsButtonId);
controls.oscLabel = CreateStatusChild(hWnd, "STATIC", "OSC", SS_LEFT, 0, 0);
controls.oscAddress = CreateStatusChild(hWnd, "EDIT", "", ES_AUTOHSCROLL | ES_READONLY | WS_TABSTOP, WS_EX_CLIENTEDGE, kOscAddressEditId);
HFONT guiFont = reinterpret_cast<HFONT>(GetStockObject(DEFAULT_GUI_FONT));
HWND children[] = {
controls.controlLabel,
controls.controlUrl,
controls.openControl,
controls.docsLabel,
controls.docsUrl,
controls.openDocs,
controls.oscLabel,
controls.oscAddress
};
for (HWND child : children)
{
if (child)
SendMessage(child, WM_SETFONT, reinterpret_cast<WPARAM>(guiFont), TRUE);
}
SetWindowTextA(controls.controlUrl, "Starting control server...");
SetWindowTextA(controls.docsUrl, "Starting API docs...");
SetWindowTextA(controls.oscAddress, "Starting OSC listener...");
}
void RaiseStatusControls(const StatusStripControls& controls)
{
if (!StatusStripCreated(controls))
return;
SetWindowPos(controls.panel, HWND_BOTTOM, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
HWND interactiveControls[] = {
controls.controlLabel,
controls.controlUrl,
controls.openControl,
controls.docsLabel,
controls.docsUrl,
controls.openDocs,
controls.oscLabel,
controls.oscAddress
};
for (HWND control : interactiveControls)
{
if (control)
SetWindowPos(control, HWND_TOP, 0, 0, 0, 0, SWP_NOMOVE | SWP_NOSIZE | SWP_NOACTIVATE);
}
}
void LayoutStatusStrip(HWND hWnd, const StatusStripControls& controls)
{
RECT clientRect = {};
if (!GetClientRect(hWnd, &clientRect) || !controls.panel)
return;
const int clientWidth = static_cast<int>(clientRect.right - clientRect.left);
const int clientHeight = static_cast<int>(clientRect.bottom - clientRect.top);
const int panelWidth = std::max(280, std::min(kStatusPanelWidth, clientWidth - (kStatusPadding * 2)));
const int panelHeight = kStatusPanelHeight;
const int panelLeft = kStatusPadding;
const int panelTop = std::max(kStatusPadding, clientHeight - panelHeight - kStatusPadding);
MoveWindow(controls.panel, panelLeft, panelTop, panelWidth, panelHeight, TRUE);
const int rowX = panelLeft + kStatusPadding;
const int editX = rowX + kStatusLabelWidth + kStatusGap;
const int buttonX = panelLeft + panelWidth - kStatusPadding - kStatusButtonWidth;
const int editWidth = std::max(80, buttonX - editX - kStatusGap);
const int oscWidth = std::max(80, panelLeft + panelWidth - editX - kStatusPadding);
const int row1 = panelTop + kStatusPadding;
const int row2 = row1 + kStatusRowHeight + kStatusGap;
const int row3 = row2 + kStatusRowHeight + kStatusGap;
MoveWindow(controls.controlLabel, rowX, row1 + 3, kStatusLabelWidth, kStatusRowHeight, TRUE);
MoveWindow(controls.controlUrl, editX, row1, editWidth, kStatusRowHeight, TRUE);
MoveWindow(controls.openControl, buttonX, row1, kStatusButtonWidth, kStatusRowHeight, TRUE);
MoveWindow(controls.docsLabel, rowX, row2 + 3, kStatusLabelWidth, kStatusRowHeight, TRUE);
MoveWindow(controls.docsUrl, editX, row2, editWidth, kStatusRowHeight, TRUE);
MoveWindow(controls.openDocs, buttonX, row2, kStatusButtonWidth, kStatusRowHeight, TRUE);
MoveWindow(controls.oscLabel, rowX, row3 + 3, kStatusLabelWidth, kStatusRowHeight, TRUE);
MoveWindow(controls.oscAddress, editX, row3, oscWidth, kStatusRowHeight, TRUE);
RaiseStatusControls(controls);
}
void UpdateStatusStrip(const StatusStripControls& controls, const OpenGLComposite& composite)
{
if (!StatusStripCreated(controls))
return;
SetWindowTextA(controls.controlUrl, composite.GetControlUrl().c_str());
SetWindowTextA(controls.docsUrl, composite.GetDocsUrl().c_str());
SetWindowTextA(controls.oscAddress, composite.GetOscAddress().c_str());
}
void OpenUrl(const char* url)
{
ShellExecuteA(NULL, "open", url, NULL, NULL, SW_SHOWNORMAL);
}
}
void ShowUnhandledExceptionMessage(const char* prefix)
{
try
@@ -203,6 +370,7 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
static HDC hDC = NULL; // Private GDI Device context
static OpenGLComposite* pOpenGLComposite = NULL;
static bool sInteractiveResize = false;
static StatusStripControls sStatusStrip;
switch (message)
{
@@ -251,8 +419,16 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
wglMakeCurrent( NULL, NULL );
if (pOpenGLComposite->Start())
{
PostMessage(hWnd, kCreateStatusStripMessage, 0, 0);
break; // success
}
MessageBoxA(NULL, "The OpenGL/DeckLink runtime initialized, but playout failed to start. See the previous DeckLink start message for the failing call.", "Startup failed", MB_OK | MB_ICONERROR);
}
else
{
MessageBoxA(NULL, "The OpenGL/DeckLink runtime failed to initialize. See the previous initialization message for the failing call.", "Startup failed", MB_OK | MB_ICONERROR);
}
// Failed to initialize - cleanup
delete pOpenGLComposite;
@@ -268,6 +444,25 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
}
}
case kCreateStatusStripMessage:
if (pOpenGLComposite)
{
if (!StatusStripCreated(sStatusStrip))
CreateStatusStrip(hWnd, sStatusStrip);
UpdateStatusStrip(sStatusStrip, *pOpenGLComposite);
LayoutStatusStrip(hWnd, sStatusStrip);
RECT clientRect = {};
if (GetClientRect(hWnd, &clientRect))
{
pOpenGLComposite->resizeGL(
static_cast<WORD>(clientRect.right - clientRect.left),
static_cast<WORD>(clientRect.bottom - clientRect.top));
}
InvalidateRect(hWnd, NULL, FALSE);
}
break;
case WM_DESTROY:
try
{
@@ -300,7 +495,11 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
{
RECT clientRect = {};
if (GetClientRect(hWnd, &clientRect))
pOpenGLComposite->resizeGL(static_cast<WORD>(clientRect.right - clientRect.left), static_cast<WORD>(clientRect.bottom - clientRect.top));
{
pOpenGLComposite->resizeGL(
static_cast<WORD>(clientRect.right - clientRect.left),
static_cast<WORD>(clientRect.bottom - clientRect.top));
}
}
InvalidateRect(hWnd, NULL, FALSE);
break;
@@ -308,6 +507,8 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
case WM_SIZE:
try
{
if (StatusStripCreated(sStatusStrip))
LayoutStatusStrip(hWnd, sStatusStrip);
if (pOpenGLComposite)
pOpenGLComposite->resizeGL(LOWORD(lParam), HIWORD(lParam));
}
@@ -330,8 +531,9 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
if (!sInteractiveResize && pOpenGLComposite)
{
wglMakeCurrent(hDC, hRC);
pOpenGLComposite->paintGL();
pOpenGLComposite->paintGL(true);
wglMakeCurrent( NULL, NULL );
RaiseStatusControls(sStatusStrip);
}
}
catch (...)
@@ -356,6 +558,28 @@ LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
}
break;
case WM_COMMAND:
switch (LOWORD(wParam))
{
case kOpenControlButtonId:
if (pOpenGLComposite)
{
std::string url = pOpenGLComposite->GetControlUrl();
OpenUrl(url.c_str());
}
break;
case kOpenDocsButtonId:
if (pOpenGLComposite)
{
std::string url = pOpenGLComposite->GetDocsUrl();
OpenUrl(url.c_str());
}
break;
default:
return DefWindowProc(hWnd, message, wParam, lParam);
}
break;
default:
return (DefWindowProc(hWnd, message, wParam, lParam));
}


@@ -1,4 +1,4 @@
<?xml version="1.0" encoding="utf-8"?>
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build" ToolsVersion="15.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup Label="ProjectConfigurations">
<ProjectConfiguration Include="Debug|Win32">
@@ -89,7 +89,7 @@
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">
<ClCompile>
<Optimization>Disabled</Optimization>
<AdditionalIncludeDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalIncludeDirectories>.;control;gl;gl\pipeline;gl\renderer;gl\shader;videoio;videoio\decklink;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;_DEBUG;_WINDOWS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
<RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
@@ -99,15 +99,11 @@
<LanguageStandard>stdcpp17</LanguageStandard>
</ClCompile>
<Link>
<AdditionalDependencies>dvp.lib;opengl32.lib;Glu32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\lib\win32;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>opengl32.lib;Glu32.lib;Windowscodecs.lib;Ole32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Windows</SubSystem>
<TargetMachine>MachineX86</TargetMachine>
</Link>
<PostBuildEvent>
<Command>copy /y "..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\bin\$(Platform)\dvp.dll" "$(TargetDir)"</Command>
</PostBuildEvent>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">
<Midl>
@@ -115,7 +111,7 @@
</Midl>
<ClCompile>
<Optimization>Disabled</Optimization>
<AdditionalIncludeDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalIncludeDirectories>.;control;gl;gl\pipeline;gl\renderer;gl\shader;videoio;videoio\decklink;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;_DEBUG;_WINDOWS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<BasicRuntimeChecks>EnableFastChecks</BasicRuntimeChecks>
<RuntimeLibrary>MultiThreadedDebugDLL</RuntimeLibrary>
@@ -125,21 +121,17 @@
<LanguageStandard>stdcpp17</LanguageStandard>
</ClCompile>
<Link>
<AdditionalDependencies>dvp.lib;opengl32.lib;Glu32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\lib\x64;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>opengl32.lib;Glu32.lib;Windowscodecs.lib;Ole32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Windows</SubSystem>
<TargetMachine>MachineX64</TargetMachine>
</Link>
<PostBuildEvent>
<Command>copy /y "..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\bin\$(Platform)\dvp.dll" "$(TargetDir)"</Command>
</PostBuildEvent>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">
<ClCompile>
<Optimization>MaxSpeed</Optimization>
<IntrinsicFunctions>true</IntrinsicFunctions>
<AdditionalIncludeDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalIncludeDirectories>.;control;gl;gl\pipeline;gl\renderer;gl\shader;videoio;videoio\decklink;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;NDEBUG;_WINDOWS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
<FunctionLevelLinking>true</FunctionLevelLinking>
@@ -149,18 +141,13 @@
<LanguageStandard>stdcpp17</LanguageStandard>
</ClCompile>
<Link>
<AdditionalDependencies>dvp.lib;opengl32.lib;Glu32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\lib\win32;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>opengl32.lib;Glu32.lib;Windowscodecs.lib;Ole32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Windows</SubSystem>
<OptimizeReferences>true</OptimizeReferences>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<TargetMachine>MachineX86</TargetMachine>
</Link>
<PostBuildEvent>
<Message>Copy nececssary DLLs to target directory</Message>
<Command>copy /y "..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\bin\$(Platform)\dvp.dll" "$(TargetDir)"</Command>
</PostBuildEvent>
</ItemDefinitionGroup>
<ItemDefinitionGroup Condition="'$(Configuration)|$(Platform)'=='Release|x64'">
<Midl>
@@ -169,7 +156,7 @@
<ClCompile>
<Optimization>MaxSpeed</Optimization>
<IntrinsicFunctions>true</IntrinsicFunctions>
<AdditionalIncludeDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<AdditionalIncludeDirectories>.;control;gl;gl\pipeline;gl\renderer;gl\shader;videoio;videoio\decklink;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
<PreprocessorDefinitions>WIN32;NDEBUG;_WINDOWS;%(PreprocessorDefinitions)</PreprocessorDefinitions>
<RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>
<FunctionLevelLinking>true</FunctionLevelLinking>
@@ -179,39 +166,65 @@
<LanguageStandard>stdcpp17</LanguageStandard>
</ClCompile>
<Link>
<AdditionalDependencies>dvp.lib;opengl32.lib;Glu32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<AdditionalLibraryDirectories>..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\lib\x64;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>
<AdditionalDependencies>opengl32.lib;Glu32.lib;Windowscodecs.lib;Ole32.lib;%(AdditionalDependencies)</AdditionalDependencies>
<GenerateDebugInformation>true</GenerateDebugInformation>
<SubSystem>Windows</SubSystem>
<OptimizeReferences>true</OptimizeReferences>
<EnableCOMDATFolding>true</EnableCOMDATFolding>
<TargetMachine>MachineX64</TargetMachine>
</Link>
<PostBuildEvent>
<Command>copy /y "..\..\3rdParty\Blackmagic DeckLink SDK 16.0\Win\Samples\NVIDIA_GPUDirect\bin\$(Platform)\dvp.dll" "$(TargetDir)"</Command>
</PostBuildEvent>
</ItemDefinitionGroup>
<ItemGroup>
<ClCompile Include="GLExtensions.cpp" />
<ClCompile Include="gl\renderer\GLExtensions.cpp" />
<ClCompile Include="LoopThroughWithOpenGLCompositing.cpp" />
<ClCompile Include="OpenGLComposite.cpp" />
<ClCompile Include="gl\OpenGLComposite.cpp" />
<ClCompile Include="gl\pipeline\OpenGLRenderPass.cpp" />
<ClCompile Include="gl\pipeline\OpenGLRenderPipeline.cpp" />
<ClCompile Include="gl\renderer\OpenGLRenderer.cpp" />
<ClCompile Include="gl\renderer\RenderTargetPool.cpp" />
<ClCompile Include="gl\shader\OpenGLShaderPrograms.cpp" />
<ClCompile Include="gl\pipeline\PngScreenshotWriter.cpp" />
<ClCompile Include="gl\shader\ShaderBuildQueue.cpp" />
<ClCompile Include="gl\pipeline\TemporalHistoryBuffers.cpp" />
<ClCompile Include="gl\pipeline\OpenGLVideoIOBridge.cpp" />
<ClCompile Include="stdafx.cpp">
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'">Create</PrecompiledHeader>
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Debug|x64'">Create</PrecompiledHeader>
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release|Win32'">Create</PrecompiledHeader>
<PrecompiledHeader Condition="'$(Configuration)|$(Platform)'=='Release|x64'">Create</PrecompiledHeader>
</ClCompile>
<ClCompile Include="VideoFrameTransfer.cpp" />
<ClCompile Include="DeckLinkAPI_i.c" />
<ClCompile Include="videoio\decklink\DeckLinkAPI_i.c" />
<ClCompile Include="control\RuntimeServices.cpp" />
<ClCompile Include="videoio\decklink\DeckLinkSession.cpp" />
<ClCompile Include="videoio\decklink\DeckLinkVideoIOFormat.cpp" />
<ClCompile Include="runtime\RuntimeClock.cpp" />
<ClCompile Include="videoio\VideoIOFormat.cpp" />
<ClCompile Include="videoio\VideoPlayoutScheduler.cpp" />
</ItemGroup>
<ItemGroup>
<ClInclude Include="GLExtensions.h" />
<ClInclude Include="gl\renderer\GLExtensions.h" />
<ClInclude Include="LoopThroughWithOpenGLCompositing.h" />
<ClInclude Include="OpenGLComposite.h" />
<ClInclude Include="gl\OpenGLComposite.h" />
<ClInclude Include="gl\pipeline\OpenGLRenderPass.h" />
<ClInclude Include="gl\pipeline\OpenGLRenderPipeline.h" />
<ClInclude Include="gl\pipeline\RenderPassDescriptor.h" />
<ClInclude Include="gl\renderer\OpenGLRenderer.h" />
<ClInclude Include="gl\renderer\RenderTargetPool.h" />
<ClInclude Include="gl\shader\OpenGLShaderPrograms.h" />
<ClInclude Include="gl\pipeline\PngScreenshotWriter.h" />
<ClInclude Include="gl\shader\ShaderBuildQueue.h" />
<ClInclude Include="gl\pipeline\TemporalHistoryBuffers.h" />
<ClInclude Include="gl\pipeline\OpenGLVideoIOBridge.h" />
<ClInclude Include="resource.h" />
<ClInclude Include="stdafx.h" />
<ClInclude Include="targetver.h" />
<ClInclude Include="VideoFrameTransfer.h" />
<ClInclude Include="control\RuntimeServices.h" />
<ClInclude Include="videoio\decklink\DeckLinkSession.h" />
<ClInclude Include="videoio\decklink\DeckLinkVideoIOFormat.h" />
<ClInclude Include="runtime\RuntimeClock.h" />
<ClInclude Include="videoio\VideoIOFormat.h" />
<ClInclude Include="videoio\VideoIOTypes.h" />
<ClInclude Include="videoio\VideoPlayoutScheduler.h" />
</ItemGroup>
<ItemGroup>
<Image Include="LoopThroughWithOpenGLCompositing.ico" />


@@ -1,4 +1,4 @@
<?xml version="1.0" encoding="utf-8"?>
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<ItemGroup>
<Filter Include="Source Files">
@@ -18,33 +18,105 @@
</Filter>
</ItemGroup>
<ItemGroup>
<ClCompile Include="GLExtensions.cpp">
<ClCompile Include="gl\renderer\GLExtensions.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="LoopThroughWithOpenGLCompositing.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="OpenGLComposite.cpp">
<ClCompile Include="gl\OpenGLComposite.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\pipeline\OpenGLRenderPass.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\pipeline\OpenGLRenderPipeline.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\renderer\OpenGLRenderer.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\renderer\RenderTargetPool.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\shader\OpenGLShaderPrograms.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\pipeline\PngScreenshotWriter.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\shader\ShaderBuildQueue.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\pipeline\TemporalHistoryBuffers.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="gl\pipeline\OpenGLVideoIOBridge.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="stdafx.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="VideoFrameTransfer.cpp">
<ClCompile Include="videoio\decklink\DeckLinkAPI_i.c">
<Filter>DeckLink API</Filter>
</ClCompile>
<ClCompile Include="control\RuntimeServices.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="DeckLinkAPI_i.c">
<Filter>DeckLink API</Filter>
<ClCompile Include="videoio\decklink\DeckLinkSession.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="videoio\decklink\DeckLinkVideoIOFormat.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="runtime\RuntimeClock.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="videoio\VideoIOFormat.cpp">
<Filter>Source Files</Filter>
</ClCompile>
<ClCompile Include="videoio\VideoPlayoutScheduler.cpp">
<Filter>Source Files</Filter>
</ClCompile>
</ItemGroup>
<ItemGroup>
<ClInclude Include="GLExtensions.h">
<ClInclude Include="gl\renderer\GLExtensions.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="LoopThroughWithOpenGLCompositing.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="OpenGLComposite.h">
<ClInclude Include="gl\OpenGLComposite.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\pipeline\OpenGLRenderPass.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\pipeline\OpenGLRenderPipeline.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\pipeline\RenderPassDescriptor.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\renderer\OpenGLRenderer.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\renderer\RenderTargetPool.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\shader\OpenGLShaderPrograms.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\pipeline\PngScreenshotWriter.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\shader\ShaderBuildQueue.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\pipeline\TemporalHistoryBuffers.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="gl\pipeline\OpenGLVideoIOBridge.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="resource.h">
@@ -56,7 +128,25 @@
<ClInclude Include="targetver.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="VideoFrameTransfer.h">
<ClInclude Include="control\RuntimeServices.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="videoio\decklink\DeckLinkSession.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="videoio\decklink\DeckLinkVideoIOFormat.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="runtime\RuntimeClock.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="videoio\VideoIOFormat.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="videoio\VideoIOTypes.h">
<Filter>Header Files</Filter>
</ClInclude>
<ClInclude Include="videoio\VideoPlayoutScheduler.h">
<Filter>Header Files</Filter>
</ClInclude>
</ItemGroup>

File diff suppressed because it is too large.


@@ -1,361 +0,0 @@
/* -LICENSE-START-
** Copyright (c) 2012 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation (the
** "Software") to use, reproduce, display, distribute, sub-license, execute,
** and transmit the Software, and to prepare derivative works of the Software,
** and to permit third-parties to whom the Software is furnished to do so, in
** accordance with:
**
** (1) if the Software is obtained from Blackmagic Design, the End User License
** Agreement for the Software Development Kit ("EULA") available at
** https://www.blackmagicdesign.com/EULA/DeckLinkSDK; or
**
** (2) if the Software is obtained from any third party, such licensing terms
** as notified by that third party,
**
** and all subject to the following:
**
** (3) the copyright notices in the Software and this entire statement,
** including the above license grant, this restriction and the following
** disclaimer, must be included in all copies of the Software, in whole or in
** part, and all derivative works of the Software, unless such copies or
** derivative works are solely in the form of machine-executable object code
** generated by a source language processor.
**
** (4) THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
** OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
**
** A copy of the Software is available free of charge at
** https://www.blackmagicdesign.com/desktopvideo_sdk under the EULA.
**
** -LICENSE-END-
*/
#ifndef __OPENGL_COMPOSITE_H__
#define __OPENGL_COMPOSITE_H__
#include <windows.h>
#include <process.h>
#include <tchar.h>
#include <gl/gl.h>
#include <gl/glu.h>
#include <objbase.h>
#include <atlbase.h>
#include <comutil.h>
#include "DeckLinkAPI_h.h"
#include "VideoFrameTransfer.h"
#include "RuntimeHost.h"
#include <atomic>
#include <functional>
#include <map>
#include <memory>
#include <vector>
#include <deque>
class PlayoutDelegate;
class CaptureDelegate;
class PinnedMemoryAllocator;
class ControlServer;
class OscServer;
class OpenGLComposite
{
public:
OpenGLComposite(HWND hWnd, HDC hDC, HGLRC hRC);
~OpenGLComposite();
bool InitDeckLink();
bool Start();
bool Stop();
bool ReloadShader();
std::string GetRuntimeStateJson() const;
bool AddLayer(const std::string& shaderId, std::string& error);
bool RemoveLayer(const std::string& layerId, std::string& error);
bool MoveLayer(const std::string& layerId, int direction, std::string& error);
bool MoveLayerToIndex(const std::string& layerId, std::size_t targetIndex, std::string& error);
bool SetLayerBypass(const std::string& layerId, bool bypassed, std::string& error);
bool SetLayerShader(const std::string& layerId, const std::string& shaderId, std::string& error);
bool UpdateLayerParameterJson(const std::string& layerId, const std::string& parameterId, const std::string& valueJson, std::string& error);
bool UpdateLayerParameterByControlKeyJson(const std::string& layerKey, const std::string& parameterKey, const std::string& valueJson, std::string& error);
bool ResetLayerParameters(const std::string& layerId, std::string& error);
bool SaveStackPreset(const std::string& presetName, std::string& error);
bool LoadStackPreset(const std::string& presetName, std::string& error);
void resizeGL(WORD width, WORD height);
void paintGL();
void VideoFrameArrived(IDeckLinkVideoInputFrame* inputFrame, bool hasNoInputSource);
void PlayoutFrameCompleted(IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult result);
private:
void resizeWindow(int width, int height);
bool CheckOpenGLExtensions();
CaptureDelegate* mCaptureDelegate;
PlayoutDelegate* mPlayoutDelegate;
HWND hGLWnd;
HDC hGLDC;
HGLRC hGLRC;
CRITICAL_SECTION pMutex;
// DeckLink
IDeckLinkInput* mDLInput;
IDeckLinkOutput* mDLOutput;
IDeckLinkKeyer* mDLKeyer;
std::deque<IDeckLinkMutableVideoFrame*> mDLOutputVideoFrameQueue;
PinnedMemoryAllocator* mPlayoutAllocator;
BMDTimeValue mFrameDuration;
BMDTimeScale mFrameTimescale;
unsigned mTotalPlayoutFrames;
unsigned mInputFrameWidth;
unsigned mInputFrameHeight;
unsigned mOutputFrameWidth;
unsigned mOutputFrameHeight;
std::string mInputDisplayModeName;
std::string mOutputDisplayModeName;
bool mHasNoInputSource;
std::string mDeckLinkOutputModelName;
bool mDeckLinkSupportsInternalKeying;
bool mDeckLinkSupportsExternalKeying;
bool mDeckLinkKeyerInterfaceAvailable;
bool mDeckLinkExternalKeyingActive;
std::string mDeckLinkStatusMessage;
// OpenGL data
bool mFastTransferExtensionAvailable;
GLuint mCaptureTexture;
GLuint mDecodedTexture;
GLuint mLayerTempTexture;
GLuint mFBOTexture;
GLuint mOutputTexture;
GLuint mUnpinnedTextureBuffer;
GLuint mDecodeFrameBuf;
GLuint mLayerTempFrameBuf;
GLuint mIdFrameBuf;
GLuint mOutputFrameBuf;
GLuint mIdColorBuf;
GLuint mIdDepthBuf;
GLuint mFullscreenVAO;
GLuint mGlobalParamsUBO;
GLuint mDecodeProgram;
GLuint mDecodeVertexShader;
GLuint mDecodeFragmentShader;
GLsizeiptr mGlobalParamsUBOSize;
int mViewWidth;
int mViewHeight;
std::unique_ptr<RuntimeHost> mRuntimeHost;
std::unique_ptr<ControlServer> mControlServer;
std::unique_ptr<OscServer> mOscServer;
struct LayerProgram
{
struct TextureBinding
{
std::string samplerName;
std::filesystem::path sourcePath;
GLuint texture = 0;
};
std::string layerId;
std::string shaderId;
GLuint program = 0;
GLuint vertexShader = 0;
GLuint fragmentShader = 0;
std::vector<TextureBinding> textureBindings;
};
std::vector<LayerProgram> mLayerPrograms;
struct HistorySlot
{
GLuint texture = 0;
GLuint framebuffer = 0;
};
struct HistoryRing
{
std::vector<HistorySlot> slots;
std::size_t nextWriteIndex = 0;
std::size_t filledCount = 0;
unsigned effectiveLength = 0;
TemporalHistorySource historySource = TemporalHistorySource::None;
};
HistoryRing mSourceHistoryRing;
std::map<std::string, HistoryRing> mPreLayerHistoryByLayerId;
bool mTemporalHistoryNeedsReset;
bool InitOpenGLState();
bool compileLayerPrograms(int errorMessageSize, char* errorMessage);
bool compileSingleLayerProgram(const RuntimeRenderState& state, LayerProgram& layerProgram, int errorMessageSize, char* errorMessage);
bool compileDecodeShader(int errorMessageSize, char* errorMessage);
void destroyLayerPrograms();
void destroySingleLayerProgram(LayerProgram& layerProgram);
void destroyDecodeShaderProgram();
void renderDecodePass();
void renderShaderProgram(GLuint sourceTexture, GLuint destinationFrameBuffer, const LayerProgram& layerProgram, const RuntimeRenderState& state);
bool loadTextureAsset(const ShaderTextureAsset& textureAsset, GLuint& textureId, std::string& error);
void bindLayerTextureAssets(const LayerProgram& layerProgram);
void renderEffect();
bool PollRuntimeChanges();
void broadcastRuntimeState();
bool updateGlobalParamsBuffer(const RuntimeRenderState& state, unsigned availableSourceHistoryLength, unsigned availableTemporalHistoryLength);
bool validateTemporalTextureUnitBudget(const std::vector<RuntimeRenderState>& layerStates, std::string& error) const;
bool ensureTemporalHistoryResources(const std::vector<RuntimeRenderState>& layerStates, std::string& error);
bool createHistoryRing(HistoryRing& ring, unsigned effectiveLength, TemporalHistorySource historySource, std::string& error);
void destroyHistoryRing(HistoryRing& ring);
void destroyTemporalHistoryResources();
void resetTemporalHistoryState();
void pushFramebufferToHistoryRing(GLuint sourceFramebuffer, HistoryRing& ring);
void bindHistorySamplers(const RuntimeRenderState& state, GLuint currentSourceTexture);
GLuint resolveHistoryTexture(const HistoryRing& ring, GLuint fallbackTexture, std::size_t framesAgo) const;
unsigned sourceHistoryAvailableCount() const;
unsigned temporalHistoryAvailableCountForLayer(const std::string& layerId) const;
};
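The HistoryRing bookkeeping above (`nextWriteIndex`, `filledCount`) implies a fixed-capacity ring where "frames ago" is resolved relative to the most recent write. A minimal sketch of that index math, with hypothetical names — this is not the actual `resolveHistoryTexture` implementation, just the arithmetic it implies:

```cpp
#include <cstddef>

// Resolve which ring slot holds the frame written `framesAgo` frames in the
// past. Returns `capacity` (an invalid index) when that frame has not been
// written yet, so the caller can substitute a fallback texture.
std::size_t resolveHistorySlot(std::size_t capacity,
                               std::size_t nextWriteIndex,
                               std::size_t filledCount,
                               std::size_t framesAgo)
{
    if (capacity == 0 || framesAgo >= filledCount)
        return capacity; // not enough history yet: use fallback
    // The most recent write lives one slot behind nextWriteIndex.
    return (nextWriteIndex + capacity - 1 - framesAgo) % capacity;
}
```

The `+ capacity` before the modulo keeps the subtraction from wrapping below zero on unsigned arithmetic.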
////////////////////////////////////////////
// PinnedMemoryAllocator
////////////////////////////////////////////
class PinnedMemoryAllocator : public IDeckLinkVideoBufferAllocator
{
public:
PinnedMemoryAllocator(HDC hdc, HGLRC hglrc, VideoFrameTransfer::Direction direction, unsigned cacheSize, unsigned bufferSize);
virtual ~PinnedMemoryAllocator();
bool transferFrame(void* address, GLuint gpuTexture);
void waitForTransferComplete(void* address);
unsigned bufferSize() const { return mBufferSize; }
// IUnknown methods
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID *ppv) override;
virtual ULONG STDMETHODCALLTYPE AddRef(void) override;
virtual ULONG STDMETHODCALLTYPE Release(void) override;
// IDeckLinkVideoBufferAllocator methods
virtual HRESULT STDMETHODCALLTYPE AllocateVideoBuffer (IDeckLinkVideoBuffer** allocatedBuffer) override;
private:
void unPinAddress(void* address);
private:
HDC mHGLDC;
HGLRC mHGLRC;
std::atomic<ULONG> mRefCount;
VideoFrameTransfer::Direction mDirection;
std::map<void*, VideoFrameTransfer*> mFrameTransfer;
unsigned mBufferSize;
std::vector<void*> mFrameCache;
unsigned mFrameCacheSize;
};
////////////////////////////////////////////
// InputAllocatorPool
////////////////////////////////////////////
class InputAllocatorPool : public IDeckLinkVideoBufferAllocatorProvider
{
public:
InputAllocatorPool(HDC hdc, HGLRC hglrc);
// IUnknown interface
ULONG STDMETHODCALLTYPE AddRef() override;
ULONG STDMETHODCALLTYPE Release() override;
HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, void** ppv) override;
// IDeckLinkVideoBufferAllocatorProvider interface
HRESULT STDMETHODCALLTYPE GetVideoBufferAllocator(
/* [in] */ unsigned int bufferSize,
/* [in] */ unsigned int width,
/* [in] */ unsigned int height,
/* [in] */ unsigned int rowBytes,
/* [in] */ BMDPixelFormat pixelFormat,
/* [out] */ IDeckLinkVideoBufferAllocator **allocator) override;
private:
std::atomic<ULONG> mRefCount;
std::map<unsigned int, CComPtr<PinnedMemoryAllocator> > mAllocatorBySize;
HDC mHDC;
HGLRC mHGLRC;
};
////////////////////////////////////////////
// DeckLinkVideoBuffer
////////////////////////////////////////////
class DeckLinkVideoBuffer : public IDeckLinkVideoBuffer
{
public:
explicit DeckLinkVideoBuffer(std::shared_ptr<void>& buffer, PinnedMemoryAllocator* parent);
virtual ~DeckLinkVideoBuffer() = default;
// IUnknown interface
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID riid, void** ppvObject) override;
virtual ULONG STDMETHODCALLTYPE AddRef(void) override;
virtual ULONG STDMETHODCALLTYPE Release(void) override;
// IDeckLinkVideoBuffer interface
virtual HRESULT STDMETHODCALLTYPE GetBytes(void** buffer) override;
virtual HRESULT STDMETHODCALLTYPE GetSize(uint64_t* size) override;
virtual HRESULT STDMETHODCALLTYPE StartAccess(BMDBufferAccessFlags flags) override;
virtual HRESULT STDMETHODCALLTYPE EndAccess(BMDBufferAccessFlags flags) override;
private:
CComPtr<PinnedMemoryAllocator> mParentAllocator; // Dual purpose: the allocator owns the memory this buffer points to, and transferFrame() is reached through it
std::atomic<ULONG> mRefCount;
std::shared_ptr<void> mBuffer;
};
////////////////////////////////////////////
// Capture Delegate Class
////////////////////////////////////////////
class CaptureDelegate : public IDeckLinkInputCallback
{
OpenGLComposite* m_pOwner;
LONG mRefCount;
public:
CaptureDelegate (OpenGLComposite* pOwner);
// IUnknown needs only a dummy implementation
virtual HRESULT STDMETHODCALLTYPE QueryInterface (REFIID iid, LPVOID *ppv);
virtual ULONG STDMETHODCALLTYPE AddRef ();
virtual ULONG STDMETHODCALLTYPE Release ();
virtual HRESULT STDMETHODCALLTYPE VideoInputFrameArrived (IDeckLinkVideoInputFrame *videoFrame, IDeckLinkAudioInputPacket *audioPacket);
virtual HRESULT STDMETHODCALLTYPE VideoInputFormatChanged (BMDVideoInputFormatChangedEvents notificationEvents, IDeckLinkDisplayMode *newDisplayMode, BMDDetectedVideoInputFormatFlags detectedSignalFlags);
};
////////////////////////////////////////////
// Render Delegate Class
////////////////////////////////////////////
class PlayoutDelegate : public IDeckLinkVideoOutputCallback
{
OpenGLComposite* m_pOwner;
LONG mRefCount;
public:
PlayoutDelegate (OpenGLComposite* pOwner);
// IUnknown needs only a dummy implementation
virtual HRESULT STDMETHODCALLTYPE QueryInterface (REFIID iid, LPVOID *ppv);
virtual ULONG STDMETHODCALLTYPE AddRef ();
virtual ULONG STDMETHODCALLTYPE Release ();
virtual HRESULT STDMETHODCALLTYPE ScheduledFrameCompleted (IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult result);
virtual HRESULT STDMETHODCALLTYPE ScheduledPlaybackHasStopped ();
};
#endif // __OPENGL_COMPOSITE_H__

View File

@@ -1,377 +0,0 @@
/* -LICENSE-START-
** Copyright (c) 2012 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation (the
** "Software") to use, reproduce, display, distribute, sub-license, execute,
** and transmit the Software, and to prepare derivative works of the Software,
** and to permit third-parties to whom the Software is furnished to do so, in
** accordance with:
**
** (1) if the Software is obtained from Blackmagic Design, the End User License
** Agreement for the Software Development Kit ("EULA") available at
** https://www.blackmagicdesign.com/EULA/DeckLinkSDK; or
**
** (2) if the Software is obtained from any third party, such licensing terms
** as notified by that third party,
**
** and all subject to the following:
**
** (3) the copyright notices in the Software and this entire statement,
** including the above license grant, this restriction and the following
** disclaimer, must be included in all copies of the Software, in whole or in
** part, and all derivative works of the Software, unless such copies or
** derivative works are solely in the form of machine-executable object code
** generated by a source language processor.
**
** (4) THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
** OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
**
** A copy of the Software is available free of charge at
** https://www.blackmagicdesign.com/desktopvideo_sdk under the EULA.
**
** -LICENSE-END-
*/
#include "VideoFrameTransfer.h"
#include "NativeHandles.h"
#define DVP_CHECK(cmd) { \
DVPStatus hr = (cmd); \
if (DVP_STATUS_OK != hr) { \
OutputDebugStringA( #cmd " failed\n" ); \
ExitProcess(hr); \
} \
}
// Initialise static members
bool VideoFrameTransfer::mInitialized = false;
bool VideoFrameTransfer::mUseDvp = false;
unsigned VideoFrameTransfer::mWidth = 0;
unsigned VideoFrameTransfer::mHeight = 0;
GLuint VideoFrameTransfer::mCaptureTexture = 0;
// NVIDIA specific static members
DVPBufferHandle VideoFrameTransfer::mDvpCaptureTextureHandle = 0;
DVPBufferHandle VideoFrameTransfer::mDvpPlaybackTextureHandle = 0;
uint32_t VideoFrameTransfer::mBufferAddrAlignment = 0;
uint32_t VideoFrameTransfer::mBufferGpuStrideAlignment = 0;
uint32_t VideoFrameTransfer::mSemaphoreAddrAlignment = 0;
uint32_t VideoFrameTransfer::mSemaphoreAllocSize = 0;
uint32_t VideoFrameTransfer::mSemaphorePayloadOffset = 0;
uint32_t VideoFrameTransfer::mSemaphorePayloadSize = 0;
bool VideoFrameTransfer::isNvidiaDvpAvailable()
{
// Look for supported graphics boards
const GLubyte* renderer = glGetString(GL_RENDERER);
if (renderer == NULL)
return false;
bool hasDvp = (strstr((char*)renderer, "Quadro") != NULL);
return hasDvp;
}
bool VideoFrameTransfer::isAMDPinnedMemoryAvailable()
{
// GL_AMD_pinned_memory presence indicates GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD buffer target is supported
const GLubyte* strExt = glGetString(GL_EXTENSIONS);
if (strExt == NULL)
{
// In a core profile context GL_EXTENSIONS is no longer queryable via glGetString().
// Treat this as "extension unavailable" for now; the fast-transfer path is optional.
return false;
}
bool hasAMDPinned = (strstr((char*)strExt, "GL_AMD_pinned_memory") != NULL);
return hasAMDPinned;
}
bool VideoFrameTransfer::checkFastMemoryTransferAvailable()
{
return (isNvidiaDvpAvailable() || isAMDPinnedMemoryAvailable());
}
bool VideoFrameTransfer::initialize(unsigned width, unsigned height, GLuint captureTexture, GLuint playbackTexture)
{
if (mInitialized)
return false;
bool hasDvp = isNvidiaDvpAvailable();
bool hasAMDPinned = isAMDPinnedMemoryAvailable();
if (!hasDvp && !hasAMDPinned)
return false;
mUseDvp = hasDvp;
mWidth = width;
mHeight = height;
mCaptureTexture = captureTexture;
if (! initializeMemoryLocking(mWidth * mHeight * 4)) // BGRA uses 4 bytes per pixel
return false;
if (mUseDvp)
{
// DVP initialisation
DVP_CHECK(dvpInitGLContext(DVP_DEVICE_FLAGS_SHARE_APP_CONTEXT));
DVP_CHECK(dvpGetRequiredConstantsGLCtx( &mBufferAddrAlignment, &mBufferGpuStrideAlignment,
&mSemaphoreAddrAlignment, &mSemaphoreAllocSize,
&mSemaphorePayloadOffset, &mSemaphorePayloadSize));
// Register textures with DVP
DVP_CHECK(dvpCreateGPUTextureGL(captureTexture, &mDvpCaptureTextureHandle));
DVP_CHECK(dvpCreateGPUTextureGL(playbackTexture, &mDvpPlaybackTextureHandle));
}
mInitialized = true;
return true;
}
bool VideoFrameTransfer::initializeMemoryLocking(unsigned memSize)
{
// Increase the process working set size to allow pinning of memory.
static SIZE_T dwMin = 0, dwMax = 0;
UniqueHandle processHandle(OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_SET_QUOTA, FALSE, GetCurrentProcessId()));
if (!processHandle.valid())
return false;
// Retrieve the working set size of the process.
if (!dwMin && !GetProcessWorkingSetSize(processHandle.get(), &dwMin, &dwMax))
return false;
// Allow for 80 frames to be locked
BOOL res = SetProcessWorkingSetSize(processHandle.get(), memSize * 80 + dwMin, memSize * 80 + (dwMax-dwMin));
if (!res)
return false;
return true;
}
// SyncInfo sets up a semaphore which is shared between the GPU and CPU and used to
// synchronise access to DVP buffers.
struct SyncInfo
{
SyncInfo(uint32_t semaphoreAllocSize, uint32_t semaphoreAddrAlignment);
~SyncInfo();
volatile uint32_t* mSem;
volatile uint32_t mReleaseValue;
volatile uint32_t mAcquireValue;
DVPSyncObjectHandle mDvpSync;
};
SyncInfo::SyncInfo(uint32_t semaphoreAllocSize, uint32_t semaphoreAddrAlignment)
{
mSem = (uint32_t*)_aligned_malloc(semaphoreAllocSize, semaphoreAddrAlignment);
if (mSem == NULL)
throw std::runtime_error("Failed to allocate aligned semaphore memory");
// Initialise
mSem[0] = 0;
mReleaseValue = 0;
mAcquireValue = 0;
// Setup DVP sync object and import it
DVPSyncObjectDesc syncObjectDesc;
syncObjectDesc.externalClientWaitFunc = NULL;
syncObjectDesc.sem = (uint32_t*)mSem;
DVP_CHECK(dvpImportSyncObject(&syncObjectDesc, &mDvpSync));
}
SyncInfo::~SyncInfo()
{
DVP_CHECK(dvpFreeSyncObject(mDvpSync));
_aligned_free((void*)mSem);
}
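The SyncInfo counters follow a monotonic-value protocol: the producer picks the next release value, and the consumer waits until the shared semaphore cell reaches it. A hedged sketch of that pattern using `std::atomic` in place of the DVP semaphore — this stands in for `dvpSyncObjClientWaitComplete`, it is not the DVP API:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

// CPU-visible stand-in for the DVP semaphore cell plus its release counter.
struct CounterSync
{
    std::atomic<uint32_t> sem{0};
    uint32_t releaseValue = 0;
};

// Producer side: advance the expected value, then signal it. With DVP, the
// GPU writes the semaphore as part of dvpMemcpyLined completion instead.
void signalTransferComplete(CounterSync& s)
{
    ++s.releaseValue;
    s.sem.store(s.releaseValue, std::memory_order_release);
}

// Consumer side: block until the semaphore reaches the target release value.
void waitForRelease(const CounterSync& s, uint32_t target)
{
    while (s.sem.load(std::memory_order_acquire) < target)
        std::this_thread::yield();
}
```

Because the counter only ever increases, a late waiter that arrives after the signal falls straight through the loop — the same property the DVP release/acquire values rely on.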
VideoFrameTransfer::VideoFrameTransfer(unsigned long memSize, void* address, Direction direction) :
mBuffer(address),
mMemSize(memSize),
mDirection(direction),
mExtSync(NULL),
mGpuSync(NULL),
mDvpSysMemHandle(0),
mBufferHandle(0)
{
if (mUseDvp)
{
// Pin the memory
if (! VirtualLock(mBuffer, mMemSize))
throw std::runtime_error("Error pinning memory with VirtualLock");
// Create necessary sysmem and gpu sync objects
mExtSync = new SyncInfo(mSemaphoreAllocSize, mSemaphoreAddrAlignment);
mGpuSync = new SyncInfo(mSemaphoreAllocSize, mSemaphoreAddrAlignment);
// Register system memory buffers with DVP
DVPSysmemBufferDesc sysMemBuffersDesc;
sysMemBuffersDesc.width = mWidth;
sysMemBuffersDesc.height = mHeight;
sysMemBuffersDesc.stride = mWidth * 4;
sysMemBuffersDesc.format = DVP_BGRA;
sysMemBuffersDesc.type = DVP_UNSIGNED_BYTE;
sysMemBuffersDesc.size = mMemSize;
sysMemBuffersDesc.bufAddr = mBuffer;
if (mDirection == CPUtoGPU)
{
// A UYVY 4:2:2 frame is transferred to the GPU, rather than RGB 4:4:4, so width is halved
sysMemBuffersDesc.width /= 2;
sysMemBuffersDesc.stride /= 2;
}
DVP_CHECK(dvpCreateBuffer(&sysMemBuffersDesc, &mDvpSysMemHandle));
DVP_CHECK(dvpBindToGLCtx(mDvpSysMemHandle));
}
else
{
// Create an OpenGL buffer handle to use for pinned memory
GLuint bufferHandle;
glGenBuffers(1, &bufferHandle);
// Pin memory by binding buffer to special AMD target.
glBindBuffer(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, bufferHandle);
// glBufferData() sets up the address so any OpenGL operation on this buffer will use system memory directly
// (assumes address is aligned to 4k boundary).
glBufferData(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, mMemSize, address, GL_STREAM_DRAW);
GLenum result = glGetError();
if (result != GL_NO_ERROR)
{
throw std::runtime_error("Error pinning memory with glBufferData(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, ...)");
}
glBindBuffer(GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD, 0); // Unbind buffer from target
mBufferHandle = bufferHandle;
}
}
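The width halving in the CPUtoGPU path above follows from pixel packing: UYVY 4:2:2 stores two pixels in four bytes, the same byte cost as one BGRA pixel, so a UYVY frame can be described to DVP as a BGRA buffer of half the width. A small sketch of that arithmetic (helper names are illustrative only):

```cpp
// Bytes per row for a UYVY 4:2:2 frame: 2 bytes per pixel on average.
unsigned uyvyRowBytes(unsigned width) { return width * 2; }

// Bytes per row for a BGRA frame: 4 bytes per pixel.
unsigned bgraRowBytes(unsigned width) { return width * 4; }

// Width at which a BGRA description covers the same bytes as one UYVY row.
unsigned bgraEquivalentWidth(unsigned uyvyWidth) { return uyvyWidth / 2; }
```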
VideoFrameTransfer::~VideoFrameTransfer()
{
if (mUseDvp)
{
DVP_CHECK(dvpUnbindFromGLCtx(mDvpSysMemHandle));
DVP_CHECK(dvpDestroyBuffer(mDvpSysMemHandle));
delete mExtSync;
delete mGpuSync;
VirtualUnlock(mBuffer, mMemSize);
}
else
{
// The memory is unpinned when the GL buffer object is deleted
glDeleteBuffers(1, &mBufferHandle);
}
}
bool VideoFrameTransfer::performFrameTransfer()
{
if (mUseDvp)
{
// NVIDIA DVP transfers
DVPStatus status;
mGpuSync->mReleaseValue++;
dvpBegin();
if (mDirection == CPUtoGPU)
{
// Copy from system memory to GPU texture
dvpMapBufferWaitDVP(mDvpCaptureTextureHandle);
status = dvpMemcpyLined( mDvpSysMemHandle, mExtSync->mDvpSync, mExtSync->mAcquireValue, DVP_TIMEOUT_IGNORED,
mDvpCaptureTextureHandle, mGpuSync->mDvpSync, mGpuSync->mReleaseValue, 0, mHeight);
dvpMapBufferEndDVP(mDvpCaptureTextureHandle);
}
else
{
// Copy from GPU texture to system memory
dvpMapBufferWaitDVP(mDvpPlaybackTextureHandle);
status = dvpMemcpyLined( mDvpPlaybackTextureHandle, mExtSync->mDvpSync, mExtSync->mReleaseValue, DVP_TIMEOUT_IGNORED,
mDvpSysMemHandle, mGpuSync->mDvpSync, mGpuSync->mReleaseValue, 0, mHeight);
dvpMapBufferEndDVP(mDvpPlaybackTextureHandle);
}
dvpEnd();
return (status == DVP_STATUS_OK);
}
else
{
// AMD pinned memory transfers
if (mDirection == CPUtoGPU)
{
glEnable(GL_TEXTURE_2D);
// Use a pinned buffer for the GL_PIXEL_UNPACK_BUFFER target
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, mBufferHandle);
glBindTexture(GL_TEXTURE_2D, mCaptureTexture);
// A NULL data pointer means the current GL_PIXEL_UNPACK_BUFFER binding supplies the texture data
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, mWidth/2, mHeight, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);
// Ensure pinned texture has been transferred to GPU before we draw with it
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 40 * 1000 * 1000); // 40 ms timeout, in nanoseconds
glDeleteSync(fence);
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glDisable(GL_TEXTURE_2D);
}
else
{
// Use a PIXEL PACK BUFFER to read back pixels
glBindBuffer(GL_PIXEL_PACK_BUFFER, mBufferHandle);
glReadPixels(0, 0, mWidth, mHeight, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);
// Ensure GPU has processed all commands in the pipeline up to this point, before memory is read by the CPU
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 40 * 1000 * 1000); // 40 ms timeout, in nanoseconds
glDeleteSync(fence);
}
return (glGetError() == GL_NO_ERROR);
}
}
void VideoFrameTransfer::waitForTransferComplete()
{
if (!mUseDvp)
return;
// Block until buffer has completely transferred between GPU and CPU buffer
dvpBegin();
dvpSyncObjClientWaitComplete(mGpuSync->mDvpSync, DVP_TIMEOUT_IGNORED);
dvpEnd();
}
void VideoFrameTransfer::beginTextureInUse(Direction direction)
{
if (!mUseDvp)
return;
if (direction == CPUtoGPU)
dvpMapBufferWaitAPI(mDvpCaptureTextureHandle);
else
dvpMapBufferWaitAPI(mDvpPlaybackTextureHandle);
}
void VideoFrameTransfer::endTextureInUse(Direction direction)
{
if (!mUseDvp)
return;
if (direction == CPUtoGPU)
dvpMapBufferEndAPI(mDvpCaptureTextureHandle);
else
dvpMapBufferEndAPI(mDvpPlaybackTextureHandle);
}

View File

@@ -1,109 +0,0 @@
/* -LICENSE-START-
** Copyright (c) 2012 Blackmagic Design
**
** Permission is hereby granted, free of charge, to any person or organization
** obtaining a copy of the software and accompanying documentation (the
** "Software") to use, reproduce, display, distribute, sub-license, execute,
** and transmit the Software, and to prepare derivative works of the Software,
** and to permit third-parties to whom the Software is furnished to do so, in
** accordance with:
**
** (1) if the Software is obtained from Blackmagic Design, the End User License
** Agreement for the Software Development Kit ("EULA") available at
** https://www.blackmagicdesign.com/EULA/DeckLinkSDK; or
**
** (2) if the Software is obtained from any third party, such licensing terms
** as notified by that third party,
**
** and all subject to the following:
**
** (3) the copyright notices in the Software and this entire statement,
** including the above license grant, this restriction and the following
** disclaimer, must be included in all copies of the Software, in whole or in
** part, and all derivative works of the Software, unless such copies or
** derivative works are solely in the form of machine-executable object code
** generated by a source language processor.
**
** (4) THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
** OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
** FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
** SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
** FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
** ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
** DEALINGS IN THE SOFTWARE.
**
** A copy of the Software is available free of charge at
** https://www.blackmagicdesign.com/desktopvideo_sdk under the EULA.
**
** -LICENSE-END-
*/
#ifndef __VIDEO_FRAME_TRANSFER_H__
#define __VIDEO_FRAME_TRANSFER_H__
#include "GLExtensions.h"
#include <stdexcept>
#include <map>
// NVIDIA GPU Direct For Video with OpenGL requires the following two headers.
// See the NVIDIA website to check if your graphics card is supported.
#include <DVPAPI.h>
#include <dvpapi_gl.h>
struct SyncInfo;
// Class for performing efficient frame memory transfers between the CPU and GPU,
// using NVIDIA and AMD extensions.
class VideoFrameTransfer
{
public:
enum Direction
{
CPUtoGPU,
GPUtoCPU
};
VideoFrameTransfer(unsigned long memSize, void* address, Direction direction);
~VideoFrameTransfer();
static bool checkFastMemoryTransferAvailable();
static bool initialize(unsigned width, unsigned height, GLuint captureTexture, GLuint playbackTexture);
static void beginTextureInUse(Direction direction);
static void endTextureInUse(Direction direction);
bool performFrameTransfer();
void waitForTransferComplete();
private:
static bool isNvidiaDvpAvailable();
static bool isAMDPinnedMemoryAvailable();
static bool initializeMemoryLocking(unsigned memSize);
void* mBuffer;
unsigned long mMemSize;
Direction mDirection;
static bool mInitialized;
static bool mUseDvp;
static unsigned mWidth;
static unsigned mHeight;
static GLuint mCaptureTexture;
// NVIDIA GPU Direct for Video support
SyncInfo* mExtSync;
SyncInfo* mGpuSync;
DVPBufferHandle mDvpSysMemHandle;
static DVPBufferHandle mDvpCaptureTextureHandle;
static DVPBufferHandle mDvpPlaybackTextureHandle;
static uint32_t mBufferAddrAlignment;
static uint32_t mBufferGpuStrideAlignment;
static uint32_t mSemaphoreAddrAlignment;
static uint32_t mSemaphoreAllocSize;
static uint32_t mSemaphorePayloadOffset;
static uint32_t mSemaphorePayloadSize;
// GPU buffer bound to the target GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD for pinned memory
GLuint mBufferHandle;
};
#endif

View File

@@ -16,6 +16,9 @@
namespace
{
constexpr DWORD kStateBroadcastIntervalMs = 250;
constexpr DWORD kStateBroadcastThrottleMs = 50;
bool InitializeWinsock(std::string& error)
{
WSADATA wsaData = {};
@@ -73,7 +76,7 @@ std::string GuessContentType(const std::filesystem::path& assetPath)
}
ControlServer::ControlServer()
: mPort(0), mRunning(false)
: mPort(0), mRunning(false), mBroadcastPending(false)
{
}
@@ -159,15 +162,35 @@ void ControlServer::Stop()
void ControlServer::BroadcastState()
{
mBroadcastPending = false;
std::lock_guard<std::mutex> lock(mMutex);
BroadcastStateLocked();
}
void ControlServer::RequestBroadcastState()
{
mBroadcastPending = true;
}
void ControlServer::ServerLoop()
{
DWORD lastStateBroadcastMs = GetTickCount();
while (mRunning)
{
TryAcceptClient();
const DWORD nowMs = GetTickCount();
if (mBroadcastPending && nowMs - lastStateBroadcastMs >= kStateBroadcastThrottleMs)
{
BroadcastState();
lastStateBroadcastMs = nowMs;
}
else if (nowMs - lastStateBroadcastMs >= kStateBroadcastIntervalMs)
{
BroadcastState();
lastStateBroadcastMs = nowMs;
}
Sleep(25);
}
}
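The server loop above coalesces broadcasts two ways: a pending request is flushed no sooner than `kStateBroadcastThrottleMs` after the last send, and an unconditional refresh still fires every `kStateBroadcastIntervalMs`. A sketch of that decision in isolation (function and constant names are hypothetical, mirroring the values in the hunk):

```cpp
#include <cstdint>

constexpr uint32_t kThrottleMs  = 50;  // minimum gap for requested broadcasts
constexpr uint32_t kIntervalMs = 250;  // unconditional refresh period

// Decide whether to broadcast now, given the time of the last broadcast and
// whether a client change has requested an early refresh.
bool shouldBroadcast(uint32_t nowMs, uint32_t lastMs, bool pending)
{
    // Unsigned subtraction wraps safely, matching GetTickCount() semantics.
    const uint32_t elapsed = nowMs - lastMs;
    if (pending && elapsed >= kThrottleMs)
        return true;
    return elapsed >= kIntervalMs;
}
```

A burst of UI changes therefore costs at most one broadcast per 50 ms, while idle clients still get a state refresh four times a second.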
@@ -422,6 +445,11 @@ bool ControlServer::InvokePostRoute(const std::string& path, const JsonValue& ro
{
return mCallbacks.reloadShader && mCallbacks.reloadShader(error);
}
},
{ "/api/screenshot", [this](const JsonValue&, std::string& error)
{
return mCallbacks.requestScreenshot && mCallbacks.requestScreenshot(error);
}
}
};
@@ -453,6 +481,7 @@ bool ControlServer::HandleWebSocketUpgrade(UniqueSocket clientSocket, const Http
client.socket.reset(clientSocket.release());
client.websocket = true;
mClients.push_back(std::move(client));
mBroadcastPending = false;
BroadcastStateLocked();
}
return true;
@@ -485,6 +514,9 @@ bool ControlServer::SendWebSocketText(SOCKET clientSocket, const std::string& pa
void ControlServer::BroadcastStateLocked()
{
if (mClients.empty())
return;
const std::string stateMessage = mCallbacks.getStateJson ? mCallbacks.getStateJson() : "{}";
for (auto it = mClients.begin(); it != mClients.end();)
{

View File

@@ -32,6 +32,7 @@ public:
std::function<bool(const std::string&, std::string&)> saveStackPreset;
std::function<bool(const std::string&, std::string&)> loadStackPreset;
std::function<bool(std::string&)> reloadShader;
std::function<bool(std::string&)> requestScreenshot;
};
ControlServer();
@@ -40,6 +41,7 @@ public:
bool Start(const std::filesystem::path& uiRoot, const std::filesystem::path& docsRoot, unsigned short preferredPort, const Callbacks& callbacks, std::string& error);
void Stop();
void BroadcastState();
void RequestBroadcastState();
unsigned short GetPort() const { return mPort; }
@@ -99,6 +101,7 @@ private:
unsigned short mPort;
std::thread mThread;
std::atomic<bool> mRunning;
std::atomic<bool> mBroadcastPending;
mutable std::mutex mMutex;
std::vector<ClientConnection> mClients;
};

View File

@@ -55,7 +55,7 @@ OscServer::~OscServer()
Stop();
}
bool OscServer::Start(unsigned short port, const Callbacks& callbacks, std::string& error)
bool OscServer::Start(const std::string& bindAddress, unsigned short port, const Callbacks& callbacks, std::string& error)
{
if (port == 0)
return true;
@@ -78,11 +78,15 @@ bool OscServer::Start(unsigned short port, const Callbacks& callbacks, std::stri
sockaddr_in address = {};
address.sin_family = AF_INET;
address.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
if (!TryParseBindAddress(bindAddress, address.sin_addr, error))
{
mSocket.reset();
return false;
}
address.sin_port = htons(static_cast<u_short>(port));
if (bind(mSocket.get(), reinterpret_cast<sockaddr*>(&address), sizeof(address)) != 0)
{
error = "Could not bind OSC listener to UDP port " + std::to_string(port) + ".";
error = "Could not bind OSC listener to " + bindAddress + ":" + std::to_string(port) + ".";
mSocket.reset();
return false;
}
@@ -92,6 +96,24 @@ bool OscServer::Start(unsigned short port, const Callbacks& callbacks, std::stri
return true;
}
bool OscServer::TryParseBindAddress(const std::string& bindAddress, in_addr& address, std::string& error)
{
if (bindAddress.empty())
{
error = "OSC bind address must not be empty.";
return false;
}
address = {};
if (InetPtonA(AF_INET, bindAddress.c_str(), &address) != 1)
{
error = "Invalid OSC bind address '" + bindAddress + "'. Use an IPv4 address such as 127.0.0.1 or 0.0.0.0.";
return false;
}
return true;
}
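`TryParseBindAddress` above wraps the Windows `InetPtonA` call; the same validation is portable. A sketch using POSIX `inet_pton` (standing in for the Windows call — identical semantics for dotted-quad IPv4):

```cpp
#include <arpa/inet.h>
#include <string>

// Validate a dotted-quad IPv4 bind address such as "127.0.0.1" or "0.0.0.0".
bool isValidIPv4BindAddress(const std::string& bindAddress)
{
    if (bindAddress.empty())
        return false;
    in_addr parsed{};
    // inet_pton returns 1 only for a fully valid address string; hostnames,
    // out-of-range octets, and partial addresses are all rejected.
    return inet_pton(AF_INET, bindAddress.c_str(), &parsed) == 1;
}
```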
void OscServer::Stop()
{
mRunning = false;

View File

@@ -20,7 +20,7 @@ public:
OscServer();
~OscServer();
bool Start(unsigned short port, const Callbacks& callbacks, std::string& error);
bool Start(const std::string& bindAddress, unsigned short port, const Callbacks& callbacks, std::string& error);
void Stop();
unsigned short GetPort() const { return mPort; }
@@ -37,6 +37,7 @@ private:
void ServerLoop();
bool DecodeMessage(const char* data, int byteCount, OscMessage& message, std::string& error) const;
bool DispatchMessage(const OscMessage& message, std::string& error) const;
static bool TryParseBindAddress(const std::string& bindAddress, in_addr& address, std::string& error);
static bool DecodeArgument(const char* data, int byteCount, int& offset, char valueType, std::string& valueJson);
static bool ReadPaddedString(const char* data, int byteCount, int& offset, std::string& value);
static bool ReadInt32(const char* data, int byteCount, int& offset, int& value);

View File

@@ -0,0 +1,53 @@
#include "RuntimeControlBridge.h"
#include "ControlServer.h"
#include "OpenGLComposite.h"
#include "OscServer.h"
#include "RuntimeHost.h"
#include "RuntimeServices.h"
bool StartRuntimeControlServices(
OpenGLComposite& composite,
RuntimeHost& runtimeHost,
RuntimeServices& runtimeServices,
ControlServer& controlServer,
OscServer& oscServer,
std::string& error)
{
ControlServer::Callbacks callbacks;
callbacks.getStateJson = [&composite]() { return composite.GetRuntimeStateJson(); };
callbacks.addLayer = [&composite](const std::string& shaderId, std::string& actionError) { return composite.AddLayer(shaderId, actionError); };
callbacks.removeLayer = [&composite](const std::string& layerId, std::string& actionError) { return composite.RemoveLayer(layerId, actionError); };
callbacks.moveLayer = [&composite](const std::string& layerId, int direction, std::string& actionError) { return composite.MoveLayer(layerId, direction, actionError); };
callbacks.moveLayerToIndex = [&composite](const std::string& layerId, std::size_t targetIndex, std::string& actionError) { return composite.MoveLayerToIndex(layerId, targetIndex, actionError); };
callbacks.setLayerBypass = [&composite](const std::string& layerId, bool bypassed, std::string& actionError) { return composite.SetLayerBypass(layerId, bypassed, actionError); };
callbacks.setLayerShader = [&composite](const std::string& layerId, const std::string& shaderId, std::string& actionError) { return composite.SetLayerShader(layerId, shaderId, actionError); };
callbacks.updateLayerParameter = [&composite](const std::string& layerId, const std::string& parameterId, const std::string& valueJson, std::string& actionError) {
return composite.UpdateLayerParameterJson(layerId, parameterId, valueJson, actionError);
};
callbacks.resetLayerParameters = [&composite](const std::string& layerId, std::string& actionError) { return composite.ResetLayerParameters(layerId, actionError); };
callbacks.saveStackPreset = [&composite](const std::string& presetName, std::string& actionError) { return composite.SaveStackPreset(presetName, actionError); };
callbacks.loadStackPreset = [&composite](const std::string& presetName, std::string& actionError) { return composite.LoadStackPreset(presetName, actionError); };
callbacks.requestScreenshot = [&composite](std::string& actionError) { return composite.RequestScreenshot(actionError); };
callbacks.reloadShader = [&composite](std::string& actionError) {
if (!composite.ReloadShader())
{
actionError = "Shader reload failed. See native app status for details.";
return false;
}
return true;
};
if (!controlServer.Start(runtimeHost.GetUiRoot(), runtimeHost.GetDocsRoot(), runtimeHost.GetServerPort(), callbacks, error))
return false;
runtimeHost.SetServerPort(controlServer.GetPort());
OscServer::Callbacks oscCallbacks;
oscCallbacks.updateParameter = [&runtimeServices](const std::string& layerKey, const std::string& parameterKey, const std::string& valueJson, std::string& actionError) {
return runtimeServices.QueueOscUpdate(layerKey, parameterKey, valueJson, actionError);
};
if (runtimeHost.GetOscPort() > 0 && !oscServer.Start(runtimeHost.GetOscBindAddress(), runtimeHost.GetOscPort(), oscCallbacks, error))
return false;
return true;
}

View File

@@ -0,0 +1,17 @@
#pragma once
#include <string>
class ControlServer;
class OpenGLComposite;
class OscServer;
class RuntimeHost;
class RuntimeServices;
bool StartRuntimeControlServices(
OpenGLComposite& composite,
RuntimeHost& runtimeHost,
RuntimeServices& runtimeServices,
ControlServer& controlServer,
OscServer& oscServer,
std::string& error);

View File

@@ -0,0 +1,247 @@
#include "RuntimeServices.h"
#include "ControlServer.h"
#include "OscServer.h"
#include "RuntimeControlBridge.h"
#include "RuntimeHost.h"
#include <windows.h>
RuntimeServices::RuntimeServices() :
mControlServer(std::make_unique<ControlServer>()),
mOscServer(std::make_unique<OscServer>()),
mPollRunning(false),
mRegistryChanged(false),
mReloadRequested(false),
mPollFailed(false)
{
}
RuntimeServices::~RuntimeServices()
{
Stop();
}
bool RuntimeServices::Start(OpenGLComposite& composite, RuntimeHost& runtimeHost, std::string& error)
{
Stop();
if (!StartRuntimeControlServices(composite, runtimeHost, *this, *mControlServer, *mOscServer, error))
{
Stop();
return false;
}
return true;
}
void RuntimeServices::BeginPolling(RuntimeHost& runtimeHost)
{
StartPolling(runtimeHost);
}
void RuntimeServices::Stop()
{
StopPolling();
if (mOscServer)
mOscServer->Stop();
if (mControlServer)
mControlServer->Stop();
}
void RuntimeServices::BroadcastState()
{
if (mControlServer)
mControlServer->BroadcastState();
}
void RuntimeServices::RequestBroadcastState()
{
if (mControlServer)
mControlServer->RequestBroadcastState();
}
bool RuntimeServices::QueueOscUpdate(const std::string& layerKey, const std::string& parameterKey, const std::string& valueJson, std::string& error)
{
(void)error;
PendingOscUpdate update;
update.layerKey = layerKey;
update.parameterKey = parameterKey;
update.valueJson = valueJson;
const std::string routeKey = layerKey + "\n" + parameterKey;
{
std::lock_guard<std::mutex> lock(mPendingOscMutex);
mPendingOscUpdates[routeKey] = std::move(update);
}
return true;
}
bool RuntimeServices::ApplyPendingOscUpdates(std::vector<AppliedOscUpdate>& appliedUpdates, std::string& error)
{
appliedUpdates.clear();
std::map<std::string, PendingOscUpdate> pending;
{
std::lock_guard<std::mutex> lock(mPendingOscMutex);
if (mPendingOscUpdates.empty())
return true;
pending.swap(mPendingOscUpdates);
}
for (const auto& entry : pending)
{
JsonValue targetValue;
std::string parseError;
if (!ParseJson(entry.second.valueJson, targetValue, parseError))
{
OutputDebugStringA(("OSC queued value parse failed: " + parseError + "\n").c_str());
continue;
}
AppliedOscUpdate appliedUpdate;
appliedUpdate.routeKey = entry.first;
appliedUpdate.layerKey = entry.second.layerKey;
appliedUpdate.parameterKey = entry.second.parameterKey;
appliedUpdate.targetValue = targetValue;
appliedUpdates.push_back(std::move(appliedUpdate));
}
(void)error;
return true;
}
bool RuntimeServices::QueueOscCommit(const std::string& routeKey, const std::string& layerKey, const std::string& parameterKey, const JsonValue& value, uint64_t generation, std::string& error)
{
(void)error;
PendingOscCommit commit;
commit.routeKey = routeKey;
commit.layerKey = layerKey;
commit.parameterKey = parameterKey;
commit.value = value;
commit.generation = generation;
{
std::lock_guard<std::mutex> lock(mPendingOscCommitMutex);
mPendingOscCommits[routeKey] = std::move(commit);
}
return true;
}
void RuntimeServices::ClearOscState()
{
{
std::lock_guard<std::mutex> lock(mPendingOscMutex);
mPendingOscUpdates.clear();
}
{
std::lock_guard<std::mutex> lock(mPendingOscCommitMutex);
mPendingOscCommits.clear();
}
{
std::lock_guard<std::mutex> lock(mCompletedOscCommitMutex);
mCompletedOscCommits.clear();
}
}
void RuntimeServices::ConsumeCompletedOscCommits(std::vector<CompletedOscCommit>& completedCommits)
{
completedCommits.clear();
std::lock_guard<std::mutex> lock(mCompletedOscCommitMutex);
if (mCompletedOscCommits.empty())
return;
completedCommits.swap(mCompletedOscCommits);
}
RuntimePollEvents RuntimeServices::ConsumePollEvents()
{
RuntimePollEvents events;
events.registryChanged = mRegistryChanged.exchange(false);
events.reloadRequested = mReloadRequested.exchange(false);
events.failed = mPollFailed.exchange(false);
if (events.failed)
{
std::lock_guard<std::mutex> lock(mPollErrorMutex);
events.error = mPollError;
}
return events;
}
void RuntimeServices::StartPolling(RuntimeHost& runtimeHost)
{
if (mPollRunning.exchange(true))
return;
mPollThread = std::thread([this, &runtimeHost]() { PollLoop(runtimeHost); });
}
void RuntimeServices::StopPolling()
{
if (!mPollRunning.exchange(false))
return;
if (mPollThread.joinable())
mPollThread.join();
}
void RuntimeServices::PollLoop(RuntimeHost& runtimeHost)
{
while (mPollRunning)
{
std::map<std::string, PendingOscCommit> pendingCommits;
{
std::lock_guard<std::mutex> lock(mPendingOscCommitMutex);
pendingCommits.swap(mPendingOscCommits);
}
for (const auto& entry : pendingCommits)
{
std::string commitError;
if (runtimeHost.UpdateLayerParameterByControlKey(
entry.second.layerKey,
entry.second.parameterKey,
entry.second.value,
false,
commitError))
{
CompletedOscCommit completedCommit;
completedCommit.routeKey = entry.second.routeKey;
completedCommit.generation = entry.second.generation;
std::lock_guard<std::mutex> lock(mCompletedOscCommitMutex);
mCompletedOscCommits.push_back(std::move(completedCommit));
}
else if (!commitError.empty())
{
OutputDebugStringA(("OSC commit failed: " + commitError + "\n").c_str());
}
}
bool registryChanged = false;
bool reloadRequested = false;
std::string runtimeError;
if (!runtimeHost.PollFileChanges(registryChanged, reloadRequested, runtimeError))
{
{
std::lock_guard<std::mutex> lock(mPollErrorMutex);
mPollError = runtimeError;
}
mPollFailed = true;
}
else
{
if (registryChanged)
mRegistryChanged = true;
if (reloadRequested)
mReloadRequested = true;
}
for (int i = 0; i < 25 && mPollRunning; ++i)
Sleep(10);
}
}
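The QueueOscUpdate / ApplyPendingOscUpdates pair above follows a coalescing queue-and-drain pattern: producers overwrite the pending value for a route key under a short lock, and the consumer drains the whole map with a swap so parsing happens outside the critical section. A minimal self-contained sketch of that pattern (the class and method names here are illustrative, not the real RuntimeServices API):

```cpp
#include <map>
#include <mutex>
#include <string>

// Coalescing queue: only the latest value per route key survives until the
// next drain, and the lock is held only long enough to insert or swap.
class CoalescingQueue
{
public:
    void Queue(const std::string& routeKey, const std::string& valueJson)
    {
        std::lock_guard<std::mutex> lock(mMutex);
        mPending[routeKey] = valueJson; // later updates overwrite earlier ones
    }
    std::map<std::string, std::string> Drain()
    {
        std::map<std::string, std::string> drained;
        std::lock_guard<std::mutex> lock(mMutex);
        drained.swap(mPending); // O(1) under the lock; processing happens outside
        return drained;
    }
private:
    std::mutex mMutex;
    std::map<std::string, std::string> mPending;
};
```

This is why a rapid burst of OSC messages for one parameter costs the render thread a single map entry rather than one work item per message.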


@@ -0,0 +1,94 @@
#pragma once
#include "RuntimeJson.h"
#include "ShaderTypes.h"
#include <atomic>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
class ControlServer;
class OpenGLComposite;
class OscServer;
class RuntimeHost;
struct RuntimePollEvents
{
bool registryChanged = false;
bool reloadRequested = false;
bool failed = false;
std::string error;
};
class RuntimeServices
{
public:
struct AppliedOscUpdate
{
std::string routeKey;
std::string layerKey;
std::string parameterKey;
JsonValue targetValue;
};
struct CompletedOscCommit
{
std::string routeKey;
uint64_t generation = 0;
};
RuntimeServices();
~RuntimeServices();
bool Start(OpenGLComposite& composite, RuntimeHost& runtimeHost, std::string& error);
void BeginPolling(RuntimeHost& runtimeHost);
void Stop();
void BroadcastState();
void RequestBroadcastState();
bool QueueOscUpdate(const std::string& layerKey, const std::string& parameterKey, const std::string& valueJson, std::string& error);
bool ApplyPendingOscUpdates(std::vector<AppliedOscUpdate>& appliedUpdates, std::string& error);
bool QueueOscCommit(const std::string& routeKey, const std::string& layerKey, const std::string& parameterKey, const JsonValue& value, uint64_t generation, std::string& error);
void ClearOscState();
void ConsumeCompletedOscCommits(std::vector<CompletedOscCommit>& completedCommits);
RuntimePollEvents ConsumePollEvents();
private:
struct PendingOscUpdate
{
std::string layerKey;
std::string parameterKey;
std::string valueJson;
};
struct PendingOscCommit
{
std::string routeKey;
std::string layerKey;
std::string parameterKey;
JsonValue value;
uint64_t generation = 0;
};
void StartPolling(RuntimeHost& runtimeHost);
void StopPolling();
void PollLoop(RuntimeHost& runtimeHost);
std::unique_ptr<ControlServer> mControlServer;
std::unique_ptr<OscServer> mOscServer;
std::thread mPollThread;
std::atomic<bool> mPollRunning;
std::atomic<bool> mRegistryChanged;
std::atomic<bool> mReloadRequested;
std::atomic<bool> mPollFailed;
std::mutex mPollErrorMutex;
std::string mPollError;
std::mutex mPendingOscMutex;
std::map<std::string, PendingOscUpdate> mPendingOscUpdates;
std::mutex mPendingOscCommitMutex;
std::map<std::string, PendingOscCommit> mPendingOscCommits;
std::mutex mCompletedOscCommitMutex;
std::vector<CompletedOscCommit> mCompletedOscCommits;
};
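The `generation` / `pendingCommitGeneration` fields in CompletedOscCommit and the overlay state implement a handshake: each incoming update bumps the generation, so a commit completion only retires the overlay if no newer update arrived while the commit was in flight. A reduced sketch of that logic, with illustrative names (the real checks live in OpenGLComposite::renderEffect):

```cpp
#include <cstdint>

// Generation handshake between live OSC updates and asynchronous commits.
struct Overlay
{
    uint64_t generation = 0;
    uint64_t pendingCommitGeneration = 0;
    bool commitQueued = false;
    bool retired = false;
};

void OnUpdate(Overlay& o)
{
    o.generation += 1;
    o.commitQueued = false; // any in-flight commit is now stale
}

void OnCommitQueued(Overlay& o)
{
    o.pendingCommitGeneration = o.generation;
    o.commitQueued = true;
}

void OnCommitCompleted(Overlay& o, uint64_t completedGeneration)
{
    if (o.commitQueued &&
        o.pendingCommitGeneration == completedGeneration &&
        o.generation == completedGeneration)
    {
        o.retired = true; // persisted value matches the overlay: safe to erase
    }
}
```

Without the generation check, a commit that raced with a fresh update could erase the overlay and snap the parameter back to the stale committed value.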


@@ -0,0 +1,800 @@
#include "DeckLinkDisplayMode.h"
#include "DeckLinkSession.h"
#include "OpenGLComposite.h"
#include "GLExtensions.h"
#include "GlRenderConstants.h"
#include "OpenGLRenderPass.h"
#include "OpenGLRenderPipeline.h"
#include "OpenGLShaderPrograms.h"
#include "OpenGLVideoIOBridge.h"
#include "PngScreenshotWriter.h"
#include "RuntimeParameterUtils.h"
#include "RuntimeServices.h"
#include "ShaderBuildQueue.h"
#include <algorithm>
#include <cctype>
#include <chrono>
#include <cmath>
#include <ctime>
#include <filesystem>
#include <iomanip>
#include <memory>
#include <set>
#include <sstream>
#include <string>
#include <vector>
namespace
{
constexpr auto kOscOverlayCommitDelay = std::chrono::milliseconds(150);
constexpr double kOscSmoothingReferenceFps = 60.0;
constexpr double kOscSmoothingMaxStepSeconds = 0.25;
std::string SimplifyOscControlKey(const std::string& text)
{
std::string simplified;
for (unsigned char ch : text)
{
if (std::isalnum(ch))
simplified.push_back(static_cast<char>(std::tolower(ch)));
}
return simplified;
}
bool MatchesOscControlKey(const std::string& candidate, const std::string& key)
{
return candidate == key || SimplifyOscControlKey(candidate) == SimplifyOscControlKey(key);
}
double ClampOscAlpha(double value)
{
return (std::max)(0.0, (std::min)(1.0, value));
}
double ComputeTimeBasedOscAlpha(double smoothing, double deltaSeconds)
{
const double clampedSmoothing = ClampOscAlpha(smoothing);
if (clampedSmoothing <= 0.0)
return 0.0;
if (clampedSmoothing >= 1.0)
return 1.0;
const double clampedDeltaSeconds = (std::max)(0.0, (std::min)(kOscSmoothingMaxStepSeconds, deltaSeconds));
if (clampedDeltaSeconds <= 0.0)
return 0.0;
const double frameScale = clampedDeltaSeconds * kOscSmoothingReferenceFps;
return ClampOscAlpha(1.0 - std::pow(1.0 - clampedSmoothing, frameScale));
}
JsonValue BuildOscCommitValue(const ShaderParameterDefinition& definition, const ShaderParameterValue& value)
{
switch (definition.type)
{
case ShaderParameterType::Boolean:
return JsonValue(value.booleanValue);
case ShaderParameterType::Enum:
return JsonValue(value.enumValue);
case ShaderParameterType::Text:
return JsonValue(value.textValue);
case ShaderParameterType::Trigger:
case ShaderParameterType::Float:
return JsonValue(value.numberValues.empty() ? 0.0 : value.numberValues.front());
case ShaderParameterType::Vec2:
case ShaderParameterType::Color:
{
JsonValue array = JsonValue::MakeArray();
for (double number : value.numberValues)
array.pushBack(JsonValue(number));
return array;
}
}
return JsonValue();
}
}
OpenGLComposite::OpenGLComposite(HWND hWnd, HDC hDC, HGLRC hRC) :
hGLWnd(hWnd), hGLDC(hDC), hGLRC(hRC),
mVideoIO(std::make_unique<DeckLinkSession>()),
mRenderer(std::make_unique<OpenGLRenderer>()),
mUseCommittedLayerStates(false),
mScreenshotRequested(false)
{
InitializeCriticalSection(&pMutex);
mRuntimeHost = std::make_unique<RuntimeHost>();
mRenderPipeline = std::make_unique<OpenGLRenderPipeline>(
*mRenderer,
*mRuntimeHost,
[this]() { renderEffect(); },
[this]() { ProcessScreenshotRequest(); },
[this]() { paintGL(false); });
mVideoIOBridge = std::make_unique<OpenGLVideoIOBridge>(
*mVideoIO,
*mRenderer,
*mRenderPipeline,
*mRuntimeHost,
pMutex,
hGLDC,
hGLRC);
mRenderPass = std::make_unique<OpenGLRenderPass>(*mRenderer);
mShaderPrograms = std::make_unique<OpenGLShaderPrograms>(*mRenderer, *mRuntimeHost);
mShaderBuildQueue = std::make_unique<ShaderBuildQueue>(*mRuntimeHost);
mRuntimeServices = std::make_unique<RuntimeServices>();
}
OpenGLComposite::~OpenGLComposite()
{
if (mRuntimeServices)
mRuntimeServices->Stop();
if (mShaderBuildQueue)
mShaderBuildQueue->Stop();
mVideoIO->ReleaseResources();
mRenderer->DestroyResources();
DeleteCriticalSection(&pMutex);
}
bool OpenGLComposite::InitDeckLink()
{
return InitVideoIO();
}
bool OpenGLComposite::InitVideoIO()
{
VideoFormatSelection videoModes;
std::string initFailureReason;
if (mRuntimeHost && mRuntimeHost->GetRepoRoot().empty())
{
std::string runtimeError;
if (!mRuntimeHost->Initialize(runtimeError))
{
MessageBoxA(NULL, runtimeError.c_str(), "Runtime host failed to initialize", MB_OK);
return false;
}
}
if (mRuntimeHost)
{
if (!ResolveConfiguredVideoFormats(
mRuntimeHost->GetInputVideoFormat(),
mRuntimeHost->GetInputFrameRate(),
mRuntimeHost->GetOutputVideoFormat(),
mRuntimeHost->GetOutputFrameRate(),
videoModes,
initFailureReason))
{
MessageBoxA(NULL, initFailureReason.c_str(), "DeckLink mode configuration error", MB_OK);
return false;
}
}
if (!mVideoIO->DiscoverDevicesAndModes(videoModes, initFailureReason))
{
const char* title = initFailureReason == "Please install the Blackmagic DeckLink drivers to use the features of this application."
? "This application requires the DeckLink drivers installed."
: "DeckLink initialization failed";
MessageBoxA(NULL, initFailureReason.c_str(), title, MB_OK | MB_ICONERROR);
return false;
}
const bool outputAlphaRequired = mRuntimeHost && mRuntimeHost->ExternalKeyingEnabled();
if (!mVideoIO->SelectPreferredFormats(videoModes, outputAlphaRequired, initFailureReason))
goto error;
if (! CheckOpenGLExtensions())
{
initFailureReason = "OpenGL extension checks failed.";
goto error;
}
if (! InitOpenGLState())
{
initFailureReason = "OpenGL state initialization failed.";
goto error;
}
PublishVideoIOStatus(mVideoIO->OutputModelName().empty()
? "DeckLink output device selected."
: ("Selected output device: " + mVideoIO->OutputModelName()));
// Resize window to match output video frame, but scale large formats down by half for viewing.
if (mVideoIO->OutputFrameWidth() < 1920)
resizeWindow(mVideoIO->OutputFrameWidth(), mVideoIO->OutputFrameHeight());
else
resizeWindow(mVideoIO->OutputFrameWidth() / 2, mVideoIO->OutputFrameHeight() / 2);
if (!mVideoIO->ConfigureInput([this](const VideoIOFrame& frame) { mVideoIOBridge->VideoFrameArrived(frame); }, videoModes.input, initFailureReason))
{
goto error;
}
if (!mVideoIO->HasInputDevice() && mRuntimeHost)
{
mRuntimeHost->SetSignalStatus(false, mVideoIO->InputFrameWidth(), mVideoIO->InputFrameHeight(), mVideoIO->InputDisplayModeName());
}
if (!mVideoIO->ConfigureOutput([this](const VideoIOCompletion& completion) { mVideoIOBridge->PlayoutFrameCompleted(completion); }, videoModes.output, mRuntimeHost && mRuntimeHost->ExternalKeyingEnabled(), initFailureReason))
{
goto error;
}
PublishVideoIOStatus(mVideoIO->StatusMessage());
return true;
error:
if (!initFailureReason.empty())
MessageBoxA(NULL, initFailureReason.c_str(), "DeckLink initialization failed", MB_OK | MB_ICONERROR);
mVideoIO->ReleaseResources();
return false;
}
void OpenGLComposite::paintGL(bool force)
{
if (!force)
{
if (IsIconic(hGLWnd))
return;
const unsigned previewFps = mRuntimeHost ? mRuntimeHost->GetPreviewFps() : 30u;
if (previewFps == 0)
return;
const auto now = std::chrono::steady_clock::now();
const auto minimumInterval = std::chrono::microseconds(1000000 / previewFps);
if (mLastPreviewPresentTime != std::chrono::steady_clock::time_point() &&
now - mLastPreviewPresentTime < minimumInterval)
{
return;
}
}
if (!TryEnterCriticalSection(&pMutex))
{
ValidateRect(hGLWnd, NULL);
return;
}
mRenderer->PresentToWindow(hGLDC, mVideoIO->OutputFrameWidth(), mVideoIO->OutputFrameHeight());
mLastPreviewPresentTime = std::chrono::steady_clock::now();
ValidateRect(hGLWnd, NULL);
LeaveCriticalSection(&pMutex);
}
void OpenGLComposite::resizeGL(WORD width, WORD height)
{
// We don't set the projection or model matrices here since the window data is copied directly from
// an off-screen FBO in paintGL(). Just save the width and height for use in paintGL().
mRenderer->ResizeView(width, height);
}
void OpenGLComposite::resizeWindow(int width, int height)
{
RECT r;
if (GetWindowRect(hGLWnd, &r))
{
// SetWindowPos takes a width/height pair (cx, cy), not right/bottom coordinates.
SetWindowPos(hGLWnd, HWND_TOP, r.left, r.top, width, height, 0);
}
}
void OpenGLComposite::PublishVideoIOStatus(const std::string& statusMessage)
{
if (!mRuntimeHost)
return;
if (!statusMessage.empty())
mVideoIO->SetStatusMessage(statusMessage);
mRuntimeHost->SetVideoIOStatus(
"decklink",
mVideoIO->OutputModelName(),
mVideoIO->SupportsInternalKeying(),
mVideoIO->SupportsExternalKeying(),
mVideoIO->KeyerInterfaceAvailable(),
mRuntimeHost->ExternalKeyingEnabled(),
mVideoIO->ExternalKeyingActive(),
mVideoIO->StatusMessage());
}
bool OpenGLComposite::InitOpenGLState()
{
if (! ResolveGLExtensions())
return false;
std::string runtimeError;
if (mRuntimeHost->GetRepoRoot().empty() && !mRuntimeHost->Initialize(runtimeError))
{
MessageBoxA(NULL, runtimeError.c_str(), "Runtime host failed to initialize", MB_OK);
return false;
}
if (!mRuntimeServices->Start(*this, *mRuntimeHost, runtimeError))
{
MessageBoxA(NULL, runtimeError.c_str(), "Runtime control services failed to start", MB_OK);
return false;
}
// Prepare the runtime shader program generated from the active shader package.
char compilerErrorMessage[1024];
if (!mShaderPrograms->CompileDecodeShader(sizeof(compilerErrorMessage), compilerErrorMessage))
{
MessageBoxA(NULL, compilerErrorMessage, "OpenGL decode shader failed to load or compile", MB_OK);
return false;
}
if (!mShaderPrograms->CompileOutputPackShader(sizeof(compilerErrorMessage), compilerErrorMessage))
{
MessageBoxA(NULL, compilerErrorMessage, "OpenGL output pack shader failed to load or compile", MB_OK);
return false;
}
std::string rendererError;
if (!mRenderer->InitializeResources(
mVideoIO->InputFrameWidth(),
mVideoIO->InputFrameHeight(),
mVideoIO->CaptureTextureWidth(),
mVideoIO->OutputFrameWidth(),
mVideoIO->OutputFrameHeight(),
mVideoIO->OutputPackTextureWidth(),
rendererError))
{
MessageBoxA(NULL, rendererError.c_str(), "OpenGL initialization error.", MB_OK);
return false;
}
if (!mShaderPrograms->CompileLayerPrograms(mVideoIO->InputFrameWidth(), mVideoIO->InputFrameHeight(), sizeof(compilerErrorMessage), compilerErrorMessage))
{
MessageBoxA(NULL, compilerErrorMessage, "OpenGL shader failed to load or compile", MB_OK);
return false;
}
mCachedLayerRenderStates = mShaderPrograms->CommittedLayerStates();
mUseCommittedLayerStates = false;
mShaderPrograms->ResetTemporalHistoryState();
mShaderPrograms->ResetShaderFeedbackState();
broadcastRuntimeState();
mRuntimeServices->BeginPolling(*mRuntimeHost);
return true;
}
bool OpenGLComposite::Start()
{
return mVideoIO->Start();
}
bool OpenGLComposite::Stop()
{
if (mRuntimeServices)
mRuntimeServices->Stop();
const bool wasExternalKeyingActive = mVideoIO->ExternalKeyingActive();
mVideoIO->Stop();
if (wasExternalKeyingActive)
PublishVideoIOStatus("External keying has been disabled.");
return true;
}
bool OpenGLComposite::ReloadShader(bool preserveFeedbackState)
{
mPreserveFeedbackOnNextShaderBuild = preserveFeedbackState;
if (mRuntimeHost)
{
mRuntimeHost->SetCompileStatus(true, "Shader rebuild queued.");
mRuntimeHost->ClearReloadRequest();
}
RequestShaderBuild();
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::RequestScreenshot(std::string& error)
{
(void)error;
mScreenshotRequested.store(true);
return true;
}
void OpenGLComposite::renderEffect()
{
ProcessRuntimePollResults();
std::vector<RuntimeServices::AppliedOscUpdate> appliedOscUpdates;
std::vector<RuntimeServices::CompletedOscCommit> completedOscCommits;
if (mRuntimeHost && mRuntimeServices)
{
std::string oscError;
if (!mRuntimeServices->ApplyPendingOscUpdates(appliedOscUpdates, oscError) && !oscError.empty())
OutputDebugStringA(("OSC apply failed: " + oscError + "\n").c_str());
mRuntimeServices->ConsumeCompletedOscCommits(completedOscCommits);
}
for (const RuntimeServices::CompletedOscCommit& completedCommit : completedOscCommits)
{
auto overlayIt = mOscOverlayStates.find(completedCommit.routeKey);
if (overlayIt == mOscOverlayStates.end())
continue;
OscOverlayState& overlay = overlayIt->second;
if (overlay.commitQueued &&
overlay.pendingCommitGeneration == completedCommit.generation &&
overlay.generation == completedCommit.generation)
{
mOscOverlayStates.erase(overlayIt);
}
}
std::set<std::string> pendingOscRouteKeys;
const auto oscNow = std::chrono::steady_clock::now();
for (const RuntimeServices::AppliedOscUpdate& update : appliedOscUpdates)
{
const std::string routeKey = update.routeKey;
auto overlayIt = mOscOverlayStates.find(routeKey);
if (overlayIt == mOscOverlayStates.end())
{
OscOverlayState overlay;
overlay.layerKey = update.layerKey;
overlay.parameterKey = update.parameterKey;
overlay.targetValue = update.targetValue;
overlay.lastUpdatedTime = oscNow;
overlay.lastAppliedTime = oscNow;
overlay.generation = 1;
mOscOverlayStates[routeKey] = std::move(overlay);
}
else
{
overlayIt->second.targetValue = update.targetValue;
overlayIt->second.lastUpdatedTime = oscNow;
overlayIt->second.generation += 1;
overlayIt->second.commitQueued = false;
}
pendingOscRouteKeys.insert(routeKey);
}
const auto applyOscOverlays = [&](std::vector<RuntimeRenderState>& states, bool allowCommit)
{
if (states.empty() || mOscOverlayStates.empty() || !mRuntimeHost)
return;
const double smoothing = ClampOscAlpha(mRuntimeHost->GetOscSmoothing());
std::vector<std::string> overlayKeysToRemove;
for (auto& item : mOscOverlayStates)
{
OscOverlayState& overlay = item.second;
auto stateIt = std::find_if(states.begin(), states.end(),
[&overlay](const RuntimeRenderState& state)
{
return MatchesOscControlKey(state.layerId, overlay.layerKey) ||
MatchesOscControlKey(state.shaderId, overlay.layerKey) ||
MatchesOscControlKey(state.shaderName, overlay.layerKey);
});
if (stateIt == states.end())
continue;
auto definitionIt = std::find_if(stateIt->parameterDefinitions.begin(), stateIt->parameterDefinitions.end(),
[&overlay](const ShaderParameterDefinition& definition)
{
return MatchesOscControlKey(definition.id, overlay.parameterKey) ||
MatchesOscControlKey(definition.label, overlay.parameterKey);
});
if (definitionIt == stateIt->parameterDefinitions.end())
continue;
if (definitionIt->type == ShaderParameterType::Trigger)
{
if (pendingOscRouteKeys.find(item.first) == pendingOscRouteKeys.end())
continue;
ShaderParameterValue& value = stateIt->parameterValues[definitionIt->id];
const double previousCount = value.numberValues.empty() ? 0.0 : value.numberValues[0];
const double triggerTime = stateIt->timeSeconds;
value.numberValues = { previousCount + 1.0, triggerTime };
overlayKeysToRemove.push_back(item.first);
continue;
}
ShaderParameterValue targetValue;
std::string normalizeError;
if (!NormalizeAndValidateParameterValue(*definitionIt, overlay.targetValue, targetValue, normalizeError))
continue;
const bool smoothable =
smoothing > 0.0 &&
(definitionIt->type == ShaderParameterType::Float ||
definitionIt->type == ShaderParameterType::Vec2 ||
definitionIt->type == ShaderParameterType::Color);
if (!smoothable)
{
overlay.currentValue = targetValue;
overlay.hasCurrentValue = true;
stateIt->parameterValues[definitionIt->id] = overlay.currentValue;
if (allowCommit &&
!overlay.commitQueued &&
oscNow - overlay.lastUpdatedTime >= kOscOverlayCommitDelay &&
mRuntimeServices)
{
std::string commitError;
if (mRuntimeServices->QueueOscCommit(item.first, overlay.layerKey, overlay.parameterKey, overlay.targetValue, overlay.generation, commitError))
{
overlay.pendingCommitGeneration = overlay.generation;
overlay.commitQueued = true;
}
}
continue;
}
if (!overlay.hasCurrentValue)
{
overlay.currentValue = DefaultValueForDefinition(*definitionIt);
auto currentIt = stateIt->parameterValues.find(definitionIt->id);
if (currentIt != stateIt->parameterValues.end())
overlay.currentValue = currentIt->second;
overlay.hasCurrentValue = true;
}
if (overlay.currentValue.numberValues.size() != targetValue.numberValues.size())
overlay.currentValue.numberValues = targetValue.numberValues;
double smoothingAlpha = smoothing;
if (overlay.lastAppliedTime != std::chrono::steady_clock::time_point())
{
const double deltaSeconds =
std::chrono::duration_cast<std::chrono::duration<double>>(oscNow - overlay.lastAppliedTime).count();
smoothingAlpha = ComputeTimeBasedOscAlpha(smoothing, deltaSeconds);
}
overlay.lastAppliedTime = oscNow;
ShaderParameterValue nextValue = targetValue;
bool converged = true;
for (std::size_t index = 0; index < targetValue.numberValues.size(); ++index)
{
const double currentNumber = overlay.currentValue.numberValues[index];
const double targetNumber = targetValue.numberValues[index];
const double delta = targetNumber - currentNumber;
double nextNumber = currentNumber + delta * smoothingAlpha;
if (std::fabs(delta) <= 0.0005)
nextNumber = targetNumber;
else
converged = false;
nextValue.numberValues[index] = nextNumber;
}
if (converged)
nextValue.numberValues = targetValue.numberValues;
overlay.currentValue = nextValue;
overlay.hasCurrentValue = true;
stateIt->parameterValues[definitionIt->id] = overlay.currentValue;
if (allowCommit &&
converged &&
!overlay.commitQueued &&
oscNow - overlay.lastUpdatedTime >= kOscOverlayCommitDelay &&
mRuntimeServices)
{
std::string commitError;
JsonValue committedValue = BuildOscCommitValue(*definitionIt, overlay.currentValue);
if (mRuntimeServices->QueueOscCommit(item.first, overlay.layerKey, overlay.parameterKey, committedValue, overlay.generation, commitError))
{
overlay.pendingCommitGeneration = overlay.generation;
overlay.commitQueued = true;
}
}
}
for (const std::string& overlayKey : overlayKeysToRemove)
mOscOverlayStates.erase(overlayKey);
};
const bool hasInputSource = mVideoIO->HasInputSource();
std::vector<RuntimeRenderState> layerStates;
if (mUseCommittedLayerStates)
{
layerStates = mShaderPrograms->CommittedLayerStates();
applyOscOverlays(layerStates, false);
if (mRuntimeHost)
mRuntimeHost->RefreshDynamicRenderStateFields(layerStates);
}
else if (mRuntimeHost)
{
const unsigned renderWidth = mVideoIO->InputFrameWidth();
const unsigned renderHeight = mVideoIO->InputFrameHeight();
const uint64_t renderStateVersion = mRuntimeHost->GetRenderStateVersion();
const uint64_t parameterStateVersion = mRuntimeHost->GetParameterStateVersion();
const bool renderStateCacheValid =
!mCachedLayerRenderStates.empty() &&
mCachedRenderStateVersion == renderStateVersion &&
mCachedRenderStateWidth == renderWidth &&
mCachedRenderStateHeight == renderHeight;
if (renderStateCacheValid)
{
applyOscOverlays(mCachedLayerRenderStates, true);
if (mCachedParameterStateVersion != parameterStateVersion &&
mRuntimeHost->TryRefreshCachedLayerStates(mCachedLayerRenderStates))
{
mCachedParameterStateVersion = parameterStateVersion;
applyOscOverlays(mCachedLayerRenderStates, true);
}
layerStates = mCachedLayerRenderStates;
mRuntimeHost->RefreshDynamicRenderStateFields(layerStates);
}
else
{
if (mRuntimeHost->TryGetLayerRenderStates(renderWidth, renderHeight, layerStates))
{
mCachedLayerRenderStates = layerStates;
mCachedRenderStateVersion = renderStateVersion;
mCachedParameterStateVersion = parameterStateVersion;
mCachedRenderStateWidth = renderWidth;
mCachedRenderStateHeight = renderHeight;
applyOscOverlays(mCachedLayerRenderStates, true);
layerStates = mCachedLayerRenderStates;
}
else
{
applyOscOverlays(mCachedLayerRenderStates, true);
layerStates = mCachedLayerRenderStates;
mRuntimeHost->RefreshDynamicRenderStateFields(layerStates);
}
}
}
const unsigned historyCap = mRuntimeHost ? mRuntimeHost->GetMaxTemporalHistoryFrames() : 0;
mRenderPass->Render(
hasInputSource,
layerStates,
mVideoIO->InputFrameWidth(),
mVideoIO->InputFrameHeight(),
mVideoIO->CaptureTextureWidth(),
mVideoIO->InputPixelFormat(),
historyCap,
[this](const RuntimeRenderState& state, LayerProgram::TextBinding& textBinding, std::string& error) {
return mShaderPrograms->UpdateTextBindingTexture(state, textBinding, error);
},
[this](const RuntimeRenderState& state, unsigned availableSourceHistoryLength, unsigned availableTemporalHistoryLength, bool feedbackAvailable) {
return mShaderPrograms->UpdateGlobalParamsBuffer(state, availableSourceHistoryLength, availableTemporalHistoryLength, feedbackAvailable);
});
}
void OpenGLComposite::ProcessScreenshotRequest()
{
if (!mScreenshotRequested.exchange(false))
return;
const unsigned width = mVideoIO ? mVideoIO->OutputFrameWidth() : 0;
const unsigned height = mVideoIO ? mVideoIO->OutputFrameHeight() : 0;
if (width == 0 || height == 0)
return;
std::vector<unsigned char> bottomUpPixels(static_cast<std::size_t>(width) * height * 4);
std::vector<unsigned char> topDownPixels(bottomUpPixels.size());
glBindFramebuffer(GL_READ_FRAMEBUFFER, mRenderer->OutputFramebuffer());
glReadBuffer(GL_COLOR_ATTACHMENT0);
glPixelStorei(GL_PACK_ALIGNMENT, 1);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, bottomUpPixels.data());
glPixelStorei(GL_PACK_ALIGNMENT, 4);
const std::size_t rowBytes = static_cast<std::size_t>(width) * 4;
for (unsigned y = 0; y < height; ++y)
{
const unsigned sourceY = height - 1 - y;
std::copy(
bottomUpPixels.begin() + static_cast<std::ptrdiff_t>(sourceY * rowBytes),
bottomUpPixels.begin() + static_cast<std::ptrdiff_t>((sourceY + 1) * rowBytes),
topDownPixels.begin() + static_cast<std::ptrdiff_t>(y * rowBytes));
}
try
{
const std::filesystem::path outputPath = BuildScreenshotPath();
std::filesystem::create_directories(outputPath.parent_path());
WritePngFileAsync(outputPath, width, height, std::move(topDownPixels));
}
catch (const std::exception& exception)
{
OutputDebugStringA((std::string("Screenshot request failed: ") + exception.what() + "\n").c_str());
}
}
std::filesystem::path OpenGLComposite::BuildScreenshotPath() const
{
const std::filesystem::path root = mRuntimeHost && !mRuntimeHost->GetRuntimeRoot().empty()
? mRuntimeHost->GetRuntimeRoot()
: std::filesystem::current_path();
const auto now = std::chrono::system_clock::now();
const auto milliseconds = std::chrono::duration_cast<std::chrono::milliseconds>(now.time_since_epoch()) % 1000;
const std::time_t nowTime = std::chrono::system_clock::to_time_t(now);
std::tm localTime = {};
localtime_s(&localTime, &nowTime);
std::ostringstream filename;
filename << "video-shader-toys-"
<< std::put_time(&localTime, "%Y%m%d-%H%M%S")
<< "-" << std::setw(3) << std::setfill('0') << milliseconds.count()
<< ".png";
return root / "screenshots" / filename.str();
}
bool OpenGLComposite::ProcessRuntimePollResults()
{
if (!mRuntimeHost || !mRuntimeServices)
return true;
const RuntimePollEvents events = mRuntimeServices->ConsumePollEvents();
if (events.failed)
{
mRuntimeHost->SetCompileStatus(false, events.error);
broadcastRuntimeState();
return false;
}
if (events.registryChanged)
broadcastRuntimeState();
if (!events.reloadRequested)
{
PreparedShaderBuild readyBuild;
if (!mShaderBuildQueue || !mShaderBuildQueue->TryConsumeReadyBuild(readyBuild))
return true;
char compilerErrorMessage[1024] = {};
if (!mShaderPrograms->CommitPreparedLayerPrograms(readyBuild, mVideoIO->InputFrameWidth(), mVideoIO->InputFrameHeight(), sizeof(compilerErrorMessage), compilerErrorMessage))
{
mRuntimeHost->SetCompileStatus(false, compilerErrorMessage);
mUseCommittedLayerStates = true;
mPreserveFeedbackOnNextShaderBuild = false;
broadcastRuntimeState();
return false;
}
mUseCommittedLayerStates = false;
mCachedLayerRenderStates = mShaderPrograms->CommittedLayerStates();
mShaderPrograms->ResetTemporalHistoryState();
if (!mPreserveFeedbackOnNextShaderBuild)
mShaderPrograms->ResetShaderFeedbackState();
mPreserveFeedbackOnNextShaderBuild = false;
broadcastRuntimeState();
return true;
}
mRuntimeHost->SetCompileStatus(true, "Shader rebuild queued.");
mPreserveFeedbackOnNextShaderBuild = false;
RequestShaderBuild();
broadcastRuntimeState();
return true;
}
void OpenGLComposite::RequestShaderBuild()
{
if (!mShaderBuildQueue || !mVideoIO)
return;
mUseCommittedLayerStates = true;
if (mRuntimeHost)
mRuntimeHost->ClearReloadRequest();
mShaderBuildQueue->RequestBuild(mVideoIO->InputFrameWidth(), mVideoIO->InputFrameHeight());
}
void OpenGLComposite::broadcastRuntimeState()
{
if (mRuntimeServices)
mRuntimeServices->BroadcastState();
}
void OpenGLComposite::resetTemporalHistoryState()
{
mShaderPrograms->ResetTemporalHistoryState();
mShaderPrograms->ResetShaderFeedbackState();
}
bool OpenGLComposite::CheckOpenGLExtensions()
{
return true;
}
////////////////////////////////////////////
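ComputeTimeBasedOscAlpha rescales a per-update smoothing factor (tuned at the 60 fps reference rate) to the actual frame delta, so the blend converges at the same speed regardless of frame rate. A self-contained sketch of the core formula, omitting the max-step clamp of the original for brevity (function name here is illustrative):

```cpp
#include <algorithm>
#include <cmath>

// Frame-rate-independent exponential smoothing: the per-frame retention
// (1 - s) is raised to the number of 60 fps reference frames that elapsed.
double TimeBasedAlpha(double smoothing, double deltaSeconds)
{
    const double s = std::clamp(smoothing, 0.0, 1.0);
    if (s <= 0.0 || deltaSeconds <= 0.0)
        return 0.0;
    if (s >= 1.0)
        return 1.0;
    return 1.0 - std::pow(1.0 - s, deltaSeconds * 60.0);
}
```

The useful property is composition: blending with alpha(a) then alpha(b) lands exactly where a single blend with alpha(a + b) would, i.e. 1 - (1 - A1)(1 - A2) == alpha(a + b), which is why two short frames advance the overlay as far as one long frame.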


@@ -0,0 +1,125 @@
#ifndef __OPENGL_COMPOSITE_H__
#define __OPENGL_COMPOSITE_H__
#include <windows.h>
#include <process.h>
#include <tchar.h>
#include <gl/gl.h>
#include <gl/glu.h>
#include <objbase.h>
#include <atlbase.h>
#include <comutil.h>
#include "GLExtensions.h"
#include "OpenGLRenderer.h"
#include "RuntimeHost.h"
#include <functional>
#include <atomic>
#include <filesystem>
#include <map>
#include <memory>
#include <string>
#include <vector>
#include <deque>
#include <chrono>
class VideoIODevice;
class OpenGLVideoIOBridge;
class OpenGLRenderPass;
class OpenGLRenderPipeline;
class OpenGLShaderPrograms;
class RuntimeServices;
class ShaderBuildQueue;
class OpenGLComposite
{
public:
OpenGLComposite(HWND hWnd, HDC hDC, HGLRC hRC);
~OpenGLComposite();
bool InitDeckLink();
bool InitVideoIO();
bool Start();
bool Stop();
bool ReloadShader(bool preserveFeedbackState = false);
std::string GetRuntimeStateJson() const;
bool AddLayer(const std::string& shaderId, std::string& error);
bool RemoveLayer(const std::string& layerId, std::string& error);
bool MoveLayer(const std::string& layerId, int direction, std::string& error);
bool MoveLayerToIndex(const std::string& layerId, std::size_t targetIndex, std::string& error);
bool SetLayerBypass(const std::string& layerId, bool bypassed, std::string& error);
bool SetLayerShader(const std::string& layerId, const std::string& shaderId, std::string& error);
bool UpdateLayerParameterJson(const std::string& layerId, const std::string& parameterId, const std::string& valueJson, std::string& error);
bool UpdateLayerParameterByControlKeyJson(const std::string& layerKey, const std::string& parameterKey, const std::string& valueJson, std::string& error);
bool ResetLayerParameters(const std::string& layerId, std::string& error);
bool SaveStackPreset(const std::string& presetName, std::string& error);
bool LoadStackPreset(const std::string& presetName, std::string& error);
bool RequestScreenshot(std::string& error);
unsigned short GetControlServerPort() const;
unsigned short GetOscPort() const;
std::string GetOscBindAddress() const;
std::string GetControlUrl() const;
std::string GetDocsUrl() const;
std::string GetOscAddress() const;
void resizeGL(WORD width, WORD height);
void paintGL(bool force = false);
private:
void resizeWindow(int width, int height);
bool CheckOpenGLExtensions();
void PublishVideoIOStatus(const std::string& statusMessage);
using LayerProgram = OpenGLRenderer::LayerProgram;
struct OscOverlayState
{
std::string layerKey;
std::string parameterKey;
JsonValue targetValue;
ShaderParameterValue currentValue;
bool hasCurrentValue = false;
std::chrono::steady_clock::time_point lastUpdatedTime;
std::chrono::steady_clock::time_point lastAppliedTime;
uint64_t generation = 0;
uint64_t pendingCommitGeneration = 0;
bool commitQueued = false;
};
HWND hGLWnd;
HDC hGLDC;
HGLRC hGLRC;
CRITICAL_SECTION pMutex;
std::unique_ptr<VideoIODevice> mVideoIO;
std::unique_ptr<OpenGLRenderer> mRenderer;
std::unique_ptr<RuntimeHost> mRuntimeHost;
std::unique_ptr<OpenGLVideoIOBridge> mVideoIOBridge;
std::unique_ptr<OpenGLRenderPass> mRenderPass;
std::unique_ptr<OpenGLRenderPipeline> mRenderPipeline;
std::unique_ptr<OpenGLShaderPrograms> mShaderPrograms;
std::unique_ptr<ShaderBuildQueue> mShaderBuildQueue;
std::unique_ptr<RuntimeServices> mRuntimeServices;
std::vector<RuntimeRenderState> mCachedLayerRenderStates;
uint64_t mCachedRenderStateVersion = 0;
uint64_t mCachedParameterStateVersion = 0;
unsigned mCachedRenderStateWidth = 0;
unsigned mCachedRenderStateHeight = 0;
std::map<std::string, OscOverlayState> mOscOverlayStates;
std::atomic<bool> mUseCommittedLayerStates{ false };
std::atomic<bool> mScreenshotRequested{ false };
std::chrono::steady_clock::time_point mLastPreviewPresentTime;
bool mPreserveFeedbackOnNextShaderBuild = false;
bool InitOpenGLState();
void renderEffect();
bool ProcessRuntimePollResults();
void RequestShaderBuild();
void ProcessScreenshotRequest();
std::filesystem::path BuildScreenshotPath() const;
void broadcastRuntimeState();
void resetTemporalHistoryState();
};
#endif // __OPENGL_COMPOSITE_H__


@@ -0,0 +1,156 @@
#include "OpenGLComposite.h"
#include "RuntimeServices.h"
std::string OpenGLComposite::GetRuntimeStateJson() const
{
return mRuntimeHost ? mRuntimeHost->BuildStateJson() : "{}";
}
unsigned short OpenGLComposite::GetControlServerPort() const
{
return mRuntimeHost ? mRuntimeHost->GetServerPort() : 0;
}
unsigned short OpenGLComposite::GetOscPort() const
{
return mRuntimeHost ? mRuntimeHost->GetOscPort() : 0;
}
std::string OpenGLComposite::GetOscBindAddress() const
{
return mRuntimeHost ? mRuntimeHost->GetOscBindAddress() : "127.0.0.1";
}
std::string OpenGLComposite::GetControlUrl() const
{
return "http://127.0.0.1:" + std::to_string(GetControlServerPort()) + "/";
}
std::string OpenGLComposite::GetDocsUrl() const
{
return "http://127.0.0.1:" + std::to_string(GetControlServerPort()) + "/docs";
}
std::string OpenGLComposite::GetOscAddress() const
{
return "udp://" + GetOscBindAddress() + ":" + std::to_string(GetOscPort()) + " /VideoShaderToys/{Layer}/{Parameter}";
}
bool OpenGLComposite::AddLayer(const std::string& shaderId, std::string& error)
{
if (!mRuntimeHost->AddLayer(shaderId, error))
return false;
ReloadShader(true);
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::RemoveLayer(const std::string& layerId, std::string& error)
{
if (!mRuntimeHost->RemoveLayer(layerId, error))
return false;
ReloadShader(true);
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::MoveLayer(const std::string& layerId, int direction, std::string& error)
{
if (!mRuntimeHost->MoveLayer(layerId, direction, error))
return false;
ReloadShader(true);
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::MoveLayerToIndex(const std::string& layerId, std::size_t targetIndex, std::string& error)
{
if (!mRuntimeHost->MoveLayerToIndex(layerId, targetIndex, error))
return false;
ReloadShader(true);
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::SetLayerBypass(const std::string& layerId, bool bypassed, std::string& error)
{
if (!mRuntimeHost->SetLayerBypass(layerId, bypassed, error))
return false;
ReloadShader();
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::SetLayerShader(const std::string& layerId, const std::string& shaderId, std::string& error)
{
if (!mRuntimeHost->SetLayerShader(layerId, shaderId, error))
return false;
ReloadShader();
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::UpdateLayerParameterJson(const std::string& layerId, const std::string& parameterId, const std::string& valueJson, std::string& error)
{
JsonValue parsedValue;
if (!ParseJson(valueJson, parsedValue, error))
return false;
if (!mRuntimeHost->UpdateLayerParameter(layerId, parameterId, parsedValue, error))
return false;
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::UpdateLayerParameterByControlKeyJson(const std::string& layerKey, const std::string& parameterKey, const std::string& valueJson, std::string& error)
{
JsonValue parsedValue;
if (!ParseJson(valueJson, parsedValue, error))
return false;
if (!mRuntimeHost->UpdateLayerParameterByControlKey(layerKey, parameterKey, parsedValue, error))
return false;
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::ResetLayerParameters(const std::string& layerId, std::string& error)
{
if (!mRuntimeHost->ResetLayerParameters(layerId, error))
return false;
mOscOverlayStates.clear();
if (mRuntimeServices)
mRuntimeServices->ClearOscState();
resetTemporalHistoryState();
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::SaveStackPreset(const std::string& presetName, std::string& error)
{
if (!mRuntimeHost->SaveStackPreset(presetName, error))
return false;
broadcastRuntimeState();
return true;
}
bool OpenGLComposite::LoadStackPreset(const std::string& presetName, std::string& error)
{
if (!mRuntimeHost->LoadStackPreset(presetName, error))
return false;
ReloadShader();
broadcastRuntimeState();
return true;
}


@@ -0,0 +1,288 @@
#include "OpenGLRenderPass.h"
#include "GlRenderConstants.h"
#include <map>
OpenGLRenderPass::OpenGLRenderPass(OpenGLRenderer& renderer) :
mRenderer(renderer)
{
}
void OpenGLRenderPass::Render(
bool hasInputSource,
const std::vector<RuntimeRenderState>& layerStates,
unsigned inputFrameWidth,
unsigned inputFrameHeight,
unsigned captureTextureWidth,
VideoIOPixelFormat inputPixelFormat,
unsigned historyCap,
const TextBindingUpdater& updateTextBinding,
const GlobalParamsUpdater& updateGlobalParams)
{
glDisable(GL_SCISSOR_TEST);
glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
if (hasInputSource)
{
RenderDecodePass(inputFrameWidth, inputFrameHeight, captureTextureWidth, inputPixelFormat);
}
else
{
glBindFramebuffer(GL_FRAMEBUFFER, mRenderer.DecodeFramebuffer());
glViewport(0, 0, inputFrameWidth, inputFrameHeight);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
}
std::vector<LayerProgram>& layerPrograms = mRenderer.LayerPrograms();
if (layerStates.empty() || layerPrograms.empty())
{
glBindFramebuffer(GL_READ_FRAMEBUFFER, mRenderer.DecodeFramebuffer());
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mRenderer.CompositeFramebuffer());
glBlitFramebuffer(0, 0, inputFrameWidth, inputFrameHeight, 0, 0, inputFrameWidth, inputFrameHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, mRenderer.CompositeFramebuffer());
}
else
{
const std::vector<RenderPassDescriptor>& passes = BuildLayerPassDescriptors(layerStates, layerPrograms);
for (const RenderPassDescriptor& pass : passes)
{
RenderLayerPass(
pass,
inputFrameWidth,
inputFrameHeight,
historyCap,
updateTextBinding,
updateGlobalParams);
}
}
mRenderer.TemporalHistory().PushSourceFramebuffer(mRenderer.DecodeFramebuffer(), inputFrameWidth, inputFrameHeight);
mRenderer.FeedbackBuffers().FinalizeFrame();
}
void OpenGLRenderPass::RenderDecodePass(unsigned inputFrameWidth, unsigned inputFrameHeight, unsigned captureTextureWidth, VideoIOPixelFormat inputPixelFormat)
{
glBindFramebuffer(GL_FRAMEBUFFER, mRenderer.DecodeFramebuffer());
glViewport(0, 0, inputFrameWidth, inputFrameHeight);
glClear(GL_COLOR_BUFFER_BIT);
glActiveTexture(GL_TEXTURE0 + kPackedVideoTextureUnit);
glBindTexture(GL_TEXTURE_2D, mRenderer.CaptureTexture());
glBindVertexArray(mRenderer.FullscreenVertexArray());
glUseProgram(mRenderer.DecodeProgram());
const GLint packedResolutionLocation = mRenderer.DecodePackedResolutionLocation();
const GLint decodedResolutionLocation = mRenderer.DecodeDecodedResolutionLocation();
const GLint inputPixelFormatLocation = mRenderer.DecodeInputPixelFormatLocation();
if (packedResolutionLocation >= 0)
glUniform2f(packedResolutionLocation, static_cast<float>(captureTextureWidth), static_cast<float>(inputFrameHeight));
if (decodedResolutionLocation >= 0)
glUniform2f(decodedResolutionLocation, static_cast<float>(inputFrameWidth), static_cast<float>(inputFrameHeight));
if (inputPixelFormatLocation >= 0)
glUniform1i(inputPixelFormatLocation, inputPixelFormat == VideoIOPixelFormat::V210 ? 1 : 0);
glDrawArrays(GL_TRIANGLES, 0, 3);
glUseProgram(0);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D, 0);
glActiveTexture(GL_TEXTURE0);
}
std::vector<RenderPassDescriptor> OpenGLRenderPass::BuildLayerPassDescriptors(
const std::vector<RuntimeRenderState>& layerStates,
std::vector<LayerProgram>& layerPrograms) const
{
// Flatten the layer stack into concrete GL passes. A layer may now contain
// several shader passes, but the outer stack still sees one visible output
// per layer.
std::vector<RenderPassDescriptor>& passes = mPassScratch;
passes.clear();
const std::size_t passCount = layerStates.size() < layerPrograms.size() ? layerStates.size() : layerPrograms.size();
std::size_t descriptorCount = 0;
for (std::size_t index = 0; index < passCount; ++index)
descriptorCount += layerPrograms[index].passes.size();
passes.reserve(descriptorCount);
GLuint sourceTexture = mRenderer.DecodedTexture();
GLuint sourceFramebuffer = mRenderer.DecodeFramebuffer();
for (std::size_t index = 0; index < passCount; ++index)
{
const RuntimeRenderState& state = layerStates[index];
LayerProgram& layerProgram = layerPrograms[index];
if (layerProgram.passes.empty())
continue;
// Preserve the original two-target layer ping-pong. Intermediate passes
// inside this layer are routed through pooled temporary targets instead.
// Count parity over the layers that will actually render (passCount) so the
// final executed layer always writes to the composite target, even if the
// state and program lists ever disagree in length.
const std::size_t remaining = passCount - index;
const bool writeToMain = (remaining % 2) == 1;
const GLuint layerOutputTexture = writeToMain ? mRenderer.CompositeTexture() : mRenderer.LayerTempTexture();
const GLuint layerOutputFramebuffer = writeToMain ? mRenderer.CompositeFramebuffer() : mRenderer.LayerTempFramebuffer();
const RenderPassOutputTarget layerOutputTarget = writeToMain ? RenderPassOutputTarget::Composite : RenderPassOutputTarget::LayerTemp;
const GLuint layerInputTexture = sourceTexture;
const GLuint layerInputFramebuffer = sourceFramebuffer;
GLuint previousPassTexture = layerInputTexture;
GLuint previousPassFramebuffer = layerInputFramebuffer;
std::map<std::string, std::pair<GLuint, GLuint>> namedOutputs;
std::size_t temporaryTargetIndex = 0;
for (std::size_t passIndex = 0; passIndex < layerProgram.passes.size(); ++passIndex)
{
PassProgram& passProgram = layerProgram.passes[passIndex];
const bool lastPassForLayer = passIndex + 1 == layerProgram.passes.size();
const std::string outputName = passProgram.outputName.empty() ? passProgram.passId : passProgram.outputName;
const bool writesLayerOutput = outputName == "layerOutput" || lastPassForLayer;
GLuint passSourceTexture = previousPassTexture;
GLuint passSourceFramebuffer = previousPassFramebuffer;
if (!passProgram.inputNames.empty())
{
// v1 multipass uses the first declared input as gVideoInput.
// Later inputs are parsed for forward compatibility.
const std::string& inputName = passProgram.inputNames.front();
if (inputName == "layerInput")
{
passSourceTexture = layerInputTexture;
passSourceFramebuffer = layerInputFramebuffer;
}
else if (inputName == "previousPass")
{
passSourceTexture = previousPassTexture;
passSourceFramebuffer = previousPassFramebuffer;
}
else
{
auto namedOutputIt = namedOutputs.find(inputName);
if (namedOutputIt != namedOutputs.end())
{
passSourceTexture = namedOutputIt->second.first;
passSourceFramebuffer = namedOutputIt->second.second;
}
}
}
GLuint passDestinationTexture = layerOutputTexture;
GLuint passDestinationFramebuffer = layerOutputFramebuffer;
RenderPassOutputTarget outputTarget = layerOutputTarget;
if (!writesLayerOutput)
{
// Temporary targets are reserved when the shader stack is
// committed, avoiding texture allocation during playback.
if (temporaryTargetIndex < mRenderer.TemporaryRenderTargetCount())
{
const RenderTarget& temporaryTarget = mRenderer.TemporaryRenderTarget(temporaryTargetIndex);
++temporaryTargetIndex;
passDestinationTexture = temporaryTarget.texture;
passDestinationFramebuffer = temporaryTarget.framebuffer;
outputTarget = RenderPassOutputTarget::Temporary;
}
}
RenderPassDescriptor pass;
pass.kind = RenderPassKind::LayerEffect;
pass.outputTarget = outputTarget;
pass.passIndex = passes.size();
pass.passId = passProgram.passId;
pass.layerId = state.layerId;
pass.shaderId = state.shaderId;
pass.layerInputTexture = layerInputTexture;
pass.sourceTexture = passSourceTexture;
pass.sourceFramebuffer = passIndex == 0 ? layerInputFramebuffer : passSourceFramebuffer;
pass.destinationTexture = passDestinationTexture;
pass.destinationFramebuffer = passDestinationFramebuffer;
pass.layerProgram = &layerProgram;
pass.passProgram = &passProgram;
pass.layerState = &state;
pass.capturePreLayerHistory = passIndex == 0 && state.temporalHistorySource == TemporalHistorySource::PreLayerInput;
pass.captureFeedbackWrite = state.feedback.enabled && passProgram.passId == state.feedback.writePassId;
passes.push_back(pass);
// A later pass can reference either the explicit output name or the
// pass id, which keeps small manifests pleasant to write.
namedOutputs[outputName] = std::make_pair(passDestinationTexture, passDestinationFramebuffer);
namedOutputs[passProgram.passId] = std::make_pair(passDestinationTexture, passDestinationFramebuffer);
previousPassTexture = passDestinationTexture;
previousPassFramebuffer = passDestinationFramebuffer;
}
sourceTexture = layerOutputTexture;
sourceFramebuffer = layerOutputFramebuffer;
}
return passes;
}
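The two-target ping-pong above relies on a parity rule: with only a composite target and one temporary target available per layer, writing to the composite target whenever an odd number of layers remain guarantees the last layer always lands in the composite target. A minimal standalone sketch of that rule (the `Target` enum and `PlanTargets` helper here are illustrative, not part of the renderer):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative model of the layer ping-pong parity rule: alternate between two
// targets, choosing the main (composite) target whenever an odd number of
// layers remain, so the final layer always writes to the composite target.
enum class Target { Composite, LayerTemp };

std::vector<Target> PlanTargets(std::size_t layerCount)
{
    std::vector<Target> plan;
    plan.reserve(layerCount);
    for (std::size_t index = 0; index < layerCount; ++index)
    {
        const std::size_t remaining = layerCount - index;
        plan.push_back((remaining % 2) == 1 ? Target::Composite : Target::LayerTemp);
    }
    return plan;
}
```

For any stack size the plan alternates targets and ends on `Composite`, which is why no final copy pass is needed after the layer loop.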
void OpenGLRenderPass::RenderLayerPass(
const RenderPassDescriptor& pass,
unsigned inputFrameWidth,
unsigned inputFrameHeight,
unsigned historyCap,
const TextBindingUpdater& updateTextBinding,
const GlobalParamsUpdater& updateGlobalParams)
{
if (pass.passProgram == nullptr || pass.layerState == nullptr)
return;
RenderShaderProgram(
pass.layerInputTexture,
pass.sourceTexture,
pass.destinationFramebuffer,
*pass.passProgram,
*pass.layerState,
inputFrameWidth,
inputFrameHeight,
historyCap,
updateTextBinding,
updateGlobalParams);
if (pass.capturePreLayerHistory)
mRenderer.TemporalHistory().PushPreLayerFramebuffer(pass.layerId, pass.sourceFramebuffer, inputFrameWidth, inputFrameHeight);
if (pass.captureFeedbackWrite)
mRenderer.FeedbackBuffers().CaptureFeedbackFramebuffer(pass.layerId, pass.destinationFramebuffer, inputFrameWidth, inputFrameHeight);
}
void OpenGLRenderPass::RenderShaderProgram(
GLuint layerInputTexture,
GLuint sourceTexture,
GLuint destinationFrameBuffer,
PassProgram& passProgram,
const RuntimeRenderState& state,
unsigned inputFrameWidth,
unsigned inputFrameHeight,
unsigned historyCap,
const TextBindingUpdater& updateTextBinding,
const GlobalParamsUpdater& updateGlobalParams)
{
for (LayerProgram::TextBinding& textBinding : passProgram.textBindings)
{
std::string textError;
if (!updateTextBinding(state, textBinding, textError))
OutputDebugStringA((textError + "\n").c_str());
}
glBindFramebuffer(GL_FRAMEBUFFER, destinationFrameBuffer);
glViewport(0, 0, inputFrameWidth, inputFrameHeight);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
const std::vector<GLuint> sourceHistoryTextures = mRenderer.TemporalHistory().ResolveSourceHistoryTextures(sourceTexture, state.isTemporal ? historyCap : 0);
const std::vector<GLuint> temporalHistoryTextures = mRenderer.TemporalHistory().ResolveTemporalHistoryTextures(state, sourceTexture, state.isTemporal ? historyCap : 0);
const GLuint feedbackTexture = mRenderer.FeedbackBuffers().ResolveReadTexture(state);
const ShaderTextureBindings::RuntimeTextureBindingPlan texturePlan =
mTextureBindings.BuildLayerRuntimeBindingPlan(passProgram, sourceTexture, layerInputTexture, state, feedbackTexture, sourceHistoryTextures, temporalHistoryTextures);
mTextureBindings.BindRuntimeTexturePlan(texturePlan);
glBindVertexArray(mRenderer.FullscreenVertexArray());
glUseProgram(passProgram.program);
// The UBO is shared by every pass in a layer; texture routing is what
// changes from pass to pass.
updateGlobalParams(
state,
mRenderer.TemporalHistory().SourceAvailableCount(),
mRenderer.TemporalHistory().AvailableCountForLayer(state.layerId),
mRenderer.FeedbackBuffers().FeedbackAvailable(state));
glDrawArrays(GL_TRIANGLES, 0, 3);
glUseProgram(0);
glBindVertexArray(0);
mTextureBindings.UnbindRuntimeTexturePlan(texturePlan);
}


@@ -0,0 +1,61 @@
#pragma once
#include "OpenGLRenderer.h"
#include "RenderPassDescriptor.h"
#include "ShaderTextureBindings.h"
#include "ShaderTypes.h"
#include "VideoIOFormat.h"
#include <functional>
#include <string>
#include <vector>
class OpenGLRenderPass
{
public:
using LayerProgram = OpenGLRenderer::LayerProgram;
using PassProgram = OpenGLRenderer::LayerProgram::PassProgram;
using TextBindingUpdater = std::function<bool(const RuntimeRenderState&, LayerProgram::TextBinding&, std::string&)>;
using GlobalParamsUpdater = std::function<bool(const RuntimeRenderState&, unsigned, unsigned, bool)>;
explicit OpenGLRenderPass(OpenGLRenderer& renderer);
void Render(
bool hasInputSource,
const std::vector<RuntimeRenderState>& layerStates,
unsigned inputFrameWidth,
unsigned inputFrameHeight,
unsigned captureTextureWidth,
VideoIOPixelFormat inputPixelFormat,
unsigned historyCap,
const TextBindingUpdater& updateTextBinding,
const GlobalParamsUpdater& updateGlobalParams);
private:
void RenderDecodePass(unsigned inputFrameWidth, unsigned inputFrameHeight, unsigned captureTextureWidth, VideoIOPixelFormat inputPixelFormat);
std::vector<RenderPassDescriptor> BuildLayerPassDescriptors(
const std::vector<RuntimeRenderState>& layerStates,
std::vector<LayerProgram>& layerPrograms) const;
void RenderLayerPass(
const RenderPassDescriptor& pass,
unsigned inputFrameWidth,
unsigned inputFrameHeight,
unsigned historyCap,
const TextBindingUpdater& updateTextBinding,
const GlobalParamsUpdater& updateGlobalParams);
void RenderShaderProgram(
GLuint layerInputTexture,
GLuint sourceTexture,
GLuint destinationFrameBuffer,
PassProgram& passProgram,
const RuntimeRenderState& state,
unsigned inputFrameWidth,
unsigned inputFrameHeight,
unsigned historyCap,
const TextBindingUpdater& updateTextBinding,
const GlobalParamsUpdater& updateGlobalParams);
OpenGLRenderer& mRenderer;
ShaderTextureBindings mTextureBindings;
mutable std::vector<RenderPassDescriptor> mPassScratch;
};


@@ -0,0 +1,279 @@
#include "OpenGLRenderPipeline.h"
#include "OpenGLRenderer.h"
#include "RuntimeHost.h"
#include "VideoIOFormat.h"
#include <cstring>
#include <chrono>
#include <gl/gl.h>
OpenGLRenderPipeline::OpenGLRenderPipeline(
OpenGLRenderer& renderer,
RuntimeHost& runtimeHost,
RenderEffectCallback renderEffect,
OutputReadyCallback outputReady,
PaintCallback paint) :
mRenderer(renderer),
mRuntimeHost(runtimeHost),
mRenderEffect(renderEffect),
mOutputReady(outputReady),
mPaint(paint)
{
}
OpenGLRenderPipeline::~OpenGLRenderPipeline()
{
ResetAsyncReadbackState();
}
bool OpenGLRenderPipeline::RenderFrame(const RenderPipelineFrameContext& context, VideoIOOutputFrame& outputFrame)
{
const VideoIOState& state = context.videoState;
const auto renderStartTime = std::chrono::steady_clock::now();
glBindFramebuffer(GL_FRAMEBUFFER, mRenderer.CompositeFramebuffer());
mRenderEffect();
glBindFramebuffer(GL_READ_FRAMEBUFFER, mRenderer.CompositeFramebuffer());
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, mRenderer.OutputFramebuffer());
glBlitFramebuffer(0, 0, state.inputFrameSize.width, state.inputFrameSize.height, 0, 0, state.outputFrameSize.width, state.outputFrameSize.height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, mRenderer.OutputFramebuffer());
if (mOutputReady)
mOutputReady();
if (state.outputPixelFormat == VideoIOPixelFormat::V210 || state.outputPixelFormat == VideoIOPixelFormat::Yuva10)
PackOutputFor10Bit(state);
glFlush();
const auto renderEndTime = std::chrono::steady_clock::now();
const double renderMilliseconds = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(renderEndTime - renderStartTime).count();
mRuntimeHost.TrySetPerformanceStats(state.frameBudgetMilliseconds, renderMilliseconds);
mRuntimeHost.TryAdvanceFrame();
ReadOutputFrame(state, outputFrame);
if (mPaint)
mPaint();
return true;
}
void OpenGLRenderPipeline::PackOutputFor10Bit(const VideoIOState& state)
{
glBindFramebuffer(GL_FRAMEBUFFER, mRenderer.OutputPackFramebuffer());
glViewport(0, 0, state.outputPackTextureWidth, state.outputFrameSize.height);
glDisable(GL_SCISSOR_TEST);
glDisable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, mRenderer.OutputTexture());
glBindVertexArray(mRenderer.FullscreenVertexArray());
glUseProgram(mRenderer.OutputPackProgram());
const GLint outputResolutionLocation = mRenderer.OutputPackResolutionLocation();
const GLint activeWordsLocation = mRenderer.OutputPackActiveWordsLocation();
const GLint packFormatLocation = mRenderer.OutputPackFormatLocation();
if (outputResolutionLocation >= 0)
glUniform2f(outputResolutionLocation, static_cast<float>(state.outputFrameSize.width), static_cast<float>(state.outputFrameSize.height));
if (activeWordsLocation >= 0)
glUniform1f(activeWordsLocation, static_cast<float>(ActiveV210WordsForWidth(state.outputFrameSize.width)));
if (packFormatLocation >= 0)
glUniform1i(packFormatLocation, state.outputPixelFormat == VideoIOPixelFormat::Yuva10 ? 2 : 1);
glDrawArrays(GL_TRIANGLES, 0, 3);
glUseProgram(0);
glBindVertexArray(0);
glBindTexture(GL_TEXTURE_2D, 0);
}
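`ActiveV210WordsForWidth` is defined elsewhere in the codebase; the following is a hypothetical sketch of the arithmetic such a helper most likely performs, based on the standard v210 layout. v210 packs 6 pixels into four 32-bit words, so the words carrying real pixel data in a row of `width` pixels is `ceil(width / 6) * 4`; the row itself is then typically padded to a 128-byte multiple by the capture API.

```cpp
#include <cassert>

// Hypothetical reconstruction of the active-word count for a v210 row:
// every group of 6 pixels occupies four 32-bit words (16 bytes).
unsigned ActiveV210WordsSketch(unsigned width)
{
    return ((width + 5u) / 6u) * 4u;
}
```

For a 1920-pixel row this gives 1280 words (5120 bytes), which matches the well-known v210 row size for 1080-line video before padding.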
bool OpenGLRenderPipeline::EnsureAsyncReadbackBuffers(std::size_t requiredBytes)
{
if (requiredBytes == 0)
return false;
if (mAsyncReadbackBytes == requiredBytes && mAsyncReadbackSlots[0].pixelPackBuffer != 0)
return true;
ResetAsyncReadbackState();
mAsyncReadbackBytes = requiredBytes;
for (AsyncReadbackSlot& slot : mAsyncReadbackSlots)
{
glGenBuffers(1, &slot.pixelPackBuffer);
glBindBuffer(GL_PIXEL_PACK_BUFFER, slot.pixelPackBuffer);
glBufferData(GL_PIXEL_PACK_BUFFER, static_cast<GLsizeiptr>(requiredBytes), nullptr, GL_STREAM_READ);
slot.sizeBytes = requiredBytes;
slot.inFlight = false;
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
mAsyncReadbackWriteIndex = 0;
mAsyncReadbackReadIndex = 0;
return true;
}
void OpenGLRenderPipeline::ResetAsyncReadbackState()
{
FlushAsyncReadbackPipeline();
for (AsyncReadbackSlot& slot : mAsyncReadbackSlots)
{
slot.sizeBytes = 0;
if (slot.pixelPackBuffer != 0)
{
glDeleteBuffers(1, &slot.pixelPackBuffer);
slot.pixelPackBuffer = 0;
}
}
mAsyncReadbackWriteIndex = 0;
mAsyncReadbackReadIndex = 0;
mAsyncReadbackBytes = 0;
}
void OpenGLRenderPipeline::FlushAsyncReadbackPipeline()
{
for (AsyncReadbackSlot& slot : mAsyncReadbackSlots)
{
if (slot.fence != nullptr)
{
glDeleteSync(slot.fence);
slot.fence = nullptr;
}
slot.inFlight = false;
}
mAsyncReadbackWriteIndex = 0;
mAsyncReadbackReadIndex = 0;
}
void OpenGLRenderPipeline::QueueAsyncReadback(const VideoIOState& state)
{
const bool usePackedOutput = state.outputPixelFormat == VideoIOPixelFormat::V210 || state.outputPixelFormat == VideoIOPixelFormat::Yuva10;
const std::size_t requiredBytes = static_cast<std::size_t>(state.outputFrameRowBytes) * state.outputFrameSize.height;
const GLenum format = usePackedOutput ? GL_RGBA : GL_BGRA;
const GLenum type = usePackedOutput ? GL_UNSIGNED_BYTE : GL_UNSIGNED_INT_8_8_8_8_REV;
const GLuint framebuffer = usePackedOutput ? mRenderer.OutputPackFramebuffer() : mRenderer.OutputFramebuffer();
const GLsizei readWidth = static_cast<GLsizei>(usePackedOutput ? state.outputPackTextureWidth : state.outputFrameSize.width);
const GLsizei readHeight = static_cast<GLsizei>(state.outputFrameSize.height);
if (requiredBytes == 0)
return;
if (mAsyncReadbackBytes != requiredBytes
|| mAsyncReadbackFormat != format
|| mAsyncReadbackType != type
|| mAsyncReadbackFramebuffer != framebuffer)
{
mAsyncReadbackFormat = format;
mAsyncReadbackType = type;
mAsyncReadbackFramebuffer = framebuffer;
if (!EnsureAsyncReadbackBuffers(requiredBytes))
return;
}
AsyncReadbackSlot& slot = mAsyncReadbackSlots[mAsyncReadbackWriteIndex];
if (slot.fence != nullptr)
{
glDeleteSync(slot.fence);
slot.fence = nullptr;
}
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, framebuffer);
glBindBuffer(GL_PIXEL_PACK_BUFFER, slot.pixelPackBuffer);
glBufferData(GL_PIXEL_PACK_BUFFER, static_cast<GLsizeiptr>(requiredBytes), nullptr, GL_STREAM_READ);
glReadPixels(0, 0, readWidth, readHeight, format, type, nullptr);
slot.fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
slot.inFlight = slot.fence != nullptr;
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
mAsyncReadbackWriteIndex = (mAsyncReadbackWriteIndex + 1) % mAsyncReadbackSlots.size();
}
bool OpenGLRenderPipeline::TryConsumeAsyncReadback(VideoIOOutputFrame& outputFrame, GLuint64 timeoutNanoseconds)
{
if (mAsyncReadbackBytes == 0 || outputFrame.bytes == nullptr)
return false;
AsyncReadbackSlot& slot = mAsyncReadbackSlots[mAsyncReadbackReadIndex];
if (!slot.inFlight || slot.fence == nullptr || slot.pixelPackBuffer == 0)
return false;
const GLenum waitFlags = timeoutNanoseconds > 0 ? GL_SYNC_FLUSH_COMMANDS_BIT : 0;
const GLenum waitResult = glClientWaitSync(slot.fence, waitFlags, timeoutNanoseconds);
if (waitResult != GL_ALREADY_SIGNALED && waitResult != GL_CONDITION_SATISFIED)
return false;
glDeleteSync(slot.fence);
slot.fence = nullptr;
glBindBuffer(GL_PIXEL_PACK_BUFFER, slot.pixelPackBuffer);
void* mappedBytes = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (mappedBytes == nullptr)
{
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
slot.inFlight = false;
mAsyncReadbackReadIndex = (mAsyncReadbackReadIndex + 1) % mAsyncReadbackSlots.size();
return false;
}
std::memcpy(outputFrame.bytes, mappedBytes, slot.sizeBytes);
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
slot.inFlight = false;
mAsyncReadbackReadIndex = (mAsyncReadbackReadIndex + 1) % mAsyncReadbackSlots.size();
CacheOutputFrame(outputFrame);
return true;
}
void OpenGLRenderPipeline::CacheOutputFrame(const VideoIOOutputFrame& outputFrame)
{
if (outputFrame.bytes == nullptr || outputFrame.height == 0 || outputFrame.rowBytes <= 0)
return;
const std::size_t byteCount = static_cast<std::size_t>(outputFrame.rowBytes) * outputFrame.height;
mCachedOutputFrame.resize(byteCount);
std::memcpy(mCachedOutputFrame.data(), outputFrame.bytes, byteCount);
}
void OpenGLRenderPipeline::ReadOutputFrameSynchronously(const VideoIOState& state, void* destinationBytes)
{
const bool usePackedOutput = state.outputPixelFormat == VideoIOPixelFormat::V210 || state.outputPixelFormat == VideoIOPixelFormat::Yuva10;
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glPixelStorei(GL_PACK_ROW_LENGTH, 0);
if (usePackedOutput)
{
glBindFramebuffer(GL_READ_FRAMEBUFFER, mRenderer.OutputPackFramebuffer());
glReadPixels(0, 0, state.outputPackTextureWidth, state.outputFrameSize.height, GL_RGBA, GL_UNSIGNED_BYTE, destinationBytes);
}
else
{
glBindFramebuffer(GL_READ_FRAMEBUFFER, mRenderer.OutputFramebuffer());
glReadPixels(0, 0, state.outputFrameSize.width, state.outputFrameSize.height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, destinationBytes);
}
}
void OpenGLRenderPipeline::ReadOutputFrame(const VideoIOState& state, VideoIOOutputFrame& outputFrame)
{
if (TryConsumeAsyncReadback(outputFrame, 500000))
{
QueueAsyncReadback(state);
return;
}
// If async readback misses the playout deadline, prefer a fresh synchronous
// frame over reusing stale cached output, then restart the async pipeline.
if (outputFrame.bytes != nullptr)
{
ReadOutputFrameSynchronously(state, outputFrame.bytes);
CacheOutputFrame(outputFrame);
}
FlushAsyncReadbackPipeline();
QueueAsyncReadback(state);
}
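The readback path above can be pictured as a small ring of pixel-pack buffers: each frame first tries to drain the oldest queued readback, then queues a new one, so the CPU never blocks on the frame that was just rendered. A minimal index-arithmetic model of that ring (the `ReadbackRing` type here is illustrative; the slot count of 3 matches `mAsyncReadbackSlots`, and fences plus the synchronous fallback are elided):

```cpp
#include <array>
#include <cassert>
#include <cstddef>

// Illustrative model of the PBO readback ring: writeIndex queues new
// transfers, readIndex drains the oldest one, both wrapping modulo the
// slot count.
struct ReadbackRing
{
    static constexpr std::size_t kSlots = 3;
    std::array<bool, kSlots> inFlight{};
    std::size_t writeIndex = 0;
    std::size_t readIndex = 0;

    bool TryConsume()
    {
        if (!inFlight[readIndex])
            return false; // nothing queued yet (e.g. the very first frame)
        inFlight[readIndex] = false; // map / memcpy / unmap would go here
        readIndex = (readIndex + 1) % kSlots;
        return true;
    }

    void Queue()
    {
        inFlight[writeIndex] = true; // glReadPixels into this slot's PBO
        writeIndex = (writeIndex + 1) % kSlots;
    }
};
```

The first frame misses (nothing is in flight yet), which in the real pipeline is exactly the case handled by the synchronous `glReadPixels` fallback.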


@@ -0,0 +1,68 @@
#pragma once
#include "GLExtensions.h"
#include "VideoIOTypes.h"
#include <array>
#include <functional>
#include <vector>
class OpenGLRenderer;
class RuntimeHost;
struct RenderPipelineFrameContext
{
VideoIOState videoState;
VideoIOCompletion completion;
};
class OpenGLRenderPipeline
{
public:
using RenderEffectCallback = std::function<void()>;
using OutputReadyCallback = std::function<void()>;
using PaintCallback = std::function<void()>;
OpenGLRenderPipeline(
OpenGLRenderer& renderer,
RuntimeHost& runtimeHost,
RenderEffectCallback renderEffect,
OutputReadyCallback outputReady,
PaintCallback paint);
~OpenGLRenderPipeline();
bool RenderFrame(const RenderPipelineFrameContext& context, VideoIOOutputFrame& outputFrame);
private:
struct AsyncReadbackSlot
{
GLuint pixelPackBuffer = 0;
GLsync fence = nullptr;
std::size_t sizeBytes = 0;
bool inFlight = false;
};
bool EnsureAsyncReadbackBuffers(std::size_t requiredBytes);
void ResetAsyncReadbackState();
void FlushAsyncReadbackPipeline();
void QueueAsyncReadback(const VideoIOState& state);
bool TryConsumeAsyncReadback(VideoIOOutputFrame& outputFrame, GLuint64 timeoutNanoseconds);
void CacheOutputFrame(const VideoIOOutputFrame& outputFrame);
void ReadOutputFrameSynchronously(const VideoIOState& state, void* destinationBytes);
void PackOutputFor10Bit(const VideoIOState& state);
void ReadOutputFrame(const VideoIOState& state, VideoIOOutputFrame& outputFrame);
OpenGLRenderer& mRenderer;
RuntimeHost& mRuntimeHost;
RenderEffectCallback mRenderEffect;
OutputReadyCallback mOutputReady;
PaintCallback mPaint;
std::array<AsyncReadbackSlot, 3> mAsyncReadbackSlots;
std::size_t mAsyncReadbackWriteIndex = 0;
std::size_t mAsyncReadbackReadIndex = 0;
std::size_t mAsyncReadbackBytes = 0;
GLenum mAsyncReadbackFormat = GL_BGRA;
GLenum mAsyncReadbackType = GL_UNSIGNED_INT_8_8_8_8_REV;
GLuint mAsyncReadbackFramebuffer = 0;
std::vector<unsigned char> mCachedOutputFrame;
};


@@ -0,0 +1,124 @@
#include "OpenGLVideoIOBridge.h"
#include "OpenGLRenderer.h"
#include "RuntimeHost.h"
#include <chrono>
#include <gl/gl.h>
OpenGLVideoIOBridge::OpenGLVideoIOBridge(
VideoIODevice& videoIO,
OpenGLRenderer& renderer,
OpenGLRenderPipeline& renderPipeline,
RuntimeHost& runtimeHost,
CRITICAL_SECTION& mutex,
HDC hdc,
HGLRC hglrc) :
mVideoIO(videoIO),
mRenderer(renderer),
mRenderPipeline(renderPipeline),
mRuntimeHost(runtimeHost),
mMutex(mutex),
mHdc(hdc),
mHglrc(hglrc)
{
}
void OpenGLVideoIOBridge::RecordFramePacing(VideoIOCompletionResult completionResult)
{
const auto now = std::chrono::steady_clock::now();
if (mLastPlayoutCompletionTime != std::chrono::steady_clock::time_point())
{
mCompletionIntervalMilliseconds = std::chrono::duration_cast<std::chrono::duration<double, std::milli>>(now - mLastPlayoutCompletionTime).count();
if (mSmoothedCompletionIntervalMilliseconds <= 0.0)
mSmoothedCompletionIntervalMilliseconds = mCompletionIntervalMilliseconds;
else
mSmoothedCompletionIntervalMilliseconds = mSmoothedCompletionIntervalMilliseconds * 0.9 + mCompletionIntervalMilliseconds * 0.1;
if (mCompletionIntervalMilliseconds > mMaxCompletionIntervalMilliseconds)
mMaxCompletionIntervalMilliseconds = mCompletionIntervalMilliseconds;
}
mLastPlayoutCompletionTime = now;
if (completionResult == VideoIOCompletionResult::DisplayedLate)
++mLateFrameCount;
else if (completionResult == VideoIOCompletionResult::Dropped)
++mDroppedFrameCount;
else if (completionResult == VideoIOCompletionResult::Flushed)
++mFlushedFrameCount;
mRuntimeHost.TrySetFramePacingStats(
mCompletionIntervalMilliseconds,
mSmoothedCompletionIntervalMilliseconds,
mMaxCompletionIntervalMilliseconds,
mLateFrameCount,
mDroppedFrameCount,
mFlushedFrameCount);
}
void OpenGLVideoIOBridge::VideoFrameArrived(const VideoIOFrame& inputFrame)
{
const VideoIOState& state = mVideoIO.State();
mRuntimeHost.TrySetSignalStatus(!inputFrame.hasNoInputSource, state.inputFrameSize.width, state.inputFrameSize.height, state.inputDisplayModeName);
if (inputFrame.hasNoInputSource || inputFrame.bytes == nullptr)
return; // don't transfer texture when there's no input
const long textureSize = inputFrame.rowBytes * static_cast<long>(inputFrame.height);
// Never let input upload stall the playout/render callback. If the GL bridge
// is busy producing an output frame, skip this upload and use the next input.
if (!TryEnterCriticalSection(&mMutex))
return;
wglMakeCurrent(mHdc, mHglrc); // make OpenGL context current in this thread
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, mRenderer.TextureUploadBuffer());
glBufferData(GL_PIXEL_UNPACK_BUFFER, textureSize, inputFrame.bytes, GL_DYNAMIC_DRAW);
glBindTexture(GL_TEXTURE_2D, mRenderer.CaptureTexture());
// A null data pointer sources the texture data from the buffer currently bound to GL_PIXEL_UNPACK_BUFFER.
if (inputFrame.pixelFormat == VideoIOPixelFormat::V210)
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, state.captureTextureWidth, state.inputFrameSize.height, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
else
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, state.captureTextureWidth, state.inputFrameSize.height, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, NULL);
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
wglMakeCurrent(NULL, NULL);
LeaveCriticalSection(&mMutex);
}
void OpenGLVideoIOBridge::PlayoutFrameCompleted(const VideoIOCompletion& completion)
{
RecordFramePacing(completion.result);
VideoIOOutputFrame outputFrame;
if (!mVideoIO.BeginOutputFrame(outputFrame))
return;
const VideoIOState& state = mVideoIO.State();
RenderPipelineFrameContext frameContext;
frameContext.videoState = state;
frameContext.completion = completion;
EnterCriticalSection(&mMutex);
// make GL context current in this thread
wglMakeCurrent(mHdc, mHglrc);
mRenderPipeline.RenderFrame(frameContext, outputFrame);
wglMakeCurrent(NULL, NULL);
LeaveCriticalSection(&mMutex);
mVideoIO.EndOutputFrame(outputFrame);
mVideoIO.AccountForCompletionResult(completion.result);
// Schedule the next frame for playout after the GL bridge is released so
// input uploads are not blocked by non-GL output bookkeeping.
mVideoIO.ScheduleOutputFrame(outputFrame);
}


@@ -0,0 +1,44 @@
#pragma once
#include "OpenGLRenderPipeline.h"
#include <windows.h>
#include <chrono>
#include <cstdint>
class RuntimeHost;
class OpenGLVideoIOBridge
{
public:
OpenGLVideoIOBridge(
VideoIODevice& videoIO,
OpenGLRenderer& renderer,
OpenGLRenderPipeline& renderPipeline,
RuntimeHost& runtimeHost,
CRITICAL_SECTION& mutex,
HDC hdc,
HGLRC hglrc);
void VideoFrameArrived(const VideoIOFrame& inputFrame);
void PlayoutFrameCompleted(const VideoIOCompletion& completion);
private:
void RecordFramePacing(VideoIOCompletionResult completionResult);
VideoIODevice& mVideoIO;
OpenGLRenderer& mRenderer;
OpenGLRenderPipeline& mRenderPipeline;
RuntimeHost& mRuntimeHost;
CRITICAL_SECTION& mMutex;
HDC mHdc;
HGLRC mHglrc;
std::chrono::steady_clock::time_point mLastPlayoutCompletionTime;
double mCompletionIntervalMilliseconds = 0.0;
double mSmoothedCompletionIntervalMilliseconds = 0.0;
double mMaxCompletionIntervalMilliseconds = 0.0;
uint64_t mLateFrameCount = 0;
uint64_t mDroppedFrameCount = 0;
uint64_t mFlushedFrameCount = 0;
};


@@ -0,0 +1,137 @@
#include "PngScreenshotWriter.h"
#include <windows.h>
#include <wincodec.h>
#include <atlbase.h>
#include <sstream>
#include <thread>
namespace
{
std::string HResultToString(HRESULT hr)
{
std::ostringstream stream;
stream << "HRESULT 0x" << std::hex << static_cast<unsigned long>(hr);
return stream.str();
}
bool WritePngFile(
const std::filesystem::path& outputPath,
unsigned width,
unsigned height,
const std::vector<unsigned char>& bgraPixels,
std::string& error)
{
if (width == 0 || height == 0 || bgraPixels.size() < static_cast<std::size_t>(width) * height * 4)
{
error = "Invalid screenshot dimensions or pixel buffer.";
return false;
}
HRESULT initializeResult = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
const bool shouldUninitialize = SUCCEEDED(initializeResult);
if (FAILED(initializeResult) && initializeResult != RPC_E_CHANGED_MODE)
{
error = "CoInitializeEx failed: " + HResultToString(initializeResult);
return false;
}
CComPtr<IWICImagingFactory> factory;
HRESULT result = CoCreateInstance(
CLSID_WICImagingFactory,
nullptr,
CLSCTX_INPROC_SERVER,
IID_PPV_ARGS(&factory));
if (FAILED(result))
{
error = "Could not create WIC imaging factory: " + HResultToString(result);
if (shouldUninitialize)
CoUninitialize();
return false;
}
CComPtr<IWICStream> stream;
result = factory->CreateStream(&stream);
if (SUCCEEDED(result))
result = stream->InitializeFromFilename(outputPath.wstring().c_str(), GENERIC_WRITE);
if (FAILED(result))
{
error = "Could not open screenshot output file: " + HResultToString(result);
if (shouldUninitialize)
CoUninitialize();
return false;
}
CComPtr<IWICBitmapEncoder> encoder;
result = factory->CreateEncoder(GUID_ContainerFormatPng, nullptr, &encoder);
if (SUCCEEDED(result))
result = encoder->Initialize(stream, WICBitmapEncoderNoCache);
if (FAILED(result))
{
error = "Could not initialize PNG encoder: " + HResultToString(result);
if (shouldUninitialize)
CoUninitialize();
return false;
}
CComPtr<IWICBitmapFrameEncode> frame;
CComPtr<IPropertyBag2> propertyBag;
result = encoder->CreateNewFrame(&frame, &propertyBag);
if (SUCCEEDED(result))
result = frame->Initialize(propertyBag);
if (SUCCEEDED(result))
result = frame->SetSize(width, height);
WICPixelFormatGUID pixelFormat = GUID_WICPixelFormat32bppBGRA;
if (SUCCEEDED(result))
result = frame->SetPixelFormat(&pixelFormat);
if (SUCCEEDED(result) && pixelFormat != GUID_WICPixelFormat32bppBGRA)
{
error = "PNG encoder did not accept BGRA pixel format.";
result = E_FAIL;
}
const UINT stride = width * 4;
const UINT imageSize = stride * height;
if (SUCCEEDED(result))
result = frame->WritePixels(height, stride, imageSize, const_cast<BYTE*>(bgraPixels.data()));
if (SUCCEEDED(result))
result = frame->Commit();
if (SUCCEEDED(result))
result = encoder->Commit();
if (shouldUninitialize)
CoUninitialize();
if (FAILED(result))
{
error = "Could not write screenshot PNG: " + HResultToString(result);
std::error_code ignored;
std::filesystem::remove(outputPath, ignored);
return false;
}
return true;
}
}
void WritePngFileAsync(
const std::filesystem::path& outputPath,
unsigned width,
unsigned height,
std::vector<unsigned char> rgbaPixels)
{
std::thread(
[outputPath, width, height, pixels = std::move(rgbaPixels)]() mutable
{
for (std::size_t index = 0; index + 3 < pixels.size(); index += 4)
std::swap(pixels[index], pixels[index + 2]);
std::string error;
if (!WritePngFile(outputPath, width, height, pixels, error))
OutputDebugStringA(("Screenshot write failed: " + error + "\n").c_str());
else
OutputDebugStringA(("Screenshot written: " + outputPath.string() + "\n").c_str());
}).detach();
}


@@ -0,0 +1,12 @@
#pragma once
#include <filesystem>
#include <string>
#include <vector>
void WritePngFileAsync(
const std::filesystem::path& outputPath,
unsigned width,
unsigned height,
std::vector<unsigned char> rgbaPixels);


@@ -0,0 +1,41 @@
#pragma once
#include "OpenGLRenderer.h"
#include "ShaderTypes.h"
#include <gl/gl.h>
#include <cstddef>
#include <string>
enum class RenderPassKind
{
LayerEffect
};
enum class RenderPassOutputTarget
{
Temporary,
LayerTemp,
Composite
};
struct RenderPassDescriptor
{
RenderPassKind kind = RenderPassKind::LayerEffect;
RenderPassOutputTarget outputTarget = RenderPassOutputTarget::Composite;
std::size_t passIndex = 0;
std::string passId;
std::string layerId;
std::string shaderId;
GLuint layerInputTexture = 0;
GLuint sourceTexture = 0;
GLuint sourceFramebuffer = 0;
GLuint destinationTexture = 0;
GLuint destinationFramebuffer = 0;
OpenGLRenderer::LayerProgram* layerProgram = nullptr;
OpenGLRenderer::LayerProgram::PassProgram* passProgram = nullptr;
const RuntimeRenderState* layerState = nullptr;
bool capturePreLayerHistory = false;
bool captureFeedbackWrite = false;
};


@@ -0,0 +1,202 @@
#include "ShaderFeedbackBuffers.h"
#include <set>
namespace
{
void ConfigureFeedbackTexture(unsigned frameWidth, unsigned frameHeight)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, frameWidth, frameHeight, 0, GL_RGBA, GL_FLOAT, NULL);
}
}
bool ShaderFeedbackBuffers::EnsureResources(const std::vector<RuntimeRenderState>& layerStates, unsigned frameWidth, unsigned frameHeight, std::string& error)
{
if (!EnsureZeroTexture())
{
error = "Failed to initialize shader feedback fallback texture.";
return false;
}
std::set<std::string> requiredLayerIds;
for (const RuntimeRenderState& state : layerStates)
{
if (!state.feedback.enabled)
continue;
requiredLayerIds.insert(state.layerId);
auto surfaceIt = mSurfacesByLayerId.find(state.layerId);
if (surfaceIt == mSurfacesByLayerId.end() ||
surfaceIt->second.width != frameWidth ||
surfaceIt->second.height != frameHeight)
{
Surface replacement;
if (!CreateSurface(replacement, frameWidth, frameHeight, error))
return false;
mSurfacesByLayerId[state.layerId] = std::move(replacement);
}
}
for (auto it = mSurfacesByLayerId.begin(); it != mSurfacesByLayerId.end();)
{
if (requiredLayerIds.find(it->first) == requiredLayerIds.end())
{
DestroySurface(it->second);
it = mSurfacesByLayerId.erase(it);
}
else
{
++it;
}
}
return true;
}
void ShaderFeedbackBuffers::DestroyResources()
{
for (auto& entry : mSurfacesByLayerId)
DestroySurface(entry.second);
mSurfacesByLayerId.clear();
if (mZeroTexture != 0)
{
glDeleteTextures(1, &mZeroTexture);
mZeroTexture = 0;
}
}
void ShaderFeedbackBuffers::ResetState()
{
for (auto& entry : mSurfacesByLayerId)
ClearSurfaceState(entry.second);
}
GLuint ShaderFeedbackBuffers::ResolveReadTexture(const RuntimeRenderState& state) const
{
if (!state.feedback.enabled)
return mZeroTexture;
auto surfaceIt = mSurfacesByLayerId.find(state.layerId);
if (surfaceIt == mSurfacesByLayerId.end() || !surfaceIt->second.hasData)
return mZeroTexture;
return surfaceIt->second.slots[surfaceIt->second.readIndex].texture != 0
? surfaceIt->second.slots[surfaceIt->second.readIndex].texture
: mZeroTexture;
}
bool ShaderFeedbackBuffers::FeedbackAvailable(const RuntimeRenderState& state) const
{
if (!state.feedback.enabled)
return false;
auto surfaceIt = mSurfacesByLayerId.find(state.layerId);
return surfaceIt != mSurfacesByLayerId.end() && surfaceIt->second.hasData;
}
void ShaderFeedbackBuffers::CaptureFeedbackFramebuffer(const std::string& layerId, GLuint sourceFramebuffer, unsigned frameWidth, unsigned frameHeight)
{
auto surfaceIt = mSurfacesByLayerId.find(layerId);
if (surfaceIt == mSurfacesByLayerId.end())
return;
Surface& surface = surfaceIt->second;
const unsigned writeIndex = 1u - surface.readIndex;
glBindFramebuffer(GL_READ_FRAMEBUFFER, sourceFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, surface.slots[writeIndex].framebuffer);
glBlitFramebuffer(0, 0, frameWidth, frameHeight, 0, 0, frameWidth, frameHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
surface.pendingWrite = true;
}
void ShaderFeedbackBuffers::FinalizeFrame()
{
for (auto& entry : mSurfacesByLayerId)
{
Surface& surface = entry.second;
if (!surface.pendingWrite)
continue;
surface.readIndex = 1u - surface.readIndex;
surface.hasData = true;
surface.pendingWrite = false;
}
}
bool ShaderFeedbackBuffers::EnsureZeroTexture()
{
if (mZeroTexture != 0)
return true;
glGenTextures(1, &mZeroTexture);
glBindTexture(GL_TEXTURE_2D, mZeroTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
const float zeroPixel[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, 1, 1, 0, GL_RGBA, GL_FLOAT, zeroPixel);
glBindTexture(GL_TEXTURE_2D, 0);
return mZeroTexture != 0;
}
bool ShaderFeedbackBuffers::CreateSurface(Surface& surface, unsigned frameWidth, unsigned frameHeight, std::string& error)
{
DestroySurface(surface);
surface.width = frameWidth;
surface.height = frameHeight;
for (Slot& slot : surface.slots)
{
glGenTextures(1, &slot.texture);
glBindTexture(GL_TEXTURE_2D, slot.texture);
ConfigureFeedbackTexture(frameWidth, frameHeight);
glGenFramebuffers(1, &slot.framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, slot.framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, slot.texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
error = "Failed to initialize a shader feedback framebuffer.";
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
DestroySurface(surface);
return false;
}
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
ClearSurfaceState(surface);
return true;
}
void ShaderFeedbackBuffers::DestroySurface(Surface& surface)
{
for (Slot& slot : surface.slots)
{
if (slot.framebuffer != 0)
glDeleteFramebuffers(1, &slot.framebuffer);
if (slot.texture != 0)
glDeleteTextures(1, &slot.texture);
slot.framebuffer = 0;
slot.texture = 0;
}
surface.width = 0;
surface.height = 0;
surface.readIndex = 0;
surface.hasData = false;
surface.pendingWrite = false;
}
void ShaderFeedbackBuffers::ClearSurfaceState(Surface& surface)
{
surface.readIndex = 0;
surface.hasData = false;
surface.pendingWrite = false;
}


@@ -0,0 +1,46 @@
#pragma once
#include "GLExtensions.h"
#include "ShaderTypes.h"
#include <map>
#include <string>
#include <vector>
class ShaderFeedbackBuffers
{
public:
struct Slot
{
GLuint texture = 0;
GLuint framebuffer = 0;
};
struct Surface
{
Slot slots[2];
unsigned width = 0;
unsigned height = 0;
unsigned readIndex = 0;
bool hasData = false;
bool pendingWrite = false;
};
bool EnsureResources(const std::vector<RuntimeRenderState>& layerStates, unsigned frameWidth, unsigned frameHeight, std::string& error);
void DestroyResources();
void ResetState();
GLuint ResolveReadTexture(const RuntimeRenderState& state) const;
bool FeedbackAvailable(const RuntimeRenderState& state) const;
void CaptureFeedbackFramebuffer(const std::string& layerId, GLuint sourceFramebuffer, unsigned frameWidth, unsigned frameHeight);
void FinalizeFrame();
private:
bool EnsureZeroTexture();
bool CreateSurface(Surface& surface, unsigned frameWidth, unsigned frameHeight, std::string& error);
void DestroySurface(Surface& surface);
void ClearSurfaceState(Surface& surface);
private:
std::map<std::string, Surface> mSurfacesByLayerId;
GLuint mZeroTexture = 0;
};


@@ -0,0 +1,261 @@
#include "TemporalHistoryBuffers.h"
#include "GlRenderConstants.h"
#include "ShaderTypes.h"
#include <algorithm>
#include <sstream>
#include <set>
bool TemporalHistoryBuffers::ValidateTextureUnitBudget(const std::vector<RuntimeRenderState>& layerStates, unsigned historyCap, std::string& error) const
{
unsigned requiredUnits = kSourceHistoryTextureUnitBase;
for (const RuntimeRenderState& state : layerStates)
{
unsigned textTextureCount = 0;
for (const ShaderParameterDefinition& definition : state.parameterDefinitions)
{
if (definition.type == ShaderParameterType::Text)
++textTextureCount;
}
const unsigned totalShaderTextures = static_cast<unsigned>(state.textureAssets.size()) + textTextureCount;
const unsigned feedbackTextureCount = state.feedback.enabled ? 1u : 0u;
// A temporal layer binds historyCap source-history units plus historyCap pre-layer-history units (see BindSamplers).
const unsigned layerRequiredUnits = kSourceHistoryTextureUnitBase + (state.isTemporal ? 2u * historyCap : 0u) + feedbackTextureCount + totalShaderTextures;
if (layerRequiredUnits > requiredUnits)
requiredUnits = layerRequiredUnits;
}
GLint maxTextureUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &maxTextureUnits);
const unsigned availableUnits = maxTextureUnits > 0 ? static_cast<unsigned>(maxTextureUnits) : 0u;
if (requiredUnits > availableUnits)
{
std::ostringstream message;
message << "The current history and shader texture asset configuration requires " << requiredUnits
<< " fragment texture units, but only " << maxTextureUnits << " are available.";
error = message.str();
return false;
}
return true;
}
bool TemporalHistoryBuffers::EnsureResources(const std::vector<RuntimeRenderState>& layerStates, unsigned historyCap, unsigned frameWidth, unsigned frameHeight, std::string& error)
{
const bool sourceHistoryNeeded = std::any_of(layerStates.begin(), layerStates.end(),
[](const RuntimeRenderState& state) { return state.isTemporal && state.effectiveTemporalHistoryLength > 0; });
const unsigned sourceHistoryLength = sourceHistoryNeeded ? historyCap : 0;
if (sourceHistoryRing.effectiveLength != sourceHistoryLength)
{
if (!CreateRing(sourceHistoryRing, sourceHistoryLength, TemporalHistorySource::Source, frameWidth, frameHeight, error))
return false;
mNeedsReset = true;
}
std::set<std::string> requiredPreLayerIds;
for (const RuntimeRenderState& state : layerStates)
{
if (!state.isTemporal || state.temporalHistorySource != TemporalHistorySource::PreLayerInput)
continue;
requiredPreLayerIds.insert(state.layerId);
auto historyIt = preLayerHistoryByLayerId.find(state.layerId);
if (historyIt == preLayerHistoryByLayerId.end() || historyIt->second.effectiveLength != state.effectiveTemporalHistoryLength)
{
Ring replacement;
if (!CreateRing(replacement, state.effectiveTemporalHistoryLength, TemporalHistorySource::PreLayerInput, frameWidth, frameHeight, error))
return false;
preLayerHistoryByLayerId[state.layerId] = std::move(replacement);
mNeedsReset = true;
}
}
for (auto it = preLayerHistoryByLayerId.begin(); it != preLayerHistoryByLayerId.end();)
{
if (requiredPreLayerIds.find(it->first) == requiredPreLayerIds.end())
{
DestroyRing(it->second);
it = preLayerHistoryByLayerId.erase(it);
mNeedsReset = true;
}
else
{
++it;
}
}
if (mNeedsReset)
ResetState();
return true;
}
bool TemporalHistoryBuffers::CreateRing(Ring& ring, unsigned effectiveLength, TemporalHistorySource historySource, unsigned frameWidth, unsigned frameHeight, std::string& error)
{
DestroyRing(ring);
ring.effectiveLength = effectiveLength;
ring.historySource = historySource;
if (effectiveLength == 0)
return true;
ring.slots.resize(effectiveLength);
for (Slot& slot : ring.slots)
{
glGenTextures(1, &slot.texture);
glBindTexture(GL_TEXTURE_2D, slot.texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, frameWidth, frameHeight, 0, GL_RGBA, GL_FLOAT, NULL);
glGenFramebuffers(1, &slot.framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, slot.framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, slot.texture, 0);
const GLenum framebufferStatus = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (framebufferStatus != GL_FRAMEBUFFER_COMPLETE)
{
error = "Failed to initialize a temporal history framebuffer.";
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
DestroyRing(ring);
return false;
}
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, 0);
return true;
}
void TemporalHistoryBuffers::DestroyRing(Ring& ring)
{
for (Slot& slot : ring.slots)
{
if (slot.framebuffer != 0)
glDeleteFramebuffers(1, &slot.framebuffer);
if (slot.texture != 0)
glDeleteTextures(1, &slot.texture);
slot.framebuffer = 0;
slot.texture = 0;
}
ring.slots.clear();
ring.nextWriteIndex = 0;
ring.filledCount = 0;
ring.effectiveLength = 0;
ring.historySource = TemporalHistorySource::None;
}
void TemporalHistoryBuffers::DestroyResources()
{
DestroyRing(sourceHistoryRing);
for (auto& historyEntry : preLayerHistoryByLayerId)
DestroyRing(historyEntry.second);
preLayerHistoryByLayerId.clear();
mNeedsReset = true;
}
void TemporalHistoryBuffers::ResetState()
{
sourceHistoryRing.nextWriteIndex = 0;
sourceHistoryRing.filledCount = 0;
for (auto& historyEntry : preLayerHistoryByLayerId)
{
historyEntry.second.nextWriteIndex = 0;
historyEntry.second.filledCount = 0;
}
mNeedsReset = false;
}
void TemporalHistoryBuffers::PushFramebuffer(GLuint sourceFramebuffer, Ring& ring, unsigned frameWidth, unsigned frameHeight)
{
if (ring.effectiveLength == 0 || ring.slots.empty())
return;
Slot& targetSlot = ring.slots[ring.nextWriteIndex];
glBindFramebuffer(GL_READ_FRAMEBUFFER, sourceFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, targetSlot.framebuffer);
glBlitFramebuffer(0, 0, frameWidth, frameHeight, 0, 0, frameWidth, frameHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
ring.nextWriteIndex = (ring.nextWriteIndex + 1) % ring.slots.size();
ring.filledCount = std::min<std::size_t>(ring.filledCount + 1, ring.slots.size());
}
void TemporalHistoryBuffers::PushSourceFramebuffer(GLuint sourceFramebuffer, unsigned frameWidth, unsigned frameHeight)
{
PushFramebuffer(sourceFramebuffer, sourceHistoryRing, frameWidth, frameHeight);
}
void TemporalHistoryBuffers::PushPreLayerFramebuffer(const std::string& layerId, GLuint sourceFramebuffer, unsigned frameWidth, unsigned frameHeight)
{
auto historyIt = preLayerHistoryByLayerId.find(layerId);
if (historyIt != preLayerHistoryByLayerId.end())
PushFramebuffer(sourceFramebuffer, historyIt->second, frameWidth, frameHeight);
}
void TemporalHistoryBuffers::BindSamplers(const RuntimeRenderState& state, GLuint currentSourceTexture, unsigned historyCap)
{
for (unsigned index = 0; index < historyCap; ++index)
{
glActiveTexture(GL_TEXTURE0 + kSourceHistoryTextureUnitBase + index);
glBindTexture(GL_TEXTURE_2D, ResolveTexture(sourceHistoryRing, currentSourceTexture, index));
}
const GLuint temporalBase = kSourceHistoryTextureUnitBase + historyCap;
const Ring* temporalRing = nullptr;
auto it = preLayerHistoryByLayerId.find(state.layerId);
if (it != preLayerHistoryByLayerId.end())
temporalRing = &it->second;
for (unsigned index = 0; index < historyCap; ++index)
{
glActiveTexture(GL_TEXTURE0 + temporalBase + index);
glBindTexture(GL_TEXTURE_2D, temporalRing ? ResolveTexture(*temporalRing, currentSourceTexture, index) : currentSourceTexture);
}
glActiveTexture(GL_TEXTURE0);
}
std::vector<GLuint> TemporalHistoryBuffers::ResolveSourceHistoryTextures(GLuint fallbackTexture, unsigned historyCap) const
{
std::vector<GLuint> textures;
textures.reserve(historyCap);
for (unsigned index = 0; index < historyCap; ++index)
textures.push_back(ResolveTexture(sourceHistoryRing, fallbackTexture, index));
return textures;
}
std::vector<GLuint> TemporalHistoryBuffers::ResolveTemporalHistoryTextures(const RuntimeRenderState& state, GLuint fallbackTexture, unsigned historyCap) const
{
const Ring* temporalRing = nullptr;
auto it = preLayerHistoryByLayerId.find(state.layerId);
if (it != preLayerHistoryByLayerId.end())
temporalRing = &it->second;
std::vector<GLuint> textures;
textures.reserve(historyCap);
for (unsigned index = 0; index < historyCap; ++index)
textures.push_back(temporalRing ? ResolveTexture(*temporalRing, fallbackTexture, index) : fallbackTexture);
return textures;
}
GLuint TemporalHistoryBuffers::ResolveTexture(const Ring& ring, GLuint fallbackTexture, std::size_t framesAgo) const
{
if (ring.filledCount == 0 || ring.slots.empty())
return fallbackTexture;
const std::size_t clampedOffset = std::min<std::size_t>(framesAgo, ring.filledCount - 1);
const std::size_t newestIndex = (ring.nextWriteIndex + ring.slots.size() - 1) % ring.slots.size();
const std::size_t slotIndex = (newestIndex + ring.slots.size() - clampedOffset) % ring.slots.size();
return ring.slots[slotIndex].texture != 0 ? ring.slots[slotIndex].texture : fallbackTexture;
}
unsigned TemporalHistoryBuffers::SourceAvailableCount() const
{
return static_cast<unsigned>(sourceHistoryRing.filledCount);
}
unsigned TemporalHistoryBuffers::AvailableCountForLayer(const std::string& layerId) const
{
auto it = preLayerHistoryByLayerId.find(layerId);
if (it == preLayerHistoryByLayerId.end())
return 0;
return static_cast<unsigned>(it->second.filledCount);
}


@@ -0,0 +1,53 @@
#pragma once
#include "GLExtensions.h"
#include "ShaderTypes.h"
#include <windows.h>
#include <gl/gl.h>
#include <map>
#include <string>
#include <vector>
struct RuntimeRenderState;
class TemporalHistoryBuffers
{
public:
struct Slot
{
GLuint texture = 0;
GLuint framebuffer = 0;
};
struct Ring
{
std::vector<Slot> slots;
std::size_t nextWriteIndex = 0;
std::size_t filledCount = 0;
unsigned effectiveLength = 0;
TemporalHistorySource historySource = TemporalHistorySource::None;
};
bool ValidateTextureUnitBudget(const std::vector<RuntimeRenderState>& layerStates, unsigned historyCap, std::string& error) const;
bool EnsureResources(const std::vector<RuntimeRenderState>& layerStates, unsigned historyCap, unsigned frameWidth, unsigned frameHeight, std::string& error);
bool CreateRing(Ring& ring, unsigned effectiveLength, TemporalHistorySource historySource, unsigned frameWidth, unsigned frameHeight, std::string& error);
void DestroyRing(Ring& ring);
void DestroyResources();
void ResetState();
void PushFramebuffer(GLuint sourceFramebuffer, Ring& ring, unsigned frameWidth, unsigned frameHeight);
void PushSourceFramebuffer(GLuint sourceFramebuffer, unsigned frameWidth, unsigned frameHeight);
void PushPreLayerFramebuffer(const std::string& layerId, GLuint sourceFramebuffer, unsigned frameWidth, unsigned frameHeight);
void BindSamplers(const RuntimeRenderState& state, GLuint currentSourceTexture, unsigned historyCap);
std::vector<GLuint> ResolveSourceHistoryTextures(GLuint fallbackTexture, unsigned historyCap) const;
std::vector<GLuint> ResolveTemporalHistoryTextures(const RuntimeRenderState& state, GLuint fallbackTexture, unsigned historyCap) const;
GLuint ResolveTexture(const Ring& ring, GLuint fallbackTexture, std::size_t framesAgo) const;
unsigned SourceAvailableCount() const;
unsigned AvailableCountForLayer(const std::string& layerId) const;
private:
Ring sourceHistoryRing;
std::map<std::string, Ring> preLayerHistoryByLayerId;
bool mNeedsReset = true;
};


@@ -62,6 +62,8 @@ PFNGLGENBUFFERSPROC glGenBuffers;
PFNGLDELETEBUFFERSPROC glDeleteBuffers;
PFNGLBINDBUFFERPROC glBindBuffer;
PFNGLBUFFERDATAPROC glBufferData;
PFNGLMAPBUFFERPROC glMapBuffer;
PFNGLUNMAPBUFFERPROC glUnmapBuffer;
PFNGLBUFFERSUBDATAPROC glBufferSubData;
PFNGLBINDBUFFERBASEPROC glBindBufferBase;
PFNGLACTIVETEXTUREPROC glActiveTexture;
@@ -131,6 +133,8 @@ bool ResolveGLExtensions()
glDeleteBuffers = (PFNGLDELETEBUFFERSPROC) wglGetProcAddress("glDeleteBuffers");
glBindBuffer = (PFNGLBINDBUFFERPROC) wglGetProcAddress("glBindBuffer");
glBufferData = (PFNGLBUFFERDATAPROC) wglGetProcAddress("glBufferData");
glMapBuffer = (PFNGLMAPBUFFERPROC) wglGetProcAddress("glMapBuffer");
glUnmapBuffer = (PFNGLUNMAPBUFFERPROC) wglGetProcAddress("glUnmapBuffer");
glBufferSubData = (PFNGLBUFFERSUBDATAPROC) wglGetProcAddress("glBufferSubData");
glBindBufferBase = (PFNGLBINDBUFFERBASEPROC) wglGetProcAddress("glBindBufferBase");
glActiveTexture = (PFNGLACTIVETEXTUREPROC) wglGetProcAddress("glActiveTexture");
@@ -176,6 +180,8 @@ bool ResolveGLExtensions()
&& glDeleteBuffers
&& glBindBuffer
&& glBufferData
&& glMapBuffer
&& glUnmapBuffer
&& glBufferSubData
&& glBindBufferBase
&& glActiveTexture


@@ -60,13 +60,19 @@
#define GL_DYNAMIC_DRAW 0x88E8
#define GL_UNIFORM_BUFFER 0x8A11
#define GL_RGBA8 0x8058
#define GL_RGBA16F 0x881A
#define GL_TEXTURE0 0x84C0
#define GL_ACTIVE_TEXTURE 0x84E0
#define GL_ARRAY_BUFFER 0x8892
#define GL_PIXEL_PACK_BUFFER 0x88EB
#define GL_PIXEL_UNPACK_BUFFER 0x88EC
#define GL_PIXEL_UNPACK_BUFFER_BINDING 0x88EF
#define GL_FRAGMENT_SHADER 0x8B30
#define GL_VERTEX_SHADER 0x8B31
#define GL_COMPILE_STATUS 0x8B81
#define GL_LINK_STATUS 0x8B82
#define GL_INVALID_INDEX 0xFFFFFFFFu
#define GL_MAX_TEXTURE_IMAGE_UNITS 0x8872
#define GL_RENDERBUFFER_EXT 0x8D41
#define GL_FRAMEBUFFER_EXT 0x8D40
#define GL_FRAMEBUFFER_COMPLETE_EXT 0x8CD5
@@ -83,6 +89,11 @@
#define GL_EXTERNAL_VIRTUAL_MEMORY_BUFFER_AMD 0x9160
#define GL_SYNC_GPU_COMMANDS_COMPLETE 0x9117
#define GL_SYNC_FLUSH_COMMANDS_BIT 0x00000001
#define GL_ALREADY_SIGNALED 0x911A
#define GL_TIMEOUT_EXPIRED 0x911B
#define GL_CONDITION_SATISFIED 0x911C
#define GL_WAIT_FAILED 0x911D
#define GL_READ_ONLY 0x88B8
typedef struct __GLsync *GLsync;
typedef unsigned __int64 GLuint64;
@@ -94,6 +105,8 @@ typedef void (APIENTRYP PFNGLBINDBUFFERPROC) (GLenum target, GLuint buffer);
typedef void (APIENTRYP PFNGLDELETEBUFFERSPROC) (GLsizei n, const GLuint *buffers);
typedef void (APIENTRYP PFNGLGENBUFFERSPROC) (GLsizei n, GLuint *buffers);
typedef void (APIENTRYP PFNGLBUFFERDATAPROC) (GLenum target, GLsizeiptr size, const GLvoid *data, GLenum usage);
typedef GLvoid* (APIENTRYP PFNGLMAPBUFFERPROC) (GLenum target, GLenum access);
typedef GLboolean (APIENTRYP PFNGLUNMAPBUFFERPROC) (GLenum target);
typedef void (APIENTRYP PFNGLATTACHSHADERPROC) (GLuint program, GLuint shader);
typedef void (APIENTRYP PFNGLCOMPILESHADERPROC) (GLuint shader);
typedef GLuint (APIENTRYP PFNGLCREATEPROGRAMPROC) (void);
@@ -153,6 +166,8 @@ extern PFNGLGENBUFFERSPROC glGenBuffers;
extern PFNGLDELETEBUFFERSPROC glDeleteBuffers;
extern PFNGLBINDBUFFERPROC glBindBuffer;
extern PFNGLBUFFERDATAPROC glBufferData;
extern PFNGLMAPBUFFERPROC glMapBuffer;
extern PFNGLUNMAPBUFFERPROC glUnmapBuffer;
extern PFNGLBUFFERSUBDATAPROC glBufferSubData;
extern PFNGLBINDBUFFERBASEPROC glBindBufferBase;
extern PFNGLACTIVETEXTUREPROC glActiveTexture;


@@ -0,0 +1,10 @@
#pragma once
#include <gl/gl.h>
constexpr GLuint kLayerInputTextureUnit = 0;
constexpr GLuint kDecodedVideoTextureUnit = 1;
constexpr GLuint kSourceHistoryTextureUnitBase = 2;
constexpr GLuint kPackedVideoTextureUnit = 2;
constexpr GLuint kGlobalParamsBindingPoint = 0;
constexpr unsigned kPrerollFrameCount = 12;


@@ -0,0 +1,57 @@
#pragma once
#include <gl/gl.h>
class ScopedGlShader
{
public:
explicit ScopedGlShader(GLuint shader = 0) : mShader(shader) {}
~ScopedGlShader() { reset(); }
ScopedGlShader(const ScopedGlShader&) = delete;
ScopedGlShader& operator=(const ScopedGlShader&) = delete;
GLuint get() const { return mShader; }
GLuint release()
{
GLuint shader = mShader;
mShader = 0;
return shader;
}
void reset(GLuint shader = 0)
{
if (mShader != 0)
glDeleteShader(mShader);
mShader = shader;
}
private:
GLuint mShader;
};
class ScopedGlProgram
{
public:
explicit ScopedGlProgram(GLuint program = 0) : mProgram(program) {}
~ScopedGlProgram() { reset(); }
ScopedGlProgram(const ScopedGlProgram&) = delete;
ScopedGlProgram& operator=(const ScopedGlProgram&) = delete;
GLuint get() const { return mProgram; }
GLuint release()
{
GLuint program = mProgram;
mProgram = 0;
return program;
}
void reset(GLuint program = 0)
{
if (mProgram != 0)
glDeleteProgram(mProgram);
mProgram = program;
}
private:
GLuint mProgram;
};


@@ -0,0 +1,268 @@
#include "OpenGLRenderer.h"
#include "GlRenderConstants.h"
namespace
{
void ConfigureByteFrameTexture(unsigned width, unsigned height)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
}
}
bool OpenGLRenderer::InitializeResources(unsigned inputFrameWidth, unsigned inputFrameHeight, unsigned captureTextureWidth, unsigned outputFrameWidth, unsigned outputFrameHeight, unsigned outputPackTextureWidth, std::string& error)
{
glClearColor(0.0f, 0.0f, 0.0f, 0.5f);
glDisable(GL_DEPTH_TEST);
glGenBuffers(1, &mTextureUploadBuffer);
glGenTextures(1, &mCaptureTexture);
glBindTexture(GL_TEXTURE_2D, mCaptureTexture);
ConfigureByteFrameTexture(captureTextureWidth, inputFrameHeight);
glBindTexture(GL_TEXTURE_2D, 0);
glGenRenderbuffers(1, &mIdColorBuf);
glGenRenderbuffers(1, &mIdDepthBuf);
glGenVertexArrays(1, &mFullscreenVAO);
glGenBuffers(1, &mGlobalParamsUBO);
if (!mRenderTargets.Create(RenderTargetId::Decoded, inputFrameWidth, inputFrameHeight, GL_RGBA16F, GL_RGBA, GL_FLOAT, "decode", error))
return false;
if (!mRenderTargets.Create(RenderTargetId::LayerTemp, inputFrameWidth, inputFrameHeight, GL_RGBA16F, GL_RGBA, GL_FLOAT, "layer", error))
return false;
if (!mRenderTargets.Create(RenderTargetId::Composite, inputFrameWidth, inputFrameHeight, GL_RGBA16F, GL_RGBA, GL_FLOAT, "composite", error))
return false;
glBindFramebuffer(GL_FRAMEBUFFER, CompositeFramebuffer());
glBindRenderbuffer(GL_RENDERBUFFER, mIdDepthBuf);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, inputFrameWidth, inputFrameHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, mIdDepthBuf);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
error = "Cannot initialize composite framebuffer.";
return false;
}
if (!mRenderTargets.Create(RenderTargetId::Output, outputFrameWidth, outputFrameHeight, GL_RGBA16F, GL_RGBA, GL_FLOAT, "output", error))
return false;
if (!mRenderTargets.Create(RenderTargetId::OutputPack, outputPackTextureWidth, outputFrameHeight, GL_RGBA8, GL_RGBA, GL_UNSIGNED_BYTE, "output pack", error))
return false;
glBindTexture(GL_TEXTURE_2D, 0);
glBindRenderbuffer(GL_RENDERBUFFER, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindVertexArray(mFullscreenVAO);
glBindVertexArray(0);
glBindBuffer(GL_UNIFORM_BUFFER, mGlobalParamsUBO);
glBufferData(GL_UNIFORM_BUFFER, 1024, NULL, GL_DYNAMIC_DRAW); // placeholder allocation; GlobalParamsBuffer reallocates on the first update
glBindBufferBase(GL_UNIFORM_BUFFER, kGlobalParamsBindingPoint, mGlobalParamsUBO);
glBindBuffer(GL_UNIFORM_BUFFER, 0);
mResourcesInitialized = true;
return true;
}
void OpenGLRenderer::SetDecodeShaderProgram(GLuint program, GLuint vertexShader, GLuint fragmentShader)
{
mDecodeProgram = program;
mDecodeVertexShader = vertexShader;
mDecodeFragmentShader = fragmentShader;
mDecodePackedResolutionLocation = program != 0 ? glGetUniformLocation(program, "uPackedVideoResolution") : -1;
mDecodeDecodedResolutionLocation = program != 0 ? glGetUniformLocation(program, "uDecodedVideoResolution") : -1;
mDecodeInputPixelFormatLocation = program != 0 ? glGetUniformLocation(program, "uInputPixelFormat") : -1;
}
void OpenGLRenderer::SetOutputPackShaderProgram(GLuint program, GLuint vertexShader, GLuint fragmentShader)
{
mOutputPackProgram = program;
mOutputPackVertexShader = vertexShader;
mOutputPackFragmentShader = fragmentShader;
mOutputPackResolutionLocation = program != 0 ? glGetUniformLocation(program, "uOutputVideoResolution") : -1;
mOutputPackActiveWordsLocation = program != 0 ? glGetUniformLocation(program, "uActiveV210Words") : -1;
mOutputPackFormatLocation = program != 0 ? glGetUniformLocation(program, "uOutputPackFormat") : -1;
}
bool OpenGLRenderer::ReserveTemporaryRenderTargets(std::size_t count, unsigned width, unsigned height, std::string& error)
{
return mRenderTargets.ReserveTemporaryTargets(count, width, height, GL_RGBA16F, GL_RGBA, GL_FLOAT, error);
}
void OpenGLRenderer::ResizeView(int width, int height)
{
mViewWidth = width;
mViewHeight = height;
}
void OpenGLRenderer::PresentToWindow(HDC hdc, unsigned outputFrameWidth, unsigned outputFrameHeight)
{
int destWidth = mViewWidth;
int destHeight = mViewHeight;
int destX = 0;
int destY = 0;
if (outputFrameWidth > 0 && outputFrameHeight > 0 && mViewWidth > 0 && mViewHeight > 0)
{
const double frameAspect = static_cast<double>(outputFrameWidth) / static_cast<double>(outputFrameHeight);
const double viewAspect = static_cast<double>(mViewWidth) / static_cast<double>(mViewHeight);
if (viewAspect > frameAspect)
{
destHeight = mViewHeight;
destWidth = static_cast<int>(destHeight * frameAspect + 0.5);
destX = (mViewWidth - destWidth) / 2;
}
else
{
destWidth = mViewWidth;
destHeight = static_cast<int>(destWidth / frameAspect + 0.5);
destY = (mViewHeight - destHeight) / 2;
}
}
glBindFramebuffer(GL_READ_FRAMEBUFFER, OutputFramebuffer());
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glDisable(GL_SCISSOR_TEST);
glViewport(0, 0, mViewWidth, mViewHeight);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glBlitFramebuffer(0, 0, outputFrameWidth, outputFrameHeight, destX, destY, destX + destWidth, destY + destHeight, GL_COLOR_BUFFER_BIT, GL_LINEAR);
SwapBuffers(hdc);
}
void OpenGLRenderer::DestroyResources()
{
if (mFullscreenVAO != 0)
glDeleteVertexArrays(1, &mFullscreenVAO);
if (mGlobalParamsUBO != 0)
glDeleteBuffers(1, &mGlobalParamsUBO);
if (mIdColorBuf != 0)
glDeleteRenderbuffers(1, &mIdColorBuf);
if (mIdDepthBuf != 0)
glDeleteRenderbuffers(1, &mIdDepthBuf);
if (mCaptureTexture != 0)
glDeleteTextures(1, &mCaptureTexture);
if (mTextureUploadBuffer != 0)
glDeleteBuffers(1, &mTextureUploadBuffer);
mRenderTargets.Destroy();
mFullscreenVAO = 0;
mGlobalParamsUBO = 0;
mIdColorBuf = 0;
mIdDepthBuf = 0;
mCaptureTexture = 0;
mTextureUploadBuffer = 0;
mGlobalParamsUBOSize = 0;
mResourcesInitialized = false;
mTemporalHistory.DestroyResources();
mFeedbackBuffers.DestroyResources();
DestroyLayerPrograms();
DestroyDecodeShaderProgram();
DestroyOutputPackShaderProgram();
}
void OpenGLRenderer::DestroySingleLayerProgram(LayerProgram& layerProgram)
{
for (LayerProgram::PassProgram& passProgram : layerProgram.passes)
{
for (LayerProgram::TextureBinding& binding : passProgram.textureBindings)
{
if (binding.texture != 0)
{
glDeleteTextures(1, &binding.texture);
binding.texture = 0;
}
}
passProgram.textureBindings.clear();
for (LayerProgram::TextBinding& binding : passProgram.textBindings)
{
if (binding.texture != 0)
{
glDeleteTextures(1, &binding.texture);
binding.texture = 0;
}
}
passProgram.textBindings.clear();
if (passProgram.program != 0)
{
glDeleteProgram(passProgram.program);
passProgram.program = 0;
}
if (passProgram.fragmentShader != 0)
{
glDeleteShader(passProgram.fragmentShader);
passProgram.fragmentShader = 0;
}
if (passProgram.vertexShader != 0)
{
glDeleteShader(passProgram.vertexShader);
passProgram.vertexShader = 0;
}
}
layerProgram.passes.clear();
}
void OpenGLRenderer::DestroyLayerPrograms()
{
for (LayerProgram& layerProgram : mLayerPrograms)
DestroySingleLayerProgram(layerProgram);
mLayerPrograms.clear();
}
void OpenGLRenderer::DestroyDecodeShaderProgram()
{
if (mDecodeProgram != 0)
{
glDeleteProgram(mDecodeProgram);
mDecodeProgram = 0;
}
mDecodePackedResolutionLocation = -1;
mDecodeDecodedResolutionLocation = -1;
mDecodeInputPixelFormatLocation = -1;
if (mDecodeFragmentShader != 0)
{
glDeleteShader(mDecodeFragmentShader);
mDecodeFragmentShader = 0;
}
if (mDecodeVertexShader != 0)
{
glDeleteShader(mDecodeVertexShader);
mDecodeVertexShader = 0;
}
}
void OpenGLRenderer::DestroyOutputPackShaderProgram()
{
if (mOutputPackProgram != 0)
{
glDeleteProgram(mOutputPackProgram);
mOutputPackProgram = 0;
}
mOutputPackResolutionLocation = -1;
mOutputPackActiveWordsLocation = -1;
mOutputPackFormatLocation = -1;
if (mOutputPackFragmentShader != 0)
{
glDeleteShader(mOutputPackFragmentShader);
mOutputPackFragmentShader = 0;
}
if (mOutputPackVertexShader != 0)
{
glDeleteShader(mOutputPackVertexShader);
mOutputPackVertexShader = 0;
}
}


@@ -0,0 +1,131 @@
#pragma once
#include "GLExtensions.h"
#include "RenderTargetPool.h"
#include "ShaderFeedbackBuffers.h"
#include "ShaderTypes.h"
#include "TemporalHistoryBuffers.h"
#include <windows.h>
#include <filesystem>
#include <gl/gl.h>
#include <string>
#include <vector>
class OpenGLRenderer
{
public:
struct LayerProgram
{
struct TextureBinding
{
std::string samplerName;
std::filesystem::path sourcePath;
GLuint texture = 0;
};
struct TextBinding
{
std::string parameterId;
std::string samplerName;
std::string fontId;
GLuint texture = 0;
std::string renderedText;
unsigned renderedWidth = 0;
unsigned renderedHeight = 0;
};
std::string layerId;
std::string shaderId;
struct PassProgram
{
std::string passId;
std::vector<std::string> inputNames;
std::string outputName;
GLuint shaderTextureBase = 0;
GLuint program = 0;
GLuint vertexShader = 0;
GLuint fragmentShader = 0;
std::vector<TextureBinding> textureBindings;
std::vector<TextBinding> textBindings;
};
std::vector<PassProgram> passes;
};
GLuint CaptureTexture() const { return mCaptureTexture; }
GLuint DecodedTexture() const { return mRenderTargets.Texture(RenderTargetId::Decoded); }
GLuint LayerTempTexture() const { return mRenderTargets.Texture(RenderTargetId::LayerTemp); }
GLuint CompositeTexture() const { return mRenderTargets.Texture(RenderTargetId::Composite); }
GLuint OutputTexture() const { return mRenderTargets.Texture(RenderTargetId::Output); }
GLuint OutputPackTexture() const { return mRenderTargets.Texture(RenderTargetId::OutputPack); }
GLuint TextureUploadBuffer() const { return mTextureUploadBuffer; }
GLuint DecodeFramebuffer() const { return mRenderTargets.Framebuffer(RenderTargetId::Decoded); }
GLuint LayerTempFramebuffer() const { return mRenderTargets.Framebuffer(RenderTargetId::LayerTemp); }
GLuint CompositeFramebuffer() const { return mRenderTargets.Framebuffer(RenderTargetId::Composite); }
GLuint OutputFramebuffer() const { return mRenderTargets.Framebuffer(RenderTargetId::Output); }
GLuint OutputPackFramebuffer() const { return mRenderTargets.Framebuffer(RenderTargetId::OutputPack); }
GLuint FullscreenVertexArray() const { return mFullscreenVAO; }
GLuint GlobalParamsUBO() const { return mGlobalParamsUBO; }
GLuint DecodeProgram() const { return mDecodeProgram; }
GLuint OutputPackProgram() const { return mOutputPackProgram; }
GLint DecodePackedResolutionLocation() const { return mDecodePackedResolutionLocation; }
GLint DecodeDecodedResolutionLocation() const { return mDecodeDecodedResolutionLocation; }
GLint DecodeInputPixelFormatLocation() const { return mDecodeInputPixelFormatLocation; }
GLint OutputPackResolutionLocation() const { return mOutputPackResolutionLocation; }
GLint OutputPackActiveWordsLocation() const { return mOutputPackActiveWordsLocation; }
GLint OutputPackFormatLocation() const { return mOutputPackFormatLocation; }
GLsizeiptr GlobalParamsUBOSize() const { return mGlobalParamsUBOSize; }
void SetGlobalParamsUBOSize(GLsizeiptr size) { mGlobalParamsUBOSize = size; }
bool ResourcesInitialized() const { return mResourcesInitialized; }
void ReplaceLayerPrograms(std::vector<LayerProgram>& newPrograms) { mLayerPrograms.swap(newPrograms); }
std::vector<LayerProgram>& LayerPrograms() { return mLayerPrograms; }
const std::vector<LayerProgram>& LayerPrograms() const { return mLayerPrograms; }
bool ReserveTemporaryRenderTargets(std::size_t count, unsigned width, unsigned height, std::string& error);
const RenderTarget& TemporaryRenderTarget(std::size_t index) const { return mRenderTargets.TemporaryTarget(index); }
std::size_t TemporaryRenderTargetCount() const { return mRenderTargets.TemporaryTargetCount(); }
TemporalHistoryBuffers& TemporalHistory() { return mTemporalHistory; }
const TemporalHistoryBuffers& TemporalHistory() const { return mTemporalHistory; }
ShaderFeedbackBuffers& FeedbackBuffers() { return mFeedbackBuffers; }
const ShaderFeedbackBuffers& FeedbackBuffers() const { return mFeedbackBuffers; }
void SetDecodeShaderProgram(GLuint program, GLuint vertexShader, GLuint fragmentShader);
void SetOutputPackShaderProgram(GLuint program, GLuint vertexShader, GLuint fragmentShader);
bool InitializeResources(unsigned inputFrameWidth, unsigned inputFrameHeight, unsigned captureTextureWidth, unsigned outputFrameWidth, unsigned outputFrameHeight, unsigned outputPackTextureWidth, std::string& error);
void ResizeView(int width, int height);
void PresentToWindow(HDC hdc, unsigned outputFrameWidth, unsigned outputFrameHeight);
void DestroyResources();
void DestroySingleLayerProgram(LayerProgram& layerProgram);
void DestroyLayerPrograms();
void DestroyDecodeShaderProgram();
void DestroyOutputPackShaderProgram();
private:
GLuint mCaptureTexture = 0;
GLuint mTextureUploadBuffer = 0;
GLuint mIdColorBuf = 0;
GLuint mIdDepthBuf = 0;
GLuint mFullscreenVAO = 0;
GLuint mGlobalParamsUBO = 0;
GLuint mDecodeProgram = 0;
GLuint mDecodeVertexShader = 0;
GLuint mDecodeFragmentShader = 0;
GLint mDecodePackedResolutionLocation = -1;
GLint mDecodeDecodedResolutionLocation = -1;
GLint mDecodeInputPixelFormatLocation = -1;
GLuint mOutputPackProgram = 0;
GLuint mOutputPackVertexShader = 0;
GLuint mOutputPackFragmentShader = 0;
GLint mOutputPackResolutionLocation = -1;
GLint mOutputPackActiveWordsLocation = -1;
GLint mOutputPackFormatLocation = -1;
GLsizeiptr mGlobalParamsUBOSize = 0;
bool mResourcesInitialized = false;
int mViewWidth = 0;
int mViewHeight = 0;
std::vector<LayerProgram> mLayerPrograms;
RenderTargetPool mRenderTargets;
TemporalHistoryBuffers mTemporalHistory;
ShaderFeedbackBuffers mFeedbackBuffers;
};


@@ -0,0 +1,136 @@
#include "RenderTargetPool.h"
#include <cstddef>
namespace
{
void ConfigureRenderTargetTexture(
unsigned width,
unsigned height,
GLenum internalFormat,
GLenum pixelFormat,
GLenum pixelType)
{
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, internalFormat, width, height, 0, pixelFormat, pixelType, NULL);
}
}
bool RenderTargetPool::Create(
RenderTargetId id,
unsigned width,
unsigned height,
GLenum internalFormat,
GLenum pixelFormat,
GLenum pixelType,
const char* errorPrefix,
std::string& error)
{
RenderTarget& target = mTargets[TargetIndex(id)];
if (target.texture != 0 || target.framebuffer != 0)
{
error = std::string(errorPrefix) + " render target was already initialized.";
return false;
}
glGenTextures(1, &target.texture);
glBindTexture(GL_TEXTURE_2D, target.texture);
ConfigureRenderTargetTexture(width, height, internalFormat, pixelFormat, pixelType);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &target.framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, target.framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, target.texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
error = std::string("Cannot initialize ") + errorPrefix + " framebuffer.";
glBindFramebuffer(GL_FRAMEBUFFER, 0);
return false;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
target.width = width;
target.height = height;
target.internalFormat = internalFormat;
target.pixelFormat = pixelFormat;
target.pixelType = pixelType;
return true;
}
bool RenderTargetPool::ReserveTemporaryTargets(
std::size_t count,
unsigned width,
unsigned height,
GLenum internalFormat,
GLenum pixelFormat,
GLenum pixelType,
std::string& error)
{
// Note: a matching count is treated as up to date; this assumes callers
// request the same dimensions and formats whenever the count is unchanged.
if (mTemporaryTargets.size() == count)
return true;
DestroyTemporaryTargets();
mTemporaryTargets.resize(count);
for (std::size_t index = 0; index < mTemporaryTargets.size(); ++index)
{
RenderTarget& target = mTemporaryTargets[index];
glGenTextures(1, &target.texture);
glBindTexture(GL_TEXTURE_2D, target.texture);
ConfigureRenderTargetTexture(width, height, internalFormat, pixelFormat, pixelType);
glBindTexture(GL_TEXTURE_2D, 0);
glGenFramebuffers(1, &target.framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, target.framebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, target.texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
error = "Cannot initialize temporary render target.";
glBindFramebuffer(GL_FRAMEBUFFER, 0);
return false;
}
target.width = width;
target.height = height;
target.internalFormat = internalFormat;
target.pixelFormat = pixelFormat;
target.pixelType = pixelType;
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
return true;
}
void RenderTargetPool::DestroyTemporaryTargets()
{
for (RenderTarget& target : mTemporaryTargets)
{
if (target.framebuffer != 0)
glDeleteFramebuffers(1, &target.framebuffer);
if (target.texture != 0)
glDeleteTextures(1, &target.texture);
}
mTemporaryTargets.clear();
}
void RenderTargetPool::Destroy()
{
for (RenderTarget& target : mTargets)
{
if (target.framebuffer != 0)
glDeleteFramebuffers(1, &target.framebuffer);
if (target.texture != 0)
glDeleteTextures(1, &target.texture);
target = RenderTarget();
}
DestroyTemporaryTargets();
}
const RenderTarget& RenderTargetPool::Target(RenderTargetId id) const
{
return mTargets[TargetIndex(id)];
}


@@ -0,0 +1,64 @@
#pragma once
#include "GLExtensions.h"
#include <array>
#include <string>
#include <vector>
enum class RenderTargetId
{
Decoded,
LayerTemp,
Composite,
Output,
OutputPack,
Count
};
struct RenderTarget
{
GLuint texture = 0;
GLuint framebuffer = 0;
unsigned width = 0;
unsigned height = 0;
GLenum internalFormat = GL_RGBA8;
GLenum pixelFormat = GL_RGBA;
GLenum pixelType = GL_UNSIGNED_BYTE;
};
class RenderTargetPool
{
public:
bool Create(
RenderTargetId id,
unsigned width,
unsigned height,
GLenum internalFormat,
GLenum pixelFormat,
GLenum pixelType,
const char* errorPrefix,
std::string& error);
bool ReserveTemporaryTargets(
std::size_t count,
unsigned width,
unsigned height,
GLenum internalFormat,
GLenum pixelFormat,
GLenum pixelType,
std::string& error);
void DestroyTemporaryTargets();
void Destroy();
GLuint Texture(RenderTargetId id) const { return Target(id).texture; }
GLuint Framebuffer(RenderTargetId id) const { return Target(id).framebuffer; }
const RenderTarget& Target(RenderTargetId id) const;
const RenderTarget& TemporaryTarget(std::size_t index) const { return mTemporaryTargets[index]; }
std::size_t TemporaryTargetCount() const { return mTemporaryTargets.size(); }
private:
static std::size_t TargetIndex(RenderTargetId id) { return static_cast<std::size_t>(id); }
std::array<RenderTarget, static_cast<std::size_t>(RenderTargetId::Count)> mTargets;
std::vector<RenderTarget> mTemporaryTargets;
};


@@ -0,0 +1,172 @@
#include "GlShaderSources.h"
const char* kFullscreenTriangleVertexShaderSource =
"#version 430 core\n"
"out vec2 vTexCoord;\n"
"void main()\n"
"{\n"
" vec2 positions[3] = vec2[3](vec2(-1.0, -1.0), vec2(3.0, -1.0), vec2(-1.0, 3.0));\n"
" vec2 texCoords[3] = vec2[3](vec2(0.0, 0.0), vec2(2.0, 0.0), vec2(0.0, 2.0));\n"
" gl_Position = vec4(positions[gl_VertexID], 0.0, 1.0);\n"
" vTexCoord = texCoords[gl_VertexID];\n"
"}\n";
const char* kDecodeFragmentShaderSource =
"#version 430 core\n"
"layout(binding = 2) uniform sampler2D uPackedVideoInput;\n"
"uniform vec2 uPackedVideoResolution;\n"
"uniform vec2 uDecodedVideoResolution;\n"
"uniform int uInputPixelFormat;\n"
"in vec2 vTexCoord;\n"
"layout(location = 0) out vec4 fragColor;\n"
"vec4 rec709YCbCr2rgba(float Y, float Cb, float Cr, float a)\n"
"{\n"
" Y = (Y * 255.0 - 16.0) / 219.0;\n"
" Cb = (Cb * 255.0 - 16.0) / 224.0 - 0.5;\n"
" Cr = (Cr * 255.0 - 16.0) / 224.0 - 0.5;\n"
" return vec4(Y + 1.5748 * Cr, Y - 0.1873 * Cb - 0.4681 * Cr, Y + 1.8556 * Cb, a);\n"
"}\n"
"vec4 rec709YCbCr10_2rgba(float Y, float Cb, float Cr, float a)\n"
"{\n"
" Y = (Y - 64.0) / 876.0;\n"
" Cb = (Cb - 64.0) / 896.0 - 0.5;\n"
" Cr = (Cr - 64.0) / 896.0 - 0.5;\n"
" return vec4(Y + 1.5748 * Cr, Y - 0.1873 * Cb - 0.4681 * Cr, Y + 1.8556 * Cb, a);\n"
"}\n"
"uint loadV210Word(ivec2 coord)\n"
"{\n"
" vec4 b = round(texelFetch(uPackedVideoInput, coord, 0) * 255.0);\n"
" return uint(b.r) | (uint(b.g) << 8) | (uint(b.b) << 16) | (uint(b.a) << 24);\n"
"}\n"
"float v210Component(uint word, int index)\n"
"{\n"
" return float((word >> uint(index * 10)) & 1023u);\n"
"}\n"
"vec4 decodeUyvy8(ivec2 outputCoord, ivec2 packedSize)\n"
"{\n"
" ivec2 packedCoord = ivec2(clamp(outputCoord.x / 2, 0, packedSize.x - 1), clamp(outputCoord.y, 0, packedSize.y - 1));\n"
" vec4 macroPixel = texelFetch(uPackedVideoInput, packedCoord, 0);\n"
" float ySample = (outputCoord.x & 1) != 0 ? macroPixel.a : macroPixel.g;\n"
" return rec709YCbCr2rgba(ySample, macroPixel.b, macroPixel.r, 1.0);\n"
"}\n"
"vec4 decodeV210(ivec2 outputCoord, ivec2 packedSize)\n"
"{\n"
" int group = outputCoord.x / 6;\n"
" int pixel = outputCoord.x - group * 6;\n"
" int wordBase = group * 4;\n"
" ivec2 rowBase = ivec2(wordBase, clamp(outputCoord.y, 0, packedSize.y - 1));\n"
" uint w0 = loadV210Word(ivec2(min(rowBase.x + 0, packedSize.x - 1), rowBase.y));\n"
" uint w1 = loadV210Word(ivec2(min(rowBase.x + 1, packedSize.x - 1), rowBase.y));\n"
" uint w2 = loadV210Word(ivec2(min(rowBase.x + 2, packedSize.x - 1), rowBase.y));\n"
" uint w3 = loadV210Word(ivec2(min(rowBase.x + 3, packedSize.x - 1), rowBase.y));\n"
" float y0 = v210Component(w0, 1);\n"
" float y1 = v210Component(w1, 0);\n"
" float y2 = v210Component(w1, 2);\n"
" float y3 = v210Component(w2, 1);\n"
" float y4 = v210Component(w3, 0);\n"
" float y5 = v210Component(w3, 2);\n"
" float cb0 = v210Component(w0, 0);\n"
" float cr0 = v210Component(w0, 2);\n"
" float cb2 = v210Component(w1, 1);\n"
" float cr2 = v210Component(w2, 0);\n"
" float cb4 = v210Component(w2, 2);\n"
" float cr4 = v210Component(w3, 1);\n"
" float ySample = pixel == 0 ? y0 : pixel == 1 ? y1 : pixel == 2 ? y2 : pixel == 3 ? y3 : pixel == 4 ? y4 : y5;\n"
" float cbSample = pixel < 2 ? cb0 : pixel < 4 ? cb2 : cb4;\n"
" float crSample = pixel < 2 ? cr0 : pixel < 4 ? cr2 : cr4;\n"
" return rec709YCbCr10_2rgba(ySample, cbSample, crSample, 1.0);\n"
"}\n"
"void main()\n"
"{\n"
" vec2 correctedUv = vec2(vTexCoord.x, 1.0 - vTexCoord.y);\n"
" ivec2 decodedSize = ivec2(max(uDecodedVideoResolution, vec2(1.0, 1.0)));\n"
" ivec2 outputCoord = clamp(ivec2(correctedUv * vec2(decodedSize)), ivec2(0, 0), decodedSize - ivec2(1, 1));\n"
" ivec2 packedSize = ivec2(max(uPackedVideoResolution, vec2(1.0, 1.0)));\n"
" fragColor = uInputPixelFormat == 1 ? decodeV210(outputCoord, packedSize) : decodeUyvy8(outputCoord, packedSize);\n"
"}\n";
const char* kOutputPackFragmentShaderSource =
"#version 430 core\n"
"layout(binding = 0) uniform sampler2D uOutputRgb;\n"
"uniform vec2 uOutputVideoResolution;\n"
"uniform float uActiveV210Words;\n"
"uniform int uOutputPackFormat;\n"
"in vec2 vTexCoord;\n"
"layout(location = 0) out vec4 fragColor;\n"
"vec4 rgbaAt(int x, int y)\n"
"{\n"
" ivec2 size = ivec2(max(uOutputVideoResolution, vec2(1.0, 1.0)));\n"
" return clamp(texelFetch(uOutputRgb, ivec2(clamp(x, 0, size.x - 1), clamp(y, 0, size.y - 1)), 0), vec4(0.0), vec4(1.0));\n"
"}\n"
"vec3 rgbAt(int x, int y)\n"
"{\n"
" return rgbaAt(x, y).rgb;\n"
"}\n"
"vec3 rgbToLegalYcbcr10(vec3 rgb)\n"
"{\n"
" float y = dot(rgb, vec3(0.2126, 0.7152, 0.0722));\n"
" float cb = (rgb.b - y) / 1.8556 + 0.5;\n"
" float cr = (rgb.r - y) / 1.5748 + 0.5;\n"
" return vec3(clamp(round(64.0 + y * 876.0), 64.0, 940.0), clamp(round(64.0 + cb * 896.0), 64.0, 960.0), clamp(round(64.0 + cr * 896.0), 64.0, 960.0));\n"
"}\n"
"uint makeWord(float a, float b, float c)\n"
"{\n"
" return (uint(a) & 1023u) | ((uint(b) & 1023u) << 10) | ((uint(c) & 1023u) << 20);\n"
"}\n"
"vec4 wordToBytes(uint word)\n"
"{\n"
" return vec4(float(word & 255u), float((word >> 8) & 255u), float((word >> 16) & 255u), float((word >> 24) & 255u)) / 255.0;\n"
"}\n"
"vec4 bigEndianWordToBytes(uint word)\n"
"{\n"
" return vec4(float((word >> 24) & 255u), float((word >> 16) & 255u), float((word >> 8) & 255u), float(word & 255u)) / 255.0;\n"
"}\n"
"vec4 packAy10Word(ivec2 outCoord)\n"
"{\n"
" ivec2 size = ivec2(max(uOutputVideoResolution, vec2(1.0, 1.0)));\n"
" if (outCoord.x >= size.x)\n"
" return vec4(0.0);\n"
" int pixelBase = (outCoord.x / 2) * 2;\n"
" int y = outCoord.y;\n"
" vec4 rgba0 = rgbaAt(pixelBase + 0, y);\n"
" vec4 rgba1 = rgbaAt(pixelBase + 1, y);\n"
" vec3 c0 = rgbToLegalYcbcr10(rgba0.rgb);\n"
" vec3 c1 = rgbToLegalYcbcr10(rgba1.rgb);\n"
" float chroma = (outCoord.x & 1) == 0 ? round((c0.y + c1.y) * 0.5) : round((c0.z + c1.z) * 0.5);\n"
" float alpha = round(clamp(((outCoord.x & 1) == 0 ? rgba0.a : rgba1.a), 0.0, 1.0) * 1023.0);\n"
" float luma = (outCoord.x & 1) == 0 ? c0.x : c1.x;\n"
" uint word = ((uint(luma) & 1023u) << 22) | ((uint(chroma) & 1023u) << 12) | ((uint(alpha) & 1023u) << 2);\n"
" return bigEndianWordToBytes(word);\n"
"}\n"
"void main()\n"
"{\n"
" ivec2 outCoord = ivec2(gl_FragCoord.xy);\n"
" if (uOutputPackFormat == 2)\n"
" {\n"
" fragColor = packAy10Word(outCoord);\n"
" return;\n"
" }\n"
" if (float(outCoord.x) >= uActiveV210Words)\n"
" {\n"
" fragColor = vec4(0.0);\n"
" return;\n"
" }\n"
" int group = outCoord.x / 4;\n"
" int wordIndex = outCoord.x - group * 4;\n"
" int pixelBase = group * 6;\n"
" int y = outCoord.y;\n"
" vec3 c0 = rgbToLegalYcbcr10(rgbAt(pixelBase + 0, y));\n"
" vec3 c1 = rgbToLegalYcbcr10(rgbAt(pixelBase + 1, y));\n"
" vec3 c2 = rgbToLegalYcbcr10(rgbAt(pixelBase + 2, y));\n"
" vec3 c3 = rgbToLegalYcbcr10(rgbAt(pixelBase + 3, y));\n"
" vec3 c4 = rgbToLegalYcbcr10(rgbAt(pixelBase + 4, y));\n"
" vec3 c5 = rgbToLegalYcbcr10(rgbAt(pixelBase + 5, y));\n"
" float cb0 = round((c0.y + c1.y) * 0.5);\n"
" float cr0 = round((c0.z + c1.z) * 0.5);\n"
" float cb2 = round((c2.y + c3.y) * 0.5);\n"
" float cr2 = round((c2.z + c3.z) * 0.5);\n"
" float cb4 = round((c4.y + c5.y) * 0.5);\n"
" float cr4 = round((c4.z + c5.z) * 0.5);\n"
" uint word = wordIndex == 0 ? makeWord(cb0, c0.x, cr0) : wordIndex == 1 ? makeWord(c1.x, cb2, c2.x) : wordIndex == 2 ? makeWord(cr2, c3.x, cb4) : makeWord(c4.x, cr4, c5.x);\n"
" fragColor = wordToBytes(word);\n"
"}\n";


@@ -0,0 +1,5 @@
#pragma once
extern const char* kFullscreenTriangleVertexShaderSource;
extern const char* kDecodeFragmentShaderSource;
extern const char* kOutputPackFragmentShaderSource;


@@ -0,0 +1,104 @@
#include "GlobalParamsBuffer.h"
#include "GlRenderConstants.h"
#include "Std140Buffer.h"
#include <vector>
GlobalParamsBuffer::GlobalParamsBuffer(OpenGLRenderer& renderer) :
mRenderer(renderer)
{
}
bool GlobalParamsBuffer::Update(const RuntimeRenderState& state, unsigned availableSourceHistoryLength, unsigned availableTemporalHistoryLength, bool feedbackAvailable)
{
std::vector<unsigned char>& buffer = mScratchBuffer;
buffer.clear();
buffer.reserve(512);
AppendStd140Float(buffer, static_cast<float>(state.timeSeconds));
AppendStd140Vec2(buffer, static_cast<float>(state.inputWidth), static_cast<float>(state.inputHeight));
AppendStd140Vec2(buffer, static_cast<float>(state.outputWidth), static_cast<float>(state.outputHeight));
AppendStd140Float(buffer, static_cast<float>(state.utcTimeSeconds));
AppendStd140Float(buffer, static_cast<float>(state.utcOffsetSeconds));
AppendStd140Float(buffer, static_cast<float>(state.startupRandom));
AppendStd140Float(buffer, static_cast<float>(state.frameCount));
AppendStd140Float(buffer, static_cast<float>(state.mixAmount));
AppendStd140Float(buffer, static_cast<float>(state.bypass));
const unsigned effectiveSourceHistoryLength = availableSourceHistoryLength < state.effectiveTemporalHistoryLength
? availableSourceHistoryLength
: state.effectiveTemporalHistoryLength;
const unsigned effectiveTemporalHistoryLength = (state.temporalHistorySource == TemporalHistorySource::PreLayerInput)
? (availableTemporalHistoryLength < state.effectiveTemporalHistoryLength ? availableTemporalHistoryLength : state.effectiveTemporalHistoryLength)
: 0u;
AppendStd140Int(buffer, static_cast<int>(effectiveSourceHistoryLength));
AppendStd140Int(buffer, static_cast<int>(effectiveTemporalHistoryLength));
AppendStd140Int(buffer, feedbackAvailable ? 1 : 0);
for (const ShaderParameterDefinition& definition : state.parameterDefinitions)
{
auto valueIt = state.parameterValues.find(definition.id);
const ShaderParameterValue value = valueIt != state.parameterValues.end()
? valueIt->second
: ShaderParameterValue();
switch (definition.type)
{
case ShaderParameterType::Float:
AppendStd140Float(buffer, value.numberValues.empty() ? 0.0f : static_cast<float>(value.numberValues[0]));
break;
case ShaderParameterType::Vec2:
AppendStd140Vec2(buffer,
value.numberValues.size() > 0 ? static_cast<float>(value.numberValues[0]) : 0.0f,
value.numberValues.size() > 1 ? static_cast<float>(value.numberValues[1]) : 0.0f);
break;
case ShaderParameterType::Color:
AppendStd140Vec4(buffer,
value.numberValues.size() > 0 ? static_cast<float>(value.numberValues[0]) : 1.0f,
value.numberValues.size() > 1 ? static_cast<float>(value.numberValues[1]) : 1.0f,
value.numberValues.size() > 2 ? static_cast<float>(value.numberValues[2]) : 1.0f,
value.numberValues.size() > 3 ? static_cast<float>(value.numberValues[3]) : 1.0f);
break;
case ShaderParameterType::Boolean:
AppendStd140Int(buffer, value.booleanValue ? 1 : 0);
break;
case ShaderParameterType::Enum:
{
int selectedIndex = 0;
for (std::size_t optionIndex = 0; optionIndex < definition.enumOptions.size(); ++optionIndex)
{
if (definition.enumOptions[optionIndex].value == value.enumValue)
{
selectedIndex = static_cast<int>(optionIndex);
break;
}
}
AppendStd140Int(buffer, selectedIndex);
break;
}
case ShaderParameterType::Text:
break;
case ShaderParameterType::Trigger:
AppendStd140Int(buffer, value.numberValues.empty() ? 0 : static_cast<int>(value.numberValues[0]));
AppendStd140Float(buffer, value.numberValues.size() > 1 ? static_cast<float>(value.numberValues[1]) : -1000000.0f);
break;
}
}
buffer.resize(AlignStd140(buffer.size(), 16), 0);
glBindBuffer(GL_UNIFORM_BUFFER, mRenderer.GlobalParamsUBO());
if (mRenderer.GlobalParamsUBOSize() != static_cast<GLsizeiptr>(buffer.size()))
{
glBufferData(GL_UNIFORM_BUFFER, static_cast<GLsizeiptr>(buffer.size()), buffer.data(), GL_DYNAMIC_DRAW);
mRenderer.SetGlobalParamsUBOSize(static_cast<GLsizeiptr>(buffer.size()));
}
else
{
glBufferSubData(GL_UNIFORM_BUFFER, 0, static_cast<GLsizeiptr>(buffer.size()), buffer.data());
}
glBindBufferBase(GL_UNIFORM_BUFFER, kGlobalParamsBindingPoint, mRenderer.GlobalParamsUBO());
glBindBuffer(GL_UNIFORM_BUFFER, 0);
return true;
}


@@ -0,0 +1,18 @@
#pragma once
#include "OpenGLRenderer.h"
#include "ShaderTypes.h"
#include <vector>
class GlobalParamsBuffer
{
public:
explicit GlobalParamsBuffer(OpenGLRenderer& renderer);
bool Update(const RuntimeRenderState& state, unsigned availableSourceHistoryLength, unsigned availableTemporalHistoryLength, bool feedbackAvailable);
private:
OpenGLRenderer& mRenderer;
std::vector<unsigned char> mScratchBuffer;
};


@@ -0,0 +1,204 @@
#include "OpenGLShaderPrograms.h"
#include <cstring>
#include <string>
#include <vector>
namespace
{
void CopyErrorMessage(const std::string& message, int errorMessageSize, char* errorMessage)
{
if (!errorMessage || errorMessageSize <= 0)
return;
strncpy_s(errorMessage, errorMessageSize, message.c_str(), _TRUNCATE);
}
std::size_t RequiredTemporaryRenderTargets(const std::vector<OpenGLRenderer::LayerProgram>& layerPrograms)
{
// Only one layer renders at a time, so the pool needs to cover the widest
// layer, not the sum of every intermediate pass in the stack.
std::size_t requiredTargets = 0;
for (const OpenGLRenderer::LayerProgram& layerProgram : layerPrograms)
{
const std::size_t internalPasses = layerProgram.passes.size() > 0 ? layerProgram.passes.size() - 1 : 0;
if (internalPasses > requiredTargets)
requiredTargets = internalPasses;
}
return requiredTargets;
}
}
OpenGLShaderPrograms::OpenGLShaderPrograms(OpenGLRenderer& renderer, RuntimeHost& runtimeHost) :
mRenderer(renderer),
mRuntimeHost(runtimeHost),
mGlobalParamsBuffer(renderer),
mCompiler(renderer, runtimeHost, mTextureBindings)
{
}
bool OpenGLShaderPrograms::CompileLayerPrograms(unsigned inputFrameWidth, unsigned inputFrameHeight, int errorMessageSize, char* errorMessage)
{
const std::vector<RuntimeRenderState> layerStates = mRuntimeHost.GetLayerRenderStates(inputFrameWidth, inputFrameHeight);
std::string temporalError;
const unsigned historyCap = mRuntimeHost.GetMaxTemporalHistoryFrames();
if (!mRenderer.TemporalHistory().ValidateTextureUnitBudget(layerStates, historyCap, temporalError))
{
CopyErrorMessage(temporalError, errorMessageSize, errorMessage);
return false;
}
if (!mRenderer.TemporalHistory().EnsureResources(layerStates, historyCap, inputFrameWidth, inputFrameHeight, temporalError))
{
CopyErrorMessage(temporalError, errorMessageSize, errorMessage);
return false;
}
if (mRenderer.ResourcesInitialized() &&
!mRenderer.FeedbackBuffers().EnsureResources(layerStates, inputFrameWidth, inputFrameHeight, temporalError))
{
CopyErrorMessage(temporalError, errorMessageSize, errorMessage);
return false;
}
// Initial startup still compiles synchronously; auto-reload uses the build
// queue so Slang/file work stays off the playback path.
std::vector<LayerProgram> newPrograms;
newPrograms.reserve(layerStates.size());
for (const RuntimeRenderState& state : layerStates)
{
LayerProgram layerProgram;
if (!mCompiler.CompileLayerProgram(state, layerProgram, errorMessageSize, errorMessage))
{
for (LayerProgram& program : newPrograms)
DestroySingleLayerProgram(program);
return false;
}
newPrograms.push_back(layerProgram);
}
std::string targetError;
if (!mRenderer.ReserveTemporaryRenderTargets(RequiredTemporaryRenderTargets(newPrograms), inputFrameWidth, inputFrameHeight, targetError))
{
for (LayerProgram& program : newPrograms)
DestroySingleLayerProgram(program);
CopyErrorMessage(targetError, errorMessageSize, errorMessage);
return false;
}
DestroyLayerPrograms();
mRenderer.ReplaceLayerPrograms(newPrograms);
mCommittedLayerStates = layerStates;
mRuntimeHost.SetCompileStatus(true, "Shader layers compiled successfully.");
mRuntimeHost.ClearReloadRequest();
return true;
}
bool OpenGLShaderPrograms::CommitPreparedLayerPrograms(const PreparedShaderBuild& preparedBuild, unsigned inputFrameWidth, unsigned inputFrameHeight, int errorMessageSize, char* errorMessage)
{
if (!preparedBuild.succeeded)
{
CopyErrorMessage(preparedBuild.message, errorMessageSize, errorMessage);
return false;
}
std::string temporalError;
const unsigned historyCap = mRuntimeHost.GetMaxTemporalHistoryFrames();
if (!mRenderer.TemporalHistory().ValidateTextureUnitBudget(preparedBuild.layerStates, historyCap, temporalError))
{
CopyErrorMessage(temporalError, errorMessageSize, errorMessage);
return false;
}
if (!mRenderer.TemporalHistory().EnsureResources(preparedBuild.layerStates, historyCap, inputFrameWidth, inputFrameHeight, temporalError))
{
CopyErrorMessage(temporalError, errorMessageSize, errorMessage);
return false;
}
if (mRenderer.ResourcesInitialized() &&
!mRenderer.FeedbackBuffers().EnsureResources(preparedBuild.layerStates, inputFrameWidth, inputFrameHeight, temporalError))
{
CopyErrorMessage(temporalError, errorMessageSize, errorMessage);
return false;
}
// The prepared build already contains GLSL text for each pass. This commit
// step performs the short GL work on the render thread.
std::vector<LayerProgram> newPrograms;
newPrograms.reserve(preparedBuild.layers.size());
for (const PreparedLayerShader& preparedLayer : preparedBuild.layers)
{
LayerProgram layerProgram;
if (!mCompiler.CompilePreparedLayerProgram(preparedLayer.state, preparedLayer.passes, layerProgram, errorMessageSize, errorMessage))
{
for (LayerProgram& program : newPrograms)
DestroySingleLayerProgram(program);
return false;
}
newPrograms.push_back(layerProgram);
}
std::string targetError;
if (!mRenderer.ReserveTemporaryRenderTargets(RequiredTemporaryRenderTargets(newPrograms), inputFrameWidth, inputFrameHeight, targetError))
{
for (LayerProgram& program : newPrograms)
DestroySingleLayerProgram(program);
CopyErrorMessage(targetError, errorMessageSize, errorMessage);
return false;
}
DestroyLayerPrograms();
mRenderer.ReplaceLayerPrograms(newPrograms);
mCommittedLayerStates = preparedBuild.layerStates;
mRuntimeHost.SetCompileStatus(true, "Shader layers compiled successfully.");
mRuntimeHost.ClearReloadRequest();
return true;
}
bool OpenGLShaderPrograms::CompileDecodeShader(int errorMessageSize, char* errorMessage)
{
return mCompiler.CompileDecodeShader(errorMessageSize, errorMessage);
}
bool OpenGLShaderPrograms::CompileOutputPackShader(int errorMessageSize, char* errorMessage)
{
return mCompiler.CompileOutputPackShader(errorMessageSize, errorMessage);
}
void OpenGLShaderPrograms::DestroySingleLayerProgram(LayerProgram& layerProgram)
{
mRenderer.DestroySingleLayerProgram(layerProgram);
}
void OpenGLShaderPrograms::DestroyLayerPrograms()
{
mRenderer.DestroyLayerPrograms();
}
void OpenGLShaderPrograms::DestroyDecodeShaderProgram()
{
mRenderer.DestroyDecodeShaderProgram();
}
void OpenGLShaderPrograms::ResetTemporalHistoryState()
{
mRenderer.TemporalHistory().ResetState();
}
void OpenGLShaderPrograms::ResetShaderFeedbackState()
{
mRenderer.FeedbackBuffers().ResetState();
}
bool OpenGLShaderPrograms::UpdateTextBindingTexture(const RuntimeRenderState& state, LayerProgram::TextBinding& textBinding, std::string& error)
{
return mTextureBindings.UpdateTextBindingTexture(state, textBinding, error);
}
bool OpenGLShaderPrograms::UpdateGlobalParamsBuffer(const RuntimeRenderState& state, unsigned availableSourceHistoryLength, unsigned availableTemporalHistoryLength, bool feedbackAvailable)
{
return mGlobalParamsBuffer.Update(state, availableSourceHistoryLength, availableTemporalHistoryLength, feedbackAvailable);
}


@@ -0,0 +1,40 @@
#pragma once
#include "GlobalParamsBuffer.h"
#include "OpenGLRenderer.h"
#include "RuntimeHost.h"
#include "ShaderBuildQueue.h"
#include "ShaderTypes.h"
#include "ShaderProgramCompiler.h"
#include "ShaderTextureBindings.h"
#include <string>
class OpenGLShaderPrograms
{
public:
using LayerProgram = OpenGLRenderer::LayerProgram;
OpenGLShaderPrograms(OpenGLRenderer& renderer, RuntimeHost& runtimeHost);
bool CompileLayerPrograms(unsigned inputFrameWidth, unsigned inputFrameHeight, int errorMessageSize, char* errorMessage);
bool CommitPreparedLayerPrograms(const PreparedShaderBuild& preparedBuild, unsigned inputFrameWidth, unsigned inputFrameHeight, int errorMessageSize, char* errorMessage);
bool CompileDecodeShader(int errorMessageSize, char* errorMessage);
bool CompileOutputPackShader(int errorMessageSize, char* errorMessage);
void DestroyLayerPrograms();
void DestroySingleLayerProgram(LayerProgram& layerProgram);
void DestroyDecodeShaderProgram();
void ResetTemporalHistoryState();
void ResetShaderFeedbackState();
const std::vector<RuntimeRenderState>& CommittedLayerStates() const { return mCommittedLayerStates; }
bool UpdateTextBindingTexture(const RuntimeRenderState& state, LayerProgram::TextBinding& textBinding, std::string& error);
bool UpdateGlobalParamsBuffer(const RuntimeRenderState& state, unsigned availableSourceHistoryLength, unsigned availableTemporalHistoryLength, bool feedbackAvailable);
private:
OpenGLRenderer& mRenderer;
RuntimeHost& mRuntimeHost;
ShaderTextureBindings mTextureBindings;
GlobalParamsBuffer mGlobalParamsBuffer;
ShaderProgramCompiler mCompiler;
std::vector<RuntimeRenderState> mCommittedLayerStates;
};


@@ -0,0 +1,134 @@
#include "ShaderBuildQueue.h"
#include "RuntimeHost.h"
#include <chrono>
#include <utility>
namespace
{
constexpr auto kShaderBuildDebounce = std::chrono::milliseconds(400);
}
ShaderBuildQueue::ShaderBuildQueue(RuntimeHost& runtimeHost) :
mRuntimeHost(runtimeHost),
mWorkerThread([this]() { WorkerLoop(); })
{
}
ShaderBuildQueue::~ShaderBuildQueue()
{
Stop();
}
void ShaderBuildQueue::RequestBuild(unsigned outputWidth, unsigned outputHeight)
{
{
std::lock_guard<std::mutex> lock(mMutex);
mHasRequest = true;
++mRequestedGeneration;
mRequestedOutputWidth = outputWidth;
mRequestedOutputHeight = outputHeight;
mHasReadyBuild = false;
}
mCondition.notify_one();
}
bool ShaderBuildQueue::TryConsumeReadyBuild(PreparedShaderBuild& build)
{
std::lock_guard<std::mutex> lock(mMutex);
if (!mHasReadyBuild)
return false;
build = std::move(mReadyBuild);
mReadyBuild = PreparedShaderBuild();
mHasReadyBuild = false;
return true;
}
void ShaderBuildQueue::Stop()
{
{
std::lock_guard<std::mutex> lock(mMutex);
if (mStopping)
return;
mStopping = true;
}
mCondition.notify_one();
if (mWorkerThread.joinable())
mWorkerThread.join();
}
void ShaderBuildQueue::WorkerLoop()
{
for (;;)
{
uint64_t generation = 0;
unsigned outputWidth = 0;
unsigned outputHeight = 0;
{
std::unique_lock<std::mutex> lock(mMutex);
mCondition.wait(lock, [this]() { return mStopping || mHasRequest; });
if (mStopping)
return;
generation = mRequestedGeneration;
outputWidth = mRequestedOutputWidth;
outputHeight = mRequestedOutputHeight;
mHasRequest = false;
}
for (;;)
{
std::unique_lock<std::mutex> lock(mMutex);
if (mCondition.wait_for(lock, kShaderBuildDebounce, [this, generation]() {
return mStopping || (mHasRequest && mRequestedGeneration != generation);
}))
{
if (mStopping)
return;
generation = mRequestedGeneration;
outputWidth = mRequestedOutputWidth;
outputHeight = mRequestedOutputHeight;
mHasRequest = false;
continue;
}
break;
}
PreparedShaderBuild build = Build(generation, outputWidth, outputHeight);
std::lock_guard<std::mutex> lock(mMutex);
if (mStopping)
return;
if (generation != mRequestedGeneration)
continue;
mReadyBuild = std::move(build);
mHasReadyBuild = true;
}
}
PreparedShaderBuild ShaderBuildQueue::Build(uint64_t generation, unsigned outputWidth, unsigned outputHeight)
{
PreparedShaderBuild build;
build.generation = generation;
build.layerStates = mRuntimeHost.GetLayerRenderStates(outputWidth, outputHeight);
build.layers.reserve(build.layerStates.size());
for (const RuntimeRenderState& state : build.layerStates)
{
PreparedLayerShader layer;
layer.state = state;
if (!mRuntimeHost.BuildLayerPassFragmentShaderSources(state.layerId, layer.passes, build.message))
{
build.succeeded = false;
return build;
}
build.layers.push_back(std::move(layer));
}
build.succeeded = true;
build.message = "Shader layers prepared successfully.";
return build;
}


@@ -0,0 +1,57 @@
#pragma once
#include "ShaderTypes.h"
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <string>
#include <thread>
#include <vector>
class RuntimeHost;
struct PreparedLayerShader
{
RuntimeRenderState state;
std::vector<ShaderPassBuildSource> passes;
};
struct PreparedShaderBuild
{
uint64_t generation = 0;
bool succeeded = false;
std::string message;
std::vector<RuntimeRenderState> layerStates;
std::vector<PreparedLayerShader> layers;
};
class ShaderBuildQueue
{
public:
explicit ShaderBuildQueue(RuntimeHost& runtimeHost);
~ShaderBuildQueue();
ShaderBuildQueue(const ShaderBuildQueue&) = delete;
ShaderBuildQueue& operator=(const ShaderBuildQueue&) = delete;
void RequestBuild(unsigned outputWidth, unsigned outputHeight);
bool TryConsumeReadyBuild(PreparedShaderBuild& build);
void Stop();
private:
void WorkerLoop();
PreparedShaderBuild Build(uint64_t generation, unsigned outputWidth, unsigned outputHeight);
RuntimeHost& mRuntimeHost;
std::thread mWorkerThread;
std::mutex mMutex;
std::condition_variable mCondition;
bool mStopping = false;
bool mHasRequest = false;
uint64_t mRequestedGeneration = 0;
unsigned mRequestedOutputWidth = 0;
unsigned mRequestedOutputHeight = 0;
bool mHasReadyBuild = false;
PreparedShaderBuild mReadyBuild;
};


@@ -0,0 +1,233 @@
#include "ShaderProgramCompiler.h"
#include "GlRenderConstants.h"
#include "GlScopedObjects.h"
#include "GlShaderSources.h"
#include <cstring>
#include <utility>
#include <vector>
namespace
{
void CopyErrorMessage(const std::string& message, int errorMessageSize, char* errorMessage)
{
if (!errorMessage || errorMessageSize <= 0)
return;
strncpy_s(errorMessage, errorMessageSize, message.c_str(), _TRUNCATE);
}
}
ShaderProgramCompiler::ShaderProgramCompiler(OpenGLRenderer& renderer, RuntimeHost& runtimeHost, ShaderTextureBindings& textureBindings) :
mRenderer(renderer),
mRuntimeHost(runtimeHost),
mTextureBindings(textureBindings)
{
}
bool ShaderProgramCompiler::CompileLayerProgram(const RuntimeRenderState& state, LayerProgram& layerProgram, int errorMessageSize, char* errorMessage)
{
std::vector<ShaderPassBuildSource> passSources;
std::string loadError;
if (!mRuntimeHost.BuildLayerPassFragmentShaderSources(state.layerId, passSources, loadError))
{
CopyErrorMessage(loadError, errorMessageSize, errorMessage);
return false;
}
return CompilePreparedLayerProgram(state, passSources, layerProgram, errorMessageSize, errorMessage);
}
bool ShaderProgramCompiler::CompilePreparedLayerProgram(const RuntimeRenderState& state, const std::vector<ShaderPassBuildSource>& passSources, LayerProgram& layerProgram, int errorMessageSize, char* errorMessage)
{
GLsizei errorBufferSize = 0;
std::string loadError;
const char* vertexSource = kFullscreenTriangleVertexShaderSource;
layerProgram.layerId = state.layerId;
layerProgram.shaderId = state.shaderId;
layerProgram.passes.clear();
for (const auto& passSource : passSources)
{
GLint compileResult = GL_FALSE;
GLint linkResult = GL_FALSE;
const char* fragmentSource = passSource.fragmentShaderSource.c_str();
ScopedGlShader newVertexShader(glCreateShader(GL_VERTEX_SHADER));
glShaderSource(newVertexShader.get(), 1, (const GLchar**)&vertexSource, NULL);
glCompileShader(newVertexShader.get());
glGetShaderiv(newVertexShader.get(), GL_COMPILE_STATUS, &compileResult);
if (compileResult == GL_FALSE)
{
glGetShaderInfoLog(newVertexShader.get(), errorMessageSize, &errorBufferSize, errorMessage);
mRenderer.DestroySingleLayerProgram(layerProgram);
return false;
}
ScopedGlShader newFragmentShader(glCreateShader(GL_FRAGMENT_SHADER));
glShaderSource(newFragmentShader.get(), 1, (const GLchar**)&fragmentSource, NULL);
glCompileShader(newFragmentShader.get());
glGetShaderiv(newFragmentShader.get(), GL_COMPILE_STATUS, &compileResult);
if (compileResult == GL_FALSE)
{
glGetShaderInfoLog(newFragmentShader.get(), errorMessageSize, &errorBufferSize, errorMessage);
mRenderer.DestroySingleLayerProgram(layerProgram);
return false;
}
ScopedGlProgram newProgram(glCreateProgram());
glAttachShader(newProgram.get(), newVertexShader.get());
glAttachShader(newProgram.get(), newFragmentShader.get());
glLinkProgram(newProgram.get());
glGetProgramiv(newProgram.get(), GL_LINK_STATUS, &linkResult);
if (linkResult == GL_FALSE)
{
glGetProgramInfoLog(newProgram.get(), errorMessageSize, &errorBufferSize, errorMessage);
mRenderer.DestroySingleLayerProgram(layerProgram);
return false;
}
std::vector<LayerProgram::TextureBinding> textureBindings;
for (const ShaderTextureAsset& textureAsset : state.textureAssets)
{
LayerProgram::TextureBinding textureBinding;
textureBinding.samplerName = textureAsset.id;
textureBinding.sourcePath = textureAsset.path;
if (!mTextureBindings.LoadTextureAsset(textureAsset, textureBinding.texture, loadError))
{
for (LayerProgram::TextureBinding& loadedTexture : textureBindings)
{
if (loadedTexture.texture != 0)
glDeleteTextures(1, &loadedTexture.texture);
}
CopyErrorMessage(loadError, errorMessageSize, errorMessage);
mRenderer.DestroySingleLayerProgram(layerProgram);
return false;
}
textureBindings.push_back(textureBinding);
}
std::vector<LayerProgram::TextBinding> textBindings;
mTextureBindings.CreateTextBindings(state, textBindings);
PassProgram passProgram;
passProgram.passId = passSource.passId;
passProgram.inputNames = passSource.inputNames;
passProgram.outputName = passSource.outputName;
passProgram.shaderTextureBase = mTextureBindings.ResolveShaderTextureBase(state, mRuntimeHost.GetMaxTemporalHistoryFrames());
passProgram.textureBindings.swap(textureBindings);
passProgram.textBindings.swap(textBindings);
const GLuint globalParamsIndex = glGetUniformBlockIndex(newProgram.get(), "GlobalParams");
if (globalParamsIndex != GL_INVALID_INDEX)
glUniformBlockBinding(newProgram.get(), globalParamsIndex, kGlobalParamsBindingPoint);
const unsigned historyCap = mRuntimeHost.GetMaxTemporalHistoryFrames();
glUseProgram(newProgram.get());
mTextureBindings.AssignLayerSamplerUniforms(newProgram.get(), state, passProgram, historyCap);
glUseProgram(0);
passProgram.program = newProgram.release();
passProgram.vertexShader = newVertexShader.release();
passProgram.fragmentShader = newFragmentShader.release();
layerProgram.passes.push_back(std::move(passProgram));
}
return true;
}
bool ShaderProgramCompiler::CompileDecodeShader(int errorMessageSize, char* errorMessage)
{
GLsizei errorBufferSize = 0;
GLint compileResult = GL_FALSE;
GLint linkResult = GL_FALSE;
const char* vertexSource = kFullscreenTriangleVertexShaderSource;
const char* fragmentSource = kDecodeFragmentShaderSource;
ScopedGlShader newVertexShader(glCreateShader(GL_VERTEX_SHADER));
glShaderSource(newVertexShader.get(), 1, (const GLchar**)&vertexSource, NULL);
glCompileShader(newVertexShader.get());
glGetShaderiv(newVertexShader.get(), GL_COMPILE_STATUS, &compileResult);
if (compileResult == GL_FALSE)
{
glGetShaderInfoLog(newVertexShader.get(), errorMessageSize, &errorBufferSize, errorMessage);
return false;
}
ScopedGlShader newFragmentShader(glCreateShader(GL_FRAGMENT_SHADER));
glShaderSource(newFragmentShader.get(), 1, (const GLchar**)&fragmentSource, NULL);
glCompileShader(newFragmentShader.get());
glGetShaderiv(newFragmentShader.get(), GL_COMPILE_STATUS, &compileResult);
if (compileResult == GL_FALSE)
{
glGetShaderInfoLog(newFragmentShader.get(), errorMessageSize, &errorBufferSize, errorMessage);
return false;
}
ScopedGlProgram newProgram(glCreateProgram());
glAttachShader(newProgram.get(), newVertexShader.get());
glAttachShader(newProgram.get(), newFragmentShader.get());
glLinkProgram(newProgram.get());
glGetProgramiv(newProgram.get(), GL_LINK_STATUS, &linkResult);
if (linkResult == GL_FALSE)
{
glGetProgramInfoLog(newProgram.get(), errorMessageSize, &errorBufferSize, errorMessage);
return false;
}
mRenderer.DestroyDecodeShaderProgram();
mRenderer.SetDecodeShaderProgram(newProgram.release(), newVertexShader.release(), newFragmentShader.release());
return true;
}
bool ShaderProgramCompiler::CompileOutputPackShader(int errorMessageSize, char* errorMessage)
{
GLsizei errorBufferSize = 0;
GLint compileResult = GL_FALSE;
GLint linkResult = GL_FALSE;
const char* vertexSource = kFullscreenTriangleVertexShaderSource;
const char* fragmentSource = kOutputPackFragmentShaderSource;
ScopedGlShader newVertexShader(glCreateShader(GL_VERTEX_SHADER));
glShaderSource(newVertexShader.get(), 1, (const GLchar**)&vertexSource, NULL);
glCompileShader(newVertexShader.get());
glGetShaderiv(newVertexShader.get(), GL_COMPILE_STATUS, &compileResult);
if (compileResult == GL_FALSE)
{
glGetShaderInfoLog(newVertexShader.get(), errorMessageSize, &errorBufferSize, errorMessage);
return false;
}
ScopedGlShader newFragmentShader(glCreateShader(GL_FRAGMENT_SHADER));
glShaderSource(newFragmentShader.get(), 1, (const GLchar**)&fragmentSource, NULL);
glCompileShader(newFragmentShader.get());
glGetShaderiv(newFragmentShader.get(), GL_COMPILE_STATUS, &compileResult);
if (compileResult == GL_FALSE)
{
glGetShaderInfoLog(newFragmentShader.get(), errorMessageSize, &errorBufferSize, errorMessage);
return false;
}
ScopedGlProgram newProgram(glCreateProgram());
glAttachShader(newProgram.get(), newVertexShader.get());
glAttachShader(newProgram.get(), newFragmentShader.get());
glLinkProgram(newProgram.get());
glGetProgramiv(newProgram.get(), GL_LINK_STATUS, &linkResult);
if (linkResult == GL_FALSE)
{
glGetProgramInfoLog(newProgram.get(), errorMessageSize, &errorBufferSize, errorMessage);
return false;
}
glUseProgram(newProgram.get());
const GLint outputSamplerLocation = glGetUniformLocation(newProgram.get(), "uOutputRgb");
if (outputSamplerLocation >= 0)
glUniform1i(outputSamplerLocation, 0);
glUseProgram(0);
mRenderer.DestroyOutputPackShaderProgram();
mRenderer.SetOutputPackShaderProgram(newProgram.release(), newVertexShader.release(), newFragmentShader.release());
return true;
}


@@ -0,0 +1,27 @@
#pragma once
#include "OpenGLRenderer.h"
#include "RuntimeHost.h"
#include "ShaderTextureBindings.h"
#include <string>
#include <vector>
class ShaderProgramCompiler
{
public:
using LayerProgram = OpenGLRenderer::LayerProgram;
using PassProgram = OpenGLRenderer::LayerProgram::PassProgram;
ShaderProgramCompiler(OpenGLRenderer& renderer, RuntimeHost& runtimeHost, ShaderTextureBindings& textureBindings);
bool CompileLayerProgram(const RuntimeRenderState& state, LayerProgram& layerProgram, int errorMessageSize, char* errorMessage);
bool CompilePreparedLayerProgram(const RuntimeRenderState& state, const std::vector<ShaderPassBuildSource>& passSources, LayerProgram& layerProgram, int errorMessageSize, char* errorMessage);
bool CompileDecodeShader(int errorMessageSize, char* errorMessage);
bool CompileOutputPackShader(int errorMessageSize, char* errorMessage);
private:
OpenGLRenderer& mRenderer;
RuntimeHost& mRuntimeHost;
ShaderTextureBindings& mTextureBindings;
};


@@ -0,0 +1,256 @@
#include "ShaderTextureBindings.h"
#include "GlRenderConstants.h"
#include "TextRasterizer.h"
#include "TextureAssetLoader.h"
#include <algorithm>
#include <filesystem>
namespace
{
std::string TextValueForBinding(const RuntimeRenderState& state, const std::string& parameterId)
{
auto valueIt = state.parameterValues.find(parameterId);
return valueIt == state.parameterValues.end() ? std::string() : valueIt->second.textValue;
}
const ShaderFontAsset* FindFontAssetForParameter(const RuntimeRenderState& state, const ShaderParameterDefinition& definition)
{
if (!definition.fontId.empty())
{
for (const ShaderFontAsset& fontAsset : state.fontAssets)
{
if (fontAsset.id == definition.fontId)
return &fontAsset;
}
}
return state.fontAssets.empty() ? nullptr : &state.fontAssets.front();
}
}
bool ShaderTextureBindings::LoadTextureAsset(const ShaderTextureAsset& textureAsset, GLuint& textureId, std::string& error)
{
return ::LoadTextureAsset(textureAsset, textureId, error);
}
void ShaderTextureBindings::CreateTextBindings(const RuntimeRenderState& state, std::vector<LayerProgram::TextBinding>& textBindings)
{
for (const ShaderParameterDefinition& definition : state.parameterDefinitions)
{
if (definition.type != ShaderParameterType::Text)
continue;
LayerProgram::TextBinding textBinding;
textBinding.parameterId = definition.id;
textBinding.samplerName = definition.id + "Texture";
textBinding.fontId = definition.fontId;
glGenTextures(1, &textBinding.texture);
glBindTexture(GL_TEXTURE_2D, textBinding.texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
std::vector<unsigned char> empty(static_cast<std::size_t>(kTextTextureWidth) * kTextTextureHeight * 4, 0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, kTextTextureWidth, kTextTextureHeight, 0, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, empty.data());
glBindTexture(GL_TEXTURE_2D, 0);
textBindings.push_back(textBinding);
}
}
bool ShaderTextureBindings::UpdateTextBindingTexture(const RuntimeRenderState& state, LayerProgram::TextBinding& textBinding, std::string& error)
{
const std::string text = TextValueForBinding(state, textBinding.parameterId);
if (text == textBinding.renderedText && textBinding.renderedWidth == kTextTextureWidth && textBinding.renderedHeight == kTextTextureHeight)
return true;
auto definitionIt = std::find_if(state.parameterDefinitions.begin(), state.parameterDefinitions.end(),
[&textBinding](const ShaderParameterDefinition& definition) { return definition.id == textBinding.parameterId; });
if (definitionIt == state.parameterDefinitions.end())
return true;
const ShaderFontAsset* fontAsset = FindFontAssetForParameter(state, *definitionIt);
std::filesystem::path fontPath;
if (fontAsset)
fontPath = fontAsset->path;
std::vector<unsigned char> sdf;
if (!RasterizeTextSdf(text, fontPath, sdf, error))
return false;
GLint previousActiveTexture = 0;
GLint previousUnpackBuffer = 0;
glGetIntegerv(GL_ACTIVE_TEXTURE, &previousActiveTexture);
glGetIntegerv(GL_PIXEL_UNPACK_BUFFER_BINDING, &previousUnpackBuffer);
glActiveTexture(GL_TEXTURE0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_2D, textBinding.texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, kTextTextureWidth, kTextTextureHeight, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV, sdf.data());
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, static_cast<GLuint>(previousUnpackBuffer));
glActiveTexture(static_cast<GLenum>(previousActiveTexture));
textBinding.renderedText = text;
textBinding.renderedWidth = kTextTextureWidth;
textBinding.renderedHeight = kTextTextureHeight;
return true;
}
GLint ShaderTextureBindings::FindSamplerUniformLocation(GLuint program, const std::string& samplerName) const
{
GLint location = glGetUniformLocation(program, samplerName.c_str());
if (location >= 0)
return location;
return glGetUniformLocation(program, (samplerName + "_0").c_str());
}
GLuint ShaderTextureBindings::ResolveFeedbackTextureUnit(const RuntimeRenderState& state, unsigned historyCap) const
{
return state.isTemporal ? kSourceHistoryTextureUnitBase + 2u * historyCap : kSourceHistoryTextureUnitBase;
}
GLuint ShaderTextureBindings::ResolveShaderTextureBase(const RuntimeRenderState& state, unsigned historyCap) const
{
return ResolveFeedbackTextureUnit(state, historyCap) + (state.feedback.enabled ? 1u : 0u);
}
void ShaderTextureBindings::AssignLayerSamplerUniforms(GLuint program, const RuntimeRenderState& state, const PassProgram& passProgram, unsigned historyCap) const
{
const GLuint shaderTextureBase = ResolveShaderTextureBase(state, historyCap);
const GLint layerInputLocation = FindSamplerUniformLocation(program, "gLayerInput");
if (layerInputLocation >= 0)
glUniform1i(layerInputLocation, static_cast<GLint>(kLayerInputTextureUnit));
const GLint videoInputLocation = FindSamplerUniformLocation(program, "gVideoInput");
if (videoInputLocation >= 0)
glUniform1i(videoInputLocation, static_cast<GLint>(kDecodedVideoTextureUnit));
for (unsigned index = 0; index < historyCap; ++index)
{
const std::string sourceSamplerName = "gSourceHistory" + std::to_string(index);
const GLint sourceSamplerLocation = glGetUniformLocation(program, sourceSamplerName.c_str());
if (sourceSamplerLocation >= 0)
glUniform1i(sourceSamplerLocation, static_cast<GLint>(kSourceHistoryTextureUnitBase + index));
const std::string temporalSamplerName = "gTemporalHistory" + std::to_string(index);
const GLint temporalSamplerLocation = glGetUniformLocation(program, temporalSamplerName.c_str());
if (temporalSamplerLocation >= 0)
glUniform1i(temporalSamplerLocation, static_cast<GLint>(kSourceHistoryTextureUnitBase + historyCap + index));
}
if (state.feedback.enabled)
{
const GLint feedbackSamplerLocation = FindSamplerUniformLocation(program, "gFeedbackState");
if (feedbackSamplerLocation >= 0)
glUniform1i(feedbackSamplerLocation, static_cast<GLint>(ResolveFeedbackTextureUnit(state, historyCap)));
}
for (std::size_t index = 0; index < passProgram.textureBindings.size(); ++index)
{
const GLint textureSamplerLocation = FindSamplerUniformLocation(program, passProgram.textureBindings[index].samplerName);
if (textureSamplerLocation >= 0)
glUniform1i(textureSamplerLocation, static_cast<GLint>(shaderTextureBase + static_cast<GLuint>(index)));
}
const GLuint textTextureBase = shaderTextureBase + static_cast<GLuint>(passProgram.textureBindings.size());
for (std::size_t index = 0; index < passProgram.textBindings.size(); ++index)
{
const GLint textSamplerLocation = FindSamplerUniformLocation(program, passProgram.textBindings[index].samplerName);
if (textSamplerLocation >= 0)
glUniform1i(textSamplerLocation, static_cast<GLint>(textTextureBase + static_cast<GLuint>(index)));
}
}
ShaderTextureBindings::RuntimeTextureBindingPlan ShaderTextureBindings::BuildLayerRuntimeBindingPlan(
const PassProgram& passProgram,
GLuint layerInputTexture,
GLuint originalLayerInputTexture,
const RuntimeRenderState& state,
GLuint feedbackTexture,
const std::vector<GLuint>& sourceHistoryTextures,
const std::vector<GLuint>& temporalHistoryTextures) const
{
RuntimeTextureBindingPlan plan;
plan.bindings.push_back({ "originalLayerInput", "gLayerInput", originalLayerInputTexture, kLayerInputTextureUnit });
plan.bindings.push_back({ "layerInput", "gVideoInput", layerInputTexture, kDecodedVideoTextureUnit });
for (std::size_t index = 0; index < sourceHistoryTextures.size(); ++index)
{
plan.bindings.push_back({
"sourceHistory",
"gSourceHistory" + std::to_string(index),
sourceHistoryTextures[index],
kSourceHistoryTextureUnitBase + static_cast<GLuint>(index)
});
}
const GLuint temporalBase = kSourceHistoryTextureUnitBase + static_cast<GLuint>(sourceHistoryTextures.size());
for (std::size_t index = 0; index < temporalHistoryTextures.size(); ++index)
{
plan.bindings.push_back({
"temporalHistory",
"gTemporalHistory" + std::to_string(index),
temporalHistoryTextures[index],
temporalBase + static_cast<GLuint>(index)
});
}
const GLuint feedbackTextureUnit = ResolveFeedbackTextureUnit(state, static_cast<unsigned>(sourceHistoryTextures.size()));
if (state.feedback.enabled)
{
plan.bindings.push_back({
"feedbackState",
"gFeedbackState",
feedbackTexture,
feedbackTextureUnit
});
}
const GLuint shaderTextureBase = passProgram.shaderTextureBase != 0
? passProgram.shaderTextureBase
: feedbackTextureUnit + (state.feedback.enabled ? 1u : 0u);
for (std::size_t index = 0; index < passProgram.textureBindings.size(); ++index)
{
const LayerProgram::TextureBinding& textureBinding = passProgram.textureBindings[index];
plan.bindings.push_back({
"shaderTexture",
textureBinding.samplerName,
textureBinding.texture,
shaderTextureBase + static_cast<GLuint>(index)
});
}
const GLuint textTextureBase = shaderTextureBase + static_cast<GLuint>(passProgram.textureBindings.size());
for (std::size_t index = 0; index < passProgram.textBindings.size(); ++index)
{
const LayerProgram::TextBinding& textBinding = passProgram.textBindings[index];
plan.bindings.push_back({
"textTexture",
textBinding.samplerName,
textBinding.texture,
textTextureBase + static_cast<GLuint>(index)
});
}
return plan;
}
void ShaderTextureBindings::BindRuntimeTexturePlan(const RuntimeTextureBindingPlan& plan) const
{
for (const RuntimeTextureBinding& binding : plan.bindings)
{
glActiveTexture(GL_TEXTURE0 + binding.textureUnit);
glBindTexture(GL_TEXTURE_2D, binding.texture);
}
glActiveTexture(GL_TEXTURE0);
}
void ShaderTextureBindings::UnbindRuntimeTexturePlan(const RuntimeTextureBindingPlan& plan) const
{
for (const RuntimeTextureBinding& binding : plan.bindings)
{
glActiveTexture(GL_TEXTURE0 + binding.textureUnit);
glBindTexture(GL_TEXTURE_2D, 0);
}
glActiveTexture(GL_TEXTURE0);
}


@@ -0,0 +1,45 @@
#pragma once
#include "OpenGLRenderer.h"
#include "ShaderTypes.h"
#include <string>
#include <vector>
class ShaderTextureBindings
{
public:
using LayerProgram = OpenGLRenderer::LayerProgram;
using PassProgram = OpenGLRenderer::LayerProgram::PassProgram;
struct RuntimeTextureBinding
{
std::string semanticName;
std::string samplerName;
GLuint texture = 0;
GLuint textureUnit = 0;
};
struct RuntimeTextureBindingPlan
{
std::vector<RuntimeTextureBinding> bindings;
};
bool LoadTextureAsset(const ShaderTextureAsset& textureAsset, GLuint& textureId, std::string& error);
void CreateTextBindings(const RuntimeRenderState& state, std::vector<LayerProgram::TextBinding>& textBindings);
bool UpdateTextBindingTexture(const RuntimeRenderState& state, LayerProgram::TextBinding& textBinding, std::string& error);
GLint FindSamplerUniformLocation(GLuint program, const std::string& samplerName) const;
GLuint ResolveFeedbackTextureUnit(const RuntimeRenderState& state, unsigned historyCap) const;
GLuint ResolveShaderTextureBase(const RuntimeRenderState& state, unsigned historyCap) const;
void AssignLayerSamplerUniforms(GLuint program, const RuntimeRenderState& state, const PassProgram& passProgram, unsigned historyCap) const;
RuntimeTextureBindingPlan BuildLayerRuntimeBindingPlan(
const PassProgram& passProgram,
GLuint layerInputTexture,
GLuint originalLayerInputTexture,
const RuntimeRenderState& state,
GLuint feedbackTexture,
const std::vector<GLuint>& sourceHistoryTextures,
const std::vector<GLuint>& temporalHistoryTextures) const;
void BindRuntimeTexturePlan(const RuntimeTextureBindingPlan& plan) const;
void UnbindRuntimeTexturePlan(const RuntimeTextureBindingPlan& plan) const;
};

View File

@@ -0,0 +1,48 @@
#pragma once
#include <cstddef>
#include <cstring>
#include <vector>
inline std::size_t AlignStd140(std::size_t offset, std::size_t alignment)
{
const std::size_t mask = alignment - 1;
return (offset + mask) & ~mask;
}
template <typename TValue>
inline void AppendStd140Value(std::vector<unsigned char>& buffer, std::size_t alignment, const TValue& value)
{
const std::size_t offset = AlignStd140(buffer.size(), alignment);
if (buffer.size() < offset + sizeof(TValue))
buffer.resize(offset + sizeof(TValue), 0);
std::memcpy(buffer.data() + offset, &value, sizeof(TValue));
}
inline void AppendStd140Float(std::vector<unsigned char>& buffer, float value)
{
AppendStd140Value(buffer, 4, value);
}
inline void AppendStd140Int(std::vector<unsigned char>& buffer, int value)
{
AppendStd140Value(buffer, 4, value);
}
inline void AppendStd140Vec2(std::vector<unsigned char>& buffer, float x, float y)
{
const std::size_t offset = AlignStd140(buffer.size(), 8);
if (buffer.size() < offset + sizeof(float) * 2)
buffer.resize(offset + sizeof(float) * 2, 0);
float values[2] = { x, y };
std::memcpy(buffer.data() + offset, values, sizeof(values));
}
inline void AppendStd140Vec4(std::vector<unsigned char>& buffer, float x, float y, float z, float w)
{
const std::size_t offset = AlignStd140(buffer.size(), 16);
if (buffer.size() < offset + sizeof(float) * 4)
buffer.resize(offset + sizeof(float) * 4, 0);
float values[4] = { x, y, z, w };
std::memcpy(buffer.data() + offset, values, sizeof(values));
}

View File

@@ -0,0 +1,243 @@
#include "TextRasterizer.h"
#include <windows.h>
#include <algorithm>
#include <cmath>
#include <cstring>
#include <gdiplus.h>
#include <memory>
namespace
{
constexpr int kTextSdfSpread = 20;
constexpr float kTextFontPixelSize = 144.0f;
constexpr float kTextLayoutPadding = 48.0f;
constexpr float kSdfInfinity = 1.0e20f;
class GdiplusSession
{
public:
GdiplusSession()
{
Gdiplus::GdiplusStartupInput startupInput;
mStarted = Gdiplus::GdiplusStartup(&mToken, &startupInput, NULL) == Gdiplus::Ok;
}
~GdiplusSession()
{
if (mStarted)
Gdiplus::GdiplusShutdown(mToken);
}
GdiplusSession(const GdiplusSession&) = delete;
GdiplusSession& operator=(const GdiplusSession&) = delete;
bool started() const { return mStarted; }
private:
ULONG_PTR mToken = 0;
bool mStarted = false;
};
std::wstring Utf8ToWide(const std::string& text)
{
if (text.empty())
return std::wstring();
const int required = MultiByteToWideChar(CP_UTF8, 0, text.c_str(), -1, NULL, 0);
if (required <= 1)
return std::wstring();
std::wstring wide(static_cast<std::size_t>(required - 1), L'\0');
MultiByteToWideChar(CP_UTF8, 0, text.c_str(), -1, wide.data(), required);
return wide;
}
void DistanceTransform1D(const std::vector<float>& input, std::vector<float>& output, unsigned count)
{
std::vector<unsigned> locations(count, 0);
std::vector<float> boundaries(static_cast<std::size_t>(count) + 1, 0.0f);
unsigned segment = 0;
locations[0] = 0;
boundaries[0] = -kSdfInfinity;
boundaries[1] = kSdfInfinity;
for (unsigned q = 1; q < count; ++q)
{
float intersection = 0.0f;
for (;;)
{
const unsigned location = locations[segment];
intersection =
((input[q] + static_cast<float>(q * q)) - (input[location] + static_cast<float>(location * location))) /
(2.0f * static_cast<float>(q) - 2.0f * static_cast<float>(location));
if (intersection > boundaries[segment] || segment == 0)
break;
--segment;
}
++segment;
locations[segment] = q;
boundaries[segment] = intersection;
boundaries[segment + 1] = kSdfInfinity;
}
segment = 0;
for (unsigned q = 0; q < count; ++q)
{
while (boundaries[segment + 1] < static_cast<float>(q))
++segment;
const unsigned location = locations[segment];
const float delta = static_cast<float>(q) - static_cast<float>(location);
output[q] = delta * delta + input[location];
}
}
std::vector<float> DistanceTransform2D(const std::vector<unsigned char>& targetMask, unsigned width, unsigned height)
{
std::vector<float> rowInput(width, 0.0f);
std::vector<float> rowOutput(width, 0.0f);
std::vector<float> columnInput(height, 0.0f);
std::vector<float> columnOutput(height, 0.0f);
std::vector<float> rowDistance(static_cast<std::size_t>(width) * height, 0.0f);
std::vector<float> distance(static_cast<std::size_t>(width) * height, 0.0f);
for (unsigned y = 0; y < height; ++y)
{
for (unsigned x = 0; x < width; ++x)
rowInput[x] = targetMask[static_cast<std::size_t>(y) * width + x] ? 0.0f : kSdfInfinity;
DistanceTransform1D(rowInput, rowOutput, width);
for (unsigned x = 0; x < width; ++x)
rowDistance[static_cast<std::size_t>(y) * width + x] = rowOutput[x];
}
for (unsigned x = 0; x < width; ++x)
{
for (unsigned y = 0; y < height; ++y)
columnInput[y] = rowDistance[static_cast<std::size_t>(y) * width + x];
DistanceTransform1D(columnInput, columnOutput, height);
for (unsigned y = 0; y < height; ++y)
distance[static_cast<std::size_t>(y) * width + x] = columnOutput[y];
}
return distance;
}
std::vector<unsigned char> BuildTextSdfTexture(const std::vector<unsigned char>& alpha, unsigned width, unsigned height)
{
std::vector<unsigned char> insideMask(static_cast<std::size_t>(width) * height, 0);
std::vector<unsigned char> outsideMask(static_cast<std::size_t>(width) * height, 0);
for (std::size_t index = 0; index < alpha.size(); ++index)
{
const bool inside = alpha[index] > 127;
insideMask[index] = inside ? 1 : 0;
outsideMask[index] = inside ? 0 : 1;
}
const std::vector<float> distanceToInside = DistanceTransform2D(insideMask, width, height);
const std::vector<float> distanceToOutside = DistanceTransform2D(outsideMask, width, height);
std::vector<unsigned char> sdf(static_cast<std::size_t>(width) * height * 4, 0);
for (unsigned y = 0; y < height; ++y)
{
const unsigned flippedY = height - 1 - y;
for (unsigned x = 0; x < width; ++x)
{
const std::size_t source = static_cast<std::size_t>(y) * width + x;
const float signedDistance = std::sqrt(distanceToOutside[source]) - std::sqrt(distanceToInside[source]);
const float normalized = std::clamp(
0.5f + signedDistance / static_cast<float>(kTextSdfSpread * 2),
0.0f,
1.0f);
const unsigned char value = static_cast<unsigned char>(normalized * 255.0f + 0.5f);
const std::size_t out = (static_cast<std::size_t>(flippedY) * width + x) * 4;
sdf[out + 0] = value;
sdf[out + 1] = value;
sdf[out + 2] = value;
sdf[out + 3] = value;
}
}
return sdf;
}
}
bool RasterizeTextSdf(const std::string& text, const std::filesystem::path& fontPath, std::vector<unsigned char>& sdf, std::string& error)
{
GdiplusSession gdiplus;
if (!gdiplus.started())
{
error = "Could not start GDI+ for text rendering.";
return false;
}
Gdiplus::PrivateFontCollection fontCollection;
Gdiplus::FontFamily fallbackFamily(L"Arial");
Gdiplus::FontFamily* fontFamily = &fallbackFamily;
std::unique_ptr<Gdiplus::FontFamily[]> families;
const std::wstring wideFontPath = fontPath.empty() ? std::wstring() : fontPath.wstring();
if (!wideFontPath.empty())
{
if (fontCollection.AddFontFile(wideFontPath.c_str()) != Gdiplus::Ok)
{
error = "Could not load packaged font file for text rendering: " + fontPath.string();
return false;
}
const INT familyCount = fontCollection.GetFamilyCount();
if (familyCount <= 0)
{
error = "Packaged font did not contain a usable font family: " + fontPath.string();
return false;
}
families.reset(new Gdiplus::FontFamily[familyCount]);
INT found = 0;
if (fontCollection.GetFamilies(familyCount, families.get(), &found) != Gdiplus::Ok || found <= 0)
{
error = "Could not read the packaged font family: " + fontPath.string();
return false;
}
fontFamily = &families[0];
}
Gdiplus::Bitmap bitmap(kTextTextureWidth, kTextTextureHeight, PixelFormat32bppARGB);
Gdiplus::Graphics graphics(&bitmap);
graphics.SetCompositingMode(Gdiplus::CompositingModeSourceCopy);
graphics.Clear(Gdiplus::Color(255, 0, 0, 0));
graphics.SetCompositingMode(Gdiplus::CompositingModeSourceOver);
graphics.SetTextRenderingHint(Gdiplus::TextRenderingHintAntiAlias);
graphics.SetSmoothingMode(Gdiplus::SmoothingModeHighQuality);
Gdiplus::Font font(fontFamily, kTextFontPixelSize, Gdiplus::FontStyleRegular, Gdiplus::UnitPixel);
Gdiplus::SolidBrush brush(Gdiplus::Color(255, 255, 255, 255));
Gdiplus::StringFormat format;
format.SetAlignment(Gdiplus::StringAlignmentNear);
format.SetLineAlignment(Gdiplus::StringAlignmentCenter);
format.SetFormatFlags(Gdiplus::StringFormatFlagsNoWrap | Gdiplus::StringFormatFlagsMeasureTrailingSpaces);
const Gdiplus::RectF layout(
kTextLayoutPadding,
0.0f,
static_cast<Gdiplus::REAL>(kTextTextureWidth) - (kTextLayoutPadding * 2.0f),
static_cast<Gdiplus::REAL>(kTextTextureHeight));
const std::wstring wideText = Utf8ToWide(text);
graphics.DrawString(wideText.c_str(), -1, &font, layout, &format, &brush);
std::vector<unsigned char> alpha(static_cast<std::size_t>(kTextTextureWidth) * kTextTextureHeight, 0);
for (unsigned y = 0; y < kTextTextureHeight; ++y)
{
for (unsigned x = 0; x < kTextTextureWidth; ++x)
{
Gdiplus::Color pixel;
bitmap.GetPixel(x, y, &pixel);
BYTE luminance = pixel.GetRed();
if (pixel.GetGreen() > luminance)
luminance = pixel.GetGreen();
if (pixel.GetBlue() > luminance)
luminance = pixel.GetBlue();
alpha[static_cast<std::size_t>(y) * kTextTextureWidth + x] = static_cast<unsigned char>(luminance);
}
}
sdf = BuildTextSdfTexture(alpha, kTextTextureWidth, kTextTextureHeight);
return true;
}

View File

@@ -0,0 +1,10 @@
#pragma once
#include <filesystem>
#include <string>
#include <vector>
constexpr unsigned kTextTextureWidth = 4096;
constexpr unsigned kTextTextureHeight = 512;
bool RasterizeTextSdf(const std::string& text, const std::filesystem::path& fontPath, std::vector<unsigned char>& sdf, std::string& error);

View File

@@ -0,0 +1,222 @@
#include "TextureAssetLoader.h"
#include <windows.h>
#include <wincodec.h>
#include <atlbase.h>
#include <algorithm>
#include <cctype>
#include <cstring>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>
#ifndef GL_RGBA32F
#define GL_RGBA32F 0x8814
#endif
namespace
{
std::string LowercaseExtension(const std::filesystem::path& path)
{
std::string extension = path.extension().string();
std::transform(extension.begin(), extension.end(), extension.begin(),
[](unsigned char value) { return static_cast<char>(std::tolower(value)); });
return extension;
}
bool LoadCubeTextureAsset(const ShaderTextureAsset& textureAsset, GLuint& textureId, std::string& error)
{
std::ifstream file(textureAsset.path);
if (!file)
{
error = "Could not open shader LUT asset: " + textureAsset.path.string();
return false;
}
unsigned lutSize = 0;
std::vector<float> values;
std::string line;
while (std::getline(file, line))
{
const std::size_t commentStart = line.find('#');
if (commentStart != std::string::npos)
line.resize(commentStart);
std::istringstream stream(line);
std::string firstToken;
if (!(stream >> firstToken))
continue;
if (firstToken == "TITLE" || firstToken == "DOMAIN_MIN" || firstToken == "DOMAIN_MAX")
continue;
if (firstToken == "LUT_3D_SIZE")
{
stream >> lutSize;
continue;
}
if (firstToken == "LUT_1D_SIZE")
{
error = "Only 3D .cube LUT assets are supported: " + textureAsset.path.string();
return false;
}
float red = 0.0f;
float green = 0.0f;
float blue = 0.0f;
try
{
red = std::stof(firstToken);
}
catch (...)
{
error = "Unsupported .cube directive in shader LUT asset: " + firstToken;
return false;
}
if (!(stream >> green >> blue))
{
error = "Malformed RGB entry in shader LUT asset: " + textureAsset.path.string();
return false;
}
values.push_back(red);
values.push_back(green);
values.push_back(blue);
values.push_back(1.0f);
}
if (lutSize == 0)
{
error = "Shader LUT asset is missing LUT_3D_SIZE: " + textureAsset.path.string();
return false;
}
const std::size_t expectedFloats = static_cast<std::size_t>(lutSize) * lutSize * lutSize * 4;
if (values.size() != expectedFloats)
{
error = "Shader LUT asset entry count does not match LUT_3D_SIZE: " + textureAsset.path.string();
return false;
}
const GLsizei atlasWidth = static_cast<GLsizei>(lutSize * lutSize);
const GLsizei atlasHeight = static_cast<GLsizei>(lutSize);
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, atlasWidth, atlasHeight, 0, GL_RGBA, GL_FLOAT, values.data());
glBindTexture(GL_TEXTURE_2D, 0);
return true;
}
}
bool LoadTextureAsset(const ShaderTextureAsset& textureAsset, GLuint& textureId, std::string& error)
{
textureId = 0;
if (LowercaseExtension(textureAsset.path) == ".cube")
return LoadCubeTextureAsset(textureAsset, textureId, error);
HRESULT comInitResult = CoInitializeEx(NULL, COINIT_MULTITHREADED);
const bool shouldUninitializeCom = (comInitResult == S_OK || comInitResult == S_FALSE);
if (FAILED(comInitResult) && comInitResult != RPC_E_CHANGED_MODE)
{
error = "Could not initialize COM to load shader texture assets.";
return false;
}
CComPtr<IWICImagingFactory> imagingFactory;
HRESULT result = CoCreateInstance(CLSID_WICImagingFactory, NULL, CLSCTX_INPROC_SERVER, IID_PPV_ARGS(&imagingFactory));
if (FAILED(result) || !imagingFactory)
{
if (shouldUninitializeCom)
CoUninitialize();
error = "Could not create a WIC imaging factory to load shader texture assets.";
return false;
}
CComPtr<IWICBitmapDecoder> bitmapDecoder;
result = imagingFactory->CreateDecoderFromFilename(textureAsset.path.wstring().c_str(), NULL, GENERIC_READ, WICDecodeMetadataCacheOnLoad, &bitmapDecoder);
if (FAILED(result) || !bitmapDecoder)
{
if (shouldUninitializeCom)
CoUninitialize();
error = "Could not open shader texture asset: " + textureAsset.path.string();
return false;
}
CComPtr<IWICBitmapFrameDecode> bitmapFrame;
result = bitmapDecoder->GetFrame(0, &bitmapFrame);
if (FAILED(result) || !bitmapFrame)
{
if (shouldUninitializeCom)
CoUninitialize();
error = "Could not decode the first frame of shader texture asset: " + textureAsset.path.string();
return false;
}
CComPtr<IWICFormatConverter> formatConverter;
result = imagingFactory->CreateFormatConverter(&formatConverter);
if (FAILED(result) || !formatConverter)
{
if (shouldUninitializeCom)
CoUninitialize();
error = "Could not create a WIC format converter for shader texture asset: " + textureAsset.path.string();
return false;
}
result = formatConverter->Initialize(bitmapFrame, GUID_WICPixelFormat32bppBGRA, WICBitmapDitherTypeNone, NULL, 0.0, WICBitmapPaletteTypeCustom);
if (FAILED(result))
{
if (shouldUninitializeCom)
CoUninitialize();
error = "Could not convert shader texture asset to BGRA: " + textureAsset.path.string();
return false;
}
UINT width = 0;
UINT height = 0;
result = formatConverter->GetSize(&width, &height);
if (FAILED(result) || width == 0 || height == 0)
{
if (shouldUninitializeCom)
CoUninitialize();
error = "Shader texture asset has an invalid size: " + textureAsset.path.string();
return false;
}
const UINT stride = width * 4;
std::vector<unsigned char> pixels(static_cast<std::size_t>(stride) * static_cast<std::size_t>(height));
result = formatConverter->CopyPixels(NULL, stride, static_cast<UINT>(pixels.size()), pixels.data());
if (FAILED(result))
{
if (shouldUninitializeCom)
CoUninitialize();
error = "Could not read shader texture pixels: " + textureAsset.path.string();
return false;
}
std::vector<unsigned char> flippedPixels(pixels.size());
for (UINT row = 0; row < height; ++row)
{
const std::size_t srcOffset = static_cast<std::size_t>(row) * stride;
const std::size_t dstOffset = static_cast<std::size_t>(height - 1 - row) * stride;
std::memcpy(flippedPixels.data() + dstOffset, pixels.data() + srcOffset, stride);
}
glGenTextures(1, &textureId);
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, static_cast<GLsizei>(width), static_cast<GLsizei>(height), 0, GL_BGRA, GL_UNSIGNED_BYTE, flippedPixels.data());
glBindTexture(GL_TEXTURE_2D, 0);
if (shouldUninitializeCom)
CoUninitialize();
return true;
}

View File

@@ -0,0 +1,11 @@
#pragma once
#include "GLExtensions.h"
#include "ShaderTypes.h"
#include <windows.h>
#include <gl/gl.h>
#include <string>
bool LoadTextureAsset(const ShaderTextureAsset& textureAsset, GLuint& textureId, std::string& error);

View File

@@ -0,0 +1,45 @@
#include "RuntimeClock.h"
#include <chrono>
namespace
{
bool ToUtcTime(std::time_t time, std::tm& utcTime)
{
return gmtime_s(&utcTime, &time) == 0;
}
bool ToLocalTime(std::time_t time, std::tm& localTime)
{
return localtime_s(&localTime, &time) == 0;
}
}
RuntimeClockSnapshot GetRuntimeClockSnapshot()
{
return MakeRuntimeClockSnapshot(std::chrono::system_clock::to_time_t(std::chrono::system_clock::now()));
}
RuntimeClockSnapshot MakeRuntimeClockSnapshot(std::time_t now)
{
RuntimeClockSnapshot snapshot;
std::tm utcTime = {};
if (!ToUtcTime(now, utcTime))
return snapshot;
snapshot.utcTimeSeconds =
static_cast<double>(utcTime.tm_hour * 3600 + utcTime.tm_min * 60 + utcTime.tm_sec);
std::tm localTime = {};
if (!ToLocalTime(now, localTime))
return snapshot;
utcTime.tm_isdst = localTime.tm_isdst;
const std::time_t localAsTime = std::mktime(&localTime);
const std::time_t utcAsLocalTime = std::mktime(&utcTime);
if (localAsTime != static_cast<std::time_t>(-1) && utcAsLocalTime != static_cast<std::time_t>(-1))
snapshot.utcOffsetSeconds = std::difftime(localAsTime, utcAsLocalTime);
return snapshot;
}

View File

@@ -0,0 +1,12 @@
#pragma once
#include <ctime>
struct RuntimeClockSnapshot
{
double utcTimeSeconds = 0.0;
double utcOffsetSeconds = 0.0;
};
RuntimeClockSnapshot GetRuntimeClockSnapshot();
RuntimeClockSnapshot MakeRuntimeClockSnapshot(std::time_t now);

View File

@@ -3,11 +3,13 @@
#include "RuntimeJson.h"
#include "ShaderTypes.h"
#include <atomic>
#include <chrono>
#include <filesystem>
#include <map>
#include <mutex>
#include <string>
#include <utility>
#include <vector>
class RuntimeHost
@@ -29,20 +31,36 @@ public:
bool SetLayerShader(const std::string& layerId, const std::string& shaderId, std::string& error);
bool UpdateLayerParameter(const std::string& layerId, const std::string& parameterId, const JsonValue& newValue, std::string& error);
bool UpdateLayerParameterByControlKey(const std::string& layerKey, const std::string& parameterKey, const JsonValue& newValue, std::string& error);
bool UpdateLayerParameterByControlKey(const std::string& layerKey, const std::string& parameterKey, const JsonValue& newValue, bool persistState, std::string& error);
bool ApplyOscTargetByControlKey(const std::string& layerKey, const std::string& parameterKey, const JsonValue& targetValue, double smoothingAmount, bool& keepApplying, std::string& resolvedLayerId, std::string& resolvedParameterId, ShaderParameterValue& appliedValue, std::string& error);
bool ResetLayerParameters(const std::string& layerId, std::string& error);
bool SaveStackPreset(const std::string& presetName, std::string& error) const;
bool LoadStackPreset(const std::string& presetName, std::string& error);
void SetCompileStatus(bool succeeded, const std::string& message);
void SetSignalStatus(bool hasSignal, unsigned width, unsigned height, const std::string& modeName);
bool TrySetSignalStatus(bool hasSignal, unsigned width, unsigned height, const std::string& modeName);
void SetDeckLinkOutputStatus(const std::string& modelName, bool supportsInternalKeying, bool supportsExternalKeying,
bool keyerInterfaceAvailable, bool externalKeyingRequested, bool externalKeyingActive, const std::string& statusMessage);
void SetVideoIOStatus(const std::string& backendName, const std::string& modelName, bool supportsInternalKeying, bool supportsExternalKeying,
bool keyerInterfaceAvailable, bool externalKeyingRequested, bool externalKeyingActive, const std::string& statusMessage);
void SetPerformanceStats(double frameBudgetMilliseconds, double renderMilliseconds);
bool TrySetPerformanceStats(double frameBudgetMilliseconds, double renderMilliseconds);
void SetFramePacingStats(double completionIntervalMilliseconds, double smoothedCompletionIntervalMilliseconds,
double maxCompletionIntervalMilliseconds, uint64_t lateFrameCount, uint64_t droppedFrameCount, uint64_t flushedFrameCount);
bool TrySetFramePacingStats(double completionIntervalMilliseconds, double smoothedCompletionIntervalMilliseconds,
double maxCompletionIntervalMilliseconds, uint64_t lateFrameCount, uint64_t droppedFrameCount, uint64_t flushedFrameCount);
void AdvanceFrame();
bool TryAdvanceFrame();
bool BuildLayerFragmentShaderSource(const std::string& layerId, std::string& fragmentShaderSource, std::string& error);
bool BuildLayerPassFragmentShaderSources(const std::string& layerId, std::vector<ShaderPassBuildSource>& passSources, std::string& error);
std::vector<RuntimeRenderState> GetLayerRenderStates(unsigned outputWidth, unsigned outputHeight) const;
bool TryGetLayerRenderStates(unsigned outputWidth, unsigned outputHeight, std::vector<RuntimeRenderState>& states) const;
bool TryRefreshCachedLayerStates(std::vector<RuntimeRenderState>& states) const;
void RefreshDynamicRenderStateFields(std::vector<RuntimeRenderState>& states) const;
std::string BuildStateJson() const;
uint64_t GetRenderStateVersion() const { return mRenderStateVersion.load(std::memory_order_relaxed); }
uint64_t GetParameterStateVersion() const { return mParameterStateVersion.load(std::memory_order_relaxed); }
const std::filesystem::path& GetRepoRoot() const { return mRepoRoot; }
const std::filesystem::path& GetUiRoot() const { return mUiRoot; }
@@ -50,7 +68,10 @@ public:
const std::filesystem::path& GetRuntimeRoot() const { return mRuntimeRoot; }
unsigned short GetServerPort() const { return mServerPort; }
unsigned short GetOscPort() const { return mConfig.oscPort; }
const std::string& GetOscBindAddress() const { return mConfig.oscBindAddress; }
double GetOscSmoothing() const { return mConfig.oscSmoothing; }
unsigned GetMaxTemporalHistoryFrames() const { return mConfig.maxTemporalHistoryFrames; }
unsigned GetPreviewFps() const { return mConfig.previewFps; }
bool ExternalKeyingEnabled() const { return mConfig.enableExternalKeying; }
const std::string& GetInputVideoFormat() const { return mConfig.inputVideoFormat; }
const std::string& GetInputFrameRate() const { return mConfig.inputFrameRate; }
@@ -65,8 +86,11 @@ private:
std::string shaderLibrary = "shaders";
unsigned short serverPort = 8080;
unsigned short oscPort = 9000;
std::string oscBindAddress = "127.0.0.1";
double oscSmoothing = 0.18;
bool autoReload = true;
unsigned maxTemporalHistoryFrames = 4;
unsigned previewFps = 30;
bool enableExternalKeying = false;
std::string inputVideoFormat = "1080p";
std::string inputFrameRate = "59.94";
@@ -76,6 +100,7 @@ private:
struct DeckLinkOutputStatus
{
std::string backendName = "decklink";
std::string modelName;
bool supportsInternalKeying = false;
bool supportsExternalKeying = false;
@@ -102,13 +127,13 @@ private:
bool LoadPersistentState(std::string& error);
bool SavePersistentState(std::string& error) const;
bool ScanShaderPackages(std::string& error);
bool ParseShaderManifest(const std::filesystem::path& manifestPath, ShaderPackage& shaderPackage, std::string& error) const;
bool NormalizeAndValidateValue(const ShaderParameterDefinition& definition, const JsonValue& value, ShaderParameterValue& normalizedValue, std::string& error) const;
ShaderParameterValue DefaultValueForDefinition(const ShaderParameterDefinition& definition) const;
void EnsureLayerDefaultsLocked(LayerPersistentState& layerState, const ShaderPackage& shaderPackage) const;
std::string ReadTextFile(const std::filesystem::path& path, std::string& error) const;
bool WriteTextFile(const std::filesystem::path& path, const std::string& contents, std::string& error) const;
bool ResolvePaths(std::string& error);
void BuildLayerRenderStatesLocked(unsigned outputWidth, unsigned outputHeight, std::vector<RuntimeRenderState>& states) const;
JsonValue BuildStateValue() const;
JsonValue SerializeLayerStackLocked() const;
bool DeserializeLayerStackLocked(const JsonValue& layersValue, std::vector<LayerPersistentState>& layers, std::string& error);
@@ -120,6 +145,12 @@ private:
LayerPersistentState* FindLayerById(const std::string& layerId);
const LayerPersistentState* FindLayerById(const std::string& layerId) const;
std::string GenerateLayerId();
void SetSignalStatusLocked(bool hasSignal, unsigned width, unsigned height, const std::string& modeName);
void MarkRenderStateDirtyLocked();
void MarkParameterStateDirtyLocked();
void SetPerformanceStatsLocked(double frameBudgetMilliseconds, double renderMilliseconds);
void SetFramePacingStatsLocked(double completionIntervalMilliseconds, double smoothedCompletionIntervalMilliseconds,
double maxCompletionIntervalMilliseconds, uint64_t lateFrameCount, uint64_t droppedFrameCount, uint64_t flushedFrameCount);
private:
mutable std::mutex mMutex;
@@ -138,6 +169,7 @@ private:
std::filesystem::path mPatchedGlslPath;
std::map<std::string, ShaderPackage> mPackagesById;
std::vector<std::string> mPackageOrder;
std::vector<ShaderPackageStatus> mPackageStatuses;
bool mReloadRequested;
bool mCompileSucceeded;
std::string mCompileMessage;
@@ -148,11 +180,20 @@ private:
double mFrameBudgetMilliseconds;
double mRenderMilliseconds;
double mSmoothedRenderMilliseconds;
double mCompletionIntervalMilliseconds;
double mSmoothedCompletionIntervalMilliseconds;
double mMaxCompletionIntervalMilliseconds;
double mStartupRandom;
uint64_t mLateFrameCount;
uint64_t mDroppedFrameCount;
uint64_t mFlushedFrameCount;
DeckLinkOutputStatus mDeckLinkOutputStatus;
unsigned short mServerPort;
bool mAutoReloadEnabled;
std::chrono::steady_clock::time_point mStartTime;
std::chrono::steady_clock::time_point mLastScanTime;
-uint64_t mFrameCounter;
+std::atomic<uint64_t> mFrameCounter{ 0 };
std::atomic<uint64_t> mRenderStateVersion{ 0 };
std::atomic<uint64_t> mParameterStateVersion{ 0 };
uint64_t mNextLayerId;
};

View File

@@ -661,3 +661,14 @@ std::string SerializeJson(const JsonValue& value, bool pretty)
SerializeJsonImpl(value, output, pretty, 0);
return output.str();
}
std::vector<double> JsonArrayToNumbers(const JsonValue& value)
{
std::vector<double> numbers;
for (const JsonValue& item : value.asArray())
{
if (item.isNumber())
numbers.push_back(item.asNumber());
}
return numbers;
}

View File

@@ -62,3 +62,4 @@ private:
bool ParseJson(const std::string& text, JsonValue& value, std::string& error);
std::string SerializeJson(const JsonValue& value, bool pretty = false);
std::vector<double> JsonArrayToNumbers(const JsonValue& value);

View File

@@ -26,15 +26,19 @@ bool IsFiniteNumber(double value)
return std::isfinite(value) != 0;
}
-std::vector<double> JsonArrayToNumbers(const JsonValue& value)
+std::string NormalizeTextValue(const std::string& text, unsigned maxLength)
{
-std::vector<double> numbers;
-for (const JsonValue& item : value.asArray())
+std::string normalized;
+normalized.reserve(std::min<std::size_t>(text.size(), maxLength));
+for (unsigned char ch : text)
{
-if (item.isNumber())
-numbers.push_back(item.asNumber());
+if (ch < 32 || ch > 126)
+continue;
+if (normalized.size() >= maxLength)
+break;
+normalized.push_back(static_cast<char>(ch));
}
-return numbers;
+return normalized;
}
}
@@ -82,6 +86,12 @@ ShaderParameterValue DefaultValueForDefinition(const ShaderParameterDefinition&
case ShaderParameterType::Enum:
value.enumValue = definition.defaultEnumValue;
break;
case ShaderParameterType::Text:
value.textValue = NormalizeTextValue(definition.defaultTextValue, definition.maxLength);
break;
case ShaderParameterType::Trigger:
value.numberValues = { 0.0, -1000000.0 };
break;
}
return value;
}
@@ -164,6 +174,22 @@ bool NormalizeAndValidateParameterValue(const ShaderParameterDefinition& definit
error = "Enum parameter '" + definition.id + "' received unsupported option '" + selectedValue + "'.";
return false;
}
case ShaderParameterType::Text:
if (!value.isString())
{
error = "Expected string value for text parameter '" + definition.id + "'.";
return false;
}
normalizedValue.textValue = NormalizeTextValue(value.asString(), definition.maxLength);
return true;
case ShaderParameterType::Trigger:
if (!value.isNumber() && !value.isBoolean())
{
error = "Expected numeric or boolean value for trigger parameter '" + definition.id + "'.";
return false;
}
normalizedValue.numberValues = { value.isNumber() ? std::max(0.0, std::floor(value.asNumber())) : 0.0, -1000000.0 };
return true;
}
return false;

View File

@@ -4,6 +4,7 @@
#include "NativeHandles.h"
#include <fstream>
#include <cctype>
#include <regex>
#include <sstream>
#include <vector>
@@ -30,15 +31,36 @@ std::string SlangCBufferTypeForParameter(ShaderParameterType type)
case ShaderParameterType::Color: return "float4";
case ShaderParameterType::Boolean: return "bool";
case ShaderParameterType::Enum: return "int";
case ShaderParameterType::Text: return "";
case ShaderParameterType::Trigger: return "int";
}
return "float";
}
std::string CapitalizeIdentifier(const std::string& identifier)
{
if (identifier.empty())
return identifier;
std::string text = identifier;
text[0] = static_cast<char>(std::toupper(static_cast<unsigned char>(text[0])));
return text;
}
std::string BuildParameterUniforms(const std::vector<ShaderParameterDefinition>& parameters)
{
std::ostringstream source;
for (const ShaderParameterDefinition& definition : parameters)
{
if (definition.type == ShaderParameterType::Text)
continue;
if (definition.type == ShaderParameterType::Trigger)
{
source << "\tint " << definition.id << ";\n";
source << "\tfloat " << definition.id << "Time;\n";
continue;
}
source << "\t" << SlangCBufferTypeForParameter(definition.type) << " " << definition.id << ";\n";
}
return source.str();
}
@@ -60,6 +82,44 @@ std::string BuildTextureSamplerDeclarations(const std::vector<ShaderTextureAsset
return source.str();
}
std::string BuildTextSamplerDeclarations(const std::vector<ShaderParameterDefinition>& parameters)
{
std::ostringstream source;
for (const ShaderParameterDefinition& definition : parameters)
{
if (definition.type != ShaderParameterType::Text)
continue;
source << "Sampler2D<float4> " << definition.id << "Texture;\n";
}
if (source.tellp() > 0)
source << "\n";
return source.str();
}
std::string BuildTextHelpers(const std::vector<ShaderParameterDefinition>& parameters)
{
std::ostringstream source;
for (const ShaderParameterDefinition& definition : parameters)
{
if (definition.type != ShaderParameterType::Text)
continue;
const std::string suffix = CapitalizeIdentifier(definition.id);
source
<< "float sample" << suffix << "(float2 uv)\n"
<< "{\n"
<< "\tif (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0)\n"
<< "\t\treturn 0.0;\n"
<< "\treturn " << definition.id << "Texture.Sample(uv).r;\n"
<< "}\n\n"
<< "float4 draw" << suffix << "(float2 uv, float4 fillColor)\n"
<< "{\n"
<< "\tfloat alpha = sample" << suffix << "(uv) * fillColor.a;\n"
<< "\treturn float4(fillColor.rgb * alpha, alpha);\n"
<< "}\n\n";
}
return source.str();
}
std::string BuildHistorySwitchCases(const std::string& samplerPrefix, unsigned historyLength)
{
std::ostringstream source;
@@ -83,10 +143,10 @@ ShaderCompiler::ShaderCompiler(
{
}
bool ShaderCompiler::BuildLayerFragmentShaderSource(const ShaderPackage& shaderPackage, std::string& fragmentShaderSource, std::string& error) const
bool ShaderCompiler::BuildPassFragmentShaderSource(const ShaderPackage& shaderPackage, const ShaderPassDefinition& pass, std::string& fragmentShaderSource, std::string& error) const
{
std::string wrapperSource;
if (!BuildWrapperSlangSource(shaderPackage, wrapperSource, error))
if (!BuildWrapperSlangSource(shaderPackage, pass, wrapperSource, error))
return false;
if (!WriteTextFile(mWrapperPath, wrapperSource, error))
return false;
@@ -107,7 +167,7 @@ bool ShaderCompiler::BuildLayerFragmentShaderSource(const ShaderPackage& shaderP
return true;
}
bool ShaderCompiler::BuildWrapperSlangSource(const ShaderPackage& shaderPackage, std::string& wrapperSource, std::string& error) const
bool ShaderCompiler::BuildWrapperSlangSource(const ShaderPackage& shaderPackage, const ShaderPassDefinition& pass, std::string& wrapperSource, std::string& error) const
{
const std::filesystem::path templatePath = mRepoRoot / "runtime" / "templates" / "shader_wrapper.slang.in";
wrapperSource = ReadTextFile(templatePath, error);
@@ -115,18 +175,38 @@ bool ShaderCompiler::BuildWrapperSlangSource(const ShaderPackage& shaderPackage,
return false;
wrapperSource = ReplaceAll(wrapperSource, "{{PARAMETER_UNIFORMS}}", BuildParameterUniforms(shaderPackage.parameters));
wrapperSource = ReplaceAll(wrapperSource, "{{SOURCE_HISTORY_SAMPLERS}}", BuildHistorySamplerDeclarations("gSourceHistory", mMaxTemporalHistoryFrames));
wrapperSource = ReplaceAll(wrapperSource, "{{TEMPORAL_HISTORY_SAMPLERS}}", BuildHistorySamplerDeclarations("gTemporalHistory", mMaxTemporalHistoryFrames));
const unsigned historySamplerCount = shaderPackage.temporal.enabled ? mMaxTemporalHistoryFrames : 0;
wrapperSource = ReplaceAll(wrapperSource, "{{SOURCE_HISTORY_SAMPLERS}}", BuildHistorySamplerDeclarations("gSourceHistory", historySamplerCount));
wrapperSource = ReplaceAll(wrapperSource, "{{TEMPORAL_HISTORY_SAMPLERS}}", BuildHistorySamplerDeclarations("gTemporalHistory", historySamplerCount));
wrapperSource = ReplaceAll(wrapperSource, "{{FEEDBACK_SAMPLER}}", shaderPackage.feedback.enabled ? "Sampler2D<float4> gFeedbackState;\n" : "");
wrapperSource = ReplaceAll(wrapperSource, "{{FEEDBACK_HELPER}}",
shaderPackage.feedback.enabled
? "float4 sampleFeedback(float2 tc)\n{\n\tif (gFeedbackAvailable <= 0)\n\t\treturn float4(0.0, 0.0, 0.0, 0.0);\n\treturn gFeedbackState.Sample(tc);\n}\n"
: "float4 sampleFeedback(float2 tc)\n{\n\treturn float4(0.0, 0.0, 0.0, 0.0);\n}\n");
wrapperSource = ReplaceAll(wrapperSource, "{{TEXTURE_SAMPLERS}}", BuildTextureSamplerDeclarations(shaderPackage.textureAssets));
wrapperSource = ReplaceAll(wrapperSource, "{{SOURCE_HISTORY_SWITCH_CASES}}", BuildHistorySwitchCases("gSourceHistory", mMaxTemporalHistoryFrames));
wrapperSource = ReplaceAll(wrapperSource, "{{TEMPORAL_HISTORY_SWITCH_CASES}}", BuildHistorySwitchCases("gTemporalHistory", mMaxTemporalHistoryFrames));
wrapperSource = ReplaceAll(wrapperSource, "{{USER_SHADER_INCLUDE}}", shaderPackage.shaderPath.generic_string());
wrapperSource = ReplaceAll(wrapperSource, "{{ENTRY_POINT_CALL}}", shaderPackage.entryPoint + "(context)");
wrapperSource = ReplaceAll(wrapperSource, "{{TEXT_SAMPLERS}}", BuildTextSamplerDeclarations(shaderPackage.parameters));
wrapperSource = ReplaceAll(wrapperSource, "{{TEXT_HELPERS}}", BuildTextHelpers(shaderPackage.parameters));
wrapperSource = ReplaceAll(wrapperSource, "{{SOURCE_HISTORY_SWITCH_CASES}}", BuildHistorySwitchCases("gSourceHistory", historySamplerCount));
wrapperSource = ReplaceAll(wrapperSource, "{{TEMPORAL_HISTORY_SWITCH_CASES}}", BuildHistorySwitchCases("gTemporalHistory", historySamplerCount));
wrapperSource = ReplaceAll(wrapperSource, "{{USER_SHADER_INCLUDE}}", pass.sourcePath.generic_string());
wrapperSource = ReplaceAll(wrapperSource, "{{ENTRY_POINT_CALL}}", pass.entryPoint + "(context)");
return true;
}
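`BuildWrapperSlangSource` leans entirely on `ReplaceAll` to substitute `{{...}}` placeholders in the wrapper template. That helper is not included in this diff; a minimal version consistent with how it is used here might look like the following (an assumption, not the project's actual implementation — note the cursor advances past the replacement so a `to` string containing `from` cannot loop forever):

```cpp
#include <string>

// Hypothetical sketch of the ReplaceAll helper used by the wrapper builder:
// replaces every occurrence of `from` with `to`, left to right.
std::string ReplaceAll(std::string text, const std::string& from, const std::string& to)
{
    if (from.empty())
        return text;
    std::size_t pos = 0;
    while ((pos = text.find(from, pos)) != std::string::npos)
    {
        text.replace(pos, from.size(), to);
        pos += to.size(); // skip past the replacement to avoid re-matching it
    }
    return text;
}
```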
bool ShaderCompiler::FindSlangCompiler(std::filesystem::path& compilerPath, std::string& error) const
{
char slangRootBuffer[MAX_PATH] = {};
const DWORD slangRootLength = GetEnvironmentVariableA("SLANG_ROOT", slangRootBuffer, static_cast<DWORD>(sizeof(slangRootBuffer)));
if (slangRootLength > 0 && slangRootLength < sizeof(slangRootBuffer))
{
std::filesystem::path candidate = std::filesystem::path(slangRootBuffer) / "bin" / "slangc.exe";
if (std::filesystem::exists(candidate))
{
compilerPath = candidate;
return true;
}
}
std::filesystem::path thirdPartyRoot = mRepoRoot / "3rdParty";
if (!std::filesystem::exists(thirdPartyRoot))
{


@@ -15,10 +15,10 @@ public:
const std::filesystem::path& patchedGlslPath,
unsigned maxTemporalHistoryFrames);
bool BuildLayerFragmentShaderSource(const ShaderPackage& shaderPackage, std::string& fragmentShaderSource, std::string& error) const;
bool BuildPassFragmentShaderSource(const ShaderPackage& shaderPackage, const ShaderPassDefinition& pass, std::string& fragmentShaderSource, std::string& error) const;
private:
bool BuildWrapperSlangSource(const ShaderPackage& shaderPackage, std::string& wrapperSource, std::string& error) const;
bool BuildWrapperSlangSource(const ShaderPackage& shaderPackage, const ShaderPassDefinition& pass, std::string& wrapperSource, std::string& error) const;
bool FindSlangCompiler(std::filesystem::path& compilerPath, std::string& error) const;
bool RunSlangCompiler(const std::filesystem::path& wrapperPath, const std::filesystem::path& outputPath, std::string& error) const;
bool PatchGeneratedGlsl(std::string& shaderText, std::string& error) const;


@@ -29,17 +29,6 @@ bool IsFiniteNumber(double value)
return std::isfinite(value) != 0;
}
std::vector<double> JsonArrayToNumbers(const JsonValue& value)
{
std::vector<double> numbers;
for (const JsonValue& item : value.asArray())
{
if (item.isNumber())
numbers.push_back(item.asNumber());
}
return numbers;
}
bool ParseShaderParameterType(const std::string& typeName, ShaderParameterType& type)
{
if (typeName == "float")
@@ -67,6 +56,16 @@ bool ParseShaderParameterType(const std::string& typeName, ShaderParameterType&
type = ShaderParameterType::Enum;
return true;
}
if (typeName == "text")
{
type = ShaderParameterType::Text;
return true;
}
if (typeName == "trigger")
{
type = ShaderParameterType::Trigger;
return true;
}
return false;
}
@@ -240,6 +239,107 @@ bool ParseShaderMetadata(const JsonValue& manifestJson, ShaderPackage& shaderPac
return true;
}
bool ParsePassDefinitions(const JsonValue& manifestJson, ShaderPackage& shaderPackage, const std::filesystem::path& manifestPath, std::string& error)
{
const JsonValue* passesValue = nullptr;
if (!OptionalArrayField(manifestJson, "passes", passesValue, manifestPath, error))
return false;
if (!passesValue)
{
// Existing shader packages are treated as a single implicit pass, so
// multipass support does not require manifest churn.
ShaderPassDefinition pass;
pass.id = "main";
pass.entryPoint = shaderPackage.entryPoint;
pass.sourcePath = shaderPackage.shaderPath;
pass.outputName = "layerOutput";
if (!std::filesystem::exists(pass.sourcePath))
{
error = "Shader source not found for package " + shaderPackage.id + ": " + pass.sourcePath.string();
return false;
}
pass.sourceWriteTime = std::filesystem::last_write_time(pass.sourcePath);
shaderPackage.passes.push_back(pass);
return true;
}
if (passesValue->asArray().empty())
{
error = "Shader manifest 'passes' field must not be empty in: " + ManifestPathMessage(manifestPath);
return false;
}
for (const JsonValue& passJson : passesValue->asArray())
{
if (!passJson.isObject())
{
error = "Shader pass entry must be an object in: " + ManifestPathMessage(manifestPath);
return false;
}
std::string passId;
std::string sourcePath;
if (!RequireNonEmptyStringField(passJson, "id", passId, manifestPath, error) ||
!RequireNonEmptyStringField(passJson, "source", sourcePath, manifestPath, error))
{
error = "Shader pass is missing required 'id' or 'source' in: " + ManifestPathMessage(manifestPath);
return false;
}
if (!ValidateShaderIdentifier(passId, "passes[].id", manifestPath, error))
return false;
for (const ShaderPassDefinition& existingPass : shaderPackage.passes)
{
if (existingPass.id == passId)
{
error = "Duplicate shader pass id '" + passId + "' in: " + ManifestPathMessage(manifestPath);
return false;
}
}
ShaderPassDefinition pass;
pass.id = passId;
pass.sourcePath = shaderPackage.directoryPath / sourcePath;
if (!OptionalStringField(passJson, "entryPoint", pass.entryPoint, shaderPackage.entryPoint, manifestPath, error) ||
!OptionalStringField(passJson, "output", pass.outputName, passId, manifestPath, error))
{
return false;
}
if (!ValidateShaderIdentifier(pass.entryPoint, "passes[].entryPoint", manifestPath, error))
return false;
const JsonValue* inputsValue = nullptr;
if (!OptionalArrayField(passJson, "inputs", inputsValue, manifestPath, error))
return false;
if (inputsValue)
{
for (const JsonValue& inputValue : inputsValue->asArray())
{
if (!inputValue.isString())
{
error = "Shader pass inputs must be strings in: " + ManifestPathMessage(manifestPath);
return false;
}
pass.inputNames.push_back(inputValue.asString());
}
}
// Keep source validation in the registry. Bad pass declarations then
// appear as unavailable shaders instead of failing at render time.
if (!std::filesystem::exists(pass.sourcePath))
{
error = "Shader pass source not found for package " + shaderPackage.id + ": " + pass.sourcePath.string();
return false;
}
pass.sourceWriteTime = std::filesystem::last_write_time(pass.sourcePath);
shaderPackage.passes.push_back(pass);
}
shaderPackage.shaderPath = shaderPackage.passes.front().sourcePath;
return true;
}
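Going by the parser above, a multipass manifest with feedback enabled might look like this sketch. The field names (`passes`, `id`, `source`, `entryPoint`, `output`, `inputs`, `feedback`, `enabled`, `writePass`) are taken from the parsing code; the shader id, pass ids, and file names are hypothetical. `entryPoint` and `output` fall back to the package entry point and the pass id respectively when omitted:

```json
{
  "id": "glowChain",
  "entryPoint": "mainImage",
  "passes": [
    { "id": "blur", "source": "blur.slang" },
    {
      "id": "composite",
      "source": "composite.slang",
      "entryPoint": "compositeImage",
      "inputs": ["source", "blur"],
      "output": "layerOutput"
    }
  ],
  "feedback": { "enabled": true, "writePass": "composite" }
}
```

Omitting `passes` entirely keeps older single-shader manifests working: they are wrapped in the implicit `main` pass shown at the top of `ParsePassDefinitions`.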
bool ParseTextureAssets(const JsonValue& manifestJson, ShaderPackage& shaderPackage, const std::filesystem::path& manifestPath, std::string& error)
{
const JsonValue* texturesValue = nullptr;
@@ -283,6 +383,49 @@ bool ParseTextureAssets(const JsonValue& manifestJson, ShaderPackage& shaderPack
return true;
}
bool ParseFontAssets(const JsonValue& manifestJson, ShaderPackage& shaderPackage, const std::filesystem::path& manifestPath, std::string& error)
{
const JsonValue* fontsValue = nullptr;
if (!OptionalArrayField(manifestJson, "fonts", fontsValue, manifestPath, error))
return false;
if (!fontsValue)
return true;
for (const JsonValue& fontJson : fontsValue->asArray())
{
if (!fontJson.isObject())
{
error = "Shader font entry must be an object in: " + ManifestPathMessage(manifestPath);
return false;
}
std::string fontId;
std::string fontPath;
if (!RequireNonEmptyStringField(fontJson, "id", fontId, manifestPath, error) ||
!RequireNonEmptyStringField(fontJson, "path", fontPath, manifestPath, error))
{
error = "Shader font is missing required 'id' or 'path' in: " + ManifestPathMessage(manifestPath);
return false;
}
if (!ValidateShaderIdentifier(fontId, "fonts[].id", manifestPath, error))
return false;
ShaderFontAsset fontAsset;
fontAsset.id = fontId;
fontAsset.path = shaderPackage.directoryPath / fontPath;
if (!std::filesystem::exists(fontAsset.path))
{
error = "Shader font asset not found for package " + shaderPackage.id + ": " + fontAsset.path.string();
return false;
}
fontAsset.writeTime = std::filesystem::last_write_time(fontAsset.path);
shaderPackage.fontAssets.push_back(fontAsset);
}
return true;
}
bool ParseTemporalSettings(const JsonValue& manifestJson, ShaderPackage& shaderPackage, unsigned maxTemporalHistoryFrames, const std::filesystem::path& manifestPath, std::string& error)
{
const JsonValue* temporalValue = nullptr;
@@ -330,6 +473,46 @@ bool ParseTemporalSettings(const JsonValue& manifestJson, ShaderPackage& shaderP
return true;
}
bool ParseFeedbackSettings(const JsonValue& manifestJson, ShaderPackage& shaderPackage, const std::filesystem::path& manifestPath, std::string& error)
{
const JsonValue* feedbackValue = nullptr;
if (!OptionalObjectField(manifestJson, "feedback", feedbackValue, manifestPath, error))
return false;
if (!feedbackValue)
return true;
const JsonValue* enabledValue = feedbackValue->find("enabled");
if (!enabledValue || !enabledValue->asBoolean(false))
return true;
shaderPackage.feedback.enabled = true;
if (!OptionalStringField(*feedbackValue, "writePass", shaderPackage.feedback.writePassId, "", manifestPath, error))
return false;
if (shaderPackage.feedback.writePassId.empty())
{
if (shaderPackage.passes.empty())
{
error = "Feedback-enabled shader has no passes to target in: " + ManifestPathMessage(manifestPath);
return false;
}
shaderPackage.feedback.writePassId = shaderPackage.passes.back().id;
}
if (!ValidateShaderIdentifier(shaderPackage.feedback.writePassId, "feedback.writePass", manifestPath, error))
return false;
const auto passIt = std::find_if(shaderPackage.passes.begin(), shaderPackage.passes.end(),
[&shaderPackage](const ShaderPassDefinition& pass) { return pass.id == shaderPackage.feedback.writePassId; });
if (passIt == shaderPackage.passes.end())
{
error = "Feedback writePass '" + shaderPackage.feedback.writePassId + "' does not match any declared pass in: " + ManifestPathMessage(manifestPath);
return false;
}
return true;
}
bool ParseParameterNumberField(const JsonValue& parameterJson, const char* fieldName, std::vector<double>& values, const std::filesystem::path& manifestPath, std::string& error)
{
if (const JsonValue* fieldValue = parameterJson.find(fieldName))
@@ -365,6 +548,17 @@ bool ParseParameterDefault(const JsonValue& parameterJson, ShaderParameterDefini
return true;
}
if (definition.type == ShaderParameterType::Text)
{
if (!defaultValue->isString())
{
error = "Text parameter default must be a string for: " + definition.id;
return false;
}
definition.defaultTextValue = defaultValue->asString();
return true;
}
return NumberListFromJsonValue(*defaultValue, definition.defaultNumbers, "default", manifestPath, error);
}
@@ -439,6 +633,9 @@ bool ParseParameterDefinition(const JsonValue& parameterJson, ShaderParameterDef
if (!ValidateShaderIdentifier(definition.id, "parameters[].id", manifestPath, error))
return false;
if (!OptionalStringField(parameterJson, "description", definition.description, "", manifestPath, error))
return false;
if (!ParseParameterDefault(parameterJson, definition, manifestPath, error) ||
!ParseParameterNumberField(parameterJson, "min", definition.minNumbers, manifestPath, error) ||
!ParseParameterNumberField(parameterJson, "max", definition.maxNumbers, manifestPath, error) ||
@@ -447,6 +644,30 @@ bool ParseParameterDefinition(const JsonValue& parameterJson, ShaderParameterDef
return false;
}
if (definition.type == ShaderParameterType::Text)
{
if (const JsonValue* fontValue = parameterJson.find("font"))
{
if (!fontValue->isString())
{
error = "Text parameter 'font' must be a string for: " + definition.id;
return false;
}
definition.fontId = fontValue->asString();
if (!definition.fontId.empty() && !ValidateShaderIdentifier(definition.fontId, "parameters[].font", manifestPath, error))
return false;
}
if (const JsonValue* maxLengthValue = parameterJson.find("maxLength"))
{
if (!maxLengthValue->isNumber() || maxLengthValue->asNumber() < 1.0 || maxLengthValue->asNumber() > 256.0)
{
error = "Text parameter 'maxLength' must be a number from 1 to 256 for: " + definition.id;
return false;
}
definition.maxLength = static_cast<unsigned>(maxLengthValue->asNumber());
}
}
if (definition.type == ShaderParameterType::Enum)
return ParseParameterOptions(parameterJson, definition, manifestPath, error);
@@ -471,6 +692,36 @@ bool ParseParameterDefinitions(const JsonValue& manifestJson, ShaderPackage& sha
return true;
}
std::string UniqueUnavailableShaderId(const std::filesystem::path& manifestPath, const std::string& parsedId)
{
const std::string fallbackId = manifestPath.parent_path().filename().string();
const std::string baseId = parsedId.empty() ? fallbackId : parsedId;
return baseId + "@invalid:" + fallbackId;
}
ShaderPackageStatus BuildUnavailableStatus(const std::filesystem::path& manifestPath, const ShaderPackage& partialPackage, const std::string& packageError)
{
ShaderPackageStatus status;
status.id = UniqueUnavailableShaderId(manifestPath, partialPackage.id);
status.displayName = !partialPackage.displayName.empty() ? partialPackage.displayName : manifestPath.parent_path().filename().string();
status.description = partialPackage.description;
status.category = !partialPackage.category.empty() ? partialPackage.category : "Unavailable";
status.available = false;
status.error = packageError;
return status;
}
ShaderPackageStatus BuildAvailableStatus(const ShaderPackage& shaderPackage)
{
ShaderPackageStatus status;
status.id = shaderPackage.id;
status.displayName = shaderPackage.displayName;
status.description = shaderPackage.description;
status.category = shaderPackage.category;
status.available = true;
return status;
}
}
ShaderPackageRegistry::ShaderPackageRegistry(unsigned maxTemporalHistoryFrames)
@@ -478,10 +729,16 @@ ShaderPackageRegistry::ShaderPackageRegistry(unsigned maxTemporalHistoryFrames)
{
}
bool ShaderPackageRegistry::Scan(const std::filesystem::path& shaderRoot, std::map<std::string, ShaderPackage>& packagesById, std::vector<std::string>& packageOrder, std::string& error) const
bool ShaderPackageRegistry::Scan(
const std::filesystem::path& shaderRoot,
std::map<std::string, ShaderPackage>& packagesById,
std::vector<std::string>& packageOrder,
std::vector<ShaderPackageStatus>& packageStatuses,
std::string& error) const
{
packagesById.clear();
packageOrder.clear();
packageStatuses.clear();
if (!std::filesystem::exists(shaderRoot))
{
@@ -500,19 +757,27 @@ bool ShaderPackageRegistry::Scan(const std::filesystem::path& shaderRoot, std::m
ShaderPackage shaderPackage;
if (!ParseManifest(manifestPath, shaderPackage, error))
return false;
{
packageStatuses.push_back(BuildUnavailableStatus(manifestPath, shaderPackage, error));
error.clear();
continue;
}
if (packagesById.find(shaderPackage.id) != packagesById.end())
{
error = "Duplicate shader id found: " + shaderPackage.id;
return false;
packageStatuses.push_back(BuildUnavailableStatus(manifestPath, shaderPackage, "Duplicate shader id found: " + shaderPackage.id));
continue;
}
packageOrder.push_back(shaderPackage.id);
packageStatuses.push_back(BuildAvailableStatus(shaderPackage));
packagesById[shaderPackage.id] = shaderPackage;
}
std::sort(packageOrder.begin(), packageOrder.end());
std::sort(packageStatuses.begin(), packageStatuses.end(), [](const ShaderPackageStatus& left, const ShaderPackageStatus& right) {
return left.displayName < right.displayName;
});
return true;
}
@@ -534,16 +799,20 @@ bool ShaderPackageRegistry::ParseManifest(const std::filesystem::path& manifestP
if (!ParseShaderMetadata(manifestJson, shaderPackage, manifestPath, error))
return false;
if (!std::filesystem::exists(shaderPackage.shaderPath))
{
error = "Shader source not found for package " + shaderPackage.id + ": " + shaderPackage.shaderPath.string();
if (!ParsePassDefinitions(manifestJson, shaderPackage, manifestPath, error))
return false;
}
shaderPackage.shaderWriteTime = std::filesystem::last_write_time(shaderPackage.shaderPath);
shaderPackage.shaderWriteTime = shaderPackage.passes.front().sourceWriteTime;
for (const ShaderPassDefinition& pass : shaderPackage.passes)
{
if (pass.sourceWriteTime > shaderPackage.shaderWriteTime)
shaderPackage.shaderWriteTime = pass.sourceWriteTime;
}
shaderPackage.manifestWriteTime = std::filesystem::last_write_time(shaderPackage.manifestPath);
return ParseTextureAssets(manifestJson, shaderPackage, manifestPath, error) &&
ParseFontAssets(manifestJson, shaderPackage, manifestPath, error) &&
ParseTemporalSettings(manifestJson, shaderPackage, mMaxTemporalHistoryFrames, manifestPath, error) &&
ParseFeedbackSettings(manifestJson, shaderPackage, manifestPath, error) &&
ParseParameterDefinitions(manifestJson, shaderPackage, manifestPath, error);
}


@@ -12,7 +12,12 @@ class ShaderPackageRegistry
public:
explicit ShaderPackageRegistry(unsigned maxTemporalHistoryFrames);
bool Scan(const std::filesystem::path& shaderRoot, std::map<std::string, ShaderPackage>& packagesById, std::vector<std::string>& packageOrder, std::string& error) const;
bool Scan(
const std::filesystem::path& shaderRoot,
std::map<std::string, ShaderPackage>& packagesById,
std::vector<std::string>& packageOrder,
std::vector<ShaderPackageStatus>& packageStatuses,
std::string& error) const;
bool ParseManifest(const std::filesystem::path& manifestPath, ShaderPackage& shaderPackage, std::string& error) const;
private:


@@ -11,7 +11,9 @@ enum class ShaderParameterType
Vec2,
Color,
Boolean,
Enum
Enum,
Text,
Trigger
};
struct ShaderParameterOption
@@ -24,6 +26,7 @@ struct ShaderParameterDefinition
{
std::string id;
std::string label;
std::string description;
ShaderParameterType type = ShaderParameterType::Float;
std::vector<double> defaultNumbers;
std::vector<double> minNumbers;
@@ -31,6 +34,9 @@ struct ShaderParameterDefinition
std::vector<double> stepNumbers;
bool defaultBoolean = false;
std::string defaultEnumValue;
std::string defaultTextValue;
std::string fontId;
unsigned maxLength = 64;
std::vector<ShaderParameterOption> enumOptions;
};
@@ -39,6 +45,7 @@ struct ShaderParameterValue
std::vector<double> numberValues;
bool booleanValue = false;
std::string enumValue;
std::string textValue;
};
enum class TemporalHistorySource
@@ -56,6 +63,12 @@ struct TemporalSettings
unsigned effectiveHistoryLength = 0;
};
struct FeedbackSettings
{
bool enabled = false;
std::string writePassId;
};
struct ShaderTextureAsset
{
std::string id;
@@ -63,6 +76,31 @@ struct ShaderTextureAsset
std::filesystem::file_time_type writeTime;
};
struct ShaderFontAsset
{
std::string id;
std::filesystem::path path;
std::filesystem::file_time_type writeTime;
};
struct ShaderPassDefinition
{
std::string id;
std::string entryPoint;
std::filesystem::path sourcePath;
std::filesystem::file_time_type sourceWriteTime;
std::vector<std::string> inputNames;
std::string outputName;
};
struct ShaderPassBuildSource
{
std::string passId;
std::string fragmentShaderSource;
std::vector<std::string> inputNames;
std::string outputName;
};
struct ShaderPackage
{
std::string id;
@@ -73,21 +111,39 @@ struct ShaderPackage
std::filesystem::path directoryPath;
std::filesystem::path shaderPath;
std::filesystem::path manifestPath;
std::vector<ShaderPassDefinition> passes;
std::vector<ShaderParameterDefinition> parameters;
std::vector<ShaderTextureAsset> textureAssets;
std::vector<ShaderFontAsset> fontAssets;
TemporalSettings temporal;
FeedbackSettings feedback;
std::filesystem::file_time_type shaderWriteTime;
std::filesystem::file_time_type manifestWriteTime;
};
struct ShaderPackageStatus
{
std::string id;
std::string displayName;
std::string description;
std::string category;
bool available = false;
std::string error;
};
struct RuntimeRenderState
{
std::string layerId;
std::string shaderId;
std::string shaderName;
std::vector<ShaderParameterDefinition> parameterDefinitions;
std::map<std::string, ShaderParameterValue> parameterValues;
std::vector<ShaderTextureAsset> textureAssets;
std::vector<ShaderFontAsset> fontAssets;
double timeSeconds = 0.0;
double utcTimeSeconds = 0.0;
double utcOffsetSeconds = 0.0;
double startupRandom = 0.0;
double frameCount = 0.0;
double mixAmount = 1.0;
double bypass = 0.0;
@@ -99,4 +155,5 @@ struct RuntimeRenderState
TemporalHistorySource temporalHistorySource = TemporalHistorySource::None;
unsigned requestedTemporalHistoryLength = 0;
unsigned effectiveTemporalHistoryLength = 0;
FeedbackSettings feedback;
};


@@ -0,0 +1,170 @@
#include "VideoIOFormat.h"
#include <algorithm>
#include <cmath>
#ifdef min
#undef min
#endif
#ifdef max
#undef max
#endif
namespace
{
uint16_t Clamp10(int value, int minimum, int maximum)
{
return static_cast<uint16_t>(std::max(minimum, std::min(maximum, value)));
}
uint32_t MakeV210Word(uint16_t a, uint16_t b, uint16_t c)
{
return (static_cast<uint32_t>(a) & 0x3ffu)
| ((static_cast<uint32_t>(b) & 0x3ffu) << 10)
| ((static_cast<uint32_t>(c) & 0x3ffu) << 20);
}
void StoreWord(std::array<uint8_t, 16>& bytes, std::size_t wordIndex, uint32_t word)
{
const std::size_t offset = wordIndex * 4;
bytes[offset + 0] = static_cast<uint8_t>(word & 0xffu);
bytes[offset + 1] = static_cast<uint8_t>((word >> 8) & 0xffu);
bytes[offset + 2] = static_cast<uint8_t>((word >> 16) & 0xffu);
bytes[offset + 3] = static_cast<uint8_t>((word >> 24) & 0xffu);
}
uint32_t LoadWord(const std::array<uint8_t, 16>& bytes, std::size_t wordIndex)
{
const std::size_t offset = wordIndex * 4;
return static_cast<uint32_t>(bytes[offset + 0])
| (static_cast<uint32_t>(bytes[offset + 1]) << 8)
| (static_cast<uint32_t>(bytes[offset + 2]) << 16)
| (static_cast<uint32_t>(bytes[offset + 3]) << 24);
}
uint16_t Component(uint32_t word, unsigned index)
{
return static_cast<uint16_t>((word >> (index * 10)) & 0x3ffu);
}
}
const char* VideoIOPixelFormatName(VideoIOPixelFormat format)
{
switch (format)
{
case VideoIOPixelFormat::V210:
return "10-bit YUV v210";
case VideoIOPixelFormat::Yuva10:
return "10-bit YUVA Ay10";
case VideoIOPixelFormat::Bgra8:
return "8-bit BGRA";
case VideoIOPixelFormat::Uyvy8:
default:
return "8-bit YUV UYVY";
}
}
bool VideoIOPixelFormatIsTenBit(VideoIOPixelFormat format)
{
return format == VideoIOPixelFormat::V210 || format == VideoIOPixelFormat::Yuva10;
}
VideoIOPixelFormat ChoosePreferredVideoIOFormat(bool tenBitSupported)
{
return tenBitSupported ? VideoIOPixelFormat::V210 : VideoIOPixelFormat::Uyvy8;
}
unsigned VideoIOBytesPerPixel(VideoIOPixelFormat format)
{
switch (format)
{
case VideoIOPixelFormat::Uyvy8:
return 2u;
case VideoIOPixelFormat::Bgra8:
return 4u;
case VideoIOPixelFormat::Yuva10:
return 4u;
case VideoIOPixelFormat::V210:
default:
return 0u;
}
}
unsigned VideoIORowBytes(VideoIOPixelFormat format, unsigned frameWidth)
{
if (format == VideoIOPixelFormat::V210)
return MinimumV210RowBytes(frameWidth);
if (format == VideoIOPixelFormat::Yuva10)
return MinimumYuva10RowBytes(frameWidth);
return frameWidth * VideoIOBytesPerPixel(format);
}
unsigned PackedTextureWidthFromRowBytes(unsigned rowBytes)
{
return (rowBytes + 3u) / 4u;
}
unsigned MinimumV210RowBytes(unsigned frameWidth)
{
return ((frameWidth + 5u) / 6u) * 16u;
}
unsigned MinimumYuva10RowBytes(unsigned frameWidth)
{
return ((frameWidth + 63u) / 64u) * 256u;
}
unsigned ActiveV210WordsForWidth(unsigned frameWidth)
{
return ((frameWidth + 5u) / 6u) * 4u;
}
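The v210 row-size helpers above encode the format's 6-pixels-per-16-bytes grouping; restating them standalone lets the arithmetic be exercised in isolation. For a 1920-wide frame there are exactly 320 groups, so the minimum row is 5120 bytes with no padding words, and the packed RGBA texture width is the row byte count divided by 4:

```cpp
// v210 packs 6 pixels into four 32-bit words (16 bytes); rows round up to a
// whole number of 6-pixel groups. Yuva10 rows round up to 64-pixel groups of
// 256 bytes (4 bytes per pixel). Restated verbatim from the hunk above.
unsigned MinimumV210RowBytes(unsigned frameWidth)
{
    return ((frameWidth + 5u) / 6u) * 16u;
}

unsigned MinimumYuva10RowBytes(unsigned frameWidth)
{
    return ((frameWidth + 63u) / 64u) * 256u;
}

unsigned ActiveV210WordsForWidth(unsigned frameWidth)
{
    return ((frameWidth + 5u) / 6u) * 4u;
}

unsigned PackedTextureWidthFromRowBytes(unsigned rowBytes)
{
    return (rowBytes + 3u) / 4u; // one RGBA texel carries 4 bytes
}
```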
V210CodeValues Rec709RgbToLegalV210(float red, float green, float blue)
{
red = std::max(0.0f, std::min(1.0f, red));
green = std::max(0.0f, std::min(1.0f, green));
blue = std::max(0.0f, std::min(1.0f, blue));
const float y = 0.2126f * red + 0.7152f * green + 0.0722f * blue;
const float cb = (blue - y) / 1.8556f + 0.5f;
const float cr = (red - y) / 1.5748f + 0.5f;
V210CodeValues values;
values.y = Clamp10(static_cast<int>(std::lround(64.0f + y * 876.0f)), 64, 940);
values.cb = Clamp10(static_cast<int>(std::lround(64.0f + cb * 896.0f)), 64, 960);
values.cr = Clamp10(static_cast<int>(std::lround(64.0f + cr * 896.0f)), 64, 960);
return values;
}
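`Rec709RgbToLegalV210` maps full-range RGB into the narrow (legal) 10-bit range: luma spans codes 64–940 and chroma 64–960 around a 512 midpoint. Restating it standalone makes the anchor points easy to check — black lands on (64, 512, 512) and white on (940, 512, 512):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>

struct V210CodeValues
{
    uint16_t y = 64;
    uint16_t cb = 512;
    uint16_t cr = 512;
};

uint16_t Clamp10(int value, int minimum, int maximum)
{
    return static_cast<uint16_t>(std::max(minimum, std::min(maximum, value)));
}

// Restated from the hunk above: BT.709 luma weights, narrow-range 10-bit codes.
V210CodeValues Rec709RgbToLegalV210(float red, float green, float blue)
{
    red = std::max(0.0f, std::min(1.0f, red));
    green = std::max(0.0f, std::min(1.0f, green));
    blue = std::max(0.0f, std::min(1.0f, blue));
    const float y = 0.2126f * red + 0.7152f * green + 0.0722f * blue;
    const float cb = (blue - y) / 1.8556f + 0.5f;
    const float cr = (red - y) / 1.5748f + 0.5f;
    V210CodeValues values;
    values.y = Clamp10(static_cast<int>(std::lround(64.0f + y * 876.0f)), 64, 940);
    values.cb = Clamp10(static_cast<int>(std::lround(64.0f + cb * 896.0f)), 64, 960);
    values.cr = Clamp10(static_cast<int>(std::lround(64.0f + cr * 896.0f)), 64, 960);
    return values;
}
```

Out-of-range inputs are clamped before conversion, so super-white RGB still produces the legal white point.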
std::array<uint8_t, 16> PackV210Block(const V210SixPixelBlock& block)
{
std::array<uint8_t, 16> bytes = {};
StoreWord(bytes, 0, MakeV210Word(block.cb[0], block.y[0], block.cr[0]));
StoreWord(bytes, 1, MakeV210Word(block.y[1], block.cb[1], block.y[2]));
StoreWord(bytes, 2, MakeV210Word(block.cr[1], block.y[3], block.cb[2]));
StoreWord(bytes, 3, MakeV210Word(block.y[4], block.cr[2], block.y[5]));
return bytes;
}
V210SixPixelBlock UnpackV210Block(const std::array<uint8_t, 16>& bytes)
{
const uint32_t word0 = LoadWord(bytes, 0);
const uint32_t word1 = LoadWord(bytes, 1);
const uint32_t word2 = LoadWord(bytes, 2);
const uint32_t word3 = LoadWord(bytes, 3);
V210SixPixelBlock block;
block.cb[0] = Component(word0, 0);
block.y[0] = Component(word0, 1);
block.cr[0] = Component(word0, 2);
block.y[1] = Component(word1, 0);
block.cb[1] = Component(word1, 1);
block.y[2] = Component(word1, 2);
block.cr[1] = Component(word2, 0);
block.y[3] = Component(word2, 1);
block.cb[2] = Component(word2, 2);
block.y[4] = Component(word3, 0);
block.cr[2] = Component(word3, 1);
block.y[5] = Component(word3, 2);
return block;
}
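The block pack/unpack pair above reduces to one invariant: each 32-bit word carries three 10-bit components in its low 30 bits, little-endian within the word. Restating the two word-level helpers from the anonymous namespace lets that round trip be exercised standalone:

```cpp
#include <cstdint>

// Restated from the hunk above: three 10-bit components per 32-bit word,
// component 0 in bits 0-9, component 1 in bits 10-19, component 2 in bits 20-29.
uint32_t MakeV210Word(uint16_t a, uint16_t b, uint16_t c)
{
    return (static_cast<uint32_t>(a) & 0x3ffu)
        | ((static_cast<uint32_t>(b) & 0x3ffu) << 10)
        | ((static_cast<uint32_t>(c) & 0x3ffu) << 20);
}

uint16_t Component(uint32_t word, unsigned index)
{
    return static_cast<uint16_t>((word >> (index * 10)) & 0x3ffu);
}
```

Because each component is masked to 10 bits, the top two bits of every packed word are always zero, which is what the v210 wire format requires.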


@@ -0,0 +1,39 @@
#pragma once
#include <array>
#include <cstdint>
enum class VideoIOPixelFormat
{
Uyvy8,
V210,
Yuva10,
Bgra8
};
struct V210CodeValues
{
uint16_t y = 64;
uint16_t cb = 512;
uint16_t cr = 512;
};
struct V210SixPixelBlock
{
std::array<uint16_t, 6> y = {};
std::array<uint16_t, 3> cb = {};
std::array<uint16_t, 3> cr = {};
};
const char* VideoIOPixelFormatName(VideoIOPixelFormat format);
bool VideoIOPixelFormatIsTenBit(VideoIOPixelFormat format);
VideoIOPixelFormat ChoosePreferredVideoIOFormat(bool tenBitSupported);
unsigned VideoIOBytesPerPixel(VideoIOPixelFormat format);
unsigned VideoIORowBytes(VideoIOPixelFormat format, unsigned frameWidth);
unsigned PackedTextureWidthFromRowBytes(unsigned rowBytes);
unsigned MinimumV210RowBytes(unsigned frameWidth);
unsigned MinimumYuva10RowBytes(unsigned frameWidth);
unsigned ActiveV210WordsForWidth(unsigned frameWidth);
V210CodeValues Rec709RgbToLegalV210(float red, float green, float blue);
std::array<uint8_t, 16> PackV210Block(const V210SixPixelBlock& block);
V210SixPixelBlock UnpackV210Block(const std::array<uint8_t, 16>& bytes);


@@ -0,0 +1,137 @@
#pragma once
#include "DeckLinkDisplayMode.h"
#include "VideoIOFormat.h"
#include <cstdint>
#include <functional>
#include <string>
enum class VideoIOBackend
{
DeckLink
};
enum class VideoIOCompletionResult
{
Completed,
DisplayedLate,
Dropped,
Flushed,
Unknown
};
struct VideoIOConfig
{
VideoFormatSelection videoModes;
bool externalKeyingEnabled = false;
bool preferTenBit = true;
};
struct VideoIOState
{
FrameSize inputFrameSize;
FrameSize outputFrameSize;
VideoIOPixelFormat inputPixelFormat = VideoIOPixelFormat::Uyvy8;
VideoIOPixelFormat outputPixelFormat = VideoIOPixelFormat::Bgra8;
unsigned inputFrameRowBytes = 0;
unsigned outputFrameRowBytes = 0;
unsigned captureTextureWidth = 0;
unsigned outputPackTextureWidth = 0;
std::string inputDisplayModeName = "1080p59.94";
std::string outputDisplayModeName = "1080p59.94";
std::string outputModelName;
std::string statusMessage;
std::string formatStatusMessage;
bool hasInputDevice = false;
bool hasInputSource = false;
bool supportsInternalKeying = false;
bool supportsExternalKeying = false;
bool keyerInterfaceAvailable = false;
bool externalKeyingActive = false;
double frameBudgetMilliseconds = 0.0;
};
struct VideoIOFrame
{
void* bytes = nullptr;
long rowBytes = 0;
unsigned width = 0;
unsigned height = 0;
VideoIOPixelFormat pixelFormat = VideoIOPixelFormat::Uyvy8;
bool hasNoInputSource = false;
};
struct VideoIOOutputFrame
{
void* bytes = nullptr;
long rowBytes = 0;
unsigned width = 0;
unsigned height = 0;
VideoIOPixelFormat pixelFormat = VideoIOPixelFormat::Bgra8;
void* nativeFrame = nullptr;
void* nativeBuffer = nullptr;
};
struct VideoIOCompletion
{
VideoIOCompletionResult result = VideoIOCompletionResult::Completed;
};
struct VideoIOScheduleTime
{
int64_t streamTime = 0;
int64_t duration = 0;
int64_t timeScale = 0;
uint64_t frameIndex = 0;
};
class VideoIODevice
{
public:
using InputFrameCallback = std::function<void(const VideoIOFrame&)>;
using OutputFrameCallback = std::function<void(const VideoIOCompletion&)>;
virtual ~VideoIODevice() = default;
virtual void ReleaseResources() = 0;
virtual bool DiscoverDevicesAndModes(const VideoFormatSelection& videoModes, std::string& error) = 0;
virtual bool SelectPreferredFormats(const VideoFormatSelection& videoModes, bool outputAlphaRequired, std::string& error) = 0;
virtual bool ConfigureInput(InputFrameCallback callback, const VideoFormat& inputVideoMode, std::string& error) = 0;
virtual bool ConfigureOutput(OutputFrameCallback callback, const VideoFormat& outputVideoMode, bool externalKeyingEnabled, std::string& error) = 0;
virtual bool Start() = 0;
virtual bool Stop() = 0;
virtual const VideoIOState& State() const = 0;
virtual VideoIOState& MutableState() = 0;
virtual bool BeginOutputFrame(VideoIOOutputFrame& frame) = 0;
virtual void EndOutputFrame(VideoIOOutputFrame& frame) = 0;
virtual bool ScheduleOutputFrame(const VideoIOOutputFrame& frame) = 0;
virtual void AccountForCompletionResult(VideoIOCompletionResult result) = 0;
bool HasInputDevice() const { return State().hasInputDevice; }
bool HasInputSource() const { return State().hasInputSource; }
bool InputOutputDimensionsDiffer() const { return State().inputFrameSize != State().outputFrameSize; }
const FrameSize& InputFrameSize() const { return State().inputFrameSize; }
const FrameSize& OutputFrameSize() const { return State().outputFrameSize; }
unsigned InputFrameWidth() const { return State().inputFrameSize.width; }
unsigned InputFrameHeight() const { return State().inputFrameSize.height; }
unsigned OutputFrameWidth() const { return State().outputFrameSize.width; }
unsigned OutputFrameHeight() const { return State().outputFrameSize.height; }
VideoIOPixelFormat InputPixelFormat() const { return State().inputPixelFormat; }
VideoIOPixelFormat OutputPixelFormat() const { return State().outputPixelFormat; }
bool InputIsTenBit() const { return VideoIOPixelFormatIsTenBit(State().inputPixelFormat); }
bool OutputIsTenBit() const { return VideoIOPixelFormatIsTenBit(State().outputPixelFormat); }
unsigned InputFrameRowBytes() const { return State().inputFrameRowBytes; }
unsigned OutputFrameRowBytes() const { return State().outputFrameRowBytes; }
unsigned CaptureTextureWidth() const { return State().captureTextureWidth; }
unsigned OutputPackTextureWidth() const { return State().outputPackTextureWidth; }
const std::string& FormatStatusMessage() const { return State().formatStatusMessage; }
const std::string& InputDisplayModeName() const { return State().inputDisplayModeName; }
const std::string& OutputModelName() const { return State().outputModelName; }
bool SupportsInternalKeying() const { return State().supportsInternalKeying; }
bool SupportsExternalKeying() const { return State().supportsExternalKeying; }
bool KeyerInterfaceAvailable() const { return State().keyerInterfaceAvailable; }
bool ExternalKeyingActive() const { return State().externalKeyingActive; }
const std::string& StatusMessage() const { return State().statusMessage; }
double FrameBudgetMilliseconds() const { return State().frameBudgetMilliseconds; }
void SetStatusMessage(const std::string& message) { MutableState().statusMessage = message; }
};

View File

@@ -0,0 +1,37 @@
#include "VideoPlayoutScheduler.h"
void VideoPlayoutScheduler::Configure(int64_t frameDuration, int64_t timeScale)
{
mFrameDuration = frameDuration;
mTimeScale = timeScale;
Reset();
}
void VideoPlayoutScheduler::Reset()
{
mScheduledFrameIndex = 0;
}
VideoIOScheduleTime VideoPlayoutScheduler::NextScheduleTime()
{
VideoIOScheduleTime time;
time.streamTime = static_cast<int64_t>(mScheduledFrameIndex) * mFrameDuration;
time.duration = mFrameDuration;
time.timeScale = mTimeScale;
time.frameIndex = mScheduledFrameIndex;
++mScheduledFrameIndex;
return time;
}
void VideoPlayoutScheduler::AccountForCompletionResult(VideoIOCompletionResult result)
{
if (result == VideoIOCompletionResult::DisplayedLate || result == VideoIOCompletionResult::Dropped)
mScheduledFrameIndex += 2;
}
double VideoPlayoutScheduler::FrameBudgetMilliseconds() const
{
return mTimeScale != 0
? (static_cast<double>(mFrameDuration) * 1000.0) / static_cast<double>(mTimeScale)
: 0.0;
}
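The scheduler above hands out stream times in units of the mode's time scale, and a late or dropped completion skips two slots so playout re-synchronizes ahead of the hardware clock. A minimal standalone mirror of that arithmetic, worked through for the 59.94 Hz case (frame duration 1001 at time scale 60000, i.e. a budget of roughly 16.68 ms per frame); the names here are illustrative, not part of the codebase:

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>

// Standalone mirror of VideoPlayoutScheduler's arithmetic (illustrative only).
struct SchedulerSketch
{
    int64_t frameDuration = 0;
    int64_t timeScale = 0;
    uint64_t frameIndex = 0;

    // Stream time advances one frame duration per scheduled frame.
    int64_t NextStreamTime()
    {
        const int64_t streamTime = static_cast<int64_t>(frameIndex) * frameDuration;
        ++frameIndex;
        return streamTime;
    }

    // A late or dropped completion skips two slots, matching AccountForCompletionResult().
    void OnLateOrDropped() { frameIndex += 2; }

    // Milliseconds available to produce one frame.
    double FrameBudgetMilliseconds() const
    {
        return timeScale != 0
            ? (static_cast<double>(frameDuration) * 1000.0) / static_cast<double>(timeScale)
            : 0.0;
    }
};
```

After a late frame, the next scheduled stream time jumps two frame durations ahead rather than one, which trades a brief visual stutter for getting the queue back in front of the output clock.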

View File

@@ -0,0 +1,22 @@
#pragma once
#include "VideoIOTypes.h"
#include <cstdint>
class VideoPlayoutScheduler
{
public:
void Configure(int64_t frameDuration, int64_t timeScale);
void Reset();
VideoIOScheduleTime NextScheduleTime();
void AccountForCompletionResult(VideoIOCompletionResult result);
double FrameBudgetMilliseconds() const;
uint64_t ScheduledFrameIndex() const { return mScheduledFrameIndex; }
int64_t TimeScale() const { return mTimeScale; }
private:
int64_t mFrameDuration = 0;
int64_t mTimeScale = 0;
uint64_t mScheduledFrameIndex = 0;
};

View File

@@ -0,0 +1,146 @@
#include "DeckLinkDisplayMode.h"
#include <cctype>
std::string NormalizeModeToken(const std::string& value)
{
std::string normalized;
for (unsigned char ch : value)
{
if (std::isalnum(ch))
normalized.push_back(static_cast<char>(std::tolower(ch)));
}
return normalized;
}
bool ResolveConfiguredDisplayMode(const std::string& videoFormat, const std::string& frameRate, BMDDisplayMode& displayMode, std::string& displayModeName)
{
VideoFormat videoMode;
if (!ResolveConfiguredVideoFormat(videoFormat, frameRate, videoMode))
return false;
displayMode = videoMode.displayMode;
displayModeName = videoMode.displayName;
return true;
}
bool ResolveConfiguredVideoFormat(const std::string& videoFormat, const std::string& frameRate, VideoFormat& videoMode)
{
const std::string formatToken = NormalizeModeToken(videoFormat);
const std::string frameToken = NormalizeModeToken(frameRate);
const std::string combinedToken = formatToken + frameToken;
struct ModeOption
{
const char* token;
BMDDisplayMode mode;
const char* displayName;
};
static const ModeOption options[] =
{
{ "720p50", bmdModeHD720p50, "720p50" },
{ "hd720p50", bmdModeHD720p50, "720p50" },
{ "720p5994", bmdModeHD720p5994, "720p59.94" },
{ "hd720p5994", bmdModeHD720p5994, "720p59.94" },
{ "720p60", bmdModeHD720p60, "720p60" },
{ "hd720p60", bmdModeHD720p60, "720p60" },
{ "1080i50", bmdModeHD1080i50, "1080i50" },
{ "hd1080i50", bmdModeHD1080i50, "1080i50" },
{ "1080i5994", bmdModeHD1080i5994, "1080i59.94" },
{ "hd1080i5994", bmdModeHD1080i5994, "1080i59.94" },
{ "1080i60", bmdModeHD1080i6000, "1080i60" },
{ "hd1080i60", bmdModeHD1080i6000, "1080i60" },
{ "1080p2398", bmdModeHD1080p2398, "1080p23.98" },
{ "hd1080p2398", bmdModeHD1080p2398, "1080p23.98" },
{ "1080p24", bmdModeHD1080p24, "1080p24" },
{ "hd1080p24", bmdModeHD1080p24, "1080p24" },
{ "1080p25", bmdModeHD1080p25, "1080p25" },
{ "hd1080p25", bmdModeHD1080p25, "1080p25" },
{ "1080p2997", bmdModeHD1080p2997, "1080p29.97" },
{ "hd1080p2997", bmdModeHD1080p2997, "1080p29.97" },
{ "1080p30", bmdModeHD1080p30, "1080p30" },
{ "hd1080p30", bmdModeHD1080p30, "1080p30" },
{ "1080p50", bmdModeHD1080p50, "1080p50" },
{ "hd1080p50", bmdModeHD1080p50, "1080p50" },
{ "1080p5994", bmdModeHD1080p5994, "1080p59.94" },
{ "hd1080p5994", bmdModeHD1080p5994, "1080p59.94" },
{ "1080p60", bmdModeHD1080p6000, "1080p60" },
{ "hd1080p60", bmdModeHD1080p6000, "1080p60" },
{ "2160p2398", bmdMode4K2160p2398, "2160p23.98" },
{ "4k2160p2398", bmdMode4K2160p2398, "2160p23.98" },
{ "2160p24", bmdMode4K2160p24, "2160p24" },
{ "4k2160p24", bmdMode4K2160p24, "2160p24" },
{ "2160p25", bmdMode4K2160p25, "2160p25" },
{ "4k2160p25", bmdMode4K2160p25, "2160p25" },
{ "2160p2997", bmdMode4K2160p2997, "2160p29.97" },
{ "4k2160p2997", bmdMode4K2160p2997, "2160p29.97" },
{ "2160p30", bmdMode4K2160p30, "2160p30" },
{ "4k2160p30", bmdMode4K2160p30, "2160p30" },
{ "2160p50", bmdMode4K2160p50, "2160p50" },
{ "4k2160p50", bmdMode4K2160p50, "2160p50" },
{ "2160p5994", bmdMode4K2160p5994, "2160p59.94" },
{ "4k2160p5994", bmdMode4K2160p5994, "2160p59.94" },
{ "2160p60", bmdMode4K2160p60, "2160p60" },
{ "4k2160p60", bmdMode4K2160p60, "2160p60" }
};
for (const ModeOption& option : options)
{
if (combinedToken == option.token || (frameToken.empty() && formatToken == option.token))
{
videoMode.displayMode = option.mode;
videoMode.displayName = option.displayName;
return true;
}
}
return false;
}
bool ResolveConfiguredVideoFormats(
const std::string& inputVideoFormat,
const std::string& inputFrameRate,
const std::string& outputVideoFormat,
const std::string& outputFrameRate,
VideoFormatSelection& videoModes,
std::string& error)
{
if (!ResolveConfiguredVideoFormat(inputVideoFormat, inputFrameRate, videoModes.input))
{
error = "Unsupported DeckLink inputVideoFormat/inputFrameRate in config/runtime-host.json: " +
inputVideoFormat + " / " + inputFrameRate;
return false;
}
if (!ResolveConfiguredVideoFormat(outputVideoFormat, outputFrameRate, videoModes.output))
{
error = "Unsupported DeckLink outputVideoFormat/outputFrameRate in config/runtime-host.json: " +
outputVideoFormat + " / " + outputFrameRate;
return false;
}
return true;
}
bool FindDeckLinkDisplayMode(IDeckLinkDisplayModeIterator* iterator, BMDDisplayMode targetMode, IDeckLinkDisplayMode** foundMode)
{
if (!iterator || !foundMode)
return false;
*foundMode = NULL;
IDeckLinkDisplayMode* candidate = NULL;
while (iterator->Next(&candidate) == S_OK)
{
if (candidate->GetDisplayMode() == targetMode)
{
*foundMode = candidate;
return true;
}
candidate->Release();
candidate = NULL;
}
return false;
}
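Mode matching above works by stripping punctuation and case from the configured strings, concatenating the format and frame-rate tokens, and comparing against the table (so `"1080p"` + `"59.94"` becomes `"1080p5994"`). A standalone copy of the normalization rule, renamed here to avoid clashing with the real declaration:

```cpp
#include <cctype>
#include <string>

// Standalone copy of the normalization rule above: keep alphanumerics, lowercase them.
std::string NormalizeModeTokenSketch(const std::string& value)
{
    std::string normalized;
    for (unsigned char ch : value)
    {
        if (std::isalnum(ch))
            normalized.push_back(static_cast<char>(std::tolower(ch)));
    }
    return normalized;
}
```

Because separators are discarded entirely, user-facing spellings such as "1080p59.94", "1080p 59.94", and "1080P/5994" all resolve to the same table token.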

View File

@@ -0,0 +1,47 @@
#pragma once
#include "DeckLinkAPI_h.h"
#include <string>
struct FrameSize
{
unsigned width = 0;
unsigned height = 0;
bool IsEmpty() const { return width == 0 || height == 0; }
};
inline bool operator==(const FrameSize& left, const FrameSize& right)
{
return left.width == right.width && left.height == right.height;
}
inline bool operator!=(const FrameSize& left, const FrameSize& right)
{
return !(left == right);
}
struct VideoFormat
{
BMDDisplayMode displayMode = bmdModeHD1080p5994;
std::string displayName = "1080p59.94";
};
struct VideoFormatSelection
{
VideoFormat input;
VideoFormat output;
};
std::string NormalizeModeToken(const std::string& value);
bool ResolveConfiguredDisplayMode(const std::string& videoFormat, const std::string& frameRate, BMDDisplayMode& displayMode, std::string& displayModeName);
bool ResolveConfiguredVideoFormat(const std::string& videoFormat, const std::string& frameRate, VideoFormat& videoMode);
bool ResolveConfiguredVideoFormats(
const std::string& inputVideoFormat,
const std::string& inputFrameRate,
const std::string& outputVideoFormat,
const std::string& outputFrameRate,
VideoFormatSelection& videoModes,
std::string& error);
bool FindDeckLinkDisplayMode(IDeckLinkDisplayModeIterator* iterator, BMDDisplayMode targetMode, IDeckLinkDisplayMode** foundMode);

View File

@@ -0,0 +1,104 @@
#include "DeckLinkFrameTransfer.h"
#include "DeckLinkSession.h"
////////////////////////////////////////////
// DeckLink Capture Delegate Class
////////////////////////////////////////////
CaptureDelegate::CaptureDelegate(DeckLinkSession* pOwner) :
m_pOwner(pOwner),
mRefCount(1)
{
}
HRESULT CaptureDelegate::QueryInterface(REFIID, LPVOID* ppv)
{
*ppv = NULL;
return E_NOINTERFACE;
}
ULONG CaptureDelegate::AddRef()
{
return InterlockedIncrement(&mRefCount);
}
ULONG CaptureDelegate::Release()
{
const LONG newCount = InterlockedDecrement(&mRefCount);
if (newCount == 0)
delete this;
return newCount;
}
HRESULT CaptureDelegate::VideoInputFrameArrived(IDeckLinkVideoInputFrame* inputFrame, IDeckLinkAudioInputPacket*)
{
if (!inputFrame)
{
// It's possible to receive a NULL inputFrame but a valid audioPacket; ignore audio-only frames.
return S_OK;
}
bool hasNoInputSource = (inputFrame->GetFlags() & bmdFrameHasNoInputSource) == bmdFrameHasNoInputSource;
m_pOwner->HandleVideoInputFrame(inputFrame, hasNoInputSource);
return S_OK;
}
HRESULT CaptureDelegate::VideoInputFormatChanged(BMDVideoInputFormatChangedEvents, IDeckLinkDisplayMode*, BMDDetectedVideoInputFormatFlags)
{
return S_OK;
}
////////////////////////////////////////////
// DeckLink Playout Delegate Class
////////////////////////////////////////////
PlayoutDelegate::PlayoutDelegate(DeckLinkSession* pOwner) :
m_pOwner(pOwner),
mRefCount(1)
{
}
HRESULT PlayoutDelegate::QueryInterface(REFIID, LPVOID* ppv)
{
*ppv = NULL;
return E_NOINTERFACE;
}
ULONG PlayoutDelegate::AddRef()
{
return InterlockedIncrement(&mRefCount);
}
ULONG PlayoutDelegate::Release()
{
const LONG newCount = InterlockedDecrement(&mRefCount);
if (newCount == 0)
delete this;
return newCount;
}
HRESULT PlayoutDelegate::ScheduledFrameCompleted(IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult result)
{
switch (result)
{
case bmdOutputFrameDisplayedLate:
OutputDebugStringA("ScheduledFrameCompleted() frame did not complete: Frame Displayed Late\n");
break;
case bmdOutputFrameDropped:
OutputDebugStringA("ScheduledFrameCompleted() frame did not complete: Frame Dropped\n");
break;
case bmdOutputFrameCompleted:
case bmdOutputFrameFlushed:
// Don't log bmdOutputFrameFlushed result since it is expected when Stop() is called
break;
default:
OutputDebugStringA("ScheduledFrameCompleted() frame did not complete: Unknown error\n");
}
m_pOwner->HandlePlayoutFrameCompleted(completedFrame, result);
return S_OK;
}
HRESULT PlayoutDelegate::ScheduledPlaybackHasStopped()
{
return S_OK;
}
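Both delegates implement the minimal COM lifetime contract: the reference count starts at 1 (the creating session owns the first reference), and Release() self-deletes when the count reaches zero. A portable sketch of that pattern, substituting std::atomic for the Win32 Interlocked* APIs used above (the class name is illustrative):

```cpp
#include <atomic>

// Portable sketch of the AddRef/Release lifetime pattern used by the delegates,
// with std::atomic standing in for InterlockedIncrement/InterlockedDecrement.
class RefCountedSketch
{
public:
    unsigned long AddRef() { return ++mRefCount; }

    unsigned long Release()
    {
        const unsigned long newCount = --mRefCount;
        if (newCount == 0)
            delete this; // last reference gone; object owns its own deletion
        return newCount;
    }

private:
    ~RefCountedSketch() = default; // heap-only: destruction happens via Release()
    std::atomic<unsigned long> mRefCount{1}; // creator holds the initial reference
};
```

Making the destructor private forces all owners through Release(), which is why the delegates are also only ever destroyed by their own Release() call.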

View File

@@ -0,0 +1,49 @@
#pragma once
#include <windows.h>
#include <atomic>
#include "DeckLinkAPI_h.h"
class DeckLinkSession;
////////////////////////////////////////////
// Capture Delegate Class
////////////////////////////////////////////
class CaptureDelegate : public IDeckLinkInputCallback
{
DeckLinkSession* m_pOwner;
LONG mRefCount;
public:
CaptureDelegate(DeckLinkSession* pOwner);
// IUnknown needs only a dummy implementation
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID* ppv);
virtual ULONG STDMETHODCALLTYPE AddRef();
virtual ULONG STDMETHODCALLTYPE Release();
virtual HRESULT STDMETHODCALLTYPE VideoInputFrameArrived(IDeckLinkVideoInputFrame* videoFrame, IDeckLinkAudioInputPacket* audioPacket);
virtual HRESULT STDMETHODCALLTYPE VideoInputFormatChanged(BMDVideoInputFormatChangedEvents notificationEvents, IDeckLinkDisplayMode* newDisplayMode, BMDDetectedVideoInputFormatFlags detectedSignalFlags);
};
////////////////////////////////////////////
// Render Delegate Class
////////////////////////////////////////////
class PlayoutDelegate : public IDeckLinkVideoOutputCallback
{
DeckLinkSession* m_pOwner;
LONG mRefCount;
public:
PlayoutDelegate(DeckLinkSession* pOwner);
// IUnknown needs only a dummy implementation
virtual HRESULT STDMETHODCALLTYPE QueryInterface(REFIID iid, LPVOID* ppv);
virtual ULONG STDMETHODCALLTYPE AddRef();
virtual ULONG STDMETHODCALLTYPE Release();
virtual HRESULT STDMETHODCALLTYPE ScheduledFrameCompleted(IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult result);
virtual HRESULT STDMETHODCALLTYPE ScheduledPlaybackHasStopped();
};

View File

@@ -0,0 +1,621 @@
#include "DeckLinkSession.h"
#include "GlRenderConstants.h"
#include <atlbase.h>
#include <cstdio>
#include <cstring>
#include <new>
#include <sstream>
#include <utility>
#include <vector>
namespace
{
std::string BstrToUtf8(BSTR value)
{
if (value == nullptr)
return std::string();
const int requiredBytes = WideCharToMultiByte(CP_UTF8, 0, value, -1, NULL, 0, NULL, NULL);
if (requiredBytes <= 1)
return std::string();
std::vector<char> utf8Name(static_cast<std::size_t>(requiredBytes), '\0');
if (WideCharToMultiByte(CP_UTF8, 0, value, -1, utf8Name.data(), requiredBytes, NULL, NULL) <= 0)
return std::string();
return std::string(utf8Name.data());
}
bool InputSupportsFormat(IDeckLinkInput* input, BMDDisplayMode displayMode, BMDPixelFormat pixelFormat)
{
if (input == nullptr)
return false;
BOOL supported = FALSE;
BMDDisplayMode actualMode = bmdModeUnknown;
const HRESULT result = input->DoesSupportVideoMode(
bmdVideoConnectionUnspecified,
displayMode,
pixelFormat,
bmdNoVideoInputConversion,
bmdSupportedVideoModeDefault,
&actualMode,
&supported);
return result == S_OK && supported != FALSE;
}
bool OutputSupportsFormat(IDeckLinkOutput* output, BMDDisplayMode displayMode, BMDPixelFormat pixelFormat)
{
if (output == nullptr)
return false;
BOOL supported = FALSE;
BMDDisplayMode actualMode = bmdModeUnknown;
const HRESULT result = output->DoesSupportVideoMode(
bmdVideoConnectionUnspecified,
displayMode,
pixelFormat,
bmdNoVideoOutputConversion,
bmdSupportedVideoModeDefault,
&actualMode,
&supported);
return result == S_OK && supported != FALSE;
}
}
DeckLinkSession::~DeckLinkSession()
{
ReleaseResources();
}
void DeckLinkSession::ReleaseResources()
{
if (input != nullptr)
input->SetCallback(nullptr);
captureDelegate.Release();
input.Release();
if (output != nullptr)
output->SetScheduledFrameCompletionCallback(nullptr);
if (keyer != nullptr)
{
keyer->Disable();
mState.externalKeyingActive = false;
}
keyer.Release();
playoutDelegate.Release();
outputVideoFrameQueue.clear();
output.Release();
}
bool DeckLinkSession::DiscoverDevicesAndModes(const VideoFormatSelection& videoModes, std::string& error)
{
CComPtr<IDeckLinkIterator> deckLinkIterator;
CComPtr<IDeckLinkDisplayMode> inputMode;
CComPtr<IDeckLinkDisplayMode> outputMode;
mState.inputDisplayModeName = videoModes.input.displayName;
mState.outputDisplayModeName = videoModes.output.displayName;
HRESULT result = CoCreateInstance(CLSID_CDeckLinkIterator, nullptr, CLSCTX_ALL, IID_IDeckLinkIterator, reinterpret_cast<void**>(&deckLinkIterator));
if (FAILED(result))
{
error = "Please install the Blackmagic DeckLink drivers to use the features of this application.";
return false;
}
CComPtr<IDeckLink> deckLink;
while (deckLinkIterator->Next(&deckLink) == S_OK)
{
int64_t duplexMode;
bool deviceSupportsInternalKeying = false;
bool deviceSupportsExternalKeying = false;
std::string modelName;
CComPtr<IDeckLinkProfileAttributes> deckLinkAttributes;
if (deckLink->QueryInterface(IID_IDeckLinkProfileAttributes, (void**)&deckLinkAttributes) != S_OK)
{
printf("Could not obtain the IDeckLinkProfileAttributes interface\n");
deckLink.Release();
continue;
}
result = deckLinkAttributes->GetInt(BMDDeckLinkDuplex, &duplexMode);
BOOL attributeFlag = FALSE;
if (deckLinkAttributes->GetFlag(BMDDeckLinkSupportsInternalKeying, &attributeFlag) == S_OK)
deviceSupportsInternalKeying = (attributeFlag != FALSE);
attributeFlag = FALSE;
if (deckLinkAttributes->GetFlag(BMDDeckLinkSupportsExternalKeying, &attributeFlag) == S_OK)
deviceSupportsExternalKeying = (attributeFlag != FALSE);
CComBSTR modelNameBstr;
if (deckLinkAttributes->GetString(BMDDeckLinkModelName, &modelNameBstr) == S_OK)
modelName = BstrToUtf8(modelNameBstr);
if (result != S_OK || duplexMode == bmdDuplexInactive)
{
deckLink.Release();
continue;
}
bool inputUsed = false;
if (!input && deckLink->QueryInterface(IID_IDeckLinkInput, (void**)&input) == S_OK)
inputUsed = true;
if (!output && (!inputUsed || (duplexMode == bmdDuplexFull)))
{
if (deckLink->QueryInterface(IID_IDeckLinkOutput, (void**)&output) != S_OK)
output.Release();
else
{
mState.outputModelName = modelName;
mState.supportsInternalKeying = deviceSupportsInternalKeying;
mState.supportsExternalKeying = deviceSupportsExternalKeying;
}
}
deckLink.Release();
if (output && input)
break;
}
if (!output)
{
error = "Expected an Output DeckLink device";
ReleaseResources();
return false;
}
CComPtr<IDeckLinkDisplayModeIterator> inputDisplayModeIterator;
if (input && input->GetDisplayModeIterator(&inputDisplayModeIterator) != S_OK)
{
error = "Cannot get input Display Mode Iterator.";
ReleaseResources();
return false;
}
if (input && !FindDeckLinkDisplayMode(inputDisplayModeIterator, videoModes.input.displayMode, &inputMode))
{
error = "Cannot get specified input BMDDisplayMode for configured mode: " + videoModes.input.displayName;
ReleaseResources();
return false;
}
inputDisplayModeIterator.Release();
CComPtr<IDeckLinkDisplayModeIterator> outputDisplayModeIterator;
if (output->GetDisplayModeIterator(&outputDisplayModeIterator) != S_OK)
{
error = "Cannot get output Display Mode Iterator.";
ReleaseResources();
return false;
}
if (!FindDeckLinkDisplayMode(outputDisplayModeIterator, videoModes.output.displayMode, &outputMode))
{
error = "Cannot get specified output BMDDisplayMode for configured mode: " + videoModes.output.displayName;
ReleaseResources();
return false;
}
mState.outputFrameSize = { static_cast<unsigned>(outputMode->GetWidth()), static_cast<unsigned>(outputMode->GetHeight()) };
mState.inputFrameSize = inputMode
? FrameSize{ static_cast<unsigned>(inputMode->GetWidth()), static_cast<unsigned>(inputMode->GetHeight()) }
: mState.outputFrameSize;
if (!input)
mState.inputDisplayModeName = "No input - black frame";
BMDTimeValue frameDuration = 0;
BMDTimeScale frameTimescale = 0;
outputMode->GetFrameRate(&frameDuration, &frameTimescale);
mScheduler.Configure(frameDuration, frameTimescale);
mState.frameBudgetMilliseconds = mScheduler.FrameBudgetMilliseconds();
mState.inputFrameRowBytes = mState.inputFrameSize.width * 2u;
mState.outputFrameRowBytes = mState.outputFrameSize.width * 4u;
mState.captureTextureWidth = mState.inputFrameSize.width / 2u;
mState.outputPackTextureWidth = mState.outputFrameSize.width;
mState.hasInputDevice = input != nullptr;
mState.hasInputSource = false;
return true;
}
bool DeckLinkSession::SelectPreferredFormats(const VideoFormatSelection& videoModes, bool outputAlphaRequired, std::string& error)
{
if (!output)
{
error = "Expected an Output DeckLink device";
return false;
}
mState.formatStatusMessage.clear();
const bool inputTenBitSupported = input != nullptr && InputSupportsFormat(input, videoModes.input.displayMode, bmdFormat10BitYUV);
mState.inputPixelFormat = input != nullptr ? ChoosePreferredVideoIOFormat(inputTenBitSupported) : VideoIOPixelFormat::Uyvy8;
if (input != nullptr && !inputTenBitSupported)
mState.formatStatusMessage += "DeckLink input does not report 10-bit YUV support for the configured mode; using 8-bit capture. ";
const bool outputTenBitSupported = OutputSupportsFormat(output, videoModes.output.displayMode, bmdFormat10BitYUV);
const bool outputTenBitYuvaSupported = OutputSupportsFormat(output, videoModes.output.displayMode, bmdFormat10BitYUVA);
mState.outputPixelFormat = outputAlphaRequired
? (outputTenBitYuvaSupported ? VideoIOPixelFormat::Yuva10 : VideoIOPixelFormat::Bgra8)
: (outputTenBitSupported ? VideoIOPixelFormat::V210 : VideoIOPixelFormat::Bgra8);
if (outputAlphaRequired && outputTenBitYuvaSupported)
mState.formatStatusMessage += "External keying requires alpha; using 10-bit YUVA output. ";
else if (outputAlphaRequired)
mState.formatStatusMessage += "External keying requires alpha, but DeckLink output does not report 10-bit YUVA support for the configured mode; using 8-bit BGRA output. ";
else if (!outputTenBitSupported)
mState.formatStatusMessage += "DeckLink output does not report 10-bit YUV support for the configured mode; using 8-bit BGRA output. ";
int deckLinkOutputRowBytes = 0;
if (output->RowBytesForPixelFormat(DeckLinkPixelFormatForVideoIO(mState.outputPixelFormat), mState.outputFrameSize.width, &deckLinkOutputRowBytes) != S_OK)
{
error = "DeckLink output setup failed while calculating output row bytes.";
return false;
}
mState.outputFrameRowBytes = static_cast<unsigned>(deckLinkOutputRowBytes);
mState.outputPackTextureWidth = OutputIsTenBit()
? PackedTextureWidthFromRowBytes(mState.outputFrameRowBytes)
: mState.outputFrameSize.width;
if (InputIsTenBit())
{
int deckLinkInputRowBytes = 0;
if (output->RowBytesForPixelFormat(bmdFormat10BitYUV, mState.inputFrameSize.width, &deckLinkInputRowBytes) == S_OK)
mState.inputFrameRowBytes = static_cast<unsigned>(deckLinkInputRowBytes);
else
mState.inputFrameRowBytes = MinimumV210RowBytes(mState.inputFrameSize.width);
}
else
{
mState.inputFrameRowBytes = mState.inputFrameSize.width * 2u;
}
mState.captureTextureWidth = InputIsTenBit()
? PackedTextureWidthFromRowBytes(mState.inputFrameRowBytes)
: mState.inputFrameSize.width / 2u;
std::ostringstream status;
status << "DeckLink formats: capture " << (input ? VideoIOPixelFormatName(mState.inputPixelFormat) : "none")
<< ", output " << VideoIOPixelFormatName(mState.outputPixelFormat) << ".";
if (!mState.formatStatusMessage.empty())
status << " " << mState.formatStatusMessage;
mState.formatStatusMessage = status.str();
return true;
}
bool DeckLinkSession::ConfigureInput(InputFrameCallback callback, const VideoFormat& inputVideoMode, std::string& error)
{
mInputFrameCallback = std::move(callback);
if (!input)
{
mState.hasInputSource = false;
mState.inputDisplayModeName = "No input - black frame";
return true;
}
const BMDPixelFormat deckLinkInputPixelFormat = DeckLinkPixelFormatForVideoIO(mState.inputPixelFormat);
bool inputEnabled = input->EnableVideoInput(inputVideoMode.displayMode, deckLinkInputPixelFormat, bmdVideoInputFlagDefault) == S_OK;
if (!inputEnabled && mState.inputPixelFormat == VideoIOPixelFormat::V210)
{
OutputDebugStringA("DeckLink 10-bit input could not be enabled; falling back to 8-bit capture.\n");
mState.inputPixelFormat = VideoIOPixelFormat::Uyvy8;
mState.inputFrameRowBytes = mState.inputFrameSize.width * 2u;
mState.captureTextureWidth = mState.inputFrameSize.width / 2u;
inputEnabled = input->EnableVideoInput(inputVideoMode.displayMode, bmdFormat8BitYUV, bmdVideoInputFlagDefault) == S_OK;
if (inputEnabled)
{
std::ostringstream status;
status << "DeckLink formats: capture " << VideoIOPixelFormatName(mState.inputPixelFormat)
<< ", output " << VideoIOPixelFormatName(mState.outputPixelFormat)
<< ". DeckLink 10-bit input enable failed; using 8-bit capture.";
mState.formatStatusMessage = status.str();
}
}
if (!inputEnabled)
{
OutputDebugStringA("DeckLink input could not be enabled; continuing in output-only black-frame mode.\n");
input.Release();
mState.hasInputDevice = false;
mState.hasInputSource = false;
mState.inputDisplayModeName = "No input - black frame";
return true;
}
captureDelegate.Attach(new (std::nothrow) CaptureDelegate(this));
if (captureDelegate == nullptr)
{
error = "DeckLink input setup failed while creating the capture callback.";
return false;
}
if (input->SetCallback(captureDelegate) != S_OK)
{
error = "DeckLink input setup failed while installing the capture callback.";
return false;
}
return true;
}
bool DeckLinkSession::ConfigureOutput(OutputFrameCallback callback, const VideoFormat& outputVideoMode, bool externalKeyingEnabled, std::string& error)
{
mOutputFrameCallback = std::move(callback);
if (output->EnableVideoOutput(outputVideoMode.displayMode, bmdVideoOutputFlagDefault) != S_OK)
{
error = "DeckLink output setup failed while enabling video output.";
return false;
}
if (output->QueryInterface(IID_IDeckLinkKeyer, (void**)&keyer) == S_OK && keyer != NULL)
mState.keyerInterfaceAvailable = true;
if (externalKeyingEnabled)
{
if (!mState.supportsExternalKeying)
{
mState.statusMessage = "External keying was requested, but the selected DeckLink output does not report external keying support.";
}
else if (!mState.keyerInterfaceAvailable)
{
mState.statusMessage = "External keying was requested, but the selected DeckLink output does not expose the IDeckLinkKeyer interface.";
}
else if (keyer->Enable(TRUE) != S_OK || keyer->SetLevel(255) != S_OK)
{
mState.statusMessage = "External keying was requested, but enabling the DeckLink keyer failed.";
}
else
{
mState.externalKeyingActive = true;
mState.statusMessage = "External keying is active on the selected DeckLink output.";
}
}
else if (mState.supportsExternalKeying)
{
mState.statusMessage = "Selected DeckLink output supports external keying. Set enableExternalKeying to true in runtime-host.json to request it.";
}
// Queue depth must be at least kPrerollFrameCount so Start() can preroll without reusing in-flight frames.
for (int i = 0; i < 10; i++)
{
CComPtr<IDeckLinkMutableVideoFrame> outputFrame;
const BMDPixelFormat deckLinkOutputPixelFormat = DeckLinkPixelFormatForVideoIO(mState.outputPixelFormat);
if (output->CreateVideoFrame(mState.outputFrameSize.width, mState.outputFrameSize.height, mState.outputFrameRowBytes, deckLinkOutputPixelFormat, bmdFrameFlagFlipVertical, &outputFrame) != S_OK)
{
error = "DeckLink output setup failed while creating an output video frame.";
return false;
}
outputVideoFrameQueue.push_back(outputFrame);
}
playoutDelegate.Attach(new (std::nothrow) PlayoutDelegate(this));
if (playoutDelegate == nullptr)
{
error = "DeckLink output setup failed while creating the playout callback.";
return false;
}
if (output->SetScheduledFrameCompletionCallback(playoutDelegate) != S_OK)
{
error = "DeckLink output setup failed while installing the scheduled-frame callback.";
return false;
}
if (!mState.formatStatusMessage.empty())
mState.statusMessage = mState.statusMessage.empty() ? mState.formatStatusMessage : mState.formatStatusMessage + " " + mState.statusMessage;
return true;
}
double DeckLinkSession::FrameBudgetMilliseconds() const
{
return mScheduler.FrameBudgetMilliseconds();
}
bool DeckLinkSession::BeginOutputFrame(VideoIOOutputFrame& frame)
{
if (outputVideoFrameQueue.empty())
return false;
CComPtr<IDeckLinkMutableVideoFrame> outputVideoFrame = outputVideoFrameQueue.front();
outputVideoFrameQueue.push_back(outputVideoFrame);
outputVideoFrameQueue.pop_front();
CComPtr<IDeckLinkVideoBuffer> outputVideoFrameBuffer;
if (outputVideoFrame->QueryInterface(IID_IDeckLinkVideoBuffer, (void**)&outputVideoFrameBuffer) != S_OK)
return false;
if (outputVideoFrameBuffer->StartAccess(bmdBufferAccessWrite) != S_OK)
return false;
void* pFrame = nullptr;
outputVideoFrameBuffer->GetBytes(&pFrame);
frame.bytes = pFrame;
frame.rowBytes = outputVideoFrame->GetRowBytes();
frame.width = mState.outputFrameSize.width;
frame.height = mState.outputFrameSize.height;
frame.pixelFormat = mState.outputPixelFormat;
frame.nativeFrame = outputVideoFrame.p;
frame.nativeBuffer = outputVideoFrameBuffer.Detach();
return true;
}
void DeckLinkSession::EndOutputFrame(VideoIOOutputFrame& frame)
{
IDeckLinkVideoBuffer* outputVideoFrameBuffer = static_cast<IDeckLinkVideoBuffer*>(frame.nativeBuffer);
if (outputVideoFrameBuffer != nullptr)
{
outputVideoFrameBuffer->EndAccess(bmdBufferAccessWrite);
outputVideoFrameBuffer->Release();
}
frame.nativeBuffer = nullptr;
frame.bytes = nullptr;
}
void DeckLinkSession::AccountForCompletionResult(VideoIOCompletionResult completionResult)
{
mScheduler.AccountForCompletionResult(completionResult);
}
bool DeckLinkSession::ScheduleOutputFrame(const VideoIOOutputFrame& frame)
{
IDeckLinkMutableVideoFrame* outputVideoFrame = static_cast<IDeckLinkMutableVideoFrame*>(frame.nativeFrame);
if (outputVideoFrame == nullptr)
return false;
// Only consume a schedule slot once we have a real frame to submit.
const VideoIOScheduleTime scheduleTime = mScheduler.NextScheduleTime();
return output->ScheduleVideoFrame(outputVideoFrame, scheduleTime.streamTime, scheduleTime.duration, scheduleTime.timeScale) == S_OK;
}
bool DeckLinkSession::Start()
{
mScheduler.Reset();
if (!output)
{
MessageBoxA(NULL, "Cannot start playout because no DeckLink output device is available.", "DeckLink start failed", MB_OK | MB_ICONERROR);
return false;
}
if (outputVideoFrameQueue.empty())
{
MessageBoxA(NULL, "Cannot start playout because the output frame queue is empty.", "DeckLink start failed", MB_OK | MB_ICONERROR);
return false;
}
for (unsigned i = 0; i < kPrerollFrameCount; i++)
{
CComPtr<IDeckLinkMutableVideoFrame> outputVideoFrame = outputVideoFrameQueue.front();
outputVideoFrameQueue.push_back(outputVideoFrame);
outputVideoFrameQueue.pop_front();
CComPtr<IDeckLinkVideoBuffer> outputVideoFrameBuffer;
if (outputVideoFrame->QueryInterface(IID_IDeckLinkVideoBuffer, (void**)&outputVideoFrameBuffer) != S_OK)
{
MessageBoxA(NULL, "Could not query the preroll output frame buffer.", "DeckLink start failed", MB_OK | MB_ICONERROR);
return false;
}
if (outputVideoFrameBuffer->StartAccess(bmdBufferAccessWrite) != S_OK)
{
MessageBoxA(NULL, "Could not write to the preroll output frame buffer.", "DeckLink start failed", MB_OK | MB_ICONERROR);
return false;
}
void* pFrame = nullptr;
outputVideoFrameBuffer->GetBytes(&pFrame);
if (pFrame != nullptr)
memset(pFrame, 0, static_cast<size_t>(outputVideoFrame->GetRowBytes()) * mState.outputFrameSize.height);
outputVideoFrameBuffer->EndAccess(bmdBufferAccessWrite);
const VideoIOScheduleTime scheduleTime = mScheduler.NextScheduleTime();
if (output->ScheduleVideoFrame(outputVideoFrame, scheduleTime.streamTime, scheduleTime.duration, scheduleTime.timeScale) != S_OK)
{
MessageBoxA(NULL, "Could not schedule a preroll output frame.", "DeckLink start failed", MB_OK | MB_ICONERROR);
return false;
}
}
if (input)
{
if (input->StartStreams() != S_OK)
{
MessageBoxA(NULL, "Could not start the DeckLink input stream.", "DeckLink start failed", MB_OK | MB_ICONERROR);
return false;
}
}
if (output->StartScheduledPlayback(0, mScheduler.TimeScale(), 1.0) != S_OK)
{
MessageBoxA(NULL, "Could not start DeckLink scheduled playback.", "DeckLink start failed", MB_OK | MB_ICONERROR);
return false;
}
return true;
}
bool DeckLinkSession::Stop()
{
	if (keyer != nullptr)
	{
		keyer->Disable();
		mState.externalKeyingActive = false;
	}
	if (input)
	{
		input->StopStreams();
		input->DisableVideoInput();
	}
	if (output)
	{
		output->StopScheduledPlayback(0, NULL, 0);
		output->DisableVideoOutput();
	}
	return true;
}
void DeckLinkSession::HandleVideoInputFrame(IDeckLinkVideoInputFrame* inputFrame, bool hasNoInputSource)
{
	mState.hasInputSource = !hasNoInputSource;
	if (hasNoInputSource || mInputFrameCallback == nullptr)
	{
		VideoIOFrame frame;
		frame.width = mState.inputFrameSize.width;
		frame.height = mState.inputFrameSize.height;
		frame.pixelFormat = mState.inputPixelFormat;
		frame.hasNoInputSource = hasNoInputSource;
		if (mInputFrameCallback)
			mInputFrameCallback(frame);
		return;
	}
	CComPtr<IDeckLinkVideoBuffer> inputFrameBuffer;
	void* videoPixels = nullptr;
	if (inputFrame->QueryInterface(IID_IDeckLinkVideoBuffer, (void**)&inputFrameBuffer) != S_OK)
		return;
	if (inputFrameBuffer->StartAccess(bmdBufferAccessRead) != S_OK)
		return;
	inputFrameBuffer->GetBytes(&videoPixels);
	VideoIOFrame frame;
	frame.bytes = videoPixels;
	frame.rowBytes = inputFrame->GetRowBytes();
	frame.width = static_cast<unsigned>(inputFrame->GetWidth());
	frame.height = static_cast<unsigned>(inputFrame->GetHeight());
	frame.pixelFormat = mState.inputPixelFormat;
	frame.hasNoInputSource = hasNoInputSource;
	mInputFrameCallback(frame);
	inputFrameBuffer->EndAccess(bmdBufferAccessRead);
}
void DeckLinkSession::HandlePlayoutFrameCompleted(IDeckLinkVideoFrame*, BMDOutputFrameCompletionResult completionResult)
{
	if (!mOutputFrameCallback)
		return;
	VideoIOCompletion completion;
	switch (completionResult)
	{
	case bmdOutputFrameDisplayedLate:
		completion.result = VideoIOCompletionResult::DisplayedLate;
		break;
	case bmdOutputFrameDropped:
		completion.result = VideoIOCompletionResult::Dropped;
		break;
	case bmdOutputFrameFlushed:
		completion.result = VideoIOCompletionResult::Flushed;
		break;
	case bmdOutputFrameCompleted:
		completion.result = VideoIOCompletionResult::Completed;
		break;
	default:
		completion.result = VideoIOCompletionResult::Unknown;
		break;
	}
	mOutputFrameCallback(completion);
}


@@ -0,0 +1,79 @@
#pragma once
#include "DeckLinkAPI_h.h"
#include "DeckLinkDisplayMode.h"
#include "DeckLinkFrameTransfer.h"
#include "DeckLinkVideoIOFormat.h"
#include "VideoIOFormat.h"
#include "VideoIOTypes.h"
#include "VideoPlayoutScheduler.h"
#include <atlbase.h>
#include <deque>
#include <string>
class OpenGLComposite;
class DeckLinkSession : public VideoIODevice
{
public:
	DeckLinkSession() = default;
	~DeckLinkSession();
	void ReleaseResources() override;
	bool DiscoverDevicesAndModes(const VideoFormatSelection& videoModes, std::string& error) override;
	bool SelectPreferredFormats(const VideoFormatSelection& videoModes, bool outputAlphaRequired, std::string& error) override;
	bool ConfigureInput(InputFrameCallback callback, const VideoFormat& inputVideoMode, std::string& error) override;
	bool ConfigureOutput(OutputFrameCallback callback, const VideoFormat& outputVideoMode, bool externalKeyingEnabled, std::string& error) override;
	bool Start() override;
	bool Stop() override;
	bool HasInputDevice() const { return mState.hasInputDevice; }
	bool HasInputSource() const { return mState.hasInputSource; }
	void SetInputSourceMissing(bool missing) { mState.hasInputSource = !missing; }
	bool InputOutputDimensionsDiffer() const { return mState.inputFrameSize != mState.outputFrameSize; }
	const FrameSize& InputFrameSize() const { return mState.inputFrameSize; }
	const FrameSize& OutputFrameSize() const { return mState.outputFrameSize; }
	unsigned InputFrameWidth() const { return mState.inputFrameSize.width; }
	unsigned InputFrameHeight() const { return mState.inputFrameSize.height; }
	unsigned OutputFrameWidth() const { return mState.outputFrameSize.width; }
	unsigned OutputFrameHeight() const { return mState.outputFrameSize.height; }
	VideoIOPixelFormat InputPixelFormat() const { return mState.inputPixelFormat; }
	VideoIOPixelFormat OutputPixelFormat() const { return mState.outputPixelFormat; }
	bool InputIsTenBit() const { return VideoIOPixelFormatIsTenBit(mState.inputPixelFormat); }
	bool OutputIsTenBit() const { return VideoIOPixelFormatIsTenBit(mState.outputPixelFormat); }
	unsigned InputFrameRowBytes() const { return mState.inputFrameRowBytes; }
	unsigned OutputFrameRowBytes() const { return mState.outputFrameRowBytes; }
	unsigned CaptureTextureWidth() const { return mState.captureTextureWidth; }
	unsigned OutputPackTextureWidth() const { return mState.outputPackTextureWidth; }
	const std::string& FormatStatusMessage() const { return mState.formatStatusMessage; }
	const std::string& InputDisplayModeName() const { return mState.inputDisplayModeName; }
	const std::string& OutputModelName() const { return mState.outputModelName; }
	bool SupportsInternalKeying() const { return mState.supportsInternalKeying; }
	bool SupportsExternalKeying() const { return mState.supportsExternalKeying; }
	bool KeyerInterfaceAvailable() const { return mState.keyerInterfaceAvailable; }
	bool ExternalKeyingActive() const { return mState.externalKeyingActive; }
	const std::string& StatusMessage() const { return mState.statusMessage; }
	void SetStatusMessage(const std::string& message) { mState.statusMessage = message; }
	const VideoIOState& State() const override { return mState; }
	VideoIOState& MutableState() override { return mState; }
	double FrameBudgetMilliseconds() const;
	void AccountForCompletionResult(VideoIOCompletionResult completionResult) override;
	bool BeginOutputFrame(VideoIOOutputFrame& frame) override;
	void EndOutputFrame(VideoIOOutputFrame& frame) override;
	bool ScheduleOutputFrame(const VideoIOOutputFrame& frame) override;
	void HandleVideoInputFrame(IDeckLinkVideoInputFrame* inputFrame, bool hasNoInputSource);
	void HandlePlayoutFrameCompleted(IDeckLinkVideoFrame* completedFrame, BMDOutputFrameCompletionResult completionResult);
private:
	CComPtr<CaptureDelegate> captureDelegate;
	CComPtr<PlayoutDelegate> playoutDelegate;
	CComPtr<IDeckLinkInput> input;
	CComPtr<IDeckLinkOutput> output;
	CComPtr<IDeckLinkKeyer> keyer;
	std::deque<CComPtr<IDeckLinkMutableVideoFrame>> outputVideoFrameQueue;
	VideoIOState mState;
	VideoPlayoutScheduler mScheduler;
	InputFrameCallback mInputFrameCallback;
	OutputFrameCallback mOutputFrameCallback;
};


@@ -0,0 +1,28 @@
#include "DeckLinkVideoIOFormat.h"
BMDPixelFormat DeckLinkPixelFormatForVideoIO(VideoIOPixelFormat format)
{
	switch (format)
	{
	case VideoIOPixelFormat::V210:
		return bmdFormat10BitYUV;
	case VideoIOPixelFormat::Yuva10:
		return bmdFormat10BitYUVA;
	case VideoIOPixelFormat::Bgra8:
		return bmdFormat8BitBGRA;
	case VideoIOPixelFormat::Uyvy8:
	default:
		return bmdFormat8BitYUV;
	}
}
VideoIOPixelFormat VideoIOPixelFormatFromDeckLink(BMDPixelFormat format)
{
	if (format == bmdFormat10BitYUV)
		return VideoIOPixelFormat::V210;
	if (format == bmdFormat10BitYUVA)
		return VideoIOPixelFormat::Yuva10;
	if (format == bmdFormat8BitBGRA)
		return VideoIOPixelFormat::Bgra8;
	return VideoIOPixelFormat::Uyvy8;
}


@@ -0,0 +1,7 @@
#pragma once
#include "DeckLinkAPI_h.h"
#include "VideoIOFormat.h"
BMDPixelFormat DeckLinkPixelFormatForVideoIO(VideoIOPixelFormat format);
VideoIOPixelFormat VideoIOPixelFormatFromDeckLink(BMDPixelFormat format);


@@ -1,12 +1,15 @@
{
  "shaderLibrary": "shaders",
  "serverPort": 8080,
  "oscBindAddress": "0.0.0.0",
  "oscPort": 9000,
  "oscSmoothing": 0.18,
  "inputVideoFormat": "1080p",
  "inputFrameRate": "59.94",
  "outputVideoFormat": "1080p",
  "outputFrameRate": "59.94",
  "autoReload": true,
  "maxTemporalHistoryFrames": 12,
  "previewFps": 30,
  "enableExternalKeying": true
}


@@ -0,0 +1,638 @@
# Architecture Resilience Review
This note summarizes the main architectural improvements that would make the app more resilient during live use, especially around timing isolation, failure isolation, and recoverability.
Phase checklist:
- [ ] Define subsystem boundaries and target architecture
- [ ] Introduce an internal event model
- [ ] Split `RuntimeHost`
- [ ] Make the render thread the sole GL owner
- [ ] Refactor live state layering into an explicit composition model
- [ ] Move persistence onto a background snapshot writer
- [ ] Make DeckLink/backend lifecycle explicit with a state machine
- [ ] Add structured health, telemetry, and operational reporting
## Timing Review
The recent OSC work removed several control-path stalls, but the app still has a few deeper timing characteristics that matter for live resilience:
- output playout is still effectively render-on-demand from the DeckLink completion callback
- output buffering and preroll are now larger, but the buffering model is still static and only loosely related to actual render cost
- GPU readback is partly asynchronous, but the fallback path still returns to synchronous readback on any miss
- preview presentation is still tied to the playout render path
- background service timing still relies on coarse polling sleeps
Those points are important because they affect not just average performance, but how the app behaves under brief spikes, device jitter, or load bursts.
## Key Findings
### 1. `RuntimeHost` is carrying too many responsibilities
`RuntimeHost` currently acts as:
- config store
- persistent state store
- live parameter/state authority
- shader package registry owner
- status/telemetry sink
- control mutation entrypoint
That makes it a single contention and failure domain. It is also why OSC and render timing issues repeatedly surfaced around shared state access.
Relevant code:
- [RuntimeHost.h](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/runtime/RuntimeHost.h:15)
Recommended direction:
- split persisted config/state from live render-facing state
- separate status/telemetry updates from control mutation paths
- make render consume snapshots rather than sharing a large mutable authority object
### 2. OpenGL ownership is still centralized behind one shared lock
Even after recent timing improvements, preview, input upload, and playout rendering still rely on one shared GL context protected by one `CRITICAL_SECTION`.
Relevant code:
- [OpenGLComposite.h](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/OpenGLComposite.h:93)
- [OpenGLComposite.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/OpenGLComposite.cpp:253)
- [OpenGLVideoIOBridge.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLVideoIOBridge.cpp:70)
This is still a central choke point and limits timing isolation.
Recommended direction:
- use one dedicated render thread as the sole GL owner
- have input/output/control threads queue work instead of performing GL work directly
- remove ad hoc GL use from callback threads
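The queueing part of that direction can be sketched with a minimal single-owner work queue. This is a hypothetical illustration, not code from the app: `RenderWorkQueue` and its method names are invented here. Callback threads post closures; only the render thread, as the sole GL owner, ever runs them.

```cpp
#include <deque>
#include <functional>
#include <mutex>

// Hypothetical sketch: callback threads enqueue GL work as closures;
// only the render thread executes them, so all GL calls stay on one thread.
class RenderWorkQueue
{
public:
	void Post(std::function<void()> task)
	{
		std::lock_guard<std::mutex> lock(mMutex);
		mTasks.push_back(std::move(task));
	}

	// Called only from the render thread, once per frame, before rendering.
	// Swaps the queue out under the lock, then runs tasks lock-free.
	void DrainOnRenderThread()
	{
		std::deque<std::function<void()>> tasks;
		{
			std::lock_guard<std::mutex> lock(mMutex);
			tasks.swap(mTasks);
		}
		for (auto& task : tasks)
			task();
	}

private:
	std::mutex mMutex;
	std::deque<std::function<void()>> mTasks;
};
```

The swap-then-run pattern keeps the lock hold time tiny, so input and output callbacks never block behind GL work.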
### 3. Control flow is spread across polling and shared-memory patterns
`RuntimeServices` currently mixes:
- file polling
- deferred OSC commit handling
- control service orchestration
OSC ingest, overlay application, and host sync are distributed across several components.
Relevant code:
- [RuntimeServices.h](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/control/RuntimeServices.h:26)
- [RuntimeServices.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/control/RuntimeServices.cpp:178)
Recommended direction:
- introduce a small internal event pipeline or message bus
- use typed events for OSC, reloads, persistence requests, and status changes
- make timing ownership explicit per subsystem
Example event types:
- `OscParameterTargeted`
- `RenderOverlaySettled`
- `PersistStateRequested`
- `ShaderReloadRequested`
- `DeckLinkStatusChanged`
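One way to make those events typed is a `std::variant` over small event structs. This is a sketch only; the payload fields are invented here to illustrate the shape, not taken from the codebase.

```cpp
#include <string>
#include <variant>

// Hypothetical typed events mirroring the example names above.
// Payload fields are illustrative assumptions.
struct OscParameterTargeted { std::string address; float value; };
struct RenderOverlaySettled { std::string parameter; };
struct PersistStateRequested {};
struct ShaderReloadRequested { std::string packageName; };
struct DeckLinkStatusChanged { bool inputPresent; };

using RuntimeEvent = std::variant<OscParameterTargeted,
                                  RenderOverlaySettled,
                                  PersistStateRequested,
                                  ShaderReloadRequested,
                                  DeckLinkStatusChanged>;

// Subscribers can dispatch on the concrete type without casts.
inline const char* EventName(const RuntimeEvent& event)
{
	if (std::holds_alternative<OscParameterTargeted>(event)) return "OscParameterTargeted";
	if (std::holds_alternative<RenderOverlaySettled>(event)) return "RenderOverlaySettled";
	if (std::holds_alternative<PersistStateRequested>(event)) return "PersistStateRequested";
	if (std::holds_alternative<ShaderReloadRequested>(event)) return "ShaderReloadRequested";
	return "DeckLinkStatusChanged";
}
```

A variant keeps the event set closed and compiler-checked, which is exactly the property that makes timing ownership auditable per subsystem.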
### 4. Error handling is still heavily UI-coupled
Failures are often surfaced via `MessageBoxA`, while background services mainly log with `OutputDebugStringA`.
Relevant code:
- [OpenGLComposite.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/OpenGLComposite.cpp:314)
- [DeckLinkSession.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/videoio/decklink/DeckLinkSession.cpp:478)
- [RuntimeServices.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/control/RuntimeServices.cpp:205)
This is a poor fit for a live system: modal dialogs block the operator mid-show, and debug-string logging is effectively invisible in production.
Recommended direction:
- introduce structured in-app error reporting
- define severity levels and counters
- prefer degraded runtime states over modal failure handling where possible
- add a rolling log file for operational troubleshooting
### 5. Live OSC overlay and persisted state are still separate concepts without a formal model
The current design works better now, but it still relies on hand-managed reconciliation between:
- persisted parameter state in `RuntimeHost`
- transient OSC overlay state in `OpenGLComposite`
Relevant code:
- [OpenGLComposite.h](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/OpenGLComposite.h:66)
Recommended direction:
Formalize three layers of state:
- base persisted state
- operator/UI committed state
- transient live automation overlay
Then render can always resolve:
- `final = base + committed + transient`
That avoids special-case sync behavior becoming scattered across the code.
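The composition rule can be sketched as lookup through the layers in priority order. This assumes an override interpretation of `base + committed + transient` (topmost defined layer wins); an additive interpretation has the same structure. `ParameterLayers` is a name invented for this sketch.

```cpp
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch: a parameter resolves to the topmost layer that
// defines it - transient overlay, then operator-committed, then base.
struct ParameterLayers
{
	std::map<std::string, float> base;       // persisted state
	std::map<std::string, float> committed;  // operator/UI committed state
	std::map<std::string, float> transient;  // live automation overlay

	std::optional<float> Resolve(const std::string& name) const
	{
		if (auto it = transient.find(name); it != transient.end()) return it->second;
		if (auto it = committed.find(name); it != committed.end()) return it->second;
		if (auto it = base.find(name); it != base.end()) return it->second;
		return std::nullopt; // undefined in every layer
	}
};
```

With one resolution function, "OSC released control" becomes nothing more than erasing a transient entry, with no bespoke sync code.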
### 6. DeckLink lifecycle could be modeled more explicitly
`DeckLinkSession` has a number of imperative calls, but startup, preroll, running, degraded, and stopped are not represented as an explicit state machine.
Relevant code:
- [DeckLinkSession.h](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/videoio/decklink/DeckLinkSession.h:17)
Recommended direction:
- introduce explicit session states
- define allowed transitions
- centralize recovery behavior
- make shutdown ordering and degraded-mode behavior more predictable
Timing-specific additions:
- separate "device callback received" from "render the next output frame" so output cadence is not driven directly by the completion callback thread
- make playout headroom configurable and adaptive instead of using a fixed compile-time preroll count
- track an explicit backend health state such as `running-steady`, `catching-up`, `late`, and `dropping`
Relevant timing code:
- [OpenGLVideoIOBridge.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLVideoIOBridge.cpp:86)
- [DeckLinkSession.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/videoio/decklink/DeckLinkSession.cpp:420)
- [DeckLinkSession.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/videoio/decklink/DeckLinkSession.cpp:487)
- [VideoPlayoutScheduler.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/videoio/VideoPlayoutScheduler.cpp:26)
Why this matters:
- `PlayoutFrameCompleted()` currently begins an output frame, takes the shared GL path, renders, reads back, and schedules the next frame in one callback-driven flow.
- `VideoPlayoutScheduler::AccountForCompletionResult()` currently reacts to both late and dropped frames by blindly advancing the schedule index by `2`, which is simple but not especially robust.
- `kPrerollFrameCount` is now `12`, but `DeckLinkSession::ConfigureOutput()` still creates a fixed pool of `10` mutable output frames. That mismatch suggests the buffering model is not being sized from one coherent source of truth.
Recommended direction:
- move playout to a producer/consumer model where a render worker fills output buffers ahead of the DeckLink callback
- define buffer-pool sizing from one policy object, for example: preroll depth, minimum spare buffers, and allowed catch-up depth
- replace fixed "skip two frames" recovery with measured lag accounting based on actual scheduled-versus-completed position
- expose playout latency as a runtime setting or policy, rather than burying it in a constant
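A policy object of the kind suggested above might look like the following sketch. The field names and the `12`/`2`/`2` defaults are illustrative assumptions; the point is that the pool size is derived, so the preroll count and pool size can no longer drift apart the way `kPrerollFrameCount` (12) and the fixed pool of 10 frames have.

```cpp
// Hypothetical sketch: one policy object sizes the output frame pool.
struct PlayoutBufferPolicy
{
	unsigned prerollDepth = 12; // frames scheduled before playback starts
	unsigned minimumSpare = 2;  // buffers kept free for the renderer
	unsigned catchUpDepth = 2;  // extra frames allowed when recovering lag

	// Pool size is derived, never hand-maintained alongside the preroll count.
	unsigned PoolSize() const
	{
		return prerollDepth + minimumSpare + catchUpDepth;
	}
};
```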
### 6a. The current playout timing model is still callback-coupled
The app now has more headroom, but the next output frame is still produced directly in the scheduled-frame completion callback path.
Relevant code:
- [OpenGLVideoIOBridge.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLVideoIOBridge.cpp:86)
- [DeckLinkFrameTransfer.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/videoio/decklink/DeckLinkFrameTransfer.cpp:53)
That means the completion callback is currently responsible for:
- frame pacing accounting
- acquiring the next output buffer
- taking the GL critical section
- rendering the composite
- performing output readback
- scheduling the next frame
This works when the app is comfortably within budget, but it makes deadline misses much harder to absorb gracefully.
Recommended direction:
- make the DeckLink callback a lightweight notifier
- have a dedicated playout worker or render worker keep an ahead-of-time queue of ready output frames
- treat callback time as control-plane time, not render time
### 6b. A producer/consumer playout model would be a better long-term fit
The stronger architecture for this app is:
- a render scheduler or dedicated render thread runs at the configured video cadence
- rendering produces completed output frames ahead of need
- those frames are placed into a bounded queue or ring buffer
- the DeckLink side consumes already-prepared frames when callbacks indicate they are needed
That is a better fit than callback-driven rendering because it separates:
- render timing
- GL ownership
- output-device timing
- latency policy
In that model:
- render is the producer
- DeckLink is the timing consumer
- the queue between them becomes the main place to manage latency versus resilience
Why this is preferable:
- brief callback jitter is less likely to become a visible dropped frame
- render spikes can be absorbed by queue headroom instead of immediately missing output deadlines
- latency becomes an explicit policy choice rather than an incidental side effect of callback timing
- queue depth, underruns, stale-frame reuse, and catch-up behavior become measurable and tunable
Recommended direction:
- move toward a bounded producer/consumer playout queue
- make queue depth and target headroom runtime policy, not compile-time constants
- define explicit underrun behavior, for example:
- reuse newest completed frame
- reuse last scheduled frame
- output black or degraded frame
- keep DeckLink callbacks limited to dequeue/schedule/accounting work wherever possible
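The bounded queue at the center of that model can be sketched as follows. This is illustrative only: `FrameId` stands in for a real output frame handle, and the underrun policy shown is the first option above (reuse the newest completed frame).

```cpp
#include <deque>
#include <mutex>
#include <optional>

using FrameId = int; // stand-in for a real completed-frame handle

// Hypothetical bounded playout queue: the render worker produces completed
// frames ahead of need; the DeckLink callback consumes them. On underrun
// the consumer reuses the newest completed frame instead of missing output.
class PlayoutQueue
{
public:
	explicit PlayoutQueue(size_t capacity) : mCapacity(capacity) {}

	// Producer side: returns false when full, giving explicit back-pressure.
	bool Push(FrameId frame)
	{
		std::lock_guard<std::mutex> lock(mMutex);
		if (mFrames.size() >= mCapacity)
			return false;
		mFrames.push_back(frame);
		mNewest = frame;
		return true;
	}

	// Consumer side: oldest ready frame, or the newest completed frame
	// on underrun (nullopt only if nothing has ever been produced).
	std::optional<FrameId> PopOrReuse()
	{
		std::lock_guard<std::mutex> lock(mMutex);
		if (!mFrames.empty())
		{
			FrameId frame = mFrames.front();
			mFrames.pop_front();
			return frame;
		}
		return mNewest;
	}

private:
	size_t mCapacity;
	std::deque<FrameId> mFrames;
	std::optional<FrameId> mNewest;
	std::mutex mMutex;
};
```

Capacity is exactly the latency-versus-resilience knob the text describes: a deeper queue absorbs bigger render spikes at the cost of more output latency.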
### 7. Persistence should be more asynchronous and debounced
`SavePersistentState()` is still called directly from many update paths.
Relevant code:
- [RuntimeHost.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/runtime/RuntimeHost.cpp:1841)
Recent OSC work already reduced this problem for live automation, but the broader architecture would still benefit from:
- a debounced persistence queue
- atomic write-behind snapshots
- clear separation between state mutation and disk flush
This improves both resilience and timing safety.
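The debounce part of that queue is small enough to sketch directly. This is a hypothetical illustration: mutations only mark state dirty, and a background writer flushes once no new mutation has arrived for a quiet period, coalescing a burst of updates into one disk write. Time is passed in explicitly so the logic is testable.

```cpp
#include <chrono>

// Hypothetical sketch of debounced persistence decision logic.
class DebouncedPersistence
{
public:
	using Clock = std::chrono::steady_clock;

	explicit DebouncedPersistence(std::chrono::milliseconds quietPeriod)
		: mQuietPeriod(quietPeriod) {}

	// Called on every state mutation; cheap, never touches disk.
	void MarkDirty(Clock::time_point now)
	{
		mDirty = true;
		mLastMutation = now;
	}

	// Polled by the background writer; true once the burst has settled.
	bool ShouldFlush(Clock::time_point now) const
	{
		return mDirty && (now - mLastMutation) >= mQuietPeriod;
	}

	void FlushCompleted() { mDirty = false; }

private:
	std::chrono::milliseconds mQuietPeriod;
	Clock::time_point mLastMutation{};
	bool mDirty = false;
};
```

Each new mutation extends the window, so a stream of OSC updates produces one write after the stream stops rather than one write per message.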
### 8. Telemetry is useful, but still too coarse
The app already records render timing and playout pacing, which is a good foundation.
Relevant code:
- [OpenGLRenderPipeline.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLRenderPipeline.cpp:24)
- [OpenGLVideoIOBridge.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLVideoIOBridge.cpp:24)
Recommended direction:
Add lightweight tracing for:
- input callback latency
- input upload skip count
- GL lock wait time
- render queue depth
- render time
- pass build/compile latency
- readback time
- output scheduling lag
- output queue depth
- preroll depth versus spare-buffer depth
- preview present cost and skipped-preview count
- control queue depth
- `RuntimeHost` lock contention
That would make future tuning and failure diagnosis much easier.
Timing-specific observations from the current code:
- render time is captured as one total number in [OpenGLRenderPipeline.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLRenderPipeline.cpp:24), but not split into draw, pack, readback wait, readback copy, or preview present
- frame pacing stats are recorded in [OpenGLVideoIOBridge.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLVideoIOBridge.cpp:17), but there is no explicit visibility into how much queued playout headroom remains
- input uploads are intentionally skipped when the GL bridge is busy in [OpenGLVideoIOBridge.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLVideoIOBridge.cpp:60), but the app does not currently surface how often that is happening
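Counters of the kind listed above can be kept cheap with relaxed atomics, so callback and render threads can bump them without locks while a reporting thread reads them periodically. The struct and field names below are invented for this sketch.

```cpp
#include <atomic>
#include <cstdint>

// Hypothetical lightweight telemetry counters; relaxed atomics keep the
// hot-path cost to a single uncontended increment.
struct PipelineTelemetry
{
	std::atomic<uint64_t> inputUploadSkips{0};
	std::atomic<uint64_t> lateOutputFrames{0};
	std::atomic<uint64_t> droppedOutputFrames{0};
	std::atomic<uint64_t> syncReadbackFallbacks{0};
	std::atomic<uint64_t> previewSkips{0};

	void RecordInputUploadSkip()       { inputUploadSkips.fetch_add(1, std::memory_order_relaxed); }
	void RecordSyncReadbackFallback()  { syncReadbackFallbacks.fetch_add(1, std::memory_order_relaxed); }
};
```

Even this minimal form would make the currently invisible upload-skip and synchronous-fallback rates directly observable.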
### 8a. Preview and playout are still too close together
The desktop preview is rate-limited, but still presented from inside the render pipeline path.
Relevant code:
- [OpenGLRenderPipeline.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLRenderPipeline.cpp:54)
- [OpenGLComposite.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/OpenGLComposite.cpp:235)
This means preview presentation can still consume time on the same path that is trying to meet output deadlines.
Recommended direction:
- treat preview as best-effort and entirely subordinate to playout
- move preview present to a separate presentation schedule fed from the latest completed render
- record preview skips and preview present cost independently from playout timing
### 8b. Readback is improved, but still not fully deadline-safe
The async readback path is a good step, but the miss path still falls back to synchronous `glReadPixels()` and then flushes the async pipeline.
Relevant code:
- [OpenGLRenderPipeline.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLRenderPipeline.cpp:150)
- [OpenGLRenderPipeline.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/gl/pipeline/OpenGLRenderPipeline.cpp:228)
That means a single late GPU fence can push the app back onto the most timing-sensitive path exactly when it is already under pressure.
Recommended direction:
- increase readback instrumentation before changing policy again
- consider deeper readback buffering or a true stale-frame reuse policy instead of immediate synchronous fallback
- separate "freshest possible frame" policy from "never miss output deadline" policy and make that tradeoff explicit
### 8c. Background control and file-watch timing are still coarse
`RuntimeServices::PollLoop()` currently uses a `25 x Sleep(10)` loop, which gives it a coarse `~250 ms` cadence for file-watch polling and deferred OSC commit work.
Relevant code:
- [RuntimeServices.cpp](/c:/Users/Aiden/Documents/GitHub/video-shader-toys/apps/LoopThroughWithOpenGLCompositing/control/RuntimeServices.cpp:245)
That is acceptable for non-critical background work, but it is still too blunt to be the long-term timing model for coordination-heavy runtime services.
Recommended direction:
- replace coarse sleep polling with waitable events or condition-variable driven wakeups where practical
- isolate truly background work from latency-sensitive control reconciliation
- add separate metrics for queue age, not just queue depth
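The waitable-wakeup replacement for the `25 x Sleep(10)` loop can be sketched with a condition variable. This is a hypothetical standalone primitive, not code from `RuntimeServices`: the service sleeps for its full poll interval, but any control event wakes it immediately, keeping queue age low.

```cpp
#include <chrono>
#include <condition_variable>
#include <mutex>

// Hypothetical sketch: coarse Sleep() polling replaced by a signalable wait.
class ServiceWakeup
{
public:
	void Signal()
	{
		{
			std::lock_guard<std::mutex> lock(mMutex);
			mSignalled = true;
		}
		mCond.notify_one();
	}

	// Returns true when woken by Signal(), false on timeout.
	// A pending signal is consumed, so each Signal() wakes one wait.
	bool WaitFor(std::chrono::milliseconds interval)
	{
		std::unique_lock<std::mutex> lock(mMutex);
		bool signalled = mCond.wait_for(lock, interval, [this] { return mSignalled; });
		mSignalled = false;
		return signalled;
	}

private:
	std::mutex mMutex;
	std::condition_variable mCond;
	bool mSignalled = false;
};
```

The predicate form of `wait_for` handles spurious wakeups, and a signal raised before the wait is not lost, which `Sleep`-loop polling cannot guarantee.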
## Phased Roadmap
This roadmap is ordered by architectural dependency rather than by “quick wins.” The goal is to move the app toward clearer ownership boundaries and safer live behavior without doing later work on top of foundations that are likely to change again.
### Phase 1. Define subsystem boundaries and target architecture
Before changing major internals, formalize the target responsibilities for each major part of the app.
Target split:
- `RuntimeStore`
- persisted config
- persisted layer stack
- preset persistence
- `RuntimeSnapshot`
- render-facing immutable or near-immutable snapshots
- parameter values prepared for the render path
- `ControlServices`
- OSC ingress
- web control ingress
- reload/file-watch requests
- commit/persist requests
- `RenderEngine`
- sole owner of live GL rendering
- sole consumer of render snapshots plus transient overlays
- `VideoBackend`
- DeckLink input/output lifecycle
- pacing and scheduling
- `Health/Telemetry`
- logging
- counters
- timing traces
- degraded-state reporting
Why this phase comes first:
- it prevents later refactors from reintroducing responsibility overlap
- it gives names to the seams the later phases will build around
- it reduces the risk of replacing one monolith with several poorly-defined ones
Suggested deliverables:
- a short architecture diagram
- a responsibility table for each subsystem
- a list of allowed dependencies between subsystems
### Phase 2. Introduce an internal event model
Once subsystem boundaries are defined, introduce a typed event pipeline between them. This should happen before large state splits so the app has a stable coordination model.
Example event families:
- control events
- `OscParameterTargeted`
- `UiParameterCommitted`
- `TriggerFired`
- runtime events
- `ShaderReloadRequested`
- `PackagesRescanned`
- `PersistStateRequested`
- render events
- `OverlayApplied`
- `OverlaySettled`
- `SnapshotPublished`
- backend events
- `InputSignalChanged`
- `OutputLateFrameDetected`
- `OutputDroppedFrameDetected`
- health events
- `SubsystemWarningRaised`
- `SubsystemRecovered`
Why this phase comes second:
- it provides a migration path away from direct cross-calls
- it makes ownership explicit before data structures are split apart
- it lets you move one subsystem at a time without losing coordination
Suggested outcome:
- the app stops relying on “shared object plus mutex plus polling” as the default coordination pattern
### Phase 3. Split `RuntimeHost` into persistent state, render snapshot state, and service-facing coordination
After the event model exists, break apart `RuntimeHost`.
Recommended split:
- `RuntimeStore`
- owns config and saved layer data
- handles serialization/deserialization
- does not sit on the live render path
- `RuntimeCoordinator`
- resolves control actions
- validates mutations
- publishes new snapshots
- bridges events between services and render
- `RuntimeSnapshotProvider`
- publishes immutable render snapshots
- avoids large shared mutable structures on the render path
Why this phase comes before render-thread isolation:
- render isolation is easier when the render thread consumes clean snapshots instead of a large mutable host object
- otherwise the GL refactor still drags along too much shared state complexity
Primary design rule:
- render should read snapshots
- persistence should write stored state
- services should request mutations through the coordinator
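The snapshot rule can be sketched with `shared_ptr` handoff: the coordinator publishes a new immutable snapshot, and the render thread grabs a reference each frame, then works without holding any lock. The type names here are taken from the split proposed above but the implementation is an assumption.

```cpp
#include <memory>
#include <mutex>

// Hypothetical immutable render snapshot; real parameter data would live here.
struct RenderSnapshot
{
	int revision = 0;
};

// Hypothetical provider: publish swaps the pointer, acquire bumps a refcount.
class RuntimeSnapshotProvider
{
public:
	void Publish(std::shared_ptr<const RenderSnapshot> snapshot)
	{
		std::lock_guard<std::mutex> lock(mMutex);
		mCurrent = std::move(snapshot);
	}

	std::shared_ptr<const RenderSnapshot> Acquire() const
	{
		std::lock_guard<std::mutex> lock(mMutex);
		return mCurrent; // cheap: copies a pointer, not the snapshot
	}

private:
	mutable std::mutex mMutex;
	std::shared_ptr<const RenderSnapshot> mCurrent;
};
```

Because snapshots are immutable, a frame mid-render keeps a consistent view even while the coordinator publishes a newer one.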
### Phase 4. Make the render thread the sole GL owner
With state and coordination cleaner, move to a dedicated render-thread model.
Target behavior:
- one thread owns the GL context
- input callbacks never perform GL work directly
- output callbacks never perform GL work directly
- preview presentation, texture upload, render passes, readback, and output pack work are all issued by the render thread
Other threads should only:
- enqueue new video frames
- enqueue control updates
- enqueue backend events
- consume produced output buffers
Why this phase comes here:
- it is much safer once state access and control coordination are no longer centered on `RuntimeHost`
- it avoids coupling the render-thread refactor to storage and service refactors at the same time
Expected benefits:
- less cross-thread GL contention
- easier timing reasoning
- much lower risk of callback-driven stalls
- a clearer foundation for future GPU pipeline work
### Phase 5. Refactor live state layering into an explicit composition model
Once rendering and snapshots are isolated, formalize how final parameter values are derived.
Recommended layers:
- base persisted state
- operator-committed live state
- transient automation overlay
Render should derive final values from a clear composition rule such as:
- `final = base + committed + transient`
Why this phase follows render isolation:
- once render owns snapshot consumption, it becomes much easier to cleanly evaluate layered state without touching persistence or control services
- it turns the current OSC overlay behavior into a first-class model instead of an implementation detail
Expected benefits:
- fewer one-off sync rules
- clearer behavior for OSC, UI changes, and automation
- easier future expansion to presets, cues, or timed transitions
### Phase 6. Move persistence onto a background snapshot writer
After the state model is explicit, persistence should become a background concern rather than a synchronous side effect of mutations.
Target behavior:
- mutations update authoritative in-memory stored state
- persistence requests are queued
- disk writes are debounced and coalesced
- writes are atomic and versioned where practical
Why this phase comes after state splitting:
- otherwise persistence logic will need to be rewritten twice
- it should operate on the new `RuntimeStore` model, not on the current mixed-responsibility object
Expected benefits:
- less timing interference
- better corruption resistance
- cleaner restart/recovery semantics
### Phase 7. Make DeckLink/backend lifecycle explicit with a state machine
Once the render and state layers are cleaner, refactor the video backend into an explicit lifecycle model.
Suggested states:
- uninitialized
- devices-discovered
- configured
- prerolling
- running
- degraded
- stopping
- stopped
- failed
Why this phase belongs here:
- the backend should integrate with the new event model
- degraded/recovery behavior will be easier once rendering and state coordination are already more deterministic
Expected benefits:
- safer startup/shutdown ordering
- clearer recovery behavior
- easier handling of missing input, dropped frames, or reconfiguration
- a clearer place to own playout headroom policy, output queue sizing, and late-frame recovery behavior
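A minimal shape for the lifecycle is an enum over the suggested states plus one function that owns the legal-transition table. The specific transitions below are illustrative assumptions; the value is that recovery and shutdown ordering are enforced in one place rather than scattered across imperative calls.

```cpp
// Hypothetical sketch of the explicit backend lifecycle.
enum class BackendState
{
	Uninitialized, DevicesDiscovered, Configured, Prerolling,
	Running, Degraded, Stopping, Stopped, Failed
};

inline bool TransitionAllowed(BackendState from, BackendState to)
{
	switch (from)
	{
	case BackendState::Uninitialized:     return to == BackendState::DevicesDiscovered || to == BackendState::Failed;
	case BackendState::DevicesDiscovered: return to == BackendState::Configured || to == BackendState::Failed;
	case BackendState::Configured:        return to == BackendState::Prerolling || to == BackendState::Failed;
	case BackendState::Prerolling:        return to == BackendState::Running || to == BackendState::Failed;
	case BackendState::Running:           return to == BackendState::Degraded || to == BackendState::Stopping || to == BackendState::Failed;
	case BackendState::Degraded:          return to == BackendState::Running || to == BackendState::Stopping || to == BackendState::Failed;
	case BackendState::Stopping:          return to == BackendState::Stopped;
	case BackendState::Stopped:           return to == BackendState::Configured;    // restart path
	case BackendState::Failed:            return to == BackendState::Uninitialized; // full reset
	}
	return false;
}
```

Any attempted illegal transition then becomes a reportable health event rather than undefined imperative behavior.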
### Phase 8. Add structured health, telemetry, and operational reporting
This phase should happen after the main ownership changes so the telemetry can reflect the final architecture instead of a transient one.
Recommended coverage:
- render queue depth
- GL lock wait time, if any shared lock remains
- input callback latency
- input upload skip count
- output scheduling lag
- output queue depth and spare-buffer depth
- readback timing
- readback fence wait timing
- synchronous readback fallback count
- preview present timing and skipped-preview count
- snapshot publish frequency
- persistence queue depth
- event queue depth
- backend state transitions
- warning/error counters per subsystem
Also replace modal-only error handling with:
- structured in-app health state
- severity-based logging
- rolling log files
- operator-visible degraded-state messages
Why this phase comes last:
- it should instrument the architecture you intend to keep
- otherwise instrumentation work gets invalidated by the refactor
## Recommended Execution Order
If this is approached as a serious architecture program rather than opportunistic cleanup, the recommended order is:
1. Define subsystem boundaries and target architecture.
2. Introduce the internal event model.
3. Split `RuntimeHost`.
4. Make the render thread the sole GL owner.
5. Formalize live state layering and composition.
6. Move persistence to a background snapshot writer.
7. Refactor DeckLink/backend lifecycle into an explicit state machine.
8. Add structured telemetry, health reporting, and operational diagnostics.
## Why This Order Makes Sense
This order tries to avoid doing foundational work twice.
- The event model comes before major subsystem extraction so coordination patterns stabilize early.
- `RuntimeHost` is split before render isolation so the render thread does not inherit the current monolithic state model.
- Live state layering is formalized only after render ownership is clearer.
- Persistence is moved later so it can target the final state model rather than the current one.
- Telemetry is intentionally late so it instruments the architecture that survives the refactor.
## Short Version
The app is in a much better place than it was before the OSC timing work, but the main remaining architectural risk is still shared ownership. Too many responsibilities converge on `RuntimeHost` and the shared GL path. The most sensible path forward is:
1. define boundaries
2. establish an event model
3. split state ownership
4. isolate rendering
5. formalize layered live state
6. background persistence
7. explicit backend lifecycle
8. health and telemetry
That sequence gives each later phase a cleaner foundation than the current app has today.

View File

@@ -8,11 +8,15 @@ Set the UDP port in `config/runtime-host.json`:
```json
{
"oscPort": 9000
"oscBindAddress": "127.0.0.1",
"oscPort": 9000,
"oscSmoothing": 0.18
}
```
Set `oscPort` to `0` to disable the OSC listener.
Set `oscBindAddress` to `127.0.0.1` to keep OSC local to the host, or `0.0.0.0` to listen on all IPv4 interfaces.
Set `oscSmoothing` to a value from `0.0` to `1.0` to ease numeric OSC controls toward their targets each frame. `0.0` disables smoothing entirely, so values apply immediately; among nonzero values, larger values move further per frame and therefore respond more quickly.
## Address Pattern
@@ -47,6 +51,8 @@ Matching is exact first. If that fails, names are compared in a simplified form
If multiple layers use the same shader package ID or display name, the first matching layer in the stack is controlled. Use the internal layer ID shown in the UI when you need to target one duplicate layer precisely.
In the control UI, each parameter row has a small **OSC** button. Clicking it copies that parameter's exact OSC address to the clipboard, which is the safest way to target controls with long names or duplicate shader layers.
## Values
The listener accepts these OSC argument types:
@@ -59,17 +65,27 @@ The listener accepts these OSC argument types:
Single-argument messages become scalar JSON values. Multi-argument messages become JSON arrays, which lets OSC drive `vec2` and `color` parameters.
OSC updates are coalesced by target route and applied once per render tick, so rapid controller motion does not force one runtime mutation, disk write, and UI push per incoming UDP packet. Numeric OSC controls can also be slightly smoothed with `oscSmoothing`.
Examples:
```text
/VideoShaderToys/fisheye-reproject/panDegrees 45.0
/VideoShaderToys/fisheye-reproject/fisheyeModel "equisolid"
/VideoShaderToys/video-transform/pan 0.25 -0.5
/VideoShaderToys/composition-guides/lineColor 1.0 0.8 0.1 1.0
/VideoShaderToys/safe-area-guides/lineColor 1.0 0.8 0.1 1.0
```
Values are validated with the same shader parameter rules used by the REST API. Invalid values or unknown addresses are ignored and reported to the native debug output.
OSC-driven parameter changes are not autosaved to `runtime/runtime_state.json`. Stack edits made through the UI and preset operations still persist as before. Smoothing only applies to numeric controls such as floats, `vec2`, and `color`; booleans, enums, text, and triggers stay immediate.
For `trigger` parameters, the OSC value is treated as a pulse. A simple integer or boolean message is enough:
```text
/VideoShaderToys/trigger-flash/flash 1
```
## Open Stage Control
For simple scalar controls, set the widget address and target directly:
@@ -106,10 +122,21 @@ send('127.0.0.1:9000', '/VideoShaderToys/fisheye-reproject/tiltDegrees', {type:
## Network
The listener binds to localhost only:
By default the listener binds to localhost only:
```text
127.0.0.1:<oscPort>
```
This keeps the control surface local to the machine running Video Shader Toys.
To accept OSC from other machines on the network, set:
```json
{
"oscBindAddress": "0.0.0.0",
"oscPort": 9000
}
```
That listens on all IPv4 interfaces, so make sure your firewall and network are configured appropriately.

View File

@@ -201,6 +201,7 @@ paths:
post:
tags: [Runtime]
summary: Reload shaders
description: Rescans the shader library, re-reads manifests, queues shader compilation, and refreshes shader availability/errors. If a changed shader fails, the previous working stack remains active where possible.
operationId: reloadShaders
requestBody:
required: false
@@ -214,6 +215,24 @@ paths:
$ref: "#/components/responses/ActionOk"
"400":
$ref: "#/components/responses/ActionError"
/api/screenshot:
post:
tags: [Runtime]
summary: Queue a PNG screenshot of the final output render target
description: Captures the next completed output render target and writes it under `runtime/screenshots/`.
operationId: queueScreenshot
requestBody:
required: false
content:
application/json:
schema:
type: object
additionalProperties: false
responses:
"200":
$ref: "#/components/responses/ActionOk"
"400":
$ref: "#/components/responses/ActionError"
components:
responses:
ActionOk:
@@ -340,6 +359,8 @@ components:
$ref: "#/components/schemas/VideoStatus"
decklink:
$ref: "#/components/schemas/DeckLinkStatus"
videoIO:
$ref: "#/components/schemas/VideoIOStatus"
performance:
$ref: "#/components/schemas/PerformanceStatus"
shaders:
@@ -397,6 +418,8 @@ components:
type: string
DeckLinkStatus:
type: object
deprecated: true
description: Legacy DeckLink-specific status object. Prefer `videoIO` for new clients.
properties:
modelName:
type: string
@@ -412,6 +435,26 @@ components:
type: boolean
statusMessage:
type: string
VideoIOStatus:
type: object
properties:
backend:
type: string
example: decklink
modelName:
type: string
supportsInternalKeying:
type: boolean
supportsExternalKeying:
type: boolean
keyerInterfaceAvailable:
type: boolean
externalKeyingRequested:
type: boolean
externalKeyingActive:
type: boolean
statusMessage:
type: string
PerformanceStatus:
type: object
properties:
@@ -423,6 +466,18 @@ components:
type: number
budgetUsedPercent:
type: number
completionIntervalMs:
type: number
smoothedCompletionIntervalMs:
type: number
maxCompletionIntervalMs:
type: number
lateFrameCount:
type: number
droppedFrameCount:
type: number
flushedFrameCount:
type: number
ShaderSummary:
type: object
properties:
@@ -434,6 +489,12 @@ components:
type: string
category:
type: string
available:
type: boolean
description: False when the shader package exists but failed manifest or compile validation.
error:
type: string
description: Error text for unavailable shader packages.
temporal:
$ref: "#/components/schemas/TemporalState"
TemporalState:
@@ -472,9 +533,21 @@ components:
type: string
label:
type: string
description:
type: string
description: Short helper text shown under the parameter label in the control UI.
type:
type: string
enum: [float, vec2, color, bool, enum]
enum: [float, vec2, color, bool, enum, text, trigger]
defaultValue:
description: Default parameter value from the shader manifest.
oneOf:
- type: number
- type: boolean
- type: string
- type: array
items:
type: number
min:
type: array
items:
@@ -491,6 +564,12 @@ components:
type: array
items:
$ref: "#/components/schemas/ParameterOption"
maxLength:
type: number
description: Maximum length for text parameters.
font:
type: string
description: Font asset id used by text parameters, when declared.
value:
description: Current parameter value.
oneOf:

View File

@@ -14,11 +14,12 @@ Packaged documentation:
Generated files:
- `shader_cache/active_shader_wrapper.slang`: generated Slang wrapper for the active shader/layer.
- `shader_cache/active_shader.raw.frag`: raw GLSL emitted by `slangc`.
- `shader_cache/active_shader.frag`: patched GLSL consumed by the OpenGL path.
- `shader_cache/active_shader_wrapper.slang`: generated Slang wrapper for the most recently compiled shader pass.
- `shader_cache/active_shader.raw.frag`: raw GLSL emitted by `slangc` for the most recently compiled pass.
- `shader_cache/active_shader.frag`: patched GLSL consumed by the OpenGL path for the most recently compiled pass.
- `runtime_state.json`: autosaved latest layer stack, layer order, bypass state, shader assignments, and parameter values. The host reloads this file on startup.
- `stack_presets/*.json`: user-saved layer stack presets.
- `screenshots/*.png`: screenshots captured from the final output render target through the control UI/API.
Git policy:

View File

@@ -11,11 +11,15 @@ struct ShaderContext
float2 inputResolution;
float2 outputResolution;
float time;
float utcTimeSeconds;
float utcOffsetSeconds;
float startupRandom;
float frameCount;
float mixAmount;
float bypass;
int sourceHistoryLength;
int temporalHistoryLength;
int feedbackAvailable;
};
cbuffer GlobalParams
@@ -23,20 +27,31 @@ cbuffer GlobalParams
float gTime;
float2 gInputResolution;
float2 gOutputResolution;
float gUtcTimeSeconds;
float gUtcOffsetSeconds;
float gStartupRandom;
float gFrameCount;
float gMixAmount;
float gBypass;
int gSourceHistoryLength;
int gTemporalHistoryLength;
int gFeedbackAvailable;
{{PARAMETER_UNIFORMS}}};
Sampler2D<float4> gVideoInput;
{{SOURCE_HISTORY_SAMPLERS}}{{TEMPORAL_HISTORY_SAMPLERS}}{{TEXTURE_SAMPLERS}}
Sampler2D<float4> gLayerInput;
{{SOURCE_HISTORY_SAMPLERS}}{{TEMPORAL_HISTORY_SAMPLERS}}{{FEEDBACK_SAMPLER}}{{TEXTURE_SAMPLERS}}
{{TEXT_SAMPLERS}}
float4 sampleVideo(float2 tc)
{
return gVideoInput.Sample(tc);
}
float4 sampleLayerInput(float2 tc)
{
return gLayerInput.Sample(tc);
}
float4 sampleSourceHistory(int framesAgo, float2 tc)
{
if (gSourceHistoryLength <= 0)
@@ -67,6 +82,9 @@ float4 sampleTemporalHistory(int framesAgo, float2 tc)
}
}
{{FEEDBACK_HELPER}}
{{TEXT_HELPERS}}
#include "{{USER_SHADER_INCLUDE}}"
[shader("fragment")]
@@ -78,11 +96,15 @@ float4 fragmentMain(FragmentInput input) : SV_Target
context.inputResolution = gInputResolution;
context.outputResolution = gOutputResolution;
context.time = gTime;
context.utcTimeSeconds = gUtcTimeSeconds;
context.utcOffsetSeconds = gUtcOffsetSeconds;
context.startupRandom = gStartupRandom;
context.frameCount = gFrameCount;
context.mixAmount = gMixAmount;
context.bypass = gBypass;
context.sourceHistoryLength = gSourceHistoryLength;
context.temporalHistoryLength = gTemporalHistoryLength;
context.feedbackAvailable = gFeedbackAvailable;
float4 effectedColor = {{ENTRY_POINT_CALL}};
float mixValue = clamp(gBypass > 0.5 ? 0.0 : gMixAmount, 0.0, 1.0);
return lerp(context.sourceColor, effectedColor, mixValue);

shaders/SHADER_CONTRACT.md Normal file
View File

@@ -0,0 +1,819 @@
# Shader Package Contract
This document explains how to create shaders for the Video Shader runtime.
Each shader is a small package under `shaders/<id>/`:
```text
shaders/my-effect/
shader.json
shader.slang
optional-texture.png
```
The runtime reads `shader.json`, generates a Slang wrapper from `runtime/templates/shader_wrapper.slang.in`, includes your `shader.slang`, compiles the result to GLSL, and exposes the shader in the local control UI.
## Quick Start
Create a folder:
```text
shaders/my-effect/
```
Add `shader.json`:
```json
{
"id": "my-effect",
"name": "My Effect",
"description": "A simple starter shader.",
"category": "Custom",
"entryPoint": "shadeVideo",
"parameters": [
{
"id": "strength",
"label": "Strength",
"type": "float",
"default": 0.5,
"min": 0.0,
"max": 1.0,
"step": 0.01
}
]
}
```
Add `shader.slang`:
```slang
float4 shadeVideo(ShaderContext context)
{
float4 color = context.sourceColor;
color.rgb = lerp(color.rgb, 1.0 - color.rgb, strength);
return saturate(color);
}
```
With `autoReload` enabled in `config/runtime-host.json`, edits to shader source, manifests, and declared texture assets are picked up automatically. You can also use **Reload shaders** in the control UI to manually rescan the shader library.
## Guidance For Shaders
When generating a new shader package, prefer matching the existing runtime contract over copying code verbatim from Shadertoy, GLSL sandbox sites, or WebGL demos.
Important rules:
- Generate a complete package: `shaders/<id>/shader.json` and `shaders/<id>/shader.slang`.
- Use `float4 shadeVideo(ShaderContext context)` unless the manifest explicitly sets a different `entryPoint`.
- Do not create `mainImage`, `main`, `fragColor`, `iResolution`, `iTime`, `iChannel0`, or a fragment shader attribute layout. The runtime wrapper provides the real fragment entry point.
- Replace Shadertoy `fragCoord` with `context.uv * context.outputResolution`.
- Replace `iResolution.xy` with `context.outputResolution`.
- Replace `iTime` with `context.time`.
- Replace `iFrame` with `context.frameCount`.
- Replace source-video `iChannel0` sampling with `sampleVideo(uv)` or `context.sourceColor`.
- Use Slang/HLSL names and syntax: `float2`, `float3`, `float4`, `float2x2`, `lerp`, `frac`, `saturate`, and `mul(matrix, vector)`.
- Do not use GLSL-only types/functions such as `vec2`, `vec3`, `vec4`, `mat2`, `mix`, `fract`, `mod`, `texture`, or `mainImage`.
- Keep parameter IDs, texture IDs, font IDs, and function entry points as valid shader identifiers: letters, numbers, and underscores only, starting with a letter or underscore.
- Add only controls that are actually used by the shader.
- Prefer a small number of clear controls with conservative defaults.
- Keep shaders deterministic unless randomness is an explicit feature. For stable process-level variation, use `context.startupRandom`; for per-pixel pseudo-randomness, hash from `uv`, pixel coordinates, `frameCount`, or trigger values.
- If adapting third-party code, include attribution and source URL in the manifest description when the license allows adaptation.
- If the source license is unclear or incompatible, do not add the shader package.
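The determinism guidance above can be sketched as a shader that combines stable per-launch variation with per-pixel pseudo-randomness. This is a sketch; the `hash12` helper is illustrative and not provided by the runtime:

```slang
// Illustrative per-pixel hash: deterministic for a given input.
float hash12(float2 p)
{
    float3 p3 = frac(float3(p.x, p.y, p.x) * 0.1031);
    p3 += dot(p3, float3(p3.y, p3.z, p3.x) + 33.33);
    return frac((p3.x + p3.y) * p3.z);
}

float4 shadeVideo(ShaderContext context)
{
    // startupRandom shifts the noise field once per app launch;
    // uv * outputResolution gives stable per-pixel coordinates.
    float2 pixel = context.uv * context.outputResolution;
    float noise = hash12(pixel + context.startupRandom * 1024.0);
    float4 color = context.sourceColor;
    color.rgb += (noise - 0.5) * 0.05;
    return saturate(color);
}
```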
Before finishing, compile-check the shader through the runtime wrapper or launch the app and verify the shader appears without an error in the selector. CI also runs shader validation, so every available package in `shaders/` should compile successfully. Intentionally broken examples should stay visibly marked as broken rather than pretending to be production shaders.
## Manifest Fields
`shader.json` is the runtime-facing description of the shader.
Required fields:
- `id`: package ID used by state/presets. Hyphenated names are OK here, for example `my-effect`.
- `name`: display name in the UI.
- `parameters`: array of exposed controls. Use `[]` if there are no user parameters.
Optional fields:
- `description`: display/help text for the shader library.
- `category`: UI grouping label.
- `entryPoint`: Slang function to call. Defaults to `shadeVideo`.
- `passes`: advanced render-pass declarations. Omit this for normal single-pass shaders.
- `textures`: texture assets to load and expose as samplers.
- `fonts`: packaged font assets for live text parameters.
- `temporal`: history-buffer requirements.
- `feedback`: optional previous-frame shader-local feedback surface.
Parameter objects may also include an optional `description` string. The control UI displays it as one-line helper text with the full text available on hover, so use it for short operational guidance rather than long documentation.
Metadata conventions:
- Keep `name` short, human-facing, and in title case.
- Keep `category` consistent with existing library groups such as `Color`, `Transform`, `Projection`, `Temporal`, `Scopes & Guides`, `Utility`, `Feedback`, and `Calibration`.
- Keep `description` to one clear sentence in present tense that explains what the shader does for an operator.
- Avoid placeholder, joke, or overly implementation-heavy descriptions unless the shader is intentionally a diagnostic or broken example.
Shader-visible identifiers must be valid Slang-style identifiers:
- `entryPoint`
- parameter `id`
- texture `id`
- font `id`
Use letters, numbers, and underscores only, and start with a letter or underscore. For example, `logoTexture` is valid; `logo-texture` is not valid as a shader-visible texture ID.
## Render Passes
Most shaders should omit `passes`. The runtime then creates one implicit pass:
```json
{
"id": "main",
"source": "shader.slang",
"entryPoint": "shadeVideo",
"output": "layerOutput"
}
```
Advanced shaders may declare explicit passes. All passes may live in one `.slang` file by using different `entryPoint` values, or they may be split across multiple source files:
```json
{
"passes": [
{
"id": "blurX",
"source": "blur-x.slang",
"entryPoint": "blurHorizontal",
"inputs": ["layerInput"],
"output": "blurredX"
},
{
"id": "final",
"source": "final.slang",
"entryPoint": "finish",
"inputs": ["blurredX"],
"output": "layerOutput"
}
]
}
```
Pass fields:
- `id`: required pass identifier. It must be a valid shader identifier and unique inside the package.
- `source`: required Slang source path relative to the package directory.
- `entryPoint`: optional Slang function for this pass. Defaults to the package-level `entryPoint`.
- `inputs`: optional list of named inputs. The first input is used as the pass input texture.
- `output`: optional output name. Use `layerOutput` for the final visible layer result.
Pass input names:
- `layerInput`: the input to this layer, before any of its passes run.
- `previousPass`: the previous pass output in this layer. If there is no previous pass, this falls back to `layerInput`.
- Any earlier pass `id` or `output` name from the same layer.
If `inputs` is omitted, the first pass samples `layerInput` and later passes sample `previousPass`.
Single-file multipass example:
```json
{
"passes": [
{
"id": "mask",
"source": "shader.slang",
"entryPoint": "makeMask",
"output": "maskBuffer"
},
{
"id": "final",
"source": "shader.slang",
"entryPoint": "finish",
"inputs": ["maskBuffer"],
"output": "layerOutput"
}
]
}
```
Pass output names:
- `layerOutput`: the final visible output of this layer.
- Any other name creates an intermediate 16-bit float render target that later passes may sample.
If the final declared pass does not explicitly output `layerOutput`, the runtime still treats that final pass as the visible layer output. Existing single-pass shaders are unaffected.
## Feedback Surface
Shaders may opt in to a persistent previous-frame feedback surface:
```json
{
"feedback": {
"enabled": true,
"writePass": "final"
}
}
```
Fields:
- `enabled`: when `true`, the runtime allocates one persistent `RGBA16F` feedback surface for this shader at the current render resolution.
- `writePass`: optional pass `id` whose output should become next frame's feedback surface. If omitted, the runtime uses the final declared pass, or the implicit `main` pass for single-pass shaders.
Behavior:
- all passes may sample the same previous-frame feedback surface
- one designated pass writes the next feedback surface
- feedback is previous-frame state, not same-frame pass chaining
Guardrails:
- Feedback is best suited to image-like state such as trails, masks, luminance fields, decay maps, and shader-local analysis buffers.
- Feedback is not a precise long-term data store. The surface uses `RGBA16F`, so repeated accumulation, exact counters, and tightly packed metadata can drift or clamp over time.
- The feedback surface is currently filtered like an image, not configured as strict texel-addressed storage. If you reserve texels as data slots, sample them carefully and do not assume exact CPU-style array semantics.
- Each feedback-enabled layer allocates two full-resolution feedback textures for ping-pong state. This increases VRAM use and adds one extra full-frame feedback copy per rendered frame.
- In multipass shaders, feedback remains previous-frame state even when a pass also consumes same-frame pass outputs. Do not treat feedback as another same-frame intermediate buffer.
Single-pass example:
```json
{
"id": "feedback-glow",
"name": "Feedback Glow",
"feedback": {
"enabled": true
},
"parameters": []
}
```
Multipass example:
```json
{
"passes": [
{
"id": "analysis",
"source": "shader.slang",
"entryPoint": "analyzeFrame",
"output": "analysisBuffer"
},
{
"id": "final",
"source": "shader.slang",
"entryPoint": "finishFrame",
"inputs": ["analysisBuffer"],
"output": "layerOutput"
}
],
"feedback": {
"enabled": true,
"writePass": "final"
}
}
```
The wrapper exposes:
```slang
float4 sampleFeedback(float2 uv);
```
On the first frame, or after a reset, `sampleFeedback` returns transparent black.
Feedback resets when:
- a layer bypass state changes
- a layer changes shader
- the layer itself is removed
- a shader is reloaded or recompiled
- render dimensions change
- the app restarts
Ordinary stack add/remove/reorder operations on other layers are intended to preserve feedback state for unchanged feedback-enabled layers.
Feedback should therefore be treated as live runtime state, not durable saved state.
## Slang Entry Point
Your shader file must implement the manifest `entryPoint`.
Default:
```slang
float4 shadeVideo(ShaderContext context)
{
return context.sourceColor;
}
```
The runtime owns the real fragment shader entry point. Your function is called from the wrapper, and the runtime handles final bypass/mix behavior:
```slang
return lerp(context.sourceColor, effectedColor, mixValue);
```
That means:
- Return the fully effected color from your function.
- Respect alpha if your shader produces an overlay or sprite.
- The runtime will blend your result with the source according to `mixAmount` and bypass state.
## ShaderContext
Your entry point receives:
```slang
struct ShaderContext
{
float2 uv;
float4 sourceColor;
float2 inputResolution;
float2 outputResolution;
float time;
float utcTimeSeconds;
float utcOffsetSeconds;
float startupRandom;
float frameCount;
float mixAmount;
float bypass;
int sourceHistoryLength;
int temporalHistoryLength;
int feedbackAvailable;
};
```
Fields:
- `uv`: normalized texture coordinates, usually `0..1`.
- `sourceColor`: decoded RGBA source video at `uv`.
- `inputResolution`: decoded input video resolution in pixels.
- `outputResolution`: shader render resolution in pixels. The current pipeline renders the shader stack at input resolution, then scales the final frame to the configured video I/O output mode.
- `time`: elapsed runtime time in seconds.
- `utcTimeSeconds`: current UTC time of day from the host PC clock, expressed as seconds since UTC midnight.
- `utcOffsetSeconds`: host PC local UTC offset in seconds. Add this to `utcTimeSeconds` and wrap to `0..86400` to get local time of day.
- `startupRandom`: random `0..1` value generated once when the host process starts. It stays constant for the lifetime of the app and changes on the next launch.
- `frameCount`: incrementing frame counter.
- `mixAmount`: runtime mix amount.
- `bypass`: `1.0` when the layer is bypassed, otherwise `0.0`.
- `sourceHistoryLength`: number of usable source-history frames currently available.
- `temporalHistoryLength`: number of usable temporal frames currently available for this layer.
- `feedbackAvailable`: `1` when previous-frame feedback exists for this layer, otherwise `0`.
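As a sketch of the local-time recipe described for `utcTimeSeconds` and `utcOffsetSeconds`, assuming a purely illustrative night-dimming behavior:

```slang
float4 shadeVideo(ShaderContext context)
{
    // Add the local offset and wrap into 0..86400 to get local time of day.
    float localSeconds = fmod(context.utcTimeSeconds + context.utcOffsetSeconds + 86400.0, 86400.0);
    float localHour = floor(localSeconds / 3600.0);

    // Illustrative use: dim the output outside 06:00-22:00 local time.
    float nightDim = (localHour < 6.0 || localHour >= 22.0) ? 0.5 : 1.0;
    float4 color = context.sourceColor;
    color.rgb *= nightDim;
    return color;
}
```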
Color/precision notes:
- `context.sourceColor`, `sampleVideo()`, and temporal history samples are display-referred Rec.709-like RGB, not linear-light RGB.
- The current DeckLink backend prefers 10-bit YUV capture and output when the card/mode supports it, with automatic 8-bit fallback. If external keying is enabled, output prefers 10-bit YUVA (`Ay10`) when supported so shader alpha can drive the key signal, then falls back to 8-bit BGRA.
- Internal decoded, layer, composite, output, and temporal render targets are 16-bit floating point, so gradients and LUT work have more headroom than packed byte video I/O formats.
- Do not add extra Rec.709 or linear conversions unless the shader intentionally documents that behavior.
## Helper Functions
The wrapper provides:
```slang
float4 sampleLayerInput(float2 uv);
float4 sampleVideo(float2 uv);
float4 sampleSourceHistory(int framesAgo, float2 uv);
float4 sampleTemporalHistory(int framesAgo, float2 uv);
float4 sampleFeedback(float2 uv);
```
`sampleLayerInput` samples the input arriving at this shader layer before any of the layer's own passes run. If this layer follows another shader, it sees that previous shader's output. If this is the first shader layer, it sees the decoded source image.
`sampleVideo` samples the current pass input texture. In single-pass shaders this is usually the layer input. In multipass shaders it may instead be a named pass output or `previousPass`, depending on the manifest routing for that pass.
`sampleSourceHistory` samples previous decoded source frames. `framesAgo` is clamped into the available range. If no history is available, it falls back to `sampleVideo`.
`sampleTemporalHistory` samples previous pre-layer input frames for temporal shaders that request `preLayerInput` history. `framesAgo` is clamped into the available range. If no temporal history is available, it falls back to `sampleVideo`.
`sampleFeedback` samples the shader-local previous-frame feedback surface. If feedback has not been written yet, it returns transparent black.
Example:
```slang
float4 shadeVideo(ShaderContext context)
{
float4 previous = sampleSourceHistory(1, context.uv);
return lerp(context.sourceColor, previous, 0.35);
}
```
Layer-input example:
```slang
float4 finishPass(ShaderContext context)
{
float3 baseColor = sampleLayerInput(context.uv).rgb;
float3 passResult = context.sourceColor.rgb;
return float4(baseColor + passResult * 0.25, 1.0);
}
```
Feedback example:
```slang
float4 shadeVideo(ShaderContext context)
{
float4 previous = sampleFeedback(context.uv);
float4 current = context.sourceColor;
return lerp(current, previous, 0.2);
}
```
Multipass feedback example:
```slang
float4 analyzeFrame(ShaderContext context)
{
float4 previous = sampleFeedback(context.uv);
float luma = dot(context.sourceColor.rgb, float3(0.2126, 0.7152, 0.0722));
return float4(lerp(previous.rgb, float3(luma), 0.1), 1.0);
}
float4 finishFrame(ShaderContext context)
{
float4 analysis = context.sourceColor;
return float4(analysis.rgb, 1.0);
}
```
In that multipass case:
- `analyzeFrame` reads last frame's feedback
- `finishFrame` receives the same-frame pass output through normal multipass routing
- the `writePass` decides which pass output becomes next frame's feedback
That means:
- use `context.sourceColor` or `sampleVideo()` when you want this pass's routed input
- use `sampleLayerInput()` when you want the pre-pass layer input
- use `sampleFeedback()` when you want previous-frame persistent shader-local state
## Parameters
Manifest parameters are exposed to Slang as global values with the same `id`.
Supported types:
| Manifest type | Slang type | JSON value |
| --- | --- | --- |
| `float` | `float` | number |
| `vec2` | `float2` | `[x, y]` |
| `color` | `float4` | `[r, g, b, a]` |
| `bool` | `bool` | `true` or `false` |
| `enum` | `int` | selected option index |
| `text` | generated texture/helper | string |
| `trigger` | `int <id>`, `float <id>Time` | pulse/count |
Float example:
```json
{
"id": "brightness",
"label": "Brightness",
"type": "float",
"default": 1.0,
"min": 0.0,
"max": 2.0,
"step": 0.01
}
```
```slang
color.rgb *= brightness;
```
Vector example:
```json
{
"id": "offset",
"label": "Offset",
"type": "vec2",
"default": [0.0, 0.0],
"min": [-0.2, -0.2],
"max": [0.2, 0.2],
"step": [0.001, 0.001]
}
```
```slang
float2 uv = clamp(context.uv + offset, float2(0.0), float2(1.0));
```
Color example:
```json
{
"id": "tint",
"label": "Tint",
"type": "color",
"default": [1.0, 1.0, 1.0, 1.0]
}
```
```slang
color *= tint;
```
Boolean example:
```json
{
"id": "invert",
"label": "Invert",
"type": "bool",
"default": false
}
```
```slang
if (invert)
color.rgb = 1.0 - color.rgb;
```
Enum example:
```json
{
"id": "mode",
"label": "Mode",
"type": "enum",
"default": "normal",
"options": [
{ "value": "normal", "label": "Normal" },
{ "value": "luma", "label": "Luma" },
{ "value": "posterize", "label": "Posterize" }
]
}
```
Enums are stored in presets/state by their string `value`, but exposed to Slang as a zero-based integer index in option order:
```slang
if (mode == 1)
{
float luma = dot(color.rgb, float3(0.2126, 0.7152, 0.0722));
color.rgb = float3(luma);
}
else if (mode == 2)
{
color.rgb = floor(color.rgb * 4.0) / 4.0;
}
```
Text example:
```json
{
"fonts": [
{ "id": "inter", "path": "fonts/Inter-Regular.ttf" }
],
"parameters": [
{
"id": "titleText",
"label": "Title",
"type": "text",
"default": "LIVE",
"font": "inter",
"maxLength": 64
}
]
}
```
Text parameters are runtime-owned strings. They are not emitted as uniform values. Instead, the runtime renders the current string into a single-line SDF mask texture and the shader wrapper exposes helpers based on the parameter id:
```slang
float mask = sampleTitleText(textUv);
float4 premultipliedText = drawTitleText(textUv, float4(1.0, 1.0, 1.0, 1.0));
```
Text is currently limited to printable ASCII. `maxLength` defaults to `64` and is clamped to `1..256`. The optional `font` field references a packaged font declared in `fonts`; if no font is specified, the runtime uses its fallback sans-serif renderer.
Trigger example:
```json
{
"id": "flash",
"label": "Flash",
"type": "trigger"
}
```
A trigger appears as a button in the control UI. Pressing it increments the shader-visible integer `flash` and records the runtime time in `flashTime`:
```slang
float age = context.time - flashTime;
float intensity = flash > 0 ? exp(-age * 5.0) : 0.0;
color.rgb += intensity;
```
Triggers are useful for one-shot shader reactions such as flashes, ripples, cuts, or randomized looks. They do not execute arbitrary CPU code; they only update uniforms consumed by the shader.
Parameter validation:
- Float values are clamped to `min`/`max` if provided.
- `vec2` must have exactly 2 numbers.
- `color` must have exactly 4 numbers.
- Enum defaults must match one of the declared option values.
- Text defaults must be strings. Non-printable characters are dropped and values are clamped to `maxLength`.
- Trigger values are incremented by the host when triggered. The shader sees the trigger count and last trigger time.
- Non-finite numeric values are rejected.
## Texture Assets
Declare texture assets in the manifest:
```json
{
"textures": [
{
"id": "logoTexture",
"path": "logo.png"
}
]
}
```
Rules:
- `id` must be a valid shader identifier.
- `path` is relative to the shader package directory.
- The file must exist when the manifest is loaded.
- Texture asset changes trigger shader reload.
Texture IDs become `Sampler2D<float4>` globals:
```slang
float4 logo = logoTexture.Sample(logoUv);
```
For sprite or overlay shaders, return premultiplied-alpha output if you want clean composition:
```slang
float alpha = logo.a;
return float4(logo.rgb * alpha, alpha);
```
See `shaders/dvd-bounce/` for a complete texture-driven example.
## Font Assets
Declare packaged font assets in the manifest:
```json
{
"fonts": [
{
"id": "inter",
"path": "fonts/Inter-Regular.ttf"
}
]
}
```
Rules:
- `id` must be a valid shader identifier.
- `path` is relative to the shader package directory.
- The file must exist when the manifest is loaded.
- Font asset changes trigger shader reload.
- V1 text layout is single-line; shaders position and scale the generated text texture themselves.
See `shaders/text-overlay/` for a complete live text example. The sample bundles Roboto Regular and includes its OFL license beside the font file.
## Temporal Shaders
Temporal shaders can request access to previous frames.
Manifest example:
```json
{
"temporal": {
"enabled": true,
"historySource": "preLayerInput",
"historyLength": 12
}
}
```
Supported `historySource` values:
- `source`: decoded source-video history from previous frames.
- `preLayerInput`: history of the input arriving at this layer before the shader runs.
`historyLength` is the requested frame count. The runtime clamps it by `maxTemporalHistoryFrames` in `config/runtime-host.json`.
Temporal history resets when:
- layers are added, removed, or reordered
- a layer bypass state changes
- a layer changes shader
- a shader is reloaded or recompiled
- render dimensions change
Check the available history length so you do not assume history is ready on the first frame:
```slang
float4 shadeVideo(ShaderContext context)
{
if (context.temporalHistoryLength <= 0)
return context.sourceColor;
float4 oldFrame = sampleTemporalHistory(3, context.uv);
return lerp(context.sourceColor, oldFrame, 0.4);
}
```
See `shaders/temporal-ghost-trail/` and `shaders/temporal-low-fps/` for examples.
## Coordinate And Color Notes
- `uv` is normalized.
- Use `context.outputResolution` for pixel-sized effects.
- Use `context.inputResolution` when sampling source video by input pixel size.
- `sourceColor` and `sampleVideo` return RGBA values in normalized `0..1` range.
- Prefer `saturate(color)` or explicit `clamp` before returning if your math can overshoot.
- For generated calibration charts, test patterns, gradients, and exposure ramps, state whether patch values are linear-light, display-referred gamma encoded, Rec.709 encoded, or intentionally artistic.
- For one-stop exposure patches, each patch should normally be `baseLevel * 2^patchIndex` before any display/tone encoding.
- For Rec.709 OETF encoding, use:
```slang
float rec709Oetf(float linearLevel)
{
float value = saturate(linearLevel);
if (value < 0.018)
return 4.5 * value;
return 1.099 * pow(value, 0.45) - 0.099;
}
```
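Putting the two notes above together, a sketch of a one-stop exposure ramp encoded for display might look like this (the patch count of 8 and base level of 0.01 are illustrative choices, and `rec709Oetf` is the helper shown above):
```slang
float4 shadeVideo(ShaderContext context)
{
    // 8 horizontal patches, each one stop brighter than the last
    float patchIndex = floor(context.uv.x * 8.0);
    float baseLevel = 0.01;
    float linearLevel = baseLevel * exp2(patchIndex);
    // Encode linear-light to Rec.709 before returning; values above 1.0 clamp.
    float encoded = rec709Oetf(linearLevel);
    return float4(encoded, encoded, encoded, 1.0);
}
```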
Pixel-size example:
```slang
float2 pixel = 1.0 / max(context.outputResolution, float2(1.0));
float4 right = sampleVideo(context.uv + float2(pixel.x, 0.0));
```
## Animation And Timing Notes
- `context.time` is elapsed runtime time in seconds and is the default animation source for generative shaders.
- `context.frameCount` increments once per rendered output frame and is useful when an effect must be frame-locked.
- Avoid expensive CPU-like timing logic in the shader; animation should usually be a simple function of `context.time`, `context.frameCount`, trigger uniforms, or parameters.
- If a shader appears to judder only while animated, first test whether freezing its time removes the issue. That usually separates animation cadence issues from rendering or transfer issues.
- Do not add custom timer uniforms to the wrapper. Use the fields already in `ShaderContext`.
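A minimal animation sketch following these notes, driven only by `context.time` (the pulse rate and mix amount are arbitrary illustrative values):
```slang
float4 shadeVideo(ShaderContext context)
{
    // Simple 1 Hz pulse; 6.2831853 is 2*pi, so sin completes one cycle per second.
    float pulse = 0.5 + 0.5 * sin(context.time * 6.2831853);
    // Mix a subtle white flash over the source; no custom timer uniforms needed.
    return lerp(context.sourceColor, float4(1.0, 1.0, 1.0, 1.0), 0.2 * pulse);
}
```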
## Performance Notes
The app has to meet a fixed video frame cadence, so avoid shader code that only looks good in unconstrained browser demos.
Guidelines:
- Keep loops bounded with compile-time constants where possible.
- Avoid very high per-pixel raymarch counts by default. If a heavy loop is needed, expose a quality/steps control with a safe default.
- Prefer early exits only when they are simple; highly divergent branches can be expensive across a full frame.
- Avoid repeated texture sampling in large loops unless the effect really needs it.
- Use `context.outputResolution` carefully. A 1080p frame is over 2 million fragments; a tiny extra loop can become expensive.
- The UI render time may measure CPU command submission rather than true GPU execution time, so visual frame issues can still be GPU-related even when reported render time is small.
- Do not write debug files, allocate resources, or assume CPU-side work can happen from `shader.slang`. Shader code is GPU-only.
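A bounded-loop sketch following these guidelines (the `MAX_STEPS` constant and the blur itself are illustrative; a real effect would expose a quality control with a safe default):
```slang
// Compile-time bound keeps the loop cheap and lets the compiler unroll it.
static const int MAX_STEPS = 8;

float4 shadeVideo(ShaderContext context)
{
    float2 pixel = 1.0 / max(context.outputResolution, float2(1.0, 1.0));
    float4 sum = float4(0.0, 0.0, 0.0, 0.0);
    [unroll]
    for (int i = 0; i < MAX_STEPS; i++)
    {
        // Fixed-radius horizontal taps centered on uv; radius is bounded by the constant.
        float offset = (float(i) - 3.5) * pixel.x;
        sum += sampleVideo(context.uv + float2(offset, 0.0));
    }
    return sum / float(MAX_STEPS);
}
```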
## Reload And Generated Files
When a shader compiles, the runtime writes generated files under `runtime/shader_cache/`:
- `active_shader_wrapper.slang`
- `active_shader.raw.frag`
- `active_shader.frag`
These files are ignored by git and are useful for debugging compiler output. If a shader fails to compile, inspect the wrapper first; it shows the exact generated Slang code including your included shader.
For multipass shaders, these files reflect the most recently compiled pass. If a package has several passes, the reported compile error and pass name are usually more useful than assuming the cache contains the first pass.
## Common Pitfalls
- Do not use hyphens in parameter IDs, texture IDs, or entry point names.
- Do not declare your own `ShaderContext`, `GlobalParams`, `sampleVideo`, `sampleSourceHistory`, or `sampleTemporalHistory`.
- Do not write a `[shader("fragment")]` entry point in `shader.slang`; the runtime provides it.
- Remember enum globals are integer indexes, not strings.
- Declare every texture in `shader.json`; undeclared texture samplers will not be bound.
- Declare packaged fonts in `shader.json` when text parameters should use a specific font.
- Keep temporal history requests modest. They consume texture units and memory and are capped by runtime config.
- If a parameter appears in the UI but not in Slang, the shader may still compile, but the control has no effect.
- If a Slang name collides with a generated global, rename your parameter or local symbol.
## Minimal Package Checklist
Before committing a new shader package:
- `shader.json` is valid JSON.
- `id` is unique across `shaders/`.
- `entryPoint`, parameter IDs, and texture IDs are valid identifiers.
- `shader.slang` implements the configured entry point.
- Texture files referenced by `textures` exist.
- Font files referenced by `fonts` exist.
- Enum defaults are present in their `options`.
- Temporal shaders handle short or empty history gracefully.
- The app can reload and compile the shader without errors.


@@ -0,0 +1,85 @@
{
"id": "anamorphic-desqueeze",
"name": "Anamorphic Desqueeze",
"description": "Desqueezes anamorphic footage by 1.3x, 1.33x, 1.5x, or 2x with fit or fill framing.",
"category": "Transform",
"entryPoint": "shadeVideo",
"parameters": [
{
"id": "desqueezeFactor",
"label": "Desqueeze",
"type": "enum",
"default": "x1_33",
"options": [
{
"value": "x1_3",
"label": "1.3x"
},
{
"value": "x1_33",
"label": "1.33x"
},
{
"value": "x1_5",
"label": "1.5x"
},
{
"value": "x2_0",
"label": "2x"
}
],
"description": "Horizontal stretch factor matching the anamorphic lens or adapter."
},
{
"id": "framing",
"label": "Framing",
"type": "enum",
"default": "fit",
"options": [
{
"value": "fit",
"label": "Fit"
},
{
"value": "fill",
"label": "Fill"
}
],
"description": "Fit preserves the whole image; Fill crops to remove borders."
},
{
"id": "pan",
"label": "Pan",
"type": "vec2",
"default": [
0,
0
],
"min": [
-1,
-1
],
"max": [
1,
1
],
"step": [
0.001,
0.001
],
"description": "Reframes the desqueezed image after fit/fill scaling."
},
{
"id": "outsideColor",
"label": "Outside Color",
"type": "color",
"default": [
0,
0,
0,
1
],
"description": "Color used where the remapped image samples outside the source frame."
}
]
}


@@ -0,0 +1,32 @@
// Map the enum index from the manifest to a horizontal stretch factor.
float selectedDesqueezeFactor()
{
    if (desqueezeFactor == 0)
        return 1.3;
    if (desqueezeFactor == 1)
        return 1.3333333;
    if (desqueezeFactor == 2)
        return 1.5;
    return 2.0;
}
float4 shadeVideo(ShaderContext context)
{
    float factor = selectedDesqueezeFactor();
    float2 centered = context.uv - 0.5;
    if (framing == 0)
    {
        // Fit: sample a taller source region, letterboxing with outsideColor.
        centered.y *= factor;
    }
    else
    {
        // Fill: sample a narrower source region, cropping left and right.
        centered.x /= factor;
    }
    // Apply pan after fit/fill scaling, then reject samples outside the source frame.
    float2 sourceUv = centered + 0.5 - pan;
    bool inside = sourceUv.x >= 0.0 && sourceUv.x <= 1.0 && sourceUv.y >= 0.0 && sourceUv.y <= 1.0;
    if (!inside)
        return outsideColor;
    return sampleVideo(sourceUv);
}
