Compare commits

..

64 Commits

Author SHA1 Message Date
kixelated df5d362754
Add optional/required extensions. (#117) 2023-11-03 15:10:15 +09:00
kixelated ea701bcf7e
Also build the moq-pub image in this repo. (#116) 2023-11-03 13:56:45 +09:00
kixelated ddfe7963e6
Initial moq-transport-01 support (#115)
Co-authored-by: Mike English <mike.english@gmail.com>
2023-11-03 13:19:41 +09:00
kixelated d55c4a80d1
Add `--tls-root` and `--tls-disable-verify` to moq-pub. (#114) 2023-10-30 22:54:27 +09:00
kixelated 24cf36e923
Update HACKATHON.md 2023-10-25 15:39:39 +09:00
kixelated d69c7491ba
Hackathon (#113) 2023-10-25 15:28:47 +09:00
Luke Curley d2a0722b1b Remove some additional log lines. 2023-10-20 15:41:02 +09:00
dependabot[bot] 9da061b8fe
Bump rustix from 0.37.23 to 0.37.25 (#99)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-20 12:05:40 +09:00
dependabot[bot] e762956a70
Bump rustix from 0.37.19 to 0.37.25 in /moq-transport (#100)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-20 12:05:28 +09:00
kixelated 53817f41e7
Remove subscribers/publisher on close (#103) 2023-10-20 12:04:55 +09:00
kixelated a30f313439
Add a flag to manually specify roots. (#98) 2023-10-17 15:48:36 +09:00
kixelated c5b3e5cb8d
Rename some TLS flags (#97) 2023-10-17 14:50:17 +09:00
kixelated d0fca05485
Fix a panic when --fingerprint was not provided, and rename it to --dev (#96) 2023-10-16 14:31:12 +09:00
kixelated 9a25143694
Support multiple TLS certificates. (#95) 2023-10-16 13:05:40 +09:00
kixelated 1749989dc5
Small stuff. (#94) 2023-10-13 23:43:29 +09:00
kixelated 5a0357b111
Maybe the order matters. (#93) 2023-10-13 15:59:54 +09:00
kixelated 6c9394db00
Switch to Docker Hub. (#92) 2023-10-13 14:03:22 +09:00
kixelated 80111d02cc
Fixes dependabot (#91) 2023-10-13 11:19:23 +09:00
kixelated 992e68affe
Rename workflows. (#90) 2023-10-13 11:02:43 +09:00
dependabot[bot] 7a779eb65c
Bump rustls-webpki from 0.100.1 to 0.100.3 in /moq-transport (#88)
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.100.1 to
0.100.3.
2023-10-13 10:49:20 +09:00
kixelated e039fbdb56
Switch to a GCP registry. (#89)
Unfortunately Cloud Run doesn't support the free/public Github registry.
2023-10-13 10:47:10 +09:00
dependabot[bot] 0bdcd7adb6
Bump webpki from 0.22.1 to 0.22.4 (#86)
Bumps [webpki](https://github.com/briansmith/webpki) from 0.22.1 to
0.22.4.
2023-10-12 13:25:08 +09:00
kixelated c95bb8209f
Fix local development. (#87) 2023-10-12 13:24:28 +09:00
kixelated 163bc98605
Missed a link 2023-10-12 13:14:56 +09:00
kixelated 1cf8a7617c
Update links in README.md 2023-10-12 13:13:45 +09:00
kixelated 04ff9d5a6a
Add support for multiple origins (#82)
Adds `moq-api` to get/set the origin for each broadcast. Not used by default for local development.
2023-10-12 13:09:32 +09:00
Luke Curley 5e4eb420c0 Bump webtransport-proto to fix Chrome 117 2023-09-27 07:03:14 +09:00
Luke Curley 43a2ed15d4 Revert "Enable tracing to debug. (#80)"
This reverts commit 6e0e85272d.
2023-09-19 14:49:02 -07:00
Luke Curley 80fd13a9dc Revert "Bump golang.org/x/text from 0.3.7 to 0.3.8 in /dev (#70)"
This reverts commit 5697abeb80.
2023-09-19 10:11:33 -07:00
kixelated eb7e707be3
Implement prioritization in moq-pub (#74)
Here's the main change in webtransport-quinn 0.5.3:
ec553fa340

I haven't run into any errors so I don't know what was broken before
@englishm. I'm hoping that setting the stream priority to max when
writing the stream header avoids the issue? Otherwise we need to go bug
diving.
2023-09-19 10:01:26 -07:00
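
For context, a rough sketch of the approach described above (raising the stream priority to the maximum before writing the stream header), written against quinn's stream API, which webtransport-quinn wraps; the function shape is hypothetical:

```rust
// Hypothetical helper; `header` would be the encoded stream header.
async fn write_stream_header(
    stream: &mut quinn::SendStream,
    header: &[u8],
) -> anyhow::Result<()> {
    // Max out the priority so the header is flushed before media data.
    stream.set_priority(i32::MAX)?;
    stream.write_all(header).await?;
    Ok(())
}
```
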
kixelated 6e0e85272d
Enable tracing to debug. (#80) 2023-09-19 10:00:55 -07:00
Luke Curley 7c8287ee35 I think this token be missing. 2023-09-18 23:07:34 -07:00
kixelated 6bf897d980
Switch to depot for faster ARM builds... at a price. (#79) 2023-09-18 23:06:00 -07:00
kixelated 11f8be65d5
Add some more connection logging. (#78)
So I can debug why my handshake is failing.
2023-09-18 22:37:49 -07:00
kixelated fbd06da2ee
Expose the version VarInt (#77)
Useful for documentation. Right now it's:

```rust
pub const KIXEL_00: Version = _
```

This should probably be an enum too?
2023-09-18 17:24:35 -07:00
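
For reference, a hedged sketch of the enum shape the message floats; the variant mirrors the constant above and the actual wire value stays elided:

```rust
// Hypothetical alternative to the `pub const` above; the real
// VarInt values come from the drafts and are elided here.
#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Version {
    Kixel00,
}

impl Version {
    pub fn to_varint(self) -> u64 {
        match self {
            Version::Kixel00 => todo!("wire value from the draft"),
        }
    }
}
```
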
kixelated 46604ada41
Fix publishing docker images 2023-09-18 00:19:40 -07:00
kixelated f2c1a0e460
Only perform one release at a time 2023-09-17 22:52:26 -07:00
dependabot[bot] 2696a56885
Bump golang.org/x/net from 0.0.0-20220421235706-1d1ef9303861 to 0.7.0 in /dev (#69)
Bumps [golang.org/x/net](https://github.com/golang/net) from
0.0.0-20220421235706-1d1ef9303861 to 0.7.0.
2023-09-17 22:45:21 -07:00
dependabot[bot] 5697abeb80
Bump golang.org/x/text from 0.3.7 to 0.3.8 in /dev (#70)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.7 to
0.3.8.
2023-09-17 22:45:02 -07:00
kixelated eaa8abcdc6
Better read/write error messages (#75)
Still need to properly support encode/decode though. The problem there
is that encode/decode uses AsyncRead, which means we get io::Error
instead of quinn::ReadError and quinn::WriteError. The io::Error type is
not clonable so we just can't use it, well unless it's wrapped in an Arc
or something gross.
2023-09-17 22:44:01 -07:00
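
A minimal sketch of the Arc workaround the message mentions; the wrapper name is illustrative:

```rust
use std::{fmt, io, sync::Arc};

// io::Error is not Clone, but Arc<io::Error> is, so a thin wrapper
// lets one decode error be handed to every waiting subscriber.
#[derive(Clone, Debug)]
pub struct SharedIoError(pub Arc<io::Error>);

impl From<io::Error> for SharedIoError {
    fn from(err: io::Error) -> Self {
        Self(Arc::new(err))
    }
}

impl fmt::Display for SharedIoError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        self.0.fmt(f)
    }
}
```
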
kixelated 89f1bc430d
Also support EC private keys. (#73)
(probably)

@englishm I think you ran into this issue. The `rustls::PrivateKey`
documentation says it supports SEC1-encoded EC private keys so it should
just work?
2023-09-17 22:43:48 -07:00
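
A sketch of loading a SEC1-encoded EC key, assuming rustls 0.21 and rustls-pemfile 1.x as dependencies; the helper is illustrative:

```rust
use std::{fs::File, io::BufReader};

// Illustrative helper: read the first SEC1-encoded EC private key
// from a PEM file; rustls::PrivateKey takes the raw DER bytes.
fn load_ec_key(path: &str) -> anyhow::Result<rustls::PrivateKey> {
    let mut reader = BufReader::new(File::open(path)?);
    let mut keys = rustls_pemfile::ec_private_keys(&mut reader)?;
    anyhow::ensure!(!keys.is_empty(), "no EC private key found in {path}");
    Ok(rustls::PrivateKey(keys.remove(0)))
}
```
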
kixelated 9f50cd5d69
Update README.md 2023-09-17 22:43:22 -07:00
kixelated 38a20153ba
Update README.md 2023-09-17 22:43:05 -07:00
kixelated 415f4e972d
Don't run the publish workflow on PR. (#76)
It takes foooorever and we have a separate check.
2023-09-17 22:29:32 -07:00
Luke Curley 48fb8b77b0 Also build for ARM. 2023-09-17 13:09:38 -07:00
Luke Curley 639f916b6a Why bother test. 2023-09-17 00:22:57 -07:00
Luke Curley c0dbb8c372 Revert my changes. 2023-09-17 00:20:57 -07:00
Luke Curley d6350995a1 Simplify the publish action. 2023-09-16 23:58:38 -07:00
kixelated 2cd887f992
Create docker publish 2023-09-16 23:38:04 -07:00
Mike English 7f402bd070
Update fly.io deployment (#71) 2023-09-15 16:10:13 -04:00
kixelated 88542e266c
Major moq-transport API simplification (#68)
Exponentially easier to use moq-transport as there's no message handling required. This is a BREAKING CHANGE.
2023-09-15 12:06:28 -07:00
dependabot[bot] 35c2127683
Bump webpki from 0.22.0 to 0.22.1 (#67)
Bumps [webpki](https://github.com/briansmith/webpki) from 0.22.0 to
0.22.1.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-09-07 16:20:56 -07:00
Mike English 48de6a3f5c
Fly.io (#49)
Adds support for deploying the `moq-quinn` relay server to fly.io
2023-09-07 16:02:07 -04:00
Zafer Gürel 73f450aa91
moq-pub: Avoid namespace conflict (#66)
namespace as a command line argument
or assign a unique (uuid4) value.

---------

Co-authored-by: Zafer Gurel <zafer@perculus.com>
2023-09-07 15:59:49 -04:00
Mike English 90818ac848
moq-pub: JSON catalog support, bugfixes (#60)
Fixes some bugs around subscription handling and 
adds support for the new JSON catalog format
2023-09-05 15:08:35 -04:00
Mike English 2b1a9a4ce5
Add moq-pub (#54)
Initial version of a CLI publisher / contribution tool
2023-08-30 09:32:42 -04:00
dependabot[bot] 838bffdd51
Bump rustls-webpki from 0.100.1 to 0.100.2 (#59)
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.100.1 to
0.100.2.
Release notes (rustls-webpki v0.100.2): certificate path building and verification is now capped at 100 signature validation operations, avoiding a CPU-exhaustion denial-of-service risk when validating crafted certificate chains with quadratic runtime. This affected both clients and servers that verified client certificates. Full changelog: https://github.com/rustls/webpki/compare/v/0.100.1...v/0.100.2
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-08-29 15:30:40 -07:00
kixelated 8e3ebfcc7b
Remove the incompatible role check. (#58) 2023-08-29 15:30:21 -07:00
Mike English fdc05ffb99
moq-transport: Make Messages and Objects Clone (#57) 2023-08-29 00:59:30 -04:00
Mike English 5423d7c93a
Add more detailed Debug for MapSource (#56)
and everything it contains.

Stop short of printing all of the bytes in a State's VecDeque, but
expose most everything else for easier troubleshooting.
2023-08-28 15:14:15 -04:00
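
In the same spirit, a hypothetical sketch of a Debug impl that exposes structure while only summarizing the buffered bytes:

```rust
use std::{collections::VecDeque, fmt};

// Hypothetical shape: report how many bytes are buffered instead of
// dumping the VecDeque's contents.
struct State {
    buffer: VecDeque<u8>,
}

impl fmt::Debug for State {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("State")
            .field("buffer_len", &self.buffer.len())
            .finish_non_exhaustive()
    }
}
```
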
kixelated c53b3ddbe0
Update README.md 2023-08-25 16:36:06 -07:00
kixelated 5c3f794053
A few minor changes to the API. (#52)
The only salvageable remains from a multi-day refactoring effort. The
main benefit is that Setup messages are no longer part of the Message
enum, so match will be a lot easier.
2023-08-23 15:28:27 -07:00
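
To illustrate the payoff (shapes are invented, not the crate's real definitions): once setup has its own type, post-handshake code matches on a smaller enum:

```rust
// Illustrative only: handshake messages live apart from the rest,
// so post-handshake code never has to match on setup variants.
enum SetupMessage {
    Client { versions: Vec<u64> },
    Server { version: u64 },
}

enum Message {
    Announce { namespace: String },
    Subscribe { track: String },
    Object { payload: Vec<u8> },
}
```
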
kixelated c5d8873e4e
Webtransport generic (#51)
Switched to the webtransport-generic crate so quinn or quiche (with
adapter) can be used. This also involved switching out the
decoder/encoder since it meant a wrapper was required.
2023-08-15 10:20:03 -07:00
kixelated 3a65873055
Fix the buffering used for parsing. (#50)
fill_buf didn't work like I expected. This code is much better anyway.
2023-08-02 11:41:28 -07:00
121 changed files with 7447 additions and 3945 deletions

3
.dockerignore Normal file

@ -0,0 +1,3 @@
target
dev
*.mp4

.editorconfig

@ -8,3 +8,10 @@ insert_final_newline = true
indent_style = tab
indent_size = 4
max_line_length = 120
[*.md]
trim_trailing_whitespace = false
[*.yml]
indent_style = space
indent_size = 2

348
.github/logo.svg vendored Normal file

@ -0,0 +1,348 @@
(SVG source omitted: the Media over QUIC logo, a 1600×350 vector of green and off-white paths; rendered size 20 KiB.)


@ -1,29 +0,0 @@
name: moq.rs
on:
pull_request:
branches: [ "main" ]
env:
CARGO_TERM_COLOR: always
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: clippy, rustfmt
- name: test
run: cargo test --verbose
- name: clippy
run: cargo clippy
- name: fmt
run: cargo fmt --check

65
.github/workflows/main.yml vendored Normal file

@ -0,0 +1,65 @@
name: main
on:
push:
branches: ["main"]
env:
REGISTRY: docker.io
IMAGE: kixelated/moq-rs
IMAGE-PUB: kixelated/moq-pub
SERVICE: api # Restart the API service TODO and relays
jobs:
deploy:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
# Only one release at a time and cancel prior releases
concurrency:
group: release
cancel-in-progress: true
steps:
- uses: actions/checkout@v3
# I'm paying for Depot for faster ARM builds.
- uses: depot/setup-action@v1
- uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
# Build and push Docker image with Depot
- uses: depot/build-push-action@v1
with:
project: r257ctfqm6
context: .
push: true
tags: ${{env.REGISTRY}}/${{env.IMAGE}}
platforms: linux/amd64,linux/arm64
# Same, but include ffmpeg for publishing BBB
- uses: depot/build-push-action@v1
with:
project: r257ctfqm6
context: .
push: true
target: moq-pub # instead of the default target
tags: ${{env.REGISTRY}}/${{env.IMAGE-PUB}}
platforms: linux/amd64,linux/arm64
# Log in to GCP
- uses: google-github-actions/auth@v1
with:
credentials_json: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}
# Deploy to cloud run
- uses: google-github-actions/deploy-cloudrun@v1
with:
service: ${{env.SERVICE}}
image: ${{env.REGISTRY}}/${{env.IMAGE}}

28
.github/workflows/pr.yml vendored Normal file

@ -0,0 +1,28 @@
name: pr
on:
pull_request:
branches: ["main"]
env:
CARGO_TERM_COLOR: always
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
# Install Rust with clippy/rustfmt
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: clippy, rustfmt
# Make sure u guys don't write bad code
- run: cargo test --verbose
- run: cargo clippy --no-deps
- run: cargo fmt --check
# Check for unused dependencies
- uses: bnjbvr/cargo-machete@main

1
.gitignore vendored

@ -1,3 +1,4 @@
.DS_Store
target/
logs/
*.mp4

1477
Cargo.lock generated

File diff suppressed because it is too large.

Cargo.toml

@ -1,2 +1,3 @@
[workspace]
members = ["moq-transport", "moq-quinn", "moq-warp"]
members = ["moq-transport", "moq-relay", "moq-pub", "moq-api"]
resolver = "2"

39
Dockerfile Normal file
View File

@ -0,0 +1,39 @@
FROM rust:latest as builder
# Create a build directory and copy over all of the files
WORKDIR /build
COPY . .
# Reuse a cache between builds.
# I tried to `cargo install`, but it doesn't seem to work with workspaces.
# There's also issues with the cache mount since it builds into /usr/local/cargo/bin, and we can't mount that without clobbering cargo itself.
# Instead, we build the binaries and copy them to the cargo bin directory.
RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/build/target \
cargo build --release && cp /build/target/release/moq-* /usr/local/cargo/bin
# Special image for moq-pub with ffmpeg and a publish script included.
FROM rust:latest as moq-pub
# Install required utilities and ffmpeg
RUN apt-get update && \
apt-get install -y ffmpeg wget
# Copy the publish script into the image
COPY deploy/publish.sh /usr/local/bin/publish
# Copy the compiled binary
COPY --from=builder /usr/local/cargo/bin/moq-pub /usr/local/cargo/bin/moq-pub
CMD [ "publish" ]
# moq-rs image with just the binaries
FROM rust:latest as moq-rs
LABEL org.opencontainers.image.source=https://github.com/kixelated/moq-rs
LABEL org.opencontainers.image.licenses="MIT OR Apache-2.0"
# Fly.io entrypoint
ADD deploy/fly-relay.sh .
# Copy the compiled binaries
COPY --from=builder /usr/local/cargo/bin /usr/local/cargo/bin

53
HACKATHON.md Normal file

@ -0,0 +1,53 @@
# Hackathon
IETF Prague 118
## MoqTransport
Reference libraries are available at [moq-rs](https://github.com/kixelated/moq-rs) and [moq-js](https://github.com/kixelated/moq-js). The Rust library is [well documented](https://docs.rs/moq-transport/latest/moq_transport/) but the web library, not so much.
**TODO** Update both to draft-01.
**TODO** Switch any remaining forks over to extensions. ex: track_id in SUBSCRIBE
The stream mapping right now is quite rigid: `stream == group == object`.
**TODO** Support multiple objects per group. They MUST NOT use different priorities, different tracks, or out-of-order sequences.
The API and cache aren't designed to send/receive arbitrary objects over arbitrary streams as specified in the draft. I don't think it should, and it wouldn't be possible to implement in time for the hackathon anyway.
**TODO** Make an extension to enforce this stream mapping?
## Generic Relay
I'm hosting a simple CDN at: `relay.quic.video`
The traffic is sharded based on the WebTransport path to avoid namespace collisions. Think of it like a customer ID, although it's completely unauthenticated for now. Use your username or whatever string you want: `CONNECT https://relay.quic.video/alan`.
**TODO** Currently, it performs an implicit `ANNOUNCE ""` when `role=publisher`. This means there can only be a single publisher per shard and `role=both` is not supported. I should have explicit `ANNOUNCE` messages supported before the hackathon to remove this limitation.
**TODO** I don't know if I will have subscribe hints fully working in time. They will be parsed but might be ignored.
## CMAF Media
You can [publish](https://quic.video/publish) and [watch](https://quic.video/watch) broadcasts.
There's a [24/7 bunny stream](https://quic.video/watch/bbb) or you can publish your own using [moq-pub](https://github.com/kixelated/moq-rs/tree/main/moq-pub).
If you want to fetch from the relay directly, the name of the broadcast is the path. For example, `https://quic.video/watch/bbb` can be accessed at `relay.quic.video/bbb`.
The namespace is empty and the catalog track is `.catalog`. I'm currently using a simple JSON catalog with no support for delta updates.
**TODO** update to the proposed [Warp catalog](https://datatracker.ietf.org/doc/draft-wilaw-moq-catalogformat/).
The media tracks use a single (unbounded) object per group. Video groups are per GoP, while audio groups are per frame. There's also an init track containing information required to initialize the decoder.
**TODO** Base64 encode the init track in the catalog.
## Clock
**TODO** Host a clock demo that sends a group per second:
```
GROUP: YYYY-MM-DD HH:MM
OBJECT: SS
```
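
A standalone sketch of just the layout above, emitting a new group when the minute rolls over and one object per second; no MoQ transport is involved and chrono is an assumed dependency:

```rust
use std::{thread, time::Duration};

use chrono::Utc; // assumed dependency for wall-clock formatting

fn main() {
    let mut group = String::new();
    loop {
        let now = Utc::now();
        let minute = now.format("%Y-%m-%d %H:%M").to_string();
        if minute != group {
            // New GROUP header whenever the minute changes.
            println!("GROUP: {minute}");
            group = minute;
        }
        // One OBJECT per second within the group.
        println!("OBJECT: {}", now.format("%S"));
        thread::sleep(Duration::from_secs(1));
    }
}
```
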

README.md

@ -1,40 +1,64 @@
# Media over QUIC
<p align="center">
<img height="128px" src="https://github.com/kixelated/moq-rs/blob/main/.github/logo.svg" alt="Media over QUIC">
</p>
Media over QUIC (MoQ) is a live media delivery protocol utilizing QUIC streams.
See the [Warp draft](https://datatracker.ietf.org/doc/draft-lcurley-warp/).
See [quic.video](https://quic.video) for more information.
This repository is a Rust server that supports both contribution (ingest) and distribution (playback).
It requires a client, such as [moq-js](https://github.com/kixelated/moq-js).
This repository contains a few crates:
## Setup
- **moq-relay**: A relay server, accepting content from publishers and fanning it out to subscribers.
- **moq-pub**: A publish client, accepting media from stdin (ex. via ffmpeg) and sending it to a remote server.
- **moq-transport**: An async implementation of the underlying MoQ protocol.
- **moq-api**: A HTTP API server that stores the origin for each broadcast, backed by redis.
### Certificates
There's currently no way to view media with this repo; you'll need to use [moq-js](https://github.com/kixelated/moq-js) for that.
Unfortunately, QUIC mandates TLS and makes local development difficult.
If you have a valid certificate you can use it instead of self-signing.
## Development
Use [mkcert](https://github.com/FiloSottile/mkcert) to generate a self-signed certificate.
Unfortunately, this currently requires Go in order to [fork](https://github.com/FiloSottile/mkcert/pull/513) the tool.
```
./cert/generate
```
Unfortunately, WebTransport in Chrome currently (May 2023) doesn't verify certificates using the root CA.
The workaround is to use the `serverFingerprints` option, which requires that the certificate be valid for at most **14 days**.
This is also why we're using a fork of mkcert, because it generates certificates valid for years by default.
This limitation will be removed once Chrome uses the system CA for WebTransport.
Use the [dev helper scripts](dev/README.md) for local development.
## Usage
Run the server:
### moq-relay
```
cargo run
```
**moq-relay** is a server that forwards subscriptions from publishers to subscribers, caching and deduplicating along the way.
It's designed to be run in a datacenter, relaying media across multiple hops to deduplicate and improve QoS.
The relays register themselves via the [moq-api](moq-api) endpoints, which is used to discover other relays and share broadcasts.
This listens for WebTransport connections on `https://localhost:4443` by default.
Use a [MoQ client](https://github.com/kixelated/moq-js) to connect to the server.
Notable arguments:
- `--listen <ADDR>` Listen on this address, default: `[::]:4443`
- `--tls-cert <CERT>` Use the certificate file at this path
- `--tls-key <KEY>` Use the private key at this path
- `--dev` Listen via HTTPS as well, serving the `/fingerprint` of the self-signed certificate. (dev only)
This listens for WebTransport connections on `UDP https://localhost:4443` by default.
You need a client to connect to that address, to both publish and consume media.
### moq-pub
This is a client that publishes a fMP4 stream from stdin over MoQ.
This can be combined with ffmpeg (and other tools) to produce a live stream.
Notable arguments:
- `<URL>` connect to the given address, which must start with `https://` for WebTransport.
**NOTE**: We're very particular about the fMP4 ingested. See [this script](dev/pub) for the required ffmpeg flags.
### moq-transport
A media-agnostic library used by [moq-relay](moq-relay) and [moq-pub](moq-pub) to serve the underlying subscriptions.
It has caching/deduplication built-in, so your application is oblivious to the number of connections under the hood.
See the published [crate](https://crates.io/crates/moq-transport) and [documentation](https://docs.rs/moq-transport/latest/moq_transport/).
### moq-api
This is an API server that exposes a REST API.
It's used by relays to insert themselves as origins when publishing, and to find the origin when subscribing.
It's basically just a thin wrapper around redis that is only needed to run multiple relays in a (simple) cluster.
## License

8
deploy/fly-relay.sh Executable file

@ -0,0 +1,8 @@
#!/usr/bin/env sh
mkdir cert
# Nothing to see here...
echo "$MOQ_CRT" | base64 -d > dev/moq-demo.crt
echo "$MOQ_KEY" | base64 -d > dev/moq-demo.key
RUST_LOG=info /usr/local/cargo/bin/moq-relay --tls-cert dev/moq-demo.crt --tls-key dev/moq-demo.key

20
deploy/fly.toml Normal file

@ -0,0 +1,20 @@
app = "englishm-moq-relay"
kill_signal = "SIGINT"
kill_timeout = 5
[env]
PORT = "4443"
[experimental]
cmd = "./fly-relay.sh"
[[services]]
internal_port = 4443
protocol = "udp"
[services.concurrency]
hard_limit = 25
soft_limit = 20
[[services.ports]]
port = "4443"

41
deploy/publish.sh Executable file

@ -0,0 +1,41 @@
#!/bin/bash
set -euo pipefail
ADDR=${ADDR:-"https://relay.quic.video"}
NAME=${NAME:-"bbb"}
URL=${URL:-"http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"}
# Download the funny bunny
wget -nv "${URL}" -O "${NAME}.mp4"
# ffmpeg
# -hide_banner: Hide the banner
# -v quiet: and any other output
# -stats: But we still want some stats on stderr
# -stream_loop -1: Loop the broadcast an infinite number of times
# -re: Output in real-time
# -i "${INPUT}": Read from a file on disk
# -vf "drawtext": Render the current time in the corner of the video
# -an: Disable audio for now
# -b:v 3M: Output video at 3Mbps
# -preset ultrafast: Don't use much CPU at the cost of quality
# -tune zerolatency: Optimize for latency at the cost of quality
# -f mp4: Output to mp4 format
# -movflags: Build a fMP4 file with a frame per fragment
# - | moq-pub: Output to stdout and moq-pub to publish
# Run ffmpeg
ffmpeg \
-stream_loop -1 \
-hide_banner \
-v quiet \
-re \
-i "${NAME}.mp4" \
-vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:text='%{gmtime\: %H\\\\\:%M\\\\\:%S.%3N}':x=(W-tw)-24:y=24:fontsize=48:fontcolor=white:box=1:boxcolor=black@0.5" \
-an \
-b:v 3M \
-preset ultrafast \
-tune zerolatency \
-f mp4 \
-movflags empty_moov+frag_every_frame+separate_moof+omit_tfhd_offset \
- | moq-pub "${ADDR}/${NAME}"


@ -1,3 +1,4 @@
*.crt
*.key
*.hex
*.mp4

118
dev/README.md Normal file

@ -0,0 +1,118 @@
# Local Development
This is a collection of helpful scripts for local development.
## Setup
### moq-relay
Unfortunately, QUIC mandates TLS and makes local development difficult.
If you have a valid certificate you can use it instead of self-signing.
Use [mkcert](https://github.com/FiloSottile/mkcert) to generate a self-signed certificate.
Unfortunately, this currently requires [Go](https://golang.org/) to be installed in order to [fork](https://github.com/FiloSottile/mkcert/pull/513) the tool.
Somebody should get that merged or make something similar in Rust...
```bash
./dev/cert
```
Unfortunately, WebTransport in Chrome currently (May 2023) doesn't verify certificates using the root CA.
The workaround is to use the `serverFingerprints` option, which requires that the certificate be valid for at most **14 days**.
This is also why we're using a fork of mkcert, because it generates certificates valid for years by default.
This limitation will be removed once Chrome uses the system CA for WebTransport.
### moq-pub
You'll want some test footage to broadcast.
Anything works, but make sure the codec is supported by the player since `moq-pub` does not re-encode.
Here's a critically acclaimed short film:
```bash
mkdir media
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4 -O dev/source.mp4
```
`moq-pub` uses [ffmpeg](https://ffmpeg.org/) to convert the media to fMP4.
You should have it installed already if you're a video nerd, otherwise:
```bash
brew install ffmpeg
```
### moq-api
`moq-api` uses a redis instance to store active origins for clustering.
This is not relevant for most local development and the code path is skipped by default.
However, if you want to test the clustering, you'll need either [Docker](https://www.docker.com/) or [Podman](https://podman.io/) installed.
We run the redis instance via a container automatically as part of `dev/api`.
## Development
**tl;dr** run these commands in separate terminals:
```bash
./dev/cert
./dev/relay
./dev/pub
```
They will each print out a URL you can use to publish/watch broadcasts.
### moq-relay
You can run the relay with the following command, automatically using the self-signed certificates generated earlier.
This listens for WebTransport connections on `https://localhost:4443` by default.
```bash
./dev/relay
```
It will print out a URL you can use to publish. Alternatively, you can use `dev/pub` instead.
> Publish URL: https://quic.video/publish/?server=localhost:4443
### moq-pub
The following command runs a development instance, broadcasting `dev/source.mp4` to `https://localhost:4443` over WebTransport:
```bash
./dev/pub
```
It will print out a URL you can use to watch.
By default, the broadcast name is `dev` but you can overwrite it with the `NAME` env.
> Watch URL: https://quic.video/watch/dev?server=localhost:4443
If you're debugging encoding issues, you can use this script to dump the file to disk instead, defaulting to
`dev/output.mp4`.
```bash
./dev/pub-file
```
### moq-api
The following command runs an API server, listening for HTTP requests on `http://localhost:4442` by default.
```bash
./dev/api
```
Nodes can now register themselves via the API, which means you can run multiple interconnected relays.
There are two separate `dev/relay-0` and `dev/relay-1` scripts to test clustering locally:
```bash
./dev/relay-0
./dev/relay-1
```
These listen on `:4443` and `:4444` respectively, inserting themselves into the origin database as `localhost:$PORT`.
There's also a separate `dev/pub-1` script to publish to the `:4444` instance.
You can use the existing `dev/pub` script to publish to the `:4443` instance.
If all goes well, you would be able to publish to one relay and watch from the other.

45
dev/api Executable file

@ -0,0 +1,45 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Use debug logging by default
export RUST_LOG="${RUST_LOG:-debug}"
# Run the API server on port 4442 by default
HOST="${HOST:-[::]}"
PORT="${PORT:-4442}"
LISTEN="${LISTEN:-$HOST:$PORT}"
# Check for Podman/Docker and set runtime accordingly
if command -v podman &> /dev/null; then
RUNTIME=podman
elif command -v docker &> /dev/null; then
RUNTIME=docker
else
echo "Neither podman or docker found in PATH. Exiting."
exit 1
fi
REDIS_PORT=${REDIS_PORT:-6400} # The default is 6379, but we'll use 6400 to avoid conflicts
# Cleanup function to stop Redis when script exits
cleanup() {
$RUNTIME rm -f moq-redis || true
}
# Stop the redis instance if it's still running
cleanup
# Run a Redis instance
REDIS_CONTAINER=$($RUNTIME run --rm --name moq-redis -d -p "$REDIS_PORT:6379" redis:latest)
# Cleanup function to stop Redis when script exits
trap cleanup EXIT
# Default to a sqlite database in memory
DATABASE="${DATABASE-sqlite::memory:}"
# Run the relay and forward any arguments
cargo run --bin moq-api -- --listen "$LISTEN" --redis "redis://localhost:$REDIS_PORT" "$@"

40
dev/pub Executable file

@ -0,0 +1,40 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Use debug logging by default
export RUST_LOG="${RUST_LOG:-debug}"
# Connect to localhost by default.
HOST="${HOST:-localhost}"
PORT="${PORT:-4443}"
ADDR="${ADDR:-$HOST:$PORT}"
# Generate a random 16 character name by default.
#NAME="${NAME:-$(head /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | head -c 16)}"
# JK use the name "dev" instead
# TODO use that random name if the host is not localhost
NAME="${NAME:-dev}"
# Combine the host and name into a URL.
URL="${URL:-"https://$ADDR/$NAME"}"
# Default to a source video
INPUT="${INPUT:-dev/source.mp4}"
# Print out the watch URL
echo "Watch URL: https://quic.video/watch/$NAME?server=$ADDR"
# Run ffmpeg and pipe the output to moq-pub
# TODO enable audio again once fixed.
ffmpeg -hide_banner -v quiet \
-stream_loop -1 -re \
-i "$INPUT" \
-c copy \
-an \
-f mp4 -movflags cmaf+separate_moof+delay_moov+skip_trailer \
-frag_duration 1 \
- | cargo run --bin moq-pub -- "$URL" "$@"

10
dev/pub-1 Executable file

@ -0,0 +1,10 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Connect to the 2nd relay by default.
export PORT="${PORT:-4444}"
./dev/pub

90
dev/pub-file Executable file

@ -0,0 +1,90 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Default to a source video
INPUT="${INPUT:-dev/source.mp4}"
# Output the fragmented MP4 to disk for testing.
OUTPUT="${OUTPUT:-dev/output.mp4}"
# Run ffmpeg the same as dev/pub, but:
# - print any errors/warnings
# - only loop twice
#
# Note this is artificially slowed down to real-time using the -re flag; you can remove it.
ffmpeg \
-re \
-y \
-i "$INPUT" \
-c copy \
-fps_mode passthrough \
-f mp4 -movflags cmaf+separate_moof+delay_moov+skip_trailer \
-frag_duration 1 \
"${OUTPUT}"
# % ffmpeg -f mp4 --ffmpeg -h muxer=mov
#
# ffmpeg version 6.0 Copyright (c) 2000-2023 the FFmpeg developers
# Muxer mov [QuickTime / MOV]:
# Common extensions: mov.
# Default video codec: h264.
# Default audio codec: aac.
# mov/mp4/tgp/psp/tg2/ipod/ismv/f4v muxer AVOptions:
# -movflags <flags> E.......... MOV muxer flags (default 0)
# rtphint E.......... Add RTP hint tracks
# empty_moov E.......... Make the initial moov atom empty
# frag_keyframe E.......... Fragment at video keyframes
# frag_every_frame E.......... Fragment at every frame
# separate_moof E.......... Write separate moof/mdat atoms for each track
# frag_custom E.......... Flush fragments on caller requests
# isml E.......... Create a live smooth streaming feed (for pushing to a publishing point)
# faststart E.......... Run a second pass to put the index (moov atom) at the beginning of the file
# omit_tfhd_offset E.......... Omit the base data offset in tfhd atoms
# disable_chpl E.......... Disable Nero chapter atom
# default_base_moof E.......... Set the default-base-is-moof flag in tfhd atoms
# dash E.......... Write DASH compatible fragmented MP4
# cmaf E.......... Write CMAF compatible fragmented MP4
# frag_discont E.......... Signal that the next fragment is discontinuous from earlier ones
# delay_moov E.......... Delay writing the initial moov until the first fragment is cut, or until the first fragment flush
# global_sidx E.......... Write a global sidx index at the start of the file
# skip_sidx E.......... Skip writing of sidx atom
# write_colr E.......... Write colr atom even if the color info is unspecified (Experimental, may be renamed or changed, do not use from scripts)
# prefer_icc E.......... If writing colr atom prioritise usage of ICC profile if it exists in stream packet side data
# write_gama E.......... Write deprecated gama atom
# use_metadata_tags E.......... Use mdta atom for metadata.
# skip_trailer E.......... Skip writing the mfra/tfra/mfro trailer for fragmented files
# negative_cts_offsets E.......... Use negative CTS offsets (reducing the need for edit lists)
# -moov_size <int> E.......... maximum moov size so it can be placed at the begin (from 0 to INT_MAX) (default 0)
# -rtpflags <flags> E.......... RTP muxer flags (default 0)
# latm E.......... Use MP4A-LATM packetization instead of MPEG4-GENERIC for AAC
# rfc2190 E.......... Use RFC 2190 packetization instead of RFC 4629 for H.263
# skip_rtcp E.......... Don't send RTCP sender reports
# h264_mode0 E.......... Use mode 0 for H.264 in RTP
# send_bye E.......... Send RTCP BYE packets when finishing
# -skip_iods <boolean> E.......... Skip writing iods atom. (default true)
# -iods_audio_profile <int> E.......... iods audio profile atom. (from -1 to 255) (default -1)
# -iods_video_profile <int> E.......... iods video profile atom. (from -1 to 255) (default -1)
# -frag_duration <int> E.......... Maximum fragment duration (from 0 to INT_MAX) (default 0)
# -min_frag_duration <int> E.......... Minimum fragment duration (from 0 to INT_MAX) (default 0)
# -frag_size <int> E.......... Maximum fragment size (from 0 to INT_MAX) (default 0)
# -ism_lookahead <int> E.......... Number of lookahead entries for ISM files (from 0 to 255) (default 0)
# -video_track_timescale <int> E.......... set timescale of all video tracks (from 0 to INT_MAX) (default 0)
# -brand <string> E.......... Override major brand
# -use_editlist <boolean> E.......... use edit list (default auto)
# -fragment_index <int> E.......... Fragment number of the next fragment (from 1 to INT_MAX) (default 1)
# -mov_gamma <float> E.......... gamma value for gama atom (from 0 to 10) (default 0)
# -frag_interleave <int> E.......... Interleave samples within fragments (max number of consecutive samples, lower is tighter interleaving, but with more overhead) (from 0 to INT_MAX) (default 0)
# -encryption_scheme <string> E.......... Configures the encryption scheme, allowed values are none, cenc-aes-ctr
# -encryption_key <binary> E.......... The media encryption key (hex)
# -encryption_kid <binary> E.......... The media encryption key identifier (hex)
# -use_stream_ids_as_track_ids <boolean> E.......... use stream ids as track ids (default false)
# -write_btrt <boolean> E.......... force or disable writing btrt (default auto)
# -write_tmcd <boolean> E.......... force or disable writing tmcd (default auto)
# -write_prft <int> E.......... Write producer reference time box with specified time source (from 0 to 2) (default 0)
# wallclock 1 E..........
# pts 2 E..........
# -empty_hdlr_name <boolean> E.......... write zero-length name string in hdlr atoms within mdia and minf atoms (default false)
# -movie_timescale <int> E.......... set movie timescale (from 1 to INT_MAX) (default 1000)

dev/relay Executable file
@@ -0,0 +1,37 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Use debug logging by default
export RUST_LOG="${RUST_LOG:-debug}"
# Default to a self-signed certificate
# TODO automatically generate if it doesn't exist.
CERT="${CERT:-dev/localhost.crt}"
KEY="${KEY:-dev/localhost.key}"
# Default to listening on localhost:4443
HOST="${HOST:-[::]}"
PORT="${PORT:-4443}"
LISTEN="${LISTEN:-$HOST:$PORT}"
# A list of optional args
ARGS=""
# Connect to the given URL to get origins.
# TODO default to a public instance?
if [ -n "${API-}" ]; then
ARGS="$ARGS --api $API"
fi
# Provide our node URL when registering origins.
if [ -n "${NODE-}" ]; then
ARGS="$ARGS --api-node $NODE"
fi
echo "Publish URL: https://quic.video/publish/?server=localhost:${PORT}"
# Run the relay and forward any arguments
cargo run --bin moq-relay -- --listen "$LISTEN" --tls-cert "$CERT" --tls-key "$KEY" --dev $ARGS -- "$@"

dev/relay-0 Executable file
@@ -0,0 +1,12 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Run an instance that advertises itself to the origin API.
export PORT="${PORT:-4443}"
export API="${API:-http://localhost:4442}" # TODO support HTTPS
export NODE="${NODE:-https://localhost:$PORT}"
./dev/relay

dev/relay-1 Executable file
@@ -0,0 +1,12 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Run an instance that advertises itself to the origin API.
export PORT="${PORT:-4444}"
export API="${API:-http://localhost:4442}" # TODO support HTTPS
export NODE="${NODE:-https://localhost:$PORT}"
./dev/relay

dev/setup Normal file
@@ -0,0 +1,2 @@
#!/bin/bash
set -euo pipefail

moq-api/Cargo.toml Normal file
@@ -0,0 +1,43 @@
[package]
name = "moq-api"
description = "Media over QUIC"
authors = ["Luke Curley"]
repository = "https://github.com/kixelated/moq-rs"
license = "MIT OR Apache-2.0"
version = "0.0.1"
edition = "2021"
keywords = ["quic", "http3", "webtransport", "media", "live"]
categories = ["multimedia", "network-programming", "web-programming"]
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
# HTTP server
axum = "0.6"
hyper = { version = "0.14", features = ["full"] }
tokio = { version = "1", features = ["full"] }
# HTTP client
reqwest = { version = "0.11", features = ["json", "rustls-tls"] }
# JSON encoding
serde = "1"
serde_json = "1"
# CLI
clap = { version = "4", features = ["derive"] }
# Database
redis = { version = "0.23", features = [
"tokio-rustls-comp",
"connection-manager",
] }
url = { version = "2", features = ["serde"] }
# Error handling
log = "0.4"
env_logger = "0.9"
thiserror = "1"

moq-api/README.md Normal file
@@ -0,0 +1,4 @@
# moq-api
A thin HTTP API that wraps Redis.
Basically I didn't want the relays connecting to Redis directly.
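For illustration, here's a minimal sketch of how a relay might drive the bundled client (see `moq-api/src/client.rs` below); the localhost URLs are assumptions matching the dev scripts. Origins expire after 10 minutes, so publishers keep refreshing them with `patch_origin`.
```
use moq_api::{Client, Origin};
use url::Url;

#[tokio::main]
async fn main() -> Result<(), moq_api::ApiError> {
    // Assumed addresses, matching dev/api and dev/relay-0.
    let mut api = Client::new(Url::parse("http://localhost:4442")?);
    let origin = Origin {
        url: Url::parse("https://localhost:4443/dev")?,
    };

    // Register ourselves as the origin for the "dev" broadcast.
    // The server stores this with SET NX, so a duplicate ID is an error.
    api.set_origin("dev", &origin).await?;

    // The key expires after 10 minutes; refresh it while we're alive.
    api.patch_origin("dev", &origin).await?;

    // Another relay can now route subscribers to us.
    if let Some(found) = api.get_origin("dev").await? {
        println!("origin: {}", found.url);
    }

    // Remove the mapping on shutdown.
    api.delete_origin("dev").await?;
    Ok(())
}
```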

moq-api/src/client.rs Normal file
@@ -0,0 +1,56 @@
use url::Url;
use crate::{ApiError, Origin};
#[derive(Clone)]
pub struct Client {
// The address of the moq-api server
url: Url,
client: reqwest::Client,
}
impl Client {
pub fn new(url: Url) -> Self {
let client = reqwest::Client::new();
Self { url, client }
}
pub async fn get_origin(&self, id: &str) -> Result<Option<Origin>, ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.get(url).send().await?;
if resp.status() == reqwest::StatusCode::NOT_FOUND {
return Ok(None);
}
let origin: Origin = resp.json().await?;
Ok(Some(origin))
}
pub async fn set_origin(&mut self, id: &str, origin: &Origin) -> Result<(), ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.post(url).json(origin).send().await?;
resp.error_for_status()?;
Ok(())
}
pub async fn delete_origin(&mut self, id: &str) -> Result<(), ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.delete(url).send().await?;
resp.error_for_status()?;
Ok(())
}
pub async fn patch_origin(&mut self, id: &str, origin: &Origin) -> Result<(), ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.patch(url).json(origin).send().await?;
resp.error_for_status()?;
Ok(())
}
}

moq-api/src/error.rs Normal file
@@ -0,0 +1,16 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum ApiError {
#[error("redis error: {0}")]
Redis(#[from] redis::RedisError),
#[error("reqwest error: {0}")]
Request(#[from] reqwest::Error),
#[error("hyper error: {0}")]
Hyper(#[from] hyper::Error),
#[error("url error: {0}")]
Url(#[from] url::ParseError),
}

moq-api/src/lib.rs Normal file
@@ -0,0 +1,7 @@
mod client;
mod error;
mod model;
pub use client::*;
pub use error::*;
pub use model::*;

moq-api/src/main.rs Normal file
@@ -0,0 +1,14 @@
use clap::Parser;
mod server;
use moq_api::ApiError;
use server::{Server, ServerConfig};
#[tokio::main]
async fn main() -> Result<(), ApiError> {
env_logger::init();
let config = ServerConfig::parse();
let server = Server::new(config);
server.run().await
}

moq-api/src/model.rs Normal file
@@ -0,0 +1,8 @@
use serde::{Deserialize, Serialize};
use url::Url;
#[derive(Serialize, Deserialize, PartialEq, Eq)]
pub struct Origin {
pub url: Url,
}

moq-api/src/server.rs Normal file
@@ -0,0 +1,171 @@
use std::net;
use axum::{
extract::{Path, State},
http::StatusCode,
response::{IntoResponse, Response},
routing::get,
Json, Router,
};
use clap::Parser;
use redis::{aio::ConnectionManager, AsyncCommands};
use moq_api::{ApiError, Origin};
/// Runs an HTTP API to create/get origins for broadcasts.
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
pub struct ServerConfig {
/// Listen for HTTP requests on the given address
#[arg(long)]
pub listen: net::SocketAddr,
/// Connect to the given redis instance
#[arg(long)]
pub redis: url::Url,
}
pub struct Server {
config: ServerConfig,
}
impl Server {
pub fn new(config: ServerConfig) -> Self {
Self { config }
}
pub async fn run(self) -> Result<(), ApiError> {
log::info!("connecting to redis: url={}", self.config.redis);
// Create the redis client.
let redis = redis::Client::open(self.config.redis)?;
let redis = redis
.get_tokio_connection_manager() // TODO get_tokio_connection_manager_with_backoff?
.await?;
let app = Router::new()
.route(
"/origin/:id",
get(get_origin)
.post(set_origin)
.delete(delete_origin)
.patch(patch_origin),
)
.with_state(redis);
log::info!("serving requests: bind={}", self.config.listen);
axum::Server::bind(&self.config.listen)
.serve(app.into_make_service())
.await?;
Ok(())
}
}
async fn get_origin(
Path(id): Path<String>,
State(mut redis): State<ConnectionManager>,
) -> Result<Json<Origin>, AppError> {
let key = origin_key(&id);
let payload: Option<String> = redis.get(&key).await?;
let payload = payload.ok_or(AppError::NotFound)?;
let origin: Origin = serde_json::from_str(&payload)?;
Ok(Json(origin))
}
async fn set_origin(
State(mut redis): State<ConnectionManager>,
Path(id): Path<String>,
Json(origin): Json<Origin>,
) -> Result<(), AppError> {
// TODO validate origin
let key = origin_key(&id);
// Convert the input back to JSON after validating it and adding any fields (TODO)
let payload = serde_json::to_string(&origin)?;
let res: Option<String> = redis::cmd("SET")
.arg(key)
.arg(payload)
.arg("NX")
.arg("EX")
.arg(600) // Set the key to expire in 10 minutes; the origin needs to keep refreshing it.
.query_async(&mut redis)
.await?;
if res.is_none() {
return Err(AppError::Duplicate);
}
Ok(())
}
async fn delete_origin(Path(id): Path<String>, State(mut redis): State<ConnectionManager>) -> Result<(), AppError> {
let key = origin_key(&id);
match redis.del(key).await? {
0 => Err(AppError::NotFound),
_ => Ok(()),
}
}
// Update the expiration deadline.
async fn patch_origin(
Path(id): Path<String>,
State(mut redis): State<ConnectionManager>,
Json(origin): Json<Origin>,
) -> Result<(), AppError> {
let key = origin_key(&id);
// Make sure the contents haven't changed
// TODO make a LUA script to do this all in one operation.
let payload: Option<String> = redis.get(&key).await?;
let payload = payload.ok_or(AppError::NotFound)?;
let expected: Origin = serde_json::from_str(&payload)?;
if expected != origin {
return Err(AppError::Duplicate);
}
// Reset the timeout to 10 minutes.
match redis.expire(key, 600).await? {
0 => Err(AppError::NotFound),
_ => Ok(()),
}
}
fn origin_key(id: &str) -> String {
format!("origin.{}", id)
}
#[derive(thiserror::Error, Debug)]
enum AppError {
#[error("redis error")]
Redis(#[from] redis::RedisError),
#[error("json error")]
Json(#[from] serde_json::Error),
#[error("not found")]
NotFound,
#[error("duplicate ID")]
Duplicate,
}
// Tell axum how to convert `AppError` into a response.
impl IntoResponse for AppError {
fn into_response(self) -> Response {
match self {
AppError::Redis(e) => (StatusCode::INTERNAL_SERVER_ERROR, format!("redis error: {}", e)).into_response(),
AppError::Json(e) => (StatusCode::INTERNAL_SERVER_ERROR, format!("json error: {}", e)).into_response(),
AppError::NotFound => StatusCode::NOT_FOUND.into_response(),
AppError::Duplicate => StatusCode::CONFLICT.into_response(),
}
}
}

moq-pub/Cargo.toml Normal file
@@ -0,0 +1,47 @@
[package]
name = "moq-pub"
description = "Media over QUIC"
authors = ["Mike English", "Luke Curley"]
repository = "https://github.com/kixelated/moq-rs"
license = "MIT OR Apache-2.0"
version = "0.1.0"
edition = "2021"
keywords = ["quic", "http3", "webtransport", "media", "live"]
categories = ["multimedia", "network-programming", "web-programming"]
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
moq-transport = { path = "../moq-transport" }
# QUIC
quinn = "0.10"
webtransport-quinn = "0.6"
#webtransport-quinn = { path = "../../webtransport-rs/webtransport-quinn" }
url = "2"
# Crypto
rustls = { version = "0.21", features = ["dangerous_configuration"] }
rustls-native-certs = "0.6"
rustls-pemfile = "1"
# Async stuff
tokio = { version = "1", features = ["full"] }
# CLI, logging, error handling
clap = { version = "4", features = ["derive"] }
log = { version = "0.4", features = ["std"] }
env_logger = "0.9"
mp4 = "0.13"
anyhow = { version = "1", features = ["backtrace"] }
serde_json = "1"
rfc6381-codec = "0.1"
tracing = "0.1"
tracing-subscriber = "0.3"
[build-dependencies]
clap = { version = "4", features = ["derive"] }
clap_mangen = "0.2"
url = "2"

moq-pub/README.md Normal file
@@ -0,0 +1,28 @@
# moq-pub
A command line tool for publishing media via Media over QUIC (MoQ).
It expects fragmented MP4 via standard input and connects to a MOQT relay.
```
ffmpeg ... - | moq-pub https://localhost:4443
```
### Invoking `moq-pub`:
Here's how I'm currently testing things, with a local copy of Big Buck Bunny named `bbb_source.mp4`:
```
$ ffmpeg -hide_banner -v quiet -stream_loop -1 -re -i bbb_source.mp4 -an -f mp4 -movflags empty_moov+frag_every_frame+separate_moof+omit_tfhd_offset - | RUST_LOG=moq_pub=info moq-pub https://localhost:4443
```
This relies on having `moq-relay` (the relay server) already running locally in another shell.
Note also that we're dropping the audio track (`-an`) above until audio playback is stabilized on the `moq-js` side.
### Known issues
- Expects only one H.264/AVC1-encoded video track (catalog generation doesn't support audio tracks yet; see the catalog sketch below)
- Doesn't yet gracefully handle EOF - workaround: never stop sending it media (`-stream_loop -1`)
- Probably still full of lots of bugs
- Various other TODOs you can find in the code
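For context, the `.catalog` track that `moq-pub` publishes alongside the media is a JSON document describing the other tracks. Here's a sketch of what it might contain for a single AVC video track, based on `serve_catalog` in `moq-pub/src/media.rs`; the concrete values are illustrative and depend on the input file.
```
use serde_json::json;

fn main() {
    // Sketch of the catalog published on the ".catalog" track.
    // "0.mp4" is the init track and "1.m4s" the data track for track ID 1,
    // matching the naming scheme in media.rs.
    let catalog = json!({
        "tracks": [{
            "container": "mp4",
            "init_track": "0.mp4",
            "data_track": "1.m4s",
            "kind": "video",
            "codec": "avc1.64001f", // profile/constraints/level from the avcC box
            "width": 1280,
            "height": 720,
        }]
    });
    println!("{}", serde_json::to_string_pretty(&catalog).unwrap());
}
```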

moq-pub/build.rs Normal file
@@ -0,0 +1,15 @@
include!("src/cli.rs");
use clap::CommandFactory;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let out_dir = std::path::PathBuf::from(
std::env::var_os("OUT_DIR").ok_or(std::io::Error::new(std::io::ErrorKind::NotFound, "OUT_DIR not found"))?,
);
let cmd = Config::command();
let man = clap_mangen::Man::new(cmd);
let mut buffer: Vec<u8> = Default::default();
man.render(&mut buffer)?;
std::fs::write(out_dir.join("moq-pub.1"), buffer)?;
Ok(())
}

moq-pub/src/cli.rs Normal file
@@ -0,0 +1,48 @@
use clap::Parser;
use std::{net, path};
use url::Url;
#[derive(Parser, Clone, Debug)]
pub struct Config {
/// Listen for UDP packets on the given address.
#[arg(long, default_value = "[::]:0")]
pub bind: net::SocketAddr,
/// Advertise this frame rate in the catalog (informational)
// TODO auto-detect this from the input when not provided
#[arg(long, default_value = "24")]
pub fps: u8,
/// Advertise this bit rate in the catalog (informational)
// TODO auto-detect this from the input when not provided
#[arg(long, default_value = "1500000")]
pub bitrate: u32,
/// Connect to the given URL starting with https://
#[arg(value_parser = moq_url)]
pub url: Url,
/// Use the TLS root CA at this path, encoded as PEM.
///
/// This value can be provided multiple times for multiple roots.
/// If this is empty, system roots will be used instead.
#[arg(long)]
pub tls_root: Vec<path::PathBuf>,
/// Danger: Disable TLS certificate verification.
///
/// Fine for local development, but should be used with caution in production.
#[arg(long)]
pub tls_disable_verify: bool,
}
fn moq_url(s: &str) -> Result<Url, String> {
let url = Url::try_from(s).map_err(|e| e.to_string())?;
// Make sure the scheme is https
if url.scheme() != "https" {
return Err("url scheme must be https:// for WebTransport".to_string());
}
Ok(url)
}

moq-pub/src/main.rs Normal file
@@ -0,0 +1,107 @@
use std::{fs, io, sync::Arc, time};
use anyhow::Context;
use clap::Parser;
mod cli;
use cli::*;
mod media;
use media::*;
use moq_transport::cache::broadcast;
// TODO: clap complete
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
// Disable tracing so we don't get a bunch of Quinn spam.
let tracer = tracing_subscriber::FmtSubscriber::builder()
.with_max_level(tracing::Level::WARN)
.finish();
tracing::subscriber::set_global_default(tracer).unwrap();
let config = Config::parse();
let (publisher, subscriber) = broadcast::new("");
let mut media = Media::new(&config, publisher).await?;
// Create a list of acceptable root certificates.
let mut roots = rustls::RootCertStore::empty();
if config.tls_root.is_empty() {
// Add the platform's native root certificates.
for cert in rustls_native_certs::load_native_certs().context("could not load platform certs")? {
roots
.add(&rustls::Certificate(cert.0))
.context("failed to add root cert")?;
}
} else {
// Add the specified root certificates.
for root in &config.tls_root {
let root = fs::File::open(root).context("failed to open root cert file")?;
let mut root = io::BufReader::new(root);
let root = rustls_pemfile::certs(&mut root).context("failed to read root cert")?;
anyhow::ensure!(root.len() == 1, "expected a single root cert");
let root = rustls::Certificate(root[0].to_owned());
roots.add(&root).context("failed to add root cert")?;
}
}
let mut tls_config = rustls::ClientConfig::builder()
.with_safe_defaults()
.with_root_certificates(roots)
.with_no_client_auth();
// Allow disabling TLS verification altogether.
if config.tls_disable_verify {
let noop = NoCertificateVerification {};
tls_config.dangerous().set_certificate_verifier(Arc::new(noop));
}
tls_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()]; // this one is important
let arc_tls_config = std::sync::Arc::new(tls_config);
let quinn_client_config = quinn::ClientConfig::new(arc_tls_config);
let mut endpoint = quinn::Endpoint::client(config.bind)?;
endpoint.set_default_client_config(quinn_client_config);
log::info!("connecting to relay: url={}", config.url);
let session = webtransport_quinn::connect(&endpoint, &config.url)
.await
.context("failed to create WebTransport session")?;
let session = moq_transport::session::Client::publisher(session, subscriber)
.await
.context("failed to create MoQ Transport session")?;
// TODO run a task that returns a 404 for all unknown subscriptions.
tokio::select! {
res = session.run() => res.context("session error")?,
res = media.run() => res.context("media error")?,
}
Ok(())
}
pub struct NoCertificateVerification {}
impl rustls::client::ServerCertVerifier for NoCertificateVerification {
fn verify_server_cert(
&self,
_end_entity: &rustls::Certificate,
_intermediates: &[rustls::Certificate],
_server_name: &rustls::ServerName,
_scts: &mut dyn Iterator<Item = &[u8]>,
_ocsp_response: &[u8],
_now: time::SystemTime,
) -> Result<rustls::client::ServerCertVerified, rustls::Error> {
Ok(rustls::client::ServerCertVerified::assertion())
}
}

moq-pub/src/media.rs Normal file
@@ -0,0 +1,430 @@
use crate::cli::Config;
use anyhow::{self, Context};
use moq_transport::cache::{broadcast, fragment, segment, track};
use moq_transport::VarInt;
use mp4::{self, ReadBox};
use serde_json::json;
use std::cmp::max;
use std::collections::HashMap;
use std::io::Cursor;
use std::time;
use tokio::io::AsyncReadExt;
pub struct Media {
// We hold on to the publisher so we don't close it while media is still being published.
_broadcast: broadcast::Publisher,
_catalog: track::Publisher,
_init: track::Publisher,
// Tracks based on their track ID.
tracks: HashMap<u32, Track>,
}
impl Media {
pub async fn new(_config: &Config, mut broadcast: broadcast::Publisher) -> anyhow::Result<Self> {
let mut stdin = tokio::io::stdin();
let ftyp = read_atom(&mut stdin).await?;
anyhow::ensure!(&ftyp[4..8] == b"ftyp", "expected ftyp atom");
let moov = read_atom(&mut stdin).await?;
anyhow::ensure!(&moov[4..8] == b"moov", "expected moov atom");
let mut init = ftyp;
init.extend(&moov);
// We're going to parse the moov box.
// We have to read the moov box header to correctly advance the cursor for the mp4 crate.
let mut moov_reader = Cursor::new(&moov);
let moov_header = mp4::BoxHeader::read(&mut moov_reader)?;
// Parse the moov box so we can detect the timescales for each track.
let moov = mp4::MoovBox::read_box(&mut moov_reader, moov_header.size)?;
// Create the catalog track with a single segment.
let mut init_track = broadcast.create_track("0.mp4")?;
let mut init_segment = init_track.create_segment(segment::Info {
sequence: VarInt::ZERO,
priority: 0,
expires: None,
})?;
// Create a single fragment, optionally setting the size
let mut init_fragment = init_segment.create_fragment(fragment::Info {
sequence: VarInt::ZERO,
size: None, // size is only needed when we have multiple fragments.
})?;
init_fragment.write_chunk(init.into())?;
let mut tracks = HashMap::new();
for trak in &moov.traks {
let id = trak.tkhd.track_id;
let name = format!("{}.m4s", id);
let timescale = track_timescale(&moov, id);
// Store the track publisher in a map so we can update it later.
let track = broadcast.create_track(&name)?;
let track = Track::new(track, timescale);
tracks.insert(id, track);
}
let mut catalog = broadcast.create_track(".catalog")?;
// Create the catalog track
Self::serve_catalog(&mut catalog, &init_track.name, &moov)?;
Ok(Media {
_broadcast: broadcast,
_catalog: catalog,
_init: init_track,
tracks,
})
}
pub async fn run(&mut self) -> anyhow::Result<()> {
let mut stdin = tokio::io::stdin();
// The current track name
let mut current = None;
loop {
let atom = read_atom(&mut stdin).await?;
let mut reader = Cursor::new(&atom);
let header = mp4::BoxHeader::read(&mut reader)?;
match header.name {
mp4::BoxType::MoofBox => {
let moof = mp4::MoofBox::read_box(&mut reader, header.size).context("failed to read MP4")?;
// Process the moof.
let fragment = Fragment::new(moof)?;
// Get the track for this moof.
let track = self.tracks.get_mut(&fragment.track).context("failed to find track")?;
// Save the track ID for the next iteration, which must be an mdat.
anyhow::ensure!(current.is_none(), "multiple moof atoms");
current.replace(fragment.track);
// Publish the moof header, creating a new segment if it's a keyframe.
track.header(atom, fragment).context("failed to publish moof")?;
}
mp4::BoxType::MdatBox => {
// Get the track ID from the previous moof.
let track = current.take().context("missing moof")?;
let track = self.tracks.get_mut(&track).context("failed to find track")?;
// Publish the mdat atom.
track.data(atom).context("failed to publish mdat")?;
}
_ => {
// Skip unknown atoms
}
}
}
}
fn serve_catalog(
track: &mut track::Publisher,
init_track_name: &str,
moov: &mp4::MoovBox,
) -> Result<(), anyhow::Error> {
let mut segment = track.create_segment(segment::Info {
sequence: VarInt::ZERO,
priority: 0,
expires: None,
})?;
let mut tracks = Vec::new();
for trak in &moov.traks {
let mut track = json!({
"container": "mp4",
"init_track": init_track_name,
"data_track": format!("{}.m4s", trak.tkhd.track_id),
});
let stsd = &trak.mdia.minf.stbl.stsd;
if let Some(avc1) = &stsd.avc1 {
// avc1[.PPCCLL]
//
// let profile = 0x64;
// let constraints = 0x00;
// let level = 0x1f;
let profile = avc1.avcc.avc_profile_indication;
let constraints = avc1.avcc.profile_compatibility; // Not 100% certain here, but it's 0x00 on my current test video
let level = avc1.avcc.avc_level_indication;
let width = avc1.width;
let height = avc1.height;
let codec = rfc6381_codec::Codec::avc1(profile, constraints, level);
let codec_str = codec.to_string();
track["kind"] = json!("video");
track["codec"] = json!(codec_str);
track["width"] = json!(width);
track["height"] = json!(height);
} else if let Some(_hev1) = &stsd.hev1 {
// TODO https://github.com/gpac/mp4box.js/blob/325741b592d910297bf609bc7c400fc76101077b/src/box-codecs.js#L106
anyhow::bail!("HEVC not yet supported")
} else if let Some(mp4a) = &stsd.mp4a {
let desc = &mp4a
.esds
.as_ref()
.context("missing esds box for MP4a")?
.es_desc
.dec_config;
let codec_str = format!("mp4a.{:02x}.{}", desc.object_type_indication, desc.dec_specific.profile);
track["kind"] = json!("audio");
track["codec"] = json!(codec_str);
track["channel_count"] = json!(mp4a.channelcount);
track["sample_rate"] = json!(mp4a.samplerate.value());
track["sample_size"] = json!(mp4a.samplesize);
let bitrate = max(desc.max_bitrate, desc.avg_bitrate);
if bitrate > 0 {
track["bit_rate"] = json!(bitrate);
}
} else if let Some(vp09) = &stsd.vp09 {
// https://github.com/gpac/mp4box.js/blob/325741b592d910297bf609bc7c400fc76101077b/src/box-codecs.js#L238
let vpcc = &vp09.vpcc;
let codec_str = format!("vp09.0.{:02x}.{:02x}.{:02x}", vpcc.profile, vpcc.level, vpcc.bit_depth);
track["kind"] = json!("video");
track["codec"] = json!(codec_str);
track["width"] = json!(vp09.width); // no idea if this needs to be multiplied
track["height"] = json!(vp09.height); // no idea if this needs to be multiplied
// TODO Test if this actually works; I'm just guessing based on mp4box.js
anyhow::bail!("VP9 not yet supported")
} else {
// TODO add av01 support: https://github.com/gpac/mp4box.js/blob/325741b592d910297bf609bc7c400fc76101077b/src/box-codecs.js#L251
anyhow::bail!("unknown codec for track: {}", trak.tkhd.track_id);
}
tracks.push(track);
}
let catalog = json!({
"tracks": tracks
});
let catalog_str = serde_json::to_string_pretty(&catalog)?;
log::info!("catalog: {}", catalog_str);
// Create a single fragment for the segment.
let mut fragment = segment.create_fragment(fragment::Info {
sequence: VarInt::ZERO,
size: None, // Size is only needed when we have multiple fragments.
})?;
// Write the catalog JSON to the fragment.
fragment.write_chunk(catalog_str.into())?;
Ok(())
}
}
// Read a full MP4 atom into a vector.
async fn read_atom<R: AsyncReadExt + Unpin>(reader: &mut R) -> anyhow::Result<Vec<u8>> {
// Read the 8 bytes for the size + type
let mut buf = [0u8; 8];
reader.read_exact(&mut buf).await?;
// Convert the first 4 bytes into the size.
let size = u32::from_be_bytes(buf[0..4].try_into()?) as u64;
let mut raw = buf.to_vec();
let mut limit = match size {
// Runs until the end of the file.
0 => reader.take(u64::MAX),
// The next 8 bytes are the extended size to be used instead.
1 => {
reader.read_exact(&mut buf).await?;
let size_large = u64::from_be_bytes(buf);
anyhow::ensure!(size_large >= 16, "impossible extended box size: {}", size_large);
reader.take(size_large - 16)
}
2..=7 => {
anyhow::bail!("impossible box size: {}", size)
}
size => reader.take(size - 8),
};
// Append to the vector and return it.
let _read_bytes = limit.read_to_end(&mut raw).await?;
Ok(raw)
}
struct Track {
// The track we're producing
track: track::Publisher,
// The current segment
current: Option<fragment::Publisher>,
// The number of units per second.
timescale: u64,
// The number of segments produced.
sequence: u64,
}
impl Track {
fn new(track: track::Publisher, timescale: u64) -> Self {
Self {
track,
sequence: 0,
current: None,
timescale,
}
}
pub fn header(&mut self, raw: Vec<u8>, fragment: Fragment) -> anyhow::Result<()> {
if let Some(current) = self.current.as_mut() {
if !fragment.keyframe {
// Use the existing segment
current.write_chunk(raw.into())?;
return Ok(());
}
}
// Otherwise make a new segment
// Compute the timestamp in milliseconds.
// Overflows after 583 million years, so we're fine.
let timestamp: u32 = fragment
.timestamp(self.timescale)
.as_millis()
.try_into()
.context("timestamp too large")?;
// Create a new segment.
let mut segment = self.track.create_segment(segment::Info {
sequence: VarInt::try_from(self.sequence).context("sequence too large")?,
// Newer segments are higher priority
priority: u32::MAX.checked_sub(timestamp).context("priority too large")?,
// Delete segments after 10s.
expires: Some(time::Duration::from_secs(10)),
})?;
// Create a single fragment for the segment that we will keep appending.
let mut fragment = segment.create_fragment(fragment::Info {
sequence: VarInt::ZERO,
size: None,
})?;
self.sequence += 1;
// Insert the raw atom into the segment.
fragment.write_chunk(raw.into())?;
// Save for the next iteration
self.current = Some(fragment);
Ok(())
}
pub fn data(&mut self, raw: Vec<u8>) -> anyhow::Result<()> {
let fragment = self.current.as_mut().context("missing current fragment")?;
fragment.write_chunk(raw.into())?;
Ok(())
}
}
struct Fragment {
// The track for this fragment.
track: u32,
// The timestamp of the first sample in this fragment, in timescale units.
timestamp: u64,
// True if this fragment is a keyframe.
keyframe: bool,
}
impl Fragment {
fn new(moof: mp4::MoofBox) -> anyhow::Result<Self> {
// We can't split the mdat atom, so this is impossible to support
anyhow::ensure!(moof.trafs.len() == 1, "multiple tracks per moof atom");
let track = moof.trafs[0].tfhd.track_id;
// Parse the moof to get some timing information to sleep.
let timestamp = sample_timestamp(&moof).expect("couldn't find timestamp");
// Detect if we should start a new segment.
let keyframe = sample_keyframe(&moof);
Ok(Self {
track,
timestamp,
keyframe,
})
}
// Convert from timescale units to a duration.
fn timestamp(&self, timescale: u64) -> time::Duration {
time::Duration::from_millis(1000 * self.timestamp / timescale)
}
}
fn sample_timestamp(moof: &mp4::MoofBox) -> Option<u64> {
Some(moof.trafs.first()?.tfdt.as_ref()?.base_media_decode_time)
}
fn sample_keyframe(moof: &mp4::MoofBox) -> bool {
for traf in &moof.trafs {
// TODO trak default flags if this is None
let default_flags = traf.tfhd.default_sample_flags.unwrap_or_default();
let trun = match &traf.trun {
Some(t) => t,
None => return false,
};
for i in 0..trun.sample_count {
let mut flags = match trun.sample_flags.get(i as usize) {
Some(f) => *f,
None => default_flags,
};
if i == 0 && trun.first_sample_flags.is_some() {
flags = trun.first_sample_flags.unwrap();
}
// https://chromium.googlesource.com/chromium/src/media/+/master/formats/mp4/track_run_iterator.cc#177
let keyframe = (flags >> 24) & 0x3 == 0x2; // kSampleDependsOnNoOther
let non_sync = (flags >> 16) & 0x1 == 0x1; // kSampleIsNonSyncSample
if keyframe && !non_sync {
return true;
}
}
}
false
}
// Find the timescale for the given track.
fn track_timescale(moov: &mp4::MoovBox, track_id: u32) -> u64 {
let trak = moov
.traks
.iter()
.find(|trak| trak.tkhd.track_id == track_id)
.expect("failed to find trak");
trak.mdia.mdhd.timescale as u64
}

@@ -1,42 +0,0 @@
[package]
name = "moq-quinn"
description = "Media over QUIC"
authors = ["Luke Curley"]
repository = "https://github.com/kixelated/moq-rs"
license = "MIT OR Apache-2.0"
version = "0.1.0"
edition = "2021"
keywords = ["quic", "http3", "webtransport", "media", "live"]
categories = ["multimedia", "network-programming", "web-programming"]
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
moq-transport = { path = "../moq-transport" }
moq-warp = { path = "../moq-warp" }
webtransport-generic = { path = "../../webtransport-rs/webtransport-generic", version = "0.3" }
# QUIC
quinn = "0.10"
webtransport-quinn = { path = "../../webtransport-rs/webtransport-quinn", version = "0.4.2" }
# Crypto
ring = "0.16.20"
rustls = "0.21.2"
rustls-pemfile = "1.0.2"
# Async stuff
tokio = { version = "1.27", features = ["full"] }
# Web server to serve the fingerprint
warp = { version = "0.3.3", features = ["tls"] }
hex = "0.4.3"
# Logging
clap = { version = "4.0", features = ["derive"] }
log = { version = "0.4", features = ["std"] }
env_logger = "0.9.3"
anyhow = "1.0.70"

@@ -1,84 +0,0 @@
use std::{fs, io, net, path, sync};
use anyhow::Context;
use clap::Parser;
use ring::digest::{digest, SHA256};
use warp::Filter;
mod server;
use server::*;
/// Search for a pattern in a file and display the lines that contain it.
#[derive(Parser, Clone)]
struct Cli {
/// Listen on this address
#[arg(short, long, default_value = "[::]:4443")]
addr: net::SocketAddr,
/// Use the certificate file at this path
#[arg(short, long, default_value = "cert/localhost.crt")]
cert: path::PathBuf,
/// Use the private key at this path
#[arg(short, long, default_value = "cert/localhost.key")]
key: path::PathBuf,
}
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
let args = Cli::parse();
// Create a web server to serve the fingerprint
let serve = serve_http(args.clone());
// Create a server to actually serve the media
let config = ServerConfig {
addr: args.addr,
cert: args.cert,
key: args.key,
};
let server = Server::new(config).context("failed to create server")?;
// Run all of the above
tokio::select! {
res = server.run() => res.context("failed to run server"),
res = serve => res.context("failed to run HTTP server"),
}
}
// Run a HTTP server using Warp
// TODO remove this when Chrome adds support for self-signed certificates using WebTransport
async fn serve_http(args: Cli) -> anyhow::Result<()> {
// Read the PEM certificate file
let crt = fs::File::open(&args.cert)?;
let mut crt = io::BufReader::new(crt);
// Parse the DER certificate
let certs = rustls_pemfile::certs(&mut crt)?;
let cert = certs.first().expect("no certificate found");
// Compute the SHA-256 digest
let fingerprint = digest(&SHA256, cert.as_ref());
let fingerprint = hex::encode(fingerprint.as_ref());
let fingerprint = sync::Arc::new(fingerprint);
let cors = warp::cors().allow_any_origin();
// What an annoyingly complicated way to serve a static String
// I spent a long time trying to find the exact way of cloning and dereferencing the Arc.
let routes = warp::path!("fingerprint")
.map(move || (*(fingerprint.clone())).clone())
.with(cors);
warp::serve(routes)
.tls()
.cert_path(args.cert)
.key_path(args.key)
.run(args.addr)
.await;
Ok(())
}

@@ -1,116 +0,0 @@
use std::{fs, io, net, path, sync, time};
use anyhow::Context;
use moq_warp::relay;
use tokio::task::JoinSet;
pub struct Server {
server: quinn::Endpoint,
// The media sources.
broker: relay::Broker,
// The active connections.
conns: JoinSet<anyhow::Result<()>>,
}
pub struct ServerConfig {
pub addr: net::SocketAddr,
pub cert: path::PathBuf,
pub key: path::PathBuf,
}
impl Server {
// Create a new server
pub fn new(config: ServerConfig) -> anyhow::Result<Self> {
// Read the PEM certificate chain
let certs = fs::File::open(config.cert).context("failed to open cert file")?;
let mut certs = io::BufReader::new(certs);
let certs = rustls_pemfile::certs(&mut certs)?
.into_iter()
.map(rustls::Certificate)
.collect();
// Read the PEM private key
let keys = fs::File::open(config.key).context("failed to open key file")?;
let mut keys = io::BufReader::new(keys);
let mut keys = rustls_pemfile::pkcs8_private_keys(&mut keys)?;
anyhow::ensure!(keys.len() == 1, "expected a single key");
let key = rustls::PrivateKey(keys.remove(0));
let mut tls_config = rustls::ServerConfig::builder()
.with_safe_default_cipher_suites()
.with_safe_default_kx_groups()
.with_protocol_versions(&[&rustls::version::TLS13])
.unwrap()
.with_no_client_auth()
.with_single_cert(certs, key)?;
tls_config.max_early_data_size = u32::MAX;
tls_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()];
let mut server_config = quinn::ServerConfig::with_crypto(sync::Arc::new(tls_config));
// Enable BBR congestion control
// TODO validate the implementation
let mut transport_config = quinn::TransportConfig::default();
transport_config.keep_alive_interval(Some(time::Duration::from_secs(2)));
transport_config.congestion_controller_factory(sync::Arc::new(quinn::congestion::BbrConfig::default()));
server_config.transport = sync::Arc::new(transport_config);
let server = quinn::Endpoint::server(server_config, config.addr)?;
let broker = relay::Broker::new();
let conns = JoinSet::new();
Ok(Self { server, broker, conns })
}
pub async fn run(mut self) -> anyhow::Result<()> {
loop {
tokio::select! {
res = self.server.accept() => {
let conn = res.context("failed to accept QUIC connection")?;
let broker = self.broker.clone();
self.conns.spawn(async move { Self::handle(conn, broker).await });
},
res = self.conns.join_next(), if !self.conns.is_empty() => {
let res = res.expect("no tasks").expect("task aborted");
if let Err(err) = res {
log::error!("connection terminated: {:?}", err);
}
},
}
}
}
async fn handle(conn: quinn::Connecting, broker: relay::Broker) -> anyhow::Result<()> {
// Wait for the QUIC connection to be established.
let conn = conn.await.context("failed to establish QUIC connection")?;
// Wait for the CONNECT request.
let request = webtransport_quinn::accept(conn)
.await
.context("failed to receive WebTransport request")?;
// TODO parse the request URI
// Accept the CONNECT request.
let session = request
.ok()
.await
.context("failed to respond to WebTransport request")?;
// Perform the MoQ handshake.
let session = moq_transport::Session::accept(session, moq_transport::setup::Role::Both)
.await
.context("failed to perform MoQ handshake")?;
// Run the relay code.
let session = relay::Session::new(session, broker);
session.run().await
}
}

moq-relay/Cargo.toml Normal file
@@ -0,0 +1,51 @@
[package]
name = "moq-relay"
description = "Media over QUIC"
authors = ["Luke Curley"]
repository = "https://github.com/kixelated/moq-rs"
license = "MIT OR Apache-2.0"
version = "0.1.0"
edition = "2021"
keywords = ["quic", "http3", "webtransport", "media", "live"]
categories = ["multimedia", "network-programming", "web-programming"]
[dependencies]
moq-transport = { path = "../moq-transport" }
moq-api = { path = "../moq-api" }
# QUIC
quinn = "0.10"
webtransport-quinn = "0.6"
#webtransport-quinn = { path = "../../webtransport-rs/webtransport-quinn" }
url = "2"
# Crypto
ring = "0.16"
rustls = { version = "0.21", features = ["dangerous_configuration"] }
rustls-pemfile = "1"
rustls-native-certs = "0.6"
webpki = "0.22"
# Async stuff
tokio = { version = "1", features = ["full"] }
# Web server to serve the fingerprint
axum = { version = "0.6", features = ["tokio"] }
axum-server = { version = "0.5", features = ["tls-rustls"] }
hex = "0.4"
tower-http = { version = "0.4", features = ["cors"] }
# Error handling
anyhow = { version = "1", features = ["backtrace"] }
thiserror = "1"
# CLI
clap = { version = "4", features = ["derive"] }
# Logging
log = { version = "0.4", features = ["std"] }
env_logger = "0.9"
tracing = "0.1"
tracing-subscriber = "0.3"

moq-relay/README.md Normal file
@@ -0,0 +1,17 @@
# moq-relay
A server that connects publishing clients to subscribing clients.
All subscriptions are deduplicated and cached, so that a single publisher can serve many subscribers.
## Usage
The publisher must choose a unique name for their broadcast, sent as the WebTransport path when connecting to the server.
We currently do a dumb string comparison, so capitalization matters, as do slashes.
For example: `CONNECT https://relay.quic.video/BigBuckBunny`
The MoqTransport handshake includes a `role` parameter, which must be `publisher` or `subscriber`.
The specification allows a `both` role but you'll get an error.
You can have one publisher and any number of subscribers connected to the same path.
If the publisher disconnects, then all subscribers receive an error and will not get updates, even if a new publisher reuses the path.
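For illustration, here's a minimal sketch of the subscriber side of this handshake, pieced together from `moq-relay/src/origin.rs` and `moq-pub/src/main.rs`; the QUIC endpoint setup (TLS roots, ALPN) is omitted and the URL is an example.
```
use anyhow::Context;
use moq_transport::cache::broadcast;

async fn subscribe(quic: &quinn::Endpoint) -> anyhow::Result<()> {
    // The WebTransport path is the broadcast name.
    let url = url::Url::parse("https://relay.quic.video/BigBuckBunny")?;
    let (publisher, _subscriber) = broadcast::new("BigBuckBunny");

    // Establish the WebTransport session...
    let session = webtransport_quinn::connect(quic, &url)
        .await
        .context("failed to create WebTransport session")?;

    // ...then perform the MoQ handshake with the subscriber role.
    let session = moq_transport::session::Client::subscriber(session, publisher)
        .await
        .context("failed to create MoQ Transport session")?;

    // Incoming tracks are now readable via `_subscriber`.
    session.run().await.context("session error")
}
```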

moq-relay/src/config.rs Normal file
@@ -0,0 +1,55 @@
use std::{net, path};
use url::Url;
use clap::Parser;
/// Configuration for the moq-relay server.
#[derive(Parser, Clone)]
pub struct Config {
/// Listen on this address
#[arg(long, default_value = "[::]:4443")]
pub listen: net::SocketAddr,
/// Use the certificates at this path, encoded as PEM.
///
/// You can use this option multiple times for multiple certificates.
/// The first match for the provided SNI will be used, otherwise the last cert will be used.
/// You also need to provide a private key for each certificate via `--tls-key`.
#[arg(long)]
pub tls_cert: Vec<path::PathBuf>,
/// Use the private key at this path, encoded as PEM.
///
/// There must be a key for every certificate provided via `--tls-cert`.
#[arg(long)]
pub tls_key: Vec<path::PathBuf>,
/// Use the TLS root at this path, encoded as PEM.
///
/// This value can be provided multiple times for multiple roots.
/// If this is empty, system roots will be used instead.
#[arg(long)]
pub tls_root: Vec<path::PathBuf>,
/// Danger: Disable TLS certificate verification.
///
/// Fine for local development and between relays, but should be used with caution in production.
#[arg(long)]
pub tls_disable_verify: bool,
/// Optional: Use the moq-api via HTTP to store origin information.
#[arg(long)]
pub api: Option<Url>,
/// Our internal address which we advertise to other origins.
/// We use QUIC, so the certificate must be valid for this address.
/// This needs to be prefixed with https:// to use WebTransport.
/// This is only used when --api is set and only for publishing broadcasts.
#[arg(long)]
pub api_node: Option<Url>,
/// Enable development mode.
/// Currently, this only listens on HTTPS and serves /fingerprint, for self-signed certificates
#[arg(long, action)]
pub dev: bool,
}

moq-relay/src/error.rs Normal file
@@ -0,0 +1,51 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum RelayError {
#[error("transport error: {0}")]
Transport(#[from] moq_transport::session::SessionError),
#[error("cache error: {0}")]
Cache(#[from] moq_transport::cache::CacheError),
#[error("api error: {0}")]
MoqApi(#[from] moq_api::ApiError),
#[error("url error: {0}")]
Url(#[from] url::ParseError),
#[error("webtransport client error: {0}")]
WebTransportClient(#[from] webtransport_quinn::ClientError),
#[error("webtransport server error: {0}")]
WebTransportServer(#[from] webtransport_quinn::ServerError),
#[error("missing node")]
MissingNode,
}
impl moq_transport::MoqError for RelayError {
fn code(&self) -> u32 {
match self {
Self::Transport(err) => err.code(),
Self::Cache(err) => err.code(),
Self::MoqApi(_err) => 504,
Self::Url(_) => 500,
Self::MissingNode => 500,
Self::WebTransportClient(_) => 504,
Self::WebTransportServer(_) => 500,
}
}
fn reason(&self) -> String {
match self {
Self::Transport(err) => format!("transport error: {}", err.reason()),
Self::Cache(err) => format!("cache error: {}", err.reason()),
Self::MoqApi(err) => format!("api error: {}", err),
Self::Url(err) => format!("url error: {}", err),
Self::MissingNode => "missing node".to_owned(),
Self::WebTransportServer(err) => format!("upstream server error: {}", err),
Self::WebTransportClient(err) => format!("upstream client error: {}", err),
}
}
}

moq-relay/src/main.rs Normal file
@@ -0,0 +1,51 @@
use anyhow::Context;
use clap::Parser;
mod config;
mod error;
mod origin;
mod quic;
mod session;
mod tls;
mod web;
pub use config::*;
pub use error::*;
pub use origin::*;
pub use quic::*;
pub use session::*;
pub use tls::*;
pub use web::*;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
env_logger::init();
// Disable tracing so we don't get a bunch of Quinn spam.
let tracer = tracing_subscriber::FmtSubscriber::builder()
.with_max_level(tracing::Level::WARN)
.finish();
tracing::subscriber::set_global_default(tracer).unwrap();
let config = Config::parse();
let tls = Tls::load(&config)?;
// Create a QUIC server for media.
let quic = Quic::new(config.clone(), tls.clone())
.await
.context("failed to create server")?;
// Create the web server if the --dev flag was set.
// This is currently only useful in local development so it's not enabled by default.
if config.dev {
let web = Web::new(config, tls);
// Unfortunately we can't use preconditions because Tokio still executes the branch; just ignore the result
tokio::select! {
res = quic.serve() => res.context("failed to run quic server"),
res = web.serve() => res.context("failed to run web server"),
}
} else {
quic.serve().await.context("failed to run quic server")
}
}

moq-relay/src/origin.rs Normal file
@@ -0,0 +1,216 @@
use std::ops::{Deref, DerefMut};
use std::{
collections::HashMap,
sync::{Arc, Mutex, Weak},
};
use moq_api::ApiError;
use moq_transport::cache::{broadcast, CacheError};
use url::Url;
use tokio::time;
use crate::RelayError;
#[derive(Clone)]
pub struct Origin {
// An API client used to get/set broadcasts.
// If None then we never use a remote origin.
// TODO: Stub this out instead.
api: Option<moq_api::Client>,
// The internal address of our node.
// If None then we can never advertise ourselves as an origin.
// TODO: Stub this out instead.
node: Option<Url>,
// A map of active broadcasts by ID.
cache: Arc<Mutex<HashMap<String, Weak<Subscriber>>>>,
// A QUIC endpoint we'll use to fetch from other origins.
quic: quinn::Endpoint,
}
impl Origin {
pub fn new(api: Option<moq_api::Client>, node: Option<Url>, quic: quinn::Endpoint) -> Self {
Self {
api,
node,
cache: Default::default(),
quic,
}
}
/// Create a new broadcast with the given ID.
///
/// Publisher::run needs to be called to periodically refresh the origin cache.
pub async fn publish(&mut self, id: &str) -> Result<Publisher, RelayError> {
let (publisher, subscriber) = broadcast::new(id);
let subscriber = {
let mut cache = self.cache.lock().unwrap();
// Check if the broadcast already exists.
// TODO This is racey, because a new publisher could be created while existing subscribers are still active.
if cache.contains_key(id) {
return Err(CacheError::Duplicate.into());
}
// Create subscriber that will remove from the cache when dropped.
let subscriber = Arc::new(Subscriber {
broadcast: subscriber,
origin: self.clone(),
});
cache.insert(id.to_string(), Arc::downgrade(&subscriber));
subscriber
};
// Create a publisher that constantly updates itself as the origin in moq-api.
// It holds a reference to the subscriber to prevent dropping early.
let mut publisher = Publisher {
broadcast: publisher,
subscriber,
api: None,
};
// Insert the publisher into the database.
if let Some(api) = self.api.as_mut() {
// Make a URL for the broadcast.
let url = self.node.as_ref().ok_or(RelayError::MissingNode)?.clone().join(id)?;
let origin = moq_api::Origin { url };
api.set_origin(id, &origin).await?;
// Refresh every 5 minutes
publisher.api = Some((api.clone(), origin));
}
Ok(publisher)
}
pub fn subscribe(&self, id: &str) -> Arc<Subscriber> {
let mut cache = self.cache.lock().unwrap();
if let Some(broadcast) = cache.get(id) {
if let Some(broadcast) = broadcast.upgrade() {
return broadcast;
}
}
let (publisher, subscriber) = broadcast::new(id);
let subscriber = Arc::new(Subscriber {
broadcast: subscriber,
origin: self.clone(),
});
cache.insert(id.to_string(), Arc::downgrade(&subscriber));
let mut this = self.clone();
let id = id.to_string();
// Rather than fetching from the API and connecting via QUIC inline, we'll spawn a task to do it.
// This way we can stop polling this session without impacting other sessions.
// It also means we'll only connect the API and QUIC once if N subscribers suddenly show up.
// However, the downside is that we don't return an error immediately.
// If that's important, it can be done but it gets a bit racey.
tokio::spawn(async move {
if let Err(err) = this.serve(&id, publisher).await {
log::warn!("failed to serve remote broadcast: id={} err={}", id, err);
}
});
subscriber
}
async fn serve(&mut self, id: &str, publisher: broadcast::Publisher) -> Result<(), RelayError> {
log::debug!("finding origin: id={}", id);
// Fetch the origin from the API.
let origin = self
.api
.as_mut()
.ok_or(CacheError::NotFound)?
.get_origin(id)
.await?
.ok_or(CacheError::NotFound)?;
log::debug!("fetching from origin: id={} url={}", id, origin.url);
// Establish the webtransport session.
let session = webtransport_quinn::connect(&self.quic, &origin.url).await?;
let session = moq_transport::session::Client::subscriber(session, publisher).await?;
session.run().await?;
Ok(())
}
}
pub struct Subscriber {
pub broadcast: broadcast::Subscriber,
origin: Origin,
}
impl Drop for Subscriber {
fn drop(&mut self) {
self.origin.cache.lock().unwrap().remove(&self.broadcast.id);
}
}
impl Deref for Subscriber {
type Target = broadcast::Subscriber;
fn deref(&self) -> &Self::Target {
&self.broadcast
}
}
pub struct Publisher {
pub broadcast: broadcast::Publisher,
api: Option<(moq_api::Client, moq_api::Origin)>,
#[allow(dead_code)]
subscriber: Arc<Subscriber>,
}
impl Publisher {
pub async fn run(&mut self) -> Result<(), ApiError> {
// Every 5m tell the API we're still alive.
// TODO don't hard-code these values
let mut interval = time::interval(time::Duration::from_secs(60 * 5));
loop {
if let Some((api, origin)) = self.api.as_mut() {
api.patch_origin(&self.broadcast.id, origin).await?;
}
// TODO move to start of loop; this is just for testing
interval.tick().await;
}
}
pub async fn close(&mut self) -> Result<(), ApiError> {
if let Some((api, _)) = self.api.as_mut() {
api.delete_origin(&self.broadcast.id).await?;
}
Ok(())
}
}
impl Deref for Publisher {
type Target = broadcast::Publisher;
fn deref(&self) -> &Self::Target {
&self.broadcast
}
}
impl DerefMut for Publisher {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.broadcast
}
}

moq-relay/src/quic.rs Normal file
@@ -0,0 +1,85 @@
use std::{sync::Arc, time};
use anyhow::Context;
use tokio::task::JoinSet;
use crate::{Config, Origin, Session, Tls};
pub struct Quic {
quic: quinn::Endpoint,
// The active connections.
conns: JoinSet<anyhow::Result<()>>,
// The map of active broadcasts by path.
origin: Origin,
}
impl Quic {
// Create a QUIC endpoint that can be used for both clients and servers.
pub async fn new(config: Config, tls: Tls) -> anyhow::Result<Self> {
let mut client_config = tls.client.clone();
let mut server_config = tls.server.clone();
client_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()];
server_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()];
// Enable BBR congestion control
// TODO validate the implementation
let mut transport_config = quinn::TransportConfig::default();
transport_config.max_idle_timeout(Some(time::Duration::from_secs(10).try_into().unwrap()));
transport_config.keep_alive_interval(Some(time::Duration::from_secs(4))); // TODO make this smarter
transport_config.congestion_controller_factory(Arc::new(quinn::congestion::BbrConfig::default()));
transport_config.mtu_discovery_config(None); // Disable MTU discovery
let transport_config = Arc::new(transport_config);
let mut client_config = quinn::ClientConfig::new(Arc::new(client_config));
let mut server_config = quinn::ServerConfig::with_crypto(Arc::new(server_config));
server_config.transport_config(transport_config.clone());
client_config.transport_config(transport_config);
// There's a bit more boilerplate to make a generic endpoint.
let runtime = quinn::default_runtime().context("no async runtime")?;
let endpoint_config = quinn::EndpointConfig::default();
let socket = std::net::UdpSocket::bind(config.listen).context("failed to bind UDP socket")?;
// Create the generic QUIC endpoint.
let mut quic = quinn::Endpoint::new(endpoint_config, Some(server_config), socket, runtime)
.context("failed to create QUIC endpoint")?;
quic.set_default_client_config(client_config);
let api = config.api.map(|url| {
log::info!("using moq-api: url={}", url);
moq_api::Client::new(url)
});
if let Some(ref node) = config.api_node {
log::info!("advertising origin: url={}", node);
}
let origin = Origin::new(api, config.api_node, quic.clone());
let conns = JoinSet::new();
Ok(Self { quic, origin, conns })
}
pub async fn serve(mut self) -> anyhow::Result<()> {
log::info!("listening on {}", self.quic.local_addr()?);
loop {
tokio::select! {
res = self.quic.accept() => {
let conn = res.context("failed to accept QUIC connection")?;
let mut session = Session::new(self.origin.clone());
self.conns.spawn(async move { session.run(conn).await });
},
res = self.conns.join_next(), if !self.conns.is_empty() => {
let res = res.expect("no tasks").expect("task aborted");
if let Err(err) = res {
log::warn!("connection terminated: {:?}", err);
}
},
}
}
}
}

moq-relay/src/session.rs Normal file
@@ -0,0 +1,111 @@
use anyhow::Context;
use moq_transport::{session::Request, setup::Role, MoqError};
use crate::Origin;
#[derive(Clone)]
pub struct Session {
origin: Origin,
}
impl Session {
pub fn new(origin: Origin) -> Self {
Self { origin }
}
pub async fn run(&mut self, conn: quinn::Connecting) -> anyhow::Result<()> {
log::debug!("received QUIC handshake: ip={:?}", conn.remote_address());
// Wait for the QUIC connection to be established.
let conn = conn.await.context("failed to establish QUIC connection")?;
log::debug!(
"established QUIC connection: ip={:?} id={}",
conn.remote_address(),
conn.stable_id()
);
let id = conn.stable_id();
// Wait for the CONNECT request.
let request = webtransport_quinn::accept(conn)
.await
.context("failed to receive WebTransport request")?;
// Strip any leading and trailing slashes to get the broadcast name.
let path = request.url().path().trim_matches('/').to_string();
log::debug!("received WebTransport CONNECT: id={} path={}", id, path);
// Accept the CONNECT request.
let session = request
.ok()
.await
.context("failed to respond to WebTransport request")?;
// Perform the MoQ handshake.
let request = moq_transport::session::Server::accept(session)
.await
.context("failed to accept handshake")?;
log::debug!("received MoQ SETUP: id={} role={:?}", id, request.role());
let role = request.role();
match role {
Role::Publisher => {
if let Err(err) = self.serve_publisher(id, request, &path).await {
log::warn!("error serving publisher: id={} path={} err={:#?}", id, path, err);
}
}
Role::Subscriber => {
if let Err(err) = self.serve_subscriber(id, request, &path).await {
log::warn!("error serving subscriber: id={} path={} err={:#?}", id, path, err);
}
}
Role::Both => {
log::warn!("role both not supported: id={}", id);
request.reject(300);
}
};
log::debug!("closing connection: id={}", id);
Ok(())
}
async fn serve_publisher(&mut self, id: usize, request: Request, path: &str) -> anyhow::Result<()> {
log::info!("serving publisher: id={}, path={}", id, path);
let mut origin = match self.origin.publish(path).await {
Ok(origin) => origin,
Err(err) => {
request.reject(err.code());
return Err(err.into());
}
};
let session = request.subscriber(origin.broadcast.clone()).await?;
tokio::select! {
_ = session.run() => origin.close().await?,
_ = origin.run() => (), // TODO send error to session
};
Ok(())
}
async fn serve_subscriber(&mut self, id: usize, request: Request, path: &str) -> anyhow::Result<()> {
log::info!("serving subscriber: id={} path={}", id, path);
let subscriber = self.origin.subscribe(path);
let session = request.publisher(subscriber.broadcast.clone()).await?;
session.run().await?;
// Make sure this doesn't get dropped too early
drop(subscriber);
Ok(())
}
}

moq-relay/src/tls.rs Normal file
@@ -0,0 +1,182 @@
use anyhow::Context;
use ring::digest::{digest, SHA256};
use rustls::server::{ClientHello, ResolvesServerCert};
use rustls::sign::CertifiedKey;
use rustls::{Certificate, PrivateKey, RootCertStore};
use std::io::{self, Cursor, Read};
use std::path;
use std::sync::Arc;
use std::{fs, time};
use webpki::{DnsNameRef, EndEntityCert};
use crate::Config;
#[derive(Clone)]
pub struct Tls {
pub server: rustls::ServerConfig,
pub client: rustls::ClientConfig,
pub fingerprints: Vec<String>,
}
impl Tls {
pub fn load(config: &Config) -> anyhow::Result<Self> {
let mut serve = ServeCerts::default();
// Load the certificate and key files based on their index.
anyhow::ensure!(
config.tls_cert.len() == config.tls_key.len(),
"--tls-cert and --tls-key counts differ"
);
for (chain, key) in config.tls_cert.iter().zip(config.tls_key.iter()) {
serve.load(chain, key)?;
}
// Create a list of acceptable root certificates.
let mut roots = RootCertStore::empty();
if config.tls_root.is_empty() {
// Add the platform's native root certificates.
for cert in rustls_native_certs::load_native_certs().context("could not load platform certs")? {
roots.add(&Certificate(cert.0)).context("failed to add root cert")?;
}
} else {
// Add the specified root certificates.
for root in &config.tls_root {
let root = fs::File::open(root).context("failed to open root cert file")?;
let mut root = io::BufReader::new(root);
let root = rustls_pemfile::certs(&mut root).context("failed to read root cert")?;
anyhow::ensure!(root.len() == 1, "expected a single root cert");
let root = Certificate(root[0].to_owned());
roots.add(&root).context("failed to add root cert")?;
}
}
// Create the TLS configuration we'll use as a client (relay -> relay)
let mut client = rustls::ClientConfig::builder()
.with_safe_defaults()
.with_root_certificates(roots)
.with_no_client_auth();
// Allow disabling TLS verification altogether.
if config.tls_disable_verify {
let noop = NoCertificateVerification {};
client.dangerous().set_certificate_verifier(Arc::new(noop));
}
let fingerprints = serve.fingerprints();
// Create the TLS configuration we'll use as a server (relay <- browser)
let server = rustls::ServerConfig::builder()
.with_safe_defaults()
.with_no_client_auth()
.with_cert_resolver(Arc::new(serve));
let certs = Self {
server,
client,
fingerprints,
};
Ok(certs)
}
}
#[derive(Default)]
struct ServeCerts {
list: Vec<Arc<CertifiedKey>>,
}
impl ServeCerts {
// Load a certificate and corresponding key from a file
pub fn load(&mut self, chain: &path::PathBuf, key: &path::PathBuf) -> anyhow::Result<()> {
// Read the PEM certificate chain
let chain = fs::File::open(chain).context("failed to open cert file")?;
let mut chain = io::BufReader::new(chain);
let chain: Vec<Certificate> = rustls_pemfile::certs(&mut chain)?
.into_iter()
.map(Certificate)
.collect();
anyhow::ensure!(!chain.is_empty(), "could not find certificate");
// Read the PEM private key
let mut keys = fs::File::open(key).context("failed to open key file")?;
// Read the key file into a Vec so we can parse it twice.
let mut buf = Vec::new();
keys.read_to_end(&mut buf)?;
// Try to parse a PKCS#8 key
// -----BEGIN PRIVATE KEY-----
let mut keys = rustls_pemfile::pkcs8_private_keys(&mut Cursor::new(&buf))?;
// Try again but with EC keys this time
// -----BEGIN EC PRIVATE KEY-----
if keys.is_empty() {
keys = rustls_pemfile::ec_private_keys(&mut Cursor::new(&buf))?
};
anyhow::ensure!(!keys.is_empty(), "could not find private key");
anyhow::ensure!(keys.len() < 2, "expected a single key");
let key = PrivateKey(keys.remove(0));
let key = rustls::sign::any_supported_type(&key)?;
let certified = Arc::new(CertifiedKey::new(chain, key));
self.list.push(certified);
Ok(())
}
// Return the SHA256 fingerprint of our certificates.
pub fn fingerprints(&self) -> Vec<String> {
self.list
.iter()
.map(|ck| {
let fingerprint = digest(&SHA256, ck.cert[0].as_ref());
let fingerprint = hex::encode(fingerprint.as_ref());
fingerprint
})
.collect()
}
}
impl ResolvesServerCert for ServeCerts {
fn resolve(&self, client_hello: ClientHello<'_>) -> Option<Arc<CertifiedKey>> {
if let Some(name) = client_hello.server_name() {
if let Ok(dns_name) = DnsNameRef::try_from_ascii_str(name) {
for ck in &self.list {
// TODO I gave up on caching the parsed result because of lifetime hell.
// If this shows up on benchmarks, somebody should fix it.
let leaf = ck.cert.first().expect("missing certificate");
let parsed = EndEntityCert::try_from(leaf.0.as_ref()).expect("failed to parse certificate");
if parsed.verify_is_valid_for_dns_name(dns_name).is_ok() {
return Some(ck.clone());
}
}
}
}
// Default to the last certificate if we couldn't find one.
self.list.last().cloned()
}
}
pub struct NoCertificateVerification {}
impl rustls::client::ServerCertVerifier for NoCertificateVerification {
fn verify_server_cert(
&self,
_end_entity: &rustls::Certificate,
_intermediates: &[rustls::Certificate],
_server_name: &rustls::ServerName,
_scts: &mut dyn Iterator<Item = &[u8]>,
_ocsp_response: &[u8],
_now: time::SystemTime,
) -> Result<rustls::client::ServerCertVerified, rustls::Error> {
Ok(rustls::client::ServerCertVerified::assertion())
}
}
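For reference, the fingerprint served to clients is just the lowercase hex SHA-256 digest of the leaf certificate's DER bytes, matching ServeCerts::fingerprints above. A minimal sketch (not part of the diff), assuming `der` holds a DER-encoded certificate:
use ring::digest::{digest, SHA256};
// Same derivation as ServeCerts::fingerprints: SHA-256 over the
// DER-encoded leaf certificate, hex-encoded.
fn cert_fingerprint(der: &[u8]) -> String {
    hex::encode(digest(&SHA256, der).as_ref())
}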

44
moq-relay/src/web.rs Normal file

@ -0,0 +1,44 @@
use std::sync::Arc;
use axum::{extract::State, http::Method, response::IntoResponse, routing::get, Router};
use axum_server::{tls_rustls::RustlsAcceptor, Server};
use tower_http::cors::{Any, CorsLayer};
use crate::{Config, Tls};
// Run a HTTP server using Axum
// TODO remove this when Chrome adds support for self-signed certificates using WebTransport
pub struct Web {
app: Router,
server: Server<RustlsAcceptor>,
}
impl Web {
pub fn new(config: Config, tls: Tls) -> Self {
// Get the first certificate's fingerprint.
// TODO serve all of them so we can support multiple signature algorithms.
let fingerprint = tls.fingerprints.first().expect("missing certificate").clone();
let mut tls_config = tls.server.clone();
tls_config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec()];
let tls_config = axum_server::tls_rustls::RustlsConfig::from_config(Arc::new(tls_config));
let app = Router::new()
.route("/fingerprint", get(serve_fingerprint))
.layer(CorsLayer::new().allow_origin(Any).allow_methods([Method::GET]))
.with_state(fingerprint);
let server = axum_server::bind_rustls(config.listen, tls_config);
Self { app, server }
}
pub async fn serve(self) -> anyhow::Result<()> {
self.server.serve(self.app.into_make_service()).await?;
Ok(())
}
}
async fn serve_fingerprint(State(fingerprint): State<String>) -> impl IntoResponse {
fingerprint
}
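This endpoint exists so a development client can fetch the certificate hash before dialing WebTransport (e.g. for Chrome's serverCertificateHashes). A hedged client-side sketch, assuming a `reqwest` dependency (not used by the server code in this repo):
// Hypothetical helper: fetch the hex fingerprint published above.
// Note: a dev client may also need to trust the self-signed root for
// this HTTPS request to succeed in the first place.
async fn fetch_fingerprint(addr: &str) -> anyhow::Result<String> {
    let url = format!("https://{}/fingerprint", addr);
    let text = reqwest::get(&url).await?.text().await?;
    Ok(text.trim().to_string())
}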

1162
moq-transport/Cargo.lock generated

File diff suppressed because it is too large.

moq-transport/Cargo.toml

@ -5,7 +5,7 @@ authors = ["Luke Curley"]
repository = "https://github.com/kixelated/moq-rs"
license = "MIT OR Apache-2.0"
version = "0.1.0"
version = "0.2.0"
edition = "2021"
keywords = ["quic", "http3", "webtransport", "media", "live"]
@ -17,6 +17,13 @@ categories = ["multimedia", "network-programming", "web-programming"]
[dependencies]
bytes = "1"
thiserror = "1"
anyhow = "1"
webtransport-generic = { path = "../../webtransport-rs/webtransport-generic", version = "0.3" }
tokio = { version = "1.27", features = ["macros", "io-util"] }
tokio = { version = "1", features = ["macros", "io-util", "sync"] }
log = "0.4"
indexmap = "2"
quinn = "0.10"
webtransport-quinn = "0.6"
#webtransport-quinn = { path = "../../webtransport-rs/webtransport-quinn" }
async-trait = "0.1"
paste = "1"

10
moq-transport/README.md Normal file

@ -0,0 +1,10 @@
[![Documentation](https://docs.rs/moq-transport/badge.svg)](https://docs.rs/moq-transport/)
[![Crates.io](https://img.shields.io/crates/v/moq-transport.svg)](https://crates.io/crates/moq-transport)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](LICENSE-MIT)
# moq-transport
A Rust implementation of the proposed IETF standard.
[Specification](https://datatracker.ietf.org/doc/draft-ietf-moq-transport/)
[Github](https://github.com/moq-wg/moq-transport)

262
moq-transport/src/cache/broadcast.rs vendored Normal file

@ -0,0 +1,262 @@
//! A broadcast is a collection of tracks, split into two handles: [Publisher] and [Subscriber].
//!
//! The [Publisher] can create tracks, either manually or on request.
//! It receives all requests from a [Subscriber] for tracks that don't exist.
//! The simplest implementation is to close every unknown track with [CacheError::NotFound].
//!
//! A [Subscriber] can request tracks by name.
//! If the track already exists, it will be returned.
//! If the track doesn't exist, it will be sent to [Unknown] to be handled.
//! A [Subscriber] can be cloned to create multiple subscriptions.
//!
//! The broadcast is automatically closed with [CacheError::Closed] when [Publisher] is dropped, or all [Subscriber]s are dropped.
use std::{
collections::{hash_map, HashMap, VecDeque},
fmt,
ops::Deref,
sync::Arc,
};
use super::{track, CacheError, Watch};
/// Create a new broadcast.
pub fn new(id: &str) -> (Publisher, Subscriber) {
let state = Watch::new(State::default());
let info = Arc::new(Info { id: id.to_string() });
let publisher = Publisher::new(state.clone(), info.clone());
let subscriber = Subscriber::new(state, info);
(publisher, subscriber)
}
/// Static information about a broadcast.
#[derive(Debug)]
pub struct Info {
pub id: String,
}
/// Dynamic information about the broadcast.
#[derive(Debug)]
struct State {
tracks: HashMap<String, track::Subscriber>,
requested: VecDeque<track::Publisher>,
closed: Result<(), CacheError>,
}
impl State {
pub fn get(&self, name: &str) -> Result<Option<track::Subscriber>, CacheError> {
// Don't check closed, so we can return from cache.
Ok(self.tracks.get(name).cloned())
}
pub fn insert(&mut self, track: track::Subscriber) -> Result<(), CacheError> {
self.closed.clone()?;
match self.tracks.entry(track.name.clone()) {
hash_map::Entry::Occupied(_) => return Err(CacheError::Duplicate),
hash_map::Entry::Vacant(v) => v.insert(track),
};
Ok(())
}
pub fn request(&mut self, name: &str) -> Result<track::Subscriber, CacheError> {
self.closed.clone()?;
// Create a new track.
let (publisher, subscriber) = track::new(name);
// Insert the track into our Map so we deduplicate future requests.
self.tracks.insert(name.to_string(), subscriber.clone());
// Send the track to the Publisher to handle.
self.requested.push_back(publisher);
Ok(subscriber)
}
pub fn has_next(&self) -> Result<bool, CacheError> {
// Check if there's any elements in the queue before checking closed.
if !self.requested.is_empty() {
return Ok(true);
}
self.closed.clone()?;
Ok(false)
}
pub fn next(&mut self) -> track::Publisher {
// We panic instead of erroring to avoid a nasty wakeup loop if you don't call has_next first.
self.requested.pop_front().expect("no entry in queue")
}
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
}
impl Default for State {
fn default() -> Self {
Self {
tracks: HashMap::new(),
closed: Ok(()),
requested: VecDeque::new(),
}
}
}
/// Publish new tracks for a broadcast by name.
// TODO remove Clone
#[derive(Clone)]
pub struct Publisher {
state: Watch<State>,
info: Arc<Info>,
_dropped: Arc<Dropped>,
}
impl Publisher {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, info, _dropped }
}
/// Create a new track with the given name, inserting it into the broadcast.
pub fn create_track(&mut self, name: &str) -> Result<track::Publisher, CacheError> {
let (publisher, subscriber) = track::new(name);
self.state.lock_mut().insert(subscriber)?;
Ok(publisher)
}
/// Insert a track into the broadcast.
pub fn insert_track(&mut self, track: track::Subscriber) -> Result<(), CacheError> {
self.state.lock_mut().insert(track)
}
/// Block until the next track requested by a subscriber.
pub async fn next_track(&mut self) -> Result<track::Publisher, CacheError> {
loop {
let notify = {
let state = self.state.lock();
if state.has_next()? {
return Ok(state.into_mut().next());
}
state.changed()
};
notify.await;
}
}
/// Close the broadcast with an error.
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
impl Deref for Publisher {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Publisher {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Publisher")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
/// Subscribe to a broadcast by requesting tracks.
///
/// This can be cloned to create handles.
#[derive(Clone)]
pub struct Subscriber {
state: Watch<State>,
info: Arc<Info>,
_dropped: Arc<Dropped>,
}
impl Subscriber {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, info, _dropped }
}
/// Get a track from the broadcast by name.
/// If the track does not exist, it will be created and potentially fulfilled by the publisher (via Unknown).
/// Otherwise, it will return [CacheError::NotFound].
pub fn get_track(&self, name: &str) -> Result<track::Subscriber, CacheError> {
let state = self.state.lock();
if let Some(track) = state.get(name)? {
return Ok(track);
}
// Request a new track if it does not exist.
state.into_mut().request(name)
}
/// Check if the broadcast is closed, either because the publisher was dropped or called [Publisher::close].
pub fn is_closed(&self) -> Option<CacheError> {
self.state.lock().closed.as_ref().err().cloned()
}
/// Wait until the broadcast is closed, either because the publisher was dropped or called [Publisher::close].
pub async fn closed(&self) -> CacheError {
loop {
let notify = {
let state = self.state.lock();
if let Some(err) = state.closed.as_ref().err() {
return err.clone();
}
state.changed()
};
notify.await;
}
}
}
impl Deref for Subscriber {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Subscriber {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Subscriber")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
// A handle that closes the broadcast when dropped:
// - when all Subscribers are dropped or
// - when Publisher and Unknown are dropped.
struct Dropped {
state: Watch<State>,
}
impl Dropped {
fn new(state: Watch<State>) -> Self {
Self { state }
}
}
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(CacheError::Closed).ok();
}
}
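A minimal usage sketch of the two handles (not part of the diff; names are taken from this file, used from outside the crate):
use moq_transport::cache::{broadcast, CacheError};
fn demo() -> Result<(), CacheError> {
    let (mut publisher, subscriber) = broadcast::new("demo");
    // The publisher can create tracks up front...
    let _video_pub = publisher.create_track("video")?;
    // ...and a subscriber fetches them by name, served from the cache.
    let _video_sub = subscriber.get_track("video")?;
    Ok(())
}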

51
moq-transport/src/cache/error.rs vendored Normal file

@ -0,0 +1,51 @@
use thiserror::Error;
use crate::MoqError;
#[derive(Clone, Debug, Error)]
pub enum CacheError {
/// A clean termination, represented as error code 0.
/// This error is automatically used when publishers or subscribers are dropped without calling close.
#[error("closed")]
Closed,
/// An ANNOUNCE_RESET or SUBSCRIBE_RESET was sent by the publisher.
#[error("reset code={0:?}")]
Reset(u32),
/// An ANNOUNCE_STOP or SUBSCRIBE_STOP was sent by the subscriber.
#[error("stop")]
Stop,
/// The requested resource was not found.
#[error("not found")]
NotFound,
/// A resource already exists with that ID.
#[error("duplicate")]
Duplicate,
}
impl MoqError for CacheError {
/// An integer code that is sent over the wire.
fn code(&self) -> u32 {
match self {
Self::Closed => 0,
Self::Reset(code) => *code,
Self::Stop => 206,
Self::NotFound => 404,
Self::Duplicate => 409,
}
}
/// A reason that is sent over the wire.
fn reason(&self) -> String {
match self {
Self::Closed => "closed".to_owned(),
Self::Reset(code) => format!("reset code: {}", code),
Self::Stop => "stop".to_owned(),
Self::NotFound => "not found".to_owned(),
Self::Duplicate => "duplicate".to_owned(),
}
}
}
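The wire mapping above is simple enough to pin down in a unit test; a sketch that could live in this file:
#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn wire_codes() {
        assert_eq!(CacheError::Closed.code(), 0);
        assert_eq!(CacheError::NotFound.code(), 404);
        assert_eq!(CacheError::Reset(7).reason(), "reset code: 7");
    }
}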

216
moq-transport/src/cache/fragment.rs vendored Normal file

@ -0,0 +1,216 @@
//! A fragment is a stream of bytes with a header, split into a [Publisher] and [Subscriber] handle.
//!
//! A [Publisher] writes an ordered stream of bytes in chunks.
//! There's no framing, so these chunks can be of any size or position, and won't be maintained over the network.
//!
//! A [Subscriber] reads an ordered stream of bytes in chunks.
//! These chunks are returned directly from the QUIC connection, so they may be of any size or position.
//! You can clone the [Subscriber] and each will read a copy of all future chunks. (fanout)
//!
//! The fragment is closed with [CacheError::Closed] when all publishers or subscribers are dropped.
use core::fmt;
use std::{ops::Deref, sync::Arc};
use crate::VarInt;
use bytes::Bytes;
use super::{CacheError, Watch};
/// Create a new segment with the given info.
pub fn new(info: Info) -> (Publisher, Subscriber) {
let state = Watch::new(State::default());
let info = Arc::new(info);
let publisher = Publisher::new(state.clone(), info.clone());
let subscriber = Subscriber::new(state, info);
(publisher, subscriber)
}
/// Static information about the segment.
#[derive(Debug)]
pub struct Info {
// The sequence number of the fragment within the segment.
// NOTE: These may be received out of order or with gaps.
pub sequence: VarInt,
// The size of the fragment, or None if this is the last fragment in a segment.
// TODO enforce this size.
pub size: Option<VarInt>,
}
struct State {
// The data that has been received thus far.
chunks: Vec<Bytes>,
// Set when the publisher is dropped.
closed: Result<(), CacheError>,
}
impl State {
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
pub fn bytes(&self) -> usize {
self.chunks.iter().map(|f| f.len()).sum::<usize>()
}
}
impl Default for State {
fn default() -> Self {
Self {
chunks: Vec::new(),
closed: Ok(()),
}
}
}
impl fmt::Debug for State {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
// We don't want to print out the contents, so summarize.
f.debug_struct("State")
.field("chunks", &self.chunks.len().to_string())
.field("bytes", &self.bytes().to_string())
.field("closed", &self.closed)
.finish()
}
}
/// Used to write data to a segment and notify subscribers.
pub struct Publisher {
// Mutable segment state.
state: Watch<State>,
// Immutable segment state.
info: Arc<Info>,
// Closes the segment when all Publishers are dropped.
_dropped: Arc<Dropped>,
}
impl Publisher {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, info, _dropped }
}
/// Write a new chunk of bytes.
pub fn write_chunk(&mut self, chunk: Bytes) -> Result<(), CacheError> {
let mut state = self.state.lock_mut();
state.closed.clone()?;
state.chunks.push(chunk);
Ok(())
}
/// Close the segment with an error.
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
impl Deref for Publisher {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Publisher {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Publisher")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
/// Notified when a segment has new data available.
#[derive(Clone)]
pub struct Subscriber {
// Modify the segment state.
state: Watch<State>,
// Immutable segment state.
info: Arc<Info>,
// The number of chunks that we've read.
// NOTE: Cloned subscribers inherit this index, but then run in parallel.
index: usize,
// Dropped when all Subscribers are dropped.
_dropped: Arc<Dropped>,
}
impl Subscriber {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self {
state,
info,
index: 0,
_dropped,
}
}
/// Block until the next chunk of bytes is available.
pub async fn read_chunk(&mut self) -> Result<Option<Bytes>, CacheError> {
loop {
let notify = {
let state = self.state.lock();
if self.index < state.chunks.len() {
let chunk = state.chunks[self.index].clone();
self.index += 1;
return Ok(Some(chunk));
}
match &state.closed {
Err(CacheError::Closed) => return Ok(None),
Err(err) => return Err(err.clone()),
Ok(()) => state.changed(),
}
};
notify.await; // Try again when the state changes
}
}
}
impl Deref for Subscriber {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Subscriber {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Subscriber")
.field("state", &self.state)
.field("info", &self.info)
.field("index", &self.index)
.finish()
}
}
struct Dropped {
// Modify the segment state.
state: Watch<State>,
}
impl Dropped {
fn new(state: Watch<State>) -> Self {
Self { state }
}
}
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(CacheError::Closed).ok();
}
}
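A usage sketch for the handles above, written as if inside this module (assumes a tokio runtime; not part of the diff):
async fn demo() -> Result<(), CacheError> {
    let (mut publisher, mut subscriber) = new(Info {
        sequence: VarInt::from_u32(0),
        size: None,
    });
    publisher.write_chunk(Bytes::from_static(b"hello "))?;
    publisher.write_chunk(Bytes::from_static(b"world"))?;
    drop(publisher); // closes the fragment with CacheError::Closed
    // read_chunk returns Ok(None) once the cached chunks are drained.
    while let Some(chunk) = subscriber.read_chunk().await? {
        println!("{} bytes", chunk.len());
    }
    Ok(())
}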

21
moq-transport/src/cache/mod.rs vendored Normal file

@ -0,0 +1,21 @@
//! Allows a publisher to push updates, automatically caching and fanning it out to any subscribers.
//!
//! The hierarchy is: [broadcast] -> [track] -> [segment] -> [fragment] -> [Bytes](bytes::Bytes)
//!
//! The naming scheme doesn't match the spec because it's stricter (and bikeshedding, of course):
//!
//! - [broadcast] is kinda like "track namespace"
//! - [track] is "track"
//! - [segment] is "group" but MUST use a single stream.
//! - [fragment] is "object" but MUST have the same properties as the segment.
pub mod broadcast;
mod error;
pub mod fragment;
pub mod segment;
pub mod track;
pub(crate) mod watch;
pub(crate) use watch::*;
pub use error::*;

216
moq-transport/src/cache/segment.rs vendored Normal file

@ -0,0 +1,216 @@
//! A segment is a stream of fragments with a header, split into a [Publisher] and [Subscriber] handle.
//!
//! A [Publisher] writes an ordered stream of fragments.
//! Each fragment can have a sequence number, allowing the subscriber to detect gaps between fragments.
//!
//! A [Subscriber] reads an ordered stream of fragments.
//! The subscriber can be cloned, in which case each subscriber receives a copy of each fragment. (fanout)
//!
//! The segment is closed with [CacheError::Closed] when all publishers or subscribers are dropped.
use core::fmt;
use std::{ops::Deref, sync::Arc, time};
use crate::VarInt;
use super::{fragment, CacheError, Watch};
/// Create a new segment with the given info.
pub fn new(info: Info) -> (Publisher, Subscriber) {
let state = Watch::new(State::default());
let info = Arc::new(info);
let publisher = Publisher::new(state.clone(), info.clone());
let subscriber = Subscriber::new(state, info);
(publisher, subscriber)
}
/// Static information about the segment.
#[derive(Debug)]
pub struct Info {
// The sequence number of the segment within the track.
// NOTE: These may be received out of order or with gaps.
pub sequence: VarInt,
// The priority of the segment within the BROADCAST.
pub priority: u32,
// Cache the segment for at most this long.
pub expires: Option<time::Duration>,
}
struct State {
// The data that has been received thus far.
fragments: Vec<fragment::Subscriber>,
// Set when the publisher is dropped.
closed: Result<(), CacheError>,
}
impl State {
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
}
impl Default for State {
fn default() -> Self {
Self {
fragments: Vec::new(),
closed: Ok(()),
}
}
}
impl fmt::Debug for State {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("State")
.field("fragments", &self.fragments)
.field("closed", &self.closed)
.finish()
}
}
/// Used to write data to a segment and notify subscribers.
pub struct Publisher {
// Mutable segment state.
state: Watch<State>,
// Immutable segment state.
info: Arc<Info>,
// Closes the segment when all Publishers are dropped.
_dropped: Arc<Dropped>,
}
impl Publisher {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, info, _dropped }
}
/// Write a fragment
pub fn push_fragment(&mut self, fragment: fragment::Subscriber) -> Result<(), CacheError> {
let mut state = self.state.lock_mut();
state.closed.clone()?;
state.fragments.push(fragment);
Ok(())
}
pub fn create_fragment(&mut self, fragment: fragment::Info) -> Result<fragment::Publisher, CacheError> {
let (publisher, subscriber) = fragment::new(fragment);
self.push_fragment(subscriber)?;
Ok(publisher)
}
/// Close the segment with an error.
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
impl Deref for Publisher {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Publisher {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Publisher")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
/// Notified when a segment has new data available.
#[derive(Clone)]
pub struct Subscriber {
// Modify the segment state.
state: Watch<State>,
// Immutable segment state.
info: Arc<Info>,
// The number of chunks that we've read.
// NOTE: Cloned subscribers inherit this index, but then run in parallel.
index: usize,
// Dropped when all Subscribers are dropped.
_dropped: Arc<Dropped>,
}
impl Subscriber {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self {
state,
info,
index: 0,
_dropped,
}
}
/// Block until the next chunk of bytes is available.
pub async fn next_fragment(&mut self) -> Result<Option<fragment::Subscriber>, CacheError> {
loop {
let notify = {
let state = self.state.lock();
if self.index < state.fragments.len() {
let fragment = state.fragments[self.index].clone();
self.index += 1;
return Ok(Some(fragment));
}
match &state.closed {
Err(CacheError::Closed) => return Ok(None),
Err(err) => return Err(err.clone()),
Ok(()) => state.changed(),
}
};
notify.await; // Try again when the state changes
}
}
}
impl Deref for Subscriber {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Subscriber {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Subscriber")
.field("state", &self.state)
.field("info", &self.info)
.field("index", &self.index)
.finish()
}
}
struct Dropped {
// Modify the segment state.
state: Watch<State>,
}
impl Dropped {
fn new(state: Watch<State>) -> Self {
Self { state }
}
}
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(CacheError::Closed).ok();
}
}
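And one layer up, a sketch of pushing a fragment through a segment, again as if written inside this module (assumes a tokio runtime):
async fn demo() -> Result<(), CacheError> {
    let (mut publisher, mut subscriber) = new(Info {
        sequence: VarInt::from_u32(0),
        priority: 0,
        expires: None,
    });
    let mut frag = publisher.create_fragment(fragment::Info {
        sequence: VarInt::from_u32(0),
        size: None,
    })?;
    frag.write_chunk(bytes::Bytes::from_static(b"payload"))?;
    drop(publisher);
    while let Some(_frag) = subscriber.next_fragment().await? {
        // Each fragment is itself a stream of chunks (see fragment.rs above).
    }
    Ok(())
}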

337
moq-transport/src/cache/track.rs vendored Normal file

@ -0,0 +1,337 @@
//! A track is a collection of semi-reliable and semi-ordered segments, split into a [Publisher] and [Subscriber] handle.
//!
//! A [Publisher] creates segments with a sequence number and priority.
//! The sequence number is used to determine the order of segments, while the priority is used to determine which segment to transmit first.
//! This may seem counter-intuitive, but is designed for live streaming where the newest segments may be higher priority.
//! A cloned [Publisher] can be used to create segments in parallel, but will error if a duplicate sequence number is used.
//!
//! A [Subscriber] may not receive all segments in order or at all.
//! These segments are meant to be transmitted over congested networks, and the key to MoQ Transport is to not block on them.
//! Segments will be cached for a potentially limited duration, adding to the unreliable nature.
//! A cloned [Subscriber] will receive a copy of all new segments going forward (fanout).
//!
//! The track is closed with [CacheError::Closed] when all publishers or subscribers are dropped.
use std::{collections::BinaryHeap, fmt, ops::Deref, sync::Arc, time};
use indexmap::IndexMap;
use super::{segment, CacheError, Watch};
use crate::VarInt;
/// Create a track with the given name.
pub fn new(name: &str) -> (Publisher, Subscriber) {
let state = Watch::new(State::default());
let info = Arc::new(Info { name: name.to_string() });
let publisher = Publisher::new(state.clone(), info.clone());
let subscriber = Subscriber::new(state, info);
(publisher, subscriber)
}
/// Static information about a track.
#[derive(Debug)]
pub struct Info {
pub name: String,
}
struct State {
// Store segments in received order so subscribers can detect changes.
// The key is the segment sequence, which could have gaps.
// A None value means the segment has expired.
lookup: IndexMap<VarInt, Option<segment::Subscriber>>,
// Store when segments will expire in a priority queue.
expires: BinaryHeap<SegmentExpiration>,
// The number of None entries removed from the start of the lookup.
pruned: usize,
// Set when the publisher is closed/dropped, or all subscribers are dropped.
closed: Result<(), CacheError>,
}
impl State {
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
pub fn insert(&mut self, segment: segment::Subscriber) -> Result<(), CacheError> {
self.closed.clone()?;
let entry = match self.lookup.entry(segment.sequence) {
indexmap::map::Entry::Occupied(_entry) => return Err(CacheError::Duplicate),
indexmap::map::Entry::Vacant(entry) => entry,
};
if let Some(expires) = segment.expires {
self.expires.push(SegmentExpiration {
sequence: segment.sequence,
expires: time::Instant::now() + expires,
});
}
entry.insert(Some(segment));
// Expire any existing segments on insert.
// This means if you don't insert then you won't expire... but it's probably fine since the cache won't grow.
// TODO Use a timer to expire segments at the correct time instead
self.expire();
Ok(())
}
// Try expiring any segments
pub fn expire(&mut self) {
let now = time::Instant::now();
while let Some(segment) = self.expires.peek() {
if segment.expires > now {
break;
}
// Update the entry to None while preserving the index.
match self.lookup.entry(segment.sequence) {
indexmap::map::Entry::Occupied(mut entry) => entry.insert(None),
indexmap::map::Entry::Vacant(_) => panic!("expired segment not found"),
};
self.expires.pop();
}
// Remove None entries from the start of the lookup.
while let Some((_, None)) = self.lookup.get_index(0) {
self.lookup.shift_remove_index(0);
self.pruned += 1;
}
}
}
impl Default for State {
fn default() -> Self {
Self {
lookup: Default::default(),
expires: Default::default(),
pruned: 0,
closed: Ok(()),
}
}
}
impl fmt::Debug for State {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("State")
.field("lookup", &self.lookup)
.field("pruned", &self.pruned)
.field("closed", &self.closed)
.finish()
}
}
/// Creates new segments for a track.
pub struct Publisher {
state: Watch<State>,
info: Arc<Info>,
_dropped: Arc<Dropped>,
}
impl Publisher {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, info, _dropped }
}
/// Insert a new segment.
pub fn insert_segment(&mut self, segment: segment::Subscriber) -> Result<(), CacheError> {
self.state.lock_mut().insert(segment)
}
/// Create and insert a segment with the given info.
pub fn create_segment(&mut self, info: segment::Info) -> Result<segment::Publisher, CacheError> {
let (publisher, subscriber) = segment::new(info);
self.insert_segment(subscriber)?;
Ok(publisher)
}
/// Close the segment with an error.
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
impl Deref for Publisher {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Publisher {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Publisher")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
/// Receives new segments for a track.
#[derive(Clone)]
pub struct Subscriber {
state: Watch<State>,
info: Arc<Info>,
// The index of the next segment to return.
index: usize,
// If there are multiple segments to return, we put them in here to return them in priority order.
pending: BinaryHeap<SegmentPriority>,
// Dropped when all subscribers are dropped.
_dropped: Arc<Dropped>,
}
impl Subscriber {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self {
state,
info,
index: 0,
pending: Default::default(),
_dropped,
}
}
/// Block until the next segment arrives
pub async fn next_segment(&mut self) -> Result<Option<segment::Subscriber>, CacheError> {
loop {
let notify = {
let state = self.state.lock();
// Get our adjusted index, saturating since we may have pruned more segments than we've read.
let mut index = self.index.saturating_sub(state.pruned);
// Push all new segments into a priority queue.
while index < state.lookup.len() {
let (_, segment) = state.lookup.get_index(index).unwrap();
// Skip None values (expired segments).
// TODO These might actually be expired, so we should check the expiration time.
if let Some(segment) = segment {
self.pending.push(SegmentPriority(segment.clone()));
}
index += 1;
}
self.index = state.pruned + index;
// Return the highest priority segment.
if let Some(segment) = self.pending.pop() {
return Ok(Some(segment.0));
}
// Otherwise check if we need to return an error.
match &state.closed {
Err(CacheError::Closed) => return Ok(None),
Err(err) => return Err(err.clone()),
Ok(()) => state.changed(),
}
};
notify.await
}
}
}
impl Deref for Subscriber {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Subscriber {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Subscriber")
.field("state", &self.state)
.field("info", &self.info)
.field("index", &self.index)
.finish()
}
}
// Closes the track on Drop.
struct Dropped {
state: Watch<State>,
}
impl Dropped {
fn new(state: Watch<State>) -> Self {
Self { state }
}
}
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(CacheError::Closed).ok();
}
}
// Used to order segments by expiration time.
struct SegmentExpiration {
sequence: VarInt,
expires: time::Instant,
}
impl Ord for SegmentExpiration {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
// Reverse order so the earliest expiration is at the top of the heap.
other.expires.cmp(&self.expires)
}
}
impl PartialOrd for SegmentExpiration {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl PartialEq for SegmentExpiration {
fn eq(&self, other: &Self) -> bool {
self.expires == other.expires
}
}
impl Eq for SegmentExpiration {}
// Used to order segments by priority
#[derive(Clone)]
struct SegmentPriority(pub segment::Subscriber);
impl Ord for SegmentPriority {
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
// Reverse order so the highest priority is at the top of the heap.
// TODO I let CodePilot generate this code so yolo
other.0.priority.cmp(&self.0.priority)
}
}
impl PartialOrd for SegmentPriority {
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
Some(self.cmp(other))
}
}
impl PartialEq for SegmentPriority {
fn eq(&self, other: &Self) -> bool {
self.0.priority == other.0.priority
}
}
impl Eq for SegmentPriority {}
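A sketch tying the pieces together at the track level (assumes a tokio runtime; not part of the diff). Note that with SegmentPriority's reversed comparison as written, the pending heap pops the numerically lowest priority value first:
async fn demo() -> Result<(), CacheError> {
    let (mut publisher, mut subscriber) = new("video");
    for (seq, priority) in [(0u32, 2), (1u32, 1)] {
        publisher.create_segment(segment::Info {
            sequence: VarInt::from_u32(seq),
            priority,
            expires: None,
        })?;
    }
    drop(publisher);
    // Both segments are already pending, so they drain via the priority
    // heap rather than in insertion order.
    while let Some(segment) = subscriber.next_segment().await? {
        println!("sequence={}", segment.sequence);
    }
    Ok(())
}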

180
moq-transport/src/cache/watch.rs vendored Normal file

@ -0,0 +1,180 @@
use std::{
fmt,
future::Future,
ops::{Deref, DerefMut},
pin::Pin,
sync::{Arc, Mutex, MutexGuard},
task,
};
struct State<T> {
value: T,
wakers: Vec<task::Waker>,
epoch: usize,
}
impl<T> State<T> {
pub fn new(value: T) -> Self {
Self {
value,
wakers: Vec::new(),
epoch: 0,
}
}
pub fn register(&mut self, waker: &task::Waker) {
self.wakers.retain(|existing| !existing.will_wake(waker));
self.wakers.push(waker.clone());
}
pub fn notify(&mut self) {
self.epoch += 1;
for waker in self.wakers.drain(..) {
waker.wake();
}
}
}
impl<T: Default> Default for State<T> {
fn default() -> Self {
Self::new(T::default())
}
}
impl<T: fmt::Debug> fmt::Debug for State<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.value.fmt(f)
}
}
pub struct Watch<T> {
state: Arc<Mutex<State<T>>>,
}
impl<T> Watch<T> {
pub fn new(initial: T) -> Self {
let state = Arc::new(Mutex::new(State::new(initial)));
Self { state }
}
pub fn lock(&self) -> WatchRef<T> {
WatchRef {
state: self.state.clone(),
lock: self.state.lock().unwrap(),
}
}
pub fn lock_mut(&self) -> WatchMut<T> {
WatchMut {
lock: self.state.lock().unwrap(),
}
}
}
impl<T> Clone for Watch<T> {
fn clone(&self) -> Self {
Self {
state: self.state.clone(),
}
}
}
impl<T: Default> Default for Watch<T> {
fn default() -> Self {
Self::new(T::default())
}
}
impl<T: fmt::Debug> fmt::Debug for Watch<T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match self.state.try_lock() {
Ok(lock) => lock.value.fmt(f),
Err(_) => write!(f, "<locked>"),
}
}
}
pub struct WatchRef<'a, T> {
state: Arc<Mutex<State<T>>>,
lock: MutexGuard<'a, State<T>>,
}
impl<'a, T> WatchRef<'a, T> {
// Release the lock and wait for a notification when next updated.
pub fn changed(self) -> WatchChanged<T> {
WatchChanged {
state: self.state,
epoch: self.lock.epoch,
}
}
// Upgrade to a mutable reference that automatically calls notify on drop.
pub fn into_mut(self) -> WatchMut<'a, T> {
WatchMut { lock: self.lock }
}
}
impl<'a, T> Deref for WatchRef<'a, T> {
type Target = T;
fn deref(&self) -> &Self::Target {
&self.lock.value
}
}
impl<'a, T: fmt::Debug> fmt::Debug for WatchRef<'a, T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.lock.fmt(f)
}
}
pub struct WatchMut<'a, T> {
lock: MutexGuard<'a, State<T>>,
}
impl<'a, T> Deref for WatchMut<'a, T> {
type Target = T;
fn deref(&self) -> &Self::Target {
&self.lock.value
}
}
impl<'a, T> DerefMut for WatchMut<'a, T> {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.lock.value
}
}
impl<'a, T> Drop for WatchMut<'a, T> {
fn drop(&mut self) {
self.lock.notify();
}
}
impl<'a, T: fmt::Debug> fmt::Debug for WatchMut<'a, T> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.lock.fmt(f)
}
}
pub struct WatchChanged<T> {
state: Arc<Mutex<State<T>>>,
epoch: usize,
}
impl<T> Future for WatchChanged<T> {
type Output = ();
fn poll(self: Pin<&mut Self>, cx: &mut task::Context<'_>) -> task::Poll<Self::Output> {
// TODO is there an API we can make that doesn't drop this lock?
let mut state = self.state.lock().unwrap();
if state.epoch > self.epoch {
task::Poll::Ready(())
} else {
state.register(cx.waker());
task::Poll::Pending
}
}
}
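The lock/changed pattern that every cache layer builds on looks like this in isolation (a sketch, assuming a tokio runtime):
async fn demo() {
    let watch = Watch::new(0u32);
    let writer = watch.clone();
    tokio::spawn(async move {
        // WatchMut notifies all registered wakers when the guard drops.
        *writer.lock_mut() += 1;
    });
    loop {
        let notify = {
            let state = watch.lock();
            if *state > 0 {
                break;
            }
            state.changed() // releases the lock, remembers the epoch
        };
        notify.await; // resolves once the epoch advances
    }
}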

moq-transport/src/coding/decode.rs

@ -1,9 +1,21 @@
use super::VarInt;
use bytes::{Buf, Bytes};
use std::str;
use super::{BoundsExceeded, VarInt};
use std::{io, str};
use thiserror::Error;
// I'm too lazy to add these trait bounds to every message type.
// TODO Use trait aliases when they're stable, or add these bounds to every method.
pub trait AsyncRead: tokio::io::AsyncRead + Unpin + Send {}
impl AsyncRead for webtransport_quinn::RecvStream {}
impl<T> AsyncRead for tokio::io::Take<&mut T> where T: AsyncRead {}
impl<T: AsRef<[u8]> + Unpin + Send> AsyncRead for io::Cursor<T> {}
#[async_trait::async_trait]
pub trait Decode: Sized {
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError>;
}
/// A decode error.
#[derive(Error, Debug)]
pub enum DecodeError {
#[error("unexpected end of buffer")]
@ -12,65 +24,32 @@ pub enum DecodeError {
#[error("invalid string")]
InvalidString(#[from] str::Utf8Error),
#[error("invalid type: {0:?}")]
InvalidType(VarInt),
#[error("invalid message: {0:?}")]
InvalidMessage(VarInt),
#[error("unknown error")]
Unknown,
#[error("invalid role: {0:?}")]
InvalidRole(VarInt),
#[error("invalid subscribe location")]
InvalidSubscribeLocation,
#[error("varint bounds exceeded")]
BoundsExceeded(#[from] BoundsExceeded),
// TODO move these to ParamError
#[error("duplicate parameter")]
DupliateParameter,
#[error("missing parameter")]
MissingParameter,
#[error("invalid parameter")]
InvalidParameter,
#[error("io error: {0}")]
IoError(#[from] std::io::Error),
// Used to signal that the stream has ended.
#[error("no more messages")]
Final,
}
pub trait Decode: Sized {
// Decodes a message, returning UnexpectedEnd if there's not enough bytes in the buffer.
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError>;
}
impl Decode for Bytes {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let size = VarInt::decode(r)?.into_inner() as usize;
if r.remaining() < size {
return Err(DecodeError::UnexpectedEnd);
}
let buf = r.copy_to_bytes(size);
Ok(buf)
}
}
impl Decode for Vec<u8> {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
Bytes::decode(r).map(|b| b.to_vec())
}
}
impl Decode for String {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let data = Vec::decode(r)?;
let s = str::from_utf8(&data)?.to_string();
Ok(s)
}
}
impl Decode for u8 {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
if r.remaining() < 1 {
return Err(DecodeError::UnexpectedEnd);
}
Ok(r.get_u8())
}
}
/*
impl<const N: usize> Decode for [u8; N] {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
if r.remaining() < N {
return Err(DecodeError::UnexpectedEnd);
}
let mut buf = [0; N];
r.copy_to_slice(&mut buf);
Ok(buf)
}
}
*/

moq-transport/src/coding/duration.rs

@ -1,20 +0,0 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use bytes::{Buf, BufMut};
use std::time::Duration;
impl Encode for Duration {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
let ms = self.as_millis();
let ms = VarInt::try_from(ms)?;
ms.encode(w)
}
}
impl Decode for Duration {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let ms = VarInt::decode(r)?;
Ok(Self::from_millis(ms.into()))
}
}

moq-transport/src/coding/encode.rs

@ -1,92 +1,27 @@
use super::{BoundsExceeded, VarInt};
use bytes::{BufMut, Bytes};
use super::BoundsExceeded;
use thiserror::Error;
// I'm too lazy to add these trait bounds to every message type.
// TODO Use trait aliases when they're stable, or add these bounds to every method.
pub trait AsyncWrite: tokio::io::AsyncWrite + Unpin + Send {}
impl AsyncWrite for webtransport_quinn::SendStream {}
impl AsyncWrite for Vec<u8> {}
#[async_trait::async_trait]
pub trait Encode: Sized {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError>;
}
/// An encode error.
#[derive(Error, Debug)]
pub enum EncodeError {
#[error("unexpected end of buffer")]
UnexpectedEnd,
#[error("varint too large")]
BoundsExceeded(#[from] BoundsExceeded),
}
pub trait Encode: Sized {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError>;
}
impl Encode for Bytes {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.as_ref().encode(w)
}
}
impl Encode for Vec<u8> {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.as_slice().encode(w)
}
}
impl Encode for &[u8] {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
let size = VarInt::try_from(self.len())?;
size.encode(w)?;
if w.remaining_mut() < self.len() {
return Err(EncodeError::UnexpectedEnd);
}
w.put_slice(self);
Ok(())
}
}
impl Encode for String {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.as_bytes().encode(w)
}
}
impl Encode for u8 {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
if w.remaining_mut() < 1 {
return Err(EncodeError::UnexpectedEnd);
}
w.put_u8(*self);
Ok(())
}
}
impl Encode for u16 {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
if w.remaining_mut() < 2 {
return Err(EncodeError::UnexpectedEnd);
}
w.put_u16(*self);
Ok(())
}
}
impl Encode for u32 {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
if w.remaining_mut() < 4 {
return Err(EncodeError::UnexpectedEnd);
}
w.put_u32(*self);
Ok(())
}
}
impl Encode for u64 {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
if w.remaining_mut() < 8 {
return Err(EncodeError::UnexpectedEnd);
}
w.put_u64(*self);
Ok(())
}
#[error("invalid value")]
InvalidValue,
#[error("i/o error: {0}")]
IoError(#[from] std::io::Error),
}

moq-transport/src/coding/mod.rs

@ -1,9 +1,11 @@
mod decode;
mod duration;
mod encode;
mod params;
mod string;
mod varint;
pub use decode::*;
pub use duration::*;
pub use encode::*;
pub use params::*;
pub use string::*;
pub use varint::*;

moq-transport/src/coding/params.rs

@ -0,0 +1,85 @@
use std::io::Cursor;
use std::{cmp::max, collections::HashMap};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use crate::coding::{AsyncRead, AsyncWrite, Decode, Encode};
use crate::{
coding::{DecodeError, EncodeError},
VarInt,
};
#[derive(Default, Debug, Clone)]
pub struct Params(pub HashMap<VarInt, Vec<u8>>);
#[async_trait::async_trait]
impl Decode for Params {
async fn decode<R: AsyncRead>(mut r: &mut R) -> Result<Self, DecodeError> {
let mut params = HashMap::new();
// I hate this shit so much; let me encode my role and get on with my life.
let count = VarInt::decode(r).await?;
for _ in 0..count.into_inner() {
let kind = VarInt::decode(r).await?;
if params.contains_key(&kind) {
return Err(DecodeError::DupliateParameter);
}
let size = VarInt::decode(r).await?;
// Don't allocate the entire requested size to avoid a possible attack
// Instead, we allocate up to 1024 and keep appending as we read further.
let mut pr = r.take(size.into_inner());
let mut buf = Vec::with_capacity(max(1024, pr.limit() as usize));
pr.read_to_end(&mut buf).await?;
params.insert(kind, buf);
r = pr.into_inner();
}
Ok(Params(params))
}
}
#[async_trait::async_trait]
impl Encode for Params {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
VarInt::try_from(self.0.len())?.encode(w).await?;
for (kind, value) in self.0.iter() {
kind.encode(w).await?;
VarInt::try_from(value.len())?.encode(w).await?;
w.write_all(value).await?;
}
Ok(())
}
}
impl Params {
pub fn new() -> Self {
Self::default()
}
pub async fn set<P: Encode>(&mut self, kind: VarInt, p: P) -> Result<(), EncodeError> {
let mut value = Vec::new();
p.encode(&mut value).await?;
self.0.insert(kind, value);
Ok(())
}
pub fn has(&self, kind: VarInt) -> bool {
self.0.contains_key(&kind)
}
pub async fn get<P: Decode>(&mut self, kind: VarInt) -> Result<Option<P>, DecodeError> {
if let Some(value) = self.0.remove(&kind) {
let mut cursor = Cursor::new(value);
Ok(Some(P::decode(&mut cursor).await?))
} else {
Ok(None)
}
}
}
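A round-trip sketch for Params (the key 0x0 is illustrative, not a registered parameter ID):
async fn demo() -> Result<(), DecodeError> {
    let mut params = Params::new();
    params.set(VarInt::from_u32(0x0), VarInt::from_u32(2)).await.expect("encode failed");
    assert!(params.has(VarInt::from_u32(0x0)));
    // get() removes and decodes the raw bytes stored by set().
    let role: VarInt = params.get(VarInt::from_u32(0x0)).await?.expect("missing param");
    assert_eq!(role.into_inner(), 2);
    Ok(())
}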

moq-transport/src/coding/string.rs

@ -0,0 +1,29 @@
use std::cmp::min;
use crate::coding::{AsyncRead, AsyncWrite};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use crate::VarInt;
use super::{Decode, DecodeError, Encode, EncodeError};
#[async_trait::async_trait]
impl Encode for String {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
let size = VarInt::try_from(self.len())?;
size.encode(w).await?;
w.write_all(self.as_ref()).await?;
Ok(())
}
}
#[async_trait::async_trait]
impl Decode for String {
/// Decode a string with a varint length prefix.
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let size = VarInt::decode(r).await?.into_inner();
let mut str = String::with_capacity(min(1024, size) as usize);
r.take(size).read_to_string(&mut str).await?;
Ok(str)
}
}

moq-transport/src/coding/varint.rs

@ -5,13 +5,14 @@
use std::convert::{TryFrom, TryInto};
use std::fmt;
use crate::coding::{Decode, DecodeError, Encode, EncodeError};
use bytes::{Buf, BufMut};
use crate::coding::{AsyncRead, AsyncWrite};
use thiserror::Error;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use super::{Decode, DecodeError, Encode, EncodeError};
#[derive(Debug, Copy, Clone, Eq, PartialEq, Error)]
#[error("value too large for varint encoding")]
#[error("value out of range")]
pub struct BoundsExceeded;
/// An integer less than 2^62
@ -23,8 +24,12 @@ pub struct BoundsExceeded;
pub struct VarInt(u64);
impl VarInt {
/// The largest possible value.
pub const MAX: Self = Self((1 << 62) - 1);
/// The smallest possible value.
pub const ZERO: Self = Self(0);
/// Construct a `VarInt` infallibly using the largest available type.
/// Larger values need to use `try_from` instead.
pub const fn from_u32(x: u32) -> Self {
@ -108,6 +113,45 @@ impl TryFrom<usize> for VarInt {
}
}
impl TryFrom<VarInt> for u32 {
type Error = BoundsExceeded;
/// Succeeds iff `x` < 2^32
fn try_from(x: VarInt) -> Result<Self, BoundsExceeded> {
if x.0 <= u32::MAX.into() {
Ok(x.0 as u32)
} else {
Err(BoundsExceeded)
}
}
}
impl TryFrom<VarInt> for u16 {
type Error = BoundsExceeded;
/// Succeeds iff `x` < 2^16
fn try_from(x: VarInt) -> Result<Self, BoundsExceeded> {
if x.0 <= u16::MAX.into() {
Ok(x.0 as u16)
} else {
Err(BoundsExceeded)
}
}
}
impl TryFrom<VarInt> for u8 {
type Error = BoundsExceeded;
/// Succeeds iff `x` < 2^8
fn try_from(x: VarInt) -> Result<Self, BoundsExceeded> {
if x.0 <= u8::MAX.into() {
Ok(x.0 as u8)
} else {
Err(BoundsExceeded)
}
}
}
impl fmt::Debug for VarInt {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
self.0.fmt(f)
@ -120,43 +164,36 @@ impl fmt::Display for VarInt {
}
}
#[async_trait::async_trait]
impl Decode for VarInt {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let mut buf = [0; 8];
/// Decode a varint from the given reader.
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let b = r.read_u8().await?;
Self::decode_byte(b, r).await
}
}
if r.remaining() < 1 {
return Err(DecodeError::UnexpectedEnd);
}
impl VarInt {
/// Decode a varint given the first byte, reading the rest as needed.
/// This is silly but useful for determining if the stream has ended.
pub async fn decode_byte<R: AsyncRead>(b: u8, r: &mut R) -> Result<Self, DecodeError> {
let tag = b >> 6;
buf[0] = r.get_u8();
let tag = buf[0] >> 6;
buf[0] &= 0b0011_1111;
let mut buf = [0u8; 8];
buf[0] = b & 0b0011_1111;
let x = match tag {
0b00 => u64::from(buf[0]),
0b01 => {
if r.remaining() < 1 {
return Err(DecodeError::UnexpectedEnd);
}
r.copy_to_slice(buf[1..2].as_mut());
r.read_exact(buf[1..2].as_mut()).await?;
u64::from(u16::from_be_bytes(buf[..2].try_into().unwrap()))
}
0b10 => {
if r.remaining() < 3 {
return Err(DecodeError::UnexpectedEnd);
}
r.copy_to_slice(buf[1..4].as_mut());
r.read_exact(buf[1..4].as_mut()).await?;
u64::from(u32::from_be_bytes(buf[..4].try_into().unwrap()))
}
0b11 => {
if r.remaining() < 7 {
return Err(DecodeError::UnexpectedEnd);
}
r.copy_to_slice(buf[1..8].as_mut());
r.read_exact(buf[1..8].as_mut()).await?;
u64::from_be_bytes(buf)
}
_ => unreachable!(),
@ -166,19 +203,30 @@ impl Decode for VarInt {
}
}
#[async_trait::async_trait]
impl Encode for VarInt {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
/// Encode a varint to the given writer.
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
let x = self.0;
if x < 2u64.pow(6) {
(x as u8).encode(w)
w.write_u8(x as u8).await?;
} else if x < 2u64.pow(14) {
(0b01 << 14 | x as u16).encode(w)
w.write_u16(0b01 << 14 | x as u16).await?;
} else if x < 2u64.pow(30) {
(0b10 << 30 | x as u32).encode(w)
w.write_u32(0b10 << 30 | x as u32).await?;
} else if x < 2u64.pow(62) {
(0b11 << 62 | x).encode(w)
w.write_u64(0b11 << 62 | x).await?;
} else {
unreachable!("malformed VarInt");
}
Ok(())
}
}
// This is a fork of quinn::VarInt.
impl From<quinn::VarInt> for VarInt {
fn from(v: quinn::VarInt) -> Self {
Self(v.into_inner())
}
}
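A round-trip sketch using the blanket impls above (Vec&lt;u8&gt; as AsyncWrite, io::Cursor as AsyncRead):
async fn demo() -> Result<(), DecodeError> {
    let original = VarInt::from_u32(1_000_000); // falls in the 4-byte (0b10) encoding
    let mut buf = Vec::new();
    original.encode(&mut buf).await.expect("encode failed");
    let mut cursor = std::io::Cursor::new(buf);
    let decoded = VarInt::decode(&mut cursor).await?;
    assert_eq!(decoded.into_inner(), 1_000_000);
    Ok(())
}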

moq-transport/src/error.rs

@ -0,0 +1,7 @@
pub trait MoqError {
/// An integer code that is sent over the wire.
fn code(&self) -> u32;
/// An optional reason sometimes sent over the wire.
fn reason(&self) -> String;
}

moq-transport/src/lib.rs

@ -1,9 +1,18 @@
//! An implementation of the MoQ Transport protocol.
//!
//! MoQ Transport is a pub/sub protocol over QUIC.
//! While originally designed for live media, MoQ Transport is generic and can be used for other live applications.
//! The specification is a work in progress and will change.
//! See the [specification](https://datatracker.ietf.org/doc/draft-ietf-moq-transport/) and [github](https://github.com/moq-wg/moq-transport) for any updates.
//!
//! This implementation has some required extensions until the draft stabilizes. See: [Extensions](crate::setup::Extensions)
mod coding;
mod error;
pub mod cache;
pub mod message;
pub mod object;
pub mod session;
pub mod setup;
pub use coding::VarInt;
pub use message::Message;
pub use session::Session;
pub use error::MoqError;

moq-transport/src/message/announce.rs

@ -1,23 +1,30 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, Params};
use bytes::{Buf, BufMut};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
#[derive(Debug)]
/// Sent by the publisher to announce the availability of a group of tracks.
#[derive(Clone, Debug)]
pub struct Announce {
// The track namespace
pub track_namespace: String,
/// The track namespace
pub namespace: String,
/// Optional parameters
pub params: Params,
}
impl Decode for Announce {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let track_namespace = String::decode(r)?;
Ok(Self { track_namespace })
impl Announce {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
let params = Params::decode(r).await?;
Ok(Self { namespace, params })
}
}
impl Encode for Announce {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.track_namespace.encode(w)?;
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await?;
self.params.encode(w).await?;
Ok(())
}
}


@ -1,40 +0,0 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use bytes::{Buf, BufMut};
#[derive(Debug)]
pub struct AnnounceError {
// Echo back the namespace that was announced.
// TODO Propose using an ID to save bytes.
pub track_namespace: String,
// An error code.
pub code: VarInt,
// An optional, human-readable reason.
pub reason: String,
}
impl Decode for AnnounceError {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let track_namespace = String::decode(r)?;
let code = VarInt::decode(r)?;
let reason = String::decode(r)?;
Ok(Self {
track_namespace,
code,
reason,
})
}
}
impl Encode for AnnounceError {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.track_namespace.encode(w)?;
self.code.encode(w)?;
self.reason.encode(w)?;
Ok(())
}
}

moq-transport/src/message/announce_ok.rs

@ -1,23 +1,23 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError};
use crate::{
coding::{AsyncRead, AsyncWrite, Decode, DecodeError, Encode, EncodeError},
setup::Extensions,
};
use bytes::{Buf, BufMut};
#[derive(Debug)]
/// Sent by the subscriber to accept an Announce.
#[derive(Clone, Debug)]
pub struct AnnounceOk {
// Echo back the namespace that was announced.
// TODO Propose using an ID to save bytes.
pub track_namespace: String,
pub namespace: String,
}
impl Decode for AnnounceOk {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let track_namespace = String::decode(r)?;
Ok(Self { track_namespace })
impl AnnounceOk {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
Ok(Self { namespace })
}
}
impl Encode for AnnounceOk {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.track_namespace.encode(w)
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await
}
}


@ -0,0 +1,39 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the subscriber to reject an Announce.
#[derive(Clone, Debug)]
pub struct AnnounceError {
// Echo back the namespace that was reset
pub namespace: String,
// An error code.
pub code: u32,
// An optional, human-readable reason.
pub reason: String,
}
impl AnnounceError {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
let code = VarInt::decode(r).await?.try_into()?;
let reason = String::decode(r).await?;
Ok(Self {
namespace,
code,
reason,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await?;
VarInt::from_u32(self.code).encode(w).await?;
self.reason.encode(w).await?;
Ok(())
}
}

moq-transport/src/message/go_away.rs

@ -1,21 +1,21 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError};
use bytes::{Buf, BufMut};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
#[derive(Debug)]
/// Sent by the server to indicate that the client should connect to a different server.
#[derive(Clone, Debug)]
pub struct GoAway {
pub url: String,
}
impl Decode for GoAway {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let url = String::decode(r)?;
impl GoAway {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let url = String::decode(r).await?;
Ok(Self { url })
}
}
impl Encode for GoAway {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.url.encode(w)
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.url.encode(w).await
}
}

moq-transport/src/message/mod.rs

@ -1,67 +1,112 @@
//! Low-level message sent over the wire, as defined in the specification.
//!
//! All of these messages are sent over a bidirectional QUIC stream.
//! This introduces some head-of-line blocking but preserves ordering.
//! The only exception is OBJECT "messages", which are sent over dedicated QUIC streams.
//!
//! Messages sent by the publisher:
//! - [Announce]
//! - [Unannounce]
//! - [SubscribeOk]
//! - [SubscribeError]
//! - [SubscribeReset]
//! - [Object]
//!
//! Messages sent by the subscriber:
//! - [Subscribe]
//! - [Unsubscribe]
//! - [AnnounceOk]
//! - [AnnounceError]
//!
//! Example flow:
//! ```test
//! -> ANNOUNCE namespace="foo"
//! <- ANNOUNCE_OK namespace="foo"
//! <- SUBSCRIBE id=0 namespace="foo" name="bar"
//! -> SUBSCRIBE_OK id=0
//! -> OBJECT id=0 sequence=69 priority=4 expires=30
//! -> OBJECT id=0 sequence=70 priority=4 expires=30
//! -> OBJECT id=0 sequence=71 priority=4 expires=30
//! <- SUBSCRIBE_STOP id=0
//! -> SUBSCRIBE_RESET id=0 code=206 reason="closed by peer"
//! ```
mod announce;
mod announce_error;
mod announce_ok;
mod announce_reset;
mod go_away;
mod receiver;
mod sender;
mod object;
mod subscribe;
mod subscribe_error;
mod subscribe_fin;
mod subscribe_ok;
mod subscribe_reset;
mod unannounce;
mod unsubscribe;
pub use announce::*;
pub use announce_error::*;
pub use announce_ok::*;
pub use announce_reset::*;
pub use go_away::*;
pub use receiver::*;
pub use sender::*;
pub use object::*;
pub use subscribe::*;
pub use subscribe_error::*;
pub use subscribe_fin::*;
pub use subscribe_ok::*;
pub use subscribe_reset::*;
pub use unannounce::*;
pub use unsubscribe::*;
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup;
use bytes::{Buf, BufMut};
use std::fmt;
// NOTE: This is forked from moq-transport-00.
// 1. SETUP role indicates local support ("I can subscribe"), not remote support ("server must publish")
// 2. SETUP_SERVER is id=2 to disambiguate
// 3. messages do not have a specified length.
// 4. messages are sent over a single bidirectional stream (after SETUP), not unidirectional streams.
// 5. SUBSCRIBE specifies the track_id, not SUBSCRIBE_OK
// 6. optional parameters are written in order, and zero when unset (setup, announce, subscribe)
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
// Use a macro to generate the message types rather than copy-paste.
// This implements a decode/encode method that uses the specified type.
macro_rules! message_types {
{$($name:ident = $val:expr,)*} => {
/// All supported message types.
#[derive(Clone)]
pub enum Message {
$($name($name)),*
}
impl Decode for Message {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let t = VarInt::decode(r)?;
impl Message {
pub async fn decode<R: AsyncRead>(r: &mut R, ext: &Extensions) -> Result<Self, DecodeError> {
let t = VarInt::decode(r).await?;
match t.into_inner() {
$($val => {
let msg = $name::decode(r)?;
let msg = $name::decode(r, ext).await?;
Ok(Self::$name(msg))
})*
_ => Err(DecodeError::InvalidType(t)),
_ => Err(DecodeError::InvalidMessage(t)),
}
}
}
impl Encode for Message {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, ext: &Extensions) -> Result<(), EncodeError> {
match self {
$(Self::$name(ref m) => {
VarInt::from_u32($val).encode(w)?;
m.encode(w)
VarInt::from_u32($val).encode(w).await?;
m.encode(w, ext).await
},)*
}
}
pub fn id(&self) -> VarInt {
match self {
$(Self::$name(_) => {
VarInt::from_u32($val)
},)*
}
}
pub fn name(&self) -> &'static str {
match self {
$(Self::$name(_) => {
stringify!($name)
},)*
}
}
@ -84,21 +129,32 @@ macro_rules! message_types {
}
}
// Just so we can use the macro above.
type SetupClient = setup::Client;
type SetupServer = setup::Server;
// Each message is prefixed with the given VarInt type.
message_types! {
// NOTE: Object and Setup are in other modules.
// Object = 0x0
SetupClient = 0x1,
SetupServer = 0x2,
// ObjectUnbounded = 0x2
// SetupClient = 0x40
// SetupServer = 0x41
// SUBSCRIBE family, sent by subscriber
Subscribe = 0x3,
Unsubscribe = 0xa,
// SUBSCRIBE family, sent by publisher
SubscribeOk = 0x4,
SubscribeError = 0x5,
SubscribeFin = 0xb,
SubscribeReset = 0xc,
// ANNOUNCE family, sent by publisher
Announce = 0x6,
Unannounce = 0x9,
// ANNOUNCE family, sent by subscriber
AnnounceOk = 0x7,
AnnounceError = 0x8,
// Misc
GoAway = 0x10,
}

View File

@ -0,0 +1,108 @@
use std::{io, time};
use tokio::io::AsyncReadExt;
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup;
/// Sent by the publisher as the header of each data stream.
#[derive(Clone, Debug)]
pub struct Object {
// An ID for this track.
// Proposal: https://github.com/moq-wg/moq-transport/issues/209
pub track: VarInt,
// The sequence number within the track.
pub group: VarInt,
// The sequence number within the group.
pub sequence: VarInt,
// The priority, where **smaller** values are sent first.
pub priority: u32,
// Cache the object for at most this many seconds.
// Zero means never expire.
pub expires: Option<time::Duration>,
/// An optional size, allowing multiple OBJECTs on the same stream.
pub size: Option<VarInt>,
}
impl Object {
pub async fn decode<R: AsyncRead>(r: &mut R, extensions: &setup::Extensions) -> Result<Self, DecodeError> {
// Try reading the first byte, returning a special error if the stream naturally ended.
let typ = match r.read_u8().await {
Ok(b) => VarInt::decode_byte(b, r).await?,
Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => return Err(DecodeError::Final),
Err(e) => return Err(e.into()),
};
let size_present = match typ.into_inner() {
0 => false,
2 => true,
_ => return Err(DecodeError::InvalidMessage(typ)),
};
let track = VarInt::decode(r).await?;
let group = VarInt::decode(r).await?;
let sequence = VarInt::decode(r).await?;
let priority = VarInt::decode(r).await?.try_into()?;
let expires = match extensions.object_expires {
true => match VarInt::decode(r).await?.into_inner() {
0 => None,
secs => Some(time::Duration::from_secs(secs)),
},
false => None,
};
// The presence of the size field depends on the type.
let size = match size_present {
true => Some(VarInt::decode(r).await?),
false => None,
};
Ok(Self {
track,
group,
sequence,
priority,
expires,
size,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, extensions: &setup::Extensions) -> Result<(), EncodeError> {
// The kind changes based on the presence of the size.
let kind = match self.size {
Some(_) => VarInt::from_u32(2),
None => VarInt::ZERO,
};
kind.encode(w).await?;
self.track.encode(w).await?;
self.group.encode(w).await?;
self.sequence.encode(w).await?;
VarInt::from_u32(self.priority).encode(w).await?;
// Round up if there are any fractional seconds.
let expires = match self.expires {
None => 0,
Some(time::Duration::ZERO) => return Err(EncodeError::InvalidValue), // there's no way of expressing zero currently.
Some(expires) if expires.subsec_nanos() > 0 => expires.as_secs() + 1,
Some(expires) => expires.as_secs(),
};
if extensions.object_expires {
VarInt::try_from(expires)?.encode(w).await?;
}
if let Some(size) = self.size {
size.encode(w).await?;
}
Ok(())
}
}
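The expires rounding in `Object::encode` is worth pinning down, since `None`, zero, and fractional durations all behave differently. A minimal sketch of that mapping, with the same semantics as the match above and a plain `u64` standing in for `VarInt`:

```rust
use std::time::Duration;

// None -> 0 on the wire ("never expires"); an explicit zero duration is
// rejected because 0 is already taken; fractional seconds round up.
fn expires_secs(expires: Option<Duration>) -> Result<u64, &'static str> {
    match expires {
        None => Ok(0),
        Some(Duration::ZERO) => Err("zero duration is not expressible"),
        Some(d) if d.subsec_nanos() > 0 => Ok(d.as_secs() + 1),
        Some(d) => Ok(d.as_secs()),
    }
}

fn main() {
    assert_eq!(expires_secs(None), Ok(0));
    assert_eq!(expires_secs(Some(Duration::from_millis(1500))), Ok(2));
    assert_eq!(expires_secs(Some(Duration::from_secs(3))), Ok(3));
    assert!(expires_secs(Some(Duration::ZERO)).is_err());
}
```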

View File

@ -1,49 +0,0 @@
use crate::coding::{Decode, DecodeError};
use crate::message::Message;
use bytes::{Buf, BytesMut};
use std::io::Cursor;
use webtransport_generic::AsyncRecvStream;
pub struct Receiver<R>
where
R: AsyncRecvStream, // TODO take RecvStream instead
{
stream: R,
buf: BytesMut, // data we've read but haven't fully decoded yet
}
impl<R> Receiver<R>
where
R: AsyncRecvStream,
{
pub fn new(stream: R) -> Self {
Self {
buf: BytesMut::new(),
stream,
}
}
// Read the next full message from the stream.
pub async fn recv(&mut self) -> anyhow::Result<Message> {
loop {
// Read the contents of the buffer
let mut peek = Cursor::new(&self.buf);
match Message::decode(&mut peek) {
Ok(msg) => {
// We've successfully decoded a message, so we can advance the buffer.
self.buf.advance(peek.position() as usize);
return Ok(msg);
}
Err(DecodeError::UnexpectedEnd) => {
// The decode failed, so we need to append more data.
self.stream.recv(&mut self.buf).await?;
}
Err(e) => return Err(e.into()),
}
}
}
}
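The peek-then-advance loop above is a general pattern: decode from a `Cursor` over the buffer, and only consume bytes once a full message parses. A self-contained sketch with a toy two-byte decoder standing in for `Message::decode`:

```rust
use bytes::{Buf, BytesMut};
use std::io::Cursor;

enum DecodeError {
    UnexpectedEnd, // the buffer doesn't hold a full message yet
}

// Toy decoder standing in for Message::decode.
fn decode_u16(r: &mut Cursor<&BytesMut>) -> Result<u16, DecodeError> {
    if r.remaining() < 2 {
        return Err(DecodeError::UnexpectedEnd);
    }
    Ok(r.get_u16())
}

fn main() {
    let mut buf = BytesMut::new();
    buf.extend_from_slice(&[0x01]); // partial message arrives
    let mut peek = Cursor::new(&buf);
    assert!(decode_u16(&mut peek).is_err()); // not enough data: wait for more

    buf.extend_from_slice(&[0x02]); // the rest arrives
    let mut peek = Cursor::new(&buf);
    let msg = decode_u16(&mut peek).ok().unwrap();
    buf.advance(peek.position() as usize); // consume only what was decoded
    assert_eq!(msg, 0x0102);
    assert!(buf.is_empty());
}
```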

View File

@ -1,68 +0,0 @@
use crate::coding::Encode;
use crate::message::Message;
use bytes::BytesMut;
use webtransport_generic::AsyncSendStream;
pub struct Sender<S>
where
S: AsyncSendStream, // TODO take SendStream instead
{
stream: S,
buf: BytesMut, // reuse a buffer to encode messages.
}
impl<S> Sender<S>
where
S: AsyncSendStream,
{
pub fn new(stream: S) -> Self {
Self {
buf: BytesMut::new(),
stream,
}
}
pub async fn send<T: Into<Message>>(&mut self, msg: T) -> anyhow::Result<()> {
let msg = msg.into();
self.buf.clear();
msg.encode(&mut self.buf)?;
self.stream.send(&mut self.buf).await?;
Ok(())
}
/*
// Helper that lets multiple threads send control messages.
pub fn share(self) -> ControlShared<S> {
ControlShared {
stream: Arc::new(Mutex::new(self)),
}
}
*/
}
/*
// Helper that allows multiple threads to send control messages.
// There's no equivalent for receiving since only one thread should be receiving at a time.
#[derive(Clone)]
pub struct SendControlShared<S>
where
S: AsyncSendStream,
{
stream: Arc<Mutex<SendControl<S>>>,
}
impl<S> SendControlShared<S>
where
S: AsyncSendStream,
{
pub async fn send<T: Into<Message>>(&mut self, msg: T) -> anyhow::Result<()> {
let mut stream = self.stream.lock().await;
stream.send(msg).await
}
}
*/
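The clear-encode-send loop in `send` reuses one `BytesMut` allocation across messages. A minimal sketch of that pattern, with a `Vec<u8>` standing in for the QUIC send stream and a toy length-prefixed frame standing in for `Message::encode`:

```rust
use bytes::{BufMut, BytesMut};

// Toy framing standing in for Message::encode.
fn encode_frame(buf: &mut BytesMut, payload: &[u8]) {
    buf.put_u16(payload.len() as u16);
    buf.put_slice(payload);
}

fn main() {
    let mut buf = BytesMut::new();
    let mut wire = Vec::new(); // stands in for the QUIC send stream
    for msg in [b"hello".as_slice(), b"world!".as_slice()] {
        buf.clear(); // reuse the same allocation for every message
        encode_frame(&mut buf, msg);
        wire.extend_from_slice(&buf);
    }
    assert_eq!(wire.len(), 2 + 5 + 2 + 6); // two prefixes plus two payloads
}
```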

View File

@ -1,39 +1,141 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, Params, VarInt};
use bytes::{Buf, BufMut};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
#[derive(Debug)]
/// Sent by the subscriber to request all future objects for the given track.
///
/// Objects will use the provided ID instead of the full track name, to save bytes.
#[derive(Clone, Debug)]
pub struct Subscribe {
// An ID we choose so we can map to the track_name.
/// An ID we choose so we can map to the track_name.
// Proposal: https://github.com/moq-wg/moq-transport/issues/209
pub track_id: VarInt,
pub id: VarInt,
// The track namespace.
pub track_namespace: String,
/// The track namespace.
///
/// Must be None if `extensions.subscribe_split` is false.
pub namespace: Option<String>,
// The track name.
pub track_name: String,
/// The track name.
pub name: String,
/// The start/end group/object.
pub start_group: SubscribeLocation,
pub start_object: SubscribeLocation,
pub end_group: SubscribeLocation,
pub end_object: SubscribeLocation,
/// Optional parameters
pub params: Params,
}
impl Decode for Subscribe {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let track_id = VarInt::decode(r)?;
let track_namespace = String::decode(r)?;
let track_name = String::decode(r)?;
impl Subscribe {
pub async fn decode<R: AsyncRead>(r: &mut R, ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let namespace = match ext.subscribe_split {
true => Some(String::decode(r).await?),
false => None,
};
let name = String::decode(r).await?;
let start_group = SubscribeLocation::decode(r).await?;
let start_object = SubscribeLocation::decode(r).await?;
let end_group = SubscribeLocation::decode(r).await?;
let end_object = SubscribeLocation::decode(r).await?;
// You can't have a start object without a start group.
if start_group == SubscribeLocation::None && start_object != SubscribeLocation::None {
return Err(DecodeError::InvalidSubscribeLocation);
}
// You can't have an end object without an end group.
if end_group == SubscribeLocation::None && end_object != SubscribeLocation::None {
return Err(DecodeError::InvalidSubscribeLocation);
}
// NOTE: There are some more location restrictions in the draft, but they're enforced at a higher level.
let params = Params::decode(r).await?;
Ok(Self {
track_id,
track_namespace,
track_name,
id,
namespace,
name,
start_group,
start_object,
end_group,
end_object,
params,
})
}
}
impl Encode for Subscribe {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.track_id.encode(w)?;
self.track_namespace.encode(w)?;
self.track_name.encode(w)?;
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
if self.namespace.is_some() != ext.subscribe_split {
panic!("namespace must be None if subscribe_split is false");
}
if ext.subscribe_split {
self.namespace.as_ref().unwrap().encode(w).await?;
}
self.name.encode(w).await?;
self.start_group.encode(w).await?;
self.start_object.encode(w).await?;
self.end_group.encode(w).await?;
self.end_object.encode(w).await?;
self.params.encode(w).await?;
Ok(())
}
}
/// Signal where the subscription should begin, relative to the current cache.
#[derive(Clone, Debug, PartialEq)]
pub enum SubscribeLocation {
None,
Absolute(VarInt),
Latest(VarInt),
Future(VarInt),
}
impl SubscribeLocation {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let kind = VarInt::decode(r).await?;
match kind.into_inner() {
0 => Ok(Self::None),
1 => Ok(Self::Absolute(VarInt::decode(r).await?)),
2 => Ok(Self::Latest(VarInt::decode(r).await?)),
3 => Ok(Self::Future(VarInt::decode(r).await?)),
_ => Err(DecodeError::InvalidSubscribeLocation),
}
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
match self {
Self::None => {
VarInt::from_u32(0).encode(w).await?;
}
Self::Absolute(val) => {
VarInt::from_u32(1).encode(w).await?;
val.encode(w).await?;
}
Self::Latest(val) => {
VarInt::from_u32(2).encode(w).await?;
val.encode(w).await?;
}
Self::Future(val) => {
VarInt::from_u32(3).encode(w).await?;
val.encode(w).await?;
}
}
Ok(())
}
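The `subscribe_split` gating means the namespace is only on the wire when the extension was negotiated, and the struct field must agree (the encode above panics on a mismatch). A small sketch of that invariant with stand-in types, returning an error instead of panicking:

```rust
struct Extensions {
    subscribe_split: bool,
}

// The namespace field must be Some exactly when the extension is on.
fn encode_namespace(ns: &Option<String>, ext: &Extensions, out: &mut Vec<u8>) -> Result<(), &'static str> {
    match (ns, ext.subscribe_split) {
        (Some(ns), true) => {
            out.extend_from_slice(ns.as_bytes()); // toy string encoding
            Ok(())
        }
        (None, false) => Ok(()), // field omitted from the wire entirely
        _ => Err("namespace must be Some iff subscribe_split is negotiated"),
    }
}

fn main() {
    let legacy = Extensions { subscribe_split: false };
    assert!(encode_namespace(&None, &legacy, &mut Vec::new()).is_ok());
    assert!(encode_namespace(&Some("foo".into()), &legacy, &mut Vec::new()).is_err());
}
```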

View File

@ -1,36 +1,35 @@
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup::Extensions;
use bytes::{Buf, BufMut};
#[derive(Debug)]
/// Sent by the publisher to reject a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeError {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
// The ID for this track.
pub track_id: VarInt,
// The ID for this subscription.
pub id: VarInt,
// An error code.
pub code: VarInt,
pub code: u32,
// An optional, human-readable reason.
pub reason: String,
}
impl Decode for SubscribeError {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let track_id = VarInt::decode(r)?;
let code = VarInt::decode(r)?;
let reason = String::decode(r)?;
impl SubscribeError {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let code = VarInt::decode(r).await?.try_into()?;
let reason = String::decode(r).await?;
Ok(Self { track_id, code, reason })
Ok(Self { id, code, reason })
}
}
impl Encode for SubscribeError {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.track_id.encode(w)?;
self.code.encode(w)?;
self.reason.encode(w)?;
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
VarInt::from_u32(self.code).encode(w).await?;
self.reason.encode(w).await?;
Ok(())
}

View File

@ -0,0 +1,37 @@
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup::Extensions;
/// Sent by the publisher to cleanly terminate a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeFin {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
/// The ID for this subscription.
pub id: VarInt,
/// The final group/object sent on this subscription.
pub final_group: VarInt,
pub final_object: VarInt,
}
impl SubscribeFin {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let final_group = VarInt::decode(r).await?;
let final_object = VarInt::decode(r).await?;
Ok(Self {
id,
final_group,
final_object,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
self.final_group.encode(w).await?;
self.final_object.encode(w).await?;
Ok(())
}
}

View File

@ -1,36 +1,31 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use std::time::Duration;
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
use bytes::{Buf, BufMut};
#[derive(Debug)]
/// Sent by the publisher to accept a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeOk {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
/// The ID for this track.
pub id: VarInt,
// The ID for this track.
pub track_id: VarInt,
// The subscription will end after this duration has elapsed.
// A value of zero is invalid.
pub expires: Option<Duration>,
/// The subscription will expire in this many milliseconds.
pub expires: VarInt,
}
impl Decode for SubscribeOk {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let track_id = VarInt::decode(r)?;
let expires = Duration::decode(r)?;
let expires = if expires == Duration::ZERO { None } else { Some(expires) };
Ok(Self { track_id, expires })
impl SubscribeOk {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let expires = VarInt::decode(r).await?;
Ok(Self { id, expires })
}
}
impl Encode for SubscribeOk {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
self.track_id.encode(w)?;
self.expires.unwrap_or_default().encode(w)?;
impl SubscribeOk {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
self.expires.encode(w).await?;
Ok(())
}
}

View File

@ -0,0 +1,50 @@
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup::Extensions;
/// Sent by the publisher to terminate a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeReset {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
/// The ID for this subscription.
pub id: VarInt,
/// An error code.
pub code: u32,
/// An optional, human-readable reason.
pub reason: String,
/// The final group/object sent on this subscription.
pub final_group: VarInt,
pub final_object: VarInt,
}
impl SubscribeReset {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let code = VarInt::decode(r).await?.try_into()?;
let reason = String::decode(r).await?;
let final_group = VarInt::decode(r).await?;
let final_object = VarInt::decode(r).await?;
Ok(Self {
id,
code,
reason,
final_group,
final_object,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
VarInt::from_u32(self.code).encode(w).await?;
self.reason.encode(w).await?;
self.final_group.encode(w).await?;
self.final_object.encode(w).await?;
Ok(())
}
}

View File

@ -0,0 +1,25 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the publisher to terminate an Announce.
#[derive(Clone, Debug)]
pub struct Unannounce {
// Echo back the namespace that was reset
pub namespace: String,
}
impl Unannounce {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
Ok(Self { namespace })
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await?;
Ok(())
}
}

View File

@ -0,0 +1,27 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the subscriber to terminate a Subscribe.
#[derive(Clone, Debug)]
pub struct Unsubscribe {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
// The ID for this subscription.
pub id: VarInt,
}
impl Unsubscribe {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
Ok(Self { id })
}
}
impl Unsubscribe {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
Ok(())
}
}

View File

@ -1,53 +0,0 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use bytes::{Buf, BufMut};
#[derive(Debug)]
pub struct Header {
// An ID for this track.
// Proposal: https://github.com/moq-wg/moq-transport/issues/209
pub track: VarInt,
// The group sequence number.
pub group: VarInt,
// The object sequence number.
pub sequence: VarInt,
// The priority/send order.
pub send_order: VarInt,
}
impl Decode for Header {
fn decode<R: Buf>(r: &mut R) -> Result<Self, DecodeError> {
let typ = VarInt::decode(r)?;
if typ.into_inner() != 0 {
return Err(DecodeError::InvalidType(typ));
}
// NOTE: size has been omitted
let track = VarInt::decode(r)?;
let group = VarInt::decode(r)?;
let sequence = VarInt::decode(r)?;
let send_order = VarInt::decode(r)?;
Ok(Self {
track,
group,
sequence,
send_order,
})
}
}
impl Encode for Header {
fn encode<W: BufMut>(&self, w: &mut W) -> Result<(), EncodeError> {
VarInt::from_u32(0).encode(w)?;
self.track.encode(w)?;
self.group.encode(w)?;
self.sequence.encode(w)?;
self.send_order.encode(w)?;
Ok(())
}
}

View File

@ -1,7 +0,0 @@
mod header;
mod receiver;
mod sender;
pub use header::*;
pub use receiver::*;
pub use sender::*;

View File

@ -1,128 +0,0 @@
use std::io::Cursor;
use std::task::{self, Poll};
use crate::coding::{Decode, DecodeError};
use crate::object::Header;
use anyhow::Context;
use bytes::{Buf, BufMut, Bytes, BytesMut};
use tokio::task::JoinSet;
use webtransport_generic::RecvStream as GenericRecvStream;
use webtransport_generic::{AsyncRecvStream, AsyncSession};
pub struct Receiver<S>
where
S: AsyncSession,
{
session: S,
// Streams that we've accepted but haven't read the header from yet.
streams: JoinSet<anyhow::Result<(Header, RecvStream<S::RecvStream>)>>,
}
impl<S> Receiver<S>
where
S: AsyncSession,
S::RecvStream: AsyncRecvStream,
{
pub fn new(session: S) -> Self {
Self {
session,
streams: JoinSet::new(),
}
}
pub async fn recv(&mut self) -> anyhow::Result<(Header, RecvStream<S::RecvStream>)> {
loop {
tokio::select! {
res = self.session.accept_uni() => {
let stream = res.context("failed to accept stream")?;
self.streams.spawn(async move { Self::read(stream).await });
},
res = self.streams.join_next(), if !self.streams.is_empty() => {
return res.unwrap().context("failed to run join set")?;
}
}
}
}
async fn read(mut stream: S::RecvStream) -> anyhow::Result<(Header, RecvStream<S::RecvStream>)> {
let mut buf = BytesMut::new();
loop {
// Read more data into the buffer.
stream.recv(&mut buf).await?;
// Use a cursor to read the buffer and remember how much we read.
let mut read = Cursor::new(&mut buf);
let header = match Header::decode(&mut read) {
Ok(header) => header,
Err(DecodeError::UnexpectedEnd) => continue,
Err(err) => return Err(err.into()),
};
// We parsed a full header, advance the buffer.
let size = read.position() as usize;
buf.advance(size);
let buf = buf.freeze();
// log::info!("received stream: {:?}", header);
let stream = RecvStream::new(buf, stream);
return Ok((header, stream));
}
}
}
// Unfortunately, we need to wrap RecvStream with a buffer since moq-transport::Coding only supports buffered reads.
// We first serve any data in the buffer, then we poll the stream.
// TODO fix this so we don't need the wrapper.
pub struct RecvStream<R>
where
R: GenericRecvStream,
{
buf: Bytes,
stream: R,
}
impl<R> RecvStream<R>
where
R: GenericRecvStream,
{
pub(crate) fn new(buf: Bytes, stream: R) -> Self {
Self { buf, stream }
}
pub fn stop(&mut self, code: u32) {
self.stream.stop(code)
}
}
impl<R> GenericRecvStream for RecvStream<R>
where
R: GenericRecvStream,
{
type Error = R::Error;
fn poll_recv<B: BufMut>(
&mut self,
cx: &mut task::Context<'_>,
buf: &mut B,
) -> Poll<Result<Option<usize>, Self::Error>> {
if !self.buf.is_empty() {
let size = self.buf.len();
buf.put(&mut self.buf);
let size = size - self.buf.len();
Poll::Ready(Ok(Some(size)))
} else {
self.stream.poll_recv(cx, buf)
}
}
fn stop(&mut self, error_code: u32) {
self.stream.stop(error_code)
}
}
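The wrapper above exists to serve the bytes left over from header parsing before polling the stream again. The same buffer-then-stream idea, sketched with `std::io::Read` and `Chain` in place of the webtransport traits:

```rust
use std::io::{Cursor, Read};

fn main() {
    // Bytes left over after parsing the header...
    let already_buffered = Cursor::new(vec![1u8, 2]);
    // ...are served before anything still on the live stream.
    let rest_of_stream = Cursor::new(vec![3u8, 4, 5]);
    let mut combined = already_buffered.chain(rest_of_stream);

    let mut out = Vec::new();
    combined.read_to_end(&mut out).unwrap();
    assert_eq!(out, vec![1, 2, 3, 4, 5]);
}
```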

View File

@ -1,229 +0,0 @@
use std::sync::{Mutex, Weak};
use std::task::{self, Poll};
use std::{collections::BinaryHeap, sync::Arc};
use anyhow::Context;
use bytes::{Buf, BytesMut};
use crate::coding::Encode;
use crate::object::Header;
use webtransport_generic::SendStream as GenericSendStream;
use webtransport_generic::{AsyncSendStream, AsyncSession};
// Allow this to be cloned so we can have multiple senders.
pub struct Sender<S>
where
S: AsyncSession,
S::SendStream: AsyncSendStream,
{
// The session.
session: S,
// A reusable buffer for the stream header.
buf: BytesMut,
// Register new streams with an inner object that will prioritize them.
inner: Arc<Mutex<SenderInner<S::SendStream>>>,
}
impl<S> Sender<S>
where
S: AsyncSession,
S::SendStream: AsyncSendStream,
{
pub fn new(session: S) -> Self {
let inner = SenderInner::new();
Self {
session,
buf: BytesMut::new(),
inner: Arc::new(Mutex::new(inner)),
}
}
pub async fn open(&mut self, header: Header) -> anyhow::Result<SendStream<S::SendStream>> {
let stream = self.session.open_uni().await.context("failed to open uni stream")?;
let mut stream = {
let mut inner = self.inner.lock().unwrap();
inner.register(stream, header.send_order.into_inner())?
};
self.buf.clear();
header.encode(&mut self.buf).unwrap();
stream.send_all(&mut self.buf).await.context("failed to write header")?;
// log::info!("created stream: {:?}", header);
Ok(stream)
}
}
impl<S> Clone for Sender<S>
where
S: AsyncSession,
S::SendStream: AsyncSendStream,
{
fn clone(&self) -> Self {
Sender {
session: self.session.clone(),
buf: BytesMut::new(),
inner: self.inner.clone(),
}
}
}
struct SenderInner<S>
where
S: GenericSendStream,
{
// Quinn supports an i32 for priority, but the wire format is a u64.
// Our workaround is to keep a list of streams in priority order and use the index as the priority.
// This involves more work, so TODO either increase the Quinn size or reduce the wire size.
ordered: BinaryHeap<SendOrder<S>>,
ordered_swap: BinaryHeap<SendOrder<S>>, // reuse memory to avoid allocations
}
impl<S> SenderInner<S>
where
S: GenericSendStream,
{
fn new() -> Self {
Self {
ordered: BinaryHeap::new(),
ordered_swap: BinaryHeap::new(),
}
}
pub fn register(&mut self, stream: S, order: u64) -> anyhow::Result<SendStream<S>> {
let stream = SendStream::new(stream);
let order = SendOrder::new(&stream, order);
// Add the priority to our existing list.
self.ordered.push(order);
// Loop through the list and update the priorities of any still active streams.
let mut index = 0;
while let Some(stream) = self.ordered.pop() {
if stream.set_priority(index).is_some() {
// Add the stream to the new list so it'll be in sorted order.
self.ordered_swap.push(stream);
index += 1;
}
}
// Swap the lists so we can reuse the memory.
std::mem::swap(&mut self.ordered, &mut self.ordered_swap);
Ok(stream)
}
}
struct SendOrder<S>
where
S: GenericSendStream,
{
// We use Weak here so we don't prevent the stream from being closed when dereferenced.
// set_priority() will return None if the stream was closed.
stream: Weak<Mutex<S>>,
order: u64,
}
impl<S> SendOrder<S>
where
S: GenericSendStream,
{
fn new(stream: &SendStream<S>, order: u64) -> Self {
let stream = stream.weak();
Self { stream, order }
}
fn set_priority(&self, index: i32) -> Option<()> {
let stream = self.stream.upgrade()?;
let mut stream = stream.lock().unwrap();
stream.set_priority(index);
Some(())
}
}
impl<S> PartialEq for SendOrder<S>
where
S: GenericSendStream,
{
fn eq(&self, other: &Self) -> bool {
self.order == other.order
}
}
impl<S> Eq for SendOrder<S> where S: GenericSendStream {}
impl<S> PartialOrd for SendOrder<S>
where
S: GenericSendStream,
{
fn partial_cmp(&self, other: &Self) -> Option<std::cmp::Ordering> {
// We reverse the order so the lower send order is higher priority.
other.order.partial_cmp(&self.order)
}
}
impl<S> Ord for SendOrder<S>
where
S: GenericSendStream,
{
fn cmp(&self, other: &Self) -> std::cmp::Ordering {
// We reverse the order so the lower send order is higher priority.
other.order.cmp(&self.order)
}
}
// Ugh, so we need to wrap SendStream with a mutex because we need to be able to call set_priority on it.
// The problem is that set_priority takes an i32, while send_order is a VarInt
// So the solution is to maintain a priority queue of active streams and constantly update the priority with their index.
// So the library might update the priority of the stream at any point, while the application might simultaneously write to it.
pub struct SendStream<S>
where
S: GenericSendStream,
{
// All SendStream methods are &mut, so we need to wrap them with an internal mutex.
inner: Arc<Mutex<S>>,
}
impl<S> SendStream<S>
where
S: GenericSendStream,
{
pub(crate) fn new(stream: S) -> Self {
Self {
inner: Arc::new(Mutex::new(stream)),
}
}
pub fn weak(&self) -> Weak<Mutex<S>> {
Arc::<Mutex<S>>::downgrade(&self.inner)
}
}
impl<S> GenericSendStream for SendStream<S>
where
S: GenericSendStream,
{
type Error = S::Error;
fn poll_send<B: Buf>(&mut self, cx: &mut task::Context<'_>, buf: &mut B) -> Poll<Result<usize, Self::Error>> {
self.inner.lock().unwrap().poll_send(cx, buf)
}
fn reset(&mut self, reset_code: u32) {
self.inner.lock().unwrap().reset(reset_code)
}
// The application should NOT use this method.
// The library will automatically set the stream priority on creation based on the header.
fn set_priority(&mut self, order: i32) {
self.inner.lock().unwrap().set_priority(order)
}
}
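The reindexing trick: wire-format send orders are u64, but `set_priority` only takes an i32, so the old code ranked the active streams and used the rank as the priority. A minimal sketch of that ranking with a reverse-ordered heap, plain u64s standing in for the weak stream handles:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn main() {
    // Wire-format send orders: too wide for set_priority's i32.
    let send_orders: Vec<u64> = vec![42, 7, 1_000_000, 7];

    // Reverse the comparison so the smallest send_order pops first.
    let mut heap: BinaryHeap<Reverse<u64>> = send_orders.into_iter().map(Reverse).collect();

    // Assign each stream its rank; the rank fits comfortably in an i32.
    let mut index: i32 = 0;
    while let Some(Reverse(order)) = heap.pop() {
        println!("send_order={order} -> priority index {index}");
        index += 1;
    }
}
```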

View File

@ -0,0 +1,73 @@
use super::{Control, Publisher, SessionError, Subscriber};
use crate::{cache::broadcast, setup};
use webtransport_quinn::Session;
/// An endpoint that connects to a URL to publish and/or consume live streams.
pub struct Client {}
impl Client {
/// Connect using an established WebTransport session, performing the MoQ handshake as a publisher.
pub async fn publisher(session: Session, source: broadcast::Subscriber) -> Result<Publisher, SessionError> {
let control = Self::send_setup(&session, setup::Role::Publisher).await?;
let publisher = Publisher::new(session, control, source);
Ok(publisher)
}
/// Connect using an established WebTransport session, performing the MoQ handshake as a subscriber.
pub async fn subscriber(session: Session, source: broadcast::Publisher) -> Result<Subscriber, SessionError> {
let control = Self::send_setup(&session, setup::Role::Subscriber).await?;
let subscriber = Subscriber::new(session, control, source);
Ok(subscriber)
}
// TODO support performing both roles
/*
pub async fn connect(self) -> anyhow::Result<(Publisher, Subscriber)> {
self.connect_role(setup::Role::Both).await
}
*/
async fn send_setup(session: &Session, role: setup::Role) -> Result<Control, SessionError> {
let mut control = session.open_bi().await?;
let versions: setup::Versions = [setup::Version::DRAFT_01, setup::Version::KIXEL_01].into();
let client = setup::Client {
role,
versions: versions.clone(),
params: Default::default(),
// Offer all extensions
extensions: setup::Extensions {
object_expires: true,
subscriber_id: true,
subscribe_split: true,
},
};
client.encode(&mut control.0).await?;
let mut server = setup::Server::decode(&mut control.1).await?;
match server.version {
setup::Version::DRAFT_01 => {
// We always require this extension
server.extensions.require_subscriber_id()?;
if server.role.is_publisher() {
// We only require object expires if we're a subscriber, so we don't cache objects indefinitely.
server.extensions.require_object_expires()?;
}
}
setup::Version::KIXEL_01 => {
// KIXEL_01 didn't support extensions; all were enabled.
server.extensions = client.extensions.clone()
}
_ => return Err(SessionError::Version(versions, [server.version].into())),
}
let control = Control::new(control.0, control.1, server.extensions);
Ok(control)
}
}
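Version negotiation above has two shapes: DRAFT_01 takes the extensions the server echoed back (and some are mandatory for us), while KIXEL_01 predates extensions entirely. A sketch of that decision with stand-in types; only the subscriber_id requirement is modeled here:

```rust
#[derive(Clone, Copy, Debug)]
enum Version {
    Draft01,
    Kixel01,
}

#[derive(Clone, Default, Debug)]
struct Extensions {
    object_expires: bool,
    subscriber_id: bool,
    subscribe_split: bool,
}

fn negotiate(server_version: Version, offered: Extensions, server_ext: Extensions) -> Result<Extensions, String> {
    match server_version {
        Version::Draft01 => {
            // We only get the extensions the server echoed, and some are mandatory.
            if !server_ext.subscriber_id {
                return Err("required extension not offered: subscriber_id".into());
            }
            Ok(server_ext)
        }
        // KIXEL_01 predates extensions, so everything we offered is assumed on.
        Version::Kixel01 => Ok(offered),
    }
}

fn main() {
    let offered = Extensions { object_expires: true, subscriber_id: true, subscribe_split: true };
    // Legacy path: the full offer is assumed on.
    assert!(negotiate(Version::Kixel01, offered.clone(), Extensions::default()).is_ok());
    // Draft-01 path: the server must have echoed subscriber_id.
    assert!(negotiate(Version::Draft01, offered, Extensions::default()).is_err());
}
```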

View File

@ -0,0 +1,45 @@
// A helper class to guard sending control messages behind a Mutex.
use std::{fmt, sync::Arc};
use tokio::sync::Mutex;
use webtransport_quinn::{RecvStream, SendStream};
use super::SessionError;
use crate::{message::Message, setup::Extensions};
#[derive(Debug, Clone)]
pub(crate) struct Control {
send: Arc<Mutex<SendStream>>,
recv: Arc<Mutex<RecvStream>>,
pub ext: Extensions,
}
impl Control {
pub fn new(send: SendStream, recv: RecvStream, ext: Extensions) -> Self {
Self {
send: Arc::new(Mutex::new(send)),
recv: Arc::new(Mutex::new(recv)),
ext,
}
}
pub async fn send<T: Into<Message> + fmt::Debug>(&self, msg: T) -> Result<(), SessionError> {
let mut stream = self.send.lock().await;
log::info!("sending message: {:?}", msg);
msg.into()
.encode(&mut *stream, &self.ext)
.await
.map_err(|e| SessionError::Unknown(e.to_string()))?;
Ok(())
}
// It's likely a mistake to call this from two different tasks, but it's easier to just support it.
pub async fn recv(&self) -> Result<Message, SessionError> {
let mut stream = self.recv.lock().await;
let msg = Message::decode(&mut *stream, &self.ext)
.await
.map_err(|e| SessionError::Unknown(e.to_string()))?;
Ok(msg)
}
}
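`Control` wraps each direction in its own `Arc<Mutex<_>>` so any task can send without coordinating, and the lock is held for exactly one message. A runnable sketch of that pattern, with a `Vec<String>` standing in for the send stream:

```rust
use std::sync::Arc;
use tokio::sync::Mutex;

#[derive(Clone)]
struct Control {
    send: Arc<Mutex<Vec<String>>>, // stands in for the shared SendStream
}

impl Control {
    async fn send(&self, msg: &str) {
        // The lock is held for exactly one message, so concurrent senders
        // can never interleave bytes mid-message.
        self.send.lock().await.push(msg.to_string());
    }
}

#[tokio::main]
async fn main() {
    let control = Control {
        send: Arc::new(Mutex::new(Vec::new())),
    };
    let (a, b) = (control.clone(), control.clone());
    tokio::join!(a.send("ANNOUNCE"), b.send("SUBSCRIBE_OK"));
    println!("sent: {:?}", *control.send.lock().await);
}
```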

View File

@ -0,0 +1,101 @@
use crate::{cache, coding, setup, MoqError, VarInt};
#[derive(thiserror::Error, Debug)]
pub enum SessionError {
#[error("webtransport error: {0}")]
Session(#[from] webtransport_quinn::SessionError),
#[error("cache error: {0}")]
Cache(#[from] cache::CacheError),
#[error("encode error: {0}")]
Encode(#[from] coding::EncodeError),
#[error("decode error: {0}")]
Decode(#[from] coding::DecodeError),
#[error("unsupported versions: client={0:?} server={1:?}")]
Version(setup::Versions, setup::Versions),
#[error("incompatible roles: client={0:?} server={1:?}")]
RoleIncompatible(setup::Role, setup::Role),
/// An error occurred while reading from the QUIC stream.
#[error("failed to read from stream: {0}")]
Read(#[from] webtransport_quinn::ReadError),
/// An error occurred while writing to the QUIC stream.
#[error("failed to write to stream: {0}")]
Write(#[from] webtransport_quinn::WriteError),
/// The role negotiated in the handshake was violated. For example, a publisher sent a SUBSCRIBE, or a subscriber sent an OBJECT.
#[error("role violation: msg={0}")]
RoleViolation(VarInt),
/// Our enforced stream mapping was disrespected.
#[error("stream mapping conflict")]
StreamMapping,
/// The priority was invalid.
#[error("invalid priority: {0}")]
InvalidPriority(VarInt),
/// The size was invalid.
#[error("invalid size: {0}")]
InvalidSize(VarInt),
/// A required extension was not offered.
#[error("required extension not offered: {0:?}")]
RequiredExtension(VarInt),
/// An unclassified error because I'm lazy. TODO classify these errors
#[error("unknown error: {0}")]
Unknown(String),
}
impl MoqError for SessionError {
/// An integer code that is sent over the wire.
fn code(&self) -> u32 {
match self {
Self::Cache(err) => err.code(),
Self::RoleIncompatible(..) => 406,
Self::RoleViolation(..) => 405,
Self::StreamMapping => 409,
Self::Unknown(_) => 500,
Self::Write(_) => 501,
Self::Read(_) => 502,
Self::Session(_) => 503,
Self::Version(..) => 406,
Self::Encode(_) => 500,
Self::Decode(_) => 500,
Self::InvalidPriority(_) => 400,
Self::InvalidSize(_) => 400,
Self::RequiredExtension(_) => 426,
}
}
/// A reason that is sent over the wire.
fn reason(&self) -> String {
match self {
Self::Cache(err) => err.reason(),
Self::RoleViolation(kind) => format!("role violation for message type {:?}", kind),
Self::RoleIncompatible(client, server) => {
format!(
"role incompatible: client wanted {:?} but server wanted {:?}",
client, server
)
}
Self::Read(err) => format!("read error: {}", err),
Self::Write(err) => format!("write error: {}", err),
Self::Session(err) => format!("session error: {}", err),
Self::Unknown(err) => format!("unknown error: {}", err),
Self::Version(client, server) => format!("unsupported versions: client={:?} server={:?}", client, server),
Self::Encode(err) => format!("encode error: {}", err),
Self::Decode(err) => format!("decode error: {}", err),
Self::StreamMapping => "streaming mapping conflict".to_owned(),
Self::InvalidPriority(priority) => format!("invalid priority: {}", priority),
Self::InvalidSize(size) => format!("invalid size: {}", size),
Self::RequiredExtension(id) => format!("required extension was missing: {:?}", id),
}
}
}
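A sketch of the `MoqError` contract these errors implement: a numeric code for the wire plus a human-readable reason. The trait shape matches the diff; the toy error type and its codes are stand-ins, not the crate's actual values:

```rust
trait MoqError {
    fn code(&self) -> u32;
    fn reason(&self) -> String;
}

#[derive(Debug)]
enum ToyError {
    NotFound,
    Stopped,
}

impl MoqError for ToyError {
    fn code(&self) -> u32 {
        match self {
            ToyError::NotFound => 404, // stand-in codes
            ToyError::Stopped => 206,
        }
    }

    fn reason(&self) -> String {
        match self {
            ToyError::NotFound => "not found".to_string(),
            ToyError::Stopped => "stopped by peer".to_string(),
        }
    }
}

fn main() {
    for err in [ToyError::NotFound, ToyError::Stopped] {
        // This pair is what ends up on the wire when closing a subscription.
        println!("code={} reason={}", err.code(), err.reason());
    }
}
```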

View File

@ -1,109 +1,27 @@
use anyhow::Context;
//! A MoQ Transport session, on top of a WebTransport session, on top of a QUIC connection.
//!
//! The handshake is relatively simple but split into different steps.
//! All of these handshakes slightly differ depending on if the endpoint is a client or server.
//! 1. Complete the QUIC handshake.
//! 2. Complete the WebTransport handshake.
//! 3. Complete the MoQ handshake.
//!
//! Use [Client] or [Server] for the MoQ handshake depending on the endpoint.
//! Then, decide if you want to create a [Publisher] or [Subscriber], or both (TODO).
//!
//! A [Publisher] can announce broadcasts, which will automatically be served over the network.
//! A [Subscriber] can subscribe to broadcasts, which will automatically be served over the network.
use crate::{message, object, setup};
use webtransport_generic::{AsyncRecvStream, AsyncSendStream, AsyncSession};
mod client;
mod control;
mod error;
mod publisher;
mod server;
mod subscriber;
pub struct Session<S>
where
S: AsyncSession,
S::SendStream: AsyncSendStream,
S::RecvStream: AsyncRecvStream,
{
pub send_control: message::Sender<S::SendStream>,
pub recv_control: message::Receiver<S::RecvStream>,
pub send_objects: object::Sender<S>,
pub recv_objects: object::Receiver<S>,
}
impl<S> Session<S>
where
S: AsyncSession,
S::SendStream: AsyncSendStream,
S::RecvStream: AsyncRecvStream,
{
/// Called by a server with an established WebTransport session.
// TODO close the session with an error code
pub async fn accept(session: S, role: setup::Role) -> anyhow::Result<Self> {
let (send, recv) = session.accept_bi().await.context("failed to accept bidi stream")?;
let mut send_control = message::Sender::new(send);
let mut recv_control = message::Receiver::new(recv);
let setup_client = match recv_control.recv().await.context("failed to read SETUP")? {
message::Message::SetupClient(setup) => setup,
_ => anyhow::bail!("expected CLIENT SETUP"),
};
setup_client
.versions
.iter()
.find(|version| **version == setup::Version::DRAFT_00)
.context("no supported versions")?;
if !setup_client.role.compatible(role) {
anyhow::bail!("incompatible roles: {:?} {:?}", setup_client.role, role);
}
let setup_server = setup::Server {
role,
version: setup::Version::DRAFT_00,
};
send_control
.send(message::Message::SetupServer(setup_server))
.await
.context("failed to send setup server")?;
let send_objects = object::Sender::new(session.clone());
let recv_objects = object::Receiver::new(session.clone());
Ok(Session {
send_control,
recv_control,
send_objects,
recv_objects,
})
}
/// Called by a client with an established WebTransport session.
pub async fn connect(session: S, role: setup::Role) -> anyhow::Result<Self> {
let (send, recv) = session.open_bi().await.context("failed to oen bidi stream")?;
let mut send_control = message::Sender::new(send);
let mut recv_control = message::Receiver::new(recv);
let setup_client = setup::Client {
role,
versions: vec![setup::Version::DRAFT_00].into(),
path: "".to_string(),
};
send_control
.send(message::Message::SetupClient(setup_client))
.await
.context("failed to send SETUP CLIENT")?;
let setup_server = match recv_control.recv().await.context("failed to read SETUP")? {
message::Message::SetupServer(setup) => setup,
_ => anyhow::bail!("expected SERVER SETUP"),
};
if setup_server.version != setup::Version::DRAFT_00 {
anyhow::bail!("unsupported version: {:?}", setup_server.version);
}
if !setup_server.role.compatible(role) {
anyhow::bail!("incompatible roles: {:?} {:?}", role, setup_server.role);
}
let send_objects = object::Sender::new(session.clone());
let recv_objects = object::Receiver::new(session.clone());
Ok(Session {
send_control,
recv_control,
send_objects,
recv_objects,
})
}
}
pub use client::*;
pub(crate) use control::*;
pub use error::*;
pub use publisher::*;
pub use server::*;
pub use subscriber::*;

View File

@ -0,0 +1,234 @@
use std::{
collections::{hash_map, HashMap},
sync::{Arc, Mutex},
};
use tokio::task::AbortHandle;
use webtransport_quinn::Session;
use crate::{
cache::{broadcast, segment, track, CacheError},
message,
message::Message,
MoqError, VarInt,
};
use super::{Control, SessionError};
/// Serves broadcasts over the network, automatically handling subscriptions and caching.
// TODO Clone specific fields when a task actually needs it.
#[derive(Clone, Debug)]
pub struct Publisher {
// A map of active subscriptions, containing an abort handle to cancel them.
subscribes: Arc<Mutex<HashMap<VarInt, AbortHandle>>>,
webtransport: Session,
control: Control,
source: broadcast::Subscriber,
}
impl Publisher {
pub(crate) fn new(webtransport: Session, control: Control, source: broadcast::Subscriber) -> Self {
Self {
webtransport,
control,
subscribes: Default::default(),
source,
}
}
// TODO Serve a broadcast without sending an ANNOUNCE.
// fn serve(&mut self, broadcast: broadcast::Subscriber) -> Result<(), SessionError> {
// TODO Wait until the next subscribe that doesn't route to an ANNOUNCE.
// pub async fn subscribed(&mut self) -> Result<track::Producer, SessionError> {
pub async fn run(mut self) -> Result<(), SessionError> {
let res = self.run_inner().await;
// Terminate all active subscribes on error.
self.subscribes
.lock()
.unwrap()
.drain()
.for_each(|(_, abort)| abort.abort());
res
}
pub async fn run_inner(&mut self) -> Result<(), SessionError> {
loop {
tokio::select! {
stream = self.webtransport.accept_uni() => {
stream?;
return Err(SessionError::RoleViolation(VarInt::ZERO));
}
// NOTE: this is not cancel safe, but it's fine since the other branches are fatal.
msg = self.control.recv() => {
let msg = msg?;
log::info!("message received: {:?}", msg);
if let Err(err) = self.recv_message(&msg).await {
log::warn!("message error: {:?} {:?}", err, msg);
}
},
// No more broadcasts are available.
err = self.source.closed() => {
self.webtransport.close(err.code(), err.reason().as_bytes());
return Ok(());
},
}
}
}
async fn recv_message(&mut self, msg: &Message) -> Result<(), SessionError> {
match msg {
Message::AnnounceOk(msg) => self.recv_announce_ok(msg).await,
Message::AnnounceError(msg) => self.recv_announce_error(msg).await,
Message::Subscribe(msg) => self.recv_subscribe(msg).await,
Message::Unsubscribe(msg) => self.recv_unsubscribe(msg).await,
_ => Err(SessionError::RoleViolation(msg.id())),
}
}
async fn recv_announce_ok(&mut self, _msg: &message::AnnounceOk) -> Result<(), SessionError> {
// We didn't send an announce.
Err(CacheError::NotFound.into())
}
async fn recv_announce_error(&mut self, _msg: &message::AnnounceError) -> Result<(), SessionError> {
// We didn't send an announce.
Err(CacheError::NotFound.into())
}
async fn recv_subscribe(&mut self, msg: &message::Subscribe) -> Result<(), SessionError> {
// Assume that the subscribe ID is unique for now.
let abort = match self.start_subscribe(msg.clone()) {
Ok(abort) => abort,
Err(err) => return self.reset_subscribe(msg.id, err).await,
};
// Insert the abort handle into the lookup table.
match self.subscribes.lock().unwrap().entry(msg.id) {
hash_map::Entry::Occupied(_) => return Err(CacheError::Duplicate.into()), // TODO fatal, because we already started the task
hash_map::Entry::Vacant(entry) => entry.insert(abort),
};
self.control
.send(message::SubscribeOk {
id: msg.id,
expires: VarInt::ZERO,
})
.await
}
async fn reset_subscribe<E: MoqError>(&mut self, id: VarInt, err: E) -> Result<(), SessionError> {
let msg = message::SubscribeReset {
id,
code: err.code(),
reason: err.reason(),
// TODO properly populate these
// But first: https://github.com/moq-wg/moq-transport/issues/313
final_group: VarInt::ZERO,
final_object: VarInt::ZERO,
};
self.control.send(msg).await
}
fn start_subscribe(&mut self, msg: message::Subscribe) -> Result<AbortHandle, SessionError> {
// We currently don't use the namespace field in SUBSCRIBE
// Make sure the namespace is empty if it's provided.
if msg.namespace.as_ref().map_or(false, |namespace| !namespace.is_empty()) {
return Err(CacheError::NotFound.into());
}
let mut track = self.source.get_track(&msg.name)?;
// TODO only clone the fields we need
let mut this = self.clone();
let handle = tokio::spawn(async move {
log::info!("serving track: name={}", track.name);
let res = this.run_subscribe(msg.id, &mut track).await;
if let Err(err) = &res {
log::warn!("failed to serve track: name={} err={:#?}", track.name, err);
}
// Make sure we send a reset at the end.
let err = res.err().unwrap_or(CacheError::Closed.into());
this.reset_subscribe(msg.id, err).await.ok();
// We're all done, so clean up the abort handle.
this.subscribes.lock().unwrap().remove(&msg.id);
});
Ok(handle.abort_handle())
}
async fn run_subscribe(&self, id: VarInt, track: &mut track::Subscriber) -> Result<(), SessionError> {
// TODO add an Ok method to track::Publisher so we can send SUBSCRIBE_OK
while let Some(mut segment) = track.next_segment().await? {
// TODO only clone the fields we need
let this = self.clone();
tokio::spawn(async move {
if let Err(err) = this.run_segment(id, &mut segment).await {
log::warn!("failed to serve segment: {:?}", err)
}
});
}
Ok(())
}
async fn run_segment(&self, id: VarInt, segment: &mut segment::Subscriber) -> Result<(), SessionError> {
log::trace!("serving group: {:?}", segment);
let mut stream = self.webtransport.open_uni().await?;
// Convert the u32 to an i32, since the Quinn set_priority is signed.
let priority = (segment.priority as i64 - i32::MAX as i64) as i32;
stream.set_priority(priority).ok();
while let Some(mut fragment) = segment.next_fragment().await? {
let object = message::Object {
track: id,
// Properties of the segment
group: segment.sequence,
priority: segment.priority,
expires: segment.expires,
// Properties of the fragment
sequence: fragment.sequence,
size: fragment.size,
};
object
.encode(&mut stream, &self.control.ext)
.await
.map_err(|e| SessionError::Unknown(e.to_string()))?;
while let Some(chunk) = fragment.read_chunk().await? {
stream.write_all(&chunk).await?;
}
}
Ok(())
}
async fn recv_unsubscribe(&mut self, msg: &message::Unsubscribe) -> Result<(), SessionError> {
let abort = self
.subscribes
.lock()
.unwrap()
.remove(&msg.id)
.ok_or(CacheError::NotFound)?;
abort.abort();
self.reset_subscribe(msg.id, CacheError::Stop).await
}
}
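The priority conversion in `run_segment` shifts the unsigned range down by `i32::MAX` so ordering is preserved. A small sketch showing the mapping and its edge: `u32::MAX` itself would wrap to `i32::MIN` under the final `as` cast:

```rust
// Shift the unsigned range down: 0 maps to -i32::MAX and order is preserved
// for every value except u32::MAX, which wraps under `as` truncation.
fn to_stream_priority(priority: u32) -> i32 {
    (priority as i64 - i32::MAX as i64) as i32
}

fn main() {
    assert_eq!(to_stream_priority(0), -i32::MAX);
    assert!(to_stream_priority(0) < to_stream_priority(1));
    assert_eq!(to_stream_priority(u32::MAX - 1), i32::MAX);
}
```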

View File

@ -0,0 +1,112 @@
use super::{Control, Publisher, SessionError, Subscriber};
use crate::{cache::broadcast, setup};
use webtransport_quinn::{RecvStream, SendStream, Session};
/// An endpoint that accepts connections, publishing and/or consuming live streams.
pub struct Server {}
impl Server {
/// Accept an established WebTransport session, performing the MoQ handshake.
///
/// This returns a [Request] half-way through the handshake that allows the application to accept or deny the session.
pub async fn accept(session: Session) -> Result<Request, SessionError> {
let mut control = session.accept_bi().await?;
let mut client = setup::Client::decode(&mut control.1).await?;
if client.versions.contains(&setup::Version::DRAFT_01) {
// We always require subscriber ID.
client.extensions.require_subscriber_id()?;
// We require OBJECT_EXPIRES for publishers only.
if client.role.is_publisher() {
client.extensions.require_object_expires()?;
}
// We don't require SUBSCRIBE_SPLIT since it's easy enough to support, but it's clearly an oversight.
// client.extensions.require(&Extension::SUBSCRIBE_SPLIT)?;
} else if client.versions.contains(&setup::Version::KIXEL_01) {
// Extensions didn't exist in KIXEL_01, so we set them manually.
client.extensions = setup::Extensions {
object_expires: true,
subscriber_id: true,
subscribe_split: true,
};
} else {
return Err(SessionError::Version(
client.versions,
[setup::Version::DRAFT_01, setup::Version::KIXEL_01].into(),
));
}
Ok(Request {
session,
client,
control,
})
}
}
/// A partially complete MoQ Transport handshake.
pub struct Request {
session: Session,
client: setup::Client,
control: (SendStream, RecvStream),
}
impl Request {
/// Accept the session as a publisher, using the provided broadcast to serve subscriptions.
pub async fn publisher(mut self, source: broadcast::Subscriber) -> Result<Publisher, SessionError> {
let setup = self.setup(setup::Role::Publisher)?;
setup.encode(&mut self.control.0).await?;
let control = Control::new(self.control.0, self.control.1, setup.extensions);
let publisher = Publisher::new(self.session, control, source);
Ok(publisher)
}
/// Accept the session as a subscriber only.
pub async fn subscriber(mut self, source: broadcast::Publisher) -> Result<Subscriber, SessionError> {
let setup = self.setup(setup::Role::Subscriber)?;
setup.encode(&mut self.control.0).await?;
let control = Control::new(self.control.0, self.control.1, setup.extensions);
let subscriber = Subscriber::new(self.session, control, source);
Ok(subscriber)
}
// TODO Accept the session and perform both roles.
/*
pub async fn accept(self) -> anyhow::Result<(Publisher, Subscriber)> {
self.ok(setup::Role::Both).await
}
*/
fn setup(&mut self, role: setup::Role) -> Result<setup::Server, SessionError> {
let server = setup::Server {
role,
version: setup::Version::DRAFT_01,
extensions: self.client.extensions.clone(),
params: Default::default(),
};
// We need to make sure we support the opposite of the client's role.
// ex. if the client is a publisher, we must be a subscriber ONLY.
if !self.client.role.is_compatible(server.role) {
return Err(SessionError::RoleIncompatible(self.client.role, server.role));
}
Ok(server)
}
/// Reject the request, closing the WebTransport session.
pub fn reject(self, code: u32) {
self.session.close(code, b"")
}
/// The role advertised by the client.
pub fn role(&self) -> setup::Role {
self.client.role
}
}
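The role check in `setup` comes down to: the client must be able to take the opposite side of whatever role we answer with. A sketch with a stand-in `Role`; the `is_compatible` rule here is an assumption consistent with the comment above, not the crate's exact implementation:

```rust
#[derive(Clone, Copy, Debug)]
enum Role {
    Publisher,
    Subscriber,
    Both,
}

impl Role {
    fn is_publisher(self) -> bool {
        matches!(self, Role::Publisher | Role::Both)
    }
    fn is_subscriber(self) -> bool {
        matches!(self, Role::Subscriber | Role::Both)
    }
    // The client must be allowed to take the opposite side of whatever we pick.
    fn is_compatible(self, server: Role) -> bool {
        (self.is_publisher() && server.is_subscriber()) || (self.is_subscriber() && server.is_publisher())
    }
}

fn main() {
    assert!(Role::Publisher.is_compatible(Role::Subscriber));
    assert!(!Role::Publisher.is_compatible(Role::Publisher));
    assert!(Role::Both.is_compatible(Role::Both));
    println!("role checks hold");
}
```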

Some files were not shown because too many files have changed in this diff.