Compare commits

...

45 Commits

Author SHA1 Message Date
kixelated df5d362754
Add optional/required extensions. (#117) 2023-11-03 15:10:15 +09:00
kixelated ea701bcf7e
Also build the moq-pub image in this repo. (#116) 2023-11-03 13:56:45 +09:00
kixelated ddfe7963e6
Initial moq-transport-01 support (#115)
Co-authored-by: Mike English <mike.english@gmail.com>
2023-11-03 13:19:41 +09:00
kixelated d55c4a80d1
Add `--tls-root` and `--tls-disable-verify` to moq-pub. (#114) 2023-10-30 22:54:27 +09:00
kixelated 24cf36e923
Update HACKATHON.md 2023-10-25 15:39:39 +09:00
kixelated d69c7491ba
Hackathon (#113) 2023-10-25 15:28:47 +09:00
Luke Curley d2a0722b1b Remove some additional log lines. 2023-10-20 15:41:02 +09:00
dependabot[bot] 9da061b8fe
Bump rustix from 0.37.23 to 0.37.25 (#99)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-20 12:05:40 +09:00
dependabot[bot] e762956a70
Bump rustix from 0.37.19 to 0.37.25 in /moq-transport (#100)
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-10-20 12:05:28 +09:00
kixelated 53817f41e7
Remove subscribers/publisher on close (#103) 2023-10-20 12:04:55 +09:00
kixelated a30f313439
Add a flag to manually specify roots. (#98) 2023-10-17 15:48:36 +09:00
kixelated c5b3e5cb8d
Rename some TLS flags (#97) 2023-10-17 14:50:17 +09:00
kixelated d0fca05485
Fix a panic when --fingerprint was not provided, and rename it to --dev (#96) 2023-10-16 14:31:12 +09:00
kixelated 9a25143694
Support multiple TLS certificates. (#95) 2023-10-16 13:05:40 +09:00
kixelated 1749989dc5
Small stuff. (#94) 2023-10-13 23:43:29 +09:00
kixelated 5a0357b111
Maybe the order matters. (#93) 2023-10-13 15:59:54 +09:00
kixelated 6c9394db00
Switch to Docker Hub. (#92) 2023-10-13 14:03:22 +09:00
kixelated 80111d02cc
Fixes dependabot (#91) 2023-10-13 11:19:23 +09:00
kixelated 992e68affe
Rename workflows. (#90) 2023-10-13 11:02:43 +09:00
dependabot[bot] 7a779eb65c
Bump rustls-webpki from 0.100.1 to 0.100.3 in /moq-transport (#88)
Bumps [rustls-webpki](https://github.com/rustls/webpki) from 0.100.1 to
0.100.3.
2023-10-13 10:49:20 +09:00
kixelated e039fbdb56
Switch to a GCP registry. (#89)
Unfortunately Cloud Run doesn't support the free/public Github registry.
2023-10-13 10:47:10 +09:00
dependabot[bot] 0bdcd7adb6
Bump webpki from 0.22.1 to 0.22.4 (#86)
Bumps [webpki](https://github.com/briansmith/webpki) from 0.22.1 to
0.22.4.
2023-10-12 13:25:08 +09:00
kixelated c95bb8209f
Fix local development. (#87) 2023-10-12 13:24:28 +09:00
kixelated 163bc98605
Missed a link 2023-10-12 13:14:56 +09:00
kixelated 1cf8a7617c
Update links in README.md 2023-10-12 13:13:45 +09:00
kixelated 04ff9d5a6a
Add support for multiple origins (#82)
Adds `moq-api` to get/set the origin for each broadcast. Not used by default for local development.
2023-10-12 13:09:32 +09:00
Luke Curley 5e4eb420c0 Bump webtransport-proto to fix Chrome 117 2023-09-27 07:03:14 +09:00
Luke Curley 43a2ed15d4 Revert "Enable tracing to debug. (#80)"
This reverts commit 6e0e85272d.
2023-09-19 14:49:02 -07:00
Luke Curley 80fd13a9dc Revert "Bump golang.org/x/text from 0.3.7 to 0.3.8 in /dev (#70)"
This reverts commit 5697abeb80.
2023-09-19 10:11:33 -07:00
kixelated eb7e707be3
Implement prioritization in moq-pub (#74)
Here's the main change in webtransport-quinn 0.5.3:
ec553fa340

I haven't run into any errors so I don't know what was broken before
@englishm. I'm hoping that setting the stream priority to max when
writing the stream header avoids the issue? Otherwise we need to go bug
diving.
2023-09-19 10:01:26 -07:00
kixelated 6e0e85272d
Enable tracing to debug. (#80) 2023-09-19 10:00:55 -07:00
Luke Curley 7c8287ee35 I think this token be missing. 2023-09-18 23:07:34 -07:00
kixelated 6bf897d980
Switch to depot for faster ARM builds... at a price. (#79) 2023-09-18 23:06:00 -07:00
kixelated 11f8be65d5
Add some more connection logging. (#78)
So I can debug why my handshake is failing.
2023-09-18 22:37:49 -07:00
kixelated fbd06da2ee
Expose the version VarInt (#77)
Useful for documentation. Right now it's:

```rust
pub const KIXEL_00: Version = _
```

This should probably be an enum too?
2023-09-18 17:24:35 -07:00
kixelated 46604ada41
Fix publishing docker images 2023-09-18 00:19:40 -07:00
kixelated f2c1a0e460
Only perform one release at a time 2023-09-17 22:52:26 -07:00
dependabot[bot] 2696a56885
Bump golang.org/x/net from 0.0.0-20220421235706-1d1ef9303861 to 0.7.0 in /dev (#69)
Bumps [golang.org/x/net](https://github.com/golang/net) from
0.0.0-20220421235706-1d1ef9303861 to 0.7.0.
2023-09-17 22:45:21 -07:00
dependabot[bot] 5697abeb80
Bump golang.org/x/text from 0.3.7 to 0.3.8 in /dev (#70)
Bumps [golang.org/x/text](https://github.com/golang/text) from 0.3.7 to
0.3.8.
2023-09-17 22:45:02 -07:00
kixelated eaa8abcdc6
Better read/write error messages (#75)
Still need to properly support encode/decode though. The problem there
is that encode/decode uses AsyncRead, which means we get io::Error
instead of quinn::ReadError and quinn::WriteError. The io::Error type is
not clonable so we just can't use it, well unless it's wrapped in an Arc
or something gross.
2023-09-17 22:44:01 -07:00
kixelated 89f1bc430d
Also support EC private keys. (#73)
(probably)

@englishm I think you ran into this issue. The `rustls::PrivateKey`
documentation says it supports SEC1-encoded EC private keys so it should
just work?
2023-09-17 22:43:48 -07:00
kixelated 9f50cd5d69
Update README.md 2023-09-17 22:43:22 -07:00
kixelated 38a20153ba
Update README.md 2023-09-17 22:43:05 -07:00
kixelated 415f4e972d
Don't run the publish workflow on PR. (#76)
It takes foooorever and we have a separate check.
2023-09-17 22:29:32 -07:00
Luke Curley 48fb8b77b0 Also build for ARM. 2023-09-17 13:09:38 -07:00
93 changed files with 4329 additions and 2144 deletions


@ -1,2 +1,3 @@
target
dev
*.mp4


@ -8,3 +8,10 @@ insert_final_newline = true
indent_style = tab
indent_size = 4
max_line_length = 120
[*.md]
trim_trailing_whitespace = false
[*.yml]
indent_style = space
indent_size = 2


@ -1,29 +0,0 @@
name: Test & Lint
on:
pull_request:
branches: ["main"]
env:
CARGO_TERM_COLOR: always
jobs:
check:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
- name: toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: clippy, rustfmt
- name: test
run: cargo test --verbose
- name: clippy
run: cargo clippy
- name: fmt
run: cargo fmt --check

65
.github/workflows/main.yml vendored Normal file

@ -0,0 +1,65 @@
name: main
on:
push:
branches: ["main"]
env:
REGISTRY: docker.io
IMAGE: kixelated/moq-rs
IMAGE-PUB: kixelated/moq-pub
SERVICE: api # Restart the API service TODO and relays
jobs:
deploy:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
id-token: write
# Only one release at a time and cancel prior releases
concurrency:
group: release
cancel-in-progress: true
steps:
- uses: actions/checkout@v3
# I'm paying for Depot for faster ARM builds.
- uses: depot/setup-action@v1
- uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
# Build and push Docker image with Depot
- uses: depot/build-push-action@v1
with:
project: r257ctfqm6
context: .
push: true
tags: ${{env.REGISTRY}}/${{env.IMAGE}}
platforms: linux/amd64,linux/arm64
# Same, but include ffmpeg for publishing BBB
- uses: depot/build-push-action@v1
with:
project: r257ctfqm6
context: .
push: true
target: moq-pub # instead of the default target
tags: ${{env.REGISTRY}}/${{env.IMAGE-PUB}}
platforms: linux/amd64,linux/arm64
# Log in to GCP
- uses: google-github-actions/auth@v1
with:
credentials_json: ${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}
# Deploy to cloud run
- uses: google-github-actions/deploy-cloudrun@v1
with:
service: ${{env.SERVICE}}
image: ${{env.REGISTRY}}/${{env.IMAGE}}

28
.github/workflows/pr.yml vendored Normal file

@ -0,0 +1,28 @@
name: pr
on:
pull_request:
branches: ["main"]
env:
CARGO_TERM_COLOR: always
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v3
# Install Rust with clippy/rustfmt
- uses: actions-rust-lang/setup-rust-toolchain@v1
with:
components: clippy, rustfmt
# Make sure u guys don't write bad code
- run: cargo test --verbose
- run: cargo clippy --no-deps
- run: cargo fmt --check
# Check for unused dependencies
- uses: bnjbvr/cargo-machete@main


@ -1,96 +0,0 @@
name: Publish Docker Image
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
on:
schedule:
- cron: "26 7 * * *"
push:
branches: ["main"]
# Publish semver tags as releases.
tags: ["v*.*.*"]
pull_request:
branches: ["main"]
env:
# Use docker.io for Docker Hub if empty
REGISTRY: ghcr.io
# github.repository as <account>/<repo>
IMAGE_NAME: ${{ github.repository }}
jobs:
build:
runs-on: ubuntu-latest
permissions:
contents: read
packages: write
# This is used to complete the identity challenge
# with sigstore/fulcio when running outside of PRs.
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v3
# Install the cosign tool except on PR
# https://github.com/sigstore/cosign-installer
- name: Install cosign
if: github.event_name != 'pull_request'
uses: sigstore/cosign-installer@6e04d228eb30da1757ee4e1dd75a0ec73a653e06 #v3.1.1
with:
cosign-release: "v2.1.1"
# Set up BuildKit Docker container builder to be able to build
# multi-platform images and export cache
# https://github.com/docker/setup-buildx-action
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@f95db51fddba0c2d1ec667646a06c2ce06100226 # v3.0.0
# Login against a Docker registry except on PR
# https://github.com/docker/login-action
- name: Log into registry ${{ env.REGISTRY }}
if: github.event_name != 'pull_request'
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
# Extract metadata (tags, labels) for Docker
# https://github.com/docker/metadata-action
- name: Extract Docker metadata
id: meta
uses: docker/metadata-action@96383f45573cb7f253c731d3b3ab81c87ef81934 # v5.0.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
# Build and push Docker image with Buildx (don't push on PR)
# https://github.com/docker/build-push-action
- name: Build and push Docker image
id: build-and-push
uses: docker/build-push-action@0565240e2d4ab88bba5387d719585280857ece09 # v5.0.0
with:
context: .
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
# Sign the resulting Docker image digest except on PRs.
# This will only write to the public Rekor transparency log when the Docker
# repository is public to avoid leaking data. If you would like to publish
# transparency data even for private images, pass --force to cosign below.
# https://github.com/sigstore/cosign
- name: Sign the published Docker image
if: ${{ github.event_name != 'pull_request' }}
env:
# https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#using-an-intermediate-environment-variable
TAGS: ${{ steps.meta.outputs.tags }}
DIGEST: ${{ steps.build-and-push.outputs.digest }}
# This step uses the identity token to provision an ephemeral certificate
# against the sigstore community Fulcio instance.
run: echo "${TAGS}" | xargs -I {} cosign sign --yes {}@${DIGEST}

1
.gitignore vendored

@ -1,3 +1,4 @@
.DS_Store
target/
logs/
*.mp4

805
Cargo.lock generated

File diff suppressed because it is too large.


@ -1,3 +1,3 @@
[workspace]
members = ["moq-transport", "moq-relay", "moq-pub"]
members = ["moq-transport", "moq-relay", "moq-pub", "moq-api"]
resolver = "2"


@ -12,14 +12,28 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry \
--mount=type=cache,target=/build/target \
cargo build --release && cp /build/target/release/moq-* /usr/local/cargo/bin
# Runtime image
FROM rust:latest
# Special image for moq-pub with ffmpeg and a publish script included.
FROM rust:latest as moq-pub
# Install required utilities and ffmpeg
RUN apt-get update && \
apt-get install -y ffmpeg wget
# Copy the publish script into the image
COPY deploy/publish.sh /usr/local/bin/publish
# Copy the compiled binary
COPY --from=builder /usr/local/cargo/bin/moq-pub /usr/local/cargo/bin/moq-pub
CMD [ "publish" ]
# moq-rs image with just the binaries
FROM rust:latest as moq-rs
LABEL org.opencontainers.image.source=https://github.com/kixelated/moq-rs
LABEL org.opencontainers.image.licenses="MIT OR Apache-2.0"
# Fly.io entrypoint
ADD fly-relay.sh .
ADD deploy/fly-relay.sh .
# Copy the compiled binaries
COPY --from=builder /usr/local/cargo/bin /usr/local/cargo/bin
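For reference, a sketch of how the resulting `moq-pub` image could be run once published (the image name comes from the workflow above; `ADDR` and `NAME` are read by `deploy/publish.sh`, so the exact invocation is an assumption):
```bash
# Run the moq-pub target image; the publish script downloads Big Buck Bunny
# and loops it through ffmpeg into moq-pub.
docker run --rm \
  -e ADDR="https://relay.quic.video" \
  -e NAME="bbb" \
  kixelated/moq-pub
```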

53
HACKATHON.md Normal file

@ -0,0 +1,53 @@
# Hackathon
IETF Prague 118
## MoqTransport
Reference libraries are available at [moq-rs](https://github.com/kixelated/moq-rs) and [moq-js](https://github.com/kixelated/moq-js). The Rust library is [well documented](https://docs.rs/moq-transport/latest/moq_transport/) but the web library, not so much.
**TODO** Update both to draft-01.
**TODO** Switch any remaining forks over to extensions. ex: track_id in SUBSCRIBE
The stream mapping right now is quite rigid: `stream == group == object`.
**TODO** Support multiple objects per group. They MUST NOT use different priorities, different tracks, or out-of-order sequences.
The API and cache aren't designed to send/receive arbitrary objects over arbitrary streams as specified in the draft. I don't think it should, and it wouldn't be possible to implement in time for the hackathon anyway.
**TODO** Make an extension to enforce this stream mapping?
## Generic Relay
I'm hosting a simple CDN at: `relay.quic.video`
The traffic is sharded based on the WebTransport path to avoid namespace collisions. Think of it like a customer ID, although it's completely unauthenticated for now. Use your username or whatever string you want: `CONNECT https://relay.quic.video/alan`.
**TODO** Currently, it performs an implicit `ANNOUNCE ""` when `role=publisher`. This means there can only be a single publisher per shard and `role=both` is not supported. I should have explicit `ANNOUNCE` messages supported before the hackathon to remove this limitation.
**TODO** I don't know if I will have subscribe hints fully working in time. They will be parsed but might be ignored.
## CMAF Media
You can [publish](https://quic.video/publish) and [watch](https://quic.video/watch) broadcasts.
There's a [24/7 bunny stream](https://quic.video/watch/bbb) or you can publish your own using [moq-pub](https://github.com/kixelated/moq-rs/tree/main/moq-pub).
If you want to fetch from the relay directly, the name of the broadcast is the path. For example, `https://quic.video/watch/bbb` can be accessed at `relay.quic.video/bbb`.
The namespace is empty and the catalog track is `.catalog`. I'm currently using a simple JSON catalog with no support for delta updates.
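For illustration, a catalog for a single video track might look roughly like this (shape inferred from `moq-pub`'s `serve_catalog` in this diff; the exact fields are an assumption, not a spec):
```json
{
  "tracks": [
    {
      "container": "mp4",
      "kind": "video",
      "init_track": "0.mp4",
      "data_track": "1.m4s",
      "codec": "avc1.64001f",
      "width": 1280,
      "height": 720
    }
  ]
}
```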
**TODO** update to the proposed [Warp catalog](https://datatracker.ietf.org/doc/draft-wilaw-moq-catalogformat/).
The media tracks use a single (unbounded) object per group. Video groups are per GoP, while audio groups are per frame. There's also an init track containing information required to initialize the decoder.
**TODO** Base64 encode the init track in the catalog.
## Clock
**TODO** Host a clock demo that sends a group per second:
```
GROUP: YYYY-MM-DD HH:MM
OBJECT: SS
```


@ -1,44 +1,22 @@
# Media over QUIC
<p align="center">
<img height="256" src="https://github.com/kixelated/moq-rs/blob/main/.github/logo.svg">
<img height="128px" src="https://github.com/kixelated/moq-rs/blob/main/.github/logo.svg" alt="Media over QUIC">
</p>
Media over QUIC (MoQ) is a live media delivery protocol utilizing QUIC streams.
See the [MoQ working group](https://datatracker.ietf.org/wg/moq/about/) for more information.
See [quic.video](https://quic.video) for more information.
This repository contains reusable libraries and a relay server.
It requires a client to actually publish/view content, such as [moq-js](https://github.com/kixelated/moq-js).
This repository contains a few crates:
Join the [Discord](https://discord.gg/FCYF3p99mr) for updates and discussion.
- **moq-relay**: A relay server, accepting content from publishers and fanning it out to subscribers.
- **moq-pub**: A publish client, accepting media from stdin (ex. via ffmpeg) and sending it to a remote server.
- **moq-transport**: An async implementation of the underlying MoQ protocol.
- **moq-api**: An HTTP API server that stores the origin for each broadcast, backed by redis.
## Setup
There's currently no way to view media with this repo; you'll need to use [moq-js](https://github.com/kixelated/moq-js) for that.
### Certificates
## Development
Unfortunately, QUIC mandates TLS and makes local development difficult.
If you have a valid certificate you can use it instead of self-signing.
Use [mkcert](https://github.com/FiloSottile/mkcert) to generate a self-signed certificate.
Unfortunately, this currently requires Go in order to [fork](https://github.com/FiloSottile/mkcert/pull/513) the tool.
```bash
./dev/cert
```
Unfortunately, WebTransport in Chrome currently (May 2023) doesn't verify certificates using the root CA.
The workaround is to use the `serverFingerprints` options, which requires the certificate MUST be only valid for at most **14 days**.
This is also why we're using a fork of mkcert, because it generates certificates valid for years by default.
This limitation will be removed once Chrome uses the system CA for WebTransport.
### Media
If you're using `moq-pub` then you'll want some test footage to broadcast.
```bash
mkdir media
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4 -O dev/source.mp4
```
Use the [dev helper scripts](dev/README.md) for local development.
## Usage
@ -46,53 +24,41 @@ wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBun
**moq-relay** is a server that forwards subscriptions from publishers to subscribers, caching and deduplicating along the way.
It's designed to be run in a datacenter, relaying media across multiple hops to deduplicate and improve QoS.
You can run the development server with the following command, automatically using the self-signed certificate generated earlier:
```bash
./dev/relay
```
The relays register themselves via the [moq-api](moq-api) endpoints, which is used to discover other relays and share broadcasts.
Notable arguments:
- `--bind <ADDR>` Listen on this address [default: [::]:4443]
- `--cert <CERT>` Use the certificate file at this path
- `--key <KEY>` Use the private key at this path
- `--listen <ADDR>` Listen on this address, default: `[::]:4443`
- `--tls-cert <CERT>` Use the certificate file at this path
- `--tls-key <KEY>` Use the private key at this path
- `--dev` Listen via HTTPS as well, serving the `/fingerprint` of the self-signed certificate. (dev only)
This listens for WebTransport connections on `UDP https://localhost:4443` by default.
You need a client to connect to that address, to both publish and consume media.
The server also listens on `TCP localhost:4443` when in development mode.
This is exclusively to serve a `/fingerprint` endpoint via HTTPS for self-signed certificates, which are not needed in production.
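During development you can sanity-check that endpoint directly; a sketch (the `-k` flag is needed because the certificate is self-signed, and the response is whatever the relay serves at `/fingerprint`):
```bash
# Assumes ./dev/relay is running with --dev on the default port.
curl -sk https://localhost:4443/fingerprint
```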
### moq-pub
This is a client that publishes a fMP4 stream from stdin over MoQ.
This can be combined with ffmpeg (and other tools) to produce a live stream.
The following command runs a development instance, broadcasting `dev/source.mp4` to `localhost:4443`:
```bash
./dev/pub
```
Notable arguments:
- `<URI>` connect to the given address, which must start with moq://.
- `<URL>` connect to the given address, which must start with `https://` for WebTransport.
### moq-js
**NOTE**: We're very particular about the fMP4 ingested. See [this script](dev/pub) for the required ffmpeg flags.
There's currently no way to consume broadcasts with `moq-rs`, at least until somebody writes `moq-sub`.
Until then, you can use [moq.js](https://github.com/kixelated/moq-js) to both watch and publish broadcasts.
### moq-transport
There's a hosted version available at [quic.video](https://quic.video/).
There's a secret `?server` parameter that can be used to connect to a different address.
A media-agnostic library used by [moq-relay](moq-relay) and [moq-pub](moq-pub) to serve the underlying subscriptions.
It has caching/deduplication built-in, so your application is oblivious to the number of connections under the hood.
- Publish to localhost: `https://quic.video/publish/?server=localhost:4443`
- Watch from localhost: `https://quic.video/watch/<name>/?server=localhost:4443`
See the published [crate](https://crates.io/crates/moq-transport) and [documentation](https://docs.rs/moq-transport/latest/moq_transport/).
Note that self-signed certificates are ONLY supported if the server name starts with `localhost`.
You'll need to add an entry to `/etc/hosts` if you want to use a self-signed cert and an IP address.
### moq-api
This is an API server that exposes a REST API.
It's used by relays to insert themselves as origins when publishing, and to find the origin when subscribing.
It's basically just a thin wrapper around redis that is only needed to run multiple relays in a (simple) cluster.
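As a rough sketch of that REST surface (routes and the `:4442` port are taken from `moq-api/src/server.rs` and `dev/api` later in this diff, so treat the exact invocations as assumptions):
```bash
# Register an origin for the broadcast "bbb" with a local moq-api instance.
curl -X POST http://localhost:4442/origin/bbb \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://localhost:4443"}'
# Fetch it back, refresh its expiration, and finally remove it.
curl http://localhost:4442/origin/bbb
curl -X PATCH http://localhost:4442/origin/bbb \
  -H 'Content-Type: application/json' \
  -d '{"url": "https://localhost:4443"}'
curl -X DELETE http://localhost:4442/origin/bbb
```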
## License


@ -5,4 +5,4 @@ mkdir cert
echo "$MOQ_CRT" | base64 -d > dev/moq-demo.crt
echo "$MOQ_KEY" | base64 -d > dev/moq-demo.key
RUST_LOG=info /usr/local/cargo/bin/moq-relay --cert dev/moq-demo.crt --key dev/moq-demo.key
RUST_LOG=info /usr/local/cargo/bin/moq-relay --tls-cert dev/moq-demo.crt --tls-key dev/moq-demo.key

20
deploy/fly.toml Normal file

@ -0,0 +1,20 @@
app = "englishm-moq-relay"
kill_signal = "SIGINT"
kill_timeout = 5
[env]
PORT = "4443"
[experimental]
cmd = "./fly-relay.sh"
[[services]]
internal_port = 4443
protocol = "udp"
[services.concurrency]
hard_limit = 25
soft_limit = 20
[[services.ports]]
port = "4443"

41
deploy/publish.sh Executable file

@ -0,0 +1,41 @@
#!/bin/bash
set -euo pipefail
ADDR=${ADDR:-"https://relay.quic.video"}
NAME=${NAME:-"bbb"}
URL=${URL:-"http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4"}
# Download the funny bunny
wget -nv "${URL}" -O "${NAME}.mp4"
# ffmpeg
# -hide_banner: Hide the banner
# -v quiet: and any other output
# -stats: But we still want some stats on stderr
# -stream_loop -1: Loop the broadcast an infinite number of times
# -re: Output in real-time
# -i "${INPUT}": Read from a file on disk
# -vf "drawtext": Render the current time in the corner of the video
# -an: Disable audio for now
# -b:v 3M: Output video at 3Mbps
# -preset ultrafast: Don't use much CPU at the cost of quality
# -tune zerolatency: Optimize for latency at the cost of quality
# -f mp4: Output to mp4 format
# -movflags: Build a fMP4 file with a frame per fragment
# - | moq-pub: Output to stdout and moq-pub to publish
# Run ffmpeg
ffmpeg \
-stream_loop -1 \
-hide_banner \
-v quiet \
-re \
-i "${NAME}.mp4" \
-vf "drawtext=fontfile=/usr/share/fonts/truetype/dejavu/DejaVuSansMono.ttf:text='%{gmtime\: %H\\\\\:%M\\\\\:%S.%3N}':x=(W-tw)-24:y=24:fontsize=48:fontcolor=white:box=1:boxcolor=black@0.5" \
-an \
-b:v 3M \
-preset ultrafast \
-tune zerolatency \
-f mp4 \
-movflags empty_moov+frag_every_frame+separate_moof+omit_tfhd_offset \
- | moq-pub "${ADDR}/${NAME}"

118
dev/README.md Normal file

@ -0,0 +1,118 @@
# Local Development
This is a collection of helpful scripts for local development.
## Setup
### moq-relay
Unfortunately, QUIC mandates TLS and makes local development difficult.
If you have a valid certificate you can use it instead of self-signing.
Use [mkcert](https://github.com/FiloSottile/mkcert) to generate a self-signed certificate.
Unfortunately, this currently requires [Go](https://golang.org/) to be installed in order to [fork](https://github.com/FiloSottile/mkcert/pull/513) the tool.
Somebody should get that merged or make something similar in Rust...
```bash
./dev/cert
```
Unfortunately, WebTransport in Chrome currently (May 2023) doesn't verify certificates using the root CA.
The workaround is to use the `serverFingerprints` options, which requires the certificate MUST be only valid for at most **14 days**.
This is also why we're using a fork of mkcert, because it generates certificates valid for years by default.
This limitation will be removed once Chrome uses the system CA for WebTransport.
### moq-pub
You'll want some test footage to broadcast.
Anything works, but make sure the codec is supported by the player since `moq-pub` does not re-encode.
Here's a critically acclaimed short film:
```bash
mkdir media
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4 -O dev/source.mp4
```
`moq-pub` uses [ffmpeg](https://ffmpeg.org/) to convert the media to fMP4.
You should have it installed already if you're a video nerd, otherwise:
```bash
brew install ffmpeg
```
### moq-api
`moq-api` uses a redis instance to store active origins for clustering.
This is not relevant for most local development and the code path is skipped by default.
However, if you want to test the clustering, you'll need either [Docker](https://www.docker.com/) or [Podman](https://podman.io/) installed.
We run the redis instance via a container automatically as part of `dev/api`.
## Development
**tl;dr** run these commands in separate terminals:
```bash
./dev/cert
./dev/relay
./dev/pub
```
They will each print out a URL you can use to publish/watch broadcasts.
### moq-relay
You can run the relay with the following command, automatically using the self-signed certificates generated earlier.
This listens for WebTransport connections on `https://localhost:4443` by default.
```bash
./dev/relay
```
It will print out a URL which you can use to publish. Alternatively, you can use `dev/pub` instead.
> Publish URL: https://quic.video/publish/?server=localhost:4443
### moq-pub
The following command runs a development instance, broadcasting `dev/source.mp4` to `https://localhost:4443`:
```bash
./dev/pub
```
It will print out a URL which you can use to watch.
By default, the broadcast name is `dev` but you can overwrite it with the `NAME` env.
> Watch URL: https://quic.video/watch/dev?server=localhost:4443
If you're debugging encoding issues, you can use this script to dump the file to disk instead, defaulting to
`dev/output.mp4`.
```bash
./dev/pub-file
```
### moq-api
The following command runs an API server, listening for HTTP requests on `http://localhost:4442` by default.
```bash
./dev/api
```
Nodes can now register themselves via the API, which means you can run multiple interconnected relays.
There are two separate `dev/relay-0` and `dev/relay-1` scripts to test clustering locally:
```bash
./dev/relay-0
./dev/relay-1
```
These listen on `:4443` and `:4444` respectively, inserting themselves into the origin database as `localhost:$PORT`.
There's also a separate `dev/pub-1` script to publish to the `:4444` instance.
You can use the existing `dev/pub` script to publish to the `:4443` instance.
If all goes well, you should be able to publish to one relay and watch from the other.
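Putting it together, a typical clustering session might look like this, one command per terminal (a sketch using the dev scripts above):
```bash
./dev/api      # terminal 1: moq-api on :4442 (starts a redis container)
./dev/relay-0  # terminal 2: relay on :4443, registers itself with the API
./dev/relay-1  # terminal 3: relay on :4444, registers itself with the API
./dev/pub-1    # terminal 4: publish the "dev" broadcast to the :4444 relay
# Then watch via the other relay:
# https://quic.video/watch/dev?server=localhost:4443
```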

45
dev/api Executable file

@ -0,0 +1,45 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Use debug logging by default
export RUST_LOG="${RUST_LOG:-debug}"
# Run the API server on port 4442 by default
HOST="${HOST:-[::]}"
PORT="${PORT:-4442}"
LISTEN="${LISTEN:-$HOST:$PORT}"
# Check for Podman/Docker and set runtime accordingly
if command -v podman &> /dev/null; then
RUNTIME=podman
elif command -v docker &> /dev/null; then
RUNTIME=docker
else
echo "Neither podman or docker found in PATH. Exiting."
exit 1
fi
REDIS_PORT=${REDIS_PORT:-6400} # The default is 6379, but we'll use 6400 to avoid conflicts
# Cleanup function to stop Redis when script exits
cleanup() {
$RUNTIME rm -f moq-redis || true
}
# Stop the redis instance if it's still running
cleanup
# Run a Redis instance
REDIS_CONTAINER=$($RUNTIME run --rm --name moq-redis -d -p "$REDIS_PORT:6379" redis:latest)
# Cleanup function to stop Redis when script exits
trap cleanup EXIT
# Default to a sqlite database in memory
DATABASE="${DATABASE-sqlite::memory:}"
# Run the relay and forward any arguments
cargo run --bin moq-api -- --listen "$LISTEN" --redis "redis://localhost:$REDIS_PORT" "$@"

31
dev/pub

@ -4,22 +4,37 @@ set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Use debug logging by default
export RUST_LOG="${RUST_LOG:-debug}"
# Connect to localhost by default.
HOST="${HOST:-localhost:4443}"
HOST="${HOST:-localhost}"
PORT="${PORT:-4443}"
ADDR="${ADDR:-$HOST:$PORT}"
# Generate a random 16 character name by default.
NAME="${NAME:-$(head /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | head -c 16)}"
#NAME="${NAME:-$(head /dev/urandom | LC_ALL=C tr -dc 'a-zA-Z0-9' | head -c 16)}"
# Combine the host and name into a URI.
URI="${URI:-"moq://$HOST/$NAME"}"
# JK use the name "dev" instead
# TODO use that random name if the host is not localhost
NAME="${NAME:-dev}"
# Combine the host and name into a URL.
URL="${URL:-"https://$ADDR/$NAME"}"
# Default to a source video
MEDIA="${MEDIA:-dev/source.mp4}"
INPUT="${INPUT:-dev/source.mp4}"
# Print out the watch URL
echo "Watch URL: https://quic.video/watch/$NAME?server=$ADDR"
# Run ffmpeg and pipe the output to moq-pub
# TODO enable audio again once fixed.
ffmpeg -hide_banner -v quiet \
-stream_loop -1 -re \
-i "$MEDIA" \
-i "$INPUT" \
-c copy \
-an \
-f mp4 -movflags empty_moov+frag_every_frame+separate_moof+omit_tfhd_offset - \
| RUST_LOG=info cargo run --bin moq-pub -- "$URI" "$@"
-f mp4 -movflags cmaf+separate_moof+delay_moov+skip_trailer \
-frag_duration 1 \
- | cargo run --bin moq-pub -- "$URL" "$@"

10
dev/pub-1 Executable file

@ -0,0 +1,10 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Connect to the 2nd relay by default.
export PORT="${PORT:-4444}"
./dev/pub

90
dev/pub-file Executable file

@ -0,0 +1,90 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Default to a source video
INPUT="${INPUT:-dev/source.mp4}"
# Output the fragmented MP4 to disk for testing.
OUTPUT="${OUTPUT:-dev/output.mp4}"
# Run ffmpeg the same as dev/pub, but:
# - print any errors/warnings
# - only loop twice
#
# Note this is artificially slowed down to real-time using the -re flag; you can remove it.
ffmpeg \
-re \
-y \
-i "$INPUT" \
-c copy \
-fps_mode passthrough \
-f mp4 -movflags cmaf+separate_moof+delay_moov+skip_trailer \
-frag_duration 1 \
"${OUTPUT}"
# % ffmpeg -f mp4 --ffmpeg -h muxer=mov
#
# ffmpeg version 6.0 Copyright (c) 2000-2023 the FFmpeg developers
# Muxer mov [QuickTime / MOV]:
# Common extensions: mov.
# Default video codec: h264.
# Default audio codec: aac.
# mov/mp4/tgp/psp/tg2/ipod/ismv/f4v muxer AVOptions:
# -movflags <flags> E.......... MOV muxer flags (default 0)
# rtphint E.......... Add RTP hint tracks
# empty_moov E.......... Make the initial moov atom empty
# frag_keyframe E.......... Fragment at video keyframes
# frag_every_frame E.......... Fragment at every frame
# separate_moof E.......... Write separate moof/mdat atoms for each track
# frag_custom E.......... Flush fragments on caller requests
# isml E.......... Create a live smooth streaming feed (for pushing to a publishing point)
# faststart E.......... Run a second pass to put the index (moov atom) at the beginning of the file
# omit_tfhd_offset E.......... Omit the base data offset in tfhd atoms
# disable_chpl E.......... Disable Nero chapter atom
# default_base_moof E.......... Set the default-base-is-moof flag in tfhd atoms
# dash E.......... Write DASH compatible fragmented MP4
# cmaf E.......... Write CMAF compatible fragmented MP4
# frag_discont E.......... Signal that the next fragment is discontinuous from earlier ones
# delay_moov E.......... Delay writing the initial moov until the first fragment is cut, or until the first fragment flush
# global_sidx E.......... Write a global sidx index at the start of the file
# skip_sidx E.......... Skip writing of sidx atom
# write_colr E.......... Write colr atom even if the color info is unspecified (Experimental, may be renamed or changed, do not use from scripts)
# prefer_icc E.......... If writing colr atom prioritise usage of ICC profile if it exists in stream packet side data
# write_gama E.......... Write deprecated gama atom
# use_metadata_tags E.......... Use mdta atom for metadata.
# skip_trailer E.......... Skip writing the mfra/tfra/mfro trailer for fragmented files
# negative_cts_offsets E.......... Use negative CTS offsets (reducing the need for edit lists)
# -moov_size <int> E.......... maximum moov size so it can be placed at the begin (from 0 to INT_MAX) (default 0)
# -rtpflags <flags> E.......... RTP muxer flags (default 0)
# latm E.......... Use MP4A-LATM packetization instead of MPEG4-GENERIC for AAC
# rfc2190 E.......... Use RFC 2190 packetization instead of RFC 4629 for H.263
# skip_rtcp E.......... Don't send RTCP sender reports
# h264_mode0 E.......... Use mode 0 for H.264 in RTP
# send_bye E.......... Send RTCP BYE packets when finishing
# -skip_iods <boolean> E.......... Skip writing iods atom. (default true)
# -iods_audio_profile <int> E.......... iods audio profile atom. (from -1 to 255) (default -1)
# -iods_video_profile <int> E.......... iods video profile atom. (from -1 to 255) (default -1)
# -frag_duration <int> E.......... Maximum fragment duration (from 0 to INT_MAX) (default 0)
# -min_frag_duration <int> E.......... Minimum fragment duration (from 0 to INT_MAX) (default 0)
# -frag_size <int> E.......... Maximum fragment size (from 0 to INT_MAX) (default 0)
# -ism_lookahead <int> E.......... Number of lookahead entries for ISM files (from 0 to 255) (default 0)
# -video_track_timescale <int> E.......... set timescale of all video tracks (from 0 to INT_MAX) (default 0)
# -brand <string> E.......... Override major brand
# -use_editlist <boolean> E.......... use edit list (default auto)
# -fragment_index <int> E.......... Fragment number of the next fragment (from 1 to INT_MAX) (default 1)
# -mov_gamma <float> E.......... gamma value for gama atom (from 0 to 10) (default 0)
# -frag_interleave <int> E.......... Interleave samples within fragments (max number of consecutive samples, lower is tighter interleaving, but with more overhead) (from 0 to INT_MAX) (default 0)
# -encryption_scheme <string> E.......... Configures the encryption scheme, allowed values are none, cenc-aes-ctr
# -encryption_key <binary> E.......... The media encryption key (hex)
# -encryption_kid <binary> E.......... The media encryption key identifier (hex)
# -use_stream_ids_as_track_ids <boolean> E.......... use stream ids as track ids (default false)
# -write_btrt <boolean> E.......... force or disable writing btrt (default auto)
# -write_tmcd <boolean> E.......... force or disable writing tmcd (default auto)
# -write_prft <int> E.......... Write producer reference time box with specified time source (from 0 to 2) (default 0)
# wallclock 1 E..........
# pts 2 E..........
# -empty_hdlr_name <boolean> E.......... write zero-length name string in hdlr atoms within mdia and minf atoms (default false)
# -movie_timescale <int> E.......... set movie timescale (from 1 to INT_MAX) (default 1000)


@ -4,10 +4,34 @@ set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Use debug logging by default
export RUST_LOG="${RUST_LOG:-debug}"
# Default to a self-signed certificate
# TODO automatically generate if it doesn't exist.
CERT="${CERT:-dev/localhost.crt}"
KEY="${KEY:-dev/localhost.key}"
# Default to listening on localhost:4443
HOST="${HOST:-[::]}"
PORT="${PORT:-4443}"
LISTEN="${LISTEN:-$HOST:$PORT}"
# A list of optional args
ARGS=""
# Connect to the given URL to get origins.
# TODO default to a public instance?
if [ -n "${API-}" ]; then
ARGS="$ARGS --api $API"
fi
# Provide our node URL when registering origins.
if [ -n "${NODE-}" ]; then
ARGS="$ARGS --api-node $NODE"
fi
echo "Publish URL: https://quic.video/publish/?server=localhost:${PORT}"
# Run the relay and forward any arguments
RUST_LOG=info cargo run --bin moq-relay -- --cert "$CERT" --key "$KEY" --fingerprint "$@"
cargo run --bin moq-relay -- --listen "$LISTEN" --tls-cert "$CERT" --tls-key "$KEY" --dev $ARGS -- "$@"

12
dev/relay-0 Executable file

@ -0,0 +1,12 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Run an instance that advertises itself to the origin API.
export PORT="${PORT:-4443}"
export API="${API:-http://localhost:4442}" # TODO support HTTPS
export NODE="${NODE:-https://localhost:$PORT}"
./dev/relay

12
dev/relay-1 Executable file

@ -0,0 +1,12 @@
#!/bin/bash
set -euo pipefail
# Change directory to the root of the project
cd "$(dirname "$0")/.."
# Run an instance that advertises itself to the origin API.
export PORT="${PORT:-4444}"
export API="${API:-http://localhost:4442}" # TODO support HTTPS
export NODE="${NODE:-https://localhost:$PORT}"
./dev/relay

2
dev/setup Normal file

@ -0,0 +1,2 @@
#!/bin/bash
set -euo pipefail


@ -1,19 +0,0 @@
app = "englishm-moq-relay"
kill_signal = "SIGINT"
kill_timeout = 5
[env]
PORT = "4443"
[experimental]
cmd = "./fly-relay.sh"
[[services]]
internal_port = 4443
protocol = "udp"
[services.concurrency]
hard_limit = 25
soft_limit = 20
[[services.ports]]
port = "4443"

43
moq-api/Cargo.toml Normal file

@ -0,0 +1,43 @@
[package]
name = "moq-api"
description = "Media over QUIC"
authors = ["Luke Curley"]
repository = "https://github.com/kixelated/moq-rs"
license = "MIT OR Apache-2.0"
version = "0.0.1"
edition = "2021"
keywords = ["quic", "http3", "webtransport", "media", "live"]
categories = ["multimedia", "network-programming", "web-programming"]
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
# HTTP server
axum = "0.6"
hyper = { version = "0.14", features = ["full"] }
tokio = { version = "1", features = ["full"] }
# HTTP client
reqwest = { version = "0.11", features = ["json", "rustls-tls"] }
# JSON encoding
serde = "1"
serde_json = "1"
# CLI
clap = { version = "4", features = ["derive"] }
# Database
redis = { version = "0.23", features = [
"tokio-rustls-comp",
"connection-manager",
] }
url = { version = "2", features = ["serde"] }
# Error handling
log = "0.4"
env_logger = "0.9"
thiserror = "1"

4
moq-api/README.md Normal file

@ -0,0 +1,4 @@
# moq-api
A thin HTTP API that wraps Redis.
Basically I didn't want the relays connecting to Redis directly.

56
moq-api/src/client.rs Normal file

@ -0,0 +1,56 @@
use url::Url;
use crate::{ApiError, Origin};
#[derive(Clone)]
pub struct Client {
// The address of the moq-api server
url: Url,
client: reqwest::Client,
}
impl Client {
pub fn new(url: Url) -> Self {
let client = reqwest::Client::new();
Self { url, client }
}
pub async fn get_origin(&self, id: &str) -> Result<Option<Origin>, ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.get(url).send().await?;
if resp.status() == reqwest::StatusCode::NOT_FOUND {
return Ok(None);
}
let origin: Origin = resp.json().await?;
Ok(Some(origin))
}
pub async fn set_origin(&mut self, id: &str, origin: &Origin) -> Result<(), ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.post(url).json(origin).send().await?;
resp.error_for_status()?;
Ok(())
}
pub async fn delete_origin(&mut self, id: &str) -> Result<(), ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.delete(url).send().await?;
resp.error_for_status()?;
Ok(())
}
pub async fn patch_origin(&mut self, id: &str, origin: &Origin) -> Result<(), ApiError> {
let url = self.url.join("origin/")?.join(id)?;
let resp = self.client.patch(url).json(origin).send().await?;
resp.error_for_status()?;
Ok(())
}
}

16
moq-api/src/error.rs Normal file

@ -0,0 +1,16 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum ApiError {
#[error("redis error: {0}")]
Redis(#[from] redis::RedisError),
#[error("reqwest error: {0}")]
Request(#[from] reqwest::Error),
#[error("hyper error: {0}")]
Hyper(#[from] hyper::Error),
#[error("url error: {0}")]
Url(#[from] url::ParseError),
}

7
moq-api/src/lib.rs Normal file

@ -0,0 +1,7 @@
mod client;
mod error;
mod model;
pub use client::*;
pub use error::*;
pub use model::*;

14
moq-api/src/main.rs Normal file

@ -0,0 +1,14 @@
use clap::Parser;
mod server;
use moq_api::ApiError;
use server::{Server, ServerConfig};
#[tokio::main]
async fn main() -> Result<(), ApiError> {
env_logger::init();
let config = ServerConfig::parse();
let server = Server::new(config);
server.run().await
}

8
moq-api/src/model.rs Normal file

@ -0,0 +1,8 @@
use serde::{Deserialize, Serialize};
use url::Url;
#[derive(Serialize, Deserialize, PartialEq, Eq)]
pub struct Origin {
pub url: Url,
}

171
moq-api/src/server.rs Normal file

@ -0,0 +1,171 @@
use std::net;
use axum::{
extract::{Path, State},
http::StatusCode,
response::{IntoResponse, Response},
routing::get,
Json, Router,
};
use clap::Parser;
use redis::{aio::ConnectionManager, AsyncCommands};
use moq_api::{ApiError, Origin};
/// Runs an HTTP API to create/get origins for broadcasts.
#[derive(Parser, Debug)]
#[command(author, version, about, long_about = None)]
pub struct ServerConfig {
/// Listen for HTTP requests on the given address
#[arg(long)]
pub listen: net::SocketAddr,
/// Connect to the given redis instance
#[arg(long)]
pub redis: url::Url,
}
pub struct Server {
config: ServerConfig,
}
impl Server {
pub fn new(config: ServerConfig) -> Self {
Self { config }
}
pub async fn run(self) -> Result<(), ApiError> {
log::info!("connecting to redis: url={}", self.config.redis);
// Create the redis client.
let redis = redis::Client::open(self.config.redis)?;
let redis = redis
.get_tokio_connection_manager() // TODO get_tokio_connection_manager_with_backoff?
.await?;
let app = Router::new()
.route(
"/origin/:id",
get(get_origin)
.post(set_origin)
.delete(delete_origin)
.patch(patch_origin),
)
.with_state(redis);
log::info!("serving requests: bind={}", self.config.listen);
axum::Server::bind(&self.config.listen)
.serve(app.into_make_service())
.await?;
Ok(())
}
}
async fn get_origin(
Path(id): Path<String>,
State(mut redis): State<ConnectionManager>,
) -> Result<Json<Origin>, AppError> {
let key = origin_key(&id);
let payload: Option<String> = redis.get(&key).await?;
let payload = payload.ok_or(AppError::NotFound)?;
let origin: Origin = serde_json::from_str(&payload)?;
Ok(Json(origin))
}
async fn set_origin(
State(mut redis): State<ConnectionManager>,
Path(id): Path<String>,
Json(origin): Json<Origin>,
) -> Result<(), AppError> {
// TODO validate origin
let key = origin_key(&id);
// Convert the input back to JSON after validating it and adding any fields (TODO)
let payload = serde_json::to_string(&origin)?;
let res: Option<String> = redis::cmd("SET")
.arg(key)
.arg(payload)
.arg("NX")
.arg("EX")
.arg(600) // Set the key to expire in 10 minutes; the origin needs to keep refreshing it.
.query_async(&mut redis)
.await?;
if res.is_none() {
return Err(AppError::Duplicate);
}
Ok(())
}
async fn delete_origin(Path(id): Path<String>, State(mut redis): State<ConnectionManager>) -> Result<(), AppError> {
let key = origin_key(&id);
match redis.del(key).await? {
0 => Err(AppError::NotFound),
_ => Ok(()),
}
}
// Update the expiration deadline.
async fn patch_origin(
Path(id): Path<String>,
State(mut redis): State<ConnectionManager>,
Json(origin): Json<Origin>,
) -> Result<(), AppError> {
let key = origin_key(&id);
// Make sure the contents haven't changed
// TODO make a LUA script to do this all in one operation.
let payload: Option<String> = redis.get(&key).await?;
let payload = payload.ok_or(AppError::NotFound)?;
let expected: Origin = serde_json::from_str(&payload)?;
if expected != origin {
return Err(AppError::Duplicate);
}
// Reset the timeout to 10 minutes.
match redis.expire(key, 600).await? {
0 => Err(AppError::NotFound),
_ => Ok(()),
}
}
fn origin_key(id: &str) -> String {
format!("origin.{}", id)
}
#[derive(thiserror::Error, Debug)]
enum AppError {
#[error("redis error")]
Redis(#[from] redis::RedisError),
#[error("json error")]
Json(#[from] serde_json::Error),
#[error("not found")]
NotFound,
#[error("duplicate ID")]
Duplicate,
}
// Tell axum how to convert `AppError` into a response.
impl IntoResponse for AppError {
fn into_response(self) -> Response {
match self {
AppError::Redis(e) => (StatusCode::INTERNAL_SERVER_ERROR, format!("redis error: {}", e)).into_response(),
AppError::Json(e) => (StatusCode::INTERNAL_SERVER_ERROR, format!("json error: {}", e)).into_response(),
AppError::NotFound => StatusCode::NOT_FOUND.into_response(),
AppError::Duplicate => StatusCode::CONFLICT.into_response(),
}
}
}


@ -1,7 +1,7 @@
[package]
name = "moq-pub"
description = "Media over QUIC"
authors = ["Mike English"]
authors = ["Mike English", "Luke Curley"]
repository = "https://github.com/kixelated/moq-rs"
license = "MIT OR Apache-2.0"
@ -18,29 +18,30 @@ moq-transport = { path = "../moq-transport" }
# QUIC
quinn = "0.10"
webtransport-quinn = "0.5"
webtransport-generic = "0.5"
http = "0.2.9"
webtransport-quinn = "0.6"
#webtransport-quinn = { path = "../../webtransport-rs/webtransport-quinn" }
url = "2"
# Crypto
ring = "0.16.20"
rustls = "0.21.2"
rustls-pemfile = "1.0.2"
rustls = { version = "0.21", features = ["dangerous_configuration"] }
rustls-native-certs = "0.6"
rustls-pemfile = "1"
# Async stuff
tokio = { version = "1.27", features = ["full"] }
tokio = { version = "1", features = ["full"] }
# CLI, logging, error handling
clap = { version = "4.0", features = ["derive"] }
clap = { version = "4", features = ["derive"] }
log = { version = "0.4", features = ["std"] }
env_logger = "0.9.3"
mp4 = "0.13.0"
rustls-native-certs = "0.6.3"
anyhow = { version = "1.0.70", features = ["backtrace"] }
serde_json = "1.0.105"
rfc6381-codec = "0.1.0"
env_logger = "0.9"
mp4 = "0.13"
anyhow = { version = "1", features = ["backtrace"] }
serde_json = "1"
rfc6381-codec = "0.1"
tracing = "0.1"
tracing-subscriber = "0.3"
[build-dependencies]
http = "0.2.9"
clap = { version = "4.0", features = ["derive"] }
clap_mangen = "0.2.12"
clap = { version = "4", features = ["derive"] }
clap_mangen = "0.2"
url = "2"


@ -5,7 +5,7 @@ A command line tool for publishing media via Media over QUIC (MoQ).
Expects to receive fragmented MP4 via standard input and connect to a MOQT relay.
```
ffmpeg ... - | moq-pub -i - --host localhost:4443
ffmpeg ... - | moq-pub https://localhost:4443
```
### Invoking `moq-pub`:
@ -13,7 +13,7 @@ ffmpeg ... - | moq-pub -i - --host localhost:4443
Here's how I'm currently testing things, with a local copy of Big Buck Bunny named `bbb_source.mp4`:
```
$ ffmpeg -hide_banner -v quiet -stream_loop -1 -re -i bbb_source.mp4 -an -f mp4 -movflags empty_moov+frag_every_frame+separate_moof+omit_tfhd_offset - | RUST_LOG=moq_pub=info moq-pub -i -
$ ffmpeg -hide_banner -v quiet -stream_loop -1 -re -i bbb_source.mp4 -an -f mp4 -movflags empty_moov+frag_every_frame+separate_moof+omit_tfhd_offset - | RUST_LOG=moq_pub=info moq-pub https://localhost:4443
```
This relies on having `moq-relay` (the relay server) already running locally in another shell.


@ -1,5 +1,6 @@
use clap::Parser;
use std::net;
use std::{net, path};
use url::Url;
#[derive(Parser, Clone, Debug)]
pub struct Config {
@ -17,18 +18,31 @@ pub struct Config {
#[arg(long, default_value = "1500000")]
pub bitrate: u32,
/// Connect to the given URI starting with moq://
#[arg(value_parser = moq_uri)]
pub uri: http::Uri,
/// Connect to the given URL starting with https://
#[arg(value_parser = moq_url)]
pub url: Url,
/// Use the TLS root CA at this path, encoded as PEM.
///
/// This value can be provided multiple times for multiple roots.
/// If this is empty, system roots will be used instead
#[arg(long)]
pub tls_root: Vec<path::PathBuf>,
/// Danger: Disable TLS certificate verification.
///
/// Fine for local development, but should be used with caution in production.
#[arg(long)]
pub tls_disable_verify: bool,
}
fn moq_uri(s: &str) -> Result<http::Uri, String> {
let uri = http::Uri::try_from(s).map_err(|e| e.to_string())?;
fn moq_url(s: &str) -> Result<Url, String> {
let url = Url::try_from(s).map_err(|e| e.to_string())?;
// Make sure the scheme is moq
if uri.scheme_str() != Some("moq") {
return Err("uri scheme must be moq".to_string());
if url.scheme() != "https" {
return Err("url scheme must be https:// for WebTransport".to_string());
}
Ok(uri)
Ok(url)
}
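For context, here's a hedged sketch of how these flags might be combined with the positional URL (the broadcast path and certificate location are assumptions borrowed from the dev scripts):
```bash
# Trust the self-signed dev certificate explicitly...
ffmpeg ... - | moq-pub https://localhost:4443/dev --tls-root dev/localhost.crt
# ...or, for quick local testing only, skip verification entirely.
ffmpeg ... - | moq-pub https://localhost:4443/dev --tls-disable-verify
```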


@ -1,3 +1,5 @@
use std::{fs, io, sync::Arc, time};
use anyhow::Context;
use clap::Parser;
@ -7,7 +9,7 @@ use cli::*;
mod media;
use media::*;
use moq_transport::model::broadcast;
use moq_transport::cache::broadcast;
// TODO: clap complete
@ -15,15 +17,39 @@ use moq_transport::model::broadcast;
async fn main() -> anyhow::Result<()> {
env_logger::init();
// Disable tracing so we don't get a bunch of Quinn spam.
let tracer = tracing_subscriber::FmtSubscriber::builder()
.with_max_level(tracing::Level::WARN)
.finish();
tracing::subscriber::set_global_default(tracer).unwrap();
let config = Config::parse();
let (publisher, subscriber) = broadcast::new();
let (publisher, subscriber) = broadcast::new("");
let mut media = Media::new(&config, publisher).await?;
// Ugh, just let me use my native root certs already
// Create a list of acceptable root certificates.
let mut roots = rustls::RootCertStore::empty();
for cert in rustls_native_certs::load_native_certs().expect("could not load platform certs") {
roots.add(&rustls::Certificate(cert.0)).unwrap();
if config.tls_root.is_empty() {
// Add the platform's native root certificates.
for cert in rustls_native_certs::load_native_certs().context("could not load platform certs")? {
roots
.add(&rustls::Certificate(cert.0))
.context("failed to add root cert")?;
}
} else {
// Add the specified root certificates.
for root in &config.tls_root {
let root = fs::File::open(root).context("failed to open root cert file")?;
let mut root = io::BufReader::new(root);
let root = rustls_pemfile::certs(&mut root).context("failed to read root cert")?;
anyhow::ensure!(root.len() == 1, "expected a single root cert");
let root = rustls::Certificate(root[0].to_owned());
roots.add(&root).context("failed to add root cert")?;
}
}
let mut tls_config = rustls::ClientConfig::builder()
@ -31,6 +57,12 @@ async fn main() -> anyhow::Result<()> {
.with_root_certificates(roots)
.with_no_client_auth();
// Allow disabling TLS verification altogether.
if config.tls_disable_verify {
let noop = NoCertificateVerification {};
tls_config.dangerous().set_certificate_verifier(Arc::new(noop));
}
tls_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()]; // this one is important
let arc_tls_config = std::sync::Arc::new(tls_config);
@ -39,14 +71,9 @@ async fn main() -> anyhow::Result<()> {
let mut endpoint = quinn::Endpoint::client(config.bind)?;
endpoint.set_default_client_config(quinn_client_config);
log::info!("connecting to {}", config.uri);
log::info!("connecting to relay: url={}", config.url);
// Change the uri scheme to "https" for WebTransport
let mut parts = config.uri.into_parts();
parts.scheme = Some(http::uri::Scheme::HTTPS);
let uri = http::Uri::from_parts(parts)?;
let session = webtransport_quinn::connect(&endpoint, &uri)
let session = webtransport_quinn::connect(&endpoint, &config.url)
.await
.context("failed to create WebTransport session")?;
@ -62,3 +89,19 @@ async fn main() -> anyhow::Result<()> {
Ok(())
}
pub struct NoCertificateVerification {}
impl rustls::client::ServerCertVerifier for NoCertificateVerification {
fn verify_server_cert(
&self,
_end_entity: &rustls::Certificate,
_intermediates: &[rustls::Certificate],
_server_name: &rustls::ServerName,
_scts: &mut dyn Iterator<Item = &[u8]>,
_ocsp_response: &[u8],
_now: time::SystemTime,
) -> Result<rustls::client::ServerCertVerified, rustls::Error> {
Ok(rustls::client::ServerCertVerified::assertion())
}
}


@ -1,9 +1,10 @@
use crate::cli::Config;
use anyhow::{self, Context};
use moq_transport::model::{broadcast, segment, track};
use moq_transport::cache::{broadcast, fragment, segment, track};
use moq_transport::VarInt;
use mp4::{self, ReadBox};
use serde_json::json;
use std::cmp::max;
use std::collections::HashMap;
use std::io::Cursor;
use std::time;
@ -15,11 +16,12 @@ pub struct Media {
_catalog: track::Publisher,
_init: track::Publisher,
tracks: HashMap<String, Track>,
// Tracks based on their track ID.
tracks: HashMap<u32, Track>,
}
impl Media {
pub async fn new(config: &Config, mut broadcast: broadcast::Publisher) -> anyhow::Result<Self> {
pub async fn new(_config: &Config, mut broadcast: broadcast::Publisher) -> anyhow::Result<Self> {
let mut stdin = tokio::io::stdin();
let ftyp = read_atom(&mut stdin).await?;
anyhow::ensure!(&ftyp[4..8] == b"ftyp", "expected ftyp atom");
@ -39,33 +41,39 @@ impl Media {
let moov = mp4::MoovBox::read_box(&mut moov_reader, moov_header.size)?;
// Create the catalog track with a single segment.
let mut init_track = broadcast.create_track("1.mp4")?;
let mut init_track = broadcast.create_track("0.mp4")?;
let mut init_segment = init_track.create_segment(segment::Info {
sequence: VarInt::ZERO,
priority: i32::MAX,
priority: 0,
expires: None,
})?;
init_segment.write_chunk(init.into())?;
// Create a single fragment, optionally setting the size
let mut init_fragment = init_segment.create_fragment(fragment::Info {
sequence: VarInt::ZERO,
size: None, // size is only needed when we have multiple fragments.
})?;
init_fragment.write_chunk(init.into())?;
let mut tracks = HashMap::new();
for trak in &moov.traks {
let id = trak.tkhd.track_id;
let name = id.to_string();
let name = format!("{}.m4s", id);
let timescale = track_timescale(&moov, id);
// Store the track publisher in a map so we can update it later.
let track = broadcast.create_track(&name)?;
let track = Track::new(track, timescale);
tracks.insert(name, track);
tracks.insert(id, track);
}
let mut catalog = broadcast.create_track(".catalog")?;
// Create the catalog track
Self::serve_catalog(&mut catalog, config, init_track.name.to_string(), &moov, &tracks)?;
Self::serve_catalog(&mut catalog, &init_track.name, &moov)?;
Ok(Media {
_broadcast: broadcast,
@ -78,7 +86,7 @@ impl Media {
pub async fn run(&mut self) -> anyhow::Result<()> {
let mut stdin = tokio::io::stdin();
// The current track name
let mut track_name = None;
let mut current = None;
loop {
let atom = read_atom(&mut stdin).await?;
@ -92,22 +100,21 @@ impl Media {
// Process the moof.
let fragment = Fragment::new(moof)?;
let name = fragment.track.to_string();
// Get the track for this moof.
let track = self.tracks.get_mut(&name).context("failed to find track")?;
let track = self.tracks.get_mut(&fragment.track).context("failed to find track")?;
// Save the track ID for the next iteration, which must be a mdat.
anyhow::ensure!(track_name.is_none(), "multiple moof atoms");
track_name.replace(name);
anyhow::ensure!(current.is_none(), "multiple moof atoms");
current.replace(fragment.track);
// Publish the moof header, creating a new segment if it's a keyframe.
track.header(atom, fragment).context("failed to publish moof")?;
}
mp4::BoxType::MdatBox => {
// Get the track ID from the previous moof.
let name = track_name.take().context("missing moof")?;
let track = self.tracks.get_mut(&name).context("failed to find track")?;
let track = current.take().context("missing moof")?;
let track = self.tracks.get_mut(&track).context("failed to find track")?;
// Publish the mdat atom.
track.data(atom).context("failed to publish mdat")?;
@ -122,33 +129,31 @@ impl Media {
fn serve_catalog(
track: &mut track::Publisher,
config: &Config,
init_track_name: String,
init_track_name: &str,
moov: &mp4::MoovBox,
_tracks: &HashMap<String, Track>,
) -> Result<(), anyhow::Error> {
let mut segment = track.create_segment(segment::Info {
sequence: VarInt::ZERO,
priority: i32::MAX,
priority: 0,
expires: None,
})?;
let mut tracks = Vec::new();
for trak in &moov.traks {
let mut track = json!({
"container": "mp4",
"init_track": init_track_name,
"data_track": format!("{}.m4s", trak.tkhd.track_id),
});
let stsd = &trak.mdia.minf.stbl.stsd;
if let Some(avc1) = &stsd.avc1 {
// avc1[.PPCCLL]
//
// let profile = 0x64;
// let constraints = 0x00;
// let level = 0x1f;
// TODO: do build multi-track catalog by looping through moov.traks
let trak = moov.traks[0].clone();
let avc1 = trak
.mdia
.minf
.stbl
.stsd
.avc1
.ok_or(anyhow::anyhow!("avc1 atom not found"))?;
let profile = avc1.avcc.avc_profile_indication;
let constraints = avc1.avcc.profile_compatibility; // Not 100% certain here, but it's 0x00 on my current test video
let level = avc1.avcc.avc_level_indication;
@ -159,26 +164,67 @@ impl Media {
let codec = rfc6381_codec::Codec::avc1(profile, constraints, level);
let codec_str = codec.to_string();
let catalog = json!({
"tracks": [
{
"container": "mp4",
"kind": "video",
"init_track": init_track_name,
"data_track": "1", // assume just one track for now
"codec": codec_str,
"width": width,
"height": height,
"frame_rate": config.fps,
"bit_rate": config.bitrate,
track["kind"] = json!("video");
track["codec"] = json!(codec_str);
track["width"] = json!(width);
track["height"] = json!(height);
} else if let Some(_hev1) = &stsd.hev1 {
// TODO https://github.com/gpac/mp4box.js/blob/325741b592d910297bf609bc7c400fc76101077b/src/box-codecs.js#L106
anyhow::bail!("HEVC not yet supported")
} else if let Some(mp4a) = &stsd.mp4a {
let desc = &mp4a
.esds
.as_ref()
.context("missing esds box for MP4a")?
.es_desc
.dec_config;
let codec_str = format!("mp4a.{:02x}.{}", desc.object_type_indication, desc.dec_specific.profile);
track["kind"] = json!("audio");
track["codec"] = json!(codec_str);
track["channel_count"] = json!(mp4a.channelcount);
track["sample_rate"] = json!(mp4a.samplerate.value());
track["sample_size"] = json!(mp4a.samplesize);
let bitrate = max(desc.max_bitrate, desc.avg_bitrate);
if bitrate > 0 {
track["bit_rate"] = json!(bitrate);
}
]
} else if let Some(vp09) = &stsd.vp09 {
// https://github.com/gpac/mp4box.js/blob/325741b592d910297bf609bc7c400fc76101077b/src/box-codecs.js#L238
let vpcc = &vp09.vpcc;
let codec_str = format!("vp09.0.{:02x}.{:02x}.{:02x}", vpcc.profile, vpcc.level, vpcc.bit_depth);
track["kind"] = json!("video");
track["codec"] = json!(codec_str);
track["width"] = json!(vp09.width); // no idea if this needs to be multiplied
track["height"] = json!(vp09.height); // no idea if this needs to be multiplied
// TODO Test if this actually works; I'm just guessing based on mp4box.js
anyhow::bail!("VP9 not yet supported")
} else {
// TODO add av01 support: https://github.com/gpac/mp4box.js/blob/325741b592d910297bf609bc7c400fc76101077b/src/box-codecs.js#L251
anyhow::bail!("unknown codec for track: {}", trak.tkhd.track_id);
}
tracks.push(track);
}
let catalog = json!({
"tracks": tracks
});
let catalog_str = serde_json::to_string_pretty(&catalog)?;
log::info!("catalog: {}", catalog_str);
// Create a single fragment for the segment.
let mut fragment = segment.create_fragment(fragment::Info {
sequence: VarInt::ZERO,
size: None, // Size is only needed when we have multiple fragments.
})?;
// Add the segment and add the fragment.
segment.write_chunk(catalog_str.into())?;
fragment.write_chunk(catalog_str.into())?;
Ok(())
}
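For reference, a minimal sketch of the catalog JSON this now produces, assuming one H.264 video track and one AAC audio track; the track IDs, dimensions, and codec strings below are illustrative, not taken from this change:

    {
      "tracks": [
        {
          "container": "mp4",
          "init_track": "0.mp4",
          "data_track": "1.m4s",
          "kind": "video",
          "codec": "avc1.64001f",
          "width": 1280,
          "height": 720
        },
        {
          "container": "mp4",
          "init_track": "0.mp4",
          "data_track": "2.m4s",
          "kind": "audio",
          "codec": "mp4a.40.2",
          "channel_count": 2,
          "sample_rate": 48000,
          "sample_size": 16
        }
      ]
    }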
@ -226,7 +272,7 @@ struct Track {
track: track::Publisher,
// The current segment
segment: Option<segment::Publisher>,
current: Option<fragment::Publisher>,
// The number of units per second.
timescale: u64,
@ -240,16 +286,16 @@ impl Track {
Self {
track,
sequence: 0,
segment: None,
current: None,
timescale,
}
}
pub fn header(&mut self, raw: Vec<u8>, fragment: Fragment) -> anyhow::Result<()> {
if let Some(segment) = self.segment.as_mut() {
if let Some(current) = self.current.as_mut() {
if !fragment.keyframe {
// Use the existing segment
segment.write_chunk(raw.into())?;
current.write_chunk(raw.into())?;
return Ok(());
}
}
@ -258,7 +304,7 @@ impl Track {
// Compute the timestamp in milliseconds.
// Overflows after 583 million years, so we're fine.
let _timestamp: i32 = fragment
let timestamp: u32 = fragment
.timestamp(self.timescale)
.as_millis()
.try_into()
@ -267,26 +313,34 @@ impl Track {
// Create a new segment.
let mut segment = self.track.create_segment(segment::Info {
sequence: VarInt::try_from(self.sequence).context("sequence too large")?,
priority: i32::MAX, // TODO
// Newer segments are higher priority
priority: u32::MAX.checked_sub(timestamp).context("priority too large")?,
// Delete segments after 10s.
expires: Some(time::Duration::from_secs(10)),
})?;
// Create a single fragment for the segment that we will keep appending.
let mut fragment = segment.create_fragment(fragment::Info {
sequence: VarInt::ZERO,
size: None,
})?;
self.sequence += 1;
// Insert the raw atom into the segment.
segment.write_chunk(raw.into())?;
fragment.write_chunk(raw.into())?;
// Save for the next iteration
self.segment = Some(segment);
self.current = Some(fragment);
Ok(())
}
pub fn data(&mut self, raw: Vec<u8>) -> anyhow::Result<()> {
let segment = self.segment.as_mut().context("missing segment")?;
segment.write_chunk(raw.into())?;
let fragment = self.current.as_mut().context("missing current fragment")?;
fragment.write_chunk(raw.into())?;
Ok(())
}


@ -13,28 +13,39 @@ categories = ["multimedia", "network-programming", "web-programming"]
[dependencies]
moq-transport = { path = "../moq-transport" }
moq-api = { path = "../moq-api" }
# QUIC
quinn = "0.10"
webtransport-generic = "0.5"
webtransport-quinn = "0.5"
webtransport-quinn = "0.6"
#webtransport-quinn = { path = "../../webtransport-rs/webtransport-quinn" }
url = "2"
# Crypto
ring = "0.16.20"
rustls = "0.21.2"
rustls-pemfile = "1.0.2"
ring = "0.16"
rustls = { version = "0.21", features = ["dangerous_configuration"] }
rustls-pemfile = "1"
rustls-native-certs = "0.6"
webpki = "0.22"
# Async stuff
tokio = { version = "1.27", features = ["full"] }
tokio = { version = "1", features = ["full"] }
# Web server to serve the fingerprint
warp = { version = "0.3.3", features = ["tls"] }
hex = "0.4.3"
axum = { version = "0.6", features = ["tokio"] }
axum-server = { version = "0.5", features = ["tls-rustls"] }
hex = "0.4"
tower-http = { version = "0.4", features = ["cors"] }
# Error handling
anyhow = { version = "1", features = ["backtrace"] }
thiserror = "1"
# CLI
clap = { version = "4", features = ["derive"] }
# Logging
clap = { version = "4.0", features = ["derive"] }
log = { version = "0.4", features = ["std"] }
env_logger = "0.9.3"
anyhow = "1.0.70"
env_logger = "0.9"
tracing = "0.1"
tracing-subscriber = "0.3.0"
tracing-subscriber = "0.3"


@ -1,4 +1,5 @@
use std::{net, path};
use url::Url;
use clap::Parser;
@ -7,17 +8,48 @@ use clap::Parser;
pub struct Config {
/// Listen on this address
#[arg(long, default_value = "[::]:4443")]
pub bind: net::SocketAddr,
pub listen: net::SocketAddr,
/// Use the certificate file at this path
/// Use the certificates at this path, encoded as PEM.
///
/// You can use this option multiple times for multiple certificates.
/// The first match for the provided SNI will be used, otherwise the last cert will be used.
/// You also need to provide the private key multiple times via `key`.
#[arg(long)]
pub cert: path::PathBuf,
pub tls_cert: Vec<path::PathBuf>,
/// Use the private key at this path
/// Use the private key at this path, encoded as PEM.
///
/// There must be a key for every certificate provided via `cert`.
#[arg(long)]
pub key: path::PathBuf,
pub tls_key: Vec<path::PathBuf>,
/// Listen on HTTPS and serve /fingerprint, for self-signed certificates
/// Use the TLS root at this path, encoded as PEM.
///
/// This value can be provided multiple times for multiple roots.
/// If this is empty, system roots will be used instead
#[arg(long)]
pub tls_root: Vec<path::PathBuf>,
/// Danger: Disable TLS certificate verification.
///
/// Fine for local development and between relays, but should be used with caution in production.
#[arg(long)]
pub tls_disable_verify: bool,
/// Optional: Use the moq-api via HTTP to store origin information.
#[arg(long)]
pub api: Option<Url>,
/// Our internal address which we advertise to other origins.
/// We use QUIC, so the certificate must be valid for this address.
/// This needs to be prefixed with https:// to use WebTransport.
/// This is only used when --api is set and only for publishing broadcasts.
#[arg(long)]
pub api_node: Option<Url>,
/// Enable development mode.
/// Currently, this only listens on HTTPS and serves /fingerprint, for self-signed certificates
#[arg(long, action)]
pub fingerprint: bool,
pub dev: bool,
}
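To illustrate the renamed flags, a hypothetical invocation might look like the following; the paths and URLs are placeholders, not part of this change:

    moq-relay --listen '[::]:4443' \
        --tls-cert cert/localhost.crt --tls-key cert/localhost.key \
        --api https://api.internal.example --api-node https://relay1.internal.example:4443 \
        --dev

Without --api the relay only serves local broadcasts, and without --dev the /fingerprint endpoint is not started.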

moq-relay/src/error.rs Normal file

@ -0,0 +1,51 @@
use thiserror::Error;
#[derive(Error, Debug)]
pub enum RelayError {
#[error("transport error: {0}")]
Transport(#[from] moq_transport::session::SessionError),
#[error("cache error: {0}")]
Cache(#[from] moq_transport::cache::CacheError),
#[error("api error: {0}")]
MoqApi(#[from] moq_api::ApiError),
#[error("url error: {0}")]
Url(#[from] url::ParseError),
#[error("webtransport client error: {0}")]
WebTransportClient(#[from] webtransport_quinn::ClientError),
#[error("webtransport server error: {0}")]
WebTransportServer(#[from] webtransport_quinn::ServerError),
#[error("missing node")]
MissingNode,
}
impl moq_transport::MoqError for RelayError {
fn code(&self) -> u32 {
match self {
Self::Transport(err) => err.code(),
Self::Cache(err) => err.code(),
Self::MoqApi(_err) => 504,
Self::Url(_) => 500,
Self::MissingNode => 500,
Self::WebTransportClient(_) => 504,
Self::WebTransportServer(_) => 500,
}
}
fn reason(&self) -> String {
match self {
Self::Transport(err) => format!("transport error: {}", err.reason()),
Self::Cache(err) => format!("cache error: {}", err.reason()),
Self::MoqApi(err) => format!("api error: {}", err),
Self::Url(err) => format!("url error: {}", err),
Self::MissingNode => "missing node".to_owned(),
Self::WebTransportServer(err) => format!("upstream server error: {}", err),
Self::WebTransportClient(err) => format!("upstream client error: {}", err),
}
}
}


@ -1,17 +1,21 @@
use std::{fs, io, sync};
use anyhow::Context;
use clap::Parser;
use ring::digest::{digest, SHA256};
use warp::Filter;
mod config;
mod server;
mod error;
mod origin;
mod quic;
mod session;
mod tls;
mod web;
pub use config::*;
pub use server::*;
pub use error::*;
pub use origin::*;
pub use quic::*;
pub use session::*;
pub use tls::*;
pub use web::*;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
@ -24,47 +28,24 @@ async fn main() -> anyhow::Result<()> {
tracing::subscriber::set_global_default(tracer).unwrap();
let config = Config::parse();
let tls = Tls::load(&config)?;
// Create a server to actually serve the media
let server = Server::new(config.clone()).context("failed to create server")?;
// Create a QUIC server for media.
let quic = Quic::new(config.clone(), tls.clone())
.await
.context("failed to create server")?;
// Run all of the above
// Create the web server if the --dev flag was set.
// This is currently only useful in local development so it's not enabled by default.
if config.dev {
let web = Web::new(config, tls);
// Unfortunately we can't use preconditions because Tokio still executes the branch; just ignore the result
tokio::select! {
res = server.run() => res.context("failed to run server"),
res = serve_http(config), if config.fingerprint => res.context("failed to run HTTP server"),
res = quic.serve() => res.context("failed to run quic server"),
res = web.serve() => res.context("failed to run web server"),
}
} else {
quic.serve().await.context("failed to run quic server")
}
}
// Run a HTTP server using Warp
// TODO remove this when Chrome adds support for self-signed certificates using WebTransport
async fn serve_http(config: Config) -> anyhow::Result<()> {
// Read the PEM certificate file
let crt = fs::File::open(&config.cert)?;
let mut crt = io::BufReader::new(crt);
// Parse the DER certificate
let certs = rustls_pemfile::certs(&mut crt)?;
let cert = certs.first().expect("no certificate found");
// Compute the SHA-256 digest
let fingerprint = digest(&SHA256, cert.as_ref());
let fingerprint = hex::encode(fingerprint.as_ref());
let fingerprint = sync::Arc::new(fingerprint);
let cors = warp::cors().allow_any_origin();
// What an annoyingly complicated way to serve a static String
// I spent a long time trying to find the exact way of cloning and dereferencing the Arc.
let routes = warp::path!("fingerprint")
.map(move || (*(fingerprint.clone())).clone())
.with(cors);
warp::serve(routes)
.tls()
.cert_path(config.cert)
.key_path(config.key)
.run(config.bind)
.await;
Ok(())
}

moq-relay/src/origin.rs Normal file

@ -0,0 +1,216 @@
use std::ops::{Deref, DerefMut};
use std::{
collections::HashMap,
sync::{Arc, Mutex, Weak},
};
use moq_api::ApiError;
use moq_transport::cache::{broadcast, CacheError};
use url::Url;
use tokio::time;
use crate::RelayError;
#[derive(Clone)]
pub struct Origin {
// An API client used to get/set broadcasts.
// If None then we never use a remote origin.
// TODO: Stub this out instead.
api: Option<moq_api::Client>,
// The internal address of our node.
// If None then we can never advertise ourselves as an origin.
// TODO: Stub this out instead.
node: Option<Url>,
// A map of active broadcasts by ID.
cache: Arc<Mutex<HashMap<String, Weak<Subscriber>>>>,
// A QUIC endpoint we'll use to fetch from other origins.
quic: quinn::Endpoint,
}
impl Origin {
pub fn new(api: Option<moq_api::Client>, node: Option<Url>, quic: quinn::Endpoint) -> Self {
Self {
api,
node,
cache: Default::default(),
quic,
}
}
/// Create a new broadcast with the given ID.
///
/// Publisher::run needs to be called to periodically refresh the origin cache.
pub async fn publish(&mut self, id: &str) -> Result<Publisher, RelayError> {
let (publisher, subscriber) = broadcast::new(id);
let subscriber = {
let mut cache = self.cache.lock().unwrap();
// Check if the broadcast already exists.
// TODO This is racey, because a new publisher could be created while existing subscribers are still active.
if cache.contains_key(id) {
return Err(CacheError::Duplicate.into());
}
// Create subscriber that will remove from the cache when dropped.
let subscriber = Arc::new(Subscriber {
broadcast: subscriber,
origin: self.clone(),
});
cache.insert(id.to_string(), Arc::downgrade(&subscriber));
subscriber
};
// Create a publisher that constantly updates itself as the origin in moq-api.
// It holds a reference to the subscriber to prevent dropping early.
let mut publisher = Publisher {
broadcast: publisher,
subscriber,
api: None,
};
// Insert the publisher into the database.
if let Some(api) = self.api.as_mut() {
// Make a URL for the broadcast.
let url = self.node.as_ref().ok_or(RelayError::MissingNode)?.clone().join(id)?;
let origin = moq_api::Origin { url };
api.set_origin(id, &origin).await?;
// Refresh every 5 minutes
publisher.api = Some((api.clone(), origin));
}
Ok(publisher)
}
pub fn subscribe(&self, id: &str) -> Arc<Subscriber> {
let mut cache = self.cache.lock().unwrap();
if let Some(broadcast) = cache.get(id) {
if let Some(broadcast) = broadcast.upgrade() {
return broadcast;
}
}
let (publisher, subscriber) = broadcast::new(id);
let subscriber = Arc::new(Subscriber {
broadcast: subscriber,
origin: self.clone(),
});
cache.insert(id.to_string(), Arc::downgrade(&subscriber));
let mut this = self.clone();
let id = id.to_string();
// Rather than fetching from the API and connecting via QUIC inline, we'll spawn a task to do it.
// This way we could stop polling this session and it won't impact other sessions.
// It also means we'll only connect the API and QUIC once if N subscribers suddenly show up.
// However, the downside is that we don't return an error immediately.
// If that's important, it can be done but it gets a bit racey.
tokio::spawn(async move {
if let Err(err) = this.serve(&id, publisher).await {
log::warn!("failed to serve remote broadcast: id={} err={}", id, err);
}
});
subscriber
}
async fn serve(&mut self, id: &str, publisher: broadcast::Publisher) -> Result<(), RelayError> {
log::debug!("finding origin: id={}", id);
// Fetch the origin from the API.
let origin = self
.api
.as_mut()
.ok_or(CacheError::NotFound)?
.get_origin(id)
.await?
.ok_or(CacheError::NotFound)?;
log::debug!("fetching from origin: id={} url={}", id, origin.url);
// Establish the webtransport session.
let session = webtransport_quinn::connect(&self.quic, &origin.url).await?;
let session = moq_transport::session::Client::subscriber(session, publisher).await?;
session.run().await?;
Ok(())
}
}
pub struct Subscriber {
pub broadcast: broadcast::Subscriber,
origin: Origin,
}
impl Drop for Subscriber {
fn drop(&mut self) {
self.origin.cache.lock().unwrap().remove(&self.broadcast.id);
}
}
impl Deref for Subscriber {
type Target = broadcast::Subscriber;
fn deref(&self) -> &Self::Target {
&self.broadcast
}
}
pub struct Publisher {
pub broadcast: broadcast::Publisher,
api: Option<(moq_api::Client, moq_api::Origin)>,
#[allow(dead_code)]
subscriber: Arc<Subscriber>,
}
impl Publisher {
pub async fn run(&mut self) -> Result<(), ApiError> {
// Every 5m tell the API we're still alive.
// TODO don't hard-code these values
let mut interval = time::interval(time::Duration::from_secs(60 * 5));
loop {
if let Some((api, origin)) = self.api.as_mut() {
api.patch_origin(&self.broadcast.id, origin).await?;
}
// TODO move to start of loop; this is just for testing
interval.tick().await;
}
}
pub async fn close(&mut self) -> Result<(), ApiError> {
if let Some((api, _)) = self.api.as_mut() {
api.delete_origin(&self.broadcast.id).await?;
}
Ok(())
}
}
impl Deref for Publisher {
type Target = broadcast::Publisher;
fn deref(&self) -> &Self::Target {
&self.broadcast
}
}
impl DerefMut for Publisher {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.broadcast
}
}
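A minimal sketch of how the relay might drive Origin end to end, mirroring serve_publisher/serve_subscriber below; the broadcast name is illustrative and error handling is elided:

    // Publisher side: register the broadcast in moq-api (if configured) and
    // keep the entry refreshed while the session runs.
    let mut publisher = origin.publish("demo").await?;
    let session = request.subscriber(publisher.broadcast.clone()).await?;
    tokio::select! {
        _ = session.run() => publisher.close().await?,
        _ = publisher.run() => (),
    };

    // Subscriber side: returns the cached broadcast if present, otherwise a task
    // is spawned to look up the origin via moq-api and fetch it over WebTransport.
    let subscriber = origin.subscribe("demo");
    let session = request.publisher(subscriber.broadcast.clone()).await?;
    session.run().await?;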

moq-relay/src/quic.rs Normal file

@ -0,0 +1,85 @@
use std::{sync::Arc, time};
use anyhow::Context;
use tokio::task::JoinSet;
use crate::{Config, Origin, Session, Tls};
pub struct Quic {
quic: quinn::Endpoint,
// The active connections.
conns: JoinSet<anyhow::Result<()>>,
// The map of active broadcasts by path.
origin: Origin,
}
impl Quic {
// Create a QUIC endpoint that can be used for both clients and servers.
pub async fn new(config: Config, tls: Tls) -> anyhow::Result<Self> {
let mut client_config = tls.client.clone();
let mut server_config = tls.server.clone();
client_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()];
server_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()];
// Enable BBR congestion control
// TODO validate the implementation
let mut transport_config = quinn::TransportConfig::default();
transport_config.max_idle_timeout(Some(time::Duration::from_secs(10).try_into().unwrap()));
transport_config.keep_alive_interval(Some(time::Duration::from_secs(4))); // TODO make this smarter
transport_config.congestion_controller_factory(Arc::new(quinn::congestion::BbrConfig::default()));
transport_config.mtu_discovery_config(None); // Disable MTU discovery
let transport_config = Arc::new(transport_config);
let mut client_config = quinn::ClientConfig::new(Arc::new(client_config));
let mut server_config = quinn::ServerConfig::with_crypto(Arc::new(server_config));
server_config.transport_config(transport_config.clone());
client_config.transport_config(transport_config);
// There's a bit more boilerplate to make a generic endpoint.
let runtime = quinn::default_runtime().context("no async runtime")?;
let endpoint_config = quinn::EndpointConfig::default();
let socket = std::net::UdpSocket::bind(config.listen).context("failed to bind UDP socket")?;
// Create the generic QUIC endpoint.
let mut quic = quinn::Endpoint::new(endpoint_config, Some(server_config), socket, runtime)
.context("failed to create QUIC endpoint")?;
quic.set_default_client_config(client_config);
let api = config.api.map(|url| {
log::info!("using moq-api: url={}", url);
moq_api::Client::new(url)
});
if let Some(ref node) = config.api_node {
log::info!("advertising origin: url={}", node);
}
let origin = Origin::new(api, config.api_node, quic.clone());
let conns = JoinSet::new();
Ok(Self { quic, origin, conns })
}
pub async fn serve(mut self) -> anyhow::Result<()> {
log::info!("listening on {}", self.quic.local_addr()?);
loop {
tokio::select! {
res = self.quic.accept() => {
let conn = res.context("failed to accept QUIC connection")?;
let mut session = Session::new(self.origin.clone());
self.conns.spawn(async move { session.run(conn).await });
},
res = self.conns.join_next(), if !self.conns.is_empty() => {
let res = res.expect("no tasks").expect("task aborted");
if let Err(err) = res {
log::warn!("connection terminated: {:?}", err);
}
},
}
}
}
}


@ -1,93 +0,0 @@
use std::{
collections::HashMap,
fs, io,
sync::{Arc, Mutex},
time,
};
use anyhow::Context;
use moq_transport::model::broadcast;
use tokio::task::JoinSet;
use crate::{Config, Session};
pub struct Server {
server: quinn::Endpoint,
// The active connections.
conns: JoinSet<anyhow::Result<()>>,
// The map of active broadcasts by path.
broadcasts: Arc<Mutex<HashMap<String, broadcast::Subscriber>>>,
}
impl Server {
// Create a new server
pub fn new(config: Config) -> anyhow::Result<Self> {
// Read the PEM certificate chain
let certs = fs::File::open(config.cert).context("failed to open cert file")?;
let mut certs = io::BufReader::new(certs);
let certs = rustls_pemfile::certs(&mut certs)?
.into_iter()
.map(rustls::Certificate)
.collect();
// Read the PEM private key
let keys = fs::File::open(config.key).context("failed to open key file")?;
let mut keys = io::BufReader::new(keys);
let mut keys = rustls_pemfile::pkcs8_private_keys(&mut keys)?;
anyhow::ensure!(keys.len() == 1, "expected a single key");
let key = rustls::PrivateKey(keys.remove(0));
let mut tls_config = rustls::ServerConfig::builder()
.with_safe_default_cipher_suites()
.with_safe_default_kx_groups()
.with_protocol_versions(&[&rustls::version::TLS13])
.unwrap()
.with_no_client_auth()
.with_single_cert(certs, key)?;
tls_config.max_early_data_size = u32::MAX;
tls_config.alpn_protocols = vec![webtransport_quinn::ALPN.to_vec()];
let mut server_config = quinn::ServerConfig::with_crypto(Arc::new(tls_config));
// Enable BBR congestion control
// TODO validate the implementation
let mut transport_config = quinn::TransportConfig::default();
transport_config.keep_alive_interval(Some(time::Duration::from_secs(2)));
transport_config.congestion_controller_factory(Arc::new(quinn::congestion::BbrConfig::default()));
server_config.transport = Arc::new(transport_config);
let server = quinn::Endpoint::server(server_config, config.bind)?;
let broadcasts = Default::default();
let conns = JoinSet::new();
Ok(Self {
server,
broadcasts,
conns,
})
}
pub async fn run(mut self) -> anyhow::Result<()> {
loop {
tokio::select! {
res = self.server.accept() => {
let conn = res.context("failed to accept QUIC connection")?;
let mut session = Session::new(self.broadcasts.clone());
self.conns.spawn(async move { session.run(conn).await });
},
res = self.conns.join_next(), if !self.conns.is_empty() => {
let res = res.expect("no tasks").expect("task aborted");
if let Err(err) = res {
log::warn!("connection terminated: {:?}", err);
}
},
}
}
}
}


@ -1,32 +1,41 @@
use std::{
collections::{hash_map, HashMap},
sync::{Arc, Mutex},
};
use anyhow::Context;
use moq_transport::{model::broadcast, session::Request, setup::Role};
use moq_transport::{session::Request, setup::Role, MoqError};
use crate::Origin;
#[derive(Clone)]
pub struct Session {
broadcasts: Arc<Mutex<HashMap<String, broadcast::Subscriber>>>,
origin: Origin,
}
impl Session {
pub fn new(broadcasts: Arc<Mutex<HashMap<String, broadcast::Subscriber>>>) -> Self {
Self { broadcasts }
pub fn new(origin: Origin) -> Self {
Self { origin }
}
pub async fn run(&mut self, conn: quinn::Connecting) -> anyhow::Result<()> {
log::debug!("received QUIC handshake: ip={:?}", conn.remote_address());
// Wait for the QUIC connection to be established.
let conn = conn.await.context("failed to establish QUIC connection")?;
log::debug!(
"established QUIC connection: ip={:?} id={}",
conn.remote_address(),
conn.stable_id()
);
let id = conn.stable_id();
// Wait for the CONNECT request.
let request = webtransport_quinn::accept(conn)
.await
.context("failed to receive WebTransport request")?;
let path = request.uri().path().to_string();
// Strip any leading and trailing slashes to get the broadcast name.
let path = request.url().path().trim_matches('/').to_string();
log::debug!("received WebTransport CONNECT: id={} path={}", id, path);
// Accept the CONNECT request.
let session = request
@ -39,58 +48,64 @@ impl Session {
.await
.context("failed to accept handshake")?;
log::debug!("received MoQ SETUP: id={} role={:?}", id, request.role());
let role = request.role();
match role {
Role::Publisher => self.serve_publisher(request, &path).await,
Role::Subscriber => self.serve_subscriber(request, &path).await,
Role::Both => request.reject(300),
Role::Publisher => {
if let Err(err) = self.serve_publisher(id, request, &path).await {
log::warn!("error serving publisher: id={} path={} err={:#?}", id, path, err);
}
}
Role::Subscriber => {
if let Err(err) = self.serve_subscriber(id, request, &path).await {
log::warn!("error serving subscriber: id={} path={} err={:#?}", id, path, err);
}
}
Role::Both => {
log::warn!("role both not supported: id={}", id);
request.reject(300);
}
};
log::debug!("closing connection: id={}", id);
Ok(())
}
async fn serve_publisher(&mut self, id: usize, request: Request, path: &str) -> anyhow::Result<()> {
log::info!("serving publisher: id={}, path={}", id, path);
let mut origin = match self.origin.publish(path).await {
Ok(origin) => origin,
Err(err) => {
request.reject(err.code());
return Err(err.into());
}
};
let session = request.subscriber(origin.broadcast.clone()).await?;
tokio::select! {
_ = session.run() => origin.close().await?,
_ = origin.run() => (), // TODO send error to session
};
Ok(())
}
async fn serve_publisher(&mut self, request: Request, path: &str) {
log::info!("publisher: path={}", path);
async fn serve_subscriber(&mut self, id: usize, request: Request, path: &str) -> anyhow::Result<()> {
log::info!("serving subscriber: id={} path={}", id, path);
let (publisher, subscriber) = broadcast::new();
let subscriber = self.origin.subscribe(path);
match self.broadcasts.lock().unwrap().entry(path.to_string()) {
hash_map::Entry::Occupied(_) => return request.reject(409),
hash_map::Entry::Vacant(entry) => entry.insert(subscriber),
};
if let Err(err) = self.run_publisher(request, publisher).await {
log::warn!("pubisher error: path={} err={:?}", path, err);
}
self.broadcasts.lock().unwrap().remove(path);
}
async fn run_publisher(&mut self, request: Request, publisher: broadcast::Publisher) -> anyhow::Result<()> {
let session = request.subscriber(publisher).await?;
let session = request.publisher(subscriber.broadcast.clone()).await?;
session.run().await?;
Ok(())
}
async fn serve_subscriber(&mut self, request: Request, path: &str) {
log::info!("subscriber: path={}", path);
// Make sure this doesn't get dropped too early
drop(subscriber);
let broadcast = match self.broadcasts.lock().unwrap().get(path) {
Some(broadcast) => broadcast.clone(),
None => {
return request.reject(404);
}
};
if let Err(err) = self.run_subscriber(request, broadcast).await {
log::warn!("subscriber error: path={} err={:?}", path, err);
}
}
async fn run_subscriber(&mut self, request: Request, broadcast: broadcast::Subscriber) -> anyhow::Result<()> {
let session = request.publisher(broadcast).await?;
session.run().await?;
Ok(())
}
}

moq-relay/src/tls.rs Normal file

@ -0,0 +1,182 @@
use anyhow::Context;
use ring::digest::{digest, SHA256};
use rustls::server::{ClientHello, ResolvesServerCert};
use rustls::sign::CertifiedKey;
use rustls::{Certificate, PrivateKey, RootCertStore};
use std::io::{self, Cursor, Read};
use std::path;
use std::sync::Arc;
use std::{fs, time};
use webpki::{DnsNameRef, EndEntityCert};
use crate::Config;
#[derive(Clone)]
pub struct Tls {
pub server: rustls::ServerConfig,
pub client: rustls::ClientConfig,
pub fingerprints: Vec<String>,
}
impl Tls {
pub fn load(config: &Config) -> anyhow::Result<Self> {
let mut serve = ServeCerts::default();
// Load the certificate and key files based on their index.
anyhow::ensure!(
config.tls_cert.len() == config.tls_key.len(),
"--tls-cert and --tls-key counts differ"
);
for (chain, key) in config.tls_cert.iter().zip(config.tls_key.iter()) {
serve.load(chain, key)?;
}
// Create a list of acceptable root certificates.
let mut roots = RootCertStore::empty();
if config.tls_root.is_empty() {
// Add the platform's native root certificates.
for cert in rustls_native_certs::load_native_certs().context("could not load platform certs")? {
roots.add(&Certificate(cert.0)).context("failed to add root cert")?;
}
} else {
// Add the specified root certificates.
for root in &config.tls_root {
let root = fs::File::open(root).context("failed to open root cert file")?;
let mut root = io::BufReader::new(root);
let root = rustls_pemfile::certs(&mut root).context("failed to read root cert")?;
anyhow::ensure!(root.len() == 1, "expected a single root cert");
let root = Certificate(root[0].to_owned());
roots.add(&root).context("failed to add root cert")?;
}
}
// Create the TLS configuration we'll use as a client (relay -> relay)
let mut client = rustls::ClientConfig::builder()
.with_safe_defaults()
.with_root_certificates(roots)
.with_no_client_auth();
// Allow disabling TLS verification altogether.
if config.tls_disable_verify {
let noop = NoCertificateVerification {};
client.dangerous().set_certificate_verifier(Arc::new(noop));
}
let fingerprints = serve.fingerprints();
// Create the TLS configuration we'll use as a server (relay <- browser)
let server = rustls::ServerConfig::builder()
.with_safe_defaults()
.with_no_client_auth()
.with_cert_resolver(Arc::new(serve));
let certs = Self {
server,
client,
fingerprints,
};
Ok(certs)
}
}
#[derive(Default)]
struct ServeCerts {
list: Vec<Arc<CertifiedKey>>,
}
impl ServeCerts {
// Load a certificate and corresponding key from a file
pub fn load(&mut self, chain: &path::PathBuf, key: &path::PathBuf) -> anyhow::Result<()> {
// Read the PEM certificate chain
let chain = fs::File::open(chain).context("failed to open cert file")?;
let mut chain = io::BufReader::new(chain);
let chain: Vec<Certificate> = rustls_pemfile::certs(&mut chain)?
.into_iter()
.map(Certificate)
.collect();
anyhow::ensure!(!chain.is_empty(), "could not find certificate");
// Read the PEM private key
let mut keys = fs::File::open(key).context("failed to open key file")?;
// Read the keys into a Vec so we can parse it twice.
let mut buf = Vec::new();
keys.read_to_end(&mut buf)?;
// Try to parse a PKCS#8 key
// -----BEGIN PRIVATE KEY-----
let mut keys = rustls_pemfile::pkcs8_private_keys(&mut Cursor::new(&buf))?;
// Try again but with EC keys this time
// -----BEGIN EC PRIVATE KEY-----
if keys.is_empty() {
keys = rustls_pemfile::ec_private_keys(&mut Cursor::new(&buf))?
};
anyhow::ensure!(!keys.is_empty(), "could not find private key");
anyhow::ensure!(keys.len() < 2, "expected a single key");
let key = PrivateKey(keys.remove(0));
let key = rustls::sign::any_supported_type(&key)?;
let certified = Arc::new(CertifiedKey::new(chain, key));
self.list.push(certified);
Ok(())
}
// Return the SHA256 fingerprint of our certificates.
pub fn fingerprints(&self) -> Vec<String> {
self.list
.iter()
.map(|ck| {
let fingerprint = digest(&SHA256, ck.cert[0].as_ref());
let fingerprint = hex::encode(fingerprint.as_ref());
fingerprint
})
.collect()
}
}
impl ResolvesServerCert for ServeCerts {
fn resolve(&self, client_hello: ClientHello<'_>) -> Option<Arc<CertifiedKey>> {
if let Some(name) = client_hello.server_name() {
if let Ok(dns_name) = DnsNameRef::try_from_ascii_str(name) {
for ck in &self.list {
// TODO I gave up on caching the parsed result because of lifetime hell.
// If this shows up on benchmarks, somebody should fix it.
let leaf = ck.cert.first().expect("missing certificate");
let parsed = EndEntityCert::try_from(leaf.0.as_ref()).expect("failed to parse certificate");
if parsed.verify_is_valid_for_dns_name(dns_name).is_ok() {
return Some(ck.clone());
}
}
}
}
// Default to the last certificate if we couldn't find one.
self.list.last().cloned()
}
}
pub struct NoCertificateVerification {}
impl rustls::client::ServerCertVerifier for NoCertificateVerification {
fn verify_server_cert(
&self,
_end_entity: &rustls::Certificate,
_intermediates: &[rustls::Certificate],
_server_name: &rustls::ServerName,
_scts: &mut dyn Iterator<Item = &[u8]>,
_ocsp_response: &[u8],
_now: time::SystemTime,
) -> Result<rustls::client::ServerCertVerified, rustls::Error> {
Ok(rustls::client::ServerCertVerified::assertion())
}
}

moq-relay/src/web.rs Normal file

@ -0,0 +1,44 @@
use std::sync::Arc;
use axum::{extract::State, http::Method, response::IntoResponse, routing::get, Router};
use axum_server::{tls_rustls::RustlsAcceptor, Server};
use tower_http::cors::{Any, CorsLayer};
use crate::{Config, Tls};
// Run a HTTP server using Axum
// TODO remove this when Chrome adds support for self-signed certificates using WebTransport
pub struct Web {
app: Router,
server: Server<RustlsAcceptor>,
}
impl Web {
pub fn new(config: Config, tls: Tls) -> Self {
// Get the first certificate's fingerprint.
// TODO serve all of them so we can support multiple signature algorithms.
let fingerprint = tls.fingerprints.first().expect("missing certificate").clone();
let mut tls_config = tls.server.clone();
tls_config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec()];
let tls_config = axum_server::tls_rustls::RustlsConfig::from_config(Arc::new(tls_config));
let app = Router::new()
.route("/fingerprint", get(serve_fingerprint))
.layer(CorsLayer::new().allow_origin(Any).allow_methods([Method::GET]))
.with_state(fingerprint);
let server = axum_server::bind_rustls(config.listen, tls_config);
Self { app, server }
}
pub async fn serve(self) -> anyhow::Result<()> {
self.server.serve(self.app.into_make_service()).await?;
Ok(())
}
}
async fn serve_fingerprint(State(fingerprint): State<String>) -> impl IntoResponse {
fingerprint
}
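In local development the fingerprint can then be fetched over HTTPS; for example (host and port depend on --listen, and -k skips verification of the self-signed certificate):

    curl -k https://localhost:4443/fingerprint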

moq-transport/Cargo.lock generated
File diff suppressed because it is too large.


@ -15,12 +15,15 @@ categories = ["multimedia", "network-programming", "web-programming"]
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
bytes = "1.4"
bytes = "1"
thiserror = "1"
anyhow = "1"
tokio = { version = "1.27", features = ["macros", "io-util", "sync"] }
tokio = { version = "1", features = ["macros", "io-util", "sync"] }
log = "0.4"
indexmap = "2"
quinn = "0.10"
webtransport-quinn = "0.5.2"
webtransport-quinn = "0.6"
#webtransport-quinn = { path = "../../webtransport-rs/webtransport-quinn" }
async-trait = "0.1"
paste = "1"


@ -2,61 +2,67 @@
//!
//! The [Publisher] can create tracks, either manually or on request.
//! It receives all requests by a [Subscriber] for tracks that don't exist.
//! The simplest implementation is to close every unknown track with [Error::NotFound].
//! The simplest implementation is to close every unknown track with [CacheError::NotFound].
//!
//! A [Subscriber] can request tracks by name.
//! If the track already exists, it will be returned.
//! If the track doesn't exist, it will be sent to [Unknown] to be handled.
//! A [Subscriber] can be cloned to create multiple subscriptions.
//!
//! The broadcast is automatically closed with [Error::Closed] when [Publisher] is dropped, or all [Subscriber]s are dropped.
//! The broadcast is automatically closed with [CacheError::Closed] when [Publisher] is dropped, or all [Subscriber]s are dropped.
use std::{
collections::{hash_map, HashMap, VecDeque},
fmt,
ops::Deref,
sync::Arc,
};
use crate::Error;
use super::{track, Watch};
use super::{track, CacheError, Watch};
/// Create a new broadcast.
pub fn new() -> (Publisher, Subscriber) {
pub fn new(id: &str) -> (Publisher, Subscriber) {
let state = Watch::new(State::default());
let info = Arc::new(Info { id: id.to_string() });
let publisher = Publisher::new(state.clone());
let subscriber = Subscriber::new(state);
let publisher = Publisher::new(state.clone(), info.clone());
let subscriber = Subscriber::new(state, info);
(publisher, subscriber)
}
/// Static information about a broadcast.
#[derive(Debug)]
pub struct Info {
pub id: String,
}
/// Dynamic information about the broadcast.
#[derive(Debug)]
struct State {
tracks: HashMap<String, track::Subscriber>,
requested: VecDeque<track::Publisher>,
closed: Result<(), Error>,
closed: Result<(), CacheError>,
}
impl State {
pub fn get(&self, name: &str) -> Result<Option<track::Subscriber>, Error> {
pub fn get(&self, name: &str) -> Result<Option<track::Subscriber>, CacheError> {
// Don't check closed, so we can return from cache.
Ok(self.tracks.get(name).cloned())
}
pub fn insert(&mut self, track: track::Subscriber) -> Result<(), Error> {
self.closed?;
pub fn insert(&mut self, track: track::Subscriber) -> Result<(), CacheError> {
self.closed.clone()?;
match self.tracks.entry(track.name.clone()) {
hash_map::Entry::Occupied(_) => return Err(Error::Duplicate),
hash_map::Entry::Occupied(_) => return Err(CacheError::Duplicate),
hash_map::Entry::Vacant(v) => v.insert(track),
};
Ok(())
}
pub fn request(&mut self, name: &str) -> Result<track::Subscriber, Error> {
self.closed?;
pub fn request(&mut self, name: &str) -> Result<track::Subscriber, CacheError> {
self.closed.clone()?;
// Create a new track.
let (publisher, subscriber) = track::new(name);
@ -70,13 +76,13 @@ impl State {
Ok(subscriber)
}
pub fn has_next(&self) -> Result<bool, Error> {
pub fn has_next(&self) -> Result<bool, CacheError> {
// Check if there's any elements in the queue before checking closed.
if !self.requested.is_empty() {
return Ok(true);
}
self.closed?;
self.closed.clone()?;
Ok(false)
}
@ -85,8 +91,8 @@ impl State {
self.requested.pop_front().expect("no entry in queue")
}
pub fn close(&mut self, err: Error) -> Result<(), Error> {
self.closed?;
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
@ -107,34 +113,35 @@ impl Default for State {
#[derive(Clone)]
pub struct Publisher {
state: Watch<State>,
info: Arc<Info>,
_dropped: Arc<Dropped>,
}
impl Publisher {
fn new(state: Watch<State>) -> Self {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, _dropped }
Self { state, info, _dropped }
}
/// Create a new track with the given name, inserting it into the broadcast.
pub fn create_track(&mut self, name: &str) -> Result<track::Publisher, Error> {
pub fn create_track(&mut self, name: &str) -> Result<track::Publisher, CacheError> {
let (publisher, subscriber) = track::new(name);
self.state.lock_mut().insert(subscriber)?;
Ok(publisher)
}
/// Insert a track into the broadcast.
pub fn insert_track(&mut self, track: track::Subscriber) -> Result<(), Error> {
pub fn insert_track(&mut self, track: track::Subscriber) -> Result<(), CacheError> {
self.state.lock_mut().insert(track)
}
/// Block until the next track requested by a subscriber.
pub async fn next_track(&mut self) -> Result<Option<track::Publisher>, Error> {
pub async fn next_track(&mut self) -> Result<track::Publisher, CacheError> {
loop {
let notify = {
let state = self.state.lock();
if state.has_next()? {
return Ok(Some(state.into_mut().next()));
return Ok(state.into_mut().next());
}
state.changed()
@ -145,14 +152,25 @@ impl Publisher {
}
/// Close the broadcast with an error.
pub fn close(self, err: Error) -> Result<(), Error> {
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
impl Deref for Publisher {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Publisher {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Publisher").field("state", &self.state).finish()
f.debug_struct("Publisher")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
@ -162,19 +180,20 @@ impl fmt::Debug for Publisher {
#[derive(Clone)]
pub struct Subscriber {
state: Watch<State>,
info: Arc<Info>,
_dropped: Arc<Dropped>,
}
impl Subscriber {
fn new(state: Watch<State>) -> Self {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, _dropped }
Self { state, info, _dropped }
}
/// Get a track from the broadcast by name.
/// If the track does not exist, it will be created and potentially fulfilled by the publisher (via Unknown).
/// Otherwise, it will return [Error::NotFound].
pub fn get_track(&self, name: &str) -> Result<track::Subscriber, Error> {
/// Otherwise, it will return [CacheError::NotFound].
pub fn get_track(&self, name: &str) -> Result<track::Subscriber, CacheError> {
let state = self.state.lock();
if let Some(track) = state.get(name)? {
return Ok(track);
@ -183,11 +202,43 @@ impl Subscriber {
// Request a new track if it does not exist.
state.into_mut().request(name)
}
/// Check if the broadcast is closed, either because the publisher was dropped or called [Publisher::close].
pub fn is_closed(&self) -> Option<CacheError> {
self.state.lock().closed.as_ref().err().cloned()
}
/// Wait until the broadcast is closed, either because the publisher was dropped or called [Publisher::close].
pub async fn closed(&self) -> CacheError {
loop {
let notify = {
let state = self.state.lock();
if let Some(err) = state.closed.as_ref().err() {
return err.clone();
}
state.changed()
};
notify.await;
}
}
}
impl Deref for Subscriber {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Subscriber {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Subscriber").field("state", &self.state).finish()
f.debug_struct("Subscriber")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
@ -206,6 +257,6 @@ impl Dropped {
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(Error::Closed).ok();
self.state.lock_mut().close(CacheError::Closed).ok();
}
}

moq-transport/src/cache/error.rs vendored Normal file

@ -0,0 +1,51 @@
use thiserror::Error;
use crate::MoqError;
#[derive(Clone, Debug, Error)]
pub enum CacheError {
/// A clean termination, represented as error code 0.
/// This error is automatically used when publishers or subscribers are dropped without calling close.
#[error("closed")]
Closed,
/// An ANNOUNCE_RESET or SUBSCRIBE_RESET was sent by the publisher.
#[error("reset code={0:?}")]
Reset(u32),
/// An ANNOUNCE_STOP or SUBSCRIBE_STOP was sent by the subscriber.
#[error("stop")]
Stop,
/// The requested resource was not found.
#[error("not found")]
NotFound,
/// A resource already exists with that ID.
#[error("duplicate")]
Duplicate,
}
impl MoqError for CacheError {
/// An integer code that is sent over the wire.
fn code(&self) -> u32 {
match self {
Self::Closed => 0,
Self::Reset(code) => *code,
Self::Stop => 206,
Self::NotFound => 404,
Self::Duplicate => 409,
}
}
/// A reason that is sent over the wire.
fn reason(&self) -> String {
match self {
Self::Closed => "closed".to_owned(),
Self::Reset(code) => format!("reset code: {}", code),
Self::Stop => "stop".to_owned(),
Self::NotFound => "not found".to_owned(),
Self::Duplicate => "duplicate".to_owned(),
}
}
}


@ -1,20 +1,20 @@
//! A segment is a stream of bytes with a header, split into a [Publisher] and [Subscriber] handle.
//! A fragment is a stream of bytes with a header, split into a [Publisher] and [Subscriber] handle.
//!
//! A [Publisher] writes an ordered stream of bytes in chunks.
//! There's no framing, so these chunks can be of any size or position, and won't be maintained over the network.
//!
//! A [Subscriber] reads an ordered stream of bytes in chunks.
//! These chunks are returned directly from the QUIC connection, so they may be of any size or position.
//! A closed [Subscriber] will receive a copy of all future chunks. (fanout)
//! You can clone the [Subscriber] and each will read a copy of all future chunks. (fanout)
//!
//! The segment is closed with [Error::Closed] when all publishers or subscribers are dropped.
//! The fragment is closed with [CacheError::Closed] when all publishers or subscribers are dropped.
use core::fmt;
use std::{ops::Deref, sync::Arc, time};
use std::{ops::Deref, sync::Arc};
use crate::{Error, VarInt};
use crate::VarInt;
use bytes::Bytes;
use super::Watch;
use super::{CacheError, Watch};
/// Create a new segment with the given info.
pub fn new(info: Info) -> (Publisher, Subscriber) {
@ -30,36 +30,39 @@ pub fn new(info: Info) -> (Publisher, Subscriber) {
/// Static information about the segment.
#[derive(Debug)]
pub struct Info {
// The sequence number of the segment within the track.
// The sequence number of the fragment within the segment.
// NOTE: These may be received out of order or with gaps.
pub sequence: VarInt,
// The priority of the segment within the BROADCAST.
pub priority: i32,
// Cache the segment for at most this long.
pub expires: Option<time::Duration>,
// The size of the fragment, optionally None if this is the last fragment in a segment.
// TODO enforce this size.
pub size: Option<VarInt>,
}
struct State {
// The data that has been received thus far.
data: Vec<Bytes>,
chunks: Vec<Bytes>,
// Set when the publisher is dropped.
closed: Result<(), Error>,
closed: Result<(), CacheError>,
}
impl State {
pub fn close(&mut self, err: Error) -> Result<(), Error> {
self.closed?;
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
pub fn bytes(&self) -> usize {
self.chunks.iter().map(|f| f.len()).sum::<usize>()
}
}
impl Default for State {
fn default() -> Self {
Self {
data: Vec::new(),
chunks: Vec::new(),
closed: Ok(()),
}
}
@ -68,11 +71,9 @@ impl Default for State {
impl fmt::Debug for State {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
// We don't want to print out the contents, so summarize.
let size = self.data.iter().map(|chunk| chunk.len()).sum::<usize>();
let data = format!("size={} chunks={}", size, self.data.len());
f.debug_struct("State")
.field("data", &data)
.field("chunks", &self.chunks.len().to_string())
.field("bytes", &self.bytes().to_string())
.field("closed", &self.closed)
.finish()
}
@ -97,15 +98,15 @@ impl Publisher {
}
/// Write a new chunk of bytes.
pub fn write_chunk(&mut self, data: Bytes) -> Result<(), Error> {
pub fn write_chunk(&mut self, chunk: Bytes) -> Result<(), CacheError> {
let mut state = self.state.lock_mut();
state.closed?;
state.data.push(data);
state.closed.clone()?;
state.chunks.push(chunk);
Ok(())
}
/// Close the segment with an error.
pub fn close(self, err: Error) -> Result<(), Error> {
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
@ -157,19 +158,19 @@ impl Subscriber {
}
/// Block until the next chunk of bytes is available.
pub async fn read_chunk(&mut self) -> Result<Option<Bytes>, Error> {
pub async fn read_chunk(&mut self) -> Result<Option<Bytes>, CacheError> {
loop {
let notify = {
let state = self.state.lock();
if self.index < state.data.len() {
let chunk = state.data[self.index].clone();
if self.index < state.chunks.len() {
let chunk = state.chunks[self.index].clone();
self.index += 1;
return Ok(Some(chunk));
}
match state.closed {
Err(Error::Closed) => return Ok(None),
Err(err) => return Err(err),
match &state.closed {
Err(CacheError::Closed) => return Ok(None),
Err(err) => return Err(err.clone()),
Ok(()) => state.changed(),
}
};
@ -210,6 +211,6 @@ impl Dropped {
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(Error::Closed).ok();
self.state.lock_mut().close(CacheError::Closed).ok();
}
}

moq-transport/src/cache/mod.rs vendored Normal file

@ -0,0 +1,21 @@
//! Allows a publisher to push updates, automatically caching and fanning it out to any subscribers.
//!
//! The hierarchy is: [broadcast] -> [track] -> [segment] -> [fragment] -> [Bytes](bytes::Bytes)
//!
//! The naming scheme doesn't match the spec because it's more strict, and bikeshedding of course:
//!
//! - [broadcast] is kinda like "track namespace"
//! - [track] is "track"
//! - [segment] is "group" but MUST use a single stream.
//! - [fragment] is "object" but MUST have the same properties as the segment.
pub mod broadcast;
mod error;
pub mod fragment;
pub mod segment;
pub mod track;
pub(crate) mod watch;
pub(crate) use watch::*;
pub use error::*;
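A minimal sketch of the publisher side of this hierarchy, assuming the cache module is exported as shown here; the broadcast/track names and values are illustrative and error handling is elided:

    use moq_transport::cache::{broadcast, fragment, segment};
    use moq_transport::VarInt;
    use std::time;

    let (mut publisher, subscriber) = broadcast::new("demo");

    // broadcast -> track -> segment -> fragment -> Bytes
    let mut track = publisher.create_track("1.m4s")?;
    let mut segment = track.create_segment(segment::Info {
        sequence: VarInt::ZERO,
        priority: 0,
        expires: Some(time::Duration::from_secs(10)),
    })?;
    let mut fragment = segment.create_fragment(fragment::Info {
        sequence: VarInt::ZERO,
        size: None,
    })?;
    fragment.write_chunk("hello".into())?;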

moq-transport/src/cache/segment.rs vendored Normal file

@ -0,0 +1,216 @@
//! A segment is a stream of fragments with a header, split into a [Publisher] and [Subscriber] handle.
//!
//! A [Publisher] writes an ordered stream of fragments.
//! Each fragment can have a sequence number, allowing the subscriber to detect gaps between fragments.
//!
//! A [Subscriber] reads an ordered stream of fragments.
//! The subscriber can be cloned, in which case each subscriber receives a copy of each fragment. (fanout)
//!
//! The segment is closed with [CacheError::Closed] when all publishers or subscribers are dropped.
use core::fmt;
use std::{ops::Deref, sync::Arc, time};
use crate::VarInt;
use super::{fragment, CacheError, Watch};
/// Create a new segment with the given info.
pub fn new(info: Info) -> (Publisher, Subscriber) {
let state = Watch::new(State::default());
let info = Arc::new(info);
let publisher = Publisher::new(state.clone(), info.clone());
let subscriber = Subscriber::new(state, info);
(publisher, subscriber)
}
/// Static information about the segment.
#[derive(Debug)]
pub struct Info {
// The sequence number of the segment within the track.
// NOTE: These may be received out of order or with gaps.
pub sequence: VarInt,
// The priority of the segment within the BROADCAST.
pub priority: u32,
// Cache the segment for at most this long.
pub expires: Option<time::Duration>,
}
struct State {
// The data that has been received thus far.
fragments: Vec<fragment::Subscriber>,
// Set when the publisher is dropped.
closed: Result<(), CacheError>,
}
impl State {
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
}
impl Default for State {
fn default() -> Self {
Self {
fragments: Vec::new(),
closed: Ok(()),
}
}
}
impl fmt::Debug for State {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("State")
.field("fragments", &self.fragments)
.field("closed", &self.closed)
.finish()
}
}
/// Used to write data to a segment and notify subscribers.
pub struct Publisher {
// Mutable segment state.
state: Watch<State>,
// Immutable segment state.
info: Arc<Info>,
// Closes the segment when all Publishers are dropped.
_dropped: Arc<Dropped>,
}
impl Publisher {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self { state, info, _dropped }
}
/// Write a fragment
pub fn push_fragment(&mut self, fragment: fragment::Subscriber) -> Result<(), CacheError> {
let mut state = self.state.lock_mut();
state.closed.clone()?;
state.fragments.push(fragment);
Ok(())
}
pub fn create_fragment(&mut self, fragment: fragment::Info) -> Result<fragment::Publisher, CacheError> {
let (publisher, subscriber) = fragment::new(fragment);
self.push_fragment(subscriber)?;
Ok(publisher)
}
/// Close the segment with an error.
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
impl Deref for Publisher {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Publisher {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Publisher")
.field("state", &self.state)
.field("info", &self.info)
.finish()
}
}
/// Notified when a segment has new data available.
#[derive(Clone)]
pub struct Subscriber {
// Modify the segment state.
state: Watch<State>,
// Immutable segment state.
info: Arc<Info>,
// The number of chunks that we've read.
// NOTE: Cloned subscribers inherit this index, but then run in parallel.
index: usize,
// Dropped when all Subscribers are dropped.
_dropped: Arc<Dropped>,
}
impl Subscriber {
fn new(state: Watch<State>, info: Arc<Info>) -> Self {
let _dropped = Arc::new(Dropped::new(state.clone()));
Self {
state,
info,
index: 0,
_dropped,
}
}
/// Block until the next chunk of bytes is available.
pub async fn next_fragment(&mut self) -> Result<Option<fragment::Subscriber>, CacheError> {
loop {
let notify = {
let state = self.state.lock();
if self.index < state.fragments.len() {
let fragment = state.fragments[self.index].clone();
self.index += 1;
return Ok(Some(fragment));
}
match &state.closed {
Err(CacheError::Closed) => return Ok(None),
Err(err) => return Err(err.clone()),
Ok(()) => state.changed(),
}
};
notify.await; // Try again when the state changes
}
}
}
impl Deref for Subscriber {
type Target = Info;
fn deref(&self) -> &Self::Target {
&self.info
}
}
impl fmt::Debug for Subscriber {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Subscriber")
.field("state", &self.state)
.field("info", &self.info)
.field("index", &self.index)
.finish()
}
}
struct Dropped {
// Modify the segment state.
state: Watch<State>,
}
impl Dropped {
fn new(state: Watch<State>) -> Self {
Self { state }
}
}
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(CacheError::Closed).ok();
}
}


@ -10,14 +10,14 @@
//! Segments will be cached for a potentially limited duration, adding to the unreliable nature.
//! A cloned [Subscriber] will receive a copy of all new segment going forward (fanout).
//!
//! The track is closed with [Error::Closed] when all publishers or subscribers are dropped.
//! The track is closed with [CacheError::Closed] when all publishers or subscribers are dropped.
use std::{collections::BinaryHeap, fmt, ops::Deref, sync::Arc, time};
use indexmap::IndexMap;
use super::{segment, Watch};
use crate::{Error, VarInt};
use super::{segment, CacheError, Watch};
use crate::VarInt;
/// Create a track with the given name.
pub fn new(name: &str) -> (Publisher, Subscriber) {
@ -49,21 +49,21 @@ struct State {
pruned: usize,
// Set when the publisher is closed/dropped, or all subscribers are dropped.
closed: Result<(), Error>,
closed: Result<(), CacheError>,
}
impl State {
pub fn close(&mut self, err: Error) -> Result<(), Error> {
self.closed?;
pub fn close(&mut self, err: CacheError) -> Result<(), CacheError> {
self.closed.clone()?;
self.closed = Err(err);
Ok(())
}
pub fn insert(&mut self, segment: segment::Subscriber) -> Result<(), Error> {
self.closed?;
pub fn insert(&mut self, segment: segment::Subscriber) -> Result<(), CacheError> {
self.closed.clone()?;
let entry = match self.lookup.entry(segment.sequence) {
indexmap::map::Entry::Occupied(_entry) => return Err(Error::Duplicate),
indexmap::map::Entry::Occupied(_entry) => return Err(CacheError::Duplicate),
indexmap::map::Entry::Vacant(entry) => entry,
};
@ -144,19 +144,19 @@ impl Publisher {
}
/// Insert a new segment.
pub fn insert_segment(&mut self, segment: segment::Subscriber) -> Result<(), Error> {
pub fn insert_segment(&mut self, segment: segment::Subscriber) -> Result<(), CacheError> {
self.state.lock_mut().insert(segment)
}
/// Create and insert a segment with the given info.
pub fn create_segment(&mut self, info: segment::Info) -> Result<segment::Publisher, Error> {
pub fn create_segment(&mut self, info: segment::Info) -> Result<segment::Publisher, CacheError> {
let (publisher, subscriber) = segment::new(info);
self.insert_segment(subscriber)?;
Ok(publisher)
}
/// Close the segment with an error.
pub fn close(self, err: Error) -> Result<(), Error> {
pub fn close(self, err: CacheError) -> Result<(), CacheError> {
self.state.lock_mut().close(err)
}
}
@ -206,8 +206,8 @@ impl Subscriber {
}
}
/// Block until the next segment arrives, or return None if the track is [Error::Closed].
pub async fn next_segment(&mut self) -> Result<Option<segment::Subscriber>, Error> {
/// Block until the next segment arrives
pub async fn next_segment(&mut self) -> Result<Option<segment::Subscriber>, CacheError> {
loop {
let notify = {
let state = self.state.lock();
@ -236,9 +236,9 @@ impl Subscriber {
}
// Otherwise check if we need to return an error.
match state.closed {
Err(Error::Closed) => return Ok(None),
Err(err) => return Err(err),
match &state.closed {
Err(CacheError::Closed) => return Ok(None),
Err(err) => return Err(err.clone()),
Ok(()) => state.changed(),
}
};
@ -279,7 +279,7 @@ impl Dropped {
impl Drop for Dropped {
fn drop(&mut self) {
self.state.lock_mut().close(Error::Closed).ok();
self.state.lock_mut().close(CacheError::Closed).ok();
}
}
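And the matching consumer side, a hedged sketch combining next_segment, next_fragment, and read_chunk from this diff:

    let mut track = subscriber.get_track("1.m4s")?;
    while let Some(mut segment) = track.next_segment().await? {
        while let Some(mut fragment) = segment.next_fragment().await? {
            while let Some(chunk) = fragment.read_chunk().await? {
                // e.g. forward the chunk onto a QUIC stream
                log::debug!("received {} bytes", chunk.len());
            }
        }
    }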


@ -1,5 +1,5 @@
use super::{BoundsExceeded, VarInt};
use std::str;
use std::{io, str};
use thiserror::Error;
@ -7,6 +7,13 @@ use thiserror::Error;
// TODO Use trait aliases when they're stable, or add these bounds to every method.
pub trait AsyncRead: tokio::io::AsyncRead + Unpin + Send {}
impl AsyncRead for webtransport_quinn::RecvStream {}
impl<T> AsyncRead for tokio::io::Take<&mut T> where T: AsyncRead {}
impl<T: AsRef<[u8]> + Unpin + Send> AsyncRead for io::Cursor<T> {}
#[async_trait::async_trait]
pub trait Decode: Sized {
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError>;
}
/// A decode error.
#[derive(Error, Debug)]
@ -17,12 +24,32 @@ pub enum DecodeError {
#[error("invalid string")]
InvalidString(#[from] str::Utf8Error),
#[error("invalid type: {0:?}")]
InvalidType(VarInt),
#[error("invalid message: {0:?}")]
InvalidMessage(VarInt),
#[error("invalid role: {0:?}")]
InvalidRole(VarInt),
#[error("invalid subscribe location")]
InvalidSubscribeLocation,
#[error("varint bounds exceeded")]
BoundsExceeded(#[from] BoundsExceeded),
// TODO move these to ParamError
#[error("duplicate parameter")]
DuplicateParameter,
#[error("missing parameter")]
MissingParameter,
#[error("invalid parameter")]
InvalidParameter,
#[error("io error: {0}")]
IoError(#[from] std::io::Error),
// Used to signal that the stream has ended.
#[error("no more messages")]
Final,
}

View File

@ -6,6 +6,12 @@ use thiserror::Error;
// TODO Use trait aliases when they're stable, or add these bounds to every method.
pub trait AsyncWrite: tokio::io::AsyncWrite + Unpin + Send {}
impl AsyncWrite for webtransport_quinn::SendStream {}
impl AsyncWrite for Vec<u8> {}
#[async_trait::async_trait]
pub trait Encode: Sized {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError>;
}
/// An encode error.
#[derive(Error, Debug)]

View File

@ -1,9 +1,11 @@
mod decode;
mod encode;
mod params;
mod string;
mod varint;
pub use decode::*;
pub use encode::*;
pub use params::*;
pub use string::*;
pub use varint::*;

View File

@ -0,0 +1,85 @@
use std::io::Cursor;
use std::{cmp::min, collections::HashMap};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use crate::coding::{AsyncRead, AsyncWrite, Decode, Encode};
use crate::{
coding::{DecodeError, EncodeError},
VarInt,
};
#[derive(Default, Debug, Clone)]
pub struct Params(pub HashMap<VarInt, Vec<u8>>);
#[async_trait::async_trait]
impl Decode for Params {
async fn decode<R: AsyncRead>(mut r: &mut R) -> Result<Self, DecodeError> {
let mut params = HashMap::new();
// I hate this shit so much; let me encode my role and get on with my life.
let count = VarInt::decode(r).await?;
for _ in 0..count.into_inner() {
let kind = VarInt::decode(r).await?;
if params.contains_key(&kind) {
return Err(DecodeError::DuplicateParameter);
}
let size = VarInt::decode(r).await?;
// Don't allocate the entire requested size to avoid a possible attack
// Instead, we allocate up to 1024 and keep appending as we read further.
let mut pr = r.take(size.into_inner());
let mut buf = Vec::with_capacity(min(1024, pr.limit() as usize));
pr.read_to_end(&mut buf).await?;
params.insert(kind, buf);
r = pr.into_inner();
}
Ok(Params(params))
}
}
#[async_trait::async_trait]
impl Encode for Params {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
VarInt::try_from(self.0.len())?.encode(w).await?;
for (kind, value) in self.0.iter() {
kind.encode(w).await?;
VarInt::try_from(value.len())?.encode(w).await?;
w.write_all(value).await?;
}
Ok(())
}
}
impl Params {
pub fn new() -> Self {
Self::default()
}
pub async fn set<P: Encode>(&mut self, kind: VarInt, p: P) -> Result<(), EncodeError> {
let mut value = Vec::new();
p.encode(&mut value).await?;
self.0.insert(kind, value);
Ok(())
}
pub fn has(&self, kind: VarInt) -> bool {
self.0.contains_key(&kind)
}
pub async fn get<P: Decode>(&mut self, kind: VarInt) -> Result<Option<P>, DecodeError> {
if let Some(value) = self.0.remove(&kind) {
let mut cursor = Cursor::new(value);
Ok(Some(P::decode(&mut cursor).await?))
} else {
Ok(None)
}
}
}
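Params stores each parameter as raw bytes keyed by its varint ID, so values round-trip through their Encode/Decode impls. A minimal sketch of that round trip, assuming it runs inside this crate; the parameter ID 0x0 and the value are only examples:
async fn params_roundtrip() {
    use crate::coding::Params;
    use crate::VarInt;
    // Hypothetical parameter ID, not one defined by the draft.
    let key = VarInt::from_u32(0);
    let mut params = Params::new();
    // `set` encodes the value into an owned buffer keyed by the parameter ID.
    params.set(key, VarInt::from_u32(42)).await.unwrap();
    assert!(params.has(key));
    // `get` removes and decodes the value, returning Ok(None) when absent.
    let value: Option<VarInt> = params.get(key).await.unwrap();
    assert_eq!(value, Some(VarInt::from_u32(42)));
}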

View File

@ -5,20 +5,25 @@ use tokio::io::{AsyncReadExt, AsyncWriteExt};
use crate::VarInt;
use super::{DecodeError, EncodeError};
use super::{Decode, DecodeError, Encode, EncodeError};
/// Encode a string with a varint length prefix.
pub async fn encode_string<W: AsyncWrite>(s: &str, w: &mut W) -> Result<(), EncodeError> {
let size = VarInt::try_from(s.len())?;
#[async_trait::async_trait]
impl Encode for String {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
let size = VarInt::try_from(self.len())?;
size.encode(w).await?;
w.write_all(s.as_ref()).await?;
w.write_all(self.as_ref()).await?;
Ok(())
}
}
#[async_trait::async_trait]
impl Decode for String {
/// Decode a string with a varint length prefix.
pub async fn decode_string<R: AsyncRead>(r: &mut R) -> Result<String, DecodeError> {
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let size = VarInt::decode(r).await?.into_inner();
let mut str = String::with_capacity(min(1024, size) as usize);
r.take(size).read_to_string(&mut str).await?;
Ok(str)
}
}

View File

@ -9,7 +9,7 @@ use crate::coding::{AsyncRead, AsyncWrite};
use thiserror::Error;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use super::{DecodeError, EncodeError};
use super::{Decode, DecodeError, Encode, EncodeError};
#[derive(Debug, Copy, Clone, Eq, PartialEq, Error)]
#[error("value out of range")]
@ -164,14 +164,23 @@ impl fmt::Display for VarInt {
}
}
impl VarInt {
#[async_trait::async_trait]
impl Decode for VarInt {
/// Decode a varint from the given reader.
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let mut buf = [0u8; 8];
r.read_exact(buf[0..1].as_mut()).await?;
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let b = r.read_u8().await?;
Self::decode_byte(b, r).await
}
}
let tag = buf[0] >> 6;
buf[0] &= 0b0011_1111;
impl VarInt {
/// Decode a varint given the first byte, reading the rest as needed.
/// This is silly but useful for determining if the stream has ended.
pub async fn decode_byte<R: AsyncRead>(b: u8, r: &mut R) -> Result<Self, DecodeError> {
let tag = b >> 6;
let mut buf = [0u8; 8];
buf[0] = b & 0b0011_1111;
let x = match tag {
0b00 => u64::from(buf[0]),
@ -192,9 +201,12 @@ impl VarInt {
Ok(Self(x))
}
}
#[async_trait::async_trait]
impl Encode for VarInt {
/// Encode a varint to the given writer.
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
let x = self.0;
if x < 2u64.pow(6) {
w.write_u8(x as u8).await?;
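The tag in the two high bits of the first byte selects the total length of the varint, following the QUIC variable-length integer scheme: 6, 14, 30 or 62 usable bits for 1, 2, 4 or 8 byte encodings. A minimal sketch of that length rule, separate from the code above:
// How many bytes a value occupies under the QUIC varint scheme used here.
// Values of 2^62 and above are out of range (BoundsExceeded).
fn varint_len(x: u64) -> usize {
    if x < (1 << 6) {
        1
    } else if x < (1 << 14) {
        2
    } else if x < (1 << 30) {
        4
    } else {
        8
    }
}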

View File

@ -1,76 +1,7 @@
use thiserror::Error;
use crate::VarInt;
/// A MoQTransport error with an associated error code.
#[derive(Copy, Clone, Debug, Error)]
pub enum Error {
/// A clean termination, represented as error code 0.
/// This error is automatically used when publishers or subscribers are dropped without calling close.
#[error("closed")]
Closed,
/// An ANNOUNCE_RESET or SUBSCRIBE_RESET was sent by the publisher.
#[error("reset code={0:?}")]
Reset(u32),
/// An ANNOUNCE_STOP or SUBSCRIBE_STOP was sent by the subscriber.
#[error("stop")]
Stop,
/// The requested resource was not found.
#[error("not found")]
NotFound,
/// A resource already exists with that ID.
#[error("duplicate")]
Duplicate,
/// The role negotiated in the handshake was violated. For example, a publisher sent a SUBSCRIBE, or a subscriber sent an OBJECT.
#[error("role violation: msg={0}")]
Role(VarInt),
/// An error occurred while reading from the QUIC stream.
#[error("failed to read from stream")]
Read,
/// An error occurred while writing to the QUIC stream.
#[error("failed to write to stream")]
Write,
/// An unclassified error because I'm lazy. TODO classify these errors
#[error("unknown error")]
Unknown,
}
impl Error {
pub trait MoqError {
/// An integer code that is sent over the wire.
pub fn code(&self) -> u32 {
match self {
Self::Closed => 0,
Self::Reset(code) => *code,
Self::Stop => 206,
Self::NotFound => 404,
Self::Role(_) => 405,
Self::Duplicate => 409,
Self::Unknown => 500,
Self::Write => 501,
Self::Read => 502,
}
}
fn code(&self) -> u32;
/// A reason that is sent over the wire.
pub fn reason(&self) -> &str {
match self {
Self::Closed => "closed",
Self::Reset(_) => "reset",
Self::Stop => "stop",
Self::NotFound => "not found",
Self::Duplicate => "duplicate",
Self::Role(_msg) => "role violation",
Self::Unknown => "unknown",
Self::Read => "read error",
Self::Write => "write error",
}
}
/// An optional reason sometimes sent over the wire.
fn reason(&self) -> String;
}
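Both CacheError and the new SessionError implement this trait, so either can be surfaced on the wire when a session is torn down. A minimal sketch assuming it lives inside this crate, mirroring the publisher code later in this diff, which closes the WebTransport session with the error's code and reason:
fn close_with<E: crate::MoqError>(session: &webtransport_quinn::Session, err: &E) {
    // Send the numeric code and the human-readable reason to the peer.
    session.close(err.code(), err.reason().as_bytes());
}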

View File

@ -5,16 +5,14 @@
//! The specification is a work in progress and will change.
//! See the [specification](https://datatracker.ietf.org/doc/draft-ietf-moq-transport/) and [github](https://github.com/moq-wg/moq-transport) for any updates.
//!
//! **FORKED**: This implementation makes extensive changes to the protocol.
//! See [KIXEL_00](crate::setup::Version::KIXEL_00) for a list of differences.
//! Many of these will get merged into the specification, so don't panic.
//! This implementation has some required extensions until the draft stabilizes. See: [Extensions](crate::setup::Extensions)
mod coding;
mod error;
pub mod cache;
pub mod message;
pub mod model;
pub mod session;
pub mod setup;
pub use coding::VarInt;
pub use error::*;
pub use error::MoqError;

View File

@ -1,22 +1,30 @@
use crate::coding::{decode_string, encode_string, DecodeError, EncodeError};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, Params};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the publisher to announce the availability of a group of tracks.
#[derive(Clone, Debug)]
pub struct Announce {
// The track namespace
/// The track namespace
pub namespace: String,
/// Optional parameters
pub params: Params,
}
impl Announce {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let namespace = decode_string(r).await?;
Ok(Self { namespace })
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
let params = Params::decode(r).await?;
Ok(Self { namespace, params })
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
encode_string(&self.namespace, w).await?;
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await?;
self.params.encode(w).await?;
Ok(())
}
}

View File

@ -1,4 +1,7 @@
use crate::coding::{decode_string, encode_string, AsyncRead, AsyncWrite, DecodeError, EncodeError};
use crate::{
coding::{AsyncRead, AsyncWrite, Decode, DecodeError, Encode, EncodeError},
setup::Extensions,
};
/// Sent by the subscriber to accept an Announce.
#[derive(Clone, Debug)]
@ -9,12 +12,12 @@ pub struct AnnounceOk {
}
impl AnnounceOk {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let namespace = decode_string(r).await?;
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
Ok(Self { namespace })
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
encode_string(&self.namespace, w).await
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await
}
}

View File

@ -1,10 +1,11 @@
use crate::coding::{decode_string, encode_string, DecodeError, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the subscriber to reject an Announce.
#[derive(Clone, Debug)]
pub struct AnnounceReset {
pub struct AnnounceError {
// Echo back the namespace that was reset
pub namespace: String,
@ -15,11 +16,11 @@ pub struct AnnounceReset {
pub reason: String,
}
impl AnnounceReset {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let namespace = decode_string(r).await?;
impl AnnounceError {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
let code = VarInt::decode(r).await?.try_into()?;
let reason = decode_string(r).await?;
let reason = String::decode(r).await?;
Ok(Self {
namespace,
@ -28,10 +29,10 @@ impl AnnounceReset {
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
encode_string(&self.namespace, w).await?;
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await?;
VarInt::from_u32(self.code).encode(w).await?;
encode_string(&self.reason, w).await?;
self.reason.encode(w).await?;
Ok(())
}

View File

@ -1,24 +0,0 @@
use crate::coding::{decode_string, encode_string, DecodeError, EncodeError};
use crate::coding::{AsyncRead, AsyncWrite};
/// Sent by the publisher to terminate an Announce.
#[derive(Clone, Debug)]
pub struct AnnounceStop {
// Echo back the namespace that was reset
pub namespace: String,
}
impl AnnounceStop {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let namespace = decode_string(r).await?;
Ok(Self { namespace })
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
encode_string(&self.namespace, w).await?;
Ok(())
}
}

View File

@ -1,6 +1,7 @@
use crate::coding::{decode_string, encode_string, DecodeError, EncodeError};
use crate::coding::{Decode, DecodeError, Encode, EncodeError};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the server to indicate that the client should connect to a different server.
#[derive(Clone, Debug)]
@ -9,12 +10,12 @@ pub struct GoAway {
}
impl GoAway {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let url = decode_string(r).await?;
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let url = String::decode(r).await?;
Ok(Self { url })
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
encode_string(&self.url, w).await
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.url.encode(w).await
}
}

View File

@ -6,16 +6,17 @@
//!
//! Messages sent by the publisher:
//! - [Announce]
//! - [AnnounceReset]
//! - [Unannounce]
//! - [SubscribeOk]
//! - [SubscribeError]
//! - [SubscribeReset]
//! - [Object]
//!
//! Messages sent by the subscriber:
//! - [Subscribe]
//! - [SubscribeStop]
//! - [Unsubscribe]
//! - [AnnounceOk]
//! - [AnnounceStop]
//! - [AnnounceError]
//!
//! Example flow:
//! ```test
@ -32,30 +33,35 @@
mod announce;
mod announce_ok;
mod announce_reset;
mod announce_stop;
mod go_away;
mod object;
mod subscribe;
mod subscribe_error;
mod subscribe_fin;
mod subscribe_ok;
mod subscribe_reset;
mod subscribe_stop;
mod unannounce;
mod unsubscribe;
pub use announce::*;
pub use announce_ok::*;
pub use announce_reset::*;
pub use announce_stop::*;
pub use go_away::*;
pub use object::*;
pub use subscribe::*;
pub use subscribe_error::*;
pub use subscribe_fin::*;
pub use subscribe_ok::*;
pub use subscribe_reset::*;
pub use subscribe_stop::*;
pub use unannounce::*;
pub use unsubscribe::*;
use crate::coding::{DecodeError, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use std::fmt;
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
// Use a macro to generate the message types rather than copy-paste.
// This implements a decode/encode method that uses the specified type.
@ -68,23 +74,23 @@ macro_rules! message_types {
}
impl Message {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
pub async fn decode<R: AsyncRead>(r: &mut R, ext: &Extensions) -> Result<Self, DecodeError> {
let t = VarInt::decode(r).await?;
match t.into_inner() {
$($val => {
let msg = $name::decode(r).await?;
let msg = $name::decode(r, ext).await?;
Ok(Self::$name(msg))
})*
_ => Err(DecodeError::InvalidType(t)),
_ => Err(DecodeError::InvalidMessage(t)),
}
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, ext: &Extensions) -> Result<(), EncodeError> {
match self {
$(Self::$name(ref m) => {
VarInt::from_u32($val).encode(w).await?;
m.encode(w).await
m.encode(w, ext).await
},)*
}
}
@ -127,15 +133,28 @@ macro_rules! message_types {
message_types! {
// NOTE: Object and Setup are in other modules.
// Object = 0x0
// SetupClient = 0x1
// SetupServer = 0x2
// ObjectUnbounded = 0x2
// SetupClient = 0x40
// SetupServer = 0x41
// SUBSCRIBE family, sent by subscriber
Subscribe = 0x3,
Unsubscribe = 0xa,
// SUBSCRIBE family, sent by publisher
SubscribeOk = 0x4,
SubscribeReset = 0x5,
SubscribeStop = 0x15,
SubscribeError = 0x5,
SubscribeFin = 0xb,
SubscribeReset = 0xc,
// ANNOUNCE family, sent by publisher
Announce = 0x6,
Unannounce = 0x9,
// ANNOUNCE family, sent by subscriber
AnnounceOk = 0x7,
AnnounceReset = 0x8,
AnnounceStop = 0x18,
AnnounceError = 0x8,
// Misc
GoAway = 0x10,
}
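On the wire, each control message is a varint type ID from the table above followed by its payload, and the negotiated extensions decide how some payloads are parsed. A minimal sketch of a receive loop built on that framing, assuming it runs inside this crate:
use crate::coding::{AsyncRead, DecodeError};
use crate::message::Message;
use crate::setup::Extensions;
// Keep decoding control messages until the stream fails or is closed.
async fn control_loop<R: AsyncRead>(r: &mut R, ext: &Extensions) -> Result<(), DecodeError> {
    loop {
        let msg = Message::decode(r, ext).await?;
        log::info!("received control message: {:?}", msg);
    }
}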

View File

@ -1,9 +1,10 @@
use std::time;
use std::{io, time};
use crate::coding::{DecodeError, EncodeError, VarInt};
use tokio::io::AsyncReadExt;
use crate::coding::{AsyncRead, AsyncWrite};
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup;
/// Sent by the publisher as the header of each data stream.
#[derive(Clone, Debug)]
@ -13,47 +14,78 @@ pub struct Object {
pub track: VarInt,
// The sequence number within the track.
pub group: VarInt,
// The sequence number within the group.
pub sequence: VarInt,
// The priority, where **larger** values are sent first.
// Proposal: int32 instead of a varint.
pub priority: i32,
// The priority, where **smaller** values are sent first.
pub priority: u32,
// Cache the object for at most this many seconds.
// Zero means never expire.
pub expires: Option<time::Duration>,
/// An optional size, allowing multiple OBJECTs on the same stream.
pub size: Option<VarInt>,
}
impl Object {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let typ = VarInt::decode(r).await?;
if typ.into_inner() != 0 {
return Err(DecodeError::InvalidType(typ));
}
pub async fn decode<R: AsyncRead>(r: &mut R, extensions: &setup::Extensions) -> Result<Self, DecodeError> {
// Try reading the first byte, returning a special error if the stream naturally ended.
let typ = match r.read_u8().await {
Ok(b) => VarInt::decode_byte(b, r).await?,
Err(e) if e.kind() == io::ErrorKind::UnexpectedEof => return Err(DecodeError::Final),
Err(e) => return Err(e.into()),
};
// The type determines whether a size field is present: 0x0 omits it, 0x2 includes it.
let size_present = match typ.into_inner() {
0 => false,
2 => true,
_ => return Err(DecodeError::InvalidMessage(typ)),
};
let track = VarInt::decode(r).await?;
let group = VarInt::decode(r).await?;
let sequence = VarInt::decode(r).await?;
let priority = r.read_i32().await?; // big-endian
let expires = match VarInt::decode(r).await?.into_inner() {
let priority = VarInt::decode(r).await?.try_into()?;
let expires = match extensions.object_expires {
true => match VarInt::decode(r).await?.into_inner() {
0 => None,
secs => Some(time::Duration::from_secs(secs)),
},
false => None,
};
// The presence of the size field depends on the type.
let size = match size_present {
true => Some(VarInt::decode(r).await?),
false => None,
};
Ok(Self {
track,
group,
sequence,
priority,
expires,
size,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
VarInt::ZERO.encode(w).await?;
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, extensions: &setup::Extensions) -> Result<(), EncodeError> {
// The kind changes based on the presence of the size.
let kind = match self.size {
Some(_) => VarInt::from_u32(2),
None => VarInt::ZERO,
};
kind.encode(w).await?;
self.track.encode(w).await?;
self.group.encode(w).await?;
self.sequence.encode(w).await?;
w.write_i32(self.priority).await?;
VarInt::from_u32(self.priority).encode(w).await?;
// Round up if there are any decimal points.
let expires = match self.expires {
@ -63,7 +95,13 @@ impl Object {
Some(expires) => expires.as_secs(),
};
if extensions.object_expires {
VarInt::try_from(expires)?.encode(w).await?;
}
if let Some(size) = self.size {
size.encode(w).await?;
}
Ok(())
}
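The object header is therefore: type (0x0 without a size, 0x2 with one), track, group, sequence, priority, then expires only when the object_expires extension was negotiated, and finally the optional size. A minimal sketch of building one header inside this crate, with hypothetical field values:
fn example_header() -> crate::message::Object {
    use crate::VarInt;
    crate::message::Object {
        track: VarInt::from_u32(1),
        group: VarInt::from_u32(7),
        sequence: VarInt::ZERO,
        // Smaller values are sent first.
        priority: 0,
        // Only encoded when the object_expires extension was negotiated.
        expires: Some(std::time::Duration::from_secs(10)),
        // None => type 0x0: the remainder of the stream is the payload.
        size: None,
    }
}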

View File

@ -1,38 +1,141 @@
use crate::coding::{decode_string, encode_string, DecodeError, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, Params, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the subscriber to request all future objects for the given track.
///
/// Objects will use the provided ID instead of the full track name, to save bytes.
#[derive(Clone, Debug)]
pub struct Subscribe {
// An ID we choose so we can map to the track_name.
/// An ID we choose so we can map to the track_name.
// Proposal: https://github.com/moq-wg/moq-transport/issues/209
pub id: VarInt,
// The track namespace.
pub namespace: String,
/// The track namespace.
///
/// Must be None if `extensions.subscribe_split` is false.
pub namespace: Option<String>,
// The track name.
/// The track name.
pub name: String,
/// The start/end group/object.
pub start_group: SubscribeLocation,
pub start_object: SubscribeLocation,
pub end_group: SubscribeLocation,
pub end_object: SubscribeLocation,
/// Optional parameters
pub params: Params,
}
impl Subscribe {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
pub async fn decode<R: AsyncRead>(r: &mut R, ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let namespace = decode_string(r).await?;
let name = decode_string(r).await?;
Ok(Self { id, namespace, name })
}
let namespace = match ext.subscribe_split {
true => Some(String::decode(r).await?),
false => None,
};
let name = String::decode(r).await?;
let start_group = SubscribeLocation::decode(r).await?;
let start_object = SubscribeLocation::decode(r).await?;
let end_group = SubscribeLocation::decode(r).await?;
let end_object = SubscribeLocation::decode(r).await?;
// You can't have a start object without a start group.
if start_group == SubscribeLocation::None && start_object != SubscribeLocation::None {
return Err(DecodeError::InvalidSubscribeLocation);
}
impl Subscribe {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
// You can't have an end object without an end group.
if end_group == SubscribeLocation::None && end_object != SubscribeLocation::None {
return Err(DecodeError::InvalidSubscribeLocation);
}
// NOTE: There's some more location restrictions in the draft, but they're enforced at a higher level.
let params = Params::decode(r).await?;
Ok(Self {
id,
namespace,
name,
start_group,
start_object,
end_group,
end_object,
params,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
encode_string(&self.namespace, w).await?;
encode_string(&self.name, w).await?;
if self.namespace.is_some() != ext.subscribe_split {
panic!("namespace must be None if subscribe_split is false");
}
if ext.subscribe_split {
self.namespace.as_ref().unwrap().encode(w).await?;
}
self.name.encode(w).await?;
self.start_group.encode(w).await?;
self.start_object.encode(w).await?;
self.end_group.encode(w).await?;
self.end_object.encode(w).await?;
self.params.encode(w).await?;
Ok(())
}
}
/// Signal where the subscription should begin, relative to the current cache.
#[derive(Clone, Debug, PartialEq)]
pub enum SubscribeLocation {
None,
Absolute(VarInt),
Latest(VarInt),
Future(VarInt),
}
impl SubscribeLocation {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let kind = VarInt::decode(r).await?;
match kind.into_inner() {
0 => Ok(Self::None),
1 => Ok(Self::Absolute(VarInt::decode(r).await?)),
2 => Ok(Self::Latest(VarInt::decode(r).await?)),
3 => Ok(Self::Future(VarInt::decode(r).await?)),
_ => Err(DecodeError::InvalidSubscribeLocation),
}
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
match self {
Self::None => {
VarInt::from_u32(0).encode(w).await?;
}
Self::Absolute(val) => {
VarInt::from_u32(1).encode(w).await?;
val.encode(w).await?;
}
Self::Latest(val) => {
VarInt::from_u32(2).encode(w).await?;
val.encode(w).await?;
}
Self::Future(val) => {
VarInt::from_u32(3).encode(w).await?;
val.encode(w).await?;
}
}
Ok(())
}
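A minimal sketch of a Subscribe that starts at the latest group and never ends, matching what the subscriber half of this diff sends; it assumes it lives inside this crate, and the ID and track name are only examples:
fn example_subscribe(subscribe_split: bool) -> crate::message::Subscribe {
    use crate::message::{Subscribe, SubscribeLocation};
    use crate::VarInt;
    Subscribe {
        id: VarInt::from_u32(1),
        // Must be Some exactly when the subscribe_split extension is active.
        namespace: subscribe_split.then(|| "".to_string()),
        name: "example".to_string(),
        start_group: SubscribeLocation::Latest(VarInt::ZERO),
        start_object: SubscribeLocation::Absolute(VarInt::ZERO),
        end_group: SubscribeLocation::None,
        end_object: SubscribeLocation::None,
        params: Default::default(),
    }
}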

View File

@ -0,0 +1,36 @@
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup::Extensions;
/// Sent by the publisher to reject a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeError {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
// The ID for this subscription.
pub id: VarInt,
// An error code.
pub code: u32,
// An optional, human-readable reason.
pub reason: String,
}
impl SubscribeError {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let code = VarInt::decode(r).await?.try_into()?;
let reason = String::decode(r).await?;
Ok(Self { id, code, reason })
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
VarInt::from_u32(self.code).encode(w).await?;
self.reason.encode(w).await?;
Ok(())
}
}

View File

@ -0,0 +1,37 @@
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup::Extensions;
/// Sent by the publisher to cleanly terminate a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeFin {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
/// The ID for this subscription.
pub id: VarInt,
/// The final group/object sent on this subscription.
pub final_group: VarInt,
pub final_object: VarInt,
}
impl SubscribeFin {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let final_group = VarInt::decode(r).await?;
let final_object = VarInt::decode(r).await?;
Ok(Self {
id,
final_group,
final_object,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
self.final_group.encode(w).await?;
self.final_object.encode(w).await?;
Ok(())
}
}

View File

@ -1,26 +1,31 @@
use crate::coding::{DecodeError, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the publisher to accept a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeOk {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
// The ID for this track.
/// The ID for this track.
pub id: VarInt,
/// The subscription will expire in this many milliseconds.
pub expires: VarInt,
}
impl SubscribeOk {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
Ok(Self { id })
let expires = VarInt::decode(r).await?;
Ok(Self { id, expires })
}
}
impl SubscribeOk {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
self.expires.encode(w).await?;
Ok(())
}
}

View File

@ -1,35 +1,49 @@
use crate::coding::{decode_string, encode_string, DecodeError, EncodeError, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::setup::Extensions;
/// Sent by the publisher to reject a Subscribe.
/// Sent by the publisher to terminate a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeReset {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
// The ID for this subscription.
/// The ID for this subscription.
pub id: VarInt,
// An error code.
/// An error code.
pub code: u32,
// An optional, human-readable reason.
/// An optional, human-readable reason.
pub reason: String,
/// The final group/object sent on this subscription.
pub final_group: VarInt,
pub final_object: VarInt,
}
impl SubscribeReset {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
let code = VarInt::decode(r).await?.try_into()?;
let reason = decode_string(r).await?;
let reason = String::decode(r).await?;
let final_group = VarInt::decode(r).await?;
let final_object = VarInt::decode(r).await?;
Ok(Self { id, code, reason })
Ok(Self {
id,
code,
reason,
final_group,
final_object,
})
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
VarInt::from_u32(self.code).encode(w).await?;
encode_string(&self.reason, w).await?;
self.reason.encode(w).await?;
self.final_group.encode(w).await?;
self.final_object.encode(w).await?;
Ok(())
}

View File

@ -0,0 +1,25 @@
use crate::coding::{Decode, DecodeError, Encode, EncodeError};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the publisher to terminate an Announce.
#[derive(Clone, Debug)]
pub struct Unannounce {
// Echo back the namespace that was reset
pub namespace: String,
}
impl Unannounce {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let namespace = String::decode(r).await?;
Ok(Self { namespace })
}
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.namespace.encode(w).await?;
Ok(())
}
}

View File

@ -1,25 +1,26 @@
use crate::coding::{DecodeError, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
use crate::setup::Extensions;
/// Sent by the subscriber to terminate a Subscribe.
#[derive(Clone, Debug)]
pub struct SubscribeStop {
pub struct Unsubscribe {
// NOTE: No full track name because of this proposal: https://github.com/moq-wg/moq-transport/issues/209
// The ID for this subscription.
pub id: VarInt,
}
impl SubscribeStop {
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
impl Unsubscribe {
pub async fn decode<R: AsyncRead>(r: &mut R, _ext: &Extensions) -> Result<Self, DecodeError> {
let id = VarInt::decode(r).await?;
Ok(Self { id })
}
}
impl SubscribeStop {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
impl Unsubscribe {
pub async fn encode<W: AsyncWrite>(&self, w: &mut W, _ext: &Extensions) -> Result<(), EncodeError> {
self.id.encode(w).await?;
Ok(())
}

View File

@ -1,11 +0,0 @@
//! Allows a publisher to push updates, automatically caching and fanning it out to any subscribers.
//!
//! The naming scheme doesn't match the spec because it's vague and confusing.
//! The hierarchy is: [broadcast] -> [track] -> [segment] -> [Bytes](bytes::Bytes)
pub mod broadcast;
pub mod segment;
pub mod track;
pub(crate) mod watch;
pub(crate) use watch::*;

View File

@ -1,25 +1,21 @@
use super::{Publisher, Subscriber};
use crate::{model::broadcast, setup};
use webtransport_quinn::{RecvStream, SendStream, Session};
use anyhow::Context;
use super::{Control, Publisher, SessionError, Subscriber};
use crate::{cache::broadcast, setup};
use webtransport_quinn::Session;
/// An endpoint that connects to a URL to publish and/or consume live streams.
pub struct Client {}
impl Client {
/// Connect using an established WebTransport session, performing the MoQ handshake as a publisher.
pub async fn publisher(session: Session, source: broadcast::Subscriber) -> anyhow::Result<Publisher> {
pub async fn publisher(session: Session, source: broadcast::Subscriber) -> Result<Publisher, SessionError> {
let control = Self::send_setup(&session, setup::Role::Publisher).await?;
let publisher = Publisher::new(session, control, source);
Ok(publisher)
}
/// Connect using an established WebTransport session, performing the MoQ handshake as a subscriber.
pub async fn subscriber(session: Session, source: broadcast::Publisher) -> anyhow::Result<Subscriber> {
pub async fn subscriber(session: Session, source: broadcast::Publisher) -> Result<Subscriber, SessionError> {
let control = Self::send_setup(&session, setup::Role::Subscriber).await?;
let subscriber = Subscriber::new(session, control, source);
Ok(subscriber)
}
@ -31,31 +27,46 @@ impl Client {
}
*/
async fn send_setup(session: &Session, role: setup::Role) -> anyhow::Result<(SendStream, RecvStream)> {
let mut control = session.open_bi().await.context("failed to open bidi stream")?;
async fn send_setup(session: &Session, role: setup::Role) -> Result<Control, SessionError> {
let mut control = session.open_bi().await?;
let versions: setup::Versions = [setup::Version::DRAFT_01, setup::Version::KIXEL_01].into();
let client = setup::Client {
role,
versions: vec![setup::Version::KIXEL_00].into(),
versions: versions.clone(),
params: Default::default(),
// Offer all extensions
extensions: setup::Extensions {
object_expires: true,
subscriber_id: true,
subscribe_split: true,
},
};
client
.encode(&mut control.0)
.await
.context("failed to send SETUP CLIENT")?;
client.encode(&mut control.0).await?;
let server = setup::Server::decode(&mut control.1)
.await
.context("failed to read SETUP")?;
let mut server = setup::Server::decode(&mut control.1).await?;
if server.version != setup::Version::KIXEL_00 {
anyhow::bail!("unsupported version: {:?}", server.version);
match server.version {
setup::Version::DRAFT_01 => {
// We always require this extension
server.extensions.require_subscriber_id()?;
if server.role.is_publisher() {
// We only require object expires if we're a subscriber, so we don't cache objects indefinitely.
server.extensions.require_object_expires()?;
}
}
setup::Version::KIXEL_01 => {
// KIXEL_01 didn't support extensions; all were enabled.
server.extensions = client.extensions.clone()
}
_ => return Err(SessionError::Version(versions, [server.version].into())),
}
// Make sure the server replied with a compatible role.
if !client.role.is_compatible(server.role) {
anyhow::bail!("incompatible roles: client={:?} server={:?}", client.role, server.role);
}
let control = Control::new(control.0, control.1, server.extensions);
Ok(control)
}
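A minimal sketch of the client-side entry point under these changes, assuming it lives inside this crate with an already-established webtransport_quinn::Session and a cache::broadcast source created elsewhere:
async fn publish(
    session: webtransport_quinn::Session,
    source: crate::cache::broadcast::Subscriber,
) -> Result<(), crate::session::SessionError> {
    // Performs the SETUP handshake, negotiating version and extensions.
    let publisher = crate::session::Client::publisher(session, source).await?;
    // Serve subscriptions until the session or the source closes.
    publisher.run().await
}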

View File

@ -5,31 +5,41 @@ use std::{fmt, sync::Arc};
use tokio::sync::Mutex;
use webtransport_quinn::{RecvStream, SendStream};
use crate::{message::Message, Error};
use super::SessionError;
use crate::{message::Message, setup::Extensions};
#[derive(Debug, Clone)]
pub(crate) struct Control {
send: Arc<Mutex<SendStream>>,
recv: Arc<Mutex<RecvStream>>,
pub ext: Extensions,
}
impl Control {
pub fn new(send: SendStream, recv: RecvStream) -> Self {
pub fn new(send: SendStream, recv: RecvStream, ext: Extensions) -> Self {
Self {
send: Arc::new(Mutex::new(send)),
recv: Arc::new(Mutex::new(recv)),
ext,
}
}
pub async fn send<T: Into<Message> + fmt::Debug>(&self, msg: T) -> Result<(), Error> {
pub async fn send<T: Into<Message> + fmt::Debug>(&self, msg: T) -> Result<(), SessionError> {
let mut stream = self.send.lock().await;
log::info!("sending message: {:?}", msg);
msg.into().encode(&mut *stream).await.map_err(|_e| Error::Write)
msg.into()
.encode(&mut *stream, &self.ext)
.await
.map_err(|e| SessionError::Unknown(e.to_string()))?;
Ok(())
}
// It's likely a mistake to call this from two different tasks, but it's easier to just support it.
pub async fn recv(&self) -> Result<Message, Error> {
pub async fn recv(&self) -> Result<Message, SessionError> {
let mut stream = self.recv.lock().await;
Message::decode(&mut *stream).await.map_err(|_e| Error::Read)
let msg = Message::decode(&mut *stream, &self.ext)
.await
.map_err(|e| SessionError::Unknown(e.to_string()))?;
Ok(msg)
}
}

View File

@ -0,0 +1,101 @@
use crate::{cache, coding, setup, MoqError, VarInt};
#[derive(thiserror::Error, Debug)]
pub enum SessionError {
#[error("webtransport error: {0}")]
Session(#[from] webtransport_quinn::SessionError),
#[error("cache error: {0}")]
Cache(#[from] cache::CacheError),
#[error("encode error: {0}")]
Encode(#[from] coding::EncodeError),
#[error("decode error: {0}")]
Decode(#[from] coding::DecodeError),
#[error("unsupported versions: client={0:?} server={1:?}")]
Version(setup::Versions, setup::Versions),
#[error("incompatible roles: client={0:?} server={1:?}")]
RoleIncompatible(setup::Role, setup::Role),
/// An error occurred while reading from the QUIC stream.
#[error("failed to read from stream: {0}")]
Read(#[from] webtransport_quinn::ReadError),
/// An error occurred while writing to the QUIC stream.
#[error("failed to write to stream: {0}")]
Write(#[from] webtransport_quinn::WriteError),
/// The role negotiated in the handshake was violated. For example, a publisher sent a SUBSCRIBE, or a subscriber sent an OBJECT.
#[error("role violation: msg={0}")]
RoleViolation(VarInt),
/// Our enforced stream mapping was disrespected.
#[error("stream mapping conflict")]
StreamMapping,
/// The priority was invalid.
#[error("invalid priority: {0}")]
InvalidPriority(VarInt),
/// The size was invalid.
#[error("invalid size: {0}")]
InvalidSize(VarInt),
/// A required extension was not offered.
#[error("required extension not offered: {0:?}")]
RequiredExtension(VarInt),
/// An unclassified error because I'm lazy. TODO classify these errors
#[error("unknown error: {0}")]
Unknown(String),
}
impl MoqError for SessionError {
/// An integer code that is sent over the wire.
fn code(&self) -> u32 {
match self {
Self::Cache(err) => err.code(),
Self::RoleIncompatible(..) => 406,
Self::RoleViolation(..) => 405,
Self::StreamMapping => 409,
Self::Unknown(_) => 500,
Self::Write(_) => 501,
Self::Read(_) => 502,
Self::Session(_) => 503,
Self::Version(..) => 406,
Self::Encode(_) => 500,
Self::Decode(_) => 500,
Self::InvalidPriority(_) => 400,
Self::InvalidSize(_) => 400,
Self::RequiredExtension(_) => 426,
}
}
/// A reason that is sent over the wire.
fn reason(&self) -> String {
match self {
Self::Cache(err) => err.reason(),
Self::RoleViolation(kind) => format!("role violation for message type {:?}", kind),
Self::RoleIncompatible(client, server) => {
format!(
"role incompatible: client wanted {:?} but server wanted {:?}",
client, server
)
}
Self::Read(err) => format!("read error: {}", err),
Self::Write(err) => format!("write error: {}", err),
Self::Session(err) => format!("session error: {}", err),
Self::Unknown(err) => format!("unknown error: {}", err),
Self::Version(client, server) => format!("unsupported versions: client={:?} server={:?}", client, server),
Self::Encode(err) => format!("encode error: {}", err),
Self::Decode(err) => format!("decode error: {}", err),
Self::StreamMapping => "stream mapping conflict".to_owned(),
Self::InvalidPriority(priority) => format!("invalid priority: {}", priority),
Self::InvalidSize(size) => format!("invalid size: {}", size),
Self::RequiredExtension(id) => format!("required extension was missing: {:?}", id),
}
}
}

View File

@ -14,12 +14,14 @@
mod client;
mod control;
mod error;
mod publisher;
mod server;
mod subscriber;
pub use client::*;
pub(crate) use control::*;
pub use error::*;
pub use publisher::*;
pub use server::*;
pub use subscriber::*;

View File

@ -4,16 +4,16 @@ use std::{
};
use tokio::task::AbortHandle;
use webtransport_quinn::{RecvStream, SendStream, Session};
use webtransport_quinn::Session;
use crate::{
cache::{broadcast, segment, track, CacheError},
message,
message::Message,
model::{broadcast, segment, track},
Error, VarInt,
MoqError, VarInt,
};
use super::Control;
use super::{Control, SessionError};
/// Serves broadcasts over the network, automatically handling subscriptions and caching.
// TODO Clone specific fields when a task actually needs it.
@ -27,63 +27,80 @@ pub struct Publisher {
}
impl Publisher {
pub(crate) fn new(webtransport: Session, control: (SendStream, RecvStream), source: broadcast::Subscriber) -> Self {
let control = Control::new(control.0, control.1);
pub(crate) fn new(webtransport: Session, control: Control, source: broadcast::Subscriber) -> Self {
Self {
webtransport,
subscribes: Default::default(),
control,
subscribes: Default::default(),
source,
}
}
// TODO Serve a broadcast without sending an ANNOUNCE.
// fn serve(&mut self, broadcast: broadcast::Subscriber) -> Result<(), Error> {
// fn serve(&mut self, broadcast: broadcast::Subscriber) -> Result<(), SessionError> {
// TODO Wait until the next subscribe that doesn't route to an ANNOUNCE.
// pub async fn subscribed(&mut self) -> Result<track::Producer, Error> {
// pub async fn subscribed(&mut self) -> Result<track::Producer, SessionError> {
pub async fn run(mut self) -> Result<(), Error> {
pub async fn run(mut self) -> Result<(), SessionError> {
let res = self.run_inner().await;
// Terminate all active subscribes on error.
self.subscribes
.lock()
.unwrap()
.drain()
.for_each(|(_, abort)| abort.abort());
res
}
pub async fn run_inner(&mut self) -> Result<(), SessionError> {
loop {
tokio::select! {
_stream = self.webtransport.accept_uni() => {
return Err(Error::Role(VarInt::ZERO));
stream = self.webtransport.accept_uni() => {
stream?;
return Err(SessionError::RoleViolation(VarInt::ZERO));
}
// NOTE: this is not cancel safe, but it's fine since the other branch is a fatal error.
// NOTE: this is not cancel safe, but it's fine since the other branches are fatal.
msg = self.control.recv() => {
let msg = msg.map_err(|_x| Error::Read)?;
let msg = msg?;
log::info!("message received: {:?}", msg);
if let Err(err) = self.recv_message(&msg).await {
log::warn!("message error: {:?} {:?}", err, msg);
}
}
},
// No more broadcasts are available.
err = self.source.closed() => {
self.webtransport.close(err.code(), err.reason().as_bytes());
return Ok(());
},
}
}
}
async fn recv_message(&mut self, msg: &Message) -> Result<(), Error> {
async fn recv_message(&mut self, msg: &Message) -> Result<(), SessionError> {
match msg {
Message::AnnounceOk(msg) => self.recv_announce_ok(msg).await,
Message::AnnounceStop(msg) => self.recv_announce_stop(msg).await,
Message::AnnounceError(msg) => self.recv_announce_error(msg).await,
Message::Subscribe(msg) => self.recv_subscribe(msg).await,
Message::SubscribeStop(msg) => self.recv_subscribe_stop(msg).await,
_ => Err(Error::Role(msg.id())),
Message::Unsubscribe(msg) => self.recv_unsubscribe(msg).await,
_ => Err(SessionError::RoleViolation(msg.id())),
}
}
async fn recv_announce_ok(&mut self, _msg: &message::AnnounceOk) -> Result<(), Error> {
async fn recv_announce_ok(&mut self, _msg: &message::AnnounceOk) -> Result<(), SessionError> {
// We didn't send an announce.
Err(Error::NotFound)
Err(CacheError::NotFound.into())
}
async fn recv_announce_stop(&mut self, _msg: &message::AnnounceStop) -> Result<(), Error> {
async fn recv_announce_error(&mut self, _msg: &message::AnnounceError) -> Result<(), SessionError> {
// We didn't send an announce.
Err(Error::NotFound)
Err(CacheError::NotFound.into())
}
async fn recv_subscribe(&mut self, msg: &message::Subscribe) -> Result<(), Error> {
async fn recv_subscribe(&mut self, msg: &message::Subscribe) -> Result<(), SessionError> {
// Assume that the subscribe ID is unique for now.
let abort = match self.start_subscribe(msg.clone()) {
Ok(abort) => abort,
@ -92,27 +109,38 @@ impl Publisher {
// Insert the abort handle into the lookup table.
match self.subscribes.lock().unwrap().entry(msg.id) {
hash_map::Entry::Occupied(_) => return Err(Error::Duplicate), // TODO fatal, because we already started the task
hash_map::Entry::Occupied(_) => return Err(CacheError::Duplicate.into()), // TODO fatal, because we already started the task
hash_map::Entry::Vacant(entry) => entry.insert(abort),
};
self.control.send(message::SubscribeOk { id: msg.id }).await
self.control
.send(message::SubscribeOk {
id: msg.id,
expires: VarInt::ZERO,
})
.await
}
async fn reset_subscribe(&mut self, id: VarInt, err: Error) -> Result<(), Error> {
async fn reset_subscribe<E: MoqError>(&mut self, id: VarInt, err: E) -> Result<(), SessionError> {
let msg = message::SubscribeReset {
id,
code: err.code(),
reason: err.reason().to_string(),
reason: err.reason(),
// TODO properly populate these
// But first: https://github.com/moq-wg/moq-transport/issues/313
final_group: VarInt::ZERO,
final_object: VarInt::ZERO,
};
self.control.send(msg).await
}
fn start_subscribe(&mut self, msg: message::Subscribe) -> Result<AbortHandle, Error> {
fn start_subscribe(&mut self, msg: message::Subscribe) -> Result<AbortHandle, SessionError> {
// We currently don't use the namespace field in SUBSCRIBE
if !msg.namespace.is_empty() {
return Err(Error::NotFound);
// Make sure the namespace is empty if it's provided.
if msg.namespace.as_ref().map_or(false, |namespace| !namespace.is_empty()) {
return Err(CacheError::NotFound.into());
}
let mut track = self.source.get_track(&msg.name)?;
@ -125,11 +153,11 @@ impl Publisher {
let res = this.run_subscribe(msg.id, &mut track).await;
if let Err(err) = &res {
log::warn!("failed to serve track: name={} err={:?}", track.name, err);
log::warn!("failed to serve track: name={} err={:#?}", track.name, err);
}
// Make sure we send a reset at the end.
let err = res.err().unwrap_or(Error::Closed);
let err = res.err().unwrap_or(CacheError::Closed.into());
this.reset_subscribe(msg.id, err).await.ok();
// We're all done, so clean up the abort handle.
@ -139,7 +167,7 @@ impl Publisher {
Ok(handle.abort_handle())
}
async fn run_subscribe(&self, id: VarInt, track: &mut track::Subscriber) -> Result<(), Error> {
async fn run_subscribe(&self, id: VarInt, track: &mut track::Subscriber) -> Result<(), SessionError> {
// TODO add an Ok method to track::Publisher so we can send SUBSCRIBE_OK
while let Some(mut segment) = track.next_segment().await? {
@ -156,34 +184,51 @@ impl Publisher {
Ok(())
}
async fn run_segment(&self, id: VarInt, segment: &mut segment::Subscriber) -> Result<(), Error> {
async fn run_segment(&self, id: VarInt, segment: &mut segment::Subscriber) -> Result<(), SessionError> {
log::trace!("serving group: {:?}", segment);
let mut stream = self.webtransport.open_uni().await?;
// Convert the u32 to a i32, since the Quinn set_priority is signed.
let priority = (segment.priority as i64 - i32::MAX as i64) as i32;
stream.set_priority(priority).ok();
while let Some(mut fragment) = segment.next_fragment().await? {
let object = message::Object {
track: id,
sequence: segment.sequence,
// Properties of the segment
group: segment.sequence,
priority: segment.priority,
expires: segment.expires,
// Properties of the fragment
sequence: fragment.sequence,
size: fragment.size,
};
log::debug!("serving object: {:?}", object);
object
.encode(&mut stream, &self.control.ext)
.await
.map_err(|e| SessionError::Unknown(e.to_string()))?;
let mut stream = self.webtransport.open_uni().await.map_err(|_e| Error::Write)?;
stream.set_priority(object.priority).ok();
// TODO better handle the error.
object.encode(&mut stream).await.map_err(|_e| Error::Write)?;
while let Some(data) = segment.read_chunk().await? {
stream.write_chunk(data).await.map_err(|_e| Error::Write)?;
while let Some(chunk) = fragment.read_chunk().await? {
stream.write_all(&chunk).await?;
}
}
Ok(())
}
async fn recv_subscribe_stop(&mut self, msg: &message::SubscribeStop) -> Result<(), Error> {
let abort = self.subscribes.lock().unwrap().remove(&msg.id).ok_or(Error::NotFound)?;
async fn recv_unsubscribe(&mut self, msg: &message::Unsubscribe) -> Result<(), SessionError> {
let abort = self
.subscribes
.lock()
.unwrap()
.remove(&msg.id)
.ok_or(CacheError::NotFound)?;
abort.abort();
self.reset_subscribe(msg.id, Error::Stop).await
self.reset_subscribe(msg.id, CacheError::Stop).await
}
}

View File

@ -1,10 +1,8 @@
use super::{Publisher, Subscriber};
use crate::{model::broadcast, setup};
use super::{Control, Publisher, SessionError, Subscriber};
use crate::{cache::broadcast, setup};
use webtransport_quinn::{RecvStream, SendStream, Session};
use anyhow::Context;
/// An endpoint that accepts connections, publishing and/or consuming live streams.
pub struct Server {}
@ -12,18 +10,35 @@ impl Server {
/// Accept an established Webtransport session, performing the MoQ handshake.
///
/// This returns a [Request] half-way through the handshake that allows the application to accept or deny the session.
pub async fn accept(session: Session) -> anyhow::Result<Request> {
let mut control = session.accept_bi().await.context("failed to accept bidi stream")?;
pub async fn accept(session: Session) -> Result<Request, SessionError> {
let mut control = session.accept_bi().await?;
let client = setup::Client::decode(&mut control.1)
.await
.context("failed to read CLIENT SETUP")?;
let mut client = setup::Client::decode(&mut control.1).await?;
client
.versions
.iter()
.find(|version| **version == setup::Version::KIXEL_00)
.context("no supported versions")?;
if client.versions.contains(&setup::Version::DRAFT_01) {
// We always require subscriber ID.
client.extensions.require_subscriber_id()?;
// We require OBJECT_EXPIRES for publishers only.
if client.role.is_publisher() {
client.extensions.require_object_expires()?;
}
// We don't require SUBSCRIBE_SPLIT since it's easy enough to support, but it's clearly an oversight.
// client.extensions.require(&Extension::SUBSCRIBE_SPLIT)?;
} else if client.versions.contains(&setup::Version::KIXEL_01) {
// Extensions didn't exist in KIXEL_01, so we set them manually.
client.extensions = setup::Extensions {
object_expires: true,
subscriber_id: true,
subscribe_split: true,
};
} else {
return Err(SessionError::Version(
client.versions,
[setup::Version::DRAFT_01, setup::Version::KIXEL_01].into(),
));
}
Ok(Request {
session,
@ -42,18 +57,22 @@ pub struct Request {
impl Request {
/// Accept the session as a publisher, using the provided broadcast to serve subscriptions.
pub async fn publisher(mut self, source: broadcast::Subscriber) -> anyhow::Result<Publisher> {
self.send_setup(setup::Role::Publisher).await?;
pub async fn publisher(mut self, source: broadcast::Subscriber) -> Result<Publisher, SessionError> {
let setup = self.setup(setup::Role::Publisher)?;
setup.encode(&mut self.control.0).await?;
let publisher = Publisher::new(self.session, self.control, source);
let control = Control::new(self.control.0, self.control.1, setup.extensions);
let publisher = Publisher::new(self.session, control, source);
Ok(publisher)
}
/// Accept the session as a subscriber only.
pub async fn subscriber(mut self, source: broadcast::Publisher) -> anyhow::Result<Subscriber> {
self.send_setup(setup::Role::Subscriber).await?;
pub async fn subscriber(mut self, source: broadcast::Publisher) -> Result<Subscriber, SessionError> {
let setup = self.setup(setup::Role::Subscriber)?;
setup.encode(&mut self.control.0).await?;
let subscriber = Subscriber::new(self.session, self.control, source);
let control = Control::new(self.control.0, self.control.1, setup.extensions);
let subscriber = Subscriber::new(self.session, control, source);
Ok(subscriber)
}
@ -64,28 +83,21 @@ impl Request {
}
*/
async fn send_setup(&mut self, role: setup::Role) -> anyhow::Result<()> {
fn setup(&mut self, role: setup::Role) -> Result<setup::Server, SessionError> {
let server = setup::Server {
role,
version: setup::Version::KIXEL_00,
version: setup::Version::DRAFT_01,
extensions: self.client.extensions.clone(),
params: Default::default(),
};
// We need to make sure we support the opposite of the client's role.
// ex. if the client is a publisher, we must be a subscriber ONLY.
if !self.client.role.is_compatible(server.role) {
anyhow::bail!(
"incompatible roles: client={:?} server={:?}",
self.client.role,
server.role
);
return Err(SessionError::RoleIncompatible(self.client.role, server.role));
}
server
.encode(&mut self.control.0)
.await
.context("failed to send setup server")?;
Ok(())
Ok(server)
}
/// Reject the request, closing the Webtransport session.
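A minimal sketch of the matching server-side accept flow inside this crate, again assuming the webtransport_quinn::Session and the broadcast source are established elsewhere:
async fn accept_publisher(
    session: webtransport_quinn::Session,
    source: crate::cache::broadcast::Subscriber,
) -> Result<(), crate::session::SessionError> {
    // Reads the CLIENT SETUP and validates versions/extensions before replying.
    let request = crate::session::Server::accept(session).await?;
    // Reply as a publisher and serve the provided broadcast.
    let publisher = request.publisher(source).await?;
    publisher.run().await
}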

View File

@ -1,4 +1,4 @@
use webtransport_quinn::{RecvStream, SendStream, Session};
use webtransport_quinn::{RecvStream, Session};
use std::{
collections::HashMap,
@ -6,14 +6,14 @@ use std::{
};
use crate::{
cache::{broadcast, fragment, segment, track, CacheError},
coding::DecodeError,
message,
message::Message,
model::{broadcast, segment, track},
Error, VarInt,
session::{Control, SessionError},
VarInt,
};
use super::Control;
/// Receives broadcasts over the network, automatically handling subscriptions and caching.
// TODO Clone specific fields when a task actually needs it.
#[derive(Clone, Debug)]
@ -35,9 +35,7 @@ pub struct Subscriber {
}
impl Subscriber {
pub(crate) fn new(webtransport: Session, control: (SendStream, RecvStream), source: broadcast::Publisher) -> Self {
let control = Control::new(control.0, control.1);
pub(crate) fn new(webtransport: Session, control: Control, source: broadcast::Publisher) -> Self {
Self {
webtransport,
subscribes: Default::default(),
@ -47,7 +45,7 @@ impl Subscriber {
}
}
pub async fn run(self) -> Result<(), Error> {
pub async fn run(self) -> Result<(), SessionError> {
let inbound = self.clone().run_inbound();
let streams = self.clone().run_streams();
let source = self.clone().run_source();
@ -60,79 +58,130 @@ impl Subscriber {
}
}
async fn run_inbound(mut self) -> Result<(), Error> {
async fn run_inbound(mut self) -> Result<(), SessionError> {
loop {
let msg = self.control.recv().await.map_err(|_e| Error::Read)?;
let msg = self.control.recv().await?;
log::info!("message received: {:?}", msg);
if let Err(err) = self.recv_message(&msg).await {
if let Err(err) = self.recv_message(&msg) {
log::warn!("message error: {:?} {:?}", err, msg);
}
}
}
async fn recv_message(&mut self, msg: &Message) -> Result<(), Error> {
fn recv_message(&mut self, msg: &Message) -> Result<(), SessionError> {
match msg {
Message::Announce(_) => Ok(()), // don't care
Message::AnnounceReset(_) => Ok(()), // also don't care
Message::SubscribeOk(_) => Ok(()), // guess what, don't care
Message::SubscribeReset(msg) => self.recv_subscribe_reset(msg).await,
Message::Unannounce(_) => Ok(()), // also don't care
Message::SubscribeOk(_msg) => Ok(()), // don't care
Message::SubscribeReset(msg) => self.recv_subscribe_error(msg.id, CacheError::Reset(msg.code)),
Message::SubscribeFin(msg) => self.recv_subscribe_error(msg.id, CacheError::Closed),
Message::SubscribeError(msg) => self.recv_subscribe_error(msg.id, CacheError::Reset(msg.code)),
Message::GoAway(_msg) => unimplemented!("GOAWAY"),
_ => Err(Error::Role(msg.id())),
_ => Err(SessionError::RoleViolation(msg.id())),
}
}
async fn recv_subscribe_reset(&mut self, msg: &message::SubscribeReset) -> Result<(), Error> {
let err = Error::Reset(msg.code);
fn recv_subscribe_error(&mut self, id: VarInt, err: CacheError) -> Result<(), SessionError> {
let mut subscribes = self.subscribes.lock().unwrap();
let subscribe = subscribes.remove(&msg.id).ok_or(Error::NotFound)?;
let subscribe = subscribes.remove(&id).ok_or(CacheError::NotFound)?;
subscribe.close(err)?;
Ok(())
}
async fn run_streams(self) -> Result<(), Error> {
async fn run_streams(self) -> Result<(), SessionError> {
loop {
// Accept all incoming unidirectional streams.
let stream = self.webtransport.accept_uni().await.map_err(|_| Error::Read)?;
let stream = self.webtransport.accept_uni().await?;
let this = self.clone();
tokio::spawn(async move {
if let Err(err) = this.run_stream(stream).await {
log::warn!("failed to receive stream: err={:?}", err);
log::warn!("failed to receive stream: err={:#?}", err);
}
});
}
}
async fn run_stream(self, mut stream: RecvStream) -> Result<(), Error> {
async fn run_stream(self, mut stream: RecvStream) -> Result<(), SessionError> {
// Decode the object on the data stream.
let object = message::Object::decode(&mut stream).await.map_err(|_| Error::Read)?;
let mut object = message::Object::decode(&mut stream, &self.control.ext)
.await
.map_err(|e| SessionError::Unknown(e.to_string()))?;
log::debug!("received object: {:?}", object);
log::trace!("received object: {:?}", object);
// A new scope is needed because the async compiler is dumb
let mut publisher = {
let mut segment = {
let mut subscribes = self.subscribes.lock().unwrap();
let track = subscribes.get_mut(&object.track).ok_or(Error::NotFound)?;
let track = subscribes.get_mut(&object.track).ok_or(CacheError::NotFound)?;
track.create_segment(segment::Info {
sequence: object.sequence,
sequence: object.group,
priority: object.priority,
expires: object.expires,
})?
};
while let Some(data) = stream.read_chunk(usize::MAX, true).await.map_err(|_| Error::Read)? {
publisher.write_chunk(data.bytes)?;
// Create the first fragment
let mut fragment = segment.create_fragment(fragment::Info {
sequence: object.sequence,
size: object.size,
})?;
let mut remain = object.size.map(usize::from);
loop {
if let Some(0) = remain {
// Decode the next object from the stream.
let next = match message::Object::decode(&mut stream, &self.control.ext).await {
Ok(next) => next,
// No more objects
Err(DecodeError::Final) => break,
// Unknown error
Err(err) => return Err(err.into()),
};
// NOTE: This is a custom restriction; not part of the moq-transport draft.
// We require every OBJECT to contain the same priority since prioritization is done per-stream.
// We also require every OBJECT to contain the same group so we know when the group ends, and can detect gaps.
if next.priority != object.priority || next.group != object.group {
return Err(SessionError::StreamMapping);
}
// Create a new object.
fragment = segment.create_fragment(fragment::Info {
sequence: next.sequence,
size: next.size,
})?;
object = next;
remain = object.size.map(usize::from);
}
match stream.read_chunk(remain.unwrap_or(usize::MAX), true).await? {
// Unbounded object has ended
None if remain.is_none() => break,
// Bounded object ended early, oops.
None => return Err(DecodeError::UnexpectedEnd.into()),
// NOTE: This does not make a copy!
// Bytes are immutable and ref counted.
Some(data) => fragment.write_chunk(data.bytes)?,
}
}
Ok(())
}
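The loop above leans on the per-stream invariant spelled out in the comments: every OBJECT after the first must carry the same group and priority, a bounded object is followed by exactly `size` payload bytes, and only the last object may omit its size and run to the end of the stream. A minimal sketch of that rule, using simplified stand-in types rather than the crate's `message::Object`:

    // Sketch only: simplified stand-ins, not the crate's real types.
    struct ObjectHeader {
        group: u64,
        priority: u32,
        size: Option<usize>, // None = payload runs until the stream is finished
    }

    // Checks the per-stream rules run_stream enforces: every follow-up header shares
    // the first header's group and priority, every bounded object carries exactly
    // `size` payload bytes, and only the final object may be unbounded.
    fn valid_stream(objects: &[(ObjectHeader, usize)]) -> bool {
        let Some((first, _)) = objects.first() else {
            return false;
        };
        objects.iter().enumerate().all(|(i, (header, payload))| {
            let same_mapping = header.group == first.group && header.priority == first.priority;
            let size_ok = match header.size {
                Some(size) => *payload == size,
                None => i == objects.len() - 1, // an unbounded object must be the last one
            };
            same_mapping && size_ok
        })
    }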
async fn run_source(mut self) -> Result<(), Error> {
while let Some(track) = self.source.next_track().await? {
async fn run_source(mut self) -> Result<(), SessionError> {
loop {
// NOTE: This returns Closed when the source is closed.
let track = self.source.next_track().await?;
let name = track.name.clone();
let id = VarInt::from_u32(self.next.fetch_add(1, atomic::Ordering::SeqCst));
@ -140,13 +189,19 @@ impl Subscriber {
let msg = message::Subscribe {
id,
namespace: "".to_string(),
namespace: self.control.ext.subscribe_split.then(|| "".to_string()),
name,
// TODO correctly support these
start_group: message::SubscribeLocation::Latest(VarInt::ZERO),
start_object: message::SubscribeLocation::Absolute(VarInt::ZERO),
end_group: message::SubscribeLocation::None,
end_object: message::SubscribeLocation::None,
params: Default::default(),
};
self.control.send(msg).await?;
}
Ok(())
}
}


@ -1,6 +1,6 @@
use super::{Role, Versions};
use super::{Extensions, Role, Versions};
use crate::{
coding::{DecodeError, EncodeError},
coding::{Decode, DecodeError, Encode, EncodeError, Params},
VarInt,
};
@ -15,29 +15,57 @@ pub struct Client {
pub versions: Versions,
/// Indicate if the client is a publisher, a subscriber, or both.
// Proposal: moq-wg/moq-transport#151
pub role: Role,
/// A list of known/offered extensions.
pub extensions: Extensions,
/// Unknown parameters.
pub params: Params,
}
impl Client {
/// Decode a client setup message.
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let typ = VarInt::decode(r).await?;
if typ.into_inner() != 1 {
return Err(DecodeError::InvalidType(typ));
if typ.into_inner() != 0x40 {
return Err(DecodeError::InvalidMessage(typ));
}
let versions = Versions::decode(r).await?;
let role = Role::decode(r).await?;
let mut params = Params::decode(r).await?;
Ok(Self { versions, role })
let role = params
.get::<Role>(VarInt::from_u32(0))
.await?
.ok_or(DecodeError::MissingParameter)?;
// Make sure the PATH parameter isn't used
// TODO: This assumes WebTransport support only
if params.has(VarInt::from_u32(1)) {
return Err(DecodeError::InvalidParameter);
}
let extensions = Extensions::load(&mut params).await?;
Ok(Self {
versions,
role,
extensions,
params,
})
}
/// Encode a server setup message.
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
VarInt::from_u32(1).encode(w).await?;
VarInt::from_u32(0x40).encode(w).await?;
self.versions.encode(w).await?;
self.role.encode(w).await?;
let mut params = self.params.clone();
params.set(VarInt::from_u32(0), self.role).await?;
self.extensions.store(&mut params).await?;
params.encode(w).await?;
Ok(())
}
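For orientation, here is a rough sketch of building the new CLIENT_SETUP; the `moq_transport::setup` paths and the `Default` impls are assumptions read off the surrounding diff, not a verified public API:

    use moq_transport::setup::{Client, Extensions, Role, Version};

    // Sketch: offer both supported drafts; the server picks one in its SERVER_SETUP reply.
    fn client_setup() -> Client {
        Client {
            versions: [Version::DRAFT_01, Version::KIXEL_01].into(),
            role: Role::Both,
            extensions: Extensions::default(),
            params: Default::default(), // assumes Params implements Default, as its use elsewhere suggests
        }
    }

Encoding this value writes the 0x40 message type, the offered version list, and a parameter map carrying the role under key 0 plus one empty parameter per enabled extension.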


@ -0,0 +1,84 @@
use tokio::io::{AsyncRead, AsyncWrite};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, Params};
use crate::session::SessionError;
use crate::VarInt;
use paste::paste;
/// This is a custom extension scheme to allow/require draft PRs.
///
/// By convention, the extension number is the PR number + 0xe0000.
macro_rules! extensions {
{$($name:ident = $val:expr,)*} => {
#[derive(Clone, Default, Debug)]
pub struct Extensions {
$(
pub $name: bool,
)*
}
impl Extensions {
pub async fn load(params: &mut Params) -> Result<Self, DecodeError> {
let mut extensions = Self::default();
$(
if let Some(_) = params.get::<ExtensionExists>(VarInt::from_u32($val)).await? {
extensions.$name = true
}
)*
Ok(extensions)
}
pub async fn store(&self, params: &mut Params) -> Result<(), EncodeError> {
$(
if self.$name {
params.set(VarInt::from_u32($val), ExtensionExists{}).await?;
}
)*
Ok(())
}
paste! {
$(
pub fn [<require_ $name>](&self) -> Result<(), SessionError> {
match self.$name {
true => Ok(()),
false => Err(SessionError::RequiredExtension(VarInt::from_u32($val))),
}
}
)*
}
}
}
}
struct ExtensionExists;
#[async_trait::async_trait]
impl Decode for ExtensionExists {
async fn decode<R: AsyncRead>(_r: &mut R) -> Result<Self, DecodeError> {
Ok(ExtensionExists {})
}
}
#[async_trait::async_trait]
impl Encode for ExtensionExists {
async fn encode<W: AsyncWrite>(&self, _w: &mut W) -> Result<(), EncodeError> {
Ok(())
}
}
extensions! {
// required for publishers: OBJECT contains expires VarInt in seconds: https://github.com/moq-wg/moq-transport/issues/249
// TODO write up a PR
object_expires = 0xe00f9,
// required: SUBSCRIBE chooses track ID: https://github.com/moq-wg/moq-transport/pull/258
subscriber_id = 0xe0102,
// optional: SUBSCRIBE contains namespace/name tuple: https://github.com/moq-wg/moq-transport/pull/277
subscribe_split = 0xe0115,
}
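A quick sketch of how the generated struct is meant to be used; the field and `require_*` names come straight from the macro above, while the `moq_transport` module paths are assumptions:

    use moq_transport::session::SessionError;
    use moq_transport::setup::Extensions;

    // Sketch: offer the extensions this endpoint understands, then assert the ones it needs.
    fn check_extensions() -> Result<(), SessionError> {
        let mut extensions = Extensions::default();
        extensions.object_expires = true; // issue #249: 0xe0000 + 249 = 0xe00f9
        extensions.subscriber_id = true;  // PR #258:    0xe0000 + 258 = 0xe0102

        // A publisher can refuse to continue unless the peer offered object_expires.
        extensions.require_object_expires()?;
        Ok(())
    }

During SETUP, `store` writes one empty parameter per enabled flag and `load` sets a flag whenever the matching parameter ID is present; flags the peer did not send simply stay false.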


@ -5,11 +5,13 @@
//! Both sides negotiate the [Version] and [Role].
mod client;
mod extension;
mod role;
mod server;
mod version;
pub use client::*;
pub use extension::*;
pub use role::*;
pub use server::*;
pub use version::*;


@ -1,6 +1,6 @@
use crate::coding::{AsyncRead, AsyncWrite};
use crate::coding::{DecodeError, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
/// Indicates the endpoint is a publisher, subscriber, or both.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
@ -36,9 +36,9 @@ impl Role {
impl From<Role> for VarInt {
fn from(r: Role) -> Self {
VarInt::from_u32(match r {
Role::Publisher => 0x0,
Role::Subscriber => 0x1,
Role::Both => 0x2,
Role::Publisher => 0x1,
Role::Subscriber => 0x2,
Role::Both => 0x3,
})
}
}
@ -48,23 +48,27 @@ impl TryFrom<VarInt> for Role {
fn try_from(v: VarInt) -> Result<Self, Self::Error> {
match v.into_inner() {
0x0 => Ok(Self::Publisher),
0x1 => Ok(Self::Subscriber),
0x2 => Ok(Self::Both),
_ => Err(DecodeError::InvalidType(v)),
0x1 => Ok(Self::Publisher),
0x2 => Ok(Self::Subscriber),
0x3 => Ok(Self::Both),
_ => Err(DecodeError::InvalidRole(v)),
}
}
}
impl Role {
#[async_trait::async_trait]
impl Decode for Role {
/// Decode the role.
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let v = VarInt::decode(r).await?;
v.try_into()
}
}
#[async_trait::async_trait]
impl Encode for Role {
/// Encode the role.
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
VarInt::from(*self).encode(w).await
}
}
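The role code points all shift up by one, so old and new endpoints will disagree on the wire. A tiny sanity sketch of the new mapping; the asserts assume `VarInt` implements `PartialEq` and `Debug`, as its use elsewhere in the crate suggests:

    use moq_transport::setup::Role;
    use moq_transport::VarInt;

    // Sketch: the new role code points.
    fn role_code_points() {
        assert_eq!(VarInt::from(Role::Publisher), VarInt::from_u32(0x1));
        assert_eq!(VarInt::from(Role::Subscriber), VarInt::from_u32(0x2));
        assert_eq!(VarInt::from(Role::Both), VarInt::from_u32(0x3));

        // 0x0 is no longer assigned, so it now decodes to DecodeError::InvalidRole.
        assert!(Role::try_from(VarInt::from_u32(0x0)).is_err());
    }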


@ -1,6 +1,6 @@
use super::{Role, Version};
use super::{Extensions, Role, Version};
use crate::{
coding::{DecodeError, EncodeError},
coding::{Decode, DecodeError, Encode, EncodeError, Params},
VarInt,
};
@ -17,27 +17,54 @@ pub struct Server {
/// Indicate if the server is a publisher, a subscriber, or both.
// Proposal: moq-wg/moq-transport#151
pub role: Role,
/// Custom extensions.
pub extensions: Extensions,
/// Unknown parameters.
pub params: Params,
}
impl Server {
/// Decode the server setup.
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let typ = VarInt::decode(r).await?;
if typ.into_inner() != 2 {
return Err(DecodeError::InvalidType(typ));
if typ.into_inner() != 0x41 {
return Err(DecodeError::InvalidMessage(typ));
}
let version = Version::decode(r).await?;
let role = Role::decode(r).await?;
let mut params = Params::decode(r).await?;
Ok(Self { version, role })
let role = params
.get::<Role>(VarInt::from_u32(0))
.await?
.ok_or(DecodeError::MissingParameter)?;
// Make sure the PATH parameter isn't used
if params.has(VarInt::from_u32(1)) {
return Err(DecodeError::InvalidParameter);
}
let extensions = Extensions::load(&mut params).await?;
Ok(Self {
version,
role,
extensions,
params,
})
}
/// Encode the server setup.
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
VarInt::from_u32(2).encode(w).await?;
VarInt::from_u32(0x41).encode(w).await?;
self.version.encode(w).await?;
self.role.encode(w).await?;
let mut params = self.params.clone();
params.set(VarInt::from_u32(0), self.role).await?;
self.extensions.store(&mut params).await?;
params.encode(w).await?;
Ok(())
}


@ -1,4 +1,4 @@
use crate::coding::{DecodeError, EncodeError, VarInt};
use crate::coding::{Decode, DecodeError, Encode, EncodeError, VarInt};
use crate::coding::{AsyncRead, AsyncWrite};
@ -6,12 +6,15 @@ use std::ops::Deref;
/// A version number negotiated during the setup.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Version(VarInt);
pub struct Version(pub VarInt);
impl Version {
/// <https://www.ietf.org/archive/id/draft-ietf-moq-transport-00.html>
/// https://www.ietf.org/archive/id/draft-ietf-moq-transport-00.html
pub const DRAFT_00: Version = Version(VarInt::from_u32(0xff00));
/// https://www.ietf.org/archive/id/draft-ietf-moq-transport-01.html
pub const DRAFT_01: Version = Version(VarInt::from_u32(0xff01));
/// Fork of draft-ietf-moq-transport-00.
///
/// Rough list of differences:
@ -56,6 +59,18 @@ impl Version {
/// # GROUP
/// - GROUP concept was removed, replaced with OBJECT as a QUIC stream.
pub const KIXEL_00: Version = Version(VarInt::from_u32(0xbad00));
/// Fork of draft-ietf-moq-transport-01.
///
/// Most of the KIXEL_00 changes made it into the draft, or were reverted.
/// This was only used for a short time until extensions were created.
///
/// - SUBSCRIBE contains a separate track namespace and track name field (accidental revert). [#277](https://github.com/moq-wg/moq-transport/pull/277)
/// - SUBSCRIBE contains the `track_id` instead of SUBSCRIBE_OK. [#145](https://github.com/moq-wg/moq-transport/issues/145)
/// - SUBSCRIBE_* reference the `track_id` instead of the `track_full_name`. [#145](https://github.com/moq-wg/moq-transport/issues/145)
/// - OBJECT `priority` is still a VarInt, but the max value is a u32 (implementation reasons)
/// - OBJECT messages within the same `group` MUST be on the same QUIC stream.
pub const KIXEL_01: Version = Version(VarInt::from_u32(0xbad01));
}
impl From<VarInt> for Version {
@ -88,9 +103,10 @@ impl Version {
#[derive(Debug, Clone, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct Versions(Vec<Version>);
impl Versions {
#[async_trait::async_trait]
impl Decode for Versions {
/// Decode the version list.
pub async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
async fn decode<R: AsyncRead>(r: &mut R) -> Result<Self, DecodeError> {
let count = VarInt::decode(r).await?.into_inner();
let mut vs = Vec::new();
@ -101,9 +117,12 @@ impl Versions {
Ok(Self(vs))
}
}
#[async_trait::async_trait]
impl Encode for Versions {
/// Encode the version list.
pub async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
async fn encode<W: AsyncWrite>(&self, w: &mut W) -> Result<(), EncodeError> {
let size: VarInt = self.0.len().try_into()?;
size.encode(w).await?;
@ -128,3 +147,9 @@ impl From<Vec<Version>> for Versions {
Self(vs)
}
}
impl<const N: usize> From<[Version; N]> for Versions {
fn from(vs: [Version; N]) -> Self {
Self(vs.to_vec())
}
}
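The new array conversion at the end is mostly an ergonomic helper for building the SETUP offer; a one-line usage sketch (paths assumed as above):

    use moq_transport::setup::{Version, Versions};

    // Sketch: build the offered version list straight from an array literal.
    fn offered() -> Versions {
        [Version::DRAFT_01, Version::KIXEL_01].into()
    }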