Merge pull request #15 from kixelated/quiche

Switch from Go to Rust
kixelated 2023-05-22 15:25:06 -07:00 committed by GitHub
commit 5410a3767f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
81 changed files with 3227 additions and 2280 deletions


@ -1,64 +1,39 @@
# Warp
-Segmented live media delivery protocol utilizing QUIC streams. See the [Warp draft](https://datatracker.ietf.org/doc/draft-lcurley-warp/).
+Live media delivery protocol utilizing QUIC streams. See the [Warp draft](https://datatracker.ietf.org/doc/draft-lcurley-warp/).
-Warp works by delivering each audio and video segment as a separate QUIC stream. These streams are assigned a priority such that old video will arrive last and can be dropped. This avoids buffering in many cases, offering the viewer a potentially better experience.
+Warp works by delivering media over independent QUIC streams. These streams are assigned a priority such that old video will arrive last and can be dropped. This avoids buffering in many cases, offering the viewer a potentially better experience.
-# Limitations
+This demo requires WebTransport and WebCodecs, which currently (May 2023) only work on Chrome.
-## Browser Support
-This demo currently only works on Chrome for two reasons:
-1. WebTransport support.
+# Development
-2. [Media underflow behavior](https://github.com/whatwg/html/issues/6359).
+## Easy Mode
+Requires Docker *only*.
-The ability to skip video abuses the fact that Chrome can play audio without video for up to 3 seconds (hardcoded!) when using MSE. It is possible to use something like WebCodecs instead... but that's still Chrome only at the moment.
+```
+docker-compose up --build
+```
-## Streaming
+Then open [https://localhost:4444/](https://localhost:4444) in a browser. You'll have to click past the TLS error, but that's the price you pay for being lazy. Follow the more in-depth instructions if you want a better development experience.
-This demo works by reading pre-encoded media and sleeping based on media timestamps. Obviously this is not a live stream; you should plug in your own encoder or source.
-The media is encoded on disk as a LL-DASH playlist. There's a crude parser and I haven't used DASH before so don't expect it to work with arbitrary inputs.
-## QUIC Implementation
-This demo uses a fork of [quic-go](https://github.com/lucas-clemente/quic-go). There are two critical features missing upstream:
-1. ~~[WebTransport](https://github.com/lucas-clemente/quic-go/issues/3191)~~
-2. [Prioritization](https://github.com/lucas-clemente/quic-go/pull/3442)
-## Congestion Control
-This demo uses a single rendition. A production implementation will want to:
-1. Change the rendition bitrate to match the estimated bitrate.
-2. Switch renditions at segment boundaries based on the estimated bitrate.
-3. or both!
-Also, quic-go ships with the default New Reno congestion control. Something like [BBRv2](https://github.com/lucas-clemente/quic-go/issues/341) will work much better for live video as it limits RTT growth.
-# Setup
## Requirements
-* Go
+* Rust
* ffmpeg
* openssl
-* Chrome Canary
+* Chrome
## Media
This demo simulates a live stream by reading a file from disk and sleeping based on media timestamps. Obviously you should hook this up to a real live stream to do anything useful.
-Download your favorite media file:
+Download your favorite media file and convert it to fragmented MP4:
```
wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4 -O media/source.mp4
+./media/fragment
```
-Use ffmpeg to create a LL-DASH playlist. This creates a segment every 2s and MP4 fragment every 10ms.
+## Certificates
-```
-./media/generate
-```
-You can increase the `frag_duration` (microseconds) to slightly reduce the file size in exchange for higher latency.
-## TLS
Unfortunately, QUIC mandates TLS and makes local development difficult.
+If you have a valid certificate you can use it instead of self-signing.
-If you have a valid certificate you can use it instead of self-signing. The go binaries take a `-tls-cert` and `-tls-key` argument. Skip the remaining steps in this section and use your hostname instead.
Otherwise, we use [mkcert](https://github.com/FiloSottile/mkcert) to install a self-signed CA:
```
@ -72,20 +47,18 @@ The Warp server supports WebTransport, pushing media over streams once a connect
```
cd server
-go run main.go
+cargo run
```
-This can be accessed via WebTransport on `https://localhost:4443` by default.
+This listens for WebTransport connections (not HTTP) on `https://localhost:4443` by default.
-## Web Player
+## Web
-The web assets need to be hosted with a HTTPS server. If you're using a self-signed certificate, you may need to ignore the security warning in Chrome (Advanced -> proceed to localhost).
+The web assets need to be hosted with an HTTPS server.
```
-cd player
+cd web
yarn install
yarn serve
```
These can be accessed on `https://localhost:4444` by default.
+If you use a custom domain for the Warp server, make sure to override the server URL with the `url` query string parameter, e.g. `https://localhost:4444/?url=https://warp.demo`.
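For context on the prioritization the README describes, here is a minimal sketch of the idea in Rust. It assumes the quiche fork used here still exposes the stock `stream_priority` call (lower urgency is delivered first) and that the server tags each segment with an increasing sequence number; it is an illustration, not the actual server code from this PR.

```rust
// Sketch only: newer segments get a lower urgency value, so when bandwidth
// drops the oldest (least useful) video streams are starved and can be dropped.
fn prioritize_segment(
    conn: &mut quiche::Connection,
    stream_id: u64,
    sequence: u64,
) -> quiche::Result<()> {
    let urgency = (255 - sequence.min(255)) as u8;
    // `incremental = true` lets streams of equal urgency share bandwidth.
    conn.stream_priority(stream_id, urgency, true)
}
```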

cert/.dockerignore Normal file (3 changes)

@ -0,0 +1,3 @@
*.crt
*.key
*.hex

cert/.gitignore vendored (1 change)

@ -1,2 +1,3 @@
*.crt
*.key
+*.hex

cert/Dockerfile Normal file (22 changes)

@ -0,0 +1,22 @@
# Use ubuntu because it's ez
FROM ubuntu:latest
WORKDIR /build
# Use openssl and golang to generate certificates
RUN apt-get update && \
apt-get install -y ca-certificates openssl golang xxd
# Download the go modules
COPY go.mod go.sum ./
RUN go mod download
# Copy over the remaining files.
COPY . .
# Save the certificates to a volume
VOLUME /cert
# TODO support an output directory
CMD ./generate && cp localhost.* /cert


@ -17,4 +17,4 @@ go run filippo.io/mkcert -ecdsa -install
go run filippo.io/mkcert -ecdsa -days 10 -cert-file "$CRT" -key-file "$KEY" localhost 127.0.0.1 ::1
# Compute the sha256 fingerprint of the certificate for WebTransport
-openssl x509 -in "$CRT" -outform der | openssl dgst -sha256 > ../player/fingerprint.hex
+openssl x509 -in "$CRT" -outform der | openssl dgst -sha256 -binary | xxd -p -c 256 > localhost.hex
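The hex digest written to `localhost.hex` is what the web player can pass to WebTransport's `serverCertificateHashes` option (see the `connect()` code removed from `transport.ts` further down) so Chrome accepts the self-signed certificate. As a rough illustration, the same fingerprint could be computed in Rust with the `ring` crate already listed in the server's Cargo.toml; `cert_fingerprint_hex` is a hypothetical helper, not part of this change.

```rust
use ring::digest;

// Sketch: `cert_der` is assumed to be the DER-encoded certificate,
// i.e. the output of `-outform der` above.
fn cert_fingerprint_hex(cert_der: &[u8]) -> String {
    let hash = digest::digest(&digest::SHA256, cert_der);
    hash.as_ref().iter().map(|b| format!("{b:02x}")).collect()
}
```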

docker-compose.yml Normal file (45 changes)

@ -0,0 +1,45 @@
version: '3'

services:
  # Generate certificates only valid for 14 days.
  cert:
    build: ./cert
    volumes:
      - cert:/cert

  # Generate a fragmented MP4 file for testing.
  media:
    build: ./media
    volumes:
      - media:/media

  # Serve the web code once we have certificates.
  web:
    build: ./web
    ports:
      - "4444:4444"
    volumes:
      - cert:/cert
    depends_on:
      cert:
        condition: service_completed_successfully

  # Run the server once we have certificates and media.
  server:
    build: ./server
    environment:
      - RUST_LOG=debug
    ports:
      - "4443:4443/udp"
    volumes:
      - cert:/cert
      - media:/media
    depends_on:
      cert:
        condition: service_completed_successfully
      media:
        condition: service_completed_successfully

volumes:
  cert:
  media:

media/.dockerignore Normal file (1 change)

@ -0,0 +1 @@
fragmented.mp4

media/Dockerfile Normal file (25 changes)

@ -0,0 +1,25 @@
# Create a build image
FROM ubuntu:latest
# Create the working directory.
WORKDIR /build
# Install necessary packages
RUN apt-get update && \
apt-get install -y \
ca-certificates \
wget \
ffmpeg
# Download a file from the internet, in this case my boy big buck bunny
RUN wget http://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4 -O source.mp4
# Copy and run a script to create a fragmented mp4 (more overhead, easier to split)
COPY fragment .
# Create a media volume
VOLUME /media
# Fragment the media
# TODO support an output directory
CMD ./fragment && cp fragmented.mp4 /media

media/fragment Executable file (12 changes)

@ -0,0 +1,12 @@
#!/bin/bash
cd "$(dirname "$0")"
# empty_moov: Uses moof fragments instead of one giant moov/mdat pair.
# frag_every_frame: Creates a moof for each frame.
# separate_moof: Splits audio and video into separate moof boxes.
# omit_tfhd_offset: Removes absolute byte offsets so we can fragment.
ffmpeg -i source.mp4 -y \
-c copy \
-movflags empty_moov+frag_every_frame+separate_moof+omit_tfhd_offset \
fragmented.mp4 2>&1


@ -1,18 +0,0 @@
#!/bin/bash
ffmpeg -i source.mp4 \
-f dash -ldash 1 \
-c:v libx264 \
-preset veryfast -tune zerolatency \
-c:a aac \
-b:a 128k -ac 2 -ar 44100 \
-map v:0 -s:v:0 1280x720 -b:v:0 3M \
-map v:0 -s:v:1 854x480 -b:v:1 1.1M \
-map v:0 -s:v:2 640x360 -b:v:2 365k \
-map 0:a \
-force_key_frames "expr:gte(t,n_forced*2)" \
-sc_threshold 0 \
-streaming 1 \
-use_timeline 0 \
-seg_duration 2 -frag_duration 0.01 \
-frag_type duration \
playlist.mpd


@ -1,121 +0,0 @@
import * as Message from "./message";
import * as MP4 from "../mp4"
import * as Stream from "../stream"
import * as Util from "../util"
import Renderer from "./renderer"
export default class Decoder {
// Store the init message for each track
tracks: Map<string, Util.Deferred<Message.Init>>;
decoder: AudioDecoder; // TODO one per track
sync: Message.Sync;
constructor(config: Message.Config, renderer: Renderer) {
this.tracks = new Map();
this.decoder = new AudioDecoder({
output: renderer.emit.bind(renderer),
error: console.warn,
});
}
init(msg: Message.Init) {
let defer = this.tracks.get(msg.track);
if (!defer) {
defer = new Util.Deferred()
this.tracks.set(msg.track, defer)
}
if (msg.info.audioTracks.length != 1 || msg.info.videoTracks.length != 0) {
throw new Error("Expected a single audio track")
}
const track = msg.info.audioTracks[0]
const audio = track.audio
defer.resolve(msg)
}
async decode(msg: Message.Segment) {
let track = this.tracks.get(msg.track);
if (!track) {
track = new Util.Deferred()
this.tracks.set(msg.track, track)
}
// Wait for the init segment to be fully received and parsed
const init = await track.promise;
const audio = init.info.audioTracks[0]
if (this.decoder.state == "unconfigured") {
this.decoder.configure({
codec: audio.codec,
numberOfChannels: audio.audio.channel_count,
sampleRate: audio.audio.sample_rate,
})
}
const input = MP4.New();
input.onSamples = (id: number, user: any, samples: MP4.Sample[]) => {
for (let sample of samples) {
// Convert to microseconds
const timestamp = 1000 * 1000 * sample.dts / sample.timescale
const duration = 1000 * 1000 * sample.duration / sample.timescale
// This assumes that timescale == sample rate
this.decoder.decode(new EncodedAudioChunk({
type: sample.is_sync ? "key" : "delta",
data: sample.data,
duration: duration,
timestamp: timestamp,
}))
}
}
input.onReady = (info: any) => {
input.setExtractionOptions(info.tracks[0].id, {}, { nbSamples: 1 });
input.start();
}
// MP4box requires us to reparse the init segment unfortunately
let offset = 0;
for (let raw of init.raw) {
raw.fileStart = offset
input.appendBuffer(raw)
}
const stream = new Stream.Reader(msg.reader, msg.buffer)
/* TODO I'm not actually sure why this code doesn't work; something trips up the MP4 parser
while (1) {
const data = await stream.read()
if (!data) break
input.appendBuffer(data)
input.flush()
}
*/
// One day I'll figure it out; until then read one top-level atom at a time
while (!await stream.done()) {
const raw = await stream.peek(4)
const size = new DataView(raw.buffer, raw.byteOffset, raw.byteLength).getUint32(0)
const atom = await stream.bytes(size)
// Make a copy of the atom because mp4box only accepts an ArrayBuffer unfortunately
let box = new Uint8Array(atom.byteLength);
box.set(atom)
// and for some reason we need to modify the underlying ArrayBuffer with offset
let buffer = box.buffer as MP4.ArrayBuffer
buffer.fileStart = offset
// Parse the data
offset = input.appendBuffer(buffer)
input.flush()
}
}
}


@ -1,30 +0,0 @@
import * as MP4 from "../mp4"
import { RingInit } from "./ring"
export interface Config {
sampleRate: number;
ring: RingInit;
}
export interface Init {
track: string;
info: MP4.Info;
raw: MP4.ArrayBuffer[];
}
export interface Segment {
track: string;
buffer: Uint8Array; // unread buffered data
reader: ReadableStream; // unread unbuffered data
}
// Audio tells video when the given timestamp should be rendered.
export interface Sync {
origin: number;
clock: DOMHighResTimeStamp;
timestamp: number;
}
export interface Play {
timestamp?: number;
}


@ -1,85 +0,0 @@
import * as Message from "./message"
import { Ring } from "./ring"
export default class Renderer {
ring: Ring;
queue: Array<AudioData>;
sync?: DOMHighResTimeStamp
running: number;
constructor(config: Message.Config) {
this.ring = new Ring(config.ring)
this.queue = [];
this.running = 0
}
emit(frame: AudioData) {
if (!this.sync) {
// Save the frame as the sync point
this.sync = 1000 * performance.now() - frame.timestamp
}
// Insert the frame into the queue sorted by timestamp.
if (this.queue.length > 0 && this.queue[this.queue.length-1].timestamp <= frame.timestamp) {
// Fast path because we normally append to the end.
this.queue.push(frame)
} else {
// Do a full binary search
let low = 0
let high = this.queue.length;
while (low < high) {
var mid = (low + high) >>> 1;
if (this.queue[mid].timestamp < frame.timestamp) low = mid + 1;
else high = mid;
}
this.queue.splice(low, 0, frame)
}
if (!this.running) {
// Wait for the next animation frame
this.running = self.requestAnimationFrame(this.render.bind(this))
}
}
render() {
// Determine the target timestamp.
const target = 1000 * performance.now() - this.sync!
// Check if we should skip some frames
while (this.queue.length) {
const next = this.queue[0]
if (next.timestamp >= target) {
break
}
console.warn("dropping audio")
this.queue.shift()
next.close()
}
// Push as many as we can to the ring buffer.
while (this.queue.length) {
let frame = this.queue[0]
let ok = this.ring.write(frame)
if (!ok) {
break
}
frame.close()
this.queue.shift()
}
if (this.queue.length) {
this.running = self.requestAnimationFrame(this.render.bind(this))
} else {
this.running = 0
}
}
play(play: Message.Play) {
this.ring.reset()
}
}


@ -1,146 +0,0 @@
// Ring buffer with audio samples.
enum STATE {
READ_INDEX = 0, // Index of the current read position (mod capacity)
WRITE_INDEX, // Index of the current write position (mod capacity)
LENGTH // Clever way of saving the total number of enums values.
}
export class Ring {
state: Int32Array;
channels: Float32Array[];
capacity: number;
constructor(init: RingInit) {
this.state = new Int32Array(init.state)
this.channels = []
for (let channel of init.channels) {
this.channels.push(new Float32Array(channel))
}
this.capacity = init.capacity
}
// Add the samples for single audio frame
write(frame: AudioData): boolean {
let count = frame.numberOfFrames;
let readIndex = Atomics.load(this.state, STATE.READ_INDEX)
let writeIndex = Atomics.load(this.state, STATE.WRITE_INDEX)
let writeIndexNew = writeIndex + count;
// There's not enough space in the ring buffer
if (writeIndexNew - readIndex > this.capacity) {
return false
}
let startIndex = writeIndex % this.capacity;
let endIndex = writeIndexNew % this.capacity;
// Loop over each channel
for (let i = 0; i < this.channels.length; i += 1) {
const channel = this.channels[i]
if (startIndex < endIndex) {
// One continuous range to copy.
const full = channel.subarray(startIndex, endIndex)
frame.copyTo(full, {
planeIndex: i,
frameCount: count,
})
//audio seems to be breaking whenever endIndex is 0
//this works, without "chopiness"
} else if (startIndex >= endIndex && endIndex != 0) {
const first = channel.subarray(startIndex)
const second = channel.subarray(0, endIndex)
frame.copyTo(first, {
planeIndex: i,
frameCount: first.length,
})
//console.log("frame offset", first.length , "frame count", second.length) to test
frame.copyTo(second, {
planeIndex: i,
frameOffset: first.length,
frameCount: second.length,
})
}
}
Atomics.store(this.state, STATE.WRITE_INDEX, writeIndexNew)
return true
}
read(dst: Float32Array[]) {
let readIndex = Atomics.load(this.state, STATE.READ_INDEX)
let writeIndex = Atomics.load(this.state, STATE.WRITE_INDEX)
if (readIndex >= writeIndex) {
// nothing to read
return
}
let readIndexNew = readIndex + dst[0].length
if (readIndexNew > writeIndex) {
// Partial read
readIndexNew = writeIndex
}
let startIndex = readIndex % this.capacity;
let endIndex = readIndexNew % this.capacity;
// Loop over each channel
for (let i = 0; i < dst.length; i += 1) {
if (i >= this.channels.length) {
// ignore excess channels
}
const input = this.channels[i]
const output = dst[i]
if (startIndex < endIndex) {
const full = input.subarray(startIndex, endIndex)
output.set(full)
} else {
const first = input.subarray(startIndex)
const second = input.subarray(0, endIndex)
output.set(first)
output.set(second, first.length)
}
}
Atomics.store(this.state, STATE.READ_INDEX, readIndexNew)
}
// TODO not thread safe
clear() {
const writeIndex = Atomics.load(this.state, STATE.WRITE_INDEX)
Atomics.store(this.state, STATE.READ_INDEX, writeIndex)
}
}
// No prototype to make this easier to send via postMessage
export class RingInit {
state: SharedArrayBuffer;
channels: SharedArrayBuffer[];
capacity: number;
constructor(channels: number, capacity: number) {
// Store the current state in a separate ring buffer.
this.state = new SharedArrayBuffer(STATE.LENGTH * Int32Array.BYTES_PER_ELEMENT)
// Create a buffer for each audio channel
this.channels = []
for (let i = 0; i < channels; i += 1) {
const buffer = new SharedArrayBuffer(capacity * Float32Array.BYTES_PER_ELEMENT)
this.channels.push(buffer)
}
this.capacity = capacity
}
}


@ -1,26 +0,0 @@
import Decoder from "./decoder"
import Renderer from "./renderer"
import * as Message from "./message"
let decoder: Decoder
let renderer: Renderer;
self.addEventListener('message', (e: MessageEvent) => {
if (e.data.config) {
renderer = new Renderer(e.data.config)
decoder = new Decoder(e.data.config, renderer)
}
if (e.data.init) {
decoder.init(e.data.init)
}
if (e.data.segment) {
decoder.decode(e.data.segment)
}
if (e.data.play) {
renderer.play(e.data.play)
}
})


@ -1,45 +0,0 @@
import Audio from "../audio"
import Transport from "../transport"
import Video from "../video"
export interface PlayerInit {
url: string;
fingerprint?: WebTransportHash; // the certificate fingerprint, temporarily needed for local development
canvas: HTMLCanvasElement;
}
export default class Player {
audio: Audio;
video: Video;
transport: Transport;
constructor(props: PlayerInit) {
this.audio = new Audio()
this.video = new Video({
canvas: props.canvas.transferControlToOffscreen(),
})
this.transport = new Transport({
url: props.url,
fingerprint: props.fingerprint,
audio: this.audio,
video: this.video,
})
}
async close() {
this.transport.close()
}
play() {
this.audio.play({})
//this.video.play()
}
onMessage(msg: any) {
if (msg.sync) {
msg.sync
}
}
}


@ -1,168 +0,0 @@
import * as Message from "./message"
import * as Stream from "../stream"
import * as MP4 from "../mp4"
import Audio from "../audio"
import Video from "../video"
export interface TransportInit {
url: string;
fingerprint?: WebTransportHash; // the certificate fingerprint, temporarily needed for local development
audio: Audio;
video: Video;
}
export default class Transport {
quic: Promise<WebTransport>;
api: Promise<WritableStream>;
tracks: Map<string, MP4.InitParser>
audio: Audio;
video: Video;
constructor(props: TransportInit) {
this.tracks = new Map();
this.audio = props.audio;
this.video = props.video;
this.quic = this.connect(props)
// Create a unidirectional stream for all of our messages
this.api = this.quic.then((q) => {
return q.createUnidirectionalStream()
})
// async functions
this.receiveStreams()
}
async close() {
(await this.quic).close()
}
// Helper function to make creating a promise easier
private async connect(props: TransportInit): Promise<WebTransport> {
let options: WebTransportOptions = {};
if (props.fingerprint) {
options.serverCertificateHashes = [ props.fingerprint ]
}
const quic = new WebTransport(props.url, options)
await quic.ready
return quic
}
async sendMessage(msg: any) {
const payload = JSON.stringify(msg)
const size = payload.length + 8
const stream = await this.api
const writer = new Stream.Writer(stream)
await writer.uint32(size)
await writer.string("warp")
await writer.string(payload)
writer.release()
}
async receiveStreams() {
const q = await this.quic
const streams = q.incomingUnidirectionalStreams.getReader()
while (true) {
const result = await streams.read()
if (result.done) break
const stream = result.value
this.handleStream(stream) // don't await
}
}
async handleStream(stream: ReadableStream) {
let r = new Stream.Reader(stream)
while (!await r.done()) {
const size = await r.uint32();
const typ = new TextDecoder('utf-8').decode(await r.bytes(4));
if (typ != "warp") throw "expected warp atom"
if (size < 8) throw "atom too small"
const payload = new TextDecoder('utf-8').decode(await r.bytes(size - 8));
const msg = JSON.parse(payload)
if (msg.init) {
return this.handleInit(r, msg.init as Message.Init)
} else if (msg.segment) {
return this.handleSegment(r, msg.segment as Message.Segment)
}
}
}
async handleInit(stream: Stream.Reader, msg: Message.Init) {
let track = this.tracks.get(msg.id);
if (!track) {
track = new MP4.InitParser()
this.tracks.set(msg.id, track)
}
while (1) {
const data = await stream.read()
if (!data) break
track.push(data)
}
const info = await track.info
if (info.audioTracks.length + info.videoTracks.length != 1) {
throw new Error("expected a single track")
}
if (info.audioTracks.length) {
this.audio.init({
track: msg.id,
info: info,
raw: track.raw,
})
} else if (info.videoTracks.length) {
this.video.init({
track: msg.id,
info: info,
raw: track.raw,
})
} else {
throw new Error("init is neither audio nor video")
}
}
async handleSegment(stream: Stream.Reader, msg: Message.Segment) {
let track = this.tracks.get(msg.init);
if (!track) {
track = new MP4.InitParser()
this.tracks.set(msg.init, track)
}
// Wait until we learn if this is an audio or video track
const info = await track.info
if (info.audioTracks.length) {
this.audio.segment({
track: msg.init,
buffer: stream.buffer,
reader: stream.reader,
})
} else if (info.videoTracks.length) {
this.video.segment({
track: msg.init,
buffer: stream.buffer,
reader: stream.reader,
})
} else {
throw new Error("segment is neither audio nor video")
}
}
}
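The control-message framing this (now removed) Transport class parses is a tiny MP4-style atom: a 4-byte big-endian size, the 4-byte type `warp`, then `size - 8` bytes of JSON. Below is a hedged sketch of a matching encoder on the Rust side, using the serde and serde_json dependencies from the new Cargo.toml; `encode_warp_atom` is a hypothetical helper for illustration, not code from this PR.

```rust
use serde::Serialize;

// Sketch: serialize `msg` as JSON and wrap it in a "warp" atom.
fn encode_warp_atom<T: Serialize>(msg: &T) -> anyhow::Result<Vec<u8>> {
    let payload = serde_json::to_vec(msg)?;
    let size = (payload.len() + 8) as u32;

    let mut atom = Vec::with_capacity(size as usize);
    atom.extend_from_slice(&size.to_be_bytes()); // 4-byte big-endian size, header included
    atom.extend_from_slice(b"warp");             // 4-byte atom type
    atom.extend_from_slice(&payload);            // JSON payload
    Ok(atom)
}
```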


@ -1,13 +0,0 @@
export interface Init {
id: string
}
export interface Segment {
init: string // id of the init segment
timestamp: number // presentation timestamp in milliseconds of the first sample
// TODO track would be nice
}
export interface Debug {
max_bitrate: number
}


@ -1,127 +0,0 @@
import * as Message from "./message";
import * as MP4 from "../mp4"
import * as Stream from "../stream"
import * as Util from "../util"
import Renderer from "./renderer"
export default class Decoder {
// Store the init message for each track
tracks: Map<string, Util.Deferred<Message.Init>>
renderer: Renderer;
constructor(renderer: Renderer) {
this.tracks = new Map();
this.renderer = renderer;
}
async init(msg: Message.Init) {
let track = this.tracks.get(msg.track);
if (!track) {
track = new Util.Deferred()
this.tracks.set(msg.track, track)
}
if (msg.info.videoTracks.length != 1 || msg.info.audioTracks.length != 0) {
throw new Error("Expected a single video track")
}
track.resolve(msg)
}
async decode(msg: Message.Segment) {
let track = this.tracks.get(msg.track);
if (!track) {
track = new Util.Deferred()
this.tracks.set(msg.track, track)
}
// Wait for the init segment to be fully received and parsed
const init = await track.promise;
const info = init.info;
const video = info.videoTracks[0]
const decoder = new VideoDecoder({
output: (frame: VideoFrame) => {
this.renderer.emit(frame)
},
error: (err: Error) => {
console.warn(err)
}
});
const input = MP4.New();
input.onSamples = (id: number, user: any, samples: MP4.Sample[]) => {
for (let sample of samples) {
const timestamp = 1000 * sample.dts / sample.timescale // milliseconds
if (sample.is_sync) {
// Configure the decoder using the AVC box for H.264
const avcc = sample.description.avcC;
const description = new MP4.Stream(new Uint8Array(avcc.size), 0, false)
avcc.write(description)
decoder.configure({
codec: video.codec,
codedHeight: video.track_height,
codedWidth: video.track_width,
description: description.buffer?.slice(8),
// optimizeForLatency: true
})
}
decoder.decode(new EncodedVideoChunk({
data: sample.data,
duration: sample.duration,
timestamp: timestamp,
type: sample.is_sync ? "key" : "delta",
}))
}
}
input.onReady = (info: any) => {
input.setExtractionOptions(info.tracks[0].id, {}, { nbSamples: 1 });
input.start();
}
// MP4box requires us to reparse the init segment unfortunately
let offset = 0;
for (let raw of init.raw) {
raw.fileStart = offset
input.appendBuffer(raw)
}
const stream = new Stream.Reader(msg.reader, msg.buffer)
/* TODO I'm not actually sure why this code doesn't work; something trips up the MP4 parser
while (1) {
const data = await stream.read()
if (!data) break
input.appendBuffer(data)
input.flush()
}
*/
// One day I'll figure it out; until then read one top-level atom at a time
while (!await stream.done()) {
const raw = await stream.peek(4)
const size = new DataView(raw.buffer, raw.byteOffset, raw.byteLength).getUint32(0)
const atom = await stream.bytes(size)
// Make a copy of the atom because mp4box only accepts an ArrayBuffer unfortunately
let box = new Uint8Array(atom.byteLength);
box.set(atom)
// and for some reason we need to modify the underlying ArrayBuffer with offset
let buffer = box.buffer as MP4.ArrayBuffer
buffer.fileStart = offset
// Parse the data
offset = input.appendBuffer(buffer)
input.flush()
}
}
}


@ -1,27 +0,0 @@
import * as Message from "./message"
// Wrapper around the WebWorker API
export default class Video {
worker: Worker;
constructor(config: Message.Config) {
const url = new URL('worker.ts', import.meta.url)
this.worker = new Worker(url, {
type: "module",
name: "video",
})
this.worker.postMessage({ config }, [ config.canvas ])
}
init(init: Message.Init) {
this.worker.postMessage({ init }) // note: we copy the raw init bytes each time
}
segment(segment: Message.Segment) {
this.worker.postMessage({ segment }, [ segment.buffer.buffer, segment.reader ])
}
play() {
// TODO
}
}


@ -1,17 +0,0 @@
import * as MP4 from "../mp4"
export interface Config {
canvas: OffscreenCanvas;
}
export interface Init {
track: string;
info: MP4.Info;
raw: MP4.ArrayBuffer[];
}
export interface Segment {
track: string;
buffer: Uint8Array; // unread buffered data
reader: ReadableStream; // unread unbuffered data
}

server/.dockerignore Normal file (1 change)

@ -0,0 +1 @@
target

server/.gitignore vendored (2 changes)

@ -1 +1 @@
-logs/
+target

server/.vscode/settings.json vendored Normal file (3 changes)

@ -0,0 +1,3 @@
{
"rust-analyzer.showUnlinkedFileNotification": false
}

server/Cargo.lock generated Normal file (1055 changes; file diff suppressed because it is too large)

server/Cargo.toml Normal file (18 changes)

@ -0,0 +1,18 @@
[package]
name = "warp"
version = "0.1.0"
edition = "2021"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
quiche = { git = "https://github.com/kixelated/quiche.git", branch = "master", features = [ "qlog" ] } # WebTransport fork
clap = { version = "4.0", features = [ "derive" ] }
log = { version = "0.4", features = ["std"] }
mio = { version = "0.8", features = ["net", "os-poll"] }
env_logger = "0.9.3"
ring = "0.16"
anyhow = "1.0.70"
mp4 = "0.13.0"
serde = "1.0.160"
serde_json = "1.0"

server/Dockerfile Normal file (42 changes)

@ -0,0 +1,42 @@
# Use the official Rust image as the base image
FROM rust:latest as build
# Quiche requires cmake (to build BoringSSL)
RUN apt-get update && \
apt-get install -y cmake
# Set the build directory
WORKDIR /warp
# Create an empty project
RUN cargo init --bin
# Copy the Cargo.toml and Cargo.lock files to the container
COPY Cargo.toml Cargo.lock ./
# Build the empty project so we download/cache dependencies
RUN cargo build --release
# Copy the entire project to the container
COPY . .
# Build the project
RUN cargo build --release
# Make a new image to run the binary
FROM ubuntu:latest
# Use a volume to access certificates
VOLUME /cert
# Use another volume to access the media
VOLUME /media
# Expose port 4443 for the server
EXPOSE 4443/udp
# Copy the built binary
COPY --from=build /warp/target/release/warp /bin
# Set the startup command to run the binary
CMD warp --cert /cert/localhost.crt --key /cert/localhost.key --media /media/fragmented.mp4


@ -1,30 +0,0 @@
module github.com/kixelated/warp/server
go 1.18
require (
github.com/abema/go-mp4 v0.7.2
github.com/kixelated/invoker v1.0.0
github.com/kixelated/quic-go v1.31.0
github.com/kixelated/webtransport-go v1.4.1
github.com/zencoder/go-dash/v3 v3.0.2
)
require (
github.com/francoispqt/gojay v1.2.13 // indirect
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 // indirect
github.com/golang/mock v1.6.0 // indirect
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 // indirect
github.com/google/uuid v1.1.2 // indirect
github.com/marten-seemann/qpack v0.3.0 // indirect
github.com/marten-seemann/qtls-go1-18 v0.1.3 // indirect
github.com/marten-seemann/qtls-go1-19 v0.1.1 // indirect
github.com/onsi/ginkgo/v2 v2.2.0 // indirect
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 // indirect
golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e // indirect
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 // indirect
golang.org/x/net v0.0.0-20220722155237-a158d28d115b // indirect
golang.org/x/sys v0.1.1-0.20221102194838-fc697a31fa06 // indirect
golang.org/x/text v0.3.7 // indirect
golang.org/x/tools v0.1.12 // indirect
)


@ -1,264 +0,0 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.31.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.37.0/go.mod h1:TS1dMSSfndXH133OKGwekG838Om/cQT0BUHV3HcBgoo=
dmitri.shuralyov.com/app/changes v0.0.0-20180602232624-0a106ad413e3/go.mod h1:Yl+fi1br7+Rr3LqpNJf1/uxUdtRUV+Tnj0o93V2B9MU=
dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBrvjyP0v+ecvNYvCpyZgu5/xkfAUhi6wJj28eUfSU=
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/abema/go-mp4 v0.7.2 h1:ugTC8gfEmjyaDKpXs3vi2QzgJbDu9B8m6UMMIpbYbGg=
github.com/abema/go-mp4 v0.7.2/go.mod h1:vPl9t5ZK7K0x68jh12/+ECWBCXoWuIDtNgPtU2f04ws=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
github.com/buger/jsonparser v0.0.0-20181115193947-bf1c66bbce23/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
github.com/chzyer/logex v1.1.10/go.mod h1:+Ywpsq7O8HXn0nuIou7OrIPyXbp3wmkHB+jjWRnGsAI=
github.com/chzyer/readline v0.0.0-20180603132655-2972be24d48e/go.mod h1:nSuG5e5PlCu98SY8svDHJxuZscDgtXS6KTTbou5AhLI=
github.com/chzyer/test v0.0.0-20180213035817-a1ea475d72b1/go.mod h1:Q3SI9o4m/ZMnBNeIyt5eFwwo7qiLfzFZmjNmxjkiQlU=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/creack/pty v1.1.7/go.mod h1:lj5s0c3V2DBrqTV7llrYr5NG6My20zk30Fl46Y7DoTY=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/francoispqt/gojay v1.2.13 h1:d2m3sFjloqoIUQU3TsHBgj6qg/BVGlTBeHDUmyJnXKk=
github.com/francoispqt/gojay v1.2.13/go.mod h1:ehT5mTG4ua4581f1++1WLG0vPdaA9HaiDsoyrBGkyDY=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 h1:p104kn46Q8WdvHunIJ9dAyjPVtrBPhSr3KT2yUst43I=
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0/go.mod h1:fyg7847qk6SyHyPtNmDHnmrv/HOrqktSC+C9fM+CJOE=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.6.0 h1:ErTB+efbowRARo13NNdxyJji2egdxLGQhRaY+DUumQc=
github.com/golang/mock v1.6.0/go.mod h1:p6yTPP+5HYm5mzsMV8JkE6ZKdX+/wYM6Hr+LicevLPs=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.5.2 h1:ROPKBNFfQgOUMifHyP+KYbvpjbdoFNs+aK7DXlji0Tw=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.5.8 h1:e6P7q2lk1O+qJJb4BtCQXlK8vWEO8V1ZeuEdJNOqZyg=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38 h1:yAJXTCF9TqKcTiHJAE8dj7HMvPfh66eeA2JYW7eFpSE=
github.com/google/pprof v0.0.0-20210407192527-94a9f03dee38/go.mod h1:kpwsk12EmLew5upagYY7GY0pfYCcupk39gWOCRROcvE=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2 h1:EVhdT+1Kseyi1/pUmXKaFxYsDNy9RQYkMWRH68J/W7Y=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go/v2 v2.0.3/go.mod h1:LLvjysVCY1JZeum8Z6l8qUty8fiNwE08qbEPm1M08qg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/ianlancetaylor/demangle v0.0.0-20200824232613-28f6c0f3b639/go.mod h1:aSSvb/t6k1mPoxDqO4vJh6VOCGPwU4O0C2/Eqndh1Sc=
github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0B/fFc00Y+Rasa88328GlI/XbtyysCtTHZS8h7IrBU=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/kisielk/errcheck v1.4.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/kixelated/invoker v1.0.0 h1:0wYlvK39yQPbkwIFy+YN41AhF89WOtGyWqV2pZB39xw=
github.com/kixelated/invoker v1.0.0/go.mod h1:RjG3iqm/sKwZjOpcW4SGq+l+4DJCDR/yUtc70VjCRB8=
github.com/kixelated/quic-go v1.31.0 h1:p2vq3Otvtmz+0EP23vjumnO/HU4Q/DFxNF6xNryVfmA=
github.com/kixelated/quic-go v1.31.0/go.mod h1:AO7pURnb8HXHmdalp5e09UxQfsuwseEhl0NLmwiSOFY=
github.com/kixelated/webtransport-go v1.4.1 h1:ZtY3P7hVe1wK5fAt71b+HHnNISFDcQ913v+bvaNATxA=
github.com/kixelated/webtransport-go v1.4.1/go.mod h1:6RV5pTXF7oP53T83bosSDsLdSdw31j5cfpMDqsO4D5k=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/lunixbochs/vtclean v1.0.0/go.mod h1:pHhQNgMf3btfWnGBVipUOjRYhoOsdGqdm/+2c2E2WMI=
github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/marten-seemann/qpack v0.3.0 h1:UiWstOgT8+znlkDPOg2+3rIuYXJ2CnGDkGUXN6ki6hE=
github.com/marten-seemann/qpack v0.3.0/go.mod h1:cGfKPBiP4a9EQdxCwEwI/GEeWAsjSekBvx/X8mh58+g=
github.com/marten-seemann/qtls-go1-18 v0.1.3 h1:R4H2Ks8P6pAtUagjFty2p7BVHn3XiwDAl7TTQf5h7TI=
github.com/marten-seemann/qtls-go1-18 v0.1.3/go.mod h1:mJttiymBAByA49mhlNZZGrH5u1uXYZJ+RW28Py7f4m4=
github.com/marten-seemann/qtls-go1-19 v0.1.1 h1:mnbxeq3oEyQxQXwI4ReCgW9DPoPR94sNlqWoDZnjRIE=
github.com/marten-seemann/qtls-go1-19 v0.1.1/go.mod h1:5HTDWtVudo/WFsHKRNuOhWlbdjrfs5JHrYb0wIJqGpI=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/microcosm-cc/bluemonday v1.0.1/go.mod h1:hsXNsILzKxV+sX77C5b8FSuKF00vh2OMYv+xgHpAMF4=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/neelance/astrewrite v0.0.0-20160511093645-99348263ae86/go.mod h1:kHJEU3ofeGjhHklVoIGuVj85JJwZ6kWPaJwCIxgnFmo=
github.com/neelance/sourcemap v0.0.0-20151028013722-8c68805598ab/go.mod h1:Qr6/a/Q4r9LP1IltGz7tA7iOK1WonHEYhu1HRBA7ZiM=
github.com/onsi/ginkgo/v2 v2.2.0 h1:3ZNA3L1c5FYDFTTxbFeVGGD8jYvjYauHD30YgLxVsNI=
github.com/onsi/ginkgo/v2 v2.2.0/go.mod h1:MEH45j8TBi6u9BMogfbp0stKC5cdGjumZj5Y7AG4VIk=
github.com/onsi/gomega v1.20.1 h1:PA/3qinGoukvymdIDV8pii6tiZgC8kbmJO6Z5+b002Q=
github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/orcaman/writerseeker v0.0.0-20200621085525-1d3f536ff85e h1:s2RNOM/IGdY0Y6qfTeUKhDawdHDpK9RGBdx80qN4Ttw=
github.com/orcaman/writerseeker v0.0.0-20200621085525-1d3f536ff85e/go.mod h1:nBdnFKj15wFbf94Rwfq4m30eAcyY9V/IyKAGQFtqkW0=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/component v0.0.0-20170202220835-f88ec8f54cc4/go.mod h1:XhFIlyj5a1fBNx5aJTbKoIq0mNaPvOagO+HjB3EtxrY=
github.com/shurcooL/events v0.0.0-20181021180414-410e4ca65f48/go.mod h1:5u70Mqkb5O5cxEA8nxTsgrgLehJeAw6Oc4Ab1c/P1HM=
github.com/shurcooL/github_flavored_markdown v0.0.0-20181002035957-2122de532470/go.mod h1:2dOwnU2uBioM+SGy2aZoq1f/Sd1l9OkAeAUvjSyvgU0=
github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ=
github.com/shurcooL/gofontwoff v0.0.0-20180329035133-29b52fc0a18d/go.mod h1:05UtEgK5zq39gLST6uB0cf3NEHjETfB4Fgr3Gx5R9Vw=
github.com/shurcooL/gopherjslib v0.0.0-20160914041154-feb6d3990c2c/go.mod h1:8d3azKNyqcHP1GaQE/c6dDgjkgSx2BZ4IoEi4F1reUI=
github.com/shurcooL/highlight_diff v0.0.0-20170515013008-09bb4053de1b/go.mod h1:ZpfEhSmds4ytuByIcDnOLkTHGUI6KNqRNPDLHDk+mUU=
github.com/shurcooL/highlight_go v0.0.0-20181028180052-98c3abbbae20/go.mod h1:UDKB5a1T23gOMUJrI+uSuH0VRDStOiUVSjBTRDVBVag=
github.com/shurcooL/home v0.0.0-20181020052607-80b7ffcb30f9/go.mod h1:+rgNQw2P9ARFAs37qieuu7ohDNQ3gds9msbT2yn85sg=
github.com/shurcooL/htmlg v0.0.0-20170918183704-d01228ac9e50/go.mod h1:zPn1wHpTIePGnXSHpsVPWEktKXHr6+SS6x/IKRb7cpw=
github.com/shurcooL/httperror v0.0.0-20170206035902-86b7830d14cc/go.mod h1:aYMfkZ6DWSJPJ6c4Wwz3QtW22G7mf/PEgaB9k/ik5+Y=
github.com/shurcooL/httpfs v0.0.0-20171119174359-809beceb2371/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/httpgzip v0.0.0-20180522190206-b1c53ac65af9/go.mod h1:919LwcH0M7/W4fcZ0/jy0qGght1GIhqyS/EgWGH2j5Q=
github.com/shurcooL/issues v0.0.0-20181008053335-6292fdc1e191/go.mod h1:e2qWDig5bLteJ4fwvDAc2NHzqFEthkqn7aOZAOpj+PQ=
github.com/shurcooL/issuesapp v0.0.0-20180602232740-048589ce2241/go.mod h1:NPpHK2TI7iSaM0buivtFUc9offApnI0Alt/K8hcHy0I=
github.com/shurcooL/notifications v0.0.0-20181007000457-627ab5aea122/go.mod h1:b5uSkrEVM1jQUspwbixRBhaIjIzL2xazXp6kntxYle0=
github.com/shurcooL/octicon v0.0.0-20181028054416-fa4f57f9efb2/go.mod h1:eWdoE5JD4R5UVWDucdOPg1g2fqQRq78IQa9zlOV1vpQ=
github.com/shurcooL/reactions v0.0.0-20181006231557-f2e0b4ca5b82/go.mod h1:TCR1lToEk4d2s07G3XGfz2QrgHXg4RJBvjrOozvoWfk=
github.com/shurcooL/sanitized_anchor_name v0.0.0-20170918181015-86672fcb3f95/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/users v0.0.0-20180125191416-49c67e49c537/go.mod h1:QJTqeLYEDaXHZDBsXlPCDqdhQuJkuw4NOtaxYe3xii4=
github.com/shurcooL/webdavfs v0.0.0-20170829043945-18c3829fa133/go.mod h1:hKmq5kWdCj2z2KEozexVbfEZIWiTjhE0+UjmZgPqehw=
github.com/sourcegraph/annotate v0.0.0-20160123013949-f4cad6c6324d/go.mod h1:UdhH50NIW0fCiwBSr0co2m7BnFLdv4fQTgdqdJTHFeE=
github.com/sourcegraph/syntaxhighlight v0.0.0-20170531221838-bd320f5d308e/go.mod h1:HuIsMU8RRBOtsCgI77wP899iHVBQpCmg4ErYMZB+2IA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.8.0 h1:pSgiaMZlXftHpm5L7V1+rVB+AZJydKsMxsQBIJw4PKk=
github.com/sunfish-shogi/bufseekio v0.0.0-20210207115823-a4185644b365/go.mod h1:dEzdXgvImkQ3WLI+0KQpmEx8T/C/ma9KeS3AfmU899I=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
github.com/viant/assertly v0.4.8/go.mod h1:aGifi++jvCrUaklKEKT0BU95igDNaqkvz+49uaYMPRU=
github.com/viant/toolbox v0.24.0/go.mod h1:OxMCG57V0PXuIP2HNQrtJf2CjqdmbrOx5EkMILuUhzM=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/zencoder/go-dash/v3 v3.0.2 h1:oP1+dOh+Gp57PkvdCyMfbHtrHaxfl3w4kR3KBBbuqQE=
github.com/zencoder/go-dash/v3 v3.0.2/go.mod h1:30R5bKy1aUYY45yesjtZ9l8trNc2TwNqbS17WVQmCzk=
go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA=
go4.org v0.0.0-20180809161055-417644f6feb5/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
golang.org/x/build v0.0.0-20190111050920-041ab4dc3f9d/go.mod h1:OWs+y06UdEOHN4y+MfF/py+xQ/tYqIWW03b70/CG9Rw=
golang.org/x/crypto v0.0.0-20181030102418-4d3f4d9ffa16/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29 h1:tkVvjkPTB7pnW3jnid7kNyAMPVWllTNOf/qKDze4p9o=
golang.org/x/crypto v0.0.0-20220331220935-ae2d96664a29/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e h1:+WEEuIdZHnUeJJmEUjyYC2gfUMj69yZXw17EnHg/otA=
golang.org/x/exp v0.0.0-20220722155223-a9213eeb770e/go.mod h1:Kr81I6Kryrl9sr8s2FK3vxD90NdsKWRuOIl2O4CvYbA=
golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4 h1:6zppjxzCulZykYSLyVDYbneBfbaBIQPYMevg0bEwv2s=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181029044818-c44066c5c816/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181106065722-10aee1819953/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190313220215-9f648a60d977/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b h1:PxfKdU9lEEDYjdIzOtC4qFWgkU2rGHdKlKowJSMN9h0=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181029174526-d69651ed3497/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190316082340-a2f829d7f35f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190726091711-fc99dfbffb4e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20191204072324-ce4227a45e2e/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.1-0.20221102194838-fc697a31fa06 h1:E1pm64FqQa4v8dHd/bAneyMkR4hk8LTJhoSlc5mc1cM=
golang.org/x/sys v0.1.1-0.20221102194838-fc697a31fa06/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7 h1:olpwvP2KacW1ZWvsR7uQhoyTYvKAupfQrRGBFM352Gk=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030000716-a0a13e073c7b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200410194907-79a7a3126eef/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.1.1/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.12 h1:VveCTK38A2rkS8ZqFY25HIDFscX5X9OoEhJd3quQmXU=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.0.0-20181030000543-1d582fd0359e/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.1.0/go.mod h1:UGEZY7KEX120AnNLIHFMKIo4obdJhkp2tPbaPlQx13Y=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181029155118-b69ba1387ce2/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181202183823-bd91e49a0898/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
google.golang.org/genproto v0.0.0-20190306203927-b5d61aea6440/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/src-d/go-billy.v4 v4.3.2 h1:0SQA1pRztfTFx2miS8sA97XvooFeNOmvUenF4o0EcVg=
gopkg.in/src-d/go-billy.v4 v4.3.2/go.mod h1:nDjArDMp+XMs1aFAESLRjfGSgfvoYN0hDfzEk0GjC98=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJdjuHRquDANNeA4x7B8WQ9o=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2020.1.6/go.mod h1:pyyisuGw24ruLjrr1ddx39WE0y9OooInRzEYLhQB2YY=
sourcegraph.com/sourcegraph/go-diff v0.5.0/go.mod h1:kuch7UrkMzY0X+p9CRK03kfuPQ2zzQcaEFbx8wA8rck=
sourcegraph.com/sqs/pbtypes v0.0.0-20180604144634-d3ebe8f20ae4/go.mod h1:ketZ/q3QxT9HOBeFhu6RdvsftgpsbFHBF5Cas6cDKZ0=


@ -1,380 +0,0 @@
package warp
import (
"bytes"
"context"
"encoding/binary"
"errors"
"fmt"
"io"
"io/fs"
"os"
"path/filepath"
"strings"
"time"
"github.com/abema/go-mp4"
"github.com/kixelated/invoker"
"github.com/zencoder/go-dash/v3/mpd"
)
// This is a demo; you should actually fetch media from a live backend.
// It's just much easier to read from disk and "fake" being live.
type Media struct {
base fs.FS
inits map[string]*MediaInit
video []*mpd.Representation
audio []*mpd.Representation
}
func NewMedia(playlistPath string) (m *Media, err error) {
m = new(Media)
// Create a fs.FS out of the folder holding the playlist
m.base = os.DirFS(filepath.Dir(playlistPath))
// Read the playlist file
playlist, err := mpd.ReadFromFile(playlistPath)
if err != nil {
return nil, fmt.Errorf("failed to open playlist: %w", err)
}
if len(playlist.Periods) > 1 {
return nil, fmt.Errorf("multiple periods not supported")
}
period := playlist.Periods[0]
for _, adaption := range period.AdaptationSets {
representation := adaption.Representations[0]
if representation.MimeType == nil {
return nil, fmt.Errorf("missing representation mime type")
}
if representation.Bandwidth == nil {
return nil, fmt.Errorf("missing representation bandwidth")
}
switch *representation.MimeType {
case "video/mp4":
m.video = append(m.video, representation)
case "audio/mp4":
m.audio = append(m.audio, representation)
}
}
if len(m.video) == 0 {
return nil, fmt.Errorf("no video representation found")
}
if len(m.audio) == 0 {
return nil, fmt.Errorf("no audio representation found")
}
m.inits = make(map[string]*MediaInit)
var reps []*mpd.Representation
reps = append(reps, m.audio...)
reps = append(reps, m.video...)
for _, rep := range reps {
path := *rep.SegmentTemplate.Initialization
// TODO Support the full template engine
path = strings.ReplaceAll(path, "$RepresentationID$", *rep.ID)
f, err := fs.ReadFile(m.base, path)
if err != nil {
return nil, fmt.Errorf("failed to read init file: %w", err)
}
init, err := newMediaInit(*rep.ID, f)
if err != nil {
return nil, fmt.Errorf("failed to create init segment: %w", err)
}
m.inits[*rep.ID] = init
}
return m, nil
}
func (m *Media) Start(bitrate func() uint64) (inits map[string]*MediaInit, audio *MediaStream, video *MediaStream, err error) {
start := time.Now()
audio, err = newMediaStream(m, m.audio, start, bitrate)
if err != nil {
return nil, nil, nil, err
}
video, err = newMediaStream(m, m.video, start, bitrate)
if err != nil {
return nil, nil, nil, err
}
return m.inits, audio, video, nil
}
type MediaStream struct {
Media *Media
start time.Time
reps []*mpd.Representation
sequence int
bitrate func() uint64 // returns the current estimated bitrate
}
func newMediaStream(m *Media, reps []*mpd.Representation, start time.Time, bitrate func() uint64) (ms *MediaStream, err error) {
ms = new(MediaStream)
ms.Media = m
ms.reps = reps
ms.start = start
ms.bitrate = bitrate
return ms, nil
}
func (ms *MediaStream) chooseRepresentation() (choice *mpd.Representation) {
bitrate := ms.bitrate()
// Loop over the renditions and pick the highest bitrate we can support
for _, r := range ms.reps {
if uint64(*r.Bandwidth) <= bitrate && (choice == nil || *r.Bandwidth > *choice.Bandwidth) {
choice = r
}
}
if choice != nil {
return choice
}
// We can't support any of the bitrates, so find the lowest one.
for _, r := range ms.reps {
if choice == nil || *r.Bandwidth < *choice.Bandwidth {
choice = r
}
}
return choice
}
// Returns the next segment in the stream
func (ms *MediaStream) Next(ctx context.Context) (segment *MediaSegment, err error) {
rep := ms.chooseRepresentation()
if rep.SegmentTemplate == nil {
return nil, fmt.Errorf("missing segment template")
}
if rep.SegmentTemplate.Media == nil {
return nil, fmt.Errorf("no media template")
}
if rep.SegmentTemplate.StartNumber == nil {
return nil, fmt.Errorf("missing start number")
}
path := *rep.SegmentTemplate.Media
sequence := ms.sequence + int(*rep.SegmentTemplate.StartNumber)
// TODO Support the full template engine
path = strings.ReplaceAll(path, "$RepresentationID$", *rep.ID)
path = strings.ReplaceAll(path, "$Number%05d$", fmt.Sprintf("%05d", sequence)) // TODO TODO
// Try opening the file
f, err := ms.Media.base.Open(path)
if errors.Is(err, os.ErrNotExist) && ms.sequence != 0 {
// Return EOF if the next file is missing
return nil, nil
} else if err != nil {
return nil, fmt.Errorf("failed to open segment file: %w", err)
}
duration := time.Duration(*rep.SegmentTemplate.Duration) / time.Nanosecond
timestamp := time.Duration(ms.sequence) * duration
init := ms.Media.inits[*rep.ID]
segment, err = newMediaSegment(ms, init, f, timestamp)
if err != nil {
return nil, fmt.Errorf("failed to create segment: %w", err)
}
ms.sequence += 1
return segment, nil
}
type MediaInit struct {
ID string
Raw []byte
Timescale int
}
func newMediaInit(id string, raw []byte) (mi *MediaInit, err error) {
mi = new(MediaInit)
mi.ID = id
mi.Raw = raw
err = mi.parse()
if err != nil {
return nil, fmt.Errorf("failed to parse init segment: %w", err)
}
return mi, nil
}
// Parse through the init segment, literally just to populate the timescale
func (mi *MediaInit) parse() (err error) {
r := bytes.NewReader(mi.Raw)
_, err = mp4.ReadBoxStructure(r, func(h *mp4.ReadHandle) (interface{}, error) {
if !h.BoxInfo.IsSupportedType() {
return nil, nil
}
payload, _, err := h.ReadPayload()
if err != nil {
return nil, err
}
switch box := payload.(type) {
case *mp4.Mdhd: // Media Header; moov -> trak -> mdia -> mdhd
if mi.Timescale != 0 {
// verify only one track
return nil, fmt.Errorf("multiple mdhd atoms")
}
mi.Timescale = int(box.Timescale)
}
// Expands children
return h.Expand()
})
if err != nil {
return fmt.Errorf("failed to parse MP4 file: %w", err)
}
return nil
}
type MediaSegment struct {
Stream *MediaStream
Init *MediaInit
file fs.File
timestamp time.Duration
}
func newMediaSegment(s *MediaStream, init *MediaInit, file fs.File, timestamp time.Duration) (ms *MediaSegment, err error) {
ms = new(MediaSegment)
ms.Stream = s
ms.Init = init
ms.file = file
ms.timestamp = timestamp
return ms, nil
}
// Return the next atom, sleeping based on the PTS to simulate a live stream
func (ms *MediaSegment) Read(ctx context.Context) (chunk []byte, err error) {
// Read the next top-level box
var header [8]byte
_, err = io.ReadFull(ms.file, header[:])
if err != nil {
return nil, fmt.Errorf("failed to read header: %w", err)
}
size := int(binary.BigEndian.Uint32(header[0:4]))
if size < 8 {
return nil, fmt.Errorf("box is too small")
}
buf := make([]byte, size)
n := copy(buf, header[:])
_, err = io.ReadFull(ms.file, buf[n:])
if err != nil {
return nil, fmt.Errorf("failed to read atom: %w", err)
}
sample, err := ms.parseAtom(ctx, buf)
if err != nil {
return nil, fmt.Errorf("failed to parse atom: %w", err)
}
if sample != nil {
// Simulate a live stream by sleeping before we write this sample.
// Figure out how much time has elapsed since the start
elapsed := time.Since(ms.Stream.start)
delay := sample.Timestamp - elapsed
if delay > 0 {
// Sleep until we're supposed to see these samples
err = invoker.Sleep(delay)(ctx)
if err != nil {
return nil, err
}
}
}
return buf, nil
}
// Parse through the MP4 atom, returning information about the next fragmented sample
func (ms *MediaSegment) parseAtom(ctx context.Context, buf []byte) (sample *mediaSample, err error) {
r := bytes.NewReader(buf)
_, err = mp4.ReadBoxStructure(r, func(h *mp4.ReadHandle) (interface{}, error) {
if !h.BoxInfo.IsSupportedType() {
return nil, nil
}
payload, _, err := h.ReadPayload()
if err != nil {
return nil, err
}
switch box := payload.(type) {
case *mp4.Moof:
sample = new(mediaSample)
case *mp4.Tfdt: // Track Fragment Decode Timestamp; moof -> traf -> tfdt
// TODO This box isn't required
// TODO we want the last PTS if there are multiple samples
var dts time.Duration
if box.FullBox.Version == 0 {
dts = time.Duration(box.BaseMediaDecodeTimeV0)
} else {
dts = time.Duration(box.BaseMediaDecodeTimeV1)
}
if ms.Init.Timescale == 0 {
return nil, fmt.Errorf("missing timescale")
}
// Convert to seconds
// TODO What about PTS?
sample.Timestamp = dts * time.Second / time.Duration(ms.Init.Timescale)
}
// Expands children
return h.Expand()
})
if err != nil {
return nil, fmt.Errorf("failed to parse MP4 file: %w", err)
}
return sample, nil
}
func (ms *MediaSegment) Close() (err error) {
return ms.file.Close()
}
type mediaSample struct {
Timestamp time.Duration // The timestamp of the first sample
}


@ -1,20 +0,0 @@
package warp
type Message struct {
Init *MessageInit `json:"init,omitempty"`
Segment *MessageSegment `json:"segment,omitempty"`
Debug *MessageDebug `json:"debug,omitempty"`
}
type MessageInit struct {
Id string `json:"id"` // ID of the init segment
}
type MessageSegment struct {
Init string `json:"init"` // ID of the init segment to use for this segment
Timestamp int `json:"timestamp"` // PTS of the first frame in milliseconds
}
type MessageDebug struct {
MaxBitrate int `json:"max_bitrate"` // Artificially limit the QUIC max bitrate
}


@ -1,132 +0,0 @@
package warp
import (
"context"
"crypto/tls"
"encoding/hex"
"fmt"
"io"
"log"
"net/http"
"os"
"path/filepath"
"github.com/kixelated/invoker"
"github.com/kixelated/quic-go"
"github.com/kixelated/quic-go/http3"
"github.com/kixelated/quic-go/logging"
"github.com/kixelated/quic-go/qlog"
"github.com/kixelated/webtransport-go"
)
type Server struct {
inner *webtransport.Server
media *Media
sessions invoker.Tasks
cert *tls.Certificate
}
type Config struct {
Addr string
Cert *tls.Certificate
LogDir string
Media *Media
}
func New(config Config) (s *Server, err error) {
s = new(Server)
s.cert = config.Cert
s.media = config.Media
quicConfig := &quic.Config{}
if config.LogDir != "" {
quicConfig.Tracer = qlog.NewTracer(func(p logging.Perspective, connectionID []byte) io.WriteCloser {
path := fmt.Sprintf("%s-%s.qlog", p, hex.EncodeToString(connectionID))
f, err := os.Create(filepath.Join(config.LogDir, path))
if err != nil {
// lame
panic(err)
}
return f
})
}
tlsConfig := &tls.Config{
Certificates: []tls.Certificate{*s.cert},
}
// Host a HTTP/3 server to serve the WebTransport endpoint
mux := http.NewServeMux()
mux.HandleFunc("/watch", s.handleWatch)
s.inner = &webtransport.Server{
H3: http3.Server{
TLSConfig: tlsConfig,
QuicConfig: quicConfig,
Addr: config.Addr,
Handler: mux,
},
CheckOrigin: func(r *http.Request) bool { return true },
}
return s, nil
}
func (s *Server) runServe(ctx context.Context) (err error) {
return s.inner.ListenAndServe()
}
func (s *Server) runShutdown(ctx context.Context) (err error) {
<-ctx.Done()
s.inner.Close() // close on context shutdown
return ctx.Err()
}
func (s *Server) Run(ctx context.Context) (err error) {
return invoker.Run(ctx, s.runServe, s.runShutdown, s.sessions.Repeat)
}
func (s *Server) handleWatch(w http.ResponseWriter, r *http.Request) {
hijacker, ok := w.(http3.Hijacker)
if !ok {
panic("unable to hijack connection: must use kixelated/quic-go")
}
conn := hijacker.Connection()
sess, err := s.inner.Upgrade(w, r)
if err != nil {
http.Error(w, "failed to upgrade session", 500)
return
}
err = s.serveSession(r.Context(), conn, sess)
if err != nil {
log.Println(err)
}
}
func (s *Server) serveSession(ctx context.Context, conn quic.Connection, sess *webtransport.Session) (err error) {
defer func() {
if err != nil {
sess.CloseWithError(1, err.Error())
} else {
sess.CloseWithError(0, "end of broadcast")
}
}()
ss, err := NewSession(conn, sess, s.media)
if err != nil {
return fmt.Errorf("failed to create session: %w", err)
}
err = ss.Run(ctx)
if err != nil {
return fmt.Errorf("terminated session: %w", err)
}
return nil
}


@ -1,279 +0,0 @@
package warp
import (
"context"
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"io"
"log"
"math"
"time"
"github.com/kixelated/invoker"
"github.com/kixelated/quic-go"
"github.com/kixelated/webtransport-go"
)
// A single WebTransport session
type Session struct {
conn quic.Connection
inner *webtransport.Session
media *Media
inits map[string]*MediaInit
audio *MediaStream
video *MediaStream
streams invoker.Tasks
}
func NewSession(connection quic.Connection, session *webtransport.Session, media *Media) (s *Session, err error) {
s = new(Session)
s.conn = connection
s.inner = session
s.media = media
return s, nil
}
func (s *Session) Run(ctx context.Context) (err error) {
s.inits, s.audio, s.video, err = s.media.Start(s.conn.GetMaxBandwidth)
if err != nil {
return fmt.Errorf("failed to start media: %w", err)
}
// Once we've validated the session, now we can start accessing the streams
return invoker.Run(ctx, s.runAccept, s.runAcceptUni, s.runInit, s.runAudio, s.runVideo, s.streams.Repeat)
}
func (s *Session) runAccept(ctx context.Context) (err error) {
for {
stream, err := s.inner.AcceptStream(ctx)
if err != nil {
return fmt.Errorf("failed to accept bidirectional stream: %w", err)
}
// Warp doesn't utilize bidirectional streams so just close them immediately.
// We might use them in the future so don't close the connection with an error.
stream.CancelRead(1)
}
}
func (s *Session) runAcceptUni(ctx context.Context) (err error) {
for {
stream, err := s.inner.AcceptUniStream(ctx)
if err != nil {
return fmt.Errorf("failed to accept unidirectional stream: %w", err)
}
s.streams.Add(func(ctx context.Context) (err error) {
return s.handleStream(ctx, stream)
})
}
}
func (s *Session) handleStream(ctx context.Context, stream webtransport.ReceiveStream) (err error) {
defer func() {
if err != nil {
stream.CancelRead(1)
}
}()
var header [8]byte
for {
_, err = io.ReadFull(stream, header[:])
if errors.Is(io.EOF, err) {
return nil
} else if err != nil {
return fmt.Errorf("failed to read atom header: %w", err)
}
size := binary.BigEndian.Uint32(header[0:4])
name := string(header[4:8])
if size < 8 {
return fmt.Errorf("atom size is too small")
} else if size > 42069 { // arbitrary limit
return fmt.Errorf("atom size is too large")
} else if name != "warp" {
return fmt.Errorf("only warp atoms are supported")
}
payload := make([]byte, size-8)
_, err = io.ReadFull(stream, payload)
if err != nil {
return fmt.Errorf("failed to read atom payload: %w", err)
}
log.Println("received message:", string(payload))
msg := Message{}
err = json.Unmarshal(payload, &msg)
if err != nil {
return fmt.Errorf("failed to decode json payload: %w", err)
}
if msg.Debug != nil {
s.setDebug(msg.Debug)
}
}
}
func (s *Session) runInit(ctx context.Context) (err error) {
for _, init := range s.inits {
err = s.writeInit(ctx, init)
if err != nil {
return fmt.Errorf("failed to write init stream: %w", err)
}
}
return nil
}
func (s *Session) runAudio(ctx context.Context) (err error) {
for {
segment, err := s.audio.Next(ctx)
if err != nil {
return fmt.Errorf("failed to get next segment: %w", err)
}
if segment == nil {
return nil
}
err = s.writeSegment(ctx, segment)
if err != nil {
return fmt.Errorf("failed to write segment stream: %w", err)
}
}
}
func (s *Session) runVideo(ctx context.Context) (err error) {
for {
segment, err := s.video.Next(ctx)
if err != nil {
return fmt.Errorf("failed to get next segment: %w", err)
}
if segment == nil {
return nil
}
err = s.writeSegment(ctx, segment)
if err != nil {
return fmt.Errorf("failed to write segment stream: %w", err)
}
}
}
// Create a stream for an INIT segment and write the container.
func (s *Session) writeInit(ctx context.Context, init *MediaInit) (err error) {
temp, err := s.inner.OpenUniStreamSync(ctx)
if err != nil {
return fmt.Errorf("failed to create stream: %w", err)
}
if temp == nil {
// Not sure when this happens, perhaps when closing a connection?
return fmt.Errorf("received a nil stream from quic-go")
}
// Wrap the stream in an object that buffers writes instead of blocking.
stream := NewStream(temp)
s.streams.Add(stream.Run)
defer func() {
if err != nil {
stream.WriteCancel(1)
}
}()
stream.SetPriority(math.MaxInt)
err = stream.WriteMessage(Message{
Init: &MessageInit{Id: init.ID},
})
if err != nil {
return fmt.Errorf("failed to write init header: %w", err)
}
_, err = stream.Write(init.Raw)
if err != nil {
return fmt.Errorf("failed to write init data: %w", err)
}
err = stream.Close()
if err != nil {
return fmt.Errorf("failed to close init stream: %w", err)
}
return nil
}
// Create a stream for a segment and write the contents, chunk by chunk.
func (s *Session) writeSegment(ctx context.Context, segment *MediaSegment) (err error) {
temp, err := s.inner.OpenUniStreamSync(ctx)
if err != nil {
return fmt.Errorf("failed to create stream: %w", err)
}
if temp == nil {
// Not sure when this happens, perhaps when closing a connection?
return fmt.Errorf("received a nil stream from quic-go")
}
// Wrap the stream in an object that buffers writes instead of blocking.
stream := NewStream(temp)
s.streams.Add(stream.Run)
defer func() {
if err != nil {
stream.WriteCancel(1)
}
}()
ms := int(segment.timestamp / time.Millisecond)
// newer segments take priority
stream.SetPriority(ms)
err = stream.WriteMessage(Message{
Segment: &MessageSegment{
Init: segment.Init.ID,
Timestamp: ms,
},
})
if err != nil {
return fmt.Errorf("failed to write segment header: %w", err)
}
for {
// Get the next fragment
buf, err := segment.Read(ctx)
if errors.Is(err, io.EOF) {
break
} else if err != nil {
return fmt.Errorf("failed to read segment data: %w", err)
}
// NOTE: This won't block because of our wrapper
_, err = stream.Write(buf)
if err != nil {
return fmt.Errorf("failed to write segment data: %w", err)
}
}
err = stream.Close()
if err != nil {
return fmt.Errorf("failed to close segment stream: %w", err)
}
return nil
}
func (s *Session) setDebug(msg *MessageDebug) {
s.conn.SetMaxBandwidth(uint64(msg.MaxBitrate))
}


@ -1,144 +0,0 @@
package warp
import (
"context"
"encoding/binary"
"encoding/json"
"fmt"
"sync"
"github.com/kixelated/webtransport-go"
)
// Wrapper around quic.SendStream to make Write non-blocking.
// Otherwise we can't write to multiple concurrent streams in the same goroutine.
type Stream struct {
inner webtransport.SendStream
chunks [][]byte
closed bool
err error
notify chan struct{}
mutex sync.Mutex
}
func NewStream(inner webtransport.SendStream) (s *Stream) {
s = new(Stream)
s.inner = inner
s.notify = make(chan struct{})
return s
}
func (s *Stream) Run(ctx context.Context) (err error) {
defer func() {
s.mutex.Lock()
s.err = err
s.mutex.Unlock()
}()
for {
s.mutex.Lock()
chunks := s.chunks
notify := s.notify
closed := s.closed
s.chunks = s.chunks[len(s.chunks):]
s.mutex.Unlock()
for _, chunk := range chunks {
_, err = s.inner.Write(chunk)
if err != nil {
return err
}
}
if closed {
return s.inner.Close()
}
if len(chunks) == 0 {
select {
case <-ctx.Done():
return ctx.Err()
case <-notify:
}
}
}
}
func (s *Stream) Write(buf []byte) (n int, err error) {
s.mutex.Lock()
defer s.mutex.Unlock()
if s.err != nil {
return 0, s.err
}
if s.closed {
return 0, fmt.Errorf("closed")
}
// Make a copy of the buffer so it's long lived
buf = append([]byte{}, buf...)
s.chunks = append(s.chunks, buf)
// Wake up the writer
close(s.notify)
s.notify = make(chan struct{})
return len(buf), nil
}
func (s *Stream) WriteMessage(msg Message) (err error) {
payload, err := json.Marshal(msg)
if err != nil {
return fmt.Errorf("failed to marshal message: %w", err)
}
var size [4]byte
binary.BigEndian.PutUint32(size[:], uint32(len(payload)+8))
_, err = s.Write(size[:])
if err != nil {
return fmt.Errorf("failed to write size: %w", err)
}
_, err = s.Write([]byte("warp"))
if err != nil {
return fmt.Errorf("failed to write atom header: %w", err)
}
_, err = s.Write(payload)
if err != nil {
return fmt.Errorf("failed to write payload: %w", err)
}
return nil
}
func (s *Stream) WriteCancel(code webtransport.StreamErrorCode) {
s.inner.CancelWrite(code)
}
func (s *Stream) SetPriority(prio int) {
s.inner.SetPriority(prio)
}
func (s *Stream) Close() (err error) {
s.mutex.Lock()
defer s.mutex.Unlock()
if s.err != nil {
return s.err
}
s.closed = true
// Wake up the writer
close(s.notify)
s.notify = make(chan struct{})
return nil
}


@ -1,56 +0,0 @@
package main
import (
"context"
"crypto/tls"
"flag"
"fmt"
"log"
"github.com/kixelated/invoker"
"github.com/kixelated/warp/server/internal/warp"
)
func main() {
err := run(context.Background())
if err != nil {
log.Fatal(err)
}
}
func run(ctx context.Context) (err error) {
addr := flag.String("addr", ":4443", "HTTPS server address")
cert := flag.String("tls-cert", "../cert/localhost.crt", "TLS certificate file path")
key := flag.String("tls-key", "../cert/localhost.key", "TLS certificate file path")
logDir := flag.String("log-dir", "", "logs will be written to the provided directory")
dash := flag.String("dash", "../media/playlist.mpd", "DASH playlist path")
flag.Parse()
media, err := warp.NewMedia(*dash)
if err != nil {
return fmt.Errorf("failed to open media: %w", err)
}
tlsCert, err := tls.LoadX509KeyPair(*cert, *key)
if err != nil {
return fmt.Errorf("failed to load TLS certificate: %w", err)
}
warpConfig := warp.Config{
Addr: *addr,
Cert: &tlsCert,
LogDir: *logDir,
Media: media,
}
warpServer, err := warp.New(warpConfig)
if err != nil {
return fmt.Errorf("failed to create warp server: %w", err)
}
log.Printf("listening on %s", *addr)
return invoker.Run(ctx, invoker.Interrupt, warpServer.Run)
}

server/src/lib.rs Normal file

@ -0,0 +1,3 @@
pub mod media;
pub mod session;
pub mod transport;

server/src/main.rs Normal file

@ -0,0 +1,38 @@
use warp::{session, transport};
use clap::Parser;
/// Serve a pre-encoded media file over WebTransport, simulating a live stream.
#[derive(Parser)]
struct Cli {
/// Listen on this address
#[arg(short, long, default_value = "[::]:4443")]
addr: String,
/// Use the certificate file at this path
#[arg(short, long, default_value = "../cert/localhost.crt")]
cert: String,
/// Use the private key at this path
#[arg(short, long, default_value = "../cert/localhost.key")]
key: String,
/// Use the media file at this path
#[arg(short, long, default_value = "../media/fragmented.mp4")]
media: String,
}
fn main() -> anyhow::Result<()> {
env_logger::init();
let args = Cli::parse();
let server_config = transport::Config {
addr: args.addr,
cert: args.cert,
key: args.key,
};
let mut server = transport::Server::<session::Session>::new(server_config).unwrap();
server.run()
}

server/src/media/mod.rs Normal file

@ -0,0 +1,3 @@
mod source;
pub use source::{Fragment, Source};

server/src/media/source.rs Normal file

@ -0,0 +1,230 @@
use std::collections::{HashMap, VecDeque};
use std::io::Read;
use std::{fs, io, time};
use anyhow;
use mp4;
use mp4::ReadBox;
pub struct Source {
// We read the file once, in order, and don't seek backwards.
reader: io::BufReader<fs::File>,
// The timestamp when the broadcast "started", so we can sleep to simulate a live stream.
start: time::Instant,
// The initialization payload; ftyp + moov boxes.
pub init: Vec<u8>,
// The timescale used for each track.
timescales: HashMap<u32, u32>,
// Any fragments parsed and ready to be returned by next().
fragments: VecDeque<Fragment>,
}
pub struct Fragment {
// The track ID for the fragment.
pub track_id: u32,
// The data of the fragment.
pub data: Vec<u8>,
// Whether this fragment is a keyframe.
pub keyframe: bool,
// The timestamp of the fragment, in track timescale units, used to simulate a live stream.
pub timestamp: u64,
}
impl Source {
pub fn new(path: &str) -> anyhow::Result<Self> {
let f = fs::File::open(path)?;
let mut reader = io::BufReader::new(f);
let start = time::Instant::now();
let ftyp = read_atom(&mut reader)?;
anyhow::ensure!(&ftyp[4..8] == b"ftyp", "expected ftyp atom");
let moov = read_atom(&mut reader)?;
anyhow::ensure!(&moov[4..8] == b"moov", "expected moov atom");
let mut init = ftyp;
init.extend(&moov);
// We're going to parse the moov box.
// We have to read the moov box header to correctly advance the cursor for the mp4 crate.
let mut moov_reader = io::Cursor::new(&moov);
let moov_header = mp4::BoxHeader::read(&mut moov_reader)?;
// Parse the moov box so we can detect the timescales for each track.
let moov = mp4::MoovBox::read_box(&mut moov_reader, moov_header.size)?;
Ok(Self {
reader,
start,
init,
timescales: timescales(&moov),
fragments: VecDeque::new(),
})
}
pub fn fragment(&mut self) -> anyhow::Result<Option<Fragment>> {
if self.fragments.is_empty() {
self.parse()?;
};
if self.timeout().is_some() {
return Ok(None);
}
Ok(self.fragments.pop_front())
}
fn parse(&mut self) -> anyhow::Result<()> {
loop {
let atom = read_atom(&mut self.reader)?;
let mut reader = io::Cursor::new(&atom);
let header = mp4::BoxHeader::read(&mut reader)?;
match header.name {
mp4::BoxType::FtypBox | mp4::BoxType::MoovBox => {
anyhow::bail!("must call init first")
}
mp4::BoxType::MoofBox => {
let moof = mp4::MoofBox::read_box(&mut reader, header.size)?;
if moof.trafs.len() != 1 {
// We can't split the mdat atom, so this is impossible to support
anyhow::bail!("multiple tracks per moof atom")
}
self.fragments.push_back(Fragment {
track_id: moof.trafs[0].tfhd.track_id,
data: atom,
keyframe: has_keyframe(&moof),
timestamp: first_timestamp(&moof).expect("couldn't find timestamp"),
})
}
mp4::BoxType::MdatBox => {
let moof = self.fragments.back().expect("no atom before mdat");
self.fragments.push_back(Fragment {
track_id: moof.track_id,
data: atom,
keyframe: false,
timestamp: moof.timestamp,
});
// We have some media data, return so we can start sending it.
return Ok(());
}
_ => {
// Skip unknown atoms
}
}
}
}
// Simulate a live stream by sleeping until the next timestamp in the media.
pub fn timeout(&self) -> Option<time::Duration> {
let next = self.fragments.front()?;
let timestamp = next.timestamp;
// Find the timescale for the track.
let timescale = self.timescales.get(&next.track_id).unwrap();
let delay = time::Duration::from_millis(1000 * timestamp / *timescale as u64);
let elapsed = self.start.elapsed();
delay.checked_sub(elapsed)
}
}
// Read a full MP4 atom into a vector.
pub fn read_atom<R: Read>(reader: &mut R) -> anyhow::Result<Vec<u8>> {
// Read the 8 bytes for the size + type
let mut buf = [0u8; 8];
reader.read_exact(&mut buf)?;
// Convert the first 4 bytes into the size.
let size = u32::from_be_bytes(buf[0..4].try_into()?) as u64;
//let typ = &buf[4..8].try_into().ok().unwrap();
let mut raw = buf.to_vec();
let mut limit = match size {
// Runs until the end of the file.
0 => reader.take(u64::MAX),
// The next 8 bytes are the extended size to be used instead.
1 => {
reader.read_exact(&mut buf)?;
let size_large = u64::from_be_bytes(buf);
anyhow::ensure!(
size_large >= 16,
"impossible extended box size: {}",
size_large
);
reader.take(size_large - 16)
}
2..=7 => {
anyhow::bail!("impossible box size: {}", size)
}
// Otherwise read based on the size.
size => reader.take(size - 8),
};
// Append to the vector and return it.
limit.read_to_end(&mut raw)?;
Ok(raw)
}
fn has_keyframe(moof: &mp4::MoofBox) -> bool {
for traf in &moof.trafs {
// TODO trak default flags if this is None
let default_flags = traf.tfhd.default_sample_flags.unwrap_or_default();
let trun = match &traf.trun {
Some(t) => t,
None => return false,
};
for i in 0..trun.sample_count {
let mut flags = match trun.sample_flags.get(i as usize) {
Some(f) => *f,
None => default_flags,
};
if i == 0 && trun.first_sample_flags.is_some() {
flags = trun.first_sample_flags.unwrap();
}
// https://chromium.googlesource.com/chromium/src/media/+/master/formats/mp4/track_run_iterator.cc#177
let keyframe = (flags >> 24) & 0x3 == 0x2; // kSampleDependsOnNoOther
let non_sync = (flags >> 16) & 0x1 == 0x1; // kSampleIsNonSyncSample
if keyframe && !non_sync {
return true;
}
}
}
false
}
fn first_timestamp(moof: &mp4::MoofBox) -> Option<u64> {
Some(moof.trafs.first()?.tfdt.as_ref()?.base_media_decode_time)
}
fn timescales(moov: &mp4::MoovBox) -> HashMap<u32, u32> {
moov.traks
.iter()
.map(|trak| (trak.tkhd.track_id, trak.mdia.mdhd.timescale))
.collect()
}


@ -0,0 +1,37 @@
use serde::{Deserialize, Serialize};
#[derive(Serialize, Deserialize)]
pub struct Message {
pub init: Option<Init>,
pub segment: Option<Segment>,
}
#[derive(Serialize, Deserialize)]
pub struct Init {}
#[derive(Serialize, Deserialize)]
pub struct Segment {
pub track_id: u32,
}
impl Message {
pub fn new() -> Self {
Message {
init: None,
segment: None,
}
}
pub fn serialize(&self) -> anyhow::Result<Vec<u8>> {
let str = serde_json::to_string(self)?;
let bytes = str.as_bytes();
let size = bytes.len() + 8;
let mut out = Vec::with_capacity(size);
out.extend_from_slice(&(size as u32).to_be_bytes());
out.extend_from_slice(b"warp");
out.extend_from_slice(bytes);
Ok(out)
}
}
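For reference, here is a minimal sketch of how a peer could parse this framing: a 4-byte big-endian size that includes the 8-byte header, the ASCII tag `warp`, then a JSON payload. It assumes the `anyhow` and `serde_json` crates already used above; the helper name is hypothetical and not part of this commit.

```
fn parse_warp_atom(buf: &[u8]) -> anyhow::Result<(serde_json::Value, usize)> {
    // The first 4 bytes are the total atom size, including this 8-byte header.
    anyhow::ensure!(buf.len() >= 8, "short atom header");
    let size = u32::from_be_bytes(buf[0..4].try_into()?) as usize;
    anyhow::ensure!(&buf[4..8] == b"warp", "expected a warp atom");
    anyhow::ensure!(size >= 8 && size <= buf.len(), "invalid atom size");

    // The remainder of the atom is a JSON-encoded Message.
    let msg = serde_json::from_slice(&buf[8..size])?;
    Ok((msg, size)) // `size` bytes consumed; anything after belongs to the next atom
}
```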

server/src/session/mod.rs Normal file

@ -0,0 +1,154 @@
mod message;
use std::collections::hash_map as hmap;
use std::time;
use quiche;
use quiche::h3::webtransport;
use crate::{media, transport};
#[derive(Default)]
pub struct Session {
media: Option<media::Source>,
streams: transport::Streams, // An easy way of buffering stream data.
tracks: hmap::HashMap<u32, u64>, // map from track_id to current stream_id
}
impl transport::App for Session {
// Process any updates to a session.
fn poll(
&mut self,
conn: &mut quiche::Connection,
session: &mut webtransport::ServerSession,
) -> anyhow::Result<()> {
loop {
let event = match session.poll(conn) {
Err(webtransport::Error::Done) => break,
Err(e) => return Err(e.into()),
Ok(e) => e,
};
log::debug!("webtransport event {:?}", event);
match event {
webtransport::ServerEvent::ConnectRequest(_req) => {
// The request can be inspected via req.authority() and req.path(),
// and validated with req.origin().
session.accept_connect_request(conn, None)?;
// TODO
let media = media::Source::new("../media/fragmented.mp4")?;
let init = &media.init;
// Create a JSON header.
let mut message = message::Message::new();
message.init = Some(message::Init {});
let data = message.serialize()?;
// Create a new stream and write the header.
let stream_id = session.open_stream(conn, false)?;
self.streams.send(conn, stream_id, data.as_slice(), false)?;
self.streams.send(conn, stream_id, init.as_slice(), true)?;
self.media = Some(media);
}
webtransport::ServerEvent::StreamData(stream_id) => {
let mut buf = vec![0; 10000];
while let Ok(len) = session.recv_stream_data(conn, stream_id, &mut buf) {
let _stream_data = &buf[0..len];
}
}
_ => {}
}
}
// Send any pending stream data.
// NOTE: This doesn't return an error; the writes are asynchronous, so surfacing one here would be confusing.
self.streams.poll(conn);
// Fetch the next media fragment, possibly queuing up stream data.
self.poll_source(conn, session)?;
Ok(())
}
fn timeout(&self) -> Option<time::Duration> {
self.media.as_ref().and_then(|m| m.timeout())
}
}
impl Session {
fn poll_source(
&mut self,
conn: &mut quiche::Connection,
session: &mut webtransport::ServerSession,
) -> anyhow::Result<()> {
// Get the media source once the connection is established.
let media = match &mut self.media {
Some(m) => m,
None => return Ok(()),
};
// Get the next media fragment.
let fragment = match media.fragment()? {
Some(f) => f,
None => return Ok(()),
};
let stream_id = match self.tracks.get(&fragment.track_id) {
// Close the old stream.
Some(stream_id) if fragment.keyframe => {
self.streams.send(conn, *stream_id, &[], true)?;
None
}
// Use the existing stream
Some(stream_id) => Some(*stream_id),
// No existing stream.
_ => None,
};
let stream_id = match stream_id {
// Use the existing stream,
Some(stream_id) => stream_id,
// Open a new stream.
None => {
// Create a new unidirectional stream.
let stream_id = session.open_stream(conn, false)?;
// Set the stream priority to be equal to the timestamp.
// We subtract from u64::MAX so newer media is sent with higher priority.
// TODO prioritize audio
let order = u64::MAX - fragment.timestamp;
self.streams.send_order(conn, stream_id, order);
// Encode a JSON header indicating this is a new track.
let mut message: message::Message = message::Message::new();
message.segment = Some(message::Segment {
track_id: fragment.track_id,
});
// Write the header.
let data = message.serialize()?;
self.streams.send(conn, stream_id, &data, false)?;
// Keep a mapping from the track id to the current stream id.
self.tracks.insert(fragment.track_id, stream_id);
stream_id
}
};
// Write the current fragment.
let data = fragment.data.as_slice();
self.streams.send(conn, stream_id, data, false)?;
Ok(())
}
}


@ -0,0 +1,12 @@
use std::time;
use quiche::h3::webtransport;
pub trait App: Default {
fn poll(
&mut self,
conn: &mut quiche::Connection,
session: &mut webtransport::ServerSession,
) -> anyhow::Result<()>;
fn timeout(&self) -> Option<time::Duration>;
}
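This trait is the seam between the generic QUIC/WebTransport plumbing and the Warp session logic: poll() drives the session and timeout() tells the event loop when to wake up. A minimal, hypothetical implementation (not part of this commit, and assuming the crate is named `warp` as in `main.rs`) would look like:

```
use std::time;

use quiche::h3::webtransport;

// Accepts every CONNECT request and otherwise does nothing.
#[derive(Default)]
pub struct AcceptOnly;

impl warp::transport::App for AcceptOnly {
    fn poll(
        &mut self,
        conn: &mut quiche::Connection,
        session: &mut webtransport::ServerSession,
    ) -> anyhow::Result<()> {
        loop {
            let event = match session.poll(conn) {
                Err(webtransport::Error::Done) => break,
                Err(e) => return Err(e.into()),
                Ok(e) => e,
            };

            if let webtransport::ServerEvent::ConnectRequest(_req) = event {
                session.accept_connect_request(conn, None)?;
            }
        }

        Ok(())
    }

    fn timeout(&self) -> Option<time::Duration> {
        None // nothing scheduled; only the QUIC timeout drives wakeups
    }
}
```

It would be plugged into the server the same way `session::Session` is in `main.rs`: `transport::Server::<AcceptOnly>::new(config)?.run()`.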


@ -0,0 +1,15 @@
use quiche;
use quiche::h3::webtransport;
use std::collections::hash_map as hmap;
pub type Id = quiche::ConnectionId<'static>;
use super::app;
pub type Map<T> = hmap::HashMap<Id, Connection<T>>;
pub struct Connection<T: app::App> {
pub quiche: quiche::Connection,
pub session: Option<webtransport::ServerSession>,
pub app: T,
}


@ -0,0 +1,8 @@
mod app;
mod connection;
mod server;
mod streams;
pub use app::App;
pub use server::{Config, Server};
pub use streams::Streams;


@ -0,0 +1,398 @@
use std::io;
use quiche::h3::webtransport;
use super::app;
use super::connection;
const MAX_DATAGRAM_SIZE: usize = 1350;
pub struct Server<T: app::App> {
// IO stuff
socket: mio::net::UdpSocket,
poll: mio::Poll,
events: mio::Events,
// QUIC stuff
quic: quiche::Config,
seed: ring::hmac::Key, // connection ID seed
conns: connection::Map<T>,
}
pub struct Config {
pub addr: String,
pub cert: String,
pub key: String,
}
impl<T: app::App> Server<T> {
pub fn new(config: Config) -> io::Result<Self> {
// Listen on the provided socket address
let addr = config.addr.parse().unwrap();
let mut socket = mio::net::UdpSocket::bind(addr).unwrap();
// Setup the event loop.
let poll = mio::Poll::new().unwrap();
let events = mio::Events::with_capacity(1024);
poll.registry()
.register(&mut socket, mio::Token(0), mio::Interest::READABLE)
.unwrap();
// Generate random values for connection IDs.
let rng = ring::rand::SystemRandom::new();
let seed = ring::hmac::Key::generate(ring::hmac::HMAC_SHA256, &rng).unwrap();
// Create the configuration for the QUIC conns.
let mut quic = quiche::Config::new(quiche::PROTOCOL_VERSION).unwrap();
quic.load_cert_chain_from_pem_file(&config.cert).unwrap();
quic.load_priv_key_from_pem_file(&config.key).unwrap();
quic.set_application_protos(quiche::h3::APPLICATION_PROTOCOL)
.unwrap();
quic.set_max_idle_timeout(5000);
quic.set_max_recv_udp_payload_size(MAX_DATAGRAM_SIZE);
quic.set_max_send_udp_payload_size(MAX_DATAGRAM_SIZE);
quic.set_initial_max_data(10_000_000);
quic.set_initial_max_stream_data_bidi_local(1_000_000);
quic.set_initial_max_stream_data_bidi_remote(1_000_000);
quic.set_initial_max_stream_data_uni(1_000_000);
quic.set_initial_max_streams_bidi(100);
quic.set_initial_max_streams_uni(100);
quic.set_disable_active_migration(true);
quic.enable_early_data();
quic.enable_dgram(true, 65536, 65536);
let conns = Default::default();
Ok(Server {
socket,
poll,
events,
quic,
seed,
conns,
})
}
pub fn run(&mut self) -> anyhow::Result<()> {
log::info!("listening on {}", self.socket.local_addr()?);
loop {
self.wait()?;
self.receive()?;
self.app()?;
self.send()?;
self.cleanup();
}
}
pub fn wait(&mut self) -> anyhow::Result<()> {
// Find the shorter timeout from all the active connections.
//
// TODO: use event loop that properly supports timers
let timeout = self
.conns
.values()
.filter_map(|c| {
let timeout = c.quiche.timeout();
let expires = c.app.timeout();
match (timeout, expires) {
(Some(a), Some(b)) => Some(a.min(b)),
(Some(a), None) => Some(a),
(None, Some(b)) => Some(b),
(None, None) => None,
}
})
.min();
self.poll.poll(&mut self.events, timeout).unwrap();
// If the event loop reported no events, it means that the timeout
// has expired, so handle it without attempting to read packets. We
// will then proceed with the send loop.
if self.events.is_empty() {
for conn in self.conns.values_mut() {
conn.quiche.on_timeout();
}
}
Ok(())
}
// Reads packets from the socket, updating any internal connection state.
fn receive(&mut self) -> anyhow::Result<()> {
let mut src = [0; MAX_DATAGRAM_SIZE];
// Try reading any data currently available on the socket.
loop {
let (len, from) = match self.socket.recv_from(&mut src) {
Ok(v) => v,
Err(e) if e.kind() == std::io::ErrorKind::WouldBlock => return Ok(()),
Err(e) => return Err(e.into()),
};
let src = &mut src[..len];
let info = quiche::RecvInfo {
to: self.socket.local_addr().unwrap(),
from,
};
// Parse the QUIC packet's header.
let hdr = quiche::Header::from_slice(src, quiche::MAX_CONN_ID_LEN).unwrap();
let conn_id = ring::hmac::sign(&self.seed, &hdr.dcid);
let conn_id = &conn_id.as_ref()[..quiche::MAX_CONN_ID_LEN];
let conn_id = conn_id.to_vec().into();
// Check if it's an existing connection.
if let Some(conn) = self.conns.get_mut(&hdr.dcid) {
conn.quiche.recv(src, info)?;
if conn.session.is_none() && conn.quiche.is_established() {
conn.session = Some(webtransport::ServerSession::with_transport(
&mut conn.quiche,
)?)
}
continue;
} else if let Some(conn) = self.conns.get_mut(&conn_id) {
conn.quiche.recv(src, info)?;
// TODO is this needed here?
if conn.session.is_none() && conn.quiche.is_established() {
conn.session = Some(webtransport::ServerSession::with_transport(
&mut conn.quiche,
)?)
}
continue;
}
if hdr.ty != quiche::Type::Initial {
log::warn!("unknown connection ID");
continue;
}
let mut dst = [0; MAX_DATAGRAM_SIZE];
if !quiche::version_is_supported(hdr.version) {
let len = quiche::negotiate_version(&hdr.scid, &hdr.dcid, &mut dst).unwrap();
let dst = &dst[..len];
self.socket.send_to(dst, from).unwrap();
continue;
}
let mut scid = [0; quiche::MAX_CONN_ID_LEN];
scid.copy_from_slice(&conn_id);
let scid = quiche::ConnectionId::from_ref(&scid);
// Token is always present in Initial packets.
let token = hdr.token.as_ref().unwrap();
// Do stateless retry if the client didn't send a token.
if token.is_empty() {
let new_token = mint_token(&hdr, &from);
let len = quiche::retry(
&hdr.scid,
&hdr.dcid,
&scid,
&new_token,
hdr.version,
&mut dst,
)
.unwrap();
let dst = &dst[..len];
self.socket.send_to(dst, from).unwrap();
continue;
}
let odcid = validate_token(&from, token);
// The token was not valid, meaning the retry failed, so
// drop the packet.
if odcid.is_none() {
log::warn!("invalid token");
continue;
}
if scid.len() != hdr.dcid.len() {
log::warn!("invalid connection ID");
continue;
}
// Reuse the source connection ID we sent in the Retry packet,
// instead of changing it again.
let conn_id = hdr.dcid.clone();
let local_addr = self.socket.local_addr().unwrap();
log::debug!("new connection: dcid={:?} scid={:?}", hdr.dcid, scid);
let mut conn =
quiche::accept(&conn_id, odcid.as_ref(), local_addr, from, &mut self.quic)?;
// Log each session with QLOG if the ENV var is set.
if let Some(dir) = std::env::var_os("QLOGDIR") {
let id = format!("{:?}", &scid);
let mut path = std::path::PathBuf::from(dir);
let filename = format!("server-{id}.sqlog");
path.push(filename);
let writer = match std::fs::File::create(&path) {
Ok(f) => std::io::BufWriter::new(f),
Err(e) => panic!(
"Error creating qlog file attempted path was {:?}: {}",
path, e
),
};
conn.set_qlog(
std::boxed::Box::new(writer),
"warp-server qlog".to_string(),
format!("{} id={}", "warp-server qlog", id),
);
}
// Process potentially coalesced packets.
conn.recv(src, info)?;
let user = connection::Connection {
quiche: conn,
session: None,
app: T::default(),
};
self.conns.insert(conn_id, user);
}
}
pub fn app(&mut self) -> anyhow::Result<()> {
for conn in self.conns.values_mut() {
if conn.quiche.is_closed() {
continue;
}
if let Some(session) = &mut conn.session {
if let Err(e) = conn.app.poll(&mut conn.quiche, session) {
log::debug!("app error: {:?}", e);
// Close the connection on any application error
let reason = format!("app error: {:?}", e);
conn.quiche.close(true, 0xff, reason.as_bytes()).ok();
}
}
}
Ok(())
}
// Generate outgoing QUIC packets for all active connections and send
// them on the UDP socket, until quiche reports that there are no more
// packets to be sent.
pub fn send(&mut self) -> anyhow::Result<()> {
for conn in self.conns.values_mut() {
let conn = &mut conn.quiche;
if let Err(e) = send_conn(&self.socket, conn) {
log::error!("{} send failed: {:?}", conn.trace_id(), e);
conn.close(false, 0x1, b"fail").ok();
}
}
Ok(())
}
pub fn cleanup(&mut self) {
// Garbage collect closed connections.
self.conns.retain(|_, ref mut c| !c.quiche.is_closed());
}
}
// Send any pending packets for the connection over the socket.
fn send_conn(socket: &mio::net::UdpSocket, conn: &mut quiche::Connection) -> anyhow::Result<()> {
let mut pkt = [0; MAX_DATAGRAM_SIZE];
loop {
let (size, info) = match conn.send(&mut pkt) {
Ok(v) => v,
Err(quiche::Error::Done) => return Ok(()),
Err(e) => return Err(e.into()),
};
let pkt = &pkt[..size];
match socket.send_to(pkt, info.to) {
Err(e) if e.kind() == io::ErrorKind::WouldBlock => return Ok(()),
Err(e) => return Err(e.into()),
Ok(_) => (),
}
}
}
/// Generate a stateless retry token.
///
/// The token includes the static string `"quiche"` followed by the IP address
/// of the client and by the original destination connection ID generated by the
/// client.
///
/// Note that this function is only an example and doesn't do any cryptographic
/// authentication of the token. *It should not be used in a production system*.
fn mint_token(hdr: &quiche::Header, src: &std::net::SocketAddr) -> Vec<u8> {
let mut token = Vec::new();
token.extend_from_slice(b"quiche");
let addr = match src.ip() {
std::net::IpAddr::V4(a) => a.octets().to_vec(),
std::net::IpAddr::V6(a) => a.octets().to_vec(),
};
token.extend_from_slice(&addr);
token.extend_from_slice(&hdr.dcid);
token
}
/// Validates a stateless retry token.
///
/// This checks that the ticket includes the `"quiche"` static string, and that
/// the client IP address matches the address stored in the ticket.
///
/// Note that this function is only an example and doesn't do any cryptographic
/// authentication of the token. *It should not be used in a production system*.
fn validate_token<'a>(
src: &std::net::SocketAddr,
token: &'a [u8],
) -> Option<quiche::ConnectionId<'a>> {
if token.len() < 6 {
return None;
}
if &token[..6] != b"quiche" {
return None;
}
let token = &token[6..];
let addr = match src.ip() {
std::net::IpAddr::V4(a) => a.octets().to_vec(),
std::net::IpAddr::V6(a) => a.octets().to_vec(),
};
if token.len() < addr.len() || &token[..addr.len()] != addr.as_slice() {
return None;
}
Some(quiche::ConnectionId::from_ref(&token[addr.len()..]))
}


@ -0,0 +1,149 @@
use std::collections::VecDeque;
use anyhow;
use quiche;
#[derive(Default)]
pub struct Streams {
ordered: Vec<Stream>,
}
struct Stream {
id: u64,
order: u64,
buffer: VecDeque<u8>,
fin: bool,
}
impl Streams {
// Write the data to the given stream, buffering it if needed.
pub fn send(
&mut self,
conn: &mut quiche::Connection,
id: u64,
buf: &[u8],
fin: bool,
) -> anyhow::Result<()> {
if buf.is_empty() && !fin {
return Ok(());
}
// Get the index of the stream, or add it to the list of streams.
let pos = self
.ordered
.iter()
.position(|s| s.id == id)
.unwrap_or_else(|| {
// Create a new stream
let stream = Stream {
id,
buffer: VecDeque::new(),
fin: false,
order: 0, // Default to highest priority until send_order is called.
};
self.insert(conn, stream)
});
let stream = &mut self.ordered[pos];
// Check if we've already closed the stream, just in case.
if stream.fin && !buf.is_empty() {
anyhow::bail!("stream is already finished");
}
// If there's no data buffered, try to write it immediately.
let size = if stream.buffer.is_empty() {
match conn.stream_send(id, buf, fin) {
Ok(size) => size,
Err(quiche::Error::Done) => 0,
Err(e) => anyhow::bail!(e),
}
} else {
0
};
if size < buf.len() {
// Short write, save the rest for later.
stream.buffer.extend(&buf[size..]);
}
stream.fin |= fin;
Ok(())
}
// Flush any pending stream data.
pub fn poll(&mut self, conn: &mut quiche::Connection) {
self.ordered.retain_mut(|s| s.poll(conn).is_ok());
}
// Set the send order of the stream.
pub fn send_order(&mut self, conn: &mut quiche::Connection, id: u64, order: u64) {
let mut stream = match self.ordered.iter().position(|s| s.id == id) {
// Remove the stream from the existing list.
Some(pos) => self.ordered.remove(pos),
// This is a new stream, insert it into the list.
None => Stream {
id,
buffer: VecDeque::new(),
fin: false,
order,
},
};
stream.order = order;
self.insert(conn, stream);
}
fn insert(&mut self, conn: &mut quiche::Connection, stream: Stream) -> usize {
// Look for the position to insert the stream.
let pos = match self
.ordered
.binary_search_by_key(&stream.order, |s| s.order)
{
Ok(pos) | Err(pos) => pos,
};
self.ordered.insert(pos, stream);
// Reprioritize all later streams.
// TODO we can avoid this if stream_priority takes a u64
for (i, stream) in self.ordered[pos..].iter().enumerate() {
_ = conn.stream_priority(stream.id, (pos + i) as u8, true);
}
pos
}
}
impl Stream {
fn poll(&mut self, conn: &mut quiche::Connection) -> quiche::Result<()> {
// Keep reading from the buffer until it's empty.
while !self.buffer.is_empty() {
// VecDeque is a ring buffer, so we can't write the whole thing at once.
let parts = self.buffer.as_slices();
let size = conn.stream_send(self.id, parts.0, false)?;
if size == 0 {
// No more space available for this stream.
return Ok(());
}
// Remove the bytes that were written.
self.buffer.drain(..size);
}
if self.fin {
// Write the stream done signal.
conn.stream_send(self.id, &[], true)?;
Err(quiche::Error::Done)
} else {
Ok(())
}
}
}
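To make the prioritization above concrete: `session::poll_source` assigns each new stream an order of `u64::MAX` minus the fragment timestamp, and `Streams::insert` sorts ascending by that order before handing quiche the index as its urgency, so newer media gets the lower (more urgent) value. A trivial standalone sketch of that rule, purely for illustration:

```
// Hypothetical helper mirroring the rule in session::poll_source above.
fn send_order(timestamp: u64) -> u64 {
    u64::MAX - timestamp
}

#[test]
fn newer_media_sorts_first() {
    // The fragment at t=6000 sorts before the one at t=3000,
    // so it ends up with a smaller (more urgent) quiche urgency.
    assert!(send_order(6_000) < send_order(3_000));
}
```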


@ -1,4 +1,4 @@
node_modules
.parcel-cache
dist
.parcel-cache
node_modules
fingerprint.hex

web/.eslintrc.cjs Normal file

@ -0,0 +1,13 @@
/* eslint-env node */
module.exports = {
extends: ['eslint:recommended', 'plugin:@typescript-eslint/recommended'],
parser: '@typescript-eslint/parser',
plugins: ['@typescript-eslint'],
root: true,
ignorePatterns: [ 'dist', 'node_modules' ],
rules: {
"@typescript-eslint/ban-ts-comment": "off",
"@typescript-eslint/no-non-null-assertion": "off",
"@typescript-eslint/no-explicit-any": "off",
}
};

web/.gitignore vendored Normal file

@ -0,0 +1,3 @@
node_modules
.parcel-cache
dist

web/Dockerfile Normal file

@ -0,0 +1,26 @@
# Use the official Node.js image as the build image
FROM node:latest
# Set the build directory
WORKDIR /build
# Copy the package.json and yarn.lock files to the container
COPY package*.json yarn.lock ./
# Install dependencies
RUN yarn install
# Copy the entire project to the container
COPY . .
# Expose port 4444 for serving the project
EXPOSE 4444
# The certificate directory is mounted at runtime
VOLUME /cert
# Make a symlink to the certificate fingerprint
RUN ln -s /cert/localhost.hex fingerprint.hex
# Serve the project over HTTPS using the mounted certificate
CMD yarn parcel serve --https --cert /cert/localhost.crt --key /cert/localhost.key --port 4444

web/fingerprint.hex Symbolic link

@ -0,0 +1 @@
../cert/localhost.hex


@ -1,7 +1,8 @@
{
"license": "Apache-2.0",
"source": "src/index.html",
"scripts": {
"serve": "parcel serve --https --cert ../cert/localhost.crt --key ../cert/localhost.key --host localhost --port 4444 --open",
"serve": "parcel serve --https --cert ../cert/localhost.crt --key ../cert/localhost.key --port 4444 --open",
"build": "parcel build",
"check": "tsc --noEmit"
},


@ -0,0 +1,104 @@
import * as MP4 from "../mp4"
export class Encoder {
container: MP4.ISOFile
audio: AudioEncoder
video: VideoEncoder
constructor() {
this.container = new MP4.ISOFile();
this.audio = new AudioEncoder({
output: this.onAudio.bind(this),
error: console.warn,
});
this.video = new VideoEncoder({
output: this.onVideo.bind(this),
error: console.warn,
});
this.container.init();
this.audio.configure({
codec: "mp4a.40.2",
numberOfChannels: 2,
sampleRate: 44100,
// TODO bitrate
})
this.video.configure({
codec: "avc1.42002A", // TODO h.264 baseline
avc: { format: "avc" }, // or annexb
width: 1280,
height: 720,
// TODO bitrate
// TODO bitrateMode
// TODO framerate
// TODO latencyMode
})
}
onAudio(frame: EncodedAudioChunk, metadata: EncodedAudioChunkMetadata) {
const config = metadata.decoderConfig!
const track_id = 1;
if (!this.container.getTrackById(track_id)) {
this.container.addTrack({
id: track_id,
type: "mp4a", // TODO wrong
timescale: 1000, // TODO verify
channel_count: config.numberOfChannels,
samplerate: config.sampleRate,
description: config.description, // TODO verify
// TODO description_boxes?: Box[];
});
}
const buffer = new Uint8Array(frame.byteLength);
frame.copyTo(buffer);
// TODO cts?
const sample = this.container.addSample(track_id, buffer, {
is_sync: frame.type == "key",
duration: frame.duration!,
dts: frame.timestamp,
});
const stream = this.container.createSingleSampleMoof(sample);
}
onVideo(frame: EncodedVideoChunk, metadata?: EncodedVideoChunkMetadata) {
const config = metadata!.decoderConfig!
const track_id = 2;
if (!this.container.getTrackById(track_id)) {
this.container.addTrack({
id: 2,
type: "avc1",
width: config.codedWidth,
height: config.codedHeight,
timescale: 1000, // TODO verify
description: config.description, // TODO verify
// TODO description_boxes?: Box[];
});
}
const buffer = new Uint8Array(frame.byteLength);
frame.copyTo(buffer);
// TODO cts?
const sample = this.container.addSample(track_id, buffer, {
is_sync: frame.type == "key",
duration: frame.duration!,
dts: frame.timestamp,
});
const stream = this.container.createSingleSampleMoof(sample);
}
}


@ -0,0 +1,4 @@
export default class Broadcaster {
constructor() {
}
}


@ -11,7 +11,7 @@
<body>
<div id="player">
<div id="screen">
<div id="play"><span>click for audio</span></div>
<div id="play"><span>click to play</span></div>
<canvas id="video" width="1280" height="720"></canvas>
</div>
@ -31,4 +31,5 @@
<script src="index.ts" type="module"></script>
</body>
</html>


@ -1,4 +1,5 @@
import Player from "./player"
import Transport from "./transport"
// @ts-ignore embed the certificate fingerprint using bundler
import fingerprintHex from 'bundle-text:../fingerprint.hex';
@ -14,18 +15,22 @@ const params = new URLSearchParams(window.location.search)
const url = params.get("url") || "https://localhost:4443/watch"
const canvas = document.querySelector<HTMLCanvasElement>("canvas#video")!
const player = new Player({
const transport = new Transport({
url: url,
fingerprint: { // TODO remove when Chrome accepts the system CA
"algorithm": "sha-256",
"value": new Uint8Array(fingerprint),
},
canvas: canvas,
})
const player = new Player({
transport,
canvas: canvas.transferControlToOffscreen(),
})
const play = document.querySelector<HTMLElement>("#screen #play")!
let playFunc = (e: Event) => {
const playFunc = (e: Event) => {
player.play()
e.preventDefault()


@ -4,7 +4,12 @@ export {
MP4File as File,
MP4ArrayBuffer as ArrayBuffer,
MP4Info as Info,
MP4Track as Track,
MP4AudioTrack as AudioTrack,
MP4VideoTrack as VideoTrack,
DataStream as Stream,
Box,
ISOFile,
Sample,
} from "mp4box"


@ -20,19 +20,7 @@ export class InitParser {
// Create a promise that gets resolved once the init segment has been parsed.
this.info = new Promise((resolve, reject) => {
this.mp4box.onError = reject
this.mp4box.onReady = resolve
// https://github.com/gpac/mp4box.js#onreadyinfo
this.mp4box.onReady = (info: MP4.Info) => {
if (!info.isFragmented) {
reject("expected a fragmented mp4")
}
if (info.tracks.length != 1) {
reject("expected a single track")
}
resolve(info)
}
})
}


@ -1,7 +1,7 @@
// https://github.com/gpac/mp4box.js/issues/233
declare module "mp4box" {
interface MP4MediaTrack {
export interface MP4MediaTrack {
id: number;
created: Date;
modified: Date;
@ -19,26 +19,26 @@ declare module "mp4box" {
nb_samples: number;
}
interface MP4VideoData {
export interface MP4VideoData {
width: number;
height: number;
}
interface MP4VideoTrack extends MP4MediaTrack {
export interface MP4VideoTrack extends MP4MediaTrack {
video: MP4VideoData;
}
interface MP4AudioData {
export interface MP4AudioData {
sample_rate: number;
channel_count: number;
sample_size: number;
}
interface MP4AudioTrack extends MP4MediaTrack {
export interface MP4AudioTrack extends MP4MediaTrack {
audio: MP4AudioData;
}
type MP4Track = MP4VideoTrack | MP4AudioTrack;
export type MP4Track = MP4VideoTrack | MP4AudioTrack;
export interface MP4Info {
duration: number;
@ -82,7 +82,7 @@ declare module "mp4box" {
description: any;
data: ArrayBuffer;
size: number;
alreadyRead: number;
alreadyRead?: number;
duration: number;
cts: number;
dts: number;
@ -104,7 +104,7 @@ declare module "mp4box" {
const LITTLE_ENDIAN: boolean;
export class DataStream {
constructor(buffer: ArrayBuffer, byteOffset?: number, littleEndian?: boolean);
constructor(buffer?: ArrayBuffer, byteOffset?: number, littleEndian?: boolean);
getPosition(): number;
get byteLength(): number;
@ -144,5 +144,82 @@ declare module "mp4box" {
// TODO I got bored porting the remaining functions
}
export class Box {
write(stream: DataStream): void;
}
export interface TrackOptions {
id?: number;
type?: string;
width?: number;
height?: number;
duration?: number;
layer?: number;
timescale?: number;
media_duration?: number;
language?: string;
hdlr?: string;
// video
avcDecoderConfigRecord?: any;
// audio
balance?: number;
channel_count?: number;
samplesize?: number;
samplerate?: number;
//captions
namespace?: string;
schema_location?: string;
auxiliary_mime_types?: string;
description?: any;
description_boxes?: Box[];
default_sample_description_index_id?: number;
default_sample_duration?: number;
default_sample_size?: number;
default_sample_flags?: number;
}
export interface FileOptions {
brands?: string[];
timescale?: number;
rate?: number;
duration?: number;
width?: number;
}
export interface SampleOptions {
sample_description_index?: number;
duration?: number;
cts?: number;
dts?: number;
is_sync?: boolean;
is_leading?: number;
depends_on?: number;
is_depended_on?: number;
has_redundancy?: number;
degradation_priority?: number;
subsamples?: any;
}
// TODO add the remaining functions
// TODO move to another module
export class ISOFile {
constructor(stream?: DataStream);
init(options?: FileOptions): ISOFile;
addTrack(options?: TrackOptions): number;
addSample(track: number, data: ArrayBuffer, options?: SampleOptions): Sample;
createSingleSampleMoof(sample: Sample): Box;
// helpers
getTrackById(id: number): Box | undefined;
getTrexById(id: number): Box | undefined;
}
export { };
}

web/src/player/audio.ts Normal file

@ -0,0 +1,79 @@
import * as Message from "./message";
import { Ring } from "./ring"
export default class Audio {
ring?: Ring;
queue: Array<AudioData>;
render?: number; // non-zero once the emit() refresh timer has been started
last?: number; // the timestamp of the last rendered frame, in microseconds
constructor(config: Message.Config) {
this.queue = []
}
push(frame: AudioData) {
// Drop any old frames
if (this.last && frame.timestamp <= this.last) {
frame.close()
return
}
// Insert the frame into the queue sorted by timestamp.
if (this.queue.length > 0 && this.queue[this.queue.length - 1].timestamp <= frame.timestamp) {
// Fast path because we normally append to the end.
this.queue.push(frame)
} else {
// Do a full binary search
let low = 0
let high = this.queue.length;
while (low < high) {
const mid = (low + high) >>> 1;
if (this.queue[mid].timestamp < frame.timestamp) low = mid + 1;
else high = mid;
}
this.queue.splice(low, 0, frame)
}
this.emit()
}
emit() {
const ring = this.ring
if (!ring) {
return
}
while (this.queue.length) {
let frame = this.queue[0];
if (ring.size() + frame.numberOfFrames > ring.capacity) {
// Buffer is full
break
}
const size = ring.write(frame)
if (size < frame.numberOfFrames) {
throw new Error("audio buffer is full")
}
this.last = frame.timestamp
frame.close()
this.queue.shift()
}
}
play(play: Message.Play) {
this.ring = new Ring(play.buffer)
if (!this.render) {
const sampleRate = 44100 // TODO dynamic
// Refresh every half buffer
const refresh = play.buffer.capacity / sampleRate * 1000 / 2
this.render = setInterval(this.emit.bind(this), refresh)
}
}
}
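The refresh interval works out to half the ring's duration. Assuming the 44.1 kHz rate hardcoded above and the 2 × 4410-sample buffer the player allocates later in this diff, a quick worked example:

```
const sampleRate = 44100
const capacity = 44100 / 10                       // 4410 samples per channel, about 100ms
const refresh = capacity / sampleRate * 1000 / 2  // = 50ms between emit() calls
```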
web/src/player/decoder.ts Normal file
@ -0,0 +1,167 @@
import * as Message from "./message";
import * as MP4 from "../mp4"
import * as Stream from "../stream"
import Renderer from "./renderer"
export default class Decoder {
init: MP4.InitParser;
decoders: Map<number, AudioDecoder | VideoDecoder>;
renderer: Renderer;
constructor(renderer: Renderer) {
this.init = new MP4.InitParser();
this.decoders = new Map();
this.renderer = renderer;
}
async receiveInit(msg: Message.Init) {
let stream = new Stream.Reader(msg.reader, msg.buffer);
while (1) {
const data = await stream.read()
if (!data) break
this.init.push(data)
}
// TODO make sure the init segment is fully received
}
async receiveSegment(msg: Message.Segment) {
// Wait for the init segment to be fully received and parsed
const info = await this.init.info
const input = MP4.New();
input.onSamples = this.onSamples.bind(this);
input.onReady = (info: any) => {
// Extract all of the tracks, because we don't know if it's audio or video.
for (let track of info.tracks) {
input.setExtractionOptions(track.id, track, { nbSamples: 1 });
}
input.start();
}
// MP4box requires us to reparse the init segment unfortunately
let offset = 0;
for (let raw of this.init.raw) {
raw.fileStart = offset
offset = input.appendBuffer(raw)
}
const stream = new Stream.Reader(msg.reader, msg.buffer)
// For whatever reason, mp4box doesn't work unless you feed it one atom at a time.
while (!await stream.done()) {
const raw = await stream.peek(4)
// TODO this doesn't support when size = 0 (until EOF) or size = 1 (extended size)
const size = new DataView(raw.buffer, raw.byteOffset, raw.byteLength).getUint32(0)
const atom = await stream.bytes(size)
// Make a copy of the atom because mp4box only accepts an ArrayBuffer unfortunately
let box = new Uint8Array(atom.byteLength);
box.set(atom)
// and for some reason we need to modify the underlying ArrayBuffer with offset
let buffer = box.buffer as MP4.ArrayBuffer
buffer.fileStart = offset
// Parse the data
offset = input.appendBuffer(buffer)
input.flush()
}
}
onSamples(track_id: number, track: MP4.Track, samples: MP4.Sample[]) {
let decoder = this.decoders.get(track_id);
if (!decoder) {
// We need a sample to initialize the video decoder, because of mp4box limitations.
let sample = samples[0];
if (isVideoTrack(track)) {
// Configure the decoder using the AVC box for H.264
// TODO it should be easy to support other codecs, just need to know the right boxes.
const avcc = sample.description.avcC;
if (!avcc) throw new Error("TODO only h264 is supported");
const description = new MP4.Stream(new Uint8Array(avcc.size), 0, false)
avcc.write(description)
const videoDecoder = new VideoDecoder({
output: this.renderer.push.bind(this.renderer),
error: console.warn,
});
videoDecoder.configure({
codec: track.codec,
codedHeight: track.video.height,
codedWidth: track.video.width,
description: description.buffer?.slice(8),
// optimizeForLatency: true
})
decoder = videoDecoder
} else if (isAudioTrack(track)) {
const audioDecoder = new AudioDecoder({
output: this.renderer.push.bind(this.renderer),
error: console.warn,
});
audioDecoder.configure({
codec: track.codec,
numberOfChannels: track.audio.channel_count,
sampleRate: track.audio.sample_rate,
})
decoder = audioDecoder
} else {
throw new Error("unknown track type")
}
this.decoders.set(track_id, decoder)
}
for (let sample of samples) {
// Convert to microseconds
const timestamp = 1000 * 1000 * sample.dts / sample.timescale
const duration = 1000 * 1000 * sample.duration / sample.timescale
if (isAudioDecoder(decoder)) {
decoder.decode(new EncodedAudioChunk({
type: sample.is_sync ? "key" : "delta",
data: sample.data,
duration: duration,
timestamp: timestamp,
}))
} else if (isVideoDecoder(decoder)) {
decoder.decode(new EncodedVideoChunk({
type: sample.is_sync ? "key" : "delta",
data: sample.data,
duration: duration,
timestamp: timestamp,
}))
} else {
throw new Error("unknown decoder type")
}
}
}
}
function isAudioDecoder(decoder: AudioDecoder | VideoDecoder): decoder is AudioDecoder {
return decoder instanceof AudioDecoder
}
function isVideoDecoder(decoder: AudioDecoder | VideoDecoder): decoder is VideoDecoder {
return decoder instanceof VideoDecoder
}
function isAudioTrack(track: MP4.Track): track is MP4.AudioTrack {
return (track as MP4.AudioTrack).audio !== undefined;
}
function isVideoTrack(track: MP4.Track): track is MP4.VideoTrack {
return (track as MP4.VideoTrack).video !== undefined;
}
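receiveSegment leans on standard MP4 box framing: every atom begins with a 4-byte big-endian size (which counts the 8-byte header itself) followed by a 4-byte type, so peeking 4 bytes and then reading size bytes consumes exactly one atom. A minimal sketch of that header parse, sharing the same size = 0 / size = 1 caveats as the TODO above:

```
// Parse one MP4 box header; names are illustrative, not part of this commit.
function parseBoxHeader(raw: Uint8Array): { size: number, type: string } {
	const view = new DataView(raw.buffer, raw.byteOffset, raw.byteLength)
	const size = view.getUint32(0) // big-endian, includes the 8-byte header
	const type = new TextDecoder().decode(raw.subarray(4, 8)) // e.g. "moof" or "mdat"
	// size === 0 (box runs to EOF) and size === 1 (64-bit largesize) are not handled.
	return { size, type }
}
```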
@ -1,45 +1,51 @@
import * as Message from "./message"
-import Renderer from "./renderer"
+import * as Ring from "./ring"
-import Decoder from "./decoder"
+import Transport from "../transport"
-import { RingInit } from "./ring"
+export interface Config {
transport: Transport
canvas: OffscreenCanvas;
}
// Abstracts the Worker and Worklet into a simpler API
// This class must be created on the main thread due to AudioContext.
-export default class Audio {
+export default class Player {
context: AudioContext;
worker: Worker;
worklet: Promise<AudioWorkletNode>;
-constructor() {
+transport: Transport
// Assume 44.1kHz and two audio channels
-const config = {
+constructor(config: Config) {
-sampleRate: 44100,
+this.transport = config.transport
-ring: new RingInit(2, 4410), // 100ms at 44.1khz
+this.transport.callback = this;
}
this.context = new AudioContext({
latencyHint: "interactive",
-sampleRate: config.sampleRate,
+sampleRate: 44100,
})
this.worker = this.setupWorker(config)
this.worklet = this.setupWorklet(config)
}
-private setupWorker(config: Message.Config): Worker {
+private setupWorker(config: Config): Worker {
const url = new URL('worker.ts', import.meta.url)
const worker = new Worker(url, {
name: "audio",
type: "module",
name: "media",
})
-worker.postMessage({ config })
+const msg = {
canvas: config.canvas,
}
worker.postMessage({ config: msg }, [msg.canvas])
return worker
}
-private async setupWorklet(config: Message.Config): Promise<AudioWorkletNode> {
+private async setupWorklet(config: Config): Promise<AudioWorkletNode> {
// Load the worklet source code.
const url = new URL('worklet.ts', import.meta.url)
await this.context.audioWorklet.addModule(url)
@ -53,8 +59,6 @@ export default class Audio {
console.error("Audio worklet error:", e)
};
worklet.port.postMessage({ config })
// Connect the worklet to the volume node and then to the speakers
worklet.connect(volume)
volume.connect(this.context.destination)
@ -62,16 +66,23 @@ export default class Audio {
return worklet
}
-init(init: Message.Init) {
+onInit(init: Message.Init) {
-this.worker.postMessage({ init })
+this.worker.postMessage({ init }, [init.buffer.buffer, init.reader])
}
-segment(segment: Message.Segment) {
+onSegment(segment: Message.Segment) {
this.worker.postMessage({ segment }, [segment.buffer.buffer, segment.reader])
}
-play(play: Message.Play) {
+async play() {
this.context.resume()
//this.worker.postMessage({ play })
const play = {
buffer: new Ring.Buffer(2, 44100 / 10), // 100ms of audio
}
const worklet = await this.worklet;
worklet.port.postMessage({ play })
this.worker.postMessage({ play })
}
}
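How a page would drive the new Player class is not part of this diff; a rough sketch under the assumption that the module lives at player/index.ts, with placeholder URL and element lookups:

```
import Transport from "../transport"
import Player from "./index" // assumed path for the class above

// Placeholder URL; a fingerprint is only needed against a self-signed local cert.
const transport = new Transport({ url: "https://localhost:4443/watch" })

const element = document.querySelector("canvas")!
const player = new Player({ transport, canvas: element.transferControlToOffscreen() })

// play() resumes the AudioContext, so call it from a user gesture.
document.querySelector("#play")!.addEventListener("click", () => player.play())
```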
web/src/player/message.ts Normal file
@ -0,0 +1,21 @@
import * as Ring from "./ring"
export interface Config {
// video stuff
canvas: OffscreenCanvas;
}
export interface Init {
buffer: Uint8Array; // unread buffered data
reader: ReadableStream; // unread unbuffered data
}
export interface Segment {
buffer: Uint8Array; // unread buffered data
reader: ReadableStream; // unread unbuffered data
}
export interface Play {
timestamp?: number;
buffer: Ring.Buffer;
}
@ -0,0 +1,36 @@
import * as Message from "./message";
import Audio from "./audio"
import Video from "./video"
export default class Renderer {
audio: Audio;
video: Video;
constructor(config: Message.Config) {
this.audio = new Audio(config);
this.video = new Video(config);
}
push(frame: AudioData | VideoFrame) {
if (isAudioData(frame)) {
this.audio.push(frame);
} else if (isVideoFrame(frame)) {
this.video.push(frame);
} else {
throw new Error("unknown frame type")
}
}
play(play: Message.Play) {
this.audio.play(play);
this.video.play(play);
}
}
function isAudioData(frame: AudioData | VideoFrame): frame is AudioData {
return frame instanceof AudioData
}
function isVideoFrame(frame: AudioData | VideoFrame): frame is VideoFrame {
return frame instanceof VideoFrame
}
web/src/player/ring.ts Normal file
@ -0,0 +1,155 @@
// Ring buffer with audio samples.
enum STATE {
READ_POS = 0, // The current read position
WRITE_POS, // The current write position
LENGTH // Clever way of saving the total number of enum values.
}
// Deliberately method-free so it can be sent via postMessage (structured clone drops the prototype)
export class Buffer {
state: SharedArrayBuffer;
channels: SharedArrayBuffer[];
capacity: number;
constructor(channels: number, capacity: number) {
// Store the current state in a separate ring buffer.
this.state = new SharedArrayBuffer(STATE.LENGTH * Int32Array.BYTES_PER_ELEMENT)
// Create a buffer for each audio channel
this.channels = []
for (let i = 0; i < channels; i += 1) {
const buffer = new SharedArrayBuffer(capacity * Float32Array.BYTES_PER_ELEMENT)
this.channels.push(buffer)
}
this.capacity = capacity
}
}
export class Ring {
state: Int32Array;
channels: Float32Array[];
capacity: number;
constructor(buffer: Buffer) {
this.state = new Int32Array(buffer.state)
this.channels = []
for (let channel of buffer.channels) {
this.channels.push(new Float32Array(channel))
}
this.capacity = buffer.capacity
}
// Write the samples for a single audio frame, returning the total number written.
write(frame: AudioData): number {
let readPos = Atomics.load(this.state, STATE.READ_POS)
let writePos = Atomics.load(this.state, STATE.WRITE_POS)
const startPos = writePos
let endPos = writePos + frame.numberOfFrames;
if (endPos > readPos + this.capacity) {
endPos = readPos + this.capacity
if (endPos <= startPos) {
// No space to write
return 0
}
}
let startIndex = startPos % this.capacity;
let endIndex = endPos % this.capacity;
// Loop over each channel
for (let i = 0; i < this.channels.length; i += 1) {
const channel = this.channels[i]
if (startIndex < endIndex) {
// One continuous range to copy.
const full = channel.subarray(startIndex, endIndex)
frame.copyTo(full, {
planeIndex: i,
frameCount: endIndex - startIndex,
})
} else {
const first = channel.subarray(startIndex)
const second = channel.subarray(0, endIndex)
frame.copyTo(first, {
planeIndex: i,
frameCount: first.length,
})
// We need this conditional when startIndex == 0 and endIndex == 0
// When capacity=4410 and frameCount=1024, this was happening 52s into the audio.
if (second.length) {
frame.copyTo(second, {
planeIndex: i,
frameOffset: first.length,
frameCount: second.length,
})
}
}
}
Atomics.store(this.state, STATE.WRITE_POS, endPos)
return endPos - startPos
}
read(dst: Float32Array[]): number {
let readPos = Atomics.load(this.state, STATE.READ_POS)
let writePos = Atomics.load(this.state, STATE.WRITE_POS)
let startPos = readPos;
let endPos = startPos + dst[0].length;
if (endPos > writePos) {
endPos = writePos
if (endPos <= startPos) {
// Nothing to read
return 0
}
}
let startIndex = startPos % this.capacity;
let endIndex = endPos % this.capacity;
// Loop over each channel
for (let i = 0; i < dst.length; i += 1) {
if (i >= this.channels.length) {
// ignore excess destination channels; without this continue the reads below would index past the buffers
continue
}
const input = this.channels[i]
const output = dst[i]
if (startIndex < endIndex) {
const full = input.subarray(startIndex, endIndex)
output.set(full)
} else {
const first = input.subarray(startIndex)
const second = input.subarray(0, endIndex)
output.set(first)
output.set(second, first.length)
}
}
Atomics.store(this.state, STATE.READ_POS, endPos)
return endPos - startPos
}
size() {
// TODO is this thread safe?
let readPos = Atomics.load(this.state, STATE.READ_POS)
let writePos = Atomics.load(this.state, STATE.WRITE_POS)
return writePos - readPos
}
}
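Buffer carries only SharedArrayBuffers and a number, so it survives postMessage intact; each thread then wraps the same storage in its own Ring to get typed views. A minimal sketch of the intended split, with made-up variable names:

```
import { Buffer, Ring } from "./ring"

// Allocate the shared storage once and hand it to both sides.
const shared = new Buffer(2, 4410) // 2 channels, 100ms at 44.1kHz
// worker.postMessage({ play: { buffer: shared } })
// worklet.port.postMessage({ play: { buffer: shared } })

// Producer (worker): write decoded AudioData as it arrives.
const producer = new Ring(shared)
// producer.write(frame) // returns how many samples actually fit

// Consumer (worklet): fill one 128-sample render quantum per channel.
const consumer = new Ring(shared)
const quantum = [new Float32Array(128), new Float32Array(128)]
const filled = consumer.read(quantum) // filled < 128 means the buffer underflowed
```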
@ -1,24 +1,21 @@
import * as Message from "./message";
-export default class Renderer {
+export default class Video {
canvas: OffscreenCanvas;
queue: Array<VideoFrame>;
render: number; // non-zero if requestAnimationFrame has been called
-sync?: DOMHighResTimeStamp; // the wall clock value for timestamp 0
+sync?: number; // the wall clock value for timestamp 0, in microseconds
-last?: number; // the timestamp of the last rendered frame
+last?: number; // the timestamp of the last rendered frame, in microseconds
constructor(config: Message.Config) {
this.canvas = config.canvas;
this.queue = [];
this.render = 0;
}
-emit(frame: VideoFrame) {
+push(frame: VideoFrame) {
if (!this.sync) {
// Save the frame as the sync point
this.sync = performance.now() - frame.timestamp
}
// Drop any old frames
if (this.last && frame.timestamp <= this.last) {
frame.close()
@ -35,44 +32,54 @@ export default class Renderer {
let high = this.queue.length;
while (low < high) {
-var mid = (low + high) >>> 1;
+const mid = (low + high) >>> 1;
if (this.queue[mid].timestamp < frame.timestamp) low = mid + 1;
else high = mid;
}
this.queue.splice(low, 0, frame)
}
// Queue up to render the next frame.
if (!this.render) {
this.render = self.requestAnimationFrame(this.draw.bind(this))
}
}
-draw(now: DOMHighResTimeStamp) {
+draw(now: number) {
-// Determine the target timestamp.
+// Draw and then queue up the next draw call.
-const target = now - this.sync!
+this.drawOnce(now);
-let frame = this.queue[0]
+// Queue up the new draw frame.
if (frame.timestamp >= target) {
// nothing to render yet, wait for the next animation frame
this.render = self.requestAnimationFrame(this.draw.bind(this))
}
drawOnce(now: number) {
// Convert to microseconds
now *= 1000;
if (!this.queue.length) {
return
}
-this.queue.shift()
+let frame = this.queue[0];
if (!this.sync) {
this.sync = now - frame.timestamp;
}
// Determine the target timestamp.
const target = now - this.sync
if (frame.timestamp >= target) {
// nothing to render yet, wait for the next animation frame
return
}
this.queue.shift();
// Check if we should skip some frames
while (this.queue.length) {
const next = this.queue[0]
-if (next.timestamp > target) {
+if (next.timestamp > target) break
break
}
frame.close()
frame = this.queue.shift()!;
this.queue.shift()
frame = next
}
const ctx = this.canvas.getContext("2d");
@ -80,12 +87,12 @@ export default class Renderer {
this.last = frame.timestamp;
frame.close()
}
-if (this.queue.length > 0) {
+play(play: Message.Play) {
// Queue up to render the next frame.
if (!this.render) {
this.render = self.requestAnimationFrame(this.draw.bind(this))
} else {
// Break the loop for now
this.render = 0
}
}
}
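The new drawOnce anchors the clock on the first frame it sees: sync = now - frame.timestamp (both in microseconds once now is scaled), and later calls render any frame whose timestamp falls below target = now - sync. A worked example with illustrative numbers:

```
// Illustrative numbers: the first frame has timestamp 0 and arrives at now = 1_000_000µs.
const sync = 1_000_000 - 0   // wall clock value for media timestamp 0

// One display refresh later (~16.7ms at 60Hz):
const now = 1_016_667
const target = now - sync    // = 16_667µs of media time
// A queued frame with timestamp 16_000µs gets drawn; one at 33_333µs waits for a later call.
```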
@ -13,10 +13,13 @@ self.addEventListener('message', async (e: MessageEvent) => {
decoder = new Decoder(renderer)
} else if (e.data.init) {
const init = e.data.init as Message.Init
-await decoder.init(init)
+await decoder.receiveInit(init)
} else if (e.data.segment) {
const segment = e.data.segment as Message.Segment
-await decoder.decode(segment)
+await decoder.receiveSegment(segment)
} else if (e.data.play) {
const play = e.data.play as Message.Play
await renderer.play(play)
}
})
@ -19,19 +19,19 @@ class Renderer extends AudioWorkletProcessor {
}
onMessage(e: MessageEvent) {
-if (e.data.config) {
+if (e.data.play) {
-this.config(e.data.config)
+this.onPlay(e.data.play)
}
}
-config(config: Message.Config) {
+onPlay(play: Message.Play) {
-this.ring = new Ring(config.ring)
+this.ring = new Ring(play.buffer)
}
// Inputs and outputs in groups of 128 samples.
process(inputs: Float32Array[][], outputs: Float32Array[][], parameters: Record<string, Float32Array>): boolean {
if (!this.ring) {
-// Not initialized yet
+// Paused
return true
}
@ -40,7 +40,11 @@ class Renderer extends AudioWorkletProcessor {
}
const output = outputs[0]
this.ring.read(output)
const size = this.ring.read(output)
if (size < output.length) {
// TODO trigger rebuffering event
}
return true;
}
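process() runs once per 128-sample render quantum, so at the assumed 44.1 kHz output rate the worklet pulls roughly 2.9 ms of audio per callback; when read() returns fewer samples than requested, the untouched remainder of the zero-initialized output stays silent, which is where the rebuffering TODO would hook in. The arithmetic:

```
const quantum = 128
const sampleRate = 44100
const callbackMs = quantum / sampleRate * 1000 // about 2.9ms of audio per process() call
```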
@ -0,0 +1,96 @@
import * as Stream from "../stream"
import * as Interface from "./interface"
export interface Config {
url: string;
fingerprint?: WebTransportHash; // the certificate fingerprint, temporarily needed for local development
}
export default class Transport {
quic: Promise<WebTransport>;
api: Promise<WritableStream>;
callback?: Interface.Callback;
constructor(config: Config) {
this.quic = this.connect(config)
// Create a unidirectional stream for all of our messages
this.api = this.quic.then((q) => {
return q.createUnidirectionalStream()
})
// async functions
this.receiveStreams()
}
async close() {
(await this.quic).close()
}
// Helper function to make creating a promise easier
private async connect(config: Config): Promise<WebTransport> {
let options: WebTransportOptions = {};
if (config.fingerprint) {
options.serverCertificateHashes = [ config.fingerprint ]
}
const quic = new WebTransport(config.url, options)
await quic.ready
return quic
}
async sendMessage(msg: any) {
const payload = JSON.stringify(msg)
const size = payload.length + 8
const stream = await this.api
const writer = new Stream.Writer(stream)
await writer.uint32(size)
await writer.string("warp")
await writer.string(payload)
writer.release()
}
async receiveStreams() {
const q = await this.quic
const streams = q.incomingUnidirectionalStreams.getReader()
for (;;) {
const result = await streams.read()
if (result.done) break
const stream = result.value
this.handleStream(stream) // don't await
}
}
async handleStream(stream: ReadableStream) {
const r = new Stream.Reader(stream)
while (!await r.done()) {
const size = await r.uint32();
const typ = new TextDecoder('utf-8').decode(await r.bytes(4));
if (typ != "warp") throw "expected warp atom"
if (size < 8) throw "atom too small"
const payload = new TextDecoder('utf-8').decode(await r.bytes(size - 8));
const msg = JSON.parse(payload)
if (msg.init) {
return this.callback?.onInit({
buffer: r.buffer,
reader: r.reader,
})
} else if (msg.segment) {
return this.callback?.onSegment({
buffer: r.buffer,
reader: r.reader,
})
} else {
console.warn("unknown message", msg);
}
}
}
}
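sendMessage and handleStream agree on a small framing layer: a 4-byte big-endian size (covering the size field, the tag, and the JSON), the 4-byte ASCII tag "warp", then the JSON payload; for init/segment streams everything after that atom is the raw MP4 data handed to the decoder. Note that the sender above sizes the atom from payload.length (UTF-16 code units), which only equals the encoded byte length for ASCII payloads. A minimal standalone encoder sketch using byte lengths:

```
// Hypothetical encoder for the "warp" control atom described above.
function encodeWarp(msg: unknown): Uint8Array {
	const payload = new TextEncoder().encode(JSON.stringify(msg))
	const atom = new Uint8Array(8 + payload.byteLength)
	const view = new DataView(atom.buffer)

	view.setUint32(0, atom.byteLength)            // size, big-endian, includes the 8-byte header
	atom.set(new TextEncoder().encode("warp"), 4) // 4-byte type tag
	atom.set(payload, 8)                          // JSON body, e.g. {"init":{}}
	return atom
}
```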
@ -0,0 +1,14 @@
export interface Callback {
onInit(init: Init): any
onSegment(segment: Segment): any
}
export interface Init {
buffer: Uint8Array; // unread buffered data
reader: ReadableStream; // unread unbuffered data
}
export interface Segment {
buffer: Uint8Array; // unread buffered data
reader: ReadableStream; // unread unbuffered data
}
@ -0,0 +1,6 @@
export interface Init {}
export interface Segment {}
export interface Debug {
max_bitrate: number
}