Update to a more general Device and Stream API. Add support for input streams (e.g. microphone). Add default format methods. (#201)

* Update to a more general Device and Stream API

This update prepares for adding input stream support by removing the
`Endpoint` type (which only supports output streams) in favour of a more
general `Device` type which may support any number of input or output
streams. Previously discussed at #117.

The name `Voice` has been replaced with the more ubiquitous name
`Stream`. See #118 for justification.

Also introduces a new `StreamData` type, which is now passed to the
`EventLoop::run` callback in place of the old `UnknownTypeBuffer`.
`StreamData` allows for passing either `Input` data to be read or
`Output` data to be written.
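The shape of the new callback can be sketched as follows. This is an illustrative, self-contained sketch rather than cpal's real API: the enums below are minimal stand-ins for the `StreamData`, `UnknownTypeInputBuffer` and `UnknownTypeOutputBuffer` types described above.

```rust
// Minimal stand-ins for the cpal types described above; the real types carry
// typed sample buffers for U16, I16 and F32 data.
enum UnknownTypeInputBuffer { F32(Vec<f32>) }
enum UnknownTypeOutputBuffer { F32(Vec<f32>) }

enum StreamData {
    Input { buffer: UnknownTypeInputBuffer },
    Output { buffer: UnknownTypeOutputBuffer },
}

fn handle(data: StreamData) -> &'static str {
    match data {
        // Input data is read out of the buffer...
        StreamData::Input { buffer: UnknownTypeInputBuffer::F32(samples) } => {
            let _sum: f32 = samples.iter().sum();
            "read input"
        }
        // ...while output data is written into it.
        StreamData::Output { buffer: UnknownTypeOutputBuffer::F32(mut samples) } => {
            for s in samples.iter_mut() {
                *s = 0.0;
            }
            "wrote output"
        }
    }
}

fn main() {
    assert_eq!(handle(StreamData::Input { buffer: UnknownTypeInputBuffer::F32(vec![0.5, 0.25]) }), "read input");
    assert_eq!(handle(StreamData::Output { buffer: UnknownTypeOutputBuffer::F32(vec![0.0; 4]) }), "wrote output");
}
```

The key point is that a single callback now services both directions: input arms read from the buffer, while output arms write into it.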

The `beep.rs` example has been updated for the API changes.

None of the backends have been updated for this API change yet. Backends
will be updated in the following commits.

Closes #117.
Closes #118.

* Update ALSA backend for new `Device` and `Stream` API.

* Update wasapi backend for new `Device` and `Stream` API.

* Update enumerate.rs example for new `Device` and `Stream` API.

* Update coreaudio backend for new `Device` and `Stream` API.

* Fix lib doc tests for Device and Stream API update

* Update emscripten backend for new `Device` and `Stream` API.

* Update null backend for new `Device` and `Stream` API.

* Merge match exprs in beep.rs example

* Add Input variant along with UnknownTypeInputBuffer and InputBuffer

UnknownTypeBuffer and Buffer have been renamed to
UnknownTypeOutputBuffer and OutputBuffer respectively.

No backends have yet been updated for this name change or the addition
of the InputBuffer.

* Update null backend for introduction of InputBuffer

* Update emscripten backend for introduction of InputBuffer

* Make `InputBuffer`'s inner field an `Option` so that `finish` can be called in `drop`
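The `Option` wrapper mentioned here is a common Rust pattern: `Drop::drop` only receives `&mut self`, so a method that consumes its receiver by value (like a `finish(self)`) cannot be called on a plain field. A minimal self-contained sketch of the pattern, with stand-in names rather than cpal's actual internals:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Counts how many times `finish` has run, so the behaviour is observable.
static FINISHED: AtomicUsize = AtomicUsize::new(0);

// Stand-in for a backend buffer whose `finish` consumes it by value.
struct RawBuffer;

impl RawBuffer {
    fn finish(self) {
        FINISHED.fetch_add(1, Ordering::SeqCst);
    }
}

// Wrapping the field in `Option` lets `drop` (which only has `&mut self`)
// move the inner value out with `take()` and call the by-value `finish`.
struct InputBuffer {
    inner: Option<RawBuffer>,
}

impl Drop for InputBuffer {
    fn drop(&mut self) {
        if let Some(raw) = self.inner.take() {
            raw.finish();
        }
    }
}

fn main() {
    {
        let _buf = InputBuffer { inner: Some(RawBuffer) };
    } // `drop` runs here and calls `finish` exactly once.
    assert_eq!(FINISHED.load(Ordering::SeqCst), 1);
}
```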

* Update alsa backend for introduction of InputBuffer

* Update wasapi backend for introduction of InputBuffer

* Update coreaudio backend for introduction of InputBuffer

* Update enumerate.rs example to provide more detail about devices

The enumerate.rs example now also displays:

- Supported input stream formats.
- Default input stream format.
- Default output stream format.

This should also be useful for testing the progress of #201.

* Add `record_wav.rs` example for demonstrating input streams

Records a ~3 second WAV file to `$CARGO_MANIFEST_DIR/recorded.wav` using
the default input device and default input format.

Uses hound 3.0 to create and write to the WAV file.

This should also be useful for testing the input stream implementations
for each different cpal backend.

* Implement input stream support for coreaudio backend

This implements the following for the coreaudio backend:

- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream

The `enumerate.rs` and `record_wav.rs` examples now work successfully on
macOS.

* Add `SupportedFormat::cmp_default_heuristics` method

This adds a comparison function which compares two `SupportedFormat`s in
terms of their priority of use as a default stream format.

Some backends (such as ALSA) do not provide a default stream format for
their audio devices. In these cases, CPAL attempts to decide on a
reasonable default format for the user by taking the "greatest" of all
supported stream formats under this comparison.
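As a sketch of this selection strategy, here is a self-contained, simplified version. The `SupportedFormat` stand-in and the exact heuristic below are hypothetical; only the sort-then-take-greatest approach mirrors the description above:

```rust
use std::cmp::Ordering;

// Simplified stand-in for `SupportedFormat`: only channel count and sample rate.
#[derive(Debug, Clone, PartialEq)]
struct SupportedFormat {
    channels: u16,
    max_sample_rate: u32,
}

impl SupportedFormat {
    // Hypothetical heuristic: prefer more channels, then a higher sample rate.
    // The real `cmp_default_heuristics` weighs more properties (e.g. sample type).
    fn cmp_default_heuristics(&self, other: &Self) -> Ordering {
        self.channels
            .cmp(&other.channels)
            .then(self.max_sample_rate.cmp(&other.max_sample_rate))
    }
}

fn pick_default(mut formats: Vec<SupportedFormat>) -> Option<SupportedFormat> {
    // Sort ascending by priority and take the "greatest" format as the default.
    formats.sort_by(|a, b| a.cmp_default_heuristics(b));
    formats.into_iter().last()
}

fn main() {
    let formats = vec![
        SupportedFormat { channels: 1, max_sample_rate: 48_000 },
        SupportedFormat { channels: 2, max_sample_rate: 44_100 },
        SupportedFormat { channels: 2, max_sample_rate: 48_000 },
    ];
    let best = pick_default(formats).unwrap();
    assert_eq!(best, SupportedFormat { channels: 2, max_sample_rate: 48_000 });
}
```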

* Implement input stream support for ALSA backend

This implements the following for the ALSA backend:

- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream

Note that ALSA itself does not give default stream formats for its
devices. Thus the newly added `SupportedFormat::cmp_default_heuristics`
method is used to determine the most suitable, supported stream format
to use as the default.

The `enumerate.rs` and `record_wav.rs` examples now work successfully on
Linux.

* Implement input stream support for wasapi backend

This implements the following for the wasapi backend:

- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream

Note that wasapi does not enumerate supported input/output stream
formats for its devices. Instead, we probe for supported formats
ourselves via the `IsFormatSupported` method.

* Fix some warnings in the alsa backend

* Update CHANGELOG for introduction of input streams and related items

* Update README to show latest features supported by CPAL

* Simplify beep example using Device::default_output_format

* Remove old commented code from wasapi/stream.rs
Authored by mitchmindtree on 2018-02-13 00:10:24 +11:00; committed by Pierre Krieger.
Parent commit: b47e46a4ac
This commit: c38bbb26e4
19 changed files with 3441 additions and 1891 deletions.

.gitignore

@@ -2,3 +2,4 @@
 /Cargo.lock
 .cargo/
 .DS_Store
+recorded.wav

CHANGELOG.md

@@ -1,5 +1,19 @@
 # Unreleased

+- Add `record_wav.rs` example. Records 3 seconds to
+  `$CARGO_MANIFEST_DIR/recorded.wav` using default input device.
+- Update `enumerate.rs` example to display default input/output devices and
+  formats.
+- Add input stream support to coreaudio, alsa and windows backends.
+- Introduce `StreamData` type for handling either input or output streams in
+  `EventLoop::run` callback.
+- Add `Device::supported_{input/output}_formats` methods.
+- Add `Device::default_{input/output}_format` methods.
+- Add `default_{input/output}_device` functions.
+- Replace usage of `Voice` with `Stream` throughout the crate.
+- Remove `Endpoint` in favour of `Device` for supporting both input and output
+  streams.
+
 # Version 0.7.0 (2018-02-04)

 - Rename ChannelsCount to ChannelCount.

Cargo.toml

@@ -11,15 +11,18 @@ keywords = ["audio", "sound"]
 [dependencies]
 lazy_static = "0.2"

+[dev-dependencies]
+hound = "3.0"
+
 [target.'cfg(target_os = "windows")'.dependencies]
-winapi = { version = "0.3", features = ["audiosessiontypes", "audioclient", "combaseapi", "debug", "handleapi", "ksmedia", "mmdeviceapi", "objbase", "std", "synchapi", "winuser"] }
+winapi = { version = "0.3", features = ["audiosessiontypes", "audioclient", "coml2api", "combaseapi", "debug", "devpkey", "handleapi", "ksmedia", "mmdeviceapi", "objbase", "std", "synchapi", "winuser"] }

 [target.'cfg(any(target_os = "linux", target_os = "dragonfly", target_os = "freebsd", target_os = "openbsd"))'.dependencies]
 alsa-sys = { version = "0.1", path = "alsa-sys" }
 libc = "0.2"

 [target.'cfg(any(target_os = "macos", target_os = "ios"))'.dependencies]
-coreaudio-rs = { version = "0.8.1", default-features = false, features = ["audio_unit", "core_audio"] }
+coreaudio-rs = { version = "0.9.0", default-features = false, features = ["audio_unit", "core_audio"] }
 core-foundation-sys = "0.5.1" # For linking to CoreFoundation.framework and handling device name `CFString`s.

 [target.'cfg(target_os = "emscripten")'.dependencies]

README.md

@@ -1,8 +1,21 @@
-# CPAL - Cross-platform audio library
+# CPAL - Cross-Platform Audio Library

-[Documentation](https://docs.rs/cpal)
+[![Build Status](https://travis-ci.org/tomaka/cpal.svg?branch=master)](https://travis-ci.org/tomaka/cpal) [![Crates.io](https://img.shields.io/crates/v/cpal.svg)](https://crates.io/crates/cpal) [![docs.rs](https://docs.rs/cpal/badge.svg)](https://docs.rs/cpal/)

-Low-level library for audio playback in pure Rust.
+Low-level library for audio input and output in pure Rust.

-This library allows you to open a channel with the audio device of the user's machine, and
-send PCM data to it.
+This library currently supports the following:
+
+- Enumerate all available audio devices.
+- Get the current default input and output devices.
+- Enumerate known supported input and output stream formats for a device.
+- Get the current default input and output stream formats for a device.
+- Build and run input and output PCM streams on a chosen device with a given stream format.
+
+Currently supported backends include:
+
+- Linux (via ALSA)
+- Windows
+- macOS (via CoreAudio)
+- iOS (via CoreAudio)
+- Emscripten

examples/beep.rs

@@ -1,17 +1,11 @@
 extern crate cpal;

 fn main() {
-    let endpoint = cpal::default_endpoint().expect("Failed to get default endpoint");
-    let format = endpoint
-        .supported_formats()
-        .unwrap()
-        .next()
-        .expect("Failed to get endpoint format")
-        .with_max_sample_rate();
+    let device = cpal::default_output_device().expect("Failed to get default output device");
+    let format = device.default_output_format().expect("Failed to get default output format");
     let event_loop = cpal::EventLoop::new();
-    let voice_id = event_loop.build_voice(&endpoint, &format).unwrap();
-    event_loop.play(voice_id);
+    let stream_id = event_loop.build_output_stream(&device, &format).unwrap();
+    event_loop.play_stream(stream_id.clone());

     let sample_rate = format.sample_rate.0 as f32;
     let mut sample_clock = 0f32;

@@ -22,9 +16,9 @@ fn main() {
         (sample_clock * 440.0 * 2.0 * 3.141592 / sample_rate).sin()
     };

-    event_loop.run(move |_, buffer| {
-        match buffer {
-            cpal::UnknownTypeBuffer::U16(mut buffer) => {
+    event_loop.run(move |_, data| {
+        match data {
+            cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::U16(mut buffer) } => {
                 for sample in buffer.chunks_mut(format.channels as usize) {
                     let value = ((next_value() * 0.5 + 0.5) * std::u16::MAX as f32) as u16;
                     for out in sample.iter_mut() {

@@ -32,8 +26,7 @@ fn main() {
                     }
                 }
             },
-            cpal::UnknownTypeBuffer::I16(mut buffer) => {
+            cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::I16(mut buffer) } => {
                 for sample in buffer.chunks_mut(format.channels as usize) {
                     let value = (next_value() * std::i16::MAX as f32) as i16;
                     for out in sample.iter_mut() {

@@ -41,8 +34,7 @@ fn main() {
                     }
                 }
             },
-            cpal::UnknownTypeBuffer::F32(mut buffer) => {
+            cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::F32(mut buffer) } => {
                 for sample in buffer.chunks_mut(format.channels as usize) {
                     let value = next_value();
                     for out in sample.iter_mut() {

@@ -50,6 +42,7 @@ fn main() {
                     }
                 }
             },
-        };
+            _ => (),
+        }
     });
 }

examples/enumerate.rs

@@ -1,25 +1,50 @@
 extern crate cpal;

 fn main() {
-    println!("Default Endpoint:\n  {:?}", cpal::default_endpoint().map(|e| e.name()));
+    println!("Default Input Device:\n  {:?}", cpal::default_input_device().map(|e| e.name()));
+    println!("Default Output Device:\n  {:?}", cpal::default_output_device().map(|e| e.name()));

-    let endpoints = cpal::endpoints();
-    println!("Endpoints: ");
-    for (endpoint_index, endpoint) in endpoints.enumerate() {
-        println!("{}. Endpoint \"{}\" Audio formats: ",
-                 endpoint_index + 1,
-                 endpoint.name());
-        let formats = match endpoint.supported_formats() {
-            Ok(f) => f,
+    let devices = cpal::devices();
+    println!("Devices: ");
+    for (device_index, device) in devices.enumerate() {
+        println!("{}. \"{}\"",
+                 device_index + 1,
+                 device.name());
+
+        // Input formats
+        if let Ok(fmt) = device.default_input_format() {
+            println!("  Default input stream format:\n      {:?}", fmt);
+        }
+        let mut input_formats = match device.supported_input_formats() {
+            Ok(f) => f.peekable(),
             Err(e) => {
                 println!("Error: {:?}", e);
                 continue;
             },
         };
+        if input_formats.peek().is_some() {
+            println!("  All supported input stream formats:");
+            for (format_index, format) in input_formats.enumerate() {
+                println!("    {}.{}. {:?}", device_index + 1, format_index + 1, format);
+            }
+        }

-        for (format_index, format) in formats.enumerate() {
-            println!("{}.{}. {:?}", endpoint_index + 1, format_index + 1, format);
+        // Output formats
+        if let Ok(fmt) = device.default_output_format() {
+            println!("  Default output stream format:\n      {:?}", fmt);
+        }
+        let mut output_formats = match device.supported_output_formats() {
+            Ok(f) => f.peekable(),
+            Err(e) => {
+                println!("Error: {:?}", e);
+                continue;
+            },
+        };
+        if output_formats.peek().is_some() {
+            println!("  All supported output stream formats:");
+            for (format_index, format) in output_formats.enumerate() {
+                println!("    {}.{}. {:?}", device_index + 1, format_index + 1, format);
+            }
         }
     }
 }

examples/record_wav.rs (new file, 95 lines)
//! Records a WAV file (roughly 3 seconds long) using the default input device and format.
//!
//! The input data is recorded to "$CARGO_MANIFEST_DIR/recorded.wav".
extern crate cpal;
extern crate hound;
fn main() {
// Setup the default input device and stream with the default input format.
let device = cpal::default_input_device().expect("Failed to get default input device");
println!("Default input device: {}", device.name());
let format = device.default_input_format().expect("Failed to get default input format");
println!("Default input format: {:?}", format);
let event_loop = cpal::EventLoop::new();
let stream_id = event_loop.build_input_stream(&device, &format)
.expect("Failed to build input stream");
event_loop.play_stream(stream_id);
// The WAV file we're recording to.
const PATH: &'static str = concat!(env!("CARGO_MANIFEST_DIR"), "/recorded.wav");
let spec = wav_spec_from_format(&format);
let writer = hound::WavWriter::create(PATH, spec).unwrap();
let writer = std::sync::Arc::new(std::sync::Mutex::new(Some(writer)));
// A flag to indicate that recording is in progress.
println!("Begin recording...");
let recording = std::sync::Arc::new(std::sync::atomic::AtomicBool::new(true));
// Run the input stream on a separate thread.
let writer_2 = writer.clone();
let recording_2 = recording.clone();
std::thread::spawn(move || {
event_loop.run(move |_, data| {
// If we're done recording, return early.
if !recording_2.load(std::sync::atomic::Ordering::Relaxed) {
return;
}
// Otherwise write to the wav writer.
match data {
cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::U16(buffer) } => {
if let Ok(mut guard) = writer_2.try_lock() {
if let Some(writer) = guard.as_mut() {
for sample in buffer.iter() {
let sample = cpal::Sample::to_i16(sample);
writer.write_sample(sample).ok();
}
}
}
},
cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::I16(buffer) } => {
if let Ok(mut guard) = writer_2.try_lock() {
if let Some(writer) = guard.as_mut() {
for &sample in buffer.iter() {
writer.write_sample(sample).ok();
}
}
}
},
cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::F32(buffer) } => {
if let Ok(mut guard) = writer_2.try_lock() {
if let Some(writer) = guard.as_mut() {
for &sample in buffer.iter() {
writer.write_sample(sample).ok();
}
}
}
},
_ => (),
}
});
});
// Let recording go for roughly three seconds.
std::thread::sleep(std::time::Duration::from_secs(3));
recording.store(false, std::sync::atomic::Ordering::Relaxed);
writer.lock().unwrap().take().unwrap().finalize().unwrap();
println!("Recording {} complete!", PATH);
}
fn sample_format(format: cpal::SampleFormat) -> hound::SampleFormat {
match format {
cpal::SampleFormat::U16 => hound::SampleFormat::Int,
cpal::SampleFormat::I16 => hound::SampleFormat::Int,
cpal::SampleFormat::F32 => hound::SampleFormat::Float,
}
}
fn wav_spec_from_format(format: &cpal::Format) -> hound::WavSpec {
hound::WavSpec {
channels: format.channels as _,
sample_rate: format.sample_rate.0 as _,
bits_per_sample: (format.data_type.sample_size() * 8) as _,
sample_format: sample_format(format.data_type),
}
}

src/alsa/enumerate.rs

@@ -1,14 +1,11 @@
-use super::Endpoint;
+use super::Device;
 use super::alsa;
 use super::check_errors;
-use super::libc;

-use std::ffi::CStr;
 use std::ffi::CString;
 use std::mem;

-/// ALSA implementation for `EndpointsIterator`.
-pub struct EndpointsIterator {
+/// ALSA implementation for `Devices`.
+pub struct Devices {
     // we keep the original list so that we can pass it to the free function
     global_list: *const *const u8,

@@ -16,12 +13,12 @@ pub struct EndpointsIterator {
     next_str: *const *const u8,
 }

-unsafe impl Send for EndpointsIterator {
+unsafe impl Send for Devices {
 }
-unsafe impl Sync for EndpointsIterator {
+unsafe impl Sync for Devices {
 }

-impl Drop for EndpointsIterator {
+impl Drop for Devices {
     #[inline]
     fn drop(&mut self) {
         unsafe {

@@ -30,8 +27,8 @@ impl Drop for EndpointsIterator {
     }
 }

-impl Default for EndpointsIterator {
-    fn default() -> EndpointsIterator {
+impl Default for Devices {
+    fn default() -> Devices {
         unsafe {
             let mut hints = mem::uninitialized();
             // TODO: check in which situation this can fail

@@ -40,7 +37,7 @@ impl Default for EndpointsIterator {
             let hints = hints as *const *const u8;

-            EndpointsIterator {
+            Devices {
                 global_list: hints,
                 next_str: hints,
             }

@@ -48,10 +45,10 @@ impl Default for EndpointsIterator {
     }
 }

-impl Iterator for EndpointsIterator {
-    type Item = Endpoint;
+impl Iterator for Devices {
+    type Item = Device;

-    fn next(&mut self) -> Option<Endpoint> {
+    fn next(&mut self) -> Option<Device> {
         loop {
             unsafe {
                 if (*self.next_str).is_null() {

@@ -62,10 +59,9 @@ impl Iterator for EndpointsIterator {
                     let n_ptr = alsa::snd_device_name_get_hint(*self.next_str as *const _,
                                                                b"NAME\0".as_ptr() as *const _);
                     if !n_ptr.is_null() {
-                        let n = CStr::from_ptr(n_ptr).to_bytes().to_vec();
-                        let n = String::from_utf8(n).unwrap();
-                        libc::free(n_ptr as *mut _);
-                        Some(n)
+                        let bytes = CString::from_raw(n_ptr).into_bytes();
+                        let string = String::from_utf8(bytes).unwrap();
+                        Some(string)
                     } else {
                         None
                     }

@@ -75,10 +71,9 @@ impl Iterator for EndpointsIterator {
                     let n_ptr = alsa::snd_device_name_get_hint(*self.next_str as *const _,
                                                                b"IOID\0".as_ptr() as *const _);
                     if !n_ptr.is_null() {
-                        let n = CStr::from_ptr(n_ptr).to_bytes().to_vec();
-                        let n = String::from_utf8(n).unwrap();
-                        libc::free(n_ptr as *mut _);
-                        Some(n)
+                        let bytes = CString::from_raw(n_ptr).into_bytes();
+                        let string = String::from_utf8(bytes).unwrap();
+                        Some(string)
                     } else {
                         None
                     }

@@ -92,24 +87,46 @@ impl Iterator for EndpointsIterator {
                     }
                 }

-                if let Some(name) = name {
-                    // trying to open the PCM device to see if it can be opened
-                    let name_zeroed = CString::new(name.clone()).unwrap();
-                    let mut playback_handle = mem::uninitialized();
-                    if alsa::snd_pcm_open(&mut playback_handle,
-                                          name_zeroed.as_ptr() as *const _,
-                                          alsa::SND_PCM_STREAM_PLAYBACK,
-                                          alsa::SND_PCM_NONBLOCK) == 0
-                    {
-                        alsa::snd_pcm_close(playback_handle);
-                    } else {
-                        continue;
-                    }
-
-                    // ignoring the `null` device
-                    if name != "null" {
-                        return Some(Endpoint(name));
-                    }
-                }
+                let name = match name {
+                    Some(name) => {
+                        // Ignoring the `null` device.
+                        if name == "null" {
+                            continue;
+                        }
+                        name
+                    },
+                    _ => continue,
+                };
+
+                // trying to open the PCM device to see if it can be opened
+                let name_zeroed = CString::new(&name[..]).unwrap();
+
+                // See if the device has an available output stream.
+                let mut playback_handle = mem::uninitialized();
+                let has_available_output = alsa::snd_pcm_open(
+                    &mut playback_handle,
+                    name_zeroed.as_ptr() as *const _,
+                    alsa::SND_PCM_STREAM_PLAYBACK,
+                    alsa::SND_PCM_NONBLOCK,
+                ) == 0;
+                if has_available_output {
+                    alsa::snd_pcm_close(playback_handle);
+                }
+
+                // See if the device has an available input stream.
+                let mut capture_handle = mem::uninitialized();
+                let has_available_input = alsa::snd_pcm_open(
+                    &mut capture_handle,
+                    name_zeroed.as_ptr() as *const _,
+                    alsa::SND_PCM_STREAM_CAPTURE,
+                    alsa::SND_PCM_NONBLOCK,
+                ) == 0;
+                if has_available_input {
+                    alsa::snd_pcm_close(capture_handle);
+                }
+
+                if has_available_output || has_available_input {
+                    return Some(Device(name));
+                }
             }
         }

@@ -117,6 +134,11 @@ impl Iterator for EndpointsIterator {
 }

 #[inline]
-pub fn default_endpoint() -> Option<Endpoint> {
-    Some(Endpoint("default".to_owned()))
+pub fn default_input_device() -> Option<Device> {
+    Some(Device("default".to_owned()))
+}
+
+#[inline]
+pub fn default_output_device() -> Option<Device> {
+    Some(Device("default".to_owned()))
 }

View File

@ -1,23 +1,27 @@
extern crate alsa_sys as alsa; extern crate alsa_sys as alsa;
extern crate libc; extern crate libc;
pub use self::enumerate::{EndpointsIterator, default_endpoint}; pub use self::enumerate::{Devices, default_input_device, default_output_device};
use ChannelCount; use ChannelCount;
use CreationError; use CreationError;
use DefaultFormatError;
use Format; use Format;
use FormatsEnumerationError; use FormatsEnumerationError;
use SampleFormat; use SampleFormat;
use SampleRate; use SampleRate;
use StreamData;
use SupportedFormat; use SupportedFormat;
use UnknownTypeBuffer; use UnknownTypeInputBuffer;
use UnknownTypeOutputBuffer;
use std::{cmp, ffi, iter, mem, ptr}; use std::{cmp, ffi, iter, mem, ptr};
use std::sync::Mutex; use std::sync::Mutex;
use std::sync::atomic::{AtomicUsize, Ordering}; use std::sync::atomic::{AtomicUsize, Ordering};
use std::vec::IntoIter as VecIntoIter; use std::vec::IntoIter as VecIntoIter;
pub type SupportedFormatsIterator = VecIntoIter<SupportedFormat>; pub type SupportedInputFormats = VecIntoIter<SupportedFormat>;
pub type SupportedOutputFormats = VecIntoIter<SupportedFormat>;
mod enumerate; mod enumerate;
@ -64,24 +68,35 @@ impl Drop for Trigger {
#[derive(Clone, Debug, PartialEq, Eq)] #[derive(Clone, Debug, PartialEq, Eq)]
pub struct Endpoint(String); pub struct Device(String);
impl Endpoint { impl Device {
pub fn supported_formats(&self) -> Result<SupportedFormatsIterator, FormatsEnumerationError> { #[inline]
unsafe { pub fn name(&self) -> String {
let mut playback_handle = mem::uninitialized(); self.0.clone()
let device_name = ffi::CString::new(self.0.clone()).expect("Unable to get device name"); }
match alsa::snd_pcm_open(&mut playback_handle, device_name.as_ptr() as *const _, unsafe fn supported_formats(
alsa::SND_PCM_STREAM_PLAYBACK, alsa::SND_PCM_NONBLOCK) &self,
stream_t: alsa::snd_pcm_stream_t,
) -> Result<VecIntoIter<SupportedFormat>, FormatsEnumerationError>
{ {
let mut handle = mem::uninitialized();
let device_name = ffi::CString::new(&self.0[..]).expect("Unable to get device name");
match alsa::snd_pcm_open(
&mut handle,
device_name.as_ptr() as *const _,
stream_t,
alsa::SND_PCM_NONBLOCK,
) {
-2 | -2 |
-16 /* determined empirically */ => return Err(FormatsEnumerationError::DeviceNotAvailable), -16 /* determined empirically */ => return Err(FormatsEnumerationError::DeviceNotAvailable),
e => check_errors(e).expect("device not available") e => check_errors(e).expect("device not available")
} }
let hw_params = HwParams::alloc(); let hw_params = HwParams::alloc();
match check_errors(alsa::snd_pcm_hw_params_any(playback_handle, hw_params.0)) { match check_errors(alsa::snd_pcm_hw_params_any(handle, hw_params.0)) {
Err(_) => return Ok(Vec::new().into_iter()), Err(_) => return Ok(Vec::new().into_iter()),
Ok(_) => (), Ok(_) => (),
}; };
@ -130,7 +145,7 @@ impl Endpoint {
let mut supported_formats = Vec::new(); let mut supported_formats = Vec::new();
for &(sample_format, alsa_format) in FORMATS.iter() { for &(sample_format, alsa_format) in FORMATS.iter() {
if alsa::snd_pcm_hw_params_test_format(playback_handle, if alsa::snd_pcm_hw_params_test_format(handle,
hw_params.0, hw_params.0,
alsa_format) == 0 alsa_format) == 0
{ {
@ -151,7 +166,7 @@ impl Endpoint {
let sample_rates = if min_rate == max_rate { let sample_rates = if min_rate == max_rate {
vec![(min_rate, max_rate)] vec![(min_rate, max_rate)]
} else if alsa::snd_pcm_hw_params_test_rate(playback_handle, } else if alsa::snd_pcm_hw_params_test_rate(handle,
hw_params.0, hw_params.0,
min_rate + 1, min_rate + 1,
0) == 0 0) == 0
@ -176,7 +191,7 @@ impl Endpoint {
let mut rates = Vec::new(); let mut rates = Vec::new();
for &rate in RATES.iter() { for &rate in RATES.iter() {
if alsa::snd_pcm_hw_params_test_rate(playback_handle, if alsa::snd_pcm_hw_params_test_rate(handle,
hw_params.0, hw_params.0,
rate, rate,
0) == 0 0) == 0
@ -201,7 +216,7 @@ impl Endpoint {
let max_channels = cmp::min(max_channels, 32); // TODO: limiting to 32 channels or too much stuff is returned let max_channels = cmp::min(max_channels, 32); // TODO: limiting to 32 channels or too much stuff is returned
let supported_channels = (min_channels .. max_channels + 1) let supported_channels = (min_channels .. max_channels + 1)
.filter_map(|num| if alsa::snd_pcm_hw_params_test_channels( .filter_map(|num| if alsa::snd_pcm_hw_params_test_channels(
playback_handle, handle,
hw_params.0, hw_params.0,
num, num,
) == 0 ) == 0
@ -228,20 +243,67 @@ impl Endpoint {
} }
// TODO: RAII // TODO: RAII
alsa::snd_pcm_close(playback_handle); alsa::snd_pcm_close(handle);
Ok(output.into_iter()) Ok(output.into_iter())
} }
pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
unsafe {
self.supported_formats(alsa::SND_PCM_STREAM_CAPTURE)
}
} }
#[inline] pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
pub fn name(&self) -> String { unsafe {
self.0.clone() self.supported_formats(alsa::SND_PCM_STREAM_PLAYBACK)
}
}
// ALSA does not offer default stream formats, so instead we compare all supported formats by
// the `SupportedFormat::cmp_default_heuristics` order and select the greatest.
fn default_format(
&self,
stream_t: alsa::snd_pcm_stream_t,
) -> Result<Format, DefaultFormatError>
{
let mut formats: Vec<_> = unsafe {
match self.supported_formats(stream_t) {
Err(FormatsEnumerationError::DeviceNotAvailable) => {
return Err(DefaultFormatError::DeviceNotAvailable);
},
Ok(fmts) => fmts.collect(),
}
};
formats.sort_by(|a, b| a.cmp_default_heuristics(b));
match formats.into_iter().last() {
Some(f) => {
let min_r = f.min_sample_rate;
let max_r = f.max_sample_rate;
let mut format = f.with_max_sample_rate();
const HZ_44100: SampleRate = SampleRate(44_100);
if min_r <= HZ_44100 && HZ_44100 <= max_r {
format.sample_rate = HZ_44100;
}
Ok(format)
},
None => Err(DefaultFormatError::StreamTypeNotSupported)
}
}
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
self.default_format(alsa::SND_PCM_STREAM_CAPTURE)
}
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
self.default_format(alsa::SND_PCM_STREAM_PLAYBACK)
} }
} }
pub struct EventLoop { pub struct EventLoop {
// Each newly-created voice gets a new ID from this counter. The counter is then incremented. // Each newly-created stream gets a new ID from this counter. The counter is then incremented.
next_voice_id: AtomicUsize, // TODO: use AtomicU64 when stable? next_stream_id: AtomicUsize, // TODO: use AtomicU64 when stable?
// A trigger that uses a `pipe()` as backend. Signalled whenever a new command is ready, so // A trigger that uses a `pipe()` as backend. Signalled whenever a new command is ready, so
// that `poll()` can wake up and pick the changes. // that `poll()` can wake up and pick the changes.
@ -263,22 +325,22 @@ unsafe impl Sync for EventLoop {
} }
enum Command { enum Command {
NewVoice(VoiceInner), NewStream(StreamInner),
PlayVoice(VoiceId), PlayStream(StreamId),
PauseVoice(VoiceId), PauseStream(StreamId),
DestroyVoice(VoiceId), DestroyStream(StreamId),
} }
struct RunContext { struct RunContext {
// Descriptors to wait for. Always contains `pending_trigger.read_fd()` as first element. // Descriptors to wait for. Always contains `pending_trigger.read_fd()` as first element.
descriptors: Vec<libc::pollfd>, descriptors: Vec<libc::pollfd>,
// List of voices that are written in `descriptors`. // List of streams that are written in `descriptors`.
voices: Vec<VoiceInner>, streams: Vec<StreamInner>,
} }
struct VoiceInner { struct StreamInner {
// The id of the voice. // The id of the stream.
id: VoiceId, id: StreamId,
// The ALSA channel. // The ALSA channel.
channel: *mut alsa::snd_pcm_t, channel: *mut alsa::snd_pcm_t,
@ -311,7 +373,7 @@ struct VoiceInner {
} }
#[derive(Debug, Clone, PartialEq, Eq, Hash)] #[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct VoiceId(usize); pub struct StreamId(usize);
impl EventLoop { impl EventLoop {
#[inline] #[inline]
@ -328,11 +390,11 @@ impl EventLoop {
let run_context = Mutex::new(RunContext { let run_context = Mutex::new(RunContext {
descriptors: initial_descriptors, descriptors: initial_descriptors,
voices: Vec::new(), streams: Vec::new(),
}); });
EventLoop { EventLoop {
next_voice_id: AtomicUsize::new(0), next_stream_id: AtomicUsize::new(0),
pending_trigger: pending_trigger, pending_trigger: pending_trigger,
run_context, run_context,
commands: Mutex::new(Vec::new()), commands: Mutex::new(Vec::new()),
@ -341,12 +403,12 @@ impl EventLoop {
#[inline]
pub fn run<F>(&self, mut callback: F) -> !
where F: FnMut(StreamId, StreamData)
{
self.run_inner(&mut callback)
}
fn run_inner(&self, callback: &mut FnMut(StreamId, StreamData)) -> ! {
unsafe {
let mut run_context = self.run_context.lock().unwrap();
let run_context = &mut *run_context;
@@ -357,27 +419,27 @@ impl EventLoop {
if !commands_lock.is_empty() {
for command in commands_lock.drain(..) {
match command {
Command::DestroyStream(stream_id) => {
run_context.streams.retain(|s| s.id != stream_id);
},
Command::PlayStream(stream_id) => {
if let Some(stream) = run_context.streams.iter_mut()
.find(|stream| stream.can_pause && stream.id == stream_id)
{
alsa::snd_pcm_pause(stream.channel, 0);
stream.is_paused = false;
}
},
Command::PauseStream(stream_id) => {
if let Some(stream) = run_context.streams.iter_mut()
.find(|stream| stream.can_pause && stream.id == stream_id)
{
alsa::snd_pcm_pause(stream.channel, 1);
stream.is_paused = true;
}
},
Command::NewStream(stream_inner) => {
run_context.streams.push(stream_inner);
},
}
}
@@ -389,18 +451,18 @@ impl EventLoop {
revents: 0,
},
];
for stream in run_context.streams.iter() {
run_context.descriptors.reserve(stream.num_descriptors);
let len = run_context.descriptors.len();
let filled = alsa::snd_pcm_poll_descriptors(stream.channel,
run_context
.descriptors
.as_mut_ptr()
.offset(len as isize),
stream.num_descriptors as
libc::c_uint);
debug_assert_eq!(filled, stream.num_descriptors as libc::c_int);
run_context.descriptors.set_len(len + stream.num_descriptors);
}
}
}
@@ -420,110 +482,290 @@ impl EventLoop {
self.pending_trigger.clear_pipe();
}
// Iterate over each individual stream/descriptor.
let mut i_stream = 0;
let mut i_descriptor = 1;
while (i_descriptor as usize) < run_context.descriptors.len() {
enum StreamType { Input, Output }
let stream_type;
let stream_inner = run_context.streams.get_mut(i_stream).unwrap();
// Check whether the event is `POLLOUT` or `POLLIN`. If neither, `continue`.
{
let mut revent = mem::uninitialized();
{
let num_descriptors = stream_inner.num_descriptors as libc::c_uint;
let desc_ptr =
run_context.descriptors.as_mut_ptr().offset(i_descriptor);
let res = alsa::snd_pcm_poll_descriptors_revents(stream_inner.channel,
desc_ptr,
num_descriptors,
&mut revent);
check_errors(res).unwrap();
}
if revent as i16 == libc::POLLOUT {
stream_type = StreamType::Output;
} else if revent as i16 == libc::POLLIN {
stream_type = StreamType::Input;
} else {
i_descriptor += stream_inner.num_descriptors as isize;
i_stream += 1;
continue;
}
}
// Determine the number of samples that are available to read/write.
let available = {
let available = alsa::snd_pcm_avail(stream_inner.channel); // TODO: what about snd_pcm_avail_update?
if available == -32 {
// buffer underrun
stream_inner.buffer_len
} else if available < 0 {
check_errors(available as libc::c_int)
.expect("buffer is not available");
unreachable!()
} else {
(available * stream_inner.num_channels as alsa::snd_pcm_sframes_t) as
usize
}
};
if available < stream_inner.period_len {
i_descriptor += stream_inner.num_descriptors as isize;
i_stream += 1;
continue;
}
let stream_id = stream_inner.id.clone();
match stream_type {
StreamType::Input => {
// Simplify shared logic across the sample format branches.
macro_rules! read_buffer {
($T:ty, $Variant:ident) => {{
// The buffer to read into.
let mut buffer: Vec<$T> = iter::repeat(mem::uninitialized())
.take(available)
.collect();
let err = alsa::snd_pcm_readi(
stream_inner.channel,
buffer.as_mut_ptr() as *mut _,
available as _,
);
check_errors(err as _).expect("snd_pcm_readi error");
let input_buffer = InputBuffer {
buffer: &buffer,
};
let buffer = UnknownTypeInputBuffer::$Variant(::InputBuffer {
buffer: Some(input_buffer),
});
let stream_data = StreamData::Input { buffer: buffer };
callback(stream_id, stream_data);
}};
}
match stream_inner.sample_format {
SampleFormat::I16 => read_buffer!(i16, I16),
SampleFormat::U16 => read_buffer!(u16, U16),
SampleFormat::F32 => read_buffer!(f32, F32),
}
},
StreamType::Output => {
// We're now sure that we're ready to write data.
let buffer = match stream_inner.sample_format {
SampleFormat::I16 => {
let buffer = OutputBuffer {
stream_inner: stream_inner,
buffer: iter::repeat(mem::uninitialized())
.take(available)
.collect(),
};
UnknownTypeOutputBuffer::I16(::OutputBuffer { target: Some(buffer) })
},
SampleFormat::U16 => {
let buffer = OutputBuffer {
stream_inner: stream_inner,
buffer: iter::repeat(mem::uninitialized())
.take(available)
.collect(),
};
UnknownTypeOutputBuffer::U16(::OutputBuffer { target: Some(buffer) })
},
SampleFormat::F32 => {
let buffer = OutputBuffer {
stream_inner: stream_inner,
// Note that we don't use `mem::uninitialized` because of sNaN.
buffer: iter::repeat(0.0).take(available).collect(),
};
UnknownTypeOutputBuffer::F32(::OutputBuffer { target: Some(buffer) })
},
};
let stream_data = StreamData::Output { buffer: buffer };
callback(stream_id, stream_data);
},
}
}
}
}
}
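The run loop above now hands the user callback a `StreamData` value instead of a bare output buffer. A minimal runnable sketch of the match shape such a callback takes, using hypothetical simplified stand-ins for cpal's `StreamData` and buffer types (the real ones borrow from the event loop and carry three sample formats):

```rust
// Hypothetical, simplified stand-in for cpal's `StreamData`, just to
// illustrate the Input/Output dispatch a `run` callback performs.
enum StreamData {
    Input { buffer: Vec<i16> },
    Output { buffer: Vec<i16> },
}

fn handle(data: StreamData) -> &'static str {
    match data {
        StreamData::Input { buffer } => {
            // Captured samples would be consumed here.
            let _sum: i64 = buffer.iter().map(|&s| s as i64).sum();
            "input"
        }
        StreamData::Output { mut buffer } => {
            // The output buffer would be filled here; silence as a placeholder.
            for sample in buffer.iter_mut() {
                *sample = 0;
            }
            "output"
        }
    }
}

fn main() {
    assert_eq!(handle(StreamData::Input { buffer: vec![1, 2, 3] }), "input");
    assert_eq!(handle(StreamData::Output { buffer: vec![7; 4] }), "output");
    println!("ok");
}
```

This is a sketch of the dispatch only; the real callback receives the `StreamId` as well, and the buffers come wrapped in `UnknownTypeInputBuffer`/`UnknownTypeOutputBuffer` variants per sample format.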
pub fn build_input_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
unsafe {
let name = ffi::CString::new(device.0.clone()).expect("unable to clone device");
let mut capture_handle = mem::uninitialized();
match alsa::snd_pcm_open(
&mut capture_handle,
name.as_ptr(),
alsa::SND_PCM_STREAM_CAPTURE,
alsa::SND_PCM_NONBLOCK,
) {
-16 /* determined empirically */ => return Err(CreationError::DeviceNotAvailable),
e => check_errors(e).expect("Device unavailable")
}
let hw_params = HwParams::alloc();
set_hw_params_from_format(capture_handle, &hw_params, format);
let can_pause = alsa::snd_pcm_hw_params_can_pause(hw_params.0) == 1;
let (buffer_len, period_len) = set_sw_params_from_format(capture_handle, format);
check_errors(alsa::snd_pcm_prepare(capture_handle))
.expect("could not prepare capture stream");
let num_descriptors = {
let num_descriptors = alsa::snd_pcm_poll_descriptors_count(capture_handle);
debug_assert!(num_descriptors >= 1);
num_descriptors as usize
};
let new_stream_id = StreamId(self.next_stream_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_stream_id.0, usize::max_value()); // check for overflows
let stream_inner = StreamInner {
id: new_stream_id.clone(),
channel: capture_handle,
sample_format: format.data_type,
num_descriptors: num_descriptors,
num_channels: format.channels as u16,
buffer_len: buffer_len,
period_len: period_len,
can_pause: can_pause,
is_paused: false,
resume_trigger: Trigger::new(),
};
check_errors(alsa::snd_pcm_start(capture_handle))
.expect("could not start capture stream");
self.push_command(Command::NewStream(stream_inner));
Ok(new_stream_id)
}
}
pub fn build_output_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
unsafe {
let name = ffi::CString::new(device.0.clone()).expect("unable to clone device");
let mut playback_handle = mem::uninitialized();
match alsa::snd_pcm_open(
&mut playback_handle,
name.as_ptr(),
alsa::SND_PCM_STREAM_PLAYBACK,
alsa::SND_PCM_NONBLOCK,
) {
-16 /* determined empirically */ => return Err(CreationError::DeviceNotAvailable),
e => check_errors(e).expect("Device unavailable")
}
let hw_params = HwParams::alloc();
set_hw_params_from_format(playback_handle, &hw_params, format);
let can_pause = alsa::snd_pcm_hw_params_can_pause(hw_params.0) == 1;
let (buffer_len, period_len) = set_sw_params_from_format(playback_handle, format);
check_errors(alsa::snd_pcm_prepare(playback_handle))
.expect("could not get playback handle");
let num_descriptors = {
let num_descriptors = alsa::snd_pcm_poll_descriptors_count(playback_handle);
debug_assert!(num_descriptors >= 1);
num_descriptors as usize
};
let new_stream_id = StreamId(self.next_stream_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_stream_id.0, usize::max_value()); // check for overflows
let stream_inner = StreamInner {
id: new_stream_id.clone(),
channel: playback_handle,
sample_format: format.data_type,
num_descriptors: num_descriptors,
num_channels: format.channels as u16,
buffer_len: buffer_len,
period_len: period_len,
can_pause: can_pause,
is_paused: false,
resume_trigger: Trigger::new(),
};
self.push_command(Command::NewStream(stream_inner));
Ok(new_stream_id)
}
}
#[inline]
fn push_command(&self, command: Command) {
self.commands.lock().unwrap().push(command);
self.pending_trigger.wakeup();
}
#[inline]
pub fn destroy_stream(&self, stream_id: StreamId) {
self.push_command(Command::DestroyStream(stream_id));
}
#[inline]
pub fn play_stream(&self, stream_id: StreamId) {
self.push_command(Command::PlayStream(stream_id));
}
#[inline]
pub fn pause_stream(&self, stream_id: StreamId) {
self.push_command(Command::PauseStream(stream_id));
}
}
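Note that `destroy_stream`, `play_stream` and `pause_stream` never touch ALSA directly: they only push a `Command` onto a shared queue, which `run_inner` drains on its own thread after being woken through the pipe trigger. A runnable sketch of that pattern, with a hypothetical simplified `Command` enum and the pipe-based wakeup omitted:

```rust
use std::sync::{Arc, Mutex};

// Hypothetical simplified command set; the real backend's commands carry
// StreamId values and whole StreamInner structs.
#[derive(Debug, PartialEq)]
enum Command {
    PlayStream(usize),
    PauseStream(usize),
    DestroyStream(usize),
}

struct Queue {
    commands: Arc<Mutex<Vec<Command>>>,
}

impl Queue {
    // Control methods only enqueue; a real event loop would also wake up
    // its poll() call here so the command is seen promptly.
    fn push_command(&self, command: Command) {
        self.commands.lock().unwrap().push(command);
    }

    // The run loop drains and applies all pending commands in order.
    fn drain(&self) -> Vec<Command> {
        self.commands.lock().unwrap().drain(..).collect()
    }
}

fn main() {
    let queue = Queue { commands: Arc::new(Mutex::new(Vec::new())) };
    queue.push_command(Command::PlayStream(0));
    queue.push_command(Command::PauseStream(0));
    let drained = queue.drain();
    assert_eq!(drained, vec![Command::PlayStream(0), Command::PauseStream(0)]);
    assert!(queue.drain().is_empty());
    println!("ok");
}
```

The design keeps all `snd_pcm_*` calls on the run-loop thread, so the control methods stay lock-light and never block on the audio device.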
unsafe fn set_hw_params_from_format(
pcm_handle: *mut alsa::snd_pcm_t,
hw_params: &HwParams,
format: &Format,
) {
check_errors(alsa::snd_pcm_hw_params_any(pcm_handle, hw_params.0))
.expect("Errors on pcm handle");
check_errors(alsa::snd_pcm_hw_params_set_access(pcm_handle,
hw_params.0,
alsa::SND_PCM_ACCESS_RW_INTERLEAVED))
.expect("handle not acessible");
let data_type = if cfg!(target_endian = "big") {
match format.data_type {
@@ -539,23 +781,16 @@ impl EventLoop {
}
};
check_errors(alsa::snd_pcm_hw_params_set_format(pcm_handle,
hw_params.0,
data_type))
.expect("format could not be set");
check_errors(alsa::snd_pcm_hw_params_set_rate(pcm_handle,
hw_params.0,
format.sample_rate.0 as libc::c_uint,
0))
.expect("sample rate could not be set");
check_errors(alsa::snd_pcm_hw_params_set_channels(pcm_handle,
hw_params.0,
format.channels as
libc::c_uint))
@@ -563,19 +798,23 @@ impl EventLoop {
let mut max_buffer_size = format.sample_rate.0 as alsa::snd_pcm_uframes_t /
format.channels as alsa::snd_pcm_uframes_t /
5; // 200ms of buffer
check_errors(alsa::snd_pcm_hw_params_set_buffer_size_max(pcm_handle,
hw_params.0,
&mut max_buffer_size))
.unwrap();
check_errors(alsa::snd_pcm_hw_params(pcm_handle, hw_params.0))
.expect("hardware params could not be set");
}
unsafe fn set_sw_params_from_format(
pcm_handle: *mut alsa::snd_pcm_t,
format: &Format,
) -> (usize, usize)
{
let mut sw_params = mem::uninitialized(); // TODO: RAII
check_errors(alsa::snd_pcm_sw_params_malloc(&mut sw_params)).unwrap();
check_errors(alsa::snd_pcm_sw_params_current(pcm_handle, sw_params)).unwrap();
check_errors(alsa::snd_pcm_sw_params_set_start_threshold(pcm_handle,
sw_params,
0))
.unwrap();
@@ -583,10 +822,10 @@ impl EventLoop {
let (buffer_len, period_len) = {
let mut buffer = mem::uninitialized();
let mut period = mem::uninitialized();
check_errors(alsa::snd_pcm_get_params(pcm_handle, &mut buffer, &mut period))
.expect("could not initialize buffer");
assert!(buffer != 0);
check_errors(alsa::snd_pcm_sw_params_set_avail_min(pcm_handle,
sw_params,
period))
.unwrap();
@@ -595,61 +834,17 @@ impl EventLoop {
(buffer, period)
};
check_errors(alsa::snd_pcm_sw_params(pcm_handle, sw_params)).unwrap();
alsa::snd_pcm_sw_params_free(sw_params);
(buffer_len, period_len)
}
pub struct InputBuffer<'a, T: 'a> {
buffer: &'a [T],
}
pub struct OutputBuffer<'a, T: 'a> {
stream_inner: &'a mut StreamInner,
buffer: Vec<T>,
}
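One unit convention worth keeping straight here: ALSA's `snd_pcm_avail` and `snd_pcm_writei` count in frames (one sample per channel), while `buffer_len`, `period_len` and the buffers above hold interleaved samples, hence the `* num_channels` and `/ num_channels` conversions in the surrounding code. A small sketch of that arithmetic, with hypothetical helper names:

```rust
// Hypothetical helpers mirroring the frame/sample conversions the ALSA
// backend performs around snd_pcm_avail and snd_pcm_writei.
fn samples_from_frames(frames: usize, num_channels: usize) -> usize {
    // One frame carries one sample for each channel.
    frames * num_channels
}

fn frames_from_samples(samples: usize, num_channels: usize) -> usize {
    // An interleaved sample buffer maps back to len / channels frames.
    samples / num_channels
}

fn main() {
    // 512 available frames of stereo audio is 1024 interleaved samples.
    assert_eq!(samples_from_frames(512, 2), 1024);
    // A 1024-sample stereo buffer is written back as 512 frames.
    assert_eq!(frames_from_samples(1024, 2), 512);
    println!("ok");
}
```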
@@ -675,7 +870,7 @@ impl Drop for HwParams {
}
}
impl Drop for StreamInner {
#[inline]
fn drop(&mut self) {
unsafe {
@@ -684,7 +879,19 @@ impl Drop for VoiceInner {
}
}
impl<'a, T> InputBuffer<'a, T> {
#[inline]
pub fn buffer(&self) -> &[T] {
&self.buffer
}
#[inline]
pub fn finish(self) {
// Nothing to be done.
}
}
impl<'a, T> OutputBuffer<'a, T> {
#[inline]
pub fn buffer(&mut self) -> &mut [T] {
&mut self.buffer
@@ -696,18 +903,18 @@ impl<'a, T> Buffer<'a, T> {
}
pub fn finish(self) {
let to_write = (self.buffer.len() / self.stream_inner.num_channels as usize) as
alsa::snd_pcm_uframes_t;
unsafe {
loop {
let result = alsa::snd_pcm_writei(self.stream_inner.channel,
self.buffer.as_ptr() as *const _,
to_write);
if result == -32 {
// buffer underrun
alsa::snd_pcm_prepare(self.stream_inner.channel);
} else if result < 0 {
check_errors(result as libc::c_int).expect("could not write pcm");
} else {

@@ -8,6 +8,7 @@ use super::coreaudio::sys::{
AudioObjectGetPropertyData,
AudioObjectGetPropertyDataSize,
kAudioHardwareNoError,
kAudioHardwarePropertyDefaultInputDevice,
kAudioHardwarePropertyDefaultOutputDevice,
kAudioHardwarePropertyDevices,
kAudioObjectPropertyElementMaster,
@@ -15,9 +16,9 @@ use super::coreaudio::sys::{
kAudioObjectSystemObject,
OSStatus,
};
use super::Device;
unsafe fn audio_devices() -> Result<Vec<AudioDeviceID>, OSStatus> {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioHardwarePropertyDevices,
mScope: kAudioObjectPropertyScopeGlobal,
@@ -58,42 +59,62 @@ unsafe fn audio_output_devices() -> Result<Vec<AudioDeviceID>, OSStatus> {
audio_devices.set_len(device_count as usize);
Ok(audio_devices)
}
pub struct Devices(VecIntoIter<AudioDeviceID>);
unsafe impl Send for Devices {
}
unsafe impl Sync for Devices {
}
impl Default for Devices {
fn default() -> Self {
let devices = unsafe {
audio_devices().expect("failed to get audio output devices")
};
Devices(devices.into_iter())
}
}
impl Iterator for Devices {
type Item = Device;
fn next(&mut self) -> Option<Device> {
self.0.next().map(|id| Device { audio_device_id: id })
}
}
pub fn default_input_device() -> Option<Device> {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioHardwarePropertyDefaultInputDevice,
mScope: kAudioObjectPropertyScopeGlobal,
mElement: kAudioObjectPropertyElementMaster,
};
let audio_device_id: AudioDeviceID = 0;
let data_size = mem::size_of::<AudioDeviceID>();
let status = unsafe {
AudioObjectGetPropertyData(
kAudioObjectSystemObject,
&property_address as *const _,
0,
null(),
&data_size as *const _ as *mut _,
&audio_device_id as *const _ as *mut _,
)
};
if status != kAudioHardwareNoError as i32 {
return None;
}
let device = Device {
audio_device_id: audio_device_id,
};
Some(device)
}
pub fn default_output_device() -> Option<Device> {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioHardwarePropertyDefaultOutputDevice,
mScope: kAudioObjectPropertyScopeGlobal,
@@ -116,10 +137,11 @@ pub fn default_endpoint() -> Option<Endpoint> {
return None;
}
let device = Device {
audio_device_id: audio_device_id,
};
Some(device)
}
pub type SupportedInputFormats = VecIntoIter<SupportedFormat>;
pub type SupportedOutputFormats = VecIntoIter<SupportedFormat>;

@@ -3,13 +3,16 @@ extern crate core_foundation_sys;
use ChannelCount;
use CreationError;
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use Sample;
use SampleFormat;
use SampleRate;
use StreamData;
use SupportedFormat;
use UnknownTypeInputBuffer;
use UnknownTypeOutputBuffer;
use std::ffi::CStr;
use std::mem;
@@ -29,12 +32,15 @@ use self::coreaudio::sys::{
AudioObjectGetPropertyData,
AudioObjectGetPropertyDataSize,
AudioObjectPropertyAddress,
AudioObjectPropertyScope,
AudioStreamBasicDescription,
AudioValueRange,
kAudioDevicePropertyAvailableNominalSampleRates,
kAudioDevicePropertyDeviceNameCFString,
kAudioObjectPropertyScopeInput,
kAudioDevicePropertyScopeOutput,
kAudioDevicePropertyStreamConfiguration,
kAudioDevicePropertyStreamFormat,
kAudioFormatFlagIsFloat,
kAudioFormatFlagIsPacked,
kAudioFormatLinearPCM,
@@ -42,8 +48,10 @@ use self::coreaudio::sys::{
kAudioObjectPropertyElementMaster,
kAudioObjectPropertyScopeOutput,
kAudioOutputUnitProperty_CurrentDevice,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitProperty_StreamFormat,
kCFStringEncodingUTF8,
OSStatus,
};
use self::core_foundation_sys::string::{
CFStringRef,
@@ -52,18 +60,52 @@ use self::core_foundation_sys::string::{
mod enumerate;
pub use self::enumerate::{Devices, SupportedInputFormats, SupportedOutputFormats, default_input_device, default_output_device};
#[derive(Clone, PartialEq, Eq)]
pub struct Device {
audio_device_id: AudioDeviceID,
}
impl Device {
pub fn name(&self) -> String {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioDevicePropertyDeviceNameCFString,
mScope: kAudioDevicePropertyScopeOutput,
mElement: kAudioObjectPropertyElementMaster,
};
let device_name: CFStringRef = null();
let data_size = mem::size_of::<CFStringRef>();
let c_str = unsafe {
let status = AudioObjectGetPropertyData(
self.audio_device_id,
&property_address as *const _,
0,
null(),
&data_size as *const _ as *mut _,
&device_name as *const _ as *mut _,
);
if status != kAudioHardwareNoError as i32 {
return format!("<OSStatus: {:?}>", status);
}
let c_string: *const c_char = CFStringGetCStringPtr(device_name, kCFStringEncodingUTF8);
if c_string == null() {
return "<null>".into();
}
CStr::from_ptr(c_string as *mut _)
};
c_str.to_string_lossy().into_owned()
}
// Logic re-used between `supported_input_formats` and `supported_output_formats`.
fn supported_formats(
&self,
scope: AudioObjectPropertyScope,
) -> Result<SupportedOutputFormats, FormatsEnumerationError>
{
let mut property_address = AudioObjectPropertyAddress {
mSelector: kAudioDevicePropertyStreamConfiguration,
mScope: scope,
mElement: kAudioObjectPropertyElementMaster,
};
@@ -163,52 +205,113 @@ impl Endpoint {
}
}
pub fn supported_input_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
self.supported_formats(kAudioObjectPropertyScopeInput)
}
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
self.supported_formats(kAudioObjectPropertyScopeOutput)
}
fn default_format(
&self,
scope: AudioObjectPropertyScope,
) -> Result<Format, DefaultFormatError>
{
fn default_format_error_from_os_status(status: OSStatus) -> Option<DefaultFormatError> {
let err = match coreaudio::Error::from_os_status(status) {
Err(err) => err,
Ok(_) => return None,
};
match err {
coreaudio::Error::RenderCallbackBufferFormatDoesNotMatchAudioUnitStreamFormat |
coreaudio::Error::NoKnownSubtype |
coreaudio::Error::AudioUnit(coreaudio::error::AudioUnitError::FormatNotSupported) |
coreaudio::Error::AudioCodec(_) |
coreaudio::Error::AudioFormat(_) => Some(DefaultFormatError::StreamTypeNotSupported),
_ => Some(DefaultFormatError::DeviceNotAvailable),
}
}
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioDevicePropertyStreamFormat,
mScope: scope,
mElement: kAudioObjectPropertyElementMaster,
};
unsafe {
let asbd: AudioStreamBasicDescription = mem::uninitialized();
let data_size = mem::size_of::<AudioStreamBasicDescription>() as u32;
let status = AudioObjectGetPropertyData(
self.audio_device_id,
&property_address as *const _,
0,
null(),
&data_size as *const _ as *mut _,
&asbd as *const _ as *mut _,
);
if status != kAudioHardwareNoError as i32 {
let err = default_format_error_from_os_status(status)
.expect("no known error for OsStatus");
return Err(err);
}
let sample_format = {
let audio_format = coreaudio::audio_unit::AudioFormat::from_format_and_flag(
asbd.mFormatID,
Some(asbd.mFormatFlags),
);
let flags = match audio_format {
Some(coreaudio::audio_unit::AudioFormat::LinearPCM(flags)) => flags,
_ => return Err(DefaultFormatError::StreamTypeNotSupported),
};
let maybe_sample_format =
coreaudio::audio_unit::SampleFormat::from_flags_and_bytes_per_frame(
flags,
asbd.mBytesPerFrame,
);
match maybe_sample_format {
Some(coreaudio::audio_unit::SampleFormat::F32) => SampleFormat::F32,
Some(coreaudio::audio_unit::SampleFormat::I16) => SampleFormat::I16,
_ => return Err(DefaultFormatError::StreamTypeNotSupported),
}
};
let format = Format {
sample_rate: SampleRate(asbd.mSampleRate as _),
channels: asbd.mChannelsPerFrame as _,
data_type: sample_format,
};
Ok(format)
}
}
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
self.default_format(kAudioObjectPropertyScopeInput)
}
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
self.default_format(kAudioObjectPropertyScopeOutput)
}
}
// The ID of a stream is its index within the `streams` array of the events loop.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct StreamId(usize);
pub struct EventLoop {
// This `Arc` is shared with all the callbacks of coreaudio.
active_callbacks: Arc<ActiveCallbacks>,
streams: Mutex<Vec<Option<StreamInner>>>,
}
struct ActiveCallbacks {
// Whenever the `run()` method is called with a callback, this callback is put in this list.
callbacks: Mutex<Vec<&'static mut (FnMut(StreamId, StreamData) + Send)>>,
}
struct StreamInner {
playing: bool,
audio_unit: AudioUnit,
}
@@ -227,60 +330,8 @@ impl From<coreaudio::Error> for CreationError {
}
}
// Create a coreaudio AudioStreamBasicDescription from a CPAL Format.
fn asbd_from_format(format: &Format) -> AudioStreamBasicDescription {
let n_channels = format.channels as usize;
let sample_rate = format.sample_rate.0;
let bytes_per_channel = format.data_type.sample_size();
@@ -304,24 +355,208 @@ impl EventLoop {
        mSampleRate: sample_rate as _,
        ..Default::default()
    };
    asbd
}

fn audio_unit_from_device(device: &Device, input: bool) -> Result<AudioUnit, coreaudio::Error> {
    let mut audio_unit = {
        let au_type = if cfg!(target_os = "ios") {
            // The HalOutput unit isn't available in iOS unfortunately.
            // RemoteIO is a sensible replacement.
            // See https://goo.gl/CWwRTx
            coreaudio::audio_unit::IOType::RemoteIO
        } else {
            coreaudio::audio_unit::IOType::HalOutput
        };
        AudioUnit::new(au_type)?
    };

    if input {
        // Enable input processing.
        let enable_input = 1u32;
        audio_unit.set_property(
            kAudioOutputUnitProperty_EnableIO,
            Scope::Input,
            Element::Input,
            Some(&enable_input),
        )?;

        // Disable output processing.
        let disable_output = 0u32;
        audio_unit.set_property(
            kAudioOutputUnitProperty_EnableIO,
            Scope::Output,
            Element::Output,
            Some(&disable_output),
        )?;
    }

    audio_unit.set_property(
        kAudioOutputUnitProperty_CurrentDevice,
        Scope::Global,
        Element::Output,
        Some(&device.audio_device_id),
    )?;

    Ok(audio_unit)
}
impl EventLoop {
    #[inline]
    pub fn new() -> EventLoop {
        EventLoop {
            active_callbacks: Arc::new(ActiveCallbacks { callbacks: Mutex::new(Vec::new()) }),
            streams: Mutex::new(Vec::new()),
        }
    }

    #[inline]
    pub fn run<F>(&self, mut callback: F) -> !
        where F: FnMut(StreamId, StreamData) + Send
    {
        {
            let callback: &mut (FnMut(StreamId, StreamData) + Send) = &mut callback;
            self.active_callbacks
                .callbacks
                .lock()
                .unwrap()
                .push(unsafe { mem::transmute(callback) });
        }

        loop {
            // So the loop does not get optimised out in --release
            thread::sleep(Duration::new(1u64, 0u32));
        }

        // Note: if we ever change this API so that `run` can return, then it is critical that
        // we remove the callback from `active_callbacks`.
    }

    fn next_stream_id(&self) -> usize {
        let streams_lock = self.streams.lock().unwrap();
        let stream_id = streams_lock
            .iter()
            .position(|n| n.is_none())
            .unwrap_or(streams_lock.len());
        stream_id
    }
    // Add the stream to the list of streams within `self`.
    fn add_stream(&self, stream_id: usize, au: AudioUnit) {
        let inner = StreamInner {
            playing: true,
            audio_unit: au,
        };

        let mut streams_lock = self.streams.lock().unwrap();
        if stream_id == streams_lock.len() {
            streams_lock.push(Some(inner));
        } else {
            streams_lock[stream_id] = Some(inner);
        }
    }
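The `next_stream_id`/`add_stream` pair implements a free-slot reuse scheme over a `Vec<Option<T>>`: a destroyed stream leaves a `None` hole that the next stream claims, so the IDs of the surviving streams stay valid. A minimal standalone sketch of the idea (the helper names here are illustrative, not part of cpal's API):

```rust
// Free-slot reuse over Vec<Option<T>>, as used by next_stream_id/add_stream.
fn next_id<T>(slots: &Vec<Option<T>>) -> usize {
    // Reuse the first hole, or append at the end if the vec is full.
    slots.iter().position(|s| s.is_none()).unwrap_or(slots.len())
}

fn add<T>(slots: &mut Vec<Option<T>>, id: usize, value: T) {
    if id == slots.len() {
        slots.push(Some(value));
    } else {
        slots[id] = Some(value);
    }
}

fn main() {
    let mut slots: Vec<Option<&str>> = Vec::new();
    let a = next_id(&slots); add(&mut slots, a, "a"); // takes slot 0
    let b = next_id(&slots); add(&mut slots, b, "b"); // takes slot 1
    slots[a] = None;                                  // "destroy" stream a
    let c = next_id(&slots); add(&mut slots, c, "c"); // reuses slot 0
    assert_eq!((a, b, c), (0, 1, 0));                 // b's ID was never disturbed
}
```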
    #[inline]
    pub fn build_input_stream(
        &self,
        device: &Device,
        format: &Format,
    ) -> Result<StreamId, CreationError>
    {
        let mut audio_unit = audio_unit_from_device(device, true)?;

        // The scope and element for working with a device's input stream.
        let scope = Scope::Output;
        let element = Element::Input;

        // Set the stream in interleaved mode.
        let asbd = asbd_from_format(format);
        audio_unit.set_property(kAudioUnitProperty_StreamFormat, scope, element, Some(&asbd))?;

        // Determine the future ID of the stream.
        let stream_id = self.next_stream_id();

        // Register the callback that coreaudio calls whenever new input data is available to be
        // read.
        let active_callbacks = self.active_callbacks.clone();
        let sample_format = format.data_type;
        let bytes_per_channel = format.data_type.sample_size();
        type Args = render_callback::Args<data::Raw>;
        audio_unit.set_input_callback(move |args: Args| unsafe {
            let ptr = (*args.data.data).mBuffers.as_ptr() as *const AudioBuffer;
            let len = (*args.data.data).mNumberBuffers as usize;
            let buffers: &[AudioBuffer] = slice::from_raw_parts(ptr, len);

            // TODO: Perhaps loop over all buffers instead?
            let AudioBuffer {
                mNumberChannels: _num_channels,
                mDataByteSize: data_byte_size,
                mData: data
            } = buffers[0];

            let mut callbacks = active_callbacks.callbacks.lock().unwrap();

            // A small macro to simplify handling the callback for different sample types.
            macro_rules! try_callback {
                ($SampleFormat:ident, $SampleType:ty) => {{
                    let data_len = (data_byte_size as usize / bytes_per_channel) as usize;
                    let data_slice = slice::from_raw_parts(data as *const $SampleType, data_len);
                    let callback = match callbacks.get_mut(0) {
                        Some(cb) => cb,
                        None => return Ok(()),
                    };
                    let buffer = InputBuffer { buffer: data_slice };
                    let unknown_type_buffer = UnknownTypeInputBuffer::$SampleFormat(::InputBuffer { buffer: Some(buffer) });
                    let stream_data = StreamData::Input { buffer: unknown_type_buffer };
                    callback(StreamId(stream_id), stream_data);
                }};
            }

            match sample_format {
                SampleFormat::F32 => try_callback!(F32, f32),
                SampleFormat::I16 => try_callback!(I16, i16),
                SampleFormat::U16 => try_callback!(U16, u16),
            }

            Ok(())
        })?;

        // TODO: start playing now? is that consistent with the other backends?
        audio_unit.start()?;

        // Add the stream to the list of streams within `self`.
        self.add_stream(stream_id, audio_unit);

        Ok(StreamId(stream_id))
    }
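The input callback derives the slice length by dividing the buffer's byte size by the size of one sample; for an interleaved buffer that length covers frames times channels. A standalone sketch of that arithmetic (the function name is illustrative):

```rust
// Converting a CoreAudio buffer's mDataByteSize into an interleaved sample
// count, as the input callback does with data_byte_size / bytes_per_channel.
fn sample_count(data_byte_size: usize, bytes_per_sample: usize) -> usize {
    data_byte_size / bytes_per_sample
}

fn main() {
    // 512 frames of stereo f32: 512 * 2 channels * 4 bytes = 4096 bytes.
    let bytes = 512 * 2 * std::mem::size_of::<f32>();
    // The slice covers every interleaved sample: 512 frames * 2 channels.
    assert_eq!(sample_count(bytes, 4), 1024);
}
```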
    #[inline]
    pub fn build_output_stream(
        &self,
        device: &Device,
        format: &Format,
    ) -> Result<StreamId, CreationError>
    {
        let mut audio_unit = audio_unit_from_device(device, false)?;

        // The scope and element for working with a device's output stream.
        let scope = Scope::Input;
        let element = Element::Output;

        // Set the stream in interleaved mode.
        let asbd = asbd_from_format(format);
        audio_unit.set_property(kAudioUnitProperty_StreamFormat, scope, element, Some(&asbd))?;

        // Determine the future ID of the stream.
        let stream_id = self.next_stream_id();

        // Register the callback that is being called by coreaudio whenever it needs data to be
        // fed to the audio buffer.
        let active_callbacks = self.active_callbacks.clone();
        let sample_format = format.data_type;
        let bytes_per_channel = format.data_type.sample_size();
        type Args = render_callback::Args<data::Raw>;
        audio_unit.set_render_callback(move |args: Args| unsafe {
            // If `run()` is currently running, then a callback will be available from this list.
            // Otherwise, we just fill the buffer with zeroes and return.
@@ -331,7 +566,6 @@ impl EventLoop {
                mData: data
            } = (*args.data.data).mBuffers[0];

            let mut callbacks = active_callbacks.callbacks.lock().unwrap();

            // A small macro to simplify handling the callback for different sample types.
@@ -348,9 +582,10 @@ impl EventLoop {
                        return Ok(());
                    }
                };
                let buffer = OutputBuffer { buffer: data_slice };
                let unknown_type_buffer = UnknownTypeOutputBuffer::$SampleFormat(::OutputBuffer { target: Some(buffer) });
                let stream_data = StreamData::Output { buffer: unknown_type_buffer };
                callback(StreamId(stream_id), stream_data);
            }};
        }
@@ -366,54 +601,59 @@ impl EventLoop {
        // TODO: start playing now? is that consistent with the other backends?
        audio_unit.start()?;

        // Add the stream to the list of streams within `self`.
        self.add_stream(stream_id, audio_unit);

        Ok(StreamId(stream_id))
    }

    pub fn destroy_stream(&self, stream_id: StreamId) {
        let mut streams = self.streams.lock().unwrap();
        streams[stream_id.0] = None;
    }

    pub fn play_stream(&self, stream: StreamId) {
        let mut streams = self.streams.lock().unwrap();
        let stream = streams[stream.0].as_mut().unwrap();
        if !stream.playing {
            stream.audio_unit.start().unwrap();
            stream.playing = true;
        }
    }

    pub fn pause_stream(&self, stream: StreamId) {
        let mut streams = self.streams.lock().unwrap();
        let stream = streams[stream.0].as_mut().unwrap();
        if stream.playing {
            stream.audio_unit.stop().unwrap();
            stream.playing = false;
        }
    }
}
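`play_stream` and `pause_stream` guard the underlying audio unit with the `playing` flag so repeated calls never start or stop it twice. A self-contained sketch of the pattern, with a hypothetical `Unit` type standing in for coreaudio's `AudioUnit`:

```rust
// `Unit` is a stand-in that counts start/stop calls so the guard is observable.
struct Unit { starts: u32, stops: u32 }
impl Unit {
    fn start(&mut self) { self.starts += 1 }
    fn stop(&mut self) { self.stops += 1 }
}

struct Stream { playing: bool, unit: Unit }
impl Stream {
    fn play(&mut self) {
        // Only touch the unit on an actual state change.
        if !self.playing {
            self.unit.start();
            self.playing = true;
        }
    }
    fn pause(&mut self) {
        if self.playing {
            self.unit.stop();
            self.playing = false;
        }
    }
}

fn main() {
    let mut s = Stream { playing: true, unit: Unit { starts: 0, stops: 0 } };
    s.play();  // already playing: no-op
    s.pause();
    s.pause(); // already paused: no-op
    s.play();
    assert_eq!((s.unit.starts, s.unit.stops), (1, 1));
}
```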
pub struct InputBuffer<'a, T: 'a> {
    buffer: &'a [T],
}

pub struct OutputBuffer<'a, T: 'a> {
    buffer: &'a mut [T],
}

impl<'a, T> InputBuffer<'a, T> {
    #[inline]
    pub fn buffer(&self) -> &[T] {
        &self.buffer
    }

    #[inline]
    pub fn finish(self) {
        // Nothing to be done.
    }
}

impl<'a, T> OutputBuffer<'a, T>
    where T: Sample
{
    #[inline]


@@ -9,23 +9,25 @@ use stdweb::web::TypedArray;
use stdweb::web::set_timeout;

use CreationError;
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use Sample;
use StreamData;
use SupportedFormat;
use UnknownTypeOutputBuffer;

// The emscripten backend works by having a global variable named `_cpal_audio_contexts`, which
// is an array of `AudioContext` objects. A stream ID corresponds to an entry in this array.
//
// Creating a stream creates a new `AudioContext`. Destroying a stream destroys it.

// TODO: handle latency better; right now we just use setInterval with the amount of sound data
// that is in each buffer; this is obviously bad, and also the schedule is too tight and there may
// be underflows

pub struct EventLoop {
    streams: Mutex<Vec<Option<Reference>>>,
}
impl EventLoop {
@@ -33,12 +35,12 @@ impl EventLoop {
    pub fn new() -> EventLoop {
        stdweb::initialize();
        EventLoop { streams: Mutex::new(Vec::new()) }
    }

    #[inline]
    pub fn run<F>(&self, callback: F) -> !
        where F: FnMut(StreamId, StreamData)
    {
        // The `run` function uses `set_timeout` to invoke a Rust callback repeatedly. The job
        // of this callback is to fill the content of the audio buffers.
@@ -47,27 +49,29 @@ impl EventLoop {
        // and to the `callback` parameter that was passed to `run`.
        fn callback_fn<F>(user_data_ptr: *mut c_void)
            where F: FnMut(StreamId, StreamData)
        {
            unsafe {
                let user_data_ptr2 = user_data_ptr as *mut (&EventLoop, F);
                let user_data = &mut *user_data_ptr2;
                let user_cb = &mut user_data.1;

                let streams = user_data.0.streams.lock().unwrap().clone();
                for (stream_id, stream) in streams.iter().enumerate() {
                    let stream = match stream.as_ref() {
                        Some(v) => v,
                        None => continue,
                    };

                    let buffer = OutputBuffer {
                        temporary_buffer: vec![0.0; 44100 * 2 / 3],
                        stream: &stream,
                    };

                    let id = StreamId(stream_id);
                    let buffer = UnknownTypeOutputBuffer::F32(::OutputBuffer { target: Some(buffer) });
                    let data = StreamData::Output { buffer: buffer };
                    user_cb(id, data);
                }

                set_timeout(|| callback_fn::<F>(user_data_ptr), 330);
@@ -83,51 +87,56 @@ impl EventLoop {
    }

    #[inline]
    pub fn build_input_stream(&self, _: &Device, _format: &Format) -> Result<StreamId, CreationError> {
        unimplemented!();
    }

    #[inline]
    pub fn build_output_stream(&self, _: &Device, _format: &Format) -> Result<StreamId, CreationError> {
        let stream = js!(return new AudioContext()).into_reference().unwrap();

        let mut streams = self.streams.lock().unwrap();
        let stream_id = if let Some(pos) = streams.iter().position(|v| v.is_none()) {
            streams[pos] = Some(stream);
            pos
        } else {
            let l = streams.len();
            streams.push(Some(stream));
            l
        };

        Ok(StreamId(stream_id))
    }

    #[inline]
    pub fn destroy_stream(&self, stream_id: StreamId) {
        self.streams.lock().unwrap()[stream_id.0] = None;
    }

    #[inline]
    pub fn play_stream(&self, stream_id: StreamId) {
        let streams = self.streams.lock().unwrap();
        let stream = streams
            .get(stream_id.0)
            .and_then(|v| v.as_ref())
            .expect("invalid stream ID");
        js!(@{stream}.resume());
    }

    #[inline]
    pub fn pause_stream(&self, stream_id: StreamId) {
        let streams = self.streams.lock().unwrap();
        let stream = streams
            .get(stream_id.0)
            .and_then(|v| v.as_ref())
            .expect("invalid stream ID");
        js!(@{stream}.suspend());
    }
}
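Each timer tick in `callback_fn` allocates `44100 * 2 / 3` interleaved `f32` samples, which is one third of a second of stereo audio at 44.1 kHz, while the timer fires every 330 ms, slightly ahead of playback (hence the TODO above about the schedule being too tight). A sketch of that arithmetic (the helper name is illustrative):

```rust
// Duration in milliseconds of an interleaved sample buffer.
fn buffer_ms(samples: usize, channels: usize, sample_rate: usize) -> usize {
    let frames = samples / channels;
    frames * 1000 / sample_rate
}

fn main() {
    let samples = 44100 * 2 / 3; // as allocated per tick in callback_fn
    let ms = buffer_ms(samples, 2, 44100);
    assert_eq!(ms, 333);
    // The 330 ms timer period undercuts the buffered duration by ~3 ms.
    assert!(330 < ms);
}
```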
// Index within the `streams` array of the events loop.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct StreamId(usize);

// Detects whether the `AudioContext` global variable is available.
fn is_webaudio_available() -> bool {
@@ -142,20 +151,20 @@ fn is_webaudio_available() -> bool {
}

// Content is false if the iterator is empty.
pub struct Devices(bool);

impl Default for Devices {
    fn default() -> Devices {
        // We produce an empty iterator if the WebAudio API isn't available.
        Devices(is_webaudio_available())
    }
}

impl Iterator for Devices {
    type Item = Device;
    #[inline]
    fn next(&mut self) -> Option<Device> {
        if self.0 {
            self.0 = false;
            Some(Device)
        } else {
            None
        }
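`Devices` is a one-shot iterator: a single `bool` is its entire state, yielding at most one item and then `None` forever. The same pattern in isolation (stand-in types, not cpal's):

```rust
// One-shot iterator: the bool starts true if an item is available, and is
// flipped to false after the single yield.
struct OneShot(bool);

impl Iterator for OneShot {
    type Item = &'static str;
    fn next(&mut self) -> Option<&'static str> {
        if self.0 {
            self.0 = false;
            Some("default device")
        } else {
            None
        }
    }
}

fn main() {
    let available: Vec<_> = OneShot(true).collect();
    let unavailable: Vec<_> = OneShot(false).collect();
    assert_eq!(available.len(), 1);
    assert!(unavailable.is_empty());
}
```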
@@ -163,20 +172,35 @@ impl Iterator for EndpointsIterator {
}

#[inline]
pub fn default_input_device() -> Option<Device> {
    unimplemented!();
}

#[inline]
pub fn default_output_device() -> Option<Device> {
    if is_webaudio_available() {
        Some(Device)
    } else {
        None
    }
}

#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Device;

impl Device {
    #[inline]
    pub fn name(&self) -> String {
        "Default Device".to_owned()
    }

    #[inline]
    pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
        unimplemented!();
    }

    #[inline]
    pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
        // TODO: right now cpal's API doesn't allow flexibility here
        // "44100" and "2" (channels) have also been hard-coded in the rest of the code; if
        // this ever becomes more flexible, don't forget to change that
@@ -192,22 +216,41 @@ impl Endpoint {
        )
    }

    pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
        unimplemented!();
    }

    pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
        unimplemented!();
    }
}

pub type SupportedInputFormats = ::std::vec::IntoIter<SupportedFormat>;
pub type SupportedOutputFormats = ::std::vec::IntoIter<SupportedFormat>;

pub struct InputBuffer<'a, T: 'a> {
    marker: ::std::marker::PhantomData<&'a T>,
}

pub struct OutputBuffer<'a, T: 'a>
    where T: Sample
{
    temporary_buffer: Vec<T>,
    stream: &'a Reference,
}

impl<'a, T> InputBuffer<'a, T> {
    #[inline]
    pub fn buffer(&self) -> &[T] {
        unimplemented!()
    }

    #[inline]
    pub fn finish(self) {
    }
}

impl<'a, T> OutputBuffer<'a, T>
    where T: Sample
{
    #[inline]
@@ -239,7 +282,7 @@ impl<'a, T> Buffer<'a, T>
        js!(
            var src_buffer = new Float32Array(@{typed_array}.buffer);
            var context = @{self.stream};
            var buf_len = @{self.temporary_buffer.len() as u32};
            var num_channels = @{num_channels};


@@ -2,74 +2,76 @@
//!
//! Here are some concepts cpal exposes:
//!
//! - A `Device` is an audio device that may have any number of input and output streams.
//! - A stream is an open audio channel. Input streams allow you to receive audio data, output
//!   streams allow you to play audio data. You must choose which `Device` runs your stream before
//!   you create one.
//! - An `EventLoop` is a collection of streams being run by one or more `Device`. Each stream must
//!   belong to an `EventLoop`, and all the streams that belong to an `EventLoop` are managed
//!   together.
//!
//! The first step is to create an `EventLoop`:
//!
//! ```
//! use cpal::EventLoop;
//! let event_loop = EventLoop::new();
//! ```
//!
//! Then choose a `Device`. The easiest way is to use the default input or output `Device` via the
//! `default_input_device()` or `default_output_device()` functions. Alternatively you can
//! enumerate all the available devices with the `devices()` function. Beware that the
//! `default_*_device()` functions return an `Option` in case no device is available for that
//! stream type on the system.
//!
//! ```
//! let device = cpal::default_output_device().expect("no output device available");
//! ```
//!
//! Before we can create a stream, we must decide what the format of the audio samples is going to
//! be. You can query all the supported formats with the `supported_input_formats()` and
//! `supported_output_formats()` methods. These produce a list of `SupportedFormat` structs which
//! can later be turned into actual `Format` structs. If you don't want to query the list of
//! formats, you can also build your own `Format` manually, but doing so could lead to an error
//! when building the stream if the format is not supported by the device.
//!
//! > **Note**: the `supported_*_formats()` methods could return an error, for example if the
//! > device has been disconnected.
//!
//! ```no_run
//! # let device = cpal::default_output_device().unwrap();
//! let mut supported_formats_range = device.supported_output_formats()
//!     .expect("error while querying formats");
//! let format = supported_formats_range.next()
//!     .expect("no supported format?!")
//!     .with_max_sample_rate();
//! ```
//!
//! Now that we have everything, we can create a stream from our event loop:
//!
//! ```no_run
//! # let device = cpal::default_output_device().unwrap();
//! # let format = device.supported_output_formats().unwrap().next().unwrap().with_max_sample_rate();
//! # let event_loop = cpal::EventLoop::new();
//! let stream_id = event_loop.build_output_stream(&device, &format).unwrap();
//! ```
//!
//! The value returned by `build_output_stream()` is of type `StreamId` and is an identifier that
//! will allow you to control the stream.
//!
//! Now we must start the stream. This is done with the `play_stream()` method on the event loop.
//!
//! ```
//! # let event_loop: cpal::EventLoop = return;
//! # let stream_id: cpal::StreamId = return;
//! event_loop.play_stream(stream_id);
//! ```
//!
//! Once everything is ready, call `run()` on the `event_loop` to begin processing:
//!
//! ```no_run
//! # let event_loop = cpal::EventLoop::new();
//! event_loop.run(move |_stream_id, _stream_data| {
//!     // read or write stream data here
//! });
//! ```
//!
@@ -77,34 +79,35 @@
//! > separate thread.
//!
//! While `run()` is running, the audio device of the user will from time to time call the
//! callback that you passed to this function. The callback gets passed the stream ID and an
//! instance of type `StreamData` that represents the data that must be read from or written to.
//! The inner `UnknownTypeOutputBuffer` can be one of `I16`, `U16` or `F32` depending on the
//! format that was passed to `build_output_stream`.
//!
//! In this example, we simply fill the given output buffer with zeroes.
//!
//! ```no_run
//! use cpal::{StreamData, UnknownTypeOutputBuffer};
//!
//! # let event_loop = cpal::EventLoop::new();
//! event_loop.run(move |_stream_id, mut stream_data| {
//!     match stream_data {
//!         StreamData::Output { buffer: UnknownTypeOutputBuffer::U16(mut buffer) } => {
//!             for elem in buffer.iter_mut() {
//!                 *elem = u16::max_value() / 2;
//!             }
//!         },
//!         StreamData::Output { buffer: UnknownTypeOutputBuffer::I16(mut buffer) } => {
//!             for elem in buffer.iter_mut() {
//!                 *elem = 0;
//!             }
//!         },
//!         StreamData::Output { buffer: UnknownTypeOutputBuffer::F32(mut buffer) } => {
//!             for elem in buffer.iter_mut() {
//!                 *elem = 0.0;
//!             }
//!         },
//!         _ => (),
//!     }
//! });
//! ```
@@ -122,12 +125,13 @@ extern crate stdweb;

pub use samples_formats::{Sample, SampleFormat};

#[cfg(not(any(windows, target_os = "linux", target_os = "freebsd",
              target_os = "macos", target_os = "ios", target_os = "emscripten")))]
use null as cpal_impl;

use std::error::Error;
use std::fmt;
use std::iter;
use std::ops::{Deref, DerefMut};

mod null;
@ -149,102 +153,30 @@ mod cpal_impl;
#[path = "emscripten/mod.rs"] #[path = "emscripten/mod.rs"]
mod cpal_impl; mod cpal_impl;
/// An iterator for the list of formats that are supported by the backend. /// An opaque type that identifies a device that is capable of either audio input or output.
/// ///
/// See [`endpoints()`](fn.endpoints.html). /// Please note that `Device`s may become invalid if they get disconnected. Therefore all the
pub struct EndpointsIterator(cpal_impl::EndpointsIterator); /// methods that involve a device return a `Result`.
impl Iterator for EndpointsIterator {
type Item = Endpoint;
#[inline]
fn next(&mut self) -> Option<Endpoint> {
self.0.next().map(Endpoint)
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.0.size_hint()
}
}
/// Return an iterator to the list of formats that are supported by the system.
///
/// Can be empty if the system doesn't support audio in general.
#[inline]
pub fn endpoints() -> EndpointsIterator {
EndpointsIterator(Default::default())
}
/// Deprecated. Use `endpoints()` instead.
#[inline]
#[deprecated]
pub fn get_endpoints_list() -> EndpointsIterator {
EndpointsIterator(Default::default())
}
/// Return the default endpoint, or `None` if no device is available.
#[inline]
pub fn default_endpoint() -> Option<Endpoint> {
cpal_impl::default_endpoint().map(Endpoint)
}
/// Deprecated. Use `default_endpoint()` instead.
#[inline]
#[deprecated]
pub fn get_default_endpoint() -> Option<Endpoint> {
default_endpoint()
}
/// An opaque type that identifies an endpoint that is capable of playing audio.
///
/// Please note that endpoints may become invalid if they get disconnected. Therefore all the
/// methods that involve an endpoint return a `Result`.
#[derive(Clone, PartialEq, Eq)]
pub struct Device(cpal_impl::Device);
/// Collection of streams managed together.
///
/// Created with the [`new`](struct.EventLoop.html#method.new) method.
pub struct EventLoop(cpal_impl::EventLoop);
/// Identifier of a stream within the `EventLoop`.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct StreamId(cpal_impl::StreamId);
/// Number of channels.
pub type ChannelCount = u16;
/// The number of samples processed per second for a single channel of audio.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub struct SampleRate(pub u32);
/// The format of an input or output audio stream.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Format {
pub channels: ChannelCount,
@@ -252,26 +184,7 @@ pub struct Format {
pub data_type: SampleFormat,
}
/// Describes a range of supported stream formats.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct SupportedFormat {
pub channels: ChannelCount,
@@ -279,10 +192,304 @@ pub struct SupportedFormat {
pub min_sample_rate: SampleRate,
/// Maximum value for the sample rate of the supported formats.
pub max_sample_rate: SampleRate,
/// Type of data expected by the device.
pub data_type: SampleFormat,
}
/// Stream data passed to the `EventLoop::run` callback.
pub enum StreamData<'a> {
Input {
buffer: UnknownTypeInputBuffer<'a>,
},
Output {
buffer: UnknownTypeOutputBuffer<'a>,
},
}
/// Represents a buffer containing audio data that may be read.
///
/// This struct implements the `Deref` trait targeting `[T]`. Therefore this buffer can be read the
/// same way as reading from a `Vec` or any other kind of Rust array.
// TODO: explain audio stuff in general
pub struct InputBuffer<'a, T: 'a>
where
T: Sample,
{
// Always contains something, taken by `Drop`
// TODO: change that
buffer: Option<cpal_impl::InputBuffer<'a, T>>,
}
/// Represents a buffer that must be filled with audio data.
///
/// You should destroy this object as soon as possible. Data is only sent to the audio device when
/// this object is destroyed.
///
/// This struct implements the `Deref` and `DerefMut` traits to `[T]`. Therefore writing to this
/// buffer is done in the same way as writing to a `Vec` or any other kind of Rust array.
// TODO: explain audio stuff in general
#[must_use]
pub struct OutputBuffer<'a, T: 'a>
where
T: Sample,
{
// Always contains something, taken by `Drop`
// TODO: change that
target: Option<cpal_impl::OutputBuffer<'a, T>>,
}
/// This is the type that is provided to you by cpal when you want to read samples from a buffer.
///
/// Since the type of data is only known at runtime, you have to read the right buffer.
pub enum UnknownTypeInputBuffer<'a> {
/// Samples whose format is `u16`.
U16(InputBuffer<'a, u16>),
/// Samples whose format is `i16`.
I16(InputBuffer<'a, i16>),
/// Samples whose format is `f32`.
F32(InputBuffer<'a, f32>),
}
/// This is the type that is provided to you by cpal when you want to write samples to a buffer.
///
/// Since the type of data is only known at runtime, you have to fill the right buffer.
pub enum UnknownTypeOutputBuffer<'a> {
/// Samples whose format is `u16`.
U16(OutputBuffer<'a, u16>),
/// Samples whose format is `i16`.
I16(OutputBuffer<'a, i16>),
/// Samples whose format is `f32`.
F32(OutputBuffer<'a, f32>),
}
/// An iterator yielding all `Device`s currently available to the system.
///
/// See [`devices()`](fn.devices.html).
pub struct Devices(cpal_impl::Devices);
/// A `Devices` yielding only *input* devices.
pub type InputDevices = iter::Filter<Devices, fn(&Device) -> bool>;
/// A `Devices` yielding only *output* devices.
pub type OutputDevices = iter::Filter<Devices, fn(&Device) -> bool>;
/// An iterator that produces a list of input stream formats supported by the device.
///
/// See [`Device::supported_input_formats()`](struct.Device.html#method.supported_input_formats).
pub struct SupportedInputFormats(cpal_impl::SupportedInputFormats);
/// An iterator that produces a list of output stream formats supported by the device.
///
/// See [`Device::supported_output_formats()`](struct.Device.html#method.supported_output_formats).
pub struct SupportedOutputFormats(cpal_impl::SupportedOutputFormats);
/// Error that can happen when enumerating the list of supported formats.
#[derive(Debug)]
pub enum FormatsEnumerationError {
/// The device no longer exists. This can happen if the device is disconnected while the
/// program is running.
DeviceNotAvailable,
}
/// May occur when attempting to request the default input or output stream format from a `Device`.
#[derive(Debug)]
pub enum DefaultFormatError {
/// The device no longer exists. This can happen if the device is disconnected while the
/// program is running.
DeviceNotAvailable,
/// Returned if e.g. the default input format was requested on an output-only audio device.
StreamTypeNotSupported,
}
/// Error that can happen when creating a `Stream`.
#[derive(Debug)]
pub enum CreationError {
/// The device no longer exists. This can happen if the device is disconnected while the
/// program is running.
DeviceNotAvailable,
/// The required format is not supported.
FormatNotSupported,
}
/// An iterator yielding all `Device`s currently available to the system.
///
/// Can be empty if the system does not support audio in general.
#[inline]
pub fn devices() -> Devices {
Devices(Default::default())
}
/// An iterator yielding all `Device`s currently available to the system that support one or more
/// input stream formats.
///
/// Can be empty if the system does not support audio input.
pub fn input_devices() -> InputDevices {
fn supports_input(device: &Device) -> bool {
device.supported_input_formats()
.map(|mut iter| iter.next().is_some())
.unwrap_or(false)
}
devices().filter(supports_input)
}
/// An iterator yielding all `Device`s currently available to the system that support one or more
/// output stream formats.
///
/// Can be empty if the system does not support audio output.
pub fn output_devices() -> OutputDevices {
fn supports_output(device: &Device) -> bool {
device.supported_output_formats()
.map(|mut iter| iter.next().is_some())
.unwrap_or(false)
}
devices().filter(supports_output)
}
/// The default input audio device on the system.
///
/// Returns `None` if no input device is available.
pub fn default_input_device() -> Option<Device> {
cpal_impl::default_input_device().map(Device)
}
/// The default output audio device on the system.
///
/// Returns `None` if no output device is available.
pub fn default_output_device() -> Option<Device> {
cpal_impl::default_output_device().map(Device)
}
impl Device {
/// The human-readable name of the device.
#[inline]
pub fn name(&self) -> String {
self.0.name()
}
/// An iterator yielding the input stream formats that are supported by the device.
///
/// Can return an error if the device is no longer valid (e.g. it has been disconnected).
#[inline]
pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
Ok(SupportedInputFormats(self.0.supported_input_formats()?))
}
/// An iterator yielding output stream formats that are supported by the device.
///
/// Can return an error if the device is no longer valid (e.g. it has been disconnected).
#[inline]
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
Ok(SupportedOutputFormats(self.0.supported_output_formats()?))
}
/// The default input stream format for the device.
#[inline]
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
self.0.default_input_format()
}
/// The default output stream format for the device.
#[inline]
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
self.0.default_output_format()
}
}
impl EventLoop {
/// Initializes a new events loop.
#[inline]
pub fn new() -> EventLoop {
EventLoop(cpal_impl::EventLoop::new())
}
/// Creates a new input stream that will run from the given device and with the given format.
///
/// On success, returns an identifier for the stream.
///
/// Can return an error if the device is no longer valid, or if the input stream format is not
/// supported by the device.
#[inline]
pub fn build_input_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
self.0.build_input_stream(&device.0, format).map(StreamId)
}
/// Creates a new output stream that will play on the given device and with the given format.
///
/// On success, returns an identifier for the stream.
///
/// Can return an error if the device is no longer valid, or if the output stream format is not
/// supported by the device.
#[inline]
pub fn build_output_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
self.0.build_output_stream(&device.0, format).map(StreamId)
}
/// Instructs the audio device that it should start playing the stream with the given ID.
///
/// Has no effect if the stream was already playing.
///
/// Only call this after you have submitted some data, otherwise you may hear some glitches.
///
/// # Panic
///
/// If the stream does not exist, this function can either panic or be a no-op.
///
#[inline]
pub fn play_stream(&self, stream: StreamId) {
self.0.play_stream(stream.0)
}
/// Instructs the audio device that it should stop playing the stream with the given ID.
///
/// Has no effect if the stream was already paused.
///
/// If you call `play` afterwards, the playback will resume where it was.
///
/// # Panic
///
/// If the stream does not exist, this function can either panic or be a no-op.
///
#[inline]
pub fn pause_stream(&self, stream: StreamId) {
self.0.pause_stream(stream.0)
}
/// Destroys an existing stream.
///
/// # Panic
///
/// If the stream does not exist, this function can either panic or be a no-op.
///
#[inline]
pub fn destroy_stream(&self, stream_id: StreamId) {
self.0.destroy_stream(stream_id.0)
}
/// Takes control of the current thread and begins the stream processing.
///
/// > **Note**: Since it takes control of the thread, this method is best called on a separate
/// > thread.
///
/// Whenever a stream needs to be fed some data, the closure passed as parameter is called.
/// You can call the other methods of `EventLoop` without getting a deadlock.
#[inline]
pub fn run<F>(&self, mut callback: F) -> !
where F: FnMut(StreamId, StreamData) + Send
{
self.0.run(move |id, data| callback(StreamId(id), data))
}
}
impl SupportedFormat {
/// Turns this `SupportedFormat` into a `Format` corresponding to the maximum sample rate.
#[inline]
@@ -293,6 +500,150 @@ impl SupportedFormat {
data_type: self.data_type,
}
}
/// A comparison function which compares two `SupportedFormat`s in terms of their priority of
/// use as a default stream format.
///
/// Some backends do not provide a default stream format for their audio devices. In these
/// cases, CPAL attempts to decide on a reasonable default format for the user. To do this we
/// use the "greatest" of all supported stream formats when compared with this method.
///
/// Formats are prioritised by the following heuristics:
///
/// **Channels**:
///
/// - Stereo
/// - Mono
/// - Max available channels
///
/// **Sample format**:
/// - f32
/// - i16
/// - u16
///
/// **Sample rate**:
///
/// - 44100 (cd quality)
/// - Max sample rate
pub fn cmp_default_heuristics(&self, other: &Self) -> std::cmp::Ordering {
use std::cmp::Ordering::Equal;
use SampleFormat::{F32, I16, U16};
let cmp_stereo = (self.channels == 2).cmp(&(other.channels == 2));
if cmp_stereo != Equal {
return cmp_stereo;
}
let cmp_mono = (self.channels == 1).cmp(&(other.channels == 1));
if cmp_mono != Equal {
return cmp_mono;
}
let cmp_channels = self.channels.cmp(&other.channels);
if cmp_channels != Equal {
return cmp_channels;
}
let cmp_f32 = (self.data_type == F32).cmp(&(other.data_type == F32));
if cmp_f32 != Equal {
return cmp_f32;
}
let cmp_i16 = (self.data_type == I16).cmp(&(other.data_type == I16));
if cmp_i16 != Equal {
return cmp_i16;
}
let cmp_u16 = (self.data_type == U16).cmp(&(other.data_type == U16));
if cmp_u16 != Equal {
return cmp_u16;
}
const HZ_44100: SampleRate = SampleRate(44_100);
let r44100_in_self = self.min_sample_rate <= HZ_44100
&& HZ_44100 <= self.max_sample_rate;
let r44100_in_other = other.min_sample_rate <= HZ_44100
&& HZ_44100 <= other.max_sample_rate;
let cmp_r44100 = r44100_in_self.cmp(&r44100_in_other);
if cmp_r44100 != Equal {
return cmp_r44100;
}
self.max_sample_rate.cmp(&other.max_sample_rate)
}
}
impl<'a, T> Deref for InputBuffer<'a, T>
where T: Sample
{
type Target = [T];
#[inline]
fn deref(&self) -> &[T] {
self.buffer.as_ref().unwrap().buffer()
}
}
impl<'a, T> Drop for InputBuffer<'a, T>
where T: Sample
{
#[inline]
fn drop(&mut self) {
self.buffer.take().unwrap().finish();
}
}
impl<'a, T> Deref for OutputBuffer<'a, T>
where T: Sample
{
type Target = [T];
#[inline]
fn deref(&self) -> &[T] {
panic!("It is forbidden to read from the audio buffer");
}
}
impl<'a, T> DerefMut for OutputBuffer<'a, T>
where T: Sample
{
#[inline]
fn deref_mut(&mut self) -> &mut [T] {
self.target.as_mut().unwrap().buffer()
}
}
impl<'a, T> Drop for OutputBuffer<'a, T>
where T: Sample
{
#[inline]
fn drop(&mut self) {
self.target.take().unwrap().finish();
}
}
impl<'a> UnknownTypeInputBuffer<'a> {
/// Returns the length of the buffer in number of samples.
#[inline]
pub fn len(&self) -> usize {
match self {
&UnknownTypeInputBuffer::U16(ref buf) => buf.len(),
&UnknownTypeInputBuffer::I16(ref buf) => buf.len(),
&UnknownTypeInputBuffer::F32(ref buf) => buf.len(),
}
}
}
impl<'a> UnknownTypeOutputBuffer<'a> {
/// Returns the length of the buffer in number of samples.
#[inline]
pub fn len(&self) -> usize {
match self {
&UnknownTypeOutputBuffer::U16(ref buf) => buf.target.as_ref().unwrap().len(),
&UnknownTypeOutputBuffer::I16(ref buf) => buf.target.as_ref().unwrap().len(),
&UnknownTypeOutputBuffer::F32(ref buf) => buf.target.as_ref().unwrap().len(),
}
}
}
impl From<Format> for SupportedFormat {
@@ -307,123 +658,48 @@ impl From<Format> for SupportedFormat {
}
}
impl Iterator for Devices {
type Item = Device;
#[inline]
fn next(&mut self) -> Option<Self::Item> {
self.0.next().map(Device)
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.0.size_hint()
}
}
impl Iterator for SupportedInputFormats {
type Item = SupportedFormat;
#[inline]
fn next(&mut self) -> Option<SupportedFormat> {
self.0.next()
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.0.size_hint()
}
}
impl Iterator for SupportedOutputFormats {
type Item = SupportedFormat;
#[inline]
fn next(&mut self) -> Option<SupportedFormat> {
self.0.next()
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
self.0.size_hint()
}
}
/// Identifier of a voice in an events loop.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct VoiceId(cpal_impl::VoiceId);
/// This is the struct that is provided to you by cpal when you want to write samples to a buffer.
///
/// Since the type of data is only known at runtime, you have to fill the right buffer.
pub enum UnknownTypeBuffer<'a> {
/// Samples whose format is `u16`.
U16(Buffer<'a, u16>),
/// Samples whose format is `i16`.
I16(Buffer<'a, i16>),
/// Samples whose format is `f32`.
F32(Buffer<'a, f32>),
}
impl<'a> UnknownTypeBuffer<'a> {
/// Returns the length of the buffer in number of samples.
#[inline]
pub fn len(&self) -> usize {
match self {
&UnknownTypeBuffer::U16(ref buf) => buf.target.as_ref().unwrap().len(),
&UnknownTypeBuffer::I16(ref buf) => buf.target.as_ref().unwrap().len(),
&UnknownTypeBuffer::F32(ref buf) => buf.target.as_ref().unwrap().len(),
}
}
}
/// Error that can happen when enumerating the list of supported formats.
#[derive(Debug)]
pub enum FormatsEnumerationError {
/// The device no longer exists. This can happen if the device is disconnected while the
/// program is running.
DeviceNotAvailable,
}
impl fmt::Display for FormatsEnumerationError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
@@ -442,17 +718,6 @@ impl Error for FormatsEnumerationError {
}
}
/// Error that can happen when creating a `Voice`.
#[derive(Debug)]
pub enum CreationError {
/// The device no longer exists. This can happen if the device is disconnected while the
/// program is running.
DeviceNotAvailable,
/// The required format is not supported.
FormatNotSupported,
}
impl fmt::Display for CreationError {
#[inline]
fn fmt(&self, fmt: &mut fmt::Formatter) -> Result<(), fmt::Error> {
@@ -475,48 +740,23 @@ impl Error for CreationError {
}
}
// If a backend does not provide an API for retrieving supported formats, we query it with a bunch
// of commonly used rates. This is always the case for wasapi and is sometimes the case for alsa.
//
// If a rate you desire is missing from this list, feel free to add it!
#[cfg(target_os = "windows")]
const COMMON_SAMPLE_RATES: &'static [SampleRate] = &[
SampleRate(5512),
SampleRate(8000),
SampleRate(11025),
SampleRate(16000),
SampleRate(22050),
SampleRate(32000),
SampleRate(44100),
SampleRate(48000),
SampleRate(64000),
SampleRate(88200),
SampleRate(96000),
SampleRate(176400),
SampleRate(192000),
];


@@ -3,12 +3,14 @@
use std::marker::PhantomData;
use CreationError;
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use StreamData;
use SupportedFormat;
pub struct EventLoop;
impl EventLoop {
#[inline]
pub fn new() -> EventLoop {
@@ -17,59 +19,84 @@ impl EventLoop {
#[inline]
pub fn run<F>(&self, _callback: F) -> !
where F: FnMut(StreamId, StreamData)
{
loop { /* TODO: don't spin */ }
}
#[inline]
pub fn build_input_stream(&self, _: &Device, _: &Format) -> Result<StreamId, CreationError> {
Err(CreationError::DeviceNotAvailable)
}
#[inline]
pub fn build_output_stream(&self, _: &Device, _: &Format) -> Result<StreamId, CreationError> {
Err(CreationError::DeviceNotAvailable)
}
#[inline]
pub fn destroy_stream(&self, _: StreamId) {
unimplemented!()
}
#[inline]
pub fn play_stream(&self, _: StreamId) {
panic!()
}
#[inline]
pub fn pause_stream(&self, _: StreamId) {
panic!()
}
}
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct StreamId;
#[derive(Default)]
pub struct Devices;
impl Iterator for Devices {
type Item = Device;
#[inline]
fn next(&mut self) -> Option<Device> {
None
}
}
#[inline]
pub fn default_input_device() -> Option<Device> {
None
}
#[inline]
pub fn default_output_device() -> Option<Device> {
None
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Device;
impl Device {
#[inline]
pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
unimplemented!()
}
#[inline]
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
unimplemented!()
}
#[inline]
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
unimplemented!()
}
#[inline]
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
unimplemented!()
}
#[inline]
@@ -78,9 +105,10 @@ impl Endpoint {
}
}
pub struct SupportedInputFormats;
pub struct SupportedOutputFormats;
impl Iterator for SupportedInputFormats {
type Item = SupportedFormat;
#[inline]
@@ -89,14 +117,38 @@ impl Iterator for SupportedFormatsIterator {
}
}
impl Iterator for SupportedOutputFormats {
type Item = SupportedFormat;
#[inline]
fn next(&mut self) -> Option<SupportedFormat> {
None
}
}
pub struct InputBuffer<'a, T: 'a> {
marker: PhantomData<&'a T>,
}
pub struct OutputBuffer<'a, T: 'a> {
marker: PhantomData<&'a mut T>,
}
impl<'a, T> InputBuffer<'a, T> {
#[inline]
pub fn buffer(&self) -> &[T] {
unimplemented!()
}
#[inline]
pub fn finish(self) {
}
}
impl<'a, T> OutputBuffer<'a, T> {
#[inline]
pub fn buffer(&mut self) -> &mut [T] {
unimplemented!()
}
#[inline]

src/wasapi/device.rs (new file)

@@ -0,0 +1,727 @@
use std;
use std::ffi::OsString;
use std::io::Error as IoError;
use std::mem;
use std::ops::{Deref, DerefMut};
use std::os::windows::ffi::OsStringExt;
use std::ptr;
use std::slice;
use std::sync::{Arc, Mutex, MutexGuard};
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use SampleFormat;
use SampleRate;
use SupportedFormat;
use COMMON_SAMPLE_RATES;
use super::check_result;
use super::com;
use super::winapi::Interface;
use super::winapi::shared::devpkey;
use super::winapi::shared::ksmedia;
use super::winapi::shared::guiddef::{
GUID,
};
use super::winapi::shared::winerror;
use super::winapi::shared::minwindef::{
DWORD,
};
use super::winapi::shared::mmreg;
use super::winapi::shared::wtypes;
use super::winapi::um::coml2api;
use super::winapi::um::audioclient::{
IAudioClient,
IID_IAudioClient,
AUDCLNT_E_DEVICE_INVALIDATED,
};
use super::winapi::um::audiosessiontypes::{
AUDCLNT_SHAREMODE_SHARED,
};
use super::winapi::um::combaseapi::{
CoCreateInstance,
CoTaskMemFree,
CLSCTX_ALL,
PropVariantClear,
};
use super::winapi::um::mmdeviceapi::{
eAll,
eCapture,
eConsole,
eRender,
CLSID_MMDeviceEnumerator,
DEVICE_STATE_ACTIVE,
EDataFlow,
IMMDevice,
IMMDeviceCollection,
IMMDeviceEnumerator,
IMMEndpoint,
};
pub type SupportedInputFormats = std::vec::IntoIter<SupportedFormat>;
pub type SupportedOutputFormats = std::vec::IntoIter<SupportedFormat>;
/// Wrapper around the raw `IAudioClient` pointer so that it can be marked `Send` and `Sync`
/// (raw pointers are neither).
#[derive(Copy, Clone)]
struct IAudioClientWrapper(*mut IAudioClient);
unsafe impl Send for IAudioClientWrapper {
}
unsafe impl Sync for IAudioClientWrapper {
}
/// An opaque type that identifies an audio device.
pub struct Device {
device: *mut IMMDevice,
/// We cache an uninitialized `IAudioClient` so that we can call functions from it without
/// having to create/destroy audio clients all the time.
future_audio_client: Arc<Mutex<Option<IAudioClientWrapper>>>, // TODO: add NonZero around the ptr
}
struct Endpoint {
endpoint: *mut IMMEndpoint,
}
enum WaveFormat {
Ex(mmreg::WAVEFORMATEX),
Extensible(mmreg::WAVEFORMATEXTENSIBLE),
}
// Use RAII to make sure CoTaskMemFree is called when we are responsible for freeing.
struct WaveFormatExPtr(*mut mmreg::WAVEFORMATEX);
impl Drop for WaveFormatExPtr {
fn drop(&mut self) {
unsafe {
CoTaskMemFree(self.0 as *mut _);
}
}
}
impl WaveFormat {
// Given a pointer to some format, returns a valid copy of the format.
pub fn copy_from_waveformatex_ptr(ptr: *const mmreg::WAVEFORMATEX) -> Option<Self> {
unsafe {
match (*ptr).wFormatTag {
mmreg::WAVE_FORMAT_PCM | mmreg::WAVE_FORMAT_IEEE_FLOAT => {
Some(WaveFormat::Ex(*ptr))
},
mmreg::WAVE_FORMAT_EXTENSIBLE => {
let extensible_ptr = ptr as *const mmreg::WAVEFORMATEXTENSIBLE;
Some(WaveFormat::Extensible(*extensible_ptr))
},
_ => None,
}
}
}
// Get the pointer to the WAVEFORMATEX struct.
pub fn as_ptr(&self) -> *const mmreg::WAVEFORMATEX {
self.deref() as *const _
}
}
impl Deref for WaveFormat {
type Target = mmreg::WAVEFORMATEX;
fn deref(&self) -> &Self::Target {
match *self {
WaveFormat::Ex(ref f) => f,
WaveFormat::Extensible(ref f) => &f.Format,
}
}
}
impl DerefMut for WaveFormat {
fn deref_mut(&mut self) -> &mut Self::Target {
match *self {
WaveFormat::Ex(ref mut f) => f,
WaveFormat::Extensible(ref mut f) => &mut f.Format,
}
}
}
unsafe fn immendpoint_from_immdevice(device: *const IMMDevice) -> *mut IMMEndpoint {
let mut endpoint: *mut IMMEndpoint = mem::uninitialized();
check_result((*device).QueryInterface(&IMMEndpoint::uuidof(), &mut endpoint as *mut _ as *mut _))
.expect("could not query IMMDevice interface for IMMEndpoint");
endpoint
}
unsafe fn data_flow_from_immendpoint(endpoint: *const IMMEndpoint) -> EDataFlow {
let mut data_flow = mem::uninitialized();
check_result((*endpoint).GetDataFlow(&mut data_flow))
.expect("could not get endpoint data_flow");
data_flow
}
// Given the audio client and format, returns whether or not the format is supported.
pub unsafe fn is_format_supported(
client: *const IAudioClient,
waveformatex_ptr: *const mmreg::WAVEFORMATEX,
) -> Result<bool, FormatsEnumerationError>
{
/*
// `IsFormatSupported` checks whether the format is supported and fills
// a `WAVEFORMATEX`
let mut dummy_fmt_ptr: *mut mmreg::WAVEFORMATEX = mem::uninitialized();
let hresult =
(*audio_client)
.IsFormatSupported(share_mode, &format_attempt.Format, &mut dummy_fmt_ptr);
// we free that `WAVEFORMATEX` immediately after because we don't need it
if !dummy_fmt_ptr.is_null() {
CoTaskMemFree(dummy_fmt_ptr as *mut _);
}
// `IsFormatSupported` can return `S_FALSE` (which means that a compatible format
// has been found) but we also treat this as an error
match (hresult, check_result(hresult)) {
(_, Err(ref e))
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
(_, Err(e)) => {
(*audio_client).Release();
panic!("{:?}", e);
},
(winerror::S_FALSE, _) => {
(*audio_client).Release();
return Err(CreationError::FormatNotSupported);
},
(_, Ok(())) => (),
};
*/
// Check if the given format is supported.
let is_supported = |waveformatex_ptr, mut closest_waveformatex_ptr| {
let result = (*client).IsFormatSupported(
AUDCLNT_SHAREMODE_SHARED,
waveformatex_ptr,
&mut closest_waveformatex_ptr,
);
// `IsFormatSupported` can return `S_FALSE` (which means that a compatible format
// has been found, but not an exact match) so we also treat this as unsupported.
match (result, check_result(result)) {
(_, Err(ref e)) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(FormatsEnumerationError::DeviceNotAvailable);
},
(_, Err(_)) => {
Ok(false)
},
(winerror::S_FALSE, _) => {
Ok(false)
},
(_, Ok(())) => {
Ok(true)
},
}
};
// First we want to retrieve a pointer to the `WAVEFORMATEX`.
// Although `GetMixFormat` writes the format to a given `WAVEFORMATEX` pointer,
// the pointer itself may actually point to a `WAVEFORMATEXTENSIBLE` structure.
// We check the wFormatTag to determine this and get a pointer to the correct type.
match (*waveformatex_ptr).wFormatTag {
mmreg::WAVE_FORMAT_PCM | mmreg::WAVE_FORMAT_IEEE_FLOAT => {
let mut closest_waveformatex = *waveformatex_ptr;
let closest_waveformatex_ptr = &mut closest_waveformatex as *mut _;
is_supported(waveformatex_ptr, closest_waveformatex_ptr)
},
mmreg::WAVE_FORMAT_EXTENSIBLE => {
let waveformatextensible_ptr =
waveformatex_ptr as *const mmreg::WAVEFORMATEXTENSIBLE;
let mut closest_waveformatextensible = *waveformatextensible_ptr;
let closest_waveformatextensible_ptr =
&mut closest_waveformatextensible as *mut _;
let mut closest_waveformatex_ptr =
closest_waveformatextensible_ptr as *mut mmreg::WAVEFORMATEX;
is_supported(waveformatex_ptr, closest_waveformatex_ptr)
},
_ => Ok(false),
}
}
// Get a cpal Format from a WAVEFORMATEX.
unsafe fn format_from_waveformatex_ptr(
waveformatex_ptr: *const mmreg::WAVEFORMATEX,
) -> Option<Format>
{
fn cmp_guid(a: &GUID, b: &GUID) -> bool {
a.Data1 == b.Data1
&& a.Data2 == b.Data2
&& a.Data3 == b.Data3
&& a.Data4 == b.Data4
}
let data_type = match ((*waveformatex_ptr).wBitsPerSample, (*waveformatex_ptr).wFormatTag) {
(16, mmreg::WAVE_FORMAT_PCM) => SampleFormat::I16,
(32, mmreg::WAVE_FORMAT_IEEE_FLOAT) => SampleFormat::F32,
(n_bits, mmreg::WAVE_FORMAT_EXTENSIBLE) => {
let waveformatextensible_ptr = waveformatex_ptr as *const mmreg::WAVEFORMATEXTENSIBLE;
let sub = (*waveformatextensible_ptr).SubFormat;
if n_bits == 16 && cmp_guid(&sub, &ksmedia::KSDATAFORMAT_SUBTYPE_PCM) {
SampleFormat::I16
} else if n_bits == 32 && cmp_guid(&sub, &ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT) {
SampleFormat::F32
} else {
return None;
}
},
// Unknown data format returned by GetMixFormat.
_ => return None,
};
let format = Format {
channels: (*waveformatex_ptr).nChannels as _,
sample_rate: SampleRate((*waveformatex_ptr).nSamplesPerSec),
data_type: data_type,
};
Some(format)
}
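For reference, the decision table in `format_from_waveformatex_ptr` can be sketched portably. This is a simplified stand-in, not the backend code itself: the format-tag constants mirror the Windows values, but the `WAVEFORMATEX` layout and `SubFormat` GUID comparisons are replaced by plain parameters.

```rust
// Hypothetical, simplified mirror of the (bits-per-sample, format-tag,
// sub-format) mapping used by `format_from_waveformatex_ptr`.
#[derive(Debug, PartialEq)]
enum SampleFormat { I16, F32 }

const WAVE_FORMAT_PCM: u16 = 0x0001;
const WAVE_FORMAT_IEEE_FLOAT: u16 = 0x0003;
const WAVE_FORMAT_EXTENSIBLE: u16 = 0xFFFE;

// Stand-in for the KSDATAFORMAT_SUBTYPE_* GUID comparisons (`cmp_guid`).
#[allow(dead_code)]
#[derive(Debug, PartialEq, Clone, Copy)]
enum SubFormat { Pcm, IeeeFloat, Other }

fn sample_format(bits: u16, tag: u16, sub: Option<SubFormat>) -> Option<SampleFormat> {
    match (bits, tag) {
        (16, WAVE_FORMAT_PCM) => Some(SampleFormat::I16),
        (32, WAVE_FORMAT_IEEE_FLOAT) => Some(SampleFormat::F32),
        (16, WAVE_FORMAT_EXTENSIBLE) if sub == Some(SubFormat::Pcm) => Some(SampleFormat::I16),
        (32, WAVE_FORMAT_EXTENSIBLE) if sub == Some(SubFormat::IeeeFloat) => Some(SampleFormat::F32),
        // Anything else (e.g. 24-bit PCM, unknown sub-format GUIDs) is unsupported.
        _ => None,
    }
}

fn main() {
    assert_eq!(sample_format(16, WAVE_FORMAT_PCM, None), Some(SampleFormat::I16));
    assert_eq!(sample_format(32, WAVE_FORMAT_EXTENSIBLE, Some(SubFormat::IeeeFloat)),
               Some(SampleFormat::F32));
    assert_eq!(sample_format(24, WAVE_FORMAT_PCM, None), None);
}
```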
unsafe impl Send for Device {
}
unsafe impl Sync for Device {
}
impl Device {
pub fn name(&self) -> String {
unsafe {
// Open the device's property store.
let mut property_store = ptr::null_mut();
check_result((*self.device).OpenPropertyStore(coml2api::STGM_READ, &mut property_store))
.expect("failed to open property store");
// Get the endpoint's friendly-name property.
let mut property_value = mem::zeroed();
check_result(
(*property_store).GetValue(
&devpkey::DEVPKEY_Device_FriendlyName as *const _ as *const _,
&mut property_value
)
).expect("failed to get friendly-name from property store");
// Read the friendly-name from the union data field, expecting a *const u16.
assert_eq!(property_value.vt, wtypes::VT_LPWSTR as _);
let ptr_usize: usize = *(&property_value.data as *const _ as *const usize);
let ptr_utf16 = ptr_usize as *const u16;
// Find the length of the friendly name.
let mut len = 0;
while *ptr_utf16.offset(len) != 0 {
len += 1;
}
// Create the utf16 slice and convert it into a string.
let name_slice = slice::from_raw_parts(ptr_utf16, len as usize);
let name_os_string: OsString = OsStringExt::from_wide(name_slice);
let name_string = name_os_string.into_string().unwrap();
// Clean up the property.
PropVariantClear(&mut property_value);
name_string
}
}
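The NUL-scan-then-decode step of `Device::name` can be exercised in isolation. The real code walks a raw `*const u16` with `offset` and uses the Windows-only `OsStringExt::from_wide`; this sketch substitutes a slice and `String::from_utf16` so the logic is portable.

```rust
// Decode a NUL-terminated UTF-16 buffer, mirroring the length scan done on
// the friendly-name pointer in `Device::name`.
fn decode_nul_terminated_utf16(buf: &[u16]) -> Option<String> {
    // Find the length of the name (up to, not including, the NUL terminator).
    let len = buf.iter().position(|&c| c == 0)?;
    String::from_utf16(&buf[..len]).ok()
}

fn main() {
    // "Speakers" encoded as UTF-16, followed by the terminating NUL.
    let wide: Vec<u16> = "Speakers".encode_utf16().chain(Some(0)).collect();
    assert_eq!(decode_nul_terminated_utf16(&wide), Some("Speakers".to_string()));
    // A buffer with no terminator yields None rather than reading out of bounds.
    assert_eq!(decode_nul_terminated_utf16(&[0x41]), None);
}
```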
#[inline]
fn from_immdevice(device: *mut IMMDevice) -> Self {
Device {
device: device,
future_audio_client: Arc::new(Mutex::new(None)),
}
}
/// Ensures that `future_audio_client` contains a `Some` and returns a locked mutex to it.
fn ensure_future_audio_client(&self)
-> Result<MutexGuard<Option<IAudioClientWrapper>>, IoError> {
let mut lock = self.future_audio_client.lock().unwrap();
if lock.is_some() {
return Ok(lock);
}
let audio_client: *mut IAudioClient = unsafe {
let mut audio_client = mem::uninitialized();
let hresult = (*self.device).Activate(&IID_IAudioClient,
CLSCTX_ALL,
ptr::null_mut(),
&mut audio_client);
// can fail if the device has been disconnected since we enumerated it, or if
// the device doesn't support playback for some reason
check_result(hresult)?;
assert!(!audio_client.is_null());
audio_client as *mut _
};
*lock = Some(IAudioClientWrapper(audio_client));
Ok(lock)
}
/// Returns an uninitialized `IAudioClient`.
#[inline]
pub(crate) fn build_audioclient(&self) -> Result<*mut IAudioClient, IoError> {
let mut lock = self.ensure_future_audio_client()?;
let client = lock.unwrap().0;
*lock = None;
Ok(client)
}
// There is no way to query the list of all formats that are supported by the
// audio processor, so instead we just trial some commonly supported formats.
//
// Common formats are trialed by first getting the default format (returned via
// `GetMixFormat`) and then mutating that format with common sample rates and
// querying them via `IsFormatSupported`.
//
// When calling `IsFormatSupported` with the shared-mode audio engine, only the default
// number of channels seems to be supported. Any more or less returns an invalid
// parameter error. Thus we just assume that the default number of channels is the only
// number supported.
fn supported_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
// initializing COM because we call `CoTaskMemFree` to release the format.
com::com_initialized();
// Retrieve the `IAudioClient`.
let lock = match self.ensure_future_audio_client() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(FormatsEnumerationError::DeviceNotAvailable),
e => e.unwrap(),
};
let client = lock.unwrap().0;
unsafe {
// Retrieve the pointer to the default WAVEFORMATEX.
let mut default_waveformatex_ptr = WaveFormatExPtr(mem::uninitialized());
match check_result((*client).GetMixFormat(&mut default_waveformatex_ptr.0)) {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(FormatsEnumerationError::DeviceNotAvailable);
},
Err(e) => panic!("{:?}", e),
Ok(()) => (),
};
// If the default format isn't supported, we have no hope of finding other formats.
assert!(is_format_supported(client, default_waveformatex_ptr.0)?);
// Copy the format to use as a test format (as to avoid mutating the original format).
let mut test_format = {
match WaveFormat::copy_from_waveformatex_ptr(default_waveformatex_ptr.0) {
Some(f) => f,
// If the format is neither EX or EXTENSIBLE we don't know how to work with it.
None => return Ok(vec![].into_iter()),
}
};
// Begin testing common sample rates.
//
// NOTE: We should really be testing for whole ranges here, but it is infeasible to
// test every sample rate up to the overflow limit as the `IsFormatSupported` method is
// quite slow.
let mut supported_sample_rates: Vec<u32> = Vec::new();
for &rate in COMMON_SAMPLE_RATES {
let rate = rate.0 as DWORD;
test_format.nSamplesPerSec = rate;
test_format.nAvgBytesPerSec =
rate * (*default_waveformatex_ptr.0).nBlockAlign as DWORD;
if is_format_supported(client, test_format.as_ptr())? {
supported_sample_rates.push(rate);
}
}
// If the common rates don't include the default one, add the default.
let default_sr = (*default_waveformatex_ptr.0).nSamplesPerSec as _;
if !supported_sample_rates.iter().any(|&r| r == default_sr) {
supported_sample_rates.push(default_sr);
}
// Reset the sample rate on the test format now that we're done.
test_format.nSamplesPerSec = (*default_waveformatex_ptr.0).nSamplesPerSec;
test_format.nAvgBytesPerSec = (*default_waveformatex_ptr.0).nAvgBytesPerSec;
// TODO: Test the different sample formats?
// Create the supported formats.
let mut format = format_from_waveformatex_ptr(default_waveformatex_ptr.0)
.expect("could not create a cpal::Format from a WAVEFORMATEX");
let mut supported_formats = Vec::with_capacity(supported_sample_rates.len());
for rate in supported_sample_rates {
format.sample_rate = SampleRate(rate as _);
supported_formats.push(SupportedFormat::from(format.clone()));
}
Ok(supported_formats.into_iter())
}
}
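The probing strategy above (trial a fixed list of common rates, then make sure the device's default rate is included) can be sketched without any COM calls. The rate list and predicate below are illustrative stand-ins for `COMMON_SAMPLE_RATES` and `IsFormatSupported`:

```rust
// Assumed list of common rates to trial; the real constant lives elsewhere in cpal.
const COMMON_SAMPLE_RATES: &[u32] =
    &[5512, 8000, 11025, 16000, 22050, 32000, 44100, 48000, 96000, 192000];

// Trial each common rate against `is_supported`, then append the default rate
// if the common rates didn't already include it.
fn probe_sample_rates<F>(default_rate: u32, is_supported: F) -> Vec<u32>
where
    F: Fn(u32) -> bool,
{
    let mut rates: Vec<u32> = COMMON_SAMPLE_RATES
        .iter()
        .cloned()
        .filter(|&r| is_supported(r))
        .collect();
    if !rates.contains(&default_rate) {
        rates.push(default_rate);
    }
    rates
}

fn main() {
    // Imagine a device that only accepts 44.1 kHz multiples but defaults to 48 kHz.
    let rates = probe_sample_rates(48000, |r| r % 44100 == 0);
    assert!(rates.contains(&44100));
    assert!(rates.contains(&48000)); // default appended even though the predicate rejects it
    assert!(!rates.contains(&8000));
}
```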
pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
if self.data_flow() == eCapture {
self.supported_formats()
} else {
// If it's an output device, assume it supports no input formats.
Ok(vec![].into_iter())
}
}
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
if self.data_flow() == eRender {
self.supported_formats()
} else {
// If it's an input device, assume it supports no output formats.
Ok(vec![].into_iter())
}
}
// We always create streams in shared mode, therefore all samples go through an audio
// processor to mix them together.
//
// One format is guaranteed to be supported, the one returned by `GetMixFormat`.
fn default_format(&self) -> Result<Format, DefaultFormatError> {
// initializing COM because we call `CoTaskMemFree`
com::com_initialized();
let lock = match self.ensure_future_audio_client() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(DefaultFormatError::DeviceNotAvailable),
e => e.unwrap(),
};
let client = lock.unwrap().0;
unsafe {
let mut format_ptr = WaveFormatExPtr(mem::uninitialized());
match check_result((*client).GetMixFormat(&mut format_ptr.0)) {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(DefaultFormatError::DeviceNotAvailable);
},
Err(e) => panic!("{:?}", e),
Ok(()) => (),
};
format_from_waveformatex_ptr(format_ptr.0)
.ok_or(DefaultFormatError::StreamTypeNotSupported)
}
}
fn data_flow(&self) -> EDataFlow {
let endpoint = Endpoint::from(self.device as *const _);
endpoint.data_flow()
}
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
if self.data_flow() == eCapture {
self.default_format()
} else {
Err(DefaultFormatError::StreamTypeNotSupported)
}
}
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
let data_flow = self.data_flow();
if data_flow == eRender {
self.default_format()
} else {
Err(DefaultFormatError::StreamTypeNotSupported)
}
}
}
impl PartialEq for Device {
#[inline]
fn eq(&self, other: &Device) -> bool {
self.device == other.device
}
}
impl Eq for Device {
}
impl Clone for Device {
#[inline]
fn clone(&self) -> Device {
unsafe {
(*self.device).AddRef();
}
Device {
device: self.device,
future_audio_client: self.future_audio_client.clone(),
}
}
}
impl Drop for Device {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.device).Release();
}
if let Some(client) = self.future_audio_client.lock().unwrap().take() {
unsafe {
(*client.0).Release();
}
}
}
}
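The `Clone`/`Drop` pair above relies on the COM refcounting contract: every `AddRef` must be balanced by exactly one `Release`, so the underlying object is freed exactly once. A portable sketch with a mock in place of `IMMDevice`:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// `MockCom` stands in for a COM object such as `IMMDevice`; the counts mimic
// the values AddRef/Release return.
struct MockCom { refs: AtomicUsize }

impl MockCom {
    fn new() -> Self { MockCom { refs: AtomicUsize::new(1) } }
    fn add_ref(&self) -> usize { self.refs.fetch_add(1, Ordering::SeqCst) + 1 }
    fn release(&self) -> usize { self.refs.fetch_sub(1, Ordering::SeqCst) - 1 }
    fn count(&self) -> usize { self.refs.load(Ordering::SeqCst) }
}

fn main() {
    let device = MockCom::new();
    // `Clone for Device`: bump the count before sharing the raw pointer.
    assert_eq!(device.add_ref(), 2);
    // `Drop for Device`: each handle releases once.
    assert_eq!(device.release(), 1);
    assert_eq!(device.release(), 0); // the object would be freed here
    assert_eq!(device.count(), 0);
}
```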
impl Drop for Endpoint {
fn drop(&mut self) {
unsafe {
(*self.endpoint).Release();
}
}
}
impl From<*const IMMDevice> for Endpoint {
fn from(device: *const IMMDevice) -> Self {
unsafe {
let endpoint = immendpoint_from_immdevice(device);
Endpoint { endpoint: endpoint }
}
}
}
impl Endpoint {
fn data_flow(&self) -> EDataFlow {
unsafe {
data_flow_from_immendpoint(self.endpoint)
}
}
}
lazy_static! {
static ref ENUMERATOR: Enumerator = {
// COM initialization is thread local, but we only need to have COM initialized in the
// thread we create the objects in
com::com_initialized();
// building the devices enumerator object
unsafe {
let mut enumerator: *mut IMMDeviceEnumerator = mem::uninitialized();
let hresult = CoCreateInstance(
&CLSID_MMDeviceEnumerator,
ptr::null_mut(),
CLSCTX_ALL,
&IMMDeviceEnumerator::uuidof(),
&mut enumerator as *mut *mut IMMDeviceEnumerator as *mut _,
);
check_result(hresult).unwrap();
Enumerator(enumerator)
}
};
}
/// RAII object around `IMMDeviceEnumerator`.
struct Enumerator(*mut IMMDeviceEnumerator);
unsafe impl Send for Enumerator {
}
unsafe impl Sync for Enumerator {
}
impl Drop for Enumerator {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.0).Release();
}
}
}
/// WASAPI implementation for `Devices`.
pub struct Devices {
collection: *mut IMMDeviceCollection,
total_count: u32,
next_item: u32,
}
unsafe impl Send for Devices {
}
unsafe impl Sync for Devices {
}
impl Drop for Devices {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.collection).Release();
}
}
}
impl Default for Devices {
fn default() -> Devices {
unsafe {
let mut collection: *mut IMMDeviceCollection = mem::uninitialized();
// can fail because of wrong parameters (should never happen) or out of memory
check_result(
(*ENUMERATOR.0).EnumAudioEndpoints(
eAll,
DEVICE_STATE_ACTIVE,
&mut collection,
)
).unwrap();
let mut count = mem::uninitialized();
// can fail if the parameter is null, which should never happen
check_result((*collection).GetCount(&mut count)).unwrap();
Devices {
collection: collection,
total_count: count,
next_item: 0,
}
}
}
}
impl Iterator for Devices {
type Item = Device;
fn next(&mut self) -> Option<Device> {
if self.next_item >= self.total_count {
return None;
}
unsafe {
let mut device = mem::uninitialized();
// can fail if out of range, which we just checked above
check_result((*self.collection).Item(self.next_item, &mut device)).unwrap();
self.next_item += 1;
Some(Device::from_immdevice(device))
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let num = self.total_count - self.next_item;
let num = num as usize;
(num, Some(num))
}
}
fn default_device(data_flow: EDataFlow) -> Option<Device> {
unsafe {
let mut device = mem::uninitialized();
let hres = (*ENUMERATOR.0)
.GetDefaultAudioEndpoint(data_flow, eConsole, &mut device);
if let Err(_err) = check_result(hres) {
return None; // TODO: check specifically for `E_NOTFOUND`, and panic otherwise
}
Some(Device::from_immdevice(device))
}
}
pub fn default_input_device() -> Option<Device> {
default_device(eCapture)
}
pub fn default_output_device() -> Option<Device> {
default_device(eRender)
}


@@ -1,378 +0,0 @@
use std::ffi::OsString;
use std::io::Error as IoError;
use std::mem;
use std::option::IntoIter as OptionIntoIter;
use std::os::windows::ffi::OsStringExt;
use std::ptr;
use std::slice;
use std::sync::{Arc, Mutex, MutexGuard};
use ChannelCount;
use FormatsEnumerationError;
use SampleFormat;
use SampleRate;
use SupportedFormat;
use super::check_result;
use super::com;
use super::winapi::Interface;
use super::winapi::shared::ksmedia;
use super::winapi::shared::guiddef::{
GUID,
};
use super::winapi::shared::mmreg::{
WAVE_FORMAT_PCM,
WAVE_FORMAT_EXTENSIBLE,
WAVEFORMATEXTENSIBLE,
};
use super::winapi::um::audioclient::{
IAudioClient,
IID_IAudioClient,
AUDCLNT_E_DEVICE_INVALIDATED,
};
use super::winapi::um::combaseapi::{
CoCreateInstance,
CoTaskMemFree,
CLSCTX_ALL,
};
use super::winapi::um::mmdeviceapi::{
eConsole,
eRender,
CLSID_MMDeviceEnumerator,
DEVICE_STATE_ACTIVE,
IMMDevice,
IMMDeviceCollection,
IMMDeviceEnumerator,
};
pub type SupportedFormatsIterator = OptionIntoIter<SupportedFormat>;
/// Wrapper to work around raw pointers not implementing `Send` and `Sync`.
#[derive(Copy, Clone)]
struct IAudioClientWrapper(*mut IAudioClient);
unsafe impl Send for IAudioClientWrapper {
}
unsafe impl Sync for IAudioClientWrapper {
}
/// An opaque type that identifies an end point.
pub struct Endpoint {
device: *mut IMMDevice,
/// We cache an uninitialized `IAudioClient` so that we can call functions from it without
/// having to create/destroy audio clients all the time.
future_audio_client: Arc<Mutex<Option<IAudioClientWrapper>>>, // TODO: add NonZero around the ptr
}
unsafe impl Send for Endpoint {
}
unsafe impl Sync for Endpoint {
}
impl Endpoint {
// TODO: this function returns a GUID of the endpoint;
// instead it should use the property store and return the friendly name
pub fn name(&self) -> String {
unsafe {
let mut name_ptr = mem::uninitialized();
// can only fail if wrong params or out of memory
check_result((*self.device).GetId(&mut name_ptr)).unwrap();
// finding the length of the name
let mut len = 0;
while *name_ptr.offset(len) != 0 {
len += 1;
}
// building a slice containing the name
let name_slice = slice::from_raw_parts(name_ptr, len as usize);
// and turning it into a string
let name_string: OsString = OsStringExt::from_wide(name_slice);
CoTaskMemFree(name_ptr as *mut _);
name_string.into_string().unwrap()
}
}
#[inline]
fn from_immdevice(device: *mut IMMDevice) -> Endpoint {
Endpoint {
device: device,
future_audio_client: Arc::new(Mutex::new(None)),
}
}
/// Ensures that `future_audio_client` contains a `Some` and returns a locked mutex to it.
fn ensure_future_audio_client(&self)
-> Result<MutexGuard<Option<IAudioClientWrapper>>, IoError> {
let mut lock = self.future_audio_client.lock().unwrap();
if lock.is_some() {
return Ok(lock);
}
let audio_client: *mut IAudioClient = unsafe {
let mut audio_client = mem::uninitialized();
let hresult = (*self.device).Activate(&IID_IAudioClient,
CLSCTX_ALL,
ptr::null_mut(),
&mut audio_client);
// can fail if the device has been disconnected since we enumerated it, or if
// the device doesn't support playback for some reason
check_result(hresult)?;
assert!(!audio_client.is_null());
audio_client as *mut _
};
*lock = Some(IAudioClientWrapper(audio_client));
Ok(lock)
}
/// Returns an uninitialized `IAudioClient`.
#[inline]
pub(crate) fn build_audioclient(&self) -> Result<*mut IAudioClient, IoError> {
let mut lock = self.ensure_future_audio_client()?;
let client = lock.unwrap().0;
*lock = None;
Ok(client)
}
pub fn supported_formats(&self) -> Result<SupportedFormatsIterator, FormatsEnumerationError> {
// We always create voices in shared mode, therefore all samples go through an audio
// processor to mix them together.
// However there is no way to query the list of all formats that are supported by the
// audio processor, but one format is guaranteed to be supported, the one returned by
// `GetMixFormat`.
// initializing COM because we call `CoTaskMemFree`
com::com_initialized();
let lock = match self.ensure_future_audio_client() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(FormatsEnumerationError::DeviceNotAvailable),
e => e.unwrap(),
};
let client = lock.unwrap().0;
unsafe {
let mut format_ptr = mem::uninitialized();
match check_result((*client).GetMixFormat(&mut format_ptr)) {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(FormatsEnumerationError::DeviceNotAvailable);
},
Err(e) => panic!("{:?}", e),
Ok(()) => (),
};
let format = {
let (channels, data_type) = match (*format_ptr).wFormatTag {
WAVE_FORMAT_PCM => {
(2, SampleFormat::I16)
},
WAVE_FORMAT_EXTENSIBLE => {
let format_ptr = format_ptr as *const WAVEFORMATEXTENSIBLE;
let channels = (*format_ptr).Format.nChannels as ChannelCount;
let format = {
fn cmp_guid(a: &GUID, b: &GUID) -> bool {
a.Data1 == b.Data1 && a.Data2 == b.Data2 && a.Data3 == b.Data3 &&
a.Data4 == b.Data4
}
if cmp_guid(&(*format_ptr).SubFormat,
&ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT)
{
SampleFormat::F32
} else if cmp_guid(&(*format_ptr).SubFormat,
&ksmedia::KSDATAFORMAT_SUBTYPE_PCM)
{
SampleFormat::I16
} else {
panic!("Unknown SubFormat GUID returned by GetMixFormat");
// TODO: Re-add this to end of panic. Getting
// `trait Debug is not satisfied` error.
//(*format_ptr).SubFormat)
}
};
(channels, format)
},
f => panic!("Unknown data format returned by GetMixFormat: {:?}", f),
};
SupportedFormat {
channels: channels,
min_sample_rate: SampleRate((*format_ptr).nSamplesPerSec),
max_sample_rate: SampleRate((*format_ptr).nSamplesPerSec),
data_type: data_type,
}
};
CoTaskMemFree(format_ptr as *mut _);
Ok(Some(format).into_iter())
}
}
}
impl PartialEq for Endpoint {
#[inline]
fn eq(&self, other: &Endpoint) -> bool {
self.device == other.device
}
}
impl Eq for Endpoint {
}
impl Clone for Endpoint {
#[inline]
fn clone(&self) -> Endpoint {
unsafe {
(*self.device).AddRef();
}
Endpoint {
device: self.device,
future_audio_client: self.future_audio_client.clone(),
}
}
}
impl Drop for Endpoint {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.device).Release();
}
if let Some(client) = self.future_audio_client.lock().unwrap().take() {
unsafe {
(*client.0).Release();
}
}
}
}
lazy_static! {
static ref ENUMERATOR: Enumerator = {
// COM initialization is thread local, but we only need to have COM initialized in the
// thread we create the objects in
com::com_initialized();
// building the devices enumerator object
unsafe {
let mut enumerator: *mut IMMDeviceEnumerator = mem::uninitialized();
let hresult = CoCreateInstance(&CLSID_MMDeviceEnumerator,
ptr::null_mut(), CLSCTX_ALL,
&IMMDeviceEnumerator::uuidof(),
&mut enumerator
as *mut *mut IMMDeviceEnumerator
as *mut _);
check_result(hresult).unwrap();
Enumerator(enumerator)
}
};
}
/// RAII object around `IMMDeviceEnumerator`.
struct Enumerator(*mut IMMDeviceEnumerator);
unsafe impl Send for Enumerator {
}
unsafe impl Sync for Enumerator {
}
impl Drop for Enumerator {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.0).Release();
}
}
}
/// WASAPI implementation for `EndpointsIterator`.
pub struct EndpointsIterator {
collection: *mut IMMDeviceCollection,
total_count: u32,
next_item: u32,
}
unsafe impl Send for EndpointsIterator {
}
unsafe impl Sync for EndpointsIterator {
}
impl Drop for EndpointsIterator {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.collection).Release();
}
}
}
impl Default for EndpointsIterator {
fn default() -> EndpointsIterator {
unsafe {
let mut collection: *mut IMMDeviceCollection = mem::uninitialized();
// can fail because of wrong parameters (should never happen) or out of memory
check_result((*ENUMERATOR.0).EnumAudioEndpoints(eRender,
DEVICE_STATE_ACTIVE,
&mut collection))
.unwrap();
let mut count = mem::uninitialized();
// can fail if the parameter is null, which should never happen
check_result((*collection).GetCount(&mut count)).unwrap();
EndpointsIterator {
collection: collection,
total_count: count,
next_item: 0,
}
}
}
}
impl Iterator for EndpointsIterator {
type Item = Endpoint;
fn next(&mut self) -> Option<Endpoint> {
if self.next_item >= self.total_count {
return None;
}
unsafe {
let mut device = mem::uninitialized();
// can fail if out of range, which we just checked above
check_result((*self.collection).Item(self.next_item, &mut device)).unwrap();
self.next_item += 1;
Some(Endpoint::from_immdevice(device))
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let num = self.total_count - self.next_item;
let num = num as usize;
(num, Some(num))
}
}
pub fn default_endpoint() -> Option<Endpoint> {
unsafe {
let mut device = mem::uninitialized();
let hres = (*ENUMERATOR.0)
.GetDefaultAudioEndpoint(eRender, eConsole, &mut device);
if let Err(_err) = check_result(hres) {
return None; // TODO: check specifically for `E_NOTFOUND`, and panic otherwise
}
Some(Endpoint::from_immdevice(device))
}
}


@@ -2,13 +2,13 @@ extern crate winapi;
 use std::io::Error as IoError;
-pub use self::endpoint::{Endpoint, EndpointsIterator, SupportedFormatsIterator, default_endpoint};
+pub use self::device::{Device, Devices, SupportedInputFormats, SupportedOutputFormats, default_input_device, default_output_device};
-pub use self::voice::{Buffer, EventLoop, VoiceId};
+pub use self::stream::{InputBuffer, OutputBuffer, EventLoop, StreamId};
 use self::winapi::um::winnt::HRESULT;
 mod com;
-mod endpoint;
+mod device;
-mod voice;
+mod stream;
 #[inline]
 fn check_result(result: HRESULT) -> Result<(), IoError> {

src/wasapi/stream.rs (new file)

@@ -0,0 +1,768 @@
use super::Device;
use super::check_result;
use super::com;
use super::winapi::shared::basetsd::UINT32;
use super::winapi::shared::ksmedia;
use super::winapi::shared::minwindef::{BYTE, DWORD, FALSE, WORD};
use super::winapi::shared::mmreg;
use super::winapi::um::audioclient::{self, AUDCLNT_E_DEVICE_INVALIDATED};
use super::winapi::um::audiosessiontypes::{AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_EVENTCALLBACK};
use super::winapi::um::handleapi;
use super::winapi::um::synchapi;
use super::winapi::um::winbase;
use super::winapi::um::winnt;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::slice;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use CreationError;
use Format;
use SampleFormat;
use StreamData;
use UnknownTypeOutputBuffer;
use UnknownTypeInputBuffer;
pub struct EventLoop {
// Data used by the `run()` function implementation. The mutex is kept locked permanently
// by `run()`. This ensures that two `run()` invocations can't run at the same time, and
// also means that we shouldn't try to lock this field from anywhere else but `run()`.
run_context: Mutex<RunContext>,
// Identifier of the next stream to create. Each new stream increases this counter. If the
// counter overflows, there's a panic.
// TODO: use AtomicU64 instead
next_stream_id: AtomicUsize,
// Commands processed by the `run()` method that is currently running.
// `pending_scheduled_event` must be signalled whenever a command is added here, so that it
// will get picked up.
// TODO: use a lock-free container
commands: Mutex<Vec<Command>>,
// This event is signalled after a new entry is added to `commands`, so that the `run()`
// method can be notified.
pending_scheduled_event: winnt::HANDLE,
}
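The command-queue handshake described in the comments above (push a command under the mutex, then signal `pending_scheduled_event` so `run()` wakes up and drains the queue) can be sketched portably. A `Condvar` stands in for the Win32 event handle here; none of these names are from the backend itself:

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

#[allow(dead_code)]
#[derive(Debug, PartialEq)]
enum Command { PlayStream(usize), PauseStream(usize) }

struct Queue {
    commands: Mutex<Vec<Command>>,
    pending: Condvar, // stands in for `pending_scheduled_event`
}

// The `run()` side: block until at least one command is queued, then drain them all.
fn wait_and_drain(queue: &Queue) -> Vec<Command> {
    let mut guard = queue.commands.lock().unwrap();
    while guard.is_empty() {
        guard = queue.pending.wait(guard).unwrap();
    }
    guard.drain(..).collect()
}

fn main() {
    let queue = Arc::new(Queue { commands: Mutex::new(Vec::new()), pending: Condvar::new() });
    let producer = Arc::clone(&queue);
    thread::spawn(move || {
        // A `build_*_stream` caller: push the command, then signal.
        producer.commands.lock().unwrap().push(Command::PlayStream(0));
        producer.pending.notify_one();
    });
    assert_eq!(wait_and_drain(&queue), vec![Command::PlayStream(0)]);
}
```

Unlike `SetEvent`, a condition variable can wake spuriously, hence the `while` loop around `wait`.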
struct RunContext {
// Streams that have been created in this event loop.
streams: Vec<StreamInner>,
// Handles corresponding to the `event` field of each element of `streams`. Must always be
// in sync with `streams`, except that the first element is always `pending_scheduled_event`.
handles: Vec<winnt::HANDLE>,
}
enum Command {
NewStream(StreamInner),
DestroyStream(StreamId),
PlayStream(StreamId),
PauseStream(StreamId),
}
enum AudioClientFlow {
Render {
render_client: *mut audioclient::IAudioRenderClient,
},
Capture {
capture_client: *mut audioclient::IAudioCaptureClient,
},
}
struct StreamInner {
id: StreamId,
audio_client: *mut audioclient::IAudioClient,
client_flow: AudioClientFlow,
// Event that is signalled by WASAPI whenever audio data must be written.
event: winnt::HANDLE,
// True if the stream is currently playing. False if paused.
playing: bool,
// Number of frames of audio data in the underlying buffer allocated by WASAPI.
max_frames_in_buffer: UINT32,
// Number of bytes that each frame occupies.
bytes_per_frame: WORD,
// The sample format with which the stream was created.
sample_format: SampleFormat,
}
impl EventLoop {
pub fn new() -> EventLoop {
let pending_scheduled_event =
unsafe { synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null()) };
EventLoop {
pending_scheduled_event: pending_scheduled_event,
run_context: Mutex::new(RunContext {
streams: Vec::new(),
handles: vec![pending_scheduled_event],
}),
next_stream_id: AtomicUsize::new(0),
commands: Mutex::new(Vec::new()),
}
}
pub fn build_input_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
unsafe {
// Make sure that COM is initialized.
// It's not certain that this is required here, but better safe than sorry.
com::com_initialized();
// Obtaining a `IAudioClient`.
let audio_client = match device.build_audioclient() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(CreationError::DeviceNotAvailable),
e => e.unwrap(),
};
// Computing the format and initializing the device.
let waveformatex = {
let format_attempt = format_to_waveformatextensible(format)
.ok_or(CreationError::FormatNotSupported)?;
let share_mode = AUDCLNT_SHAREMODE_SHARED;
// Ensure the format is supported.
match super::device::is_format_supported(audio_client, &format_attempt.Format) {
Ok(false) => return Err(CreationError::FormatNotSupported),
Err(_) => return Err(CreationError::DeviceNotAvailable),
_ => (),
}
// finally initializing the audio client
let hresult = (*audio_client).Initialize(
share_mode,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
&format_attempt.Format,
ptr::null(),
);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
format_attempt.Format
};
// obtaining the size of the samples buffer in number of frames
let max_frames_in_buffer = {
let mut max_frames_in_buffer = mem::uninitialized();
let hresult = (*audio_client).GetBufferSize(&mut max_frames_in_buffer);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
max_frames_in_buffer
};
// Creating the event that will be signalled whenever we need to submit some samples.
let event = {
let event = synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null());
if event == ptr::null_mut() {
(*audio_client).Release();
panic!("Failed to create event");
}
match check_result((*audio_client).SetEventHandle(event)) {
Err(_) => {
(*audio_client).Release();
panic!("Failed to call SetEventHandle")
},
Ok(_) => (),
};
event
};
// Building a `IAudioCaptureClient` that will be used to read captured samples.
let capture_client = {
let mut capture_client: *mut audioclient::IAudioCaptureClient = mem::uninitialized();
let hresult = (*audio_client).GetService(
&audioclient::IID_IAudioCaptureClient,
&mut capture_client as *mut *mut audioclient::IAudioCaptureClient as *mut _,
);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
&mut *capture_client
};
let new_stream_id = StreamId(self.next_stream_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_stream_id.0, usize::max_value()); // check for overflows
// Once we built the `StreamInner`, we add a command that will be picked up by the
// `run()` method and added to the `RunContext`.
{
let client_flow = AudioClientFlow::Capture {
capture_client: capture_client,
};
let inner = StreamInner {
id: new_stream_id.clone(),
audio_client: audio_client,
client_flow: client_flow,
event: event,
playing: false,
max_frames_in_buffer: max_frames_in_buffer,
bytes_per_frame: waveformatex.nBlockAlign,
sample_format: format.data_type,
};
self.commands.lock().unwrap().push(Command::NewStream(inner));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
};
Ok(new_stream_id)
}
}
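Stream-id allocation in `build_input_stream`/`build_output_stream` is just a relaxed `fetch_add` on a shared counter plus an overflow assertion (hence the TODO about `AtomicU64`). A self-contained sketch:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

#[derive(Debug, Clone, PartialEq)]
struct StreamId(usize);

// Allocate the next stream id, mirroring `next_stream_id.fetch_add(1, Relaxed)`
// followed by the overflow check in the backend.
fn next_stream_id(counter: &AtomicUsize) -> StreamId {
    let id = StreamId(counter.fetch_add(1, Ordering::Relaxed));
    assert_ne!(id.0, usize::MAX); // check for overflow
    id
}

fn main() {
    let counter = AtomicUsize::new(0);
    assert_eq!(next_stream_id(&counter), StreamId(0));
    assert_eq!(next_stream_id(&counter), StreamId(1));
    assert_eq!(counter.load(Ordering::Relaxed), 2);
}
```

`Relaxed` ordering suffices because the counter is only used for uniqueness, not to synchronize other memory.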
pub fn build_output_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
unsafe {
// Make sure that COM is initialized.
// It's not certain that this is required here, but better safe than sorry.
com::com_initialized();
// Obtaining a `IAudioClient`.
let audio_client = match device.build_audioclient() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(CreationError::DeviceNotAvailable),
e => e.unwrap(),
};
// Computing the format and initializing the device.
let waveformatex = {
let format_attempt = format_to_waveformatextensible(format)
.ok_or(CreationError::FormatNotSupported)?;
let share_mode = AUDCLNT_SHAREMODE_SHARED;
// Ensure the format is supported.
match super::device::is_format_supported(audio_client, &format_attempt.Format) {
Ok(false) => return Err(CreationError::FormatNotSupported),
Err(_) => return Err(CreationError::DeviceNotAvailable),
_ => (),
}
// finally initializing the audio client
let hresult = (*audio_client).Initialize(share_mode,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
&format_attempt.Format,
ptr::null());
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
format_attempt.Format
};
// Creating the event that will be signalled whenever we need to submit some samples.
let event = {
let event = synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null());
if event == ptr::null_mut() {
(*audio_client).Release();
panic!("Failed to create event");
}
match check_result((*audio_client).SetEventHandle(event)) {
Err(_) => {
(*audio_client).Release();
panic!("Failed to call SetEventHandle")
},
Ok(_) => (),
};
event
};
// obtaining the size of the samples buffer in number of frames
let max_frames_in_buffer = {
let mut max_frames_in_buffer = mem::uninitialized();
let hresult = (*audio_client).GetBufferSize(&mut max_frames_in_buffer);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
max_frames_in_buffer
};
// Building a `IAudioRenderClient` that will be used to fill the samples buffer.
let render_client = {
let mut render_client: *mut audioclient::IAudioRenderClient = mem::uninitialized();
let hresult = (*audio_client).GetService(&audioclient::IID_IAudioRenderClient,
&mut render_client as
*mut *mut audioclient::IAudioRenderClient as
*mut _);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
&mut *render_client
};
let new_stream_id = StreamId(self.next_stream_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_stream_id.0, usize::max_value()); // check for overflows
// Once we built the `StreamInner`, we add a command that will be picked up by the
// `run()` method and added to the `RunContext`.
{
let client_flow = AudioClientFlow::Render {
render_client: render_client,
};
let inner = StreamInner {
id: new_stream_id.clone(),
audio_client: audio_client,
client_flow: client_flow,
event: event,
playing: false,
max_frames_in_buffer: max_frames_in_buffer,
bytes_per_frame: waveformatex.nBlockAlign,
sample_format: format.data_type,
};
self.commands.lock().unwrap().push(Command::NewStream(inner));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
};
Ok(new_stream_id)
}
}
#[inline]
pub fn destroy_stream(&self, stream_id: StreamId) {
unsafe {
self.commands
.lock()
.unwrap()
.push(Command::DestroyStream(stream_id));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn run<F>(&self, mut callback: F) -> !
where F: FnMut(StreamId, StreamData)
{
self.run_inner(&mut callback);
}
fn run_inner(&self, callback: &mut FnMut(StreamId, StreamData)) -> ! {
unsafe {
// We keep `run_context` locked forever, which guarantees that two invocations of
// `run()` cannot run simultaneously.
let mut run_context = self.run_context.lock().unwrap();
loop {
// Process the pending commands.
let mut commands_lock = self.commands.lock().unwrap();
for command in commands_lock.drain(..) {
match command {
Command::NewStream(stream_inner) => {
let event = stream_inner.event;
run_context.streams.push(stream_inner);
run_context.handles.push(event);
},
Command::DestroyStream(stream_id) => {
match run_context.streams.iter().position(|v| v.id == stream_id) {
None => continue,
Some(p) => {
run_context.handles.remove(p + 1);
run_context.streams.remove(p);
},
}
},
Command::PlayStream(stream_id) => {
if let Some(v) = run_context.streams.iter_mut().find(|v| v.id == stream_id) {
if !v.playing {
let hresult = (*v.audio_client).Start();
check_result(hresult).unwrap();
v.playing = true;
}
}
},
Command::PauseStream(stream_id) => {
if let Some(v) = run_context.streams.iter_mut().find(|v| v.id == stream_id) {
if v.playing {
let hresult = (*v.audio_client).Stop();
check_result(hresult).unwrap();
v.playing = false;
}
}
},
}
}
drop(commands_lock);
// Wait for any of the handles to be signalled, which means that the corresponding
// sound needs a buffer.
debug_assert!(run_context.handles.len() <= winnt::MAXIMUM_WAIT_OBJECTS as usize);
let result = synchapi::WaitForMultipleObjectsEx(run_context.handles.len() as u32,
run_context.handles.as_ptr(),
FALSE,
winbase::INFINITE, /* TODO: allow setting a timeout */
FALSE /* irrelevant parameter here */);
// Notifying the corresponding task handler.
debug_assert!(result >= winbase::WAIT_OBJECT_0);
let handle_id = (result - winbase::WAIT_OBJECT_0) as usize;
// If `handle_id` is 0, then it's `pending_scheduled_event` that was signalled in
// order for us to pick up the pending commands.
// Otherwise, a stream needs data.
if handle_id >= 1 {
let stream = &mut run_context.streams[handle_id - 1];
let stream_id = stream.id.clone();
// Obtaining the number of frames that are available to be written.
let mut frames_available = {
let mut padding = mem::uninitialized();
let hresult = (*stream.audio_client).GetCurrentPadding(&mut padding);
check_result(hresult).unwrap();
stream.max_frames_in_buffer - padding
};
if frames_available == 0 {
// TODO: can this happen?
continue;
}
let sample_size = stream.sample_format.sample_size();
// Obtaining a pointer to the buffer.
match stream.client_flow {
AudioClientFlow::Capture { capture_client } => {
// Get the available data in the shared buffer.
let mut buffer: *mut BYTE = mem::uninitialized();
let mut flags = mem::uninitialized();
let hresult = (*capture_client).GetBuffer(
&mut buffer,
&mut frames_available,
&mut flags,
ptr::null_mut(),
ptr::null_mut(),
);
check_result(hresult).unwrap();
debug_assert!(!buffer.is_null());
let buffer_len = frames_available as usize
* stream.bytes_per_frame as usize / sample_size;
// Simplify the capture callback sample format branches.
macro_rules! capture_callback {
($T:ty, $Variant:ident) => {{
let buffer_data = buffer as *mut _ as *const $T;
let slice = slice::from_raw_parts(buffer_data, buffer_len);
let input_buffer = InputBuffer { buffer: slice };
let unknown_buffer = UnknownTypeInputBuffer::$Variant(::InputBuffer {
buffer: Some(input_buffer),
});
let data = StreamData::Input { buffer: unknown_buffer };
callback(stream_id, data);
// Release the buffer.
let hresult = (*capture_client).ReleaseBuffer(frames_available);
match check_result(hresult) {
// Ignoring unavailable device error.
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
},
e => e.unwrap(),
};
}};
}
match stream.sample_format {
SampleFormat::F32 => capture_callback!(f32, F32),
SampleFormat::I16 => capture_callback!(i16, I16),
SampleFormat::U16 => capture_callback!(u16, U16),
}
},
AudioClientFlow::Render { render_client } => {
let mut buffer: *mut BYTE = mem::uninitialized();
let hresult = (*render_client).GetBuffer(
frames_available,
&mut buffer as *mut *mut _,
);
// FIXME: can return `AUDCLNT_E_DEVICE_INVALIDATED`
check_result(hresult).unwrap();
debug_assert!(!buffer.is_null());
let buffer_len = frames_available as usize
* stream.bytes_per_frame as usize / sample_size;
// Simplify the render callback sample format branches.
macro_rules! render_callback {
($T:ty, $Variant:ident) => {{
let buffer_data = buffer as *mut $T;
let output_buffer = OutputBuffer {
stream: stream,
buffer_data: buffer_data,
buffer_len: buffer_len,
frames: frames_available,
marker: PhantomData,
};
let unknown_buffer = UnknownTypeOutputBuffer::$Variant(::OutputBuffer {
target: Some(output_buffer)
});
let data = StreamData::Output { buffer: unknown_buffer };
callback(stream_id, data);
}};
}
match stream.sample_format {
SampleFormat::F32 => render_callback!(f32, F32),
SampleFormat::I16 => render_callback!(i16, I16),
SampleFormat::U16 => render_callback!(u16, U16),
}
},
}
}
}
}
}
#[inline]
pub fn play_stream(&self, stream: StreamId) {
unsafe {
self.commands.lock().unwrap().push(Command::PlayStream(stream));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn pause_stream(&self, stream: StreamId) {
unsafe {
self.commands.lock().unwrap().push(Command::PauseStream(stream));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
}
impl Drop for EventLoop {
#[inline]
fn drop(&mut self) {
unsafe {
handleapi::CloseHandle(self.pending_scheduled_event);
}
}
}
unsafe impl Send for EventLoop {
}
unsafe impl Sync for EventLoop {
}
// The content of a stream ID is a number that was fetched from `next_stream_id`.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct StreamId(usize);
impl Drop for AudioClientFlow {
fn drop(&mut self) {
unsafe {
match *self {
AudioClientFlow::Capture { capture_client } => (*capture_client).Release(),
AudioClientFlow::Render { render_client } => (*render_client).Release(),
};
}
}
}
impl Drop for StreamInner {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.audio_client).Release();
handleapi::CloseHandle(self.event);
}
}
}
pub struct InputBuffer<'a, T: 'a> {
buffer: &'a [T],
}
pub struct OutputBuffer<'a, T: 'a> {
stream: &'a mut StreamInner,
buffer_data: *mut T,
buffer_len: usize,
frames: UINT32,
marker: PhantomData<&'a mut [T]>,
}
unsafe impl<'a, T> Send for OutputBuffer<'a, T> {
}
impl<'a, T> InputBuffer<'a, T> {
#[inline]
pub fn buffer(&self) -> &[T] {
&self.buffer
}
#[inline]
pub fn finish(self) {
// Nothing to be done.
}
}
impl<'a, T> OutputBuffer<'a, T> {
#[inline]
pub fn buffer(&mut self) -> &mut [T] {
unsafe { slice::from_raw_parts_mut(self.buffer_data, self.buffer_len) }
}
#[inline]
pub fn len(&self) -> usize {
self.buffer_len
}
#[inline]
pub fn finish(self) {
unsafe {
let hresult = match self.stream.client_flow {
AudioClientFlow::Render { render_client } => {
(*render_client).ReleaseBuffer(self.frames as u32, 0)
},
_ => unreachable!(),
};
match check_result(hresult) {
// Ignoring the error that is produced if the device has been disconnected.
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => (),
e => e.unwrap(),
};
}
}
}
// Turns a `Format` into a `WAVEFORMATEXTENSIBLE`.
//
// Returns `None` if the format cannot be represented as a `WAVEFORMATEXTENSIBLE`
// (currently the case for `U16` samples).
fn format_to_waveformatextensible(format: &Format) -> Option<mmreg::WAVEFORMATEXTENSIBLE> {
let format_tag = match format.data_type {
SampleFormat::I16 => mmreg::WAVE_FORMAT_PCM,
SampleFormat::F32 => mmreg::WAVE_FORMAT_EXTENSIBLE,
SampleFormat::U16 => return None,
};
let channels = format.channels as WORD;
let sample_rate = format.sample_rate.0 as DWORD;
let sample_bytes = format.data_type.sample_size() as WORD;
let avg_bytes_per_sec = channels as DWORD * sample_rate * sample_bytes as DWORD;
let block_align = channels * sample_bytes;
let bits_per_sample = 8 * sample_bytes;
let cb_size = match format.data_type {
SampleFormat::I16 => 0,
SampleFormat::F32 => {
let extensible_size = mem::size_of::<mmreg::WAVEFORMATEXTENSIBLE>();
let ex_size = mem::size_of::<mmreg::WAVEFORMATEX>();
(extensible_size - ex_size) as WORD
},
SampleFormat::U16 => return None,
};
let waveformatex = mmreg::WAVEFORMATEX {
wFormatTag: format_tag,
nChannels: channels,
nSamplesPerSec: sample_rate,
nAvgBytesPerSec: avg_bytes_per_sec,
nBlockAlign: block_align,
wBitsPerSample: bits_per_sample,
cbSize: cb_size,
};
// CPAL does not care about speaker positions, so pass audio straight through.
// TODO: This constant should be defined in winapi but is missing.
const KSAUDIO_SPEAKER_DIRECTOUT: DWORD = 0;
let channel_mask = KSAUDIO_SPEAKER_DIRECTOUT;
let sub_format = match format.data_type {
SampleFormat::I16 => ksmedia::KSDATAFORMAT_SUBTYPE_PCM,
SampleFormat::F32 => ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT,
SampleFormat::U16 => return None,
};
let waveformatextensible = mmreg::WAVEFORMATEXTENSIBLE {
Format: waveformatex,
Samples: bits_per_sample as WORD,
dwChannelMask: channel_mask,
SubFormat: sub_format,
};
Some(waveformatextensible)
}


@ -1,537 +0,0 @@
use super::Endpoint;
use super::check_result;
use super::com;
use super::winapi::shared::basetsd::UINT32;
use super::winapi::shared::ksmedia;
use super::winapi::shared::minwindef::{BYTE, DWORD, FALSE, WORD};
use super::winapi::shared::mmreg;
use super::winapi::shared::winerror;
use super::winapi::um::audioclient::{self, AUDCLNT_E_DEVICE_INVALIDATED};
use super::winapi::um::audiosessiontypes::{AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_EVENTCALLBACK};
use super::winapi::um::combaseapi::CoTaskMemFree;
use super::winapi::um::handleapi;
use super::winapi::um::synchapi;
use super::winapi::um::winbase;
use super::winapi::um::winnt;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::slice;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use CreationError;
use Format;
use SampleFormat;
use UnknownTypeBuffer;
pub struct EventLoop {
// Data used by the `run()` function implementation. The mutex is kept lock permanently by
// `run()`. This ensures that two `run()` invocations can't run at the same time, and also
// means that we shouldn't try to lock this field from anywhere else but `run()`.
run_context: Mutex<RunContext>,
// Identifier of the next voice to create. Each new voice increases this counter. If the
// counter overflows, there's a panic.
// TODO: use AtomicU64 instead
next_voice_id: AtomicUsize,
// Commands processed by the `run()` method that is currently running.
// `pending_scheduled_event` must be signalled whenever a command is added here, so that it
// will get picked up.
// TODO: use a lock-free container
commands: Mutex<Vec<Command>>,
// This event is signalled after a new entry is added to `commands`, so that the `run()`
// method can be notified.
pending_scheduled_event: winnt::HANDLE,
}
struct RunContext {
// Voices that have been created in this event loop.
voices: Vec<VoiceInner>,
// Handles corresponding to the `event` field of each element of `voices`. Must always be in
// sync with `voices`, except that the first element is always `pending_scheduled_event`.
handles: Vec<winnt::HANDLE>,
}
enum Command {
NewVoice(VoiceInner),
DestroyVoice(VoiceId),
Play(VoiceId),
Pause(VoiceId),
}
struct VoiceInner {
id: VoiceId,
audio_client: *mut audioclient::IAudioClient,
render_client: *mut audioclient::IAudioRenderClient,
// Event that is signalled by WASAPI whenever audio data must be written.
event: winnt::HANDLE,
// True if the voice is currently playing. False if paused.
playing: bool,
// Number of frames of audio data in the underlying buffer allocated by WASAPI.
max_frames_in_buffer: UINT32,
// Number of bytes that each frame occupies.
bytes_per_frame: WORD,
}
impl EventLoop {
pub fn new() -> EventLoop {
let pending_scheduled_event =
unsafe { synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null()) };
EventLoop {
pending_scheduled_event: pending_scheduled_event,
run_context: Mutex::new(RunContext {
voices: Vec::new(),
handles: vec![pending_scheduled_event],
}),
next_voice_id: AtomicUsize::new(0),
commands: Mutex::new(Vec::new()),
}
}
pub fn build_voice(&self, end_point: &Endpoint, format: &Format)
-> Result<VoiceId, CreationError> {
unsafe {
// Making sure that COM is initialized.
// It's not actually sure that this is required, but when in doubt do it.
com::com_initialized();
// Obtaining a `IAudioClient`.
let audio_client = match end_point.build_audioclient() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(CreationError::DeviceNotAvailable),
e => e.unwrap(),
};
// Computing the format and initializing the device.
let format = {
let format_attempt = format_to_waveformatextensible(format)?;
let share_mode = AUDCLNT_SHAREMODE_SHARED;
// `IsFormatSupported` checks whether the format is supported and fills
// a `WAVEFORMATEX`
let mut dummy_fmt_ptr: *mut mmreg::WAVEFORMATEX = mem::uninitialized();
let hresult =
(*audio_client)
.IsFormatSupported(share_mode, &format_attempt.Format, &mut dummy_fmt_ptr);
// we free that `WAVEFORMATEX` immediately after because we don't need it
if !dummy_fmt_ptr.is_null() {
CoTaskMemFree(dummy_fmt_ptr as *mut _);
}
// `IsFormatSupported` can return `S_FALSE` (which means that a compatible format
// has been found) but we also treat this as an error
match (hresult, check_result(hresult)) {
(_, Err(ref e))
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
(_, Err(e)) => {
(*audio_client).Release();
panic!("{:?}", e);
},
(winerror::S_FALSE, _) => {
(*audio_client).Release();
return Err(CreationError::FormatNotSupported);
},
(_, Ok(())) => (),
};
// finally initializing the audio client
let hresult = (*audio_client).Initialize(share_mode,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
&format_attempt.Format,
ptr::null());
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
format_attempt.Format
};
// Creating the event that will be signalled whenever we need to submit some samples.
let event = {
let event = synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null());
if event == ptr::null_mut() {
(*audio_client).Release();
panic!("Failed to create event");
}
match check_result((*audio_client).SetEventHandle(event)) {
Err(_) => {
(*audio_client).Release();
panic!("Failed to call SetEventHandle")
},
Ok(_) => (),
};
event
};
// obtaining the size of the samples buffer in number of frames
let max_frames_in_buffer = {
let mut max_frames_in_buffer = mem::uninitialized();
let hresult = (*audio_client).GetBufferSize(&mut max_frames_in_buffer);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
max_frames_in_buffer
};
// Building a `IAudioRenderClient` that will be used to fill the samples buffer.
let render_client = {
let mut render_client: *mut audioclient::IAudioRenderClient = mem::uninitialized();
let hresult = (*audio_client).GetService(&audioclient::IID_IAudioRenderClient,
&mut render_client as
*mut *mut audioclient::IAudioRenderClient as
*mut _);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
&mut *render_client
};
let new_voice_id = VoiceId(self.next_voice_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_voice_id.0, usize::max_value()); // check for overflows
// Once we built the `VoiceInner`, we add a command that will be picked up by the
// `run()` method and added to the `RunContext`.
{
let inner = VoiceInner {
id: new_voice_id.clone(),
audio_client: audio_client,
render_client: render_client,
event: event,
playing: false,
max_frames_in_buffer: max_frames_in_buffer,
bytes_per_frame: format.nBlockAlign,
};
self.commands.lock().unwrap().push(Command::NewVoice(inner));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
};
Ok(new_voice_id)
}
}
#[inline]
pub fn destroy_voice(&self, voice_id: VoiceId) {
unsafe {
self.commands
.lock()
.unwrap()
.push(Command::DestroyVoice(voice_id));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn run<F>(&self, mut callback: F) -> !
where F: FnMut(VoiceId, UnknownTypeBuffer)
{
self.run_inner(&mut callback);
}
fn run_inner(&self, callback: &mut FnMut(VoiceId, UnknownTypeBuffer)) -> ! {
unsafe {
// We keep `run_context` locked forever, which guarantees that two invocations of
// `run()` cannot run simultaneously.
let mut run_context = self.run_context.lock().unwrap();
loop {
// Process the pending commands.
let mut commands_lock = self.commands.lock().unwrap();
for command in commands_lock.drain(..) {
match command {
Command::NewVoice(voice_inner) => {
let event = voice_inner.event;
run_context.voices.push(voice_inner);
run_context.handles.push(event);
},
Command::DestroyVoice(voice_id) => {
match run_context.voices.iter().position(|v| v.id == voice_id) {
None => continue,
Some(p) => {
run_context.handles.remove(p + 1);
run_context.voices.remove(p);
},
}
},
Command::Play(voice_id) => {
if let Some(v) = run_context.voices.get_mut(voice_id.0) {
if !v.playing {
let hresult = (*v.audio_client).Start();
check_result(hresult).unwrap();
v.playing = true;
}
}
},
Command::Pause(voice_id) => {
if let Some(v) = run_context.voices.get_mut(voice_id.0) {
if v.playing {
let hresult = (*v.audio_client).Stop();
check_result(hresult).unwrap();
v.playing = true;
}
}
},
}
}
drop(commands_lock);
// Wait for any of the handles to be signalled, which means that the corresponding
// sound needs a buffer.
debug_assert!(run_context.handles.len() <= winnt::MAXIMUM_WAIT_OBJECTS as usize);
let result = synchapi::WaitForMultipleObjectsEx(run_context.handles.len() as u32,
run_context.handles.as_ptr(),
FALSE,
winbase::INFINITE, /* TODO: allow setting a timeout */
FALSE /* irrelevant parameter here */);
// Notifying the corresponding task handler.
debug_assert!(result >= winbase::WAIT_OBJECT_0);
let handle_id = (result - winbase::WAIT_OBJECT_0) as usize;
// If `handle_id` is 0, then it's `pending_scheduled_event` that was signalled in
// order for us to pick up the pending commands.
// Otherwise, a voice needs data.
if handle_id >= 1 {
let voice = &mut run_context.voices[handle_id - 1];
let voice_id = voice.id.clone();
// Obtaining the number of frames that are available to be written.
let frames_available = {
let mut padding = mem::uninitialized();
let hresult = (*voice.audio_client).GetCurrentPadding(&mut padding);
check_result(hresult).unwrap();
voice.max_frames_in_buffer - padding
};
if frames_available == 0 {
// TODO: can this happen?
continue;
}
// Obtaining a pointer to the buffer.
let (buffer_data, buffer_len) = {
let mut buffer: *mut BYTE = mem::uninitialized();
let hresult = (*voice.render_client)
.GetBuffer(frames_available, &mut buffer as *mut *mut _);
check_result(hresult).unwrap(); // FIXME: can return `AUDCLNT_E_DEVICE_INVALIDATED`
debug_assert!(!buffer.is_null());
(buffer as *mut _,
frames_available as usize * voice.bytes_per_frame as usize /
mem::size_of::<f32>()) // FIXME: correct size when not f32
};
let buffer = Buffer {
voice: voice,
buffer_data: buffer_data,
buffer_len: buffer_len,
frames: frames_available,
marker: PhantomData,
};
let buffer = UnknownTypeBuffer::F32(::Buffer { target: Some(buffer) }); // FIXME: not always f32
callback(voice_id, buffer);
}
}
}
}
#[inline]
pub fn play(&self, voice: VoiceId) {
unsafe {
self.commands.lock().unwrap().push(Command::Play(voice));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn pause(&self, voice: VoiceId) {
unsafe {
self.commands.lock().unwrap().push(Command::Pause(voice));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
}
impl Drop for EventLoop {
#[inline]
fn drop(&mut self) {
unsafe {
handleapi::CloseHandle(self.pending_scheduled_event);
}
}
}
unsafe impl Send for EventLoop {
}
unsafe impl Sync for EventLoop {
}
// The content of a voice ID is a number that was fetched from `next_voice_id`.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct VoiceId(usize);
impl Drop for VoiceInner {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.render_client).Release();
(*self.audio_client).Release();
handleapi::CloseHandle(self.event);
}
}
}
pub struct Buffer<'a, T: 'a> {
voice: &'a mut VoiceInner,
buffer_data: *mut T,
buffer_len: usize,
frames: UINT32,
marker: PhantomData<&'a mut [T]>,
}
unsafe impl<'a, T> Send for Buffer<'a, T> {
}
impl<'a, T> Buffer<'a, T> {
#[inline]
pub fn buffer(&mut self) -> &mut [T] {
unsafe { slice::from_raw_parts_mut(self.buffer_data, self.buffer_len) }
}
#[inline]
pub fn len(&self) -> usize {
self.buffer_len
}
#[inline]
pub fn finish(self) {
unsafe {
let hresult = (*self.voice.render_client).ReleaseBuffer(self.frames as u32, 0);
match check_result(hresult) {
// Ignoring the error that is produced if the device has been disconnected.
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => (),
e => e.unwrap(),
};
}
}
}
// Turns a `Format` into a `WAVEFORMATEXTENSIBLE`.
fn format_to_waveformatextensible(format: &Format)
-> Result<mmreg::WAVEFORMATEXTENSIBLE, CreationError> {
Ok(mmreg::WAVEFORMATEXTENSIBLE {
Format: mmreg::WAVEFORMATEX {
wFormatTag: match format.data_type {
SampleFormat::I16 => mmreg::WAVE_FORMAT_PCM,
SampleFormat::F32 => mmreg::WAVE_FORMAT_EXTENSIBLE,
SampleFormat::U16 => return Err(CreationError::FormatNotSupported),
},
nChannels: format.channels as WORD,
nSamplesPerSec: format.sample_rate.0 as DWORD,
nAvgBytesPerSec: format.channels as DWORD *
format.sample_rate.0 as DWORD *
format.data_type.sample_size() as DWORD,
nBlockAlign: format.channels as WORD *
format.data_type.sample_size() as WORD,
wBitsPerSample: 8 * format.data_type.sample_size() as WORD,
cbSize: match format.data_type {
SampleFormat::I16 => 0,
SampleFormat::F32 => (mem::size_of::<mmreg::WAVEFORMATEXTENSIBLE>() -
mem::size_of::<mmreg::WAVEFORMATEX>()) as
WORD,
SampleFormat::U16 => return Err(CreationError::FormatNotSupported),
},
},
Samples: 8 * format.data_type.sample_size() as WORD,
dwChannelMask: {
let mut mask = 0;
const CHANNEL_POSITIONS: &'static [DWORD] = &[
mmreg::SPEAKER_FRONT_LEFT,
mmreg::SPEAKER_FRONT_RIGHT,
mmreg::SPEAKER_FRONT_CENTER,
mmreg::SPEAKER_LOW_FREQUENCY,
mmreg::SPEAKER_BACK_LEFT,
mmreg::SPEAKER_BACK_RIGHT,
mmreg::SPEAKER_FRONT_LEFT_OF_CENTER,
mmreg::SPEAKER_FRONT_RIGHT_OF_CENTER,
mmreg::SPEAKER_BACK_CENTER,
mmreg::SPEAKER_SIDE_LEFT,
mmreg::SPEAKER_SIDE_RIGHT,
mmreg::SPEAKER_TOP_CENTER,
mmreg::SPEAKER_TOP_FRONT_LEFT,
mmreg::SPEAKER_TOP_FRONT_CENTER,
mmreg::SPEAKER_TOP_FRONT_RIGHT,
mmreg::SPEAKER_TOP_BACK_LEFT,
mmreg::SPEAKER_TOP_BACK_CENTER,
mmreg::SPEAKER_TOP_BACK_RIGHT,
];
for i in 0..format.channels {
let raw_value = CHANNEL_POSITIONS[i as usize];
mask = mask | raw_value;
}
mask
},
SubFormat: match format.data_type {
SampleFormat::I16 => ksmedia::KSDATAFORMAT_SUBTYPE_PCM,
SampleFormat::F32 => ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT,
SampleFormat::U16 => return Err(CreationError::FormatNotSupported),
},
})
}