Update to a more general Device and Stream API. Add support for input streams (e.g. microphone). Add default format methods. (#201)

* Update to a more general Device and Stream API

This update prepares for adding input stream support by removing the
`Endpoint` type (which only supports output streams) in favour of a more
general `Device` type which may support any number of input or output
streams. Previously discussed at #117.
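As a rough illustration, the shape the new `Device` API takes (method names
are taken from the diffs below; a sketch, not the exact signatures):

    // Enumerate a device's capabilities with the new API (sketch).
    let device = cpal::default_output_device().expect("no default output device");
    println!("Using device: {}", device.name());
    for format in device.supported_output_formats().expect("failed to enumerate formats") {
        println!("Supported output format: {:?}", format);
    }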

The name `Voice` has been replaced with the more ubiquitous name
`Stream`. See #118 for justification.

Also introduces a new `StreamData` type, which is now passed to the
`EventLoop::run` callback rather than an `UnknownTypeBuffer`.
`StreamData` allows for passing either `Input` data to be read or
`Output` data to be written.
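A minimal sketch of a `run` callback branching on the two cases (using the
buffer variants introduced later in this PR):

    let event_loop = cpal::EventLoop::new();
    event_loop.run(move |_stream_id, data| {
        match data {
            // Input data to be read (e.g. from a microphone).
            cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::F32(buffer) } => {
                for _sample in buffer.iter() { /* read the captured samples */ }
            },
            // Output data to be written (e.g. to the speakers).
            cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::F32(mut buffer) } => {
                for sample in buffer.iter_mut() { *sample = 0.0; }
            },
            _ => (),
        }
    });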

The `beep.rs` example has been updated for the API changes.

None of the backends have been updated for this API change yet. Backends
will be updated in the following commits.

Closes #117.
Closes #118.

* Update ALSA backend for new `Device` and `Stream` API.

* Update wasapi backend for new `Device` and `Stream` API.

* Update enumerate.rs example for new `Device` and `Stream` API.

* Update coreaudio backend for new `Device` and `Stream` API.

* Fix lib doc tests for Device and Stream API update

* Update emscripten backend for new `Device` and `Stream` API.

* Update null backend for new `Device` and `Stream` API.

* Merge match exprs in beep.rs example

* Add `Input` variant along with `UnknownTypeInputBuffer` and `InputBuffer`

`UnknownTypeBuffer` and `Buffer` have been renamed to
`UnknownTypeOutputBuffer` and `OutputBuffer` respectively.

No backends have yet been updated for this name change or for the
addition of `InputBuffer`.
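The input buffer types mirror the output buffer types. Roughly (a sketch,
not the exact definitions from `src/lib.rs`):

    // Mirrored buffer enums, one variant per supported sample type (sketch).
    pub enum UnknownTypeInputBuffer<'a> {
        U16(InputBuffer<'a, u16>),
        I16(InputBuffer<'a, i16>),
        F32(InputBuffer<'a, f32>),
    }

    pub enum UnknownTypeOutputBuffer<'a> {
        U16(OutputBuffer<'a, u16>),
        I16(OutputBuffer<'a, i16>),
        F32(OutputBuffer<'a, f32>),
    }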

* Update null backend for introduction of InputBuffer

* Update emscripten backend for introduction of InputBuffer

* Make `InputBuffer`'s inner field an `Option` so that `finish` can be called in `drop`
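Sketch of the pattern: the backend buffer's `finish` takes `self` by value,
so the wrapper stores it in an `Option` that `Drop` can `take` it out of
(`BackendInputBuffer` is a stand-in name, not a real cpal type):

    // Stand-in for a backend-specific buffer whose `finish` consumes it.
    struct BackendInputBuffer<'a, T: 'a> {
        data: &'a [T],
    }

    impl<'a, T> BackendInputBuffer<'a, T> {
        fn finish(self) { /* hand the buffer back to the backend */ }
    }

    pub struct InputBuffer<'a, T: 'a> {
        // `Some` until `finish` has run; `None` afterwards.
        buffer: Option<BackendInputBuffer<'a, T>>,
    }

    impl<'a, T> Drop for InputBuffer<'a, T> {
        fn drop(&mut self) {
            // Move the inner buffer out so its by-value `finish` can be called.
            if let Some(inner) = self.buffer.take() {
                inner.finish();
            }
        }
    }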

* Update alsa backend for introduction of InputBuffer

* Update wasapi backend for introduction of InputBuffer

* Update coreaudio backend for introduction of InputBuffer

* Update enumerate.rs example to provide more detail about devices

The `enumerate.rs` example now also displays:

- Supported input stream formats.
- Default input stream format.
- Default output stream format.

This should also be useful for testing the progress of #201.

* Add `record_wav.rs` example for demonstrating input streams

Records a ~3 second WAV file to `$CARGO_MANIFEST_DIR/recorded.wav` using
the default input device and default input format.

Uses hound 3.0 to create and write to the WAV file.

This should also be useful for testing the input stream implementations
for each different cpal backend.

* Implement input stream support for coreaudio backend

This implements the following for the coreaudio backend:

- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream

The `enumerate.rs` and `record_wav.rs` examples now work successfully on
macOS.

* Add `SupportedFormat::cmp_default_heuristics` method

This adds a comparison function which compares two `SupportedFormat`s in
terms of their priority of use as a default stream format.

Some backends (such as ALSA) do not provide a default stream format for
their audio devices. In these cases, CPAL attempts to decide on a
reasonable default format for the user. To do this we use the "greatest"
of all supported stream formats when compared with this method.
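For example, a backend might pick its default roughly like this (a sketch,
assuming `cmp_default_heuristics` returns a `std::cmp::Ordering`):

    // Choose the highest-priority supported format as the default (sketch).
    let device = cpal::default_output_device().expect("no default output device");
    let best = device
        .supported_output_formats()
        .expect("failed to enumerate supported formats")
        .max_by(|a, b| a.cmp_default_heuristics(b))
        .expect("device supports no formats");
    let format = best.with_max_sample_rate();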

* Implement input stream support for ALSA backend

This implements the following for the ALSA backend:

- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream

Note that ALSA itself does not give default stream formats for its
devices. Thus the newly added `SupportedFormat::cmp_default_heuristics`
method is used to determine the most suitable, supported stream format
to use as the default.

The `enumerate.rs` and `record_wav.rs` examples now work successfully on
my Linux machine.

* Implement input stream support for wasapi backend

This implements the following for the wasapi backend:

- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream

Note that WASAPI does not enumerate supported input/output stream
formats for its devices. Instead, we probe for supported formats
ourselves via the `IsFormatSupported` method.

* Fix some warnings in the alsa backend

* Update CHANGELOG for introduction of input streams and related items

* Update README to show latest features supported by CPAL

* Simplify beep example using Device::default_output_format

* Remove old commented code from wasapi/stream.rs

mitchmindtree 2018-02-13 00:10:24 +11:00, committed by Pierre Krieger
parent b47e46a4ac, commit c38bbb26e4
19 changed files with 3441 additions and 1891 deletions

.gitignore

@@ -2,3 +2,4 @@
/Cargo.lock
.cargo/
.DS_Store
recorded.wav

CHANGELOG.md

@@ -1,5 +1,19 @@
# Unreleased
- Add `record_wav.rs` example. Records 3 seconds to
`$CARGO_MANIFEST_DIR/recorded.wav` using default input device.
- Update `enumerate.rs` example to display default input/output devices and
formats.
- Add input stream support to coreaudio, alsa and windows backends.
- Introduce `StreamData` type for handling either input or output streams in
`EventLoop::run` callback.
- Add `Device::supported_{input/output}_formats` methods.
- Add `Device::default_{input/output}_format` methods.
- Add `default_{input/output}_device` functions.
- Replace usage of `Voice` with `Stream` throughout the crate.
- Remove `Endpoint` in favour of `Device` for supporting both input and output
streams.
# Version 0.7.0 (2018-02-04)
- Rename ChannelsCount to ChannelCount.

Cargo.toml

@@ -11,15 +11,18 @@ keywords = ["audio", "sound"]
[dependencies]
lazy_static = "0.2"
[dev-dependencies]
hound = "3.0"
[target.'cfg(target_os = "windows")'.dependencies]
winapi = { version = "0.3", features = ["audiosessiontypes", "audioclient", "combaseapi", "debug", "handleapi", "ksmedia", "mmdeviceapi", "objbase", "std", "synchapi", "winuser"] }
winapi = { version = "0.3", features = ["audiosessiontypes", "audioclient", "coml2api", "combaseapi", "debug", "devpkey", "handleapi", "ksmedia", "mmdeviceapi", "objbase", "std", "synchapi", "winuser"] }
[target.'cfg(any(target_os = "linux", target_os = "dragonfly", target_os = "freebsd", target_os = "openbsd"))'.dependencies]
alsa-sys = { version = "0.1", path = "alsa-sys" }
libc = "0.2"
[target.'cfg(any(target_os = "macos", target_os = "ios"))'.dependencies]
coreaudio-rs = { version = "0.8.1", default-features = false, features = ["audio_unit", "core_audio"] }
coreaudio-rs = { version = "0.9.0", default-features = false, features = ["audio_unit", "core_audio"] }
core-foundation-sys = "0.5.1" # For linking to CoreFoundation.framework and handling device name `CFString`s.
[target.'cfg(target_os = "emscripten")'.dependencies]

README.md

@@ -1,8 +1,21 @@
# CPAL - Cross-platform audio library
# CPAL - Cross-Platform Audio Library
[Documentation](https://docs.rs/cpal)
[![Build Status](https://travis-ci.org/tomaka/cpal.svg?branch=master)](https://travis-ci.org/tomaka/cpal) [![Crates.io](https://img.shields.io/crates/v/cpal.svg)](https://crates.io/crates/cpal) [![docs.rs](https://docs.rs/cpal/badge.svg)](https://docs.rs/cpal/)
Low-level library for audio playback in pure Rust.
Low-level library for audio input and output in pure Rust.
This library allows you to open a channel with the audio device of the user's machine, and
send PCM data to it.
This library currently supports the following:
- Enumerate all available audio devices.
- Get the current default input and output devices.
- Enumerate known supported input and output stream formats for a device.
- Get the current default input and output stream formats for a device.
- Build and run input and output PCM streams on a chosen device with a given stream format.
Currently supported backends include:
- Linux (via ALSA)
- Windows
- macOS (via CoreAudio)
- iOS (via CoreAudio)
- Emscripten

examples/beep.rs

@@ -1,17 +1,11 @@
extern crate cpal;
fn main() {
let endpoint = cpal::default_endpoint().expect("Failed to get default endpoint");
let format = endpoint
.supported_formats()
.unwrap()
.next()
.expect("Failed to get endpoint format")
.with_max_sample_rate();
let device = cpal::default_output_device().expect("Failed to get default output device");
let format = device.default_output_format().expect("Failed to get default output format");
let event_loop = cpal::EventLoop::new();
let voice_id = event_loop.build_voice(&endpoint, &format).unwrap();
event_loop.play(voice_id);
let stream_id = event_loop.build_output_stream(&device, &format).unwrap();
event_loop.play_stream(stream_id.clone());
let sample_rate = format.sample_rate.0 as f32;
let mut sample_clock = 0f32;
@@ -22,9 +16,9 @@ fn main() {
(sample_clock * 440.0 * 2.0 * 3.141592 / sample_rate).sin()
};
event_loop.run(move |_, buffer| {
match buffer {
cpal::UnknownTypeBuffer::U16(mut buffer) => {
event_loop.run(move |_, data| {
match data {
cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::U16(mut buffer) } => {
for sample in buffer.chunks_mut(format.channels as usize) {
let value = ((next_value() * 0.5 + 0.5) * std::u16::MAX as f32) as u16;
for out in sample.iter_mut() {
@@ -32,8 +26,7 @@ fn main() {
}
}
},
cpal::UnknownTypeBuffer::I16(mut buffer) => {
cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::I16(mut buffer) } => {
for sample in buffer.chunks_mut(format.channels as usize) {
let value = (next_value() * std::i16::MAX as f32) as i16;
for out in sample.iter_mut() {
@@ -41,8 +34,7 @@ fn main() {
}
}
},
cpal::UnknownTypeBuffer::F32(mut buffer) => {
cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::F32(mut buffer) } => {
for sample in buffer.chunks_mut(format.channels as usize) {
let value = next_value();
for out in sample.iter_mut() {
@@ -50,6 +42,7 @@ fn main() {
}
}
},
};
_ => (),
}
});
}

examples/enumerate.rs

@@ -1,25 +1,50 @@
extern crate cpal;
fn main() {
println!("Default Endpoint:\n {:?}", cpal::default_endpoint().map(|e| e.name()));
println!("Default Input Device:\n {:?}", cpal::default_input_device().map(|e| e.name()));
println!("Default Output Device:\n {:?}", cpal::default_output_device().map(|e| e.name()));
let endpoints = cpal::endpoints();
println!("Endpoints: ");
for (endpoint_index, endpoint) in endpoints.enumerate() {
println!("{}. Endpoint \"{}\" Audio formats: ",
endpoint_index + 1,
endpoint.name());
let devices = cpal::devices();
println!("Devices: ");
for (device_index, device) in devices.enumerate() {
println!("{}. \"{}\"",
device_index + 1,
device.name());
let formats = match endpoint.supported_formats() {
Ok(f) => f,
// Input formats
if let Ok(fmt) = device.default_input_format() {
println!(" Default input stream format:\n {:?}", fmt);
}
let mut input_formats = match device.supported_input_formats() {
Ok(f) => f.peekable(),
Err(e) => {
println!("Error: {:?}", e);
continue;
},
};
if input_formats.peek().is_some() {
println!(" All supported input stream formats:");
for (format_index, format) in input_formats.enumerate() {
println!(" {}.{}. {:?}", device_index + 1, format_index + 1, format);
}
}
for (format_index, format) in formats.enumerate() {
println!("{}.{}. {:?}", endpoint_index + 1, format_index + 1, format);
// Output formats
if let Ok(fmt) = device.default_output_format() {
println!(" Default output stream format:\n {:?}", fmt);
}
let mut output_formats = match device.supported_output_formats() {
Ok(f) => f.peekable(),
Err(e) => {
println!("Error: {:?}", e);
continue;
},
};
if output_formats.peek().is_some() {
println!(" All supported output stream formats:");
for (format_index, format) in output_formats.enumerate() {
println!(" {}.{}. {:?}", device_index + 1, format_index + 1, format);
}
}
}
}

examples/record_wav.rs (new file)

@@ -0,0 +1,95 @@
//! Records a WAV file (roughly 3 seconds long) using the default input device and format.
//!
//! The input data is recorded to "$CARGO_MANIFEST_DIR/recorded.wav".
extern crate cpal;
extern crate hound;
fn main() {
// Setup the default input device and stream with the default input format.
let device = cpal::default_input_device().expect("Failed to get default input device");
println!("Default input device: {}", device.name());
let format = device.default_input_format().expect("Failed to get default input format");
println!("Default input format: {:?}", format);
let event_loop = cpal::EventLoop::new();
let stream_id = event_loop.build_input_stream(&device, &format)
.expect("Failed to build input stream");
event_loop.play_stream(stream_id);
// The WAV file we're recording to.
const PATH: &'static str = concat!(env!("CARGO_MANIFEST_DIR"), "/recorded.wav");
let spec = wav_spec_from_format(&format);
let writer = hound::WavWriter::create(PATH, spec).unwrap();
let writer = std::sync::Arc::new(std::sync::Mutex::new(Some(writer)));
// A flag to indicate that recording is in progress.
println!("Begin recording...");
let recording = std::sync::Arc::new(std::sync::atomic::AtomicBool::new(true));
// Run the input stream on a separate thread.
let writer_2 = writer.clone();
let recording_2 = recording.clone();
std::thread::spawn(move || {
event_loop.run(move |_, data| {
// If we're done recording, return early.
if !recording_2.load(std::sync::atomic::Ordering::Relaxed) {
return;
}
// Otherwise write to the wav writer.
match data {
cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::U16(buffer) } => {
if let Ok(mut guard) = writer_2.try_lock() {
if let Some(writer) = guard.as_mut() {
for sample in buffer.iter() {
let sample = cpal::Sample::to_i16(sample);
writer.write_sample(sample).ok();
}
}
}
},
cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::I16(buffer) } => {
if let Ok(mut guard) = writer_2.try_lock() {
if let Some(writer) = guard.as_mut() {
for &sample in buffer.iter() {
writer.write_sample(sample).ok();
}
}
}
},
cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::F32(buffer) } => {
if let Ok(mut guard) = writer_2.try_lock() {
if let Some(writer) = guard.as_mut() {
for &sample in buffer.iter() {
writer.write_sample(sample).ok();
}
}
}
},
_ => (),
}
});
});
// Let recording go for roughly three seconds.
std::thread::sleep(std::time::Duration::from_secs(3));
recording.store(false, std::sync::atomic::Ordering::Relaxed);
writer.lock().unwrap().take().unwrap().finalize().unwrap();
println!("Recording {} complete!", PATH);
}
fn sample_format(format: cpal::SampleFormat) -> hound::SampleFormat {
match format {
cpal::SampleFormat::U16 => hound::SampleFormat::Int,
cpal::SampleFormat::I16 => hound::SampleFormat::Int,
cpal::SampleFormat::F32 => hound::SampleFormat::Float,
}
}
fn wav_spec_from_format(format: &cpal::Format) -> hound::WavSpec {
hound::WavSpec {
channels: format.channels as _,
sample_rate: format.sample_rate.0 as _,
bits_per_sample: (format.data_type.sample_size() * 8) as _,
sample_format: sample_format(format.data_type),
}
}

src/alsa/enumerate.rs

@@ -1,14 +1,11 @@
use super::Endpoint;
use super::Device;
use super::alsa;
use super::check_errors;
use super::libc;
use std::ffi::CStr;
use std::ffi::CString;
use std::mem;
/// ALSA implementation for `EndpointsIterator`.
pub struct EndpointsIterator {
/// ALSA implementation for `Devices`.
pub struct Devices {
// we keep the original list so that we can pass it to the free function
global_list: *const *const u8,
@@ -16,12 +13,12 @@ pub struct EndpointsIterator {
next_str: *const *const u8,
}
unsafe impl Send for EndpointsIterator {
unsafe impl Send for Devices {
}
unsafe impl Sync for EndpointsIterator {
unsafe impl Sync for Devices {
}
impl Drop for EndpointsIterator {
impl Drop for Devices {
#[inline]
fn drop(&mut self) {
unsafe {
@@ -30,8 +27,8 @@ impl Drop for EndpointsIterator {
}
}
impl Default for EndpointsIterator {
fn default() -> EndpointsIterator {
impl Default for Devices {
fn default() -> Devices {
unsafe {
let mut hints = mem::uninitialized();
// TODO: check in which situation this can fail
@@ -40,7 +37,7 @@ impl Default for EndpointsIterator {
let hints = hints as *const *const u8;
EndpointsIterator {
Devices {
global_list: hints,
next_str: hints,
}
@@ -48,10 +45,10 @@ impl Default for EndpointsIterator {
}
}
impl Iterator for EndpointsIterator {
type Item = Endpoint;
impl Iterator for Devices {
type Item = Device;
fn next(&mut self) -> Option<Endpoint> {
fn next(&mut self) -> Option<Device> {
loop {
unsafe {
if (*self.next_str).is_null() {
@@ -62,10 +59,9 @@ impl Iterator for EndpointsIterator {
let n_ptr = alsa::snd_device_name_get_hint(*self.next_str as *const _,
b"NAME\0".as_ptr() as *const _);
if !n_ptr.is_null() {
let n = CStr::from_ptr(n_ptr).to_bytes().to_vec();
let n = String::from_utf8(n).unwrap();
libc::free(n_ptr as *mut _);
Some(n)
let bytes = CString::from_raw(n_ptr).into_bytes();
let string = String::from_utf8(bytes).unwrap();
Some(string)
} else {
None
}
@@ -75,10 +71,9 @@ impl Iterator for EndpointsIterator {
let n_ptr = alsa::snd_device_name_get_hint(*self.next_str as *const _,
b"IOID\0".as_ptr() as *const _);
if !n_ptr.is_null() {
let n = CStr::from_ptr(n_ptr).to_bytes().to_vec();
let n = String::from_utf8(n).unwrap();
libc::free(n_ptr as *mut _);
Some(n)
let bytes = CString::from_raw(n_ptr).into_bytes();
let string = String::from_utf8(bytes).unwrap();
Some(string)
} else {
None
}
@@ -92,24 +87,46 @@ impl Iterator for EndpointsIterator {
}
}
if let Some(name) = name {
// trying to open the PCM device to see if it can be opened
let name_zeroed = CString::new(name.clone()).unwrap();
let mut playback_handle = mem::uninitialized();
if alsa::snd_pcm_open(&mut playback_handle,
name_zeroed.as_ptr() as *const _,
alsa::SND_PCM_STREAM_PLAYBACK,
alsa::SND_PCM_NONBLOCK) == 0
{
alsa::snd_pcm_close(playback_handle);
} else {
continue;
}
let name = match name {
Some(name) => {
// Ignoring the `null` device.
if name == "null" {
continue;
}
name
},
_ => continue,
};
// ignoring the `null` device
if name != "null" {
return Some(Endpoint(name));
}
// trying to open the PCM device to see if it can be opened
let name_zeroed = CString::new(&name[..]).unwrap();
// See if the device has an available output stream.
let mut playback_handle = mem::uninitialized();
let has_available_output = alsa::snd_pcm_open(
&mut playback_handle,
name_zeroed.as_ptr() as *const _,
alsa::SND_PCM_STREAM_PLAYBACK,
alsa::SND_PCM_NONBLOCK,
) == 0;
if has_available_output {
alsa::snd_pcm_close(playback_handle);
}
// See if the device has an available input stream.
let mut capture_handle = mem::uninitialized();
let has_available_input = alsa::snd_pcm_open(
&mut capture_handle,
name_zeroed.as_ptr() as *const _,
alsa::SND_PCM_STREAM_CAPTURE,
alsa::SND_PCM_NONBLOCK,
) == 0;
if has_available_input {
alsa::snd_pcm_close(capture_handle);
}
if has_available_output || has_available_input {
return Some(Device(name));
}
}
}
@@ -117,6 +134,11 @@ impl Iterator for EndpointsIterator {
}
#[inline]
pub fn default_endpoint() -> Option<Endpoint> {
Some(Endpoint("default".to_owned()))
pub fn default_input_device() -> Option<Device> {
Some(Device("default".to_owned()))
}
#[inline]
pub fn default_output_device() -> Option<Device> {
Some(Device("default".to_owned()))
}

File diff suppressed because it is too large.

src/coreaudio/enumerate.rs

@@ -8,6 +8,7 @@ use super::coreaudio::sys::{
AudioObjectGetPropertyData,
AudioObjectGetPropertyDataSize,
kAudioHardwareNoError,
kAudioHardwarePropertyDefaultInputDevice,
kAudioHardwarePropertyDefaultOutputDevice,
kAudioHardwarePropertyDevices,
kAudioObjectPropertyElementMaster,
@@ -15,9 +16,9 @@ use super::coreaudio::sys::{
kAudioObjectSystemObject,
OSStatus,
};
use super::Endpoint;
use super::Device;
unsafe fn audio_output_devices() -> Result<Vec<AudioDeviceID>, OSStatus> {
unsafe fn audio_devices() -> Result<Vec<AudioDeviceID>, OSStatus> {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioHardwarePropertyDevices,
mScope: kAudioObjectPropertyScopeGlobal,
@@ -58,42 +59,62 @@ unsafe fn audio_output_devices() -> Result<Vec<AudioDeviceID>, OSStatus> {
audio_devices.set_len(device_count as usize);
// Only keep the devices that have some supported output format.
audio_devices.retain(|&id| {
let e = Endpoint { audio_device_id: id };
match e.supported_formats() {
Err(_) => false,
Ok(mut fmts) => fmts.next().is_some(),
}
});
Ok(audio_devices)
}
pub struct EndpointsIterator(VecIntoIter<AudioDeviceID>);
pub struct Devices(VecIntoIter<AudioDeviceID>);
unsafe impl Send for EndpointsIterator {
unsafe impl Send for Devices {
}
unsafe impl Sync for EndpointsIterator {
unsafe impl Sync for Devices {
}
impl Default for EndpointsIterator {
impl Default for Devices {
fn default() -> Self {
let devices = unsafe {
audio_output_devices().expect("failed to get audio output devices")
audio_devices().expect("failed to get audio output devices")
};
EndpointsIterator(devices.into_iter())
Devices(devices.into_iter())
}
}
impl Iterator for EndpointsIterator {
type Item = Endpoint;
fn next(&mut self) -> Option<Endpoint> {
self.0.next().map(|id| Endpoint { audio_device_id: id })
impl Iterator for Devices {
type Item = Device;
fn next(&mut self) -> Option<Device> {
self.0.next().map(|id| Device { audio_device_id: id })
}
}
pub fn default_endpoint() -> Option<Endpoint> {
pub fn default_input_device() -> Option<Device> {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioHardwarePropertyDefaultInputDevice,
mScope: kAudioObjectPropertyScopeGlobal,
mElement: kAudioObjectPropertyElementMaster,
};
let audio_device_id: AudioDeviceID = 0;
let data_size = mem::size_of::<AudioDeviceID>();
let status = unsafe {
AudioObjectGetPropertyData(
kAudioObjectSystemObject,
&property_address as *const _,
0,
null(),
&data_size as *const _ as *mut _,
&audio_device_id as *const _ as *mut _,
)
};
if status != kAudioHardwareNoError as i32 {
return None;
}
let device = Device {
audio_device_id: audio_device_id,
};
Some(device)
}
pub fn default_output_device() -> Option<Device> {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioHardwarePropertyDefaultOutputDevice,
mScope: kAudioObjectPropertyScopeGlobal,
@@ -116,10 +137,11 @@ pub fn default_endpoint() -> Option<Endpoint> {
return None;
}
let endpoint = Endpoint {
let device = Device {
audio_device_id: audio_device_id,
};
Some(endpoint)
Some(device)
}
pub type SupportedFormatsIterator = VecIntoIter<SupportedFormat>;
pub type SupportedInputFormats = VecIntoIter<SupportedFormat>;
pub type SupportedOutputFormats = VecIntoIter<SupportedFormat>;

src/coreaudio/mod.rs

@@ -3,13 +3,16 @@ extern crate core_foundation_sys;
use ChannelCount;
use CreationError;
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use Sample;
use SampleFormat;
use SampleRate;
use StreamData;
use SupportedFormat;
use UnknownTypeBuffer;
use UnknownTypeInputBuffer;
use UnknownTypeOutputBuffer;
use std::ffi::CStr;
use std::mem;
@@ -29,12 +32,15 @@ use self::coreaudio::sys::{
AudioObjectGetPropertyData,
AudioObjectGetPropertyDataSize,
AudioObjectPropertyAddress,
AudioObjectPropertyScope,
AudioStreamBasicDescription,
AudioValueRange,
kAudioDevicePropertyAvailableNominalSampleRates,
kAudioDevicePropertyDeviceNameCFString,
kAudioObjectPropertyScopeInput,
kAudioDevicePropertyScopeOutput,
kAudioDevicePropertyStreamConfiguration,
kAudioDevicePropertyStreamFormat,
kAudioFormatFlagIsFloat,
kAudioFormatFlagIsPacked,
kAudioFormatLinearPCM,
@@ -42,8 +48,10 @@ use self::coreaudio::sys::{
kAudioObjectPropertyElementMaster,
kAudioObjectPropertyScopeOutput,
kAudioOutputUnitProperty_CurrentDevice,
kAudioOutputUnitProperty_EnableIO,
kAudioUnitProperty_StreamFormat,
kCFStringEncodingUTF8,
OSStatus,
};
use self::core_foundation_sys::string::{
CFStringRef,
@@ -52,18 +60,52 @@ use self::core_foundation_sys::string::{
mod enumerate;
pub use self::enumerate::{EndpointsIterator, SupportedFormatsIterator, default_endpoint};
pub use self::enumerate::{Devices, SupportedInputFormats, SupportedOutputFormats, default_input_device, default_output_device};
#[derive(Clone, PartialEq, Eq)]
pub struct Endpoint {
pub struct Device {
audio_device_id: AudioDeviceID,
}
impl Endpoint {
pub fn supported_formats(&self) -> Result<SupportedFormatsIterator, FormatsEnumerationError> {
impl Device {
pub fn name(&self) -> String {
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioDevicePropertyDeviceNameCFString,
mScope: kAudioDevicePropertyScopeOutput,
mElement: kAudioObjectPropertyElementMaster,
};
let device_name: CFStringRef = null();
let data_size = mem::size_of::<CFStringRef>();
let c_str = unsafe {
let status = AudioObjectGetPropertyData(
self.audio_device_id,
&property_address as *const _,
0,
null(),
&data_size as *const _ as *mut _,
&device_name as *const _ as *mut _,
);
if status != kAudioHardwareNoError as i32 {
return format!("<OSStatus: {:?}>", status);
}
let c_string: *const c_char = CFStringGetCStringPtr(device_name, kCFStringEncodingUTF8);
if c_string == null() {
return "<null>".into();
}
CStr::from_ptr(c_string as *mut _)
};
c_str.to_string_lossy().into_owned()
}
// Logic re-used between `supported_input_formats` and `supported_output_formats`.
fn supported_formats(
&self,
scope: AudioObjectPropertyScope,
) -> Result<SupportedOutputFormats, FormatsEnumerationError>
{
let mut property_address = AudioObjectPropertyAddress {
mSelector: kAudioDevicePropertyStreamConfiguration,
mScope: kAudioObjectPropertyScopeOutput,
mScope: scope,
mElement: kAudioObjectPropertyElementMaster,
};
@@ -163,52 +205,113 @@ impl Endpoint {
}
}
pub fn name(&self) -> String {
pub fn supported_input_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
self.supported_formats(kAudioObjectPropertyScopeInput)
}
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
self.supported_formats(kAudioObjectPropertyScopeOutput)
}
fn default_format(
&self,
scope: AudioObjectPropertyScope,
) -> Result<Format, DefaultFormatError>
{
fn default_format_error_from_os_status(status: OSStatus) -> Option<DefaultFormatError> {
let err = match coreaudio::Error::from_os_status(status) {
Err(err) => err,
Ok(_) => return None,
};
match err {
coreaudio::Error::RenderCallbackBufferFormatDoesNotMatchAudioUnitStreamFormat |
coreaudio::Error::NoKnownSubtype |
coreaudio::Error::AudioUnit(coreaudio::error::AudioUnitError::FormatNotSupported) |
coreaudio::Error::AudioCodec(_) |
coreaudio::Error::AudioFormat(_) => Some(DefaultFormatError::StreamTypeNotSupported),
_ => Some(DefaultFormatError::DeviceNotAvailable),
}
}
let property_address = AudioObjectPropertyAddress {
mSelector: kAudioDevicePropertyDeviceNameCFString,
mScope: kAudioDevicePropertyScopeOutput,
mSelector: kAudioDevicePropertyStreamFormat,
mScope: scope,
mElement: kAudioObjectPropertyElementMaster,
};
let device_name: CFStringRef = null();
let data_size = mem::size_of::<CFStringRef>();
let c_str = unsafe {
unsafe {
let asbd: AudioStreamBasicDescription = mem::uninitialized();
let data_size = mem::size_of::<AudioStreamBasicDescription>() as u32;
let status = AudioObjectGetPropertyData(
self.audio_device_id,
&property_address as *const _,
0,
null(),
&data_size as *const _ as *mut _,
&device_name as *const _ as *mut _,
&asbd as *const _ as *mut _,
);
if status != kAudioHardwareNoError as i32 {
return format!("<OSStatus: {:?}>", status);
let err = default_format_error_from_os_status(status)
.expect("no known error for OsStatus");
return Err(err);
}
let c_string: *const c_char = CFStringGetCStringPtr(device_name, kCFStringEncodingUTF8);
if c_string == null() {
return "<null>".into();
}
CStr::from_ptr(c_string as *mut _)
};
c_str.to_string_lossy().into_owned()
let sample_format = {
let audio_format = coreaudio::audio_unit::AudioFormat::from_format_and_flag(
asbd.mFormatID,
Some(asbd.mFormatFlags),
);
let flags = match audio_format {
Some(coreaudio::audio_unit::AudioFormat::LinearPCM(flags)) => flags,
_ => return Err(DefaultFormatError::StreamTypeNotSupported),
};
let maybe_sample_format =
coreaudio::audio_unit::SampleFormat::from_flags_and_bytes_per_frame(
flags,
asbd.mBytesPerFrame,
);
match maybe_sample_format {
Some(coreaudio::audio_unit::SampleFormat::F32) => SampleFormat::F32,
Some(coreaudio::audio_unit::SampleFormat::I16) => SampleFormat::I16,
_ => return Err(DefaultFormatError::StreamTypeNotSupported),
}
};
let format = Format {
sample_rate: SampleRate(asbd.mSampleRate as _),
channels: asbd.mChannelsPerFrame as _,
data_type: sample_format,
};
Ok(format)
}
}
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
self.default_format(kAudioObjectPropertyScopeInput)
}
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
self.default_format(kAudioObjectPropertyScopeOutput)
}
}
// The ID of a voice is its index within the `voices` array of the events loop.
// The ID of a stream is its index within the `streams` array of the events loop.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct VoiceId(usize);
pub struct StreamId(usize);
pub struct EventLoop {
// This `Arc` is shared with all the callbacks of coreaudio.
active_callbacks: Arc<ActiveCallbacks>,
voices: Mutex<Vec<Option<VoiceInner>>>,
streams: Mutex<Vec<Option<StreamInner>>>,
}
struct ActiveCallbacks {
// Whenever the `run()` method is called with a callback, this callback is put in this list.
callbacks: Mutex<Vec<&'static mut (FnMut(VoiceId, UnknownTypeBuffer) + Send)>>,
callbacks: Mutex<Vec<&'static mut (FnMut(StreamId, StreamData) + Send)>>,
}
struct VoiceInner {
struct StreamInner {
playing: bool,
audio_unit: AudioUnit,
}
@@ -227,25 +330,98 @@ impl From<coreaudio::Error> for CreationError {
}
}
// Create a coreaudio AudioStreamBasicDescription from a CPAL Format.
fn asbd_from_format(format: &Format) -> AudioStreamBasicDescription {
let n_channels = format.channels as usize;
let sample_rate = format.sample_rate.0;
let bytes_per_channel = format.data_type.sample_size();
let bits_per_channel = bytes_per_channel * 8;
let bytes_per_frame = n_channels * bytes_per_channel;
let frames_per_packet = 1;
let bytes_per_packet = frames_per_packet * bytes_per_frame;
let sample_format = format.data_type;
let format_flags = match sample_format {
SampleFormat::F32 => (kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked) as u32,
_ => kAudioFormatFlagIsPacked as u32,
};
let asbd = AudioStreamBasicDescription {
mBitsPerChannel: bits_per_channel as _,
mBytesPerFrame: bytes_per_frame as _,
mChannelsPerFrame: n_channels as _,
mBytesPerPacket: bytes_per_packet as _,
mFramesPerPacket: frames_per_packet as _,
mFormatFlags: format_flags,
mFormatID: kAudioFormatLinearPCM,
mSampleRate: sample_rate as _,
..Default::default()
};
asbd
}
fn audio_unit_from_device(device: &Device, input: bool) -> Result<AudioUnit, coreaudio::Error> {
let mut audio_unit = {
let au_type = if cfg!(target_os = "ios") {
// The HalOutput unit isn't available in iOS unfortunately.
// RemoteIO is a sensible replacement.
// See https://goo.gl/CWwRTx
coreaudio::audio_unit::IOType::RemoteIO
} else {
coreaudio::audio_unit::IOType::HalOutput
};
AudioUnit::new(au_type)?
};
if input {
// Enable input processing.
let enable_input = 1u32;
audio_unit.set_property(
kAudioOutputUnitProperty_EnableIO,
Scope::Input,
Element::Input,
Some(&enable_input),
)?;
// Disable output processing.
let disable_output = 0u32;
audio_unit.set_property(
kAudioOutputUnitProperty_EnableIO,
Scope::Output,
Element::Output,
Some(&disable_output),
)?;
}
audio_unit.set_property(
kAudioOutputUnitProperty_CurrentDevice,
Scope::Global,
Element::Output,
Some(&device.audio_device_id),
)?;
Ok(audio_unit)
}
impl EventLoop {
#[inline]
pub fn new() -> EventLoop {
EventLoop {
active_callbacks: Arc::new(ActiveCallbacks { callbacks: Mutex::new(Vec::new()) }),
voices: Mutex::new(Vec::new()),
streams: Mutex::new(Vec::new()),
}
}
#[inline]
pub fn run<F>(&self, mut callback: F) -> !
where F: FnMut(VoiceId, UnknownTypeBuffer) + Send
where F: FnMut(StreamId, StreamData) + Send
{
let callback: &mut (FnMut(VoiceId, UnknownTypeBuffer) + Send) = &mut callback;
self.active_callbacks
.callbacks
.lock()
.unwrap()
.push(unsafe { mem::transmute(callback) });
{
let callback: &mut (FnMut(StreamId, StreamData) + Send) = &mut callback;
self.active_callbacks
.callbacks
.lock()
.unwrap()
.push(unsafe { mem::transmute(callback) });
}
loop {
// So the loop does not get optimised out in --release
@@ -256,72 +432,131 @@ impl EventLoop {
// we remove the callback from `active_callbacks`.
}
#[inline]
pub fn build_voice(&self, endpoint: &Endpoint, format: &Format)
-> Result<VoiceId, CreationError> {
let mut audio_unit = {
let au_type = if cfg!(target_os = "ios") {
// The DefaultOutput unit isn't available in iOS unfortunately.
// RemoteIO is a sensible replacement.
// See https://goo.gl/CWwRTx
coreaudio::audio_unit::IOType::RemoteIO
} else {
coreaudio::audio_unit::IOType::DefaultOutput
};
AudioUnit::new(au_type)?
};
// TODO: Set the audio output unit device as the given endpoint device.
audio_unit.set_property(
kAudioOutputUnitProperty_CurrentDevice,
Scope::Global,
Element::Output,
Some(&endpoint.audio_device_id),
)?;
// Set the stream in interleaved mode.
let n_channels = format.channels as usize;
let sample_rate = format.sample_rate.0;
let bytes_per_channel = format.data_type.sample_size();
let bits_per_channel = bytes_per_channel * 8;
let bytes_per_frame = n_channels * bytes_per_channel;
let frames_per_packet = 1;
let bytes_per_packet = frames_per_packet * bytes_per_frame;
let sample_format = format.data_type;
let format_flags = match sample_format {
SampleFormat::F32 => (kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked) as u32,
_ => kAudioFormatFlagIsPacked as u32,
};
let asbd = AudioStreamBasicDescription {
mBitsPerChannel: bits_per_channel as _,
mBytesPerFrame: bytes_per_frame as _,
mChannelsPerFrame: n_channels as _,
mBytesPerPacket: bytes_per_packet as _,
mFramesPerPacket: frames_per_packet as _,
mFormatFlags: format_flags,
mFormatID: kAudioFormatLinearPCM,
mSampleRate: sample_rate as _,
..Default::default()
};
audio_unit.set_property(
kAudioUnitProperty_StreamFormat,
Scope::Input,
Element::Output,
Some(&asbd)
)?;
// Determine the future ID of the voice.
let mut voices_lock = self.voices.lock().unwrap();
let voice_id = voices_lock
fn next_stream_id(&self) -> usize {
let streams_lock = self.streams.lock().unwrap();
let stream_id = streams_lock
.iter()
.position(|n| n.is_none())
.unwrap_or(voices_lock.len());
.unwrap_or(streams_lock.len());
stream_id
}
// Add the stream to the list of streams within `self`.
fn add_stream(&self, stream_id: usize, au: AudioUnit) {
let inner = StreamInner {
playing: true,
audio_unit: au,
};
let mut streams_lock = self.streams.lock().unwrap();
if stream_id == streams_lock.len() {
streams_lock.push(Some(inner));
} else {
streams_lock[stream_id] = Some(inner);
}
}
#[inline]
pub fn build_input_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
let mut audio_unit = audio_unit_from_device(device, true)?;
// The scope and element for working with a device's output stream.
let scope = Scope::Output;
let element = Element::Input;
// Set the stream in interleaved mode.
let asbd = asbd_from_format(format);
audio_unit.set_property(kAudioUnitProperty_StreamFormat, scope, element, Some(&asbd))?;
// Determine the future ID of the stream.
let stream_id = self.next_stream_id();
// Register the callback that is being called by coreaudio whenever it needs data to be
// fed to the audio buffer.
let active_callbacks = self.active_callbacks.clone();
audio_unit.set_render_callback(move |args: render_callback::Args<data::Raw>| unsafe {
let sample_format = format.data_type;
let bytes_per_channel = format.data_type.sample_size();
type Args = render_callback::Args<data::Raw>;
audio_unit.set_input_callback(move |args: Args| unsafe {
let ptr = (*args.data.data).mBuffers.as_ptr() as *const AudioBuffer;
let len = (*args.data.data).mNumberBuffers as usize;
let buffers: &[AudioBuffer] = slice::from_raw_parts(ptr, len);
// TODO: Perhaps loop over all buffers instead?
let AudioBuffer {
mNumberChannels: _num_channels,
mDataByteSize: data_byte_size,
mData: data
} = buffers[0];
let mut callbacks = active_callbacks.callbacks.lock().unwrap();
// A small macro to simplify handling the callback for different sample types.
macro_rules! try_callback {
($SampleFormat:ident, $SampleType:ty) => {{
let data_len = (data_byte_size as usize / bytes_per_channel) as usize;
let data_slice = slice::from_raw_parts(data as *const $SampleType, data_len);
let callback = match callbacks.get_mut(0) {
Some(cb) => cb,
None => return Ok(()),
};
let buffer = InputBuffer { buffer: data_slice };
let unknown_type_buffer = UnknownTypeInputBuffer::$SampleFormat(::InputBuffer { buffer: Some(buffer) });
let stream_data = StreamData::Input { buffer: unknown_type_buffer };
callback(StreamId(stream_id), stream_data);
}};
}
match sample_format {
SampleFormat::F32 => try_callback!(F32, f32),
SampleFormat::I16 => try_callback!(I16, i16),
SampleFormat::U16 => try_callback!(U16, u16),
}
Ok(())
})?;
// TODO: start playing now? is that consistent with the other backends?
audio_unit.start()?;
// Add the stream to the list of streams within `self`.
self.add_stream(stream_id, audio_unit);
Ok(StreamId(stream_id))
}
#[inline]
pub fn build_output_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
let mut audio_unit = audio_unit_from_device(device, false)?;
// The scope and element for working with a device's output stream.
let scope = Scope::Input;
let element = Element::Output;
// Set the stream in interleaved mode.
let asbd = asbd_from_format(format);
audio_unit.set_property(kAudioUnitProperty_StreamFormat, scope, element, Some(&asbd))?;
// Determine the future ID of the stream.
let stream_id = self.next_stream_id();
// Register the callback that is being called by coreaudio whenever it needs data to be
// fed to the audio buffer.
let active_callbacks = self.active_callbacks.clone();
let sample_format = format.data_type;
let bytes_per_channel = format.data_type.sample_size();
type Args = render_callback::Args<data::Raw>;
audio_unit.set_render_callback(move |args: Args| unsafe {
// If `run()` is currently running, then a callback will be available from this list.
// Otherwise, we just fill the buffer with zeroes and return.
@@ -331,7 +566,6 @@ impl EventLoop {
mData: data
} = (*args.data.data).mBuffers[0];
let mut callbacks = active_callbacks.callbacks.lock().unwrap();
// A small macro to simplify handling the callback for different sample types.
@@ -348,9 +582,10 @@ impl EventLoop {
return Ok(());
}
};
let buffer = Buffer { buffer: data_slice };
let unknown_type_buffer = UnknownTypeBuffer::$SampleFormat(::Buffer { target: Some(buffer) });
callback(VoiceId(voice_id), unknown_type_buffer);
let buffer = OutputBuffer { buffer: data_slice };
let unknown_type_buffer = UnknownTypeOutputBuffer::$SampleFormat(::OutputBuffer { target: Some(buffer) });
let stream_data = StreamData::Output { buffer: unknown_type_buffer };
callback(StreamId(stream_id), stream_data);
}};
}
@@ -366,54 +601,59 @@ impl EventLoop {
// TODO: start playing now? is that consistent with the other backends?
audio_unit.start()?;
// Add the voice to the list of voices within `self`.
{
let inner = VoiceInner {
playing: true,
audio_unit: audio_unit,
};
// Add the stream to the list of streams within `self`.
self.add_stream(stream_id, audio_unit);
if voice_id == voices_lock.len() {
voices_lock.push(Some(inner));
} else {
voices_lock[voice_id] = Some(inner);
}
}
Ok(VoiceId(voice_id))
Ok(StreamId(stream_id))
}
pub fn destroy_voice(&self, voice_id: VoiceId) {
let mut voices = self.voices.lock().unwrap();
voices[voice_id.0] = None;
pub fn destroy_stream(&self, stream_id: StreamId) {
let mut streams = self.streams.lock().unwrap();
streams[stream_id.0] = None;
}
pub fn play(&self, voice: VoiceId) {
let mut voices = self.voices.lock().unwrap();
let voice = voices[voice.0].as_mut().unwrap();
pub fn play_stream(&self, stream: StreamId) {
let mut streams = self.streams.lock().unwrap();
let stream = streams[stream.0].as_mut().unwrap();
if !voice.playing {
voice.audio_unit.start().unwrap();
voice.playing = true;
if !stream.playing {
stream.audio_unit.start().unwrap();
stream.playing = true;
}
}
pub fn pause(&self, voice: VoiceId) {
let mut voices = self.voices.lock().unwrap();
let voice = voices[voice.0].as_mut().unwrap();
pub fn pause_stream(&self, stream: StreamId) {
let mut streams = self.streams.lock().unwrap();
let stream = streams[stream.0].as_mut().unwrap();
if voice.playing {
voice.audio_unit.stop().unwrap();
voice.playing = false;
if stream.playing {
stream.audio_unit.stop().unwrap();
stream.playing = false;
}
}
}
pub struct Buffer<'a, T: 'a> {
pub struct InputBuffer<'a, T: 'a> {
buffer: &'a [T],
}
pub struct OutputBuffer<'a, T: 'a> {
buffer: &'a mut [T],
}
impl<'a, T> Buffer<'a, T>
impl<'a, T> InputBuffer<'a, T> {
#[inline]
pub fn buffer(&self) -> &[T] {
&self.buffer
}
#[inline]
pub fn finish(self) {
// Nothing to be done.
}
}
impl<'a, T> OutputBuffer<'a, T>
where T: Sample
{
#[inline]

src/emscripten/mod.rs

@@ -9,23 +9,25 @@ use stdweb::web::TypedArray;
use stdweb::web::set_timeout;
use CreationError;
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use Sample;
use StreamData;
use SupportedFormat;
use UnknownTypeBuffer;
use UnknownTypeOutputBuffer;
// The emscripten backend works by having a global variable named `_cpal_audio_contexts`, which
// is an array of `AudioContext` objects. A voice ID corresponds to an entry in this array.
// is an array of `AudioContext` objects. A stream ID corresponds to an entry in this array.
//
// Creating a voice creates a new `AudioContext`. Destroying a voice destroys it.
// Creating a stream creates a new `AudioContext`. Destroying a stream destroys it.
// TODO: handle latency better ; right now we just use setInterval with the amount of sound data
// that is in each buffer ; this is obviously bad, and also the schedule is too tight and there may
// be underflows
pub struct EventLoop {
voices: Mutex<Vec<Option<Reference>>>,
streams: Mutex<Vec<Option<Reference>>>,
}
impl EventLoop {
@@ -33,12 +35,12 @@ impl EventLoop {
pub fn new() -> EventLoop {
stdweb::initialize();
EventLoop { voices: Mutex::new(Vec::new()) }
EventLoop { streams: Mutex::new(Vec::new()) }
}
#[inline]
pub fn run<F>(&self, callback: F) -> !
where F: FnMut(VoiceId, UnknownTypeBuffer)
where F: FnMut(StreamId, StreamData)
{
// The `run` function uses `set_timeout` to invoke a Rust callback repeatedly. The job
// of this callback is to fill the content of the audio buffers.
@@ -47,27 +49,29 @@ impl EventLoop {
// and to the `callback` parameter that was passed to `run`.
fn callback_fn<F>(user_data_ptr: *mut c_void)
where F: FnMut(VoiceId, UnknownTypeBuffer)
where F: FnMut(StreamId, StreamData)
{
unsafe {
let user_data_ptr2 = user_data_ptr as *mut (&EventLoop, F);
let user_data = &mut *user_data_ptr2;
let user_cb = &mut user_data.1;
let voices = user_data.0.voices.lock().unwrap().clone();
for (voice_id, voice) in voices.iter().enumerate() {
let voice = match voice.as_ref() {
let streams = user_data.0.streams.lock().unwrap().clone();
for (stream_id, stream) in streams.iter().enumerate() {
let stream = match stream.as_ref() {
Some(v) => v,
None => continue,
};
let buffer = Buffer {
let buffer = OutputBuffer {
temporary_buffer: vec![0.0; 44100 * 2 / 3],
voice: &voice,
stream: &stream,
};
user_cb(VoiceId(voice_id),
::UnknownTypeBuffer::F32(::Buffer { target: Some(buffer) }));
let id = StreamId(stream_id);
let buffer = UnknownTypeOutputBuffer::F32(::OutputBuffer { target: Some(buffer) });
let data = StreamData::Output { buffer: buffer };
user_cb(StreamId(stream_id), data);
}
set_timeout(|| callback_fn::<F>(user_data_ptr), 330);
@@ -83,51 +87,56 @@ impl EventLoop {
}
#[inline]
pub fn build_voice(&self, _: &Endpoint, _format: &Format) -> Result<VoiceId, CreationError> {
let voice = js!(return new AudioContext()).into_reference().unwrap();
pub fn build_input_stream(&self, _: &Device, _format: &Format) -> Result<StreamId, CreationError> {
unimplemented!();
}
let mut voices = self.voices.lock().unwrap();
let voice_id = if let Some(pos) = voices.iter().position(|v| v.is_none()) {
voices[pos] = Some(voice);
#[inline]
pub fn build_output_stream(&self, _: &Device, _format: &Format) -> Result<StreamId, CreationError> {
let stream = js!(return new AudioContext()).into_reference().unwrap();
let mut streams = self.streams.lock().unwrap();
let stream_id = if let Some(pos) = streams.iter().position(|v| v.is_none()) {
streams[pos] = Some(stream);
pos
} else {
let l = voices.len();
voices.push(Some(voice));
let l = streams.len();
streams.push(Some(stream));
l
};
Ok(VoiceId(voice_id))
Ok(StreamId(stream_id))
}
#[inline]
pub fn destroy_voice(&self, voice_id: VoiceId) {
self.voices.lock().unwrap()[voice_id.0] = None;
pub fn destroy_stream(&self, stream_id: StreamId) {
self.streams.lock().unwrap()[stream_id.0] = None;
}
#[inline]
pub fn play(&self, voice_id: VoiceId) {
let voices = self.voices.lock().unwrap();
let voice = voices
.get(voice_id.0)
pub fn play_stream(&self, stream_id: StreamId) {
let streams = self.streams.lock().unwrap();
let stream = streams
.get(stream_id.0)
.and_then(|v| v.as_ref())
.expect("invalid voice ID");
js!(@{voice}.resume());
.expect("invalid stream ID");
js!(@{stream}.resume());
}
#[inline]
pub fn pause(&self, voice_id: VoiceId) {
let voices = self.voices.lock().unwrap();
let voice = voices
.get(voice_id.0)
pub fn pause_stream(&self, stream_id: StreamId) {
let streams = self.streams.lock().unwrap();
let stream = streams
.get(stream_id.0)
.and_then(|v| v.as_ref())
.expect("invalid voice ID");
js!(@{voice}.suspend());
.expect("invalid stream ID");
js!(@{stream}.suspend());
}
}
// Index within the `voices` array of the events loop.
// Index within the `streams` array of the events loop.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct VoiceId(usize);
pub struct StreamId(usize);
// Detects whether the `AudioContext` global variable is available.
fn is_webaudio_available() -> bool {
@@ -142,20 +151,20 @@ fn is_webaudio_available() -> bool {
}
// Content is false if the iterator is empty.
pub struct EndpointsIterator(bool);
impl Default for EndpointsIterator {
fn default() -> EndpointsIterator {
pub struct Devices(bool);
impl Default for Devices {
fn default() -> Devices {
// We produce an empty iterator if the WebAudio API isn't available.
EndpointsIterator(is_webaudio_available())
Devices(is_webaudio_available())
}
}
impl Iterator for EndpointsIterator {
type Item = Endpoint;
impl Iterator for Devices {
type Item = Device;
#[inline]
fn next(&mut self) -> Option<Endpoint> {
fn next(&mut self) -> Option<Device> {
if self.0 {
self.0 = false;
Some(Endpoint)
Some(Device)
} else {
None
}
@@ -163,20 +172,35 @@ impl Iterator for EndpointsIterator {
}
#[inline]
pub fn default_endpoint() -> Option<Endpoint> {
pub fn default_input_device() -> Option<Device> {
unimplemented!();
}
#[inline]
pub fn default_output_device() -> Option<Device> {
if is_webaudio_available() {
Some(Endpoint)
Some(Device)
} else {
None
}
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Endpoint;
pub struct Device;
impl Endpoint {
impl Device {
#[inline]
pub fn supported_formats(&self) -> Result<SupportedFormatsIterator, FormatsEnumerationError> {
pub fn name(&self) -> String {
"Default Device".to_owned()
}
#[inline]
pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
unimplemented!();
}
#[inline]
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
// TODO: right now cpal's API doesn't allow flexibility here
// "44100" and "2" (channels) have also been hard-coded in the rest of the code ; if
// this ever becomes more flexible, don't forget to change that
@@ -192,22 +216,41 @@ impl Endpoint {
)
}
#[inline]
pub fn name(&self) -> String {
"Default endpoint".to_owned()
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
unimplemented!();
}
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
unimplemented!();
}
}
pub type SupportedFormatsIterator = ::std::vec::IntoIter<SupportedFormat>;
pub type SupportedInputFormats = ::std::vec::IntoIter<SupportedFormat>;
pub type SupportedOutputFormats = ::std::vec::IntoIter<SupportedFormat>;
pub struct Buffer<'a, T: 'a>
pub struct InputBuffer<'a, T: 'a> {
marker: ::std::marker::PhantomData<&'a T>,
}
pub struct OutputBuffer<'a, T: 'a>
where T: Sample
{
temporary_buffer: Vec<T>,
voice: &'a Reference,
stream: &'a Reference,
}
impl<'a, T> Buffer<'a, T>
impl<'a, T> InputBuffer<'a, T> {
#[inline]
pub fn buffer(&self) -> &[T] {
unimplemented!()
}
#[inline]
pub fn finish(self) {
}
}
impl<'a, T> OutputBuffer<'a, T>
where T: Sample
{
#[inline]
@@ -239,7 +282,7 @@ impl<'a, T> Buffer<'a, T>
js!(
var src_buffer = new Float32Array(@{typed_array}.buffer);
var context = @{self.voice};
var context = @{self.stream};
var buf_len = @{self.temporary_buffer.len() as u32};
var num_channels = @{num_channels};

File diff suppressed because it is too large.

src/null/mod.rs

@@ -3,12 +3,14 @@
use std::marker::PhantomData;
use CreationError;
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use StreamData;
use SupportedFormat;
use UnknownTypeBuffer;
pub struct EventLoop;
impl EventLoop {
#[inline]
pub fn new() -> EventLoop {
@@ -17,59 +19,84 @@ impl EventLoop {
#[inline]
pub fn run<F>(&self, _callback: F) -> !
where F: FnMut(VoiceId, UnknownTypeBuffer)
where F: FnMut(StreamId, StreamData)
{
loop { /* TODO: don't spin */ }
}
#[inline]
pub fn build_voice(&self, _: &Endpoint, _: &Format) -> Result<VoiceId, CreationError> {
pub fn build_input_stream(&self, _: &Device, _: &Format) -> Result<StreamId, CreationError> {
Err(CreationError::DeviceNotAvailable)
}
#[inline]
pub fn destroy_voice(&self, _: VoiceId) {
unreachable!()
pub fn build_output_stream(&self, _: &Device, _: &Format) -> Result<StreamId, CreationError> {
Err(CreationError::DeviceNotAvailable)
}
#[inline]
pub fn play(&self, _: VoiceId) {
pub fn destroy_stream(&self, _: StreamId) {
unimplemented!()
}
#[inline]
pub fn play_stream(&self, _: StreamId) {
panic!()
}
#[inline]
pub fn pause(&self, _: VoiceId) {
pub fn pause_stream(&self, _: StreamId) {
panic!()
}
}
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct VoiceId;
pub struct StreamId;
#[derive(Default)]
pub struct EndpointsIterator;
pub struct Devices;
impl Iterator for EndpointsIterator {
type Item = Endpoint;
impl Iterator for Devices {
type Item = Device;
#[inline]
fn next(&mut self) -> Option<Endpoint> {
fn next(&mut self) -> Option<Device> {
None
}
}
#[inline]
pub fn default_endpoint() -> Option<Endpoint> {
pub fn default_input_device() -> Option<Device> {
None
}
#[inline]
pub fn default_output_device() -> Option<Device> {
None
}
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct Endpoint;
pub struct Device;
impl Endpoint {
impl Device {
#[inline]
pub fn supported_formats(&self) -> Result<SupportedFormatsIterator, FormatsEnumerationError> {
unreachable!()
pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
unimplemented!()
}
#[inline]
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
unimplemented!()
}
#[inline]
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
unimplemented!()
}
#[inline]
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
unimplemented!()
}
#[inline]
@@ -78,9 +105,10 @@ impl Endpoint {
}
}
pub struct SupportedFormatsIterator;
pub struct SupportedInputFormats;
pub struct SupportedOutputFormats;
impl Iterator for SupportedFormatsIterator {
impl Iterator for SupportedInputFormats {
type Item = SupportedFormat;
#[inline]
@@ -89,14 +117,38 @@ impl Iterator for SupportedFormatsIterator {
}
}
pub struct Buffer<'a, T: 'a> {
impl Iterator for SupportedOutputFormats {
type Item = SupportedFormat;
#[inline]
fn next(&mut self) -> Option<SupportedFormat> {
None
}
}
pub struct InputBuffer<'a, T: 'a> {
marker: PhantomData<&'a T>,
}
pub struct OutputBuffer<'a, T: 'a> {
marker: PhantomData<&'a mut T>,
}
impl<'a, T> Buffer<'a, T> {
impl<'a, T> InputBuffer<'a, T> {
#[inline]
pub fn buffer(&self) -> &[T] {
unimplemented!()
}
#[inline]
pub fn finish(self) {
}
}
impl<'a, T> OutputBuffer<'a, T> {
#[inline]
pub fn buffer(&mut self) -> &mut [T] {
unreachable!()
unimplemented!()
}
#[inline]

src/wasapi/device.rs (new file)

@@ -0,0 +1,727 @@
use std;
use std::ffi::OsString;
use std::io::Error as IoError;
use std::mem;
use std::ops::{Deref, DerefMut};
use std::os::windows::ffi::OsStringExt;
use std::ptr;
use std::slice;
use std::sync::{Arc, Mutex, MutexGuard};
use DefaultFormatError;
use Format;
use FormatsEnumerationError;
use SampleFormat;
use SampleRate;
use SupportedFormat;
use COMMON_SAMPLE_RATES;
use super::check_result;
use super::com;
use super::winapi::Interface;
use super::winapi::shared::devpkey;
use super::winapi::shared::ksmedia;
use super::winapi::shared::guiddef::{
GUID,
};
use super::winapi::shared::winerror;
use super::winapi::shared::minwindef::{
DWORD,
};
use super::winapi::shared::mmreg;
use super::winapi::shared::wtypes;
use super::winapi::um::coml2api;
use super::winapi::um::audioclient::{
IAudioClient,
IID_IAudioClient,
AUDCLNT_E_DEVICE_INVALIDATED,
};
use super::winapi::um::audiosessiontypes::{
AUDCLNT_SHAREMODE_SHARED,
};
use super::winapi::um::combaseapi::{
CoCreateInstance,
CoTaskMemFree,
CLSCTX_ALL,
PropVariantClear,
};
use super::winapi::um::mmdeviceapi::{
eAll,
eCapture,
eConsole,
eRender,
CLSID_MMDeviceEnumerator,
DEVICE_STATE_ACTIVE,
EDataFlow,
IMMDevice,
IMMDeviceCollection,
IMMDeviceEnumerator,
IMMEndpoint,
};
pub type SupportedInputFormats = std::vec::IntoIter<SupportedFormat>;
pub type SupportedOutputFormats = std::vec::IntoIter<SupportedFormat>;
/// Wrapper because of that stupid decision to remove `Send` and `Sync` from raw pointers.
#[derive(Copy, Clone)]
struct IAudioClientWrapper(*mut IAudioClient);
unsafe impl Send for IAudioClientWrapper {
}
unsafe impl Sync for IAudioClientWrapper {
}
/// An opaque type that identifies an end point.
pub struct Device {
device: *mut IMMDevice,
/// We cache an uninitialized `IAudioClient` so that we can call functions from it without
/// having to create/destroy audio clients all the time.
future_audio_client: Arc<Mutex<Option<IAudioClientWrapper>>>, // TODO: add NonZero around the ptr
}
struct Endpoint {
endpoint: *mut IMMEndpoint,
}
enum WaveFormat {
Ex(mmreg::WAVEFORMATEX),
Extensible(mmreg::WAVEFORMATEXTENSIBLE),
}
// Use RAII to make sure CoTaskMemFree is called when we are responsible for freeing.
struct WaveFormatExPtr(*mut mmreg::WAVEFORMATEX);
impl Drop for WaveFormatExPtr {
fn drop(&mut self) {
unsafe {
CoTaskMemFree(self.0 as *mut _);
}
}
}
impl WaveFormat {
// Given a pointer to some format, returns a valid copy of the format.
pub fn copy_from_waveformatex_ptr(ptr: *const mmreg::WAVEFORMATEX) -> Option<Self> {
unsafe {
match (*ptr).wFormatTag {
mmreg::WAVE_FORMAT_PCM | mmreg::WAVE_FORMAT_IEEE_FLOAT => {
Some(WaveFormat::Ex(*ptr))
},
mmreg::WAVE_FORMAT_EXTENSIBLE => {
let extensible_ptr = ptr as *const mmreg::WAVEFORMATEXTENSIBLE;
Some(WaveFormat::Extensible(*extensible_ptr))
},
_ => None,
}
}
}
// Get the pointer to the WAVEFORMATEX struct.
pub fn as_ptr(&self) -> *const mmreg::WAVEFORMATEX {
self.deref() as *const _
}
}
impl Deref for WaveFormat {
type Target = mmreg::WAVEFORMATEX;
fn deref(&self) -> &Self::Target {
match *self {
WaveFormat::Ex(ref f) => f,
WaveFormat::Extensible(ref f) => &f.Format,
}
}
}
impl DerefMut for WaveFormat {
fn deref_mut(&mut self) -> &mut Self::Target {
match *self {
WaveFormat::Ex(ref mut f) => f,
WaveFormat::Extensible(ref mut f) => &mut f.Format,
}
}
}
unsafe fn immendpoint_from_immdevice(device: *const IMMDevice) -> *mut IMMEndpoint {
let mut endpoint: *mut IMMEndpoint = mem::uninitialized();
check_result((*device).QueryInterface(&IMMEndpoint::uuidof(), &mut endpoint as *mut _ as *mut _))
.expect("could not query IMMDevice interface for IMMEndpoint");
endpoint
}
unsafe fn data_flow_from_immendpoint(endpoint: *const IMMEndpoint) -> EDataFlow {
let mut data_flow = mem::uninitialized();
check_result((*endpoint).GetDataFlow(&mut data_flow))
.expect("could not get endpoint data_flow");
data_flow
}
// Given the audio client and format, returns whether or not the format is supported.
pub unsafe fn is_format_supported(
client: *const IAudioClient,
waveformatex_ptr: *const mmreg::WAVEFORMATEX,
) -> Result<bool, FormatsEnumerationError>
{
/*
// `IsFormatSupported` checks whether the format is supported and fills
// a `WAVEFORMATEX`
let mut dummy_fmt_ptr: *mut mmreg::WAVEFORMATEX = mem::uninitialized();
let hresult =
(*audio_client)
.IsFormatSupported(share_mode, &format_attempt.Format, &mut dummy_fmt_ptr);
// we free that `WAVEFORMATEX` immediately after because we don't need it
if !dummy_fmt_ptr.is_null() {
CoTaskMemFree(dummy_fmt_ptr as *mut _);
}
// `IsFormatSupported` can return `S_FALSE` (which means that a compatible format
// has been found) but we also treat this as an error
match (hresult, check_result(hresult)) {
(_, Err(ref e))
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
(_, Err(e)) => {
(*audio_client).Release();
panic!("{:?}", e);
},
(winerror::S_FALSE, _) => {
(*audio_client).Release();
return Err(CreationError::FormatNotSupported);
},
(_, Ok(())) => (),
};
*/
// Check if the given format is supported.
let is_supported = |waveformatex_ptr, mut closest_waveformatex_ptr| {
let result = (*client).IsFormatSupported(
AUDCLNT_SHAREMODE_SHARED,
waveformatex_ptr,
&mut closest_waveformatex_ptr,
);
// `IsFormatSupported` can return `S_FALSE` (which means that a compatible format
// has been found, but not an exact match) so we also treat this as unsupported.
match (result, check_result(result)) {
(_, Err(ref e)) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(FormatsEnumerationError::DeviceNotAvailable);
},
(_, Err(_)) => {
Ok(false)
},
(winerror::S_FALSE, _) => {
Ok(false)
},
(_, Ok(())) => {
Ok(true)
},
}
};
// The given pointer may point to either a plain `WAVEFORMATEX` or to a larger
// `WAVEFORMATEXTENSIBLE` structure (e.g. the format written by `GetMixFormat`).
// We check the `wFormatTag` to determine which, so that we copy the full structure
// and hand `IsFormatSupported` a correctly-sized "closest match" buffer.
match (*waveformatex_ptr).wFormatTag {
mmreg::WAVE_FORMAT_PCM | mmreg::WAVE_FORMAT_IEEE_FLOAT => {
let mut closest_waveformatex = *waveformatex_ptr;
let mut closest_waveformatex_ptr = &mut closest_waveformatex as *mut _;
is_supported(waveformatex_ptr, closest_waveformatex_ptr)
},
mmreg::WAVE_FORMAT_EXTENSIBLE => {
let waveformatextensible_ptr =
waveformatex_ptr as *const mmreg::WAVEFORMATEXTENSIBLE;
let mut closest_waveformatextensible = *waveformatextensible_ptr;
let closest_waveformatextensible_ptr =
&mut closest_waveformatextensible as *mut _;
let mut closest_waveformatex_ptr =
closest_waveformatextensible_ptr as *mut mmreg::WAVEFORMATEX;
is_supported(waveformatex_ptr, closest_waveformatex_ptr)
},
_ => Ok(false),
}
}
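// For reference, the complete mapping implemented below (anything else yields `None`):
//
//     16 bit + WAVE_FORMAT_PCM                                          -> SampleFormat::I16
//     32 bit + WAVE_FORMAT_IEEE_FLOAT                                   -> SampleFormat::F32
//     16 bit + WAVE_FORMAT_EXTENSIBLE + KSDATAFORMAT_SUBTYPE_PCM        -> SampleFormat::I16
//     32 bit + WAVE_FORMAT_EXTENSIBLE + KSDATAFORMAT_SUBTYPE_IEEE_FLOAT -> SampleFormat::F32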
// Get a cpal Format from a WAVEFORMATEX.
unsafe fn format_from_waveformatex_ptr(
waveformatex_ptr: *const mmreg::WAVEFORMATEX,
) -> Option<Format>
{
fn cmp_guid(a: &GUID, b: &GUID) -> bool {
a.Data1 == b.Data1
&& a.Data2 == b.Data2
&& a.Data3 == b.Data3
&& a.Data4 == b.Data4
}
let data_type = match ((*waveformatex_ptr).wBitsPerSample, (*waveformatex_ptr).wFormatTag) {
(16, mmreg::WAVE_FORMAT_PCM) => SampleFormat::I16,
(32, mmreg::WAVE_FORMAT_IEEE_FLOAT) => SampleFormat::F32,
(n_bits, mmreg::WAVE_FORMAT_EXTENSIBLE) => {
let waveformatextensible_ptr = waveformatex_ptr as *const mmreg::WAVEFORMATEXTENSIBLE;
let sub = (*waveformatextensible_ptr).SubFormat;
if n_bits == 16 && cmp_guid(&sub, &ksmedia::KSDATAFORMAT_SUBTYPE_PCM) {
SampleFormat::I16
} else if n_bits == 32 && cmp_guid(&sub, &ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT) {
SampleFormat::F32
} else {
return None;
}
},
// Unknown data format returned by GetMixFormat.
_ => return None,
};
let format = Format {
channels: (*waveformatex_ptr).nChannels as _,
sample_rate: SampleRate((*waveformatex_ptr).nSamplesPerSec),
data_type: data_type,
};
Some(format)
}
unsafe impl Send for Device {
}
unsafe impl Sync for Device {
}
impl Device {
pub fn name(&self) -> String {
unsafe {
// Open the device's property store.
let mut property_store = ptr::null_mut();
(*self.device).OpenPropertyStore(coml2api::STGM_READ, &mut property_store);
// Get the endpoint's friendly-name property.
let mut property_value = mem::zeroed();
check_result(
(*property_store).GetValue(
&devpkey::DEVPKEY_Device_FriendlyName as *const _ as *const _,
&mut property_value
)
).expect("failed to get friendly-name from property store");
// Read the friendly-name from the union data field, expecting a *const u16.
assert_eq!(property_value.vt, wtypes::VT_LPWSTR as _);
let ptr_usize: usize = *(&property_value.data as *const _ as *const usize);
let ptr_utf16 = ptr_usize as *const u16;
// Find the length of the friendly name.
let mut len = 0;
while *ptr_utf16.offset(len) != 0 {
len += 1;
}
// Create the utf16 slice and convert it into a string.
let name_slice = slice::from_raw_parts(ptr_utf16, len as usize);
let name_os_string: OsString = OsStringExt::from_wide(name_slice);
let name_string = name_os_string.into_string().unwrap();
// Clean up the property.
PropVariantClear(&mut property_value);
name_string
}
}
#[inline]
fn from_immdevice(device: *mut IMMDevice) -> Self {
Device {
device: device,
future_audio_client: Arc::new(Mutex::new(None)),
}
}
/// Ensures that `future_audio_client` contains a `Some` and returns a locked mutex to it.
fn ensure_future_audio_client(&self)
-> Result<MutexGuard<Option<IAudioClientWrapper>>, IoError> {
let mut lock = self.future_audio_client.lock().unwrap();
if lock.is_some() {
return Ok(lock);
}
let audio_client: *mut IAudioClient = unsafe {
let mut audio_client = mem::uninitialized();
let hresult = (*self.device).Activate(&IID_IAudioClient,
CLSCTX_ALL,
ptr::null_mut(),
&mut audio_client);
// can fail if the device has been disconnected since we enumerated it, or if
// the device doesn't support playback for some reason
check_result(hresult)?;
assert!(!audio_client.is_null());
audio_client as *mut _
};
*lock = Some(IAudioClientWrapper(audio_client));
Ok(lock)
}
/// Returns an uninitialized `IAudioClient`.
#[inline]
pub(crate) fn build_audioclient(&self) -> Result<*mut IAudioClient, IoError> {
let mut lock = self.ensure_future_audio_client()?;
let client = lock.unwrap().0;
*lock = None;
Ok(client)
}
// There is no way to query the list of all formats that are supported by the
// audio processor, so instead we just trial some commonly supported formats.
//
// Common formats are trialed by first getting the default format (returned via
// `GetMixFormat`) and then mutating that format with common sample rates and
// querying them via `IsFormatSupported`.
//
// When calling `IsFormatSupported` with the shared-mode audio engine, only the default
// number of channels seems to be supported. Any more or less returns an invalid
// parameter error. Thus we just assume that the default number of channels is the only
// number supported.
fn supported_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
// initializing COM because we call `CoTaskMemFree` to release the format.
com::com_initialized();
// Retrieve the `IAudioClient`.
let lock = match self.ensure_future_audio_client() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(FormatsEnumerationError::DeviceNotAvailable),
e => e.unwrap(),
};
let client = lock.unwrap().0;
unsafe {
// Retrieve the pointer to the default WAVEFORMATEX.
let mut default_waveformatex_ptr = WaveFormatExPtr(mem::uninitialized());
match check_result((*client).GetMixFormat(&mut default_waveformatex_ptr.0)) {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(FormatsEnumerationError::DeviceNotAvailable);
},
Err(e) => panic!("{:?}", e),
Ok(()) => (),
};
// If the default format isn't supported, we have no hope of finding other formats.
assert_eq!(try!(is_format_supported(client, default_waveformatex_ptr.0)), true);
// Copy the format to use as a test format (so as to avoid mutating the original format).
let mut test_format = {
match WaveFormat::copy_from_waveformatex_ptr(default_waveformatex_ptr.0) {
Some(f) => f,
// If the format is neither EX or EXTENSIBLE we don't know how to work with it.
None => return Ok(vec![].into_iter()),
}
};
// Begin testing common sample rates.
//
// NOTE: We should really be testing for whole ranges here, but it is infeasible to
// test every sample rate up to the overflow limit as the `IsFormatSupported` method is
// quite slow.
let mut supported_sample_rates: Vec<u32> = Vec::new();
for &rate in COMMON_SAMPLE_RATES {
let rate = rate.0 as DWORD;
test_format.nSamplesPerSec = rate;
test_format.nAvgBytesPerSec =
rate * (*default_waveformatex_ptr.0).nBlockAlign as DWORD;
if try!(is_format_supported(client, test_format.as_ptr())) {
supported_sample_rates.push(rate);
}
}
// If the common rates don't include the default one, add the default.
let default_sr = (*default_waveformatex_ptr.0).nSamplesPerSec as _;
if !supported_sample_rates.iter().any(|&r| r == default_sr) {
supported_sample_rates.push(default_sr);
}
// Reset the sample rate on the test format now that we're done.
test_format.nSamplesPerSec = (*default_waveformatex_ptr.0).nSamplesPerSec;
test_format.nAvgBytesPerSec = (*default_waveformatex_ptr.0).nAvgBytesPerSec;
// TODO: Test the different sample formats?
// Create the supported formats.
let mut format = format_from_waveformatex_ptr(default_waveformatex_ptr.0)
.expect("could not create a cpal::Format from a WAVEFORMATEX");
let mut supported_formats = Vec::with_capacity(supported_sample_rates.len());
for rate in supported_sample_rates {
format.sample_rate = SampleRate(rate as _);
supported_formats.push(SupportedFormat::from(format.clone()));
}
Ok(supported_formats.into_iter())
}
}
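// A hedged usage sketch for the two public entry points below, modelled on the
// updated `enumerate.rs` example (the crate-level `devices()` re-export is an
// assumption):
//
//     for device in cpal::devices() {
//         println!("{}", device.name());
//         if let Ok(formats) = device.supported_input_formats() {
//             for format in formats {
//                 println!("  input: {:?}", format);
//             }
//         }
//     }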
pub fn supported_input_formats(&self) -> Result<SupportedInputFormats, FormatsEnumerationError> {
if self.data_flow() == eCapture {
self.supported_formats()
// If it's an output device, assume no input formats.
} else {
Ok(vec![].into_iter())
}
}
pub fn supported_output_formats(&self) -> Result<SupportedOutputFormats, FormatsEnumerationError> {
if self.data_flow() == eRender {
self.supported_formats()
// If it's an input device, assume no output formats.
} else {
Ok(vec![].into_iter())
}
}
// We always create streams in shared mode, therefore all samples go through an audio
// processor to mix them together.
//
// One format is guaranteed to be supported, the one returned by `GetMixFormat`.
fn default_format(&self) -> Result<Format, DefaultFormatError> {
// initializing COM because we call `CoTaskMemFree`
com::com_initialized();
let lock = match self.ensure_future_audio_client() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(DefaultFormatError::DeviceNotAvailable),
e => e.unwrap(),
};
let client = lock.unwrap().0;
unsafe {
let mut format_ptr = WaveFormatExPtr(mem::uninitialized());
match check_result((*client).GetMixFormat(&mut format_ptr.0)) {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(DefaultFormatError::DeviceNotAvailable);
},
Err(e) => panic!("{:?}", e),
Ok(()) => (),
};
format_from_waveformatex_ptr(format_ptr.0)
.ok_or(DefaultFormatError::StreamTypeNotSupported)
}
}
fn data_flow(&self) -> EDataFlow {
let endpoint = Endpoint::from(self.device as *const _);
endpoint.data_flow()
}
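// A hedged usage sketch for the two default-format methods below (the crate-level
// re-exports are assumed, as in the updated `beep.rs` example):
//
//     let device = cpal::default_output_device().expect("no output device");
//     let format = device.default_output_format().expect("no default output format");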
pub fn default_input_format(&self) -> Result<Format, DefaultFormatError> {
if self.data_flow() == eCapture {
self.default_format()
} else {
Err(DefaultFormatError::StreamTypeNotSupported)
}
}
pub fn default_output_format(&self) -> Result<Format, DefaultFormatError> {
let data_flow = self.data_flow();
if data_flow == eRender {
self.default_format()
} else {
Err(DefaultFormatError::StreamTypeNotSupported)
}
}
}
impl PartialEq for Device {
#[inline]
fn eq(&self, other: &Device) -> bool {
self.device == other.device
}
}
impl Eq for Device {
}
impl Clone for Device {
#[inline]
fn clone(&self) -> Device {
unsafe {
(*self.device).AddRef();
}
Device {
device: self.device,
future_audio_client: self.future_audio_client.clone(),
}
}
}
impl Drop for Device {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.device).Release();
}
if let Some(client) = self.future_audio_client.lock().unwrap().take() {
unsafe {
(*client.0).Release();
}
}
}
}
impl Drop for Endpoint {
fn drop(&mut self) {
unsafe {
(*self.endpoint).Release();
}
}
}
impl From<*const IMMDevice> for Endpoint {
fn from(device: *const IMMDevice) -> Self {
unsafe {
let endpoint = immendpoint_from_immdevice(device);
Endpoint { endpoint: endpoint }
}
}
}
impl Endpoint {
fn data_flow(&self) -> EDataFlow {
unsafe {
data_flow_from_immendpoint(self.endpoint)
}
}
}
lazy_static! {
static ref ENUMERATOR: Enumerator = {
// COM initialization is thread local, but we only need to have COM initialized in the
// thread we create the objects in
com::com_initialized();
// building the devices enumerator object
unsafe {
let mut enumerator: *mut IMMDeviceEnumerator = mem::uninitialized();
let hresult = CoCreateInstance(
&CLSID_MMDeviceEnumerator,
ptr::null_mut(),
CLSCTX_ALL,
&IMMDeviceEnumerator::uuidof(),
&mut enumerator as *mut *mut IMMDeviceEnumerator as *mut _,
);
check_result(hresult).unwrap();
Enumerator(enumerator)
}
};
}
/// RAII object around `IMMDeviceEnumerator`.
struct Enumerator(*mut IMMDeviceEnumerator);
unsafe impl Send for Enumerator {
}
unsafe impl Sync for Enumerator {
}
impl Drop for Enumerator {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.0).Release();
}
}
}
/// WASAPI implementation for `Devices`.
pub struct Devices {
collection: *mut IMMDeviceCollection,
total_count: u32,
next_item: u32,
}
unsafe impl Send for Devices {
}
unsafe impl Sync for Devices {
}
impl Drop for Devices {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.collection).Release();
}
}
}
impl Default for Devices {
fn default() -> Devices {
unsafe {
let mut collection: *mut IMMDeviceCollection = mem::uninitialized();
// can fail because of wrong parameters (should never happen) or out of memory
check_result(
(*ENUMERATOR.0).EnumAudioEndpoints(
eAll,
DEVICE_STATE_ACTIVE,
&mut collection,
)
).unwrap();
let mut count = mem::uninitialized();
// can fail if the parameter is null, which should never happen
check_result((*collection).GetCount(&mut count)).unwrap();
Devices {
collection: collection,
total_count: count,
next_item: 0,
}
}
}
}
impl Iterator for Devices {
type Item = Device;
fn next(&mut self) -> Option<Device> {
if self.next_item >= self.total_count {
return None;
}
unsafe {
let mut device = mem::uninitialized();
// can fail if out of range, which we just checked above
check_result((*self.collection).Item(self.next_item, &mut device)).unwrap();
self.next_item += 1;
Some(Device::from_immdevice(device))
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let num = self.total_count - self.next_item;
let num = num as usize;
(num, Some(num))
}
}
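// A minimal iteration sketch, grounded in the impls above (`Devices` implements
// both `Default` and `Iterator<Item = Device>`):
//
//     for device in Devices::default() {
//         println!("{}", device.name());
//     }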
fn default_device(data_flow: EDataFlow) -> Option<Device> {
unsafe {
let mut device = mem::uninitialized();
let hres = (*ENUMERATOR.0)
.GetDefaultAudioEndpoint(data_flow, eConsole, &mut device);
if let Err(_err) = check_result(hres) {
return None; // TODO: check specifically for `E_NOTFOUND`, and panic otherwise
}
Some(Device::from_immdevice(device))
}
}
pub fn default_input_device() -> Option<Device> {
default_device(eCapture)
}
pub fn default_output_device() -> Option<Device> {
default_device(eRender)
}

src/wasapi/endpoint.rs (deleted)

@@ -1,378 +0,0 @@
use std::ffi::OsString;
use std::io::Error as IoError;
use std::mem;
use std::option::IntoIter as OptionIntoIter;
use std::os::windows::ffi::OsStringExt;
use std::ptr;
use std::slice;
use std::sync::{Arc, Mutex, MutexGuard};
use ChannelCount;
use FormatsEnumerationError;
use SampleFormat;
use SampleRate;
use SupportedFormat;
use super::check_result;
use super::com;
use super::winapi::Interface;
use super::winapi::shared::ksmedia;
use super::winapi::shared::guiddef::{
GUID,
};
use super::winapi::shared::mmreg::{
WAVE_FORMAT_PCM,
WAVE_FORMAT_EXTENSIBLE,
WAVEFORMATEXTENSIBLE,
};
use super::winapi::um::audioclient::{
IAudioClient,
IID_IAudioClient,
AUDCLNT_E_DEVICE_INVALIDATED,
};
use super::winapi::um::combaseapi::{
CoCreateInstance,
CoTaskMemFree,
CLSCTX_ALL,
};
use super::winapi::um::mmdeviceapi::{
eConsole,
eRender,
CLSID_MMDeviceEnumerator,
DEVICE_STATE_ACTIVE,
IMMDevice,
IMMDeviceCollection,
IMMDeviceEnumerator,
};
pub type SupportedFormatsIterator = OptionIntoIter<SupportedFormat>;
/// Wrapper because of that stupid decision to remove `Send` and `Sync` from raw pointers.
#[derive(Copy, Clone)]
struct IAudioClientWrapper(*mut IAudioClient);
unsafe impl Send for IAudioClientWrapper {
}
unsafe impl Sync for IAudioClientWrapper {
}
/// An opaque type that identifies an end point.
pub struct Endpoint {
device: *mut IMMDevice,
/// We cache an uninitialized `IAudioClient` so that we can call functions from it without
/// having to create/destroy audio clients all the time.
future_audio_client: Arc<Mutex<Option<IAudioClientWrapper>>>, // TODO: add NonZero around the ptr
}
unsafe impl Send for Endpoint {
}
unsafe impl Sync for Endpoint {
}
impl Endpoint {
// TODO: this function returns a GUID of the endpoint
// instead it should use the property store and return the friendly name
pub fn name(&self) -> String {
unsafe {
let mut name_ptr = mem::uninitialized();
// can only fail if wrong params or out of memory
check_result((*self.device).GetId(&mut name_ptr)).unwrap();
// finding the length of the name
let mut len = 0;
while *name_ptr.offset(len) != 0 {
len += 1;
}
// building a slice containing the name
let name_slice = slice::from_raw_parts(name_ptr, len as usize);
// and turning it into a string
let name_string: OsString = OsStringExt::from_wide(name_slice);
CoTaskMemFree(name_ptr as *mut _);
name_string.into_string().unwrap()
}
}
#[inline]
fn from_immdevice(device: *mut IMMDevice) -> Endpoint {
Endpoint {
device: device,
future_audio_client: Arc::new(Mutex::new(None)),
}
}
/// Ensures that `future_audio_client` contains a `Some` and returns a locked mutex to it.
fn ensure_future_audio_client(&self)
-> Result<MutexGuard<Option<IAudioClientWrapper>>, IoError> {
let mut lock = self.future_audio_client.lock().unwrap();
if lock.is_some() {
return Ok(lock);
}
let audio_client: *mut IAudioClient = unsafe {
let mut audio_client = mem::uninitialized();
let hresult = (*self.device).Activate(&IID_IAudioClient,
CLSCTX_ALL,
ptr::null_mut(),
&mut audio_client);
// can fail if the device has been disconnected since we enumerated it, or if
// the device doesn't support playback for some reason
check_result(hresult)?;
assert!(!audio_client.is_null());
audio_client as *mut _
};
*lock = Some(IAudioClientWrapper(audio_client));
Ok(lock)
}
/// Returns an uninitialized `IAudioClient`.
#[inline]
pub(crate) fn build_audioclient(&self) -> Result<*mut IAudioClient, IoError> {
let mut lock = self.ensure_future_audio_client()?;
let client = lock.unwrap().0;
*lock = None;
Ok(client)
}
pub fn supported_formats(&self) -> Result<SupportedFormatsIterator, FormatsEnumerationError> {
// We always create voices in shared mode, therefore all samples go through an audio
// processor to mix them together.
// However there is no way to query the list of all formats that are supported by the
// audio processor, but one format is guaranteed to be supported, the one returned by
// `GetMixFormat`.
// initializing COM because we call `CoTaskMemFree`
com::com_initialized();
let lock = match self.ensure_future_audio_client() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(FormatsEnumerationError::DeviceNotAvailable),
e => e.unwrap(),
};
let client = lock.unwrap().0;
unsafe {
let mut format_ptr = mem::uninitialized();
match check_result((*client).GetMixFormat(&mut format_ptr)) {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
return Err(FormatsEnumerationError::DeviceNotAvailable);
},
Err(e) => panic!("{:?}", e),
Ok(()) => (),
};
let format = {
let (channels, data_type) = match (*format_ptr).wFormatTag {
WAVE_FORMAT_PCM => {
(2, SampleFormat::I16)
},
WAVE_FORMAT_EXTENSIBLE => {
let format_ptr = format_ptr as *const WAVEFORMATEXTENSIBLE;
let channels = (*format_ptr).Format.nChannels as ChannelCount;
let format = {
fn cmp_guid(a: &GUID, b: &GUID) -> bool {
a.Data1 == b.Data1 && a.Data2 == b.Data2 && a.Data3 == b.Data3 &&
a.Data4 == b.Data4
}
if cmp_guid(&(*format_ptr).SubFormat,
&ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT)
{
SampleFormat::F32
} else if cmp_guid(&(*format_ptr).SubFormat,
&ksmedia::KSDATAFORMAT_SUBTYPE_PCM)
{
SampleFormat::I16
} else {
panic!("Unknown SubFormat GUID returned by GetMixFormat");
// TODO: Re-add this to end of panic. Getting
// `trait Debug is not satisfied` error.
//(*format_ptr).SubFormat)
}
};
(channels, format)
},
f => panic!("Unknown data format returned by GetMixFormat: {:?}", f),
};
SupportedFormat {
channels: channels,
min_sample_rate: SampleRate((*format_ptr).nSamplesPerSec),
max_sample_rate: SampleRate((*format_ptr).nSamplesPerSec),
data_type: data_type,
}
};
CoTaskMemFree(format_ptr as *mut _);
Ok(Some(format).into_iter())
}
}
}
impl PartialEq for Endpoint {
#[inline]
fn eq(&self, other: &Endpoint) -> bool {
self.device == other.device
}
}
impl Eq for Endpoint {
}
impl Clone for Endpoint {
#[inline]
fn clone(&self) -> Endpoint {
unsafe {
(*self.device).AddRef();
}
Endpoint {
device: self.device,
future_audio_client: self.future_audio_client.clone(),
}
}
}
impl Drop for Endpoint {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.device).Release();
}
if let Some(client) = self.future_audio_client.lock().unwrap().take() {
unsafe {
(*client.0).Release();
}
}
}
}
lazy_static! {
static ref ENUMERATOR: Enumerator = {
// COM initialization is thread local, but we only need to have COM initialized in the
// thread we create the objects in
com::com_initialized();
// building the devices enumerator object
unsafe {
let mut enumerator: *mut IMMDeviceEnumerator = mem::uninitialized();
let hresult = CoCreateInstance(&CLSID_MMDeviceEnumerator,
ptr::null_mut(), CLSCTX_ALL,
&IMMDeviceEnumerator::uuidof(),
&mut enumerator
as *mut *mut IMMDeviceEnumerator
as *mut _);
check_result(hresult).unwrap();
Enumerator(enumerator)
}
};
}
/// RAII object around `IMMDeviceEnumerator`.
struct Enumerator(*mut IMMDeviceEnumerator);
unsafe impl Send for Enumerator {
}
unsafe impl Sync for Enumerator {
}
impl Drop for Enumerator {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.0).Release();
}
}
}
/// WASAPI implementation for `EndpointsIterator`.
pub struct EndpointsIterator {
collection: *mut IMMDeviceCollection,
total_count: u32,
next_item: u32,
}
unsafe impl Send for EndpointsIterator {
}
unsafe impl Sync for EndpointsIterator {
}
impl Drop for EndpointsIterator {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.collection).Release();
}
}
}
impl Default for EndpointsIterator {
fn default() -> EndpointsIterator {
unsafe {
let mut collection: *mut IMMDeviceCollection = mem::uninitialized();
// can fail because of wrong parameters (should never happen) or out of memory
check_result((*ENUMERATOR.0).EnumAudioEndpoints(eRender,
DEVICE_STATE_ACTIVE,
&mut collection))
.unwrap();
let mut count = mem::uninitialized();
// can fail if the parameter is null, which should never happen
check_result((*collection).GetCount(&mut count)).unwrap();
EndpointsIterator {
collection: collection,
total_count: count,
next_item: 0,
}
}
}
}
impl Iterator for EndpointsIterator {
type Item = Endpoint;
fn next(&mut self) -> Option<Endpoint> {
if self.next_item >= self.total_count {
return None;
}
unsafe {
let mut device = mem::uninitialized();
// can fail if out of range, which we just checked above
check_result((*self.collection).Item(self.next_item, &mut device)).unwrap();
self.next_item += 1;
Some(Endpoint::from_immdevice(device))
}
}
#[inline]
fn size_hint(&self) -> (usize, Option<usize>) {
let num = self.total_count - self.next_item;
let num = num as usize;
(num, Some(num))
}
}
pub fn default_endpoint() -> Option<Endpoint> {
unsafe {
let mut device = mem::uninitialized();
let hres = (*ENUMERATOR.0)
.GetDefaultAudioEndpoint(eRender, eConsole, &mut device);
if let Err(_err) = check_result(hres) {
return None; // TODO: check specifically for `E_NOTFOUND`, and panic otherwise
}
Some(Endpoint::from_immdevice(device))
}
}

src/wasapi/mod.rs

@@ -2,13 +2,13 @@ extern crate winapi;
 use std::io::Error as IoError;
-pub use self::endpoint::{Endpoint, EndpointsIterator, SupportedFormatsIterator, default_endpoint};
-pub use self::voice::{Buffer, EventLoop, VoiceId};
+pub use self::device::{Device, Devices, SupportedInputFormats, SupportedOutputFormats, default_input_device, default_output_device};
+pub use self::stream::{InputBuffer, OutputBuffer, EventLoop, StreamId};
 use self::winapi::um::winnt::HRESULT;
 mod com;
-mod endpoint;
-mod voice;
+mod device;
+mod stream;
 #[inline]
 fn check_result(result: HRESULT) -> Result<(), IoError> {

src/wasapi/stream.rs (new file)

@@ -0,0 +1,768 @@
use super::Device;
use super::check_result;
use super::com;
use super::winapi::shared::basetsd::UINT32;
use super::winapi::shared::ksmedia;
use super::winapi::shared::minwindef::{BYTE, DWORD, FALSE, WORD};
use super::winapi::shared::mmreg;
use super::winapi::um::audioclient::{self, AUDCLNT_E_DEVICE_INVALIDATED};
use super::winapi::um::audiosessiontypes::{AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_EVENTCALLBACK};
use super::winapi::um::handleapi;
use super::winapi::um::synchapi;
use super::winapi::um::winbase;
use super::winapi::um::winnt;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::slice;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use CreationError;
use Format;
use SampleFormat;
use StreamData;
use UnknownTypeOutputBuffer;
use UnknownTypeInputBuffer;
pub struct EventLoop {
// Data used by the `run()` function implementation. The mutex is kept locked permanently by
// `run()`. This ensures that two `run()` invocations can't run at the same time, and also
// means that we shouldn't try to lock this field from anywhere else but `run()`.
run_context: Mutex<RunContext>,
// Identifier of the next stream to create. Each new stream increases this counter. If the
// counter overflows, there's a panic.
// TODO: use AtomicU64 instead
next_stream_id: AtomicUsize,
// Commands processed by the `run()` method that is currently running.
// `pending_scheduled_event` must be signalled whenever a command is added here, so that it
// will get picked up.
// TODO: use a lock-free container
commands: Mutex<Vec<Command>>,
// This event is signalled after a new entry is added to `commands`, so that the `run()`
// method can be notified.
pending_scheduled_event: winnt::HANDLE,
}
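// A sketch of the command handshake described above (all names are from this file):
// a caller pushes a `Command` and then signals `pending_scheduled_event`; `run()`
// wakes from `WaitForMultipleObjectsEx`, drains `commands`, and resumes waiting.
// `play_stream` below is the canonical instance:
//
//     self.commands.lock().unwrap().push(Command::PlayStream(stream));
//     let result = synchapi::SetEvent(self.pending_scheduled_event);
//     assert!(result != 0);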
struct RunContext {
// Streams that have been created in this event loop.
streams: Vec<StreamInner>,
// Handles corresponding to the `event` field of each element of `streams`. Must always be
// in sync with `streams`, except that the first element is always `pending_scheduled_event`.
handles: Vec<winnt::HANDLE>,
}
enum Command {
NewStream(StreamInner),
DestroyStream(StreamId),
PlayStream(StreamId),
PauseStream(StreamId),
}
enum AudioClientFlow {
Render {
render_client: *mut audioclient::IAudioRenderClient,
},
Capture {
capture_client: *mut audioclient::IAudioCaptureClient,
},
}
struct StreamInner {
id: StreamId,
audio_client: *mut audioclient::IAudioClient,
client_flow: AudioClientFlow,
// Event that is signalled by WASAPI whenever audio data must be read (capture) or written (render).
event: winnt::HANDLE,
// True if the stream is currently playing. False if paused.
playing: bool,
// Number of frames of audio data in the underlying buffer allocated by WASAPI.
max_frames_in_buffer: UINT32,
// Number of bytes that each frame occupies.
bytes_per_frame: WORD,
// The sample format with which the stream was created.
sample_format: SampleFormat,
}
impl EventLoop {
pub fn new() -> EventLoop {
let pending_scheduled_event =
unsafe { synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null()) };
EventLoop {
pending_scheduled_event: pending_scheduled_event,
run_context: Mutex::new(RunContext {
streams: Vec::new(),
handles: vec![pending_scheduled_event],
}),
next_stream_id: AtomicUsize::new(0),
commands: Mutex::new(Vec::new()),
}
}
pub fn build_input_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
unsafe {
// Making sure that COM is initialized.
// It's not certain that this is required, but better safe than sorry.
com::com_initialized();
// Obtaining an `IAudioClient`.
let audio_client = match device.build_audioclient() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(CreationError::DeviceNotAvailable),
e => e.unwrap(),
};
// Computing the format and initializing the device.
let waveformatex = {
let format_attempt = format_to_waveformatextensible(format)
.ok_or(CreationError::FormatNotSupported)?;
let share_mode = AUDCLNT_SHAREMODE_SHARED;
// Ensure the format is supported.
match super::device::is_format_supported(audio_client, &format_attempt.Format) {
Ok(false) => return Err(CreationError::FormatNotSupported),
Err(_) => return Err(CreationError::DeviceNotAvailable),
_ => (),
}
// finally initializing the audio client
let hresult = (*audio_client).Initialize(
share_mode,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
&format_attempt.Format,
ptr::null(),
);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
format_attempt.Format
};
// obtaining the size of the samples buffer in number of frames
let max_frames_in_buffer = {
let mut max_frames_in_buffer = mem::uninitialized();
let hresult = (*audio_client).GetBufferSize(&mut max_frames_in_buffer);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
max_frames_in_buffer
};
// Creating the event that will be signalled whenever we need to submit some samples.
let event = {
let event = synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null());
if event == ptr::null_mut() {
(*audio_client).Release();
panic!("Failed to create event");
}
match check_result((*audio_client).SetEventHandle(event)) {
Err(_) => {
(*audio_client).Release();
panic!("Failed to call SetEventHandle")
},
Ok(_) => (),
};
event
};
// Building an `IAudioCaptureClient` that will be used to read captured samples.
let capture_client = {
let mut capture_client: *mut audioclient::IAudioCaptureClient = mem::uninitialized();
let hresult = (*audio_client).GetService(
&audioclient::IID_IAudioCaptureClient,
&mut capture_client as *mut *mut audioclient::IAudioCaptureClient as *mut _,
);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
&mut *capture_client
};
let new_stream_id = StreamId(self.next_stream_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_stream_id.0, usize::max_value()); // check for overflows
// Once we've built the `StreamInner`, we add a command that will be picked up by the
// `run()` method and added to the `RunContext`.
{
let client_flow = AudioClientFlow::Capture {
capture_client: capture_client,
};
let inner = StreamInner {
id: new_stream_id.clone(),
audio_client: audio_client,
client_flow: client_flow,
event: event,
playing: false,
max_frames_in_buffer: max_frames_in_buffer,
bytes_per_frame: waveformatex.nBlockAlign,
sample_format: format.data_type,
};
self.commands.lock().unwrap().push(Command::NewStream(inner));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
};
Ok(new_stream_id)
}
}
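// A hedged usage sketch, mirroring the new `record_wav.rs` example (the crate-level
// wrappers are assumptions):
//
//     let event_loop = cpal::EventLoop::new();
//     let device = cpal::default_input_device().expect("no input device");
//     let format = device.default_input_format().expect("no default input format");
//     let stream_id = event_loop.build_input_stream(&device, &format).unwrap();
//     event_loop.play_stream(stream_id);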
pub fn build_output_stream(
&self,
device: &Device,
format: &Format,
) -> Result<StreamId, CreationError>
{
unsafe {
// Making sure that COM is initialized.
// It's not certain that this is required, but better safe than sorry.
com::com_initialized();
// Obtaining an `IAudioClient`.
let audio_client = match device.build_audioclient() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(CreationError::DeviceNotAvailable),
e => e.unwrap(),
};
// Computing the format and initializing the device.
let waveformatex = {
let format_attempt = format_to_waveformatextensible(format)
.ok_or(CreationError::FormatNotSupported)?;
let share_mode = AUDCLNT_SHAREMODE_SHARED;
// Ensure the format is supported.
match super::device::is_format_supported(audio_client, &format_attempt.Format) {
Ok(false) => return Err(CreationError::FormatNotSupported),
Err(_) => return Err(CreationError::DeviceNotAvailable),
_ => (),
}
// finally initializing the audio client
let hresult = (*audio_client).Initialize(share_mode,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
&format_attempt.Format,
ptr::null());
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
format_attempt.Format
};
// Creating the event that will be signalled whenever we need to submit some samples.
let event = {
let event = synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null());
if event == ptr::null_mut() {
(*audio_client).Release();
panic!("Failed to create event");
}
match check_result((*audio_client).SetEventHandle(event)) {
Err(_) => {
(*audio_client).Release();
panic!("Failed to call SetEventHandle")
},
Ok(_) => (),
};
event
};
// obtaining the size of the samples buffer in number of frames
let max_frames_in_buffer = {
let mut max_frames_in_buffer = mem::uninitialized();
let hresult = (*audio_client).GetBufferSize(&mut max_frames_in_buffer);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
max_frames_in_buffer
};
// Building an `IAudioRenderClient` that will be used to fill the samples buffer.
let render_client = {
let mut render_client: *mut audioclient::IAudioRenderClient = mem::uninitialized();
let hresult = (*audio_client).GetService(&audioclient::IID_IAudioRenderClient,
&mut render_client as
*mut *mut audioclient::IAudioRenderClient as
*mut _);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
&mut *render_client
};
let new_stream_id = StreamId(self.next_stream_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_stream_id.0, usize::max_value()); // check for overflows
// Once we've built the `StreamInner`, we add a command that will be picked up by the
// `run()` method and added to the `RunContext`.
{
let client_flow = AudioClientFlow::Render {
render_client: render_client,
};
let inner = StreamInner {
id: new_stream_id.clone(),
audio_client: audio_client,
client_flow: client_flow,
event: event,
playing: false,
max_frames_in_buffer: max_frames_in_buffer,
bytes_per_frame: waveformatex.nBlockAlign,
sample_format: format.data_type,
};
self.commands.lock().unwrap().push(Command::NewStream(inner));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
};
Ok(new_stream_id)
}
}
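// The symmetric output sketch, mirroring the updated `beep.rs` example (the
// crate-level wrappers are assumptions):
//
//     let device = cpal::default_output_device().expect("no output device");
//     let format = device.default_output_format().expect("no default output format");
//     let stream_id = event_loop.build_output_stream(&device, &format).unwrap();
//     event_loop.play_stream(stream_id);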
#[inline]
pub fn destroy_stream(&self, stream_id: StreamId) {
unsafe {
self.commands
.lock()
.unwrap()
.push(Command::DestroyStream(stream_id));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn run<F>(&self, mut callback: F) -> !
where F: FnMut(StreamId, StreamData)
{
self.run_inner(&mut callback);
}
fn run_inner(&self, callback: &mut FnMut(StreamId, StreamData)) -> ! {
unsafe {
// We keep `run_context` locked forever, which guarantees that two invocations of
// `run()` cannot run simultaneously.
let mut run_context = self.run_context.lock().unwrap();
loop {
// Process the pending commands.
let mut commands_lock = self.commands.lock().unwrap();
for command in commands_lock.drain(..) {
match command {
Command::NewStream(stream_inner) => {
let event = stream_inner.event;
run_context.streams.push(stream_inner);
run_context.handles.push(event);
},
Command::DestroyStream(stream_id) => {
match run_context.streams.iter().position(|v| v.id == stream_id) {
None => continue,
Some(p) => {
run_context.handles.remove(p + 1);
run_context.streams.remove(p);
},
}
},
Command::PlayStream(stream_id) => {
if let Some(v) = run_context.streams.get_mut(stream_id.0) {
if !v.playing {
let hresult = (*v.audio_client).Start();
check_result(hresult).unwrap();
v.playing = true;
}
}
},
Command::PauseStream(stream_id) => {
if let Some(v) = run_context.streams.get_mut(stream_id.0) {
if v.playing {
let hresult = (*v.audio_client).Stop();
check_result(hresult).unwrap();
v.playing = false;
}
}
},
}
}
drop(commands_lock);
// Wait for any of the handles to be signalled, which means that the corresponding
// sound needs a buffer.
debug_assert!(run_context.handles.len() <= winnt::MAXIMUM_WAIT_OBJECTS as usize);
let result = synchapi::WaitForMultipleObjectsEx(run_context.handles.len() as u32,
run_context.handles.as_ptr(),
FALSE,
winbase::INFINITE, /* TODO: allow setting a timeout */
FALSE /* irrelevant parameter here */);
// Notifying the corresponding task handler.
debug_assert!(result >= winbase::WAIT_OBJECT_0);
let handle_id = (result - winbase::WAIT_OBJECT_0) as usize;
// If `handle_id` is 0, then it's `pending_scheduled_event` that was signalled in
// order for us to pick up the pending commands.
// Otherwise, a stream needs data.
if handle_id >= 1 {
let stream = &mut run_context.streams[handle_id - 1];
let stream_id = stream.id.clone();
// Obtaining the number of frames that are available to be written. For capture
// streams, `GetBuffer` below overwrites this with the frame count actually available.
let mut frames_available = {
let mut padding = mem::uninitialized();
let hresult = (*stream.audio_client).GetCurrentPadding(&mut padding);
check_result(hresult).unwrap();
stream.max_frames_in_buffer - padding
};
if frames_available == 0 {
// TODO: can this happen?
continue;
}
let sample_size = stream.sample_format.sample_size();
// Obtaining a pointer to the buffer.
match stream.client_flow {
AudioClientFlow::Capture { capture_client } => {
// Get the available data in the shared buffer.
let mut buffer: *mut BYTE = mem::uninitialized();
let mut flags = mem::uninitialized();
let hresult = (*capture_client).GetBuffer(
&mut buffer,
&mut frames_available,
&mut flags,
ptr::null_mut(),
ptr::null_mut(),
);
check_result(hresult).unwrap();
debug_assert!(!buffer.is_null());
let buffer_len = frames_available as usize
* stream.bytes_per_frame as usize / sample_size;
// Simplify the capture callback sample format branches.
macro_rules! capture_callback {
($T:ty, $Variant:ident) => {{
let buffer_data = buffer as *mut _ as *const $T;
let slice = slice::from_raw_parts(buffer_data, buffer_len);
let input_buffer = InputBuffer { buffer: slice };
let unknown_buffer = UnknownTypeInputBuffer::$Variant(::InputBuffer {
buffer: Some(input_buffer),
});
let data = StreamData::Input { buffer: unknown_buffer };
callback(stream_id, data);
// Release the buffer.
let hresult = (*capture_client).ReleaseBuffer(frames_available);
match check_result(hresult) {
// Ignoring unavailable device error.
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
},
e => e.unwrap(),
};
}};
}
match stream.sample_format {
SampleFormat::F32 => capture_callback!(f32, F32),
SampleFormat::I16 => capture_callback!(i16, I16),
SampleFormat::U16 => capture_callback!(u16, U16),
}
},
AudioClientFlow::Render { render_client } => {
let mut buffer: *mut BYTE = mem::uninitialized();
let hresult = (*render_client).GetBuffer(
frames_available,
&mut buffer as *mut *mut _,
);
// FIXME: can return `AUDCLNT_E_DEVICE_INVALIDATED`
check_result(hresult).unwrap();
debug_assert!(!buffer.is_null());
let buffer_len = frames_available as usize
* stream.bytes_per_frame as usize / sample_size;
// Simplify the render callback sample format branches.
macro_rules! render_callback {
($T:ty, $Variant:ident) => {{
let buffer_data = buffer as *mut $T;
let output_buffer = OutputBuffer {
stream: stream,
buffer_data: buffer_data,
buffer_len: buffer_len,
frames: frames_available,
marker: PhantomData,
};
let unknown_buffer = UnknownTypeOutputBuffer::$Variant(::OutputBuffer {
target: Some(output_buffer)
});
let data = StreamData::Output { buffer: unknown_buffer };
callback(stream_id, data);
}};
}
match stream.sample_format {
SampleFormat::F32 => render_callback!(f32, F32),
SampleFormat::I16 => render_callback!(i16, I16),
SampleFormat::U16 => render_callback!(u16, U16),
}
},
}
}
}
}
}
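// A hedged sketch of a `run()` callback handling both stream directions, modelled
// on the `beep.rs` and `record_wav.rs` examples (only the `F32` arms are shown; the
// crate-level paths are assumptions):
//
//     event_loop.run(move |_stream_id, data| {
//         match data {
//             cpal::StreamData::Input { buffer: cpal::UnknownTypeInputBuffer::F32(buffer) } => {
//                 // read the captured samples out of `buffer` here
//             },
//             cpal::StreamData::Output { buffer: cpal::UnknownTypeOutputBuffer::F32(mut buffer) } => {
//                 for sample in buffer.iter_mut() {
//                     *sample = 0.0;
//                 }
//             },
//             _ => (),
//         }
//     });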
#[inline]
pub fn play_stream(&self, stream: StreamId) {
unsafe {
self.commands.lock().unwrap().push(Command::PlayStream(stream));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn pause_stream(&self, stream: StreamId) {
unsafe {
self.commands.lock().unwrap().push(Command::PauseStream(stream));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
}
impl Drop for EventLoop {
#[inline]
fn drop(&mut self) {
unsafe {
handleapi::CloseHandle(self.pending_scheduled_event);
}
}
}
unsafe impl Send for EventLoop {
}
unsafe impl Sync for EventLoop {
}
// The content of a stream ID is a number that was fetched from `next_stream_id`.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct StreamId(usize);
impl Drop for AudioClientFlow {
fn drop(&mut self) {
unsafe {
match *self {
AudioClientFlow::Capture { capture_client } => (*capture_client).Release(),
AudioClientFlow::Render { render_client } => (*render_client).Release(),
};
}
}
}
impl Drop for StreamInner {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.audio_client).Release();
handleapi::CloseHandle(self.event);
}
}
}
pub struct InputBuffer<'a, T: 'a> {
buffer: &'a [T],
}
pub struct OutputBuffer<'a, T: 'a> {
stream: &'a mut StreamInner,
buffer_data: *mut T,
buffer_len: usize,
frames: UINT32,
marker: PhantomData<&'a mut [T]>,
}
unsafe impl<'a, T> Send for OutputBuffer<'a, T> {
}
impl<'a, T> InputBuffer<'a, T> {
#[inline]
pub fn buffer(&self) -> &[T] {
&self.buffer
}
#[inline]
pub fn finish(self) {
// Nothing to be done.
}
}
impl<'a, T> OutputBuffer<'a, T> {
#[inline]
pub fn buffer(&mut self) -> &mut [T] {
unsafe { slice::from_raw_parts_mut(self.buffer_data, self.buffer_len) }
}
#[inline]
pub fn len(&self) -> usize {
self.buffer_len
}
#[inline]
pub fn finish(self) {
unsafe {
let hresult = match self.stream.client_flow {
AudioClientFlow::Render { render_client } => {
(*render_client).ReleaseBuffer(self.frames as u32, 0)
},
_ => unreachable!(),
};
match check_result(hresult) {
// Ignoring the error that is produced if the device has been disconnected.
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => (),
e => e.unwrap(),
};
}
}
}
// Turns a `Format` into a `WAVEFORMATEXTENSIBLE`.
//
// Returns `None` if the WAVEFORMATEXTENSIBLE does not support the given format.
fn format_to_waveformatextensible(format: &Format) -> Option<mmreg::WAVEFORMATEXTENSIBLE> {
let format_tag = match format.data_type {
SampleFormat::I16 => mmreg::WAVE_FORMAT_PCM,
SampleFormat::F32 => mmreg::WAVE_FORMAT_EXTENSIBLE,
SampleFormat::U16 => return None,
};
let channels = format.channels as WORD;
let sample_rate = format.sample_rate.0 as DWORD;
let sample_bytes = format.data_type.sample_size() as WORD;
let avg_bytes_per_sec = channels as DWORD * sample_rate * sample_bytes as DWORD;
let block_align = channels * sample_bytes;
let bits_per_sample = 8 * sample_bytes;
let cb_size = match format.data_type {
SampleFormat::I16 => 0,
SampleFormat::F32 => {
let extensible_size = mem::size_of::<mmreg::WAVEFORMATEXTENSIBLE>();
let ex_size = mem::size_of::<mmreg::WAVEFORMATEX>();
(extensible_size - ex_size) as WORD
},
SampleFormat::U16 => return None,
};
let waveformatex = mmreg::WAVEFORMATEX {
wFormatTag: format_tag,
nChannels: channels,
nSamplesPerSec: sample_rate,
nAvgBytesPerSec: avg_bytes_per_sec,
nBlockAlign: block_align,
wBitsPerSample: bits_per_sample,
cbSize: cb_size,
};
// CPAL does not care about speaker positions, so pass audio straight through.
// TODO: This constant should be defined in winapi but is missing.
const KSAUDIO_SPEAKER_DIRECTOUT: DWORD = 0;
let channel_mask = KSAUDIO_SPEAKER_DIRECTOUT;
let sub_format = match format.data_type {
SampleFormat::I16 => ksmedia::KSDATAFORMAT_SUBTYPE_PCM,
SampleFormat::F32 => ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT,
SampleFormat::U16 => return None,
};
let waveformatextensible = mmreg::WAVEFORMATEXTENSIBLE {
Format: waveformatex,
Samples: bits_per_sample as WORD,
dwChannelMask: channel_mask,
SubFormat: sub_format,
};
Some(waveformatextensible)
}
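// A worked instance of the arithmetic above (stereo `F32` at 44100 Hz):
//
//     channels          = 2, sample_bytes = 4
//     block_align       = 2 * 4         = 8 bytes per frame
//     avg_bytes_per_sec = 2 * 44100 * 4 = 352800
//     bits_per_sample   = 8 * 4         = 32
//     cb_size = size_of::<WAVEFORMATEXTENSIBLE>() - size_of::<WAVEFORMATEX>() = 22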

src/wasapi/voice.rs (deleted)

@@ -1,537 +0,0 @@
use super::Endpoint;
use super::check_result;
use super::com;
use super::winapi::shared::basetsd::UINT32;
use super::winapi::shared::ksmedia;
use super::winapi::shared::minwindef::{BYTE, DWORD, FALSE, WORD};
use super::winapi::shared::mmreg;
use super::winapi::shared::winerror;
use super::winapi::um::audioclient::{self, AUDCLNT_E_DEVICE_INVALIDATED};
use super::winapi::um::audiosessiontypes::{AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_EVENTCALLBACK};
use super::winapi::um::combaseapi::CoTaskMemFree;
use super::winapi::um::handleapi;
use super::winapi::um::synchapi;
use super::winapi::um::winbase;
use super::winapi::um::winnt;
use std::marker::PhantomData;
use std::mem;
use std::ptr;
use std::slice;
use std::sync::Mutex;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::Ordering;
use CreationError;
use Format;
use SampleFormat;
use UnknownTypeBuffer;
pub struct EventLoop {
// Data used by the `run()` function implementation. The mutex is kept lock permanently by
// `run()`. This ensures that two `run()` invocations can't run at the same time, and also
// means that we shouldn't try to lock this field from anywhere else but `run()`.
run_context: Mutex<RunContext>,
// Identifier of the next voice to create. Each new voice increases this counter. If the
// counter overflows, there's a panic.
// TODO: use AtomicU64 instead
next_voice_id: AtomicUsize,
// Commands processed by the `run()` method that is currently running.
// `pending_scheduled_event` must be signalled whenever a command is added here, so that it
// will get picked up.
// TODO: use a lock-free container
commands: Mutex<Vec<Command>>,
// This event is signalled after a new entry is added to `commands`, so that the `run()`
// method can be notified.
pending_scheduled_event: winnt::HANDLE,
}
struct RunContext {
// Voices that have been created in this event loop.
voices: Vec<VoiceInner>,
// Handles corresponding to the `event` field of each element of `voices`. Must always be in
// sync with `voices`, except that the first element is always `pending_scheduled_event`.
handles: Vec<winnt::HANDLE>,
}
enum Command {
NewVoice(VoiceInner),
DestroyVoice(VoiceId),
Play(VoiceId),
Pause(VoiceId),
}
struct VoiceInner {
id: VoiceId,
audio_client: *mut audioclient::IAudioClient,
render_client: *mut audioclient::IAudioRenderClient,
// Event that is signalled by WASAPI whenever audio data must be written.
event: winnt::HANDLE,
// True if the voice is currently playing. False if paused.
playing: bool,
// Number of frames of audio data in the underlying buffer allocated by WASAPI.
max_frames_in_buffer: UINT32,
// Number of bytes that each frame occupies.
bytes_per_frame: WORD,
}
impl EventLoop {
pub fn new() -> EventLoop {
let pending_scheduled_event =
unsafe { synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null()) };
EventLoop {
pending_scheduled_event: pending_scheduled_event,
run_context: Mutex::new(RunContext {
voices: Vec::new(),
handles: vec![pending_scheduled_event],
}),
next_voice_id: AtomicUsize::new(0),
commands: Mutex::new(Vec::new()),
}
}
pub fn build_voice(&self, end_point: &Endpoint, format: &Format)
-> Result<VoiceId, CreationError> {
unsafe {
// Making sure that COM is initialized.
// It's not actually sure that this is required, but when in doubt do it.
com::com_initialized();
// Obtaining a `IAudioClient`.
let audio_client = match end_point.build_audioclient() {
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) =>
return Err(CreationError::DeviceNotAvailable),
e => e.unwrap(),
};
// Computing the format and initializing the device.
let format = {
let format_attempt = format_to_waveformatextensible(format)?;
let share_mode = AUDCLNT_SHAREMODE_SHARED;
// `IsFormatSupported` checks whether the format is supported and fills
// a `WAVEFORMATEX`
let mut dummy_fmt_ptr: *mut mmreg::WAVEFORMATEX = mem::uninitialized();
let hresult =
(*audio_client)
.IsFormatSupported(share_mode, &format_attempt.Format, &mut dummy_fmt_ptr);
// we free that `WAVEFORMATEX` immediately after because we don't need it
if !dummy_fmt_ptr.is_null() {
CoTaskMemFree(dummy_fmt_ptr as *mut _);
}
// `IsFormatSupported` can return `S_FALSE` (which means that a compatible format
// has been found) but we also treat this as an error
match (hresult, check_result(hresult)) {
(_, Err(ref e))
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
(_, Err(e)) => {
(*audio_client).Release();
panic!("{:?}", e);
},
(winerror::S_FALSE, _) => {
(*audio_client).Release();
return Err(CreationError::FormatNotSupported);
},
(_, Ok(())) => (),
};
// finally initializing the audio client
let hresult = (*audio_client).Initialize(share_mode,
AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
0,
0,
&format_attempt.Format,
ptr::null());
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
format_attempt.Format
};
// Creating the event that will be signalled whenever we need to submit some samples.
let event = {
let event = synchapi::CreateEventA(ptr::null_mut(), 0, 0, ptr::null());
if event == ptr::null_mut() {
(*audio_client).Release();
panic!("Failed to create event");
}
match check_result((*audio_client).SetEventHandle(event)) {
Err(_) => {
(*audio_client).Release();
panic!("Failed to call SetEventHandle")
},
Ok(_) => (),
};
event
};
// obtaining the size of the samples buffer in number of frames
let max_frames_in_buffer = {
let mut max_frames_in_buffer = mem::uninitialized();
let hresult = (*audio_client).GetBufferSize(&mut max_frames_in_buffer);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
max_frames_in_buffer
};
// Building a `IAudioRenderClient` that will be used to fill the samples buffer.
let render_client = {
let mut render_client: *mut audioclient::IAudioRenderClient = mem::uninitialized();
let hresult = (*audio_client).GetService(&audioclient::IID_IAudioRenderClient,
&mut render_client as
*mut *mut audioclient::IAudioRenderClient as
*mut _);
match check_result(hresult) {
Err(ref e)
if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => {
(*audio_client).Release();
return Err(CreationError::DeviceNotAvailable);
},
Err(e) => {
(*audio_client).Release();
panic!("{:?}", e);
},
Ok(()) => (),
};
&mut *render_client
};
let new_voice_id = VoiceId(self.next_voice_id.fetch_add(1, Ordering::Relaxed));
assert_ne!(new_voice_id.0, usize::max_value()); // check for overflows
// Once we built the `VoiceInner`, we add a command that will be picked up by the
// `run()` method and added to the `RunContext`.
{
let inner = VoiceInner {
id: new_voice_id.clone(),
audio_client: audio_client,
render_client: render_client,
event: event,
playing: false,
max_frames_in_buffer: max_frames_in_buffer,
bytes_per_frame: format.nBlockAlign,
};
self.commands.lock().unwrap().push(Command::NewVoice(inner));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
};
Ok(new_voice_id)
}
}
#[inline]
pub fn destroy_voice(&self, voice_id: VoiceId) {
unsafe {
self.commands
.lock()
.unwrap()
.push(Command::DestroyVoice(voice_id));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn run<F>(&self, mut callback: F) -> !
where F: FnMut(VoiceId, UnknownTypeBuffer)
{
self.run_inner(&mut callback);
}
fn run_inner(&self, callback: &mut FnMut(VoiceId, UnknownTypeBuffer)) -> ! {
unsafe {
// We keep `run_context` locked forever, which guarantees that two invocations of
// `run()` cannot run simultaneously.
let mut run_context = self.run_context.lock().unwrap();
loop {
// Process the pending commands.
let mut commands_lock = self.commands.lock().unwrap();
for command in commands_lock.drain(..) {
match command {
Command::NewVoice(voice_inner) => {
let event = voice_inner.event;
run_context.voices.push(voice_inner);
run_context.handles.push(event);
},
Command::DestroyVoice(voice_id) => {
match run_context.voices.iter().position(|v| v.id == voice_id) {
None => continue,
Some(p) => {
run_context.handles.remove(p + 1);
run_context.voices.remove(p);
},
}
},
Command::Play(voice_id) => {
if let Some(v) = run_context.voices.get_mut(voice_id.0) {
if !v.playing {
let hresult = (*v.audio_client).Start();
check_result(hresult).unwrap();
v.playing = true;
}
}
},
Command::Pause(voice_id) => {
if let Some(v) = run_context.voices.get_mut(voice_id.0) {
if v.playing {
let hresult = (*v.audio_client).Stop();
check_result(hresult).unwrap();
v.playing = true;
}
}
},
}
}
drop(commands_lock);
// Wait for any of the handles to be signalled, which means that the corresponding
// sound needs a buffer.
debug_assert!(run_context.handles.len() <= winnt::MAXIMUM_WAIT_OBJECTS as usize);
let result = synchapi::WaitForMultipleObjectsEx(run_context.handles.len() as u32,
run_context.handles.as_ptr(),
FALSE,
winbase::INFINITE, /* TODO: allow setting a timeout */
FALSE /* irrelevant parameter here */);
// Notifying the corresponding task handler.
debug_assert!(result >= winbase::WAIT_OBJECT_0);
let handle_id = (result - winbase::WAIT_OBJECT_0) as usize;
// If `handle_id` is 0, then it's `pending_scheduled_event` that was signalled in
// order for us to pick up the pending commands.
// Otherwise, a voice needs data.
if handle_id >= 1 {
let voice = &mut run_context.voices[handle_id - 1];
let voice_id = voice.id.clone();
// Obtaining the number of frames that are available to be written.
let frames_available = {
let mut padding = mem::uninitialized();
let hresult = (*voice.audio_client).GetCurrentPadding(&mut padding);
check_result(hresult).unwrap();
voice.max_frames_in_buffer - padding
};
if frames_available == 0 {
// TODO: can this happen?
continue;
}
// Obtaining a pointer to the buffer.
let (buffer_data, buffer_len) = {
let mut buffer: *mut BYTE = mem::uninitialized();
let hresult = (*voice.render_client)
.GetBuffer(frames_available, &mut buffer as *mut *mut _);
check_result(hresult).unwrap(); // FIXME: can return `AUDCLNT_E_DEVICE_INVALIDATED`
debug_assert!(!buffer.is_null());
(buffer as *mut _,
frames_available as usize * voice.bytes_per_frame as usize /
mem::size_of::<f32>()) // FIXME: correct size when not f32
};
let buffer = Buffer {
voice: voice,
buffer_data: buffer_data,
buffer_len: buffer_len,
frames: frames_available,
marker: PhantomData,
};
let buffer = UnknownTypeBuffer::F32(::Buffer { target: Some(buffer) }); // FIXME: not always f32
callback(voice_id, buffer);
}
}
}
}
#[inline]
pub fn play(&self, voice: VoiceId) {
unsafe {
self.commands.lock().unwrap().push(Command::Play(voice));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
#[inline]
pub fn pause(&self, voice: VoiceId) {
unsafe {
self.commands.lock().unwrap().push(Command::Pause(voice));
let result = synchapi::SetEvent(self.pending_scheduled_event);
assert!(result != 0);
}
}
}
impl Drop for EventLoop {
#[inline]
fn drop(&mut self) {
unsafe {
handleapi::CloseHandle(self.pending_scheduled_event);
}
}
}
unsafe impl Send for EventLoop {
}
unsafe impl Sync for EventLoop {
}
// The content of a voice ID is a number that was fetched from `next_voice_id`.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct VoiceId(usize);
impl Drop for VoiceInner {
#[inline]
fn drop(&mut self) {
unsafe {
(*self.render_client).Release();
(*self.audio_client).Release();
handleapi::CloseHandle(self.event);
}
}
}
pub struct Buffer<'a, T: 'a> {
voice: &'a mut VoiceInner,
buffer_data: *mut T,
buffer_len: usize,
frames: UINT32,
marker: PhantomData<&'a mut [T]>,
}
unsafe impl<'a, T> Send for Buffer<'a, T> {
}
impl<'a, T> Buffer<'a, T> {
#[inline]
pub fn buffer(&mut self) -> &mut [T] {
unsafe { slice::from_raw_parts_mut(self.buffer_data, self.buffer_len) }
}
#[inline]
pub fn len(&self) -> usize {
self.buffer_len
}
#[inline]
pub fn finish(self) {
unsafe {
let hresult = (*self.voice.render_client).ReleaseBuffer(self.frames as u32, 0);
match check_result(hresult) {
// Ignoring the error that is produced if the device has been disconnected.
Err(ref e) if e.raw_os_error() == Some(AUDCLNT_E_DEVICE_INVALIDATED) => (),
e => e.unwrap(),
};
}
}
}
// Turns a `Format` into a `WAVEFORMATEXTENSIBLE`.
fn format_to_waveformatextensible(format: &Format)
-> Result<mmreg::WAVEFORMATEXTENSIBLE, CreationError> {
Ok(mmreg::WAVEFORMATEXTENSIBLE {
Format: mmreg::WAVEFORMATEX {
wFormatTag: match format.data_type {
SampleFormat::I16 => mmreg::WAVE_FORMAT_PCM,
SampleFormat::F32 => mmreg::WAVE_FORMAT_EXTENSIBLE,
SampleFormat::U16 => return Err(CreationError::FormatNotSupported),
},
nChannels: format.channels as WORD,
nSamplesPerSec: format.sample_rate.0 as DWORD,
nAvgBytesPerSec: format.channels as DWORD *
format.sample_rate.0 as DWORD *
format.data_type.sample_size() as DWORD,
nBlockAlign: format.channels as WORD *
format.data_type.sample_size() as WORD,
wBitsPerSample: 8 * format.data_type.sample_size() as WORD,
cbSize: match format.data_type {
SampleFormat::I16 => 0,
SampleFormat::F32 => (mem::size_of::<mmreg::WAVEFORMATEXTENSIBLE>() -
mem::size_of::<mmreg::WAVEFORMATEX>()) as
WORD,
SampleFormat::U16 => return Err(CreationError::FormatNotSupported),
},
},
Samples: 8 * format.data_type.sample_size() as WORD,
dwChannelMask: {
let mut mask = 0;
const CHANNEL_POSITIONS: &'static [DWORD] = &[
mmreg::SPEAKER_FRONT_LEFT,
mmreg::SPEAKER_FRONT_RIGHT,
mmreg::SPEAKER_FRONT_CENTER,
mmreg::SPEAKER_LOW_FREQUENCY,
mmreg::SPEAKER_BACK_LEFT,
mmreg::SPEAKER_BACK_RIGHT,
mmreg::SPEAKER_FRONT_LEFT_OF_CENTER,
mmreg::SPEAKER_FRONT_RIGHT_OF_CENTER,
mmreg::SPEAKER_BACK_CENTER,
mmreg::SPEAKER_SIDE_LEFT,
mmreg::SPEAKER_SIDE_RIGHT,
mmreg::SPEAKER_TOP_CENTER,
mmreg::SPEAKER_TOP_FRONT_LEFT,
mmreg::SPEAKER_TOP_FRONT_CENTER,
mmreg::SPEAKER_TOP_FRONT_RIGHT,
mmreg::SPEAKER_TOP_BACK_LEFT,
mmreg::SPEAKER_TOP_BACK_CENTER,
mmreg::SPEAKER_TOP_BACK_RIGHT,
];
for i in 0..format.channels {
let raw_value = CHANNEL_POSITIONS[i as usize];
mask = mask | raw_value;
}
mask
},
SubFormat: match format.data_type {
SampleFormat::I16 => ksmedia::KSDATAFORMAT_SUBTYPE_PCM,
SampleFormat::F32 => ksmedia::KSDATAFORMAT_SUBTYPE_IEEE_FLOAT,
SampleFormat::U16 => return Err(CreationError::FormatNotSupported),
},
})
}