This is a draft implementation of #294. I'll leave this open for
feedback and potentially better trait naming suggestions or better
solutions in general!
cc @ishitatsuyuki
This solution was originally posted by @HybridEidolon in #185. Sorry it
took so long! I thought it might be easier to open a new PR, as half of
your implementation here has already been implemented in a subsequent PR
(namely, the change from an unnecessary `Vec` of callbacks to a single
user callback).
Closes #185.
Re-exports host-specific types so that they are available within the
platform module if necessary (e.g. `host::alsa::Host` as `AlsaHost`).
Allows for converting platform-specific host types (e.g. AlsaHost) into
the dynamically dispatched type generated for the target platform
(`Host`).
This is an implementation of the API described at #204. Please see that
issue for more details on the motivation.
-----
A **Host** provides access to the available audio devices on the system.
Some platforms have more than one host available, e.g.
wasapi/asio/dsound on windows, alsa/pulse/jack on linux and so on. As a
result, some audio devices are only available on certain hosts, while
others are only available on other hosts. Every platform supported by
CPAL has at least one **DefaultHost** that is guaranteed to be available
(alsa, wasapi and coreaudio). Currently, the default hosts are the only
hosts supported by CPAL; however, this will change once #221 lands (cc
@freesig). These changes should also accommodate support for other hosts
such as jack #250 (cc @derekdreery) and pulseaudio #259 (cc @knappador).
This introduces a suite of traits allowing for both compile-time and
runtime dispatch of different hosts and their uniquely associated device
and event loop types.
A new private **host** module has been added containing the individual
host implementations, each in their own submodule gated to the platforms
on which they are available.
A new **platform** module has been added containing platform-specific
items, including a dynamically dispatched host type that allows for
easily switching between hosts at runtime.
The **ALL_HOSTS** slice contains a **HostId** for each host supported on
the current platform. The **available_hosts** function produces a
**HostId** for each host that is currently *available* on the platform.
The **host_from_id** function allows for initialising a host from its
associated ID, failing with a **HostUnavailable** error. The
**default_host** function returns the default host and should never
fail.
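As a rough illustration, here is a minimal sketch of how these items might be used together, assuming they are exposed at the crate root and that the new host trait lives in a `cpal::traits` module (the exact paths are part of what's up for discussion):

```rust
use cpal::traits::HostTrait;

fn main() {
    // Every host supported on this platform, available or not.
    println!("supported hosts: {:?}", cpal::ALL_HOSTS);

    // Only the hosts that can actually be used right now.
    for host_id in cpal::available_hosts() {
        // Initialise a host from its ID; this fails with `HostUnavailable`
        // if the host cannot be initialised.
        let host = cpal::host_from_id(host_id).expect("host was reported as available");
        println!(
            "{:?}: default output device present: {}",
            host_id,
            host.default_output_device().is_some()
        );
    }

    // The default host is guaranteed to be available and should never fail.
    let _host = cpal::default_host();
}
```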
Please see the examples for a demonstration of the change in usage. For
the most part, things look the same at the surface level; however, device
enumeration and event loop creation have moved from global functions to
host methods. The enumerate.rs example has been
updated to enumerate all devices for each host, not just the default.
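A hedged sketch of the surface-level change, under the same assumptions about trait paths as above:

```rust
use cpal::traits::{DeviceTrait, HostTrait};

fn main() {
    let host = cpal::default_host();

    // Device enumeration is now a method on the host rather than a free
    // function, and it returns a `Result` rather than panicking.
    for device in host.devices().expect("failed to enumerate devices") {
        println!("{}", device.name().expect("failed to get device name"));
    }

    // Likewise, the event loop is created from the host rather than with
    // `EventLoop::new()`.
    let _event_loop = host.event_loop();
}
```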
**TODO**
- [x] Add the new **Host** API
- [x] Update examples for the new API.
- [x] ALSA host
- [ ] WASAPI host
- [ ] CoreAudio host
- [ ] Emscripten host **Follow-up PR**
- [ ] ASIO host #221
cc @ishitatsuyuki more to review for you if you're interested, but it
might be easier after #288 lands and this gets rebased.
This commit significantly refactors the alsa backend's `EventLoop::run`
implementation in order to allow for better error handling throughout
the loop. This removes many cases that would previously `panic!` in
favour of calling the user callback with the necessary error and
removing the corrupt stream. Since the method cannot return, a catch-all
`panic!` still exists at the end of the method; however, this refactor
should make it much easier to remove that restriction in the future.
This adds the following types:
- `StreamEvent`
- `CloseStreamCause`
- `StreamError`
These allow for notifying the user of the following events:
- A stream has been played.
- A stream has been paused.
- A stream has been closed due to the user destroying the stream.
- A stream has been closed due to an error.
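A hedged sketch of what these types could look like; the variant names below are inferred from the events listed above and are not the exact definitions:

```rust
/// Errors that can force a stream to be closed.
#[derive(Debug)]
pub enum StreamError {
    /// The device owning the stream is no longer available.
    DeviceNotAvailable,
    /// A backend-specific error that could not be classified further.
    Unknown { description: String },
}

/// Why a stream was closed.
#[derive(Debug)]
pub enum CloseStreamCause {
    /// The user destroyed the stream.
    UserDestroyed,
    /// The stream was closed because an error occurred.
    Error(StreamError),
}

/// Events delivered to the user callback.
#[derive(Debug)]
pub enum StreamEvent {
    /// The stream started (or resumed) playing.
    Play,
    /// The stream was paused.
    Pause,
    /// The stream was closed, with the cause attached.
    Close(CloseStreamCause),
}
```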
These new types allow for properly handling potential failure on macOS. We should
also consider propagating the mutex/channel poison errors through these
new types, especially considering the potential removal of the event
loop in favour of switching over to high-priority audio threads on
windows and linux.
The coreaudio and wasapi backends may both potentially fail to produce
the name associated with a device. This changes the API to allow for
returning the errors in these cases.
See the documentation for both new errors for details.
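A sketch of a call site under this change, assuming the trait-based API described earlier in this document; `name()` now returns a `Result` rather than a plain string:

```rust
use cpal::traits::{DeviceTrait, HostTrait};

fn main() {
    let host = cpal::default_host();
    if let Some(device) = host.default_output_device() {
        // A failure to query the name can now be reported to the user
        // instead of panicking inside the backend.
        match device.name() {
            Ok(name) => println!("default output device: {}", name),
            Err(err) => eprintln!("could not get the device name: {}", err),
        }
    }
}
```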
The new `DevicesError` has been added to allow for returning errors when
enumerating devices. This has made it possible to remove multiple
potential `panic!`s in each of the alsa, coreaudio and wasapi backends.
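For example (a sketch, again assuming the host-method shape shown near the top of this document):

```rust
use cpal::traits::{DeviceTrait, HostTrait};

fn main() {
    let host = cpal::default_host();
    // Enumeration failures now surface as a `DevicesError` value instead of
    // a `panic!` inside the backend.
    match host.devices() {
        Ok(devices) => {
            for device in devices {
                println!("{}", device.name().unwrap_or_else(|_| "<unknown>".to_string()));
            }
        }
        Err(err) => eprintln!("failed to enumerate devices: {}", err),
    }
}
```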
Since #269 this `panic!` is certainly unnecessary, as `InputBuffer` and
`OutputBuffer` are now thin wrappers around a slice. That said, I'm
struggling to understand exactly why this `panic!` was necessary in the
first place.
This closes #228.
- ALSA backend: reuse the buffers
- Make `InputBuffer` and `OutputBuffer` types just a wrapper of slice
* Buffer is now submitted at the end of callback
The internal alsa, null and emscripten Device implementations already
implemented Debug; but the coreaudio and wasapi ones, and therefore
also the wrapper, did not.
I decided to eschew the `Device(…)` wrapping in the outer layer
(hence a custom implementation rather than `#[derive(Debug)]`),
because `Device(Device)`, `Device(Device { … })` and so forth all
look better without the extra `Device(…)` wrapping.
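A minimal sketch of that approach; the module and field names here are placeholders rather than the actual cpal internals:

```rust
use std::fmt;

// Stand-in for a backend-specific device type; in cpal this would be one of
// the per-platform device implementations.
mod imp {
    #[derive(Debug)]
    pub struct Device {
        pub name: String,
    }
}

/// The public wrapper around the backend device.
pub struct Device(imp::Device);

impl fmt::Debug for Device {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Forward directly to the inner type so the output reads
        // `Device { … }` rather than `Device(Device { … })`.
        fmt::Debug::fmt(&self.0, f)
    }
}
```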
On the wasapi and coreaudio implementations I put both the pointer and
the name: the name because it’s useful, the pointer because on Windows at
least I believe duplicated names are possible (e.g. two monitors of the
same type with built-in audio; I haven’t strictly confirmed this, because
I killed those devices off harshly on my machine and don’t want to
reinstate them).
I do not have access to a macOS device to confirm that the coreaudio
implementation is sane, but I think it is.
Instead of taking the easy way out and killing the whole program by panicking, device enumeration and stream creation will now report the `Unknown` error variant.
* Fix the PauseStream handler to flip the `playing` bit.
* Changelog entry on the wasapi stream resuming fix.
* Moved the changelog entry to the Unreleased.
* [coreaudio] Fix handling of non-default sample rates for input streams
Currently, when building an input stream the coreaudio backend only
specifies the sample rate for the audio unit; however, coreaudio requires
that the audio unit's sample rate match the device's sample rate.
This changes the `build_input_stream` behaviour to:
1. Check if the device sample rate differs from the desired one.
2. If so, check that there are no existing audio units using the device
at the current sample rate. If there are, panic with a message
explaining why.
3. Otherwise, change the device sample rate.
4. Continue building the input stream audio unit as normal.
Closes #213.
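A hedged control-flow sketch of those steps; the helper functions (`device_sample_rate`, `device_in_use_elsewhere`, `set_device_sample_rate`) are hypothetical stand-ins for the underlying coreaudio property calls, not real cpal or coreaudio-rs APIs:

```rust
fn prepare_input_device(desired_rate: u32) -> Result<(), String> {
    let current_rate = device_sample_rate();
    // 1. Check whether the device sample rate differs from the desired one.
    if current_rate != desired_rate {
        // 2. Refuse to change the rate out from under existing audio units.
        if device_in_use_elsewhere() {
            panic!(
                "device is in use at {} Hz; changing it to {} Hz would break \
                 the existing audio units",
                current_rate, desired_rate
            );
        }
        // 3. Otherwise, change the device sample rate to the desired one.
        set_device_sample_rate(desired_rate)?;
    }
    // 4. Continue building the input stream audio unit as normal.
    Ok(())
}

// Placeholder helpers standing in for the real property queries.
fn device_sample_rate() -> u32 { 44_100 }
fn device_in_use_elsewhere() -> bool { false }
fn set_device_sample_rate(_rate: u32) -> Result<(), String> { Ok(()) }
```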
* Update CHANGELOG for coreaudio input stream sample rate fix
* Publish 0.8.1 for coreaudio input stream sample rate fix
* Update to a more general Device and Stream API
This update prepares for adding input stream support by removing the
`Endpoint` type (which only supports output streams) in favour of a more
general `Device` type which may support any number of input or output
streams. Previously discussed at #117.
The name `Voice` has been replaced with the more ubiquitous name
`Stream`. See #118 for justification.
Also introduces a new `StreamData` which is now passed to the
`EventLoop::run` callback rather than the `UnknownTypeBuffer`.
`StreamData` allows for passing either `Input` data to be read, or
`Output` data to be written.
The `beep.rs` example has been updated for the API changes.
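A condensed, hedged sketch of the resulting callback shape, in the spirit of the updated `beep.rs` (names follow the later buffer renames further down; details are approximate):

```rust
fn main() {
    let device = cpal::default_output_device().expect("no output device available");
    let format = device.default_output_format().expect("no default output format");

    let event_loop = cpal::EventLoop::new();
    let stream_id = event_loop.build_output_stream(&device, &format).unwrap();
    event_loop.play_stream(stream_id);

    // The callback now receives `StreamData` and matches on whether the
    // stream provides input data to read or output data to write.
    event_loop.run(move |_stream_id, data| match data {
        cpal::StreamData::Output {
            buffer: cpal::UnknownTypeOutputBuffer::F32(mut buffer),
        } => {
            for sample in buffer.iter_mut() {
                *sample = 0.0; // write silence
            }
        }
        // Other sample formats and (later) the `Input` variant are handled
        // similarly.
        _ => (),
    });
}
```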
None of the backends have been updated for this API change yet. Backends
will be updated in the following commits.
Closes #117.
Closes #118.
* Update ALSA backend for new `Device` and `Stream` API.
* Update wasapi backend for new `Device` and `Stream` API.
* Update enumerate.rs example for new `Device` and `Stream` API.
* Update coreaudio backend for new `Device` and `Stream` API.
* Fix lib doc tests for Device and Stream API update
* Update emscripten backend for new `Device` and `Stream` API.
* Update null backend for new `Device` and `Stream` API.
* Merge match exprs in beep.rs example
* Add Input variant along with UnknownTypeInputBuffer and InputBuffer
UnknownTypeBuffer and Buffer have been renamed to
UnknownTypeOutputBuffer and OutputBuffer respectively.
No backends have yet been updated for this name change or the addition
of the InputBuffer.
* Update null backend for introduction of InputBuffer
* Update emscripten backend for introduction of InputBuffer
* Make InputBuffer's inner field an `Option` so `finish` can be called in `drop`
* Update alsa backend for introduction of InputBuffer
* Update wasapi backend for introduction of InputBuffer
* Update coreaudio backend for introduction of InputBuffer
* Update enumerate.rs example to provide more detail about devices
The enumerate.rs example now also displays:
- Supported input stream formats.
- Default input stream format.
- Default output stream format.
This should also be useful for testing the progress of #201.
* Add `record_wav.rs` example for demonstrating input streams
Records a ~3 second WAV file to `$CARGO_MANIFEST_DIR/recorded.wav` using
the default input device and default input format.
Uses hound 3.0 to create and write to the WAV file.
This should also be useful for testing the input stream implementations
for each different cpal backend.
* Implement input stream support for coreaudio backend
This implements the following for the coreaudio backend:
- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream
The `enumerate.rs` and `record_wav.rs` examples now work successfully on
macOS.
* Add `SupportedFormat::cmp_default_heuristics` method
This adds a comparison function which compares two `SupportedFormat`s in
terms of their priority of use as a default stream format.
Some backends (such as ALSA) do not provide a default stream format for
their audio devices. In these cases, CPAL attempts to decide on a
reasonable default format for the user. To do this we use the "greatest"
of all supported stream formats when compared with this method.
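A hedged sketch of how a backend (or user) might pick a default format with this method; whether these are inherent or trait methods depends on the surrounding API version, and this sketch assumes the trait form:

```rust
use cpal::traits::DeviceTrait;

// Sort the supported formats by the new heuristic and take the "greatest"
// one as the default.
fn pick_default_output_format(device: &cpal::Device) -> Option<cpal::SupportedFormat> {
    let mut formats: Vec<_> = device.supported_output_formats().ok()?.collect();
    formats.sort_by(|a, b| a.cmp_default_heuristics(b));
    formats.pop()
}
```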
* Implement input stream support for ALSA backend
This implements the following for the ALSA backend:
- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream
Note that ALSA itself does not give default stream formats for its
devices. Thus the newly added `SupportedFormat::cmp_default_heuristics`
method is used to determine the most suitable, supported stream format
to use as the default.
The `enumerate.rs` and `record_wav.rs` examples now work successfully on
my linux machine.
* Implement input stream support for wasapi backend
This implements the following for the wasapi backend:
- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream
Note that wasapi does not enumerate supported input/output stream
formats for its devices. Instead, we query the `IsFormatSupported`
method for supported formats ourselves.
* Fix some warnings in the alsa backend
* Update CHANGELOG for introduction of input streams and related items
* Update README to show latest features supported by CPAL
* Simplify beep example using Device::default_output_format
* Remove old commented code from wasapi/stream.rs
Adds only the necessary cargo features to reduce compile time and reduce
the chance of linking errors occurring for unused libraries (e.g.
d3d12.dll fails to link on my win10 VM).
I thought I'd try and land this before working on the wasapi
backend implementation for #201.
Tested both beep.rs and enumerate.rs and they work fine with the update.
* Rename SamplesRate to SampleRate and samples_rate to sample_rate
* Rename ChannelsCount to ChannelCount
* Update CHANGELOG for SamplesRate and ChannelsCount renaming
* Remove ChannelPosition API
This removes the ChannelPosition API from the lib root and updates the
ALSA backend and examples accordingly. The other backends have not yet
been updated.
Related discussion at #187.
* Update windows backend to removal of ChannelPosition API
The windows backend now assumes the channel position order is equal to
the channel position mask order. E.g. channel 0 will always be front
left, channel 1 will always be front right, etc.
Compiled and ran both examples successfully.
* Update coreaudio backend to removal of ChannelPosition API
Compiled and ran both examples successfully.
* Update emscripten backend for removal of ChannelPosition API
* Update CHANGELOG for ChannelPosition removal
Based on #195.
Also implements proper handling of the given `Endpoint` in the
macOS implementation of the `build_voice` method.
Updates to the latest coreaudio-sys and coreaudio-rs which include the
additional necessary frameworks.
Also adds a line that prints the name of the default device in the
`enumerate.rs` example.
Updates the CHANGELOG for this PR.
Closes #194.
Related to #180.
Related external issues:
- RustAudio/coreaudio-sys#4
- RustAudio/coreaudio-rs#57
* Implement `pause` and `play` for ALSA backend
This commit also ensures that the Voice is initially paused when
returned, to remain consistent with the rest of the CPAL backends.
Related to #175.
* Remove ineffective pause from end of build_voice method
* ALSA - Change `is_paused` flag from `AtomicBool` to `bool`
* Add pause and play ALSA addition to CHANGELOG
* Use the js! macro from stdweb
* Rework the Buffer::finish method
* Use references from stdweb
* Fix emscripten warnings
* Rework the run() method to use stdweb
* Adjust timings
* Add entry in CHANGELOG
* Rework the API to not use futures anymore
* Add some comments
* Update the MacOS backend
* Restore the null implementation
* Add an emscripten backend
* Remove erroneously added feature
* Fix to_f32 formula
* [WIP] Alsa backend
* Alsa backend compiling
* Working ALSA backend
* Fix tests
* Move WASAPI endpoint to endpoint module
* Fix WASAPI warnings
* Rework the WASAPI backend
* Check overflows for voice ID
* Add comments and minor fixes to WASAPI backend
* Add a changelog
iOS provides three I/O (input/output) units. The vast majority of audio-unit applications use the Remote I/O unit, which connects to input and output audio hardware and provides low-latency access to individual incoming and outgoing audio sample values. For VoIP apps, the Voice-Processing I/O unit extends the Remote I/O unit by adding acoustic echo cancelation and other features. To send audio back to your application rather than to output audio hardware, use the Generic Output unit.
See https://developer.apple.com/library/content/documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/UsingSpecificAudioUnits/UsingSpecificAudioUnits.html
snd_pcm_sw_params_set_avail_min was being hardcoded to 4096, which
seems to be problematic for lower sample rates. This update sets
the value to the buffer size as supplied by snd_pcm_get_params(),
which is what ALSA's own sample code does.
This should fix https://github.com/tomaka/cpal/issues/142
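A hedged sketch of the change in terms of the raw alsa-sys calls (return codes and the surrounding hw/sw-params setup are omitted; the actual backend code differs):

```rust
use alsa_sys as alsa;

// `pcm` and `sw_params` are assumed to be already initialised, as they are
// in the backend's stream-building code.
unsafe fn set_avail_min(
    pcm: *mut alsa::snd_pcm_t,
    sw_params: *mut alsa::snd_pcm_sw_params_t,
) {
    let mut buffer_size: alsa::snd_pcm_uframes_t = 0;
    let mut period_size: alsa::snd_pcm_uframes_t = 0;
    // Ask ALSA what buffer/period sizes were actually configured...
    alsa::snd_pcm_get_params(pcm, &mut buffer_size, &mut period_size);
    // ...and use the buffer size instead of a hardcoded 4096 frames.
    alsa::snd_pcm_sw_params_set_avail_min(pcm, sw_params, buffer_size);
}
```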
snd_pcm_pause could have been used, but not all hardware implements it, so
I propose not to use it.
In this implementation:
- There are two kinds of scheduling: waiting for the resume signal, and
  waiting for the PCM to become available.
- If the stream is paused, it reports not ready and waits for the resume
  signal.
- The event loop is different in that it manages the descriptors
  corresponding to voices according to which kind of scheduling each voice
  is in.
There is still a FIXME: in voice.play the signal is sent even if the event
loop wasn't waiting for resume. It doesn't seem to create any issue, but it
happens when you write voice.pause(); voice.play();
The old method always returned _RUNNING on some machines.
This new method seems to produce the expected behaviour.
Note: -32 is probably -EPIPE, but the appropriate constant was not
available at this time.