This allows for properly handling potential failure on macOS. We should
also consider propagating the mutex/channel poison errors through these
new types, especially given the potential removal of the event loop in
favour of switching over to high-priority audio threads on Windows and
Linux.
The coreaudio and wasapi backends may both fail to produce the name
associated with a device. This changes the API to allow for returning
errors in these cases.
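A minimal sketch of what such a fallible name API could look like. The error type, its variant, and the `raw_name` field here are illustrative stand-ins, not the exact cpal signatures:

```rust
use std::fmt;

// Hypothetical error type for a failed device-name lookup.
#[derive(Debug)]
pub enum DeviceNameError {
    /// The backend returned an error while querying the name.
    BackendSpecific { description: String },
}

impl fmt::Display for DeviceNameError {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match self {
            DeviceNameError::BackendSpecific { description } => {
                write!(f, "could not fetch device name: {}", description)
            }
        }
    }
}

pub struct Device {
    raw_name: Option<String>, // stand-in for the backend device handle
}

impl Device {
    /// Returns the device name, or an error if the backend fails.
    pub fn name(&self) -> Result<String, DeviceNameError> {
        self.raw_name.clone().ok_or(DeviceNameError::BackendSpecific {
            description: "backend returned no name".to_string(),
        })
    }
}
```

Callers can then surface or log the failure instead of cpal silently substituting a placeholder name.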
- ALSA backend: reuse the buffers
- Make `InputBuffer` and `OutputBuffer` types just a wrapper of slice
* Buffer is now submitted at the end of callback
The internal alsa, null and emscripten Device implementations already
implemented Debug; but the coreaudio and wasapi ones, and therefore
also the wrapper, did not.
I decided to eschew the `Device(…)` wrapping in the outer layer
(hence a custom implementation rather than `#[derive(Debug)]`),
because output such as `Device(Device)` or `Device(Device { … })`
reads better without the extra `Device(…)` layer.
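Delegating `Debug` to the inner type looks roughly like this (the `AlsaDevice` name here is an illustrative stand-in for a backend's device type):

```rust
use std::fmt;

// Inner backend device type (stands in for e.g. the ALSA implementation).
#[derive(Debug)]
struct AlsaDevice {
    name: String,
}

// Public wrapper around the backend device.
struct Device(AlsaDevice);

// Delegate Debug to the inner type instead of deriving, so the output
// reads `AlsaDevice { name: "default" }` rather than
// `Device(AlsaDevice { name: "default" })`.
impl fmt::Debug for Device {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        fmt::Debug::fmt(&self.0, f)
    }
}
```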
On the wasapi and coreaudio implementations I include both the pointer
and the name: the name because it's useful, and the pointer because, on
Windows at least, I believe duplicated names are possible (e.g. two
monitors of the same type that include speakers; I haven't strictly
confirmed this, because I killed those off harshly on my machine and
don't want to reinstate them).
I do not have access to a macOS device to confirm that the coreaudio
implementation is sane, but I think it is.
* [coreaudio] Fix handling of non-default sample rates for input streams
Currently, when building an input stream, the coreaudio backend only
specifies the sample rate for the audio unit; however, coreaudio
requires that the audio unit sample rate match the device sample rate.
This changes the `build_input_stream` behaviour to:
1. Check if the device sample rate differs from the desired one.
2. If so, check that there are no existing audio units using the device
at the current sample rate. If there are, panic with a message
explaining why.
3. Otherwise, change the device sample rate.
4. Continue building the input stream audio unit as normal.
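The decision logic of those steps can be sketched as follows. The `AudioDevice` struct and helper function are simplified stand-ins, not the real coreaudio types:

```rust
/// Simplified stand-in for a CoreAudio device handle.
struct AudioDevice {
    sample_rate: u32,
    /// Sample rates of audio units currently attached to this device.
    active_unit_rates: Vec<u32>,
}

/// Mirrors the build_input_stream sample-rate handling described above.
fn prepare_device_sample_rate(device: &mut AudioDevice, desired_rate: u32) {
    // 1. Check whether the device rate differs from the desired one.
    if device.sample_rate != desired_rate {
        // 2. Refuse to change the rate out from under existing audio
        //    units that are using the device at the current rate.
        if !device.active_unit_rates.is_empty() {
            panic!(
                "cannot change device sample rate from {} to {}: \
                 other audio units are using the device at the current rate",
                device.sample_rate, desired_rate
            );
        }
        // 3. Otherwise, change the device sample rate.
        device.sample_rate = desired_rate;
    }
    // 4. The caller continues building the input stream audio unit as normal.
}
```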
Closes #213.
* Update CHANGELOG for coreaudio input stream sample rate fix
* Publish 0.8.1 for coreaudio input stream sample rate fix
* Update to a more general Device and Stream API
This update prepares for adding input stream support by removing the
`Endpoint` type (which only supports output streams) in favour of a more
general `Device` type which may support any number of input or output
streams. Previously discussed at #117.
The name `Voice` has been replaced with the more widely used name
`Stream`. See #118 for justification.
Also introduces a new `StreamData` which is now passed to the
`EventLoop::run` callback rather than the `UnknownTypeBuffer`.
`StreamData` allows for passing either `Input` data to be read, or
`Output` data to be written.
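The shape of the new callback data can be sketched like this; the plain `f32` slices are a simplification standing in for cpal's typed buffer wrappers:

```rust
// Simplified sketch of the data passed to the EventLoop::run callback:
// either input samples to read or an output buffer to fill.
enum StreamData<'a> {
    Input { buffer: &'a [f32] },
    Output { buffer: &'a mut [f32] },
}

// Example callback body: read captured input, or write silence to output.
// Returns the number of samples handled.
fn handle(data: StreamData) -> usize {
    match data {
        // Input data is read-only: just inspect the captured samples.
        StreamData::Input { buffer } => buffer.len(),
        // Output data must be filled by the callback.
        StreamData::Output { buffer } => {
            for sample in buffer.iter_mut() {
                *sample = 0.0;
            }
            buffer.len()
        }
    }
}
```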
The `beep.rs` example has been updated for the API changes.
None of the backends have been updated for this API change yet. Backends
will be updated in the following commits.
Closes #117.
Closes #118.
* Update ALSA backend for new `Device` and `Stream` API.
* Update wasapi backend for new `Device` and `Stream` API.
* Update enumerate.rs example for new `Device` and `Stream` API.
* Update coreaudio backend for new `Device` and `Stream` API.
* Fix lib doc tests for Device and Stream API update
* Update emscripten backend for new `Device` and `Stream` API.
* Update null backend for new `Device` and `Stream` API.
* Merge match exprs in beep.rs example
* Add Input variant along with UnknownTypeInputBuffer and InputBuffer
UnknownTypeBuffer and Buffer have been renamed to
UnknownTypeOutputBuffer and OutputBuffer respectively.
No backends have yet been updated for this name change or the addition
of the InputBuffer.
* Update null backend for introduction of InputBuffer
* Update emscripten backend for introduction of InputBuffer
* Make InputBuffer inner field an option to call finish in drop
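That change follows a common Rust pattern: store the inner value in an `Option` so that `Drop` can move it out and call a consuming method. A minimal sketch of the pattern (the `RawBuffer` type and its `finish` method are illustrative, with a flag added so the behaviour is observable):

```rust
use std::cell::Cell;
use std::rc::Rc;

struct RawBuffer {
    finished: Rc<Cell<bool>>,
}

impl RawBuffer {
    /// Consuming finalizer that must run exactly once.
    fn finish(self) {
        self.finished.set(true);
    }
}

struct InputBuffer {
    // Option lets Drop take ownership of the inner value, since
    // drop(&mut self) cannot move out of a plain struct field.
    inner: Option<RawBuffer>,
}

impl Drop for InputBuffer {
    fn drop(&mut self) {
        if let Some(raw) = self.inner.take() {
            raw.finish();
        }
    }
}
```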
* Update alsa backend for introduction of InputBuffer
* Update wasapi backend for introduction of InputBuffer
* Update coreaudio backend for introduction of InputBuffer
* Update enumerate.rs example to provide more detail about devices
The enumerate.rs example now also displays:
- Supported input stream formats.
- Default input stream format.
- Default output stream format.
This should also be useful for testing the progress of #201.
* Add `record_wav.rs` example for demonstrating input streams
Records a ~3 second WAV file to `$CARGO_MANIFEST_DIR/recorded.wav` using
the default input device and default input format.
Uses hound 3.0 to create and write to the WAV file.
This should also be useful for testing the input stream implementations
for each different cpal backend.
* Implement input stream support for coreaudio backend
This implements the following for the coreaudio backend:
- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream
The `enumerate.rs` and `record_wav.rs` examples now work successfully on
macOS.
* Add `SupportedFormat::cmp_default_heuristics` method
This adds a comparison function which compares two `SupportedFormat`s in
terms of their priority of use as a default stream format.
Some backends (such as ALSA) do not provide a default stream format for
their audio devices. In these cases, CPAL attempts to decide on a
reasonable default format for the user. To do this we use the "greatest"
of all supported stream formats when compared with this method.
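A sketch of how such a heuristic comparison and the "greatest format wins" selection fit together. The struct fields and the particular ordering rules below (prefer stereo, then higher sample rates) are illustrative assumptions, not cpal's exact heuristics:

```rust
use std::cmp::Ordering;

#[derive(Debug, Clone, PartialEq)]
struct SupportedFormat {
    channels: u16,
    max_sample_rate: u32,
}

impl SupportedFormat {
    /// Orders two formats by their desirability as a default stream
    /// format. Prefers stereo, then higher sample rates.
    fn cmp_default_heuristics(&self, other: &Self) -> Ordering {
        let self_stereo = self.channels == 2;
        let other_stereo = other.channels == 2;
        self_stereo
            .cmp(&other_stereo)
            .then(self.max_sample_rate.cmp(&other.max_sample_rate))
    }
}

/// Picks the "greatest" supported format as the default, as a backend
/// without a native default (e.g. ALSA) must do.
fn default_format(formats: &[SupportedFormat]) -> Option<SupportedFormat> {
    formats
        .iter()
        .max_by(|a, b| a.cmp_default_heuristics(b))
        .cloned()
}
```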
* Implement input stream support for ALSA backend
This implements the following for the ALSA backend:
- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream
Note that ALSA itself does not give default stream formats for its
devices. Thus the newly added `SupportedFormat::cmp_default_heuristics`
method is used to determine the most suitable, supported stream format
to use as the default.
The `enumerate.rs` and `record_wav.rs` examples now work successfully on
my Linux machine.
* Implement input stream support for wasapi backend
This implements the following for the wasapi backend:
- Device::supported_input_formats
- Device::default_input_format
- Device::default_output_format
- EventLoop::build_input_stream
Note that wasapi does not enumerate the supported input/output stream
formats for its devices. Instead, we probe for supported formats
ourselves via the `IsFormatSupported` method.
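Such probing amounts to looping over candidate formats and asking the driver about each one. In this sketch the `Format` struct and the `is_supported` closure are stand-ins for the real WAVEFORMAT structures and the COM `IAudioClient::IsFormatSupported` call:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Format {
    channels: u16,
    sample_rate: u32,
}

/// Enumerates supported formats by querying the driver about each
/// candidate, since the API does not list them for us. The
/// `is_supported` closure stands in for the driver query.
fn supported_formats<F>(candidates: &[Format], is_supported: F) -> Vec<Format>
where
    F: Fn(&Format) -> bool,
{
    candidates
        .iter()
        .filter(|f| is_supported(f))
        .cloned()
        .collect()
}
```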
* Fix some warnings in the alsa backend
* Update CHANGELOG for introduction of input streams and related items
* Update README to show latest features supported by CPAL
* Simplify beep example using Device::default_output_format
* Remove old commented code from wasapi/stream.rs
* Rename SamplesRate to SampleRate and samples_rate to sample_rate
* Rename ChannelsCount to ChannelCount
* Update CHANGELOG for SamplesRate and ChannelsCount renaming
* Remove ChannelPosition API
This removes the ChannelPosition API from the lib root and updates the
ALSA backend and examples accordingly. The other backends have not yet
been updated.
Related discussion at #187.
* Update windows backend to removal of ChannelPosition API
The windows backend now assumes the channel position order is equal to
the channel position mask order. E.g. channel 0 will always be front
left, channel 1 will always be front right, etc.
Compiled and ran both examples successfully.
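The assumption can be illustrated by walking the channel-mask bits from lowest to highest. The mask constants below mirror the first three Windows speaker-position flags; the function itself is only an illustration of the ordering rule:

```rust
// A few of the Windows speaker-position mask bits, lowest first
// (SPEAKER_FRONT_LEFT, SPEAKER_FRONT_RIGHT, SPEAKER_FRONT_CENTER).
const FRONT_LEFT: u32 = 0x1;
const FRONT_RIGHT: u32 = 0x2;
const FRONT_CENTER: u32 = 0x4;

/// Maps channel indices to positions by walking the mask bits in order:
/// channel 0 is the lowest set bit, channel 1 the next, and so on.
fn positions_from_mask(mask: u32) -> Vec<&'static str> {
    let bits = [
        (FRONT_LEFT, "front left"),
        (FRONT_RIGHT, "front right"),
        (FRONT_CENTER, "front center"),
    ];
    bits.iter()
        .filter(|(bit, _)| mask & bit != 0)
        .map(|(_, name)| *name)
        .collect()
}
```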
* Update coreaudio backend to removal of ChannelPosition API
Compiled and ran both examples successfully.
* Update emscripten backend for removal of ChannelPosition API
* Update CHANGELOG for ChannelPosition removal
Based on #195.
Also implements proper handling of the given `Endpoint` in the
macOS implementation of the `build_voice` method.
Updates to the latest coreaudio-sys and coreaudio-rs which include the
additional necessary frameworks.
Also adds a line that prints the name of the default device in the
`enumerate.rs` example.
Updates the CHANGELOG for this PR.
Closes #194.
Related to #180.
Related external issues:
- RustAudio/coreaudio-sys#4
- RustAudio/coreaudio-rs#57
* Rework the API to not use futures anymore
* Add some comments
* Update the macOS backend
* Restore the null implementation
* Add an emscripten backend
* Remove erroneously added feature
* Fix to_f32 formula
* [WIP] Alsa backend
* Alsa backend compiling
* Working ALSA backend
* Fix tests
* Move WASAPI endpoint to endpoint module
* Fix WASAPI warnings
* Rework the WASAPI backend
* Check overflows for voice ID
* Add comments and minor fixes to WASAPI backend
* Add a changelog
iOS provides three I/O (input/output) units. The vast majority of audio-unit applications use the Remote I/O unit, which connects to input and output audio hardware and provides low-latency access to individual incoming and outgoing audio sample values. For VoIP apps, the Voice-Processing I/O unit extends the Remote I/O unit by adding acoustic echo cancelation and other features. To send audio back to your application rather than to output audio hardware, use the Generic Output unit.
See https://developer.apple.com/library/content/documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/UsingSpecificAudioUnits/UsingSpecificAudioUnits.html