* Remove ChannelPosition API
This removes the ChannelPosition API from the lib root and updates the
ALSA backend and examples accordingly. The other backends have not yet
been updated.
Related discussion at #187.
* Update windows backend for removal of ChannelPosition API
The windows backend now assumes that the channel position order is the same
as the channel position mask order; e.g. channel 0 will always be front
left, channel 1 will always be front right, and so on (see the sketch after
this item).
Compiled and ran both examples successfully.
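As a rough illustration of what that assumption means for interleaved
buffers (the helper below is hypothetical, not part of the backend):

```rust
// Hypothetical helper: with ChannelPosition gone, samples are addressed
// purely by index within each interleaved frame, and the index is taken
// to follow the channel mask order (0 = front left, 1 = front right, ...).
fn write_stereo_frames(buffer: &mut [f32], channels: usize, left: f32, right: f32) {
    for frame in buffer.chunks_mut(channels) {
        frame[0] = left; // channel 0: front left by mask order
        frame[1] = right; // channel 1: front right by mask order
    }
}

fn main() {
    let mut buffer = [0.0f32; 8];
    write_stereo_frames(&mut buffer, 2, -0.5, 0.5);
    assert_eq!(buffer[0], -0.5);
    assert_eq!(buffer[1], 0.5);
}
```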
* Update coreaudio backend for removal of ChannelPosition API
Compiled and ran both examples successfully.
* Update emscripten backend for removal of ChannelPosition API
* Update CHANGELOG for ChannelPosition removal
Based on #195.
Also implements proper handling of the given `Endpoint` in the macOS
implementation of the `build_voice` method (see the stand-in sketch below).
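The contract being fixed, shown with minimal stand-in types rather than
cpal's real ones: `build_voice` must open a voice on the endpoint it is
handed instead of silently falling back to the default device.

```rust
// Stand-in types (not cpal's actual API) illustrating the contract.
struct Endpoint {
    name: String,
}

struct Voice {
    endpoint_name: String,
}

fn build_voice(endpoint: &Endpoint) -> Voice {
    // The macOS backend previously ignored `endpoint`; the voice is now
    // created on the requested device.
    Voice { endpoint_name: endpoint.name.clone() }
}

fn main() {
    let endpoint = Endpoint { name: "Built-in Output".to_string() };
    let voice = build_voice(&endpoint);
    assert_eq!(voice.endpoint_name, "Built-in Output");
}
```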
Updates to the latest coreaudio-sys and coreaudio-rs which include the
additional necessary frameworks.
Also adds a line that prints the name of the default device in the
`enumeration.rs` example.
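Roughly the kind of line that was added (assuming the era's
`default_endpoint` and `Endpoint::name` names; check the actual example for
the exact calls):

```rust
extern crate cpal;

fn main() {
    // Assumed API names, for illustration only:
    let endpoint = cpal::default_endpoint().expect("no default endpoint");
    println!("Default device: {}", endpoint.name());
}
```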
Updates the CHANGELOG for this PR.
Closes #194.
Related to #180.
Related external issues:
- RustAudio/coreaudio-sys#4
- RustAudio/coreaudio-rs#57
There does not seem to be any major API breakage; however, the emscripten
and macOS backends have been fairly heavily refactored, so I thought it
best to bump to 0.6 (rather than 0.5.2) just in case there is any subtle
behavioural breakage. Happy to change this to 0.5.2 though if someone can
confirm there will be no downstream breakage.
* Implement `pause` and `play` for ALSA backend
This commit also ensures that the `Voice` is initially paused when
returned, for consistency with the rest of the CPAL backends (see the
sketch after this item).
Related to #175.
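A minimal stand-in (not cpal's real types) for the paused-by-default
behaviour this commit establishes:

```rust
// Stand-in voice type: freshly built voices start paused; nothing plays
// until `play` is called, and `pause` suspends playback again.
struct Voice {
    is_paused: bool,
}

impl Voice {
    fn new() -> Voice {
        // Consistent with the other CPAL backends: start paused.
        Voice { is_paused: true }
    }

    fn play(&mut self) {
        self.is_paused = false;
    }

    fn pause(&mut self) {
        self.is_paused = true;
    }
}

fn main() {
    let mut voice = Voice::new();
    assert!(voice.is_paused); // silent until `play` is called
    voice.play();
    assert!(!voice.is_paused);
    voice.pause();
    assert!(voice.is_paused);
}
```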
* Remove ineffective pause from end of build_voice method
* ALSA - Change `is_paused` flag from `AtomicBool` to `bool`
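A plausible reading of this change (an assumption, not stated in the
commit): the flag is only touched while the voice's state is already
locked, so a plain `bool` behind the existing lock suffices and the atomic
buys nothing.

```rust
use std::sync::Mutex;

// Hypothetical shape of the voice state after the change: `is_paused` is
// a plain bool because it is only accessed under the surrounding Mutex.
struct VoiceState {
    is_paused: bool,
}

fn main() {
    let state = Mutex::new(VoiceState { is_paused: true });
    state.lock().unwrap().is_paused = false;
    assert!(!state.lock().unwrap().is_paused);
}
```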
* Add pause and play ALSA addition to CHANGELOG
* Use the js! macro from stdweb
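For reference, stdweb's `js!` macro embeds JavaScript directly in Rust and
can splice in Rust values with `@{...}`; a minimal, self-contained
illustration (the logged values are arbitrary):

```rust
#[macro_use]
extern crate stdweb;

fn main() {
    stdweb::initialize();
    let latency_ms: u32 = 20;
    // Inline JavaScript; `@{latency_ms}` serializes the Rust value into JS.
    js! {
        console.log("target latency:", @{latency_ms}, "ms");
    }
    stdweb::event_loop();
}
```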
* Rework the Buffer::finish method
* Use references from stdweb
* Fix emscripten warnings
* Rework the run() method to use stdweb
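The general shape of a stdweb-driven loop, sketched under assumptions (the
period and the callback body are placeholders, not the backend's actual
values): a `set_timeout` callback does one unit of work and re-arms itself.

```rust
extern crate stdweb;

use stdweb::web::set_timeout;

const PERIOD_MS: u32 = 100; // illustrative period only

fn tick() {
    // ... fill and submit the next audio buffer here ...
    set_timeout(tick, PERIOD_MS); // re-schedule the next tick
}

fn main() {
    stdweb::initialize();
    set_timeout(tick, 0);
    stdweb::event_loop();
}
```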
* Adjust timings
* Add entry in CHANGELOG
* Rework the API to not use futures anymore
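The post-futures shape, with stand-in types rather than cpal's real ones:
instead of polling a futures `Stream` for buffers, a blocking event loop
hands each buffer that needs filling to a user callback.

```rust
// Stand-in event loop (not the real cpal types): `run` blocks and invokes
// the callback whenever a buffer needs filling.
struct EventLoop;

impl EventLoop {
    fn run<F>(&self, mut callback: F)
    where
        F: FnMut(usize, &mut [f32]),
    {
        let mut buffer = [0.0f32; 4];
        // A real backend would loop for the lifetime of the program; one
        // iteration is enough to show the shape.
        callback(0, &mut buffer);
    }
}

fn main() {
    let event_loop = EventLoop;
    event_loop.run(|_voice_id, buffer| {
        for sample in buffer.iter_mut() {
            *sample = 0.0; // write silence
        }
    });
}
```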
* Add some comments
* Update the macOS backend
* Restore the null implementation
* Add an emscripten backend
* Remove erroneously added feature
* Fix `to_f32` formula
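One plausible corrected mapping (a sketch of the idea, not necessarily the
exact code in the `Sample` impl): `i16` is asymmetric (-32768..=32767), so
the two halves need different scale factors to stay inside [-1.0, 1.0].

```rust
// Scale negative and positive halves separately so both endpoints map
// exactly to -1.0 and 1.0.
fn i16_to_f32(sample: i16) -> f32 {
    if sample < 0 {
        sample as f32 / 32768.0
    } else {
        sample as f32 / 32767.0
    }
}

// u16 samples are centred on 32768 rather than 0.
fn u16_to_f32(sample: u16) -> f32 {
    i16_to_f32(sample.wrapping_sub(32768) as i16)
}

fn main() {
    assert_eq!(i16_to_f32(i16::min_value()), -1.0);
    assert_eq!(i16_to_f32(i16::max_value()), 1.0);
    assert_eq!(u16_to_f32(32768), 0.0);
    assert_eq!(u16_to_f32(0), -1.0);
    assert_eq!(u16_to_f32(65535), 1.0);
}
```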
* [WIP] ALSA backend
* ALSA backend compiling
* Working ALSA backend
* Fix tests
* Move WASAPI endpoint to endpoint module
* Fix WASAPI warnings
* Rework the WASAPI backend
* Check overflows for voice ID
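The idea behind the check, sketched with an assumed counter-based ID
scheme: voice IDs come from a monotonically increasing counter, so
wrap-around must fail loudly rather than silently reuse an ID.

```rust
// Hypothetical ID allocator: `checked_add` turns wrap-around into a panic
// instead of handing out a duplicate ID.
fn next_voice_id(counter: &mut usize) -> usize {
    let id = *counter;
    *counter = counter
        .checked_add(1)
        .expect("voice ID counter overflowed");
    id
}

fn main() {
    let mut counter = 0;
    assert_eq!(next_voice_id(&mut counter), 0);
    assert_eq!(next_voice_id(&mut counter), 1);
}
```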
* Add comments and minor fixes to WASAPI backend
* Add a changelog
iOS provides three I/O (input/output) units. The vast majority of audio-unit applications use the Remote I/O unit, which connects to input and output audio hardware and provides low-latency access to individual incoming and outgoing audio sample values. For VoIP apps, the Voice-Processing I/O unit extends the Remote I/O unit by adding acoustic echo cancelation and other features. To send audio back to your application rather than to output audio hardware, use the Generic Output unit.
See https://developer.apple.com/library/content/documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/UsingSpecificAudioUnits/UsingSpecificAudioUnits.html
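coreaudio-rs exposes these three units through its `IOType` enum; a sketch
of selecting each one (building the iOS-only units requires an iOS target,
so treat this as illustrative rather than a cross-platform snippet):

```rust
extern crate coreaudio;

use coreaudio::audio_unit::{AudioUnit, IOType};

fn main() -> Result<(), coreaudio::Error> {
    // Low-latency access to input/output hardware:
    let _remote_io = AudioUnit::new(IOType::RemoteIO)?;
    // Remote I/O plus acoustic echo cancelation and other VoIP features:
    let _voice_io = AudioUnit::new(IOType::VoiceProcessingIO)?;
    // Renders audio back to the application instead of to hardware:
    let _generic = AudioUnit::new(IOType::GenericOutput)?;
    Ok(())
}
```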