* Rework the API to not use futures anymore
* Add some comments
* Update the macOS backend
* Restore the null implementation
* Add an emscripten backend
* Remove erroneously added feature
* Fix to_f32 formula
* [WIP] ALSA backend
* ALSA backend compiling
* Working ALSA backend
* Fix tests
* Move WASAPI endpoint to endpoint module
* Fix WASAPI warnings
* Rework the WASAPI backend
* Check overflows for voice ID
* Add comments and minor fixes to WASAPI backend
* Add a changelog
iOS provides three I/O (input/output) units. The vast majority of audio-unit applications use the Remote I/O unit, which connects to input and output audio hardware and provides low-latency access to individual incoming and outgoing audio sample values. For VoIP apps, the Voice-Processing I/O unit extends the Remote I/O unit by adding acoustic echo cancelation and other features. To send audio back to your application rather than to output audio hardware, use the Generic Output unit.
See https://developer.apple.com/library/content/documentation/MusicAudio/Conceptual/AudioUnitHostingGuide_iOS/UsingSpecificAudioUnits/UsingSpecificAudioUnits.html
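For reference, which unit you get comes down to the component sub-type you ask for when instantiating the audio unit. A minimal sketch using the AudioToolbox C API (illustrative only, not cpal's actual backend code; error handling omitted):

```c
#include <AudioToolbox/AudioToolbox.h>

static AudioUnit make_io_unit(void) {
    // Pick the I/O unit by sub-type: kAudioUnitSubType_RemoteIO for
    // low-latency hardware I/O, kAudioUnitSubType_VoiceProcessingIO for
    // VoIP (adds echo cancelation), or kAudioUnitSubType_GenericOutput to
    // send audio back to the application instead of the hardware.
    AudioComponentDescription desc = {
        .componentType         = kAudioUnitType_Output,
        .componentSubType      = kAudioUnitSubType_RemoteIO,
        .componentManufacturer = kAudioUnitManufacturer_Apple,
    };

    AudioComponent component = AudioComponentFindNext(NULL, &desc);
    AudioUnit unit = NULL;
    AudioComponentInstanceNew(component, &unit);
    AudioUnitInitialize(unit);
    return unit;
}
```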
snd_pcm_sw_params_set_avail_min was being hardcoded to 4096, which
seems to be problematic for lower sample rates. This update sets
the value to the buffer size as supplied by snd_pcm_get_params(),
which is what ALSA's own sample code does.
This should fix https://github.com/tomaka/cpal/issues/142
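Roughly, the change amounts to something like the following sketch of the ALSA calls involved (not the exact cpal code; `pcm` is assumed to be an already-configured snd_pcm_t handle):

```c
#include <alsa/asoundlib.h>

// Query the buffer and period sizes actually chosen during hardware setup,
// then use the buffer size (rather than a hardcoded 4096) as avail_min.
static int set_avail_min_to_buffer_size(snd_pcm_t *pcm) {
    snd_pcm_uframes_t buffer_size, period_size;
    snd_pcm_get_params(pcm, &buffer_size, &period_size);

    snd_pcm_sw_params_t *sw_params;
    snd_pcm_sw_params_alloca(&sw_params);
    snd_pcm_sw_params_current(pcm, sw_params);
    snd_pcm_sw_params_set_avail_min(pcm, sw_params, buffer_size);
    return snd_pcm_sw_params(pcm, sw_params);
}
```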
snd_pcm_pause could have been used, but not all hardware implements it, so
I propose not to use it.
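For completeness, relying on snd_pcm_pause would require checking per device whether the hardware can pause at all, e.g. something like this sketch (not part of the change):

```c
#include <alsa/asoundlib.h>

// Hardware pause support must be queried per device; many drivers report 0.
static int can_hw_pause(snd_pcm_t *pcm) {
    snd_pcm_hw_params_t *hw_params;
    snd_pcm_hw_params_alloca(&hw_params);
    snd_pcm_hw_params_current(pcm, hw_params);
    return snd_pcm_hw_params_can_pause(hw_params);
}
```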
In this implementation:
* there are two kinds of scheduling: waiting for the resume signal, and
  waiting for the PCM to become available;
* if the stream is paused, it reports itself as not ready and waits for
  the resume signal;
* the event loop is different, as it now manages the descriptors
  corresponding to the voices according to the kind of scheduling each
  voice is in.

There is still a FIXME: in voice.play() the resume signal is sent even if
the event loop wasn't waiting for resume. This doesn't seem to cause any
issue, but it happens when you write voice.pause(); voice.play();
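A rough sketch of the two scheduling modes, assuming a per-voice self-pipe whose read end is polled while the voice is paused (the struct and function names here are illustrative, not cpal's actual code):

```c
#include <alsa/asoundlib.h>
#include <poll.h>
#include <unistd.h>

struct voice {
    snd_pcm_t *pcm;
    int paused;          // set by voice.pause(), cleared by voice.play()
    int resume_pipe[2];  // play() writes a byte here to wake the event loop
};

// Fill `fds` for one voice: a paused voice waits on its resume pipe,
// a playing voice waits on the PCM descriptors (i.e. until the PCM is
// available again).
static int voice_poll_descriptors(struct voice *v, struct pollfd *fds,
                                  unsigned int space) {
    if (v->paused) {
        fds[0].fd = v->resume_pipe[0];
        fds[0].events = POLLIN;
        return 1;
    }
    return snd_pcm_poll_descriptors(v->pcm, fds, space);
}

// Called from voice.play(): wake the event loop if it is waiting for resume.
// FIXME (as noted above): the byte is written even when the event loop is
// not currently waiting on the pipe, e.g. after voice.pause(); voice.play();
static void voice_signal_resume(struct voice *v) {
    char b = 1;
    (void)write(v->resume_pipe[1], &b, 1);
}
```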