I'm curious, how did you figure out how to do the conversion? Do you have some sort of professional experience with this stuff?
The thing I discussed should be common knowledge in CoreAudio programming. See
http://atastypixel.com/blog/four-common-mistakes-in-audio-development/
The article calls out four obvious mistakes:
1. Don’t hold locks on the audio thread (e.g. pthread_mutex_lock or @synchronized).
2. Don’t use Objective-C/Swift on the audio thread (e.g. [myInstance doAThing] or myInstance.something).
3. Don’t allocate memory on the audio thread (e.g. malloc(), new Abcd, or [MyClass alloc]).
4. Don’t do file or network IO on the audio thread (e.g. read, write, or sendto).
But most, if not all, developers simply do not follow these rules. Examples:
1. VLC has a gigantic render function. Funnily, a comment in it warns against using locks, waits, and IO in the renderer, yet two lines later the code says "lock_lock(p_sys);":
https://github.com/videolan/vlc/blo.../modules/audio_output/coreaudio_common.c#L140
2. libao, the xiph library (the org that maintains FLAC and Vorbis) that virtually all open-source players use, still has a complex audio renderer with mutex locking even today:
https://github.com/xiph/libao/blob/master/src/plugins/macosx/ao_macosx.c#L127
3. Cog.app, which a lot of Mac users use and recommend, not only calls Objective-C methods but also calls a stream reader that mallocs/frees:
https://bitbucket.org/mamburu/cog/s...viewer=file-view-default#OutputCoreAudio.m-38
The list just goes on and on! In practice, the performance of those projects is really bad even on my 2017 MacBook Pro --- Cog.app and VLC can barely render 92/24 without hiccups.
It seems the entire industry (reviewers, developers, and users alike) concentrates on pretty UI and functionality, without paying attention to code performance and audio playback quality.
In comparison, see how I implemented the render function in cmus and MPD:
https://github.com/cmus/cmus/blob/master/op/coreaudio.c#L268
https://github.com/MusicPlayerDaemon/MPD/blob/master/src/output/plugins/OSXOutputPlugin.cxx#L684
Just 2-3 lines of code that copy data from a lock-free ring buffer.
Thus, even at 352.8/32 playback is still smooth like butter!
There are other good implementations. mpv's is similar to mine:
https://github.com/mpv-player/mpv/b...b7484f27d033f7a5/audio/out/ao_coreaudio.c#L74
Its ao_read_data function reads the buffer in a lock-free way:
https://github.com/mpv-player/mpv/b...89d42bfa3c9162ecef1f84e/audio/out/pull.c#L137
But unfortunately it calls mp_raw_time_us, which is a system call (it blocks until the kernel returns the current time).
If you look at how the music players on the market implement their render callbacks, you can safely conclude that most are on par with the design quality of Schiit DACs.