Warning: this is an extremely nerdy post about programming an esoteric Apple API.
If you’re working at a very low level with Core Audio, you are very likely using the Remote IO audio unit. It’s the best way to create a low-latency audio app. It’s not really worth it for a podcast client or an MP3 player, but if you need to generate audio on the fly, with <10 ms of latency, it seems to be the only way.
Basically you set up the Remote IO audio unit so it’s connected to the system audio out. You feed audio data to the audio unit via a callback function.
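The real render callback has Apple's `AURenderCallback` signature and fills an `AudioBufferList`; that part is platform boilerplate. As a sketch, here is a simplified, platform-independent stand-in (the types and names are illustrative, not the Core Audio API) that shows the shape of the job: every time the hardware needs audio, your function must fill `frames` samples, on time, every time.

```cpp
#include <cmath>
#include <cstdint>
#include <cassert>

// Simplified stand-in for the per-voice state your callback would read.
struct SineState {
    double phase = 0.0;
    double freq  = 440.0;    // Hz
    double rate  = 44100.0;  // sample rate
};

// Hypothetical render callback: fill an interleaved mono buffer with a sine.
// In a real app this body must be realtime-safe: no allocation, no locks.
void renderSine(SineState* state, float* out, uint32_t frames) {
    const double kTwoPi = 6.283185307179586;
    const double step = kTwoPi * state->freq / state->rate;
    for (uint32_t i = 0; i < frames; ++i) {
        out[i] = (float)std::sin(state->phase);
        state->phase += step;
        if (state->phase > kTwoPi) state->phase -= kTwoPi;
    }
}
```

The actual Remote IO wiring (finding the audio component, setting `kAudioUnitProperty_SetRenderCallback`) is left out; this is only the part that runs on the realtime thread.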
One of the well-known issues with this approach is that the system calls back into your app on a realtime thread. On this thread, every operation must complete in a deterministic amount of time. That means no allocation, no deallocation, no locks, and no GCD dispatch (it allocates memory). The problem is this: the audio thread needs read/write access to the data that tells it how to play music, and the user needs read/write access to that same data. So how do you synchronize these threads without locks?
This is what I’ve been pondering whilst writing my new app. Until now, my only solution was to bite the bullet and use GCD. It actually worked out OK performance-wise, but in the end it was futile: I would have had to use GCD to execute synchronous audio-rate operations and make the realtime thread wait for them to finish, and that’s just downright silly.
I’ve only had one truly successful strategy so far. First, the realtime thread needs to “own” the song data. If you can’t ever make the realtime thread wait, then it needs exclusive access to the song data; there’s no safe way I can see to give both the main thread and the Remote IO thread direct access. So I decided to pass parametrized commands to the realtime thread, and let the Remote IO callback mutate the song data itself.
Luckily, I was already using the Command design pattern to make changes to the song data (which made undo/redo a cinch), and it became obvious that I’d need a lock-free queue to pass these commands to the Remote IO thread. So if you want to change a note on step 14 of your song, you allocate a WriteNote command (on the main thread) with the note and the step as parameters, then enqueue the command in a lock-free ring buffer, such as Michael Tyson’s TPCircularBuffer. The Remote IO thread pulls out any new commands and executes them before generating the next audio buffer, so there’s no conflict.
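To make the scheme concrete, here is a minimal sketch of a single-producer/single-consumer lock-free command queue plus a WriteNote-style command. It’s in the spirit of TPCircularBuffer but is not Tyson’s actual API, and `Song`, `Command`, and `WriteNote` are hypothetical names standing in for whatever your song model looks like.

```cpp
#include <atomic>
#include <cstddef>
#include <cassert>

// A command is allocated on the main thread; execute() runs on the
// realtime thread and must be realtime-safe (no allocation, no locks).
struct Command {
    virtual ~Command() = default;
    virtual void execute() = 0;
};

// Minimal SPSC lock-free ring buffer of command pointers.
// N must be a power of two. Main thread pushes; Remote IO thread pops.
template <size_t N>
class CommandQueue {
public:
    // Main thread. Returns false if the queue is full (caller retries or drops).
    bool push(Command* cmd) {
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) & (N - 1);
        if (next == tail_.load(std::memory_order_acquire)) return false;
        slots_[head] = cmd;
        head_.store(next, std::memory_order_release);
        return true;
    }
    // Realtime thread, at the top of the render callback. nullptr when empty.
    Command* pop() {
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return nullptr;
        Command* cmd = slots_[tail];
        tail_.store((tail + 1) & (N - 1), std::memory_order_release);
        return cmd;
    }
private:
    Command* slots_[N];
    std::atomic<size_t> head_{0};  // next write index (producer)
    std::atomic<size_t> tail_{0};  // next read index (consumer)
};

// Hypothetical song model and command, mirroring the step-14 example.
struct Song { int notes[64] = {}; };

struct WriteNote : Command {
    Song* song; int step; int note;
    WriteNote(Song* s, int st, int n) : song(s), step(st), note(n) {}
    void execute() override { song->notes[step] = note; }  // realtime-safe
};
```

The render callback would simply loop `while (Command* c = queue.pop()) c->execute();` before filling the next audio buffer; executed commands then go onto the deallocation queue rather than being deleted on the audio thread.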
Any new objects or buffers that need to be allocated are created in the command’s constructor; for example, any new samples can be loaded there. When something must be deallocated, send a dealloc command. That command grabs the data (say, a sample buffer) on the Remote IO thread and is then enqueued on a second ring buffer, which is read by a dedicated pthread. That thread can then safely deallocate the buffer.
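The deferred-deallocation half can be sketched the same way: a second SPSC ring buffer that the realtime thread pushes doomed buffers onto, and a drain function that only ever runs on the cleanup thread. The names and the `float*` payload are illustrative; a real app might queue arbitrary pointers or dealloc commands.

```cpp
#include <atomic>
#include <cstddef>
#include <cassert>

// Second ring buffer: the realtime thread pushes buffers it wants freed,
// the cleanup thread pops and deletes them. SPSC, power-of-two size.
template <size_t N>
class GarbageQueue {
public:
    bool push(float* p) {  // realtime thread: hand off, never free
        size_t head = head_.load(std::memory_order_relaxed);
        size_t next = (head + 1) & (N - 1);
        if (next == tail_.load(std::memory_order_acquire)) return false;
        slots_[head] = p;
        head_.store(next, std::memory_order_release);
        return true;
    }
    float* pop() {  // cleanup thread only
        size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire)) return nullptr;
        float* p = slots_[tail];
        tail_.store((tail + 1) & (N - 1), std::memory_order_release);
        return p;
    }
private:
    float* slots_[N];
    std::atomic<size_t> head_{0}, tail_{0};
};

// Body of the cleanup thread's loop: the only place buffers are freed.
// Returns how many buffers it deallocated on this pass.
template <size_t N>
int drainGarbage(GarbageQueue<N>& q) {
    int freed = 0;
    while (float* p = q.pop()) { delete[] p; ++freed; }
    return freed;
}
```

The cleanup thread would call `drainGarbage` in a loop, sleeping between passes; since it never blocks the audio thread, its timing doesn’t matter.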
It kind of seems like there should be a simpler way to do this, but this approach works pretty well, is totally thread-safe, and keeps the realtime thread running smoothly, which is the top priority.
Let me know if you think I’m crazy, or missing something.