Wavetable scanning in LGPT


So I’ve been spending some quality time with LGPT lately. I like it a lot. There are some definite weak points (no prelisten, and the filter/pitch oscillation is… not great), but I’m warming up to it.
I noticed that herr prof, over at chipmusic.org, uploaded a bunch of wavetables for LGPT last year. Good stuff 👍. I was expecting single-cycle waveforms, but they’re actually multi-cycle wavetables, like the ones from Animoog. They were designed to be scanned through, so you can have a dynamic wavetable synthesizer instead of a static sound.

It turns out that you can move through a wavetable in LGPT using the LPOF command. Pretty cool!

A few caveats I’ve found: with wavetable synthesis, reducing noise is a common concern. LGPT only has linear interpolation, so higher frequencies will be hard to make sound good without a lot of oversampling. That means you may have to work with long files: 256 cycles at 2048 samples per cycle, at 44.1kHz, is ~11.89 seconds long! That’s a big wav file.
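If you want to sanity-check that figure, here’s a quick back-of-the-envelope calculation (a throwaway C++ sketch; the constants are just the numbers from above):

    #include <cstdio>

    int main() {
        const double sampleRate      = 44100.0; // playback rate of the wav file
        const long   samplesPerCycle = 2048;    // samples in one wavetable cycle
        const long   cycleCount      = 256;     // cycles in the whole table

        const long   totalSamples = samplesPerCycle * cycleCount; // 524288
        const double seconds      = totalSamples / sampleRate;    // ~11.89 s

        std::printf("%ld samples = %.2f seconds at %.0f Hz\n",
                    totalSamples, seconds, sampleRate);
        return 0;
    }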

Also, the fastest rate at which you can update the wavetable position is once per “tick” [read here if you don’t know what that is]. So it helps to turn up the tempo and use more ticks per step. If you’re OK with a “steppy” sound, then you don’t have to.

And, of course, you must increment the loop position by a multiple of the cycle size; otherwise you will be looping from the middle of a cycle instead of the beginning. So if your cycle size is 2048 samples [0x800 in hex], you would stick:

00 LPOF 0800
01 HOP 0000

…in your instrument table. That will move through one cycle per tick. Change it to LPOF 1000 [0x1000 = 4096 samples], and it will progress at two cycles per tick.
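If you’d rather not do the hex math in your head, a tiny helper like this works too (again just a sketch in C++; plug in whatever cycle size your wav actually uses):

    #include <cstdio>

    int main() {
        const int samplesPerCycle = 2048; // 0x800, as in the example above
        for (int cyclesPerTick = 1; cyclesPerTick <= 4; ++cyclesPerTick) {
            // LPOF takes an offset in samples, so keep it a multiple of the cycle size
            const int offset = samplesPerCycle * cyclesPerTick;
            std::printf("%d cycle(s) per tick -> LPOF %04X\n", cyclesPerTick, offset);
        }
        return 0;
    }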

Still trying to get backwards motion working right.

Just make sure you know how long the cycles are in each wav file, and you’re good to go!

Mix Master

Just a little music post. Lately I’ve been listening to mixes by Jellica over at Kitten Rock, an idm/techno/weird-shiz net label. I really dig his taste in music. Check it out.


Edit: While we’re on the subject, check out Henry Homesweet too.


Edit 2:

Okay, I’m just gonna unload here:

Je Mappelle – gameboy music 2013-2015:
https://jemappelle.bandcamp.com/album/gameboy-music-2013-2015

Fighter X – Lo-Tek Underground EP:
https://fighterx.bandcamp.com/album/lo-tek-underground-ep

Jamatar – Spacesounds:
https://jamatar.bandcamp.com/album/spacesounds-4

cTrix – My First Famitracker (Mixed):
http://chipmusic.org/cTrix/music/my-first-famitracker-mixed

Concurrency with Real Time Audio Threads

Warning: this is an extremely nerdy post about programming an esoteric Apple API.

If you’re working at a very low level with Core Audio, you are very likely using the Remote IO audio unit. It’s the best way to create a low-latency audio app. It’s not really worth it for a podcast client or an MP3 player, but if you need to generate audio on the fly, with <10 ms of latency, it seems to be the only way.

Basically you set up the Remote IO audio unit so it’s connected to the system audio out. You feed audio data to the audio unit via a callback function.
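For the curious, this is roughly what that setup looks like with the Core Audio C API (a minimal sketch only: error handling and stream-format setup are omitted, and the MyRenderCallback / CreateRemoteIOUnit names are mine, not Apple’s):

    #include <AudioUnit/AudioUnit.h>

    // Runs on the realtime audio thread: fill ioData with inNumberFrames of audio.
    // No locks, no allocation, no Objective-C messaging in here.
    static OSStatus MyRenderCallback(void *inRefCon,
                                     AudioUnitRenderActionFlags *ioActionFlags,
                                     const AudioTimeStamp *inTimeStamp,
                                     UInt32 inBusNumber,
                                     UInt32 inNumberFrames,
                                     AudioBufferList *ioData) {
        // ... generate samples into ioData ...
        return noErr;
    }

    AudioUnit CreateRemoteIOUnit() {
        // Ask the system for the Remote IO output unit.
        AudioComponentDescription desc = {};
        desc.componentType         = kAudioUnitType_Output;
        desc.componentSubType      = kAudioUnitSubType_RemoteIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);

        AudioUnit unit = NULL;
        AudioComponentInstanceNew(comp, &unit);

        // Hook the render callback up to the output element (bus 0).
        AURenderCallbackStruct cb = {};
        cb.inputProc       = MyRenderCallback;
        cb.inputProcRefCon = NULL; // normally a pointer to your engine state
        AudioUnitSetProperty(unit, kAudioUnitProperty_SetRenderCallback,
                             kAudioUnitScope_Input, 0, &cb, sizeof(cb));

        AudioUnitInitialize(unit);
        AudioOutputUnitStart(unit);
        return unit;
    }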

One of the well-known issues with this approach is that the system calls back to your app on a realtime thread. On this thread, all operations must complete in a deterministic amount of time. That means no allocation, deallocation, locks, or GCD dispatch (it allocates memory). The problem is that the audio thread needs read/write access to the data that tells it how to play the music, and the user needs read/write access to that same data. So how do you synchronize these threads without locks?

This is what I’ve been pondering while writing my new app. Up until now, my only solution was to bite the bullet and use GCD. It actually worked out OK performance-wise, but in the end it was futile: I would have had to use GCD to execute synchronous audio-rate operations, and make the realtime thread wait for them to finish, which is just downright silly.

I’ve only had one truly successful strategy so far. First, the realtime thread needs to “own” the song data. If you can’t ever make the realtime thread wait, then it needs exclusive access to the song data; there’s no safe way I can see to give both the main thread and the Remote IO thread direct access. So I decided that I needed to pass parametrized commands to the realtime thread, and let the Remote IO callback mutate the song data itself.

Luckily I was already using the Command design pattern to execute changes in the song data (which made undo/redo functions a cinch), and it became obvious that I would need a lock-free queue to pass these commands to the Remote IO thread. So if you want to change a note on step 14 of your song, you allocate a WriteNote command (on the main thread) with the note and the step as parameters, then enqueue the command in a lock-free ring buffer, such as Michael Tyson’s TPCircularBuffer. The Remote IO thread will pull out any new commands and execute them before generating the next audio buffer. So there’s no conflict.
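Here’s roughly the shape of it, as a sketch. To keep the example self-contained I’ve stood in a minimal single-producer/single-consumer pointer queue where the real thing would use TPCircularBuffer, and the Song / Command / WriteNoteCommand types are just illustrations of the scheme described above:

    #include <atomic>
    #include <cstddef>

    struct Song {
        float *samples[16] = {};   // stand-in for the real song data (notes, patterns, ...)
    };

    // Abstract command: allocated on the main thread, executed on the realtime thread.
    struct Command {
        virtual ~Command() {}
        virtual void execute(Song &song) = 0;   // must be realtime-safe: no locks, no allocation
    };

    // Example: "write this note at this step".
    struct WriteNoteCommand : Command {
        int step, note;
        WriteNoteCommand(int step_, int note_) : step(step_), note(note_) {}
        void execute(Song &song) override {
            // write `note` into the song at `step` -- plain memory writes, nothing blocking
            (void)song;
        }
    };

    // Minimal lock-free single-producer/single-consumer queue of Command pointers.
    // (This is the role TPCircularBuffer plays in the real app.)
    template <size_t N>
    class CommandQueue {
        Command *slots[N] = {};
        std::atomic<size_t> head{0}, tail{0};
    public:
        bool push(Command *c) {                                    // producer thread only
            size_t h    = head.load(std::memory_order_relaxed);
            size_t next = (h + 1) % N;
            if (next == tail.load(std::memory_order_acquire)) return false; // full
            slots[h] = c;
            head.store(next, std::memory_order_release);
            return true;
        }
        Command *pop() {                                           // consumer thread only
            size_t t = tail.load(std::memory_order_relaxed);
            if (t == head.load(std::memory_order_acquire)) return nullptr;  // empty
            Command *c = slots[t];
            tail.store((t + 1) % N, std::memory_order_release);
            return c;
        }
    };

    // At the top of the render callback, before filling the next buffer:
    //     while (Command *c = commandQueue.pop()) { c->execute(song); /* recycle c, see below */ }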

Any new objects/buffers that need to be allocated are allocated in the constructor of the command, so new samples, for example, can be loaded there. When something must be deallocated, send a dealloc command. That command grabs the data (let’s say a sample buffer) on the Remote IO thread, then gets enqueued on a second ring buffer, which is read by a dedicated pthread. That thread can then safely deallocate the buffer.
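Continuing the sketch from above, the disposal path might look something like this (UnloadSampleCommand and collectorThread are invented names; the real thing would use a second lock-free ring buffer the same way):

    #include <pthread.h>
    #include <unistd.h>

    // Second queue: realtime thread -> collector thread.
    static CommandQueue<256> disposalQueue;

    // Built on the main thread ("unload the sample in this slot"),
    // executed on the realtime thread, freed on the collector thread.
    struct UnloadSampleCommand : Command {
        int    slot;
        float *detached = nullptr;
        explicit UnloadSampleCommand(int s) : slot(s) {}
        void execute(Song &song) override {
            detached = song.samples[slot];   // just a pointer swap, no freeing here
            song.samples[slot] = nullptr;
            disposalQueue.push(this);        // hand ourselves off to the collector
        }
    };

    // The collector thread is the only place deallocation actually happens.
    static void *collectorThread(void *) {
        for (;;) {
            while (Command *c = disposalQueue.pop()) {
                UnloadSampleCommand *u = static_cast<UnloadSampleCommand *>(c);
                delete[] u->detached;        // free the sample data
                delete u;                    // and the command object itself
            }
            usleep(10 * 1000);               // sweep every ~10 ms; latency doesn't matter here
        }
        return nullptr;
    }

    // Started once at launch:
    //     pthread_t tid;
    //     pthread_create(&tid, NULL, collectorThread, NULL);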

It kind of seems like there should be a simpler way to do this, but this way works pretty well and is totally thread safe. It also keeps the realtime thread running smoothly, which is the top priority.

Let me know if you think I’m crazy, or missing something.

SqrSyn Export

Recently I read a review for SquareSynth where the user was confused by the audio export options. I’d just like to address this question here (since Apple doesn’t provide a way to respond to App Store reviews). When you tap the Export button in SquareSynth, two things happen: the audio is copied to the clipboard, and a wav file is shared with iTunes.

Clipboard

How the clipboard works is that SquareSynth copies a stereo wav file onto the clipboard. You would then go into another app and paste it, but there’s a caveat: not all apps support audio paste. When another app tries to use the clipboard, it looks at the type of the data there. If it recognizes the type as something it knows how to use, it will use it; otherwise it will just do nothing.

So, to copy wav data from SquareSynth to another app, using the clipboard, you first tap the Export button in the Record overlay.

[Screenshot of the Record overlay: the Export button is in here.]

Now you switch to another app that knows how to use wav data from the clipboard. Here are two lists of apps that support it:

https://iosaudio.wordpress.com/audio-copy-paste-the-master-list/

http://www.sonomawireworks.com/iphone/audiocopy/

These apps will have an Audio Paste option for you to use (consult the manuals). The clipboard will hold your audio until you copy something else.

File Sharing

The other thing that happens when you hit the Export button is that a wav file gets shared with iTunes (for your Mac/PC). To access this file:

◦ Connect your iDevice to your Mac/PC.

◦ Go to the “Apps” section in iTunes.

◦ Locate the File Sharing area at the bottom of the page.

◦ Locate SquareSynth in the list of File Sharing apps, and click it.

◦ You will see your file in SquareSynth Documents list. Drag it to your desktop, or use the Save To… button.

Enjoy!

iOS State of the Music

Well, it’s that time of year: WWDC time. This is my annual chance to gaze into the future put forth by our Apple overlords (arriving in the fall, with the release of iOS 9) and attempt to prepare for it. Sometimes the changes are very small, and I only have to do a little maintenance; sometimes it’s a HUGE, paradigm-shifting, app-obsoleting cannonball.

This year appears to be a cannonball. In a way, iOS 9 does for music app developers what iOS 8 did for every other developer. With the introduction of App Extensions, iOS 8 greatly expanded the scope of interaction between apps, and with the OS in general. It made developers rethink how their apps could fit into a user’s workflow. This was a big game changer for most apps, especially those concerned with productivity. For music apps… there was not a lot to make use of. There was some potential for streamlining wav file editing with extensions, but the big pain points in the pro-audio workflow went largely unaddressed.

Well, the future has finally remembered us: with iOS 9, Apple will be introducing Audio Unit extensions. Anyone familiar with audio production in the Apple world should know that Audio Units are virtual instruments and effects modules, usable in a Digital Audio Workstation (like Ableton Live, Cubase, AudioMulch, Renoise, etc.). You can think of them as Apple’s answer to VSTs.

This new API will allow us to have what many have longed for on iOS for years: a way of writing music where you can compose entirely within one app and use third-party instruments and audio processors, just as on a PC. The only way to approach this sort of workflow now is to write your song in a DAW (like BeatMaker 2 or Cubasis), use Core MIDI to send notes and modulation to each instrument or effect app, then set up Audiobus to route all of the audio back into the DAW and record it. Don’t get me wrong, it’s great to have these tools available, but it’s a pain in the neck to get it all set up and working right. Not only that, but the composer also needs to worry quite a lot about CPU/memory load. Not all apps are optimized very well, and latency is a big concern in this long round trip through multiple standalone apps.

Well, third-party instruments and effects can now be loaded and used within your host app (in iOS 9, of course). Just as on OS X, Audio Units will be able to have their own interface, which is projected into your composition app. You can tweak a setting, write a few notes, and tweak it again, without app switching! How crazy is that?

Of course, one of the first things that comes to mind is that devs will now be able to port existing Audio Units from OS X to iOS. It’s important to emphasize that Audio Units are not fully interchangeable between iOS and OS X; you can’t just take all of your old AUs and copy them onto your iPad. Audio Units on iOS are implemented as extensions, so the developer will need to create a standalone app and build the AU as an extension. The good news is that version 3 Audio Units are fully cross-platform between OS X and iOS if they don’t have a custom GUI (Graphical User Interface). The AUViewController class, which developers will use to make a custom GUI, is cross-platform, but UIKit and AppKit (which provide the basic building blocks of the user interface) are specific to each OS, so there is some work involved in creating compatibility. Version 2 Audio Units will be able to use a “bridge” class that will help developers port old plugins to iOS.

While all of this progress is amazingly cool, we are now left to wonder what will happen to the infrastructure we’ve built to get around the limitations of the old iOS. Developers have spent countless hours integrating with Inter-App Audio, Audiobus, and Core MIDI in an attempt to liberate audio functionality from its app sandbox. It’s unclear to me how much of my MIDI code will be reusable (probably very little), and it’s a little frustrating to have the rug pulled out from under you. I believe I will continue to support Audiobus, because it’s still a powerful tool and the Audiobus community is amazing; they are really enthusiastic about pro-audio experiences on iPads and iPhones.

I’m now scrapping plans to add a sequencer to SquareSynth, because I’m not sure there’s a point. You will be able to use the app inside a DAW shortly, so why bother? This will totally change the way I build audio apps in the future. Standalone features will suffer for sure. There’s less reason to make your standalone app a musical hub if it can be run in many different instances and seamlessly chained together with other apps; instrument and effect apps will get much, much thinner, and DAWs will get much fatter. On one hand, it’s great, because we can make very focused tools and write less code. But a lot of existing code now seems redundant. We will have to wait and see how users adopt this new model. Perhaps for some people a heavy-duty standalone synth will still be preferable.

In spite of the friction and uncertainty, I am very excited for the arrival of the modular iOS universe. It’s been a long time coming.

Song of the Week

Ok, I probably won’t do this every week, but I thought this would be cool. There are a few SquareSynth users out there who have posted their creations to SoundCloud. I have listened to a bunch of them, and this one really got my head bobbing.

This is a short song by bit nibbler. Congrats, your visibility has skyrocketed to about 5 listeners per week. 😉

Be sure to upload your tracks to SoundCloud with hashtag #SquareSynth! Everybody! Do it!