Algorithmic sound is a fascinating field! It's ubiquitous in everyday life, yet few people understand how it works. It's actually quite simple, and the light-switch moment of understanding something you previously took to be black magic is glorious.
This talk focuses primarily on computer sound representation and synthesis, which is relatively simple to implement regardless of language. As such, this talk will be suitable for attendees of all levels of iOS experience. In this session we’ll go back to basics and take a look at the core components of sound, including how we perceive sounds and how we can represent them within a computer. We’ll go on to look at the theory behind the Core Audio framework, and finish up with a live demo, where we use Audio Units to create our own simple synthesiser.
Sebastian loves sound. He graduated from the University of Tasmania with Honours in Computing, focusing on artistic computing via evolutionary sound synthesis. He loves listening to and making sounds, and has played with many ensembles including the Grainger Wind Symphony, Zelman Symphony Orchestra, and Ochre Trio. He has spoken at a number of programming and artistic computing conferences, notably TEDx Hobart and every /dev/world since 2012, and is currently an iOS Developer at Art Processors.