The State of Android & Audio

August 8, 2014

Over the past several years, mobile has become the go-to platform for most people’s media consumption. From audio playback to movie streaming, there is a growing amount of content available in your pocket and on your tablet, and the market is still expanding.

Today we are seeing a move towards high-end 3D gaming environments, live music aids, and even home studio audio software suites designed to work on mobile devices and tablets. However, Android sadly has not been at the forefront of this growing market; that position is firmly held by Apple.

Particularly in a creative capacity, tablets are quickly replacing laptops for music creation and live performance. Not to mention that there’s a whole market for digital effects, which can be purchased at much lower costs than traditional analogue equipment.

Line 6 Amplifi

Line 6’s latest digital effects amplifier is designed entirely around a mobile interface, but Android support is nowhere in sight.

This migration towards richer digital content demands higher levels of processing power, on a platform limited by smaller batteries and thermal constraints. Android owners pride themselves on having some of the best hardware on the market, so why does Android seem so far behind its rival when it comes to audio applications?

A little about audio processing


Our mobile phones are more than powerful enough for simple playback tasks. However, as processing power has increased, we have also begun to demand more signal processing from our mobile devices, and a lot more of it in real time, too.

We may take it for granted, but even when playing a game, every sound file has to be hauled from memory and converted from binary information to numerical values before being pushed to a DAC, all of which takes up valuable clock cycles. Additional post-processing, such as passing the audio through your optimized EQ settings or supplementing the sound with extra reverb, takes up even more time, and modern applications are becoming increasingly complex.
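
To make that per-sample cost concrete, here is a minimal sketch in C++ of the kind of work described above: 16-bit PCM is converted to floating point, run through a trivial gain stage (standing in for a real EQ or reverb), and converted back for the DAC. The function name and the use of a plain gain are illustrative assumptions, not any particular engine's code.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Convert 16-bit PCM to float, apply one processing stage, convert back.
// Every additional effect in the chain repeats work like this per sample.
std::vector<int16_t> postProcess(const std::vector<int16_t> &pcm, float gain) {
    std::vector<int16_t> out(pcm.size());
    for (std::size_t i = 0; i < pcm.size(); ++i) {
        float sample = pcm[i] / 32768.0f;   // binary PCM -> numerical value
        sample *= gain;                      // stand-in for an EQ/reverb stage
        if (sample > 1.0f) sample = 1.0f;    // clip back to the valid range
        if (sample < -1.0f) sample = -1.0f;
        out[i] = static_cast<int16_t>(sample * 32767.0f); // back to PCM
    }
    return out;
}
```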

Although modern mobile processors long ago surpassed the multiple-GHz mark and can match high-end PC equipment in core count, these simple figures are not all that matters when it comes to digital signal processing. Different processor designs complete the same task in a different number of clock cycles, making some CPUs faster than others at identical workloads. This is why direct GHz and core count comparisons don’t always apply across designs.

The Beat Suite Recording Studio

Android might not be able to compete with expensive studio grade hardware, but a lot can be accomplished on a tight processing budget, if you know where to optimize.

With real-time audio, it is essential to be able to process floating point data (digital numbers with a decimal point) and SIMD (single instruction, multiple data) instructions quickly, preferably all within the short amount of time between samples; at the 44.1 or 48 kHz sample rates typical of most audio applications, that is a little over 20 microseconds per sample. Floating point units, the mathematical coprocessors commonly found in CPU core designs, are used for calculating mathematical operations on the digital audio signal with high levels of accuracy.
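
The arithmetic behind that deadline is worth spelling out. The rough calculation below assumes a 256-frame buffer, a common but by no means universal size on mobile; the exact figure varies by device.

```cpp
#include <cstdio>

// How much time does a real-time audio callback actually get?
int main() {
    const double sampleRates[] = {44100.0, 48000.0};
    const int bufferFrames = 256; // an assumed, typical mobile buffer size
    for (double rate : sampleRates) {
        double perSampleUs = 1e6 / rate;                // ~22.7 us / ~20.8 us
        double perBufferMs = 1e3 * bufferFrames / rate; // ~5.8 ms / ~5.3 ms
        printf("%.0f Hz: %.1f us per sample, %.2f ms per %d-frame buffer\n",
               rate, perSampleUs, perBufferMs, bufferFrames);
    }
    return 0;
}
```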

Multiple cores are not so important for audio; instead, brute speed is the key.

Multiple cores are not so important for audio, as most DSP algorithms are not optimized for multiple threads; instead, brute speed is the key. The limitations of mobile processors in this regard lie in their narrower memory bus bandwidth and smaller CPU caches, compared with beefier desktop-grade CPUs. This can mean that your mobile CPU might actually end up spending more time waiting for data than it does processing it.
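
A rough, illustrative micro-benchmark shows what "waiting for data" looks like in practice: the two loops below perform exactly the same number of additions, but the strided version defeats the cache and prefetcher, so it runs markedly slower on cache-starved hardware. The sizes and strides are arbitrary assumptions for demonstration.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

int main() {
    const size_t n = 1 << 24;          // 16M floats, far larger than any cache
    std::vector<float> data(n, 1.0f);
    auto time = [&](size_t stride) {
        auto start = std::chrono::steady_clock::now();
        float sum = 0.0f;
        for (size_t s = 0; s < stride; ++s)             // same total work,
            for (size_t i = s; i < n; i += stride)      // different access order
                sum += data[i];
        auto end = std::chrono::steady_clock::now();
        long long ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                           end - start).count();
        printf("stride %zu: sum=%.0f, %lld ms\n", stride, sum, ms);
    };
    time(1);   // sequential: the prefetcher keeps the CPU fed
    time(16);  // strided: identical arithmetic, far more cache misses
    return 0;
}
```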

An example of one of the most demanding audio processing tasks is time stretching, where the tempo/speed of an audio sample is altered without the trade-offs in pitch/frequency that come from simply changing a sample’s wavelength. In this technique, audio is converted to digital, and a Fast Fourier Transform algorithm then extracts the frequency information from the sound, which is used to correct/restore the frequency content as the sample is stretched or shrunk in the time domain.
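
To see why the FFT matters here, consider the naive alternative. The sketch below extracts the magnitude of a single frequency bin directly; computing all N bins this way costs O(N²) operations, whereas an FFT produces the same information in O(N log N). This is illustrative textbook code, not any production algorithm.

```cpp
#include <cmath>
#include <vector>

// Naive single-bin DFT: how much energy does frequency bin k contain?
float binMagnitude(const std::vector<float> &samples, int k) {
    const float pi = 3.14159265f;
    const int n = static_cast<int>(samples.size());
    float real = 0.0f, imag = 0.0f;
    for (int i = 0; i < n; ++i) {
        float angle = -2.0f * pi * k * i / n;
        real += samples[i] * std::cos(angle); // correlate with cosine at bin k
        imag += samples[i] * std::sin(angle); // and with sine at bin k
    }
    return std::sqrt(real * real + imag * imag);
}
```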

Fast Fourier Transform example

The Fast Fourier Transform extracts specific frequency information from a more complex waveform, and is highly CPU intensive.

Sounds pretty complicated, right? This type of process puts a huge strain on the CPU, which can result in unacceptable latency. There are actually fewer than five FFT algorithms in the world that can run this type of process efficiently on mobile devices.

The maximum latency in any real-time system should ideally not exceed 20 ms, which is roughly the perceptual limit of delay in humans. Any longer and our brains will notice the gap between sound going into and coming out of a system, or between a button press and something happening on screen. Unfortunately, typical Android latency lies in the region of 100 to 250 milliseconds.
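
Where does all that latency come from? Each buffering stage in the audio path adds its own delay, and they accumulate. The figures below are illustrative assumptions, not measurements of any particular device.

```cpp
#include <cstdio>

// Sum the delay contributed by each stage of a simple audio round trip.
int main() {
    const double rate = 48000.0;
    const int inputFrames = 256, outputFrames = 256; // assumed buffer sizes
    const int outputBuffers = 2;                     // double-buffered output
    double inputMs = 1e3 * inputFrames / rate;
    double outputMs = 1e3 * outputFrames * outputBuffers / rate;
    double processingMs = 2.0;                       // assumed DSP time/buffer
    printf("round trip: %.1f ms (perceptual budget: 20 ms)\n",
           inputMs + processingMs + outputMs);
    // On Android, extra mixer/resampler stages and large safety buffers in
    // the audio path push the real figure towards the 100-250 ms region.
    return 0;
}
```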

In a bid to increase performance and work around some of these shortcomings, mobile SoC developers, like Qualcomm, have started including their own dedicated DSP hardware alongside their main processors.

ARM & DSP

ARM has long included floating point units in nearly all of its core designs, excluding the Cortex-M3 and below, and supports extra Digital Signal Processing and SIMD extensions in its mobile processors.

ARM’s SIMD extension and NEON engine are particularly important for these types of scenarios

This DSP processing capability is aimed at keeping power consumption down while offering the maximum performance available, up to 75 percent higher than can be achieved without the extensions. ARM’s tools are used for a range of common mobile applications, from monitoring sensors to voice recognition, VOIP, and audio encoding/decoding.

ARM’s SIMD extension and NEON engine, found in the commonplace ARMv7 architecture, are particularly important for the types of scenarios that we are talking about. ARM has made particular optimizations for faster sleep, 4-8x DSP algorithm performance enhancements, dedicated tools for Fast Fourier Transform applications, and a whole host of other optimizations for performing complex and processor-heavy mathematical calculations on a strict power budget.
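
As a minimal sketch of what NEON brings to DSP code, the loop below applies a gain to four float samples per instruction instead of one at a time, using standard NEON intrinsics (compile for ARMv7 with -mfpu=neon). Real DSP kernels chain many such operations between each load and store.

```cpp
#include <arm_neon.h>

// Apply a gain to a buffer, four samples at a time, with a scalar tail.
void applyGainNeon(float *samples, int count, float gain) {
    float32x4_t g = vdupq_n_f32(gain);           // broadcast gain to 4 lanes
    int i = 0;
    for (; i + 4 <= count; i += 4) {
        float32x4_t s = vld1q_f32(samples + i);  // load 4 samples
        vst1q_f32(samples + i, vmulq_f32(s, g)); // multiply and store 4 at once
    }
    for (; i < count; ++i) samples[i] *= gain;   // remaining samples, one by one
}
```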

Cortex-A15 chip diagram

ARM’s NEON Data Engine and Floating Point Units, found in all Cortex-A designs, are essential for efficient DSP processing.

The move to ARM’s 64-bit ARMv8 architecture could also have some useful benefits for audio software developers and consumers, as audio applications can be heavily memory dependent, and 64-bit could allow for devices with larger pools of RAM.

However, there is only so much that ARM can do on its own, and ARM’s library really only serves as an example of how developers could go about creating their own low-level code. Without a fully fledged library, different developers have to repeat the same processes again and again just to build the basic tools that they need. A further hindrance to smaller development teams is the high cost of ARM’s proprietary compiler.

Audio Development and Android

While we have mobile hardware that is clearly capable of providing a high-quality audio app experience, there seems to be a lack of software support for developers on Google’s side of things.

For the app developer, the first port of call is usually the Android SDK. However, Google’s media APIs for Android are rather limited, to say the least. You won’t find many useful tools beyond the very basic MediaRecorder and playback-from-file functions. Delving a little deeper into the various Android packages will reveal a few tools for an equalizer, reverb presets, and noise suppression. However, there aren’t any acceptable tools for low-latency real-time audio processing, and the fragmented mix of operating system versions found out in the wild often means that these tools can be hit and miss depending on the user’s hardware.

Android Audio Examples

Android has an acceptable selection of audio focused apps already, but the platform doesn’t play so nice with the wider world of audio.

Compare this situation to Apple’s iOS platform and the contrast couldn’t be greater. Apple has long included its Core Audio digital audio infrastructure in its operating systems, which offers developers a dedicated software framework for a variety of applications, such as the ones that we have already discussed.

There seems to be a lack of software support for developers on Google’s side of things

The Core Audio library includes tools for mixing and converting signals and files, easily implementing signal chains, and essential built-in effects, all while maintaining high performance. Apple also includes easy access to its hardware abstraction layer, allowing audio applications to effortlessly interface and communicate with other pieces of hardware, such as microphones or output devices that accept incoming audio signals. Most of this functionality is completely missing from the Android platform.

Apple Core Audio digital studio

As painful as it is to admit, Apple’s Core Audio platform is far more developer friendly than the Android ecosystem.

Instead, more complicated applications may find that they have to do a lot more of the low-level coding themselves, adding to development times and costs. This is the primary reason why Android is so far behind Apple when it comes to advanced audio applications. That is, unless you can find a third-party SDK.

Introducing – Superpowered

Superpowered is one of the few feature-rich audio SDKs available for mobile, and it has just recently been made available for Android. It offers a range of tools for Android and iOS developers to easily implement more complex audio applications and effects. The SDK offers a library of pre-built functions for audio filters, reverbs, echo effects, time domain stretching, and FFT, all designed with high-quality, studio-grade audio in mind.
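
To give a feel for how an app consumes such a library, here is a hypothetical sketch of an effect chain inside an audio callback. The Effect interface and every name in it are invented for illustration and are not Superpowered’s actual API; consult the SDK’s own documentation for the real classes and signatures.

```cpp
// A generic in-place stereo effect, standing in for an SDK's pre-built units.
struct Effect {
    virtual void process(float *interleavedStereo, unsigned int frames) = 0;
    virtual ~Effect() {}
};

// The audio callback: run each buffer through the pre-built effects in turn.
// All of this must finish before the next buffer is due, or audio glitches.
void audioCallback(float *interleavedStereo, unsigned int frames,
                   Effect *filter, Effect *reverb, Effect *echo) {
    filter->process(interleavedStereo, frames); // e.g. a low-pass filter
    reverb->process(interleavedStereo, frames); // then studio-grade reverb
    echo->process(interleavedStereo, frames);   // then a tempo-synced echo
}
```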

Superpowered has been built from the ground up to maximise DSP performance, while sidestepping Android’s audio issues

Unlike other audio engines, Superpowered is not a wrapper around Core Audio or Android’s built-in libraries. Instead, it has been built from the ground up to maximise DSP performance, while sidestepping issues with Android fragmentation, its lacklustre feature set, and latency problems. Superpowered claims that, as a result, it can even outperform Apple’s industry-renowned Core Audio platform, which is no mean feat.

Superpowered is designed for ARM devices that make use of the NEON architecture extension, which means that 99% of smartphones and tablets are covered. It can be used to speed up the development of almost anything audio related, from DJ apps and instrument effects to audio book readers, podcast apps, and games. The video below shows Superpowered’s co-founder demoing a wearable DJ interface powered by the platform.

Importantly for developers, Superpowered is a cross platform SDK, allowing apps to seamlessly operate on both Android and iOS without any differences in audio quality. While iOS may be the lead platform at the moment, this opens the door for a wider number of developers to consider Android too.

Superpowered isn’t stopping with audio, though; the company will also be releasing DSP SDKs for image and video processing in the near future, which could open up Android to a new generation of media editing apps and content.

If you’re a developer interested in Superpowered’s SDK, the good news is that it is free to download and implement in your app. Once your app reaches 50,000 installs, Superpowered will help you set up a contract, which includes extra support for your app.

With hindsight, Android’s lack of out-of-the-box support for advanced audio apps and features appears to have been a missed opportunity. Fortunately, third-party developers have stepped up to provide solutions to the problem. In the future, hopefully Android will prove itself a worthy platform for power media developers too.
