

Android's Sound Amplifier feature is rolling out to more devices (Updated)

Google unveiled details about Sound Amplifier and its new Dynamics Processing Effect that are heading our way with Android P.
July 24, 2019

Update, July 24, 2019 (12:00PM EST): Originally announced during Google I/O 2018, Google’s Sound Amplifier app is now available for Android devices running Android 6.0 Marshmallow and above.

As the name suggests, Sound Amplifier filters, augments, and amplifies certain environmental sounds, such as those from people and TVs. The app sorts out what it deems important and filters out background noise to make the amplified audio stand out more. You can even adjust the sound for each of your ears.

Sound Amplifier is available to download from the Play Store.


Original article, May 11, 2018 (12:52AM EST): The Google I/O keynote may already seem like a distant memory, but that doesn’t mean there aren’t little jewels to be dug up. At quite literally the last session of the day, Google unveiled details about Sound Amplifier and its new Dynamics Processing Effect that are heading our way with Android P.

The new features offer a lot of flexible options for producing superior sound in real time. We’re not just talking about the standard bass boost or stereo enhancement features here. Google intends for this new Audio Framework to be used for everything from microphone noise suppression to leveling out the volume of a movie you’re watching late at night.


The Dynamics Processing Effect is baked into AOSP with Android P and will be available to both OEMs and app developers. This means that every smartphone brand stands to benefit, although some manufacturers may still prefer to use third-party or in-house audio signal chains for at least part of their designs.
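For app developers, the effect should surface as the DynamicsProcessing class in Android P’s android.media.audiofx package. Here’s a rough Kotlin sketch of attaching it to an existing playback session; the channel and band counts are purely illustrative, not recommended settings:

```kotlin
import android.media.audiofx.DynamicsProcessing

// Rough sketch: attach the Dynamics Processing Effect to an existing playback
// session. audioSessionId would come from your MediaPlayer or AudioTrack.
fun attachDynamicsProcessing(audioSessionId: Int): DynamicsProcessing {
    // Describe the four stages per channel: Pre-EQ, multi-band compressor,
    // Post-EQ, and limiter. Here: stereo, 2 Pre-EQ bands, 4 MBC bands,
    // 2 Post-EQ bands, limiter enabled. Counts are illustrative.
    val config = DynamicsProcessing.Config.Builder(
        DynamicsProcessing.VARIANT_FAVOR_FREQUENCY_RESOLUTION,
        2,       // channel count (stereo)
        true, 2, // Pre-EQ in use, number of bands
        true, 4, // multi-band compressor in use, number of bands
        true, 2, // Post-EQ in use, number of bands
        true     // limiter in use
    ).build()

    val dp = DynamicsProcessing(0 /* priority */, audioSessionId, config)
    dp.setEnabled(true)
    return dp
}
```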

A closer look at the architecture

The Dynamics Processing Effect is broken down into four stages for each audio channel, and the framework supports stereo and 5.1 surround sound. There’s a highly flexible Pre-EQ at the input, which allows the developer to configure both the number and width of the bands. This filtering stage is most likely to be used in conjunction with microphone audio to prepare it for the next stage.
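As a hedged example, configuring those Pre-EQ bands on the instance created above could look something like this; the cutoff frequencies and gains are made-up values, not tuning advice:

```kotlin
import android.media.audiofx.DynamicsProcessing

// Sketch: shape the Pre-EQ on an existing DynamicsProcessing instance.
// Two bands were declared in the config above; values are illustrative.
fun configurePreEq(dp: DynamicsProcessing) {
    // Band 0: everything up to 250 Hz, attenuated to tame low-frequency rumble
    // before it hits the compressor stage.
    dp.setPreEqBandAllChannelsTo(0, DynamicsProcessing.EqBand(true, 250f, -6f))
    // Band 1: 250 Hz up to 20 kHz, left flat.
    dp.setPreEqBandAllChannelsTo(1, DynamicsProcessing.EqBand(true, 20_000f, 0f))
}
```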

The second stage is a multi-band compressor/expander. A compressor smoothly reduces volume for louder sounds, while an expander does the reverse and increases the amplitude of quiet sounds. As this is a multi-band system, different frequencies can be compressed or expanded by different amounts, making this a very handy tool for noise and background suppression.
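To make that concrete, here’s a sketch of using the compressor/expander bands for background suppression; the thresholds and ratios are illustrative rather than tuned values:

```kotlin
import android.media.audiofx.DynamicsProcessing

// Sketch: use the multi-band compressor/expander stage as a gentle noise gate.
// Numbers are illustrative, not tuned values.
fun configureBackgroundSuppression(dp: DynamicsProcessing, mbcBandCount: Int) {
    for (band in 0 until mbcBandCount) {
        // Read the current band settings from channel 0, tweak them, then
        // write the result back to every channel.
        val mbcBand = dp.getMbcBandByChannelIndex(0, band)
        mbcBand.threshold = -30f           // dB level where compression begins
        mbcBand.ratio = 2f                 // compress loud peaks 2:1 above that
        mbcBand.noiseGateThreshold = -80f  // anything quieter gets pushed down...
        mbcBand.expanderRatio = 1.5f       // ...by downward expansion
        dp.setMbcBandAllChannelsTo(band, mbcBand)
    }
}
```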

This stage is followed by a Post-EQ, designed to fine-tune the output signal. It works in the same way as the Pre-EQ, giving developers full control over the filters. Finally, there’s a single-band limiter, designed to roll off loud pops and other transients to protect the output speaker or headphones. The limiter in each channel can be added to a link group, so that every member of the group reduces its output by the same amount, preserving the stereo or surround sound image.
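A minimal sketch of that linking, assuming the stereo setup from the earlier snippets, might look like this:

```kotlin
import android.media.audiofx.DynamicsProcessing

// Sketch: put both channels' limiters in the same link group so they reduce
// gain together and keep the stereo image intact. Values are illustrative.
fun configureLinkedLimiters(dp: DynamicsProcessing, channelCount: Int = 2) {
    for (channel in 0 until channelCount) {
        val limiter = dp.getLimiterByChannelIndex(channel)
        limiter.linkGroup = 0      // same group ID for every channel
        limiter.threshold = -2f    // dB ceiling before limiting kicks in
        limiter.ratio = 10f        // heavy gain reduction above that ceiling
        limiter.attackTime = 1f    // milliseconds
        limiter.releaseTime = 60f  // milliseconds
        dp.setLimiterByChannelIndex(channel, limiter)
    }
}
```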

What will it be used for?

With more than 100 different parameters on offer, these settings won’t be exposed directly to the user. Instead, it will be up to app developers to implement them as specific features in their own software and surface more limited, common-sense controls to the user.

At I/O, Google showcased a new Accessibility Service feature called Sound Amplifier, which distills the 100+ available parameters from the Dynamics Processing Effect down into just two sliders for the user. The Loudness slider adjusts the overall volume, while the Tuning slider filters out different background frequencies. The end result sees the user able to better pick out a speaker’s voice against the background noise in a pre-recorded video. Other settings include “active listening” to filter out background noise in real time, and controls for the microphone.
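Google hasn’t published how Sound Amplifier maps its sliders internally, but a hypothetical app could collapse the parameter space in a similar way. In this sketch, applySliders is an invented helper and the mapping is purely illustrative:

```kotlin
import android.media.audiofx.DynamicsProcessing

// Hypothetical two-slider mapping, in the spirit of Sound Amplifier (not
// Google's actual implementation). Both sliders range from 0.0 to 1.0.
fun applySliders(dp: DynamicsProcessing, loudness: Float, tuning: Float) {
    // Loudness becomes input gain across all channels, up to +20 dB.
    dp.setInputGainAllChannelsTo(loudness * 20f)
    // Tuning becomes progressively stronger attenuation of the lowest Post-EQ
    // band, cutting background rumble and hum below 250 Hz.
    dp.setPostEqBandAllChannelsTo(0, DynamicsProcessing.EqBand(true, 250f, -tuning * 24f))
}
```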

With apps, both music and video can benefit from the extra controls. With video in particular, a TV “midnight mode” could be implemented to quieten the overall volume yet ensure that speech is still audible. Loudness maximization and mastering were also listed as potential uses, and real-time apps making use of the microphone or voice will surely benefit from the EQ and noise suppression controls.
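Such a “midnight mode” would essentially be heavy compression with make-up gain. Here’s a hedged sketch built on the same effect; the helper and the numbers are invented for illustration:

```kotlin
import android.media.audiofx.DynamicsProcessing

// Hypothetical "midnight mode": squash the dynamic range so loud passages come
// down while quiet dialogue stays audible. Invented helper; values illustrative.
fun applyMidnightMode(dp: DynamicsProcessing, mbcBandCount: Int) {
    for (band in 0 until mbcBandCount) {
        val mbcBand = dp.getMbcBandByChannelIndex(0, band)
        mbcBand.threshold = -40f  // start compressing well below full scale
        mbcBand.ratio = 4f        // pull loud effects down fairly hard
        mbcBand.postGain = 6f     // make-up gain so speech comes back up
        dp.setMbcBandAllChannelsTo(band, mbcBand)
    }
}
```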

Google also anticipates that manufacturers will use this new Audio Framework to configure and improve the sound quality of their hardware. Microphone and call noise suppression seem like obvious candidates. Speaker tuning is also an important step in modern smartphone design, and engineers will now be able to do this directly within AOSP rather than via third-party tools. The same technology could also be used for tuning headphones.

Wrap up

Following on from the Bluetooth audio enhancements in Android Oreo, it’s good to see some more under-the-hood audio features making their way to us with Android P. We’ll have to wait and see exactly how manufacturers and app developers use these new tools, but they’re bound to result in some better-sounding experiences in the near future.