What is the Google Pixel 4's Neural Core?
The Google Pixel 4 and Pixel 4 XL boast the company’s best-ever software features and camera capabilities, made possible, in part, by the inclusion of the Pixel Neural Core. This little chip sits alongside the Pixel 4’s main processor to efficiently run Google’s powerful machine learning workloads, which range from image processing to voice recognition and transcription.
The Neural Core is not Google’s first crack at an in-house machine learning processor for smartphones, though. The Pixel 2 and 3 shipped with the Pixel Visual Core, designed to offload imaging tasks like HDR+ for improved performance and energy efficiency. The Pixel 4’s Neural Core builds on this foundation with improved capabilities and a selection of new use cases.
Don’t miss: Here’s what the Pixel 4’s ASTRO mode can do
How does the Pixel’s Neural Core work?
Google has not yet shared details on exactly what goes on inside its Neural Core. However, we do know how the previous-generation Visual Core works, as well as the types of techniques typically used in machine learning (ML) processors.
Unlike a traditional CPU built to handle a wide range of computational tasks, machine learning processors, known as neural processing units (NPUs), are optimized for a few specific, complex mathematical operations. This makes chips like the Neural Core more akin to digital signal processors (DSPs) or graphics processing units (GPUs), but tuned for the specific operations used by machine learning algorithms.
Fused multiply-accumulate (FMA) is a very common imaging and voice ML operation that you won’t find wide-scale support for inside a CPU. Unlike the 32- and 64-bit operations of modern CPUs, these calculations are performed on data sizes of just 16, 8, or even 4 bits. Custom hardware blocks are required to support these operations at maximum efficiency.
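To make that concrete, here is a minimal Python sketch of the pattern such hardware accelerates: multiplying low-precision values and summing them into a wider accumulator. The function name and data sizes here are hypothetical, not taken from Google’s design. A CPU grinds through this loop one step at a time, while an NPU performs huge numbers of these multiply-accumulate steps per cycle in dedicated silicon.

```python
import numpy as np

# A minimal sketch (not Google's implementation) of the fused multiply-accumulate
# pattern: 8-bit weights and activations are multiplied and summed into a wider
# 32-bit accumulator so the running total does not overflow.
def int8_dot_product(activations: np.ndarray, weights: np.ndarray) -> np.int32:
    acc = np.int32(0)
    for a, w in zip(activations.astype(np.int32), weights.astype(np.int32)):
        acc += a * w  # one multiply-accumulate; an NPU fuses this into a single hardware step
    return acc

# 64 simulated int8 activations and weights, as a quantized neural network layer would use.
activations = np.random.randint(-128, 128, size=64, dtype=np.int8)
weights = np.random.randint(-128, 128, size=64, dtype=np.int8)
print(int8_dot_product(activations, weights))
```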
The Neural Core accelerates image and voice algorithms that run less efficiently on a CPU or GPU.
Rather than taking up multiple CPU cycles, the Neural Core builds dedicated arithmetic logic units (ALUs) into hardware to handle these instructions quickly and with minimal power consumption. The chip most likely comprises hundreds of these ALUs across multiple cores, with shared local memory and a microprocessor overseeing task scheduling.
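Since Google hasn’t published the design, the following Python sketch is only a toy model of that division of labor: a scheduler hands each output row of a neural network layer to one of several cores, each standing in for a bank of ALUs. Everything here, from the core count to the round-robin assignment, is an illustrative assumption.

```python
import numpy as np

# Toy model of an NPU-style division of labor (the real scheduling scheme is
# unpublished): a scheduler splits a layer's output rows across several cores,
# and each core's ALUs perform the multiply-accumulates for its rows.
NUM_CORES = 4

def npu_layer(inputs: np.ndarray, weights: np.ndarray) -> np.ndarray:
    outputs = np.zeros(weights.shape[0], dtype=np.int32)
    for core in range(NUM_CORES):
        # Round-robin assignment of output rows to cores; real hardware would
        # run these cores, and the MACs within them, in parallel.
        for row in range(core, weights.shape[0], NUM_CORES):
            outputs[row] = np.dot(weights[row].astype(np.int32),
                                  inputs.astype(np.int32))
    return outputs

inputs = np.random.randint(-128, 128, size=32, dtype=np.int8)
weights = np.random.randint(-128, 128, size=(16, 32), dtype=np.int8)
print(npu_layer(inputs, weights))
```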
Google has almost certainly optimized its latest hardware design around its Assistant, voice, and imaging algorithms. This dependence on the Neural Core’s ALU capabilities is why the Pixel 4’s latest photography features won’t make their way to older handsets.
What can the Neural Core do?
The Neural Core appears to be a key ingredient in a number of new functions packed into the Google Pixel 4 and Pixel 4 XL. These improvements center mainly on image and voice processing.
On the photography side, the list includes Dual Exposure Controls, an astrophotography mode, Live HDR+ previews, and an improved Night Sight. The Pixel 4 performs dual exposure adjustments and HDR+ in real time, so users can see the results of their pictures before hitting the shutter. This points to a major increase in computational imaging power compared to the Pixel 3.
In addition, these phones introduce learning-based white balance into the camera system, which corrects the yellow hues often associated with low-light pictures. Multiple exposure captures, sky detection, and image combination power the ASTRO Night Sight mode, requiring a hefty amount of processing and image-detection smarts. Finally, the Pixel 4’s Frequent Faces feature identifies and recommends better photos of the people you photograph most often.
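Frame averaging gives a feel for why stacking multiple exposures helps in the dark. The snippet below is a simplified illustration of that general technique, not Google’s actual pipeline, which also has to align frames, detect the sky, and mask out motion.

```python
import numpy as np

# Simplified exposure stacking, one basic technique behind long-exposure
# astrophotography modes (an assumption, not Google's published pipeline):
# averaging N aligned frames cuts random sensor noise by roughly sqrt(N).
def stack_frames(frames: list) -> np.ndarray:
    stacked = np.mean(np.stack(frames).astype(np.float32), axis=0)
    return np.clip(stacked, 0, 255).astype(np.uint8)

# Simulate 16 noisy captures of the same dim patch of sky.
true_scene = np.full((4, 4), 30.0)  # the "real" brightness of each pixel
frames = [np.clip(true_scene + np.random.normal(0, 10, true_scene.shape), 0, 255).astype(np.uint8)
          for _ in range(16)]
print(stack_frames(frames))  # values sit far closer to 30 than in any single noisy frame
```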
The Pixel 4 also boasts an upgraded Assistant, sporting improved language models that can now run natively on the device rather than in the cloud. New voice features include Continued Conversation, audio transcription via the Recorder app, and improved speech recognition. The Neural Core likely plays a part in the phone’s 3D face unlock technology too.
A wave of the future
Google’s Neural Core is the company’s most powerful piece of in-house smartphone silicon yet, enabling more efficient real-time image editing than before. Google prides itself on its industry-leading machine learning technologies, and the Pixel 4’s biggest selling points rely on the fruits of these investments.
However, the Pixel 4 is not the only smartphone sporting a powerful machine learning processor. Every smartphone powered by a Snapdragon 855 offers dedicated machine learning hardware, although Google clearly has different, and perhaps more demanding, requirements for its handsets. Likewise, HUAWEI’s Kirin 990 boasts dual in-house NPUs powerful enough for real-time bokeh video effects and more.
Machine learning hardware is fast becoming a cornerstone of flagship smartphone image, video, and AI capabilities. Google, with its Pixel 4 and Neural Core, places right near the top of the pack.
Up next: Google’s Pixel cameras aren’t trying to be cameras at all