The world of smartphone cameras is a tricky one, now that people are relying on point-and-shoots less and less while placing more demand on their phones. There was a rough transition period between the two product categories, but an AI arms race has made it harder to figure out which cameras are worth your time. Part of the problem is that objective results only tell you so much: if Instagram has proven anything at all, it’s that “perfect” isn’t always what looks the best.
Consequently, we primarily focus on subjective results, as past data regarding objective performance has proven to be a poor predictor of subjective picture quality. Some measures are more useful than others, but because humans are our target audience, this approach lets us speak more directly to the average consumer.
While we fully admit that our constellation of tests doesn’t capture every single possible situation out there, we feel that our scores will adequately reflect the quality of each camera, especially given how much goes into each test. In instances where there are multiple cameras per phone, we record the best results from the most capable module.
Once we’ve set up each phone and are ready, we take the phones out on the town to see how they perform in a standard set of situations. Nothing like using a camera in the real world to highlight shortcomings, right? While you may not be enthralled with every shot, keep in mind we’re stressing the limits of a camera that by all accounts shouldn’t work at all due to the physics of the situation.
The advent of AI cameras and computational photography throws a wrench into the gears of our testing, as cameras tend to behave differently when these features are enabled. These features also seek to eliminate the need for post-processing, applying machine learning to determine what makes a photo more pleasing to the eye.
We are working on a more standardized test for this, but for now comparisons will have to do. Wherever we can, we’ll provide comparison sliders of the same scene shot on two cameras so you can see it for yourself.
For every phone that we review, we take several photos in a narrowly defined set of situations that a typical smartphone user might want to shoot in: a dimly lit restaurant, a bright landscape during the day, a selfie, and so on.
We then rate the photos taken in each situation on a scale of 1-10, and compile the results. The following are the scores we record and keep for our use:
- Portrait mode
- Zoom / telephoto
- App functionality
Keep in mind that one shot isn’t enough for any of these categories, nor is a single shot reused across categories. That way, human error is minimized as much as it can be when taking many, many photos.
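To make the scoring concrete, here is a minimal sketch of how ratings like these could be compiled into per-category scores. The category names and the 1-10 scale come from the methodology above; the specific ratings, the averaging approach, and the function name are illustrative assumptions, not our actual tooling.

```python
def compile_scores(ratings: dict[str, list[int]]) -> dict[str, float]:
    """Average multiple 1-10 ratings per category, rounded to one decimal.

    Hypothetical example: the real pipeline may weight shots differently.
    """
    for category, shots in ratings.items():
        if not shots:
            raise ValueError(f"No shots rated for {category}")
        if any(not 1 <= r <= 10 for r in shots):
            raise ValueError(f"Ratings must be on a 1-10 scale: {category}")
    return {category: round(sum(shots) / len(shots), 1)
            for category, shots in ratings.items()}


# Several shots per category, never reusing a shot across categories.
ratings = {
    "Portrait mode": [7, 8, 6],
    "Zoom / telephoto": [5, 6, 5, 4],
    "App functionality": [9, 8],
}
print(compile_scores(ratings))
```

Averaging several independently rated shots per category is one simple way to smooth out the human error mentioned above: a single unlucky frame can’t sink a category on its own.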
A note on variance
Not everyone will agree with how we do things in this test. That’s fine. It’s also why we publish our results: that way, everyone can look at what matters to them and make their own decisions. Keep in mind that this testing primarily reveals flaws, which could be mitigated with image editing. We do not edit the photos in our tests, so you may be able to improve their aesthetics with your own experimentation.
Additionally, you might find that you like technically imperfect things like oversaturation, oversharpening, beauty modes, or aggressive noise reduction. While these are objectively poor results, more and more people ask more and more of their smartphone cameras, in ways that would shock a professional photographer. Because we’re not writing our reviews for the professional shutterbug, we test for breadth rather than depth when it comes to camera performance.