The world of smartphone cameras is a tricky one, now that people are relying on point and shoots less and less while placing more demand on their phones. There was a rough transition period between the two product categories, but an AI arms race has made figuring out which cameras are worth your time harder than ever. Part of the problem is that objective results only tell you so much: if Instagram has proven anything at all, it’s that “perfect” isn’t always what looks best.

Philosophy

Consequently, we split our camera testing into objective and subjective components. This way, we can account not only for how technically capable each device is, but also for how well it handles its creative modes, computational photography, and AI.

While we fully admit that our constellation of tests doesn’t capture every single possible situation out there, we feel that our scores adequately reflect the quality of each camera, especially given how much goes into each test. In instances where a phone has multiple cameras, we record the best results from its most capable module.

Objective testing

In order to objectively test the raw capabilities of each phone camera, we take it to a controlled lab illuminated by lighting fixed at a color temperature of 6500K, and then shoot several test charts. This gives us baseline readings of sharpness, color accuracy, video quality, and more.

The lab is lined with a dark material called duvetyne so that stray reflections around the room don’t skew or invalidate our results. Once we’ve run our tests a few times and achieved repeatable results, we enter them into our scoring system. Because many phones are so close in performance that no human eye could tell the difference on certain metrics, we apply scoring curves to each of the results. This way, no phone is unfairly punished for an imperceptible error, or unduly rewarded for being slightly better than everything else.
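To illustrate the idea (the actual curves, thresholds, and weights we use vary per metric and aren’t shown here), a measured result can be mapped onto a score with a saturating curve, so that differences a human can’t perceive barely move the score while genuinely visible ones do. The function and numbers below are purely hypothetical.

```python
import math

def curved_score(measured, just_noticeable, ceiling, steepness=15.0):
    """Map a raw measurement onto a 0-100 score with a logistic curve.

    Results near or beyond the ceiling all land close to 100, so two phones a
    human couldn't tell apart earn near-identical scores, while results below
    the just-noticeable point drop off quickly. Hypothetical values only; the
    real scoring curves differ per metric.
    """
    midpoint = just_noticeable / ceiling  # where the curve crosses 50
    x = measured / ceiling                # normalize against "good enough"
    return 100.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Hypothetical stills-sharpness readings in LW/PH:
print(round(curved_score(2100, just_noticeable=1500, ceiling=2400), 1))  # ~97.7
print(round(curved_score(2200, just_noticeable=1500, ceiling=2400), 1))  # ~98.8, barely higher
```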

The results we collect are:

  1. Video sharpness (4K)
  2. Stills sharpness (MTF50, LW/PH)
  3. Oversharpening
  4. Color accuracy (ΔC 00, Saturation corrected)
  5. Color saturation (%)
  6. Noise (%)

We collect more data than this, but those metrics are what we use to assess the performance ceiling of each camera module.
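For a rough sense of what two of these numbers describe, here’s a simplified sketch, and emphatically not our actual pipeline: noise expressed as the pixel-level spread on a flat grey patch relative to full scale, and saturation as the mean chroma of the shot chart patches relative to the reference chart, computed via a standard sRGB-to-CIELAB conversion. All patch values below are made up, and the exact definitions we score against may differ.

```python
import numpy as np

def srgb_to_lab(rgb):
    """Convert sRGB values in [0, 1] to CIELAB (D65 white point)."""
    rgb = np.asarray(rgb, dtype=float)
    # sRGB gamma -> linear light
    lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    # Linear RGB -> XYZ (sRGB primaries, D65)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = lin @ m.T
    # XYZ -> Lab
    t = xyz / np.array([0.95047, 1.0, 1.08883])
    f = np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def saturation_percent(measured_rgb, reference_rgb):
    """Mean chroma of the shot patches as a percentage of the reference chroma."""
    chroma = lambda lab: np.hypot(lab[..., 1], lab[..., 2])
    return 100.0 * chroma(srgb_to_lab(measured_rgb)).mean() / chroma(srgb_to_lab(reference_rgb)).mean()

def noise_percent(gray_patch, full_scale=255.0):
    """Standard deviation of a flat grey patch as a percentage of full scale."""
    return 100.0 * np.std(np.asarray(gray_patch, dtype=float)) / full_scale

# Hypothetical patch readings (color patches scaled to [0, 1], grey patch in 8-bit levels):
measured  = np.array([[0.70, 0.30, 0.25], [0.35, 0.60, 0.30]])
reference = np.array([[0.68, 0.22, 0.18], [0.28, 0.58, 0.24]])
print(f"Saturation: {saturation_percent(measured, reference):.1f}%")
print(f"Noise: {noise_percent([118, 121, 119, 122, 120]):.2f}%")
```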

Subjective testing

Once our objective testing is done, we take the phones out on the town to see how they perform in a standard set of situations. There’s nothing like using a camera in the real world to highlight its shortcomings. While you may not be enthralled with every shot, keep in mind that we’re stressing the limits of a camera that, given the physics involved, by all accounts shouldn’t work at all.

The advent of AI cameras and computational photography throws a wrench into the gears of our testing, as cameras tend to behave differently when these features are enabled. These features also seek to eliminate the need for post-processing, applying machine learning to decide what makes a photo more pleasing to the eye.

We are working on a more standardized test for this, but for now comparisons will have to do. Wherever we can, we’ll provide comparison sliders of the same scene shot on two cameras so you can judge for yourself. This section will be updated once we find a less subjective way of assessing the images.

A note on variance

Not everyone will agree with how we do things in this test, and that’s fine. It’s also why we publish our results: that way, everyone can see what matters to them and make their own decisions. Keep in mind that this testing only reveals flaws, many of which could be mitigated with image editing. We do not edit the photos in our tests, so you may be able to improve their aesthetics with your own experimentation.