There’s no shortage of strong opinions on display tech online, but we’re able to objectively test it! There’s a lot of hyperbole and… somewhat manufactured vitriol out there, so we’re interested in sifting through what science has to say.

The tests

To test displays accurately, we want to capture the best possible result from each smartphone. For example, when we test peak brightness, we set the brightness to its maximum and measure peak brightness in both the automatic and default modes.

We place a spectrophotometer on each screen to record key metrics from each smartphone. We then run several test patterns and record the results to build a fuller picture of how well each unit performs.

For the rest, we rely on the CalMAN testing suite from our friends at SpectraCal. This software coordinates an app on the phone with a spectrophotometer to put each display — be it OLED, LCD, or otherwise — through its paces. Because displays are often calibrated to meet different philosophies, it’s our policy to adjust our testing to fit the phone’s intended standards, so errors are weighed against the correct color gamut, gamma target, and the like. This way, displays calibrated for DCI-P3 aren’t measured against an sRGB gamut, and so on.
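Picking the right standard starts with the right reference primaries. Here’s a minimal sketch of that lookup step — the chromaticity values are the published CIE 1931 xy coordinates for each standard, but the dict and function names are our own illustration, not CalMAN’s API. Note that phone “DCI-P3” modes typically target Display P3, which shares sRGB’s D65 white point rather than cinema white, so D65 is listed for both.

```python
# CIE 1931 xy chromaticities of the red, green, and blue primaries,
# plus the D65 white point (0.3127, 0.3290) used by both sRGB and
# Display P3 (the P3 variant phones actually ship).
GAMUTS = {
    "sRGB":   {"R": (0.640, 0.330), "G": (0.300, 0.600),
               "B": (0.150, 0.060), "white": (0.3127, 0.3290)},
    "DCI-P3": {"R": (0.680, 0.320), "G": (0.265, 0.690),
               "B": (0.150, 0.060), "white": (0.3127, 0.3290)},
}

def reference_primaries(display_target: str) -> dict:
    """Return the gamut a display should be judged against."""
    if display_target not in GAMUTS:
        raise ValueError(f"No reference data for {display_target!r}")
    return GAMUTS[display_target]
```

With this in place, every measured color on a P3-calibrated panel gets compared to P3 targets instead of being penalized for (correctly) overshooting sRGB.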

[Chart: greyscale performance of the Samsung Galaxy S9+]

Here’s an example of the data we gather on screen performance.

Using CalMAN, we can accurately measure color accuracy, gamma, brightness, and more. In our deep-dive reviews we point out any trouble spots, but not to worry: we contextualize all the results for you in easy-to-digest charts and comparisons.

The scores

Because each phone is measured against its own calibration targets, the math gets a little tricky when comparing different results, so we take special care to account for this. Our system makes it easy, and SpectraCal’s software makes the job almost automatic: all we have to do is run the tests and export the data.

We measure:

  1. Color temperature
  2. RGB balance
  3. Pixel density
  4. Peak brightness
  5. Color error (DeltaE2000)
  6. Greyscale error
  7. Alternate display modes

For the metrics that are perceptually limited, we apply our normalization curves so that the scores for each test aren’t unfairly punished or rewarded. This way, only differences that are noticeable to a human being produce large swings in points. If you can’t see the difference, there’s no sense in hammering something that gets 99.9% of the way there, right?
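The exact shape of those curves isn’t published, but the idea can be sketched as follows — a hypothetical mapping of our own, assuming a just-noticeable-difference (JND) of roughly ΔE ≈ 2.3 and an arbitrary decay rate and score floor:

```python
import math

def normalized_score(delta_e: float, jnd: float = 2.3,
                     scale: float = 4.0, floor: float = 10.0) -> float:
    """Map a raw color error to a 0-100 score with a perceptual knee.

    Errors below one JND are invisible, so they earn full marks;
    beyond that, the score decays smoothly toward a floor instead of
    punishing tiny, imperceptible differences with huge point swings.
    """
    if delta_e <= jnd:
        return 100.0
    return max(floor, 100.0 * math.exp(-(delta_e - jnd) / scale))
```

Under this sketch, a display with ΔE of 1.0 scores a perfect 100, while errors well past the visibility threshold slide down the curve rather than falling off a cliff.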

We also make sure to run every test in both the default display mode and the best-performing display mode for each phone. This way, we’re both fair and accurate to what the manufacturer is trying to achieve with its devices.

