Why incremental change is difficult to shout about.
I’ve been writing product and software reviews on the Northlight Images site for about 10 years now, and whilst the writing side of things has become easier with practice, spotting the differences between one model and its replacement has become increasingly difficult.
Over the years I’ve been writing printer reviews, it has become harder and harder to tell successive models apart by print quality alone.
At 17″ paper size and above, the rate of change is far slower, so when new models do appear there tend to be distinct differences between them – although Canon’s recent x400 range is not different enough from the previous x300 range for me to consider replacing our iPF8300.
If the model from 2012 had produced very poor B&W prints, and the 2014 model produced good ones, then that would be easy to write about – but that isn’t the case. Indeed, to say that the 2014 model is a big improvement almost implies that the old one was not good enough (and if so, why did I buy one?)
At the consumer end of printers, well, they are just that… consumer items. I don’t review small printers (below A3+) since the changes are so frequent, and quite difficult to spot.
I’m almost sorry for the marketing people who have to think up campaigns for new products. I say almost sorry, since I have to read through some of the drivel that makes it through into press releases ;-)
But surely a list of features would help?
I’ve long had a policy of not doing either comparative reviews or giving ratings/scores to products I review.
Too many reviews list feature set ‘A’ against feature set ‘B’ and draw conclusions that because camera ‘A’ weighs 438 g and ‘B’ weighs 452 g, then ‘A’ gets an extra point. One manufacturer did confide in me that their new product had several ‘features’ in its marketing blurb purely so that lazy reviewers could see the difference.
If you are in a hurry, most press information packs will include helpful notes for reviewers. I get to read enough of these that I can spot where chunks of them have been used for ‘inspiration’ by reviewers. I do read them, but usually after writing a review, just in case I’ve missed some important aspect of a product (usually because it was of no interest to me – it happens…)
But comparisons help people choose
Comparative reviews often tell you more about the biases/preferences of the author than about the products, and unless done in a strictly methodical manner, are meaningless. They run the risk of comparing features that are not really the same, just to get a pros/cons list at the end of the article.
Ratings and scores in a review are always a warning flag to me – they are as much an aid to hard-pressed reviewers in generating copy as they might be to a reader. Measurements and charts have their place, but they need explanations as to why the reader should care what they show.
I prefer to let readers make up their own minds based on the features I’ve chosen to explore in my review, with consistency coming from the fact that, with any printer for example, you’ll be looking at similar things.
I know that some people like to delegate the mental effort of understanding what they want to a table of figures, but I’m afraid they will just have to do a bit more comprehension work themselves for stuff I write ;-)
Where there are biases, I try to mention them. For example:
- I’ve no Windows PCs at all, so everything gets covered on a Mac.
- If I’m looking at a camera or lens, then video might get mentioned, but that’s about all.
- I used to do research in HCI and usability, so I have a very low tolerance for bad software and interface design.
- Printer running costs are notoriously difficult to measure, and I often only get a printer for a month or so to use.
- I seriously dislike photos with an obvious ‘HDR look’. Just because dials go up to 11 does not mean that 0, 1 or 2 won’t do.
And the Northlight Gold Pineapple goes to…
In terms of equipment and software, we’ve seen tremendous advances in digital imaging and printing over the last 10-15 years, and perhaps we should not be surprised that really major jumps in (useful) performance become rarer, and improvements more difficult to point out in a way that is meaningful for people wanting to buy a product.
I’ve always believed in writing reviews that are useful for people with a product, not just those looking to see if they want to buy it, so questions, feedback and suggestions are always very welcome.
However, I’m always looking to improve the articles I write, so perhaps I should look at adding spurious performance figures to the end of reviews?
I’m also inclined to start conferring prestigious awards, so the recent iPF6450 will get a ‘Northlight Gold Pineapple’ award and our ‘highly recommended’ status – I’m just not quite sure whether it scores 88% or 89% overall.