July 27, 2016, 2:17 p.m.

Appalling Science

We live in a day and age where information is available at the touch of a button. When I recently decided to research which thermal compound is best and how to best apply it to the CPU in my new workstation, I naturally turned to Google.

Usually the first two to four pages of results contain everything relevant, provided your keywords are good. This was not a hard search to perform:

thermal compound cpu compared

thermal compound cpu reviewed

how to best apply thermal compounds cpu

and so on. I must have reviewed 20 or so articles and comparisons, both written and video based. I believe this is a statistically significant sample on which to base my rant: me wasting two hours of my life on crap science.

Firstly, like so many internet searches, I received conflicting results. One site would claim product A came out on top by a wide margin. Another site would claim product B came out on top. The first site would place product B close to last, and the second site would claim product A lost. These conclusions are mutually inconsistent and therefore meaningless. How can a product be both the best and the worst? A lot of science is like this. Think of the apple - one day scientists tell us it is good for us, the next that it is bad.

I believe the root cause of most of these conflicting results stems from one fatal flaw shared by ALL their testing methodologies. Here is my logic.

Question 1: Which Thermal Compound is Best?

Lemma 1: It can be assumed that all commercial thermal compounds are reasonably good - otherwise the manufacturers would never be able to sell them and claim any competitive advantage. I am not stating that all are equal, just that it is reasonable to expect that they will all mostly work. It is therefore not unreasonable to assume that the variance between compounds may be small.

Question 2: How Best To Apply Thermal Compound?

Lemma 2: Questions 1 and 2 are inextricably linked, and one cannot be tested without taking the other into consideration.

Given these lemmas, it follows that to test Question 1, you need to have answered Question 2 first. That said, the best answer to Question 2 might not apply equally to all products, since properties such as viscosity and density will certainly influence the results. To simplify the test, it is paramount that the same application method be used throughout the testing of Question 1.

To test Question 2, it is imperative that results be repeatable. Most of the reviewers made the MAJOR mistake of testing each application method once, writing down the result and moving on to the next method. Furthermore, most of them did not measure the amount of compound used and did not repeat the application in exactly the same way. Since this is very hard to do with the applicators supplied, it would at least improve the tests if the reviewers kept the amount applied the same, kept the application method the same, repeated each method with the same compound at least 20 times, and discarded the outliers. Had this been done, the result would most probably be far more accurate, and it would also expose an important insight: how big is the standard deviation? If it is large, it might overshadow any benefit one product has over another.
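To make the protocol concrete, here is a minimal Python sketch of what I mean. The application methods, temperatures and spreads are invented numbers purely for illustration - the point is the shape of the procedure: fix the amount, repeat each method many times, trim the outliers, then report the mean and standard deviation.

```python
import random
import statistics

random.seed(42)  # reproducible fake data

def trimmed_stats(readings, trim=2):
    """Drop the `trim` lowest and highest readings as crude outlier
    removal, then return (mean, sample standard deviation)."""
    kept = sorted(readings)[trim:len(readings) - trim]
    return statistics.mean(kept), statistics.stdev(kept)

# Simulated CPU load temperatures (°C): 20 repeated applications per
# method, same compound and same amount every time. The means and
# spreads below are made up for illustration only.
methods = {
    "pea in the centre": (61.0, 0.6),
    "thin spread":       (61.4, 1.1),
    "single line":       (61.2, 0.8),
}

for name, (mu, sigma) in methods.items():
    readings = [random.gauss(mu, sigma) for _ in range(20)]
    mean, sd = trimmed_stats(readings)
    print(f"{name:18s} mean = {mean:5.2f} °C  sd = {sd:4.2f} °C")
```

The method with the lowest mean is only meaningfully "best" if the gap between the means is larger than the spread within each method.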

For Question 1, the best method from Question 2 needs to be used, and the test repeated in the same way. Once again the standard deviation needs to be taken and compared with the standard deviation from the Question 2 tests. If σ2 is larger than σ1, the exact way you apply the thermal compound matters more than the product you are using. If σ1 is larger, then the product matters more than the exact way it is applied (naturally both numbers need to be in the same unit of measure - in this case, °C).
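Continuing the same sketch (it reuses trimmed_stats, random and statistics from above, again with invented numbers): σ1 is the spread between products, each applied with the one best method, and σ2 is the repeatability spread of a single method from the previous step.

```python
# Hypothetical products, each measured 20 times with the best method.
products = {
    "compound A": (60.7, 0.5),
    "compound B": (61.1, 0.5),
    "compound C": (61.6, 0.5),
}

product_means = []
for name, (mu, sigma) in products.items():
    readings = [random.gauss(mu, sigma) for _ in range(20)]
    mean, _ = trimmed_stats(readings)
    product_means.append(mean)

sigma1 = statistics.stdev(product_means)  # product-to-product spread
_, sigma2 = trimmed_stats(
    [random.gauss(61.0, 0.6) for _ in range(20)])  # application spread

if sigma2 > sigma1:
    print("How you apply the compound matters more than which one you buy.")
else:
    print("Which compound you buy matters more than how you apply it.")
```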

I know this is much more work, but if you are going to make claims about something, they had best be correct - or do not make them at all.

My gripe is that with the internet being a conduit for every person and his cat's voice, everybody thinks they are qualified to publish reviews, and then does just that. And since most readers are similarly disadvantaged, they do not know they are being fed junk science when they consume these reviews.

I feel that the scientific process is getting lost in the sheer volume of data. Peer review, verification and integrity are falling by the wayside.

In fact, I have seen so many examples of this that I have begun to ignore online reviews unless they come from a trusted source. I have stopped reading customer reviews of products altogether, because if you do read them, you will never buy anything. Customer reviews suffer from selection bias: only people who are extremely happy or extremely dissatisfied with a product will go online and comment. Human nature being what it is, this skews strongly towards the negative, as people prefer to rant (like me, here) when they are upset rather than commend someone for doing a good job. The reviews therefore skew opinion of the product negatively, whereas it is quite possible that 99% of the people who own said product are extremely happy with it.