22 Aug 2011

“How do you know that?” – the need for evidence.
Rob Norton
Summary of paper presented at
the AFSA meeting, Geelong, August 2011 and
the Agrivision Clients Day, Swan Hill, October 2011, and
Getting the Dirt on Dairy Soils, July/August 2012.

Fertilizers are evaluated for their environmental safety and for potential hazards in manufacture, distribution and handling, to ensure those in the supply chain are not harmed. These are important issues for the industry, but unfortunately there is rarely any requirement to present scientifically valid data on product efficacy – put simply, does it work?

A key to the continued development of fertilizers to support sustainable food production has been a clear understanding of soil science and crop agronomy, with testing regimes put in place by government and non-government agencies to test whether the products worked – and if not, why not. In the past two decades, many “alternative” fertilizer products have come onto the market. These may be a response to new markets such as the organics industry, or come from those searching for strategies to unlock nutrients bound in the soil. Some are also the inevitable “snake oil”.

When checking the claims made for a product, the first and most important thing is the evidence the supplier has for the crop response. This evidence should be gathered in a scientifically credible way, using methods that are explainable and reproducible.

Appropriate controls – every fertilizer experiment should include a nil treatment (no added fertilizer) and a standard practice. Without these checks, there is no way of knowing whether the new product actually did anything, or whether it was better than the standard treatment. Comparisons should at least be made on a nutrient-for-nutrient basis, where similar amounts of each nutrient are applied, so that the comparative efficacy is clear.

Replicated – are the trials replicated? That is, are the treatments at a particular site repeated so that the information collected can be compared statistically? Replication is the basis for estimating natural variation in an experiment, and without it the effects of the treatments cannot be distinguished from chance.

Randomization – the treatments should be randomized so that no treatment occupies the same position in every replicate. Often treatments are blocked together so that paddock trends can be accounted for in the analysis.
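The replication, randomization and blocking described above are the basis of a randomized complete block design. As a minimal sketch (the treatment names and block count are hypothetical, chosen only for illustration), a layout like this can be generated with a few lines of Python:

```python
import random

# Hypothetical treatments: the nil control, the standard practice,
# and the new product under test.
treatments = ["nil", "standard", "new product"]
n_blocks = 4  # four replicates, each a block laid across the paddock trend

random.seed(1)  # fixed seed so the same layout can be reproduced

layout = []
for block in range(1, n_blocks + 1):
    # A fresh random order within each block, so no treatment
    # sits in the same position in every replicate.
    order = random.sample(treatments, len(treatments))
    layout.append(order)
    print(f"Block {block}: {order}")
```

Each block contains every treatment once, so block-to-block differences (soil type, slope, previous cropping) can be separated from treatment effects in the analysis.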

Repeated – one trial in one year at one site does not prove a response. Has the trial been repeated on relevant soil types, in appropriate regions and on the same test crop?

Compared statistically – a replicated trial gives a mean (or average) and a measure of error for that mean. The error term defines a range of “normal” values for the mean, so that the ranges of different treatments can be compared. Means are significantly different when these ranges do not overlap at a chosen probability level. If they do overlap, even though the numbers differ, there is no “significant” difference between the treatments.
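The comparison of means and their ranges can be sketched with made-up yield figures (the numbers below are invented for illustration, not from any trial). Taking roughly two standard errors either side of each mean as the “normal” range:

```python
import statistics

# Hypothetical yields (t/ha) from four replicates of each treatment
standard = [4.1, 4.4, 4.0, 4.3]
new_product = [4.2, 4.6, 4.1, 4.5]

def mean_and_sem(values):
    """Return the mean and the standard error of the mean."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / len(values) ** 0.5
    return m, sem

for name, data in [("standard", standard), ("new product", new_product)]:
    m, sem = mean_and_sem(data)
    # Roughly a 95% range: mean plus or minus two standard errors
    print(f"{name}: mean {m:.2f}, range {m - 2*sem:.2f} to {m + 2*sem:.2f}")
```

With these invented figures the new product's mean (4.35) is higher than the standard's (4.20), but the two ranges overlap, so the trial would not show a significant difference – exactly the situation the paragraph above warns about.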

Scientists start with the premise that there is no difference between the treatments, and design experiments to test this. Endorsements and product testimonials are no substitute for good experimental design and robust statistical analyses.

Dr Jim Virgona, an academic at Charles Sturt University in Australia, recently coined the term “evidence-based agriculture”, which demands that good science be used to support decisions by growers and advisors. When presented with product claims, the question to ask is “How do you know that?” It is up to those marketing the products to provide the evidence we need to keep our farming systems sustainable and productive.

More about: Nutrient Mythbusters