Methodological discrepancies can cause all kinds of confusion, especially when simple keystone fallacies are not hammered out of the literature. A good and relevant example of this is the common-or-garden, often-spotted jump squat.
A slew of studies have tried to identify the %1RM at which maximal power output is achieved (a trait of questionable value to begin with). Reported values for the optimal external load have ranged from 0-90% of 1RM, a remarkably wide spread.
More recent studies have reported that the actual peak power load sits at 0-1% of 1RM, with the discrepancy put down to errors in how previous studies collected and processed their data.
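One processing choice often blamed for the inflated optimal-load figures is calculating power from the bar load alone rather than the whole system (body plus bar). The toy model below is my own illustrative sketch, not taken from any particular study: it assumes a hypothetical athlete with a fixed average push-off force, and shows how the two calculation methods point to very different "optimal" loads from the same movement.

```python
import math

# All figures below are hypothetical, chosen only to illustrate the artefact.
BODY_MASS = 80.0   # kg, athlete body mass
ONE_RM = 120.0     # kg, squat 1RM
F_MAX = 2600.0     # N, assumed constant average push-off force
PUSH_DIST = 0.4    # m, push-off range of motion
G = 9.81           # m/s^2

def takeoff_velocity(total_mass):
    """Work-energy estimate of take-off velocity for a given system mass."""
    net_force = F_MAX - total_mass * G
    if net_force <= 0:
        return 0.0  # load too heavy to leave the ground
    return math.sqrt(2 * PUSH_DIST * net_force / total_mass)

results = {}
for pct in range(0, 100, 10):
    bar = ONE_RM * pct / 100
    velocity = takeoff_velocity(BODY_MASS + bar)
    system_power = F_MAX * velocity        # force applied to the whole system
    bar_only_power = bar * G * velocity    # ignores body mass (flawed method)
    results[pct] = (system_power, bar_only_power)

best_system = max(results, key=lambda p: results[p][0])
best_bar = max(results, key=lambda p: results[p][1])
print(f"system-mass method says optimal load: {best_system}% 1RM")
print(f"bar-only method says optimal load:    {best_bar}% 1RM")
```

Under these assumptions the system-mass calculation peaks at 0% 1RM (body mass alone), while the bar-only calculation points to a heavy load, simply because the bar's contribution grows with load even as velocity falls.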
There are various reasons why this topic, along with other more "niche" questions, receives more attention than others: observational studies are easier to run than training studies, studies on untrained populations are easier than trying to source a trained population, and so on.
These quibbles aside, this example of recommendations being extrapolated from flawed data can be found in a very high percentage of the sports science (and certainly the strength and conditioning) literature.
When observational studies use non-validated (or loosely validated) measures on small populations and then run them through weak statistical tests, apparent phenomena appear where in actuality none exist.
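The point above can be shown with a toy simulation (my own sketch, not modelled on any cited study): if several load conditions are sampled from an identical distribution, a small sample will still crown a "winner" every time, and which load wins is essentially a coin toss.

```python
import random
import statistics

random.seed(1)

LOADS = [0, 20, 40, 60, 80]  # % 1RM conditions, all with the SAME true effect
N_ATHLETES = 10              # small sample typical of such studies
N_STUDIES = 1000             # imagine the study repeated many times

winners = {load: 0 for load in LOADS}
for _ in range(N_STUDIES):
    # Every condition draws from an identical distribution: no real optimum.
    means = {
        load: statistics.mean(random.gauss(3000, 400) for _ in range(N_ATHLETES))
        for load in LOADS
    }
    # Each simulated study still reports the load with the highest mean power.
    winners[max(means, key=means.get)] += 1

print(winners)  # every load "wins" a share of studies purely by chance
```

Each of the five loads comes out on top in roughly a fifth of the simulated studies, despite there being nothing to find. Any single small study picking its "optimal load" this way is reporting noise.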
These apparent phenomena then get lifted from the literature by practitioners who think they are being scientific in their approach, trying to "optimise their programmes". When the quality of the data is so low, you would be as well closing your eyes and choosing an intervention with a marker pen.
What we are left with in this case is twenty-odd years of coaches prescribing a particular percentage for jump squats in search of maximal power output, when they should have just done some box jumps.
I'm sure many coaches have worked this one out for themselves by now; still, it is a great example of a common pitfall: basing your approach on the research without a proper appreciation of the limitations inherent in many study designs.
So, as always, stay critical, keep your approach rooted in results, and keep the link between cause and effect in your approach as direct as possible.