As Christmas approaches I find myself more regularly reminding my kids that Santa only comes to good boys and girls. Which makes me wonder whether my increasing use of such bribery is just because I’m acutely aware of it at this time of year, thanks to the barrage of Christmas songs and bombardment of yuletide advertising, or because I subconsciously know that the effectiveness of the bribe increases as Christmas gets closer. If Christmas is far away, then time preference may cause the value of future presents to be discounted such that it is outweighed by the cost of being good at present (which is apparently very high for some children), and I will need to think of another parenting strategy. Time preference may even be higher for some children, such as the very young, and the Santa bribe might only work on Christmas Eve! Indeed, anecdotal evidence suggests that the daily chocolate in an advent calendar can provide more leverage than presents from Santa at the end of the month. The effectiveness of the Santa bribe may also vary according to a child’s risk aversion, given the many uncertainties at play: will their letter reach the North Pole? Can the elves manufacture the latest technology? How will Santa get into the house if it doesn’t have a chimney? Nevertheless, I’ll keep utilizing this trick over the next few days in the hope that it will prevent crying, pouting and general naughtiness from my little angels.
Iain Chalmers wrote in a recent blog that failure to publish research results is one of the worst forms of scientific misconduct. As evidence of this he cites the use of anti-arrhythmic drugs to treat heart arrhythmias under the false assumption that these drugs would reduce mortality. In fact they did the opposite, something unpublished British research had previously shown.
As Iain Chalmers points out, common reasons for trials remaining unpublished in the past have been commercial in origin, most often involving the suspension of drug development. Another reason, often proffered by researchers, is that journal editors do not like to publish negative studies. In my view this is a common misperception. I can certainly recall publishing negative studies in PharmacoEconomics. What is important to me is that the negative study is an addition to the literature and also has the potential to impact on healthcare decision making. The fact that the conclusion is negative would not count against a paper and may in fact add to its appeal.
A recent editorial in Nature opines that “too many sloppy mistakes are creeping into scientific papers”. It states that there is growing unease amongst editors at Nature about what appears to be a slapdash approach by authors not only to the conduct and analysis of studies but also to the reporting of the analyses. I have to say that this also somewhat matches my experience with HEOR papers. It is not uncommon for me to find errors in calculations, inconsistencies between the text, tables and abstract in the reported values for outcome measures and p-values, and reference errors – to name just a few. This trend is particularly alarming given that some journals, such as PLOS ONE, have little or no post-acceptance editing.
Nature suggests that journals should facilitate quick online review by readers and should also have the capability to allow authors to post raw data. Nature also suggests that principal investigators and department heads should take more care to supervise their postdocs and graduate students. This might be a useful sticking plaster, but the more fundamental issue is the pressure within academia to publish or perish. Until we change this mindset, and perhaps look to AltMetrics and other forms of academic recognition, we will continue to need to rely on stopgap measures.
I have written previously about the need for transparent and detailed reporting of scientific research. There should be enough detail in a study report to allow an interested reader to replicate the study should they wish. Replication is a fundamental scientific tenet. Yet, as in the general science field, the evidence that the healthcare field is living up to this ideal is not very encouraging.
According to an article in the Wall Street Journal, Amgen is more often than not unable to reproduce findings published in scientific journals. In addition, Bayer has halted nearly two-thirds of its early phase projects for the same reason. Unfortunately, the situation within health economics and outcomes research is, in my experience, even worse.
It is a constant battle between editors wanting to improve the transparent and detailed reporting of papers and authors (or more likely study sponsors) wanting to hide behind concerns of intellectual property. In the past few weeks I have had three such examples. In two of these examples, the papers were reports of complex modelling studies where it was very unclear in the original submitted paper how the input parameter values were derived. After several weeks of correspondence (akin to having a wisdom tooth extracted) we managed to obtain a supplementary appendix which went a little way to meeting our request. In the third example, we had a battle with the authors of a survey-based paper who even had concerns about sharing the survey with the reviewers, let alone the readers!
How we satisfy the need for transparency yet overcome the concerns regarding intellectual property I do not know. Perhaps in the case of modelling a move to reference case models might be a step in the right direction? However, how authors of survey-based research expect readers to critically evaluate such research without recourse to the survey instrument is beyond my comprehension. Perhaps this is one more indication of how interests in financial returns are stymieing the pursuit of science and hindering advancements in patients’ healthcare.
One final comment. I do believe that those of us who elect to join the leadership of various guideline/good practice groups, such as the ISPOR Task Forces, need to live by what we preach. Editors need to ensure that published papers are in accordance with accepted reporting guidelines. As authors, if we preach transparency then every paper we put our name to should live up to that ideal. In my experience this has not always been the case.