As Christmas approaches I find myself more regularly reminding my kids that Santa only comes to good boys and girls. Which makes me wonder whether the reason for my increasing use of such bribery is just because I’m acutely aware of it at this time of year, thanks to the barrage of Christmas songs and bombardment of yuletide advertising, or because I subconsciously know that the effectiveness of the bribe increases as Christmas gets closer. If Christmas is far away, then time preference may cause the value of future presents to be discounted such that it is outweighed by the cost of being good at present (which is apparently very high for some children), and I will need to think of another parenting strategy. Time preference may even be higher for some children, such as the very young, and the Santa bribe might only work on Christmas Eve! Indeed, anecdotal evidence has shown that the daily chocolate in an advent calendar can provide more leverage than presents from Santa at the end of the month. The effectiveness of the Santa bribe may also vary according to a child’s risk aversion, given the many uncertainties at play: will their letter reach the North Pole? can the Elves manufacture the latest technology? how will Santa get into the house if it doesn’t have a chimney? Nevertheless, I’ll keep utilizing this trick over the next few days in the hope that it will prevent crying, pouting and general naughtiness from my little angels.
Budget impact analysis (BIA) is on the rise. Indeed, the prevalence of BIAs in the literature is growing exponentially: half of the articles in PubMed with ‘budget impact analysis’ in the title were published in the last 2 years. Furthermore, there are now a few guidance papers on BIA, including Canadian guidelines and an ISPOR task force report. And with budget-holders’ belts showing no sign of loosening in a hurry, I imagine we’ll continue to see more and more of these studies.
This raises the question of whether BIAs should be seen as complementing cost-effectiveness analyses (CEAs), as has often been touted, or as competing with them. This is a particularly pertinent question if the findings of the two types of analyses are at odds with each other, i.e. if an intervention is shown to have a favourable cost-effectiveness profile but an unfavourable budget impact, or vice versa. I think the answer is that it depends on how BIAs are used. If a funding decision is based on a BIA alone, rightly or wrongly, it could obviously undermine a CEA. Indeed, a BIA, with its short-term, narrow financial perspective, will often paint a very different picture from the holistic view of a full CEA, especially one with a lifetime horizon and societal perspective. So, BIAs may well compete with CEAs.
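To make that tension concrete, here is a toy calculation (all numbers hypothetical, and the threshold range merely illustrative) showing how an intervention can look attractive on cost-effectiveness grounds yet be hard to afford in the short term:

```python
# Illustrative sketch with hypothetical numbers: cost effectiveness and
# affordability answer different questions about the same intervention.

def icer(delta_cost, delta_qalys):
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return delta_cost / delta_qalys

def budget_impact(extra_cost_per_patient, eligible_patients):
    """Simple one-year budget impact: total extra spend on the new therapy."""
    return extra_cost_per_patient * eligible_patients

# Hypothetical new drug: £12,000 extra per patient, 0.8 extra QALYs over a
# lifetime horizon -> ICER of £15,000 per QALY, comfortably inside a notional
# £20,000-£30,000 threshold range...
print(icer(12_000, 0.8))               # 15000.0
# ...but with 50,000 eligible patients, the first-year bill is £600 million,
# which a budget-holder may simply be unable to absorb.
print(budget_impact(12_000, 50_000))   # 600000000
```

The ICER spreads lifetime health gains over lifetime costs, while the budget impact asks only what the cheque must say this year, which is why the two can point in opposite directions.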
On the other hand, if the two are viewed together, a BIA could make the big picture of a CEA even bigger, adding a snapshot of the short-term affordability of an intervention to its perceived cost effectiveness. However, there are a number of barriers to BIAs being used in conjunction with, rather than instead of, CEAs. BIAs might be better aligned with the objectives of some decision makers, such as US payers, or it could be that they are simply easier to understand. Moreover, a BIA might be the only type of analysis available for the intervention. Even if BIAs and CEAs are considered together, the weight given to each will probably vary depending on the perspective of the decision maker. While it is easy to argue that a wide and long-term perspective is best, the reality is that the opposite is often the case, as evidenced by the relatively low priority given to preventative public health interventions whose benefits are not realized until years after their costs are incurred.
So, BIAs have the potential to complement CEAs, but it’s not a given.
The alignment of health resource allocation with public and patient preferences remains an ongoing challenge for both researchers and decision makers. Indeed, the Health Economics Journal Club on Twitter (#HEJC) discussed a recently published paper on this topic during their inaugural online get-together at the beginning of the month. The paper described a choice experiment that was used to elicit preferences for a wide range of criteria from a sample of the UK population. The participants supported the prioritization of severe diseases, unmet needs and innovative treatments, but did not support the prioritization of children, disadvantaged populations, terminal diseases, rare diseases or cancer drugs.
Previous studies have also attempted to uncover such public preferences and their findings have varied. There are a number of potential reasons for the inconsistent results. Firstly, methodological differences between the studies may be a contributing factor. Indeed, it is now well recognized that the elicitation technique can have a significant impact on stated preferences. In accordance with this, much of the #HEJC discussion was around study methods. Secondly, the different results might also represent true but different preferences of different samples. This may be due to samples not being representative of their general populations or, alternatively, due to actual differences between preferences of different populations. Of course, this is not a problem if the populations have different sources of healthcare funding.
However, even if true and representative preferences of a population can be successfully captured, there is still the question of whose preferences should be used to guide resource allocation. Should healthcare funding reflect the preferences of the general public or those of patients, who are the end users of the healthcare? Another question is whether preference heterogeneity can be accounted for, or whether it is simply another case of majority rules. This question is particularly pertinent for some of the groups that fulfil the criteria being assessed for potential prioritization, such as patients with rare diseases and disadvantaged minority populations. Moreover, other candidates for prioritization might actually be the most difficult people to elicit preferences from, such as children, severely ill patients and those with mental disorders or poor education. However, even the best educated members of the public might not comprehend the opportunity costs of special funding for “life-saving” treatments for a high-profile disease…
This is just a glimpse at the significant challenges associated with identifying and incorporating public/patient preferences into health resource allocation decisions. Therefore, the recent study and corresponding #HEJC discussion may seem like small steps towards a seemingly astronomical goal, but they’re hopefully steps in the right direction.
The Office of Health Economics (OHE) announced recently that it was awarded a £457,000 grant from the UK government to produce a value set for the new five-level version of the EQ-5D. The new version is a welcome initiative. It is well known that the current three-level version is somewhat of a blunt instrument, incapable of capturing subtle changes in quality of life, such as in end-of-life scenarios and in patients with eye disorders. The OHE research intends to address some interesting research questions, including how to account for preference heterogeneity and how discrete choice data might be used. However, if the overall aim of this grant is more pragmatic, and the intention is to somehow improve the quality-adjusted life-year (QALY) and help society make better decisions about scarce healthcare resources, then it is simply throwing good money after bad. We seem to be forever trying to paper over gaping cracks when it comes to the QALY. The lack of validity of many of the assumptions behind the QALY is well known. Many people do not act in a rational manner, especially when emotions are running high, as is often the case in healthcare decisions. So, why don’t we stop trying to prop up a broken paradigm and start funding research that looks for new metrics and methods which more accurately capture patients’ behaviour and preferences, and can also help decision makers make difficult decisions around scarce resources? Maybe it is because, as a research community, we too are unable to make rational decisions. The time and effort already invested in developing the QALY, and the industry that has grown around it, are clouding our judgement as to future direction.
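For readers less familiar with the metric under fire, a minimal sketch of the conventional QALY calculation (with hypothetical utility values, such as might come from an EQ-5D value set) shows both how it works and one of the simplifications critics point to:

```python
# Minimal sketch of the standard QALY calculation: quality-adjusted
# life-years are the area under a utility-over-time profile, with
# utilities anchored at 1 (full health) and 0 (dead).

def qalys(profile):
    """Sum of (utility x duration) segments; utilities are hypothetical."""
    return sum(utility * years for utility, years in profile)

# Two hypothetical patients, each living 10 years:
stable = [(0.7, 10.0)]                # utility 0.7 throughout
declining = [(0.9, 5.0), (0.5, 5.0)]  # good early years, poor later ones

print(qalys(stable))     # 7.0
print(qalys(declining))  # 7.0 -- identical totals despite very different
                         # experiences over time
```

The additive, multiplicative structure treats every utility-year as interchangeable, regardless of when it occurs or who experiences it; it is exactly these kinds of assumptions that the behavioural critique above takes aim at.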
Interest and activity in the field of patient-centred outcomes research has taken off in recent years, even before the Patient-Centered Outcomes Research Institute (PCORI) took centre stage in the US. Indeed, the first journal dedicated to such research, The Patient: Patient-Centered Outcomes Research, was launched over 4 years ago. However, up until now the influence of this research may have been hampered by a poor understanding of its varied and often complex methods and uncertainty around its validity. There has even been confusion about what actually constitutes a ‘patient-centred outcome’.
Therefore, it was welcome news that PCORI was developing a Methodology Report proposing standards for the conduct of patient-centred outcomes research. With regard to the standards, PCORI Methodology Committee Chair Dr Sherine Gabriel stated, “We believe that methods matter – that patients deserve research results that meet the highest scientific standards.” It should be noted that there are already a number of ISPOR Good Practices for Outcomes Research consensus guidance reports published and in development for some specific aspects of patient-reported outcomes and preference-based methods.
PCORI’s draft Methodology Report is currently open for public comment until 14 September. “We now need the input of all health care stakeholders to ensure that their perspective is reflected in this resource,” said Dr Gabriel. Surely, the most important perspective to be reflected in the standards is that of the patient, so let’s hope patient views will be captured during the public comment period. For the research to be truly patient-centred, patient involvement is required at all possible stages, from methods guidance to study design to data collection to interpretation and dissemination of findings.
Indeed, appropriate study reporting will be vital in translating patient-centred outcomes research into improved patient outcomes. Hopefully, PCORI’s Methodology Report will address this need to some extent and it’s good to see that transparency is already a focus of the standards. However, there may still be a need for guidance dedicated to the reporting of patient-centred outcomes research, similar to the reporting standards for economic evaluations currently being developed by ISPOR (CHEERS: Consolidated Health Economic Evaluation Reporting Standards).