Risk, Metrics & Reputation: Partnerships 2012 Conference Report Part 2

This report was originally written for Clinical Research Focus magazine, but wasn’t published when it was fresh. I’ve published it here to add it to my personal archives…

The 2012 “Partnerships in Clinical Trials” conference in Hamburg attracted around 700 delegates and nearly 100 exhibitors to discuss the challenges and opportunities in working with commercial partners throughout clinical development. The first part of this report, published previously, presented some of my personal highlights from the conference’s plenary sessions. This article focuses on a variety of topics discussed in the parallel sessions.

Risk-based monitoring

One of the most widely discussed topics in clinical operations at the moment is risk-based monitoring (RBM). Following guidance from both the FDA and EMA in 2011, most companies are considering how to adapt their oversight to be more proportionate to the specific risks (to participants and to data quality) of the project. A show of hands at the start of this session demonstrated that roughly half the delegates in the room were actively piloting RBM within their organisations. According to the session chair, Dr Elspeth Carnan, Executive Director and Head of Global Clinical Site Management at Amgen, many companies are interested, but are struggling with precisely what to do. There is little clear evidence of widespread adoption of fully risk-based monitoring solutions.

The basic model for RBM is to develop criteria to reduce the amount of source document verification (SDV) that is conducted, targeting it at the sites where confidence in data quality is lowest. This reduction in active site-based monitoring is accompanied by automated remote monitoring using EDC systems, with triggers built into the centralised analysis to prompt site visits in specific circumstances. This increased use of centralised monitoring offers the potential to maintain excellent data quality and speed while reducing the resource burden. This could result in a reduced requirement for monitors, or enable them to focus their attention on training and motivation for sites that need more help. Elspeth confirmed that Amgen are currently in a pilot phase with RBM, and that in addition to a reduction in the time and cost associated with site visits, the increased use of real-time data enables better medical monitoring of the overall study.
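
To make the trigger concept concrete, here is a minimal sketch (in Python, with metric names and thresholds invented purely for illustration, not anything the panel described) of how pre-defined rules might flag a site for a targeted on-site visit based on data drawn remotely from an EDC system:

```python
# Minimal illustration of centralised-monitoring triggers.
# The metrics and thresholds below are hypothetical examples only.
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    site_id: str
    query_rate: float              # open queries per 100 data points entered
    missing_visit_rate: float      # proportion of expected visits with no data
    sae_reporting_lag_days: float  # mean delay from SAE onset to report

# Pre-defined rules: breaching any of them prompts a targeted on-site visit.
TRIGGERS = {
    "query_rate": lambda m: m.query_rate > 5.0,
    "missing_visit_rate": lambda m: m.missing_visit_rate > 0.10,
    "sae_reporting_lag_days": lambda m: m.sae_reporting_lag_days > 2.0,
}

def flag_for_visit(metrics: SiteMetrics) -> list[str]:
    """Return the names of any triggers breached by this site."""
    return [name for name, rule in TRIGGERS.items() if rule(metrics)]

site = SiteMetrics("DE-012", query_rate=7.2,
                   missing_visit_rate=0.04, sae_reporting_lag_days=1.0)
breached = flag_for_visit(site)
if breached:
    print(f"Schedule on-site visit for {site.site_id}: {breached}")
```

In a real study the rules would, of course, be agreed in the monitoring plan and tuned to the specific risks of the protocol; the point is simply that the logic is explicit, automated and auditable.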

Geoff Taylor, Director of Clinical Quality Assurance at Eisai, commented on the threshold at which RBM becomes possible: before starting to reduce on-site monitoring, it’s necessary to have confidence in the quality of a site’s performance. This assessment should be automated, and ideally built into the CTMS, comparing performance and quality metrics with pre-defined rules across the study. He also highlighted the shift of attitude required to move to RBM: in the past, we have existed within a “cocoon” of direct regulation, where companies are told precisely what to do. The current guidelines “rip up” this mindset, replacing it with a position where, “as long as you have good, supportable data, how you do it is up to you”. This is a position I’ve heard, particularly from leadership at the FDA, on many issues over the past few years; the real test, though, will be how inspectors react to this new way of working, and indeed how they respond to any instance where a significant issue is missed despite an appropriate RBM plan having been applied rigorously. Concerns of this type may be partly responsible for so many companies still being at a pilot phase…

Klaus Beinhauer, Senior Director and Regional Head of Monitoring & Site Management for Bayer, identified some key criteria for making RBM work. It is important to develop robust reports, but it is even more important to have clarity over hand-off points between clinical and data management teams. Change management is important, particularly for CRAs who have spent their entire careers conducting 100% SDV. Another important issue is how much more agile companies will need to be in allocating monitoring resource: as data changes, the peaks and troughs of resource requirement will increase in magnitude, and the balance of sites and studies requiring on-site monitoring will shift. Managers will need to think very carefully about how they will handle this.

Implementing RBM for CROs

In a second session, Ben Dudley, Executive Director of Alliance Management for Covance, gave a more practical talk on how to implement RBM into clinical development partnerships. He reiterated that risk-based monitoring is not simply reducing SDV across the board, nor is it treating all sites and all studies in the same way: it is essential to identify and address the key risk points that have the greatest potential to impact on patient safety and data quality. This risk-adaptation must be dynamic, data-driven and iterative, and future product approvals may depend on demonstrating that these risks have been identified and adequately addressed.

Ben highlighted that different sponsors may have different drivers for asking for RBM: one sponsor might be trying to achieve cost savings and assume that there will be no change in quality, while another might be looking to focus on data quality to align with regulators’ expectations and assume that there will be no overall change in cost. It’s important for CROs to identify these drivers and to adapt the RBM model used accordingly. However, it is also essential to do this within the context of standardised company processes and objective evaluations to ensure that risk avoidance and mitigation are robust in either scenario.

As is always the case in these sorts of relationships, Ben advised sponsors to engage early to discuss their thinking around RBM with their CRO partners, to keep the enhanced risk assessment off the critical path for the study and to leverage the CRO’s experience in implementing RBM for other clients. He suggested that by pre-defining the process and governance arrangements, the strategic and tactical activities can be separated, so that sponsors can use a more efficient “trust but verify” model of oversight during the study.

Metrics & insights

It is a truth universally acknowledged that clinical development is becoming more expensive, with pressure to do more with less. The reason generally given for this is that studies are becoming more complex and clinical trial timelines are increasing, with patient recruitment generally considered the main reason trials fail to complete on time. However, in a presentation tucked away in a mid-afternoon slot, Christine Blazynski, Chief Science Officer for Citeline, turned this hypothesis on its head!

Citeline has a number of proprietary data products tracking various performance metrics across the clinical trial sector, drawing data from over 20,000 public domain sources (including publications and registries but also press releases and company reports). Christine used these to assess trends in trial duration across a number of disease areas. Her team examined completed, industry-sponsored phase 2 and phase 3 clinical trials that had started from 1999 to 2010. The study looked at trials with variable study periods based on endpoint-driven protocols (eg, oncology and cardiovascular studies) but also at trials with defined study periods (eg, for chronic conditions such as RA, asthma and diabetes).

She presented data for a series of therapy areas, including breast cancer, rheumatoid arthritis and HCV, and demonstrated that in all cases, average trial duration has actually decreased. In all but a handful of cases, patient enrolment time has also trended downwards. This has huge implications for the ‘received wisdom’ across the industry, and also brings into question the true cause of the spiralling cost of R&D. She suggested that possible drivers of the rising costs could include the investment in local infrastructure and intelligence needed to underpin the globalisation of research, while more complex studies might also involve more procedures, more samples and higher associated transport and analysis costs (particularly, again, when working in unfamiliar countries).

Continuing her presentation, Christine looked to dig a little deeper into reasons behind this downward trend. Here, the story was less consistent, varying by therapeutic area. For phase 3 studies in RA, it appeared that studies were simply being conducted more efficiently: enrolment time and study time had both decreased, as had the number of countries and sites used. (The total number of patients accrued was not reported, so it’s possible that studies were simply designed to be smaller.) In HCV, breast cancer and type 2 diabetes, enrolment and study periods had also decreased, but there was a marked increase in the number of countries used and, for HCV, a doubling in the average number of sites.

She ended her presentation with some questions to stimulate further research. What are the forces driving these trends around geography; will the geographical breadth of studies keep expanding or (as some industry leaders are suggesting) has the trend towards globalisation already peaked? What are the implications of recruiting more specific patient groups (eg, genetic/ethnic populations, elderly or paediatric patients etc.)? And most crucially for this audience, which companies are most effective in driving trials forward?

Standardised performance metrics

The following presentation also looked at performance metrics, albeit from a different perspective: developing and using the right ones! Guy Mascaro is President of the Metrics Champion Consortium (www.metricschampion.org), a not-for-profit organisation comprising nearly 100 pharma companies, CROs and universities, with the aim of improving the clinical development process through the use of standardised performance metrics around cost, time and quality. Guy started his presentation by reminding us that metrics are not weapons to inflict on others, but they can play an important role in influencing behaviour in sponsors, CROs and sites. This integrated view is vital, as misaligned performance metrics (or, worse still, performance metrics that folk think are aligned but are actually subtly different) can create challenges and undermine the very performance they are intended to improve!

Metrics can be developed around speed (ie, timeliness and cycle time metrics), cost/efficiency and quality. Guy explained that it is vital to focus on all of these equally, as over-reliance on metrics of one type can detract from performance in other aspects. To encourage this integrated approach, Key Performance Indicators (KPIs) and Key Quality Indicators (KQIs) can be integrated into a single metric. For example, if you have a KPI of “approve protocol on time”, it can be integrated with a KQI of “protocol quality score” to give a combined metric of “approve high quality (quality >x%) protocol on time”.
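
As a toy illustration of that last example (the 80% threshold and all the field names here are my own invention, not MCC definitions), the combined metric might be expressed along these lines:

```python
# Hypothetical illustration of folding a KQI into a KPI:
# "approve a high-quality protocol on time".
from datetime import date

def protocol_kpi_met(approved: date, due: date) -> bool:
    """KPI: protocol approved on or before the planned date."""
    return approved <= due

def protocol_kqi_met(quality_score: float, threshold: float = 0.80) -> bool:
    """KQI: protocol quality score at or above the agreed threshold."""
    return quality_score >= threshold

def combined_metric_met(approved: date, due: date, quality_score: float) -> bool:
    """Combined metric: on-time approval of a poor protocol no longer counts."""
    return protocol_kpi_met(approved, due) and protocol_kqi_met(quality_score)

# On time, but below the quality threshold, so the combined metric is missed.
print(combined_metric_met(date(2012, 11, 1), date(2012, 11, 15), 0.72))  # False
```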

The MCC has developed sets of performance metrics in several areas, with the clinical trial performance metrics being published in 2010. Each metric is rigorously defined, including measurement unit, target range, reporting frequency etc., and organisations are encouraged to use the same definitions to ensure that inter- and intra-company reporting is unambiguous. The set of clinical trial performance metrics includes 52 individual metrics (from “protocol quality tool” to “study drug-related SAEs reported per dosed subject”) and four exploratory metrics. These are presented on a process map for the entire study, showing where in the study each metric becomes relevant and which other metrics are related.
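
Purely for illustration (this is not the MCC’s actual schema, just a sketch of the attributes listed above), a standardised metric definition might be captured in a structure like the following, shared verbatim between sponsor and CRO so that both sides report against exactly the same yardstick:

```python
# Sketch of a standardised metric definition; the fields mirror the kinds of
# attributes mentioned in the text (unit, target range, reporting frequency).
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    unit: str
    target_low: float
    target_high: float
    reporting_frequency: str  # e.g. "monthly", "per milestone"

    def in_target(self, value: float) -> bool:
        return self.target_low <= value <= self.target_high

# Hypothetical example definition.
query_resolution = MetricDefinition(
    name="Time to resolve data query",
    unit="days",
    target_low=0.0,
    target_high=10.0,
    reporting_frequency="monthly",
)
print(query_resolution.in_target(7.5))  # True
```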

Combinations of metrics can be used to develop scoring tools, for example to inform site selection within or between countries. These tools can be used strictly as the basis for decisions, or can be used more subtly to inform choices and suggest when to mitigate risk. For example, if a site scores poorly in the selection tool but involves a Key Opinion Leader, the decision might be made to retain the site in the study but to plan specific risk mitigation activities. While this is perhaps little different from how these issues were handled in years gone by, the use of the scoring tool has documented the risk in advance and triggered a documented decision on how to proceed, both very important activities under the new risk-adaptive mindset (as discussed above).
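
Again as an illustration only (the criteria, weights, cut-off and the Key Opinion Leader override are all hypothetical, not any published tool), a simple site-selection scoring tool of this kind might look like this:

```python
# Illustrative site-selection scoring tool with a documented exception
# for Key Opinion Leader (KOL) sites. All criteria and numbers are invented.

WEIGHTS = {
    "past_recruitment_performance": 0.4,
    "data_quality_history": 0.3,
    "startup_speed": 0.2,
    "staff_stability": 0.1,
}

def site_score(criteria: dict[str, float]) -> float:
    """Weighted score in the range 0-1, from per-criterion scores (each 0-1)."""
    return sum(WEIGHTS[k] * criteria[k] for k in WEIGHTS)

def selection_decision(criteria: dict[str, float], has_kol: bool,
                       cutoff: float = 0.6) -> str:
    score = site_score(criteria)
    if score >= cutoff:
        return "select"
    if has_kol:
        # Documented exception: retain the site, but plan and record
        # specific risk-mitigation activities up front.
        return "select with documented risk-mitigation plan"
    return "reject"

example = {"past_recruitment_performance": 0.5, "data_quality_history": 0.4,
           "startup_speed": 0.7, "staff_stability": 0.6}
print(selection_decision(example, has_kol=True))
```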

Going beyond philanthropy

The final presentation to be covered in this report looks at the pharmaceutical industry from a very different angle. In his keynote speech at the 2012 ICR Annual Conference, David Gillen drew our attention to the Access to Medicines Index (www.accesstomedicineindex.org), which ranks pharma companies on various aspects of corporate social responsibility, going beyond philanthropy to take account of patents, pricing, public policy and more in the developing world. The Index was first compiled in 2008 and again in 2010; the 2012 edition had not yet been published at the time of this conference. The driving force behind the project, Wim Leereveld, spoke to a packed conference room, discussing the thinking behind the project and its increasing impact on pharma’s reputation.

The goal of the project is for pharma companies to learn from their peers to improve practice around access to medicines. It also plays to the natural competitiveness of corporate CEOs: if a company comes in at #15 in any ranking, a typical CEO will want to improve on that position next time!

The report was developed with the full cooperation of the stakeholder companies: representatives came together to discuss and refine the criteria to be used, and their relative importance, so that all the companies could ‘buy in’ to the project’s transparency and agree that the independent assessment and ranking was valid. Activities and impact were measured across 103 countries, based on World Bank and United Nations classifications covering low and middle income and medium human development. The 2010 report was ranked #5 in a “rate the raters” ranking for credibility of methodology and results, and was described as “a very important project” by the Director-General of the World Health Organisation, Dr Margaret Chan.

In 2008 and 2010, GSK topped the list, with Merck, Novartis and Sanofi-Aventis also in the top five in both reports; Novo Nordisk and Gilead also scored well. Each individual company report is broken down across the different types of criteria, so we can see that one company scores well for patenting while another performs better on philanthropy. For 2012, the weighting of the categories was changed again, with emphasis shifting from commitments to performance, and from R&D to equitable pricing and distribution. Of course, Wim could not discuss the actual results prior to publication.

Interestingly, for a report that many of us had assumed to be directed at clarifying pharma’s reputation with the general public, Wim explained that the prime target of the report is actually the investor community. He displayed the logos of 30 leading investment companies who support and take notice of the report, and who manage combined assets of $3.7 trillion. This emphasis is pragmatic: pressure from these investors forces pharma companies to take notice of the Index on a financial level, rather than on a purely social and reputational level that could be over-ruled by the drive for increased return on investment (ROI). In the Q&A session after his presentation, I asked about the impact on pharma’s reputation with the general public. Wim replied that the public is a much more diverse audience, and more difficult to reach, and that assistance in getting the message out more widely would be appreciated.

The 2012 report was published a few weeks after this conference. We plan to bring you an analysis of the new report and an exclusive interview with Wim in a future issue of CRfocus.