
ClinAction Workshop: AHRQ’s Perspective on Clinical Utility of Genomics – Gurvaneet Randhawa

Gurvaneet Randhawa:
Thank you. So I must thank Ned first, because when I was preparing the talk I was wondering
how much detail and how many specific examples I should get into, and I decided not to,
hoping that he would get into some, and he did. So thanks, Ned. My talk — I will spend just a few minutes
on the background and not discuss many of the things that Ned has already mentioned,
but hopefully move the conversation into what is AHRQ doing and what are the needs and the
barriers that we are facing right now that need to be overcome. So the background is — the first context
— we have numerous reports, whether from EGAPP, the U.S. Preventive Services Task
Force, the NIH State-of-the-Science Conferences, or many other guideline developers and systematic
reviews, showing large gaps in our knowledge of the impact of therapeutics, and especially
of diagnostics, on patient outcomes in real-world clinical practice. And this is not just
for rare diseases, although it is more so for rare diseases; it's even in common diseases.
So that’s one challenge that we face right now. The second, which Ned had mentioned, is the issue
of marginal benefit, especially for common diseases. We don’t lack treatments. We don’t
lack diagnostic tests. There are plenty, whether it’s for treating high blood pressure, for
lowering your cholesterol, for treating osteoporosis. We don’t have a shortage of drugs. What we
need to know for anything new is what is the added value of this new thing, new technology,
new drug, new test? And everyone who needs this information has to be sure the information
is valid and credible on what the benefits are and what the harms are, regardless of
what the context of making the decision is, whether it’s the clinician having a patient
walk into the clinic, whether it’s a guideline developer, whether it’s a payer who wants to
make a coverage decision, or a federal agency that wants to make a regulatory decision.
There may be some other aspects beyond benefits and harms to consider but this is certainly
the critical element. Another issue that we face, and there are
numerous examples of this, is that for many diseases, even in common diseases, the natural
history and the pathogenesis of the disease are often incompletely understood. So this
is an issue when you decide, are we studying surrogate markers? Are these actually surrogate
markers or not? And if they are, if you’re seeing an improvement in the surrogate marker,
will that translate to a benefit in the health outcome or not? And you have numerous examples
when that hasn’t panned out to be true. So you can see, in a common disease like osteoporosis,
sodium fluoride does increase bone mineral density, but it doesn’t decrease fracture risk.
And that’s really what the patient cares about: not that their bones are dense, but that they
don’t actually have fractures. The same is true for screening for prostate cancer and for hepatitis C.
There are so many examples where the natural history of the disease and the unknowns limit
the ability for a guideline developer to say, clearly there are more benefits than harms. So what are the reasons we are facing this?
And I’ll put two points across. One, I think, is that there are limitations in our existing
infrastructure capabilities. The electronic databases that we have don’t talk to
each other, the information is siloed, and often it’s not the right kind of information, so
we have problems that need to be overcome from the infrastructure point of view. It’s
also partly due to the study methods. Whether it’s observational studies or randomized controlled
trials, depending on the question being asked, there are often issues that can lead to bias
and confounding that affect the validity of the results. So you want the results to be
valid and generalizable. And a last point, of course, though for several
reasons we can’t have a long discussion on it: the goals of biomedical researchers
are not typically aligned with those of clinical providers. So with this context, one of the
challenges that we are facing is, can we improve our health care delivery infrastructure so
that we can use it for research, we can use it for improving quality of care and for new
information like genetic tests. Now, the other thing that had briefly been
mentioned is comparative effectiveness research. I won’t go into the definition in detail.
This is the one that the Federal Coordinating Council came up with. I’ll just highlight
three things in here that I think are important. One is that in comparative effectiveness,
you are looking at the benefits and harms of different interventions, so it’s not a
placebo. It’s not doing nothing. It’s actually comparing different alternative interventions,
whether it’s diagnostics or therapeutics, in a real world setting, which is important.
It’s not in an artificial, highly selected patient population or highly selected clinical
settings where you don’t know if you can generalize the results. It’s actually real-world practice. And the last part of the definition which
I think is important is, we are doing this to improve health outcomes, not the surrogate
markers, not for creating new knowledge, but to actually improve the quality of life or
the care of the patient. So, what AHRQ has done in the past several
years, and this started with the Medicare Modernization Act, was to create a new program
called Effective Health Care which focused on comparative effectiveness. And the four
goals of this program are to create new knowledge, to review and synthesize existing knowledge
— and that actually has been something we’ve been doing for a long time. The evidence-based
practice center reviews that Ned mentioned are part of what we use for reviewing and
synthesizing the existing knowledge. Then the two other components are to translate
and disseminate the findings including tools such as clinical decision support tools, decision
aids, and to train and build the capacity in this field which is still new. So I have only one slide on genomics projects
but this is to tell you that AHRQ has not been inactive in this field. So I mentioned
the Evidence-based Practice Centers, or the EPC reports, which have helped many different
guideline developers: EGAPP, the U.S. Preventive Services Task Force, the NIH State-of-the-Science
Conference on Family History, CMS and their MEDCAC process, CDC, and of course topics
that get nominated by clinical societies. We have also done work in creating new knowledge.
We have funded a randomized controlled trial, this was at the Marshfield Clinic, looking at a warfarin
gene-based dosing calculator and comparing that to a clinical dosing calculator alone.
That is published in Genetics in Medicine.
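To make the comparison concrete, here is a minimal sketch, in Python, of how a clinical-only dosing calculator differs from a gene-based one that layers genotype adjustments on top of the same clinical inputs. The coefficients and function names below are invented for illustration; they are not the algorithms used in the Marshfield trial.

```python
# Illustrative only: toy dosing calculators contrasting clinical vs.
# gene-based warfarin dose estimation. Coefficients are invented and
# are NOT the algorithms used in the trial described above.

def clinical_dose_mg_per_week(age, weight_kg, on_amiodarone):
    """Estimate weekly warfarin dose from clinical factors alone."""
    dose = 35.0                       # baseline weekly dose (made up)
    dose -= 0.2 * age                 # older patients tend to need less
    dose += 0.1 * weight_kg           # heavier patients tend to need more
    if on_amiodarone:
        dose *= 0.7                   # interacting drug lowers the requirement
    return max(dose, 5.0)

def genetic_dose_mg_per_week(age, weight_kg, on_amiodarone,
                             cyp2c9_variants, vkorc1_a_alleles):
    """Start from the clinical estimate, then adjust for genotype."""
    dose = clinical_dose_mg_per_week(age, weight_kg, on_amiodarone)
    dose *= 0.8 ** cyp2c9_variants    # reduced-function CYP2C9 alleles
    dose *= 0.85 ** vkorc1_a_alleles  # VKORC1 -1639 A alleles
    return max(dose, 5.0)

# A trial like the one described randomizes patients between the two
# calculators and compares outcomes (e.g., time in therapeutic INR range).
print(clinical_dose_mg_per_week(65, 80, False))
print(genetic_dose_mg_per_week(65, 80, False,
                               cyp2c9_variants=1, vkorc1_a_alleles=1))
```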
And there are two add-on genomics projects in PROSPECT studies; I will tell you in more detail what PROSPECT stands for. We also created a new computer-based clinical
decision support tool for assessing BRCA mutation risk in the primary care setting. And this
was done because the U.S. Preventive Services Task Force had made a recommendation for primary
care that when there are women who are at high risk they should be referred for appropriate
counseling and testing. The challenge is, the primary care clinician does not have the
time and sometimes, some can argue, the skills to actually get a detailed cancer family history
to know what the BRCA risk of the woman is. So we created a tool for that. And it’s not
live because we spent more time creating the tool. We thought there was much more knowledge
about what to do in primary care. It turns out there wasn’t. So we spent most of our
resources on creating the tool, not so much on validating it. And so we actually
have a collaboration with the CDC to do bigger studies and get a sense of how well this tool
performs in the real world.
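As a rough illustration of how such a family-history screener can work, here is a hypothetical, heavily simplified rule set in the spirit of the referral recommendation described above. The actual tool's logic is not reproduced here; the Relative class, the rules, and the thresholds are all assumptions made for the sketch.

```python
# Hypothetical sketch of a rule-based family-history screener; the
# real tool's logic is not reproduced, and these rules are simplified.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Relative:
    relation: str                      # e.g. "mother", "sister", "aunt"
    cancer: str                        # "breast", "ovarian", or "none"
    age_at_diagnosis: Optional[int] = None

FIRST_DEGREE = {"mother", "sister", "daughter"}

def flag_for_referral(relatives):
    """Return True if the family history suggests referral for genetic
    counseling and possible BRCA testing (simplified, illustrative rules)."""
    breast = [r for r in relatives if r.cancer == "breast"]
    ovarian = [r for r in relatives if r.cancer == "ovarian"]
    early_breast = any(r.age_at_diagnosis is not None
                       and r.age_at_diagnosis < 50 for r in breast)
    affected_first_degree = sum(
        r.relation in FIRST_DEGREE for r in breast + ovarian)
    # Flag early-onset breast cancer, any ovarian cancer, or multiple
    # affected first-degree relatives.
    return early_breast or len(ovarian) > 0 or affected_first_degree >= 2

history = [Relative("mother", "breast", age_at_diagnosis=45),
           Relative("aunt", "none")]
print(flag_for_referral(history))      # True: early-onset breast cancer
```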
Then we also had two conceptual reports, I would call them. One was done in collaboration with the CDC to look at the
existing infrastructure in the U.S. and to ascertain how well we can use the infrastructure
to look at utilization of genetic tests or the outcomes of genetic tests. And another
one, which we recently released a few months ago, was looking at the analytic validity,
quality rating and evaluation frameworks. So this was a report to build on the work
that EGAPP has done, the Preventive Services Task Force has done, and the CDC has done with
the ACCE framework, and on the older Fryback-Thornbury framework for evaluating diagnostic tests.
So this report essentially looked at the different clinical contexts, or scenarios,
in which you use a genetic test, who the audience is, who the user is, and then what
the most important questions are that should be addressed in an evidence review. So, our work on creating new infrastructure:
We started two pilot projects back in 2007 on distributed research networks. So for those
who are not familiar with distributed research, the traditional model of research is that all the
participating sites or organizations send their data into one large centralized database,
and then there are some issues about both the quality of the data and the privacy
and confidentiality of the information in it. So people are always nervous
about giving their data to an unknown centralized entity that can use it at any time in the future. One way around this is, can we actually do
distributed research where the data or the databases reside in the different clinical
organizations? They partner only on an as-needed, per-project basis to share
selected information, so that you’re not pooling all the information in one repository. This
gives you the ability to connect different electronic medical records and different
databases and to overcome some of the privacy and confidentiality concerns.
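A minimal sketch of the distributed-query idea: each site runs the analysis against its own records and returns only aggregate counts, so patient-level data never leaves the organization. Everything here, the function names and the toy records, is hypothetical.

```python
# Sketch of a distributed query: sites keep their own records and
# answer with aggregates only; no patient-level data crosses the wire.

def site_query(records, on_drug_a):
    """Run locally at each site: return only summary counts."""
    cohort = [r for r in records if r["drug"] == ("A" if on_drug_a else "B")]
    events = sum(1 for r in cohort if r["adverse_event"])
    return {"n": len(cohort), "events": events}

def coordinate(sites, on_drug_a):
    """Central coordinator combines the per-site aggregates."""
    totals = {"n": 0, "events": 0}
    for records in sites:
        local = site_query(records, on_drug_a)   # only counts are shared
        totals["n"] += local["n"]
        totals["events"] += local["events"]
    return totals

site_1 = [{"drug": "A", "adverse_event": True},
          {"drug": "B", "adverse_event": False}]
site_2 = [{"drug": "A", "adverse_event": False}]
print(coordinate([site_1, site_2], on_drug_a=True))  # {'n': 2, 'events': 1}
```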
So we had two different projects that we funded. One was to create a new network: this was the DARTNet project, at the University of Colorado.
They had linked six different EMRs in the first go-round, linked the EMRs with claims
databases, pharmacy databases, and clinical lab databases, and showed that this can actually be
done and that you can also collect patient-reported outcomes using this linkage to improve
the quality of care and use it for comparative effectiveness research.
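The linkage itself can be pictured as a per-patient merge across sources keyed on a shared identifier. This is a deliberately simplified sketch; real EMR-claims linkage involves probabilistic matching and far messier identifiers than the toy dictionaries used here.

```python
# Simplified sketch of linking EMR, claims, and pharmacy records on a
# shared patient identifier. Field names and data are hypothetical.

emr = {"p1": {"dx": "hypertension"}, "p2": {"dx": "diabetes"}}
claims = {"p1": {"visits": 4}, "p2": {"visits": 7}}
pharmacy = {"p1": {"fills": ["lisinopril"]}, "p2": {"fills": ["metformin"]}}

def link(patient_ids, *sources):
    """Merge per-patient records from each source into one view."""
    linked = {}
    for pid in patient_ids:
        record = {}
        for source in sources:
            record.update(source.get(pid, {}))   # tolerate missing sources
        linked[pid] = record
    return linked

view = link(["p1", "p2"], emr, claims, pharmacy)
print(view["p1"])  # {'dx': 'hypertension', 'visits': 4, 'fills': ['lisinopril']}
```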
The other was to enhance an existing collaboration, the HMO Research Network; they had already
spent many years building the virtual data warehouse. The challenge is, can you actually
get the virtual data warehouses of the different organizations to talk to each other and
generate the information? So we published that; this was done two years ago in the Annals of
Internal Medicine. And we learned both from the successes and the
challenges in these projects so that our goal was to build on this and build new systems
that are multipurpose: not just for research but also for quality improvement, for disease
surveillance, for clinical decision support. These are dynamic, so it’s not just a one-time
static data entry that you can’t do anything with; you can go back and make changes, add new
fields, and change the data as needed. These need to be electronic, so they are based on EMRs
or EHRs from the get-go. And they can collect prospective data. And this spanned several of the AHRQ portfolios.
This is just to tell you that this has widespread interest at AHRQ and also this is a new multidisciplinary
effort. So we had the good fortune of getting the ARRA funding,
which was, for those of you who haven’t followed it, $1.1 billion for comparative effectiveness
research; of this, about $100 million was spent on building these new systems. So I had mentioned PROSPECT earlier; this
is one of the RFAs I took the lead in writing, on Prospective Outcome Systems that
use Patient-specific Electronic data to Compare Tests and therapies. We awarded six
R01s on these. Then we also came up with two other RFAs, and because of the
time crunch I didn’t have enough time to think of creative new acronyms, so these are just
as is. One was Scalable Distributed Research Networks; we funded three R01s here. And the
third one is the enhanced registries that can be used both for quality improvement and for
comparative effectiveness research. The fourth RFA was: it’s well and good to
do the research, but can you actually bring the lessons learned into a convenient forum so that
you can advance the national dialogue on analytic methods, on clinical informatics, and on
data governance issues? So we awarded AcademyHealth a cooperative agreement on creating a new
electronic data methods forum. So the common themes across these R01 projects
— the requirements were: they had to be able to link multiple health care delivery sites.
In this case that would be inpatient care, outpatient care, specialty clinics, nursing
homes, long-term care; these had to be different care delivery sites. It’s not just linking
two clinics and one academic center and saying this is enough. They needed to connect multiple
databases, be it different electronic health records, be it linking with claims databases,
pharmacy databases. They needed to focus on priority populations and conditions, so the
concerns about underserved populations and the generalizability of the results were to be addressed.
They needed to demonstrate they can collect prospective patient-centered outcomes and use
them for comparative effectiveness research so that you can ultimately get valid and generalizable
conclusions. Another theme that we stressed was a focus
on governance and stakeholder engagement, and this is all in an effort to make it sustainable.
We knew the RFA funding was a one-time large bonus, but if the projects do things that are
valuable to different stakeholders, be it patients, providers, payers, clinical guideline
developers, or professional societies, then the hope is that once the initial investment is done, there’ll
be support to sustain this beyond the three-year timeline of these projects. And now the other special features of the
registry and distributed projects: For the registry, the requirement was to build on
an existing registry because the three-year timeline did not allow us to start a new registry
and then to show they can use it for comparative effectiveness research.
Another requirement was to do both comparative effectiveness research and quality improvement.
So you heard some of the challenges about the tensions between research and clinical practice;
well, the same happens between people who do quality improvement and people who do research. Generally,
quality improvement folks don’t have to worry about the IRB, but on the other hand, they’re not
looking to publish findings or to get grant funding. So they do live in different worlds, and can
you actually bring those two worlds together when you’re building the registry and make
it sustainable and therefore hopefully scalable? The other RFA focused on distributed
research networks, where the emphasis was to build on multiple cohorts. So we had
asked for at least four different cohorts covering at least two different, unrelated conditions.
This is sort of a contrast to registries, which can often be disease-specific
or patient-population-specific. But — all right — I guess I won’t apply
this now.

[laughter]

Male Speaker:
[unintelligible]

[laughter]

Gurvaneet Randhawa:
There is nothing confidential here, so there is no reason for security on this slide. And the other challenge, as you heard, is that
it’s one thing doing research; it’s another thing trying to use information in real-life
clinical practice. So you need to have data that you can get soon. You can’t wait
a few years and then say, okay, now what do I do with my patient? So one of the challenges
with these distributed research network projects was, can you get near-real-time data collection
and analysis and, of course, like the registries, make them sustainable and scalable?
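One way to picture "near real time" is running aggregates that are updated as each encounter arrives, so a query answers from today's data rather than last year's batch extract. This sketch is illustrative only; none of it comes from the funded projects.

```python
# Minimal sketch of near-real-time aggregation: running totals are
# updated as each encounter arrives, so queries reflect current data.
# All names and records here are hypothetical.

from collections import defaultdict

class RunningStats:
    def __init__(self):
        self.counts = defaultdict(int)

    def ingest(self, encounter):
        """Called as each encounter is recorded in the EHR feed."""
        key = (encounter["condition"], encounter["treatment"])
        self.counts[key] += 1

    def query(self, condition):
        """Answer immediately from the running totals."""
        return {t: n for (c, t), n in self.counts.items() if c == condition}

stats = RunningStats()
for enc in [{"condition": "afib", "treatment": "warfarin"},
            {"condition": "afib", "treatment": "dabigatran"},
            {"condition": "afib", "treatment": "warfarin"}]:
    stats.ingest(enc)
print(stats.query("afib"))   # {'warfarin': 2, 'dabigatran': 1}
```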
So I’ll just spend a couple of minutes on what I hope is something that you can engage with, the EDM Forum. This is a central
repository and resource for information on the prospective electronic clinical
data collection that is being done in all of these projects. There’s a website, and I will have that at
the end that you can access as you want. The purpose is for them to collect and synthesize
the lessons learned across all of these 11 projects, to engage the different stakeholders
in the science but also to learn from them what their needs and challenges are, and to
build the resources and tools to advance the science in this field. The activities of this
forum are, as I mentioned, on analytic methods, on clinical informatics, and on data governance, which includes
security, privacy, and access to information; and there’s a new subcommittee on the learning
health care system, which takes on what I would call non-research issues:
quality improvement, clinical decision support, and meaningful engagement. So this is the organizational chart; I’ll
just leave this as my last slide. The PI of this is Erin Holve at AcademyHealth.
There’s a steering committee, and Ned Calonge, who is here, is the chair of that. There
are investigators from the 11 projects who are part of the forum. And I’ll stop there.

[applause]

Male Speaker:
Thank you. We can take one or two comments or questions. Bruce?

Bruce Blumberg:
Could you help me to understand how the mission and scope of work of AHRQ overlap with and/or
are distinct from the evolving scope and mission of PCORI?

Gurvaneet Randhawa:
Certainly. Well, AHRQ, of course, predates PCORI by a long time. AHRQ’s
mission has been the effectiveness, safety, efficiency, and quality of health care. From
our understanding, PCORI is still evolving. It’s focused primarily on patient-centered
outcomes. So for issues that are not directly relevant to patient-centered
outcomes, it’s not clear whether PCORI is going to take those on or not. There is certainly collaboration between the
two. PCORI has funded, or will be funding, AHRQ activities on dissemination
and on training, so there will be some amount of collaboration. But down the road, what
it is that PCORI will actually do hasn’t yet been clarified. I think, from
what I heard the last time, we will know more in January about their specific
topic areas and projects and the mechanisms of funding for those.

Female Speaker:
I’m on the Methodology Committee at PCORI [inaudible].

Male Speaker:
[inaudible]

Female Speaker:
So I’m going to speak both as a member of the Methodology Committee and also as someone
who was very involved with stakeholders who worked to put — you know, to support PCORI back
when it was called a comparative effectiveness entity, then through the [unintelligible].
And I think the intent is that the vision of PCORI, patient-centered outcomes research,
incorporates comparative effectiveness but is larger and will incorporate new kinds
of information that will add to it. So it includes that agenda and goes beyond it. What
the priorities and the agendas will be is still being worked out by PCORI. The rules
of the road for that are still being set. The Methodology Committee has a pretty strict
task, which is to get a comparative effectiveness methods and guidelines report delivered
in May. I think there has always been the intent, at least on the part of the stakeholders
who are funding PCORI (it is largely funded through payer funds, some through government
funds), that this should amplify what AHRQ is able to do and not replace what AHRQ is
able to do. I think there is a high appreciation that
what we often need is new primary evidence; so many systematic reviews and other efforts
end with the conclusion that we really don’t have the primary evidence. So this was
seen as a vehicle to start to fund that primary evidence. There really are no entities that
exist now that have that as their mission or their interest. Sponsors seeking registration
are interested in their product, not comparison. The NIH, I think, is more infused with the spirit
of comparative effectiveness but has not really seen that as its mission. And this
really is sort of the one place where this important social objective can be lodged, and it
is now enhanced with a broader vision of patient-centeredness. So.

Male Speaker:
Thank you.

Male Speaker:
Thank you very much for the presentation. Just — the big devil in comparative effectiveness
research is channeling. Or, put more simply, new drugs are given to slightly sicker people.
And I wondered, with your methodological research, how you were getting on with that particular
issue?

Gurvaneet Randhawa:
In the U.S., there is the FDA labeling that tells you the clinical scenarios in which a
drug can and cannot be used. But there is also what we call off-label use,
and comparative effectiveness research doesn’t limit itself to only FDA-approved
indications. So the main issue for comparative effectiveness research is, do
you actually have the evidence, not on what a drug was originally approved for, but on what it’s being
used for now? So if things have changed over time and that change has been captured in
publications, then that forms the basis of comparative effectiveness research. But how well this is characterized, that’s
going to be the challenge. Many of the databases that we have, for example,
when they are used for observational studies, don’t capture the severity of the disease
or the test results. So it’s very hard to know what type of patients were
given these medications and whether they are comparable. And those are all challenges. I think once
we get more clinical detail in the databases and can link them, hopefully we can address
some of those issues.
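For readers who want to see the channeling problem the questioner raises, here is a toy illustration with invented numbers: the new drug goes preferentially to sicker patients, so the crude comparison is biased against it, while stratifying by severity (the simplest form of confounder adjustment) recovers a fair contrast.

```python
# Toy illustration of channeling bias, with invented numbers: the new
# drug is given preferentially to sicker patients, so a crude comparison
# is biased against it. Stratifying by disease severity recovers a fair
# within-stratum contrast.

patients = (
    [{"drug": "new", "severe": True,  "died": d} for d in [1]*30 + [0]*50] +
    [{"drug": "new", "severe": False, "died": d} for d in [1]*2  + [0]*18] +
    [{"drug": "old", "severe": True,  "died": d} for d in [1]*8  + [0]*12] +
    [{"drug": "old", "severe": False, "died": d} for d in [1]*8  + [0]*72]
)

def death_rate(rows):
    return sum(r["died"] for r in rows) / len(rows)

for drug in ("new", "old"):
    group = [p for p in patients if p["drug"] == drug]
    print(drug, "crude:", round(death_rate(group), 2))
    for severe in (True, False):
        stratum = [p for p in group if p["severe"] == severe]
        print("  severe" if severe else "  mild  ",
              round(death_rate(stratum), 3))

# Crude rates make the new drug look worse (0.32 vs 0.16), even though
# within each severity stratum the rates are similar (0.375 vs 0.40 in
# severe disease; 0.10 vs 0.10 in mild disease), because the new drug
# was channeled to sicker patients.
```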
Male Speaker:
Thank you very much. We now have time for —
