As patients, physicians fare nearly the same as everyone else
For patients, including patients who are physicians, knowledge isn’t power, according to investigators.
A literature review and retrospective analysis of more than 35,000 physicians treated as patients revealed minimal associations between level of medical knowledge and quality of health outcomes, reported Michael D. Frakes, PhD, of Duke University, Durham, N.C., and colleagues. The study findings stand in opposition to the “widely prevailing view” that information and medical knowledge among patients are integral to realizing high-quality, low-cost health care, the investigators noted.
“[This] research is particularly relevant to modern discussions and debates about the consumer-driven health care movement and the use of plans with high deductibles and high copayments to encourage greater patient and consumer involvement in health care decision making,” Dr. Frakes said in an interview. “Recent research has suggested that the financial incentives created by such structures discourage the use of both low-value care and high-value care. Some have argued that greater disclosure of information to patients may address this concern and steer patients towards high-value decisions. Our results cast doubt on the potential for information initiatives alone to meet this aim.”
The study is one of the first of its kind, the investigators noted in the National Bureau of Economic Research working paper. Other than a 2016 publication that found that physician mothers were less likely to have cesarean sections (Am Econ J: Econ Policy. 2016;8[1]:115-41), “there is no work which has been able to study the role of physicians as patients,” they wrote.
To fill this gap, the investigators turned to a unique data source: the Military Health System, which provides insurance to active and retired military personnel and their families. Military Health System spending exceeds $50 billion per year, constituting a major portion of American health care expenditures, and with more than 35,000 military physicians treated as patients, the dataset is highly relevant and powerful. The investigators objectively evaluated health outcomes by focusing on evidence-based, measurable clinical decisions deemed “high value” or “low value,” comparing how often physician and nonphysician patients made each choice.
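At its core, this design compares the rate of each high- or low-value choice between the physician and nonphysician groups. A minimal sketch of such a two-proportion comparison, with made-up counts (the study's actual models control for many more factors; the function name and numbers here are illustrative, not from the paper):

```python
from math import sqrt
from statistics import NormalDist


def two_proportion_z(count_a, n_a, count_b, n_b):
    """Two-sided z-test comparing the rate of a clinical choice in two groups."""
    p_a, p_b = count_a / n_a, count_b / n_b
    # Pooled proportion under the null hypothesis of equal rates
    p_pool = (count_a + count_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value


# Hypothetical counts: 80/100 physicians vs. 50/100 nonphysicians chose a
# high-value option; z comes out around 4.4, p well below .001.
z, p = two_proportion_z(80, 100, 50, 100)
print(z, p)
```

With equal rates in both groups the statistic is zero and the test is non-significant, which is essentially what the study found for most high-value measures.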
Coauthor Jonathan Gruber, PhD, of the Massachusetts Institute of Technology in Cambridge, Mass., explained this methodology in an interview. “The literature is clear that high-value care has positive health outcomes with relatively small increases in health care spending, and that low-value care has no impact on health outcomes with large increases in spending.”
“One concern with this analysis, of course, is that physicians may be of different health statuses and have different tastes for medical interventions than nonphysicians,” the investigators wrote. They addressed this problem in five ways: by focusing on widely accepted medical standards that apply to all patients; examining both high- and low-value care to eliminate one-sided bias; controlling for underlying health differences across groups; comparing physicians with other military officers to account for underlying tastes; and evaluating military officer dependents in comparison with physician dependents, the latter of whom may benefit from medical knowledge by virtue of a personal relationship.
“Our results suggest that physicians do only slightly better than nonphysicians,” the investigators wrote, “but not by much and not always.” Low-value care was slightly less common among physicians, but this difference was described as “modest.” Analysis of high-value care was more mixed, with some results supporting equivalence between groups and others pointing to a slightly higher rate of high-value care among physician patients.
“These results provide a rough boundary on the extent to which additional information disclosure [beyond prevailing levels] can be expected to improve the delivery of health care in the U.S.,” the investigators wrote. “[M]ost of the explanation behind the over- and underutilization of low- and high-value services likely arises from factors other than informational deficiencies of patients.”
“Perhaps one interpretation of these findings is that patients remain generally deferential to the care recommendations of their treating physicians, even in the case of near fully informed patients,” the investigators wrote, noting that this interpretation aligns with a recent working paper that found that physician preference plays a greater role than patient cost sharing in selecting the site of MRI scans.
Looking to the future, Dr. Gruber said that he and his colleagues plan to explore “what drives this lack of response among physicians [as patients].”
The study was funded by the National Institute on Aging. The investigators reported no conflicts of interest.
SOURCE: Frakes MD et al. Natl Bur Econ Res. 2019 Jul. doi: 10.3386/w26038.
This article was updated 8/6/19.
FROM THE NATIONAL BUREAU OF ECONOMIC RESEARCH
Depression, anxiety among elderly breast cancer survivors linked to increased opioid use, death
Mental health comorbidities increase the rates of opioid use and mortality among breast cancer survivors on endocrine therapy, based on a retrospective study of more than 10,000 patients in a Medicare-linked database.
Screen for mental health conditions in the early stages of cancer care and lean toward opioid alternatives for pain management, advised lead author Raj Desai, MS, of the University of Florida, Gainesville, and colleagues.
“The complex relationship among breast cancer, mental health problems, and the use of opioids is not well understood, despite the high prevalence of mental health comorbidities like depression and anxiety in breast cancer survivors, and the high rate of opioid use in those on AET [adjuvant endocrine therapy],” the investigators wrote in the Journal of Oncology Practice.
“Therefore, this study aimed to determine whether breast cancer survivors with varying levels of mental health comorbidities, such as depression and anxiety, are more likely to use opioids for AET-related pain,” they added.
The study involved 10,452 breast cancer survivors who first filled an AET prescription from 2006 to 2012 and had follow-up records available for at least 2 years. All patients had a diagnosis of incident, primary, hormone receptor–positive, stage I-III breast cancer. Data were drawn from the Surveillance, Epidemiology, and End Results–Medicare linked database. Records were evaluated for diagnoses of mental health conditions such as depression and anxiety, opioid use, and survival.
Analysis showed that the most common mental health conditions were depression and anxiety, diagnosed in 554 and 246 women, respectively. Patients with mental health comorbidities were compared with patients without them, using both unmatched and matched cohorts. While the unmatched comparison for opioid use was not statistically significant, the matched comparison showed that survivors with mental health comorbidities were 33% more likely to use opioids than those without (95% confidence interval, 1.06-1.68). Similarly, the adjusted probability of opioid use was greater in the mental health comorbidity cohort (72.5% vs. 66.9%; P = .01).
Concerning survival, the unmatched comparison revealed a 44% higher risk of death among women with depression and a 32% higher risk associated with anxiety. The matched comparison showed an even greater increase in mortality risk among women with any mental health comorbidity (49%; P less than .05).
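The percentages above are direct readings of the underlying ratio estimates. A quick arithmetic sketch of how a hazard or odds ratio maps to a percent change, using figures from the article (the helper names are illustrative, not from the study's code):

```python
def pct_change_from_ratio(ratio):
    """A ratio of 1.33 reads as a 33% increase; 0.75 would read as a 25% decrease."""
    return (ratio - 1.0) * 100.0


def ci_excludes_null(lower, upper):
    """A ratio estimate is conventionally significant when its 95% CI excludes 1."""
    return lower > 1.0 or upper < 1.0


# Matched opioid-use comparison: ratio 1.33, 95% CI 1.06-1.68
print(round(pct_change_from_ratio(1.33)))  # 33 -> "33% more likely"
print(ci_excludes_null(1.06, 1.68))        # True -> statistically significant

# Matched mortality comparison: ratio 1.49 -> "49% increased risk"
print(round(pct_change_from_ratio(1.49)))  # 49
```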
The investigators concluded that opioid use among breast cancer survivors with mental health comorbidities “remains a significant problem.”
“A need exists for collaborative care in the management of mental health comorbidities in women with breast cancer, which could improve symptoms, adherence to treatment, and recovery from these mental conditions,” the investigators wrote. “Mental health treatments also are recommended to be offered in primary care, which not only would be convenient for patients, but also would reduce the stigma associated with treatments for mental health comorbidities and improve the patient-provider relationship.”
The investigators reported financial relationships with Merck.
SOURCE: Desai R et al. J Oncol Pract. 2019 Jul 19. doi: 10.1200/JOP.18.00781.
FROM THE JOURNAL OF ONCOLOGY PRACTICE
Intravenous CNS chemo looks best for testicular DLBCL
For patients with testicular diffuse large B-cell lymphoma (T-DLBCL), intravenous central nervous system (CNS)–directed chemotherapy and prophylactic treatment of the contralateral testis offer the best survival outcomes, according to findings from a recent retrospective analysis.
In contrast, intrathecal chemotherapy offered no benefit, reported lead author Susanna Mannisto, MD, of Helsinki University Hospital in Finland, and her colleagues. Survival advantages gained by CNS-directed chemotherapy were generally due to control of systemic disease rather than prevention of CNS progression, they noted.
Randomized trials have not been conducted specifically in T-DLBCL, and treatment guidelines are currently based on phase 2 trials. Treatment for T-DLBCL typically involves cyclophosphamide, doxorubicin, vincristine, prednisolone plus rituximab (R-CHOP), with the addition of CNS prophylaxis with either intravenous or intrathecal methotrexate or cytarabine (AraC) in eligible patients. “Thus far, however, no prospective randomised studies on the benefit of this approach have been published,” the investigators wrote. The study was published in the European Journal of Cancer.
They drew data from the Danish Lymphoma Registry and three Southern Finland University Hospitals. Out of 235 patients diagnosed with T-DLBCL between 1987 and 2013, 192 (82%) were treated with curative anthracycline-based chemotherapy. As some cases dated back to 1987, about one-third (36%) were treated before rituximab was available. A slightly higher proportion (40%) received intravenous CNS-directed chemotherapy. Due to data availability, 189 patients were included in the survival analysis.
Results of multivariate analyses showed that CNS-directed chemotherapy (intravenous methotrexate, high-dose AraC, or both), was associated with an approximately 58% decreased risk of death across the entire population (hazard ratio [HR] for overall survival, 0.419; 95% confidence interval, 0.256-0.686; P = .001), with elderly patients realizing the greatest benefit.
“[I]t is important to note that we did not observe reduction in the CNS recurrence rate, but rather a better control of the systemic disease, suggesting that R–CHOP alone may not be a sufficient therapy for the patients with testicular DLBCL involvement,” the investigators wrote.
Intrathecal CNS-targeted therapy did not correlate with improved survival in the study. Likewise, for the entire population, rituximab did not boost survival; however, for high-risk patients (International Prognostic Index, 3-5), immunochemotherapy did provide a better rate of 5-year disease-specific survival than conventional chemotherapy (44% vs. 14%; P = .019).
Treatment of the contralateral testis was worthwhile for the entire population, with an approximately 49% decreased risk of death (HR for overall survival, 0.514; 95% CI, 0.338-0.782; P = .002).
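For readers translating the hazard ratios, an HR below 1 implies a (1 − HR) × 100% reduction in risk. A minimal arithmetic check using the ratios quoted above (the helper name is illustrative, not from the study):

```python
def risk_reduction_pct(hazard_ratio):
    """Percent reduction in the hazard of death implied by an HR below 1."""
    return (1.0 - hazard_ratio) * 100.0


# CNS-directed chemotherapy: HR 0.419 -> ~58% reduction
print(round(risk_reduction_pct(0.419)))  # 58

# Contralateral testis treatment: HR 0.514 -> ~49% reduction
print(round(risk_reduction_pct(0.514)))  # 49
```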
The shortest overall survival times, regardless of treatment type, were reported among patients with poor performance status, elevated lactate dehydrogenase (LDH), primary T-DLBCL, more extranodal sites, or a higher International Prognostic Index, and among patients older than 70 years.
“[O]ur results recommend aggressive chemotherapy with [intravenous high-dose methotrexate and/or high-dose AraC] for better control of systemic disease for eligible patients,” the investigators concluded. “In addition, treatment of the contralateral testis with either radiotherapy or orchiectomy should be included in the management strategy of patients with T-DLBCL.”
The study was funded by the Academy of Finland, the Finnish Cancer Foundation, the Sigrid Juselius Foundation, and others. The investigators reported additional relationships with Pfizer, Takeda, Roche, Sanofi, and others.
SOURCE: Mannisto S et al. Eur J Cancer. 2019 May 10. doi: 10.1016/j.ejca.2019.04.004.
For patients with testicular diffuse large B-cell lymphoma (T-DLBCL), intravenous central nervous system (CNS)–directed chemotherapy and prophylactic treatment of the contralateral testis offer the best survival outcomes, according to findings from a recent retrospective analysis.
In contrast, intrathecal chemotherapy offered no benefit, reported lead author Susanna Mannisto, MD, of Helsinki University Hospital in Finland, and her colleagues. Survival advantages gained by CNS-directed chemotherapy were generally due to control of systemic disease rather than prevention of CNS progression, they noted.
No randomized trials have been conducted specifically in T-DLBCL, and treatment guidelines are currently based on phase 2 trials. Treatment for T-DLBCL typically involves cyclophosphamide, doxorubicin, vincristine, prednisolone plus rituximab (R–CHOP), with the addition of CNS prophylaxis with either intravenous or intrathecal methotrexate or cytarabine (AraC) in eligible patients. “Thus far, however, no prospective randomised studies on the benefit of this approach have been published,” the investigators wrote. The study is in the European Journal of Cancer.
They drew data from the Danish Lymphoma Registry and three university hospitals in southern Finland. Of the 235 patients diagnosed with T-DLBCL between 1987 and 2013, 192 (82%) were treated with curative anthracycline-based chemotherapy. As some cases dated back to 1987, about one-third (36%) were treated before rituximab was available. A slightly higher proportion (40%) received intravenous CNS-directed chemotherapy. Because of data availability, 189 patients were included in the survival analysis.
Results of multivariate analyses showed that CNS-directed chemotherapy (intravenous methotrexate, high-dose AraC, or both) was associated with an approximately 58% decreased risk of death across the entire population (hazard ratio [HR] for overall survival, 0.419; 95% confidence interval, 0.256-0.686; P = .001), with elderly patients realizing the greatest benefit.
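The percentage figures quoted alongside these hazard ratios come from simple arithmetic: a hazard ratio below 1 implies a (1 − HR) × 100% decrease in the risk of the event. A minimal sketch (the helper name is ours, not the study's):

```python
# Illustrative arithmetic only: how a hazard ratio (HR) maps to the
# "percent decreased risk" figures quoted in the text.

def risk_reduction_pct(hazard_ratio: float) -> float:
    """Percent decrease in risk implied by a hazard ratio < 1."""
    return (1.0 - hazard_ratio) * 100.0

# HR of 0.419 for overall survival -> roughly 58% decreased risk of death
print(round(risk_reduction_pct(0.419)))  # 58
```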
“[I]t is important to note that we did not observe reduction in the CNS recurrence rate, but rather a better control of the systemic disease, suggesting that R–CHOP alone may not be a sufficient therapy for the patients with testicular DLBCL involvement,” the investigators wrote.
Intrathecal CNS-targeted therapy did not correlate with improved survival in the study. Likewise, for the entire population, rituximab did not boost survival; however, for high-risk patients (International Prognostic Index, 3-5), immunochemotherapy did provide a better rate of 5-year disease-specific survival than conventional chemotherapy (44% vs. 14%; P = .019).
Treatment of the contralateral testis was worthwhile for the entire population, with an approximately 49% decreased risk of death (HR for overall survival, 0.514; 95% CI, 0.338-0.782; P = .002).
Shortest overall survival times, regardless of treatment type, were reported among patients with poor performance status, elevated lactate dehydrogenase (LDH), primary T-DLBCL, more extranodal sites, higher International Prognostic Index scores, and age older than 70 years.
“[O]ur results recommend aggressive chemotherapy with [intravenous high-dose methotrexate and/or high-dose AraC] for better control of systemic disease for eligible patients,” the investigators concluded. “In addition, treatment of the contralateral testis with either radiotherapy or orchiectomy should be included in the management strategy of patients with T-DLBCL.”
The study was funded by the Academy of Finland, the Finnish Cancer Foundation, the Sigrid Juselius Foundation, and others. The investigators reported additional relationships with Pfizer, Takeda, Roche, Sanofi, and others.
SOURCE: Mannisto S et al. Eur J Cancer. 2019 May 10. doi: 10.1016/j.ejca.2019.04.004.
FROM EUROPEAN JOURNAL OF CANCER
COPD eosinophil counts predict steroid responders
Triple therapy with an inhaled corticosteroid is particularly helpful for patients with chronic obstructive pulmonary disease (COPD) who have high baseline eosinophil counts, a trial involving more than 10,000 patients found.
Former smokers received greater benefit from inhaled corticosteroids (ICS) than did current smokers, reported lead author Steven Pascoe, MBBS, of GlaxoSmithKline and colleagues. The investigators noted that these findings can help personalize therapy for patients with COPD, which can be challenging to treat because of its heterogeneity. The study was published in Lancet Respiratory Medicine.
The phase 3 IMPACT trial compared single-inhaler fluticasone furoate–umeclidinium–vilanterol with umeclidinium-vilanterol and fluticasone furoate–vilanterol in patients with moderate to very severe COPD at high risk of exacerbation. Of the 10,333 patients involved, approximately one-quarter (26%) had one or more severe exacerbations in the previous year and half (47%) had two or more moderate exacerbations in the same time period. All patients were symptomatic and were aged 40 years or older. A variety of baseline and demographic patient characteristics were recorded, including blood eosinophil count, smoking status, and others. Responses to therapy were measured with trough forced expiratory volume in 1 second (FEV1), symptom scoring, and a quality of life questionnaire.
After 52 weeks, results showed that higher baseline eosinophil counts were associated with progressively greater benefits in favor of triple therapy. For patients with baseline blood eosinophil counts of at least 310 cells per mcL, triple therapy was associated with about half as many moderate and severe exacerbations as treatment with umeclidinium-vilanterol (rate ratio = 0.56; 95% confidence interval, 0.47-0.66). For patients with less than 90 cells per mcL at baseline, the rate ratio for the same two regimens was 0.88, but with a confidence interval crossing 1 (0.74-1.04). For fluticasone furoate–vilanterol vs. umeclidinium-vilanterol, high baseline eosinophil count again demonstrated its predictive power for ICS efficacy, with an associated rate ratio of 0.56 (0.47-0.66), compared with 1.09 (0.91-1.29) for patients below the lower threshold. Symptom scoring, quality of life, and FEV1 followed a similar trend, although the investigators noted that this was “less marked” for FEV1. Although the trend held regardless of smoking status, benefits were more pronounced among former smokers than current smokers.
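The subgroup pattern above can be read as a rough decision table. The sketch below is illustrative only, not a clinical algorithm: it simply encodes the thresholds reported in the trial (clear benefit at ≥310 cells/mcL, no distinguishable benefit below ~90, and no observed ICS benefit in current smokers below roughly 200), with labels of our own choosing.

```python
# Illustrative sketch (NOT a clinical tool): encodes the eosinophil-count
# thresholds reported in the IMPACT subgroup analyses described above.

def expected_ics_benefit(eos_per_mcl: float, current_smoker: bool) -> str:
    """Rough read-out of expected ICS benefit from the reported subgroups."""
    if current_smoker and eos_per_mcl < 200:
        return "no observed benefit"          # current smokers, low counts
    if eos_per_mcl >= 310:
        return "largest benefit"              # rate ratio ~0.56 vs. dual therapy
    if eos_per_mcl < 90:
        return "benefit not distinguishable from none"  # CI crosses 1
    return "intermediate benefit"

print(expected_ics_benefit(350, current_smoker=False))  # largest benefit
```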
“In former smokers, ICS benefits were observed at all blood eosinophil counts when comparing triple therapy with umeclidinium-vilanterol, whereas in current smokers no ICS benefit was observed at lower eosinophil counts, less than approximately 200 eosinophils per [mcL],” the investigators wrote.
“Overall, these results show the potential use of blood eosinophil counts in conjunction with smoking status to predict the magnitude of ICS response within a dual or triple-combination therapy,” the investigators concluded. “Future approaches to the pharmacological management of COPD should move beyond the simple dichotomization of each clinical or biomarker variable, toward more complex algorithms that integrate the interactions between important variables including exacerbation history, smoking status, and blood eosinophil counts.”
The study was funded by GlaxoSmithKline. The investigators disclosed additional relationships with AstraZeneca, Boehringer Ingelheim, Chiesi, CSA Medical, and others.
SOURCE: Pascoe S et al. Lancet Resp Med. 2019 Jul 4. doi: 10.1016/S2213-2600(19)30190-0.
FROM LANCET RESPIRATORY MEDICINE
Frontline pembro + chemo shows superiority against NSCLC
A combination of pembrolizumab and platinum chemotherapy is the most effective frontline treatment for patients with non–small cell lung cancer (NSCLC), according to a retrospective analysis of more than 16,000 patients.
The study showed that responses to therapy are best predicted by an aggregate of tumor mutational burden (TMB), programmed cell death ligand 1 (PD-L1) expression, and proportion of CD8+ T-cell tumor-infiltrating lymphocytes (TILs), reported Yunfang Yu, MD, of Sun Yat-sen University, Guangzhou, China, and colleagues.
“[I]mmunotherapy has produced inconsistent results in previous randomized clinical trials,” the investigators wrote, citing inconsistent survival outcomes in CheckMate-026 and publications by Takayama and Wu. “Moreover, independent immune-related biomarkers that are currently used, such as PD-L1 and TMB, have achieved clinical relevance for a selection of patients to some extent, but to our knowledge, they are still far from clear and established.” The report is in JAMA Network Open.
To gain clarity, the investigators performed a large-scale meta-analysis (n = 14,395) and individual patient-level study (n = 1,833) involving patients with NSCLC. Data were drawn from a variety of sources, including PubMed, EMBASE, Cochrane, conference proceedings, and others. Primary outcomes were median overall survival and progression-free survival. Secondary outcomes were objective response rate and durable clinical benefit. The Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) was used as a reporting guideline.
Analysis showed the superiority of immunotherapy to conventional therapy with significantly extended median overall survival and progression-free survival, both with an immunotherapy-favoring hazard ratio (HR) of 0.76 (P less than .001). Immunotherapy survival advantages were also reported individually for checkpoint inhibitors (HR, 0.75), tumor vaccines (HR, 0.83), and cellular immunotherapy (HR, 0.40). Of these three, checkpoint inhibitors and tumor vaccines showed superiority for progression-free survival. For first-line therapy, a combination of pembrolizumab and chemotherapy was associated with better progression-free and overall survival than were other immunotherapies.
For patients treated with checkpoint inhibitors, higher levels of PD-L1 expression, TMB, or neo-antigen burden (NAB) were each prognostically valuable; however, the most powerful predictive tool was a combination of PD-L1 expression, TMB, and proportion of CD8+ T-cell TILs, with a 3-year overall survival area under the curve of 0.659. In addition, RYR1 and MGAM mutations were independently associated with durable clinical benefits.
“Future development of an optimized, integrated predictive model for immunotherapy should consider the integration of multiple approaches involving biomarkers associated with the T cell–inflamed tumor microenvironment, such as PD-L1 expression, ICs, and those associated with tumor neoepitope burden,” the investigators wrote.
The study was funded by the National Natural Science Foundation of China, Natural Science Foundation of Guangdong Province, and Guangzhou Science and Technology Program. The investigators disclosed no conflicts of interest.
SOURCE: Yu Y et al. JAMA Netw Open. 2019 Jul 10. doi: 10.1001/jamanetworkopen.2019.6879.
FROM JAMA NETWORK OPEN
HCC surveillance after anti-HCV therapy cost effective only for patients with cirrhosis
For patients with hepatitis C virus (HCV)–related cirrhosis (F4), but not those with advanced fibrosis (F3), hepatocellular carcinoma (HCC) surveillance after a sustained virologic response (SVR) is cost effective, according to investigators.
Current international guidelines call for HCC surveillance among all patients with advanced fibrosis (F3) or cirrhosis (F4) who have achieved SVR, but this is “very unlikely to be cost effective,” reported lead author Hooman Farhang Zangneh, MD, of Toronto General Hospital and colleagues. “HCV-related HCC rarely occurs in patients without cirrhosis,” the investigators explained in Clinical Gastroenterology and Hepatology. “With cirrhosis present, HCC incidence is 1.4% to 4.9% per year. If found early, options for curative therapy include radiofrequency ablation (RFA), surgical resection, and liver transplantation.”
The investigators developed a Markov model to determine which at-risk patients could undergo surveillance while remaining below willingness-to-pay thresholds. Specifically, cost-effectiveness was assessed for ultrasound screenings annually (every year) or biannually (twice a year) among patients with advanced fibrosis (F3) or compensated cirrhosis (F4) who were aged 50 years and had an SVR. Relevant data were drawn from expert opinions, medical literature, and Canada Life Tables. Various HCC incidence rates were tested, including a constant annual rate, rates based on type of antiviral treatment (direct-acting and interferon-based therapies), others based on stage of fibrosis, and another that increased with age. The model was validated by applying it to patients with F3 or F4 fibrosis who had not yet achieved an SVR. All monetary values were reported in 2015 Canadian dollars.
Representative of current guidelines, the investigators first tested costs when conducting surveillance among all patients with F3 or F4 fibrosis with an assumed constant HCC annual incidence rate of 0.5%. Biannual ultrasound surveillance after SVR caught more cases of HCC still in a curable stage (78%) than no surveillance (29%); however, false-positives were relatively common at 21.8% and 15.7% for biannual and annual surveillance, respectively. The investigators noted that in the real world, some of these false-positives are not detected by more advanced imaging, so patients go on to receive unnecessary RFA, which incurs additional costs. Partly for this reason, while biannual surveillance was more effective, it was also more expensive, with an incremental cost-effectiveness ratio (ICER) of $106,792 per quality-adjusted life-years (QALY), compared with $72,105 per QALY for annual surveillance.
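The ICER figures above all come from one standard formula: the difference in cost between two strategies divided by the difference in QALYs gained. A minimal sketch, using made-up placeholder costs and QALYs rather than values from the study:

```python
# Standard cost-effectiveness formula; the example inputs below are
# hypothetical placeholders, not figures from the Markov model.

def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per QALY gained."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# e.g. a surveillance strategy costing $12,000 more per patient and
# yielding 0.15 extra QALYs works out to $80,000 per QALY
print(round(icer(20_000, 8_000, 10.55, 10.40)))  # 80000
```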
Including only patients with F3 fibrosis after interferon-based therapy, using an HCC incidence of 0.23%, biannual and annual ICERs rose to $484,160 and $204,708 per QALY, respectively, both of which exceed standard willingness-to-pay thresholds. In comparison, biannual and annual ICERs were at most $55,850 and $42,305 per QALY, respectively, among patients with cirrhosis before interferon-induced SVR, using an HCC incidence rate of up to 1.39% per year.
“These results suggest that biannual (or annual) HCC surveillance is likely to be cost effective for patients with cirrhosis, but not for patients with F3 fibrosis before SVR,” the investigators wrote.
ICERs for HCC surveillance among patients with cirrhosis after direct-acting antiviral–induced SVR were lower still, at $43,229 and $34,307 per QALY, far below those for patients with F3 fibrosis ($188,157 and $111,667 per QALY).
Focusing on the evident savings associated with surveillance of patients with cirrhosis, the investigators tested two diagnostic thresholds within this population with the aim of reducing costs further. They found that surveillance of patients with a pretreatment aspartate aminotransferase to platelet ratio index (APRI) greater than 2.0 (HCC incidence, 0.89%) was associated with biannual and annual ICERs of $48,729 and $37,806 per QALY, respectively, but when APRI was less than 2.0 (HCC incidence, 0.093%), surveillance was less effective and more expensive than no surveillance at all. A similar trend was found for an FIB-4 threshold of 3.25.
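The two indices used for that stratification have standard published formulas: APRI divides AST (expressed relative to its upper limit of normal) by the platelet count, and FIB-4 combines age, AST, ALT, and platelets. The lab values below are invented for illustration; only the formulas and the 2.0 and 3.25 cutoffs from the text are standard.

```python
import math

# Standard formulas for the two fibrosis indices named in the text.
# Example inputs are invented; platelets are in 10^9/L, AST/ALT in U/L.

def apri(ast: float, ast_uln: float, platelets: float) -> float:
    """AST-to-platelet ratio index: (AST / ULN) x 100 / platelets."""
    return (ast / ast_uln) * 100.0 / platelets

def fib4(age: float, ast: float, alt: float, platelets: float) -> float:
    """FIB-4 index: (age x AST) / (platelets x sqrt(ALT))."""
    return (age * ast) / (platelets * math.sqrt(alt))

# Invented example: AST 120 (ULN 40), ALT 100, platelets 110, age 55
print(round(apri(120, 40, 110), 2))       # 2.73 -> above the 2.0 cutoff
print(round(fib4(55, 120, 100, 110), 2))  # 6.0  -> above the 3.25 cutoff
```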
Employment of age-stratified risk of HCC also reduced costs of screening for patients with cirrhosis. With this strategy, ICER was $48,432 per QALY for biannual surveillance and $37,201 per QALY for annual surveillance.
“These data suggest that, if we assume HCC incidence increases with age, biannual or annual surveillance will be cost effective for the vast majority, if not all, patients with cirrhosis before SVR,” the investigators wrote.
“Our analysis suggests that HCC surveillance is very unlikely to be cost effective in patients with F3 fibrosis, whereas both annual and biannual modalities are likely to be cost effective at standard willingness-to-pay thresholds for patients with cirrhosis compared with no surveillance,” the investigators wrote.
“Additional long-term follow-up data are required to help identify patients at highest risk of HCC after SVR to tailor surveillance guidelines,” the investigators concluded.
The study was funded by the Toronto Centre for Liver Disease. The investigators declared no conflicts of interest.
This story was updated on 7/12/2019.
SOURCE: Zangneh et al. Clin Gastroenterol Hepatol. 2018 Dec 20. doi: 10.1016/j.cgh.2018.12.018.
For patients with hepatitis C virus (HCV)–related cirrhosis (F4), but not those with advanced fibrosis (F3), hepatocellular carcinoma (HCC) surveillance after a sustained virologic response (SVR) is cost effective, according to investigators.
Current international guidelines call for HCC surveillance among all patients with advanced fibrosis (F3) or cirrhosis (F4) who have achieved SVR, but this is “very unlikely to be cost effective,” reported lead author Hooman Farhang Zangneh, MD, of Toronto General Hospital and colleagues. “HCV-related HCC rarely occurs in patients without cirrhosis,” the investigators explained in Clinical Gastroenterology and Hepatology. “With cirrhosis present, HCC incidence is 1.4% to 4.9% per year. If found early, options for curative therapy include radiofrequency ablation (RFA), surgical resection, and liver transplantation.”
The investigators developed a Markov model to determine which at-risk patients could undergo surveillance while remaining below willingness-to-pay thresholds. Specifically, cost-effectiveness was assessed for ultrasound screenings annually (every year) or biannually (twice a year) among patients with advanced fibrosis (F3) or compensated cirrhosis (F4) who were aged 50 years and had an SVR. Relevant data were drawn from expert opinions, medical literature, and Canada Life Tables. Various HCC incidence rates were tested, including a constant annual rate, rates based on type of antiviral treatment (direct-acting and interferon-based therapies), others based on stage of fibrosis, and another that increased with age. The model was validated by applying it to patients with F3 or F4 fibrosis who had not yet achieved an SVR. All monetary values were reported in 2015 Canadian dollars.
To reflect current guidelines, the investigators first tested costs of surveillance among all patients with F3 or F4 fibrosis, assuming a constant annual HCC incidence rate of 0.5%. Biannual ultrasound surveillance after SVR caught more cases of HCC still in a curable stage (78%) than no surveillance (29%); however, false-positives were relatively common, at 21.8% and 15.7% for biannual and annual surveillance, respectively. The investigators noted that, in the real world, some of these false-positives are not resolved by more advanced imaging, so patients go on to receive unnecessary RFA, which incurs additional costs. Partly for this reason, although biannual surveillance was more effective, it was also more expensive, with an incremental cost-effectiveness ratio (ICER) of $106,792 per quality-adjusted life-year (QALY), compared with $72,105 per QALY for annual surveillance.
When the analysis was restricted to patients with F3 fibrosis after interferon-based therapy, using an HCC incidence of 0.23% per year, biannual and annual ICERs rose to $484,160 and $204,708 per QALY, respectively, both of which exceed standard willingness-to-pay thresholds. In comparison, biannual and annual ICERs were at most $55,850 and $42,305 per QALY, respectively, among patients with cirrhosis before interferon-induced SVR, using an HCC incidence rate of up to 1.39% per year.
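The ICER figures quoted throughout follow standard arithmetic: incremental cost divided by incremental benefit, judged against a willingness-to-pay threshold. A minimal sketch, in which all dollar and QALY values, including the threshold, are hypothetical:

```python
def icer(cost_new, cost_old, qaly_new, qaly_old):
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

def cost_effective(icer_value, wtp=50_000):
    """Compare against a willingness-to-pay threshold (assumed $50,000/QALY)."""
    return icer_value <= wtp

# Hypothetical numbers: surveillance adds $1,000 in cost and 0.25 QALYs.
ratio = icer(cost_new=1_500, cost_old=500, qaly_new=1.75, qaly_old=1.50)
print(ratio)                  # -> 4000.0
print(cost_effective(ratio))  # -> True
```

The same comparison explains the paper's conclusions: a strategy like biannual F3 surveillance, with an ICER of $484,160 per QALY, sits far above any conventional threshold, while cirrhosis surveillance falls below it.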
“These results suggest that biannual (or annual) HCC surveillance is likely to be cost effective for patients with cirrhosis, but not for patients with F3 fibrosis before SVR,” the investigators wrote.
ICERs for HCC surveillance among patients with cirrhosis after direct-acting antiviral–induced SVR were lower still, at $43,229 and $34,307 per QALY, far below the corresponding figures of $188,157 and $111,667 per QALY for patients with F3 fibrosis.
Focusing on the evident savings associated with surveillance of patients with cirrhosis, the investigators tested two diagnostic thresholds within this population with the aim of reducing costs further. They found that surveillance of patients with a pretreatment aspartate aminotransferase to platelet ratio index (APRI) greater than 2.0 (HCC incidence, 0.89%) was associated with biannual and annual ICERs of $48,729 and $37,806 per QALY, respectively, but when APRI was less than 2.0 (HCC incidence, 0.093%), surveillance was less effective and more expensive than no surveillance at all. A similar trend was found for an FIB-4 threshold of 3.25.
Use of age-stratified HCC risk also reduced screening costs for patients with cirrhosis: with this strategy, the ICER was $48,432 per QALY for biannual surveillance and $37,201 per QALY for annual surveillance.
“These data suggest that, if we assume HCC incidence increases with age, biannual or annual surveillance will be cost effective for the vast majority, if not all, patients with cirrhosis before SVR,” the investigators wrote.
“Our analysis suggests that HCC surveillance is very unlikely to be cost effective in patients with F3 fibrosis, whereas both annual and biannual modalities are likely to be cost effective at standard willingness-to-pay thresholds for patients with cirrhosis compared with no surveillance,” the investigators wrote.
“Additional long-term follow-up data are required to help identify patients at highest risk of HCC after SVR to tailor surveillance guidelines,” the investigators concluded.
The study was funded by the Toronto Centre for Liver Disease. The investigators declared no conflicts of interest.
This story was updated on 7/12/2019.
SOURCE: Zangneh et al. Clin Gastroenterol Hepatol. 2018 Dec 20. doi: 10.1016/j.cgh.2018.12.018.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Genomic study reveals five subtypes of colorectal cancer
Colorectal cancer can be divided into five DNA methylation subtypes that predict molecular and clinical behavior and may offer future therapeutic targets, according to investigators.
In 216 unselected colorectal cancers, five subtypes of the CpG island methylator phenotype (CIMP) showed “striking” associations with sex, age, and tumor location, reported lead author Lochlan Fennell, MD, of the QIMR Berghofer Medical Research Institute in Queensland, Australia, and colleagues. CIMP level increased with age in a stepwise fashion, they noted.
Further associations between CIMP subtype and BRAF mutation status support the investigators’ recent report that sessile serrated adenomas are rare in young patients and pose little risk of malignancy. With additional research, these findings could “inform the development of patient-centric surveillance for young and older patients who present with sessile serrated adenomas,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology.
“CIMP can be detected using a standardized marker panel to stratify tumors as CIMP-high, CIMP-low, or CIMP-negative,” the investigators wrote. In the present study, they expanded these three existing subtypes into five, allowing for better prediction of clinical and molecular characteristics associated with disease progression.
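The three-tier stratification quoted above is typically rule-based: a tumor is scored on a small methylation marker panel and binned by how many markers are methylated. The sketch below uses one commonly published convention (a five-marker panel with CIMP-high at three or more methylated markers); the marker names and cutoffs are illustrative of such panels, not necessarily this study's.

```python
# Rule-based CIMP stratification from a methylation marker panel.
# Marker names and cutoffs follow one commonly published convention;
# they are illustrative, not necessarily this study's panel.
PANEL = ("CACNA1G", "IGF2", "NEUROG1", "RUNX3", "SOCS1")

def classify_cimp(methylated: set) -> str:
    n = sum(marker in methylated for marker in PANEL)
    if n >= 3:
        return "CIMP-high"
    if n >= 1:
        return "CIMP-low"
    return "CIMP-negative"

print(classify_cimp({"RUNX3", "SOCS1", "IGF2"}))  # -> CIMP-high
print(classify_cimp(set()))                       # -> CIMP-negative
```

The five-subtype scheme reported here replaces this coarse counting rule with unsupervised clustering over genome-wide methylation arrays, which is what splits the high and low tiers each into two clusters.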
Initial genomic testing showed that 13.4% of cases carried a BRAF V600E mutation, 34.7% were mutated at KRAS codon 12 or 13, and almost half of the patients (42.2%) had a TP53 mutation. Sorted into the three previously described subtypes, CIMP-negative was most common (68.5%), followed by CIMP-low (20.4%) and CIMP-high (11.1%). About two-thirds (66%) of BRAF-mutant cancers were CIMP-high, compared with just 3% of BRAF wild-type cases (P less than .0001). KRAS-mutated cases were more often CIMP-low than were KRAS wild-type cancers (34.6% vs. 12.8%; P less than .001).
With use of Illumina HumanMethylation450 BeadChip arrays and recursively partitioned mixture model clustering, five methylation clusters were identified: CIMP-H1 and CIMP-H2 (high methylation), CIMP-L1 and CIMP-L2 (intermediate methylation), and CIMP-negative (low methylation). As described above, methylation level rose stepwise with patient age, from CIMP-negative (61.9 years) to CIMP-H1 (75.2 years). The investigators also reported unique characteristics of each new subtype. For instance, the CIMP-H1 cluster had many features in common with serrated neoplasia, such as BRAF mutation positivity (73.9%; P less than .0001).
“BRAF mutations are a hallmark of the serrated neoplasia pathway, and indicate that these cancers probably arose in serrated precursor lesions,” the investigators wrote. “We previously showed that the colonoscopic incidence of sessile serrated adenomas does not differ between patients aged in their 30s and patients who are much older, whereas BRAF mutant cancers were restricted to older individuals, suggesting these BRAF mutant polyps may have limited malignant potential in young patients.”
In contrast to CIMP-H1 cases, CIMP-H2 cancers were more often KRAS mutant (54.5% vs. 17.4%). Other findings revealed associations between subtype and location; for example, CIMP-L1 cases were distributed equally between the distal and proximal colon, whereas CIMP-L2 cases more often localized to the distal colon and rectum. Of note, most CIMP-negative cancers (62.3%) occurred in the distal colon, and none had a BRAF mutation.
The five methylation subtypes also showed associations with consensus molecular subtypes (CMS) to varying degrees. The two strongest correlations were found in CIMP-H1 cancers and CIMP-H2 cancers, which were most frequently classified as CMS1 (69.6%) and CMS3 (54.5%), respectively.
Using CIBERSORT, the investigators detected a variety of associations between the five subtypes and stromal immune cell composition. For example, CIMP-H1 cases were enriched for macrophages, compared with the other subtypes, except CIMP-L2. Mast cells showed a stepwise relationship with subtype; they contributed the most to the immune microenvironment of CIMP-negative cancers and the least to cases classified as CIMP-H1. A converse trend was found with natural killer cells.
Of note, in CIMP-H1 and CIMP-H2 cancers, oncogenes were significantly more likely than tumor-suppressor genes to undergo gene body methylation, which is positively correlated with gene expression, and oncogenes in these subtypes had significantly greater gene body methylation than normal colonic mucosa.
“The five subtypes identified in this study are highly correlated with key clinical and molecular features, including patient age, tumor location, microsatellite instability, and oncogenic mitogen-activated protein kinase mutations,” they wrote. “We show that cancers with high DNA methylation show an increased preponderance for mutating genes involved in epigenetic regulation, and namely those that are implicated in the chromatin remodeling process.”
Concluding, the investigators explained the role of their research in future therapy development. “Our analyses have identified potentially druggable vulnerabilities in cancers of different methylation subtypes,” they wrote. “Inhibitors targeting synthetic lethalities, such as SWI/SNF component inhibitors for those with ARID mutations, should be evaluated because these agents may be clinically beneficial to certain patient subsets.”
The study was funded by the National Health and Medical Research Council, the US National Institutes of Health, Pathology Queensland, and others. The investigators disclosed no conflicts of interest.
SOURCE: Fennell L et al. Cell Mol Gastroenterol Hepatol. 2019 Apr 4. doi: 10.1016/j.jcmgh.2019.04.002.
Genomic, epigenomic, and transcriptomic information has revealed molecular subclasses of colorectal cancer (CRC), refining our understanding of the molecular and cellular biology of CRC and improving our treatment of patients. Several reliable and clinically useful molecular subtypes have been identified, including microsatellite-unstable (MSI), chromosomal-unstable (CIN), CpG island methylator phenotype (CIMP), and consensus molecular subtypes (CMS) 1-4. Despite these substantial advances, it is also clear that we still only partially grasp the molecular and cellular biology driving CRC.
The studies by Fennell et al. provide new insights into the CIMP subtype of CRC that address this knowledge gap. Using a large CRC cohort and more detailed molecular information than was available in prior studies, they have identified previously unrecognized CIMP subtypes with unique methylomes and mutation patterns. These five CIMP subclasses vary with regard to location in the colon; frequency of MSI and of mutations in KRAS and BRAF; and alterations in epigenetic regulatory genes. The observed differences in frequencies of MSI and of KRAS and BRAF mutations help demystify the heterogeneity in clinical and cellular behavior seen in the broader class of CIMP cancers. Perhaps most importantly, their studies identify plausible driver alterations unique to the CIMP subclasses, such as subclass-specific mutations in epigenetic regulatory genes and activated oncogenes. These are promising novel targets for chemoprevention strategies and therapies. Fennell and colleagues have now set the stage for functional studies of these molecular alterations to determine their true role in the cellular and clinical behavior of CRC.
William M. Grady, MD, is the Rodger C. Haggitt Professor of Medicine, department of medicine, division of gastroenterology, University of Washington School of Medicine, and clinical research division, Fred Hutchinson Cancer Research Center, Seattle. He is an advisory board member for Freenome and SEngine; has consulted for DiaCarta, Boehringer Ingelheim, and Guardant Health; and has conducted industry-sponsored research for Janssen and Cambridge Epigenetix.
Genomic, epigenomic, and transcriptomic information has revealed molecular subclasses of CRC, which has refined our understanding of the molecular and cellular biology of CRC and improved our treatment of patients with CRC. Several reliable and clinically useful molecular subtypes of colorectal cancer have been identified, including microsatellite unstable (MSI), chromosomal unstable (CIN), CpG island methylator phenotype (CIMP), and CMS 1-4 subtypes. Despite these substantial advances, it is also clear that we still only partially grasp the molecular and cellular biology driving CRC.
The studies by Fennell et al. provide new insights into the CIMP subtype of CRC that address this knowledge gap. Using a large CRC cohort and more detailed molecular information than available in prior studies, they have identified previously unrecognized CRC CIMP subtypes that have unique methylomes and mutation patterns. These 5 CIMP subclasses vary with regard to location in the colon, frequency of mutations in KRAS, BRAF, and MSI, as well as alterations in epigenetic regulatory genes. The observations related to differences in frequencies of MSI, and mutations in KRAS and BRAF help demystify the heterogeneity in clinical and cellular behavior that has been seen in the broader class of CIMP cancers. Perhaps most importantly, their studies identify plausible driver molecular alterations unique to the CIMP subclasses, such as subclass specific mutations in epigenetic regulatory genes and activated oncogenes. These are promising novel targets for chemoprevention strategies and therapies. Fennell and colleagues have now set the stage for functional studies of these molecular alterations to determine their true role in the cellular and clinical behavior of CRC.
William M. Grady, MD, is the Rodger C. Haggitt Professor of Medicine, department of medicine, division of gastroenterology, University of Washington School of Medicine, and clinical research division, Fred Hutchinson Cancer Research Center, Seattle. He is an advisory board member for Freenome and SEngine; has consulted for DiaCarta, Boehringer Ingelheim, and Guardant Health; and has conducted industry-sponsored research for Jannsenn and Cambridge Epigenetic.
Genomic, epigenomic, and transcriptomic information has revealed molecular subclasses of CRC, which has refined our understanding of the molecular and cellular biology of CRC and improved our treatment of patients with CRC. Several reliable and clinically useful molecular subtypes of colorectal cancer have been identified, including microsatellite unstable (MSI), chromosomal unstable (CIN), CpG island methylator phenotype (CIMP), and CMS 1-4 subtypes. Despite these substantial advances, it is also clear that we still only partially grasp the molecular and cellular biology driving CRC.
The studies by Fennell et al. provide new insights into the CIMP subtype of CRC that address this knowledge gap. Using a large CRC cohort and more detailed molecular information than available in prior studies, they have identified previously unrecognized CRC CIMP subtypes that have unique methylomes and mutation patterns. These 5 CIMP subclasses vary with regard to location in the colon, frequency of mutations in KRAS, BRAF, and MSI, as well as alterations in epigenetic regulatory genes. The observations related to differences in frequencies of MSI, and mutations in KRAS and BRAF help demystify the heterogeneity in clinical and cellular behavior that has been seen in the broader class of CIMP cancers. Perhaps most importantly, their studies identify plausible driver molecular alterations unique to the CIMP subclasses, such as subclass specific mutations in epigenetic regulatory genes and activated oncogenes. These are promising novel targets for chemoprevention strategies and therapies. Fennell and colleagues have now set the stage for functional studies of these molecular alterations to determine their true role in the cellular and clinical behavior of CRC.
William M. Grady, MD, is the Rodger C. Haggitt Professor of Medicine, department of medicine, division of gastroenterology, University of Washington School of Medicine, and clinical research division, Fred Hutchinson Cancer Research Center, Seattle. He is an advisory board member for Freenome and SEngine; has consulted for DiaCarta, Boehringer Ingelheim, and Guardant Health; and has conducted industry-sponsored research for Jannsenn and Cambridge Epigenetic.
Colorectal cancer can be divided into five DNA methylation subtypes that predict molecular and clinical behavior and may offer future therapeutic targets, according to investigators.
In 216 unselected colorectal cancers, five subtypes of the CpG island methylator phenotype (CIMP) showed “striking” associations with sex, age, and tumor location, reported lead author Lochlan Fennell, MD, of the QIMR Berghofer Medical Research Institute in Queensland, Australia, and colleagues. CIMP level increased with age in a stepwise fashion, they noted.
Further associations with CIMP subtype and BRAF mutation status support the investigators’ recent report that sessile serrated adenomas are rare in young patients and pose little risk of malignancy. With additional research, these findings could “inform the development of patient-centric surveillance for young and older patients who present with sessile serrated adenomas,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology.
“CIMP can be detected using a standardized marker panel to stratify tumors as CIMP-high, CIMP-low, or CIMP-negative.” In the present study, the investigators expanded these three existing subtypes into five subtypes, allowing for better prediction of clinical and molecular characteristics associated with disease progression.
Initial genomic testing showed that 13.4% of cases carried a BRAF V600E mutation, 34.7% were mutated at KRAS codon 12 or 13, and almost half of the patients (42.2%) had a TP53 mutation. Sorted into the three previously described subtypes, CIMP negative was most common (68.5%), followed by CIMP low (20.4%), and CIMP high (11.1%). About two-thirds (66%) of BRAF mutant cancers were CIMP high, compared with just 3% of BRAF wild-type cases (P less than .0001). KRAS mutated cases were more often CIMP-low than KRAS wild-type cancers (34.6% vs. 12.8%; P less than .001).
With use of Illumina HumanMethylation450 Bead Chip arrays and recursively partitioned mixed model clustering, five methylation clusters were identified; specifically, these were CIMP-H1 and CIMP-H2 (high methylation levels), CIMP-L1 and CIMP-L2 (intermediate methylation levels), and CIMP-negative (low methylation level). As described above, methylation level demonstrated a direct relationship with age, ranging from CIMP-negative (61.9 years) to CIMP-H1 (75.2 years). The investigators also reported unique characteristics of each new subtype. For instance, the CIMP-H1 cluster had many features in common with cases of serrated neoplasia, such as BRAF mutation positivity (73.9%; P less than .0001).
“BRAF mutations are a hallmark of the serrated neoplasia pathway, and indicate that these cancers probably arose in serrated precursor lesions,” the investigators wrote. “We previously showed that the colonoscopic incidence of sessile serrated adenomas does not differ between patients aged in their 30s and patients who are much older, whereas BRAF mutant cancers were restricted to older individuals, suggesting these BRAF mutant polyps may have limited malignant potential in young patients.”
In contrast with the CIMP-H1 cases, CIMP-H2 cancers were more often KRAS mutant (54.5% vs. 17.4%). Other findings revealed associations with subtype and location; for example, CIMP-L1 cases were located equally in the distal and proximal colon, whereas CIMP-L2 cases more often localized to the distal colon and rectum. Of note for CIMP-negative cancers, most (62.3%) occurred in the distal colon, and none had a BRAF mutation.
The five methylation subtypes also showed associations with consensus molecular subtypes (CMS) to varying degrees. The two strongest correlations were found in CIMP-H1 cancers and CIMP-H2 cancers, which were most frequently classified as CMS1 (69.6%) and CMS3 (54.5%), respectively.
Using CIBERSORT, the investigators detected a variety of associations between the five subtypes and stromal immune cell composition. For example, CIMP-H1 cases were enriched for macrophages, compared with the other subtypes, except CIMP-L2. Mast cells showed a stepwise relationship with subtype; they contributed the most to the immune microenvironment of CIMP-negative cancers and the least to cases classified as CIMP-H1. A converse trend was found with natural killer cells.
Of note, in CIMP-H1 and CIMP-H2 cancers, oncogenes were significantly more likely than tumor-suppressor genes to undergo gene body methylation, which is positively correlated with gene expression, and oncogenes in these subtypes had significantly greater gene body methylation than normal colonic mucosa.
“The five subtypes identified in this study are highly correlated with key clinical and molecular features, including patient age, tumor location, microsatellite instability, and oncogenic mitogen-activated protein kinase mutations,” they wrote. “We show that cancers with high DNA methylation show an increased preponderance for mutating genes involved in epigenetic regulation, and namely those that are implicated in the chromatin remodeling process.”
Concluding, the investigators explained the role of their research in future therapy development. “Our analyses have identified potentially druggable vulnerabilities in cancers of different methylation subtypes,” they wrote. “Inhibitors targeting synthetic lethalities, such as SWI/SNF component inhibitors for those with ARID mutations, should be evaluated because these agents may be clinically beneficial to certain patient subsets.”
The study was funded by the National Health and Medical Research Council, the US National Institutes of Health, Pathology Queensland, and others. The investigators disclosed no conflicts of interest.
SOURCE: Fennell L et al. CMGH. 2019 Apr 4. doi: 10.1016/j.jcmgh.2019.04.002.
Colorectal cancer can be divided into five DNA methylation subtypes that predict molecular and clinical behavior and may offer future therapeutic targets, according to investigators.
In 216 unselected colorectal cancers, five subtypes of the CpG island methylator phenotype (CIMP) showed “striking” associations with sex, age, and tumor location, reported lead author Lochlan Fennell, MD, of the QIMR Berghofer Medical Research Institute in Queensland, Australia, and colleagues. CIMP level increased with age in a stepwise fashion, they noted.
Further associations with CIMP subtype and BRAF mutation status support the investigators’ recent report that sessile serrated adenomas are rare in young patients and pose little risk of malignancy. With additional research, these findings could “inform the development of patient-centric surveillance for young and older patients who present with sessile serrated adenomas,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology.
“CIMP can be detected using a standardized marker panel to stratify tumors as CIMP-high, CIMP-low, or CIMP-negative.” In the present study, the investigators expanded these three existing subtypes into five subtypes, allowing for better prediction of clinical and molecular characteristics associated with disease progression.
Initial genomic testing showed that 13.4% of cases carried a BRAF V600E mutation, 34.7% were mutated at KRAS codon 12 or 13, and almost half of the patients (42.2%) had a TP53 mutation. Sorted into the three previously described subtypes, CIMP negative was most common (68.5%), followed by CIMP low (20.4%), and CIMP high (11.1%). About two-thirds (66%) of BRAF mutant cancers were CIMP high, compared with just 3% of BRAF wild-type cases (P less than .0001). KRAS mutated cases were more often CIMP-low than KRAS wild-type cancers (34.6% vs. 12.8%; P less than .001).
With use of Illumina HumanMethylation450 Bead Chip arrays and recursively partitioned mixed model clustering, five methylation clusters were identified; specifically, these were CIMP-H1 and CIMP-H2 (high methylation levels), CIMP-L1 and CIMP-L2 (intermediate methylation levels), and CIMP-negative (low methylation level). As described above, methylation level demonstrated a direct relationship with age, ranging from CIMP-negative (61.9 years) to CIMP-H1 (75.2 years). The investigators also reported unique characteristics of each new subtype. For instance, the CIMP-H1 cluster had many features in common with cases of serrated neoplasia, such as BRAF mutation positivity (73.9%; P less than .0001).
“BRAF mutations are a hallmark of the serrated neoplasia pathway, and indicate that these cancers probably arose in serrated precursor lesions,” the investigators wrote. “We previously showed that the colonoscopic incidence of sessile serrated adenomas does not differ between patients aged in their 30s and patients who are much older, whereas BRAF mutant cancers were restricted to older individuals, suggesting these BRAF mutant polyps may have limited malignant potential in young patients.”
In contrast with the CIMP-H1 cases, CIMP-H2 cancers were more often KRAS mutant (54.5% vs. 17.4%). Other findings revealed associations with subtype and location; for example, CIMP-L1 cases were located equally in the distal and proximal colon, whereas CIMP-L2 cases more often localized to the distal colon and rectum. Of note for CIMP-negative cancers, most (62.3%) occurred in the distal colon, and none had a BRAF mutation.
The five methylation subtypes also showed associations with consensus molecular subtypes (CMS) to varying degrees. The two strongest correlations were found in CIMP-H1 cancers and CIMP-H2 cancers, which were most frequently classified as CMS1 (69.6%) and CMS3 (54.5%), respectively.
Using CIBERSORT, the investigators detected a variety of associations between the five subtypes and stromal immune cell composition. For example, CIMP-H1 cases were enriched for macrophages, compared with the other subtypes, except CIMP-L2. Mast cells showed a stepwise relationship with subtype; they contributed the most to the immune microenvironment of CIMP-negative cancers and the least to cases classified as CIMP-H1. A converse trend was found with natural killer cells.
Of note, in CIMP-H1 and CIMP-H2 cancers, oncogenes were significantly more likely than tumor-suppressor genes to undergo gene body methylation, which is positively correlated with gene expression, and oncogenes in these subtypes had significantly greater gene body methylation than normal colonic mucosa.
“The five subtypes identified in this study are highly correlated with key clinical and molecular features, including patient age, tumor location, microsatellite instability, and oncogenic mitogen-activated protein kinase mutations,” they wrote. “We show that cancers with high DNA methylation show an increased preponderance for mutating genes involved in epigenetic regulation, and namely those that are implicated in the chromatin remodeling process.”
Concluding, the investigators explained the role of their research in future therapy development. “Our analyses have identified potentially druggable vulnerabilities in cancers of different methylation subtypes,” they wrote. “Inhibitors targeting synthetic lethalities, such as SWI/SNF component inhibitors for those with ARID mutations, should be evaluated because these agents may be clinically beneficial to certain patient subsets.”
The study was funded by the National Health and Medical Research Council, the US National Institutes of Health, Pathology Queensland, and others. The investigators disclosed no conflicts of interest.
SOURCE: Fennell L et al. Cell Mol Gastroenterol Hepatol. 2019 Apr 4. doi: 10.1016/j.jcmgh.2019.04.002.
FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY
Underwater endoscopic mucosal resection may be an option for colorectal lesions
Underwater endoscopic mucosal resection (UEMR) may be an option for intermediate-size colorectal lesions without increasing procedure time or risk of adverse events, based on a recent head-to-head trial conducted in Japan.
UEMR was associated with higher R0 and en bloc resection rates than conventional EMR (CEMR) when used for intermediate-size colorectal lesions, reported lead author Takeshi Yamashina, MD, of Osaka (Japan) International Cancer Institute, and colleagues. The study was the first multicenter, randomized trial to demonstrate the superiority of UEMR over CEMR, they noted.
Although CEMR is a well-established method of removing sessile colorectal lesions, those larger than 10 mm can be difficult to resect en bloc, which contributes to a local recurrence rate exceeding 15% when alternative, piecemeal resection is performed, the investigators explained in Gastroenterology.
Recently, UEMR has emerged as “an alternative to CEMR and is reported to be effective for removing flat or large colorectal polyps,” the investigators wrote. “With UEMR, the bowel lumen is filled with water instead of air/CO2, and the lesion is captured and resected with a snare without submucosal injection of normal saline.”
To find out if UEMR offers better results than CEMR, the investigators recruited 211 patients with 214 colorectal lesions at five centers in Japan. Patients were aged at least 20 years and had mucosal lesions of 10-20 mm in diameter. Based on macroscopic appearance, pit pattern classification with magnifying chromoendoscopy, or narrow-band imaging, lesions were classified as adenoma, sessile serrated adenoma/polyp, or intramucosal adenocarcinoma. Patients were randomly assigned in a 1:1 ratio to the UEMR or CEMR group, and just prior to the procedure, operators were informed of the allocated treatment. Ten expert operators were involved, each with at least 10 years of experience, in addition to 18 nonexpert operators with less than 10 years of experience. The primary endpoint was the difference in R0 resection rate between the two groups, with R0 defined as en bloc resection with histologically negative margins. Secondary endpoints were en bloc resection rate, adverse events, and procedure time.
The results showed a clear win for UEMR, with an R0 rate of 69%, compared with 50% for CEMR (P = .011), and an en bloc resection rate that followed the same trend (89% vs. 75%; P = .007). Neither median procedure times nor number of adverse events were significantly different between groups.
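As a rough sanity check on the reported differences, the gaps between those resection rates are unlikely under chance alone. The sketch below is an illustration only: it assumes the 214 lesions split evenly into arms of 107 (the paper reports the exact counts) and uses a pooled two-proportion z-test, which may not be the test the authors used, so the p-values will only approximate the published ones.

```python
import math

def two_prop_ztest(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    return z, math.erfc(abs(z) / math.sqrt(2))       # two-sided p-value

# Assumed arm sizes: 214 lesions split evenly, 107 per arm (an assumption).
z_r0, p_r0 = two_prop_ztest(round(0.69 * 107), 107, round(0.50 * 107), 107)
z_eb, p_eb = two_prop_ztest(round(0.89 * 107), 107, round(0.75 * 107), 107)
print(f"R0:      z = {z_r0:.2f}, p = {p_r0:.4f}")
print(f"En bloc: z = {z_eb:.2f}, p = {p_eb:.4f}")
```

Both comparisons land comfortably below the conventional .05 threshold under these assumptions, consistent with the significance the investigators reported.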
Subset analysis showed that UEMR was best suited for lesions at least 15 mm in diameter, although the investigators pointed out that the superior R0 resection rate with UEMR held steady regardless of lesion morphology, size, location, or operator experience level.
The investigators suggested that the findings give reason to amend some existing recommendations. “Although the European Society of Gastrointestinal Endoscopy Clinical Guidelines suggest hot-snare polypectomy with submucosal injection for removing sessile polyps 10-19 mm in size, we found that UEMR was more effective than CEMR, in terms of better R0 and en bloc resection rates,” they wrote. “Hence, we think that UEMR will become an alternative to CEMR. It could fill the gap for removing polyps 9 mm [or larger] (indication for removal by cold-snare polypectomy) and [smaller than] 20 mm (indication for ESD removal).”
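The "gap" the authors describe can be read as a simple triage by lesion diameter. The sketch below is purely illustrative of the thresholds discussed above (cold-snare polypectomy for small polyps, ESD at 20 mm and beyond), using the trial's 10-20 mm inclusion window as the middle band; it is not a clinical rule, and real decisions also weigh morphology, location, and suspected histology.

```python
def suggested_resection(diameter_mm: float) -> str:
    """Illustrative size-based triage of resection technique.

    A simplification for exposition only, not clinical guidance.
    """
    if diameter_mm < 10:
        return "cold-snare polypectomy"
    if diameter_mm < 20:
        return "UEMR (candidate alternative to CEMR)"
    return "ESD"

for size in (6, 15, 25):
    print(f"{size} mm -> {suggested_resection(size)}")
```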
During the discussion, the investigators explained that UEMR achieves better outcomes primarily by improving access to lesions. Water immersion causes lesions to float upright into the lumen, while keeping the muscularis propria circular behind the submucosa, which allows for easier snaring and decreases risk of perforation. Furthermore, the investigators noted, water immersion limits flexure angulation, luminal distension, and loop formation, all of which improve maneuverability and visibility.
Still, UEMR may take some operator adjustment, the investigators added, going on to provide some pointers. “In practice, we think it is important to fill the entire lumen only with fluid, so we always deflate the lumen completely and then fill it with fluid,” they wrote. “[When the lumen is filled], it is not necessary to change the patient’s position during the UEMR procedure.”
“Also, in cases with unclear endoscopic vision, endoscopists are familiar with air insufflation but, during UEMR, it is better to infuse the fluid to expand the lumen and maintain a good endoscopic view. Therefore, for the beginner, we recommend that the air insufflation button of the endoscopy machine be switched off.”
Additional tips included using saline instead of distilled water, and employing thin, soft snares.
The investigators reported no external funding or conflicts of interest.
SOURCE: Yamashina T et al. Gastroenterology. 2019 Apr 11. doi: 10.1053/j.gastro.2019.04.005.
FROM GASTROENTEROLOGY
Formal weight loss programs improve NAFLD
For patients with nonalcoholic fatty liver disease (NAFLD), formal weight loss programs lead to statistically and clinically significant improvements in biomarkers of liver disease, based on a recent meta-analysis.
The findings support changing NAFLD guidelines to recommend weight loss interventions, according to lead author Dimitrios A. Koutoukidis, PhD, of the University of Oxford, UK, and colleagues.
“Clinical guidelines around the world recommend physicians offer advice on lifestyle modification, which mostly includes weight loss through hypoenergetic diets and increased physical activity,” the investigators wrote in JAMA Internal Medicine. “However, whether clinicians provide advice and the type of advice they give vary greatly, and guidelines rarely specifically recommend treatment programs to support weight loss,” they added.
To investigate associations between methods of weight loss and improvements in NAFLD, the investigators screened for studies involving behavioral weight loss programs, pharmacotherapy, or bariatric surgery, alone or in combination. To limit confounding, studies combining weight loss with other potential treatments, such as medications, were excluded. Weight loss interventions were compared with lower-intensity interventions or with no or minimal weight loss support, using at least one reported biomarker of liver disease as the outcome.
The literature search returned 22 eligible studies involving 2,588 patients. The investigators found that more intensive weight loss programs were associated with greater weight loss than lower-intensity methods (-3.61 kg; I² = 95%). Multiple biomarkers of liver disease showed significant improvements in association with formal weight loss programs, including histologically or radiologically measured liver steatosis (standardized mean difference, -1.48; I² = 94%), histologic NAFLD activity score (-0.92; I² = 95%), presence of nonalcoholic steatohepatitis (OR, 0.14; I² = 0%), alanine aminotransferase (-9.81 U/L; I² = 97%), aspartate transaminase (-4.84 U/L; I² = 96%), alkaline phosphatase (-5.53 U/L; I² = 96%), and gamma-glutamyl transferase (-4.35 U/L; I² = 92%). Weight loss interventions were not significantly associated with histologic liver fibrosis or inflammation, the investigators noted.
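The I² values accompanying each pooled estimate quantify between-study heterogeneity: the share of variability across studies beyond what chance would produce. As a rough illustration of how I² falls out of Cochran's Q under inverse-variance fixed-effect pooling, here is a small sketch; the per-study effects and variances below are made up for demonstration, not taken from the meta-analysis.

```python
def pool_fixed_effect(effects, variances):
    """Inverse-variance fixed-effect pooled estimate and Cochran's Q."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    return pooled, q

def i_squared(q, n_studies):
    """Higgins & Thompson I²: percent of variability beyond chance."""
    df = n_studies - 1
    return max(0.0, 100.0 * (q - df) / q) if q > 0 else 0.0

# Hypothetical ALT mean differences (U/L) and variances from five studies.
effects = [-12.0, -9.5, -4.0, -15.0, -7.5]
variances = [1.2, 0.8, 1.5, 2.0, 1.0]
pooled, q = pool_fixed_effect(effects, variances)
print(f"pooled = {pooled:.2f} U/L, Q = {q:.1f}, I² = {i_squared(q, len(effects)):.0f}%")
```

High I² values like those reported (90%+) signal substantial heterogeneity, which is why random-effects models and cautious interpretation are standard in this setting.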
“The advantages [of weight loss interventions] seem to be greater in people who are overweight and with NAFLD, but our exploratory results suggest that weight loss interventions might still be beneficial in the minority of people with healthy weight and NAFLD,” the investigators wrote. “Clinicians may use these findings to counsel people with NAFLD on the expected clinically significant improvements in liver biomarkers after weight loss and direct the patients toward valuable interventions.”
“The accumulated evidence supports changing the clinical guidelines and routine practice to recommend formal weight loss programs to treat people with NAFLD,” the investigators concluded.
The study was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and the Oxford NIHR Collaboration and Leadership in Applied Health Research. The investigators reported grants for other research from Cambridge Weight Plan.
SOURCE: Koutoukidis DA et al. JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2248.
The AGA Practice Guide on Obesity and Weight Management, Education, and Resources (POWER) paper provides physicians with a comprehensive, multidisciplinary process to guide and personalize innovative obesity care for safe and effective weight management. Learn more at https://www.gastro.org/practice-guidance/practice-updates/obesity-practice-guide
FROM JAMA INTERNAL MEDICINE
Key clinical point: Formal weight loss programs were associated with statistically and clinically significant improvements in biomarkers of liver disease in patients with NAFLD.
Study details: A meta-analysis of 22 randomized clinical trials of weight loss interventions involving 2,588 patients with nonalcoholic fatty liver disease (NAFLD).
Disclosures: The study was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and the Oxford NIHR Collaboration and Leadership in Applied Health Research. The investigators reported grants for other research from Cambridge Weight Plan.
Source: Koutoukidis DA et al. JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2248.
Past studies have investigated the relationship between weight loss and nonalcoholic fatty liver disease (NAFLD), but they did so with varying interventions and outcome measures. Fortunately, the study by Dr. Koutoukidis and colleagues helps clear up this variability with a well-conducted systematic review. The results offer a convincing case, based on improvements in blood, histologic, and radiologic biomarkers of liver disease, that formal weight loss programs should be a cornerstone of NAFLD treatment. Because pharmacologic options for NAFLD are limited, these findings are particularly important.
Although the study did not reveal improvements in fibrosis or inflammation with weight loss, this is likely due to the scarcity of trials with histologic measures or long-term follow-up. Where long-term follow-up was available, weight loss was not maintained, precluding clear conclusions. Still, other studies have shown that sustained weight loss is associated with improvements in fibrosis and mortality, so clinicians should feel encouraged that formal weight loss programs for patients with NAFLD likely have life-saving consequences.
Elizabeth Adler, MD, and Danielle Brandman, MD, are with the University of California, San Francisco. Dr. Brandman reported financial affiliations with Conatus, Gilead, and Allergan. Their remarks are adapted from an accompanying editorial (JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2244).
Formal weight loss programs improve NAFLD
For patients with nonalcoholic fatty liver disease (NAFLD), formal weight loss programs lead to statistically and clinically significant improvements in biomarkers of liver disease, based on a recent meta-analysis.
The findings support changing NAFLD guidelines to recommend weight loss interventions, according to lead author Dimitrios A. Koutoukidis, PhD, of the University of Oxford, UK, and colleagues. “Clinical guidelines around the world recommend physicians offer advice on lifestyle modification, which mostly includes weight loss through hypoenergetic diets and increased physical activity,” the investigators wrote in JAMA Internal Medicine. “However, whether clinicians provide advice and the type of advice they give vary greatly, and guidelines rarely specifically recommend treatment programs to support weight loss,” they added.
To investigate associations between methods of weight loss and improvements in NAFLD, the investigators screened for studies involving behavioral weight loss programs, pharmacotherapy, or bariatric surgery, alone or in combination. To limit confounding, studies combining weight loss with other potential treatments, such as medications, were excluded. Weight loss interventions were compared to liver disease outcomes associated with lower-intensity weight loss intervention or none or minimal weight loss support, using at least 1 reported biomarker of liver disease. The literature search returned 22 eligible studies involving 2,588 patients.
The investigators found that more intensive weight loss programs were associated with greater weight loss than lower intensity methods (-3.61 kg; I2 = 95%). Multiple biomarkers of liver disease showed significant improvements in association with formal weight loss programs, including histologically or radiologically measured liver steatosis (standardized mean difference: -1.48; I2 = 94%), histologic NAFLD activity score (-0.92; I2= 95%), presence of nonalcoholic steatohepatitis (OR, 0.14; I2 =0%), alanine aminotransferase (-9.81 U/L; I2= 97%), aspartate transaminase (-4.84 U/L; I2 = 96%), alkaline phosphatase (-5.53 U/L; I2 = 96%), and gamma-glutamyl transferase (-4.35 U/L; I2 = 92%). Weight loss interventions were not significantly associated with histologic liver fibrosis or inflammation, the investigators noted.
“The advantages [of weight loss interventions] seem to be greater in people who are overweight and with NAFLD, but our exploratory results suggest that weight loss interventions might still be beneficial in the minority of people with healthy weight and NAFLD,” the investigators wrote. “Clinicians may use these findings to counsel people with NAFLD on the expected clinically significant improvements in liver biomarkers after weight loss and direct the patients toward valuable interventions.”
“The accumulated evidence supports changing the clinical guidelines and routine practice to recommend formal weight loss programs to treat people with NAFLD,” the investigators concluded.
The study was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and the Oxford NIHR Collaboration and Leadership in Applied Health Research. The investigators reported grants for other research from Cambridge Weight Plan.
SOURCE: Koutoukidis et al. JAMA Int Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2248.
Past studies have attempted to investigate the relationship between weight loss and nonalcoholic fatty liver disease (NAFLD), but they did so with various interventions and outcomes measures. Fortunately, the study by Dr. Koutoukidis and colleagues helps clear up this variability with a well-conducted systematic review. The results offer a convincing case that formal weight loss programs should be a cornerstone of NALFD treatment, based on improvements in blood, histologic, and radiologic biomarkers of liver disease. Since pharmacologic options for NAFLD are limited, these findings are particularly important.
Although the study did not reveal improvements in fibrosis or inflammation with weight loss, this is likely due to the scarcity of trials with histologic measures or long-term follow-up. Where long-term follow-up was available, weight loss was not maintained, precluding clear conclusions. Still, other studies have shown that sustained weight loss is associated with improvements in fibrosis and mortality, so clinicians should feel encouraged that formal weight loss programs for patients with NAFLD likely have life-saving consequences.
Elizabeth Adler, MD, and Danielle Brandman, MD, are with the University of California, San Francisco. Dr. Brandman reported financial affiliations with Conatus, Gilead, and Allergan. Their remarks are adapted from an accompanying editorial (JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2244).
For patients with nonalcoholic fatty liver disease (NAFLD), formal weight loss programs are associated with statistically and clinically significant improvements in biomarkers of liver disease, according to a recent meta-analysis.
The findings support changing NAFLD guidelines to recommend weight loss interventions, according to lead author Dimitrios A. Koutoukidis, PhD, of the University of Oxford, UK, and colleagues. “Clinical guidelines around the world recommend physicians offer advice on lifestyle modification, which mostly includes weight loss through hypoenergetic diets and increased physical activity,” the investigators wrote in JAMA Internal Medicine. “However, whether clinicians provide advice and the type of advice they give vary greatly, and guidelines rarely specifically recommend treatment programs to support weight loss,” they added.
To investigate associations between methods of weight loss and improvements in NAFLD, the investigators screened for studies involving behavioral weight loss programs, pharmacotherapy, or bariatric surgery, alone or in combination. To limit confounding, studies combining weight loss with other potential treatments, such as medications, were excluded. Liver disease outcomes after weight loss interventions were compared with outcomes after lower-intensity interventions or no or minimal weight loss support, using at least one reported biomarker of liver disease. The literature search returned 22 eligible studies involving 2,588 patients.
The investigators found that more intensive weight loss programs were associated with greater weight loss than lower-intensity methods (-3.61 kg; I² = 95%). Multiple biomarkers of liver disease showed significant improvements in association with formal weight loss programs, including histologically or radiologically measured liver steatosis (standardized mean difference, -1.48; I² = 94%), histologic NAFLD activity score (-0.92; I² = 95%), presence of nonalcoholic steatohepatitis (OR, 0.14; I² = 0%), alanine aminotransferase (-9.81 U/L; I² = 97%), aspartate transaminase (-4.84 U/L; I² = 96%), alkaline phosphatase (-5.53 U/L; I² = 96%), and gamma-glutamyl transferase (-4.35 U/L; I² = 92%). Weight loss interventions were not significantly associated with histologic liver fibrosis or inflammation, the investigators noted.
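For readers less familiar with the I² values cited throughout these results: I² expresses the percentage of variability in effect estimates across trials that is due to between-study heterogeneity rather than chance, and the high values here (92%-97% for most outcomes) signal substantial heterogeneity among the pooled trials. A minimal illustrative sketch of the standard calculation from Cochran's Q is below; the numbers in it are hypothetical and not drawn from the study.

```python
def i_squared(q: float, num_studies: int) -> float:
    """Return the I-squared heterogeneity statistic (%) given Cochran's Q.

    I^2 = max(0, (Q - df) / Q) * 100, where df = number of studies - 1.
    Values near 0% suggest chance alone explains between-trial variation;
    values above ~75% are conventionally read as high heterogeneity.
    """
    df = num_studies - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q) * 100.0

# Hypothetical example: Q = 100 across 6 trials gives I^2 = 95%.
print(round(i_squared(100.0, 6)))
# When Q is no larger than df, I^2 is truncated at 0%.
print(i_squared(4.0, 6))
```

High I² does not invalidate a pooled estimate, but it cautions that the average effect may mask real differences between the underlying interventions and populations.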
“The advantages [of weight loss interventions] seem to be greater in people who are overweight and with NAFLD, but our exploratory results suggest that weight loss interventions might still be beneficial in the minority of people with healthy weight and NAFLD,” the investigators wrote. “Clinicians may use these findings to counsel people with NAFLD on the expected clinically significant improvements in liver biomarkers after weight loss and direct the patients toward valuable interventions.”
“The accumulated evidence supports changing the clinical guidelines and routine practice to recommend formal weight loss programs to treat people with NAFLD,” the investigators concluded.
The study was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and the Oxford NIHR Collaboration and Leadership in Applied Health Research. The investigators reported grants for other research from Cambridge Weight Plan.
SOURCE: Koutoukidis DA et al. JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2248.
FROM JAMA INTERNAL MEDICINE
Key clinical point: Formal weight loss programs are associated with significant improvements in biomarkers of liver disease in patients with nonalcoholic fatty liver disease.
Major finding: Weight loss interventions were associated with significantly decreased alanine aminotransferase (-9.81 U/L; I² = 97%).
Study details: A meta-analysis of 22 randomized clinical trials involving weight loss interventions for 2,588 patients with nonalcoholic fatty liver disease (NAFLD).
Disclosures: The study was funded by the National Institute for Health Research (NIHR) Oxford Biomedical Research Centre and the Oxford NIHR Collaboration and Leadership in Applied Health Research. The investigators reported grants for other research from Cambridge Weight Plan.
Source: Koutoukidis DA et al. JAMA Intern Med. 2019 Jul 1. doi: 10.1001/jamainternmed.2019.2248.