Creatinine variability linked to liver transplant outcomes

Patients with greater changes in serum creatinine are more likely to have worse pre- and post–liver transplant outcomes. Moreover, underserved patients may be most frequently affected, according to a retrospective analysis of UNOS (United Network for Organ Sharing) data.

These results should drive further development of serum creatinine coefficient of variation (sCr CoV) as an independent predictor of renal-related mortality risk, according to lead author Giuseppe Cullaro, MD, of the University of California, San Francisco, and colleagues.

“Intra-individual clinical and laboratory parameter dynamics often provide additional prognostic information – added information that goes beyond what can be found with cross-sectional data,” the researchers wrote in Hepatology. “This finding has been seen in several scenarios in the general population – intra-individual variability in blood pressure, weight, hemoglobin, and kidney function, have all been associated with worse clinical outcomes. However, in cirrhosis patients, and more specifically in patients awaiting a liver transplant, kidney function dynamics as a predictor of clinical outcomes has yet to be investigated.”

To gauge the predictive power of shifting kidney values, Dr. Cullaro and colleagues analyzed UNOS/OPTN (Organ Procurement and Transplantation Network) registry data from 2011 through 2019. Patients were excluded if they were younger than 18 years, were listed as status 1, received a living donor liver transplant, were on hemodialysis, or had fewer than three updates. The final dataset included 25,204 patients.

After sorting patients into low, intermediate, and high sCr CoV tertiles, the researchers used logistic regression to determine relationships between higher sCr CoV and a variety of covariates, such as age, sex, diagnosis, and presence of acute kidney injury or chronic kidney disease. A competing risk regression was then performed to look for associations between wait list mortality and the same covariates, with liver transplant treated as the competing risk.
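
For readers unfamiliar with the metric, a coefficient of variation is conventionally the sample standard deviation divided by the mean, expressed as a percentage. The brief sketch below illustrates that standard calculation and a subsequent tertile split on made-up creatinine series; the patient data, function names, and cutoffs are hypothetical and are not drawn from the study itself.

```python
import numpy as np
import pandas as pd

def serum_creatinine_cov(values):
    """Coefficient of variation (%) of a series of serum creatinine values.

    Standard definition: sample standard deviation divided by the mean, x 100.
    Requires at least three measurements, mirroring the study's exclusion of
    patients with fewer than three updates.
    """
    values = np.asarray(values, dtype=float)
    if values.size < 3:
        raise ValueError("need at least three serum creatinine measurements")
    return values.std(ddof=1) / values.mean() * 100

# Hypothetical wait list creatinine updates (mg/dL) for six patients.
patients = {
    "A": [0.9, 1.0, 0.95, 1.1],
    "B": [1.0, 1.8, 0.9, 2.4],   # large fluctuations
    "C": [1.2, 1.3, 1.25],       # very stable
    "D": [0.8, 1.1, 0.9, 1.0],
    "E": [1.5, 1.1, 2.0, 1.3],
    "F": [1.0, 1.05, 1.1, 1.2],
}
cov = pd.Series({pid: serum_creatinine_cov(v) for pid, v in patients.items()})

# Split the toy cohort into low / intermediate / high variability tertiles,
# analogous to the grouping used before the regression analyses.
tertile = pd.qcut(cov, q=3, labels=["low", "intermediate", "high"])
print(pd.DataFrame({"sCr CoV (%)": cov.round(1), "tertile": tertile}))
```

In the registry analysis the actual cutoffs came from the full cohort (median sCr CoV, 17.4%), so the toy tertiles above are for illustration only.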

The median sCr CoV was 17.4% (interquartile range [IQR], 10.8%-29.5%). Patients in the bottom sCr CoV tertile had a median value of 8.8% (IQR, 6.6%-10.8%), compared with 17.4% (IQR, 14.8%-20.4%) in the intermediate variability group and 36.8% (IQR, 29.5%-48.8%) in the high variability group. High variability was associated with female sex, Hispanic ethnicity, ascites, and hepatic encephalopathy as well as higher body mass index, MELDNa score, and serum creatinine.

Of note, each decreasing serum creatinine variability tertile was associated with a significantly lower rate of wait list mortality (34.7% vs. 19.6% vs. 11.7%; P < .001). The creatinine variability profiles were similarly associated with the likelihood of receiving a liver transplant (52.3% vs. 48.9% vs. 43.7%; P < .001) and posttransplant mortality (7.5% vs. 5.5% vs. 3.9%; P < .001).

A multivariate model showed that each 10% increase in sCr CoV predicted an 8% increased risk of a combined outcome comprising post–liver transplant death or post–liver transplant kidney transplant (KALT), independently of other variables (adjusted hazard ratio, 1.08; 95% confidence interval, 1.05-1.11).

“These data highlight that all fluctuations in sCr are associated with worse pre- and post–liver transplant outcomes,” the investigators concluded. “Moreover, the groups that are most underserved by sCr, specifically women, were most likely to have greater sCr CoVs. We believe our work lays the foundation for implementing the sCr CoV as an independent metric of renal-related mortality risk and may be most beneficial for those groups most underserved by sCr values alone.”

According to Brian P. Lee, MD, a hepatologist with Keck Medicine of USC and assistant professor of clinical medicine with the Keck School of Medicine of USC in Los Angeles, “this is a great study ... in an area of high need” that used “high quality data.”

Current liver allocation strategies depend on a snapshot of kidney function, but these new findings suggest that a more dynamic approach may be needed. “As a practicing liver specialist I see that creatinine numbers can fluctuate a lot. ... So which number do you use when you’re trying to calculate what a patient’s risk of death is on the wait list? This study gets toward that answer. If there is a lot of variability, these might be higher risk patients; these might be patients that we should put higher on the transplant waiting list,” said Dr. Lee.

He suggested that clinicians should account for creatinine fluctuations when considering mortality risk; however, the evidence is “not quite there yet ... in terms of changing transplant policy and allocation.” He pointed out three unanswered questions: Why are creatinine values fluctuating? How should fluctuations be scored for risk modeling? And what impact would those risk scores have on transplant wait list prioritization?

“I think that that’s the work that you would need to do before you could really change national transplant policy,” Dr. Lee concluded.

The study was supported by the National Institutes of Health and the UCSF Liver Center. Dr. Cullaro and another author have disclosed relationships with Mallinckrodt Pharmaceuticals and Axcella Health, respectively. Dr. Lee reported no conflicts of interest.

NAFLD vs. MAFLD: What’s in a name?

Non-alcoholic fatty liver disease (NAFLD) and metabolic associated fatty liver disease (MAFLD) demonstrate highly similar clinical courses and mortality rates, and a name change may not be clinically beneficial, based on data from more than 17,000 patients.

Instead, etiologic subcategorization of fatty liver disease (FLD) should be considered, reported lead author Zobair M. Younossi, MD, of Betty and Guy Beatty Center for Integrated Research, Inova Health System, Falls Church, Va., and colleagues.

“There is debate about whether NAFLD is an appropriate name as the term ‘non-alcoholic’ overemphasizes the absence of alcohol use and underemphasizes the importance of the metabolic risk factors which are the main drivers of disease progression,” the investigators wrote in Hepatology. “It has been suggested that MAFLD may better reflect these risk factors. However, such a recommendation is made despite a lack of a general consensus on the definition of ‘metabolic health’ and disagreements in endocrinology circles about the term ‘metabolic syndrome.’ Nevertheless, a few investigators have suggested that MAFLD but not NAFLD is associated with increased fibrosis and mortality.”

To look for clinical differences between the two disease entities, Dr. Younossi and colleagues turned to the National Health and Nutrition Examination Survey (NHANES). Specifically, the NHANES III and NHANES 2017-2018 cohorts were employed, including 12,878 and 4,328 participants, respectively.

MAFLD was defined as FLD with overweight/obesity, evidence of metabolic dysregulation, or type 2 diabetes mellitus. NAFLD was defined as FLD without excessive alcohol consumption or other causes of chronic liver disease. Patients were sorted into four groups: NAFLD, MAFLD, both disease types, or neither disease type. Since the categories were not mutually exclusive, the investigators compared clinical characteristics based on 95% confidence intervals. If no overlap was found, then differences were deemed statistically significant.

Diagnoses of NAFLD and MAFLD were highly concordant (kappa coefficient = 0.83-0.94). After a median follow-up of 22.8 years, no significant differences were found between groups for cause-specific mortality, all-cause mortality, or major clinical characteristics except those inherent to the disease definitions (for example, lack of alcohol use in NAFLD). The greatest risk factors for advanced fibrosis in both groups were obesity, high-risk fibrosis, and type 2 diabetes mellitus.

As anticipated, by definition, alcoholic liver disease and excess alcohol use were documented in approximately 15% of patients with MAFLD, but in no patients with NAFLD. As such, alcoholic liver disease predicted liver-specific mortality for MAFLD (hazard ratio, 4.50; 95% confidence interval, 1.89-10.75) but not NAFLD. Conversely, insulin resistance predicted liver-specific mortality in NAFLD (HR, 3.57; 95% CI, 1.35-9.42) but not MAFLD (HR, 0.84; 95% CI, 0.36-1.95).

“These data do not support the notion that a name change from NAFLD to MAFLD will better capture the risk for long-term outcomes of these patients or better define metabolically at-risk patients who present with FLD,” the investigators concluded. “On the other hand, enlarging the definition to FLD with subcategories of ‘alcoholic,’ ‘non-alcoholic,’ ‘drug-induced,’ etc. has merit and needs to be further considered. In this context, a true international consensus group of experts supported by liver and non-liver scientific societies must undertake an evidence-based and comprehensive approach to this issue and assess both the benefits and risks of changing the name.”

According to Rohit Loomba, MD, director of the NAFLD research center and professor of medicine in the division of gastroenterology and hepatology at University of California, San Diego, the study offers a preview of the consequences if NAFLD were changed to MAFLD, most notably by making alcohol a key driver of outcomes.

“If we change the name of a disease entity ... how does that impact natural history?” Dr. Loomba asked in an interview. “This paper gives you an idea. If you start calling it MAFLD, then people are dying from alcohol use, and they’re not dying from what we are currently seeing patients with NAFLD die of.”

He also noted that the name change could disrupt drug development and outcome measures since most drugs currently in development are directed at nonalcoholic steatohepatitis (NASH).

“Is it worth the headache?” Dr. Loomba asked. “How are we going to define NASH-related fibrosis? That probably will remain the same because the therapies that we will use to address that will remain consistent with what we are currently pursuing. ... It’s probably premature to change the nomenclature before assessing the impact on finding new treatment.”

Dr. Younossi disclosed relationships with BMS, Novartis, Gilead, and others. Dr. Loomba serves as a consultant to Aardvark Therapeutics, Altimmune, Alnylam/Regeneron, Amgen, Arrowhead Pharmaceuticals, AstraZeneca, Bristol-Myers Squibb, CohBar, Eli Lilly, Galmed, Gilead, Glympse Bio, Hightide, Inipharma, Intercept, Inventiva, Ionis, Janssen, Madrigal, Metacrine, NGM Biopharmaceuticals, Novartis, Novo Nordisk, Merck, Pfizer, Sagimet, Theratechnologies, 89bio, Terns Pharmaceuticals, and Viking Therapeutics.

AGA Clinical Practice Update: Expert review of dietary options for IBS

The American Gastroenterological Association has published a clinical practice update on dietary interventions for patients with irritable bowel syndrome (IBS). The topics range from identifying suitable candidates for dietary interventions to the levels of evidence for specific diets, which are becoming increasingly recognized for their key role in managing patients with IBS, according to lead author William D. Chey, MD, of the University of Michigan, Ann Arbor, and colleagues.

“Most medical therapies for IBS improve global symptoms in fewer than one-half of patients, with a therapeutic gain of 7%-15% over placebo,” the researchers wrote in Gastroenterology. “Most patients with IBS associate their GI symptoms with eating food.”

According to Dr. Chey and colleagues, clinicians who are considering dietary modifications for treating IBS should first recognize the inherent challenges presented by this process and be aware that new diets won’t work for everyone.

“Specialty diets require planning and preparation, which may be impractical for some patients,” they wrote, noting that individuals with “decreased cognitive abilities and significant psychiatric disease” may be unable to follow diets or interpret their own responses to specific foods. Special diets may also be inappropriate for patients with financial constraints, and “should be avoided in patients with an eating disorder.”

Because of the challenges involved in dietary interventions, Dr. Chey and colleagues advised clinical support from a registered dietitian nutritionist or other resource.

Patients who are suitable candidates for intervention and willing to try a new diet should first provide information about their current eating habits. A food trial can then be personalized and implemented for a predetermined amount of time. If the patient does not respond, then the diet should be stopped and changed to a new diet or another intervention.

Dr. Chey and colleagues discussed three specific dietary interventions and their corresponding levels of evidence: soluble fiber; the low-fermentable oligo-, di-, and monosaccharides and polyols (FODMAP) diet; and a gluten-free diet.

“Soluble fiber is efficacious in treating global symptoms of IBS,” they wrote, citing 15 randomized controlled trials. Soluble fiber is most suitable for patients with constipation-predominant IBS, and different soluble fibers may yield different outcomes based on characteristics such as rate of fermentation and viscosity. In contrast, insoluble fiber is unlikely to help with IBS, and may worsen abdominal pain and bloating.

The low-FODMAP diet is “currently the most evidence-based diet intervention for IBS,” especially for patients with diarrhea-predominant IBS. Dr. Chey and colleagues offered a clear roadmap for employing the diet. First, patients should eat only low-FODMAP foods for 4-6 weeks. If symptoms don’t improve, the diet should be stopped. If symptoms do improve, foods containing a single FODMAP should be reintroduced one at a time, each in increasing quantities over 3 days, alongside documentation of symptom responses. Finally, the diet should be personalized based on these responses. The majority of patients (close to 80%) “can liberalize” a low-FODMAP diet based on their responses.

In contrast with the low-FODMAP diet, which has a relatively solid body of supporting evidence, efficacy data are still limited for treating IBS with a gluten-free diet. “Although observational studies found that most patients with IBS improve with a gluten-free diet, randomized controlled trials have yielded mixed results,” Dr. Chey and colleagues explained.

Their report cited a recent monograph on the topic that concluded that gluten-free eating offered no significant benefit over placebo (relative risk, 0.46; 95% confidence interval, 0.16-1.28). While some studies have documented positive results with a gluten-free diet, Dr. Chey and colleagues suggested that confounding variables such as the nocebo effect and the impact of other dietary factors have yet to be ruled out. “At present, it remains unclear whether a gluten-free diet is of benefit to patients with IBS.”

Dr. Chey and colleagues also explored IBS biomarkers. While some early data have shown that biomarkers may predict dietary responses, “there is insufficient evidence to support their routine use in clinical practice. ... Further efforts to identify and validate biomarkers that predict response to dietary interventions are needed to deliver ‘personalized nutrition.’ ”

The clinical practice update was commissioned and approved by the AGA CPU Committee and the AGA Governing Board. The researchers disclosed relationships with Biomerica, Salix, Mauna Kea Technologies, and others.

This article was updated May 19, 2022.

Study casts doubt on safety, efficacy of L-serine supplementation for AD

While previous research suggests that dietary supplementation with L-serine may be beneficial for patients with Alzheimer’s disease (AD), a new study casts doubt on the potential efficacy, and even the safety, of this treatment.

When given to patients with AD, L-serine supplements could be driving abnormally increased serine levels in the brain even higher, potentially accelerating neuronal death, according to study author Xu Chen, PhD, of the University of California, San Diego, and colleagues.

This conclusion conflicts with a 2020 study by Juliette Le Douce, PhD, and colleagues, who reported that oral L-serine supplementation may act as a “ready-to-use therapy” for AD, based on their findings that patients with AD had low levels of PHGDH, an enzyme necessary for synthesizing serine, and AD-like mice had low levels of serine.

Writing in Cell Metabolism, Dr. Chen and colleagues framed the present study, and their findings, in this context.

“In contrast to the work of Le Douce et al., here we report that PHGDH mRNA and protein levels are increased in the brains of two mouse models of AD and/or tauopathy, and are also progressively increased in human brains with no, early, and late AD pathology, as well as in people with no, asymptomatic, and symptomatic AD,” they wrote.

They suggested adjusting clinical recommendations for L-serine, the form of the amino acid commonly found in supplements. In the body, L-serine is converted to D-serine, which acts on the NMDA receptor (NMDAR).

‘Long-term use of D-serine contributes to neuronal death’ suggests research

“We feel oral L-serine as a ready-to-use therapy to AD warrants precaution,” Dr. Chen and colleagues wrote. “This is because despite being a cognitive enhancer, some [research] suggests that long-term use of D-serine contributes to neuronal death in AD through excitotoxicity. Furthermore, D-serine, as a co-agonist of NMDAR, would be expected to oppose NMDAR antagonists, which have proven clinical benefits in treating AD.”

According to principal author Sheng Zhong, PhD, of the University of California, San Diego, “Research is needed to test if targeting PHGDH can ameliorate cognitive decline in AD.”

Dr. Zhong also noted that the present findings support the “promise of using a specific RNA in blood as a biomarker for early detection of Alzheimer’s disease.” This approach is currently being validated at UCSD Shiley-Marcos Alzheimer’s Disease Research Center, he added.

Roles of PHGDH and serine in Alzheimer’s disease require further study

Commenting on both studies, Steve W. Barger, PhD, of the University of Arkansas for Medical Sciences, Little Rock, suggested that more work is needed to better understand the roles of PHGDH and serine in AD before clinical applications can be considered.

“In the end, these two studies fail to provide the clarity we need in designing evidence-based therapeutic hypotheses,” Dr. Barger said in an interview. “We still do not have a firm grasp on the role that D-serine plays in AD. Indeed, the evidence regarding even a single enzyme contributing to its levels is ambiguous.”

Dr. Barger, who has published extensively on the topic of neuronal death, with a particular focus on Alzheimer’s disease, noted that “determination of what happens to D-serine levels in AD has been of interest for decades,” but levels of the amino acid have been notoriously challenging to measure because “D-serine can disappear rapidly from the brain and its fluids after death.”

While Dr. Le Douce and colleagues did measure levels of serine in mice, Dr. Barger noted that the study by Dr. Chen and colleagues was conducted with more “quantitatively rigorous methods.” Even though Dr. Chen and colleagues “did not assay the levels of D-serine itself ... the implication of their findings is that PHGDH is poised to elevate this critical neurotransmitter,” leading to their conclusion that serine supplementation is “potentially dangerous.”

At this point, it may be too early to tell, according to Dr. Barger.

He suggested that conclusions drawn from PHGDH levels alone are “always limited,” and conclusions based on serine levels may be equally dubious, considering that the activities and effects of serine “are quite complex,” and may be influenced by other physiologic processes, including the effects of gut bacteria.

Instead, Dr. Barger suggested that changes in PHGDH and serine may be interpreted as signals coming from a more relevant process upstream: glucose metabolism.

“What we can say confidently is that the glucose metabolism that PHGDH connects to D-serine is most definitely a factor in AD,” he said. “Countless studies have documented what now appears to be a universal decline in glucose delivery to the cerebral cortex, even before frank dementia sets in.”

Dr. Barger noted that declining glucose delivery coincides with some of the earliest events in the development of AD, perhaps “linking accumulation of amyloid β-peptide to subsequent neurofibrillary tangles and tissue atrophy.”

Dr. Barger’s own work recently demonstrated that AD is associated with “an irregularity in the insertion of a specific glucose transporter (GLUT1) into the cell surface” of astrocytes.

“It could be more effective to direct therapeutic interventions at these events lying upstream of PHGDH or serine,” he concluded.

The study was partly supported by a Kreuger v. Wyeth research award. The investigators and Dr. Barger reported no conflicts of interest.

Issue
Neurology Reviews - 30(7)
Publications
Topics
Sections

 


Cellular gene profiling may predict IBD treatment response


Transcriptomic profiling of phagocytes in the lamina propria of patients with inflammatory bowel disease (IBD) may guide future treatment selection, according to investigators.

Mucosal gut biopsies revealed that phagocytic gene expression correlated with inflammatory states, types of IBD, and responses to therapy, lead author Gillian E. Jacobsen, an MD/PhD candidate at the University of Miami, and colleagues reported.

In an article in Gastro Hep Advances, the investigators wrote that “lamina propria phagocytes along with epithelial cells represent a first line of defense and play a balancing act between tolerance toward commensal microbes and generation of immune responses toward pathogenic microorganisms. ... Inappropriate responses by lamina propria phagocytes have been linked to IBD.”

To better understand these responses, the researchers collected 111 gut mucosal biopsies from 54 patients with IBD, among whom 59% were taking biologics, 72% had inflammation in at least one biopsy site, and 41% had previously used at least one other biologic. Samples were analyzed to determine cell phenotypes, gene expression, and cytokine responses to in vitro Janus kinase (JAK) inhibitor exposure.

Ms. Jacobsen and colleagues noted that most reports that address the function of phagocytes focus on circulating dendritic cells, monocytes, or monocyte-derived macrophages, rather than on resident phagocyte populations located in the lamina propria. However, these circulating cells “do not reflect intestinal inflammation, or whole tissue biopsies.”

The researchers identified phagocytes based on CD11b expression and phenotyped CD11b+-enriched cells using flow cytometry. In samples with active inflammation, cells were most often granulocytes (45.5%), followed by macrophages (22.6%) and monocytes (9.4%). Uninflamed samples had a slightly lower proportion of granulocytes (33.6%), about the same proportion of macrophages (22.7%), and a higher rate of B cells (15.6% vs. 9.0%).

Ms. Jacobsen and colleagues highlighted the absolute uptick in granulocytes, including neutrophils.

“Neutrophilic infiltration is a major indicator of IBD activity and may be critically linked to ongoing inflammation,” they wrote. “These data demonstrate that CD11b+ enrichment reflects the inflammatory state of the biopsies.”

The investigators also showed that transcriptional profiles of lamina propria CD11b+ cells differed “greatly” between colon and ileum, which suggested that “the location or cellular environment plays a marked role in determining the gene expression of phagocytes.”

CD11b+ cell gene expression profiles also correlated with IBD type (ulcerative colitis vs. Crohn’s disease), although the researchers noted that these patterns were less pronounced than the correlations with inflammatory states.

“There are pathways common to inflammation regardless of the IBD type that could be used as markers of inflammation or targets for therapy,” they wrote.

Comparing colon samples from patients who responded to anti–tumor necrosis factor (TNF) therapy with those who were refractory to anti-TNF therapy revealed significant associations between response type and 52 differentially expressed genes.

“These genes were mostly immunoglobulin genes up-regulated in the anti–TNF-treated inflamed colon, suggesting that CD11b+ B cells may play a role in medication refractoriness,” the investigators wrote.
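
The report does not describe the statistical method behind the 52-gene list. Purely as a hypothetical illustration of how a responder-versus-refractory expression comparison is often run, the short Python sketch below applies a per-gene Welch t-test with Benjamini-Hochberg correction; the matrices, sample counts, and 5% false discovery rate threshold are assumptions for the example, not the study's pipeline.

```python
# Illustrative sketch only; not the study's analysis pipeline.
# `responders` and `refractory` are hypothetical (samples x genes) expression matrices.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n_genes = 2000
responders = rng.normal(size=(12, n_genes))   # 12 hypothetical responder samples
refractory = rng.normal(size=(10, n_genes))   # 10 hypothetical refractory samples

# Per-gene Welch t-test between the two groups
_, pvals = stats.ttest_ind(responders, refractory, axis=0, equal_var=False)

# Benjamini-Hochberg correction; flag genes below an assumed 5% FDR threshold
signif, pvals_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print(f"Differentially expressed genes at FDR < 0.05: {signif.sum()}")
```

With random placeholder data the count is expected to be near zero; the sketch shows only the shape of such a comparison, not the study's result.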

Evaluating inflamed colon and anti-TNF refractory ileum revealed differential expression of OSM, a known marker of TNF-resistant disease, as well as TREM1, a proinflammatory marker. In contrast, NTS genes showed high expression in uninflamed samples on anti-TNF therapy. The researchers noted that these findings “may be used to build precision medicine approaches in IBD.”

Further experiments showed that in vitro exposure of anti-TNF refractory samples to JAK inhibitors resulted in significantly reduced secretion of interleukin-8 and TNF-alpha.

“Our study provides functional data that JAK inhibition with tofacitinib (JAK1/JAK3) or ruxolitinib (JAK1/JAK2) inhibits lipopolysaccharide-induced cytokine production even in TNF-refractory samples,” the researchers wrote. “These data inform the response of patients to JAK inhibitors, including those refractory to other treatments.”

The study was supported by Pfizer, the National Institute of Diabetes and Digestive and Kidney Diseases, the Micky & Madeleine Arison Family Foundation Crohn’s & Colitis Discovery Laboratory, and the Martin Kalser Chair in Gastroenterology at the University of Miami. The investigators disclosed additional relationships with Takeda, AbbVie, Eli Lilly, and others.

These insights are an important start

Inflammatory bowel diseases are complex and heterogeneous disorders driven by inappropriate immune responses to luminal substances, including diet and microbes, resulting in chronic inflammation of the gastrointestinal tract. Therapies for IBD largely center on suppressing immune responses; however, given the complexity and heterogeneity of these diseases, there is no consensus on which aspect of the immune response to suppress, or which cell type to target, in a given patient.

In this study, Jacobsen et al. profiled CD11b+ lamina propria phagocytes from biopsy specimens of patients with IBD and identified genes differentially expressed depending on inflammation status (uninflamed vs. inflamed), tissue type (colon vs. ileum), and type of IBD (ulcerative colitis vs. Crohn’s disease). This study is notable in that it examined CD11b+ cells from the gut, as opposed to the circulating cell populations examined in many other studies, and evaluated the response of these resident populations to emerging therapies for IBD. The authors find that, even in patients refractory to anti-TNF-alpha therapy, the most common biologic class used for IBD, CD11b+ cell populations can be modulated and inflammatory responses suppressed with Janus kinase inhibitors in vitro, suggesting a possible therapeutic approach for this difficult-to-treat patient population. Beyond these observations, the study could also foreshadow future approaches that use intestinal biopsies to tailor immunotherapy for individual patients with IBD, particularly in difficult-to-treat refractory cases.

Sreeram Udayan, PhD, and Rodney D. Newberry, MD, are with the division of gastroenterology in the department of medicine at Washington University, St. Louis.


Deep learning system outmatches pathologists in diagnosing liver lesions


A new deep learning system can classify hepatocellular nodular lesions (HNLs) via whole-slide images, improving risk stratification of patients and diagnostic rate of hepatocellular carcinoma (HCC), according to investigators.

While the model requires further validation, it could eventually be used to optimize accuracy and efficiency of histologic diagnoses, potentially decreasing reliance on pathologists, particularly in areas with limited access to subspecialists.

In an article published in Gastroenterology, Na Cheng, MD, of Sun Yat-sen University, Guangzhou, China, and colleagues wrote that the “diagnostic process [for HNLs] is laborious, time-consuming, and subject to the experience of the pathologists, often with significant interobserver and intraobserver variability. ... Therefore, [an] automated analysis system is highly demanded in the pathology field, which could considerably ease the workload, speed up the diagnosis, and facilitate the in-time treatment.”

To this end, Dr. Cheng and colleagues developed the hepatocellular-nodular artificial intelligence model (HnAIM), which scans whole-slide images to identify seven tissue categories, including well-differentiated HCC, high-grade dysplastic nodules, low-grade dysplastic nodules, hepatocellular adenoma, focal nodular hyperplasia, and background tissue.

Developing and testing HnAIM was a multistep process that began with three subspecialist pathologists, who independently reviewed and classified liver slides from surgical resections. Unanimous agreement was achieved for 649 slides from 462 patients. These slides were then scanned to create whole-slide images, which were divided into sets for training (70%), validation (15%), and internal testing (15%). Accuracy, measured by area under the curve (AUC), was over 99.9% for the internal testing set. The accuracy of HnAIM was then validated in independent, external evaluations.

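The article does not detail HnAIM's architecture or code. Purely as a hypothetical illustration of the evaluation structure described above, the Python sketch below splits slide-level data 70/15/15 and scores a stand-in classifier with a one-vs-rest multiclass AUC; the feature matrix, labels, and classifier are placeholders, not the authors' deep learning system.

```python
# Illustrative sketch only; not the authors' model or data.
# X holds hypothetical slide-level feature vectors, y the tissue-category labels.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(649, 128))      # 649 slides, 128 hypothetical features
y = rng.integers(0, 7, size=649)     # 7 tissue categories, labeled 0-6

# 70% training, 15% validation, 15% internal testing, as described in the study
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, random_state=0, stratify=y)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, random_state=0, stratify=y_rest)

# Stand-in classifier; the actual study used a deep learning system
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# One-vs-rest multiclass AUC on the held-out internal test set
test_probs = clf.predict_proba(X_test)
auc = roc_auc_score(y_test, test_probs, multi_class="ovr")
print(f"Internal test AUC: {auc:.3f}")
```

With random placeholder data the AUC sits near 0.5; the point is only the split-and-score workflow, not the reported 99.9% figure.
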
First, HnAIM evaluated liver biopsy slides from 30 patients at one center. Results were compared with diagnoses made by nine pathologists classified as senior, intermediate, or junior. While HnAIM correctly diagnosed 100% of the cases, senior pathologists correctly diagnosed 94.4% of the cases, followed in accuracy by intermediate (86.7%) and junior (73.3%) pathologists.

The researchers noted that the “rate of agreement with subspecialists was higher for HnAIM than for all 9 pathologists at distinguishing 7 liver tissues, with important diagnostic implications for fragmentary or scarce biopsy specimens.”

Next, HnAIM evaluated 234 samples from three hospitals. Accuracy was slightly lower, with an AUC of 93.5%. The researchers highlighted how HnAIM consistently differentiated precancerous lesions and well-differentiated HCC from benign lesions and background tissues.

A final experiment showed how HnAIM reacted to the most challenging cases. The investigators selected 12 cases without definitive diagnoses and found that, similar to the findings of three subspecialist pathologists, HnAIM did not reach a single diagnostic conclusion.

The researchers reported that “This may be due to a number of potential reasons, such as inherent uncertainty in the 2-dimensional interpretation of a 3-dimensional specimen, the limited number of tissue samples, and cognitive factors such as anchoring.”

However, HnAIM contributed to the diagnostic process by generating multiple diagnostic possibilities with weighted likelihood. After reviewing these results, the expert pathologists reached consensus in 5 out of 12 cases. Moreover, two out of three expert pathologists agreed on all 12 cases, improving agreement rate from 25% to 100%.

The researchers concluded that the model holds promise for facilitating HNL diagnosis and improving efficiency and quality, and that it could also reduce the workload of pathologists, especially where subspecialists are unavailable.

The study was supported by the National Natural Science Foundation of China, the Guangdong Basic and Applied Basic Research Foundation, the Natural Science Foundation of Guangdong Province, and others. The investigators reported no conflicts of interest.

Work smarter not harder

As the prevalence of hepatocellular carcinoma (HCC) continues to rise, the early and accurate detection and diagnosis of HCC remains paramount to improving patient outcomes. In cases of typical or advanced HCC, an accurate diagnosis is made using CT or MR imaging. However, hepatocellular nodular lesions (HNLs) with atypical or inconclusive radiographic appearances are often biopsied to achieve a histopathologic diagnosis. In addition, accurate diagnosis of an HNL following liver resection or transplantation is important to long-term surveillance and management. An accurate histopathologic diagnosis relies on the availability of experienced subspecialty pathologists and remains a costly and labor-intensive process that can lead to delays in diagnosis and care.

In this study, Cheng et al. developed a deep learning system to differentiate histopathologic diagnoses of various HNLs, normal liver, and cirrhosis. Their model, the hepatocellular-nodular artificial intelligence model (HnAIM), accurately classified various liver histology slides with an AUC of 93.5% in an external validation cohort. When compared with even the most experienced subspecialty pathologists, HnAIM demonstrated superior HNL histopathologic diagnostic accuracy. Using HnAIM to make, or aid in, the diagnosis of HNLs can lead to more accurate diagnoses in a more efficient and timely manner and has the potential to bring subspecialty-level interpretation to areas that lack subspecialty pathologists. If the model is further validated, HnAIM may be used to improve the quality of care we are able to provide to our patients, ultimately improving our diagnosis of HCC, preventing delays in treatment, and improving patient outcomes.

Hannah P. Kim, MD, MSCR, is an assistant professor in the division of gastroenterology, hepatology, and nutrition in the department of medicine at Vanderbilt University Medical Center, Nashville, Tenn. She has no conflicts of interest.


New guideline sheds light on diagnosis, treatment of rare GI syndromes


A clinical practice guideline for the diagnosis and management of gastrointestinal hamartomatous polyposis syndromes has just been published by the U.S. Multi-Society Task Force on Colorectal Cancer, which comprises experts representing the American College of Gastroenterology, the American Gastroenterological Association, and the American Society for Gastrointestinal Endoscopy.

Gastrointestinal hamartomatous polyposis syndromes are rare, autosomal dominant disorders associated with intestinal and extraintestinal tumors. Expert consensus statements have previously offered some recommendations for managing these syndromes, but clinical data are scarce, so the present review “is intended to establish a starting point for future research,” lead author C. Richard Boland, MD, of the University of California, San Diego, and colleagues reported.

According to the investigators, “there are essentially no long-term prospective controlled studies of comparative effectiveness of management strategies for these syndromes.” As a result, their recommendations are based on “low-quality” evidence according to GRADE criteria.

Still, Dr. Boland and colleagues highlighted that “there has been tremendous progress in recent years, both in understanding the underlying genetics that underpin these disorders and in elucidating the biology of associated premalignant and malignant conditions.”

The guideline was published online in Gastroenterology.

Four syndromes reviewed

The investigators gathered these data to provide an overview of genetic and clinical features for each syndrome, as well as management strategies. Four disorders are included: juvenile polyposis syndrome; Peutz-Jeghers syndrome; hereditary mixed polyposis syndrome; and PTEN-hamartoma tumor syndrome, encompassing Bannayan-Riley-Ruvalcaba syndrome and Cowden’s syndrome.

Although all gastrointestinal hamartomatous polyposis syndromes are caused by germline alterations, Dr. Boland and colleagues pointed out that diagnoses are typically made based on clinical criteria, with germline results serving as confirmatory evidence.

The guideline recommends that any patient with a family history of hamartomatous polyps, or with a history of at least two hamartomatous polyps, should undergo genetic testing. The guideline also provides more nuanced genetic testing algorithms for each syndrome.

Among the hamartomatous polyp disorders, Peutz-Jeghers syndrome is the best understood, according to the investigators. It is caused by aberrations in the STK11 gene and is characterized by polyps with “branching bands of smooth muscle covered by hyperplastic glandular mucosa” that may occur in the stomach, small intestine, and colon. Patients are also at risk of extraintestinal neoplasia.

For management of Peutz-Jeghers syndrome, the guideline advises frequent endoscopic surveillance to prevent mechanical obstruction and bleeding, as well as multidisciplinary surveillance of the breasts, pancreas, ovaries, testes, and lungs.

Juvenile polyposis syndrome is most often characterized by solitary, sporadic polyps in the colorectum (98% of patients affected), followed distantly by polyps in the stomach (14%), ileum (7%), jejunum (7%), and duodenum (7%). The condition is linked with abnormalities in BMPR1A or SMAD4 genes, with SMAD4 germline abnormalities more often leading to “massive” gastric polyps, gastrointestinal bleeding, protein-losing enteropathy, and a higher incidence of gastric cancer in adulthood. Most patients with SMAD4 mutations also have hereditary hemorrhagic telangiectasia, characterized by gastrointestinal bleeding from mucocutaneous telangiectasias, arteriovenous malformations, and epistaxis.

Management of juvenile polyposis syndrome depends on frequent colonoscopies with polypectomies, beginning at age 12-15 years.

“The goal of surveillance in juvenile polyposis syndrome is to mitigate symptoms related to the disorder and decrease the risk of complications from the manifestations, including cancer,” Dr. Boland and colleagues wrote.

PTEN-hamartoma tumor syndrome, which includes both Bannayan-Riley-Ruvalcaba syndrome and Cowden’s syndrome, is caused by abnormalities in the eponymous PTEN gene. Patients with the condition have an increased risk of colon cancer and polyposis, as well as extraintestinal cancers.

Diagnosis of PTEN-hamartoma tumor syndrome may be complex, involving “clinical examination, mammography and breast MRI, thyroid ultrasound, transvaginal ultrasound, upper gastrointestinal endoscopy, colonoscopy, and renal ultrasound,” according to the guideline.

After diagnosis, frequent colonoscopies are recommended, typically starting at age 35 years, as well as continued surveillance of other organs.

Hereditary mixed polyposis syndrome, which involves attenuated colonic polyposis, is the rarest of the four disorders, having been reported in only “a few families,” according to the guideline. The condition has been linked with “large duplications of the promoter region or entire GREM1 gene.”

Onset is typically in the late 20s, “which is when colonoscopic surveillance should begin,” the investigators wrote. More data are needed to determine appropriate surveillance intervals and if the condition is associated with increased risk of extraintestinal neoplasia.

This call for more research into gastrointestinal hamartomatous polyposis syndromes carried through to the conclusion of the guideline.

“Long-term prospective studies of mutation carriers are still needed to further clarify the risk of cancer and the role of surveillance in these syndromes,” Dr. Boland and colleagues wrote. “With increases in genetic testing and evaluation, future studies will be conducted with more robust cohorts of genetically characterized, less heterogeneous populations. However, there is also a need to study patients and families with unusual phenotypes where no genotype can be found.”

The investigators disclosed no conflicts of interest with the current guideline; however, they provided a list of industry relationships, including Salix Pharmaceuticals, Ferring Pharmaceuticals, and Pfizer, among others.


A clinical practice guideline for the diagnosis and management of gastrointestinal hamartomatous polyposis syndromes has just been published by the U.S. Multi-Society Task Force on Colorectal Cancer, which is comprised of experts representing the American College of Gastroenterology, the American Gastroenterological Association, and the American Society for Gastrointestinal Endoscopy.

Gastrointestinal hamartomatous polyposis syndromes are rare, autosomal dominant disorders associated with intestinal and extraintestinal tumors. Expert consensus statements have previously offered some recommendations for managing these syndromes, but clinical data are scarce, so the present review “is intended to establish a starting point for future research,” lead author C. Richard Boland, MD, of the University of California, San Diego, and colleagues reported.

According to the investigators, “there are essentially no long-term prospective controlled studies of comparative effectiveness of management strategies for these syndromes.” As a result, their recommendations are based on “low-quality” evidence according to GRADE criteria.

Still, Dr. Boland and colleagues highlighted that “there has been tremendous progress in recent years, both in understanding the underlying genetics that underpin these disorders and in elucidating the biology of associated premalignant and malignant conditions.”

The guideline was published online in Gastroenterology .
 

Four syndromes reviewed

The investigators gathered these data to provide an overview of genetic and clinical features for each syndrome, as well as management strategies. Four disorders are included: juvenile polyposis syndrome; Peutz-Jeghers syndrome; hereditary mixed polyposis syndrome; and PTEN-hamartoma tumor syndrome, encompassing Bannayan-Riley-Ruvalcaba syndrome and Cowden’s syndrome.

Although all gastrointestinal hamartomatous polyposis syndromes are caused by germline alterations, Dr. Boland and colleagues pointed out that diagnoses are typically made based on clinical criteria, with germline results serving as confirmatory evidence.

The guideline recommends that any patient with a family history of hamartomatous polyps, or with a history of at least two hamartomatous polyps, should undergo genetic testing. The guideline also provides more nuanced genetic testing algorithms for each syndrome.

Among all the hamartomatous polyp disorders, Peutz-Jeghers syndrome is most understood, according to the investigators. It is caused by aberrations in the STK11 gene, and is characterized by polyps with “branching bands of smooth muscle covered by hyperplastic glandular mucosa” that may occur in the stomach, small intestine, and colon. Patients are also at risk of extraintestinal neoplasia.

For management of Peutz-Jeghers syndrome, the guideline advises frequent endoscopic surveillance to prevent mechanical obstruction and bleeding, as well as multidisciplinary surveillance of the breasts, pancreas, ovaries, testes, and lungs.

Juvenile polyposis syndrome is most often characterized by solitary, sporadic polyps in the colorectum (98% of patients affected), followed distantly by polyps in the stomach (14%), ileum (7%), jejunum (7%), and duodenum (7%). The condition is linked with abnormalities in BMPR1A or SMAD4 genes, with SMAD4 germline abnormalities more often leading to “massive” gastric polyps, gastrointestinal bleeding, protein-losing enteropathy, and a higher incidence of gastric cancer in adulthood. Most patients with SMAD4 mutations also have hereditary hemorrhagic telangiectasia, characterized by gastrointestinal bleeding from mucocutaneous telangiectasias, arteriovenous malformations, and epistaxis.

Management of juvenile polyposis syndrome depends on frequent colonoscopies with polypectomies, beginning at age 12-15 years.

“The goal of surveillance in juvenile polyposis syndrome is to mitigate symptoms related to the disorder and decrease the risk of complications from the manifestations, including cancer,” Dr. Boland and colleagues wrote.

PTEN-hamartoma tumor syndrome, which includes both Bannayan-Riley-Ruvalcaba syndrome and Cowden’s syndrome, is caused by abnormalities in the eponymous PTEN gene. Patients with the condition have an increased risk of colon cancer and polyposis, as well as extraintestinal cancers.

Diagnosis of PTEN-hamartoma tumor syndrome may be complex, involving “clinical examination, mammography and breast MRI, thyroid ultrasound, transvaginal ultrasound, upper gastrointestinal endoscopy, colonoscopy, and renal ultrasound,” according to the guideline.

After diagnosis, frequent colonoscopies are recommended, typically starting at age 35 years, as well as continued surveillance of other organs.

Hereditary mixed polyposis syndrome, which involves attenuated colonic polyposis, is the rarest of the four disorders, having been reported in only “a few families,” according to the guideline. The condition has been linked with “large duplications of the promoter region or entire GREM1 gene.”

Onset is typically in the late 20s, “which is when colonoscopic surveillance should begin,” the investigators wrote. More data are needed to determine appropriate surveillance intervals and whether the condition is associated with an increased risk of extraintestinal neoplasia.

This call for more research into gastrointestinal hamartomatous polyposis syndromes carried through to the conclusion of the guideline.

“Long-term prospective studies of mutation carriers are still needed to further clarify the risk of cancer and the role of surveillance in these syndromes,” Dr. Boland and colleagues wrote. “With increases in genetic testing and evaluation, future studies will be conducted with more robust cohorts of genetically characterized, less heterogeneous populations. However, there is also a need to study patients and families with unusual phenotypes where no genotype can be found.”

The investigators disclosed no conflicts of interest with the current guideline; however, they provided a list of industry relationships, including Salix Pharmaceuticals, Ferring Pharmaceuticals, and Pfizer, among others.


Second-trimester blood test predicts preterm birth

Article Type
Changed
Tue, 04/26/2022 - 16:50

A new blood test performed in the second trimester could help identify pregnancies at risk of early and very early spontaneous preterm birth (sPTB), based on a prospective cohort study.

The cell-free RNA (cfRNA) profiling tool could guide patient and provider decision-making, while the underlying research illuminates biological pathways that may facilitate novel interventions, reported lead author Joan Camunas-Soler, PhD, of Mirvie, South San Francisco, and colleagues.

“Given the complex etiology of this heterogeneous syndrome, it would be advantageous to develop predictive tests that provide insight on the specific pathophysiology leading to preterm birth for each particular pregnancy,” Dr. Camunas-Soler and colleagues wrote in the American Journal of Obstetrics and Gynecology. “Such an approach could inform the development of preventive treatments and targeted therapeutics that are currently lacking/difficult to implement due to the heterogeneous etiology of sPTB.”

Currently, the best predictor of sPTB is previous sPTB, according to the investigators. Although a combination approach that incorporates cervical length and fetal fibronectin in cervicovaginal fluid is “of use,” they noted, “this is not standard of care in the U.S.A. nor recommended by the American College of Obstetricians and Gynecologists or the Society for Maternal-Fetal Medicine.” Existing molecular tests lack clinical data and may be inaccurate across diverse patient populations, they added.

The present study aimed to address these shortcomings by creating a second-trimester blood test for predicting sPTB. To identify relevant biomarkers, the investigators compared RNA profiles that were differentially expressed in three types of cases: term birth, early sPTB, and very early sPTB.

Among 242 women who contributed second-trimester blood samples for analysis, 194 went on to have a term birth. Of the remaining 48 women who gave birth spontaneously before 35 weeks’ gestation, 32 delivered between 25 and 35 weeks (early sPTB), while 16 delivered before 25 weeks’ gestation (very early sPTB). Slightly more than half of the patients were White, about one-third were Black, approximately 10% were Asian, and the remainder were of unknown race/ethnicity. Cases of preeclampsia were excluded.

The gene discovery and modeling process revealed 25 distinct genes that were significantly associated with early sPTB, offering a risk model with a sensitivity of 76% and a specificity of 72% (area under the curve [AUC], 0.80; 95% confidence interval, 0.72-0.87). Very early sPTB was associated with a set of 39 genes, giving a model with a sensitivity of 64% and a specificity of 80% (AUC, 0.76; 95% CI, 0.63-0.87).
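
Sensitivity and specificity alone do not show how often a positive result would actually be followed by an early delivery; that depends on how common early sPTB is in the screened population. The short sketch below is purely illustrative and is not part of the published analysis: it applies Bayes’ rule to the reported sensitivity and specificity, and the 5% prevalence of early sPTB is a hypothetical figure chosen for the example, not a number from the study.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Convert test characteristics into predictive values via Bayes' rule."""
    true_pos = sensitivity * prevalence                # flagged and truly early sPTB
    false_pos = (1 - specificity) * (1 - prevalence)   # flagged but delivered at term
    false_neg = (1 - sensitivity) * prevalence         # missed early sPTB
    true_neg = specificity * (1 - prevalence)          # correctly reassured
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Early sPTB model as reported: sensitivity 76%, specificity 72%.
# A 5% prevalence is an illustrative assumption, not a value from the study.
ppv, npv = predictive_values(0.76, 0.72, 0.05)
print(f"PPV: {ppv:.1%}, NPV: {npv:.1%}")  # roughly 12.5% and 98.3% at these inputs
```

At that assumed prevalence, most positive results would not be followed by early sPTB; that trade-off is the context for the false-positive question put to Dr. Elovitz below.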

Characterization of the two RNA profiles offered a glimpse into the underlying biological processes driving preterm birth. The genes predicting early sPTB are largely responsible for extracellular matrix degradation and remodeling, which could, “in terms of mechanism, reflect ongoing processes associated with cervical shortening, a feature often detected some weeks prior to sPTB,” the investigators wrote. In contrast, genes associated with very early sPTB are linked with insulinlike growth factor transport, which drives fetal growth and placentation. These findings could lead to development of pathway-specific interventions, Dr. Camunas-Soler and colleagues suggested.

According to coauthor Michal A. Elovitz, MD, the Hilarie L. Morgan and Mitchell L. Morgan President’s Distinguished Professor in Women’s Health at the University of Pennsylvania, Philadelphia, and chief medical advisor at Mirvie, the proprietary RNA platform moves beyond “unreliable and at times biased clinical factors such as race, BMI, and maternal age” to offer a “precision-based approach to pregnancy health.”

Excluding traditional risk factors also “promises more equitable care than the use of broad sociodemographic factors that often result in bias,” she added, noting that this may help address the higher rate of pregnancy complications among Black patients.

When asked about the potential for false-positive results, considering reported specificity rates of 72%-80%, Dr. Elovitz suggested that such concerns among pregnant women are an “unfortunate misconception.”

“It is not reflective of what women want regarding knowledge about the health of their pregnancy,” she said in a written comment. “Rather than be left in the dark, women want to be prepared for what is to come in their pregnancy journey.”

In support of this statement, Dr. Elovitz cited a recent study involving women with preeclampsia and other hypertensive disorders in pregnancy. A questionnaire showed that women appreciated pregnancy risk models when making decisions, and reported that they would have greater peace of mind if such tests were available.

Laura Jelliffe-Pawlowski, PhD, of the California Preterm Birth Initiative at the University of California, San Francisco, supported Dr. Elovitz’s viewpoint.

“If you talk to women who have delivered preterm most (but not all) say that they would have wanted to know their risk so they could have been better prepared,” she said in a written comment. “I think we need to shift the narrative to empowerment away from fear.”

Dr. Jelliffe-Pawlowski, who holds a patent for a separate test predicting preterm birth, said that the Mirvie RNA platform is “promising,” although she expressed concern that excluding patients with preeclampsia – representing approximately 4% of pregnancies in the United States – may have clouded accuracy results.

“What is unclear is how the test would perform more generally when a sample of all pregnancies was included,” she said. “Without that information, it is hard to compare their findings with other predictive models without such exclusions.”

Regardless of the model used, Dr. Jelliffe-Pawlowski said that more research is needed to determine best clinical responses when risk of sPTB is increased.

“Ultimately we want to connect action with results,” she said. “Okay, so [a woman] is at high risk for delivering preterm – now what? There is a lot of untapped potential once you start to focus more with women and birthing people you know have a high likelihood of preterm birth.”

The study was supported by Mirvie, Tommy’s Charity, and the National Institute for Health Research Biomedical Research Centre. The investigators disclosed financial relationships with Mirvie, including equity interest and/or intellectual property rights. Cohort contributors were remunerated for sample collection and/or shipping. Dr. Jelliffe-Pawlowski holds a patent for a different preterm birth prediction blood test.

*This story was updated on 4/26/2022. 


Study: Fasting plus calorie counting offered no weight-loss benefit over calorie counting alone

Article Type
Changed
Fri, 04/22/2022 - 07:49

 

Not so fast! Daily fasting with calorie restriction may not lead to shedding more pounds than just cutting back on calories, according to the authors of a new study.

Over the course of a year, study participants who ate only from 8:00 a.m. to 4:00 p.m. did not lose significantly more weight than individuals who ate whenever they wanted, nor did they achieve significantly greater improvements in other obesity-related health measures like body mass index (BMI) or metabolic risk, reported lead author Deying Liu, MD, of Nanfang Hospital, Southern Medical University, Guangzhou, China, and colleagues.

“[Daily fasting] has gained popularity because it is a weight-loss strategy that is simple to follow, which may enhance adherence,” Dr. Liu and colleagues wrote in the New England Journal of Medicine. However, “the long-term efficacy and safety of time-restricted eating as a weight-loss strategy are still uncertain, and the long-term effects on weight loss of time-restricted eating as compared with daily calorie restriction alone have not been fully explored.”

To learn more, Dr. Liu and colleagues recruited 139 adult patients with BMIs between 28 and 45. Individuals with serious medical conditions, such as malignant tumors, diabetes, and chronic kidney disease, were excluded. Other exclusion criteria included smoking, ongoing participation in a weight-loss program, GI surgery within the prior year, use of medications that affect energy balance and weight, and planned or current pregnancy.

All participants were advised to eat calorie-restricted diets, with ranges of 1,500-1,800 kcal per day for men and 1,200-1,500 kcal per day for women. To determine the added impact of fasting, participants were randomized in a 1:1 ratio into time-restricted (fasting) or non–time-restricted (nonfasting) groups, in which fasting participants ate only during an 8-hour window from 8:00 a.m. to 4:00 p.m., whereas nonfasting participants ate whenever they wanted.

At 6 months and 12 months, participants were re-evaluated for changes in weight, body fat, BMI, blood pressure, lean body mass, and metabolic risk factors, including glucose level, triglycerides, and others.
 

Caloric intake restriction seems to explain most of beneficial effects

At 1-year follow-up, 118 participants (84.9%) remained in the study. Although members of the fasting group lost slightly more weight on average than those in the nonfasting group (mean, 8.0 kg vs. 6.3 kg), the difference between groups was not statistically significant (95% confidence interval, −4.0 to 0.4; P = .11).

Most of the other obesity-related health measures also trended toward favoring the fasting group, but again, none of these improvements was statistically significant. Waist circumference at 1 year, for example, decreased by a mean of 9.4 cm in the fasting group versus 8.8 cm in the nonfasting group, a net difference of 1.8 cm (95% CI, –4.0 to 0.5).

“We found that the two weight-loss regimens that we evaluated had similar success in patients with obesity, regardless of whether they reduced their calorie consumption through time-restricted eating or through calorie restriction alone,” Dr. Liu and colleagues concluded.

Principal investigator Huijie Zhang, MD, PhD, professor, chief physician, and deputy director of the department of endocrinology and metabolism at Nanfang Hospital, noted that their findings are “consistent with the findings in previous studies.”

“Our data suggest that caloric intake restriction explained most of the beneficial effects of a time-restricted eating regimen,” Dr. Zhang said.

Still, Dr. Zhang called time-restricted eating “a viable and sustainable approach for a person who wants to lose weight.”

More work is needed, Dr. Zhang said, to uncover the impact of fasting in “diverse groups,” including patients with chronic disease like diabetes and cardiovascular disease. Investigators should also conduct studies to compare outcomes between men and women, and evaluate the effects of other fasting durations.
 

 

 

Can trial be applied to a wider population?

According to Blandine Laferrère, MD, PhD, and Satchidananda Panda, PhD, of Columbia University Irving Medical Center, New York, and the Salk Institute for Biological Studies, La Jolla, Calif., respectively, “the results of the trial suggest that calorie restriction combined with time restriction, when delivered with intensive coaching and monitoring, is an approach that is as safe, sustainable, and effective for weight loss as calorie restriction alone.”

Yet Dr. Laferrère and Dr. Panda also expressed skepticism about broader implementation of a similar regimen.

“The applicability of this trial to wider populations is debatable,” they wrote in an accompanying editorial. “The short time period for eating at baseline may be specific to the population studied, since investigators outside China have reported longer time windows. The rigorous coaching and monitoring by trial staff also leaves open the question of whether time-restricted eating is easier to adhere to than intentional calorie restriction. Such cost-benefit analyses are important for the assessment of the scalability of a lifestyle intervention.”
 

Duration is trial’s greatest strength

Kristina Varady, PhD, professor of nutrition in the department of kinesiology and nutrition at the University of Illinois at Chicago, said the “key strength” of the trial was its 12-month duration, which makes it the longest time-restricted eating trial to date; however, she was critical of the design.

“Quite frankly, I’m surprised this study got into such a high-caliber medical journal,” Dr. Varady said in a written comment. “It doesn’t even have a control group! It goes to show how popular these diets are and how much people want to know about them.”

She also noted that “the study was flawed in that it didn’t really look at the effects of true time-restricted eating.” According to Dr. Varady, combining calorie restriction with time-restricted eating “kind of defeats the purpose” of a time-restricted diet.

“The main benefit of time-restricted eating is that you don’t need to count calories in order to lose weight,” Dr. Varady said, citing two of her own studies from 2018 and 2020. “Just by limiting the eating window to 8 hours per day, people naturally cut out 300-500 calories per day. That’s why people like [time-restricted eating] so much.”

Dr. Varady was also “very surprised” at the adherence data. At 1 year, approximately 85% of the patients were still following the protocol, a notably higher rate than most dietary intervention studies, which typically report adherence rates of 50-60%, she said. The high adherence rate was particularly unexpected because of the 8:00 a.m.–4:00 p.m. eating window, Dr. Varady added, since that meant skipping “the family/social meal every evening over 1 whole year!”

The study was funded by the National Key Research and Development Project and others. The study investigators reported no conflicts of interest. Dr. Varady disclosed author fees from the Hachette Book Group for her book “The Every Other Day Diet.”


Childhood abuse may increase risk of MS in women

Article Type
Changed
Thu, 12/15/2022 - 15:38

Emotional or sexual abuse in childhood may increase risk of multiple sclerosis (MS) in women, and risk may increase further with exposure to multiple kinds of abuse, according to the first prospective cohort study of its kind.

More research is needed to uncover underlying mechanisms of action, according to lead author Karine Eid, MD, a PhD candidate at Haukeland University Hospital, Bergen, Norway, and colleagues.

“Trauma and stressful life events have been associated with an increased risk of autoimmune disorders,” the investigators wrote in the Journal of Neurology, Neurosurgery & Psychiatry. “Whether adverse events in childhood can have an impact on MS susceptibility is not known.”

The present study recruited participants from the Norwegian Mother, Father and Child cohort, a population consisting of Norwegian women who were pregnant from 1999 to 2008. Of the 77,997 participating women, 14,477 reported emotional, sexual, and/or physical abuse in childhood, while the remaining 63,520 women reported no abuse. After a mean follow-up of 13 years, 300 women were diagnosed with MS, among whom 24% reported a history of childhood abuse, compared with 19% among women who did not develop MS.

To look for associations between childhood abuse and risk of MS, the investigators used a Cox model adjusted for confounders and mediators, including smoking, obesity, adult socioeconomic factors, and childhood social status. The model revealed that emotional abuse increased the risk of MS by 40% (hazard ratio [HR], 1.40; 95% confidence interval [CI], 1.03-1.90), and sexual abuse increased the risk of MS by 65% (HR, 1.65; 95% CI, 1.13-2.39).
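
For readers unfamiliar with the statistical approach, the sketch below shows, on synthetic data, what an adjusted Cox proportional hazards analysis of this kind looks like in practice. It is not the authors’ code or data: the cohort size, event rate, covariates, and effect sizes are invented for the example, and it assumes the open-source lifelines package is installed.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic illustration only: 5,000 hypothetical women followed up to 13 years,
# with a binary exposure (abuse), one confounder (smoking), and exponential event times.
rng = np.random.default_rng(0)
n = 5_000
abuse = rng.binomial(1, 0.2, n)
smoking = rng.binomial(1, 0.3, n)
# Baseline hazard chosen so the outcome stays rare; both covariates raise the hazard.
hazard = 0.002 * np.exp(0.35 * abuse + 0.25 * smoking)
event_time = rng.exponential(1 / hazard)
follow_up = np.minimum(event_time, 13.0)          # administrative censoring at 13 years
event = (event_time <= 13.0).astype(int)          # 1 = diagnosed during follow-up

df = pd.DataFrame({"duration": follow_up, "event": event,
                   "abuse": abuse, "smoking": smoking})

# Adjusted Cox model: exp(coef) for "abuse" is what gets reported as a hazard ratio.
cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="event")
cph.print_summary()
```

The exponentiated coefficient for the exposure column, adjusted here for the simulated confounder, is the quantity reported as a hazard ratio with its 95% confidence interval.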

Although physical abuse alone did not significantly increase risk of MS (HR, 1.31; 95% CI, 0.83-2.06), it did contribute to a dose-response relationship when women were exposed to more than one type of childhood abuse. Women exposed to two out of three abuse categories had a 66% increased risk of MS (HR, 1.66; 95% CI, 1.04-2.67), whereas women exposed to all three types of abuse had the highest risk of MS, at 93% (HR, 1.93; 95% CI, 1.02-3.67).

Dr. Eid and colleagues noted that their findings are supported by previous retrospective research, and discussed possible mechanisms of action.

“The increased risk of MS after exposure to childhood sexual and emotional abuse may have a biological explanation,” they wrote. “Childhood abuse can cause dysregulation of the hypothalamic-pituitary-adrenal axis, lead to oxidative stress, and induce a proinflammatory state decades into adulthood. Psychological stress has been shown to disrupt the blood-brain barrier and cause epigenetic changes that may increase the risk of neurodegenerative disorders, including MS.

“The underlying mechanisms behind this association should be investigated further,” they concluded.
 

Study findings should guide interventions

Commenting on the research, Ruth Ann Marrie, MD, PhD, professor of medicine and community health sciences and director of the multiple sclerosis clinic at Max Rady College of Medicine, Rady Faculty of Health Sciences, University of Manitoba, Winnipeg, said that the present study “has several strengths compared to prior studies – including that it is prospective and the sample size.”

Dr. Marrie, who was not involved in the study, advised clinicians in the field to take note of the findings, as patients with a history of abuse may need unique interventions.

“Providers need to recognize the higher prevalence of childhood maltreatment in people with MS,” Dr. Marrie said in an interview. “These findings dovetail with others that suggest that adverse childhood experiences are associated with increased mental health concerns and pain catastrophizing in people with MS. Affected individuals may benefit from additional psychological supports and trauma-informed care.”

Tiffany Joy Braley, MD, associate professor of neurology, and Carri Polick, RN and PhD candidate at the school of nursing, University of Michigan, Ann Arbor, who published a case report last year highlighting the importance of evaluating stress exposure in MS, suggested that the findings should guide interventions at both a system and patient level.

“Although a cause-and-effect relationship cannot be established by the current study, these and related findings should be considered in the context of system level and policy interventions that address links between environment and health care disparities,” they said in a joint, written comment. “Given recent impetus to provide trauma-informed health care, these data could be particularly informative in neurological conditions which are associated with high mental health comorbidity. Traumatic stress screening practices could lead to referrals for appropriate support services and more personalized health care.”

While several mechanisms have been proposed to explain the link between traumatic stress and MS, more work is needed in this area, they added.

Dr. Marrie acknowledged this knowledge gap.

“Our understanding of the etiology of MS remains incomplete,” Dr. Marrie said. “We still need a better understanding of mechanisms by which adverse childhood experiences lead to MS, how they interact with other risk factors for MS (beyond smoking and obesity), and whether there are any interventions that can mitigate the risk of developing MS that is associated with adverse childhood experiences.”

The investigators disclosed relationships with Novartis, Biogen, Merck, and others. Dr. Marrie receives research support from the Canadian Institutes of Health Research, the National Multiple Sclerosis Society, MS Society of Canada, the Consortium of Multiple Sclerosis Centers, Crohn’s and Colitis Canada, Research Manitoba, and the Arthritis Society; she has no pharmaceutical support. Dr. Braley and Ms. Polick reported no conflicts of interest.
