Harnessing Doxycycline for STI Prevention: A Vital Role for Primary Care Physicians

Primary care physicians frequently offer postexposure prophylaxis for infections such as influenza, pertussis, tetanus, hepatitis, and Lyme disease. The scope of postexposure prophylaxis in primary care is expanding, however, presenting an opportunity to integrate it further into patient care. As primary care providers, we have the unique advantage of being involved in both preventive care and immediate response, particularly in urgent care or triage scenarios. This dual role is crucial, as timely administration of postexposure prophylaxis can prevent infections from taking hold, especially after high-risk exposures.

Recently, the use of doxycycline as postexposure prophylaxis for sexually transmitted infections (STIs) has gained attention. Doxycycline has traditionally been used as preexposure or postexposure prophylaxis for conditions like malaria and Lyme disease but, until now, has not been widely employed for STI prevention. It is a common, generally safe medication whose side effects typically resolve upon discontinuation. Several open-label studies have shown that taking 200 mg of doxycycline within 72 hours of condomless sex significantly reduces the incidence of chlamydia, gonorrhea, and syphilis among gay, bisexual, and other men who have sex with men, as well as transgender women who have previously had a bacterial STI. However, these benefits have not been consistently observed among cisgender women and heterosexual men.

Given these findings, the Centers for Disease Control and Prevention (CDC) now recommends that clinicians discuss the risks and benefits of doxycycline postexposure prophylaxis (Doxy PEP) with gay, bisexual, and other men who have sex with men, as well as transgender women who have had a bacterial STI in the past 12 months. This discussion should be part of a shared decision-making process, advising the use of 200 mg of doxycycline within 72 hours of oral, vaginal, or anal sex, not to exceed 200 mg every 24 hours, with reassessment of the need for continued use every 3-6 months. Doxy PEP can be safely co-prescribed with preexposure prophylaxis for HIV (PrEP); many patients who receive PrEP are also eligible for Doxy PEP, though the two groups do not fully overlap.

The shared decision-making process is essential when considering Doxy PEP. While it is cost-effective and proven to reduce the risk of gonorrhea, chlamydia, and syphilis, its benefits vary across populations. Moreover, some patients may experience side effects such as photosensitivity and gastrointestinal discomfort. Because the effectiveness of prophylaxis depends on the timing of exposure and the patient’s current risk factors, it is important to reevaluate regularly whether Doxy PEP remains beneficial. And because a clear benefit has not yet been demonstrated for heterosexual men and cisgender women, prevention options for these patients still need to be explored.

Integrating Doxy PEP into a primary care practice can be done efficiently. A standing order protocol could be established for telehealth visits or nurse triage, allowing timely administration when patients report an exposure within 72 hours. It could also be incorporated into electronic medical records as part of a smart set for easy access to orders and as standard educational material in after-visit instructions. As this option is new, it is also important to discuss it with patients before they may need it so that they are aware should the need arise. While concerns about antibiotic resistance are valid, studies have not yet shown significant resistance issues related to Doxy PEP use, though ongoing monitoring is necessary.
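
To make the triage logic concrete, here is a minimal sketch of how the timing rules behind such a standing order might be encoded, for instance in a telehealth questionnaire or nurse-triage tool. All function and field names are hypothetical, invented for illustration; a real implementation would draw eligibility from the chart and follow local protocol. The sketch simply restates the CDC parameters described above.

    # Hypothetical standing-order helper for Doxy PEP triage (illustrative sketch only).
    from typing import Optional

    def doxy_pep_triage(
        hours_since_exposure: float,
        hours_since_last_dose: Optional[float],   # None if no recent doxycycline dose
        in_recommended_population: bool,          # MSM or transgender woman, per CDC guidance
        bacterial_sti_past_12_months: bool,
    ) -> str:
        # Outside the population with demonstrated benefit: fall back to shared decision-making.
        if not (in_recommended_population and bacterial_sti_past_12_months):
            return "Outside current CDC recommendation; individualize via shared decision-making."
        # Effectiveness depends on timing: the studied window is 72 hours after sex.
        if hours_since_exposure > 72:
            return "Beyond the 72-hour window; Doxy PEP not indicated for this exposure."
        # Dosing ceiling: no more than 200 mg in any 24-hour period.
        if hours_since_last_dose is not None and hours_since_last_dose < 24:
            return "Hold: a 200 mg dose was taken within the past 24 hours."
        return "Offer doxycycline 200 mg once; reassess ongoing need every 3-6 months."

    # Example: an eligible patient reports condomless sex 10 hours ago, no recent dose.
    print(doxy_pep_triage(10, None, True, True))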

You might wonder why primary care should prioritize this intervention. As the first point of contact, primary care providers are well-positioned to identify the need for prophylaxis, particularly since its effectiveness diminishes over time. Furthermore, the established, trusting relationships that primary care physicians often have with their patients create a nonjudgmental environment that encourages disclosure of potential exposures. This trust, combined with easier access to care, can make a significant difference in the timely provision of postexposure prophylaxis. By offering comprehensive, holistic care, including prophylaxis, primary care physicians can prevent infections and address conditions before they lead to serious complications. Therefore, family medicine physicians should consider incorporating Doxy PEP into their practices as a standard of care.
 

Dr. Wheat is vice chair of Diversity, Equity, and Inclusion, Department of Family and Community Medicine, and associate professor, Family and Community Medicine, at Northwestern University’s Feinberg School of Medicine, Chicago. She has no relevant financial disclosures.

References

Bachmann LH et al. CDC Clinical Guidelines on the Use of Doxycycline Postexposure Prophylaxis for Bacterial Sexually Transmitted Infection Prevention, United States, 2024. MMWR Recomm Rep. 2024;73(RR-2):1-8.

Traeger MW et al. Potential Impact of Doxycycline Postexposure Prophylaxis Prescribing Strategies on Incidence of Bacterial Sexually Transmitted Infections. Clin Infect Dis. 2023 Aug 18. doi:10.1093/cid/ciad488.


Controlling Six Risk Factors Can Combat CKD in Obesity

TOPLINE:

Optimal management of blood pressure, A1c levels, low-density lipoprotein cholesterol (LDL-C), albuminuria, smoking, and physical activity may reduce the excess risk for chronic kidney disease (CKD) typically linked to obesity. The protective effect is more pronounced in men, in those with lower healthy diet scores, and in users of diabetes medication.

METHODOLOGY:

  • Obesity is a significant risk factor for CKD, but it is unknown if managing multiple other obesity-related CKD risk factors can mitigate the excess CKD risk.
  • Researchers assessed CKD risk factor control in 97,538 participants with obesity from the UK Biobank and compared them with an equal number of age- and sex-matched control participants with normal body weight and no CKD at baseline.
  • Participants with obesity were assessed for six modifiable risk factors: blood pressure, A1c levels, LDL-C, albuminuria, smoking, and physical activity.
  • Overall, 2487, 12,720, 32,388, 36,988, and 15,381 participants with obesity had at most two, three, four, five, or six risk factors under combined control, respectively, with the two-or-fewer group serving as the reference.
  • The primary outcome was incident CKD in relation to the degree of combined risk factor control. CKD risk in participants with obesity was also compared with CKD incidence in matched normal-weight participants.

TAKEAWAY:

  • During a median follow-up period of 10.8 years, 3954 cases of incident CKD were reported in participants with obesity and 1498 cases in matched persons of normal body mass index (BMI).
  • In a stepwise pattern, optimal control of each additional risk factor was associated with an 11% reduction in the incidence of CKD events (adjusted hazard ratio [aHR], 0.89; 95% CI, 0.86-0.91), accumulating to a 49% reduction (aHR, 0.51; 95% CI, 0.43-0.61) with combined control of all six risk factors in participants with obesity (see the arithmetic check after this list).
  • The protective effect of combined control of risk factors was more pronounced in men vs women, in those with lower vs higher healthy diet scores, and in users vs nonusers of diabetes medication.
  • A similar stepwise pattern emerged between the number of risk factors controlled and CKD risk in participants with obesity compared with matched individuals of normal BMI, with the excess CKD risk eliminated in participants with obesity with six risk factors under control.
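
As a rough consistency check (our illustration, not a calculation reported by the authors), compounding the per-factor hazard ratio multiplicatively across all six risk factors approximately reproduces the all-factor estimate:

    0.89^{6} \approx 0.50,

in line with the reported aHR of 0.51 for combined control of all six.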

IN PRACTICE:

“Comprehensive control of risk factors might effectively neutralize the excessive CKD risk associated with obesity, emphasizing the potential of a joint management approach in the prevention of CKD in this population,” the authors wrote.

SOURCE:

The study was led by Rui Tang, MS, Department of Epidemiology, School of Public Health and Tropical Medicine, Tulane University, New Orleans, Louisiana. It was published online in Diabetes, Obesity and Metabolism.

LIMITATIONS:

The evaluated risk factors for CKD were selected somewhat arbitrarily and may not represent the optimal set. The study did not consider the time-varying effect of joint risk factor control because some variables, such as A1c, were not available over time. Generalizability is limited because over 90% of the UK Biobank cohort is composed of White individuals and of people with healthier behaviors than the overall UK population.

DISCLOSURES:

The study was supported by grants from the US National Heart, Lung, and Blood Institute and the National Institute of Diabetes and Digestive and Kidney Diseases. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Starting Mammography at Age 40 May Backfire Due to False Positives

Earlier this year, I wrote a Medscape commentary explaining my disagreement with the updated US Preventive Services Task Force (USPSTF) recommendation that all women at average risk for breast cancer start screening mammography at age 40. The bottom line is that when the evidence doesn’t change, the guidelines shouldn’t change. Since then, other screening experts have criticized the USPSTF guideline on similar grounds, and a national survey reported that nearly 4 out of 10 women in their 40s preferred to delay breast cancer screening after viewing a decision aid and a personalized breast cancer risk estimate.

The decision analysis performed for the USPSTF guideline estimated that, compared with starting mammography at age 50, every 1000 women who begin at age 40 experience 519 more false-positive results and 62 more benign breast biopsies. Another study suggested that anxiety and other psychosocial harms from a false-positive test are similar whether patients require a biopsy or additional imaging only. Of greater concern, women who have false-positive results are less likely to return for their next scheduled screening exam.

A recent analysis of 2005-2017 data from the US Breast Cancer Surveillance Consortium found that about 1 in 10 mammograms had a false-positive result. Sixty percent of these patients underwent immediate additional imaging, 27% were recalled for diagnostic imaging within the next few days to weeks, and 13% were advised to have a biopsy. While patients who had additional imaging at the same visit were only 1.9% less likely to return for screening mammography within 30 months compared with those with normal mammograms, women who were recalled for short-interval follow-up or recommended for biopsy were 15.9% and 10% less likely to return, respectively. For unclear reasons, women who identified as Asian or Hispanic had even lower rates of return screening after false-positive results.

These differences matter because women in their 40s, with the lowest incidence of breast cancer among those undergoing screening, have a lot of false positives. A patient who follows the USPSTF recommendation and starts screening at age 40 has a 42% chance of having at least one false positive with every-other-year screening, or a 61% chance with annual screening, by the time she turns 50. If some of these patients are so turned off by false positives that they don’t return for regular mammography in their 50s and 60s, when screening is the most likely to catch clinically significant cancers at treatable stages, then moving up the starting age may backfire and cause net harm.
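
These cumulative figures behave roughly as expected from compounding the approximately 1-in-10 per-examination false-positive rate described above. Under the simplifying, purely illustrative assumption that each round carries an independent false-positive probability p,

    P(\text{at least one false positive in } n \text{ rounds}) = 1 - (1 - p)^{n},

so with p ≈ 0.10, five biennial screens between ages 40 and 50 give 1 − 0.90^5 ≈ 0.41, and ten annual screens give 1 − 0.90^10 ≈ 0.65, close to the modeled 42% and 61%. The decision-analysis figures differ slightly because real recall rates vary by age, screening round, and prior results.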

The recently implemented FDA rule requiring mammography reports to include breast density could compound this problem. Because younger women are more likely to have dense breasts, more of them will probably decide to have supplemental cancer imaging. I previously pointed out that we don’t know whether supplemental imaging with breast ultrasonography or MRI reduces cancer deaths, but we do know that it increases false-positive results.

I have personally cared for several patients who abandoned screening mammography for long stretches, or permanently, after having endured one or more benign biopsies prompted by a false-positive result. I vividly recall one woman in her 60s who was very reluctant to have screening tests in general, and mammography in particular, for that reason. After she had been my patient for a few years, I finally persuaded her to resume screening. We were both surprised when her first mammogram in more than a decade revealed an early-stage breast cancer. Fortunately, the tumor was successfully treated, but for her, an earlier false-positive result nearly ended up having critical health consequences.

Dr. Lin is associate director, Family Medicine Residency Program, Lancaster General Hospital, Lancaster, Pennsylvania. He blogs at Common Sense Family Doctor. He has no relevant financial relationships.

A version of this article appeared on Medscape.com.


Should There Be a Mandatory Retirement Age for Physicians?

This transcript has been edited for clarity.

I’d like to pose a question: When should doctors retire? When, as practicing physicians or surgeons, do we become too old to deliver competent service? 

You will be amazed to hear, those of you who have listened to my videos before — and although it is a matter of public knowledge — that I’m 68. I know it’s impossible to imagine, due to this youthful appearance, visage, and so on, but I am. I’ve been a cancer doctor for 40 years; therefore, I need to think a little about retirement. 

There are two elements of this for me. I’m a university professor, and in Oxford we did vote, as a democracy of scholars, to have a mandatory retirement age around 68. This is so that we can bring new blood forward, create the space to promote new professors, bring in youngsters with new ideas, and get rid of us fusty old lot. 

The other argument would be, of course, that we are wise, we’re experienced, we are world-weary, and we’re successful — otherwise, we wouldn’t have lasted as academics as long. Nevertheless, we voted to do that. 

It’s possible to have a discussion with the university to extend this, and for those of us who are clinical academics, I have an honorary appointment as a consultant cancer physician in the hospital and my university professorial appointment, too.

I can extend it probably until I’m about 70. It feels like a nice, round number at which to retire — somewhat arbitrarily, one would admit. But does that feel right? 

In the United States, more than 25% of the physician workforce is over the age of 65. There are many studies showing that there is a 20% cognitive decline for most individuals between the ages of 45 and 65.

Are we, as an elderly workforce, as capable as we once were? Clearly, this is highly individual. It depends on each of our own health status, where we started from, and so on, but are there any general rules that we can apply? I think these are starting to creep in around the sense of revalidation.

In the United Kingdom, we have a General Medical Council (GMC). I need to have a license to practice from the GMC and a sense of fitness to practice. I have annual appraisals within the hospital system, in which I explore delivery of care, how I’m doing as a mentor, am I reaching the milestones I’ve set in terms of academic achievements, and so on.

This is a peer-to-peer process. We have senior physicians — people like myself — who act as appraisers to support our colleagues and to maintain that sense of fitness to practice. Every 5 years, I’m revalidated by the GMC. They take account of the annual appraisals and a report made by the senior physician within my hospital network who’s a so-called designated person.

These two elements come together with patient feedback, with 360-degree feedback from colleagues, and so on. This is quite a firmly regulated system that I think works. Our mandatory retirement age of 65 has gone; that was phased out by the government. In fact, our NHS is making an effort to retain older doctors in the workforce.

They see the benefits of mentorship, experience, leadership, and networks. At a time when the majority of NHS staff are actively seeking to retire at 65, the NHS is trying to retain and pull back those of us who have been around for that wee bit longer and who still feel committed to doing it. 

I’d be really interested to see what you think. There’s variation from country to country. I know that, in Australia, they’re talking about annual appraisals of doctors over the age of 70. I’d be very interested to hear what you think is likely to happen in the United States. 

I think our system works pretty well, as long as you’re within the NHS and hospital system. If you wanted to still practice, but practice privately, you would still have to find somebody who’d be prepared to conduct appraisals and so on outside of the NHS. It’s an interesting area. 

For myself, I still feel competent. Patients seem to like me. That’s an objective assessment from this 360-degree process, in which patients, and colleagues too, reflected very positively indeed on my approach to the delivery of care. I’m still publishing, I go to meetings, I chair things, bits and bobs. I’d say I’m a wee bit unusual in terms of still having a strong academic profile in doing stuff.

It’s an interesting question. Richard Doll, one of the world’s great epidemiologists who, of course, was the dominant discoverer of the link between smoking and lung cancer, was attending seminars, sitting in the front row, and coming into university 3 days a week at age 90, continuing to be contributory with his extraordinarily sharp intellect and vast, vast experience.

When I think of experience, all young cancer doctors are now immunologists. When I was a young doctor, I was a clinical pharmacologist. There are many lessons and tricks that I learned which I do need to pass on to the younger generation of today. What do you think? Should there be a mandatory retirement age? How do we best measure, assess, and revalidate elderly physicians and surgeons? How can those who choose to continue contributing best be supported in doing so? For the time being, as always, thanks for listening.
 

Dr. Kerr is professor, Nuffield Department of Clinical Laboratory Science, University of Oxford, and professor of cancer medicine, Oxford Cancer Centre, Oxford, United Kingdom. He has disclosed ties with Celleron Therapeutics and Oxford Cancer Biomarkers (board of directors); Afrox (charity trustee); GlaxoSmithKline and Bayer HealthCare Pharmaceuticals (consultant); and Genomic Health, Merck Serono, and Roche.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity

I’d like to pose a question: When should doctors retire? When, as practicing physicians or surgeons, do we become too old to deliver competent service? 

You will be amazed to hear, those of you who have listened to my videos before — and although it is a matter of public knowledge — that I’m 68. I know it’s impossible to imagine, due to this youthful appearance, visage, and so on, but I am. I’ve been a cancer doctor for 40 years; therefore, I need to think a little about retirement. 

There are two elements of this for me. I’m a university professor, and in Oxford we did vote, as a democracy of scholars, to have a mandatory retirement age around 68. This is so that we can bring new blood forward so that we can create the space to promote new professors, to bring youngsters in to make new ideas, and to get rid of us fusty old lot. 

The other argument would be, of course, that we are wise, we’re experienced, we are world-weary, and we’re successful — otherwise, we wouldn’t have lasted as academics as long. Nevertheless, we voted to do that. 

It’s possible to have a discussion with the university to extend this, and for those of us who are clinical academics, I have an honorary appointment as a consultant cancer physician in the hospital and my university professorial appointment, too.

I can extend it probably until I’m about 70. It feels like a nice, round number at which to retire — somewhat arbitrarily, one would admit. But does that feel right? 

In the United States, more than 25% of the physician workforce is over the age of 65. There are many studies showing that there is a 20% cognitive decline for most individuals between the ages of 45 and 65.

Are we as capable as an elderly workforce as once we were? Clearly, it’s hardly individualistic. It depends on each of our own health status, where we started from, and so on, but are there any general rules that we can apply? I think these are starting to creep in around the sense of revalidation.

In the United Kingdom, we have a General Medical Council (GMC). I need to have a license to practice from the GMC and a sense of fitness to practice. I have annual appraisals within the hospital system, in which I explore delivery of care, how I’m doing as a mentor, am I reaching the milestones I’ve set in terms of academic achievements, and so on.

This is a peer-to-peer process. We have senior physicians — people like myself — who act as appraisers to support our colleagues and to maintain that sense of fitness to practice. Every 5 years, I’m revalidated by the GMC. They take account of the annual appraisals and a report made by the senior physician within my hospital network who’s a so-called designated person.

These two elements come together with patient feedback, with 360-degree feedback from colleagues, and so on. This is quite a firmly regulated system that I think works. Our mandatory retirement age of 65 has gone. That was phased out by the government. In fact, our NHS is making an effort to retain older elders in the workforce.

They see the benefits of mentorship, experience, leadership, and networks. At a time when the majority of NHS are actively seeking to retire when 65, the NHS is trying to retain and pull back those of us who have been around for that wee bit longer and who still feel committed to doing it. 

I’d be really interested to see what you think. There’s variation from country to country. I know that, in Australia, they’re talking about annual appraisals of doctors over the age of 70. I’d be very interested to hear what you think is likely to happen in the United States. 

I think our system works pretty well, as long as you’re within the NHS and hospital system. If you wanted to still practice, but practice privately, you would still have to find somebody who’d be prepared to conduct appraisals and so on outside of the NHS. It’s an interesting area. 

For myself, I still feel competent. Patients seem to like me. That’s an objective assessment by this 360-degree thing in which patients reflected very positively, indeed, in my approach to the delivery of the care and so on, as did colleagues. I’m still publishing, I go to meetings, I cheer things, bits and bobs. I’d say I’m a wee bit unusual in terms of still having a strong academic profile in doing stuff.

It’s an interesting question. Richard Doll, one of the world’s great epidemiologists who, of course, was the dominant discoverer of the link between smoking and lung cancer, was attending seminars, sitting in the front row, and coming into university 3 days a week at age 90, continuing to be contributory with his extraordinarily sharp intellect and vast, vast experience.

When I think of experience, all young cancer doctors are now immunologists. When I was a young doctor, I was a clinical pharmacologist. There are many lessons and tricks that I learned which I do need to pass on to the younger generation of today. What do you think? Should there be a mandatory retirement age? How do we best measure, assess, and revalidate elderly physicians and surgeons? How can we continue to contribute to those who choose to do so? For the time being, as always, thanks for listening.
 

Dr. Kerr is professor, Nuffield Department of Clinical Laboratory Science, University of Oxford, and professor of cancer medicine, Oxford Cancer Centre, Oxford, United Kingdom. He has disclosed ties with Celleron Therapeutics, Oxford Cancer Biomarkers (Board of Directors); Afrox (charity; Trustee); GlaxoSmithKline and Bayer HealthCare Pharmaceuticals (Consultant), Genomic Health; Merck Serono, and Roche.

A version of this article appeared on Medscape.com.

This transcript has been edited for clarity

I’d like to pose a question: When should doctors retire? When, as practicing physicians or surgeons, do we become too old to deliver competent service? 

You will be amazed to hear, those of you who have listened to my videos before — and although it is a matter of public knowledge — that I’m 68. I know it’s impossible to imagine, due to this youthful appearance, visage, and so on, but I am. I’ve been a cancer doctor for 40 years; therefore, I need to think a little about retirement. 

There are two elements of this for me. I’m a university professor, and in Oxford we did vote, as a democracy of scholars, to have a mandatory retirement age around 68. This is so that we can bring new blood forward so that we can create the space to promote new professors, to bring youngsters in to make new ideas, and to get rid of us fusty old lot. 

The other argument would be, of course, that we are wise, we’re experienced, we are world-weary, and we’re successful — otherwise, we wouldn’t have lasted as academics as long. Nevertheless, we voted to do that. 

It’s possible to have a discussion with the university to extend this, and for those of us who are clinical academics, I have an honorary appointment as a consultant cancer physician in the hospital and my university professorial appointment, too.

I can extend it probably until I’m about 70. It feels like a nice, round number at which to retire — somewhat arbitrarily, one would admit. But does that feel right? 

In the United States, more than 25% of the physician workforce is over the age of 65. There are many studies showing that there is a 20% cognitive decline for most individuals between the ages of 45 and 65.

Are we as capable as an elderly workforce as once we were? Clearly, it’s hardly individualistic. It depends on each of our own health status, where we started from, and so on, but are there any general rules that we can apply? I think these are starting to creep in around the sense of revalidation.

In the United Kingdom, we have a General Medical Council (GMC). I need to have a license to practice from the GMC and a sense of fitness to practice. I have annual appraisals within the hospital system, in which I explore delivery of care, how I’m doing as a mentor, am I reaching the milestones I’ve set in terms of academic achievements, and so on.

This is a peer-to-peer process. We have senior physicians — people like myself — who act as appraisers to support our colleagues and to maintain that sense of fitness to practice. Every 5 years, I’m revalidated by the GMC. They take account of the annual appraisals and a report made by the senior physician within my hospital network who’s a so-called designated person.

These two elements come together with patient feedback, with 360-degree feedback from colleagues, and so on. This is quite a firmly regulated system that I think works. Our mandatory retirement age of 65 has gone. That was phased out by the government. In fact, our NHS is making an effort to retain older elders in the workforce.

They see the benefits of mentorship, experience, leadership, and networks. At a time when the majority of NHS are actively seeking to retire when 65, the NHS is trying to retain and pull back those of us who have been around for that wee bit longer and who still feel committed to doing it. 

I’d be really interested to see what you think. There’s variation from country to country. I know that, in Australia, they’re talking about annual appraisals of doctors over the age of 70. I’d be very interested to hear what you think is likely to happen in the United States. 

I think our system works pretty well, as long as you’re within the NHS and hospital system. If you wanted to still practice, but practice privately, you would still have to find somebody who’d be prepared to conduct appraisals and so on outside of the NHS. It’s an interesting area. 

For myself, I still feel competent. Patients seem to like me. That’s an objective assessment by this 360-degree thing in which patients reflected very positively, indeed, in my approach to the delivery of the care and so on, as did colleagues. I’m still publishing, I go to meetings, I cheer things, bits and bobs. I’d say I’m a wee bit unusual in terms of still having a strong academic profile in doing stuff.

It’s an interesting question. Richard Doll, one of the world’s great epidemiologists and, of course, a co-discoverer of the link between smoking and lung cancer, was still attending seminars, sitting in the front row, and coming into the university 3 days a week at age 90, continuing to contribute his extraordinarily sharp intellect and vast experience.

And when I think of experience: all young cancer doctors are now immunologists, whereas when I was a young doctor, I was a clinical pharmacologist, and there are many lessons and tricks I learned that I need to pass on to today’s younger generation. What do you think? Should there be a mandatory retirement age? How do we best measure, assess, and revalidate older physicians and surgeons? How do we enable those who choose to keep contributing to do so? For the time being, as always, thanks for listening.
 

Dr. Kerr is professor, Nuffield Department of Clinical Laboratory Science, University of Oxford, and professor of cancer medicine, Oxford Cancer Centre, Oxford, United Kingdom. He has disclosed ties with Celleron Therapeutics and Oxford Cancer Biomarkers (Board of Directors); Afrox (charity; Trustee); GlaxoSmithKline and Bayer HealthCare Pharmaceuticals (Consultant); Genomic Health; Merck Serono; and Roche.

A version of this article appeared on Medscape.com.


Fecal Immunochemical Test Performance for CRC Screening Varies Widely

Article Type
Changed
Mon, 10/07/2024 - 02:24

Although considered a single class, fecal immunochemical tests (FITs) vary in their ability to detect advanced colorectal neoplasia (ACN) and should not be considered interchangeable, new research suggests.

In a comparative performance analysis of five commonly used FITs for colorectal cancer (CRC) screening, researchers found statistically significant differences in positivity rates, sensitivity, and specificity, as well as important differences in rates of unusable tests.

“Our findings have practical importance for FIT-based screening programs as these differences affect the need for repeated FIT, the yield of ACN detection, and the number of diagnostic colonoscopies that would be required to follow-up on abnormal findings,” wrote the researchers, led by Barcey T. Levy, MD, PhD, with the University of Iowa, Iowa City.

The study was published online in Annals of Internal Medicine.
 

Wide Variation Found

Despite widespread use of FITs for CRC screening, there are limited data to help guide test selection. Understanding the comparative performance of different FITs is “crucial” for a successful FIT-based screening program, the researchers wrote.

Dr. Levy and colleagues directly compared the performance of five commercially available FITs — including four qualitative tests (Hemoccult ICT, Hemosure iFOB, OC-Light S FIT, and QuickVue iFOB) and one quantitative test (OC-Auto FIT) — using colonoscopy as the reference standard.

Participants included a diverse group of 3761 adults (mean age, 62 years; 63% women). Each participant was given all five tests and completed them using the same stool sample, then sent the tests by first-class mail to a central location, where a trained professional analyzed them on the day of receipt.

The primary outcome was test performance (sensitivity and specificity) for ACN, defined as advanced polyps or CRC.

A total of 320 participants (8.5%) were found to have ACN based on colonoscopy results, including nine with CRC (0.2%) — rates that are similar to those found in other studies.

The sensitivity for detecting ACN ranged from 10.1% (Hemoccult ICT) to 36.7% (OC-Light S FIT), and specificity varied from 85.5% (OC-Light S FIT) to 96.6% (Hemoccult ICT).

“Given the variation in FIT cutoffs reported by manufacturers, it is not surprising that tests with lower cutoffs (such as OC-Light S FIT) had higher sensitivity than tests with higher cutoffs (such as Hemoccult ICT),” Dr. Levy and colleagues wrote.

Test positivity rates varied fourfold across FITs, from 3.9% for Hemoccult ICT to 16.4% for OC-Light S FIT. 

The rates of tests deemed unevaluable (because of factors such as indeterminate results or user error) ranged from 0.2% for OC-Auto FIT to 2.5% for QuickVue iFOB.

The highest positive predictive value (PPV) was observed with OC-Auto FIT (28.9%) and the lowest with Hemosure iFOB (18.2%). The negative predictive value was similar across tests, ranging from 92.2% to 93.3%, indicating consistent performance in ruling out disease.
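To make these trade-offs concrete, here is a minimal sketch in Python that applies Bayes’ rule to the published point estimates at the study’s 8.5% ACN prevalence. It is illustrative only: it ignores the confidence intervals, treats the estimates as fixed, and uses the two extreme tests, so its outputs approximate rather than reproduce the study’s empirical predictive values.

    # Back-of-envelope check of how sensitivity and specificity translate
    # into predictive values at the study's ACN prevalence, via Bayes' rule.
    # Point estimates are taken from the article; ignoring their CIs is a
    # simplifying assumption.

    def predictive_values(sens, spec, prev):
        """Return (PPV, NPV) for a binary test at a given disease prevalence."""
        ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
        npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
        return ppv, npv

    prevalence = 0.085  # 320 of 3761 participants had ACN on colonoscopy
    for name, sens, spec in [
        ("OC-Light S FIT", 0.367, 0.855),  # most sensitive, least specific
        ("Hemoccult ICT", 0.101, 0.966),   # least sensitive, most specific
    ]:
        ppv, npv = predictive_values(sens, spec, prevalence)
        print(f"{name}: PPV = {ppv:.1%}, NPV = {npv:.1%}")

Run as written, this prints a PPV near 19%-22% and an NPV near 92%-94% for the two tests, consistent with the narrow NPV band reported above.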

The study also identified significant differences in test sensitivity based on factors such as the location of neoplasia (higher sensitivity for distal lesions) and patient characteristics (higher sensitivity in people with higher body mass index and lower income).

Dr. Levy and colleagues said their findings have implications both in terms of clinical benefits and cost-effectiveness of CRC screening using FITs.

“Tests with lower sensitivity will miss more patients with CRC and advanced polyps, and tests with higher sensitivity and lower PPV will require more colonoscopies to detect patients with actionable findings,” they wrote.

‘Jaw-Dropping’ Results

The sensitivity results are “jaw-dropping,” Robert Smith, PhD, senior vice-president for cancer screening at the American Cancer Society, said in an interview. “A patient should have at least a 50/50 chance of having their colorectal cancer detected with a stool test at the time of testing.”

“What these numbers show is that the level that the manufacturers believe their test is performing is not reproduced,” Dr. Smith added.

This study adds to “concerns that have been raised about the inherent limitations and the performance of these tests that have been cleared for use and that are supposed to be lifesaving,” he said.

Clearance by the US Food and Drug Administration should mean that there’s essentially “no risk to using the test in terms of the test itself being harmful,” Dr. Smith said. But that’s not the case with FITs “because it’s harmful if you have cancer and your test doesn’t find it.”

Among the study’s limitations, Dr. Levy and colleagues noted that they did not evaluate the “programmatic” sensitivity of repeated FIT testing every 1-2 years, as generally recommended in screening guidelines; the sensitivity of a single FIT may therefore be lower than that of a repeated FIT program. Variability in how participants collected their FIT samples might also have affected the results.

The study had no commercial funding. Disclosures for authors are available with the original article. Dr. Smith had no relevant disclosures.
 

A version of this article appeared on Medscape.com.


Hidden in Plain Sight: The Growing Epidemic of Ultraprocessed Food Addiction

Article Type
Changed
Thu, 09/19/2024 - 15:35

Over the past few decades, researchers have developed a compelling case against ultraprocessed foods and beverages, linking them to several chronic diseases and adverse health conditions. Yet, even as this evidence mounted, these food items have become increasingly prominent in diets globally. 

Now, recent studies are unlocking why cutting back on ultraprocessed foods can be so challenging. In their ability to fuel intense cravings, loss of control, and even withdrawal symptoms, ultraprocessed foods appear as capable of triggering addiction as traditional culprits like tobacco and alcohol. 

This has driven efforts to better understand the addictive nature of these foods and identify strategies for combating it. 
 

The Key Role of the Food Industry

Some foods are more likely to trigger addictions than others. For instance, in our studies, participants frequently mention chocolate, pizza, French fries, potato chips, and soda as some of the most addictive foods. What these foods all share is an ability to deliver high doses of refined carbohydrates, fat, or salt at levels exceeding those found in natural foods (eg, fruits, vegetables, beans).

Furthermore, ultraprocessed foods are industrially mass-produced in a process that relies on the heavy use of flavor enhancers and additives, as well as preservatives and packaging that make them shelf-stable. This has flooded our food supply with cheap, accessible, hyperrewarding foods that our brains are not well equipped to resist.

To add to these already substantial effects, the food industry often employs strategies reminiscent of Big Tobacco. They engineer foods to hit our “bliss points,” maximizing craving and fostering brand loyalty from a young age. This product engineering, coupled with aggressive marketing, makes these foods both attractive and seemingly ubiquitous. 
 

How Many People Are Affected?

Addiction to ultraprocessed food is more common than you might think. According to the Yale Food Addiction Scale — a tool that uses the same criteria for diagnosing substance use disorders to assess ultraprocessed food addiction (UPFA) — about 14% of adults and 12% of children show clinically significant signs of addiction to such foods. This is quite similar to addiction rates among adults for legal substances like alcohol and tobacco. 

Research has shown that behaviors and brain mechanisms contributing to addictive disorders, such as cravings and impulsivity, also apply to UPFA. 

Many more people than those who meet the criteria for UPFA are influenced by the addictive properties of these foods. Picture a teenager craving a sugary drink after school, a child needing the morning cereal fix, or adults reaching for candy and fast food; these scenarios illustrate how addictive ultraprocessed foods permeate our daily lives. 

From a public health standpoint, this comes at a significant cost. Even experiencing one or two symptoms of UPFA, such as intense cravings or a feeling of loss of control over intake, can lead to consuming too many calories, sugar, fat, and sodium in a way that puts health at risk.
 

Clinical Implications

Numerous studies have found that individuals who exhibit UPFA have more severe mental and physical health challenges. For example, UPFA is associated with higher rates of diet-related diseases (like type 2 diabetes), greater overall mental health issues, and generally poorer outcomes in weight loss treatments.

Despite the growing understanding of UPFA’s relevance in clinical settings, research is still limited on how to best treat, manage, or prevent it. Most of the existing work has focused on investigating whether UPFA is indeed a real condition, with efforts to create clinical guidelines only just beginning.

Of note, UPFA isn’t officially recognized as a diagnosis — yet. If it were, it could spark much more research into how to handle it clinically.

There is some debate about whether we really need this new diagnosis, given that eating disorders are already recognized. However, the statistics tell a different story: Around 14% of people might have UPFA compared with about 1% for binge-type eating disorders. This suggests that many individuals with problematic eating habits are currently flying under the radar with our existing diagnostic categories. 

What’s even more concerning is that these individuals often suffer significant problems and exhibit distinct brain differences, even if they do not neatly fit into an existing eating disorder diagnosis. Officially recognizing UPFA could open up new avenues for support and lead to better treatments aimed at reducing compulsive eating patterns.

Treatment Options

Treatment options for UPFA are still being explored. Initial evidence suggests that medications used for treating substance addiction, such as naltrexone and bupropion, might help with highly processed food addiction as well. Newer drugs, like glucagon-like peptide-1 receptor agonists, which appear to curb food cravings and manage addictive behaviors, also look promising.

Psychosocial approaches can also be used to address UPFA. Strategies include:

  • Helping individuals become more aware of their triggers for addictive patterns of intake. This often involves identifying certain types of food (eg, potato chips, candy), specific places or times of day (eg, sitting on the couch at night while watching TV), and particular emotional states (eg, anger, loneliness, boredom, sadness). Increasing awareness of personal triggers can help people minimize their exposure to these and develop coping strategies when they do arise.
  • Many people use ultraprocessed foods to cope with challenging emotions. Helping individuals develop healthier strategies to regulate their emotions can be key. This may include seeking out social support, journaling, going for a walk, or practicing mindfulness.
  • UPFA can be associated with erratic and inconsistent eating patterns. Stabilizing eating habits by consuming regular meals composed of more minimally processed foods (eg, vegetables, fruits, high-quality protein, beans) can help heal the body and reduce vulnerability to ultraprocessed food triggers.
  • Many people with UPFA have other existing mental health conditions, including mood disorders, anxiety, substance use disorders, or trauma-related disorders. Addressing these co-occurring mental health conditions can help reduce reliance on ultraprocessed foods.

Public-policy interventions may also help safeguard vulnerable populations from developing UPFA. For instance, support exists for policies to protect children from cigarette marketing and to put clear addiction warning labels on cigarette packages. A similar approach could be applied to reduce the harms associated with ultraprocessed foods, particularly for children.

Combating this growing problem requires treating ultraprocessed foods like other addictive substances. By identifying the threat posed by these common food items, we can not only help patients with UPFA, but also potentially stave off the development of several diet-related conditions.
 

Dr. Gearhardt, professor of psychology, University of Michigan, Ann Arbor, has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Bariatric Surgery and Weight Loss Make Brain Say Meh to Sweets

Article Type
Changed
Thu, 09/19/2024 - 14:17

 

TOPLINE:

A preference for less sweet beverages after bariatric surgery and weight loss appears to stem from a lower brain reward response to sweet taste without affecting the sensory regions.
 

METHODOLOGY:

  • Previous studies have suggested that individuals undergoing bariatric surgery show reduced preference for sweet-tasting food post-surgery, but the mechanisms behind these changes remain unclear.
  • This observational cohort study examined the neural processing of sweet taste in the reward regions of the brain before and after bariatric surgery in 24 women with obesity (mean body mass index [BMI], 47) and in 21 control participants ranging from normal weight to overweight (mean BMI, 23.5).
  • Participants (mean age, about 43 years; 75%-81% White) underwent sucrose taste testing and functional MRI (fMRI) comparing the brain’s responses to water versus sucrose solutions of 0.10 M and 0.40 M, the latter akin to sugar-sweetened beverages such as Coca-Cola (~0.32 M) and Mountain Dew (~0.35 M); a quick molarity check follows this list.
  • In the bariatric surgery group, participants underwent fMRI 1-117 days before surgery, and 21 participants who lost about 20% of their weight after the surgery underwent a follow-up fMRI roughly 3-4 months later.
  • The researchers analyzed the brain’s reward response using a composite activation of several reward system regions (the ventral tegmental area, ventral striatum, and orbitofrontal cortex) and of sensory regions (the primary somatosensory cortex and primary insula taste cortex).
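As an aside on the concentration figures in this list, a short back-of-envelope calculation shows where a value like ~0.32 M comes from. The 39 g of sugar per 355 mL used below is an assumption based on a typical 12-oz cola nutrition label, not a number from the study, and treating all of the sugar as sucrose (molar mass ~342.3 g/mol) is a simplification.

    # Sanity check of the "~0.32 M" figure for a sugar-sweetened beverage.
    # Assumes ~39 g sugar per 355 mL (typical 12-oz cola label; an assumption,
    # not a study value) and treats all sugar as sucrose.

    SUCROSE_MOLAR_MASS = 342.3  # g/mol

    def molarity(grams_sugar: float, volume_ml: float) -> float:
        """Molar concentration (mol/L) of sucrose dissolved in a given volume."""
        moles = grams_sugar / SUCROSE_MOLAR_MASS
        liters = volume_ml / 1000
        return moles / liters

    print(f"~{molarity(39, 355):.2f} M")  # prints ~0.32 M, matching the article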

TAKEAWAY:

  • The perceived intensity of sweetness was comparable between the control group and the bariatric surgery group both before and after surgery.
  • In the bariatric surgery group, the average preferred sweet concentration decreased from 0.52 M before surgery to 0.29 M after surgery (P = .008).
  • The fMRI analysis indicated that, before surgery, women in the bariatric surgery group showed a trend toward a higher reward response to 0.4 M sucrose than control participants.
  • The activation of the reward region in response to 0.4 M sucrose (but not 0.1 M) declined in the bariatric surgery group after surgery (P = .042).

IN PRACTICE:

“Our findings suggest that both the brain reward response to and subjective liking of an innately desirable taste decline following bariatric surgery,” the authors wrote.
 

SOURCE:

This study was led by Jonathan Alessi, Indiana University School of Medicine, Indianapolis, and published online in Obesity.
 

LIMITATIONS:

The study sample size was relatively small, and the duration of follow-up was short, with recruitment curtailed by the COVID-19 pandemic. The study did not assess consumption of sugar or sweetened food, which could have provided further insight into post-surgery changes in dietary behavior. Participants were women only, and the findings might have differed had men been recruited.
 

DISCLOSURES:

This study was funded by the American Diabetes Association, Indiana Clinical and Translational Sciences Institute, and National Institute on Alcohol Abuse and Alcoholism. Three authors reported financial relationships with some pharmaceutical companies outside of this study.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Revolutionizing Headache Medicine: The Role of Artificial Intelligence

Article Type
Changed
Wed, 09/25/2024 - 16:36

As we move further into the 21st century, technology continues to revolutionize various facets of our lives. Healthcare is a prime example. Advances in technology have dramatically reshaped the way we develop medications, diagnose diseases, and enhance patient care. The rise of artificial intelligence (AI) and the widespread adoption of digital health technologies have marked a significant milestone in improving the quality of care. AI, with its ability to leverage algorithms, deep learning, and machine learning to process data, make decisions, and perform tasks autonomously, is becoming an integral part of modern society. It is embedded in various technologies that we rely on daily, from smartphones and smart home devices to content recommendations on streaming services and social media platforms.

 

In healthcare, AI has applications in numerous fields, radiology among them. It streamlines processes such as organizing patient appointments, optimizing radiation protocols for safety and efficiency, and enhancing documentation through advanced image analysis. AI plays an integral role in imaging tasks such as image enhancement, lesion detection, and precise measurement, and in difficult-to-interpret studies, such as some mammography images, it can be a crucial aid to the radiologist. AI has also significantly improved remote patient monitoring, enabling healthcare professionals to assess patient conditions without in-person visits; remote monitoring gained prominence during the COVID-19 pandemic and remains a valuable tool in postpandemic care. Study results have highlighted that AI-driven ambient dictation tools increase provider engagement with patients during consultations while reducing the time spent documenting in electronic health records.

Like many other medical specialties, headache medicine is also using AI, most prominently in models and engines that assist with headache diagnosis. A noteworthy example is the online, computer-based diagnostic engine (CDE) developed by Rapoport et al, called BonTriage. This tool is designed to diagnose headaches by employing a rule set based on the International Classification of Headache Disorders-3 (ICHD-3) criteria for primary headache disorders while also evaluating secondary headaches and medication overuse headache. By leveraging machine learning, the CDE can streamline the diagnostic process, reducing the number of questions needed to reach a diagnosis and making the experience more efficient. The resulting information can be printed as a PDF file and taken by the patient to a healthcare professional for further discussion, fostering a more accurate, fluid, and conversational consultation.
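To give a sense of what a rule set of this kind looks like in code, here is a deliberately simplified Python sketch. It is not BonTriage’s actual engine: it encodes only the core ICHD-3 criteria for migraine without aura (at least 5 attacks; attacks lasting 4-72 hours; at least 2 of 4 pain characteristics; at least 1 of 2 associated-symptom criteria), and the function and variable names are invented for illustration.

    # Simplified rule-based check in the spirit of a diagnostic engine.
    # Encodes ICHD-3 migraine-without-aura criteria A-D in reduced form.

    def meets_migraine_rules(n_attacks, duration_h, pain_features, associated):
        """pain_features and associated are sets of reported characteristics."""
        PAIN = {"unilateral", "pulsating", "moderate_or_severe",
                "worse_with_activity"}
        ASSOC = {"nausea_or_vomiting", "photophobia_and_phonophobia"}
        return (n_attacks >= 5                      # criterion A
                and 4 <= duration_h <= 72           # criterion B
                and len(pain_features & PAIN) >= 2  # criterion C
                and len(associated & ASSOC) >= 1)   # criterion D

    print(meets_migraine_rules(6, 24, {"unilateral", "pulsating"},
                               {"photophobia_and_phonophobia"}))  # True

A real engine layers many such rules and also screens for secondary headaches; per the article, machine learning is what lets the CDE cut down the number of questions needed to reach a diagnosis.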

 

A study was conducted to evaluate the accuracy of the CDE. Participants were randomly assigned to 1 of 2 sequences: (1) using the CDE followed by a structured standard interview with a headache specialist using the same ICHD-3 criteria or (2) starting with the structured standard interview followed by the CDE. The results demonstrated nearly perfect agreement in diagnosing migraine and probable migraine between the CDE and structured standard interview (κ = 0.82, 95% CI: 0.74, 0.90). The CDE demonstrated a diagnostic accuracy of 91.6% (95% CI: 86.9%, 95.0%), a sensitivity rate of 89.0% (95% CI: 82.5%, 93.7%), and a specificity rate of 97.0% (95% CI: 89.5%, 99.6%).
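For readers less familiar with these agreement statistics, the following sketch shows how overall accuracy, sensitivity, specificity, and Cohen’s kappa all fall out of a single 2 x 2 table. The counts below are hypothetical, chosen only so that the outputs land near the figures reported above; the study’s raw table is not given in this article.

    # Agreement statistics from a 2x2 table (CDE result vs specialist interview).
    # Counts are invented for illustration.

    def diagnostic_stats(tp, fp, fn, tn):
        """Accuracy, sensitivity, specificity, and Cohen's kappa."""
        n = tp + fp + fn + tn
        accuracy = (tp + tn) / n
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        # Chance-expected agreement, needed for kappa
        p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
        kappa = (accuracy - p_e) / (1 - p_e)
        return accuracy, sensitivity, specificity, kappa

    # Hypothetical: 130 of 200 participants migraine-positive by interview
    acc, sens, spec, kappa = diagnostic_stats(tp=116, fp=2, fn=14, tn=68)
    print(f"accuracy={acc:.1%} sensitivity={sens:.1%} "
          f"specificity={spec:.1%} kappa={kappa:.2f}")
    # -> accuracy=92.0% sensitivity=89.2% specificity=97.1% kappa=0.83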

 

A diagnostic engine such as this can save clinicians documentation time and free more of the visit for discussion with the patient. For instance, a patient can bring the CDE printout to an appointment; it provides a detailed history plus information about social and psychological issues, a list of medications taken, and results of previous testing. The CDE was originally designed to help patients reach specialist-level assessment amid a nationwide shortage of headache specialists: roughly 45 million patients in the United States are seeking treatment for headaches, while there are only around 550 certified headache specialists. The printout can help a patient obtain a consultation from a clinician quickly and start evaluation and treatment earlier. This expert online consultation is currently free of charge.

 

Kwon et al developed a machine learning-based model designed to automatically classify headache disorders using data from a questionnaire. Their model could predict diagnoses for conditions such as migraine, tension-type headache, trigeminal autonomic cephalalgias, epicranial headache, and thunderclap headache. The model was trained on data from 2162 patients, all diagnosed by headache specialists, and achieved an overall accuracy of 81%, with a sensitivity of 88% and a specificity of 95% for diagnosing migraine. However, the model’s performance was less robust for the other headache disorders.

 

Katsuki et al developed an AI model to help nonspecialists accurately diagnose headaches. The model analyzed 17 variables and was trained on data from 2800 patients, with additional testing and refinement using data from another 200 patients. To evaluate its effectiveness, 2 groups of non-headache specialists each assessed 50 patients: 1 group relied solely on their own expertise, while the other used the AI model. The group without AI assistance achieved an overall accuracy of 46% (κ = 0.21), while the group using the AI model reached an overall accuracy of 83.2% (κ = 0.68), a significant improvement.

 

Building on their work with AI for diagnosing headaches, Katsuki et al conducted a study using a smartphone application that tracked user-reported headache events alongside local weather data. The AI model revealed that lower barometric pressure, higher humidity, and increased rainfall were linked to the onset of headache attacks. The application also identified specific weather patterns as triggers, such as a drop in barometric pressure occurring 6 hours before headache onset. Applying AI to weather monitoring could be crucial, especially given concerns that the rising frequency of severe weather events due to climate change may be exacerbating the severity and burden of migraine. Additionally, recent post hoc analyses of fremanezumab clinical trials have provided further evidence that weather changes can trigger headaches.
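
A minimal sketch of this sort of trigger analysis is shown below, assuming hourly app logs; the column names, values, and 2 hPa threshold are invented, and only the 6-hour lookback comes from the finding described above.

```python
# A minimal sketch, assuming hourly app logs with these column names; the
# 6-hour lookback mirrors the finding above, while the 2 hPa threshold and
# all values are invented for illustration.
import pandas as pd

log = pd.DataFrame({
    "timestamp": pd.date_range("2024-05-01", periods=24, freq="h"),
    "pressure_hpa": [1013.0, 1012.5, 1012.0, 1011.0, 1009.5, 1008.0,
                     1007.5, 1007.0, 1007.0, 1007.5, 1008.0, 1009.0,
                     1010.0, 1010.5, 1011.0, 1011.0, 1011.5, 1012.0,
                     1012.0, 1012.5, 1013.0, 1013.0, 1013.5, 1014.0],
    "headache_onset": [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0,
                       0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
})

# Pressure change over the 6 hours preceding each reading
log["drop_6h"] = log["pressure_hpa"] - log["pressure_hpa"].shift(6)

# Attacks preceded by a meaningful pressure drop (threshold is an assumption)
onsets = log[log["headache_onset"] == 1]
print(onsets.loc[onsets["drop_6h"] <= -2.0, ["timestamp", "drop_6h"]])
```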

 

Rapoport and colleagues have also developed an application called Migraine Mentor, which tracks headaches, triggers, health data, and response to medication on a smartphone. The patient spends 3 minutes a day answering a few questions about their day, whether they had a headache, and whether they took any medication. After 1 or 2 months, Migraine Mentor generates a detailed report of the data and current trends, which is sent to the patient and can be shared with the clinician. The application also reminds patients when to document data and take medication.

 

Although the use of AI in headache medicine appears promising, caution must be exercised to ensure that accurate results and information are disseminated. One rapidly expanding application of AI is the widely popular ChatGPT, a type of large language model (LLM) whose name refers to the generative pre-trained transformer architecture. An LLM is a deep learning algorithm designed to recognize, translate, predict, summarize, and generate text in response to a given prompt. Such a model is trained on an extensive dataset that includes a diverse array of books, articles, and websites, exposing it to varied language structures and styles. This training enables ChatGPT to generate responses that closely mimic human communication. LLMs are increasingly used in medicine to assist with generating patient documentation and educational materials.
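
As a concrete (and deliberately cautionary) illustration, the sketch below prompts a small open-source LLM through the Hugging Face transformers pipeline; the model choice and prompt are arbitrary, and nothing in the pipeline checks the output against facts, which is precisely the failure mode described next.

```python
# A minimal sketch of prompting a small open-source LLM via the Hugging Face
# transformers pipeline (requires `pip install transformers torch`; the
# model downloads on first use). Model choice and prompt are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Describe the epidemiology of migraine in emperor penguins."
out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)

# The output will read fluently, but nothing here verifies factual accuracy.
print(out[0]["generated_text"])
```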

 

However, Dr. Fred Cohen published a perspective piece detailing how LLMs such as ChatGPT can produce misleading and inaccurate answers. In his example, he tasked ChatGPT with describing the epidemiology of migraine in penguins; the model generated a well-written and highly believable manuscript titled “Migraine Under the Ice: Understanding Headaches in Antarctica's Feathered Friends.” The manuscript claimed that migraine is more prevalent in male penguins than in females, with a peak age of onset between 4 and 5 years, and that emperor and king penguins are more susceptible to migraine than other penguin species. The paper was entirely fictitious (no studies on migraine in penguins have been published to date), exemplifying that these models can produce nonfactual material.

 

For years, technological advancements have been reshaping many aspects of life, and medicine is no exception. AI has been successfully applied to streamline medical documentation, develop new drug targets, and deepen our understanding of various diseases. Headache medicine has now joined this trend: recent developments show significant promise, with AI aiding in the diagnosis of migraine and other headache disorders, and AI models have even been used to identify potential drug targets for migraine treatment. Although there are still limitations to overcome, the future of AI in headache medicine appears bright.

 

If you would like to read more about Dr. Cohen’s work on AI and migraine, please visit fredcohenmd.com or TikTok @fredcohenmd. 

Author and Disclosure Information

Fred Cohen, MD,1,2 Alan Rapoport, MD3

 

1Department of Neurology, Mount Sinai Hospital, Icahn School of Medicine at Mount Sinai
2Department of Medicine, Mount Sinai Hospital, Icahn School of Medicine at Mount Sinai

3Department of Neurology, UCLA School of Medicine, Los Angeles

 

Disclosures:
Fred Cohen is a section editor for Current Pain and Headache Reports and has received honoraria from Springer Nature. He also has received honoraria from Medlink Neurology.

 

Alan Rapoport is the editor-in-chief of Neurology Reviews® and a co-founder with Dr Cowan and Dr Blyth of BonTriage.

 

Corresponding Author:

Fred Cohen, MD

fredcohenmd@gmail.com


Guidance Will Aid Pediatric to Adult Diabetes Care Transfer

Article Type
Changed
Thu, 09/19/2024 - 13:09

A new consensus statement in development will aim to advise on best practices for navigating the transition of youth with diabetes from pediatric to adult diabetes care, despite limited data.

Expected to be released in early 2025, the statement will be a joint effort of the International Society for Pediatric and Adolescent Diabetes (ISPAD), the American Diabetes Association (ADA), and the European Association for the Study of Diabetes (EASD). It will provide guidance on advance transition planning, the care transfer itself, and follow-up. Writing panel members presented an update on the statement’s development on September 13, 2024, at EASD’s annual meeting.

The care transition period is critical because “adolescents and young adults are the least likely of all age groups to achieve glycemic targets for a variety of physiological and psychosocial reasons ... Up to 60% of these individuals don’t transfer successfully from pediatric to adult care, with declines in attendance, adverse medical outcomes, and mental health challenges,” Frank J. Snoek, PhD, emeritus professor of medical psychology at Amsterdam University Medical College, Amsterdam, the Netherlands, said in introductory remarks at the EASD session.

Session chair Carine De Beaufort, MD, a pediatric endocrinologist in Luxembourg City, Luxembourg, told this news organization, “We know it’s a continuing process, which is extremely important for young people to move into the world. The last formal recommendations were published in 2011, so we thought it was time for an update. What we realized in doing a systematic review and scoping review is that there are a lot of suggestions and ideas not really associated with robust data, and it’s not so easy to get good outcome indicators.”

The final statement will provide clinical guidance but, at the same time, “will be very transparent where more work is needed,” she said.

Sarah Lyons, MD, associate professor of pediatrics at Baylor College of Medicine, Houston, broadly outlined the document. Pre-transition planning will include readiness assessments for transfer from pediatric to adult care. The transfer phase will include measures to prevent gaps in care. And the post-transition phase will cover incorporation into adult care, with follow-up of the individual’s progress for a period.

Across the three stages, the document is expected to recommend a multidisciplinary team approach including psychological support, education and assessment, family and peer support, and care coordination. It will also address practical considerations for patients and professionals including costs and insurance.

It will build upon previous guidelines, including those of ADA and general guidance on transition from pediatric to adult healthcare from the American Academy of Pediatrics. “Ideally, this process will be continuous, comprehensive, coordinated, individualized, and developmentally appropriate,” Dr. Lyons said.
 

‘It Shouldn’t Be Just One Conversation ... It Needs to Be a Process’

Asked to comment, ISPAD president David Maahs, MD, the Lucile Salter Packard Professor of Pediatrics and Division Chief of Pediatric Endocrinology at Stanford University, Palo Alto, California, told this news organization, “It shouldn’t be just one conversation and one visit. It needs to be a process where you talk about the need to transition to adult endocrine care and prepare the person with diabetes and their family for that transition. One of the challenges is if they don’t make it to that first appointment and you assume that they did, and then that’s one place where there can be a gap that people fall through the two systems.”

Dr. Maahs added, “Another issue that’s a big problem in the United States is that children lose their parents’ insurance at 26 ... Some become uninsured after that, or their insurance plan isn’t accepted by the adult provider.”
 

‘There Does Not Appear to Be Sufficient Data’

Steven James, PhD, RN, of the University of the Sunshine Coast, Brisbane, Australia, presented the limited data upon which the statement will be based. A systematic literature review yielded just 26 intervention trials examining care transition for youth with type 1 or type 2 diabetes, including seven clinical trials, only one of which was randomized.

In that trial, in which 205 youth aged 17-20 years were randomized to a structured 18-month transition program with a transition coordinator, the intervention was associated with increased clinic attendance, improved satisfaction with care, and decreased diabetes-related distress, but the benefits weren’t maintained 12 months after completion of the intervention.

The other trials produced mixed results in terms of metabolic outcomes, with improvements in A1c and reductions in diabetic ketoacidosis and hospitalizations seen in some but not others. Healthcare outcomes and utilization, psychosocial outcomes, transition-related knowledge, self-care, and care satisfaction were only occasionally assessed, Dr. James noted.

“The field is lacking empirically supported interventions that can improve patient physiologic and psychologic outcomes, prevent poor clinic attendance, and improve patient satisfaction in medical care ... There still does not appear to be sufficient data related to the impact of transition readiness or transfer-to-adult care programs.”
 

‘Quite a Lot of Variation in Practices Worldwide’

Dr. James also presented results from two online surveys undertaken by the document writing panel. One recently published survey in Diabetes Research and Clinical Practice examined healthcare professionals’ experiences and perceptions around diabetes care transitions. Of 372 respondents (75% physicians) from around the world, including a third in low- and middle-income countries, fewer than half reported using transition readiness checklists (32.8%), providing written transition information (29.6%), or having a dedicated staff member to aid in the process (23.7%).

Similarly, few involved a psychologist (25.3%) or had a structured transition education program (22.6%). Even in high-income countries, fewer than half reported using these measures. Overall, a majority (91.9%) reported barriers to offering patients a positive transition experience.

“This shows to me that there is quite a lot of variation in practices worldwide ... There is a pressing need for an international consensus transition guideline,” Dr. James said.

Among the respondents’ beliefs, 53.8% thought that discussions about transitioning should be initiated at ages 15-17 years, while 27.8% thought 12-14 years was more appropriate. Large majorities favored use of a transition readiness checklist (93.6%), combined transition clinics (80.6%), having a dedicated transition coordinator/staff member available (85.8%), and involving a psychologist in the transition process (80.6%).

A similar survey of patients and carers will be published soon and will be included in the new statement’s evidence base, Dr. James said.

Dr. Maahs said that endorsement of the upcoming guidance from three different medical societies should help raise the profile of the issue. “Hopefully three professional organizations are able to speak with a united and louder voice than if it was just one group or one set of authors. I think this consensus statement can raise awareness, improve care, and help advocate for better care.”

Dr. De Beaufort, Dr. James, and Dr. Lyons had no disclosures. Dr. Snoek is an adviser/speaker for Abbott, Lilly, Novo Nordisk, and Sanofi and receives funding from Breakthrough T1D, Sanofi, and Novo Nordisk. Dr. Maahs has had research support from the National Institutes of Health, Breakthrough T1D, National Science Foundation, and the Helmsley Charitable Trust, and his institution has had research support from Medtronic, Dexcom, Insulet, Bigfoot Biomedical, Tandem, and Roche. He has consulted for Abbott, Aditxt, the Helmsley Charitable Trust, LifeScan, MannKind, Sanofi, Novo Nordisk, Eli Lilly, Medtronic, Insulet, Dompe, BioSpex, Provention Bio, Kriya, Enable Biosciences, and Bayer.
 

A version of this article first appeared on Medscape.com.


Triptans Trump Newer, More Expensive Meds for Acute Migraine

Article Type
Changed
Mon, 09/30/2024 - 08:58

Four triptans are more effective for acute migraine than newer, more expensive medications for this headache type, new research suggested.

Results of a large systematic review and meta-analysis showed that eletriptan, rizatriptan, sumatriptan, and zolmitriptan were more effective than lasmiditan, rimegepant, and ubrogepant, which investigators found were as effective as nonsteroidal anti-inflammatory drugs (NSAIDs).

International guidelines generally endorse NSAIDs as the first-line treatment for migraine and recommend triptans for moderate to severe episodes or when the response to NSAIDs is insufficient.

However, based on the study’s findings, these four triptans should be considered the treatment of choice for migraine, study investigator Andrea Cipriani, MD, PhD, professor of psychiatry at the University of Oxford in England and director of the Oxford Health Clinical Research Facility, told a press briefing.

The investigators added that these particular triptans should be “included in the WHO [World Health Organization] List of Essential Medicines to promote global accessibility and uniform standards of care.”

The study was published online in The BMJ.
 

Filling the Knowledge Gap

To date, almost all migraine studies have compared migraine drugs with placebo, so there’s a knowledge gap, said Dr. Cipriani. As a result, “there’s no clear consensus among experts and guidelines about which specific drug classes should be prescribed initially, and this is a clinical problem.”

The researchers pointed out that, in recent years, lasmiditan and gepants have been introduced as further treatment options, especially for patients in whom triptans are contraindicated because of their potential vasoconstrictive effects and higher risk for ischemic events.

The analysis included 137 double-blind, randomized, controlled trials that were primarily sponsored by the pharmaceutical industry. It included 89,445 adult outpatients with migraine (mean age, 40.3 years; 85.6% women).

Only drugs licensed for migraine or headache that are recommended in at least one country were included. Researchers divided these 17 drugs into five categories: antipyretics (paracetamol), ditans (lasmiditan), gepants (rimegepant and ubrogepant), NSAIDs (acetylsalicylic acid, celecoxib, diclofenac potassium, ibuprofen, naproxen sodium, and phenazone), and triptans (almotriptan, eletriptan, frovatriptan, naratriptan, rizatriptan, sumatriptan, and zolmitriptan).

The study’s primary outcomes were freedom from pain at 2 hours and at 2-24 hours, without the use of rescue drugs.

Investigators used sumatriptan as the reference intervention because it is the most commonly prescribed migraine drug and is included in the WHO Model Lists of Essential Medicines.

The study showed all active interventions were better than placebo for pain freedom at 2 hours; with the exception of paracetamol and naratriptan, all were better for sustained pain freedom from 2 to 24 hours.

When the active interventions were compared with each other, eletriptan outperformed the other drugs for achieving pain freedom at 2 hours, followed by rizatriptan, sumatriptan, and zolmitriptan (odds ratio [OR], 1.35-3.01). For sustained pain freedom up to 24 hours, the most efficacious interventions were eletriptan (OR, 1.41-2.73) and ibuprofen (OR, 3.16-4.82).
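
For readers unfamiliar with the statistic, the sketch below shows how a single pairwise odds ratio and its Wald 95% CI are computed from 2x2 counts; the published analysis was a full network meta-analysis rather than this simple two-arm comparison, and the counts are invented.

```python
# A minimal sketch of a single pairwise odds ratio with a Wald 95% CI from
# 2x2 counts; the actual study used a network meta-analysis, and these
# counts are invented for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b: outcome yes/no on drug A; c/d: outcome yes/no on drug B."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

# Hypothetical: 120/300 pain-free at 2 hours on one triptan vs 75/300 on
# a comparator
or_, lo, hi = odds_ratio_ci(a=120, b=180, c=75, d=225)
print(f"OR = {or_:.2f} (95% CI, {lo:.2f}-{hi:.2f})")
```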

As for secondary efficacy outcomes, in head-to-head comparisons, eletriptan was superior to nearly all other active interventions for pain relief at 2 hours and for the use of rescue drugs.

As for adverse events, dizziness was more commonly associated with lasmiditan, eletriptan, sumatriptan, and zolmitriptan, while fatigue and sedation occurred more frequently with eletriptan and lasmiditan. Nausea was more often associated with lasmiditan, sumatriptan, zolmitriptan, and ubrogepant. Eletriptan was the only intervention most frequently associated with chest pain or discomfort.
 

 

 

Need to Update Guidelines?

Considering the new results, Dr. Cipriani and study coauthor Messoud Ashina, MD, PhD, professor of neurology, University of Copenhagen in Denmark, said clinical guidelines for acute migraine management should be updated.

While triptans are contraindicated in patients with vascular disease, the researchers noted that “cerebrovascular events may present primarily as migraine-like headaches, and misdiagnosis of transient ischemic attack and minor stroke as migraine is not rare.”

Recently, lasmiditan, rimegepant, and ubrogepant — which are not associated with vasoconstrictive effects — have been promoted as alternatives in patients for whom triptans are contraindicated or not tolerated. But the high costs of these drugs put them out of reach for some patients, the investigators noted.

Triptans are widely underutilized, Dr. Ashina noted during the press briefing. Current use ranges from 17% to 22% in the United States and from 3% to 22.5% in Europe.

The investigators said that triptans have been shown to be superior and should be promoted globally, adding that the limited access and substantial underutilization of these medications are “missed opportunities to offer more effective treatments.”

The new results underscore the importance of head-to-head trials, which are the gold standard, said Dr. Cipriani.

The investigators noted that the study’s main limitation was the quality of the data, which was deemed to be low, or very low, for most comparisons. Other potential limitations included lack of individual patient data; exclusion of combination drugs; inclusion of only oral treatments; and not considering type of oral formulation, consistency in response across migraine episodes, or cost-effectiveness.

The study also did not address important clinical issues that might inform treatment decision-making, including medication overuse headache and potential withdrawal symptoms. In addition, the authors were unable to quantify some outcomes, such as global functioning.

‘Best Profile’?

Reached for comment, neurologist Nina Riggins, MD, PhD, of the Headache Center of Excellence, Palo Alto VA Medical Center in California, said the authors did a “great job” of bringing attention to the topic.

However, she noted that the investigators’ characterization of the four triptans as having the “best profile” for acute migraine gave her pause.

“Calling triptans the medication with the ‘best profile’ might be not applicable in many cases,” she said. For example, patients who need acute medication more than two to three times a week, as well as those with cardiovascular contraindications to triptans, may fall outside that category.

Dr. Riggins said that “it makes sense” that longer-acting triptans like frovatriptan and naratriptan may not rank highly for efficacy within the first 2 hours. However, these agents likely offer a superior therapeutic profile in specific contexts, such as menstrual-related migraine.

In addition, while triptans are known to cause medication overuse headache, this may not be the case with gepants, she noted.

In a release from the Science Media Center, a nonprofit organization promoting voices and views of the scientific community, Eloísa Rubio-Beltrán, PhD, research associate with The Migraine Trust at the Wolfson Sensory, Pain and Regeneration Centre, King’s College London in England, said the findings should affect migraine treatment guidelines.

“As the study highlights, due to their high efficacy and low cost, triptans should be the first-line treatment option for the acute treatment of migraine. These results should inform treatment guidelines and support the inclusion of the best performing triptans into the List of Essential Medicines, to optimize treatment, allowing patients to access more efficacious options,” said Dr. Rubio-Beltrán.

It is also important to note that gepants and ditans were developed to offer alternatives for patients who show no improvement from triptans, she added.

She pointed out that these medications were not developed as a substitute for triptans, but rather to expand the number of treatment options for migraine.

“Nonetheless,” she added, “this study highlights the need for further research on the pathophysiology of migraine, which will allow the discovery of novel targets, and thus, novel treatment options that will benefit all patient populations.”

The study was funded by the NIHR Oxford Health Biomedical Research Centre and the Lundbeck Foundation. Dr. Cipriani reported receiving research, educational, and consultancy fees from Italian Network for Pediatric Clinical Trials, Fondazione Cariplo, Lundbeck, and Angelini Pharma. Dr. Ashina is a consultant, speaker, or scientific adviser for AbbVie, Amgen, AstraZeneca, Eli Lilly, GSK, Lundbeck, Novartis, Pfizer, and Teva; is the past president of the International Headache Society; and an associate editor of The Journal of Headache and Pain and Brain. Dr. Riggins has consulted for Gerson Lehrman Group; participated in compensated work with AcademicCME and Association of Migraine Disorders; was a principal investigator on research with electroCore, Theranica, and Eli Lilly; serves on advisory boards for Theranica, Teva, Lundbeck, Amneal Pharmaceuticals, NeurologyLive, and Miles for Migraine; and is a project advisor for Clinical Awareness Initiative with Clinical Neurological Society of America. Dr. Rubio-Beltrán reported serving as a junior editorial board member of The Journal of Headache and Pain and a junior representative of the International Headache Society; receiving research support from The Migraine Trust, Eli Lilly, CoLucid Pharmaceuticals, Amgen, Novartis, and Kallyope; and receiving travel support from CoLucid Pharmaceuticals, Allergan, and Novartis.

A version of this article first appeared on Medscape.com.
