The best statins to lower non-HDL cholesterol in diabetes?
A network meta-analysis of 42 clinical trials concludes that rosuvastatin, simvastatin, and atorvastatin are the statins most effective at lowering non-high-density-lipoprotein cholesterol (non-HDL-C) in people with diabetes who are at risk for cardiovascular disease.
The analysis focused on the efficacy of statin treatment on reducing non-HDL-C, as opposed to reducing low-density-lipoprotein cholesterol (LDL-C), which has traditionally been used as a surrogate to determine cardiovascular disease risk from hypercholesterolemia.
“The National Cholesterol Education Program in the United States recommends that LDL-C values should be used to estimate the risk of cardiovascular disease related to lipoproteins,” lead author Alexander Hodkinson, MD, senior National Institute for Health Research fellow, University of Manchester, England, told this news organization.
“But we believe that non-high-density-lipoprotein cholesterol is more strongly associated with the risk of cardiovascular disease, because non-HDL-C combines all the bad types of cholesterol, which LDL-C misses, so it could be a better tool than LDL-C for assessing CVD risk and effects of treatment. We already knew which of the statins reduce LDL-C, but we wanted to know which ones reduced non-HDL-C; hence the reason for our study,” Dr. Hodkinson said.
The findings were published online in BMJ.
In April 2021, the National Institute for Health and Care Excellence (NICE) in the United Kingdom updated guidelines for adults with diabetes to recommend that non-HDL-C should replace LDL-C as the primary target for reducing the risk for cardiovascular disease with lipid-lowering treatment.
Currently, NICE is alone in its recommendation. Other international guidelines do not have a non-HDL-C target and use LDL-C reduction instead. These include guidelines from the European Society of Cardiology (ESC), the American College of Cardiology (ACC), the American Heart Association (AHA), and the National Lipid Association.
Non-HDL-C is simple to calculate: clinicians can easily obtain it by subtracting HDL-C from the total cholesterol level, he added.
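As a purely illustrative worked example (the values below are hypothetical and are not drawn from the study): a patient with a total cholesterol of 5.2 mmol/L and an HDL-C of 1.1 mmol/L would have a non-HDL-C of 5.2 − 1.1 = 4.1 mmol/L.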
This analysis compared the effectiveness of different statins at different intensities in reducing levels of non-HDL-C in 42 randomized controlled trials that included 20,193 adults with diabetes.
Compared with placebo, rosuvastatin at moderate- and high-intensity doses and simvastatin and atorvastatin at high-intensity doses were the most effective at lowering levels of non-HDL-C over an average treatment period of 12 weeks.
High-intensity rosuvastatin led to a 2.31 mmol/L reduction in non-HDL-C (95% credible interval, –3.39 to –1.21). Moderate-intensity rosuvastatin led to a 2.27 mmol/L reduction in non-HDL-C (95% credible interval, –3.00 to –1.49).
High-intensity simvastatin led to a 2.26 mmol/L reduction in non-HDL-C (95% credible interval, –2.99 to –1.51).
High-intensity atorvastatin led to a 2.20 mmol/L reduction in non-HDL-C (95% credible interval, –2.69 to –1.70).
Atorvastatin and simvastatin at any intensity and pravastatin at low intensity were also effective in reducing levels of non-HDL-C, the researchers noted.
In 4,670 patients who were at high risk for a major cardiovascular event, atorvastatin at high intensity showed the largest reduction in levels of non-HDL-C (1.98 mmol/L; 95% credible interval, –4.16 to 0.26).
In addition, high-intensity simvastatin and rosuvastatin were the most effective in reducing LDL-C.
High-intensity simvastatin led to a 1.93 mmol/L reduction in LDL-C (95% credible interval, –2.63 to –1.21), and high-intensity rosuvastatin led to a 1.76 mmol/L reduction in LDL-C (95% credible interval, –2.37 to –1.15).
In four studies, significant reductions in nonfatal myocardial infarction were shown for atorvastatin at moderate intensity, compared with placebo (relative risk, 0.57; 95% confidence interval, 0.43-0.76). No significant differences were seen for discontinuations, nonfatal stroke, or cardiovascular death.
“We hope our findings will help guide clinicians on statin selection itself, and what types of doses they should be giving patients. These results support using NICE’s new policy guidelines on cholesterol monitoring, using this non-HDL-C measure, which contains all the bad types of cholesterol for patients with diabetes,” Dr. Hodkinson said.
“This study further emphasizes what we have known about the benefit of statin therapy in patients with type 2 diabetes,” Prakash Deedwania, MD, professor of medicine, University of California, San Francisco, told this news organization.
Dr. Deedwania and others have published data on patients with diabetes that showed that treatment with high-intensity atorvastatin was associated with significant reductions in major adverse cardiovascular events.
“Here they use non-HDL cholesterol as a target. The NICE guidelines are the only guidelines looking at non-HDL cholesterol; however, all guidelines suggest an LDL to be less than 70 in all people with diabetes, and for those with recent acute coronary syndromes, the latest evidence suggests the LDL should actually be less than 50,” said Dr. Deedwania, spokesperson for the AHA and ACC.
As far as which measure to use, he believes both are useful. “It’s six of one and half a dozen of the other, in my opinion. The societies have not recommended non-HDL cholesterol and it’s easier to stay with what is readily available for clinicians, and using LDL cholesterol is still okay. The results of this analysis are confirmatory, in that looking at non-HDL cholesterol gives results very similar to what these statins have shown for their effect on LDL cholesterol,” he said.
Non-HDL cholesterol a better marker?
For Robert Rosenson, MD, director of metabolism and lipids at Mount Sinai Health System and professor of medicine and cardiology at the Icahn School of Medicine at Mount Sinai, New York, non-HDL cholesterol is becoming an important marker of risk for several reasons.
“The focus on LDL cholesterol has been due to the causal relationship of LDL with atherosclerotic cardiovascular disease, but in the last few decades, non-HDL has emerged because more people are overweight, have insulin resistance, and have diabetes,” Dr. Rosenson told this news organization. “In those situations, the LDL cholesterol underrepresents the risk of the LDL particles. With insulin resistance, the particles become more triglycerides and less cholesterol, so on a per-particle basis, you need to get more LDL particles to get to a certain LDL cholesterol concentration.”
Non-HDL cholesterol testing does not require fasting, another advantage of using it to monitor cholesterol, he added.
What is often forgotten is that moderate- to high-intensity statins have very good triglyceride-lowering effects, Dr. Rosenson said.
“This article highlights that, by using higher doses, you get more triglyceride-lowering. Hopefully, this will get practitioners to recognize that non-HDL cholesterol is a better predictor of risk in people with diabetes,” he said.
The study was funded by the National Institute for Health Research. Dr. Hodkinson, Dr. Rosenson, and Dr. Deedwania report no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Melanoma screening study stokes overdiagnosis debate
Routine skin cancer screening in primary care is associated with a marked increase in the detection of in situ and thin invasive melanomas, new research shows.
Without a corresponding decrease in melanoma mortality, an increase in the detection of those thin melanomas “raises the concern that early detection efforts, such as visual skin screening, may result in overdiagnosis,” the study authors wrote. “The value of a cancer screening program should most rigorously be measured not by the number of new, early cancers detected, but by its impact on the development of late-stage disease and its associated morbidity, cost, and mortality.”
The research, published in JAMA Dermatology, has reignited the controversy over the benefits and harms of primary care skin cancer screening, garnering two editorials that reflect different sides of the debate.
In one, Robert A. Swerlick, MD, pointed out that, “despite public messaging to the contrary, to my knowledge there is no evidence that routine skin examinations have any effect on melanoma mortality.
“The stage shift to smaller tumors should not be viewed as success and is very strong evidence of overdiagnosis,” wrote Dr. Swerlick, of the department of dermatology, Emory University, Atlanta.
The other editorial, however, argued that routine screening saves lives. “Most melanoma deaths are because of stage I disease, with an estimated 3%-15% of thin melanomas (≤ 1 mm) being lethal,” wrote a trio of editorialists from Oregon Health & Science University, Portland.
When considering the high mutation rate associated with melanoma and the current limits of treatment options, early diagnosis becomes “particularly important and counterbalances the risk of overdiagnosis,” the editorialists asserted.
Primary care screening study
The new findings come from an observational study of a quality improvement initiative conducted at the University of Pittsburgh Medical Center system between 2014 and 2018, in which primary care clinicians were offered training in melanoma identification through skin examination and were encouraged to offer annual skin cancer screening to patients aged 35 years and older.
Of 595,799 eligible patients, 144,851 (24.3%) were screened at least once during the study period. Those who received screening were more likely than unscreened patients to be older (median age, 59 vs. 55 years), women, and non-Hispanic White persons.
During a follow-up of 5 years, the researchers found that patients who received screening were significantly more likely than unscreened patients to be diagnosed with in situ melanoma (incidence, 30.4 vs. 14.4; hazard ratio, 2.6; P < .001) or thin invasive melanoma (incidence, 24.5 vs. 16.1; HR, 1.8; P < .001), after adjusting for factors that included age, sex, and race.
The screened patients were also more likely than unscreened patients to be diagnosed with in situ interval melanomas, defined as melanomas occurring at least 60 days after initial screening (incidence, 26.7 vs. 12.9; HR, 2.1; P < .001), as well as thin invasive interval melanomas (incidence, 18.5 vs. 14.4; HR, 1.3; P = .03).
The 60-day interval was included to account for the possible time to referral to a specialist for definitive diagnosis, the authors explained.
The incidence of melanomas thicker than 4 mm was lower in screened versus unscreened patients, but the difference was not statistically significant for all melanomas (2.7 vs. 3.3; HR, 0.8; P = .38) or interval melanomas (1.5 vs. 2.7; HR, 0.6; P = .15).
Experts weigh in
Although the follow-up period was 5 years, not all patients were followed that long after undergoing screening. For instance, for some patients, follow-up occurred only 1 year after they had been screened.
The study’s senior author, Laura K. Ferris, MD, PhD, of the department of dermatology, University of Pittsburgh, noted that a longer follow-up could shift the results.
“When you look at the curves in our figures, you do start to see them separate more and more over time for the thicker melanomas,” Dr. Ferris said in an interview. “I do suspect that, if we followed patients longer, we might start to see a more significant difference.”
The findings nevertheless add to evidence that although routine screening substantially increases the detection of melanomas overall, these melanomas are often not the ones doctors are most worried about or that increase a person’s risk of mortality, Dr. Ferris noted.
When it comes to melanoma screening, balancing the risks and benefits is key. One major downside, Dr. Ferris said, is the burden such screening could place on the health care system, with potentially unproductive screenings causing delays in care for patients with more urgent needs.
“We are undersupplied in the dermatology workforce, and there is often a long wait to see dermatologists, so we really want to make sure, as trained professionals, that patients have access to us,” she said. “If we’re doing something that doesn’t have proven benefit and is increasing the wait time, that will come at the expense of other patients’ access.”
Costs involved in skin biopsies and excisions of borderline lesions as well as the potential to increase patients’ anxiety represent other important considerations, Dr. Ferris noted.
However, Sancy A. Leachman, MD, PhD, a coauthor of the editorial in favor of screening, said in an interview that “at the individual level, there are an almost infinite number of individual circumstances that could lead a person to decide that the potential benefits outweigh the harms.”
According to Dr. Leachman, who is chair of the department of dermatology, Oregon Health & Science University, these individual priorities may not align with those of the various decision-makers or with guidelines, such as those from the U.S. Preventive Services Task Force, which gives visual skin cancer screening of asymptomatic patients an “I” rating, indicating “insufficient evidence.”
“Many federal agencies and payer groups focus on minimizing costs and optimizing outcomes,” Dr. Leachman and coauthors wrote. As the only professional advocates for individual patients, physicians “have a responsibility to assure that the best interests of patients are served.”
The study was funded by the University of Pittsburgh Melanoma and Skin Cancer Program. Dr. Ferris and Dr. Swerlick disclosed no relevant financial relationships. Dr. Leachman is the principal investigator for War on Melanoma, an early-detection program in Oregon.
A version of this article first appeared on Medscape.com.
Ukraine and PTSD: How psychiatry can help
The war in Ukraine is resulting in a devastating loss of life, catastrophic injuries, and physical destruction. But the war also will take an enormous mental health toll on millions of people, resulting in what I think will lead to an epidemic of posttraumatic stress disorder.
Think about the horrors that Ukrainians are experiencing. Millions of Ukrainians have been displaced to locations inside and outside of the country. People are being forced to leave behind family members, neighbors, and their pets and homes. In one recent news report, a Ukrainian woman who left Kyiv for Belgium reported having dreams in which she heard explosions. Smells, sounds, and even colors can trigger intrusive memories and a host of other problems. The mind can barely comprehend the scope of this human crisis.
Ukrainian soldiers are witnessing horrors that are unspeakable. Doctors, emergency service workers, and other medical professionals in Ukraine are being exposed to the catastrophe on a large scale. Children and youth are among the most affected victims, and it is difficult to predict the impact all of this upheaval is having on them.
The most important question for those of us who treat mental illness is “how will we help devastated people suffering from extreme trauma tied to death, dying, severe injuries, and torture by the invading soldiers?”
I have been treating patients with PTSD for many years. In my lifetime, the devastation in Ukraine will translate into what I expect will be the first overwhelming mass epidemic of PTSD – at least that I can recall. Yes, surely PTSD occurred during and after the Holocaust in the World War II era, but at that time, the mental health profession was not equipped to recognize it – even though the disorder most certainly existed. Even in ancient times, an Assyrian text from Mesopotamia (currently Iraq) described what we would define as PTSD symptoms in soldiers, such as sleep disturbances, flashbacks, and “low mood,” according to a 2014 article in the journal Early Science and Medicine.
The DSM-5 describes numerous criteria for PTSD mainly centering on trauma exposing a person to actual or threatened death, serious injury, or a variety of assaults, including direct exposure or witnessing the event. However, in my clinical experience, I’ve seen lesser events leading to PTSD. Much depends on how each individual processes what is occurring or has occurred.
What appears to be clear is that some key aspects of PTSD according to the DSM-5 – such as trauma-related thoughts or feelings, or trauma-related reminders, as well as nightmares and flashbacks – are likely occurring among Ukrainians. In addition, hypervigilance and exaggerated startle response seem to be key components of PTSD whether or not the cause is a major event or what one would perceive as less traumatic or dramatic.
I’ve certainly seen PTSD secondary to a hospitalization, especially in care involving ICUs or cardiac care units. In addition, I’ve had the occasion to note PTSD signs and symptoms after financial loss or divorce, situations in which some clinicians would never believe PTSD would occur, and would often diagnose as anxiety or depression. For me, again from a clinical point of view, it’s always been critical to assess how individuals process the event or events around them.
We know that there is already a shortage of mental health clinicians across the globe. This means that, in light of the hundreds of thousands – possibly millions – of Ukrainians affected by PTSD, a one-to-one approach will not do. For those Ukrainians who are able to find safe havens, I believe that PTSD symptoms can be debilitating, and the mental health community needs to begin putting supports in place now to address this trauma.
Specifically, proven cognitive-behavioral therapy (CBT) and guided imagery should be used to begin helping some of these people recover from the unbelievable trauma of war. For some, medication management might be helpful in those experiencing nightmares combined with anxiety and depression. But the main approach and first line of care should be CBT and guided imagery.
PTSD symptoms can make people feel like they are losing control, and prevent them from rebuilding their lives. We must do all we can in the mental health community to destigmatize care and develop support services to get ahead of this crisis. Only through medical, psychiatric, and health care organizations banding together using modern technology can the large number of people psychologically affected by this ongoing crisis be helped and saved.
Dr. London is a practicing psychiatrist who has been a newspaper columnist for 35 years, specializing in writing about short-term therapy, including cognitive-behavioral therapy and guided imagery. He is author of “Find Freedom Fast” (New York: Kettlehole Publishing, 2019). He has no conflicts of interest.
Long-term cannabis use linked to dementia risk factors
A large prospective, longitudinal study showed that long-term cannabis users had a decline in intelligence quotient (IQ) from childhood to midlife (mean, 5.5 IQ points), poorer learning and processing speed compared with childhood, and self-reported memory and attention problems. Long-term cannabis users also showed hippocampal atrophy at midlife (age 45); hippocampal atrophy and mild midlife cognitive deficits are both known risk factors for dementia.
“Long-term cannabis users – people who have used cannabis from 18 or 19 years old and continued using through midlife – showed cognitive deficits, compared with nonusers. They also showed more severe cognitive deficits, compared with long-term alcohol users and long-term tobacco users. But people who used infrequently or recreationally in midlife did not show as severe cognitive deficits. Cognitive deficits were confined to cannabis users,” lead investigator Madeline Meier, PhD, associate professor of psychology, Arizona State University, Tempe, said in an interview.
“Long-term cannabis users had smaller hippocampal volume, but we also found that smaller hippocampal volume did not explain the cognitive deficits among the long-term cannabis users,” she added.
The study was recently published online in the American Journal of Psychiatry.
Growing use in Boomers
Long-term cannabis use has been associated with memory problems. Studies examining the impact of cannabis use on the brain have shown conflicting results. Some suggest that regular use in adolescence is associated with altered connectivity and reduced volume of brain regions involved in executive functions such as memory, learning, and impulse control, compared with the brains of nonusers.
Others found no significant structural differences between the brains of cannabis users and nonusers.
An earlier, large longitudinal study in New Zealand found that persistent cannabis use (with frequent use starting in adolescence) was associated with a loss of an average of six (or up to eight) IQ points measured in mid-adulthood.
Cannabis use is increasing among Baby Boomers – a group born between 1946 and 1964 – who used cannabis at historically high rates as young adults, and who now use it at historically high rates in midlife and as older adults.
To date, case-control studies, which are predominantly in adolescents and young adults, have found that cannabis users show subtle cognitive deficits and structural brain differences, but it is unclear whether these differences in young cannabis users might be larger in midlife and in older adults who have longer histories of use.
The study included a representative cohort of 1,037 individuals in Dunedin, New Zealand, born between April 1972 and March 1973, and followed from age 3 to 45.
Cannabis use and dependence were assessed at ages 18, 21, 26, 32, 38, and 45. IQ was assessed at ages 7, 9, 11, and 45. Specific neuropsychological functions and hippocampal volume were assessed at age 45.
“Most of the previous research has focused on adolescent and young-adult cannabis users. What we’re looking at here is long-term cannabis users in midlife, and we’re finding that long-term users show cognitive deficits. But we’re not just looking at a snapshot of people in midlife, we’re also doing a longitudinal comparison – comparing them to themselves in childhood. We saw that long-term cannabis users showed a decline in IQ from childhood to adulthood,” said Dr. Meier.
Participants in the study are members of the Dunedin Longitudinal Study, a representative birth cohort (n = 1,037; 91% of eligible births; 52% male) born between April 1972 and March 1973 in Dunedin, New Zealand, who participated in the first assessment at age 3.
The cohort matches the broader population on socioeconomic status (SES), key health indicators, and demographics. Assessments were carried out at birth and at ages 3, 5, 7, 9, 11, 13, 15, 18, 21, 26, 32, 38, and 45.
Shrinking hippocampal volume
Cannabis use, cognitive function, and hippocampal volume were assessed comparing long-term cannabis users (n = 84) against five distinct groups:
- Lifelong cannabis nonusers (n = 196) – to replicate the control group most often reported in the case-control literature
- Midlife recreational cannabis users (n = 65) – to determine if cognitive deficits and structural brain differences are apparent in nonproblem users – the majority of cannabis users
- Long-term tobacco users (n = 75)
- Long-term alcohol users (n = 57) – benchmark comparisons for any cannabis findings and to disentangle potential cannabis effects from tobacco and alcohol effects
- Cannabis quitters (n = 58) – to determine whether differences are apparent after cessation
The investigators tested dose-response associations using continuously measured persistence of cannabis use, rigorously adjusting for numerous confounders derived from multiple longitudinal waves and data sources.
The investigators also tested whether associations between continuously measured persistence of cannabis use and cognitive deficits were mediated by hippocampal volume differences.
The hippocampus was the area of focus because it has a high density of cannabinoid receptors and is instrumental for learning and memory, one of the most consistently impaired cognitive domains in cannabis users; it is also the brain region that most consistently emerges as smaller in cannabis users relative to controls. Structural MRI was done at age 45 for 875 participants (93% of age 45 participants).
Of 997 cohort members still alive at age 45, 938 (94.1%) were assessed at age 45. Age 45 participants did not differ significantly from other participants on childhood SES, childhood self-control, or childhood IQ. Cognitive functioning among midlife recreational cannabis users was similar to representative cohort norms, suggesting that infrequent recreational cannabis use in midlife is unlikely to compromise cognitive functioning.
However, long-term cannabis users did not perform significantly worse on any test than cannabis quitters. Cannabis quitters showed subtle cognitive deficits that may explain inconsistent findings on the benefits of cessation.
Smaller hippocampal volume is thought to be a possible mediator of cannabis-related cognitive deficits because the hippocampus is rich in CB1 receptors and is involved in learning and memory.
Compared with nonusers, long-term cannabis users had smaller bilateral volume in the total hippocampus and in 5 of 12 structurally and functionally distinct subregions (tail, hippocampal amygdala transition area, CA1, molecular layer, and dentate gyrus), consistent with case-control studies. Long-term users also had significantly smaller volumes than midlife recreational cannabis users in the left and right hippocampus and in 3 of 12 subfields (tail, CA1, and molecular layer).
More potent
“If you’ve been using cannabis very long term and now are in midlife, you might want to consider quitting. Quitting is associated with slightly better cognitive performance in midlife. We also need to watch for risk of dementia. We know that people who show cognitive deficits at midlife are at elevated risk for later life dementia. And the deficits we saw among long-term cannabis users (although fairly mild), they were in the range in terms of effect size of what we see among people in other studies who have gone on to develop dementia in later life,” said Dr. Meier.
The study findings conflict with those of other studies, including one by the same research group, which compared the cognitive functioning of twins who were discordant for cannabis use and found little evidence of cannabis-related cognitive deficits. Because long-term cannabis users also use tobacco, alcohol, and other illicit drugs, disentangling cannabis effects from other substances is challenging.
“Long-term cannabis users tend to be long-term polysubstance users, so it’s hard to isolate,” said Dr. Meier.
Additionally, some group sizes were small, raising concerns about low statistical power.
“Group sizes were small but we didn’t rely only on those group comparisons; however, we did find statistical differences. We also tested highly statistically powered dose-response associations between persistence of cannabis use over ages 18-45 and each of our outcomes (IQ, learning, and processing speed in midlife) while adjusting possible alternate explanations such as low childhood IQ, other substance use, [and] socioeconomic backgrounds.
“These dose-response associations used large sample sizes, were highly powered, and took into account a number of alternative explanations. These two different approaches showed very similar findings and one bolstered the other,” said Dr. Meier.
The study’s results were based on individuals who began using cannabis in the 1980s or ‘90s, but the concentration of tetrahydrocannabinol (THC) has risen in recent years.
“When the study began, THC concentration was approximately 4%. Over the last decade we have seen it go up to 12% or even higher. A recent study surveying U.S. dispensaries found 20% THC. If THC accounts for impairment, then the effects can be larger [with higher concentrations]. One of the challenges in the U.S. is that there are laws prohibiting researchers from testing cannabis, so we have to rely on product labels, which we know are unreliable,” said Dr. Meier.
A separate report is forthcoming with results of exploratory analyses of associations between long-term cannabis use and comprehensive MRI measures of global and regional gray and white matter.
The data will also be used to answer a number of different questions about cognitive deficits, brain structure, aging preparedness, social preparedness (strength of social networks), financial and health preparedness, and biological aging (the pace of aging relative to chronological age) in long-term cannabis users, Dr. Meier noted.
‘Fantastic’ research
Commenting on the research for this news organization, Andrew J. Saxon, MD, professor, department of psychiatry & behavioral sciences at University of Washington, Seattle, and a member of the American Psychiatric Association’s Council on Addiction Psychiatry, said the study “provides more evidence that heavy and regular cannabis use is not benign behavior.”
“It’s a fantastic piece of research in which they enrolled participants at birth and have followed them up to age 45. In most of the other research that has been done, we have no idea what their baseline was. What’s so remarkable here is that they can clearly demonstrate the loss of IQ points from childhood to age 45,” said Dr. Saxon.
“It is clear that, in people using cannabis long term, cognition is impaired. It would be good to have a better handle on how much cognitive function can be regained if you quit, because that could be a motivator for quitting in people where cannabis is having an adverse effect on their lives,” he added.
On the issue of THC potency, Dr. Saxon said that, while it’s true the potency of cannabis is increasing in terms of THC concentrations, the question is: “Do people who use cannabis use a set amount or do they imbibe until they achieve the state of altered consciousness that they’re seeking? Although there has been some research in the area of self-regulation and cannabis potency, we do not yet have the answers to determine if there is any causation,” said Dr. Saxon.
Dr. Meier and Dr. Saxon reported no relevant financial conflicts of interest.
A version of this article first appeared on Medscape.com.
FROM THE AMERICAN JOURNAL OF PSYCHIATRY
New injectable gel can deliver immune cells directly to cancer tumors
A simple, two-ingredient gel may boost the fighting power of a groundbreaking cancer treatment, say Stanford University engineers.
The gel – made from water and a plant-based polymer – delivers targeted T cells adjacent to a cancer growth, taking aim at solid tumors.
It’s the latest development in CAR T-cell therapy, a type of immunotherapy that involves collecting the patient’s T cells, reengineering them to be stronger, and returning them to the patient’s body.
Results have been promising in blood cancers, such as leukemia and lymphoma, but less so in solid tumors, such as brain, breast, or kidney cancer, according to the National Cancer Institute.
The gel “is a really exciting step forward,” says Abigail Grosskopf, a PhD candidate at Stanford (Calif.) University, who is the lead study author, “because it can change the delivery of these cells and expand this kind of treatment to other cancers.”
CAR T-cell therapy: Limits in solid tumors
Currently available CAR T-cell therapies are administered by intravenous infusion. But that doesn’t do much against tumors in specific locations because the cells enter the bloodstream and flow throughout the body. The cancer-fighting effort exhausts the T cells, weakening their ability to infiltrate dense tumors.
CAR T cells need cytokines to tell them when to attack, Ms. Grosskopf explains. If delivered through an IV drip, the number of cytokines required to destroy a solid tumor would be toxic to other, healthy parts of the body.
So the Stanford researchers turned to a gel that can be injected next to the tumor, delivering CAR T cells and cytokines locally rather than through the bloodstream. In their study, which was published in Science Advances, the gel injections wiped out mouse tumors in 12 days. The gel degraded harmlessly a few weeks later.
A “leaky pen” that fights cancer
A gel works better than a liquid because of its staying power, says Ms. Grosskopf, who compares the method to a leaky pen.
The gel acts as the “pen,” releasing activated CAR T cells at regular intervals to attack the cancerous growth. Whereas liquid dissipates quickly, the gel’s structure is strong enough to stay in place for weeks, Ms. Grosskopf says. Plus, it’s biocompatible and harmless within the body, she adds.
More preclinical studies are needed before human clinical trials can occur, Ms. Grosskopf says.
“Not only could this be a way to deliver T cells and cytokines,” Ms. Grosskopf says, “but it may be used for other targeted therapy cancer drugs that are in development. So we see this as running parallel to those efforts.”
Taking an even broader view, the gel could have applications across medical specialties, such as slow-release delivery of vaccines.
A version of this article first appeared on Medscape.com.
FROM SCIENCE ADVANCES
U.S. life expectancy dropped by 2 years in 2020: Study
U.S. life expectancy dropped by nearly 2 years in 2020 and was projected to fall further in 2021, according to a new study.
The study, published in medRxiv, said U.S. life expectancy went from 78.86 years in 2019 to 76.99 years in 2020, during the thick of the global COVID-19 pandemic. Though vaccines were widely available in 2021, the U.S. life expectancy was expected to keep going down, to 76.60 years.
In “peer countries” – Austria, Belgium, Denmark, England and Wales, Finland, France, Germany, Israel, Italy, the Netherlands, New Zealand, Northern Ireland, Norway, Portugal, Scotland, South Korea, Spain, Sweden, and Switzerland – life expectancy went down only 0.57 years from 2019 to 2020 and increased by 0.28 years in 2021, the study said. The peer countries now have a life expectancy that’s 5 years longer than in the United States.
“The fact the U.S. lost so many more lives than other high-income countries speaks not only to how we managed the pandemic, but also to more deeply rooted problems that predated the pandemic,” said Steven H. Woolf, MD, one of the study authors and a professor of family medicine and population health at Virginia Commonwealth University, Richmond, according to Reuters.
“U.S. life expectancy has been falling behind other countries since the 1980s, and the gap has widened over time, especially in the last decade.”
Lack of universal health care, income and educational inequality, and less-healthy physical and social environments helped lead to the decline in American life expectancy, according to Dr. Woolf.
The life expectancy drop from 2019 to 2020 hit Black and Hispanic people hardest, according to the study. But the drop from 2020 to 2021 affected White people the most, with average life expectancy among them going down about a third of a year.
Researchers looked at death data from the National Center for Health Statistics, the Human Mortality Database, and overseas statistical agencies. Life expectancy for 2021 was estimated “using a previously validated modeling method,” the study said.
A version of this article first appeared on WebMD.com.
FROM MEDRXIV
Infectious disease pop quiz: Clinical challenge #23 for the ObGyn
What are the most common organisms that cause chorioamnionitis and puerperal endometritis?
Chorioamnionitis and puerperal endometritis are polymicrobial, mixed aerobic-anaerobic infections. The dominant organisms are anaerobic gram-negative bacilli (Bacteroides and Prevotella species); anaerobic gram-positive cocci (Peptococcus species and Peptostreptococcus species); aerobic gram-negative bacilli (principally, Escherichia coli, Klebsiella pneumoniae, and Proteus species); and aerobic gram-positive cocci (enterococci, staphylococci, and group B streptococci).
- Duff P. Maternal and perinatal infections: bacterial. In: Landon MB, Galan HL, Jauniaux ERM, et al. Gabbe’s Obstetrics: Normal and Problem Pregnancies. 8th ed. Elsevier; 2021:1124-1146.
- Duff P. Maternal and fetal infections. In: Resnik R, Lockwood CJ, Moore TJ, et al. Creasy & Resnik’s Maternal-Fetal Medicine: Principles and Practice. 8th ed. Elsevier; 2019:862-919.
Weight gain may exacerbate structural damage in knee OA
Gaining weight over 4 years may worsen structural damage and knee pain in people with knee osteoarthritis (OA), researchers reported at the OARSI 2022 World Congress.
Using data from the Osteoarthritis Initiative (OAI), researchers from the University of California found that a greater than 5% increase in body weight over 4 years was associated with a 29% increased risk for medial joint space narrowing (JSN), compared with controls (P = .038). There was also a 34% increased risk for developing frequent knee pain (P = .009).
Conversely, weight loss appeared to offer some protection from structural damage in knee OA, Gabby B. Joseph, PhD, a specialist in radiology and biomedical imaging, said at the congress, sponsored by the Osteoarthritis Research Society International.
Indeed, individuals who had achieved a weight loss of more than 5% at 4-year follow up were less likely to have a worsened Kellgren and Lawrence (KL) grade than those whose body weight remained the same (odds ratio, 0.69, P = .009).
Weight loss was also associated with a higher chance of experiencing resolution of knee pain over 12 months, with an OR of 1.40 (P = .019).
Importance of weight change in OA
“We know that weight loss has beneficial effects on knee OA symptoms, such as pain relief and improvement in physical function,” commented Xingzhong Jin, PhD, an NHMRC Early Career Fellow at the Centre for Big Data Research in Health at the University of New South Wales, Sydney.
“But what is unclear is whether weight loss could slow down structural degradation in the joint in the long run,” he said in an interview. “These findings mean that weight control is clearly very important for knee OA, in terms of improving symptoms as well as preventing structural progression.”
He added: “The evidence on hip OA is less clear. As most of the knowledge in this space was generated from people with knee OA, this work is an important contribution to knowledge around the care of people with hip OA.”
Why look at weight change effects in OA?
“Obesity is a modifiable risk factor for osteoarthritis,” Dr. Joseph said at the start of her virtual presentation. Indeed, patients with obesity are more than twice as likely to develop knee OA as their normal-weight counterparts.
Although there have been various studies looking at weight loss and weight gain in OA, most have focused on weight loss rather than gain, and OA in the knee rather than the hip, she explained.
The aim of the present study, therefore, was to take a closer look at the possible effect of both weight gain and weight loss in people with hip or knee OA in terms of radiographic outcomes (KL grade change, medial JSN), symptomatic outcomes (knee pain and resolution at 12 months), and the need for joint replacement.
“The clinical implications are to develop targeted long-term strategies for site-specific informed recommendations to prevent joint degeneration,” Dr. Joseph said.
Using data on nearly 3,000 individuals from the OAI, Dr. Joseph and collaborators classified people with OA into one of three groups: those with at least a 5% gain in weight (n = 714), those with no (–3% to 3%) change in weight (n = 1,553), and those with at least a 5% loss in weight over a 4-year period.
The results, which were published in Arthritis Care & Research, also revealed no differences in the rate of total hip or knee arthroplasties between the groups, and no differences between the weight gain and weight loss groups and controls in term of hip radiographic or symptomatic changes.
“Why are there differing effects of weight change in the knee versus the hip? This could be multifactorial, but there could be a few things going on,” said Dr. Joseph. “First, the joint structure is clearly different between the knee and the hip. The knee is a hinge joint; the hip is a ball-and-socket joint. Malalignment could affect these in different ways.”
Additionally, “the knee also has thicker cartilage, the hip has thinner cartilage again, and the loading patterns may be different in these joints.”
There were also differences in the rate of progression between the knee and the hip, with rates being higher in the knee; “this was especially noticeable for the radiographic progression,” Dr. Joseph said.
Noting that the study is limited by its retrospective design, Dr. Joseph concluded: “We don’t know why these people lost or gained weight. So, this would be something that would be more apparent in a prospective study.
“Also, there were no MRI outcomes, as MRI imaging was not available in the hip in the OAI, but clearly morphology T1 and T2 would be useful to assess as outcomes here as well.”
The OAI is a public-private partnership funded by the National Institutes of Health and initial support from Merck, Novartis, GlaxoSmithKline and Pfizer. Dr. Joseph and Dr. Jin reported having no conflicts of interest to disclose.
FROM OARSI 2022
FDA recommends switch to partially or fully disposable duodenoscope models
Health care facilities and providers should complete the transition to fully disposable duodenoscopes and those with disposable components, the U.S. Food and Drug Administration announced this week after an analysis of postmarket surveillance studies was completed.
The FDA’s directive updates its April 2020 recommendations on the subject. It cites concerns about cleaning fixed endcap duodenoscopes and the increasing availability of models that eliminate the need for reprocessing.
The announcement highlighted the potential for a dramatic reduction in between-patient contamination risk, which newer designs may cut “by half or more as compared to reusable, or fixed endcaps.”
“Interim results from one duodenoscope model with a removable component show a contamination rate of just 0.5%, as compared to older duodenoscope models which had contamination rates as high as 6%,” the FDA writes.
Duodenoscopes are used in more than 500,000 procedures each year in the United States and are key in assessing and treating diseases and conditions of the pancreas and bile ducts.
Upgrade to new models to decrease infections
Manufacturers no longer market fixed endcap models in the United States, but some health care facilities continue to use them. The FDA recommends that all fixed endcap models be replaced.
The FDA says some manufacturers are offering replacement programs to upgrade to a model with a disposable component at no cost.
Two fully disposable models and five with disposable components have been cleared by the FDA. (One model is no longer marketed and thus not listed here.)
Fully Disposable:
Ambu Innovation GmbH, Duodenoscope model aScope Duodeno
Boston Scientific Corporation, EXALT Model D Single-Use Duodenoscope
Disposable Components:
Fujifilm Corporation, Duodenoscope model ED-580XT
Olympus Medical Systems, Evis Exera III Duodenovideoscope Olympus TJF-Q190V
Pentax Medical, Duodenoscope model ED34-i10T2
Pentax Medical, Duodenoscope model ED32-i10
Additionally, the failure to correctly reprocess a duodenoscope could result in tissue or fluid from one patient transferring to a subsequent patient.
“In rare cases, this can lead to patient-to-patient disease transmission,” the FDA says.
Postmarket surveillance studies
In 2015, the FDA ordered three manufacturers of reusable devices (Fujifilm, Olympus, and Pentax) to conduct postmarket surveillance studies to determine contamination rates after reprocessing.
In 2019, the FDA also ordered postmarket surveillance studies to the makers of duodenoscopes with disposable endcaps to verify that the new designs reduce the contamination rate.
The final results for the fixed endcap design indicate that contamination rates with high-concern organisms were as high as 6.6% after reprocessing. High-concern organisms are those more often associated with disease, such as E. coli and Pseudomonas.
“As a result, Pentax and Olympus are withdrawing their fixed endcap duodenoscopes from the market, and Fujifilm has completed withdrawal of its fixed endcap duodenoscope,” the FDA writes.
Studies are not yet complete for duodenoscopes with removable components. As of August 12, 2021, the study of the Fujifilm ED-580XT duodenoscope with a removable cap had collected 57% of the required samples. Interim results indicate that no samples tested positive for enough low-concern organisms to indicate a reprocessing failure, and only 0.5% tested positive for high-concern organisms.
In addition to the contamination risk sampling, each manufacturer was ordered to do postmarket surveillance studies to evaluate whether staff could understand and follow the manufacturer’s reprocessing instructions in real-world health care settings.
According to the FDA, the results showed that users frequently had difficulty understanding and following the manufacturers’ instructions and were not able to successfully complete reprocessing with the older models.
However, the newer models had high user success rates for understanding instructions and correctly performing reprocessing tasks, the FDA says.
A version of this article first appeared on Medscape.com.
AGA supports FDA’s continued efforts to reduce the risk of disease transmission by duodenoscopes. Through the AGA Center for GI Innovation and Technology, AGA continues to support innovation in medical technology. To get up to date on past challenges with scope infections and future directions, check out AGA’s Innovation in Duodenoscope Design program, consisting of articles, webinars, and podcasts with leading experts in this space.
Unraveling primary ovarian insufficiency
In the presentation of secondary amenorrhea, pregnancy is the No. 1 differential diagnosis. Once this has been excluded, an algorithm is initiated to determine the etiology, including an assessment of the hypothalamic-pituitary-ovarian axis. While the early onset of ovarian failure can be physically and psychologically disruptive, the effect on fertility is an especially devastating event. Previously identified by terms including premature ovarian failure and premature menopause, “primary ovarian insufficiency” (POI) is now the preferred designation. This month’s article will address the diagnosis, evaluation, and management of POI.
The definition of POI is the development of primary hypogonadism before the age of 40 years. Spontaneous POI occurs in approximately 1 in 250 women by age 35 years and 1 in 100 by age 40 years. After excluding pregnancy, the clinician should determine signs and symptoms that can lead to expedited and cost-efficient testing.
Consequences
POI is an important risk factor for bone loss and osteoporosis, especially in young women who develop ovarian dysfunction before they achieve peak adult bone mass. At the time of diagnosis of POI, a bone density test (dual-energy x-ray absorptiometry) should be obtained. Women with POI may also develop depression and anxiety as well as experience an increased risk for cardiovascular morbidity and mortality, possibly related to endothelial dysfunction.
Young women with spontaneous POI are at increased risk of developing autoimmune adrenal insufficiency (AAI), a potentially fatal disorder. Consequently, to screen for AAI, serum adrenal cortical and 21-hydroxylase antibodies should be measured in all women who have a karyotype of 46,XX and experience spontaneous POI. Women who test positive for these antibodies have an approximately 50% risk of developing adrenal insufficiency. Despite initial normal adrenal function, women with positive adrenal cortical antibodies should be followed annually.
Causes (see table for a more complete list)
Iatrogenic
Known causes of POI include chemotherapy and radiation, often in the setting of cancer treatment. The three most commonly used drugs, cyclophosphamide, cisplatin, and doxorubicin, cause POI by inducing death and/or accelerated activation of primordial follicles and increased atresia of growing follicles. The most damaging agents are alkylating drugs. A cyclophosphamide equivalent dose (CED) calculator has been established for ovarian failure risk stratification from chemotherapy based on the cumulative dose of alkylating agents received.
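As a rough, hedged sketch of how such a calculator is typically structured (this is not the published tool), each alkylating agent's cumulative dose is multiplied by an agent-specific conversion factor and the products are summed. Cyclophosphamide's factor is 1.0 by definition; the other factors below are deliberately left as placeholders to be filled in from the published calculator, and the function and dictionary names are invented for the example.

```python
# Minimal sketch of a cyclophosphamide-equivalent-dose (CED) style calculation;
# NOT the published calculator. Cyclophosphamide's factor is 1.0 by definition.
# Factors for other alkylating agents are placeholders and must be taken from
# the published CED calculator before any real use.
CONVERSION_FACTORS = {
    "cyclophosphamide": 1.0,  # reference agent
    "ifosfamide": None,       # placeholder: use the published factor
    "procarbazine": None,     # placeholder: use the published factor
}

def cyclophosphamide_equivalent_dose(cumulative_doses_mg_per_m2):
    """Weight each agent's cumulative dose (mg/m^2) by its conversion factor
    and sum, yielding one number usable for risk stratification."""
    total = 0.0
    for agent, dose in cumulative_doses_mg_per_m2.items():
        factor = CONVERSION_FACTORS.get(agent)
        if factor is None:
            raise ValueError(f"No conversion factor loaded for {agent!r}")
        total += factor * dose
    return total

# Example: cyclophosphamide only, 7,500 mg/m^2 cumulative dose.
print(cyclophosphamide_equivalent_dose({"cyclophosphamide": 7500.0}))  # 7500.0
```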
One study estimated the radiosensitivity of the oocyte to be less than 2 Gy. Based upon this estimate, the authors calculated the dose of radiotherapy that would result in immediate and permanent ovarian failure in 97.5% of patients as follows (a simple lookup based on these values is sketched after the list):
- 20.3 Gy at birth
- 18.4 Gy at age 10 years
- 16.5 Gy at age 20 years
- 14.3 Gy at age 30 years
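The lookup referenced above, as a minimal sketch: it simply returns the tabulated threshold for the nearest listed age at or below the patient's age, with no interpolation between rows, so it is illustrative only and not a substitute for the original model.

```python
# Illustrative lookup of the doses listed above (Gy) estimated to cause
# immediate, permanent ovarian failure in 97.5% of patients, by age at exposure.
# Intermediate ages map to the nearest tabulated age below; the original model
# would be needed for a proper age-specific estimate.
THRESHOLD_GY_BY_AGE = {0: 20.3, 10: 18.4, 20: 16.5, 30: 14.3}

def sterilizing_dose_gy(age_years):
    tabulated = [a for a in sorted(THRESHOLD_GY_BY_AGE) if a <= age_years]
    if not tabulated:
        raise ValueError("age below tabulated range")
    return THRESHOLD_GY_BY_AGE[tabulated[-1]]

print(sterilizing_dose_gy(25))  # 16.5 (uses the age-20 row)
```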

Genetic
Approximately 10% of cases are familial. A family history of POI raises concern for a fragile X premutation. Fragile X syndrome, an X-linked disorder, is one of the most common inherited causes of intellectual disability worldwide. There is a strong relationship between age at menopause, including POI, and premutations for fragile X syndrome. The American College of Obstetricians and Gynecologists recommends that women with POI or an elevated follicle-stimulating hormone (FSH) level before age 40 years without known cause be screened for FMR1 premutations. Approximately 6% of cases of POI are associated with premutations in the FMR1 gene.
Turner syndrome (TS) is one of the most common causes of POI and results from the lack of a second X chromosome. The most common chromosomal defect in humans, TS occurs in up to 1.5% of conceptions, 10% of spontaneous abortions, and 1 of 2,500 live births.
Serum antiadrenal and/or anti–21-hydroxylase antibodies and antithyroid peroxidase antibodies can aid in the diagnosis of autoimmune causes involving the adrenal gland, ovary, and thyroid, which are found in 4% of women with spontaneous POI. Testing for the presence of 21-hydroxylase autoantibodies or adrenal autoantibodies is sufficient to make the diagnosis of autoimmune oophoritis in women with proven spontaneous POI.
The etiology of POI remains unknown in approximately 75%-90% of cases. However, studies using whole exome or whole genome sequencing have identified genetic variants in approximately 30%-35% of these patients.
Risk factors
Factors thought to play a role in determining the age of menopause include genetics (e.g., FMR1 premutation and mosaic Turner syndrome), ethnicity (earlier among Hispanic women and later in Japanese American women when compared with White women), and smoking (which reduces it by approximately 2 years).
Regarding ovarian aging, the holy grail of the reproductive life span is to predict menopause. While the definitive age eludes us, anti-Müllerian hormone levels appear to show promise. An undetectable level on an ultrasensitive anti-Müllerian hormone assay (< 0.01 ng/mL) predicted a 79% probability of menopause within 12 months for women aged 51 years and above; the probability was 51% for women below age 48.
Diagnosis
The three P’s of secondary amenorrhea are physiological, pharmacological, and pathological; they can guide the clinician to a targeted evaluation. Physiological causes are pregnancy, the first 6 months of continuous breastfeeding (from elevated prolactin), and natural menopause. Pharmacological etiologies, excluding hormonal treatment that suppresses ovulation (combined oral contraceptives, gonadotropin-releasing hormone agonist/antagonist, or danazol), include agents that inhibit dopamine and thereby increase serum prolactin, such as metoclopramide; antipsychotics, such as haloperidol; and dopamine-depleting medications used for tardive dystonia, such as reserpine. Pathological causes include pituitary adenomas, thyroid disease, and functional hypothalamic amenorrhea from changes in weight, exercise regimen, or stress.
Management
About 50%-75% of women with 46,XX spontaneous POI experience intermittent ovarian function, and 5%-10% remain able to conceive. Anecdotally, a 32-year-old woman presented to me with primary infertility, secondary amenorrhea, and suspected POI based on vasomotor symptoms and elevated FSH levels. Pelvic ultrasound showed a hemorrhagic cyst, suspicious for a corpus luteum. Two weeks thereafter she reported a positive home urine human chorionic gonadotropin test and ultimately delivered twins. Her POI, with amenorrhea, persisted postpartum.
Unless there is an absolute contraindication, estrogen therapy should be prescribed to women with POI to reduce the risk of osteoporosis, cardiovascular disease, and urogenital atrophy, as well as to maintain sexual health and quality of life. Women with an intact uterus should also receive a progestogen because of the risk of endometrial hyperplasia from unopposed estrogen. Rather than oral estrogen, transdermal or vaginal delivery of estrogen is a more physiological approach and carries lower risks of venous thromboembolism and gallbladder disease. Of note, standard postmenopausal hormone therapy, which has a much lower dose of estrogen than combined estrogen-progestin contraceptives, does not provide effective contraception. Per ACOG, systemic hormone treatment should be prescribed to all women with POI until age 50-51 years.
For fertility, women with spontaneous POI can be offered oocyte or embryo donation. Unlike oocytes, the uterus does not age reproductively; therefore, women can achieve reasonable pregnancy success rates through egg donation despite experiencing menopause.
Future potential options
Female germline stem cells have been isolated from neonatal mice and transplanted into sterile adult mice, which were then able to produce offspring. In a second study, oogonial stem cells were isolated from neonatal and adult mouse ovaries; pups were subsequently born from the oocytes. Further experiments are needed before the implications for humans can be determined.
Emotionally traumatic for most women, POI disrupts life plans, hopes, and dreams of raising a family. The approach to the patient with POI involves the above evidence-based testing along with empathy from the health care provider.
Dr. Trolice is director of The IVF Center in Winter Park, Fla., and professor of obstetrics and gynecology at the University of Central Florida, Orlando.