FDA approves HIV injectable Cabenuva initiation without oral lead-in
Initiating treatment may become easier for adults living with HIV.
The Food and Drug Administration has approved initiation of Cabenuva, a combination injectable, without a lead-in period of oral tablets, according to a press release from Janssen Pharmaceuticals. Cabenuva combines rilpivirine (Janssen) and cabotegravir (ViiV Healthcare). The change offers patients and clinicians an option for a streamlined entry to treatment without the burden of daily pill taking, according to the release.
Cabenuva injections may be given as few as six times a year to manage HIV, according to Janssen. HIV patients with viral suppression previously had to complete an oral treatment regimen before starting monthly or bimonthly injections.
“The injectable combination of cabotegravir, an HIV-1 integrase strand transfer inhibitor, and rilpivirine, an HIV-1 nonnucleoside reverse transcriptase inhibitor, is currently indicated as a complete treatment regimen to replace the current antiretroviral regimen for adults with HIV who are virologically suppressed,” according to the press release.
Janssen and ViiV are exploring the future possibility of an ultra–long-acting version of Cabenuva, which could reduce the frequency of injections even further, according to the press release.
Access may improve, but barriers persist
“Despite advances in HIV care, many barriers remain, particularly for the most vulnerable populations,” Lina Rosengren-Hovee, MD, of the University of North Carolina at Chapel Hill, said in an interview.
“Care engagement has improved with the use of bridge counselors, rapid ART [antiretroviral therapy] initiation policies, and contact tracing,” she said. “Similarly, increasing access to multiple modalities of HIV treatment is critical to increase engagement in care.”
“For patients, removing the oral lead-in primarily reduces the number of clinical visits to start injectable ART,” Dr. Rosengren-Hovee added. “It may also remove adherence barriers for patients who have difficulty taking a daily oral medication.”
But Dr. Rosengren-Hovee (who has no financial connection to the manufacturers) pointed out that access to Cabenuva may not be seamless. “Unless the medication is stocked in clinics, patients are not likely to receive their first injection during the initial visit. Labs are also required prior to initiation to ensure there is no contraindication to the medication, such as viral resistance to one of its components. Cost and insurance coverage are also likely to remain major obstacles.”
Dr. Rosengren-Hovee has disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Will serrated polyp detection rates be the next CRC metric?
A higher rate of serrated polyp detection was associated with a reduced risk of postcolonoscopy colorectal cancer, based on data from nearly 20,000 patients and 142 endoscopists, according to a study published in Gastrointestinal Endoscopy.
Higher rates of adenoma detection reduce the risk of postcolonoscopy colorectal cancer (PCCRC), but the data on detection rates for clinically significant serrated polyps and traditional serrated adenomas are limited, wrote Joseph C. Anderson, MD, of the Geisel School of Medicine at Dartmouth, Hanover, N.H., and colleagues.
“A unique challenge for endoscopists is that serrated polyps exhibit characteristics that can make them more difficult to detect than conventional adenomas. Thus, it is not surprising that several studies have demonstrated a wide variation in serrated polyp detection rates.” Even so, improved detection and resection of these polyps would likely improve CRC prevention, they noted.
The researchers reviewed data from the New Hampshire Colonoscopy Registry to explore the association between clinically significant serrated polyp (CSSP) detection rates and subsequent PCCRC risk.
The study population included 19,532 patients with follow-up events at least 6 months after an index colonoscopy. Of these, 128 cases of CRC were diagnosed at least 6 months after an index exam. CSSP was defined as any sessile serrated polyp, traditional serrated adenoma, or any large hyperplastic polyp (> 1 cm) or proximal hyperplastic polyp > 5 mm. The exams were performed by 142 endoscopists, 92 of whom were gastroenterologists. The 50 nongastroenterologists included general surgeons, colorectal surgeons, and family practitioners.
The primary outcome was PCCRC, defined as any CRC diagnosis 6 months or longer after an index exam. Clinically significant serrated polyp detection rate (CSSDR) was determined by dividing the total number of complete screening exams with adequate prep and at least one CSSP by the total number of complete exams with adequate prep. CSSDR was divided into tertiles of less than 3%, 3% up to 9%, and 9% or higher.
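The CSSDR calculation described above is simple arithmetic, and can be sketched as follows. The exam-record format here is hypothetical, used only to make the numerator and denominator explicit:

```python
def cssdr(exams):
    """Clinically significant serrated polyp detection rate (CSSDR):
    among complete screening exams with adequate prep, the fraction
    in which at least one CSSP was detected.

    `exams` is a list of dicts with keys `complete`, `adequate_prep`,
    and `cssp_count` -- a hypothetical record layout for illustration.
    """
    # Denominator: complete exams with adequate prep.
    eligible = [e for e in exams if e["complete"] and e["adequate_prep"]]
    if not eligible:
        return None
    # Numerator: eligible exams with at least one CSSP.
    with_cssp = sum(1 for e in eligible if e["cssp_count"] >= 1)
    return with_cssp / len(eligible)


def cssdr_tertile(rate):
    """Assign the study's CSSDR categories: <3%, 3% up to 9%, >=9%."""
    if rate < 0.03:
        return "<3%"
    elif rate < 0.09:
        return "3% to <9%"
    return ">=9%"
```

For example, an endoscopist with 2 eligible exams, one of which found a CSSP, would have a CSSDR of 50% and fall in the highest category.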
Overall, the risk for PCCRC 6 months or more after an index exam was significantly lower for exams performed by endoscopists with detection rates of 3% up to 9% and for those with detection rates of 9% or higher compared to those with detection rates below 3% (hazard ratios 0.57 and 0.39, respectively).
Significantly more gastroenterologists were in the higher CSSDR categories compared to nongastroenterologists (P = .00005). The percentages of gastroenterologists in the three tertiles from lowest to highest detection were 15.2%, 50.0%, and 34.8%, compared to 46.0%, 44.0%, and 10.0%, respectively, for nongastroenterologists.
In adjusted analysis, higher detection rates were associated with lower CRC risk across all time periods.
The researchers also found higher CSSDR categories associated with lower PCCRC risk for exams by endoscopists with adenoma detection rates (ADRs) of 25% or higher.
“It may be reasonable to question whether a separate serrated detection rate is needed in addition to ADR,” the researchers wrote in their discussion of the findings. “These data support our suggestion that endoscopists, even those with an ADR of 25% or higher, calculate their SDR at least once, a recommendation supported by a recent review of the American Gastroenterological Association,” they noted.
The study findings were limited by several factors, including the lack of information on specific endoscopic techniques, a lack of data on the molecular characteristics of the cancers, and potential residual confounding variables, the researchers noted.
However, the results were strengthened by the large number of participating endoscopists and by the longitudinal database that included detection rates for screening exams and detailed polyp pathology, they said. The results support the need for a serrated polyp detection rate benchmark to ensure complete polyp detection, and they validate the use of CSSDR as a quality measure that adds to the knowledge of both colonoscopy quality and the role of the serrated pathway in colorectal cancer, they concluded.
Serrated pathway serves as predictor
The current study is an important addition to the knowledge of colorectal cancer risk, Atsushi Sakuraba, MD, PhD, associate professor of medicine at the University of Chicago, said in an interview.
“In addition to the conventional adenoma pathway, the serrated pathway has been recognized to account for a significant portion of colorectal cancer, but whether detection of serrated polyps [is] associated with reduction of CRC remains unknown,” he said.
Dr. Sakuraba said he was not surprised by the study findings. Given that the serrated pathway is now considered to account for approximately 10%-20% of all CRC cases, higher detection rates should result in lower risk of CRC, he noted.
The findings support the value of CSSDR in clinical practice, said Dr. Sakuraba. “The study has shown that a clinically significant serrated polyps detection rate of 3% was associated with lower postcolonoscopy CRC, so endoscopists should introduce this to their practice in addition to adenoma detection rates,” he said.
However, Dr. Sakuraba acknowledged the limitations of the current study and emphasized that it needs to be reproduced in other cohorts. Prospective studies might be helpful as well, he said.
The study received no outside funding. The researchers and Dr. Sakuraba had no financial conflicts to disclose.
FROM GASTROINTESTINAL ENDOSCOPY
Neuropsychiatric outcomes similar for hospitalized COVID-19 patients and non–COVID-19 patients
Hospitalized COVID-19 survivors showed greater cognitive impairment 6 months later, compared with patients hospitalized for other causes, but the overall disease burden was similar, based on data from 85 adults with COVID-19.
Previous studies have shown that cognitive and neuropsychiatric symptoms can occur from 2-6 months after COVID-19 recovery, and such symptoms are known to be associated with hospitalization for other severe medical conditions, Vardan Nersesjan, MD, of Copenhagen University Hospital, and colleagues wrote.
However, it remains unknown if COVID-19 is associated with a unique pattern of cognitive and mental impairment compared with other similarly severe medical conditions, they said.
In a study published in JAMA Psychiatry (2022 Mar 23. doi: 10.1001/jamapsychiatry.2022.0284), the researchers identified 85 adult COVID-19 survivors and 61 controls with non-COVID medical conditions who were treated and released between July 2020 and July 2021. The COVID-19 patients and controls were matched for age, sex, and ICU status. Cognitive and neuropsychiatric outcomes were assessed using the Mini-International Neuropsychiatric Interview, the Montreal Cognitive Assessment (MoCA), neurologic examination, and a semistructured interview to determine subjective symptoms.
The primary outcomes were the total scores on the MoCA and any new-onset psychiatric diagnoses. Secondary outcomes included specific psychiatric diagnoses such as depression, neurologic examination findings, and self-reported neuropsychiatric and cognitive symptoms. The mean age of the COVID-19 patients was 56.8 years, and 42% were women.
At 6 months’ follow-up, cognitive status was significantly lower in COVID-19 survivors, compared with controls, based on total geometric mean MoCA scores (26.7 vs. 27.5, P = .01). However, cognitive status improved significantly from 19.2 at hospital discharge to 26.1 at 6 months in 15 of the COVID-19 patients (P = .004), the researchers noted.
New-onset psychiatric diagnoses occurred in 16 COVID-19 patients and 12 of the controls (19% vs. 20%); this difference was not significant.
Secondary outcomes were not significantly different at 6 months between the groups, with the exception of anosmia, which was significantly more common in the COVID-19 patients; however, the significance disappeared in adjusted analysis, the researchers said.
The study findings were limited by several factors including the inability to prove causality because of the case-control feature and by the inability to detect small differences in neuropsychiatric outcomes, the researchers noted.
However, the results were strengthened by the use of a prospectively matched control group with similar disease severity admitted to the same hospital in the same time frame. Although the overall burden of neuropsychiatric and neurologic symptoms and diagnoses appeared similar in COVID-19 patients and those with other medical conditions, more research in larger populations is needed to determine smaller differences in neuropsychiatric profiles, the researchers noted.
Study fills research gap
The study is important at this time because, although prolonged neuropsychiatric and cognitive symptoms have been reported after COVID-19, the field lacked prospective case-control studies with well-matched controls to investigate whether these outcomes differed from those seen in other critical illnesses that had also required hospitalization, corresponding author Michael E. Benros, MD, of the Mental Health Center, Copenhagen, said in an interview.
“I was surprised that there was a significant worse cognitive functioning among COVID-19 patients 6 months after symptom onset also when compared to this well-matched control group that had been hospitalized for non–COVID-19 illness, although the absolute difference between the groups in cognition score [was] small,” said Dr. Benros. “Another notable finding is the large improvement in cognitive functioning from discharge to follow-up,” he added on behalf of himself and fellow corresponding author Daniel Kondziella, MD.
The study results show that cognitive function affected by COVID-19 and critical illness as observed at discharge showed a substantial improvement at 6 months after symptom onset, said Dr. Benros. “However, the cognitive function was significantly worse among severely ill COVID-19 patients 6 months after symptom onset when compared to a matched control group of individuals hospitalized for non–COVID-19 illness, although this difference in cognitive function was rather small in absolute numbers, and smaller than what had been suggested by other studies that lacked control groups. Strikingly, neuropsychiatric disorders were similar across the two groups, which was also the case when looking at neuropsychiatric symptoms.
“Larger prospective case-control studies of neuropsychiatric and cognitive functioning after COVID-19, compared with matched controls are still needed to detect smaller differences, and more detailed cognitive domains, and with longer follow-up time, which we are currently conducting,” Dr. Benros said.
Controlled studies will help planning
Hospitalized COVID-19 survivors showed greater cognitive impairment 6 months later, compared with patients hospitalized for other causes, but the overall disease burden was similar, based on data from 85 adults with COVID-19.
Previous studies have shown that cognitive and neuropsychiatric symptoms can occur from 2-6 months after COVID-19 recovery, and such symptoms are known to be associated with hospitalization for other severe medical conditions, Vardan Nersesjan, MD, of Copenhagen University Hospital, and colleagues wrote.
However, it remains unknown if COVID-19 is associated with a unique pattern of cognitive and mental impairment compared with other similarly severe medical conditions, they said.
In a study published in JAMA Psychiatry (2022 Mar 23. doi: 10.1001/jamapsychiatry.2022.0284), the researchers identified 85 adult COVID-19 survivors and 61 controls with non-COVID medical conditions who were treated and released between July 2020 and July 2021. The COVID-19 patients and controls were matched for age, sex, and ICU status. Cognitive impairment was assessed using the Mini-International Neuropsychiatric Interview, the Montreal Cognitive Assessment (MoCA), neurologic examination, and a semistructured interview to determine subjective symptoms.
The primary outcomes were the total scores on the MoCA and any new-onset psychiatric diagnoses. Secondary outcomes included specific psychiatric diagnoses such as depression, neurologic examination findings, and self-reported neuropsychiatric and cognitive symptoms. The mean age of the COVID-19 patients was 56.8 years, and 42% were women.
At 6 months’ follow-up, cognitive status was significantly lower in COVID-19 survivors, compared with controls, based on total geometric mean MoCA scores (26.7 vs. 27.5, P = .01). However, cognitive status improved significantly from 19.2 at hospital discharge to 26.1 at 6 months in 15 of the COVID-19 patients (P = .004), the researchers noted.
New-onset psychiatric diagnoses occurred in 16 of 85 COVID-19 patients and 12 of 61 controls (19% vs. 20%); this difference was not significant.
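The reported proportions and the nonsignificant difference can be checked with a standard two-proportion z-test. This is an illustrative sketch only; the study's own statistical methods may differ.

```python
import math

# Counts reported in the study: new-onset psychiatric diagnoses in
# 16 of 85 COVID-19 patients vs. 12 of 61 matched controls.
n1, x1 = 85, 16
n2, x2 = 61, 12

p1, p2 = x1 / n1, x2 / n2  # ~0.188 (19%) and ~0.197 (20%)

# Two-proportion z-test using the pooled standard error.
pooled = (x1 + x2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal CDF via math.erf.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"{p1:.0%} vs. {p2:.0%}, z = {z:.2f}, p = {p_value:.2f}")
```

The p-value comes out near 0.90, far above the conventional 0.05 threshold, consistent with the article's statement that the difference was not significant.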
Secondary outcomes were not significantly different at 6 months between the groups, with the exception of anosmia, which was significantly more common in the COVID-19 patients; however, the significance disappeared in adjusted analysis, the researchers said.
The study findings were limited by several factors, including the case-control design, which precludes proof of causality, and a sample size too small to detect subtle differences in neuropsychiatric outcomes, the researchers noted.
However, the results were strengthened by the use of a prospectively matched control group with similar disease severity admitted to the same hospital in the same time frame. Although the overall burden of neuropsychiatric and neurologic symptoms and diagnoses appeared similar in COVID-19 patients and those with other medical conditions, more research in larger populations is needed to determine smaller differences in neuropsychiatric profiles, the researchers noted.
Study fills research gap
The study is important at this time because, although prolonged neuropsychiatric and cognitive symptoms have been reported after COVID-19, the field lacked prospective case-control studies with well-matched controls to investigate whether these outcomes differed from those seen in other critical illnesses that had also required hospitalization, corresponding author Michael E. Benros, MD, of the Mental Health Center, Copenhagen, said in an interview.
“I was surprised that there was a significant worse cognitive functioning among COVID-19 patients 6 months after symptom onset also when compared to this well-matched control group that had been hospitalized for non–COVID-19 illness, although the absolute difference between the groups in cognition score were small,” said Dr. Benros. “Another notable finding is the large improvement in cognitive functioning from discharge to follow-up,” he added on behalf of himself and fellow corresponding author Daniel Kondziella, MD.
The study results show that cognitive function affected by COVID-19 and critical illness as observed at discharge showed a substantial improvement at 6 months after symptom onset, said Dr. Benros. “However, the cognitive function was significantly worse among severely ill COVID-19 patients 6 months after symptom onset when compared to a matched control group of individuals hospitalized for non–COVID-19 illness, although this difference in cognitive function was rather small in absolute numbers, and smaller than what had been suggested by other studies that lacked control groups. Strikingly, neuropsychiatric disorders were similar across the two groups, which was also the case when looking at neuropsychiatric symptoms.
“Larger prospective case-control studies of neuropsychiatric and cognitive functioning after COVID-19, compared with matched controls are still needed to detect smaller differences, and more detailed cognitive domains, and with longer follow-up time, which we are currently conducting,” Dr. Benros said.
Controlled studies will help planning
“Lingering neuropsychiatric complications are common after COVID-19, but only controlled studies can tell us whether these complications are specific to COVID-19, rather than a general effect of having been medically ill,” Alasdair G. Rooney, MRCPsych MD PhD, of the University of Edinburgh, said in an interview. “The answer matters ultimately because COVID-19 is a new disease; societies and health care services need to be able to plan for its specific consequences.”
The health status of the control group is important as well. “Most previous studies had compared COVID-19 survivors against healthy controls or patients from a historical database. This new study compared COVID-19 survivors against those hospitalized for other medical causes over the same period,” Dr. Rooney said. “This is a more stringent test of whether COVID-19 has specific neurocognitive and neuropsychiatric consequences.
“The study found that new-onset neuropsychiatric diagnoses and symptoms were no more likely to occur after COVID-19 than after similarly severe medical illnesses,” Dr. Rooney said. “This negative finding runs counter to some earlier studies and may surprise some.” The findings need to be replicated in larger samples, but the current study shows the importance of prospectively recruiting active controls.
“In a subgroup analysis, some patients showed good improvement in cognitive scores between discharge and follow-up. While unsurprising, this is encouraging and suggests that the early postdischarge months are an important time for neurocognitive recovery,” Dr. Rooney noted.
“The findings suggest that COVID-19 may impair attention more selectively than other medical causes of hospitalization. COVID-19 survivors may also be at higher risk of significant overall cognitive impairment than survivors of similarly severe medical illnesses, after a similar duration,” said Dr. Rooney. “If the results are replicated by other prospective studies, they would suggest that there is something about COVID-19 that causes clinically significant neurocognitive difficulties in a minority of survivors.
“Larger well-controlled studies are required, with longer follow-up and more detailed neurocognitive testing,” as the duration of impairment and scope for further recovery are not known, Dr. Rooney added. Also unknown is whether COVID-19 affects attention permanently, or whether recovery is simply slower after COVID-19 compared to other medical illnesses.
“Knowing who is at the greatest risk of severe cognitive impairment after COVID-19 is important and likely to allow tailoring of more effective shielding strategies,” said Dr. Rooney. “This study was conducted before the widespread availability of vaccines for COVID-19. Long-term neuropsychiatric outcomes in vaccinated patients remain largely unknown. Arguably, these are now more important to understand, as future COVID-19 waves will occur mainly among vaccinated individuals.”
The study was supported by the Lundbeck Foundation and the Novo Nordisk Foundation. Lead author Dr. Nersesjan had no financial conflicts to disclose. Dr. Benros reported grants from Lundbeck Foundation and Novo Nordisk Foundation during the conduct of the study. Dr. Rooney had no financial conflicts to disclose.
This article was updated 3/25/22.
FROM JAMA PSYCHIATRY
Severe obesity reduces responses to TNF inhibitors and non-TNF biologics to similar extent
No type of biologic medication appears superior for patients with rheumatoid arthritis across different body mass index (BMI) groupings; both obesity and underweight reduced treatment response after 6 months of use, according to findings from registry data on nearly 6,000 individuals.
Although interest in the precision use of biologics for RA is on the rise, few patient characteristics have been identified to inform therapeutic decisions, Joshua F. Baker, MD, of the Philadelphia Veterans Affairs Medical Center and the University of Pennsylvania, Philadelphia, and colleagues wrote.
Previous studies on the effect of obesity on RA treatments have been inconclusive, and a comparison of RA treatments across BMI categories would provide more definitive guidance, they said.
In a study published in Arthritis Care & Research, the researchers used the CorEvitas U.S. observational registry (formerly known as Corrona) to identify adults who initiated second- or third-line treatment for RA with tumor necrosis factor inhibitors (n = 2,891) or non-TNFi biologics (n = 3,010) between 2001 and April 30, 2021.
The study population included adults diagnosed with RA; those with low disease activity or without a 6-month follow-up visit were excluded. BMI was categorized as underweight (less than 18.5 kg/m²), normal weight (18.5-25 kg/m²), overweight (25-30 kg/m²), obese (30-35 kg/m²), and severely obese (35 kg/m² or higher). The three measures of response were the achievement of low disease activity (LDA), a change at least as large as the minimum clinically important difference (MCID), and the absolute change on the Clinical Disease Activity Index (CDAI) from baseline.
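The BMI cutoffs described above can be expressed as a small helper. The category names and thresholds come directly from the study; the function name itself is illustrative.

```python
def bmi_category(bmi: float) -> str:
    """Map a BMI value (kg/m^2) to the categories used in the study."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal weight"
    if bmi < 30:
        return "overweight"
    if bmi < 35:
        return "obese"
    return "severely obese"

print(bmi_category(24.9))  # normal weight
print(bmi_category(35.0))  # severely obese
```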
A total of 2,712 patients were obese or severely obese at the time of treatment initiation.
Overall, patients with severe obesity had significantly lower odds of achieving either LDA or a change at least as large as the MCID, as well as less improvement in CDAI score, compared with other BMI categories. However, in adjusted models, the differences in these outcomes for patients with severe obesity were no longer statistically significant, whereas underweight was associated with lower odds of achieving LDA (odds ratio, 0.32; P = .005) or a change at least as large as the MCID (OR, 0.40; P = .005). The adjusted model also showed lesser improvement on CDAI in underweight patients, compared with patients of normal weight (P = .006).
Stratification by TNFi and non-TNFi therapies showed no differences in clinical response rates across BMI categories.
The study represents the first evidence of a similar reduction in therapeutic response with both TNFi and non-TNFi in severely obese patients, with estimates for non-TNFi biologics that fit within the 95% confidence interval for TNFi biologics, the researchers wrote. “Our current study suggests that a lack of response among obese patients is not specific to TNFi therapies, suggesting that this phenomenon is not biologically specific to the TNF pathway.”
The study findings were limited by several factors, including the focus on patients who were not naive to biologic treatments and by the relatively small number of underweight patients (n = 57), the researchers noted. Other limitations include unaddressed mediators of the relationship between obesity and disease activity and lack of data on off-label dosing strategies.
However, the results were strengthened by the large sample size, control for a range of confounding factors, and the direct comparison of RA therapies.
The researchers concluded that BMI should not influence the choice of TNF versus non-TNF therapy in terms of clinical efficacy.
The study was supported by the Corrona Research Foundation. Dr. Baker disclosed receiving support from a Veterans Affairs Clinical Science Research and Development Merit Award and a Rehabilitation Research and Development Merit Award, and consulting fees from Bristol-Myers Squibb, Pfizer, CorEvitas, and Burns-White. Two coauthors reported financial ties to CorEvitas.
FROM ARTHRITIS CARE & RESEARCH
Maternal obesity promotes risk of perinatal death
The infants of obese pregnant women had a 55% higher adjusted perinatal death rate, compared with those of normal-weight pregnant women, but lower gestational age had a mediating effect, based on data from nearly 400,000 mother-infant pairs.
“While some obesity-related causes of fetal death are known, the exact pathophysiology behind the effects of obesity on perinatal death are not completely understood,” Jeffrey N. Bone, MD, of the University of British Columbia, Vancouver, and colleagues wrote. Higher body mass index prior to pregnancy also is associated with preterm delivery, but the effect of gestational age on the association between BMI and infant mortality has not been well explored.
In a study published in PLOS ONE, the researchers reviewed data from nearly 400,000 women obtained through the British Columbia Perinatal Database Registry, which collects obstetric and neonatal data from hospital charts and from delivery records of home births. Births at less than 20 weeks’ gestation and late pregnancy terminations were excluded.
BMI was based on self-reported prepregnancy height and weight; of the 392,820 included women, 12.8% were classified as obese, 20.6% were overweight, 60.6% were normal weight, and 6.0% were underweight. Infants of women with higher BMI had a lower gestational age at delivery. Perinatal mortality occurred in 1,834 pregnancies (0.5%). In adjusted analysis, infant perinatal death was significantly more likely for obese women (adjusted odds ratio, 1.55) and overweight women (aOR, 1.22).
However, 63.1% of this association in obese women was mediated by gestational age of the infant at delivery, with aORs of 1.32 and 1.18 for natural indirect and natural direct effects, respectively, compared with that of normal-weight women. Similar but lesser effects were noted for overweight women, with aORs of 1.11 and 1.10, respectively. “Direct effects were higher, and mediation was lower for stillbirth than for neonatal death, where the total effect was entirely indirect,” but the confidence intervals remained consistent with the primary analyses, the researchers noted.
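As a consistency check (not the authors' actual method, which used formal mediation analysis), the reported decomposition can be verified arithmetically: natural indirect and direct effects multiply on the odds-ratio scale to recover the total effect, and the proportion mediated corresponds to the ratio of log odds ratios.

```python
import math

# Adjusted odds ratios reported in the study for obese vs. normal-weight women:
total_or = 1.55     # total effect of obesity on perinatal death
indirect_or = 1.32  # natural indirect effect (via gestational age at delivery)
direct_or = 1.18    # natural direct effect

# Effects multiply on the odds-ratio scale: indirect x direct ~= total.
print(f"indirect x direct = {indirect_or * direct_or:.2f}")  # ~1.56

# Proportion mediated on the log-odds scale = log(indirect OR) / log(total OR).
prop_mediated = math.log(indirect_or) / math.log(total_or)
print(f"proportion mediated = {prop_mediated:.1%}")  # ~63.3%
```

The result is about 63.3%, matching the 63.1% reported in the study up to rounding of the published odds ratios.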
The increased perinatal death rates of infants of obese and overweight women reflect data from previous studies, but the current study’s use of mediation analysis offers new insight on the mechanism by which perinatal death rates increase with higher maternal BMI, the researchers wrote.
The study findings were limited by several factors including the need to consider potential common risk factors for both perinatal death and early delivery that would be affected by maternal obesity, the researchers noted. Other limitations included the use of gestational age at stillbirth, which represents an approximation of fetal death in some cases, and the use of self-reports for prepregnancy maternal BMI.
However, the results were strengthened by the large, population-based design and information on potential confounding variables, and suggest that early gestational age at delivery may play a role in maternal obesity-related perinatal death risk.
“To better inform the pregnancy management in obese women, further studies should continue to disentangle the causal pathways under which obesity increases the risk of perinatal death, including, for example, gestational diabetes and other obesity-related pregnancy complications,” they concluded.
More testing and counseling are needed
The current study is important because obesity rates continue to increase in the reproductive-age population, Marissa Platner, MD, of Emory University, Atlanta, said in an interview. “Obesity has become a known risk factor for adverse pregnancy outcomes, specifically the risk of stillbirth and perinatal death. However, the authors correctly point out that the underlying cause of these perinatal deaths in women with obesity is unclear. Additionally, ACOG recently updated their clinical guidelines to recommend routine antenatal testing for women with obesity due to these increased rates of stillbirth.
“I was not surprised by these findings; similar to previous literature, the risks of perinatal death seem to have a dose-response relationship with increasing BMI. We know that women with prepregnancy obesity are also at higher risk of perinatal complications in the preterm period, which would increase the risk of perinatal death,” Dr. Platner said.
“I think the take-home message for clinicians is twofold,” Dr. Platner said. First, “we need to take the updated antenatal testing guidelines from ACOG very seriously and implement these in our practices.” Second, “in the preconception or early antepartum period, these patients should be thoroughly counseled on the associated risks of pregnancy and discuss appropriate gestational weight gain guidelines and lifestyle modifications.”
However, “additional research is needed in a U.S. population with higher rates of obesity to determine the true effects of obesity on perinatal deaths and to further elucidate the underlying pathophysiology and disease processes that may lead to increased risk of both stillbirth and perinatal deaths,” Dr. Platner emphasized.
*This story was updated on March 23, 2022.
The study was supported by the Sick Kids Foundation and the Canadian Institute of Health Research. The researchers had no financial conflicts to disclose. Dr. Platner had no financial conflicts to disclose.
FROM PLOS ONE
Eating olive oil may slow CLL disease progression
Olive oil is a major component of the Mediterranean diet, and olive phenols have been shown to convey antioxidant, anti-inflammatory, anticancer, neuroprotective, and antidiabetic effects by modulating various molecular pathways, Andrea Paola Rojas Gil, PhD, of the University of Peloponnese, Tripoli, Greece, and colleagues wrote.
In most patients, CLL is incurable, but those at the early stages do not need immediate therapy and may benefit from an intervention to prevent disease progression, the authors wrote. Previous research suggested that dietary intervention exerts a salutary effect on early CLL, and in vitro studies suggested that oleocanthal, a component of extra virgin olive oil, induced anticancer activity.
In a study published in Frontiers in Oncology, the researchers enrolled adults with early-stage CLL who had not undergone chemotherapy or other treatment. All patients adhered to a Mediterranean-style diet.
After a washout period of 9-12 months, the researchers randomized 22 patients to extra virgin olive oil high in oleocanthal and oleacein (high OC/OL-EVOO). Patients in the intervention group consumed 40 mL/day of high OC/OL-EVOO before meals. Their average age was 71 years; 10 were women and 12 were men.
The primary outcomes included changes in hematological, biochemical, and apoptotic markers. After 6 months, patients in the intervention group showed a statistically significant reduction in white blood cells and lymphocyte count, compared with measurements taken 3 months before the intervention. The WBC decrease was greatest among patients with the highest WBC levels at baseline.
As for biochemical markers, the researchers observed a significant decrease in glucose levels during the intervention, but no significant effects on metabolic indexes or renal function.
After 3 months and also after 6 months of the olive oil intervention, patients showed a significant increase in the apoptotic markers ccK18 and Apo1-Fas (P ≤ .05 for both), as well as an increase in the cell cycle negative regulator p21. The dietary intervention also was associated with significant decreases in expression of the antiapoptotic protein survivin and in cyclin D, a positive cell cycle regulator protein.
Further, patients who had a high ccK18 level at baseline showed a significantly greater increase in ccK18 after the intervention, compared with those with lower ccK18 at baseline (P = .001).
Notably, “a negative correlation of the WBC at the end of the dietary intervention with the fluctuation of the protein expression of the apoptotic marker ccK18 (final – initial) was observed,” the researchers wrote in their discussion.
The study findings were limited by several factors including the small sample size, short intervention time, and pilot design, the researchers said. Other limitations include the possible effect of other unmeasured properties of olive oil.
However, the results reflect previous studies showing the benefits of a Mediterranean-type diet, and they represent the first clinical trial to indicate possible beneficial effects from oleocanthal and oleacein on the progression of CLL. Therefore, the authors concluded, the study is worthy of a large, multicenter trial.
Pilot data merit more research
In an interview, corresponding author Prokopios Magiatis, PhD, noted that CLL is “the most commonly diagnosed adult leukemia in Western countries and is responsible for about one in four cases of all leukemias.” CLL remains incurable in most patients, and ways to delay disease progression are needed.
“Oleocanthal is the active ingredient of early harvest olive oil with proven anticancer activities in vitro and in vivo,” Dr. Magiatis explained. “For this reason, it was a unique challenge to investigate the anticancer activity of this compound for the first time in humans through the dietary consumption of specifically selected olive oil.” He expressed surprise at the beneficial effects of high-oleocanthal olive oil, not only to the white blood cells, but also to glucose levels.
“It seems that oleocanthal can activate mechanisms related to the apoptosis of cancer cells, and also mechanisms related to blood glucose regulation, without affecting any normal cells of the body,” he said. “All anticancer drugs usually have severe side effects; however, the administration of 25 mg of oleocanthal through the dietary consumption of olive oil did not present any harmful effects for at least 6 months of everyday use.
“The addition of naturally produced high-oleocanthal olive oil in the diet of early-stage CLL patients at a dose of three tablespoons per day [40 mL] is a practice that may lower the cancerous white blood cells of the patients without any risk,” said Dr. Magiatis. “High-oleocanthal early-harvest olive oil has been consumed for centuries and may be a key to the longevity of several Mediterranean populations.
“In our study, the number of white blood cells returned to the number it was one year before the initiation of the study; this clearly shows that it could be a significant factor in delaying the progression of the disease,” he said.
The current trial was a pilot study in one hospital with only 22 patients for 6 months, said Dr. Magiatis. “We are currently preparing the expansion of the study to other hospitals and other countries, and we aim to include at least 100 patients for at least 1 year, to validate the already-obtained beneficial results.”
The clinical trial is supported by the nonprofit organization World Olive Center for Health, he added.
The current study received no outside funding. The researchers had no financial conflicts to disclose.
FROM FRONTIERS IN ONCOLOGY
Step test signals exercise capacity in asthma patients
The incremental step test is a highly reliable measure of exercise capacity in patients with moderate to severe asthma, based on data from 50 individuals.
Asthma patients often limit their physical exercise to avoid respiratory symptoms, which creates a downward spiral of reduced exercise capacity and ability to perform activities of daily living, wrote Renata Cléia Claudino Barbosa of the University of Sao Paulo and colleagues. “However, exercise training has been shown to be an important adjunctive therapy for asthma treatment that improves exercise capacity and health-related quality of life,” they wrote.
Step tests have been identified as a simpler, less costly alternative to cardiopulmonary exercise tests for measuring exercise capacity in patients with chronic obstructive pulmonary disease, but their effectiveness for asthma patients had not been investigated, the researchers said.
In a study published in Pulmonology, the researchers recruited 50 adults with moderate or severe asthma during routine care at a university hospital. The participants had been clinically stable for at least 6 months, with no hospitalizations, emergency care, or medication changes in the past 30 days. All participants received short-acting and long-acting bronchodilators and inhaled corticosteroids. The patients ranged in age from 18 to 60 years, with body mass index measures from 20 kg/m2 to 40 kg/m2.
Participants were randomized to tests on 2 nonconsecutive days at least 48 hours apart. On the first day, patients completed asthma control questionnaires and lung function tests, then performed either a cardiopulmonary exercise test (CPET) or two incremental step tests (IST-1 and IST-2). On the second day, they performed the other test. Participants were instructed to use bronchodilators 15 minutes before each test.
The step test involved stepping up and down on a 20-cm high wooden bench.
Overall, the peak oxygen uptakes were 27.6 mL/kg per minute for the CPET, 22.3 mL/kg per minute for the first IST, and 23.3 mL/kg per minute for the second IST.
“The IST with better performance regarding the peak VO2 value was called the best IST (b-IST),” and these values were used for validity and interpretability analyses, the researchers wrote.
In a reliability analysis, the intraclass correlation coefficient (ICC) was 0.93 and the measurement error was 2.5%; construct validity for peak VO2 against the CPET was significant (P < .001), the researchers said. The ICC for total number of steps was 0.88.
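For readers who want to see what an ICC of this kind measures, the calculation can be sketched in a few lines. The paper's exact ICC form is not specified here, so the sketch below uses the common ICC(3,1) consistency form (two-way mixed, single measurement) with hypothetical test-retest data, not the study's data:

```python
def icc31(data):
    """ICC(3,1): two-way mixed, consistency, single measurement.

    data is a list of rows, one row per subject, one column per
    repeated test (e.g. [IST-1, IST-2] peak VO2 for each subject).
    """
    n, k = len(data), len(data[0])
    grand = sum(sum(row) for row in data) / (n * k)
    subj_means = [sum(row) / k for row in data]
    rater_means = [sum(row[j] for row in data) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in data for x in row)
    ss_subj = k * sum((m - grand) ** 2 for m in subj_means)
    ss_rater = n * sum((m - grand) ** 2 for m in rater_means)
    ss_err = ss_total - ss_subj - ss_rater
    bms = ss_subj / (n - 1)                 # between-subjects mean square
    ems = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (bms - ems) / (bms + (k - 1) * ems)

# Hypothetical peak VO2 pairs (mL/kg per minute) for two step tests:
example = [[22.0, 23.1], [18.5, 19.0], [27.3, 26.8], [24.1, 24.9]]
print(round(icc31(example), 2))
```

An ICC near 1 means subjects keep the same rank and spacing across the two tests; values above roughly 0.9, as reported here, are conventionally read as excellent reliability.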
Notably, “the present study also demonstrated that IST is not interchangeable with the CPET since the subjects with moderate to severe asthma did not reach the maximal exercise capacity,” the researchers said. However, “we believe that the IST is superior to walking tests in subjects with asthma because it is an activity that requires greater ventilation in a subject’s daily life,” they said.
The study findings were limited by several factors, including the relatively small study population and the small number of male patients, which may limit generalizability to males with asthma or other asthma endotypes, the researchers said. However, the results were strengthened by the randomized design, and support the value of the IST as a cost-effective option for assessing exercise capacity, preferably with two step tests to minimize the learning effect, they said. Additional research is needed to determine whether IST can assess responsiveness to pharmacological and nonpharmacological treatments in asthma patients, they noted.
The study was supported by the Sao Paulo Research Foundation, Conselho Nacional de Pesquisa, and Coordination of Improvement of Higher Level Personnel–Brazil. The researchers had no financial conflicts to disclose.
FROM PULMONOLOGY
Shunt diameter predicts liver function after obliteration procedure
Patients with cirrhosis and larger spontaneous shunt diameters showed a significantly greater increase in hepatic venous pressure gradient (HVPG) following balloon-occluded retrograde transvenous obliteration compared to patients with smaller shunt diameters, based on data from 34 adults.
Portal hypertension remains a key source of complications that greatly impact quality of life in patients with cirrhosis, wrote Akihisa Tatsumi, MD, of the University of Yamanashi, Japan, and colleagues. These patients sometimes develop spontaneous portosystemic shunts (SPSS) to lower portal pressure, but these natural shunts are an incomplete solution – one that may contribute to liver dysfunction by reducing hepatic portal blood flow. However, the association of SPSS with liver functional reserve remains unclear, the researchers said.
Balloon-occluded retrograde transvenous obliteration (BRTO) is gaining popularity as a treatment for SPSS in patients with cirrhosis but determining the patients who will benefit from this procedure remains a challenge, the researchers wrote. “Apart from BRTO, some recent studies have reported the impact of the SPSS diameter on the future pathological state of the liver,” which prompted the question of whether SPSS diameter plays a role in predicting portal hypertension–related liver function at baseline and after BRTO, the researchers explained.
In their study, published in JGH Open, the researchers identified 34 cirrhotic patients with SPSS who underwent BRTO at a single center in Japan between 2006 and 2018; all of the patients were available for follow-up at least 6 months after the procedure.
The reasons for BRTO were intractable gastric varices in 18 patients and refractory hepatic encephalopathy with shunt in 16 patients; the mean observation period was 1,182 days (3.24 years). The median age of the patients was 66.5 years, and 53% were male. A majority (76%) of the patients had decompensated cirrhosis with Child-Pugh (CP) scores of B or C, and the maximum diameter of SPSS increased significantly with increases in CP scores (P < .001), the researchers noted.
Overall, at 6 months after BRTO, patients showed significant improvements in liver function from baseline. However, the improvement rate was lower in patients whose shunt diameter was 10 mm or less, and improvement was greatest when the shunt diameter was between 10 mm and 20 mm. “Because the CP score is a significant confounding factor of the SPSS diameter, we next evaluated the changes in liver function classified by CP scores,” the researchers wrote. In this analysis, the post-BRTO changes in liver function in patients with CP scores of A or B still showed an association between improvement in liver function and larger shunt diameter, but this relationship did not extend to patients with CP scores of C, the researchers said.
A larger shunt diameter also was significantly associated with a greater increase in HVPG after balloon occlusion (P = .005).
“Considering that patients with large SPSS diameters might gain higher portal flow following elevation of HVPG after BRTO, it is natural that the larger the SPSS diameter, the greater the improvements in liver function,” the researchers wrote in their discussion of the findings. “However, such a clear correlation was evident only when the baseline CP scores were within A or B, and not in C, indicating that the improvement of liver function might not parallel HVPG increase in some CP C patients,” they noted.
The study was limited by several factors including the retrospective design from a single center and its small sample size, the researchers noted. Other limitations included selecting and measuring only the largest SPSS of each patient and lack of data on the impact of SPSS diameter on overall survival, they said.
However, the results suggest that SPSS diameter may serve not only as an indicator of portal hypertension involvement at baseline, but also as a useful clinical predictor of liver function after BRTO, they concluded.
Study supports potential benefits of BRTO
“While the association between SPSS and complications of portal hypertension such as variceal bleeding and hepatic encephalopathy have been known, data are lacking in regard to characteristics of SPSS that are most dysfunctional, and whether certain patients may benefit from BRTO to occlude these shunts,” Khashayar Farsad, MD, of Oregon Health & Science University, Portland, said in an interview.
“The results are in many ways expected based on anticipated impact of larger versus smaller SPSS in overall liver function,” Dr. Farsad noted. “The study, however, does show a nice correlation between several factors involved in liver function and their changes depending on shunt diameter, correlated with changes in the relative venous pressure gradient across the liver,” he said. “Furthermore, the finding that changes were most evident in those with relatively preserved liver function [Child-Turcotte-Pugh grades A and B] suggests less of a relationship between SPSS and liver function in those with more decompensated liver disease,” he added.
“The impact of the study is significantly limited by its retrospective design, small numbers with potential patient heterogeneity, and lack of a control cohort,” said Dr. Farsad. However, “The major take-home message for clinicians is a potential signal that the size of the SPSS at baseline may predict the impact of the SPSS on liver function, and therefore, the potential benefit of a procedure such as BRTO to positively influence this,” he said. “Additional research with larger cohorts and a prospective study design would be warranted, however, before this information would be meaningful in daily clinical decision making,” he emphasized.
The study was supported by the Research Program on Hepatitis of the Japanese Agency for Medical Research and Development. The researchers had no financial conflicts to disclose. Dr. Farsad disclosed research support from W.L. Gore & Associates, Guerbet LLC, Boston Scientific, and Exelixis; serving as a consultant for NeuWave Medical, Cook Medical, Guerbet LLC, and Eisai, and holding equity in Auxetics Inc.
FROM JGH OPEN
Kawasaki disease guideline highlights rheumatology angles
All Kawasaki disease (KD) patients should be treated first with intravenous immunoglobulin, according to an updated guideline issued jointly by the American College of Rheumatology and the Vasculitis Foundation.
KD has low mortality when treated appropriately, guideline first author Mark Gorelik, MD, assistant professor of pediatrics at Columbia University, New York, and colleagues wrote.
The update is important at this time because new evidence continues to emerge in the clinical management of KD, Dr. Gorelik said in an interview.
“In addition, this guideline approaches Kawasaki disease from a perspective of acting as an adjunct to the already existing and excellent American Heart Association guidelines by adding information in areas that rheumatologists may play a role,” Dr. Gorelik said. “This is specifically regarding patients who may require additional therapy beyond standard IVIg, such as patients who may be at higher risk of morbidity from disease and patients who have refractory disease,” he explained.
The guideline, published in Arthritis & Rheumatology, includes 11 recommendations, 1 good practice statement, and 1 ungraded position statement. The good practice statement emphasizes that all patients with KD should be initially treated with IVIg.
The position statement advises that either nonglucocorticoid immunosuppressive therapy or glucocorticoids may be used for patients with acute KD whose fever persists despite repeated IVIg treatment. No clinical evidence currently supports the superiority of either nonglucocorticoid immunosuppressive therapy or glucocorticoids; therefore, the authors support the use of either based on what is appropriate in any given clinical situation. Although optimal dosage and duration of glucocorticoids have yet to be determined in a U.S. population, the authors described a typical glucocorticoid dosage as starting prednisone at 2 mg/kg per day, with a maximum of 60 mg/day, and dose tapering over 15 days.
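The weight-based dosing described above is simple arithmetic and can be sketched as follows. The 2 mg/kg starting dose and the 60 mg/day cap come from the guideline text; the linear taper shape is an assumption for illustration, since the text here specifies only the 15-day duration:

```python
def prednisone_daily_dose(weight_kg, mg_per_kg=2.0, max_mg=60.0):
    """Weight-based starting dose, capped at the daily maximum."""
    return min(weight_kg * mg_per_kg, max_mg)

def taper_schedule(start_mg, days=15):
    """Hypothetical linear taper over the stated duration.

    The guideline text here gives only the 15-day duration, not the
    taper shape, so equal daily decrements are an assumption.
    """
    step = start_mg / days
    return [round(start_mg - step * i, 1) for i in range(days)]

# A 20-kg child starts at 40 mg/day; a 40-kg child hits the 60 mg cap.
print(prednisone_daily_dose(20), prednisone_daily_dose(40))
print(taper_schedule(prednisone_daily_dose(40)))
```

In practice the actual taper would follow the treating clinician's protocol; the point of the sketch is only that the cap binds for patients above 30 kg (2 mg/kg × 30 kg = 60 mg).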
The 11 recommendations consist of 7 strong and 4 conditional recommendations. The strong recommendations focus on prompt treatment of incomplete KD, treatment with aspirin, and obtaining an echocardiogram in patients with unexplained macrophage activation syndrome or shock. The conditional recommendations support using established therapy promptly at disease onset, then identifying cases in which additional therapy is needed.
Dr. Gorelik highlighted four clinical takeaways from the guideline. First, “patients with higher risk for complications do exist in Kawasaki disease, and that these patients can be treated more aggressively,” he said. “Specifically, patients with aneurysms seen at first ultrasound, and patients who are under 6 months, are more likely to have progressive and/or refractory disease; these patients can be treated with an adjunctive short course of corticosteroids.”
Second, “the use of high-dose aspirin for patients with Kawasaki disease does not have strong basis in evidence. While aspirin itself of some dose is necessary for patients with Kawasaki disease, use of either high- or low-dose aspirin has the same outcome for patients, and a physician may choose either of these in practice,” he said.
Third, “we continue to recommend that refractory patients with Kawasaki disease be treated with a second dose of IVIg; however, there are many scenarios in which a physician may choose either corticosteroids [either a single high dose of >10 mg/kg, or a short moderate-dose course of 2 mg/kg per day for 5-7 days] or a biologic agent such as infliximab. ... These are valid choices for therapy in patients with refractory Kawasaki disease,” he emphasized.
Fourth, “physicians should discard the idea of treating before [and conversely, not treating after] 10 days of fever,” Dr. Gorelik said. “Patients with Kawasaki disease should be treated as soon as the diagnosis is made, regardless of whether this patient is on day 5, day 12, or day 20 of symptoms.”
Update incorporates emerging evidence
Potential barriers to implementing the guideline in practice include the challenge of weaning doctors from practices that are habitual in medicine, Dr. Gorelik said. “One of these is the use of high-dose aspirin for Kawasaki disease; a number of studies have shown over the past decade or more that high-dose aspirin has no greater effect than lower-dose aspirin for Kawasaki disease. Despite all of these studies, the use of high-dose aspirin continued. High-dose aspirin for Kawasaki disease was used in the era prior to use of IVIg as an anti-inflammatory agent. However, it has poor efficacy in this regard, and the true benefit for aspirin is for anticoagulation for patients at risk of a clot, and this is just as effective in lower doses. Expressing this in a guideline could help to change practices by helping physicians understand not only what they are guided to do, but why.”
Additional research is needed to better identify high-risk patients in non-Japanese populations, he noted. “While studies from Japan suggest that higher-risk patients can be identified based on various parameters, these have not been well replicated in non-Japanese populations. Good research that identifies which patients may be more at risk in other populations would be helpful to more precisely target high-risk therapy.”
Other research needs include a clearer understanding of the best therapies for refractory patients, Dr. Gorelik said. “One area of the most difficulty was determining whether patients with refractory disease should have repeated IVIg or a switch to glucocorticoids and biologic agents. Some of this research is underway, and some was published just as these guidelines were being drawn, and this particular area is one that is likely to change significantly. While currently we recommend a repeated dose of IVIg, it is likely that over the very near term, the use of repeated IVIg in KD will be curtailed” because of concerns such as the relatively high rate of hemolysis. Research to identify which therapy has a noninferior effect with a superior risk profile is needed; such research “will likely result in a future iteration of these guidelines specifically related to this question,” he concluded.
The KD guideline is the final companion to three additional ACR/VF vasculitis guidelines that were released in July 2021. The guideline research received no outside funding. The researchers had no financial conflicts to disclose.
All Kawasaki disease (KD) patients should be treated first with intravenous immunoglobulin, according to an updated guideline issued jointly by the American College of Rheumatology and the Vasculitis Foundation.
KD has low mortality when treated appropriately, guideline first author Mark Gorelik, MD, assistant professor of pediatrics at Columbia University, New York, and colleagues wrote.
The update is important at this time because new evidence continues to emerge in the clinical management of KD, Dr. Gorelik said in an interview.
“In addition, this guideline approaches Kawasaki disease from a perspective of acting as an adjunct to the already existing and excellent American Heart Association guidelines by adding information in areas that rheumatologists may play a role,” Dr. Gorelik said. “This is specifically regarding patients who may require additional therapy beyond standard IVIg, such as patients who may be at higher risk of morbidity from disease and patients who have refractory disease,” he explained.
The guideline, published in Arthritis & Rheumatology, includes 11 recommendations, 1 good practice statement, and 1 ungraded position statement. The good practice statement emphasizes that all patients with KD should be initially treated with IVIg.
The position statement advises that either nonglucocorticoid immunosuppressive therapy or glucocorticoids may be used for patients with acute KD whose fever persists despite repeated IVIg treatment. No clinical evidence currently supports the superiority of either nonglucocorticoid immunosuppressive therapy or glucocorticoids; therefore, the authors support the use of either based on what is appropriate in any given clinical situation. Although optimal dosage and duration of glucocorticoids have yet to be determined in a U.S. population, the authors described a typical glucocorticoid dosage as starting prednisone at 2 mg/kg per day, with a maximum of 60 mg/day, and dose tapering over 15 days.
The 11 recommendations consist of 7 strong and 4 conditional recommendations. The strong recommendations focus on prompt treatment of incomplete KD, treatment with aspirin, and obtaining an echocardiogram in patients with unexplained macrophage activation syndrome or shock. The conditional recommendations support using established therapy promptly at disease onset, then identifying cases in which additional therapy is needed.
Dr. Gorelik highlighted four clinical takeaways from the guideline. First, “patients with higher risk for complications do exist in Kawasaki disease, and that these patients can be treated more aggressively,” he said. “Specifically, patients with aneurysms seen at first ultrasound, and patients who are under 6 months, are more likely to have progressive and/or refractory disease; these patients can be treated with an adjunctive short course of corticosteroids.”
Second, “the use of high-dose aspirin for patients with Kawasaki disease does not have strong basis in evidence. While aspirin itself of some dose is necessary for patients with Kawasaki disease, use of either high- or low-dose aspirin has the same outcome for patients, and a physician may choose either of these in practice,” he said.
Third, “we continue to recommend that refractory patients with Kawasaki disease be treated with a second dose of IVIg; however, there are many scenarios in which a physician may choose either corticosteroids [either a single high dose of >10 mg/kg, or a short moderate-dose course of 2 mg/kg per day for 5-7 days] or a biologic agent such as infliximab. ... These are valid choices for therapy in patients with refractory Kawasaki disease,” he emphasized.
Fourth, “physicians should discard the idea of treating before [and conversely, not treating after] 10 days of fever,” Dr. Gorelik said. “Patients with Kawasaki disease should be treated as soon as the diagnosis is made, regardless of whether this patient is on day 5, day 12, or day 20 of symptoms.”
Update incorporates emerging evidence
Potential barriers to implementing the guideline in practice include the challenge of weaning doctors from practices that are habitual in medicine, Dr. Gorelik said. “One of these is the use of high-dose aspirin for Kawasaki disease; a number of studies have shown over the past decade or more that high-dose aspirin has no greater effect than lower-dose aspirin for Kawasaki disease. Despite all of these studies, the use of high-dose aspirin continued. High-dose aspirin for Kawasaki disease was used in the era prior to use of IVIg as an anti-inflammatory agent. However, it has poor efficacy in this regard, and the true benefit for aspirin is for anticoagulation for patients at risk of a clot, and this is just as effective in lower doses. Expressing this in a guideline could help to change practices by helping physicians understand not only what they are guided to do, but why.”
Additional research is needed to better identify high-risk patients in non-Japanese populations, he noted. “While studies from Japan suggest that higher-risk patients can be identified based on various parameters, these have not been well replicated in non-Japanese populations. Good research that identifies which patients may be more at risk in other populations would be helpful to more precisely target high-risk therapy.”
Other research needs include a clearer understanding of the best therapies for refractory patients, Dr. Gorelik said. “One area of the most difficulty was determining whether patients with refractory disease should have repeated IVIg or a switch to glucocorticoids and biologic agents. Some of this research is underway, and some was published just as these guidelines were being drawn, and this particular area is one that is likely to change significantly. While currently we recommend a repeated dose of IVIg, it is likely that over the very near term, the use of repeated IVIg in KD will be curtailed” because of concerns such as the relatively high rate of hemolysis. Research to identify which therapy has a noninferior effect with a superior risk profile is needed; such research “will likely result in a future iteration of these guidelines specifically related to this question,” he concluded.
The KD guideline is the final companion to three additional ACR/VF vasculitis guidelines that were released in July 2021. The guideline research received no outside funding. The researchers had no financial conflicts to disclose.
FROM ARTHRITIS & RHEUMATOLOGY
Obesity linked to combined OSA syndrome and severe asthma
Almost all patients with both obstructive sleep apnea syndrome and severe asthma fell into the obesity phenotype, not the allergy phenotype, based on data from nearly 1,500 adults.
Both asthma and sleep-disordered breathing are common conditions worldwide, and previous research suggests that obstructive sleep apnea syndrome (OSAS) and severe asthma in particular could be associated, wrote Laurent Portel, MD, of Centre Hospitalier de Libourne, France, and colleagues.
“Even if the underlying mechanisms are not well established, it is clear that both OSAS and obesity act to aggravate existing asthma, making it more difficult to control,” they said. However, the pathology of this relationship is not well-understood, and data on severe asthma phenotypes and OSAS are limited, they said.
In a study published in Respiratory Medicine and Research, the investigators reviewed data from 1,465 patients older than 18 years with severe asthma who were part of a larger, prospective multicenter study of the management of asthma patients. The larger study, developed by the Collège des Pneumologues des Hôpitaux Généraux (CPHG), is known as the FASE-CPHG (France Asthme SEvère-CPHG) and includes 104 nonacademic hospitals in France.
Diagnosis of OSAS was reported by physicians; diagnosis of severe asthma was based on the Global Initiative for Asthma criteria. The average age of the patients was 54.4 years, 63% were women, and 60% were nonsmokers.
A total of 161 patients were diagnosed with OSAS. The researchers conducted a cluster analysis on 1,424 patients, including 156 of the OSAS patients. They identified five clusters: early-onset atopic asthma (690 patients), obese asthma (153 patients), late-onset asthma (299 patients), eosinophilic asthma (143 patients), and aspirin sensitivity asthma (139 patients).
All 153 patients in the obese asthma cluster had OSAS; by contrast, none of the patients in the early-onset atopic asthma cluster had OSAS.
Overall, obesity (odds ratio [OR], 5.782), male sex (OR, 3.047), high blood pressure (OR, 2.875), depression (OR, 2.552), and late-onset asthma (OR, 1.789) were independently associated with greater odds of OSAS, while early-onset atopic asthma (OR, 0.622) was associated with lower odds.
Notably, OSAS patients were more frequently treated with long-term oral corticosteroids than those without OSAS (30% vs. 15%, P < .0001), the researchers said. “It is possible that this treatment may be responsible for obesity, and it represents a well-known risk factor for developing OSAS,” they wrote.
Uncontrolled asthma was significantly more common in OSAS patients than in those without OSAS (77.7% vs. 69%, P = .03), and significantly more OSAS patients reported no or occasional physical activity (79.8% vs. 68.2%, P ≤ .001).
The study findings were limited by several factors, including the lack of patients from primary care or university hospitals, which may limit the generalizability of the results; the reliance on physician statements for diagnosis of OSAS; and the lack of data on OSAS severity or treatment, the researchers noted.
However, the results help fill a gap in the literature, given the limited real-life data on patients with severe asthma, and identifying severe asthma patients by phenotype may help identify those at greatest risk for OSAS, they said.
“Identified patients could more easily benefit from specific examinations such as poly(somno)graphy and, consequently, could benefit from a better management of both asthma and OSAS,” they emphasized.
The larger FASE-CPHG study was supported in part by ALK, AstraZeneca, Boehringer Ingelheim, GSK, and Le Nouveau Souffle. The researchers had no financial conflicts to disclose.
FROM RESPIRATORY MEDICINE AND RESEARCH