SABR offers surgery alternative for localized RCC
For patients with localized renal cell carcinoma (RCC), stereotactic ablative body radiotherapy (SABR) may be an effective alternative to surgery, according to findings from a retrospective study.
Patients with smaller tumors and nonmetastatic disease achieved the best outcomes with SABR, reported lead author Rodney E. Wegner, MD, of Allegheny Health Network Cancer Institute, Pittsburgh, and colleagues.
“Radiation therapy is often overlooked in [RCC] as historic preclinical data reported RCC as being relatively radioresistant to external beam radiation at conventional doses,” the investigators wrote in Advances in Radiation Oncology. However, SABR may be able to overcome this resistance by delivering “highly conformal dose escalated radiation,” the investigators noted, citing two recent reports from the International Radiosurgery Oncology Consortium for Kidney (IROCK) that showed promising results (J Urol. 2019 Jun;201[6]:1097-104 and Cancer. 2018 Mar 1;124[5]:934-42).
The present study included 347 patients with RCC from the National Cancer Database who were treated with SABR rather than surgery. Most patients (94%) did not receive systemic therapy. Similar proportions had no lymph node involvement (97%) or distant metastasis (93%). About three-quarters of patients (76%) had T1 disease. The median SABR dose was 45 Gy, ranging from 35 to 54 Gy, most frequently given in three fractions.
After a median follow-up of 36 months, ranging from 1 to 156 months, median overall survival across all patients was 58 months. SABR was most effective for patients with nonmetastatic disease who had smaller tumors.
Overall survival was inversely correlated with tumor size. Patients with tumors 2.5 cm or smaller had the longest median overall survival, at 92 months, with survival decreasing as tumors grew larger: 88 months for tumors of 2.6-3.5 cm, 44 months for tumors of 3.5-5.0 cm, and 26 months for tumors larger than 5.0 cm. In addition to larger tumor size and metastatic disease, older age was a risk factor for shorter survival.
“The results presented demonstrate excellent post-SABR outcomes, with median overall survival in the range of 7-8 years for smaller lesions,” the investigators wrote. “This is particularly impressive considering that many of these patients were likely medically inoperable.”
The researchers noted that most kidney SABR is performed at academic centers, which highlights the importance of appropriate technology and training for delivering this treatment.
“Further prospective research is needed to verify its safety and efficacy,” the investigators concluded.
No external funding was provided for the project and the investigators reported no conflicts of interest.
SOURCE: Wegner RE et al. Adv Rad Onc. 2019 Aug 8. doi: 10.1016/j.adro.2019.07.018.
FROM ADVANCES IN RADIATION ONCOLOGY
Key clinical point: Stereotactic ablative body radiotherapy may be an effective alternative to surgery for patients with localized renal cell carcinoma, particularly those with smaller, nonmetastatic tumors.
Major finding: Median overall survival was 92 months among patients with renal tumors no larger than 2.5 cm.
Study details: A retrospective study involving 347 patients with localized renal cell carcinoma (RCC) who were treated with stereotactic ablative body radiotherapy.
Disclosures: No external funding was provided for the study and the investigators reported having no conflicts of interest.
Source: Wegner RE et al. Adv Rad Onc. 2019 Aug 8. doi: 10.1016/j.adro.2019.07.018.
Pelvic floor muscle training outperforms attention-control massage for fecal incontinence
For first-line treatment of patients with fecal incontinence, pelvic floor muscle training (PFMT) is superior to attention-control massage, according to investigators.
In a study involving 98 patients, those who combined PFMT with biofeedback and conservative therapy were five times as likely to report improved symptoms as those who used attention-control massage and conservative therapy, reported Anja Ussing, MD, of Copenhagen University Hospital in Hvidovre, Denmark, and colleagues. Patients in the PFMT group also had significantly greater reductions in severity of incontinence, based on the Vaizey incontinence score.
“Evidence from randomized controlled trials regarding the effect of PFMT for fecal incontinence is lacking,” the investigators wrote in Clinical Gastroenterology and Hepatology. Although previous trials have evaluated PFMT, none controlled for the effect of interactions with care providers. “To evaluate the effect of PFMT, there is a need for a trial that uses a comparator to control for this nonspecific trial effect associated with the attention given by the health care professional.”
To perform such a trial, the investigators recruited 98 patients with a history of fecal incontinence for at least 6 months. Patients were excluded if they had severe neurologic conditions, pregnancy, diarrhea, rectal prolapse, previous radiotherapy or cancer surgery in the lower abdomen, cognitive impairment, inadequate fluency in Danish, or a history of at least two PFMT training sessions within the past year. Enrolled patients were randomized in a 1:1 ratio to receive PFMT with biofeedback and conservative treatment, or attention-control massage training and conservative therapy. The primary outcome was symptom improvement, determined by the Patient Global Impression of Improvement scale at 16 weeks. Secondary outcome measures included the Fecal Incontinence Severity Index, Vaizey score, and Fecal Incontinence Quality of Life Scale.
Patients were predominantly female, with just three men in the PFMT group and six in the attention-control massage group. The PFMT group also had a slightly higher median age, at 65 years, compared with 58 years in the control group.
At 16 weeks, the difference in self-reported symptoms was dramatic, with 74.5% of patients in the PFMT group reporting improvement, compared with 35.5% in the control group, which translated to an unadjusted odds ratio of 5.16 (P = .0002). When symptom improvement was restricted to those who reported being “very much better” or “much better,” the disparity between groups remained strong, with an unadjusted OR of 2.98 (P = .025). Among the three secondary outcomes, only the Vaizey score showed a significant difference between groups. Patients treated with PFMT had a mean difference in Vaizey score change of –1.83 points, on a scale from 0 to 24, with 24 representing complete incontinence (P = .04).
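As a rough check on the reported effect size, the unadjusted odds ratio can be recomputed from the published response rates. The short sketch below uses the rounded percentages (74.5% vs. 35.5%), so it only approximates the OR of 5.16 that was derived from the exact patient counts.

```python
# Recompute an unadjusted odds ratio from the reported response rates.
# The percentages are rounded in the article, so this only approximates
# the published OR of 5.16, which came from the exact patient counts.

def odds_ratio(p_treatment: float, p_control: float) -> float:
    """Unadjusted odds ratio for two proportions."""
    odds_treatment = p_treatment / (1 - p_treatment)
    odds_control = p_control / (1 - p_control)
    return odds_treatment / odds_control

if __name__ == "__main__":
    or_improved = odds_ratio(0.745, 0.355)  # PFMT vs. attention-control massage
    print(f"Approximate unadjusted OR: {or_improved:.2f}")  # ~5.3 from rounded rates
```

The small gap between roughly 5.3 and the published 5.16 reflects rounding in the percentages, not a different method of calculation.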
“We were not able to show any differences between groups in the number of fecal incontinence episodes,” the investigators wrote. “We had much missing data in the bowel diaries and we can only guess what the result would have been if the data had been more complete. Electronic assessment of incontinence episodes could be a way to reduce the amount of missing data in future trials.”
Still, the investigators concluded that PFMT was the superior therapy. “Based on the results, PFMT in combination with conservative treatment should be offered as first-line treatment for adults with fecal incontinence.”
They also highlighted the broad applicability of their findings, regardless of facility type.
“In the current trial, more than one-third of patients had sphincter injuries confirmed at endoanal ultrasound, this reflects the tertiary setting of our trial,” they wrote. “However, our results may be highly relevant in a primary setting because there is an unmet need for treatment of fecal incontinence in primary health care, and the interventions do not necessarily need to be conducted at specialized centers.”
The study was funded by the Danish Foundation for Research in Physiotherapy, The Lundbeck Foundation, the Research Foundation at Copenhagen University Hospital, and the Foundation of Aase and Ejnar Danielsen. The investigators reported additional relationships with Medtronic, Helsefonden, Gynzone, and others.
SOURCE: Ussing A et al. Clin Gastroenterol Hepatol. 2018 Dec 20. doi: 10.1016/j.cgh.2018.12.015.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Clip closure reduces postop bleeding risk after proximal polyp resection
Closing resection sites with hemoclips reduces the risk of postoperative bleeding after endoscopic removal of large colorectal polyps in the proximal colon, according to investigators. In a prospective study of almost 1,000 patients, this benefit was not influenced by polyp size, electrocautery setting, or concomitant use of antithrombotic medications, reported Heiko Pohl, MD, of Geisel School of Medicine at Dartmouth, Hanover, N.H., and colleagues.
“Endoscopic resection has replaced surgical resection as the primary treatment for large colon polyps due to a lower morbidity and less need for hospitalization,” the investigators wrote in Gastroenterology. “Postprocedure bleeding is the most common severe complication, occurring in 2%-24% of patients.” This risk is particularly common among patients with large polyps in the proximal colon.
Although previous trials have suggested that closing polyp resection sites with hemoclips could reduce the risk of postoperative bleeding, studies to date have been retrospective or uncontrolled, precluding definitive conclusions.
The prospective, controlled trial involved 44 endoscopists at 18 treatment centers. Enrollment included 919 patients with large, nonpedunculated colorectal polyps of at least 20 mm in diameter. Patients were randomized in an approximate 1:1 ratio into the clip group or control group and followed for at least 30 days after endoscopic polyp resection. The primary outcome was postoperative bleeding, defined as severe bleeding that required invasive intervention such as surgery or blood transfusion during follow-up. Subgroup analysis looked for associations between bleeding and polyp location, size, electrocautery setting, and medications.
Across the entire population, postoperative bleeding was significantly less common among patients who had their resection sites closed with clips, occurring at a rate of 3.5%, compared with 7.1% in the control group (P = .015). Serious adverse events were also less common in the clip group than the control group (4.8% vs. 9.5%; P = .006).
While the reduction in bleeding risk from clip closure was not influenced by polyp size, use of antithrombotic medications, or electrocautery setting, polyp location turned out to be a critical factor. The greatest reduction in risk of postoperative bleeding was seen among the 615 patients who had proximal polyps, with a bleeding rate of 3.3% when clipped versus 9.6% among those who went without clips (P = .001). In contrast, clips in the distal colon were associated with a higher absolute risk of postoperative bleeding than no clips (4.0% vs. 1.4%); however, this difference was not statistically significant (P = .178).
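To put these rates in more clinical terms, the absolute risk reduction and number needed to treat can be derived from the reported bleeding percentages. The sketch below simply applies NNT = 1/ARR to the figures above; these derived values are back-of-the-envelope illustrations, not numbers reported in the paper.

```python
# Back-of-the-envelope absolute risk reduction (ARR) and number needed
# to treat (NNT) from the bleeding rates reported in the trial.
# These derived values are illustrative; they are not reported in the paper.

def arr_and_nnt(risk_control: float, risk_treatment: float) -> tuple[float, float]:
    """Return (absolute risk reduction, number needed to treat)."""
    arr = risk_control - risk_treatment
    return arr, 1 / arr

overall = arr_and_nnt(0.071, 0.035)   # all patients: 7.1% vs. 3.5%
proximal = arr_and_nnt(0.096, 0.033)  # proximal polyps: 9.6% vs. 3.3%

print(f"Overall:  ARR = {overall[0]:.1%}, NNT = {overall[1]:.0f}")   # ~3.6%, ~28
print(f"Proximal: ARR = {proximal[0]:.1%}, NNT = {proximal[1]:.0f}") # ~6.3%, ~16
```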
“[T]his multicenter trial provides strong evidence that endoscopic clip closure of the mucosal defect after resection of large ... nonpedunculated colon polyps in the proximal colon significantly reduces the risk of postprocedure bleeding,” the investigators wrote.
They suggested that their study provides greater confidence in findings than similar trials previously conducted, enough to recommend that endoscopic techniques be altered accordingly. “[O]ur trial was methodologically rigorous, adequately powered, and all polyps were removed by endoscopic mucosal resection, which is considered the standard technique for large colon polyps in Western countries,” they wrote. “The results of the study are therefore broadly applicable to current practice. Furthermore, conduct of the study at different centers with multiple endoscopists strengthens generalizability of the findings.”
The investigators also speculated about why postoperative bleeding risk was increased when clips were used in the distal colon. “Potential explanations include a poorer quality of clipping, a shorter clip retention time, possible related to a thicker colon wall in the distal compared to the proximal colon,” they wrote, adding that “these considerations are worthy of further study.”
Indeed, more work remains to be done. “A formal cost-effectiveness analysis is needed to better understand the value of clip closure,” they wrote. “Such analysis can then also examine possible thresholds, for instance regarding the minimum proportion of polyp resections, for which complete closure should be achieved, or the maximum number of clips to close a defect.”
The study was funded by Boston Scientific. The investigators reported additional relationships with U.S. Endoscopy, Olympus, Medtronic, and others.
SOURCE: Pohl H et al. Gastroenterology. 2019 Mar 15. doi: 10.1053/j.gastro.2019.03.019.
FROM GASTROENTEROLOGY
Patients with viral hepatitis are living longer, increasing risk of extrahepatic mortality
Patients with viral hepatitis may live longer after treatment with direct-acting antiviral agents (DAAs), but their risk of death from extrahepatic causes may rise as a result, according to investigators.
Importantly, this increasing rate of extrahepatic mortality should not be interpreted as a causal link with DAA use, cautioned lead author Donghee Kim, MD, PhD, of Stanford (Calif.) University, and colleagues. Instead, the upward trend more likely reflects successful treatment with DAAs, which extends lifespan and, with it, the time during which patients are susceptible to extrahepatic conditions.
This was just one finding from a retrospective study that used U.S. Census and National Center for Health Statistics mortality records to evaluate almost 28 million deaths that occurred between 2007 and 2017. The investigators looked for mortality trends among patients with common chronic liver diseases, including viral hepatitis, alcoholic liver disease (ALD), and nonalcoholic fatty liver disease (NAFLD), noting that each of these conditions is associated with extrahepatic complications. The study included deaths due to extrahepatic cancer, cardiovascular disease, and diabetes.
While the efficacy of therapy for viral hepatitis has improved markedly since 2014, treatments for ALD and NAFLD have remained static, the investigators noted.
“Unfortunately, there have been no significant breakthroughs in the treatment of [ALD] over the last 2 decades, resulting in an increase in estimated global mortality to 3.8%,” the investigators wrote in Gastroenterology.
“[NAFLD] is the most common chronic liver disease in the world,” they added. “The leading cause of death in individuals with NAFLD is cardiovascular disease, followed by extrahepatic malignancies, and then liver-related mortality. However, recent trends in ALD and NAFLD-related extrahepatic complications in comparison to viral hepatitis have not been studied.”
The results of the current study supported the positive impact of DAAs, which began to see widespread use in 2014. Age-standardized mortality among patients with hepatitis C virus rose until 2014 (2.2% per year) and dropped thereafter (–6.5% per year). Mortality among those with hepatitis B virus steadily decreased over the study period (–1.2% per year).
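To illustrate what these annual percent changes imply when compounded over the study period, the following sketch applies the reported trend estimates to an arbitrary baseline index of 100; only the annual rates (+2.2% per year through 2014, –6.5% per year thereafter) come from the study, and the baseline value is purely illustrative.

```python
# Illustrate how an annual percent change (APC) compounds over time.
# The baseline index of 100 is arbitrary; only the reported APCs
# (+2.2%/year through 2014, then -6.5%/year through 2017) come from the study.

def apply_apc(value: float, apc_percent: float, years: int) -> float:
    """Compound an annual percent change over a number of years."""
    return value * (1 + apc_percent / 100) ** years

baseline_2007 = 100.0                             # arbitrary index value
peak_2014 = apply_apc(baseline_2007, 2.2, 7)      # 2007 -> 2014
end_2017 = apply_apc(peak_2014, -6.5, 3)          # 2014 -> 2017

print(f"Index in 2014: {peak_2014:.1f}")  # ~116.5
print(f"Index in 2017: {end_2017:.1f}")   # ~95.2
```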
Of note, while deaths because of HCV-related liver disease dropped from 2014 to 2017, extrahepatic causes of death didn’t follow suit. Age-standardized mortality for cardiovascular disease and diabetes increased at average annual rates of 1.9% and 3.3%, respectively, while the rate of extrahepatic cancer-related deaths held steady.
“The widespread use, higher efficacy and durable response to DAA agents in individuals with HCV infection may have resulted in a paradigm shift in the clinical progression of coexisting disease entities following response to DAA agents in the virus-free environment,” the investigators wrote. “These findings suggest assessment and identification of risk and risk factors for extrahepatic cancer, cardiovascular disease, and diabetes in individuals who have been successfully treated and cured of HCV infection.”
In sharp contrast with the viral hepatitis findings, mortality rates among patients with ALD and NAFLD increased at an accelerating rate over the 11-year study period.
Among patients with ALD, all-cause mortality increased by an average of 3.4% per year, at a higher rate in the second half of the study than the first (4.6% vs 2.1%). Liver disease–related mortality rose at a similar, accelerating rate. In the same group, deaths due to cardiovascular disease increased at an average annual rate of 2.1%, which was accelerating, while extrahepatic cancer-related deaths increased at a more constant rate of 3.6%.
For patients with NAFLD, all-cause mortality increased by 8.1% per year, accelerating from 6.1% in the first half of the study to 11.2% in the second. Deaths from liver disease increased at an average rate of 12.6% per year, while extrahepatic deaths increased significantly for all three included types: cardiovascular disease (2.0%), extrahepatic cancer (15.1%), and diabetes (9.7%).
Concerning the worsening rates of mortality among patients with ALD and NAFLD, the investigators cited a lack of progress in treatments, and suggested that “the quest for newer therapies must remain the cornerstone in our efforts.”
The investigators reported no external funding or conflicts of interest.
SOURCE: Kim D et al. Gastroenterology. 2019 Jun 25. doi: 10.1053/j.gastro.2019.06.026.
Chronic liver disease is one of the leading causes of death in the United States. Whereas mortality from other causes (e.g., heart disease and cancer) has declined, age-adjusted mortality from chronic liver disease has continued to increase. There have been a few major advances in the treatment of several chronic liver diseases in recent years. These include nucleos(t)ide analogues for hepatitis B virus (HBV) and direct-acting antiviral agents for the treatment of hepatitis C virus infection (HCV). Many studies show that these treatments are highly effective in improving patient outcomes, including patient survival. However, whether these individual-level benefits have translated into population-level improvements remains unclear.
Overall, the results were mixed; they were encouraging for viral hepatitis but concerning for alcoholic and nonalcoholic liver disease. Specifically, all-cause mortality from HCV was on an upward trajectory in the first 7 years (from 2007 to 2014), but the trend shifted from 2014 onward. Importantly, this inflection point coincided with the timing of the new HCV treatments. Most of this positive shift after 2014 was related to a strong downward trend in liver-related mortality. In contrast, upward trends in mortality related to extrahepatic causes (such as cardiovascular mortality) continued unabated. The authors found similar results for HBV. The story, however, was different for alcohol-related and nonalcohol-related liver disease – both conditions lacking effective treatments; liver-related mortality for both continued to increase during the study period.
Although we cannot make causal inferences from this study, overall, the results are good news. They suggest that HBV and HCV treatments have reached enough infected people to result in tangible improvements in the burden of chronic liver disease. We may now need to shift the focus of secondary prevention efforts from liver to nonliver (extrahepatic) morbidity in the newer cohorts of patients with treated HCV and HBV.
Fasiha Kanwal, MD, MSHS, is an investigator in the clinical epidemiology and comparative effectiveness program for the Center for Innovations in Quality, Effectiveness, and Safety in collaboration with the Michael E. DeBakey VA Medical Center, as well as an associate professor of medicine in gastroenterology and hepatology at Baylor College of Medicine in Houston. She has no conflicts of interest.
FROM GASTROENTEROLOGY
Type of renal dysfunction affects liver cirrhosis mortality risk
For non–status 1 patients with cirrhosis who are awaiting liver transplantation, type of renal dysfunction may be a key determinant of mortality risk, based on a retrospective analysis of more than 22,000 patients.
Risk of death was greatest for patients with acute on chronic kidney disease (AKI on CKD), followed by AKI alone, then CKD alone, reported lead author Giuseppe Cullaro, MD, of the University of California, San Francisco, and colleagues.
Although it is well known that renal dysfunction worsens outcomes among patients with liver cirrhosis, the impact of different types of kidney pathology on mortality risk has been minimally researched, the investigators wrote in Clinical Gastroenterology and Hepatology. “To date, studies evaluating the impact of renal dysfunction on prognosis in patients with cirrhosis have mostly focused on AKI.”
To learn more, the investigators performed a retrospective study involving acute, chronic, and acute on chronic kidney disease among patients with cirrhosis. They included data from 22,680 non–status 1 adults who were awaiting liver transplantation between 2007 and 2014, with at least 90 days on the wait list. Information was gathered from the Organ Procurement and Transplantation Network registry.
AKI was defined by fewer than 72 days of hemodialysis, or an increase in creatinine of at least 0.3 mg/dL or at least 50% in the last 7 days. CKD was identified by more than 72 days of hemodialysis, or an estimated glomerular filtration rate less than 60 mL/min/1.73 m2 for 90 days with a final rate of at least 30 mL/min/1.73 m2. Using these criteria, the researchers put patients into four possible categories: AKI on CKD, AKI, CKD, or normal renal function. The primary outcome was wait list mortality, which was defined as death or removal from the wait list for illness. Follow-up started at the time of addition to the wait list and continued until transplant, removal from the wait list, or death.
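For readers who want the grouping logic spelled out, below is a minimal sketch that applies these stated criteria to assign a patient to one of the four categories. The function and field names are hypothetical illustrations of the article's wording, not the authors' code or the registry's variables.

# Sketch of the four-way renal dysfunction grouping described above.
# Thresholds follow the article's wording; "fewer than 72 days of hemodialysis"
# is interpreted here as being on hemodialysis for less than 72 days.
def classify_renal_function(hemodialysis_days, creatinine_rise_mg_dl,
                            creatinine_rise_pct, egfr_below_60_for_90_days,
                            final_egfr):
    aki = (0 < hemodialysis_days < 72
           or creatinine_rise_mg_dl >= 0.3
           or creatinine_rise_pct >= 50)
    ckd = (hemodialysis_days > 72
           or (egfr_below_60_for_90_days and final_egfr >= 30))
    if aki and ckd:
        return "AKI on CKD"
    if aki:
        return "AKI"
    if ckd:
        return "CKD"
    return "normal renal function"

# Hypothetical patient: creatinine rose 0.5 mg/dL on a background of reduced eGFR.
print(classify_renal_function(0, 0.5, 60, True, 45))  # -> AKI on CKD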
Multivariate analysis, which accounted for final MELD-Na score and other confounders, showed that patients with AKI on CKD fared worst, with a 2.86-fold higher mortality risk than that of patients with normal renal function (subhazard ratio [SHR], 2.86). Patients with AKI alone followed closely (SHR, 2.42), and patients with CKD alone more distantly (SHR, 1.56). Further analysis showed that the disparity between the mortality risks of the subgroups became more pronounced with increasing MELD-Na score. In addition, evaluation of receiver operating characteristic curves for 6-month wait list mortality showed that adding renal function to the MELD-Na score increased the accuracy of prognosis, from an area under the curve of 0.71 to 0.80 (P less than .001).
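As an outline of how such a discrimination comparison works, the sketch below fits two logistic models on simulated data, one with MELD-Na alone and one adding the renal-dysfunction pattern, and compares their areas under the ROC curve with scikit-learn. It is a simplified stand-in for the authors' registry-based analysis; all values and the ordinal coding of the renal pattern are invented for illustration.

# Sketch: comparing AUC for 6-month mortality with and without the renal
# dysfunction pattern, on simulated data. Not the authors' analysis.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
meld_na = rng.normal(22, 6, n)
renal = rng.integers(0, 4, n)  # 0=normal, 1=CKD, 2=AKI, 3=AKI on CKD (ordinal for simplicity)
prob = 1 / (1 + np.exp(-(-6 + 0.18 * meld_na + 0.5 * renal)))
died_6mo = rng.binomial(1, prob)

X_base = meld_na.reshape(-1, 1)
X_full = np.column_stack([meld_na, renal])
Xb_tr, Xb_te, Xf_tr, Xf_te, y_tr, y_te = train_test_split(
    X_base, X_full, died_6mo, test_size=0.3, random_state=0)

auc_base = roc_auc_score(y_te, LogisticRegression().fit(Xb_tr, y_tr).predict_proba(Xb_te)[:, 1])
auc_full = roc_auc_score(y_te, LogisticRegression().fit(Xf_tr, y_tr).predict_proba(Xf_te)[:, 1])
print(f"AUC with MELD-Na alone: {auc_base:.2f}; adding renal pattern: {auc_full:.2f}")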
“This suggests that incorporating the pattern of renal function could provide an opportunity to better prognosticate risk of mortality in the patients with cirrhosis who are the sickest,” the investigators concluded.
They also speculated about why outcomes may vary by type of kidney dysfunction.
“We suspect that those patients who experience AKI and AKI on CKD in our cohort likely had a triggering event – infection, bleeding, hypovolemia – that put these patients at greater risk for waitlist mortality,” the investigators wrote. “These events inherently carry more risk than stable nonliver-related elevations in serum creatinine that are seen in patients with CKD. Because of this heterogeneity of etiology in renal dysfunction in patients with cirrhosis, it is perhaps not surprising that unique renal function patterns variably impact mortality.”
The investigators noted that the findings from the study have “important implications for clinical practice,” and suggested that including type of renal dysfunction would have the greatest effect on the accuracy of prognoses among patients at the highest risk of mortality.
The study was funded by a Paul B. Beeson Career Development Award and the National Institute of Diabetes and Digestive and Kidney Diseases. Dr. Verna disclosed relationships with Salix, Merck, and Gilead.
SOURCE: Cullaro G et al. Clin Gastroenterol Hepatol. 2019 Feb 1. doi: 10.1016/j.cgh.2019.01.043.
Cirrhotic patients with renal failure have a sevenfold increase in mortality compared with those without renal failure. Acute kidney injury (AKI) is common in cirrhosis; increasingly, cirrhotic patients awaiting liver transplantation also have, or are at risk for, CKD. They are sicker, older, and have more comorbidities such as obesity and diabetes. In this study, the cumulative incidence of death on the wait list was much more pronounced for any form of AKI, with those with AKI on CKD having the highest cumulative incidence of wait list mortality compared with those with normal renal function. The study notably raises several important issues. First, AKI exerts a greater influence on risk of mortality in CKD than it does in those with normal renal function. This is relevant given the increasing prevalence of CKD in this population. Second, it emphasizes the need to measure renal function effectively; all serum creatinine–based equations overestimate glomerular filtration rate in the presence of renal dysfunction. Finally, the study highlights the importance of extrahepatic factors in determining mortality on the wait list. While a mathematical model such as the MELD-Na score may be able to predict mortality in all comers, for a specific patient the presence of comorbid conditions, malnutrition and sarcopenia, infections, critical illness, and now the pattern of renal dysfunction may all play a role.
Sumeet K. Asrani, MD, MSc, is a hepatologist affiliated with Baylor University Medical Center, Dallas. He has no conflicts of interest.
FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
Local treatment boosts survival for some with oligometastatic lung cancer
Adding local treatment to systemic therapy may extend survival among certain patients with oligometastatic non–small cell lung cancer (NSCLC), according to a retrospective look at more than 34,000 patients.
Surgical resection provided the greatest survival benefit, followed by external beam radiotherapy or thermal ablation (EBRT/TA), reported lead author Johannes Uhlig, MD, of University Medical Center Göttingen (Germany) and colleagues.
NSCLC patients with five or fewer metastatic sites (oligometastatic disease) are thought to achieve better outcomes than patients with more widely disseminated disease, the investigators noted in JAMA Network Open, but the benefit of local therapy for this population is unclear.
“A recent randomized, prospective study of 74 patients with oligometastatic NSCLC identified superior progression-free survival with local control after hypofractionated radiotherapy or surgical resection and radiotherapy compared with systemic therapy alone, suggesting an important application of local treatment options for patients with metastatic disease,” the investigators wrote.
To build on these findings, the investigators retrospectively evaluated 34,887 patients with stage IV NSCLC who had up to one distant metastatic lesion in the liver, lung, brain, or bone, as documented in the National Cancer Database. Treatment groups were divided into patients who received systemic therapy alone, surgical resection plus systemic therapy, or EBRT/TA plus systemic therapy. Multivariable Cox proportional hazards models were used to compare overall survival among the three groups.
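The modeling step can be pictured with a small sketch: the treatment groups enter a Cox proportional hazards model as indicator variables, with systemic therapy alone as the reference. The example below uses the lifelines package on simulated survival times; the column names, covariates, and data are hypothetical, not the NCDB fields or the authors' code.

# Sketch of a multivariable Cox model comparing the three treatment groups
# on simulated data (assumes the lifelines package is available).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
group = rng.integers(0, 3, n)  # 0 = systemic alone, 1 = +EBRT/TA, 2 = +surgery
age = rng.normal(67, 8, n)
hazard = np.exp(0.02 * (age - 67) - 0.05 * (group == 1) - 0.5 * (group == 2))
months = np.minimum(rng.exponential(24 / hazard), 60)  # administrative censoring at 60 months
died = (months < 60).astype(int)

df = pd.DataFrame({
    "months": months,
    "died": died,
    "ebrt_ta": (group == 1).astype(int),
    "surgery": (group == 2).astype(int),
    "age": age,
})
cph = CoxPHFitter().fit(df, duration_col="months", event_col="died")
print(cph.summary[["coef", "exp(coef)", "p"]])  # exp(coef) < 1 indicates lower mortality risk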
With a median follow-up of 39.4 months, data analysis showed that patients who underwent surgery and systemic therapy fared best. Adding surgery reduced mortality risk by 38% and 41%, compared with EBRT/TA plus systemic therapy and systemic therapy alone, respectively (P less than .001 for both). Compared with systemic therapy alone, adding EBRT/TA reduced mortality risk by 5% (P = .002).
The impact of EBRT/TA varied among subgroups. For those with squamous cell carcinoma who had limited nodal disease, adding EBRT/TA resulted in a clear benefit, reducing mortality risk by 32% (P less than .001). Compared with systemic therapy alone, this benefit translated to higher survival rates for up to 3 years. Conversely, adding EBRT/TA increased risk of death by 39% among patients with extended local and distant adenocarcinoma (P less than .001). In this subgroup, survival rates over the next 3 years were higher among patients treated with systemic therapy alone.
“The present study supports a combined approach of local therapy in addition to systemic treatment for select patients with oligometastatic NSCLC,” the investigators concluded.
The study was funded by the U.S. Department of Defense. The investigators disclosed additional relationships with Bayer, AstraZeneca, Bristol-Myers Squibb, and others.
SOURCE: Uhlig et al. JAMA Netw Open. 2019 Aug 21. doi: 10.1001/jamanetworkopen.2019.9702.
FROM JAMA NETWORK OPEN
Key clinical point: Adding local treatment to systemic therapy may extend survival among certain patients with oligometastatic non–small cell lung cancer (NSCLC).
Major finding: Patients treated with a combination of surgical resection and systemic therapy had better overall survival than patients treated with systemic therapy alone (hazard ratio, 0.59).
Study details: A retrospective analysis of 34,887 patients with stage IV NSCLC.
Disclosures: The study was funded by the U.S. Department of Defense. The investigators disclosed additional relationships with Bayer, AstraZeneca, Bristol-Myers Squibb, and others.
Source: Uhlig J et al. JAMA Netw Open. 2019 Aug 21. doi: 10.1001/jamanetworkopen.2019.9702.
Oncologists agree with AI treatment decisions about half the time
When it comes to treatment recommendations for high-risk breast cancer, oncologists agree with a leading artificial intelligence platform about half the time, according to investigators.
In the first study of its kind, involving 10 Chinese oncologists, recommendation concordance with the Watson for Oncology treatment advisory tool (WfO) was generally lower for hormone receptor–positive and metastatic cancers than hormone receptor–negative and nonmetastatic cases, reported Fengrui Xu, MD, of the Academy of Military Medical Sciences in Beijing, and colleagues. Refinement could enable broad use of Watson, not to dictate treatment decisions, but instead to propose alternate treatment approaches and offer point-of-care access to relevant evidence.
“[WfO] is an example of a quantitative oncology clinical decision support that leverages the clinical expertise of oncologists at Memorial Sloan Kettering Cancer Center [MSKCC],” the investigators wrote in JCO Clinical Cancer Informatics. The platform uses machine-learning software to interpret patient scenarios in light of MSKCC training cases, MSKCC treatment guidelines, and more than 300 medical textbooks and journals.
To compare WfO with real-world decision makers, the investigators recruited three chief physicians, four attending physicians, and three fellows to provide treatment recommendations for 1,977 patients with complex breast cancer who were treated at 10 hospitals in China. Participating physicians shared the workload; each evaluated an average of 198 different cases.
On average, oncologists and WfO made the same treatment recommendations 56% of the time. Among the different types of physicians, fellows were most likely to agree with WfO, with a 68% concordance rate, compared with 54% for chief physicians and 49% for attending physicians. Across all physicians, concordance was lowest for hormone receptor–positive/HER2-positive disease (48%) and highest for triple-negative cases (71%). Adjuvant and metastatic therapies were also evaluated, with high concordance for adjuvant endocrine therapy (78%) and targeted therapy (100%), compared with moderate concordance for first-line (52%) and second-line (50%) metastatic therapy. The investigators described the concordance results as generally “modest”; however, they noted that such levels are promising.
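Concordance here is simply the share of cases in which a physician's recommendation matched WfO's. A minimal sketch of how those rates could be tabulated by physician rank is shown below; the table, regimen labels, and column names are invented for illustration and are not the study data.

# Sketch: tabulating concordance rates by physician rank with pandas.
import pandas as pd

recs = pd.DataFrame({
    "physician_rank": ["fellow", "fellow", "attending", "chief", "chief"],
    "physician_rec":  ["AC-T", "TCH", "AC-T", "ET", "TCH"],
    "wfo_rec":        ["AC-T", "TC",  "AC-T", "ET", "TCHP"],
})
recs["concordant"] = recs["physician_rec"] == recs["wfo_rec"]
print(recs.groupby("physician_rank")["concordant"].mean().mul(100).round(1))  # percent concordance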
“This degree of concordance is encouraging because therapeutic decisions in these cases are often difficult as a result of the current limits of medical knowledge for treating complex breast cancers and the presence of local contextual factors that affect physician treatment choices,” the investigators wrote. “It is important to note that nonconcordance does not imply that one treatment is correct for a given patient and another is not, nor does it necessarily diminish the potential value of a decision support system that provides access to supporting evidence and insight into its reasoning process.”
The study was funded by Zefei Jiang. The investigators reported affiliations with IBM Watson Health, Pharmaceutical Manufacturer Institution, Merck, and others.
SOURCE: Xu F et al. JCO Clin Cancer Inform. 2019 Aug 16. doi: 10.1200/CCI.18.00159.
FROM JCO CLINICAL CANCER INFORMATICS
Adding chemo beats standard gefitinib for EGFR-mutated lung cancer
For patients with EGFR-mutated, advanced non–small cell lung cancer (NSCLC), adding pemetrexed and carboplatin to standard gefitinib therapy markedly extends progression-free survival, but at the cost of twice as many serious toxicities, according to a recent phase 3 trial.
Two previous phase 2 trials (J Clin Oncol. 2016 Sep 20;34[27]:3258-66 and Ann Oncol. 2015 Feb 10;26[5]:888-94) suggested that adding chemotherapy could improve outcomes over gefitinib alone, but this is the first study to clearly demonstrate an overall survival benefit for the combination, reported lead author Vanita Noronha, MD, of Tata Memorial Hospital in Mumbai, India, and colleagues. They noted that this makes it the second regimen to demonstrate better overall survival than standard gefitinib for EGFR-mutated lung cancer, with dacomitinib being the first, as shown by the ARCHER 1050 trial.
The present study involved 350 patients with advanced, EGFR-mutated NSCLC who had an Eastern Cooperative Oncology Group (ECOG) performance status of 0-2 and were candidates for first-line palliative therapy. Approximately one-fifth of patients (21%) had a performance status of 2, and almost as many (18%) had brain metastases. After stratification for performance status and mutation type, patients were randomized in a 1:1 ratio to receive either gefitinib monotherapy (250 mg once daily) or gefitinib plus a chemotherapy combination of pemetrexed (500 mg/m2) and carboplatin (area under the curve of 5, per the Calvert formula) on day 1 of four 21-day cycles. Subsequently, nonprogressing patients in the chemotherapy group received maintenance therapy with pemetrexed at the same dose and frequency. Treatment was continued until progression, toxicity, or withdrawal of consent. The primary endpoint was progression-free survival (PFS). Secondary outcomes included overall survival (OS), response rate, quality of life, and toxicity.
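The carboplatin dose in such regimens is set by a target area under the concentration-time curve rather than by body surface area; the widely used Calvert formula gives dose (mg) = target AUC x (GFR + 25). A quick worked example follows, with a hypothetical glomerular filtration rate rather than any trial patient's value.

# Calvert formula for AUC-based carboplatin dosing:
#   dose (mg) = target AUC (mg/mL x min) x (GFR + 25) (mL/min)
def carboplatin_dose_mg(target_auc: float, gfr_ml_min: float) -> float:
    return target_auc * (gfr_ml_min + 25)

# Hypothetical patient with a GFR of 80 mL/min at the trial's target AUC of 5:
print(carboplatin_dose_mg(5, 80))  # 525 mg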
After a median follow-up of 17 months, the investigators found that adding chemotherapy to gefitinib resulted in a clear benefit, with estimated median PFS increasing from 8 months to 16 months (P less than .001). Estimated median overall survival also increased, with a figure not reached in the chemotherapy/gefitinib group, compared with 17 months among those who received gefitinib alone. Response rates echoed these findings, with more patients in the chemotherapy/gefitinib group achieving complete (2.9% vs. 0.6%) and partial remission (72.4% vs. 61.9%).
“[T]he PFS attained in our study is noteworthy, considering that 21% of our study patients had a [performance status] of 2, whereas the FLAURA study, [which demonstrated a PFS of 18.9 months with osimertinib], only included patients with a [performance status] of 1 or lower,” the investigators wrote in the Journal of Clinical Oncology.
Still, introducing chemotherapy was not without negative consequences. Compared with the gefitinib monotherapy group, patients who also received chemotherapy more often had grade 3 or higher adverse events (75% vs. 49.4%), and twice as many had clinically significant, serious toxicities (50.6% vs. 25.3%). The additional toxicities were predominantly due to myelosuppression and nephrotoxicity.
Despite these drawbacks, the investigators concluded that combination therapy was superior to gefitinib alone. “The combination of gefitinib, pemetrexed, and carboplatin represents a new standard first-line therapy for EGFR-mutant NSCLC,” the investigators concluded.
The study was funded by Tata Memorial Center Research Administration Council, Fresenius Kabi India, Lung Cancer Consortium India, and others. The investigators reported relationships with Roche, Biocon, Amgen, and others.
SOURCE: Noronha V et al. J Clin Oncol. 2019 Aug 14. doi: 10.1200/JCO.19.01154.
FROM THE JOURNAL OF CLINICAL ONCOLOGY
New biomarker model outmatches conventional risk factors for predicting mortality
A new model using 14 biomarkers may be more accurate at predicting longer-term mortality than a model comprising conventional risk factors, based on the largest metabolomics study to date.
The prognostic model was more accurate at predicting 5- and 10-year mortality across all ages, reported Joris Deelen, PhD, of Leiden (the Netherlands) University Medical Center and colleagues.
“These results suggest that metabolic biomarker profiling could potentially be used to guide patient care, if further validated in relevant clinical settings,” the investigators wrote in Nature Communications.
“There is no consensus on the ultimate set of predictors of longer-term [5-10 years] mortality risk, since the predictive power of the currently used risk factors is limited, especially at higher ages,” the investigators wrote. “However, it is especially this age group and follow-up time window for which a robust tool would aid clinicians in assessing whether treatment is still sensible.”
The current study was a survival meta-analysis of 44,168 individuals from 12 cohorts aged between 18 and 109 years at baseline. First, the investigators looked for associations between 226 metabolic biomarkers and all-cause mortality in the 5,512 people who died during follow-up. This revealed associations between mortality and 136 biomarkers, which increased to 159 biomarkers after adjusting for recently reported all-cause mortality associations with albumin, very low-density lipoprotein (VLDL) particle size, citrate, and glycoprotein acetyls. Because of strong correlations between many of the biomarkers evaluated, the investigators pared the field down to 63 biomarkers, then used a forward-backward procedure to ultimately identify 14 biomarkers independently associated with mortality. Of the four recently described biomarkers, citrate was excluded from the final model because of its minimal contribution to mortality estimates.
The 14 biomarkers were total lipids in chylomicrons and extremely large VLDL cholesterol, total lipids in small HDL cholesterol, mean diameter for VLDL cholesterol particles, ratio of polyunsaturated fatty acids to total fatty acids, glucose, lactate, histidine, isoleucine, leucine, valine, phenylalanine, acetoacetate, albumin, and glycoprotein acetyls.
“The 14 identified biomarkers are involved in various processes, such as lipoprotein and fatty acid metabolism, glycolysis, fluid balance, and inflammation. Although the majority of these biomarkers have been associated with mortality before, this is the first study that shows their independent effect when combined into one model,” the researchers wrote.
Implementation of the new biomarker model led to a score that typically ranged from –2 to 3. A 1-point increase was associated with a 173% increased risk of death (hazard ratio, 2.73; P less than 1 × 10⁻¹³²). Analysis of cause-specific mortality revealed that most biomarkers were predictive of multiple causes of death. Some biomarkers were more focused; glucose, for example, was more predictive of cardiovascular-related death than of death from cancer or nonlocalized infections. Compared with a model incorporating conventional risk factors, the biomarker model more accurately predicted 5- and 10-year mortality, with respective C-statistics of 0.837 versus 0.772 and 0.830 versus 0.790. This superiority was even more pronounced when only individuals older than 60 years were included.
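The “173% increased risk” per point is simply the per-point hazard ratio re-expressed as a percentage, and that ratio compounds multiplicatively across larger score differences. The arithmetic is shown below for transparency; it uses only the figure reported in the article, and the code itself is illustrative.

# Re-expressing the per-point hazard ratio as a percent increase in risk,
# and compounding it over a larger score difference.
hr_per_point = 2.73
print((hr_per_point - 1) * 100)      # 173.0 -> "173% increased risk" per 1-point increase
print(round(hr_per_point ** 2, 2))   # ~7.45-fold hazard for a 2-point higher score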
The study was funded by Biobanking and BioMolecular resources Research Initiative–Netherlands. The investigators reported additional relationships with Nightingale Health, Novo Nordisk, and Bayer.
SOURCE: Deelen J et al. Nat Commun. 2019 Aug 20. doi: 10.1038/s41467-019-11311-9.
FROM NATURE COMMUNICATIONS
Key clinical point: A new model using 14 biomarkers may be more accurate at predicting 5- and 10-year mortality than a model comprising conventional risk factors.
Major finding: The biomarker model better predicted 5-year mortality than the conventional model (C-statistic, 0.837 vs. 0.772).
Study details: A retrospective metabolomics study involving 44,168 individuals.
Disclosures: The study was funded by the Biobanking and BioMolecular Resources Research Infrastructure–Netherlands. The investigators reported additional relationships with Nightingale Health, Novo Nordisk, and Bayer.
Source: Deelen J et al. Nature Comm. 2019 Aug 20. doi: 10.1038/s41467-019-11311-9.
Zanubrutinib may be poised to challenge ibrutinib for CLL
The Bruton tyrosine kinase (BTK) inhibitor zanubrutinib appears safe and effective for patients with B-cell malignancies, according to results from a phase 1 trial.
Among patients with chronic lymphocytic leukemia (CLL) or small lymphocytic lymphoma (SLL), the overall response rate was 96.2%, reported Constantine Si Lun Tam, MD, of Peter MacCallum Cancer Centre in Melbourne and colleagues.
“Zanubrutinib (BGB-3111) is a highly specific next-generation BTK inhibitor with favorable oral bioavailability, as shown in preclinical studies,” the investigators wrote in Blood. “Compared with ibrutinib, zanubrutinib has shown greater selectivity for BTK and fewer off-target effects in multiple in vitro enzymatic and cell-based assays.”
The open-label trial enrolled 144 patients with B-cell malignancies. To determine optimal dosing, the investigators recruited 17 patients with relapsed/refractory B-cell malignancies who had received at least one prior therapy. The dose-expansion portion of the study assessed responses in multiple cohorts, including patients with CLL/SLL, mantle cell lymphoma, and Waldenström macroglobulinemia. The primary endpoints were safety and tolerability, including maximum tolerated dose; efficacy findings were also reported.
During dose escalation, no dose-limiting toxicities were observed, so the highest dose – 320 mg once daily or 160 mg twice daily – was selected for further testing.
The investigators highlighted efficacy and safety findings from 94 patients with CLL/SLL who were involved in dose expansion. Although nearly one-quarter (23.4%) were treatment-naive, the median number of prior therapies was two, and some patients had high-risk features, such as adverse cytogenetics, including 19.1% with a TP53 mutation and 23.3% with a 17p deletion. After a median follow-up of 13.7 months, 94.7% of these patients were still undergoing treatment.
Out of the initial 94 patients with CLL/SLL, 78 were evaluable for efficacy. The overall response rate was 96.2%, including two (2.6%) complete responses, 63 (80.8%) partial responses, and 10 (12.8%) partial responses with lymphocytosis. The median progression-free survival had not been reached, and the 12-month estimated progression-free survival was 100%.
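For readers checking the arithmetic, the reported response categories add up to the overall response rate; the minimal Python snippet below simply recomputes the percentages from the counts given above.

```python
# Recompute the reported response rates from the counts in the article.
evaluable = 78
complete_response = 2
partial_response = 63
partial_response_with_lymphocytosis = 10

responders = complete_response + partial_response + partial_response_with_lymphocytosis
print(f"Overall response rate: {responders / evaluable:.1%}")                          # 96.2%
print(f"Complete responses:    {complete_response / evaluable:.1%}")                   # 2.6%
print(f"Partial responses:     {partial_response / evaluable:.1%}")                    # 80.8%
print(f"PR with lymphocytosis: {partial_response_with_lymphocytosis / evaluable:.1%}") # 12.8%
```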
In regard to safety, the most common adverse events were contusion (35.1%), upper respiratory tract infection (33.0%), cough (25.5%), diarrhea (21.3%), fatigue (19.1%), back pain (14.9%), hematuria (14.9%), headache (13.8%), nausea (13.8%), rash (12.8%), arthralgia (11.7%), muscle spasms (11.7%), and urinary tract infection (10.6%).
A number of other adverse events were reported, although these occurred in less than 10% of patients.
More than one-third of patients (36.2%) experienced grade 3 or higher adverse events. Neutropenia was the most common (6.4%), followed by pneumonia, hypertension, and anemia, which each occurred in 2.1% of patients; back pain, nausea, urinary tract infection, purpura, cellulitis, and squamous cell carcinoma of the skin each occurred in 1.1% of patients.
“In this first-in-human study, zanubrutinib demonstrated encouraging activity in patients with relapsed/refractory and treatment-naive CLL/SLL, with good tolerability,” the investigators concluded. “Two ongoing randomized studies of zanubrutinib versus ibrutinib (NCT03053440 and NCT03734016) aim to determine whether consistent, continuous BTK blockade with a selective inhibitor results in fewer off-target effects and translates into improvements in disease control.”
The study was funded by BeiGene USA, which is developing the drug. The investigators reported relationships with the study sponsor, as well as Janssen, Pharmacyclics, AbbVie, and others.
SOURCE: Tam CSL et al. Blood. 2019 Jul 24. doi: 10.1182/blood.2019001160.
FROM BLOOD