Mitchel is a reporter for MDedge based in the Philadelphia area. He started with the company in 1992, when it was International Medical News Group (IMNG), and has since covered a range of medical specialties. Mitchel trained as a virologist at Roswell Park Memorial Institute in Buffalo, and then worked briefly as a researcher at Boston Children's Hospital before pivoting to journalism as an AAAS Mass Media Fellow in 1980. His first reporting job was with Science Digest magazine, and from the mid-1980s to the early 1990s he was a reporter with Medical World News. @mitchelzoler
Panel Proposes MRI Role in Knee OA Diagnosis
BRUSSELS – The use of magnetic resonance imaging may enable earlier recognition of knee osteoarthritis, and should be incorporated into recommended diagnostic criteria, a panel of 16 osteoarthritis experts concluded.
Using MRI to define knee osteoarthritis (OA) may allow detection of the disease before radiographic changes occur. But despite a growing body of literature on the role of MRI in OA, little uniformity exists for its diagnostic application, perhaps because of the absence of criteria for an MRI-based structural diagnosis of OA, the group said.
The Osteoarthritis Research Society International (OARSI) organized the 16-member panel, the OA Imaging Working Group, to develop an MRI-based definition of structural OA. The working group sought to identify structural changes on MRI that defined a structural diagnosis of knee OA, Dr. David J. Hunter and the other members of the working group wrote in a poster presented at the congress, which was organized by OARSI.
The working group began with a literature review through April 2009, a process that yielded 25 studies that met the group's inclusion criteria and evaluated MRI diagnostic performance. Through a multiphase process of discussion and voting, the group agreed on a set of nine propositions and two OA definitions based on MRI criteria. (See boxes.) These constitute “statements of preamble and context setting.” The two definitions “offer an opportunity for formal testing against other diagnostic constructs,” said Dr. Hunter, a rheumatologist and professor of medicine at the University of Sydney, and his associates in the working group.
The working group noted that the American College of Rheumatology in 1986 first released the current standard criteria for diagnosing OA, which deal only with radiographic imaging (Arthritis Rheum. 1986;29:1039-49). The European League Against Rheumatism published more current recommendations this year, but focused on a clinical diagnosis that did not involve imaging (Ann. Rheum. Dis. 2010;69:483-9).
The working group aimed to “include MRI as a means to define the disease with the intent that one may be able to identify early, preradiographic disease, thus enabling recruitment of study populations where structure modification (or structure maintenance) may be realistic in a more preventive manner.”
The group cautioned that prior to using the definitions, “it is important that their validity and diagnostic performance be adequately tested.” They also stressed that “the propositions have been developed for structural OA, not for a clinical diagnosis, not for early OA, and not to facilitate staging of the disease.”
An osteoarthritis specialist who was not involved with the working group cautioned that waiting for MRI structural changes that are specific for OA may still miss a truly early diagnosis, one made before irreversible pathology occurs.
“There are early changes [seen with MRI] that are not picked up on radiographs, but we don't yet have a standardized, validated definition of an earlier stage” on MRI, Dr. Tuhina Neogi, a rheumatologist at Boston University, said in an interview.
Dr. Hunter said that he has received research support from AstraZeneca, DJO Inc. (DonJoy), Eli Lilly & Co., Merck & Co., Pfizer Inc., Stryker Corp., and Wyeth. Eight of the other members of the working group also provided disclosures, whereas the remaining seven members said they had no disclosures. Dr. Neogi had no disclosures.
The Panel's MRI-Based Definition of OA
The two definitions of MRI findings diagnostic of knee OA are:
1. Tibiofemoral OA should have either both features from group A (below), or one feature from group A and at least two from group B. Examination of the patient must also rule out joint trauma in the past 6 months (by history) and inflammatory arthritis (by radiographs, history, and lab findings).
▸ Group A features: Definite osteophyte formation; full-thickness cartilage loss.
▸ Group B features: Subchondral bone marrow lesion or cyst not associated with meniscal or ligamentous attachments; meniscal subluxation, maceration, or degenerative (horizontal) tear; partial-thickness cartilage loss (without full-thickness loss).
2. Patellofemoral OA requires both of the following features involving the patella, anterior femur, or both:
▸ Definite osteophyte formation.
▸ Partial- or full-thickness cartilage loss.
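The two definitions reduce to simple boolean rules. As a rough illustration only (this sketch is not part of the panel's materials, and the parameter names are hypothetical labels, not standardized terminology), they could be expressed as:

```python
def tibiofemoral_oa(definite_osteophyte, full_thickness_loss,
                    group_b_count, recent_trauma, inflammatory_arthritis):
    """Definition 1: both group A features, or one group A feature plus
    at least two group B features. Joint trauma in the past 6 months and
    inflammatory arthritis must first be ruled out."""
    if recent_trauma or inflammatory_arthritis:
        return False
    group_a_count = int(definite_osteophyte) + int(full_thickness_loss)
    return group_a_count == 2 or (group_a_count == 1 and group_b_count >= 2)

def patellofemoral_oa(definite_osteophyte, cartilage_loss):
    """Definition 2: both features required, involving the patella,
    anterior femur, or both."""
    return definite_osteophyte and cartilage_loss
```

For example, a knee with a definite osteophyte, no full-thickness cartilage loss, and two group B features (say, a meniscal degenerative tear plus partial-thickness cartilage loss) would meet definition 1, provided trauma and inflammatory arthritis are excluded.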
The Panel's Propositions
Here are the nine propositions on MRI diagnosis of knee OA:
1. MRI changes of OA may occur in the absence of radiographic findings of OA.
2. MRI may add to the diagnosis of OA and should be incorporated into the ACR diagnostic criteria including x-ray, clinical, and lab parameters.
3. MRI may be used for inclusion in clinical studies, but should not be a primary diagnostic tool in a clinical setting.
4. Certain MRI changes that occur in isolation are not diagnostic of OA, including cartilage loss; change in cartilage composition; cystic change and development of bone marrow lesions; and ligamentous, tendinous, and meniscal damage.
5. No single finding is diagnostic of knee OA.
6. MRI findings indicative of knee OA may include abnormalities in all tissues of the joint (bone, cartilage, meniscus, synovium, ligament, and capsule).
7. Given the multiple tissue abnormalities detected by MRI in OA, diagnostic criteria are likely to involve combinations of features.
8. Definite osteophyte production is indicative of OA.
9. Joint space narrowing as assessed by MRI cannot be used as a diagnostic criterion.
Low Vitamin K Linked to Knee Osteoarthritis
Major Finding: People who developed osteoarthritis in both knees during 30 months of follow-up had a twofold increased rate of vitamin K deficiency at baseline, compared with people who did not develop osteoarthritis, and a nearly threefold increased risk of vitamin K deficiency compared with those who developed osteoarthritis in one knee.
Data Source: The 1,180 people enrolled in the MOST study who did not have osteoarthritis at baseline.
Disclosures: Dr. Neogi had no disclosures.
BRUSSELS – Vitamin K deficiency may increase the risk for developing knee osteoarthritis and for forming knee cartilage lesions, judging from the findings of a 30-month study of nearly 1,200 people at risk for knee osteoarthritis.
This apparent role of low vitamin K levels in susceptibility to knee pathology raised the question whether vitamin K supplementation for deficient individuals might be a “simple, effective preventive agent,” Dr. Tuhina Neogi said at the congress.
“The next step is an intervention trial,” said Dr. Neogi, a rheumatologist at Boston University. “Taken together, there is enough biological plausibility that vitamin K could play a role. Osteoarthritis is multifactorial, but this could be one component. If [dietary supplementation] proves effective, it would be something easy for people to do.”
Vitamin K works as a cofactor in the carboxylation of several proteins that are involved in bone and cartilage formation and maintenance. Prior studies have shown that low vitamin K intake and low blood levels were linked to prevalent radiographic features of hand and knee osteoarthritis.
The investigators examined data that were collected from people enrolled in the Multicenter Osteoarthritis (MOST) study who had an elevated risk for knee osteoarthritis at entry but had not yet developed the disease. Starting in 2003, MOST enrolled more than 3,000 people at two U.S. sites who had osteoarthritis or were at risk for it. The 1,180 people who were included in the study averaged 62 years of age; 62% were women, and their average body mass index was about 30 kg/m².
Dr. Neogi and her associates defined vitamin K deficiency as a plasma level of phylloquinone less than 0.5 nmol/L. (Normal is 0.5-1.2 nmol/L.) At baseline, 9% of the study participants without osteoarthritis had vitamin K deficiency.
The researchers made incident osteoarthritis the primary end point, defined as development of a knee Kellgren-Lawrence (KL) grade of 2 or higher (including knee replacement). All people included in the analysis had a KL grade less than 2 at baseline. During 30 months of follow-up, 15% of the participants developed osteoarthritis.
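The study's two key classifications reduce to simple thresholds. A minimal sketch (variable and function names are our own, chosen for illustration):

```python
DEFICIENCY_CUTOFF = 0.5  # plasma phylloquinone, nmol/L (normal range: 0.5-1.2)

def vitamin_k_deficient(phylloquinone_nmol_l):
    # Deficiency as defined by Dr. Neogi and associates
    return phylloquinone_nmol_l < DEFICIENCY_CUTOFF

def incident_oa(baseline_kl, followup_kl, knee_replaced=False):
    # Primary end point: the knee reaches Kellgren-Lawrence grade 2 or
    # higher during follow-up; knee replacement also counts.
    assert baseline_kl < 2, "analysis included only knees below KL grade 2 at baseline"
    return knee_replaced or followup_kl >= 2
```

So a participant with a baseline phylloquinone level of 0.4 nmol/L counts as deficient, and a knee that moves from KL grade 1 to grade 2 counts as incident osteoarthritis.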
Analysis revealed that participants with vitamin K deficiency at baseline had a 43% increased risk for developing knee osteoarthritis, after adjustment for age, sex, BMI, bone mineral density, and vitamin D level at baseline. This increased risk just missed reaching significance. Dr. Neogi suggested that this may have been a power issue, with too few vitamin K–deficient participants in the database.
Analysis also showed a link between the extent of knee osteoarthritis and vitamin K deficiency. Participants who developed osteoarthritis in both knees had a significant, nearly threefold increased risk of vitamin K deficiency at baseline, compared with those who developed osteoarthritis in one knee during follow-up, and a significant, twofold increased risk of vitamin K deficiency compared with people who did not develop any knee osteoarthritis, she reported at the congress, which was organized by the Osteoarthritis Research Society International.
Vitamin K–deficient participants also had a significant, nearly threefold increased risk of new cartilage lesions on their knee MRI scans that were consistent with osteoarthritis.
ADHD Affected 9.5% of Children in 2007-2008
Major Finding: During 2007-2008, U.S. children and adolescents aged 4-17 years had a 9.5% prevalence rate of ever having attention-deficit/hyperactivity disorder, a significant increase from the 7.8% rate in 2003-2004.
Data Source: The National Survey of Children's Health, a random-sample telephone survey of parents with data on more than 70,000 U.S. children and adolescents aged 4-17 years run by the Centers for Disease Control and Prevention.
Disclosures: Ms. Danielson said that she had no conflicts of interest.
NEW YORK – The prevalence of attention-deficit/hyperactivity disorder among children and adolescents rose to its highest level in 2007-2008, with 9.5% of children and adolescents ever diagnosed, according to a federally sponsored national telephone survey covering more than 70,000 American children and adolescents.
Although the reasons behind the increased prevalence of attention-deficit/hyperactivity disorder (ADHD) remain unclear, the increase over the 7.8% rate of ever-diagnosed ADHD in 2003-2004 reached statistical significance and appears real.
“We think something is going on,” Melissa L. Danielson said, while presenting a poster at the meeting. Explanations might include increased awareness of the diagnosis, and more children and adolescents undergoing formal evaluation, she said. Backing up the national finding are data on ADHD prevalence in each individual state. Prevalence rates rose in almost every state, and in 13 states recent increases reached statistical significance, she said in an interview.
The National Survey of Children's Health, run by the Centers for Disease Control and Prevention, receives its primary funding from the Department of Health and Human Services. In 2007 and 2008, a randomly selected sample of U.S. parents answered a telephone survey about their children's health. Parents answered four questions about ADHD: Did they have a child aged 4-17 years who ever received a diagnosis of the disorder? Did their child have a current diagnosis? Is the ADHD mild, moderate, or severe? Does the child receive medication?
Extrapolated survey results showed that in 2007-2008, 4.1 million American children and adolescents had a current ADHD diagnosis, 7.2% of the 4- to 17-year-old population (less than the 9.5% ever diagnosed with ADHD). Of these, two-thirds – 2.7 million – received medical treatment for their ADHD, and parents said that 570,000 (14%) of their children had severe ADHD. About half had mild ADHD, with the remainder described by their parents as having moderate disorder. Subgroups with significantly less-severe ADHD included girls and adolescents aged 15-17 years.
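The rounded figures above hang together arithmetically, as a quick back-of-the-envelope check shows:

```python
current_diagnosis = 4.1e6  # children/adolescents with a current ADHD diagnosis
treated = 2.7e6            # those receiving medical treatment
severe = 570_000           # those whose parents rated the ADHD as severe

print(round(treated / current_diagnosis, 2))  # 0.66, i.e., about two-thirds
print(round(severe / current_diagnosis, 2))   # 0.14, i.e., the reported 14%
```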
Boys, adolescents aged 15-17 years, and multiracial and non-Hispanic children all had significantly higher prevalence rates of current ADHD relative to their respective comparator subgroups. Sex, race, and ethnicity had no linkage with medication use, but medication treatment occurred less often in the 15- to 17-year-olds, said Ms. Danielson, a statistician on the Child Development Studies team of the CDC in Atlanta.
Children aged 11-14 years had the highest rate of medication use, 73%, while adolescents aged 15-17 years had the lowest, 56%, a statistically significant difference. Children aged 11-14 years with severe disease had a roughly 90% rate of medical treatment; teens aged 15-17 years with mild ADHD had the lowest medication rate, about 50%.
Children and teens with a concurrent diagnosis of disruptive behavior disorder had a statistically significant, 50% higher adjusted relative rate of receiving medical treatment for their ADHD, and they also had a significantly higher prevalence of current, severe ADHD. More than 30% of children with the concurrent diagnosis had severe ADHD.
Major Finding: During 2007-2008, U.S. children and adolescents aged 4-17 years had a 9.5% prevalence rate of ever having attention-deficit/hyperactivity disorder, a significant increase from the 7.8% rate in 2003-2004.
Data Source: The National Survey of Children's Health, a random-sample telephone survey of parents with data on more than 70,000 U.S. children and adolescents aged 4-17 years run by the Centers for Disease Control and Prevention.
Disclosures: Ms. Danielson said that she had no conflicts of interest.
NEW YORK – The prevalence of attention-deficit/hyperactivity disorder among children and adolescents rose to its highest level in 2007-2008, with 9.5% of children and adolescents ever diagnosed, according to a federally sponsored national telephone survey covering more than 70,000 American children and adolescents.
Although the reasons behind the increased prevalence of attention-deficit/hyperactivity disorder (ADHD) remain unclear, the increase over the 7.8% rate of ever-diagnosed ADHD in 2003-2004 reached statistical significance and appears real.
“We think something is going on,” Melissa L. Danielson said while presenting a poster at the meeting. Explanations might include increased awareness of the diagnosis and more children and adolescents undergoing formal evaluation, she said. Backing up the national finding are data on ADHD prevalence in each individual state: Prevalence rates rose in almost every state, and in 13 states the recent increases reached statistical significance, she said in an interview.
The National Survey of Children's Health, run by the Centers for Disease Control and Prevention, receives its primary funding from the Department of Health and Human Services. In 2007 and 2008, a randomly selected sample of U.S. parents answered a telephone survey about their children's health. Parents answered four questions about ADHD: Did they have a child aged 4-17 years who ever received a diagnosis of the disorder? Did their child have a current diagnosis? Is the ADHD mild, moderate, or severe? Does the child receive medication?
Extrapolated survey results showed that in 2007-2008, 4.1 million American children and adolescents had a current diagnosis, 7.2% of the 4- to 17-year-old population (less than the 9.5% ever diagnosed with ADHD). Of these, two-thirds – 2.7 million – received medical treatment for their ADHD, and parents said that 570,000 (14%) of their kids had severe ADHD. About half had mild ADHD, with the remaining patients having what their parents described as moderate disorder. Subgroups with significantly less-severe ADHD included girls and adolescents aged 15-17.
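The reported figures are internally consistent, as a quick back-of-envelope check shows (a minimal sketch; the implied size of the 4- to 17-year-old population is derived here, not reported in the survey):

```python
# Figures taken from the survey results reported above.
current_diagnoses = 4_100_000   # children with a current ADHD diagnosis
current_prevalence = 0.072      # 7.2% of the 4- to 17-year-old population
treated = 2_700_000             # received medication for ADHD
severe = 570_000                # parent-rated severe ADHD

# Derived quantities (not reported directly).
implied_population = current_diagnoses / current_prevalence
treated_share = treated / current_diagnoses
severe_share = severe / current_diagnoses

print(f"implied 4- to 17-year-old population: {implied_population / 1e6:.1f} million")
print(f"share receiving medication: {treated_share:.0%}")  # about two-thirds
print(f"share rated severe: {severe_share:.0%}")           # about 14%
```

The derived population of roughly 57 million 4- to 17-year-olds is broadly in line with U.S. census figures for the period, and the treated and severe shares reproduce the two-thirds and 14% reported above.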
Boys, adolescents aged 15-17 years, and multiracial and non-Hispanic children all had significantly higher prevalence rates of current ADHD relative to their respective comparator subgroups. Sex, race, and ethnicity had no linkage with medication use, but medication treatment occurred less often in the 15- to 17-year-olds, said Ms. Danielson, a statistician on the Child Development Studies team of the CDC in Atlanta.
Children aged 11-14 years had the highest rate of medication use, 73%, while adolescents aged 15-17 years had the lowest, 56%, a statistically significant difference. Children aged 11-14 years with severe disease had a roughly 90% rate of medical treatment; teens aged 15-17 years with mild ADHD had the lowest medication rate, about 50%.
Children and teens with a concurrent diagnosis of disruptive behavior disorder had a statistically significant, 50% higher adjusted relative rate of receiving medical treatment for their ADHD, and also had a significantly higher prevalence of current, severe ADHD. More than 30% of children with the concurrent diagnosis had severe ADHD.
Major Finding: During 2007-2008, U.S. children and adolescents aged 4-17 years had a 9.5% prevalence rate of ever having attention-deficit/hyperactivity disorder, a significant increase from the 7.8% rate in 2003-2004.
Data Source: The National Survey of Children's Health, a random-sample telephone survey of parents with data on more than 70,000 U.S. children and adolescents aged 4-17 years run by the Centers for Disease Control and Prevention.
Disclosures: Ms. Danielson said that she had no conflicts of interest.
High Vitamin D Intake Linked to Reduced Fractures
TORONTO – A daily vitamin D dose of at least 792 IU was linked with significantly reduced rates of nonvertebral fractures and hip fractures in a meta-analysis of data from 11 randomized, controlled trials.
But the benefit appeared blunted when vitamin D was combined with a higher calcium dose, or when patients received vitamin D once yearly, Dr. Heike A. Bischoff-Ferrari reported.
In the meta-analysis, patients in the highest quartile for daily vitamin D intake, 792-2,000 IU, had a statistically significant 14% reduced rate of any nonvertebral fracture, and a significant 30% reduced rate of hip fractures, after adjustment for age, gender, and type of dwelling, said Dr. Bischoff-Ferrari, a rheumatologist at the University of Zurich.
Her meta-analysis pooled individual participant data, published through June 2010, from 12 double-blind, randomized, controlled trials that examined the impact of vitamin D supplements on fracture rate in people aged 65 years or older.
The primary analysis focused on the 11 of the 12 studies in which participants received the supplement at least monthly, with 31,022 people enrolled. The 12th study tested once-annual dosing, and the researchers included those data in a separate analysis. The participants' average age was 76 years; 90% were women.
The analysis divided the study subjects into a control group of more than 15,000 people and quartiles based on the amount of vitamin D each participant actually received, counting both the study-treatment dose and any additional vitamin D intake, and accounting for adherence to treatment. Each vitamin D quartile contained nearly 4,000 people, with a daily dose range of 792-2,000 IU forming the top quartile. Only the top quartile of vitamin D intake was linked with statistically significant differences, compared with the controls, for any nonvertebral fracture and for hip fracture.
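The dose-quartile grouping described above can be sketched as follows (the intake values here are randomly generated placeholders; the actual trial data are not reproduced in this article):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical actual-intake values (study dose plus other vitamin D),
# standing in for the roughly 16,000 treated participants.
daily_dose_iu = rng.uniform(200, 2000, size=16_000)

# Split treated participants into quartiles of received dose,
# as the meta-analysis did (controls are handled separately).
cut_points = np.quantile(daily_dose_iu, [0.25, 0.50, 0.75])
quartile = np.searchsorted(cut_points, daily_dose_iu)  # 0..3; 3 = top quartile

print(f"people in top quartile: {(quartile == 3).sum()}")
print(f"top quartile begins near {daily_dose_iu[quartile == 3].min():.0f} IU/day")
```

Each quartile holds about a fourth of the treated participants, matching the "nearly 4,000 people" per quartile described above.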
Adding the data from the one trial that tested annual vitamin D treatment to the meta-analysis eliminated the statistically significant effect on fracture rates, suggesting that yearly administration of vitamin D produces a different effect than daily, weekly, or monthly treatment.
An additional analysis looked at the interaction of calcium supplements with vitamin D. When the daily calcium dose was below 1,000 mg, high-dose vitamin D (792-2,000 IU/day) was linked with a statistically significant reduction in nonvertebral fractures; when the daily calcium supplement delivered 1,000 mg or more, the same vitamin D doses were not associated with any significant change in fracture rate, suggesting an adverse effect from the higher calcium intake.
Dr. Bischoff-Ferrari said that she had no disclosures.
Fewer Osteoporosis Screenings Okay for Some
TORONTO – Women aged 67 years or older with a bone mineral density T score higher than −1.50 on dual-energy x-ray absorptiometry can have their next DXA examination deferred for at least 10 years with a low risk that they'll progress to osteoporosis in the interim, according to an analysis of data from more than 5,000 U.S. women.
“Fewer than 10% of women with a BMD [bone mineral density] T score of more than −1.50 were estimated to transition to osteoporosis if followed for 15 years,” Dr. Margaret L. Gourlay said. For these women, “repeat testing before 10 years is unlikely to show osteoporosis,” she said, and for women with a T score of −1.50 to −1.99, “a 5-year interval could be considered.”
The results provide the first evidence-based guidance available on the appropriate interval for osteoporosis screening in elderly women.
“The value of these results is that we can be less concerned about women with good BMD,” Dr. Gourlay said in an interview. “We don't need to go on autopilot and screen [all women] every 2 years.” Medicare reimburses for screening women aged 65 years or older with dual-energy x-ray absorptiometry (DXA) every 2 years, she noted, and hence U.S. physicians often recommend this screening interval. Earlier this year, however, an updated review of osteoporosis screening by the U.S. Preventive Services Task Force (USPSTF) noted that no evidence existed to support any screening interval (Ann. Intern. Med. 2010;153:99-111).
The results “were a surprise in a good way,” said Dr. Gourlay, a family physician at the University of North Carolina in Chapel Hill. “This is good news for women with good BMD. For women with higher bone density, we're probably doing some unnecessary testing.”
The new results also showed that the T score exerted the strongest influence on the osteoporosis screening interval, more so than clinical risk factors for fracture. Adjustment for “risk factors did not make too much of a difference, so physicians do not need to make a FRAX calculation” to decide a screening interval, she said. “They can just go by the BMD.
“The importance [of the new findings] is not the absolute time estimates we found; it's the magnitude of the difference,” she said. “A 16-year interval [for 10% of women to develop osteoporosis] for women in the top two T score groups, and a 5-year interval [for women with a baseline T score of −1.50 to −1.99] is quite different” from the way most physicians practice today.
She cautioned that the finding needs confirmation from similar analyses using different data sets, and that it remains up to health policy-setting groups, such as the USPSTF, to consider the findings and use them to formulate updated screening recommendations.
The analysis used data collected in the Study of Osteoporotic Fractures (SOF), which enrolled women aged 65 years or older in four U.S. cities starting in 1986 and has followed them since then. Dr. Gourlay and her associates focused on 5,036 women who underwent at least two serial BMD measures over a total of 15 years, excluding women with osteoporosis at any hip site at baseline, those with an incident hip fracture, those treated with a bisphosphonate or calcitonin, and women who died or dropped out of the study.
The analysis included 1,275 women who had at least one normal baseline BMD value (a T score of −1.00 or greater) and 4,279 women with at least one T score that identified them as having osteopenia (−1.01 to −2.49). Some women fell into both categories if they underwent at least three DXA examinations starting with at least one normal T score followed by at least one osteopenic score. At baseline, the rate of estrogen use ran 25% in women with a normal T score at baseline and 16% in women with osteopenia – typical for practice in the 1980s.
During follow-up, full transition to osteoporosis occurred in fewer than 1% of the women with a T score of at least −1.00 at baseline, fewer than 5% of those with a T score of −1.01 to −1.49 at baseline, 22% of women with a score of −1.50 to −1.99 at baseline, and in 65% of women with a baseline T score of −2.00 to −2.49.
After adjustment for age and continuous bone mineral density, it took an estimated 16 years for 10% of women with a baseline T score of −1.00 or higher to transition to osteoporosis. The other three subgroups analyzed underwent covariate adjustment for age, body mass index, current estrogen use, any fracture after age 50, current smoking, and oral glucocorticoid use. After adjustment, the average time for 10% of women to transition to osteoporosis was 15.5 years in women with a baseline T score of −1.01 to −1.49, 4.5 years in women with a T score of −1.50 to −1.99, and 1.2 years in women with a T score of −2.00 to −2.49.
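As a rough illustration (not a clinical tool), the adjusted estimates above can be encoded as a lookup from baseline T-score band to the estimated years for 10% of women to transition to osteoporosis:

```python
def years_until_10pct_osteoporosis(t_score: float) -> float:
    """Estimated years for 10% of women in a baseline T-score band to
    transition to osteoporosis, using the adjusted figures reported in
    the SOF analysis above. Illustrative only, not clinical guidance."""
    if t_score >= -1.00:
        return 16.0   # normal bone mineral density
    if t_score >= -1.49:
        return 15.5   # band of -1.01 to -1.49
    if t_score >= -1.99:
        return 4.5    # band of -1.50 to -1.99
    if t_score >= -2.49:
        return 1.2    # band of -2.00 to -2.49
    raise ValueError("a T score of -2.50 or lower already meets the "
                     "osteoporosis definition")

print(years_until_10pct_osteoporosis(-0.5))   # 16.0
print(years_until_10pct_osteoporosis(-1.7))   # 4.5
```

The sharp drop between the −1.01 to −1.49 band (15.5 years) and the −1.50 to −1.99 band (4.5 years) is what underlies the proposed 10-year versus 5-year screening intervals quoted earlier.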
Another analysis stratified women by their age at the baseline DXA examination (see chart). Even among women aged 85 years, it took an average of nearly 11 years for 10% to develop osteoporosis after a baseline T score of −1.01 to −1.49.
Major Finding: Long-term follow-up of transition rate to osteoporosis in U.S. women aged 67 years or older showed that fewer than 10% developed osteoporosis within 15 years when their baseline DXA T score exceeded −1.50.
Data Source: 5,036 women enrolled in the Study of Osteoporotic Fracture who met the analysis criteria.
Disclosures: Dr. Gourlay said she had no disclosures.
Kids Diagnosed With ADHD Often Remit
NEW YORK – A diagnosis of ADHD might not be forever.
In fact, it can be pretty fleeting. Analysis of serial assessments of more than 8,000 U.S. children and adolescents for attention-deficit/hyperactivity disorder (ADHD) showed that the diagnosis often did not persist after follow-up of 1 year or longer, J. Blake Turner, Ph.D., said at the annual meeting of the American Academy of Child & Adolescent Psychiatry.
ADHD diagnoses "are extremely transient over a 1-year period. Generally, loss of the diagnosis is more likely than persistence," said Dr. Turner, a researcher in the division of child and adolescent psychiatry at Columbia University in New York.
The findings suggest that problems exist with current nosology for ADHD, and that current prevalence estimates from community studies may be inflated. "We need to examine the predictors of ADHD persistence over time," he said. "We need to look at what’s going on here and what predicts the persistence of disruptive disorders."
"If patients are diagnosed with ADHD and it is transient – if it is reactive distress that is likely to go away – do we want to identify them?" he asked in an interview. "If a diagnosis is made of ADHD, do you let it go because it will likely resolve on its own, or will treatment help it resolve more quickly? We think of ADHD as something that lasts, not something that comes and goes. Perhaps we need [a diagnosis] that’s more stable," possibly by basing it on a larger number of symptoms. "That would mean changing the ADHD diagnosis," he said.
Preliminary analysis of serial assessments for oppositional defiant disorder and conduct disorder in the same data set of 8,714 children and adolescents showed similar, transient patterns after an initial diagnosis, Dr. Turner added.
"It troubles me that the [ADHD] phenotype looks so unstable," commented Dr. Daniel S. Pine, chief of the Section on Development and Affective Neuroscience at the National Institute of Mental Health. "A lot of people are struggling with the threshold for [diagnosing] ADHD. This is a very different conceptualization of ADHD; we don’t usually think of it as something that’s gone in 2 years. If this is [children having] a transient reaction to stress, I don’t want to talk about it [in] the same way as clinical ADHD."
Dr. Pine suggested that Dr. Turner’s new finding might help explain the high reported prevalence rates of ADHD, and that the results also raised issues about using stimulants to treat newly diagnosed ADHD.
"I look at some of the prevalences [reported], and it’s absurd. I find it very hard to believe that 20% of American boys have ADHD," but that is what some recent reports documented, Dr. Pine said. Other reports said that about 6% of all American children and about 12% of boys receive stimulant treatment for ADHD. "When I look at these data [in Dr. Turner’s report], the question of stimulant use is right behind there."
Dr. Turner used data collected by 4 of the 16 studies done by researchers in the DISC (Diagnostic Interview Schedule for Children) Nosology Group. All of the studies used the DISC to assess a group of children, adolescents, or both. The four studies used by Dr. Turner included serial assessments using the DISC for ADHD a year or more apart. Depending on the study and whether the diagnostic criteria included the age of onset, the range of ADHD prevalence at the initial examination was 5%-40%, with roughly 1,200 total cases identified.
At a follow-up visit at least 1 year after the initial examination, loss of the ADHD diagnosis occurred in roughly 55%-75% of the patients who had initially been diagnosed with inattentive ADHD. In patients initially diagnosed with hyperactive ADHD, the loss rate at follow-up ran 55%-65%. Those first diagnosed with combined ADHD had a more stable course, with about 18%-35% not maintaining the diagnosis at follow-up.
Additional analysis showed that lost ADHD diagnoses usually did not occur as a small change in an initially marginal diagnosis. Patients who changed from having ADHD to not having it lost five ADHD symptoms, on average. And the remitters and nonremitters all had a similar pattern of disease severity at their initial diagnosis. Patients’ age had no association with whether or not an ADHD diagnosis disappeared. And patients who received treatment had a higher likelihood of retaining their ADHD diagnosis at follow-up than did those who did not receive treatment, possibly because the patients who were treated generally had more chronic ADHD.
Dr. Turner had no disclosures.
Major Finding: Children and adolescents who are diagnosed with ADHD often lose the diagnosis at follow-up a year or more later. Of roughly 1,200 kids from four separate studies initially diagnosed with ADHD, 18%-35% of those diagnosed with combined ADHD failed to meet the diagnosis at their follow-up assessment, 55%-65% of those initially identified with hyperactive ADHD lost the diagnosis at follow-up, and 55%-75% of those first diagnosed with inattentive ADHD lacked the diagnosis at follow-up.
Data Source: Review of data collected on 8,714 U.S. children and adolescents assessed in four studies from the Diagnostic Interview Schedule for Children Nosology Group.
Disclosures: Dr. Turner had no disclosures.
T Score When Bisphosphonate Stops Drives Fracture Risk
TORONTO – The stronger a patient’s bones are when bisphosphonate treatment is stopped, the less likely the bones are to fracture later, based on an analysis of 437 patients.
In contrast, changes in bone mineral density following the end of bisphosphonate therapy had no significant link with subsequent fracture risk, Dr. Douglas C. Bauer said at the annual meeting of the American Society for Bone and Mineral Research.
The finding calls into doubt the common practice of running annual dual-energy x-ray absorptiometry examinations on patients who have withdrawn from bisphosphonate treatment.
Routine BMD measurement "1-2 years after stopping prolonged alendronate therapy may not be useful for predicting the patient’s fracture risk," said Dr. Bauer, professor of medicine, epidemiology, and biostatistics at the University of California, San Francisco. The BMD at the time of alendronate discontinuation "was highly predictive of who was going to fracture."
Patients who stopped alendronate therapy with a total hip BMD T score of –1.4 or greater had a 9% rate of clinical fracture during 5 years of follow-up. Patients with a T score of –2.1 to –1.5 when they stopped bisphosphonate treatment had a 23% fracture rate during 5 years of follow-up, and those who stopped with a T score lower than –2.1 had a 33% fracture rate over the next 5 years. The between-group differences were statistically significant.
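The risk stratification Dr. Bauer described is a three-way threshold rule. A minimal sketch, using only the group-level fracture rates reported above (not an individual risk predictor):

```python
def five_year_fracture_rate(t_score: float) -> float:
    """Observed 5-year clinical fracture rate (as a fraction) in the
    FLEX placebo group, by total hip T score at the time alendronate
    was stopped. Illustrative only -- these are group-level rates
    from the analysis, not individual predictions."""
    if t_score >= -1.4:
        return 0.09   # T score of -1.4 or greater: 9% fracture rate
    elif t_score >= -2.1:
        return 0.23   # T score of -2.1 to -1.5: 23% fracture rate
    else:
        return 0.33   # T score lower than -2.1: 33% fracture rate
```

Note that the middle and lowest strata carry roughly two and a half to nearly four times the fracture rate of the highest stratum, which is the basis for the "drug holiday" discussion below.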
These data "are helpful as I try to decide which of my patients I should leave on a bisphosphonate," commented Dr. Elizabeth Shane, a professor of medicine at Columbia University in New York. "Patients below –2.1 were at very high risk of fracture, but even those in the middle tertile, with less than –1.5, were at a risk almost as high." Dr. Bauer’s data "provides me with some comfort [on whom] I can stop safely."
Dr. Bauer and his associates used data collected in the FLEX (Fracture Intervention Trial Long-Term Extension) study, which randomized 1,099 postmenopausal women who had completed 5 years of treatment with alendronate to either continue on alendronate for another 5 years or switch to placebo (JAMA 2006;296:2927-38). They focused on the 437 patients who switched to placebo, and assessed the BMD measures that were associated with fracture risk during follow-up.
Even among patients who had relatively substantial bone loss during 1 year of follow-up, the amount of lost BMD did not significantly correlate with their follow-up fracture rate. The researchers saw no significant link to fracture rate among the 21% of patients who lost at least 3% of their BMD during the first year of follow-up, or among the 8% of patients who lost at least 5% of their BMD during 1 year of follow-up.
When a patient starts bisphosphonate treatment, BMD typically rises sharply for a couple of years, then plateaus and remains stable, Dr. Bauer said. After patients stop bisphosphonate treatment, their BMD usually declines gradually. Prior analysis of the FLEX data showed that patients who failed to reach a T score of at least –2.5 usually benefited, with fewer fractures, when they remained on bisphosphonate treatment. The new findings suggest that patients with T scores of less than –1.5 may also benefit from continued treatment. On the other hand, when patients reach an adequate BMD (T score greater than –1.5), "it’s not unreasonable to talk with the patient about the potential risks and benefits of a drug holiday," Dr. Bauer said.
The FLEX study was funded by Merck, the company that markets alendronate (Fosamax). Dr. Bauer said that he has received research funding from Amgen, Merck, and Novartis.
Major Finding: Patients withdrawn from bisphosphonate treatment after 5 years on the drug with a total hip T score of more than –1.5 had a 9% risk for any clinical fracture during the following 5 years. Patients with a T score of –1.5 to –2.1 at the time bisphosphonate treatment stopped had a 23% fracture rate during the next 5 years. Patients with a total hip T score of less than –2.1 had a 33% fracture rate during the 5 years after bisphosphonate withdrawal.
Data Source: Review of 437 postmenopausal women enrolled in the FLEX study who were randomized to placebo following 5 years of continuous alendronate treatment.
Disclosures: Dr. Bauer said he received research funding from Amgen, Merck, and Novartis. The FLEX study was sponsored by Merck.
Prevalence of ADHD in U.S. Youths Reached 9.5% in 2007-2008
NEW YORK – The U.S. prevalence of attention-deficit/hyperactivity disorder among children and adolescents rose to its highest level in 2007-2008, with 9.5% of children and adolescents ever diagnosed, according to a federally sponsored national telephone survey covering more than 70,000 American children and adolescents.
Although the reasons behind the increased prevalence of attention-deficit/hyperactivity disorder (ADHD) remain unclear, the increase over the 7.8% rate of ever diagnosed ADHD in 2003-2004 reached statistical significance and appears real.
"We think something is going on," Melissa L. Danielson said while presenting a poster at the annual meeting of the American Academy of Child & Adolescent Psychiatry. Explanations might include increased awareness of the diagnosis, and more children and adolescents undergoing formal evaluation, she said. Backing up the national finding are the data on ADHD prevalence in each individual state. Prevalence rates were up in almost every state, and in 13 states the recent increases reached statistical significance, she said in an interview.
The National Survey of Children’s Health, run by the Centers for Disease Control and Prevention, receives its primary funding from the Department of Health and Human Services. In 2007 and 2008, a randomly selected sample of U.S. parents answered a telephone survey about their children’s health. Parents answered four questions about ADHD: Did they have a child aged 4-17 years who ever received a diagnosis of the disorder? Did their child have a current diagnosis? Is the ADHD mild, moderate, or severe? Does the child receive medication?
Extrapolated survey results showed that in 2007-2008, 4.1 million American children and adolescents had a current diagnosis, or 7.2% of the 4- to 17-year-old population (less than the 9.5% ever diagnosed with ADHD). Of these, two-thirds – 2.7 million – received medical treatment for their ADHD, and parents said that 570,000 (14%) of their kids had severe ADHD. About half had mild ADHD, with the remaining patients having what their parents described as moderate disorder. Subgroups with significantly less-severe ADHD included girls and adolescents aged 15-17 years.
Boys, adolescents aged 15-17 years, and multiracial and non-Hispanic children all had significantly higher prevalence rates of current ADHD relative to their respective comparator subgroups. Gender, race, and ethnicity had no linkage with medication use, but medication treatment occurred less often in the 15- to 17-year-olds, said Ms. Danielson, a statistician on the Child Development Studies team of the CDC in Atlanta. Children aged 11-14 years had the highest rate of medication use, 73%, while adolescents aged 15-17 years had the lowest rate, 56%, a statistically significant difference. Children aged 11-14 years with severe disease had a roughly 90% rate of medical treatment; teens aged 15-17 years with mild ADHD had the lowest medication rate, about 50%.
Children and teens with a concurrent diagnosis of disruptive behavior disorder had a statistically significant 50% adjusted relative increase in the rate of receiving medical treatment for their ADHD, and also had a significantly higher prevalence of current, severe ADHD. More than 30% of children with the combination of current ADHD and disruptive behavior disorder had severe ADHD.
Ms. Danielson said that she had no disclosures.
NEW YORK – The U.S. prevalence of attention-deficit/hyperactivity disorder among children and adolescents rose to its highest level in 2007-2008, with 9.5% of children and adolescents ever diagnosed, according to a federally sponsored national telephone survey covering more than 70,000 American children and adolescents.
Although the reasons behind the increased prevalence of attention-deficit/hyperactivity disorder (ADHD) remain unclear, the increase over the 7.8% rate of ever diagnosed ADHD in 2003-2004 reached statistical significance and appears real.
"We think something is going on," Melissa L. Danielson said while presenting a poster at the annual meeting of the American Academy of Child and Adolescent Psychiatry. Explanations might include increased awareness of the diagnosis, and more children and adolescents undergoing formal evaluation, she said. Backing up the national finding are the data on ADHD prevalence in each individual state. Prevalence rates have been up in almost every state, and in 13 states recent increases reached statistical significance, she said in an interview.
The National Survey of Children’s Health, run by the Centers for Disease Control and Prevention, receives its primary funding from the Department of Health and Human Services. In 2007 and 2008, a randomly selected sample of U.S. parents answered a telephone survey about their children’s health. Parents answered four questions about ADHD: Did they have a child aged 4-17 years who ever received a diagnosis of disorder? Did their child have a current diagnosis? Is the ADHD mild, moderate, or severe? Does the child receive medication?
Extrapolated survey results showed that in 2007-2008, 4.1 million American children and adolescents had a current diagnosis, 7.2% of the 4- to 17-year-old population (less than the 9.5% ever diagnosed with ADHD). Of these, two-thirds – 2.7 million – received medical treatment for their ADHD, and parents said that 570,000 (14%) of their kids had severe ADHD. About half had mild ADHD, with the remaining patients having what their parents described as moderate disorder. Subgroups with significantly less-severe ADHD included girls and adolescents aged 15-17.
Boys, adolescents aged 15-17 years, and multiracial and non-Hispanic children all had significantly higher prevalence rates of current ADHD relative to their respective comparator subgroups. Gender, race, and ethnicity had no linkage with medication use, but medication treatment occurred less often in the 15- to 17-year-olds, said Ms. Danielson, a statistician on the Child Development Studies team of the CDC in Atlanta. Children aged 11-14 years had the widest medication use, 73%, while adolescents aged 15-17 had the lowest rate of medication, 56%, a statistically significant difference. Children aged 11-14 years with severe disease had a roughly 90% rate of medical treatment; teens aged 15-17 years with mild ADHD had the lowest medication rate, about 50%.
Children and teens with a concurrent diagnosis of disruptive behavior disorder had a statistically significant, 50% adjusted, relative increased rate of receiving medical treatment for their ADHD and also had a significantly higher prevalence of current, severe ADHD. More than 30% of children with the combination of current ADHD and disruptive behavior disorder had severe ADHD.
Ms. Danielson said that she had no disclosures.
NEW YORK – The U.S. prevalence of attention-deficit/hyperactivity disorder among children and adolescents rose to its highest level in 2007-2008, with 9.5% of children and adolescents ever diagnosed, according to a federally sponsored national telephone survey covering more than 70,000 American children and adolescents.
Although the reasons behind the increased prevalence of attention-deficit/hyperactivity disorder (ADHD) remain unclear, the increase over the 7.8% rate of ever diagnosed ADHD in 2003-2004 reached statistical significance and appears real.
"We think something is going on," Melissa L. Danielson said while presenting a poster at the annual meeting of the American Academy of Child and Adolescent Psychiatry. Explanations might include increased awareness of the diagnosis, and more children and adolescents undergoing formal evaluation, she said. Backing up the national finding are the data on ADHD prevalence in each individual state. Prevalence rates have been up in almost every state, and in 13 states recent increases reached statistical significance, she said in an interview.
The National Survey of Children’s Health, run by the Centers for Disease Control and Prevention, receives its primary funding from the Department of Health and Human Services. In 2007 and 2008, a randomly selected sample of U.S. parents answered a telephone survey about their children’s health. Parents answered four questions about ADHD: Did they have a child aged 4-17 years who had ever received a diagnosis of the disorder? Did the child have a current diagnosis? Was the ADHD mild, moderate, or severe? Did the child receive medication?
Extrapolated survey results showed that in 2007-2008, 4.1 million American children and adolescents had a current ADHD diagnosis, or 7.2% of the 4- to 17-year-old population (compared with the 9.5% ever diagnosed with ADHD). Of these, two-thirds (2.7 million) received medical treatment for their ADHD, and parents said that 570,000 (14%) of their children had severe ADHD. About half had mild ADHD, with the remaining patients having what their parents described as moderate disorder. Subgroups with significantly less-severe ADHD included girls and adolescents aged 15-17 years.
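The extrapolated counts above are internally consistent, and a back-of-envelope sketch makes the arithmetic explicit. The counts and percentages below come from the article; the implied size of the 4- to 17-year-old population is derived here, not something the survey report states.

```python
# Back-of-envelope check of the survey's extrapolated figures.
# Inputs are the article's reported numbers; the implied population
# size is derived from them, not reported directly.

current_diagnoses = 4_100_000   # children/adolescents with current ADHD
current_prevalence = 0.072      # 7.2% of the 4- to 17-year-old population

implied_population = current_diagnoses / current_prevalence
medicated = current_diagnoses * 2 / 3   # "two-thirds ... received medical treatment"
severe = 570_000                        # parents reporting severe ADHD

print(f"implied 4-17 population: {implied_population / 1e6:.0f} million")  # ~57 million
print(f"medicated: {medicated / 1e6:.1f} million")                         # 2.7 million
print(f"severe share: {severe / current_diagnoses:.0%}")                   # 14%
```

The derived figures reproduce the article's 2.7 million medicated and 14% severe shares, which is a useful sanity check on the extrapolation.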
Boys, adolescents aged 15-17 years, and multiracial and non-Hispanic children all had significantly higher prevalence rates of current ADHD relative to their respective comparator subgroups. Gender, race, and ethnicity had no linkage with medication use, but medication treatment occurred less often in the 15- to 17-year-olds, said Ms. Danielson, a statistician on the Child Development Studies team of the CDC in Atlanta. Children aged 11-14 years had the highest rate of medication use, 73%, while adolescents aged 15-17 years had the lowest, 56%, a statistically significant difference. Children aged 11-14 years with severe disease had a roughly 90% rate of medical treatment; teens aged 15-17 years with mild ADHD had the lowest medication rate, about 50%.
Children and teens with a concurrent diagnosis of disruptive behavior disorder had a statistically significant, adjusted 50% relative increase in their rate of receiving medical treatment for their ADHD, and they also had a significantly higher prevalence of current, severe ADHD. More than 30% of children with both current ADHD and disruptive behavior disorder had severe ADHD.
Ms. Danielson said that she had no disclosures.
FROM THE ANNUAL MEETING OF THE AMERICAN ACADEMY OF CHILD AND ADOLESCENT PSYCHIATRY
Major Finding: During 2007-2008, U.S. children and adolescents aged 4-17 years had a 9.5% prevalence rate of ever having attention-deficit/hyperactivity disorder, a significant increase from the 7.8% rate in 2003-2004.
Data Source: The National Survey of Children’s Health, a random-sample telephone survey of parents with data on more than 70,000 U.S. children and adolescents aged 4-17 years run by the Centers for Disease Control and Prevention.
Disclosures: Ms. Danielson said that she had no disclosures.
Calcium Supplement Use Linked With Higher Cardiovascular Disease Risk
TORONTO – Calcium supplements appear to cause more harm than good, according to a meta-analysis of 28,000 participants in nine trials that includes a new analysis of more than 16,000 participants in the Women’s Health Initiative, but the reanalysis has raised concerns among the WHI’s original investigators.
"We calculate that for every 1,000 people treated with calcium for 5 years, it will lead to four additional myocardial infarctions, four additional strokes, and two additional deaths, while preventing three fractures," Dr. Ian R. Reid said at the annual meeting of the American Society for Bone and Mineral Research.
"I don’t prescribe calcium supplements to anyone anymore for preventing bone fractures. People should get calcium from their diet," said Dr. Reid, a professor of medicine at the University of Auckland, New Zealand. "We believe there is a fundamental difference between dietary calcium and supplemental calcium." He speculated that a calcium supplement, even at a relatively modest dose of 500 mg, produces a "borderline hypercalcemia" that persists for several hours and raises the risk for MI or stroke, the same way that people in the highest quartile for normal blood calcium levels have an increased risk for cardiovascular disease events.
But the researchers who ran the Women’s Health Initiative (WHI) study questioned the legitimacy of the new analysis beyond a hypothesis-generating exercise.
"The WHI investigators have concerns about the reanalysis and whether omitting the subgroups with favorable results is appropriate," commented Dr. JoAnn E. Manson, professor of medicine at Harvard University and chief of the division of preventive medicine at Brigham and Women’s Hospital, both in Boston, and a WHI coinvestigator.
Dr. Reid and his associates initially documented their finding that calcium supplements raise cardiovascular risk in a pair of meta-analyses published online last July (BMJ 2010;341:c3691). They reported that calcium supplement use was linked with statistically significant relative increases in MI risk of 27% and 31% in the two meta-analyses.
To further explore the impact of calcium supplements on cardiovascular risk, they received permission from the National Heart, Lung, and Blood Institute to reanalyze data collected in a WHI study of more than 36,000 postmenopausal women randomized to receive a daily supplement with 500 mg calcium plus vitamin D or placebo. The original report from the WHI investigators showed that the calcium plus vitamin D treatment did not significantly increase or decrease coronary or cerebrovascular risk in generally healthy postmenopausal women during 7 years of treatment (Circulation 2007;115:846-54).
But the WHI study design allowed the participants to take more calcium supplements in addition to their study agent, if they wanted to do so. At baseline, more than 19,000 (54%) of the women in the study reported using a calcium supplement on their own, and at the end of the study 69% reported the practice, Dr. Reid said. To address the possible confounding this may have caused, he focused his analysis on the 16,718 women in the WHI study who reported not using a personal calcium supplement at entry into the study.
In this subgroup, the MI rate was 2.5% among women randomized to calcium supplementation and 2.0% among women in the placebo arm, a statistically significant 22% relative increase with the calcium supplement. The rate of MI or stroke ran a relative 16% higher among the women taking the calcium supplement, also statistically significant. The results showed no significant effect of calcium supplementation on stroke rate. "We saw the same effect as in the meta-analysis," Dr. Reid said.
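As a quick illustration of how the absolute rates relate to the relative figure, the crude ratio of the two reported percentages can be computed directly. Note that the 22% figure in the analysis presumably comes from the investigators' underlying (time-to-event, adjusted) model, so the simple ratio below only approximates it.

```python
# Crude relative-risk arithmetic from the reported event rates.
# The analysis's 22% figure likely reflects an adjusted, time-to-event
# model; the raw ratio of the quoted percentages is only an approximation.

mi_rate_calcium = 0.025   # 2.5% MI rate, calcium arm
mi_rate_placebo = 0.020   # 2.0% MI rate, placebo arm

crude_relative_increase = mi_rate_calcium / mi_rate_placebo - 1
absolute_risk_difference = mi_rate_calcium - mi_rate_placebo

print(f"crude relative increase: {crude_relative_increase:.0%}")  # 25%
print(f"absolute difference: {absolute_risk_difference:.1%}")     # 0.5%
```

The contrast between a roughly 25% crude relative increase and a 0.5% absolute difference is why both framings matter when weighing the risk.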
But if Dr. Reid’s analysis did not start with a prior hypothesis, this finding can only be considered hypothesis generating, not hypothesis testing, Dr. Manson said in an interview. "Many subgroups were tested in the WHI, and some would be expected to show significant effect modification by chance," she pointed out. In addition, randomization made background levels of calcium use similar in the two treatment arms and thereby neutralized background calcium use as a possible confounder. Dr. Manson also noted that if supplemental calcium posed a risk, the event rates should have been highest among women taking both the study calcium dose and an additional dose on their own.
When the Auckland researchers added the results from the WHI subanalysis to their previously reported meta-analysis, they "just reinforced the trends and made them more significant," Dr. Reid said in an interview.
When data from the WHI subgroup that did not use personal calcium supplements at baseline were added to the meta-analysis, the results showed that those randomized to calcium supplements had a 24% relative excess of MIs, a 15% relative excess of strokes, and a 16% relative excess of MI or stroke, he reported.
"What we now have is six or seven very large trials, and [the results they show] for MI all line up very consistently, without significant heterogeneity. When you look at risk vs. benefit, the evidence for an increased risk of MI is stronger than the evidence that calcium supplements prevent bone fractures. It’s hard to justify continuing calcium supplements," Dr. Reid said.
Dr. Reid said that he had no relevant disclosures.
FROM THE ANNUAL MEETING OF THE AMERICAN SOCIETY FOR BONE AND MINERAL RESEARCH
Calcium Supplement Use Linked With Higher Cardiovascular Disease Risk
TORONTO – Calcium supplements appear to cause more harm than good, according to a meta-analysis of 28,000 participants in nine trials that includes a new analysis of more than 16,000 participants in the Women’s Health Initiative, but the reanalysis has raised concerns among the WHI’s original investigators.
"We calculate that for every 1,000 people treated with calcium for 5 years, it will lead to four additional myocardial infarctions, four additional strokes, and two additional deaths, while preventing three fractures," Dr. Ian R. Reid said at the annual meeting of the American Society for Bone and Mineral Research.
"I don’t prescribe calcium supplements to anyone anymore for preventing bone fractures. People should get calcium from their diet," said Dr. Reid, a professor of medicine at the University of Auckland, New Zealand. "We believe there is a fundamental difference between dietary calcium and supplemental calcium." He speculated that a calcium supplement, even at a relatively modest dose of 500 mg, produces a "borderline hypercalcemia" that persists for several hours and raises the risk for MI or stroke, the same way that people in the highest quartile for normal blood calcium levels have an increased risk for cardiovascular disease events.
But the researchers who ran the Women’s Health Initiative (WHI) study questioned the legitimacy of the new analysis beyond a hypothesis-generating exercise.
"The WHI investigators have concerns about the reanalysis and whether omitting the subgroups with favorable results is appropriate," commented Dr. JoAnn E. Manson, professor of medicine at Harvard University and chief of the division of preventive medicine at Brigham and Women’s Hospital, both in Boston, and a WHI coinvestigator.
Dr. Reid and his associates initially documented their finding that calcium supplements raise cardiovascular risk in a pair of meta-analyses published online last July (BMJ 2010;341:c3691). They reported that calcium supplement use was linked with a statistically significant 27% and 31% relatively increased risk for MI in two separate meta-analyses.
To further explore the impact of calcium supplements on cardiovascular risk, they received permission from the National Heart, Lung, and Blood Institute to reanalyze data collected in a WHI study of more than 36,000 postmenopausal women randomized to receive a daily supplement with 500 mg calcium plus vitamin D or placebo. The original report from the WHI investigators showed that the calcium plus vitamin D treatment did not significantly increase or decrease coronary or cerebrovascular risk in generally healthy postmenopausal women during 7 years of treatment. (Circulation 2007;115:846-54).
But the WHI study design allowed the participants to take more calcium supplements in addition to their study agent, if they wanted to do so. At baseline, more than 19,000 (54%) of the women in the study reported using a calcium supplement on their own, and at the end of the study 69% reported the practice, Dr. Reid said. To address the possible confounding this may have caused, he focused his analysis on the 16,718 women in the WHI study who reported not using a personal calcium supplement at entry into the study.
In this subgroup, the MI rate ran 2.5% in women randomized to calcium supplement treatment, and 2.0% among women in the placebo arm, a 22% relative increased MI rate with the calcium supplement that was statistically significant. The rate of MI or stroke ran a relative 16% higher among the women taking the calcium supplement, which was also statistically significant. The results showed no significant effect of calcium supplementation on stroke rate. "We saw the same effect as in the meta-analysis," Dr. Reid said.
But if Dr. Reid’s analysis did not start with a prior hypothesis, this finding can only be considered hypothesis generating, not hypothesis testing, Dr. Manson said in an interview. "Many subgroups were tested in the WHI, and some would be expected to show significant effect modification by chance," she pointed out. In addition, randomization made background levels of calcium use similar in the two treatment arms and thereby neutralized background calcium use as a possible confounder. Dr. Manson also noted that if supplemental calcium posed a risk, the event rates should have been highest among women taking both the study calcium dose and an additional dose on their own.
When the Auckland researchers added the results from the WHI subanalysis to their previously reported meta-analysis, they "just reinforced the trends and made them more significant," Dr. Reid said in an interview.
When data from the WHI subgroup that did not use personal calcium supplements at baseline were added to the meta-analysis, the results showed that those who did take supplements had a 24% relative excess of MIs, a 15% relative excess of stroke, and a 16% relative excess of MI or stroke, he reported.
"What we now have is six or seven very large trials, and [the results they show] for MI all line up very consistently, without significant heterogeneity. When you look at risk vs. benefit, the evidence for an increased risk of MI is stronger than the evidence that calcium supplements prevent bone fractures. It’s hard to justify continuing calcium supplements," Dr. Reid said.
Dr. Reid said that he had no relevant disclosures.
TORONTO – Calcium supplements appear to cause more harm than good, according to a meta-analysis of 28,000 participants in nine trials that includes a new analysis of more than 16,000 participants in the Women’s Health Initiative, but the reanalysis has raised concerns among the WHI’s original investigators.
"We calculate that for every 1,000 people treated with calcium for 5 years, it will lead to four additional myocardial infarctions, four additional strokes, and two additional deaths, while preventing three fractures," Dr. Ian R. Reid said at the annual meeting of the American Society for Bone and Mineral Research.
"I don’t prescribe calcium supplements to anyone anymore for preventing bone fractures. People should get calcium from their diet," said Dr. Reid, a professor of medicine at the University of Auckland, New Zealand. "We believe there is a fundamental difference between dietary calcium and supplemental calcium." He speculated that a calcium supplement, even at a relatively modest dose of 500 mg, produces a "borderline hypercalcemia" that persists for several hours and raises the risk for MI or stroke, the same way that people in the highest quartile for normal blood calcium levels have an increased risk for cardiovascular disease events.
But the researchers who ran the Women’s Health Initiative (WHI) study questioned the legitimacy of the new analysis beyond a hypothesis-generating exercise.
"The WHI investigators have concerns about the reanalysis and whether omitting the subgroups with favorable results is appropriate," commented Dr. JoAnn E. Manson, professor of medicine at Harvard University and chief of the division of preventive medicine at Brigham and Women’s Hospital, both in Boston, and a WHI coinvestigator.
Dr. Reid and his associates initially documented their finding that calcium supplements raise cardiovascular risk in a pair of meta-analyses published online last July (BMJ 2010;341:c3691). They reported that calcium supplement use was linked with a statistically significant 27% and 31% relatively increased risk for MI in two separate meta-analyses.
To further explore the impact of calcium supplements on cardiovascular risk, they received permission from the National Heart, Lung, and Blood Institute to reanalyze data collected in a WHI study of more than 36,000 postmenopausal women randomized to receive a daily supplement with 500 mg calcium plus vitamin D or placebo. The original report from the WHI investigators showed that the calcium plus vitamin D treatment did not significantly increase or decrease coronary or cerebrovascular risk in generally healthy postmenopausal women during 7 years of treatment. (Circulation 2007;115:846-54).
But the WHI study design allowed the participants to take more calcium supplements in addition to their study agent, if they wanted to do so. At baseline, more than 19,000 (54%) of the women in the study reported using a calcium supplement on their own, and at the end of the study 69% reported the practice, Dr. Reid said. To address the possible confounding this may have caused, he focused his analysis on the 16,718 women in the WHI study who reported not using a personal calcium supplement at entry into the study.
In this subgroup, the MI rate ran 2.5% in women randomized to calcium supplement treatment, and 2.0% among women in the placebo arm, a 22% relative increased MI rate with the calcium supplement that was statistically significant. The rate of MI or stroke ran a relative 16% higher among the women taking the calcium supplement, which was also statistically significant. The results showed no significant effect of calcium supplementation on stroke rate. "We saw the same effect as in the meta-analysis," Dr. Reid said.
But if Dr. Reid’s analysis did not start with a prior hypothesis, this finding can only be considered hypothesis generating, not hypothesis testing, Dr. Manson said in an interview. "Many subgroups were tested in the WHI, and some would be expected to show significant effect modification by chance," she pointed out. In addition, randomization made background levels of calcium use similar in the two treatment arms and thereby neutralized background calcium use as a possible confounder. Dr. Manson also noted that if supplemental calcium posed a risk, the event rates should have been highest among women taking both the study calcium dose and an additional dose on their own.
When the Auckland researchers added the results from the WHI subanalysis to their previously reported meta-analysis, they "just reinforced the trends and made them more significant," Dr. Reid said in an interview.
When data from the WHI subgroup that did not use personal calcium supplements at baseline were added to the meta-analysis, the results showed that those who did take supplements had a 24% relative excess of MIs, a 15% relative excess of stroke, and a 16% relative excess of MI or stroke, he reported.
"What we now have is six or seven very large trials, and [the results they show] for MI all line up very consistently, without significant heterogeneity. When you look at risk vs. benefit, the evidence for an increased risk of MI is stronger than the evidence that calcium supplements prevent bone fractures. It’s hard to justify continuing calcium supplements," Dr. Reid said.
Dr. Reid said that he had no relevant disclosures.
TORONTO – Calcium supplements appear to cause more harm than good, according to a meta-analysis of 28,000 participants in nine trials that includes a new analysis of more than 16,000 participants in the Women’s Health Initiative, but the reanalysis has raised concerns among the WHI’s original investigators.
"We calculate that for every 1,000 people treated with calcium for 5 years, it will lead to four additional myocardial infarctions, four additional strokes, and two additional deaths, while preventing three fractures," Dr. Ian R. Reid said at the annual meeting of the American Society for Bone and Mineral Research.
"I don’t prescribe calcium supplements to anyone anymore for preventing bone fractures. People should get calcium from their diet," said Dr. Reid, a professor of medicine at the University of Auckland, New Zealand. "We believe there is a fundamental difference between dietary calcium and supplemental calcium." He speculated that a calcium supplement, even at a relatively modest dose of 500 mg, produces a "borderline hypercalcemia" that persists for several hours and raises the risk for MI or stroke, much as people in the highest quartile of the normal range for blood calcium have an increased risk for cardiovascular disease events.
But the researchers who ran the Women’s Health Initiative (WHI) study questioned the legitimacy of the new analysis beyond a hypothesis-generating exercise.
"The WHI investigators have concerns about the reanalysis and whether omitting the subgroups with favorable results is appropriate," commented Dr. JoAnn E. Manson, professor of medicine at Harvard University and chief of the division of preventive medicine at Brigham and Women’s Hospital, both in Boston, and a WHI coinvestigator.
Dr. Reid and his associates initially documented their finding that calcium supplements raise cardiovascular risk in a pair of meta-analyses published online last July (BMJ 2010;341:c3691). They reported that calcium supplement use was linked with statistically significant relative increases in MI risk of 27% and 31% in the two separate meta-analyses.
To further explore the impact of calcium supplements on cardiovascular risk, they received permission from the National Heart, Lung, and Blood Institute to reanalyze data collected in a WHI study of more than 36,000 postmenopausal women randomized to receive a daily supplement with 500 mg calcium plus vitamin D or placebo. The original report from the WHI investigators showed that the calcium plus vitamin D treatment did not significantly increase or decrease coronary or cerebrovascular risk in generally healthy postmenopausal women during 7 years of treatment (Circulation 2007;115:846-54).
But the WHI study design allowed the participants to take more calcium supplements in addition to their study agent, if they wanted to do so. At baseline, more than 19,000 (54%) of the women in the study reported using a calcium supplement on their own, and at the end of the study 69% reported the practice, Dr. Reid said. To address the possible confounding this may have caused, he focused his analysis on the 16,718 women in the WHI study who reported not using a personal calcium supplement at entry into the study.
In this subgroup, the MI rate ran 2.5% in women randomized to calcium supplement treatment and 2.0% among women in the placebo arm, a statistically significant 22% relative increase in MI rate with the calcium supplement. The rate of MI or stroke ran a relative 16% higher among the women taking the calcium supplement, a difference that was also statistically significant. The results showed no significant effect of calcium supplementation on stroke rate alone. "We saw the same effect as in the meta-analysis," Dr. Reid said.
But if Dr. Reid’s analysis did not start with a prior hypothesis, this finding can only be considered hypothesis generating, not hypothesis testing, Dr. Manson said in an interview. "Many subgroups were tested in the WHI, and some would be expected to show significant effect modification by chance," she pointed out. In addition, randomization made background levels of calcium use similar in the two treatment arms and thereby neutralized background calcium use as a possible confounder. Dr. Manson also noted that if supplemental calcium posed a risk, the event rates should have been highest among women taking both the study calcium dose and an additional dose on their own.
When the Auckland researchers added the results from the WHI subanalysis to their previously reported meta-analysis, they "just reinforced the trends and made them more significant," Dr. Reid said in an interview.
When data from the WHI subgroup that did not use personal calcium supplements at baseline were added to the meta-analysis, the results showed that those who did take supplements had a 24% relative excess of MIs, a 15% relative excess of stroke, and a 16% relative excess of MI or stroke, he reported.
"What we now have is six or seven very large trials, and [the results they show] for MI all line up very consistently, without significant heterogeneity. When you look at risk vs. benefit, the evidence for an increased risk of MI is stronger than the evidence that calcium supplements prevent bone fractures. It’s hard to justify continuing calcium supplements," Dr. Reid said.
Dr. Reid said that he had no relevant disclosures.
FROM THE ANNUAL MEETING OF THE AMERICAN SOCIETY FOR BONE AND MINERAL RESEARCH
Major Finding: People taking a calcium supplement showed a statistically significant 24% excess relative risk for MI, a 15% excess relative risk for stroke, and a 16% excess relative risk for MI or stroke.
Data Source: Meta-analysis of nine studies that compared calcium supplements with placebo in a total of more than 28,000 people.
Disclosures: Dr. Reid said that he had no relevant disclosures.