The debate over the safety of giving birth at home, both for the mother and for the infant, has continued for years. I’ve written about the issue myself. From time immemorial until about 75 years ago, most babies were born at home. Now it’s around 1% in the USA, although the proportion is much higher in many Western European countries. The shift to hospital births paralleled the growth of hospitals, pediatrics, and obstetrics. With that shift there has been a perceived decrease in women’s autonomy over their healthcare decisions. There has also been an unsurprising jump in the proportion of caesarean section deliveries, an operative procedure, and in various other medical interventions in labor and delivery, even though current data suggest the recent jump in caesarean delivery (now around 30%) has not added any benefit. The debate over whether the dominance of hospital births is a good thing or a bad thing (or neither) is much more than a medical debate; it’s also a social and political one. It’s also to some extent an issue of medical power, a struggle between physician obstetricians who deliver babies in the hospital and nurse midwives who often deliver babies at home. I’m very interested in the social and political aspects, but as a pediatrician I’m particularly concerned with the safety question: Is it more dangerous for your baby to be born at home?
One problem in answering this question is that most of the studies about the safety of home birth come from abroad. But a few years ago we got some data from the USA, published in the New England Journal of Medicine, entitled “Planned Out-of-Hospital Birth and Birth Outcomes.”
One big problem with evaluating previous data has been that vital statistics from birth certificates counted home births and hospital births, but did not identify as a separate category those women who planned to deliver at home but were then admitted to a hospital to deliver there because of some issue with the pregnancy. Such women were simply counted as hospital births. Also, the recent growth of birthing centers has introduced a setting intermediate between home and hospital. The large study linked above was from Oregon, using the years 2012 and 2013. It gives some useful information.
The bottom line is that children born to women who intended to give birth at home had an infant mortality rate of 3.9 deaths per 1,000 deliveries. This was significantly higher than the death rate of infants born in a hospital, which was 1.8 deaths per 1,000 deliveries. Not surprisingly, women who delivered in the hospital had a far higher rate of some kind of intervention, such as caesarean section.
What should we make of this? Thinking about risk can be difficult, and it’s important to understand the difference between relative and absolute risk. (I’ve written about that, too.) Media reports often obscure this key point. For example, in this study the risk of infant mortality increased 100% with home birth. 100%!! But twice a very small number is still a very small number. The absolute risk of a baby dying in a home delivery is very small. Still, it is higher.
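A back-of-the-envelope calculation (a sketch in Python, using the Oregon rates quoted above) shows how the same numbers look very different when framed as relative versus absolute risk:

```python
# Infant mortality rates from the Oregon study, per 1,000 deliveries
home_rate = 3.9      # planned out-of-hospital births
hospital_rate = 1.8  # planned in-hospital births

# Relative risk: the ratio of the two rates (roughly a doubling)
relative_risk = home_rate / hospital_rate

# Absolute risk difference: extra deaths per 1,000 deliveries
absolute_difference = home_rate - hospital_rate

print(f"Relative risk: {relative_risk:.2f}x")
print(f"Absolute difference: {absolute_difference:.1f} extra deaths per 1,000 deliveries")
```

A relative risk of about 2 makes an alarming headline, but the absolute difference, about 2 extra deaths per 1,000 deliveries, is the number a family actually has to weigh.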
What this means is that a woman deciding to deliver at home should understand all the facts. Some will not want to accept this increased risk, however small it is in absolute terms. Some will accept it. The same issue of the Journal had a good editorial discussing how to think about the issue. It’s a very good summary of the fundamental question. It’s all about the issue of acceptable risk, and how that varies with the person. The conclusion:
Ultimately, women’s choices for place of delivery will be determined by the extent of their tolerance for risk and which risks they most want to avoid.
All of us are aware of what has been termed our “obesity epidemic.” The current prevalence of obesity among adults in the US is around 40%, a dramatic increase over the past 50 years; it was about 15% in 1970. Rates are also increasing across first world countries, so we are not alone in this. Obesity is defined as a body mass index (BMI) of greater than 30. Values of 25 – 30 are termed overweight. BMI is weight in kilograms divided by height in meters squared.
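The BMI arithmetic is simple enough to sketch; this is a minimal illustration (the function names and the example weight and height are my own, not from any source):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / height_m ** 2

def classify(value: float) -> str:
    """The conventional categories described in the text."""
    if value >= 30:
        return "obese"
    if value >= 25:
        return "overweight"
    return "not overweight"

# Example: a 95 kg adult who is 1.75 m tall
print(round(bmi(95, 1.75), 1), classify(bmi(95, 1.75)))
```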
The graph shows the trends over the past decades and has some interesting features. Note that the percent of the population that is obese or extremely obese (BMI > 40) has increased but the percent classified as overweight has not. This suggests to me, although I haven’t seen anything written about it, that overweight and obese patients are two separate groups; the overweight are not destined to become obese. There are even some recent data that suggest being mildly overweight may actually be a good thing as you age.
Many explanations have been offered for the progressive increase in adult obesity, including increased intake of calories, often in the form of soft drinks, and sedentary lifestyle. The simple calculation of excess calories consumed versus calories burned offers a partial explanation, and certainly that’s what I was taught in medical school in the 1970s; obesity was simple arithmetic. It turns out things are more complicated than that. Genetics, for example, plays a large role, as do various hormonal systems.
I don’t follow the enormous medical literature on obesity closely, but this recent study really intrigued me. It was in a journal I hadn’t seen before, Economics and Human Biology. This seems appropriate, since the economic effects of the obesity epidemic are massive and getting larger all the time. The authors studied annual sugar consumption in the US population and compared it with obesity rates years later. Now, that approach is pretty reductionist in that it ignores the many kinds of calories that aren’t sugar, but the results are interesting. Their findings suggest that, among today’s adults, obesity correlates with overall sugar intake during their childhood years in the 1970s and 1980s. If this is the case, one would predict a decrease in obesity among adolescents and young adults now, because sugar intake in the US has decreased by 25% in the last decade. In fact, adolescent obesity prevalence, after a steady and seemingly inexorable rise, may actually have plateaued over the past 5 years or so.
The usual caveat that correlation does not indicate causation needs to be kept in mind, of course. Yet it makes biological sense to me. I think our metabolic state could carry a kind of “memory” of the milieu it experienced during early growth and development and respond in ways that persist for many years.
I have written previously on Kevin Pho’s useful KevinMD site about the alarming statistic that gunshot injuries are now the second most common cause of death among children. Between 2011 and 2015 there were over 21,000 children killed by guns. This recent study in Pediatrics, the journal of the American Academy of Pediatrics, further analyzes the question; it compares pediatric firearm fatality rates among the various states and then tests for correlation between children killed and the strictness of each state’s gun laws. There are extensive data on whole populations showing that stricter laws correlate with lower rates of firearm injuries, but pediatric fatalities have not been specifically investigated. Of course, as a PICU physician, somebody who takes care of children shot by guns, the latter question is of great interest to me.
Central to the work is developing some sort of grading scale for the strictness of gun laws. The authors didn’t use a scale they developed themselves — they used the 2011–2015 Gun Law Scorecard system from the Brady Campaign to Prevent Gun Violence. The higher the state gun law score, the stricter the firearms legislation. The authors also used what they termed secondary exposure variables. These were the presence or absence of individual laws previously associated with lower mortality rates in the total population of adults and children: universal background checks for firearm purchase, universal background checks for ammunition purchase, and identification requirements for firearms (microstamping, ballistic fingerprinting). Their findings are best summarized in this graph from the paper. It plots gun law score against the rate of children killed by guns.
The trend line for that graph is pretty striking. It’s clear that increasing strictness of gun laws correlates with fewer children killed by guns. This also fits with the experience of other Western countries. See the example of Australia, which greatly reduced gun deaths after tightening its gun laws.
The issue, of course, is that America has a gun culture unique in the Western world. Our courts have interpreted the rights of citizens to own guns quite broadly owing to the Second Amendment. It’s known the great majority of guns are owned by a minority of citizens. So the question is: How many dead children represent an acceptable price to pay for loose gun laws? Because it’s clear that, all slogans aside (e.g. “guns don’t kill people, people kill people”), looser laws lead to more deaths. The research is there. Now we have to decide if we want to do anything about it.
Every day we get bombarded in the news with health statistics. Coffee causes cancer! Coffee cures cancer! And so on. Many of these are meant to grab headlines (and, these days, web page clicks), and the articles they accompany are often very poor at telling the reader what the statistics mean. Health statistics can be complicated. Sad to say, even many physicians are pretty poor at sorting out the hype from the helpful. This article is a very helpful guide to finding your way when you are reading the health news. It’s called “Helping Doctors and Patients Make Sense of Health Statistics,” and it does just that. I’d bookmark it, or even print it out for future reference if you’re more old school. Don’t be put off by the somber-looking first page — it’s actually quite readable. I should point out here that, although I took statistics courses long ago and have used simple statistical tests in my own research career, I am by no means an expert. I always consulted a real statistician before submitting any research for publication.
The article starts with the common problem one sees in media reports: the difference between absolute and relative risk. The authors used the example of a scare over birth control pills that happened in 1995, when the U.K. Committee on Safety of Medicines issued a warning about a newer version of the pills. The committee sent a warning to all physicians that the newer pills were associated with a 100% rise in the risk of serious blood clots. One hundred percent! Yikes! The warning led many women to stop taking their pills, and there was a predictable rise in unwanted pregnancies, accompanied by an estimated 15,000 more abortions the following year. The effects lasted for years. What was the truth about this new risk?
The truth was that the newer pills were associated with a risk for serious blood clots of 2 per 7,000 women. For comparison, the earlier pills had been associated with a 1 per 7,000 women risk. And 2 is 100% more than 1, so that’s the increase in relative risk. But the increase in absolute risk was an additional 1 woman in 7,000. (It should be noted here that pregnancy itself is associated with an increased risk of blood clots.) I see this particular misunderstanding often in news reports regarding risks of medical procedures. When you read these stories you need to examine not just the relative risk, which often makes for good headlines; you need to look at the actual number, not just the percent change.
The pill scare hurt women, hurt the National Health Service, and even hurt the pharmaceutical industry. Among the few to profit were the journalists who got the story on the front page.
There is also this helpful article, from the always readable Scientific American. I like it a lot, too. It reminds us how statistical significance doesn’t always mean real life significance.
Imagine if there were a simple single statistical measure everybody could use with any set of data and it would reliably separate true from false. Oh, the things we would know! Unrealistic to expect such wizardry though, huh? Yet, statistical significance is commonly treated as though it is that magic wand. Take a null hypothesis or look for any association between factors in a data set and abracadabra! Get a “p value” over or under 0.05 and you can be 95% certain it’s either a fluke or it isn’t. You can eliminate the play of chance! You can separate the signal from the noise! Except that you can’t. That’s not really what testing for statistical significance does. And therein lies the rub.
The article points out that what the vaunted, and too often venerated, p value actually estimates is the probability of getting a result at least as extreme as the one observed if the null hypothesis is assumed to be true. It can’t on its own tell you whether that assumption was right, or whether the results would hold true in different circumstances. It provides a limited picture of probability, taking limited information about the data into account and giving only “yes” or “no” as options.
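To make that concrete, here is a small simulation of my own (not from the article): even when the null hypothesis is true by construction, in this case a fair coin, a "significant" result at the 0.05 level still turns up in roughly 1 experiment in 20, by design.

```python
import random
from math import comb

random.seed(1)

def p_value_two_sided(heads: int, n: int) -> float:
    """Exact two-sided binomial test against a fair coin (the null hypothesis)."""
    c_obs = comb(n, heads)
    # Sum the probabilities of every outcome at least as unlikely as the observed one
    return sum(comb(n, k) for k in range(n + 1) if comb(n, k) <= c_obs) / 2 ** n

# Run many experiments in which the null is true: 100 flips of a fair coin each
trials = 2000
false_positives = sum(
    p_value_two_sided(sum(random.random() < 0.5 for _ in range(100)), 100) < 0.05
    for _ in range(trials)
)
rate = false_positives / trials
print(rate)  # close to (slightly under) 0.05: "significant" flukes happen by design
```

The point is not the exact number but the principle: a p value below 0.05 does not eliminate chance; it only bounds how often chance alone would produce a result this extreme.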
If you are really interested in this topic you should read a bit about what’s called Bayesian statistics, named after the 18th-century mathematician Thomas Bayes. The basic notion here is that we need to consider our prior knowledge about something before applying statistical tests, and factor that knowledge into our statistical comparisons. In other words, all possibilities are not intrinsically equal going into the analysis. The debates between Bayesian and what are termed “frequentist” statisticians go back and forth. But what we should take home from these debates is that the science of statistics, like other sciences, is subject to revision and change over time.
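A classic illustration of why the prior matters, with hypothetical numbers of my own choosing: how believable a positive screening test is depends heavily on the prior probability, here the prevalence of the disease being screened for.

```python
# Hypothetical screening test (all numbers invented for illustration)
prevalence = 0.01       # prior: 1% of the screened population has the disease
sensitivity = 0.90      # P(test positive | disease)
false_positive = 0.05   # P(test positive | no disease)

# Bayes' rule: P(disease | positive test)
p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(f"{posterior:.1%}")  # about 15%: most positives are false when the disease is rare
```

The same test applied to a high-prevalence population would yield a much higher posterior probability, which is exactly the Bayesian point: the data alone don't tell the whole story.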
A final key point is to look at medical headlines of new medical breakthroughs and try to decide if the findings really are “significant” in real life. Is there only a tiny effect, really, even though the p value is “significant” at the 0.05 level? Also beware of what we call “data dredging,” in which multiple comparisons are made using the same data set. When you do that the chances of coming up with a significant, yet spurious association go up.
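The multiple-comparisons problem behind “data dredging” is easy to quantify. Under the usual 0.05 threshold, the chance of at least one spurious hit grows quickly with the number of comparisons tried. This is a simplified sketch assuming independent comparisons; real comparisons on one data set are usually correlated, but the qualitative lesson holds:

```python
alpha = 0.05  # conventional significance threshold

# Probability of at least one false positive among n independent comparisons
# when every null hypothesis is actually true
for n in (1, 5, 20, 100):
    p_spurious = 1 - (1 - alpha) ** n
    print(f"{n:3d} comparisons: {p_spurious:.0%} chance of a spurious 'finding'")
```

With 20 comparisons the chance of at least one spurious “significant” association is already well over half, which is why unplanned, repeated testing of the same data set is so treacherous.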
All of this has made some people call for some rudimentary statistical training to be part of the standard mathematics curriculum at the high school level. I think this is a good idea. I didn’t get introduced to any statistical concepts in high school, and I took all the math available. That should change if we expect the mass of our citizenry to be competent to judge things for themselves. Medical journalists definitely need this knowledge because currently many do a terrible job interpreting medical reports. The two articles I linked are a great place to start.
We believe that the current entrepreneurial development model for antibiotics is broken and needs to be fundamentally transformed.
This provocative opinion is from a recent editorial in The New England Journal of Medicine. The introduction of penicillin, the first antibiotic miracle drug, led to an 80% reduction in mortality from infectious diseases. Other antibiotics quickly followed, reducing death rates even further. Over the past several decades, however, the discovery of new antibiotics has greatly slowed; most are what are called “me too” drugs that are potentially profitable for the manufacturer but not in any way ground-breaking. Emerging resistance to antibiotics was noted soon after their discovery but newer agents appeared to keep us one step ahead of the pathogens. This breathing room may now have disappeared — we now are confronted with pathogenic bacteria that are completely resistant to all known antibiotics. Some have termed our new situation the post-antibiotic era. So we desperately need newer agents to treat infection. Where are these to come from?
The editorial writers describe how ineffectual our current model is for developing these essential new antibiotics. For one thing, development costs are enormous — up to two billion dollars. There is also this problem:
Rising rates of resistance appear to create new market opportunities for antibiotics. However, the absolute number of infections caused by each type of resistant bacterium is relatively small. Each newly approved antibiotic thus captures an ever-shrinking share of an increasingly splintered market — a problem that will only worsen over time.
A widely influential solution was proposed by the economist Jim O’Neill in 2016. His idea was to offer a variety of special financial incentives to drug companies to develop new antibiotics. Now he says it’s simply time to “just take it away from them and take it over.” The authors of The New England Journal editorial propose a model consisting of nonprofit organizations to focus on all aspects of preventing infections — not just new antibiotics but also things like vaccines, immunotherapies, and inflammatory modulators. I agree such a multifaceted approach is important because resistance among bacteria will always be an issue. A key principle here is using multiple approaches that work in different ways. We certainly use that principle in infectious disease practice by combining antibiotics that work in different ways.
Establishing new ways of organizing our fight against infection would be difficult. But market-based, for-profit approaches simply haven’t worked at all. Drug companies are actually losing money trying. And thus as a society we’re losing the battle.
This one isn’t really about children specifically, but I found it fascinating. It recently appeared in the prestigious scientific journal Nature. Humans like music. The kind of music we like varies greatly, but love of music and rhythm seems to be something that crosses all cultural boundaries. Why is this? It would appear to be something intrinsic to being human, which implies that love of music is hard-wired into our brains. So why is that? Until I read the article I was unaware there is an entire scientific field devoted to the neuroscience of music appreciation. Some experts in this field believe music appeared even before the maturation of language:
Somewhere along the evolutionary way, our ancestors, with very limited language but with considerable emotional expression, began to articulate and gesticulate feelings: denotation before connotation. But, as the philosopher Susanne Langer noted, ‘The most highly developed type of such purely connotational semantic is music.’ In other words, meaning in music came to us before meaning given by words.
The authors of the study investigated the responses to various musical stimuli in macaque monkeys and compared them with those in human brains. There are portions of the brain devoted to perception of musical pitch, and the investigators used those areas for comparison. The research team set out to compare how the brains of humans and those of rhesus macaques reacted to auditory stimuli that characterize music and speech. Speech and music contain harmonic frequency components, which are perceived to have pitch. Highly inflected human languages in particular, such as Chinese, rely on pitch and tone to convey meaning, and humans recognize this very early in life. Humans have cortical regions with a strong response preference for harmonic tones over noise. But is the same true for nonhuman primates? The answer was no. In their words: “The results raise the possibility that these sounds, which are embedded in speech and music, may have shaped the basic organization of the human brain.”
Humans but not macaques showed regions with a strong preference for harmonic sounds compared to noise, measured with both synthetic tones and macaque vocalizations. . . . This species difference may be driven by the unique demands of speech and music perception in humans.
So OK — we differ from monkeys. But pet owners will tell you that other animals besides us are affected by music, like the dog howling along in the picture above. Some research suggests dogs find classical music calming and heavy metal annoying (just like me!). As a pediatrician, and like many parents, I have noticed that infants respond to music. This begins so early that it strongly suggests to me the circuits to respond to pitch and harmony are already hard-wired into the human brain at birth.
At any rate, this little excursion is a great example of why it’s fascinating to peruse from time to time general scientific journals. You come across things you never otherwise would have encountered.
I wrote about this topic a few years back, but the recent outbreak of measles has once again ignited the debate of just what the government has the right to do or not do in compelling individual actions in support of public health. This is an old question, and it’s worth considering it in historical context.
One aspect of the endless vaccine debate is the aspect of coercion some parents feel about requiring children to be vaccinated before they can go to school. Vaccination is mandated for school attendance in all states. But this isn’t really an absolute requirement. Although all 50 states ostensibly require vaccination, all but 5 (Mississippi, West Virginia, and more recently California, New York, and Maine) allow parents to opt out for religious reasons, and 15 states allow this for philosophical reasons. (See here for a current list.) Still, in general vaccines are required unless the child has a medical reason not to get them, such as having a problem with the immune system. Is this an unprecedented use of state power? I don’t think it is.
In fact, historically there have been many examples of the government inserting itself into the healthcare decisions of individuals and families in order to protect the public health. Some of these go back many years. Quarantine, for example, goes back to medieval times, centuries before germs were discovered. Since 1944 it has been a power of the federal government; federal agents may detain and send for medical examination persons entering the country suspected of carrying one of a list of communicable diseases. Quarantine has also been used by local and state governments, particularly in the pre-antibiotic era. Measles is a good example, as you can see from the photograph above. Quarantine can be abused, and has in fact been abused in the past to discriminate against certain minority groups. A paper from the American Bar Association details some of those instances here. The paper even questions whether quarantine should be abolished for these reasons. But the practice is a very old one.
Seaports have long been sites of quarantine enforcement. In colonial Pennsylvania, for example, ships bound for Philadelphia had to stop at an island in the Delaware River for up to 30 days to ensure they were not carrying any disease. Note that an island was chosen since it is easier to isolate ships there. During the cholera epidemics in the mid-nineteenth century the quarantine of ships arriving from abroad was common. It should be noted though that, prior to the acceptance of the germ theory, enforcement of quarantine was at least as concerned with the cargo as it was the persons on the ship. But the legality of the practice was well accepted.
Laws requiring mandatory vaccination have been around for over a century. Some opposition to them has been around just as long. The constitutionality of these laws was affirmed in the famous decision by the Supreme Court in 1905 — Jacobson v. Massachusetts. The case in question concerned smallpox vaccination, and of course we have many more vaccinations now. There have since been multiple cases concerning mandatory vaccination for school attendance; in all cases the courts have ruled in favor of these mandates. If you are interested in reading deeper into this controversy a good person to consult is Professor Dorit Reiss, a law professor at the University of California. She has devoted her career to examining vaccine law and politics. Here is a place where you can find links to some of her many publications on the subject. You can also read a nice review of the historical controversies over quarantine here.
Of course the government mandates many things for the protection of public health. Milk is pasteurized (although there are raw milk enthusiasts who object — and many of whom get sick as a result), water is purified, and dirty restaurants can be closed. Like quarantine, these measures restrict our personal freedom a little, but what about government-mandated medical treatment? That sounds a bit more like the situation with compulsory vaccination of children. As it happens, there are more recent examples of compulsory treatment, particularly involving tuberculosis.
A couple of decades ago I was involved in the case of a woman with active tuberculosis who refused treatment for it. Worse, her particular strain of TB was highly resistant to many antibiotics, so if it spread it would represent a real public health emergency. The district judge agreed. He confined the woman to the hospital against her will so she could be given anti-TB medications until she was no longer infectious to others. At the time I thought this was pretty unusual. When I looked into it, though, I found that there have been many instances of people with TB being confined against their will until they were no longer a threat to others. The ABA link above lists several examples of this.
So it’s clear to me there is a long tradition of the state restricting personal freedom in the service of protecting the public health. Like everything, of course, the devil is in the details. To me the guiding principle is that your right to swing your fist ends where my nose begins.
The common practice in this country (although not everywhere—Europe, for example) has long been to treat all acute middle ear infections (otitis media) with antibiotics. This is not necessarily needed. We now know that for many children another reasonable approach is to wait a day or so to see if the symptoms get better on their own without antibiotics. Parents have an important role in making this choice. If you and the doctor decide to wait on antibiotic treatment, you can still treat fever and pain with acetaminophen (Tylenol) or ibuprofen (Motrin). Numbing ear drops are also available that, when dripped down the ear canal onto the eardrum, directly relieve the pain there.
If you think about it, this newer understanding of the natural history of ear infections makes sense. Children have been contracting ear infections for many thousands of years, yet we have only had antibiotics for three-quarters of a century. The overwhelming majority of those children in the pre-antibiotic era must have recovered from the infection on their own.
You may have noticed that above I used the weasel words “not necessarily” in my statement about whether ear infections need prompt treatment. There are a few times when they do. How can you know when that is? The answer is to look at your child in light of what else she is doing. She will have a fever and probably be fussy. But if she is alert, drinking fluids, and looks good otherwise, you can safely put off having her evaluated. On the other hand, if she is lethargic, glassy-eyed, and not taking fluids, then you should bring her in, because those kinds of symptoms can indicate a more serious condition. If you decide to wait and see how your child does before bringing her to the doctor, how much time should you give it? Twenty-four hours is a reasonable amount of time to wait to see if the fever and pain resolve. If they do not, then it would be appropriate to bring your child to the doctor.
The American Academy of Pediatrics has revised its recommendations about treating ear infections to reflect this more nuanced approach. You can read the list of recommendations here, but the gist is that “watchful waiting” is a reasonable alternative to antibiotic therapy for children over 6 months of age who do not appear seriously ill and who do not have a temperature of over 102.2°F (39°C).
You can read more specifics about what causes ear infections here if you like.
An interesting article in the journal Pediatrics is both intriguing and sobering. It is intriguing because it lays bare something we don’t talk much about or teach our students about; it is sobering because it describes the potential harm that can come from it, harm I have personally witnessed. The issue is overdiagnosis, and it’s related to our relentless quest to explain everything.
Overdiagnosis is the term the authors use to describe a situation in which a true abnormality is discovered, but detection of that abnormality does not benefit the patient. It’s not the same as misdiagnosis, meaning the diagnosis is inaccurate. It is also distinct from overtreatment or overuse, in which excessive treatment is given to patients for both correct and incorrect diagnoses. Overdiagnosis means finding something which, although “abnormal,” doesn’t help the patient in any way.
Some of the most controversial and compelling discussions of overdiagnosis come from cancer research. Two of the most common cancers, prostate cancer for men and breast cancer for women, run smack into the issue. As a pediatrician I don’t treat either one, but the concept certainly applies to children’s health. It is generally true that early diagnosis and treatment of cancer are better than late diagnosis and treatment . . . usually, but not always. A problem can arise when we use screening tests for early cancer as a mandate to treat aggressively whatever we find. The PSA (prostate-specific antigen) blood test was developed when researchers noticed it went up in men with prostate cancer. From that observation it was a short but significant leap to using the test to screen for cancer in men not known to have it. The problem is at least two-fold: there is overlap in the test numbers between cancer and normal, and many small prostate cancers, even when present, do not progress quickly. Since the treatment for prostate cancer is seriously invasive and has several bad side effects, the therapy may be worse than the disease, especially in older men. You can read more about the PSA controversy here. There are similar questions about screening for breast cancer; early detection is a good thing, but how early in life and how often should otherwise low-risk women be screened? You can read a nice summary of that discussion here. This issue has also caused fierce debates. There are other examples, but these two serve to highlight the problem of finding a middle ground between overdiagnosis and underdiagnosis.
Children don’t get cancer very often, but there are plenty of examples of overdiagnosis causing mischief with them, too. The linked article above describes several common ones. A usual scenario is getting a test that, even if abnormal, will not lead to any meaningful effect on the child’s health. Additionally, an abnormal test then typically leads to getting other tests, which can lead to other tests, and so on down the rabbit hole. I have seen that many times. As the authors state:
Medical tests are more accessible, rapid, and frequently consumed than ever before. Discussions between patients [or their parents] and providers tend to focus on the potential benefits of testing, with less regard for the potential harms. Yet a single test can give rise to a cascade of events, many of which have the potential to harm.
In evaluating the importance of overdiagnosis in a condition at the population level, we propose focusing on the frequency of overdiagnoses relative to needed diagnoses, the ratio of potential benefits from needed diagnoses to potential harms from overdiagnoses, and the amount of resource utilization resulting from overdiagnosis.
This is kind of a new frontier in medicine, and the issue grows larger as the number of diagnostic tests available to us mushrooms every year. For a parent, a good rule of thumb is to ask the doctor not just what the benefits of a proposed test are, but also the risks. Importantly, ask what the doctor will actually do with the result. We are prone to think more information is always a good thing, but that clearly is not the case. And never, ever get a test just because you (or your doctor) are merely curious. If you’re interested in some of the specific conditions the authors discuss, there is a useful table in the article. Several of them, such as neonatal jaundice and gastroesophageal reflux, are quite provocative in their implications.
A large number of pediatric practices these days use after-hours call centers for parents who have questions about a sick child. I’ve been looking around to find some data about how common this is, but my sense is that the majority of pediatricians use them. There is no question these call centers make life easier for the doctor; having somebody screen the calls, answer easy questions, and only call you for important issues is a great boon. But that boon comes at a cost: the people staffing the call centers are not doctors. They are often experienced nurses, but that is not the same thing. So deciding what is important and what can wait can be a problem.
The call centers generally use predetermined protocols drawn up by experts to help guide decision-making. This is a good way to ensure consistent, quality advice. But not every child fits the protocol, and a set of guidelines is not a substitute for actual clinical experience. Really, these days a savvy parent can get almost as much useful guidance from consulting Dr. Google (or my latest book). A study presented at a meeting of the American Academy of Pediatrics examines another question: do these call centers send too many children to the emergency department?
My assumption would be that they do. After all, they are hard-wired to do so. If you call one, not only is the person giving you advice not a doctor, they also do not know your child. And the decision-making protocols they use necessarily err on the safe side. So if there is any doubt about what to do, they will advise you to take your child to the emergency department, even though your child’s own doctor often might not.
The study bears out this presumption. The investigators, from Children’s National Medical Center in Washington, D.C., examined the records of 220 children whose parents were advised by the call center to go to the emergency department. They used a panel of evaluators to judge whether each ED visit was appropriate, and found that a third of the children could have safely stayed home.
After-hours call centers have made doctors’ lives less hectic, and I’m not suggesting we do away with them. They give thousands of parents useful advice. And what we don’t know is whether ED use would have been even higher without the call center: who knows, perhaps it steered a significant fraction of children away from an inconvenient and expensive ED visit. However, in my own anecdotal experience the call centers do increase ED use. I have had many parents tell me, after I’ve seen their child in the ED, that the only reason they came was that the call center told them to — that they were surprised by that advice and otherwise would have stayed home.
My own father was a small town pediatrician. He didn’t have an answering service. When parents wanted to ask about their sick child they just called him at our home. His phone number was in the directory just like everybody else’s. He didn’t have any sort of pager. If he wasn’t home, people called back, or else one of us kids or my mother told them where to reach him. Those were simpler times, and not necessarily better ones. Now we have call centers, and we need to figure out how best to use them.
I’d be interested in any experiences, good or bad, that parents have had with after-hours call centers. Were they helpful? Were they a problem?