The Healthcare Blog
What is the hue and cry about this time? United Healthcare is saying it has lost large bales and wads of money on Obamacare exchange plans, and just may give up on them entirely. Anthem and Aetna allow that they are not making very much either. Some new not-for-profit market entrants have gone belly up, and the others are having a hard time.
Before we perform the Last Rites over Obamacare, perhaps we should think for a moment about the hit ratio of the first 711 Wolf Reports from Boy W. Cried and ask a few questions.
First: Do we trust implicitly the numbers that the health plans are giving out in press releases, citing unacceptably high medical loss ratios? Medical loss ratios (MLRs) are self-reported. Yes, there is a certain amount of accountability. The numbers have to square with expenses given on their corporate tax forms and so on, but there is wiggle room in just what is reported and how. It is a reasonable supposition that if you wanted to look for the professionals with the greatest skill in juggling numbers, you would find them working for insurance companies, especially health plans, because the stakes are so high. These numbers people at the top of their game have huge incentives to report a high MLR, so if there is wiggle room, I am sure they will find it.
Beyond that, MLR is reported by state, by market segment (large group, small group, individual), against what portion of a premium is “earned” within that reporting period, and by calendar year rather than any company’s financial year. To say, “Our MLR is X” is to claim that X is the correct aggregate number across their entire multi-state system, from all their subsidiaries, appropriately weighted for the size of each region. We don’t have access to those numbers, just to what they are telling us. There are plenty of reasons for them to want to report the highest MLR they can get away with, plenty of reasons to be skeptical of the numbers they are giving out, and plenty of reasons not to base drastic policy changes on such pronouncements.
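As a back-of-the-envelope illustration of the weighting point (all companies, segments, and dollar figures here are invented), an aggregate MLR depends on how each state and segment is weighted by earned premium, and a naive average of segment MLRs gives a different answer:

```python
# Hypothetical illustration: an insurer's "aggregate MLR" depends on how
# each segment is weighted by earned premium. All figures are invented.

segments = [
    # (segment, earned_premium, incurred_claims)
    ("State A / individual",  200_000_000, 190_000_000),  # MLR 0.950
    ("State A / small group", 400_000_000, 330_000_000),  # MLR 0.825
    ("State B / large group", 900_000_000, 720_000_000),  # MLR 0.800
]

def mlr(claims, premium):
    """Medical loss ratio: incurred claims divided by earned premium."""
    return claims / premium

for name, prem, claims in segments:
    print(f"{name}: {mlr(claims, prem):.3f}")

# The correct aggregate weights each segment by its earned premium...
total_premium = sum(p for _, p, _ in segments)
total_claims = sum(c for _, _, c in segments)
aggregate = total_claims / total_premium

# ...which differs from a naive unweighted average of segment MLRs.
naive = sum(mlr(c, p) for _, p, c in segments) / len(segments)

print(f"premium-weighted aggregate MLR: {aggregate:.3f}")  # 0.827
print(f"unweighted average of MLRs:     {naive:.3f}")      # 0.858
```

The gap between the two numbers is exactly the “wiggle room” the post describes: which weighting, which segments, and which reporting period a company highlights all move the headline figure.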
But let’s get down to business here. So they lost money (or barely made it) in 2014 and 2015, and they are projecting the same in 2016? Doesn’t this mean that they misjudged the cost of healthcare, so they need to raise premiums? And that they didn’t realize this soon enough to raise them appropriately for 2016?
Sounds like somebody (or a pile of somebodies) made faulty business judgments. This is not too surprising, given that these are new business models in new markets. Pricing, risk analysis, and utilization projections are hard enough in established markets, doubly difficult in emerging ones, and exponentially more difficult for a new company scrambling to grab any market share at all, like the failed cooperatives.
Well, waah. Welcome to competition, market capitalism, all that stuff. None of this is in the least surprising.
But does it mean that “Obamacare has failed”? Does it even mean that these companies have failed in Obamacare markets? No, it means what it is: These companies have failed to make the profits that they hoped for in the opening three years of Obamacare. And they are telling us all about their pain so that the government (through regulation) or the body politic (by repealing Obamacare) will make it easier for them to churn a profit.
So what’s the real problem here? In any kind of economy, you need to price your products so that (in aggregate, over time) your total cost of ownership is less than what you sell your products for. There’s your margin, the oxygen of your business. These folks are claiming that the aggregate total cost of ownership of what they are selling (access to healthcare) is close to what they are selling it for. Hmmm. That’s a problem. It has two paths out: Lower the total cost of ownership (get the actual costs of healthcare down) or raise the premium.
How about getting aggressive about the real cost of healthcare? Two problems with that part of the equation: 1) It’s really hard and takes years. 2) It does not benefit just them. It will benefit the whole market. So it’s not a path to greater profitability.
A health plan’s profit (margin) is some percentage of the total cost of care for the people they cover. So they have an incentive on the one hand to cover a lot of people (that is, increase their market share). They have an incentive to keep their premiums competitive not in absolute terms but relative to other payers in each regional market. On the other hand, they have no incentive to get aggressive about actually lowering the underlying real costs of healthcare for the whole market. That would not give them a competitive advantage.
What’s the business concern with raising their premiums appropriately? The concern is that demand for these lower-cost narrow network exchange plans is highly price sensitive. If they raise their premiums, they will lose market share. But wait, if the cause really is the underlying high costs of healthcare, won’t everyone’s premiums have to go up the same amount? This complaint sounds more like an assumption that others can provision the market more efficiently, keep their premiums more competitive, and gobble up market share.
Again, is this a failure of the Obamacare model? Or is it actually proof of concept? To say that the Obamacare exchanges are failing because some companies might give up on them is to imagine that the purpose of Obamacare, the metric on which it should be measured, is to make health plans comfortable and profitable. Wrong.
The core idea of the Obamacare exchanges has been that health plans should compete on a level playing field to see who could offer the best service and the best access to healthcare at the lowest price. That’s what markets are for. The assumption built into this logic is that some organizations will do it better than others, some will not be good at it, and the market will shake out. If nobody ever failed in the Obamacare exchanges, then we would have to say that they failed to establish anything resembling a true market.
Joe Flower is a healthcare futurist and author. He is a contributing editor with THCB.
Categories: OIG Advisory Opinions
Through Dec. 15, federal regulators will accept public comments on the next set of rules that will shape the future of medicine in the transition to a super information highway for Electronic Health Records (EHRs). For health providers, this is a time to speak out.
One idea: Why not suggest options to give leniency to older doctors struggling with the shift to technology late in their careers?
By the government’s own estimate, in a report on A 10-Year Vision to Achieve an Interoperable Health IT Infrastructure, a fully functioning EHR system for the cross-sharing of health records among providers will take until 2024 to materialize. The technology is simply a long way off.
Meanwhile, doctors are reporting data while the infrastructure for sharing it doesn’t exist. Now, for the first time, physicians will be reporting to the federal government on progress toward uniform objectives for the meaningful use of electronic health records. Those who meet requirements will be eligible for incentive payments from Medicare and Medicaid, while those who don’t may face penalties. In addition, audits are expected to begin in 2016.
Amid this shift to a new, data-driven healthcare system, the nation needs older doctors to keep practicing to meet the present needs of an aging population, as well as an expanded Medicaid system. If burdensome reporting rules encourage retirements, as some studies indicate, the building of an information highway may result in the unintended consequence of a bottlenecked road to seeing a physician. The likely result: Nurse practitioners will deliver a greater share of the nation’s healthcare.
Some critics say the medical profession exaggerates a coming shortage of physicians.
Yet concierge medical practices are growing in number, luring those willing to pay a premium to see a doctor quickly for extended-time visits.
Last year, the New York Times reported on long wait times for doctor appointments as a new norm, and not just in traditionally under-served rural areas. The article pointed to one study that found patients waiting an average of 66 days for a physical examination in Boston, and 32 days for a cardiologist appointment in Washington.
Think of what the wait times would be if mass retirements materialized, as suggested by findings of a 2014 survey of 20,000 physicians by The Physicians Foundation. Thirty-nine percent indicated plans to accelerate retirement due to changes in the healthcare system. Others reported plans to cut back on patient caseload or seek different jobs.
The potential for disruption is even more startling when you consider the number of older doctors in practice. According to R. Jan Gurley, a physician writing on the blog of the University of Southern California’s Center for Health Journalism, one in three doctors is over 50, and one in four is over 60 – despite roughly 20,000 new medical school graduates a year.
Because of what’s at stake — potentially the very underpinnings of our nation’s healthcare system — health providers should speak out forcefully during the government’s open comment period. Yes, it is late in the rulemaking game for EHRs. But new rules are being written for 2018 and beyond, and modifications are being made to rules in effect through 2017.
Would an outpouring of thoughtful, well-documented recommendations make a difference? In a democracy, the answer should be yes. The value of keeping older doctors in practice far outweighs the benefit of driving them crazy as they try to meet reporting requirements with often-clumsy EHR technology. The challenge is to find a middle ground.
Diane Evans is a former Akron Beacon Journal editorial writer and columnist, and now publisher of the recently introduced MyHIPAA Guide, a news and information service for HIPAA-covered organizations trying to keep up with the seismic shift to a data-driven electronic health system. MyHIPAAGuide.com is hosting a forum discussion that is open to all who would like to share insights on key points that should be conveyed to CMS and government regulators.
Over the last half decade, the Federal Government has successfully convinced a majority of physicians and hospitals to begin using electronic health records by providing $30+ billion in subsidies to those who use an ONC Certified electronic health record (EHR) according to the “Meaningful Use” guidelines.
Although the physician community usually consists of a multiplicity of dogmatic opinions, on the subject of Meaningful Use (MU), there is now near unanimous agreement that the MU train has not succeeded in achieving its intended purpose, which was to improve quality or reduce the cost of healthcare. Earlier this month, 111 medical organizations, led by the AMA, sent a letter to Congress asking that MU Stage 3 be delayed and MU Stage 2 be redesigned.
Dissatisfaction with MU even extends to the Chief HIT Geek, John Halamka, M.D., who has concluded MU “Stage 2 and Stage 3 will not improve (health) outcomes” and has called to “Replace the meaningful use program with alternative payment models and merit-based incentive payments.”
In an attempt to objectively assess the MU program, I put together a list of reasons to help me determine whether the MU program should be continued or terminated:
Reasons to Continue the Meaningful Use Program (Pro MU)
- Some late adopting physicians and hospitals will continue to receive significant financial payments from the Federal Government if they participate in MU programs.
- Computerized Physician Order Entry (CPOE) and electronic prescribing have been demonstrated to reduce medical errors.
Reasons to Terminate the Meaningful Use Program (Con MU)
- The majority of physicians already use EHRs, and there is no reason to continue to incentivize them.
- There is a groundswell of discontent among physicians arising from the poor design of many Certified EHRs, and the current MU program further enshrines the use of these EHRs.
- Many physicians believe that the MU program interferes with the physician-patient relationship by forcing physicians to spend time acknowledging clinically meaningless Certified EHR prompts.
- Hospital resources devoted to meeting MU requirements have hindered some hospitals’ ability to update their IT infrastructure by drawing resources away from important IT problems.
- MU mandates have onerously consumed EHR vendor and healthcare provider resources while decreasing resources which can be devoted to creating innovative healthcare solutions.
- Physicians do not believe (nor is there data to demonstrate) that forcing patients to visit the physician’s MU mandated patient portal promotes the health of their patients.
- Physician practices are overburdened with bureaucratic mandates (Rx appeals, insurance requests for records) and MU tasks consume staff and physician time, thus diverting them from patient care.
- There are substantial financial penalties and psychological costs that physicians will incur if they are audited as a result of their participation in the MU program, and these penalties are disproportionate to the program’s financial incentives.
- Only 12% of physicians have completed MU stage 2 and fewer will likely participate in MU3.
- The collective burden of all the workflow changes required by three stages of Meaningful Use regulations will make it hard for clinicians to spend adequate time on direct patient care (John Halamka, M.D., http://geekdoctor.blogspot.com)
- The public health reporting requirements required by MU will be hard to achieve in many locations due to the heterogeneity of local public health capabilities (John Halamka, M.D.)
- There is no data which proves that achieving MU Stage 1 or Stage 2 improves the quality or reduces the cost of healthcare
- A majority (68%) of physicians report MU measures do not help them improve patient care or safety. (Survey of Texas Physicians Meaningful Use. Texas Medical Association)
- A decision to work towards a “delay” in the MU Stage 3 program will enshrine the currently intrusive and wasteful MU1 and MU2 work protocols as part of the standard office visit.
- While there is great promise which may derive from true HIT interoperability, there are many ways to achieve HIT interoperability independently of the MU program.
- It is illogical to hold physicians responsible for implementing HIT mandates which are clearly beyond their ability to create, pay for, and/or implement.
- Meaningful use has “created … a monster, when really what we were shooting for was good patient care.” (Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems and Health Policy. The RAND Corporation, American Medical Association 2013)
- Reducing the cumulative burden of rules and regulations may enhance physicians’ ability to focus on patient care. (Factors Affecting Physician Professional Satisfaction and Their Implications for Patient Care, Health Systems and Health Policy)
- The current approach to automated quality reporting does not yet deliver on the promise of feasibility, validity and reliability of measures or the reduction in reporting burden placed on hospitals. (A Study Of The Impact Of Meaningful Use Clinical Quality Measures. Floyd Eisenberg,Caterina Lasome, Aneel Advani, Rute Martins, Patricia A. Craig, Sharon Sprenger. 2013)
- The workflow changes to meet the MU eCQM reporting tool requirements have added to physician and nursing workload, providing no perceived benefit to patient care. (A Study Of The Impact Of Meaningful Use Clinical Quality Measures. Eisenberg et al)
- EHRs are not designed to capture and enable re-use of information captured during the course of care for later eCQM reporting. (A Study Of The Impact Of Meaningful Use Clinical Quality Measures. Eisenberg et al)
- Champions of EHR adoption within hospitals …. have been significantly challenged by Meaningful Use Program eCQMs that are complex, inaccurate, outdated and that require incredible detail to be documented (often in duplicative ways) in a structured form in the EHR with no perceived additional value to patient care. (A Study Of The Impact Of Meaningful Use Clinical Quality Measures. Eisenberg et al)
- Fifty two percent of Texas physicians report all or most of the (MU) measures are not meaningful to care. (Survey of Texas Physicians Meaningful Use. Texas Medical Association)
- There is essentially no data which demonstrate that the vast majority of meaningful use measures (excluding clinical decision support and computerized provider order entry) improve the quality of patient care. (Ann Intern Med. 2014;160:48-54)
- The existing MU program has had a deleterious effect on physician morale. (Robert Wachter, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age)
I fully acknowledge that the above lists are imperfect and that some will quibble over specific items on the lists. I want to encourage readers to add items to the “Pro” and “Con” lists in the comments section of this article, but please do so respectfully and in a measured manner. Inflammatory rhetoric will only diminish the effectiveness of this conversation and serve no useful purpose.
Despite the imperfect nature of the above lists, I think we can objectively conclude that it is time for the Federal Government to immediately terminate the MU program.
I believe that the AMA’s strategy to delay/revise the MU program is the wrong goal. If they succeed in delaying the implementation of MU3, they will have enshrined MU1 and MU2 protocols into the practice of medicine and this will permanently interfere with our ability to provide care to our patients while making it very difficult to implement innovative healthcare solutions which have the potential to solve our healthcare cost/quality problem.
Until there is objective evidence that the MU program has a salutary effect on our health care system, not only should the MU program be terminated, but the Federal Government and private insurers should also be prohibited from creating financial incentives and disincentives arising from the MU program.
Hayward Zwerling, M.D., FACP, FACE
President, ComChart Medical Software (no longer for sale)
The Lowell Diabetes & Endocrine Center
I know it’s not always about me (my ex-wife was quite clear on that point), but I was deeply saddened to see one of the Blues – specifically, Blue Cross of Tennessee — descending into the fabricated-wellness-outcomes abyss.
By way of background, regular readers of this irregular column and/or www.theysaidwhat.net have seen multitudinous examples of vendors telling lies that any fifth-grader could see through. Perhaps the best two examples on this site are Staywell and Mercer reporting mathematically impossible savings for British Petroleum and Health Fitness Corporation admitting they lied about saving the lives of cancer victims in Nebraska.
In both cases, the de facto leader of the wellness industry, Dr. Ron Goetzel of Truven Health Care, was so appalled by this dishonesty that he assembled a group of other self-described industry leaders to give them awards. Not just any awards, but awards named after the most respected Surgeon General in history, Dr. C. Everett Koop. (No doubt the irony was lost on Dr. Goetzel.)
In all fairness, wellness vendors have to lie, since it turns out that achieving savings is mathematically impossible. If they told the truth, they’d all be fired. Even so, Blues should be held to a higher standard for integrity than independent wellness vendors, because lies told by one Blue affect all the others by sullying one of the most readily identified and respected trademarks in America.
Before continuing, I do want to emphasize that this isn’t about “the Blues,” which are all independent of one another. It’s specifically about Blue Cross of Tennessee (BCBSTN). By contrast, other Blues – Massachusetts, Rhode Island, Louisiana, Carefirst and South Carolina come to mind (along with the Blue Care Network subsidiary of Blue Cross of Michigan) – have created exemplary outcomes reports. For them, integrity trumps impossibility. Two have even been validated by the Intel-GE Care Innovations Validation Institute, the gold standard in outcomes measurement.
Not so BCBSTN. They published a report in which Onlife Health showed some of the best outcomes in wellness history. BCBSTN calls Onlife their “partner” company in this report. However, a corporate lawyer – or BCBSTN itself, in this other press release – would call Onlife a “subsidiary,” for the simple reason that BCBSTN owns them. By contrast, you don’t own your “partner.”
In other words, BCBSTN is validating itself, not a “partner.”
The “intervention” was that these admittedly overweight employees walked an extra 2500 steps (about 19 minutes) a day. Here’s what those literal and figurative “baby steps” (as the report calls them) achieved. This is cut-and-pasted from the report:
- Emergency room visits and inpatient hospital stays were more than 50 percent lower in the moderate exercise group, as well. There were 219.6 ER visits and 59.9 inpatient stays per 1,000 for overweight non-exercisers compared to 73.6 ER visits and 30.1 inpatient stays per 1,000 for overweight moderate exercisers.
Let’s consider ER and inpatient visits separately. About 40 million ER visits a year are specifically caused by injuries, or roughly 126 per 1,000 people. This injuries-only figure dwarfs BCBSTN’s all-in ER visit figure of 73.6 per 1,000 allegedly achieved by this program. In other words, walking an extra 19 minutes a day not only wiped out every single non-injury-related ER visit, but also about 40% of all injury-related ER visits.
Next, let’s consider the inpatient stays. Their 30 stays per 1,000 includes birth events, as compared to the more typical figure, which BCBSTN also experienced in the control group, of about 60 per 1,000. All birth events combined are about 15 to 20 per 1,000. Taking those birth events out of the BCBSTN tally yields 10-15 admissions per 1,000, a Nobel Prize-winning figure. And all achieved by walking an extra 19 minutes a day.
Another way of looking at it: here are the top 21 admissions categories for Tennesseans insured through their employer. With the exceptions of #9 and #10 (morbid obesity and heart attacks), probably not one single admission in any of these categories could have been prevented by walking an extra 19 minutes a day. Even in those two categories, optimistically only a handful of admissions would be prevented by short walks.
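The per-1,000 arithmetic above is easy to verify. A quick sanity check using the article's own rough figures (40 million injury-related ER visits a year; a US population of roughly 318 million is assumed here to reproduce the ~126-per-1,000 rate):

```python
# Sanity-checking the article's per-1,000 arithmetic. The 40 million
# injury-related ER visits is the article's figure; the ~318 million US
# population is an assumption consistent with its ~126-per-1,000 rate.
# The 73.6 rate is the one BCBSTN's report claims for moderate exercisers.

us_population = 318_000_000          # approximate, mid-2010s
injury_er_visits = 40_000_000        # injury-related ER visits per year

injury_rate_per_1000 = injury_er_visits / us_population * 1000
print(f"injury-only ER rate: {injury_rate_per_1000:.0f} per 1,000")  # ~126

claimed_all_cause = 73.6             # exercisers' ALL-cause rate per 1,000

# If injuries alone run ~126 per 1,000, an all-cause rate of 73.6 means
# the program would have to eliminate every non-injury visit AND a
# large share of the injury visits as well:
share_of_injury_visits_gone = 1 - claimed_all_cause / injury_rate_per_1000
print(f"share of injury visits that would also have to vanish: "
      f"{share_of_injury_visits_gone:.0%}")  # ~40%
```

Nineteen minutes of walking does not prevent car crashes, which is the point: the claimed rate is impossible on its face.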
In addition – this is also part of wellness vendor DNA as this Wellsteps example shows – BCBSTN’s numbers contradict themselves. Compare these three bullet points from the study (italics are ours):
- Contrary to the popular guideline of 10,000 steps a day, employees who took as few as 5,000 steps per day, had annual healthcare costs nearly 20 percent lower than their sedentary counterparts who did not exercise. Spending was $2,038 per member per year (PMPY) for non-exercisers compared to $1,646 PMPY for moderate exercisers.
- Average claims cost PMPY dropped from $5,712 for non-users to $4,248 for employees who participated in one program and $3,120 for those participating in two programs. Employees engaged in three programs saw their claims cost cut almost in half at $2,892.
- Looking at BCBST overall, Danny Timblin, president and CEO of Onlife Health, noted, “Over a three-year period of time when we did this study, their claims were essentially flat.
Well, which is it? Is it $2,038 per year for non-exercisers, or $5,712? How is it that “moderate exercisers” only spend $1,646 in the first bullet point but $2,892 in the second? And how can any sizable group of people only spend $1,646 per year per person, vs. the more typical $5,000-$6,000? (Even adding up the cost of just those impossibly low ER and inpatient utilization figures would total more than $1,646.)
And how does “their claims were essentially flat” in the third bullet point reconcile with the massive declines in the second bullet point?
Yet another head-scratcher: these massive savings were achieved despite the wellness industry’s own report admitting wellness loses money. You might say: “So what? Maybe Onlife disagrees with that industry report.” Except that Onlife co-wrote that report, unless there is a different Onlife Health that is listed as a collaborator on it:
This Onlife “study,” if nothing else, validates the observation in our book Surviving Workplace Wellness: “In wellness, you don’t have to challenge the numbers to invalidate them. You merely have to read the numbers. They will invalidate themselves.”
Which brings us back to what Blue Cross of Tennessee needs to do next. It seems they have, in the immortal words of the great philosopher Ricky Ricardo, some ‘splainin’ to do. At the very least, perhaps an apology to the Blue Cross Association and to their fellow Blues.
Al Lewis is the founder of Quizzify.com and the author of Surviving Workplace Wellness.
Every once in a while on the wards, one of the attending physicians will approach me and ask me to perform a literature review on a particular clinical question. It might be a question like “What does the evidence say about how long Bactrim should be given for a UTI?” or “Which is more effective in the management of atrial fibrillation, rate control or rhythm control?” A chill usually runs down my spine, like that feeling one gets when a cop siren wails from behind while one is driving. But thankfully, summarizing what we know about a subject is actually a pretty formulaic exercise, involving a PubMed search followed by an evaluation of the various studies with consideration for generalizability, bias, and confounding.
A more interesting question, in my opinion, is to ask why we do not know what we do not know. Delving into this question requires some understanding of how research is conducted, and it has implications for how clinicians make decisions with their patients. Below, I hope to provide some insights into the ways in which clinical research is limited. In doing so, I hope to illustrate why we know less about some topics, and why some questions are perhaps even unknowable.
Negative studies are difficult to publish
A positive study is one that demonstrates a statistically significant result. A negative study is one that shows no statistically significant difference. Any researcher would agree that it is easier to publish a positive study — after all, it is more exciting to read a study that suggests that some new kind of treatment works, as opposed to a study that shows that a treatment did not do anything. I would also add that it is analytically easier to construct a compelling positive study (“even with limitations in our data, we were able to show a statistically significant improvement in mortality in the group that received this surgical technique”) vs. a compelling negative study (“there was no statistically significant difference between the two groups, and we are confident that we interpreted our data well enough, and had a large enough sample size, to detect a meaningful difference if there were one”).
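A small simulation makes the stakes of this asymmetry concrete. If many experiments are run on treatments with no real effect, and only the "statistically significant" ones get written up, roughly 5% of nothing-there results will still clear the p < 0.05 bar by chance alone (an illustrative sketch using a normal-approximation two-sample test; the sample sizes and experiment counts are arbitrary):

```python
import random

# A minimal sketch of publication bias: many true-null experiments are run,
# but only the "statistically significant" ones get published. With a 0.05
# threshold, roughly 5% of null results cross the bar by chance alone.

random.seed(0)

def z_statistic(a, b):
    """Two-sample z statistic (normal approximation, equal group sizes)."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    var_a = sum((x - mean_a) ** 2 for x in a) / (n - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (n - 1)
    se = ((var_a + var_b) / n) ** 0.5
    return (mean_a - mean_b) / se

n_experiments = 2000
n_per_group = 50
published = 0
for _ in range(n_experiments):
    # The treatment has NO real effect: both groups come from the
    # same distribution.
    control = [random.gauss(0, 1) for _ in range(n_per_group)]
    treated = [random.gauss(0, 1) for _ in range(n_per_group)]
    if abs(z_statistic(control, treated)) > 1.96:  # "p < 0.05"
        published += 1

false_positive_rate = published / n_experiments
print(f"'significant' null results: {false_positive_rate:.1%}")  # ~5%
```

Multiply that ~5% across thousands of research groups testing thousands of hypotheses, and the published literature can contain a meaningful number of "positive" findings that are pure noise, while the offsetting negative replications sit in file drawers.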
So when one delves into a particular research question, one must interpret the literature in the context of possible negative studies that may have been performed and not published. Admittedly, this is a bit like asking this Californian to ponder earthquakes — the ground may be shifting beneath my feet, but it gives me less anxiety to ignore the possibility.
Publication is a slow, deliberate process
Briefly, here are the steps:
- Submit manuscript to journal
- Hope it does not get rejected immediately. If it does, submit to another journal.
- Wait for peer reviews, hope manuscript does not get rejected based on those peer reviews.
- Make revisions based on concerns of reviewers
- Journal officially accepts manuscript for publication and eventually publishes it
Each one of these steps can take months. Take, for example, a study that I worked on, ironically titled “Timeliness of Care in US Emergency Departments: An Analysis of Newly Released Metrics From the Centers for Medicare & Medicaid Services.” The data were analyzed and the first draft of the manuscript was written over the course of a week in August 2013. The final manuscript (which was fairly similar to the first draft, in my opinion) was published in November 2014.
There are, of course, many ways that researchers get their research out there, such as posters and presentations at conferences, research meetings, blogs, and Twitter. But the fact of the matter is that a lot of science is known by someone long before it gets into the public domain.
Certain populations are systematically underrepresented in medical research
Every study involves an investigation into a particular population sample, which researchers work very delicately to select. Given that researchers use samples, the interpretation of how a study informs the care of any individual patient must consider the generalizability of that study. But if we look across multiple studies, or even across entire fields of research, and examine which samples are being studied, it is apparent that there are large groups of people who are underrepresented in clinical research. For example, much has been written about how clinical research studies enroll disproportionately few minorities. Our science is largely based on a Caucasian population! Much research into hospital quality measures, as another example, is based on fee-for-service inpatient Medicare claims, which exclude all the outpatient services that hospitals provide, Medicare Advantage patients, and people younger than 65. The quality of care that 27-year-olds like me receive is relatively poorly studied.
Researchers generally choose the samples they study out of convenience; the discussion section of a paper typically pays at least some lip service to the limitations on generalizing the results of that study to other populations. But because of the systematic underrepresentation of certain populations in research, clinicians are left to make assumptions, e.g., that a medication will be equally effective in poorly-studied population X as in well-studied population Y. These kinds of assumptions about generalizability are strong ones, ones that basic scientists and social scientists would be more hesitant to make.
Clinical research favors a handful of simple methods
Student’s t-test, chi-square test for independence, ordinary least squares regression, logistic regression, and Cox proportional hazards regression account for the vast majority of analytic methods in clinical research. And indeed, those were pretty much all of the analytical methods that I was taught in my evidence-based medicine course in medical school. While these methods are probably sufficient for understanding and performing randomized control trials, there are so many other valuable methods in observational data research that one rarely sees. Without advocating for adopting the “mathiness” of economics, clinical research could stand to learn about methods seen in other fields. Instrumental variable methods, for example, are part of the fundamentals of econometrics and could deepen our understanding of observational data in medicine.
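As a sketch of what an instrumental variable buys you (all variables and effect sizes here are invented for illustration): when an exposure is confounded by something unobserved, naive regression is biased, but an instrument that moves the exposure while being independent of the confounder recovers the causal effect via the ratio cov(z, y) / cov(z, x):

```python
import random

# Illustrative instrumental-variable estimate on simulated data.
# x is confounded with the outcome's error term, so naive OLS is biased;
# an instrument z that shifts x but is independent of the confounder
# recovers the causal effect as cov(z, y) / cov(z, x).

random.seed(1)
n = 20_000
beta_true = 2.0

z = [random.gauss(0, 1) for _ in range(n)]                   # instrument
u = [random.gauss(0, 1) for _ in range(n)]                   # unobserved confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]   # exposure
y = [beta_true * xi + 3.0 * ui + random.gauss(0, 1)          # outcome
     for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_ols = cov(x, y) / cov(x, x)   # biased upward by the confounder (~3.0)
beta_iv = cov(z, y) / cov(z, x)    # consistent for the causal effect (~2.0)

print(f"true effect: {beta_true}, OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}")
```

In medicine the hard part is finding a defensible instrument (distance to a specialty hospital is a classic example from the econometrics literature), but the estimator itself is no harder than the familiar regression toolkit.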
It is all about the average, when it comes to medical research
[Two figures omitted: a distribution of data, and the same distribution with a small triangle pointing at its average.]
That is the average, a single number that fundamentally underlies the various statistical methods common in medical research, but that by itself cannot truly describe an entire distribution. Papers will generally also present standard deviations, which is helpful, but only sufficient if one assumes a normal distribution. One rarely sees medians or percentiles in medical research, let alone more obscure concepts like skewness or kurtosis. In a sense, our science is based on how averages relate to averages, and ignores much of the complexity of the entire distributions of what we measure.
This has profound clinical implications. Countless times, my patients ask, “Will this treatment work?” And I might be left to say something like, “85% of people see some response” ← a statement about averages, “but everyone is different, some people respond better, some people respond worse, some people not at all” ← a hand-wavy statement about the rest of the distribution.
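A quick simulated sketch of why the average can mislead: for a right-skewed outcome (hypothetical hospital lengths of stay, drawn here from a lognormal distribution purely for illustration), the mean sits well above the median, and neither captures the long tail that the 90th percentile hints at:

```python
import random
import statistics

random.seed(1)
# Hypothetical lengths of stay in days: most are short, a few are very long
stays = [random.lognormvariate(1.0, 0.9) for _ in range(50_000)]

mean_stay = statistics.mean(stays)              # the number most papers report
median_stay = statistics.median(stays)          # the "typical" patient
p90 = statistics.quantiles(stays, n=10)[-1]     # 90th percentile: the long tail

print(f"mean={mean_stay:.1f}  median={median_stay:.1f}  90th percentile={p90:.1f}")
```

The mean exceeds the median (the tail drags it up), and the 90th percentile is far above both, so a patient asking "how long will I be in the hospital?" is poorly served by the average alone.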
Clinical research lives in two dimensions
Treatment and outcome. Independent variable and dependent variable. X and Y. Left-sided and right-sided. Does this surgical technique lower recurrence? Does this drug decrease cardiovascular risk? The majority of clinical research is focused on linking one thing with another thing, in pursuit of establishing a causal relationship. Researchers spend less time thinking about how a third thing (or even a fourth thing) might modulate the relationship between the first two things. To what extent does age influence the effectiveness of this drug in lowering risk of cardiovascular events?
Researchers do investigate those “three-dimensional” questions using methods like stratification or effect modification, but overall this represents a minority of all research effort (perhaps tucked away in a Table 4 or 5 of a paper). Maybe the “big data” or “precision medicine” movements are the solution.
The easily measurable is favored over the hard-to-measure, let alone the immeasurable
If one is going to perform research, it is of course natural to prioritize the low-hanging fruit. This means investigating outcomes that are more easily measured than others. Death, for example, is perhaps the simplest outcome to measure in healthcare — in fact, many countries have national registries of when and why every single one of their citizens dies. Probably the next easiest type of outcome to measure is the non-death discrete event, e.g. a hospitalization, an adverse drug event, a cancer recurrence. Measuring quality of life is more difficult — you have to go around asking people to self-report their quality of life. And if one believes, as integrative medicine pioneer Dr. Rachel Remen does, that to heal is to help people pursue what has meaning and value in life…good luck measuring that outcome!
The tyranny of multiple comparisons vs. the requirements for pre-specified analyses
Most research findings are presented alongside a p-value, a way of describing how likely it is that a result at least as extreme would arise from randomness in the data alone, rather than from a true effect. The lower the p-value, the more valid the result, and a p-value below 0.05 is the standard, albeit arbitrary, cutoff for statistical significance in clinical research. However, when a researcher performs many different statistical comparisons, the probability that at least one of them will achieve statistical significance at the 0.05 level increases, an issue known as the multiple comparisons problem. One solution is to adjust the cutoff for statistical significance: essentially, the more tests a researcher performs, the more stringent the cutoff for significance needs to be.
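A small simulation makes the problem vivid. With 100 independent comparisons in which no real effect exists, the chance of at least one spurious "significant" finding at the 0.05 level is 1 − 0.95^100 ≈ 99.4%. A Bonferroni correction (dividing the cutoff by the number of tests, the simplest of several adjustment schemes) suppresses those false positives:

```python
import random

random.seed(2)
n_tests = 100
alpha = 0.05

# Simulate 100 comparisons where the null hypothesis is true in every case;
# under the null, each p-value is uniformly distributed on [0, 1]
p_values = [random.random() for _ in range(n_tests)]

naive_hits = sum(p < alpha for p in p_values)             # expect ~5 false positives
bonferroni_hits = sum(p < alpha / n_tests for p in p_values)  # stricter cutoff: 0.0005

print(f"naive 'significant' results: {naive_hits}, after Bonferroni: {bonferroni_hits}")
```

Every "hit" in the naive count is a false positive by construction, which is exactly what an unadjusted fishing expedition through a data set produces.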
This is all good, but what if a researcher submits a manuscript containing ten comparisons, yet in reality performed one hundred over the course of the investigation? The significance cutoff really should be adjusted to account for one hundred comparisons, but was likely adjusted only for the ten submitted for publication. This is a problem called data mining. Researchers understand that it is poor form, though “data mining” to one person might be “thoughtfully exploring the data” to someone else. Indeed, data mining typically occurs not because a researcher is actively snooping around the data for a significant result, but because a researcher has worked with the data for so long that it has simply happened by accident.
Besides self-policing, there are two mechanisms to protect against data mining. Reviewers may ask the authors to run additional analyses to see whether they support the presented results. There may also be a requirement that, before any data are acquired, the authors specify exactly which analyses they plan to perform. It should be pointed out that such a requirement makes research less efficient: if pre-specified analyses are required, each data set can really be analyzed only once, and one is restricted from exploring hypotheses generated by the results of the initial analyses.
Research is expensive!
The NIH on its own devotes several billion dollars to clinical research, which is also supported by various state organizations and philanthropy. While this may sound like a lot of money, it is not! Research is quite expensive once you factor in the cost of salary, equipment/overhead, staff support, data collection, etc. There are, unfortunately, more interesting research questions than there is money to investigate them properly.
Another source of funding is industry…but accepting funding from industry has its issues. Say a pharmaceutical company has developed a new drug and then pays a group of researchers to conduct a study testing the efficacy of that drug. We can all see the problem in this scenario. It is hence critically important for researchers to disclose any conflicts of interest. For better or worse, the knee-jerk reaction of most academics is to discredit studies when there is a blatant conflict of interest.
Given the resource constraints, researchers try to be cost-effective, perhaps even taking shortcuts. That might mean interviewing subjects every other year instead of every year, or following subjects for 5 years instead of 10. Of the common clinical research study designs, randomized controlled trials tend to be by far the most expensive, followed by cohort studies, case-control studies, cross-sectional studies, and case reports.
Ethical considerations provide boundaries on what kinds of studies are permissible
From 1932 to 1972, an infamous clinical study was conducted by the U.S. Public Health Service, in which African-American men were untreated for syphilis to observe the natural progression of the disease. None of the infected men were told they had the disease, and none were treated with penicillin after the antibiotic became a proven treatment. Public outrage and congressional investigation into the Tuskegee Syphilis Study eventually led to the establishment of the Office of Human Research Protections within the Department of Health and Human Services and a series of federal laws and regulations requiring the protection of human subjects.
Scientists today are thankfully much better informed about, and more sensitive to, how to conduct research ethically. Ethical considerations rightfully place limitations on which kinds of research are permissible (particularly randomized controlled trials), but as a result scientists have to accept that some knowledge is unattainable. You cannot design a study that randomly assigns people to cigarette smoking (a fact touted by the tobacco industry). You can design an ethical randomized controlled trial that investigates the use of cannabis to reduce nausea and vomiting during chemotherapy. You probably cannot design an ethical randomized controlled trial that investigates the toxicity of recreational cannabis use.
There is a lot of pressure and competition in academia…and a lot of scientific misconduct
It seems like every month I read a story in the news about a researcher who was caught fabricating and falsifying data. This is reflected in the increasing number of studies that are retracted, and I cannot help but think it is related to increasing pressure and competition in academia. The mechanisms that prevent scientific misconduct are feeble. One has to attest to the integrity of the study when it is accepted for publication. Researchers sometimes attempt to reproduce each other’s results, though they are generally much more interested in pursuing their own research. And certainly the consequences of being caught fudging are severe, often grounds for dismissal. But despite the consequences, the temptation to fabricate results is real.
Scientific misconduct erodes the public’s faith in the integrity of science. It is hard to digest research if one has to also entertain the possibility that someone made the stuff up! Furthermore, once a study is out there, it never truly disappears, even if it is retracted. Vaccines and autism, anyone?
Sidney Le is a UCSF medical student and health services researcher
Categories: OIG Advisory Opinions
When John Milton (Al Pacino) chuckled in the Devil’s Advocate “vanity, definitely my favorite sin,” he may have been referring to academics, not attorneys.
In academia skins are thin, hairs are split, emails are long, humor is self-congratulatory, and everyone cites themselves thinking that they’re Shakespeare. In the land of geniuses pettiness lies next to godliness. Wallace Sayre, a political scientist, once said that academic politics is vicious because the stakes are so low.
The iconoclast, Nassim Taleb, reserves special derision for academics. He took Steven Pinker to task for claiming that violence has progressively declined because of a decline in religion. According to Taleb, Pinker was ignoring fat tails – or the long lull before the storm. Pinker responded by saying that Taleb was being fooled by belligerence.
Not even the hard sciences are spared from hair splitters. Bruce Hillman, in The Man Who Stalked Einstein, tells the story of Philipp Lenard, a German physicist who hated Einstein, viscerally. Lenard, a devout Nazi, was no mug – he was awarded a Nobel Prize for his work on cathode rays. Lenard’s hatred of Einstein had roots deeper than anti-Semitism. Lenard was trying to prove the existence of ether, a mysterious substance once believed to fill the universe and produce gravity. While Lenard was experimenting, Einstein, in a flash of inspiration, intuited space-time, which enraged Lenard because ether was not needed to explain gravity, and this implied that Lenard had been seeking something imaginary.
Lenard felt that experimenters, not thinkers, deserved the highest honors. He despised theoretical physicists, who, he felt, merely procrastinated. Lenard’s exaltation of the experimental sciences is not out of place in academia today, where experimenters get more credence (and access to the public purse) than theorists. Arthur Eddington, who confirmed the curvature of space-time by showing, during a solar eclipse, that gravity bends light, would have shared the honors with Einstein today. But Eddington is a historical footnote compared to Einstein.
Why was the accomplished Lenard so envious of Einstein? The currencies in academia are fame and recognition, not money. Fame cannot exist by itself – more fame for some comes with less fame for others. Fame is a zero sum game. Einstein’s stardom irked Lenard who felt that Einstein was not worthy of any recognition.
Not every academic is petulant. The economists John Maynard Keynes and Friedrich Hayek fought like gladiators, but their duel had the aesthetic of a Socratic dialogue. Their clash advanced knowledge. Keynes called Hayek’s The Road to Serfdom a “frightful muddle” and an example of how, “starting with a mistake, a remorseless logician can end up in Bedlam.” Privately, Keynes praised Hayek on his tome. The more baroque Hayek said “Keynes is not a highly trained or a very sophisticated economic theorist.” Keynes held the upper hand, but the two were friends and even hung out together during the German air raids of the Second World War.
A mini-Lenard is embedded in nearly every academic. Scholarly duels have become passive-aggressive – many academics discredit their opponents by ignoring them. A case in point is the clash between the economist, Paul Krugman, and the historian, Niall Ferguson. Ferguson fires the salvo. Krugman returns fire by pretending to ignore the salvo. This is a shame, because as petulant as academia can be, we lose when academics don’t argue.
In a couple of weeks I will be attending the Radiological Society of North America – one of the largest medical meetings in the world. During scientific presentations there is guaranteed to be one person, often me, who will clear his (yes, always a male) throat, walk to the mike and, assuming immeasurable pomposity, with the tone that can only arise by perfected self-love, draw attention to a patently obvious limitation in the study methodology, usually that the study is not randomized, before reminding everyone “in my experience…blah, blah, blah,” compelling the presenter to say “thank you for the thoughtful comments sir, yes, more research is needed.”
Self-love is the opium of academics.
About the author: Saurabh Jha is an academic tosser desperately seeking an alter ego. Interested candidates can reach him on Twitter @RogueRad
UnitedHealth Group just announced they expect to lose $450 million in the Obamacare exchanges and are seriously considering withdrawing from the program in the coming year.
This morning, the Wall Street Journal reported just about everybody else is losing their shirts in Obamacare as well:
Several other big publicly traded insurers also flagged problems with their exchange business in their third-quarter earnings. Anthem Inc. said enrollment is less than expected, though it is making a profit. Aetna Inc. said it expects to lose money on its exchange business this year, but hopes to improve the result in 2016. Humana Inc. and Cigna Corp. also flagged challenges…
There are signs that broad pattern has continued–and in some cases worsened–this year. A Goldman Sachs Group Inc. analysis of state filings for 30 not-for-profit Blue Cross and Blue Shield insurers found that their overall company wide results were “barely break-even” for the first half of 2015.
Goldman analysts projected the group would post an aggregate loss for the full year–the first since the late 1980s. The analysis said the health-law exchanges appeared to be a “key driver” for the faltering corporate results, and the medical-loss ratio for the Blue insurers’ individual business was 99% in the first half of 2015–up from 91% at that point in 2014, and 82% for the first six months of 2013.
Every health plan I talk to tells me that they don’t expect their Obamacare business to be profitable even in 2016 after their big rate increases. That does not bode well for the rate increases we can expect to be announced in the middle of next year’s elections.
And, then there are the insolvencies of 12 of the 23 original Obamacare co-op insurance companies–the canaries in the Obamacare coal mine–with almost all of the rest of the survivors losing lots of money.
Why is this happening?
Because nowhere near enough healthy people are signing up to pay for the sick.
This from The Robert Wood Johnson Foundation (RWJF) and The Urban Institute (UI) in their October 2015 policy brief regarding the Obamacare insurance exchange enrollment:
We estimate that just over 24 million people were eligible for tax credits for health coverage purchased through the Affordable Care Act’s (ACA) health insurance marketplaces in 2015. As of the beginning of March 2015, 10 million people eligible for tax credits had selected marketplace plans, representing a plan selection rate of 41 percent of the population estimated to be eligible for tax credits. By the end of June 2015, 8.6 million had actually enrolled in marketplace coverage with tax credits, representing an enrollment rate of 35 percent.
In a recent post at Forbes, Has the Obama Administration Given Up on Obamacare?, I made the point that the Obama administration’s almost flat 2016 enrollment estimate would constitute only a small fraction of the potential market–I estimated less than 40% of those eligible for a subsidy.
But who am I?
Now, The Robert Wood Johnson Foundation and the Urban Institute have come to largely the same conclusion–enrolling a total of 10 million in the exchanges, based on historic trends, would mean only about 9 million of them would be subsidy eligible. That would amount to only 38% of the 24 million people eligible for a subsidy.
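The arithmetic behind that 38% figure is straightforward; this sketch simply restates the numbers from the RWJF/UI estimate above (9 million projected subsidized enrollees out of 24 million subsidy-eligible people):

```python
# Numbers taken from the RWJF / Urban Institute figures quoted in this post
subsidy_eligible = 24_000_000     # estimated potential subsidized exchange market
projected_subsidized = 9_000_000  # ~9M of the projected 10M enrollees are subsidy eligible

take_up = projected_subsidized / subsidy_eligible
print(f"projected take-up rate: {take_up:.1%}")  # 37.5%, i.e. about 38%
```

Put differently, even hitting the administration's own enrollment target would leave more than 60% of the subsidy-eligible population outside the exchanges.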
And, don’t forget that the only place a subsidy eligible person can get an Obamacare subsidy is in the state and federal exchanges. They can’t get subsidized commercial health insurance anywhere else.
And, I suggested in the same post that such a poor 2016 open enrollment would be way short of the market share required to create an efficient risk pool–having enough healthy people paying into the pool to support the sick at affordable rates. I also argued that such low enrollment rates could never make the new health insurance law politically sustainable.
That the Affordable Care Act’s individual market risk pool is so far unacceptable was reinforced by a recent McKinsey report that health insurers lost an aggregate $2.5 billion in the individual health insurance market in 2014–an average of $163 per enrollee. They reported that only 36% of health plans in the individual market made money in 2014–and that was before they found out that the federal government was only going to pay off on 12.6% of the risk corridor reinsurance payments the carriers expected and many had already booked.
Because the risk corridor program is revenue neutral, the fact that the carriers in the red are only going to collect 12.6% of what they requested means that the carriers losing money did so at a rate eight times greater than the carriers making money!
I have also regularly argued that the reason that the take-up rate among most of those eligible is so low is that the policies are still too expensive and the deductibles and co-pays are too high for other than the poorest.
In another recent post, Why the Affordable Care Act Isn’t Here To Stay–In One Picture, I pointed to an Avalere Consulting analysis showing that while three-quarters of the poorest of those eligible for the exchange subsidies have signed up, only 20% of those making between 251% and 300% of the poverty level had so far enrolled.
What did the Robert Wood Johnson Foundation and The Urban Institute find on this count?
They found almost exactly the same thing–the poorest are buying Obamacare and the vast majority of the rest–even if they are subsidy eligible–are not:
And the reason the working and middle-class are not buying it? This from the RWJF/UI policy brief:
The uniformity of 2015 marketplace plan selection rates at different income levels across the 37 states using HealthCare.gov is striking. In part, it may reflect people’s judgments about the affordability of marketplace coverage at different income levels. Premium tax credits, cost sharing reductions, and actuarial value levels are the same across the states, so marketplace enrollment data may provide valuable information on people’s willingness to pay for marketplace health coverage. This conclusion is reinforced by several studies that have shown many people who shopped for marketplace coverage did not choose a plan because they considered the available options to be unaffordable.
When are Obamacare apologists going to stop spinning the insurance exchange enrollment as some big victory that is running smoothly? Yes, Obamacare has brought the number of those uninsured down–because of the Medicaid expansion in those states that have taken it and because the poorest people eligible for the biggest exchange subsidies and lowest deductibles have found the program attractive.
But that Obamacare has been a huge failure among the working class and middle-class–not to mention those who make too much for subsidies and have to pay the full cost for their expensive plans–has once again been confirmed.
How does the Obama administration spin 2015’s unacceptably low health insurance exchange take-up rate of 35% and their projection that it will hardly grow in 2016?
This open enrollment is going to be a challenge but having fewer uninsured Americans to sign up is a good problem to have.
The arrogance in this spin is astounding.
When will the denial, over the real shape Obamacare is in, end?
The Robert Wood Johnson Foundation and Urban Institute findings have now given additional credibility to the very same conclusion many of us have been trying to make since the Obamacare launch: The Obama administration has NOT been so successful in enrolling those eligible–they’ve got more than 60% of the group remaining!
If the Obama administration signs up the 10 million it is estimating during the current open enrollment, then, based upon the historic share that is subsidy eligible, it will have enrolled fewer than 9 million of the 24 million people RWJF and UI estimate are in the potential exchange subsidy market–just a 38% success rate. And that is nowhere near where it will have to be to make these risk pools sustainable for the insurance companies, or politically sustainable in the country.
Or keep the likes of UnitedHealth Group in the program.
How can Obamacare be fixed?
First, the Obama administration can improve, but not completely solve, their Obamacare problems by dramatically revisiting their regulations so as to give health plans the flexibility they need to better design plans their customers want to buy.
But that would only be a first step.
Robert Laszewski is a principal at Health Policy and Strategy Associates.
According to this Wall Street Journal article, the prospect that “your doctor may soon prescribe you a smartphone app” is ushering in a new era of m-healthiness.
e-Researchers from marquee academic institutions are assessing the impact of handheld apps on medication use, symptom management, risk reduction and provider-patient communication. There’s not only a technology platform but an accompanying library of tailored e-prompts, e-reminders, e-pop-ups, e-recommendations, e-messaging, e-images and e-videos.
In other words, mix one part app with one part patient and bake until quality goes up and costs go down.
Unfortunately, what the article failed to mention is that much of that app content is based on information freely available in the public domain, which app developers have reconfigured and adapted according to the varying interests, expertise and culture of their sponsoring institutions.
While policymakers and researchers would like to believe that on-line and public-domain health information is a commodity, the fact is that buyer, purchaser and provider organizations have been accessing, downloading and branding it for years.
They’ve taken a special pride of ownership in the other half: the wording, editing, formatting and presentation of that content. That’s what makes it “theirs” for both their providers and their patients.
After all, all healthcare is local.
This has important implications for the smartphone app industry. While the academic e-researchers and business e-developers dream of having their apps used by delivery systems everywhere, the problem is that their apps are often intertwined with their own organizations’ content.
In other words, you can have any breast cancer, heart failure or post-hospital discharge smartphone-based solution that you want, just so long as you also import their prompts, reminders, pop-ups, recommendations, messages, images and videos.
What then, are three rules to have your smartphone app be adopted by health systems everywhere?
1) Architecture Trumps Content: Smart app developers understand that the value proposition of the underlying technology architecture is separate from the value proposition of the content. The app itself needs to be independently stable, secure and snappy with minimal branching logic, an easy-to-use interface and freedom from annoying bugs, whether it’s heart failure for a hundred patients in Halifax or a dozen persons with diabetes in Des Moines.
2) Architecture Must Support Any Content: Very smart app developers also understand that the architecture should be able to accommodate any content that is preferred by their customers. If ABC Regional Health System wants their in-house policies, procedures, pamphlets, web-pages, in-house guidelines and electronic record prompts to be reflected in a smartphone app, then the app’s framework should be able to import it in a seamless plug and play fashion.
3) Architecture Should Come With Content: That being said, not every buyer, purchaser or provider will have all the content needed to manage a target population. That means app developers will need to have generic content ready to go to fill in the gaps.
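The three rules above amount to a familiar software pattern: a stable shell with pluggable content and a generic fallback. Here is a minimal, purely hypothetical sketch (the class and field names are my own invention, not any vendor's API):

```python
from dataclasses import dataclass, field

@dataclass
class ContentPack:
    """Customer-supplied (or generic fallback) material for one condition."""
    condition: str
    reminders: list[str] = field(default_factory=list)
    videos: list[str] = field(default_factory=list)

class AppShell:
    """Rule 1: the architecture itself is stable and content-agnostic."""

    def __init__(self, default_pack: ContentPack):
        self.packs: dict[str, ContentPack] = {}
        self.default = default_pack  # rule 3: generic content fills the gaps

    def install(self, pack: ContentPack) -> None:
        # Rule 2: a health system's own content plugs in without code changes
        self.packs[pack.condition] = pack

    def reminders_for(self, condition: str) -> list[str]:
        return self.packs.get(condition, self.default).reminders

shell = AppShell(ContentPack("generic", reminders=["Take medications as prescribed"]))
shell.install(ContentPack("heart failure", reminders=["Weigh yourself daily"]))

print(shell.reminders_for("heart failure"))  # the customer's own content
print(shell.reminders_for("diabetes"))       # no pack installed: generic fallback
```

The point of the sketch is the separation of concerns: ABC Regional Health System can swap in its own pamphlets and prompts without touching the shell, and conditions it never customized still get serviceable generic material.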
The business case for apps may be similar to selling a house. First off, make sure the foundation is solid and the roof is intact. Be prepared to knock out walls and move windows, if that’s what the buyer wants. And if the house needs to be furnished, do it; if the buyer wants to furnish it with some or all of their own furniture, accommodate that too.
Jaan Sidorov, MD is chief medical officer at MedSolis.
We used to hear “no one shops for health care.” But we know that not to be true; here’s a blog post I wrote about how people are doing just that.
So, now that we know they do shop, do they do it well? That’s a good question too.
A recent study from some Berkeley economists found that people on high deductible plans don’t shop well. Sarah Kliff, writing about it in Vox, says the study “shows that when faced with a higher deductible, patients did not price shop for a better deal. Instead, both healthy and sick patients simply used way less health care.”
I read the paper, by Zarek C. Brot-Goldberg, Amitabh Chandra, Benjamin R. Handel and Jonathan T. Kolstad, and had some questions and thoughts. First, the company studied has relatively well-paid workers: “employees at the firm are relatively high income (median income $125,000-$150,000),” we are told. Higher income = less price sensitivity.
Also, we know women shop more for health care and men shop less; women make 80 to 90 percent of the health care decisions in the U.S., and they are deeply in touch with this issue, while men aren’t. I did not see a gender breakdown in the methodology. So I wonder: Men or women?
Also, we learn that workers got tools to use to assess care, but we don’t see those tools — and believe me, I have seen some terrible ones. For example, here’s a post from one of our partners, Elana Gordon at WHYY public radio in Philadelphia, about how bad one insurer’s tools were for one couple.
Also, we don’t know what kind of education on their new system the workers got, so it’s a little bit murky (though the original study is incredibly long).
The rational health care shopper
Taking up the topic again, in a recent piece titled “Patients aren’t consumers, but the fiction of the rational health care shopper continues,” my friend Trudy Lieberman puts forth an argument that people are not rational health care shoppers. I sort of agree, but disagree deeply on the causes. One big reason that people aren’t “rational” shoppers: they don’t have information. Other reasons: 1) They’re sick and don’t want to shop. 2) They don’t expect to get robbed at the doctor’s office.
Lieberman discusses the Berkeley study in her piece, for the Center for Health Journalism at the University of Southern California Annenberg Center, then quotes me as saying our work on price transparency is good journalism and good public service. She then concludes:
“That may be, but the evidence, including the latest strong results from the Berkeley study, tells us that the focus on turning patients into shoppers has significant downsides. When people can’t distinguish between low- and high-value care or forego needed treatment because even a ‘cheap’ price is too high for the family budget, the cost of treating them may eventually be far greater. Remember, that was one of the arguments for the Affordable Care Act. But the high cost-sharing the exchange policies demand turns that premise on its head. I’m all for transparency and think Pinder’s work, as well as Steven Brill’s in Time and Elisabeth Rosenthal’s in The New York Times, goes a long way to acquaint the public about the American cost of health care. Just don’t count on 320 million people looking for the cheapest CT scan to lower the high price tag for American health care.”
I am not a Berkeley economist, and didn’t see the data or do the analysis they did, and my questions about that study persist. Also, I deeply respect and admire Trudy, and will always treasure our friendship. Her work is amazing. And I am a pig for a compliment, and so thanks, Trudy!
And yet, I have some thoughts on this piece.
Information is hard to find
From the boldfaced passages above:
1. Quality transparency is broken. People have a hard time distinguishing between high- and low-value care because the information that would help them decide what’s high and low value is hard to find. There’s some promising work being done, but it’s hard to find good, actionable information.
2. Price transparency is broken. People may skip treatment because they see the terrifyingly high sticker (Chargemaster) prices on bills. Why are those prices so crazy high to begin with — $6,221 for an MRI? Really? Also, perhaps they don’t realize that a cash price might be lower, or they might pay a negotiated rate under their plan, not the sticker price. Also, lower-cost treatment alternatives with nearly equal merit might be available.
3. High cost-sharing is not limited to exchange policies via the Affordable Care Act: it’s rife in employer policies now too. Here’s a recent Kaiser study detailing the rising premium cost to employees of employer-sponsored care (the employee share of premiums averages $4,955 a year for a family, almost double what it was in 2005), and the rising deductibles in employer-sponsored care (now an average of $1,077, more than triple what it was in 2006).
4. People who know prices might choose to pay less. Further, once they understand that health care pricing is random and capricious, we might see real policy change. We’re not counting on The Little Guy or Gal to be able to cut through the murk, profiteering and doublespeak effectively and thus fix the health care marketplace. But we’re trying to help.
Of course, the health care experience is fraught with emotion: it’s not like shopping for a tomato or a car. People don’t want to “shop” for health care when they’re in an emergency. Truly, they don’t want to “shop” for health care at all, in my experience: they just want to get treated and get well. But increasingly, they realize that they must find information about price and quality to protect themselves.
People don’t like this, and they don’t like being in the dark. Look at this study remarking on how people want to know — and how hard they say it is to find information. (I blogged about it here.) Funded by the Robert Wood Johnson Foundation and completed by Public Agenda, the study found:
- 56 percent of Americans have tried to find information about health care prices before getting care.
- Most Americans seem open to looking for better-value care. The majority of Americans do not believe that higher-priced care is necessarily better quality.
- Most Americans who have compared prices say they saved money.
Also, there’s this study about how higher prices are not necessarily better. And there’s this about high- and low- priced hospitals and links to quality.
Also, perspectives matter: if you have great health insurance with low co-pays and deductibles, and no co-insurance — and a good income and good health — you may be seeing the entire market through that prism, and may believe that others don’t shop. But they do shop, and often quite well — when they have the tools to do so.
An inexpensive MRI.
What you will see is that some people bought that MRI for $475, or $575, or $580, while others paid much more (see screenshots at right).
One insured person was charged $2,885; insurance paid $944.97, and the individual paid $1,940.03.
Some places charge $6,221 for that MRI.
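The insured example above is simple arithmetic: the patient's share is whatever remains of the charge after insurance pays. A minimal sketch (the function name is ours, not ClearHealthCosts'):

```python
def patient_share(total_charge, insurance_paid):
    """Out-of-pocket amount: what's left of the charge after insurance pays."""
    return round(total_charge - insurance_paid, 2)

# The insured MRI example above: charged $2,885, insurance paid $944.97.
print(patient_share(2885.00, 944.97))  # 1940.03
```

The point of the example is that "insured" does not mean "protected from price variation": the insured patient here paid more than triple the lowest cash price.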
A pricey MRI.
MRI charge: $580. "I was told procedure would be 1850. I have a 7500 deductaible. So I talked to the office mgr who said if I paid upfront and agreed not to report the procedure to Blue Cross, that it would be $580."
MRI charge: $3,163. “High deductible so paid the whole thing and then found out I could have had it done for *HALF* the price only blocks away. My first foray into individual insurance and it sucked. Need to shop around assuming can even get a price quote.”
Then there was the woman who called ClearHealthCosts from Missouri to say her husband was unemployed and she was off work on disability, but could go back if she showed an MRI — but she would have to pay herself. How might she shop for an MRI? she asked.
A woman from New Jersey called with a similar question — this time, though, there was a baby crying in the background.
A really pricey MRI.
People are paying more for health care in rising deductibles and co-insurance, but this phenomenon is not limited to Affordable Care Act plans: It is seen throughout the entire marketplace.
This was not caused by the A.C.A. The high deductibles and co-insurance are a function of the way our marketplace works. It keeps people in the dark about price charged, price paid, quality, outcomes, malfeasance and the like.
The system of third-party payers (insurance companies and government payers), plus the presence of employers buying insurance for employees, makes it even messier.
Also, the presence in this marketplace of for-profit companies is a driver of prices. For-profit companies need to maximize profits.
Here’s one way of thinking about it: Goldman Sachs is investing in health care. Goldman Sachs is here for the money. If there’s more for Goldman, there’s less for you. In this context, I hasten to add that nonprofit status does not confer sainthood, especially not in this marketplace.
On quality metrics: The responsibility for making quality assessments clear falls on the providers or on regulators, not on the patients.
Yes, it’s hard, but it must be done. You have never lived until you’ve seen a bunch of radiologists arguing over what makes a good MRI. Imagine that for every procedure, every pill.
But if people can’t find this information because the radiologists (or other specialists or industry players or regulators) cannot agree, or make this a low priority, or decide not to publicize information, are patients responsible?
A path forward
If we want people to shop well for health care, we should ask ourselves what kind of tools they need.
If we give people good and workable tools for discerning price and quality, they will be better able to perform this work. That’s the problem we’re trying to solve here at ClearHealthCosts.com. We’re not seeking wholesale reform of the system, though that might be a laudable goal: We just don’t think that’s in the cards right now, and that’s not where we’re putting our energy.
Here are some suggestions: Make all the information public and easily usable, all the time.
On prices: Make public all Chargemaster rates, private-payer reimbursement rates, Medicare reimbursement rates, and cash rates, all the time.
On quality: Make public all performance data, outcomes, frequency of procedures, disciplinary actions, payoffs to providers and similar data, all the time. (Hats off to ProPublica, for example, for working to find this data and make it usable, as well as the Leapfrog Group, U.S. News and World Report, Consumer Reports and all the others who are working this problem.)
And listen to people. Here’s a note from our mailbox:
“I just want to say that your website is amazing. Please don’t stop, because you are helping people every day, many of whom are struggling to make ends meet while others are just looking for some transparency in a market where there has traditionally been very little.”
Jeanne Pinder is the founder and CEO of ClearHealthCosts.
Categories: OIG Advisory Opinions
Ask the chief medical officer of a major health system about the issues that keep them up at night, and they will talk about the need to understand outcomes in complex populations, the need to engage in new business models and novel collaborations with other stakeholders, and engaging “customers” (i.e., patients) in new ways, all while addressing increasing cost pressures and safety concerns.
Sit down with a franchise leader in oncology at a pharmaceutical or biotech innovator and ask the same questions, and the response will pretty much be the same.
The convergence of business imperatives is largely driven by two factors: 1) the shift from volume- to value-based healthcare reimbursement, and 2) our rapidly advancing understanding of the causes of disease and health, which holds promise to accelerate further because of the proliferation of electronic health information coupled with continued scientific innovation.
These two factors are driving nearly all healthcare stakeholders – health systems, health plans, governments, and life science manufacturers – to struggle to answer the “hard questions” in healthcare – what works, for whom, why and at what cost? That’s the connection that aligns all segments of the health care and life sciences sectors in this emerging era of new financing, rapid knowledge expansion, increasing consumer expectations and care delivery strategies.
And the good news is that science and technology are advancing at a pace that will enable this to occur as long as policy, public sentiment and the right business models can be found to support rather than thwart the shift to value based, personalized healthcare.
For all participants, the shift towards value-based reimbursement and personalized healthcare poses challenges that will test the core of their business and operating models, as well as the technology and systems strategies to support these new models.
When Deloitte launched ConvergeHEALTH in 2014, we recognized that bringing different and critical elements of the healthcare ecosystem into alignment based on insights from health information had enormous transformative potential and was in fact becoming an imperative for our clients to survive and thrive in this new era of healthcare.
Our North Star goal is to help the healthcare system become a learning healthcare system, where all relevant information can help to drive higher quality, more efficient care while also enabling breakthrough insights that will lead to the next wave of medical innovation. The ultimate objective is better outcomes, delivered in a cost-efficient manner, with value-based, more personalized care as the end result.
Since we launched ConvergeHEALTH, the shift towards value-based care has only accelerated. Recently, the Catalyst for Payment Reform (CPR) reported that 42 percent of Medicare’s $360 billion in payments are now tied to value, the rest being made through more traditional fee-for-service. The Administration has set forth aggressive goals to significantly increase this percentage over time. The momentum from volume to value is growing.
At the same time, movement towards personalized care also is growing. The President announced a major Precision Medicine Initiative as part of this year’s State of the Union, the House of Representatives passed the 21st Century Cures Act, and the number of targeted, personalized therapies across therapeutic areas continues to grow. The long promised era of personalized medicine appears to be close.
A powerful engine available for winning this market shift is insights derived from data analytics. Important factors that will determine how smoothly that engine can eventually operate include:
- Major redirection in reimbursement. The push towards payments that actually reward outcomes, as noted, is gaining steam and requires new analytics approaches that allow us to understand financial and clinical outcomes in complex populations.
- Massive advances in science and technology. Game-changing advances in genomics, proteomics, imaging and other aspects of scientific research and development are leading to an explosion of data, laying a path towards broader and more effective application of personalized medicine. Again, interpretation of massive amounts of complex data can hold the key to success.
- The digital health wave. Electronic medical records, wearables, and patient self-reported data via social media and other patient engagement technologies are generating another tsunami of health information that could have a significant impact on health care and life sciences and fill critical gaps in our understanding of health and disease.
If data and analytics are the engine driving this shift, the vehicle for winning the market shift is new, increasingly collaborative business models.
Said another way, technology innovation is a necessary but wholly insufficient piece of supporting this shift. Business model and operating model innovation are just as critical. Deloitte is committed to innovating here as well, as our ongoing collaboration with pharmaceutical company Allergan and our frequent collaborator Intermountain Healthcare demonstrates. This is one of a number of joint ventures that enable us to leverage our ConvergeHEALTH analytics platform to support insights into what medical interventions work optimally in certain populations. (See the Healthcare IT News story, “3 heavyweights harness analytics for women’s health.”)
These types of next generation collaborations underscore a push toward enhanced clinical and operational excellence, value-based care to improve population health management and reliance on evidence-based medicine, and establishing excellence in research by leveraging real-world evidence and comparative analysis.
The writer and futurist William Gibson put it well: “The future is already here – it’s just not evenly distributed.”
That’s where health care and the life sciences stand. It’s our role to bring innovative yet pragmatic, workable strategies to ensure the future that is already here extends evenly to all industry stakeholders, and most importantly the patient.
Brett J. Davis is the general manager of Deloitte Health Informatics (DHI), providing advanced analytics services and products to health care providers, researchers and medical manufacturers.
Today’s healthcare information technology headlines are littered with stories of how large delivery networks are scaling up and successfully building and using IT infrastructure. But the real success story is hiding in the shadows of these large enterprise deployments, in the small and independent practices across the US. The recent ICD-10 transition, which had been foretold to drive small practices into financial despair because of their lack of IT savvy and infrastructure, has shown just the opposite. A report from a leading provider of billing software, based on an analysis of government and private payer claims over the past 30 days, tells a different story.
Small independent practices have few rejected claims and are getting paid quickly. The software vendor’s report, using data from over 13,000 small practices, showed that in October:
- 99% of customers submitted at least one ICD-10 claim
- 87% of customers received payment for at least one ICD-10 claim
- 4 million claims submitted in October were already paid
- 11 days was the average time to payment for ICD-10 claims
- The payer rejection rate through one clearinghouse was 1.6%
These results are in line with announcements from large payers and clearinghouses like Humana, UnitedHealthcare, and Emdeon that reported no significant increase in denials during a panel at MGMA. However, the results do show small practices out-performing the industry average provided by CMS where total claim rejections were estimated at two percent.
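The comparison above is just a rate calculation. A minimal sketch with illustrative claim counts (the function and the raw counts are ours, not from the vendor's report; only the resulting percentages match the figures cited):

```python
def rejection_rate(rejected, submitted):
    """Share of submitted claims rejected by the payer, as a percentage."""
    return 100.0 * rejected / submitted

# Illustrative counts producing the rates discussed above:
small_practice = rejection_rate(16_000, 1_000_000)  # 1.6% clearinghouse rate
industry_avg = rejection_rate(20_000, 1_000_000)    # ~2% CMS estimate
```

The half-point gap looks small, but across millions of monthly claims it translates into thousands of fewer denials to rework.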
The report clearly shows that small and independent practices that use an ICD-10-ready billing system designed for their needs to submit and process claims have a lower denial rate than the average. “In preparing for the ICD-10 transition, [the right software] was of great importance for us to improve, and most importantly, have an EHR that effectively and accurately communicates data to our PM system, seamlessly bridging the gap between treatment records, scheduling, and billing requirements,” said Dr. Rebecca Pearson, an independent chiropractor in private practice.
This is a victory for the independent practice, which is clearly outgunned when it comes to large-scale IT resources and budgets. Practices of all sizes can learn from this: the right solutions can allow a small practice to operate much like its larger counterparts and efficiently manage clinical charts, quality reports, and claims.
Technology can level the playing field in many ways in healthcare and the future is bright for those practices that want to stay independent and leverage technology to ensure their success.
Tom Giannulli, MD is a clinical advisor to Kareo
You follow movies? That is, not just watching them but thinking about how they are built, looking at the structure? In classic movie structure there is a moment near the end of the first act. We’ve established the situation, met our hero, witnessed some good action where he or she can display amazing talents but also what may be a fatal weakness.
Then comes the moment: Some grizzled veteran or stern authority brings the hero up short. Think of Casino Royale, that scene where Daniel Craig’s Bond (after those brutal opening scenes) is back in London and is confronted by Judi Dench’s M. Or Obi-Wan Kenobi challenging Luke: “You must learn the ways of the Force.” Or that moment in the classic Westerns when the tired, angry old sheriff rips off his badge and throws it on the desk, leaving the whole problem to the young upstart deputy. But before he stomps out the door he turns and says to the young upstart, “You know what your problem is, kid?”
And then he tells him what the problem is: not just the kid’s problem, but the problem at the core of the whole movie. He just lays it out, plain as day.
In healthcare, this is that moment. We are near the end of the first act of whatever you want to call this vast change we are going through.
And where are we? Across America, the cry of the age is “Volume to value.” At conferences we all stand hand over heart and pledge allegiance to the Institute for Healthcare Improvement’s Triple Aim of providing a better care experience, improving the health of populations, and reducing per capita costs of health care.
But in each market, some major players are throwing their muscle into winning against the competition by defeating the Triple Aim, by increasing their volume, raising their prices, doing more wasteful overtreatment, and taking on little or no risk for the health of populations. At least in the short term, the predatory strategies of these players are making it more difficult for the rest of us to survive and serve.
I’m not going to name names here. You know who you are. Worse for you in the long run, your customers and potential customers are coming to know who you are, and their strength in the market is increasing every year.
Nap time is over, folks. It’s time to put this discussion in the open.
First Question: Will They Succeed in Defeating the Movement?
These predatory systems are certainly making the movement from volume to value more difficult. Will they succeed in stopping it?
An insight from systems thinking about traffic might be instructive. One way to study traffic is to model it with automata: Create little software bots that mimic the motions and decisions of cars and drivers and set them loose on virtual freeways and streets. If you make them all the same, say all moving at the speed limit when they can, at a certain traffic density they always gridlock. If you make some of them different, if you introduce, for instance, a few slow-moving trucks onto your virtual freeway, the traffic actually moves better as it constantly re-arranges itself to get around them.
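The traffic model described above can be sketched as a simple cellular automaton in the Nagel-Schreckenberg style: cars on a circular road accelerate toward a personal top speed, brake to avoid the car ahead, and occasionally dawdle. All names and parameters here are ours; giving a few cars a low top speed plays the role of the slow trucks:

```python
import random

def step(road, vmax, length, p_slow=0.3):
    """One synchronous update of a Nagel-Schreckenberg-style traffic automaton.

    road:   dict mapping cell position -> (car_id, speed) on a circular road
    vmax:   dict mapping car_id -> that car's top speed ("trucks" get a low one)
    length: number of cells on the circular road
    """
    positions = sorted(road)
    n = len(positions)
    new_road = {}
    for i, pos in enumerate(positions):
        car, v = road[pos]
        gap = (positions[(i + 1) % n] - pos - 1) % length  # empty cells ahead
        v = min(v + 1, vmax[car])            # accelerate toward personal max
        v = min(v, gap)                      # brake: never hit the car ahead
        if v > 0 and random.random() < p_slow:
            v -= 1                           # random dawdling
        new_road[(pos + v) % length] = (car, v)
    return new_road

def mean_speed(road):
    """Average speed across all cars; a crude measure of how well traffic flows."""
    return sum(v for _, v in road.values()) / len(road)
```

Running two populations side by side, one homogeneous and one with a handful of slow "trucks", and comparing `mean_speed` over many steps is how modelers probe the counterintuitive claim that a little heterogeneity can keep dense traffic from gridlocking.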
Similarly, such predatory systems may slow the rest of us down and be a problem for us, but over the longer term they may well spur faster changes in rival systems and in the customer base that will lead to more rapid and complete change.
Second Question: Will Their Strategy Succeed and Last for Them?
Whether they will succeed really depends on the strength of the other forces in the system — in this case, the forces pushing for lower cost at the same or higher quality.
These forces include employers, other large purchasers such as pension plans, health plans constructing narrow networks, competing healthcare providers, out-of-region and virtual competitors, new market entrants, and individual consumers pushed into narrow network plans with high deductibles and co-pays.
Note who owns the largest “lever and a place to stand” in this concatenation: employers and other large purchasers. They have the direct incentive, the market power and increasingly the information to try new strategies.
Competing structures, including, especially, multispecialty physician groups, are also important in many markets. Why? Because doctors are truly scared. That fear is driving many of them into the arms of hospitals and hospital-based integrated health networks. But the fear is driving others into building their own larger structures and creating specialized accountable care organizations and ACO-like arrangements through them. Increasingly they are seeking direct arrangements with self-funded employers, as with Boeing in the St. Louis, Seattle and Chicago areas.
The cost crunch driving this price-sensitive behavior will increase as income inequality grows and as more boomers retire. The possibility of “entitlement reform” that unbundles Medicare into a defined-contribution program is slim, but if it happens it will only increase the cost crunch and put more decision-making power in the hands of the individual consumer.
The pressure will grow also as buyers become more aware that in healthcare price is truly not a marker for quality, only for market dominance. The prices in healthcare are not justifiable even in terms of the supply chain, as similar institutions in the same or similar markets often have wildly different price structures. In any market, under conditions of full transparency about quality, prices for substitutable products might vary by 50 percent or even 100 percent, not by 500 percent or 1000 percent as they commonly do in healthcare.
Payer and purchaser techniques such as reference pricing, bundled products and medical tourism are capable of picking apart a market and exposing healthcare providers to market forces based on price and quality whether or not those providers wish to be exposed.
Third Question: Does Size Matter?
CEOs of these expanding market dominators will tell you that it’s a defensive strategy: They know that the big crunch is coming, and they are bulking up to own as much of the market as possible as a reserve against that future.
They avoid alternative strategies out of fear: If they engage in risk contracts, if they market bundled products at market prices, if they take on capitated Medicaid contracts, they will be undercutting their ability to extract rentier payments for their market dominance, lower their top line income, and put the organization at greater risk.
Is this true? No. Let’s look at a few reasons why.
First. No, you don’t have to be a certain size, you don’t have to have a certain top line in order to survive. To survive you have to make sure that your top line is greater than what it costs you to bring in that top line. The metaphorical bottom line is the actual bottom line.
Second. Capital costs. Capital costs increase your cost basis more or less permanently. You have to bring in a certain level of business to pay off that bonded indebtedness every year, every month, before you can even think about turning a profit.
Some smart organizations are thinking like this: Look, the environment is changing rapidly. Some parts of what we do (say, primary care for certain populations) are with us more or less permanently, and all signs are that they are likely to grow with time rather than diminish. However we end up getting paid for that, the more efficiently and effectively we can do that, the better off we will be. So this is a good place to take on debt that we know we can service with that line of business, to build whatever is the most efficient business and physical structure for that. (Or we can build a public/private partnership that turns the transaction into someone else’s debt against a leasehold for us.)
Other lines of business, such as specific types of surgery or techniques such as proton beam therapy, have a quite different capital profile. In the changing environment, all techniques that are not truly helpful, that do not have a positive cost/benefit ratio for the customer, are likely to diminish substantially. In the new environment, if the value isn’t there, the volume won’t be there. So in the end, expending scarce capital capacity on building for them may look like you went to a lot of work to weld a ball and chain onto your own ankle.
Third. Different revenue streams have different effects on the bottom line. Let’s look at fee-for-service, bundled products and risk contracts.
If you are getting paid for every procedure and test in a fee-for-service world, it doesn’t matter how wasteful they are, how effective or how efficient, because every expense creates its own addition to the top line, every one of them is reimbursed, and you get paid for your inefficiencies. So as a business proposition, who cares? Volume equals value, at least for you, whether or not it does for your customers.
If you offer a bundled product, the top line is no longer per test or procedure; it is per case. It still doesn’t matter whether the case itself is wasteful. Whether the patient is better off with a new knee is irrelevant financially, but suddenly the efficiency of producing the product (the “total cost of ownership”) is deeply relevant, because every extra CT scan, every mistake, every increased complexity of the operation adds to the cost against a fixed top line. If you can’t get efficient enough to get your true costs below your price (or worse, you don’t know your true costs), then every time you sell that product you are costing yourself money.
If you offer a risk-based product, now your top line is not per case but per life — per employee per month, per Medicaid beneficiary, per patient allocated to your accountable care organization. So the cost concern shifts to that level: What is the total cost of ownership of primary care, or spine-and-pain care, or diabetes care, or total life care for that life? Now it matters not only whether you are doing operations that really don’t need to be done, that are not truly medically indicated. It matters not only whether you are doing what does need to be done in the most cost-effective way possible. It matters even more whether you could have gotten to the actual goal, a healthy pain-free patient, as efficiently, effectively and quickly as possible. And the most efficient path to health for all patients (if you can do it) is to get them there before they ever need complex and expensive care: comprehensive disease management and prevention.
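The three revenue structures described above can be reduced to three margin formulas. A minimal sketch for intuition only; the function names and every number are hypothetical, not drawn from any real contract:

```python
def ffs_margin(n_procedures, fee, cost_per_procedure):
    """Fee-for-service: every procedure, wasteful or not, adds to the top line."""
    return n_procedures * (fee - cost_per_procedure)

def bundled_margin(n_cases, bundle_price, cost_per_case):
    """Bundled: fixed price per case, so efficiency per case drives the margin."""
    return n_cases * (bundle_price - cost_per_case)

def risk_margin(n_lives, pmpm, months, total_cost_of_care):
    """Risk-based: fixed revenue per member per month; total cost of care
    across the covered population is the only remaining lever."""
    return n_lives * pmpm * months - total_cost_of_care
```

Note the sign flip: under fee-for-service an extra (even unneeded) scan adds to margin, while under a bundle or a risk contract the identical scan only adds to cost against a fixed top line.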
The Odds of Drawing to an Inside Straight
Price is implacable. You cannot game a market that is structurally exposed to price differences, information and options. Market dominators must keep their prices opaque and depend on backroom deals with major payers and purchasers in order to maintain their status.
In turbulent conditions, successful strategies will be those that thrive under conditions of high variance, multiple energy inputs and multiple strategic options. Successful strategies build expertise, experience and capacity for multiple revenue streams with multiple target markets.
Given the other forces in play, we cannot build any reasonable scenario in which the status quo continues. Questions on which we can build credible scenarios include: How quickly will the collapse to a more open market come to your market? And will that collapse be limited to certain revenue streams, lines of business and target markets, or will it be across the board?
Maintaining market dominance is actually a fragile strategy based on a single scenario and a monochromatic set of assumptions about the future. If your entire business structure depends on keeping your prices a high secret, and not exposed to real competition on price and quality, you are on the crumbling edge of a cliff as the seas advance.
Joe Flower is a healthcare futurist and author. He is a contributing editor with THCB.
The key to driving behavior change, a seasoned marketing executive turned digital health investor told a panel on patient engagement that I moderated this week, is to get beyond the demographics of customers, and to understand the “why” – what are their distinct motivations and drivers?
Customers with similar demographic characteristics might be motivated in very distinct ways, he explained; sophisticated, quantitative market research can help define the different “personalities” present in a particular market.
Healthcare businesses, he emphasized, need to recognize these differences, and customize their approaches based on this nuanced understanding.
On the one hand, it occurred to me he was describing the behavioral component of precision medicine; in the same way it’s important to match an oncology drug with the right biochemical pathway, it’s also essential to customize the motivational approach to the characteristics of each individual.
On the other hand, I realized there was something that seemed a little sad about the idea of developing extensive market analytics and fancy digital engagement tools to simulate what the best doctors have done for years – deeply know their patients and suggest treatments informed by this understanding.
Instead, it seems, we’ve slashed the time physicians get to spend with patients, protocolized and algorithmized almost every moment of this brief encounter, and insisted the balance of time be used for point-and-click data entry and perhaps a rushed dictation. We’ve industrialized the physician-patient encounter, the process and the paperwork, but eviscerated the human relationship; its value, unable to translate easily to an Excel spreadsheet, was discounted and dismissed.
As I look at the extensive analytic efforts to categorize patients, and the many digital health platforms designed to motivate behavior, it’s hard not to ask whether we’re painfully trying — at scale but without heart — to re-create something we might have been better off not destroying in the first place.
David Shaywitz is based in Mountain View, California. He is Chief Medical Officer at DNAnexus and holds an adjunct appointment as a Visiting Scientist in the Department of Biomedical Informatics at Harvard Medical School.
According to a Nielsen survey released earlier this month by the Council of Accountable Physician Practices and the Bipartisan Policy Center, the majority of medical providers in the United States still do not use email or text messages to communicate with their patients, despite the fact that such communication channels are in high demand from patients.
The survey results are appalling. After all, when you receive text message reminders about your upcoming credit card bill or ask your airline a question about your flight reservation via email, why can’t you communicate with your doctor in the same convenient way? Why are we still using the technology of the 20th century to communicate with our doctors in the 21st century?
The answer has three sides to it: Economics, technology management and regulations.
Physicians, like you and me, have to make a living. In the current fee-for-service payment system, doctors are paid only for the services for which they can submit a claim to insurance companies. As you may have guessed, doctors are not always reimbursed for the time and energy they spend on emails and text messages. If they can answer your question during an office visit (which pays more than an online consultation), why would they answer it in an email?
Information technology has revolutionized every industry but healthcare. Everyone, except doctors and hospitals, had to either jump on the IT bandwagon or go out of business. The lack of economic incentives in the fee-for-service payment model has prevented physicians from seriously considering such technologies in their practices. Even larger medical providers rarely have a well-defined digital strategy. As a result, while other industries have learned how to adopt, use and manage information technology, the healthcare sector lacks the business expertise required to implement it successfully.
While there are hundreds of products and thousands of experts for customer relationship management in virtually every other industry, the healthcare sector seems to lack the technical and business expertise required for patient relationship management. Even if medical providers want to better communicate with their patients, they have neither the tools nor the expertise, at least as compared with other industries. If these technologies are not correctly implemented and integrated with the workflow of medical providers, they will become a problem rather than a solution. Imagine a doctor who is constantly distracted by the flow of emails and text messages from patients.
Finally, the misunderstanding of laws and regulations intended to protect patient privacy in healthcare further inhibits medical providers from fully embracing IT. The Health Insurance Portability and Accountability Act, commonly known as HIPAA, is a good example of such acts. I believe HIPAA is a fairly well-designed act and does pretty well in protecting patients’ privacy, but as David Harlow points out, there’s a lot of confusion about HIPAA on the part of medical providers and tremendous resistance to open communication even when authorized and demanded by patients.
These factors have created a situation in which medical providers do not have the incentive to better communicate with their patients, and even if they want to do so, they rarely know how and are often concerned about the possible legal consequences of their actions. Given these barriers, the fact that even a small percentage of medical providers are using these communication technologies is surprising to me.
Despite the lackluster survey results, I believe that medical providers will use modern communication tools in the near future. As value-based payments replace fee-for-service models, providers will have much larger incentives to communicate with their patients. This demand from medical providers will drive the IT sector to develop the required tools, and very soon the healthcare industry will learn how to successfully integrate these technologies into its daily routine. The generation of young and digitally native doctors will help expedite this process.
Niam Yaraghi is a fellow at the Brookings Institution. This post first appeared in the Brookings Tech Talk Blog.
Hospitals can get overwhelmed by the array of ratings, rankings and scorecards that gauge the quality of care that they provide. Yet when those reports come out, we still scrutinize them, seeking to understand how to improve. This work is only worthwhile, of course, when these rankings are based on valid measures.
Certainly, few rankings receive as much attention as U.S. News & World Report’s annual Best Hospitals list. This year, as we pored over the data, we made a startling discovery: As a whole, Maryland hospitals performed significantly worse on a patient safety metric that counts toward 10 percent of a hospital’s overall score. Just three percent of the state’s hospitals received the highest U.S. News score in patient safety — 5 out of 5 — compared to 12 percent of the remaining U.S. hospitals. Similarly, nearly 68 percent of Maryland hospitals, including The Johns Hopkins Hospital, received the worst possible mark — 1 out of 5 — while nationally just 21 percent did. This had been a trend for a few years.
What could account for this discrepancy? Could we all really be doing this poorly in my home state and in our hospital, where we take great pride in our efforts to prevent patient harm? After lengthy analysis, it seems quite clear that the answer is no. Instead, the patient safety score appears to have a bias against Maryland hospitals, because the data from our state is incomplete and not consistent with the data reported for hospitals outside of Maryland.
Maryland’s Unique Arrangement
The U.S. News patient safety score rates hospitals on their track record for preventing seven health care-associated patient harms, such as punctured lung, hematoma and pressure ulcer. U.S. News derives this score by identifying Medicare billing claims that include the diagnosis codes for these harms. For this year’s rankings, this analysis used claims data from October 2010 through September 2013.
The difference between Maryland and other states involves how we account for complications that are “present on admission” and therefore are not the result of poor hospital care. The hematoma that a patient suffered in an auto accident, for example, should not be attributed to the care he or she received in the hospital. Since late 2007, hospitals outside of Maryland have been required to add codes to their Medicare billing claims to indicate such present-on-admission conditions or face financial penalties for not doing so.
But in Maryland, we have a longstanding and unique arrangement with Medicare that has allowed us to participate in our state’s pay-for-quality programs instead of the federal program. This essentially requires Maryland hospitals to have two data sets, one we submit to Medicare for billing and one we submit to the state for quality reporting. It wasn’t until October 2014 — after the period analyzed for this year’s U.S. News patient safety score — that the Medicare program started requiring present-on-admission codes from Maryland hospitals. By contrast, Medicare required these codes from non-Maryland hospitals starting in 2007.
The result: Many complications that patients actually suffered before they came to our hospitals are being counted against Maryland hospitals in the U.S. News rankings.
The impact can be staggering. For example, when we looked at Medicare data for The Johns Hopkins Hospital in 2012, we found 29 cases of pressure ulcers — the number used by U.S. News. Yet after examining the quality-related data that we sent to the state of Maryland, all but one of those pressure ulcer cases were found to be present on admission.
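The mechanics of the POA flag are easy to illustrate. Here is a minimal, hypothetical sketch — the claim structure, field names, and flag values are invented for illustration, not the actual Medicare claim format — showing how counting a harm only when it is *not* flagged present on admission collapses a raw complication count like the one above:

```python
# Hypothetical illustration: how present-on-admission (POA) flags change
# a complication count derived from billing claims. All data invented.

def count_hospital_acquired(claims, harm_code):
    """Count claims coding harm_code, split into raw total vs. those
    NOT flagged present on admission (plausibly hospital-acquired)."""
    total = 0
    acquired = 0
    for claim in claims:
        for dx, poa in claim["diagnoses"]:
            if dx == harm_code:
                total += 1
                if poa != "Y":  # "Y" = condition was present on admission
                    acquired += 1
    return total, acquired

# Mirror the 2012 example: 29 claims code a pressure ulcer,
# but 28 of them are flagged as present on admission.
claims = (
    [{"diagnoses": [("pressure_ulcer", "Y")]}] * 28
    + [{"diagnoses": [("pressure_ulcer", "N")]}]
)

total, acquired = count_hospital_acquured = count_hospital_acquired(claims, "pressure_ulcer")
print(total, acquired)  # 29 1
```

A scoring method that ignores the POA column sees 29 complications; one that respects it sees 1. That single column is the entire dispute.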
Other Maryland hospitals mirrored our performance. Nearly 87 percent received the lowest possible score for pressure ulcers, versus 21 percent outside of our state. And Maryland hospitals, on average, performed far worse on safety than those in any other state. If the public were not aware of these data quality issues, they might mistakenly conclude that Maryland hospitals are significantly less safe than those in other states.
Looking Good vs. Doing Well
If this is too far a journey into the obscure world of health care measures, I feel your pain. Hospital leaders and quality improvement specialists are constantly bombarded with these measures, and spend much time trying to separate true concerns from “noise” caused by poor measures, poor data quality or random error. Too often, what we find is noise. But a hospital that looks poor on an invalid but high-profile measure ignores it at its own peril — even if improving the score is more about looking good than actually delivering better care.
Over the years, U.S. News has continually sought to make its ranking methodology more fair and robust, and they have made improvements in response to feedback. Given the huge toll of preventable patient harm, it is encouraging that the weight given to patient safety was doubled, from five percent to 10 percent of the overall hospital score, beginning with last year’s rankings.
But we must be sure that all hospitals are measured with the same yardstick and that the measures are valid and reliable. This bias wasn’t created intentionally by U.S. News. We at Johns Hopkins didn’t even realize it existed until this summer, and it appears to be news to other Maryland hospitals as well. It’s the result of the uncoordinated, confusing way in which we attempt to measure quality in this country. The federal government, state agencies, insurers, nonprofits and others are creating measures with varying degrees of validity and usefulness. We need to work towards creating a single source of truth by which all hospitals and providers will be judged. In the meantime, I believe the U.S. News methodology for patient safety scores should be re-evaluated and possibly revised or replaced.
The U.S. News Response
–by Avery Comarow, Health Rankings Editor, and Ben Harder, Chief of Health Analysis, U.S. News & World Report
The key question raised by Pronovost is whether higher rates of missing present-on-admission (POA) information in data submitted by Maryland hospitals, and specifically by Johns Hopkins, caused those hospitals to receive lower patient safety scores than they would have otherwise. Pronovost concludes that the clear answer is yes. We feel the answer is complicated.
On our overall patient safety score, it is true that Maryland hospitals as a group did score below hospitals in most other states. It is not clear, however, that this is because of problems with the completeness or quality of POA data for Maryland hospitals, as Pronovost argues. It could reflect Maryland hospitals’ actual quality of care. It could reveal deficiencies with software — developed by the federal Agency for Healthcare Research & Quality (AHRQ) — that U.S. News used to adjust for missing POA information about pressure ulcer and other patient safety indicators. Or it could be due to an unknown combination of the three — or to other factors.
Maryland hospitals were not required to submit POA information to the Centers for Medicare & Medicaid Services for payment until October 2014. However, an analysis by U.S. News found at least some POA information in prior years’ Medicare records for almost all Maryland hospitals — some, in fact, had less missing data than hospitals in other states. Nevertheless, the typical Maryland hospital had significantly more missing POA data than hospitals elsewhere.
The documentation for the AHRQ software, which was designed specifically to account for missing POA data, made no mention that it had limitations in dealing with large amounts of missing data. Recently, in response to our questions, AHRQ indicated that its software was not designed to fill in the blanks for hospitals with high levels of missing POA data. The agency’s scientists told U.S. News and its data contractor RTI International that while this limitation could introduce bias, they could not say how much that problem might skew the results, nor whether any potential bias would favor or disfavor Maryland hospitals.
Further analysis by U.S. News found that on most of the seven complication rates that comprise our overall patient safety score, Maryland hospitals scored similarly to hospitals elsewhere. The overall score and the complication cited in Pronovost’s letter — pressure ulcer — were exceptions, not the rule. Reasonable people may differ in how they interpret that observation.
What is clear is the need for U.S. News to give the government’s patient safety measures a hard look. We will communicate our findings and describe any planned methodology changes as we determine them. In the meantime, we soon will annotate the patient safety scores of all Maryland hospitals to reflect the newly understood limitations of the AHRQ software we use.
Peter Pronovost is the director of the Armstrong Institute, as well as senior vice president for patient safety and quality, at Johns Hopkins Medicine.
The Aga Khan delivered the Samuel L. and Elizabeth Jodidi Lecture at Harvard University yesterday. He has been a strong proponent of pluralism in the world and has devoted billions of dollars in resources from the Aga Khan Development Network to enhancing education, health care, culture, and economic development in the world’s poorest countries in Asia, Africa, and the Middle East. The full text is here, but I offer a pertinent excerpt, with lessons about an increasingly divisive level of political debate in the US and elsewhere:
In looking back to my Harvard days (in the 1950s), I recall how a powerful sense of technological promise was in the air — a faith that human invention would continue its ever-accelerating conquest of time and space. I recall too, how this confidence was accompanied by what was described as a “revolution of rising expectations” and the fall of colonial empires. And of course, this trend seemed to culminate some years later with the end of the Cold War and the “new world order” that it promised.
But even as old barriers crumbled and new connections expanded, a paradoxical trend set in, one that we see today at every hand. At the same time that the world was becoming more interconnected, it also became more fragmented.
We have been mesmerised on one hand by the explosive pace of what we call “globalisation,” a centripetal force putting us as closely in touch with people who live across the world as we are to those who live next to us. But at the same time, a set of centrifugal forces have been gaining on us, producing a growing sense — between and within societies — of disintegration.
Whether we are looking at a more fragile European Union, a more polarised United States, a more fervid Sunni-Shia conflict, intensified tribal rivalries in much of Africa and Asia, or other splintering threats in every corner of the planet, the word “fragmentation” seems to define our times.
Global promise, it can be said, has been matched by tribal wariness. We have more communication, but we also have more confrontation. Even as we exclaim about growing connectivity we seem to experience greater disconnection.
Perhaps what we did not see so clearly 60 years ago is the fact that technological advance does not necessarily mean human progress. Sometimes it can mean the reverse.
The more we communicate, the harder it can sometimes be to evaluate what we are saying. More information often means less context and more confusion. More than that, the increased pace of human interaction means that we encounter the stranger more often, and more directly. What is different is no longer abstract and distant. Even for the most tolerant among us, difference, more and more, can be up close and in your face.
What all of this means is that the challenge of living well together — a challenge as old as the human race — can seem more and more complicated. And so we ask ourselves, what are the resources that we might now draw upon to counter this trend? How can we go beyond our bold words and address the mystery of why our ideals still elude us?
A pluralist, cosmopolitan society is a society which not only accepts difference, but actively seeks to understand it and to learn from it. In this perspective, diversity is not a burden to be endured, but an opportunity to be welcomed.
A cosmopolitan society regards the distinctive threads of our particular identities as elements that bring beauty to the larger social fabric. A cosmopolitan ethic accepts our ultimate moral responsibility to the whole of humanity, rather than absolutising a presumably exceptional part.
Perhaps it is a natural condition of an insecure human race to seek security in a sense of superiority. But in a world where cultures increasingly interpenetrate one another, a more confident and a more generous outlook is needed.
What this means, perhaps above all else, is a readiness to participate in a true dialog with diversity, not only in our personal relationships, but in institutional and international relationships also. But that takes work, and it takes patience. Above all, it implies a readiness to listen.
What is needed, as the former Governor General of Canada Adrienne Clarkson has said, and I quote, is a readiness “to listen to your neighbour, even when you may not particularly like him.” Is that message clear? You listen to people you don’t like!
Paul Levy is the former CEO of BIDMC and blogs at Not Running a Hospital, where an earlier version of this post appeared.
Despite the many flaws in our healthcare system, we could always point to data showing that over the last few decades we were living longer and healthier lives—even if not quite as long and healthy as our contemporaries in many European and some Asian countries.
It now appears that’s no longer true for one segment of the U.S. population.
I’m talking, of course, about the surprising findings released last week that the death rate among non-Hispanic white men and women ages 45 to 54 increased from 1999 to 2013 after decreasing steadily for 20 years, as it did for other age cohorts and ethnic groups.
The rise was small in absolute terms—half a percent a year—but it was a relatively sharp reversal in direction from the average 2% a year decline in death rate from 1978 to 1998. Moreover, this population experienced an increase in non-fatal diseases and conditions, too (called morbidity).
For both death rates and morbidity, the reversal occurred in all income and education brackets in the 45-54 age cohort, but it was most pronounced among those with lower incomes and less than a college education.
The researchers found that no other developed country experienced a similar reversal. And blacks, Hispanics, and those aged 65 and above in the U.S. continued to see death rates fall in the period examined.
The bottom line in terms of overall impact: If the death rate for white 45−54 year olds had continued to decline at its previous (2%) rate, half a million deaths (and these are premature deaths) would have been avoided from 1999 to 2013. That’s comparable to lives lost so far to AIDS, the authors say. It’s also on a par with the increased death rates and lower life expectancy in Russia in the 1980s and 90s.
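The half-million figure is a counterfactual calculation: take the trajectory actually observed (a rise of roughly half a percent a year) and compare it, year by year, with the trajectory that would have held had the earlier 2%-a-year decline continued. A rough sketch of that arithmetic — the baseline rate and cohort size below are illustrative assumptions, not the CDC figures the published analysis uses — shows how a modest annual gap compounds into hundreds of thousands of excess deaths:

```python
# Back-of-envelope counterfactual: excess deaths 1999-2013 if the death
# rate rose 0.5%/yr instead of continuing its prior 2%/yr decline.
# BASELINE_RATE and COHORT_SIZE are assumed values for illustration only.

BASELINE_RATE = 381       # deaths per 100,000 in 1998 (assumed)
COHORT_SIZE = 35_000_000  # white non-Hispanic 45-54 population (assumed)

excess = 0.0
for t in range(1, 16):                          # years 1999 through 2013
    actual = BASELINE_RATE * 1.005 ** t         # observed: +0.5% per year
    counterfactual = BASELINE_RATE * 0.98 ** t  # old trend: -2% per year
    excess += (actual - counterfactual) / 100_000 * COHORT_SIZE

print(round(excess))  # on the order of several hundred thousand deaths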
What’s going on?! The researchers didn’t mince words in their published article or in media comments: this unwelcome turn of events is attributable almost entirely to “deaths from distress and despair…both economic and psychological,” as co-author Anne Case of Princeton University put it in an NPR radio interview.
Namely, the rise in death rate, they found, was triggered by drug and alcohol poisonings, suicide, chronic liver diseases and cirrhosis of the liver. Likewise, the increase in morbidity reflected a rise in alcohol and illicit drug use; abuse and misuse of prescription drugs; psychological distress; physical problems and pain (neck, facial, joint and back, and sciatica), and difficulties with the activities of daily living.
I’m sure the sophisticated THCB readership can pretty much deduce the confluence of factors that precipitated this reversal, though few of us might have predicted it would be so intense or so specific to the white middle-aged:
- The erosion of the manufacturing base and loss of blue-collar jobs (down from 28% of jobs in 1970 to 17% in 2010, and still declining), and the loss of rural jobs
- Wage and income stagnation in the low- and middle-income groups
- Income inequality and economic insecurity
- The great recession
- The decline of the stable 2-parent family (the percent of single white mothers rose from 18% in 1980 to 30% in 2010 for those with no college degree)
- People giving up on being in the work force
- Shifts in social trends leading to more isolation and loss of community
- The ready availability, overuse and abuse of both prescription and illegal drugs, especially narcotic painkillers and heroin (opioids)
- Poor diet, physical activity and health and lifestyle habits (despite years of public health messaging)
- A suboptimal and dysfunctional mental health system and poor access to mental health care and substance abuse programs
- Rising out-of-pocket healthcare costs for people with inadequate or no health insurance, leading them to postpone or forgo treatment
This new-found trend represents a public health failure and a failure of our healthcare safety net. In particular, it’s yet another marker of dismal mental healthcare access and inadequate community-based substance abuse programs. If not addressed, the trend bodes ill on many fronts. For one, this cohort will age into Medicare in worse health than the current elderly. That will cost money. The reversal is already eroding productivity, the authors suggest.
They don’t pull punches in other conclusions: “Addictions are hard to treat….so those currently in midlife may be a ‘lost generation’ whose future is less bright than those who preceded them.”
That less prosperous future is, of course, also forecast for today’s urban black youth, new retirees, and even segments of the millennial generation—due to some of the same cultural, social and economic forces. Healthcare professionals, administrators and policy wonks can’t solve all the above-mentioned underlying problems but it seems to me that they (we) have a responsibility to advocate harder for solutions.
Steven Findlay is an independent journalist and editor who covers medicine and healthcare policy and technology.
Wrapping up a great week spent with emergency medicine friends attending this year’s American College of Emergency Physicians national meeting in Boston. Over the course of a few receptions and dinners, more than one old friend has stopped to ask me how I made the decision to step away from caring for patients in the emergency department and into a nonclinical role at a progressive startup healthcare company. A few friends confessed that they love the idea of getting their hands dirty fixing a broken healthcare system, but don’t know where to begin.
I have a very limited perspective and I’m no expert on career pivots. But I often look to an article I came across a few years ago, written by Whitney Johnson in the Harvard Business Review. Her article is called Disrupt Yourself.
In the piece (and later in her book) Johnson argues that people can successfully transition into satisfying roles in new businesses but often need to “disrupt” themselves and their current careers. This disruption is needed because moving to another job or field (even one adjacent to the one you’re in) is hard. I think that this is particularly true in medicine where the time and money needed to become a doctor creates incumbents, inherently resistant to change. Physicians are, by nature of our training and regulation, IBMs and Microsofts. We are slow to change. We can plateau.
If as an individual you’ve reached a plateau or you suspect you won’t be happy at the top rung of the ladder you’re climbing, you should disrupt yourself for the same reasons that companies must.
Johnson references Clayton Christensen, father of disruptive innovation, the theory that the most successful innovations create new markets and value networks. She believes that the same principles hold true for positioning yourself in the career market:
I believe that disruption can also work on a personal level, not just for entrepreneurs who launch disruptive companies but for people who work within and move between organizations. Zigzagging career paths may be common now, but the people who zigzag best don’t do it randomly.
Johnson identifies four principles for folks looking to translate their skills into a new type of work. She writes that they need to:
- Target a need that can be met more effectively.
- Identify their disruptive strengths.
Don’t think just about what you do well—think about what you do well that most others can’t. Those are your disruptive strengths.
- Step back (or sideways) in order to grow.
An individual’s well-being depends on learning and advancement. When organizations get too big, they stop exploring smaller, riskier but perhaps more lucrative markets because the resulting revenues won’t affect their bottom line enough.
- Let their strategy emerge.
Because we’re not following traditional paths, we can’t always see the end from the beginning. As John D. Rockefeller wrote, “If you want to succeed, you should strike out on new paths, rather than travel worn paths of accepted success.”
Marc is a doctor and healthcare executive living in Boston. He is a fellow of the American College of Healthcare Executives and the American College of Emergency Physicians.
The majority of health problems in modern developed countries are self-inflicted, the results of lifestyle choices. These problems don’t respond to a pill–or even to bariatric surgery. Moreover, the medical profession hasn’t found ways to change lifestyle.
For instance, one study found that only one in six overweight adults in the US has sustained a weight loss–and that was an improvement over other studies. Another site claims that 90-95% of all dieters regain their weight within five years. It’s encouraging to note an 80% improvement among people with obesity who get treatment–but the source doesn’t say what “treatment” is. It apparently goes far beyond advice and Weight Watchers–and only 10% of obese Americans get treatment in the first place.
Health problems are killing us, and bankrupting us along the way. It’s well known that a tiny percentage of patients generate the most treatment and the highest health care costs, as Atul Gawande pointed out in a famous New Yorker article.
Of course, lifestyle doesn’t lie behind all hot-spotters (for some we can blame birth defects or debilitating accidents, and for others over-intervention in dying people), but a lot of them just exhibit exaggerated versions of the common behavior problems most Americans face: bad eating, drug use, lack of exercise, etc.
A number of months ago, I met with a leading public health expert in Massachusetts. After I walked down to Arlington’s premier professional rendezvous, the Kickstand Cafe, we talked over oatmeal with nuts and fruit about behavior change, public health, and patient engagement, which I prefer to call patient empowerment–or as he put it, “patient activation” (which sounds to me like opening an account at some business).
The expert and I shared another connection besides our mutual interest in health. We are active members of the Greater Boston Interfaith Organization, a 20-year-old community organizing group that is part of the Industrial Areas Foundation founded by Saul Alinsky in 1940. So we started asking each other what a community organization could do to improve its members’ health. GBIO wasn’t the first to join the universal coverage movement, but the muscle of its 50 congregations and 10,000 members became key to passing Massachusetts’ 2006 health care act, often called “Romneycare” and the basis for the national Affordable Care Act. I personally lobbied a leading State Senate member and sat in on a hearing where Mitt Romney defended his individual mandate.
Since passage of the law, we’ve built relationships with government and industry figures and helped create the policies that made universal coverage universally popular in the state.
However, rising costs are still a problem. The state also has a long way to go to address key behavior changes in the population.
With the major features of reform in place, GBIO has been sidelined to a relatively reactive role, such as protesting a merger involving the Massachusetts mega-provider, Partners Health Care. We’d like to play a constructive role as well.
The key may be support and community–what GBIO is built on, and what sick people also need. Many clinics create support teams that do things such as send text messages to encourage healthy behavior among patients with chronic conditions, and mobile devices make patient monitoring feasible, but there are limits to the level of engagement clinic staff can create.
Other programs involve family members, whose intense relationships can make their messages powerful. But we can’t always depend on family members: they may be busy, disengaged, overwhelmed by the patient’s needs, or burned-out after years of failure to improve. They may be addicted to the same unhealthy food choices or behaviors that are making the patient worse, and perhaps even enable those behaviors. (See the movie “Fed Up” and consider the families’ roles in the cases they document.)
So is there a role for the community? There are many calls in the public health sector for community involvement, like an emergency physician’s observation that health is intimately tied to issues such as literacy, employment, transportation, crime, and poverty. One objective of an ONC six-year Federal health IT strategic plan is to “Protect and promote public health and healthy, resilient communities.” The idea of making a whole town responsible for its residents’ health makes Esther Dyson’s “Way to Wellville” intriguing, even though it’s a rather mixed bag of disparate elements.
The religious centers, labor unions, and other organizations making up GBIO represent the most important instances in the US of the “third place” described in the classic Ray Oldenburg book, The Great Good Place. People in these places step outside the roles and constraints they deal with at work and in the home. They take on new roles–and perhaps we can make those healthy roles.
One model is provided by a GBIO initiative on debt. Like most of our activities, the initiative was launched after hundreds of discussions among congregants about the problems that have the biggest impacts on our lives. Numerous political campaigns, of course, have been conducted around debt–student loans are a highly publicized example–but GBIO started with a personal program called Moving from Debt to Assets. Through courses led by local financial experts, support groups, and other contacts, the program helped 875 people extract themselves from debt and start saving money.
What could congregations do to support people whose problems are with their bodies rather than their finances? Could peer support, regular guidance, and even a generous dose of religious motivation overcome the dismal statistics for behavior change? Here are some ways community organizations and their member congregations could make a positive impact on their members’ health:
- Invite speakers to congregational events and even services to describe paths to better health, along with recent discoveries.
- Organize peer support groups. Some expert guidance may be necessary here to guarantee the privacy of what people say.
- Carry out group discussions in the classic community organizing manner to discover local health problems affecting the congregation–such as trucks idling at a construction site, or a lack of fresh vegetables and fruits in local stores–and organize for change.
- Advocate for patient access to records, the provision of coordinated care teams to patients who need them, and other improvements in provider behavior.
- Use the network of “caring committee” members to help individuals find doctors, and accompany those who need help with translation, medical terminology, or understanding care plans.
- Encourage the use of appropriate health IT tools such as educational apps and sensors, and provide training.
- Create a healthy environment within the congregation itself, such as an examination of the food served at community events.
- Draw on religious traditions and texts to provide inspirations that link health to leading a good life.
Both top-down change (regulation) and bottom-up change (patient empowerment) are key ingredients to improving health care. But something critical also lies in between–community action. Proven community organizing techniques and advocacy among institutions in the patients’ lives might make all the difference.
Andy Oram is an editor at O’Reilly Media.
What should doctors know before joining a startup? I don’t know whether medical school graduates in the Bay Area asked themselves this question as they opted to join startups rather than complete their medical training in residency programs. These new doctors felt they could make a bigger impact on patient care by leaving the system and its current status quo.
Why not? In the Bay Area, small startups and former startups like Facebook, Google, and Apple are literally blocks away from academic medical centers. Everyone knows someone working at a startup. At a healthcare innovation summit, Vinod Khosla, co-founder of Sun Microsystems and a venture capitalist, reassured technology entrepreneurs that the opportunities to disrupt healthcare were tremendous.
Khosla encouraged attendees to develop technology that would stop doctors from practicing like “voodoo doctors” and make them more like scientists. Disruption, he argued, requires an outsider point of view. Khosla highlighted how Square CEO Jack Dorsey was able to disrupt the traditional electronic payment system for accepting Visa and Mastercard, providing services more cheaply, in part because only 2 percent of the employees at Square had ever worked in the industry.
Former Executive Editor of WIRED Thomas Goetz interviewing venture capitalist Vinod Khosla
There is the lingering perception that technology can make health care cheaper, more accessible, and better without physician insights. Yet there have been few public successes so far. In an interview with Malcolm Gladwell, venture capitalist Bill Gurley seemed resigned to the fact that finding such a startup to fix healthcare will not happen.
Yet, I believe there are opportunities for startups to help. For healthcare to be disrupted, doctors and Silicon Valley need to collaborate. Each group brings valid and important points of view that the other cannot fully understand, simply because you don’t know what you don’t know. Doctors joining a startup can add tremendous value by understanding the challenges the healthcare system faces as well as the challenges and mindset of a startup. Here are five recommended books to get you started.
What is Disruption? The Innovator’s Prescription
You hear how startups will disrupt the status quo. Who came up with this? Harvard Business School Professor Clay Christensen is often credited with the concept of disruptive innovation. A disruptive innovation is a product or service that is not very good initially. It serves a market or need that is currently ignored by incumbents. Over time the disruptive innovation gets so much better that it serves larger markets and needs and overtakes the incumbent companies. By that point it is too late for the incumbents to respond. An example of such a disruptive innovation might be Apple’s iPhone, whose initially modest functionality has become so robust that incumbents making digital cameras, GPS devices, camcorders, and laptop computers are in trouble.
Christensen’s book, The Innovator’s Prescription: A Disruptive Solution for Health Care, is the best book on how disruption in health care might occur. By looking at other industries where initially products and services offered were “so complicated and expensive that only people with a lot of money can afford them, and only people with a lot of expertise can provide or use them” and how over time everyone now had access to telephones, computers, and airline travel, the book provides a framework on how that will happen in health care.
Anyone wanting to succeed in the new world of health care as predicted by this comprehensive and thoughtful analysis would be wise to add this book to their list of must reads.
Zero To One: Notes on Startups, or How to Build the Future
The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers
Written by entrepreneurs Peter Thiel and Ben Horowitz respectively, these books provide an insider’s perspective on both the promise and perils of being in a startup. Venture capitalist Peter Thiel was the co-founder and CEO of PayPal and a founder of Palantir. Thiel believes technology can solve our problems, and he stresses the importance of using the strengths of both technology and people to make an impact. Zero to One: Notes on Startups, or How to Build the Future notes there is only one moment in time when something is invented and you go from zero to one. The creation of Google was such a moment.
If Thiel’s book captures the optimism of a better future, Horowitz’s The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers details the gritty realities of life as a startup CEO. Venture capitalist Horowitz was CEO of Opsware. He pivoted the business multiple times when things looked bleak, raised capital during the worst economic crisis since the Great Depression when everyone thought he was insane, and led the company through multiple layoffs before it was successfully sold. A sobering yet incredibly important read, Horowitz shatters the allure, mystique, and promise of startups and replaces them with stark frankness: the world is competitive, startups are fragile, and the path to success is difficult.
Overtreated: Why Too Much Medicine Is Making Us Sicker and Poorer
Understanding the current healthcare status quo is essential if one is to understand the variation in medical care and outcomes. Overtreated: Why Too Much Medicine Is Making Us Sicker and Poorer is the best book to get you quickly up to speed on why the American system is the most expensive in the world yet so poor at keeping people healthy. Balanced and thoroughly researched, this book illustrates how the failings of our healthcare system are more complex than simply claiming that insurers are greedy and malpractice insurance premiums are too expensive. Learn what you are up against if you plan on disrupting healthcare.
Teaming: How Organizations Learn, Innovate, and Compete in the Knowledge Economy
Doctors don’t work well in teams. New Yorker writer, best-selling author, and surgeon Dr. Atul Gawande outlined this in Cowboys and Pit Crews. Yet it is teamwork across disciplines that matters in a startup. Here is where Harvard Business School Professor Amy Edmondson’s Teaming: How Organizations Learn, Innovate, and Compete in the Knowledge Economy is helpful. To maximize learning, teams must embrace conflict and failure. Successful teaming requires an environment where it is psychologically safe to speak up, which is not typically true in a hospital, where a strict hierarchy still pervades. Edmondson highlights how individual and organizational psychology, hierarchical status, cultural differences, and physical distance can separate team members and prevent successful teaming. Leaders can close these gaps by recognizing these obstacles and adapting their leadership style to support and facilitate teaming. She gives plenty of examples where teaming went well and not so well (the “impossible” rescue of the Chilean miners and the space shuttle Columbia tragedy). Learning thoughtfully from failures and framing them as essential for continuous improvement and innovation is key for organizations to benefit from teaming. By understanding these dynamics, you can determine whether your startup has what it needs to succeed and how to lead it.
There you have it. Five books. Five perspectives. Good luck! I can’t wait to hear what you come up with!
Davis Liu, M.D., is a practicing board-certified family physician who has been with the Permanente Medical Group in Northern California since 2000.