BUILDING A HEALTHIER PITTSBURGH: Rethinking the way we pay for care
How did America end up with this health care system?
April 27, 2014 12:15 AM
Archives Service Center, University of Pittsburgh
Inoculations and new types of antibiotics represented major advancements in public health. Here, children are in line to receive diphtheria inoculations from Allegheny County's Division of Health in 1930.
Detre Library and Archives, Sen. John Heinz History Center
A company infirmary at H.J. Heinz. According to the Heinz History Center, "fully equipped infirmary rooms were maintained in the company 'hospital' at the H.J. Heinz Company main plant on the North Side section of Pittsburgh." Company infirmaries were a forerunner to employer sponsored health care.
Associated Press, file
This Aug. 14, 1935 photo shows President Franklin D. Roosevelt signing the Social Security Bill in Washington.
By Bill Toland / Pittsburgh Post-Gazette
Pennsylvania in the late 1800s and early 1900s was an unsafe place to work, with a quarter-million recorded industrial calamities a year. So dangerous were the trades, and so gruesome were the accidents, that the chronicling of injuries suffered by workers became its own muckraking genre. A short-lived publication of the International Association of Factory Inspectors got some of its best stories from the steel industry: In one nightmare narrative, an explosion at a Butler County steel mill forced “streams of hot metal [down] on the workmen, engulfing and literally cooking some of them.” Some accounts weren’t as spectacular, but merely ghastly — arms jerked from sockets, regular decapitations, and sawmill accidents with all of the attendant gore and sinew you would expect.
In 1907, writer William B. Hard turned his attention to U.S. Steel Corp. in an article called “Making Steel and Killing Men.” Steel is war, he said, and the workers were losing: Men were being “roasted alive by molten slag.” In a given year, according to his own estimate (gleaned from a U.S. Steel plant in Chicago), more than 10 percent of the steel workforce was injured or killed on the job.
In the same year, the Russell Sage Foundation began its massive survey of working conditions in Pittsburgh. That survey spawned a 1910 book that found that among those who were lucky enough to survive their workplace accidents, “53 percent received $100 or less from the employer. ... In over one-half of the deaths and injuries [employers] assumed absolutely no share of the inevitable income loss,” the book, written by Crystal Eastman, a lawyer and co-founder of the American Civil Liberties Union, said.
“A special cloud always threatens the home of the worker in dangerous trades,” Eastman wrote. “It is not just that those whose lot falls in this part of the work should endure not only all the physical torture that comes with injury, but also almost the entire economic loss which inevitably follows it.”
Like much else about our nation, employer-based health insurance has roots in our industrial past. In the decades after the Civil War, those who worked in the most dangerous jobs — mining, steel, railroads, riverboats, lumber — had access to company doctors, on the company’s tab, often in “industrial clinics” or in union-operated infirmaries. As insurers grew more sophisticated, they began selling “accident” policies that included disability, death and burial benefits to employers. The policies didn’t resemble the health coverage we know today, but the precedent of businesses having a stake in the well-being of their employees was established.
As unions grew more powerful in the late 1800s, they began taking out their own sickness protections, having realized that “employed persons needed economic protection against the unforeseeable losses” created by illness and accidents, according to 20th-century health economist J.F. Follmann. Tradesmen and factory workers were the most likely to see such coverage. Fraternal organizations and “mutual protection societies” also offered limited coverage for ill members.
“Industrial sickness funds [were] concentrated in manufacturing,” said John E. Murray, professor of political economy at Tennessee’s Rhodes College and author of “Origins of American Health Insurance.” “They had no kind of actuarial sense, [yet] they were operated pretty carefully, kind of by trial and error.”
Modern group insurance can be traced to 1910, when mail-order retailer Montgomery Ward solicited what is now considered the nation’s first multi-employee health insurance policy, through a plan issued by the London Guarantee and Accident Co. of New York. It paid total annual benefits of up to $28.85 to ill or injured employees.
Still, “it was essentially insurance that covered disability,” said Melissa A. Thomasson, economics professor and health care expert at Ohio’s Miami University. “At the time, the expenses associated with medical care were low — wage loss was a much bigger” financial hardship, by a 4-to-1 ratio, when someone missed time at work.
The country was years away from a health plan that would directly pay for the actual hospitalization and medical care of workers. That no such plan existed was partly because there was no cohesive health care “system” to speak of in the early 1900s, and most health care — even primitive surgery — often still happened in the home, not a clinical setting. Of the care that was available, mostly for infectious diseases and traumatic injuries, much of it was unscientific by today’s standards. With some exceptions, hospitals were mental wards and homes for the indigent, operated by nurses and nuns, treating only specific ethnic or religious groups.
Even so, the science of medical care was progressing rapidly, and debate over who ought to pay for such care — and whether it was a right, or a privilege — was brewing. Workplace reformers such as I.M. Rubinow — a doctor, the head of the American Association for Labor Legislation, a Russian émigré and a socialist — wanted “compulsory” sickness coverage that would pay for medical costs and disease prevention for all workers, modeled after similar plans taking hold in Europe. Economist Irving Fisher believed health insurance was necessary “to tide workers over the grave emergencies [and to] reduce illness itself.”
Such a system would “relieve poverty caused by sickness by distributing individual wage losses and medical costs through insurance.” (It’s a concern that, in America, has never fully been relieved — even today, unpaid medical bills are the No. 1 cause of U.S. bankruptcies, outpacing credit-card bills or late mortgage payments.) The argument that a health insurance system would yield a net savings for society “was meant to appeal to business,” wrote Paul Starr in the American Journal of Public Health.
“The model was what was happening in Europe, starting with [Prussian Chancellor Otto von] Bismarck in Germany,” said Theodore M. Brown, professor of history and medical humanities at the University of Rochester. But “the situation suddenly turned very, very negative.”
Businesses thought that compulsory health insurance would be too costly and would amount to a raise for every worker in America, and yet for all that expense — and in an era when preventive medicine was more about public health than individual patients — insurance “would not materially reduce the amount of sickness.” Politicians said a system of universal medical coverage would be identical to “German socialist insurance,” a grave insult in the late 1910s, now that America was at war with Kaiser Wilhelm II.
Insurance companies and physicians weren’t on board, either. Doctors worried then, as now, that health insurers would have too much control over prices and practice methods; insurers worried that a system of “compulsory” health insurance would interfere with their lucrative life insurance business.
“That [proposed] legislation offended virtually every interest there was,” Ms. Thomasson said.
In California, a 1918 ballot measure to create a statewide insurance program failed badly. By 1920, the concept of national, compulsory insurance was dead in Washington, D.C., and in most state capitals.
Then came the Depression.
Rise of the Blues
It’s as true in 2014 as it was in 1929 — when people don’t have a job, they can’t pay for medical care.
President Franklin D. Roosevelt sought to change that, but his plan (later championed by President Harry S. Truman) to build compulsory health care into the Social Security Act failed. Meanwhile, newly built hospitals were struggling — more than 100 went under, the rest were operating well below capacity, and those patients who showed up weren’t paying their bills. With hospitals desperate for new revenue streams, a group of 1,500 Dallas-area teachers offered to prepay premiums to the Baylor Hospital in exchange for up to 21 days of future care, and the forerunner to Blue Cross was born.
Soon plans would involve multiple sets of employees, covering multiple hospitals. By 1935, 19 prototype Blue Cross plans existed in 13 states. By 1939, prepayment plans were being created for physicians, too — forerunners to the modern Blue Shield. In the West, dam workers for the Kaiser Construction Co. were among the first to have voluntary premiums deducted from their paychecks. Those premiums then were steered to an insurer, which then sent money to an on-site Kaiser doctor who treated those who were injured while working on the dams. In short, everyone — insurers as well as doctors — was paid in advance, and the program was replicated at construction sites up and down the West Coast.
Our modern system of insurance was taking form.
“The Depression is really big,” Ms. Thomasson said.
Also big, she said, was the development and wider distribution of two drug categories, sulfa antimicrobials (a precursor to today’s antibiotics) and, a decade later, penicillin. In the late 1930s, sulfas were used to treat a variety of bacterial maladies; infections that would have been deadly in the 1920s were easily curable just two decades later.
“There’s suddenly a real, perceived need for health care and health insurance,” Ms. Thomasson said.
If the Depression motivated hospitals to think about new prepayment models, World War II motivated employers and employee unions to get creative about health benefits. In 1940, less than 10 percent of the U.S. population (12 million people) had any kind of health coverage. By 1950, about half of America was covered.
What happened? Something called the 1942 Stabilization Act, a work of Congress designed to limit wage increases during wartime. The point of it was to combat inflation, which, in the words of the act itself, “threaten[s] our military effort and our domestic economic structure.” The effect, though, was that employers — needing to recruit workers at a time when many able-bodied men were overseas — began offering more generous health benefits.
Another side effect of the Stabilization Act was that health premiums deducted by employers — while still considered part of compensation for the purposes of labor negotiations — didn’t count as income, and, as a result, workers didn’t pay income or payroll taxes on those benefits. The result was an incentive for the employer, rather than the employee, to make health insurance arrangements, and the era of third-party health insurance was fully underway. Insurers began adding new types of coverage — “major medical” evolved in the 1950s, vision care in 1957 and dental benefits in 1959.
The post-World War II era saw “rapid market penetration of [employer] health insurance,” Mr. Brown said. “It was very good for workers. And very good for their families.”
It became the “cornerstone” of our system of health care provision, “as vital to [our health] as the drugs, devices and medical services that the insurance covers,” according to the New England Journal of Medicine.
But tying health care to employment naturally left out two vulnerable groups — those who were unable to work or who worked in low-paying jobs without health benefits, and those who were beyond working age. Seniors, in particular, were being priced out of the market, as greater health plan penetration meant more health care was being consumed, resulting in health cost inflation.
In the late 1950s, “suddenly, retirees couldn’t afford health care,” Mr. Brown said.
President John F. Kennedy advocated “Medical Care for the Aged,” a hospital insurance plan for seniors. After Kennedy’s assassination, President Lyndon B. Johnson, with Democratic majorities in the U.S. House and Senate, created the Medicare and Medicaid systems in 1965. (Today, the two programs insure more than 105 million Americans at any given time, about a third of the U.S. population; Medicare is administered by the federal government, and Medicaid jointly by the federal and state governments. That enrollment will increase this year as millions of Americans become newly eligible for Medicaid as a result of the 2010 Affordable Care Act.)
With so many Americans enrolled in government-paid health insurance programs, advocates for “compulsory” insurance — rebranded as “universal” or “single-payer” health care — were encouraged by the era’s progressive reforms. And progressives were joined in that reformist spirit by none other than President Richard M. Nixon.
Fits and starts
Corporations and, to a lesser extent, unions were agitating for federal leadership on health care costs by the early 1970s, as medical care was eating up larger and larger portions of company budgets and was also chipping away at employee wage increases — most contract gains were now coming in the way of fringe benefits, rather than salary.
In 1971, Sen. Edward M. Kennedy proposed a single-payer plan that would have eventually expanded nationalized health care to every American. Nixon had his own plan, one that would have mandated coverage by private employers, but over time that plan moved to the left, and by 1974, when the president again addressed Congress on the issue, he was offering a plan that “would offer to every American [broad] and balanced health protection.” He also sought to confront insurance denials based on pre-existing health conditions, telling Congress that “there would be no exclusions of coverage based on the nature of the illness … a person with heart disease would qualify for benefits as would a person with kidney disease.”
Compromise was “in the air” that spring, Kennedy said.
“By the mid-1970s, it was assumed that national health insurance was coming,” Mr. Brown said. “It was only a question of what form it would take.”
The compromise spirit quickly evaporated. By the summer, Watergate had fully ensnared the president, and by August, Nixon had resigned. His successor, Gerald Ford, backed away from Nixon’s health insurance plan, and while President Jimmy Carter made a lukewarm pitch for a nationalized insurance system, the recession of the 1970s put health overhaul on the back burner, where it would stay until President Bill Clinton’s 1992 election.
“Millions of Americans are just a pink slip away from losing their health insurance, and one serious illness away from losing all their savings,” Mr. Clinton said in 1993. “And on any given day, over 37 million Americans — most of them working people and their little children — have no health insurance at all. And in spite of all this, our medical bills are growing at over twice the rate of inflation.” His plan — promoted by his wife, Hillary Rodham Clinton — would have required virtually all Americans to enroll in a health policy, to be managed by regional purchasing cooperatives.
“Too grand a scale,” Ms. Thomasson said. “People feared change” of that degree.
Mr. Clinton’s plan died in Congress the next year, meaning it would again be up to the private sector to control its own costs. At the time he gave his speech, health costs had been rising rapidly for more than two decades, outpacing inflation and the overall growth rate in the economy. As a result of the increasing expense, employers were methodically dropping coverage: In 1979, about 97 percent of employees at large and medium-sized companies were offered coverage. By 1991, that figure was at 83 percent.
The insurance industry responded with a resurrected product designed to control costs — the health maintenance organization, or the HMO. Though “managed care” plans had been around since the 1930s (the early prepaid health plans could be viewed as a precursor to the HMO), their modern enrollment grew dramatically after the HMO Act of 1973, another Nixon-era health overhaul measure.
That act gave loans to insurers that wanted to start or expand an HMO and, more importantly, required many large employers to offer an HMO option alongside traditional major medical plans. By the mid-1990s, 50 million Americans were enrolled in employer-sponsored HMOs, and that enrollment penetration allowed insurers to exert greater control over how and when Americans received medical care. The strictest of the HMOs managed care through “gatekeeper” physicians, limited doctor networks and aggressive utilization review. The cost controls worked — in the latter half of the 1990s, health spending inflation slowed significantly.
Problem is, patients didn’t care for having their health decisions managed so tightly.
“There was a backlash against them,” Mr. Brown said. “There were all sorts of horror stories” about cost-containment and denial of care.
Lawmakers answered with “patients’ bills of rights,” and employers and insurers responded with more liberal HMOs and preferred-provider organizations, so-called open-access plans that allowed patients to seek care from specialists without a referral from a primary care physician. But a loosening of restrictions was, in effect, a loosening of purse strings, and U.S. health costs continued to grow as a share of gross domestic product, a figure that now stands at 17.2 percent.
A major Medicare expansion under President George W. Bush — the popular “Part D” plan that gives prescription drug subsidies to Americans 65 and older — went into effect in 2006, and added about $70 billion to the annual U.S. health care outlay.
Change on the horizon?
For all the decades’ worth of talk of overhauling health care, our national system of paying for insurance coverage isn’t much different from the one we had in place 50 years ago — Americans get coverage and prescriptions through a mix of government and military plans, employer-sponsored HMOs and PPOs, and individually purchased plans. For tens of millions of Americans at a time, there’s still no health insurance at all. Health care accounts for more than 1 of every 6 dollars we spend in this country.
That share, however, has dropped slightly, from 17.3 percent of GDP in 2011 to 17.2 percent in 2012. Overall health care spending has slowed, too, growing at just 3.7 percent in 2012, to $2.8 trillion. Since 2009, the rate of increase in health care has hovered between 3.6 percent and 3.8 percent, according to the federal Centers for Medicare & Medicaid Services, and spending on prescription drugs dropped by 1 percent from 2011 to 2012, the first drop in pharmacy spending in at least five decades.
Much of that flattening is still tied up in the lingering effects of the Great Recession. But some of it may be linked to new ways of thinking about how we pay for care: Insurers and the federal government are pushing hospitals to provide quality care, not quantity of care. Medical practices will pay greater attention to chronic conditions, and care will be better coordinated among practices and specialists (the jury is still out on whether “patient-centered medical homes” actually reduce costs or improve care outcomes). Electronic medical records and health information exchanges have the potential to make health care more efficient. Higher deductibles and copays mean people will have to think twice about how they spend their money, turning patients into comparison shoppers and giving rise to the “consumer-directed health plan.” (Some of these issues will be explored in this special section.)
The payer mix seems destined to change over the next decade, too, thanks to the 2010 Affordable Care Act and retiring baby boomers. A larger percentage of Americans will see their care arranged through government programs, with boomers entering Medicare en masse and millions of low-income workers now eligible for newly expanded Medicaid programs.
And as more people sign up for individual plans through HealthCare.gov and state-operated health insurance shopping exchanges, incrementally, health coverage will become further divorced from employment, a slow unraveling of the job-based insurance system that has developed over the last century. Those workers who keep their coverage may be forced to shop for it on local business exchanges, aided with a stipend from their employer — that is, a defined health care contribution, rather than a defined benefit.
Some of these new cost-sharing concepts “are old ideas that get recycled and buffed up” once a generation, Mr. Brown said. Others may have more lasting impact. If the program promoted by President Barack Obama works as advertised, the government will pay for more and more care, but the levers of control are increasingly in the hands of private companies, since many Medicare plans and almost all Medicaid HMOs are administered by commercial insurance companies.
Still, at best, these are incremental changes — something America’s political system is quite good at, and something its voters don’t seem to mind, because many of them remain happy with their health coverage and their health care.
We’re not very good at implementing “one grand design,” Ms. Thomasson said. “Health insurance in the U.S. developed in a piecemeal approach, as different interest groups addressed their specific needs at different points. … If you look at the history of [U.S. health coverage], it’s the politics of a little bit more at a time.”