Pittsburgh boasts a small tech sector and growing AI industry, epitomized by a mile stretch of road near Bakery Square dubbed “AI Avenue” that is home to several tech giants, including Google, Duolingo and the Carnegie Mellon University Cloud Lab.

Michael Chou: AI could be safe, but only if we set the rules now

Evan Robinson-Johnson/Post-Gazette

Unmanaged automation carries serious ethical risks. In healthcare, for example, a widely used algorithm systematically underestimated the needs of Black patients because it used cost as a flawed proxy for severity of need. In hiring, an AI-driven screening tool penalized women based on criteria learned from a historically male-dominated applicant pool.

These examples reveal a scary truth: automation, when designed without care, can exacerbate the very problems it's trying to solve. As automation, and particularly artificial intelligence, seeps into every aspect of our lives, the question we should be asking isn't what technology can do, but what it should do.

Scary and needed

But it's not all bad. Automation's ubiquity is a testament to its power. Research shows that over 90% of workers using automation report increased productivity, and 85% say these tools help them collaborate more effectively. Over 80% say automation is a powerful tool to combat burnout, allowing them to focus on important work and deepen customer relationships rather than waste energy on repetitive tasks.

That’s why automation is here to stay. But the very same qualities that make it so powerful — speed, efficiency, scale — also amplify the consequences when things go wrong. Finding the balance between progress and responsibility is not just an ethical requirement but a practical one.

Think about how datasets, the foundation of most automation systems, can encode historical biases. Algorithms built on bad data will amplify those biases on a massive scale, turning good tools into systems of exclusion.

To prevent this, safeguards for clean data, and visibility into how an AI model is trained on that data, must be baked into an automated system from the start. Transparency isn't a feature. It's the foundation of trust. Without clear reasoning behind its outcomes, automation will erode user confidence and undermine itself.

Irreplaceable humans

The trades provide a great example of this dynamic. These professions — plumbing, electrical work, HVAC, etc. — aren’t resistant to change because they’re stubborn, but because they value human expertise that automation can’t replicate.

Trust is at the heart of these fields, and when automation is opaque, that trust is easily broken. A technician assigned to a complex job by an unproven system may wonder if the decision was based on logic, bias or error. Without visibility into how decisions are made, even the most advanced tools will alienate their users.

This tension between trust and efficiency isn’t limited to the trades. Across industries, the systems that succeed are the ones that handle the repetitive, boring work while leaving room for creativity, judgment and human intuition.

Automation should be a partner, not a replacement, freeing people to work on things that require uniquely human skills. At its best, automation doesn’t diminish us; it enhances us.

But that requires intentionality. Automation isn’t ideologically or ethically neutral. It reflects the priorities and values of its creators. Transparency builds trust incrementally, helping users feel comfortable with automation as a partner rather than a replacement. Systems that address biases and prioritize user understanding empower people rather than displacing them.

Fairness and transparency

The challenges are big. But the opportunities are even bigger. Automation can free people from tedious work so they can focus on work that requires creativity and care. It can surface patterns and insights humans would otherwise miss, helping us make better decisions. But that can only happen if fairness, transparency and empowerment are baked in.

The future of automation is being written now. Its ethical challenges are not theoretical. They’re in the algorithms that decide who gets hired, who gets promoted and whose work is valued. As leaders and technologists we have a duty to make sure automation serves humanity, not the other way around.

Progress doesn't mean discarding tradition or replacing human expertise. It means building on both, so that technology serves not just efficiency but humanity itself.

Michael Chou is the Chief Product Officer for BuildOps.

First Published: May 9, 2025, 8:30 a.m.
