When Marek Kowalkiewicz stood before a crowd of business leaders and technologists, he didn’t begin by warning them of machine overlords. Instead, he described his robot vacuum.
“It gets stuck in the simplest places,” the professor deadpanned. “They’re not overlords. They’re minions.”
This simple metaphor — humorous but profound — now underpins Kowalkiewicz's book, The Economy of Algorithms: AI and the Rise of the Digital Minions. It's an exploration of how algorithms, once confined to back-office spreadsheets and academic theory, have taken center stage in shaping how decisions are made, businesses operate, and societies function. These so-called AI minions aren't just tools — they're collaborators. And as artificial intelligence begins to seep into every corner of the economy, our relationship with them is becoming both more intimate and more complex.
Kowalkiewicz, Chair in Digital Economy at Queensland University of Technology and a longtime advisor on digital transformation, believes these algorithms deserve our attention not because they are threatening, but because they are powerful — and deeply flawed. They follow instructions perfectly. But they can’t reason. They can’t interpret context. And without human oversight, they can wreak havoc.
“They’re like the yellow creatures from the movies,” he said. “They’re helpful, energetic, but if you leave them unsupervised — chaos.”
AI Minions Behind the Scenes
Automation is not new. Algorithms have powered business logic and industrial control systems for decades. What’s different now is accessibility and scale. A teenager can launch a fleet of bots to buy limited-edition sneakers and flip them online, profiting in minutes. An intern can automate content creation and be accused of cheating — even when the work is original. A solo entrepreneur can build an entire back-office operation with off-the-shelf AI tools and not write a single line of code.
These minions now operate at the edge of organizations. They are embedded in scheduling systems, compliance checkers, customer service scripts, and marketing funnels. Many of them work invisibly, flagged only when they fail — or succeed too well.
Kowalkiewicz highlights a cautionary tale from 2011, when an obscure biology book on Amazon was listed for $23.7 million. The price wasn't a typo. Two sellers had pointed pricing bots at each other: one repeatedly undercut its rival by a fraction of a percent, while the other repriced at a steep markup above the first. Because the combined multiplier exceeded one, every repricing cycle pushed both listings higher, until a paperback that no one intended to buy became the site of a digital arms race between algorithms.
Today, similar automations operate across ecommerce, stock trading, and logistics — often with little human awareness of their interactions. The AI minions are not malevolent. They’re simply doing what they were told.
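The mechanics of that 2011 spiral are simple enough to simulate. According to contemporaneous accounts, one bot priced its copy a sliver below its rival's, while the other priced well above (betting buyers would pay extra for its better seller ratings). A minimal sketch of the loop, using the two multipliers reported at the time; the starting price here is an assumption:

```python
# Toy model of the 2011 Amazon pricing spiral: two bots reprice
# against each other, and the combined multiplier
# (0.9983 * 1.270589, about 1.268) compounds the price without bound.

UNDERCUT = 0.9983     # bot A lists just below bot B's price
MARKUP = 1.270589     # bot B lists above bot A's, betting on its ratings

price_a = price_b = 35.54  # assumed starting price for a used textbook
cycles = 0

while price_b < 23_698_655.93:  # the figure the real listing reached
    price_a = UNDERCUT * price_b   # bot A reprices against bot B
    price_b = MARKUP * price_a     # bot B reprices against bot A
    cycles += 1

print(f"${price_b:,.2f} after {cycles} repricing cycles")
# Crosses the $23.7 million mark in under 60 cycles, no human in the loop.
```

Neither bot is buggy. Each executes its one rule flawlessly; the runaway price is a property of their interaction — which is exactly why unsupervised minions need watching.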
Designing for Accountability
That’s where structured systems — and the human beings who design and monitor them — matter most. Platforms like Way We Do offer a counterbalance to the chaos. The system enables teams to document, assign, and track operational processes with clarity and control. More than just policy documentation, Way We Do embeds those processes into everyday workflows, assigning roles to steps and guiding users through real-time execution.
As companies begin layering AI tools into these workflows — from writing assistants to data analyzers — platforms like this provide a framework to maintain oversight. AI agents can follow steps, complete tasks, and pass results to humans for review. It’s a digital apprenticeship of sorts, where AI learns to contribute without overrunning the system.
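Way We Do's internals aren't published here, so take the following as an illustration of the pattern rather than the product: a workflow step where AI-produced output cannot move downstream until a named human signs off. All names and structures are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class StepResult:
    """Output of one workflow step, awaiting sign-off."""
    step: str
    produced_by: str             # "ai" or a person's name
    output: str
    approved: bool = False
    reviewer: str | None = None  # requires Python 3.10+

def review_gate(result: StepResult, reviewer: str) -> StepResult:
    """AI output is held for a named human's approval; human work passes through."""
    if result.produced_by == "ai":
        print(f"[{result.step}] AI draft awaiting review by {reviewer}:")
        print(result.output)
        result.approved = input("Approve? (y/n) ").strip().lower() == "y"
        result.reviewer = reviewer
    else:
        result.approved = True
    return result

# Hypothetical usage: an AI assistant drafts a proposal, and a named
# team member remains accountable for what actually ships.
draft = StepResult(step="client-proposal", produced_by="ai",
                   output="Dear client, ...")
review_gate(draft, reviewer="j.smith")
```

The design choice matters more than the code: recording who approved what turns the accountability question from a philosophical one into a lookup.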
Kowalkiewicz encourages this model. He urges businesses to think of their digital tools as evolving employees. “Start them as interns,” he says. “Test them. Then promote them when they prove their value.”
This approach reflects the evolving management challenge of automation. It's not just about finding efficiencies — it's about assigning responsibility. If an AI agent makes a costly mistake, who is accountable? The answer depends largely on how thoughtfully the system was built.
Automation Without Consent
Of course, not all automation is visible or authorized. So-called “shadow AI” — where employees secretly deploy their own automations — is on the rise. Sometimes it’s harmless: a macro that formats reports or scrapes emails. Sometimes it’s disruptive: a bot that writes proposals, responds to customers, or handles compliance checks — without any oversight or documentation.
Kowalkiewicz recalls a Reddit user who automated their entire job, spending six years playing video games at their desk while bots quietly filed reports and answered emails. When they were finally discovered, they were fired. Not for incompetence — but for efficiency.
“Firing the only person in the company who knew how to automate the work was probably the worst management decision they could’ve made,” he said. “They missed the opportunity to learn.”
Systems like Way We Do, when implemented effectively, surface this kind of hidden innovation. Employees can bring their automations into the light — documenting steps, flagging automation opportunities, and assigning approvals. Governance, not punishment, becomes the response.
The Specter of Phantom AI
While hidden AI is a concern, what's more troubling is phantom AI — false positives in which no automation exists, yet people are accused of cheating. Kowalkiewicz shared examples of students and interns flagged by AI detection tools despite having written their work themselves. One even typed the 1901 Australian Constitution into a detection tool — it came back as AI-generated.
“That’s not just a technical error,” he says. “It’s a failure of trust.”
Educators, managers, and institutions must now grapple with a paradox: AI is both a tool and a threat. When detection systems are flawed, they undermine the very integrity they seek to protect.
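One plausible reason detectors misfire on a text like the 1901 Constitution: many of them score how statistically predictable a passage is, and formal, formulaic prose is highly predictable no matter who wrote it. The sketch below is a deliberately crude caricature of that failure mode; the word list and threshold are invented, not taken from any real detector.

```python
import re

# Words typical of formal, formulaic prose. A real detector uses a
# language model's probabilities; this toy uses a crude proxy.
COMMON = {"the", "of", "and", "to", "in", "be", "shall", "a",
          "by", "for", "or", "any", "such", "may", "as", "is"}

def predictability(text: str) -> float:
    """Fraction of words drawn from the high-frequency set."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in COMMON for w in words) / max(len(words), 1)

def naive_detector(text: str, threshold: float = 0.40) -> str:
    score = predictability(text)
    verdict = "flagged as AI-generated" if score >= threshold else "passes as human"
    return f"score={score:.2f}: {verdict}"

constitution = ("The Parliament shall, subject to this Constitution, have "
                "power to make laws for the peace, order, and good "
                "government of the Commonwealth")
casual = "Honestly the vacuum got wedged under the couch again, total chaos."

print(naive_detector(constitution))  # formulaic 1901 prose trips the detector
print(naive_detector(casual))        # informal prose sails through
```

The toy is unfair to real detectors, but the failure class is the same: the score measures a property of the text, not the provenance of the text.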
Pixels and Empathy
Kowalkiewicz is not a pessimist. His book closes not with warnings, but with principles for building a better digital future. He advocates for building ecosystems of AI agents — but under human stewardship. He envisions AI that enhances diversity, unlocks new value propositions, and operates with empathy baked into its code.
He cites research showing that diverse datasets — even when imperfect — produce AI models that outperform the data they were trained on. In one case, chess models trained on beginner-level games became stronger players than any in their data, simply because of the diversity in play style. Homogeneity, by contrast, stifled growth.
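One common explanation for results like these is majority-vote denoising: weak experts who err in different ways cancel one another out, while experts who share a blind spot only reinforce it. A toy simulation of that idea (every number here is invented for illustration):

```python
import random
random.seed(0)

N_POSITIONS = 10_000
N_EXPERTS = 25
P_CORRECT = 0.6          # each weak expert finds the best move 60% of the time
MOVES = list(range(5))   # five candidate moves; move 0 is "best"

def diverse_experts() -> float:
    """Errors are uncorrelated: wrong votes scatter across the other moves."""
    wins = 0
    for _ in range(N_POSITIONS):
        votes = [0 if random.random() < P_CORRECT else random.choice(MOVES[1:])
                 for _ in range(N_EXPERTS)]
        wins += max(set(votes), key=votes.count) == 0
    return wins / N_POSITIONS

def identical_experts() -> float:
    """Errors are perfectly correlated: one shared blind spot, one shared vote."""
    wins = 0
    for _ in range(N_POSITIONS):
        vote = 0 if random.random() < P_CORRECT else random.choice(MOVES[1:])
        wins += vote == 0
    return wins / N_POSITIONS

print(f"single weak expert:    {P_CORRECT:.0%}")
print(f"25 diverse experts:    {diverse_experts():.0%}")
print(f"25 identical experts:  {identical_experts():.0%}")
# Diversity pushes the ensemble near 100%; homogeneity stays stuck at 60%.
```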
The implication is clear: to build intelligent systems, organizations must cultivate intelligent teams. Not just data scientists, but domain experts, process managers, and frontline workers must contribute to how automation is designed, deployed, and governed.
A Bold Future, Made by Choice
One of the most moving examples Kowalkiewicz shares isn’t about code at all. It’s about a café in Japan, where robots deliver drinks to customers. But these aren’t autonomous bots. They are avatars, controlled remotely by people with disabilities who cannot leave their homes. Through these devices, they work. They socialize. They reclaim agency.
This, he argues, is the future worth building. Not a world of cold efficiency, but one where automation expands human potential. Not a surrender to AI, but a collaboration with it.
The challenge, he reminds us, is not whether we use AI. It’s how we choose to do so.
Do we design systems that empower people — or displace them? Do we accept AI minions as tools to enhance our work — or as shortcuts to avoid it? Do we build governance, diversity, and purpose into our systems — or let the bots run wild?
Kowalkiewicz ends his talk with a gentle revision of a familiar phrase. “People over pixels” is noble, he says. But maybe the future calls for something more.
“Let’s instill empathy in algorithms,” he says. “Let’s plant our hearts in hardware.”
And maybe, just maybe, let’s learn to lead — not by fearing the minions, but by teaching them well.