Technological advances and other great leaps in knowledge have always inspired both hope and dread. The Greek myth of Prometheus illustrates this.

Prometheus defied the gods of Olympus by stealing fire from them and giving it to humanity. This allowed mere men and women to light the dark, heat their homes and forge tools and weapons. But it also gave humans the capacity to burn down everything around them.

The moral of the tale is clear: Be careful with fire—that which gives us the power to create can also destroy.

Artificial intelligence is the fire in today’s version of that story. Or at least it could be without proper safeguards that put our human conscience at the center.

AI’s creative capacity is immense. It can apply centuries of learning within fractions of a second, enabling huge surges in knowledge that used to require years of human study and effort.

To use but one example, ancient, arcane texts and artifacts have begun to yield their secrets under the swift and relentless probing of AI. The Promethean power of these systems is unlocking languages and dialects that have been unspoken for millennia.

I’ve been thinking about the role of AI in teaching and learning, and more broadly how it might impact the intersection between learning and work. But AI is not new. In fact, the term artificial intelligence was coined in 1956 by mathematics professor John McCarthy for the first academic conference on AI.

Since those early days we’ve seen several different modes of artificial intelligence emerge, including:

  • Sensing AI, including sensors in industrial settings.
  • Communications tools such as Siri and Alexa.
  • Movement-oriented AI, including the Boston Dynamics robots.
  • Deep learning, a subset of machine learning in which layers of neural networks perform some analytical tasks without human intervention.

It’s no mystery why many people are troubled by the rapid growth of this technology even as it continues to evolve. There are serious questions around ethics, including privacy and the intellectual property of those whose works have been used to create the large language models in such wide use today. Even darker concerns have been raised about the prospect of superintelligent systems that could evolve without human control.

In other words, Prometheus still has more to teach us about the true cost of technology.

Examples abound. According to a recent story in The Washington Post, Amazon’s voice assistant Alexa has reported as fact the false claim that the 2020 election was stolen.

Alexa dropped the claim after somebody alerted Amazon, but similar misinformation is still being shared. That’s because AI’s consumption of data, like fire’s scorching of earth, is relentless, even rapacious—devoid of moral reasoning. It seeks out information both good and bad. If we humans feed it a lie, AI will use that falsehood as fuel for greater distortions.

And like fire, once it is unleashed in the world, AI goes where it will, so we must be cautious.

Anu Bradford, author of the recent book “Digital Empires: The Global Battle to Regulate Technology,” illustrates how nations across the globe are considering legal frameworks.

China’s focus has been to strengthen state control, limiting innovation and making government the gatekeeper for both information and disinformation.

The European Union’s tack has been more nuanced and human-centered, focusing on preserving the rights of those who interact with AI. This keeps the focal point where it should be—on people and their lives.

But the law can only do so much. As novelist Robert Penn Warren pointed out in “All the King’s Men,” a classic study of unrestrained power, the law is like a twin-sized blanket on a double bed. No matter how it’s stretched or twisted, it always leaves something uncovered.

In the United States, our approach has been to encourage innovation. But this can allow misinformation and other destructive forces to roam unfettered across the digital landscape.

For artificial intelligence, we need more than just a legal framework; we need an individual rights and responsibilities framework. That ethical frame should include our approach to higher education, one that both engages with AI’s ethical implications and helps prepare people for the work that only humans can do.

AI, like fire, is a force of great power. But it does not possess a conscience.

That’s our uniquely human job—now more than ever.


This article was originally published in Forbes.