AI: We Don't Know What It's For Yet
In 1876, Western Union had the opportunity to purchase Alexander Graham Bell's telephone patent for $100,000. They passed, dismissing it as a "toy" with no commercial value. The company was in the telegraph business, after all—the nervous system of commerce. What use could anyone possibly have for this curiosity that let two people talk to each other over a wire?
That decision became one of the most spectacular miscalculations in business history. Not because Western Union was stupid, but because they—like the rest of us when facing transformative technology—couldn't see past the obvious to imagine the inevitable.
We're in that moment again with AI. And just like with the telephone, the internet, and every other world-changing technology before it, we have absolutely no idea what AI is actually for.
The Pattern We Never Learn
History has a sense of humor about our inability to predict the future of transformative technologies. Pick any innovation that fundamentally changed how we live and work, and you'll find a trail of spectacularly wrong predictions about what it would become.
Thomas Edison thought electricity's killer app was lighting. In 1880, he famously promised to "make electricity so cheap that only the rich will burn candles." He was right about the cheap part. But lighting turned out to be just the opening act. Electricity didn't just replace candles—it became the invisible infrastructure powering everything from refrigeration to computation, communication to entertainment. Edison was focused on the immediate use case. He couldn't see that electricity would reshape civilization.
Radio faced similar skepticism. When David Sarnoff proposed investing in radio in the 1920s, his associates shot him down: "The wireless music box has no imaginable commercial value. Who would pay for a message sent to nobody in particular?" They were thinking about radio as wireless telegraphy—point-to-point communication. The idea that millions of people would tune in simultaneously to hear the same content? Unimaginable. Broadcasting as a concept didn't exist yet, so they literally couldn't conceive of radio's actual purpose.
Television in 1946 was dismissed by movie producer Darryl Zanuck with perfect confidence: "Television won't be able to hold on to any market it captures after the first six months. People will soon get tired of staring at a plywood box every night." The New York Times had been just as dismissive years earlier: "TV will never be a serious competitor for radio because people must sit and keep their eyes glued on a screen; the average American family hasn't time for it."
They were imagining television as "radio with pictures"—a novelty that would wear thin. They couldn't foresee that television would become the primary medium for news, entertainment, education, and culture for the next 70 years. The future doesn't announce itself clearly. It arrives disguised as something familiar.
Personal computers hit the same wall. In 1977, Ken Olsen, founder and president of Digital Equipment Corporation, told the World Future Society, "There is no reason for any individual to have a computer in their home." Now, this quote gets taken out of context—Olsen later explained he was talking about centralized home automation systems, not personal computing. But that's exactly the point: even the people building the technology couldn't see the real application. DEC made computers. Olsen had them at home. And he still missed what they would become.
The Email Phase
Here's where it gets interesting. The internet itself followed this exact pattern.
When ARPANET went live, it was designed for resource sharing—letting researchers at different universities access each other's computing power. That was the plan. That was the vision.
But by 1973, 75% of all ARPANET traffic was email.
Nobody saw that coming. Email wasn't even supposed to be the point. It was a side feature, a convenience. Ray Tomlinson created it almost as an afterthought in 1971, choosing the @ symbol simply because it wasn't being used for anything else. The internet's first killer app was an accident.
For years—decades, really—email was the internet for most people. It was the reason you got connected, the thing that made the whole network valuable. Businesses bought in because of email. Home users signed up for email. AOL's famous "You've got mail!" became the sound of the digital age.
Then the World Wide Web showed up, and suddenly email was just one small piece of something vastly bigger. But even after the web arrived, it took years for people to realize what they were looking at. Early websites were digital brochures. E-commerce was a curiosity. Social networks didn't exist. Streaming video was technically impossible. The smartphone was still a decade away.
The internet's actual killer apps—search, social media, cloud computing, streaming, the gig economy, remote work—weren't imagined by the people who built the network. They emerged gradually, as millions of people experimented, failed, succeeded, and built on each other's discoveries.
We had to discover what the internet was for. It wasn't planned. It was found.
GPS: From Missiles to Pizza Delivery
GPS might be the perfect example of this phenomenon. It was built by the U.S. military in the 1970s and 1980s for one job: precise positioning of military assets and accurate targeting of weapons systems. That's it. That's all it was meant to do.
In 2000, President Clinton made the full-accuracy GPS signal available to civilians. Within a few years, GPS was everywhere—in cars, phones, fitness trackers, agricultural equipment, delivery trucks, dating apps. Nobody planned for Uber, Waze, Pokémon GO, or precision agriculture. Those applications emerged because people had access to location data and started experimenting.
Today, GPS powers logistics networks, enables autonomous vehicles, helps you find your lost phone, and lets you order food delivery with pinpoint accuracy. A military navigation system became the invisible infrastructure for the on-demand economy. Not because someone planned it that way, but because the technology existed and people found uses for it that the original designers never imagined.
The Smartphone Revolution Nobody Predicted
Let's talk about the smartphone, because this one's recent enough that some of us remember it happening.
The BlackBerry was the dominant device in the early 2000s. It was a business tool—an email machine with a phone attached. RIM (BlackBerry's maker) obsessed over email efficiency, physical keyboards, and enterprise security. The device was so identified with business communication that people literally called it a "CrackBerry" because executives were addicted to checking email.
When Apple launched the iPhone in 2007, the initial reaction from many tech analysts and industry insiders was skepticism. It didn't have a keyboard. The battery life was mediocre. It was expensive. Business users wouldn't want a touchscreen.
But Apple did something unexpected: they opened the App Store in 2008. And within a few years, the smartphone became something nobody had really predicted—a universal computing platform, a camera, a gaming device, a payment system, a navigation tool, a social network client, a streaming video player, a health tracker, and yes, also a phone.
Instagram launched in 2010. Uber in 2009. Snapchat in 2011. TikTok in 2016. None of these companies existed when the iPhone launched. The killer apps for smartphones weren't built by the phone manufacturers—they emerged from the ecosystem once the platform existed and developers started experimenting.
BlackBerry, meanwhile, kept optimizing for email. By 2013, the company was in freefall. They were so focused on what they thought smartphones were for that they missed what smartphones actually became.
Blockchain's Identity Crisis
Even recent technologies follow this pattern. Blockchain burst onto the scene as the infrastructure for Bitcoin—a peer-to-peer digital currency designed to work without banks or governments. That was the whole pitch: decentralized money.
A decade later, blockchain's most promising applications have almost nothing to do with replacing traditional currency. Companies are exploring blockchain for supply chain transparency, medical records, digital identity verification, smart contracts, and decentralized autonomous organizations. Ethereum opened the door to applications that had nothing to do with being "digital gold." NFTs, DeFi, decentralized social networks—none of this was in Bitcoin's original whitepaper.
We're still figuring out what blockchain is actually for. The technology exists. People are experimenting. Some applications will stick, most will fail, and in ten years we'll look back and say, "Oh, that's what blockchain was good for." But we're not there yet.
Where We Are With AI
Which brings us to AI.
Right now, in early 2026, AI is our email moment. Maybe even our "digital brochure" moment.
We're using AI for:
- Customer service chatbots
- Content generation (writing, images, code)
- Task automation (scheduling, data entry, research)
- Predictive analytics
- Voice assistants
- Summarization tools
These are useful. Some are genuinely transformative for productivity. But they're also obvious—the low-hanging fruit, the things we can immediately see and understand based on what AI can demonstrably do today.
This is not what AI is for. This is just what we've figured out so far.
The real applications—the ones that will be obvious in hindsight, the ones that will make current use cases look quaint—those haven't been invented yet. They can't be, because we're still in the discovery phase. We're still learning what's possible. We're still building the tools that will let other people build the actual killer apps.
Think about it: email existed for nearly 20 years before the internet became a consumer phenomenon. The web existed for years before Google. Social media existed before smartphones made it ubiquitous. Streaming video was technically possible long before the combination of broadband, compression algorithms, and content delivery networks made Netflix viable.
The infrastructure comes first. The discovery comes second. The transformation comes third.
We're in phase one with AI. Maybe early phase two. We haven't hit phase three yet—the part where some startup or researcher or random developer combines AI with something else in a way nobody expected, and suddenly there's a whole new category of value that didn't exist before.
What This Means for Businesses
So what do you do with this information? How do you make decisions about AI when you don't know what it's actually for yet?
The answer is the same as it's always been with transformative technologies: you experiment. You learn. You stay close to the edge.
The companies that won with the internet weren't the ones who waited until all the use cases were clear. They were the ones who got in early, tried things, failed often, and learned fast. Amazon started as an online bookstore in 1995, when e-commerce was mostly a joke. Google launched in 1998 into a market already crowded with search engines. Facebook launched in 2004, when Friendster was fading and MySpace looked unbeatable.
None of these companies knew what the internet's killer app would be. They just knew something big was happening, and they wanted to be close to it when it crystallized.
The same applies to AI. You don't need to know exactly what AI will become to benefit from it. You need to:
Build literacy. Make sure your team understands what AI can and can't do today. Not hype, not sci-fi—actual capabilities.
Run experiments. Identify small, low-risk projects where AI might add value. Try them. Measure the results. Learn from what works and what doesn't.
Stay adaptable. Don't build your entire strategy around today's AI use cases. The landscape is shifting fast. What works today might be obsolete in 18 months. What seems impossible today might be routine in three years.
Watch the edges. Pay attention to weird experiments, novel applications, unexpected combinations. The breakthrough won't come from the obvious use cases. It'll come from someone trying something that sounds slightly crazy.
Invest in infrastructure. Even if you don't know exactly what you'll build, having clean data, good APIs, and technical capability positions you to move fast when clarity emerges.
The companies that thrived in past technological shifts weren't the ones with perfect foresight. They were the ones who were ready to pivot when the moment arrived. Western Union could have owned telecommunications. Kodak invented the digital camera, then shelved it. Blockbuster could have been Netflix. They all had the technology. They just couldn't see past the obvious to the inevitable.
The Uncomfortable Truth
Here's the uncomfortable truth: we are terrible at predicting what new technologies are actually for.
Experts miss it. Inventors miss it. CEOs miss it. Entire industries miss it. The people building the technology miss it.
But that's not a bug—it's a feature. If the future were predictable, it wouldn't be transformative. The fact that we can't see what's coming is precisely what makes the discovery process so valuable.
AI will not be what we think it is. It will be something stranger, more useful, more integrated into daily life than our current chatbots and image generators suggest. We're going to look back at 2026 the way we look back at 1995, when "the information superhighway" was mostly used for email and weird personal homepages.
We're in the "plywood box" era of AI—the phase where we're still thinking about it in terms of what we already understand, rather than what it might become.
So no, I don't know what AI is for. Neither do you. Neither does anyone else, no matter how confidently they pitch their vision of the future.
But I know we're going to find out. And the companies, teams, and individuals who stay curious, experiment boldly, and keep learning will be the ones who recognize the real applications when they emerge.
Because history has taught us one thing over and over: the future isn't predicted. It's discovered.
And we're just getting started.
Michael LaVista is CEO of Caxy, a custom software development firm in Chicago, and host of The Digital Transformist podcast. He's been building digital products since before "digital transformation" was a buzzword, which means he's seen plenty of technologies that were supposed to change everything (and a few that actually did).



