We’re not here to bash AI. We use it too. But lately, we’ve noticed something: AI is showing up everywhere. From email writing to creating headshots, it feels like every tool, platform, and product is racing to add “AI-powered” to the label.
But that raises the question: Just because we can use AI for everything… should we?
Sure, AI can automate your workflow and spit out 10 blog posts in under a minute. But can it build trust with a client? Or spark a real conversation?
This blog is our invitation to pause. To look past the hype and ask how AI is actually serving us, and how it might not be. Because while AI is a powerful tool, it still needs something we already have: people. And it’s missing something we all crave: human connection.
Is AI Everywhere?
Short answer: Yes.
Long answer: Absolutely yes. And probably in more places than you realize.
Artificial Intelligence isn’t some overnight tech trend that appeared out of the digital mist in the 2020s. It’s been around since the 1950s, when the term “AI” was coined to describe a machine’s ability to simulate human intelligence.
If you’ve ever played chess against a computer, you’ve already experienced one of the earliest and most enduring forms of AI. It’s not just moving pieces at random. It’s calculating, analyzing, and playing to win. That’s the quiet reality of AI: it’s been powering your email filters, your GPS reroutes, and your eerily accurate Netflix queue for years.
But this moment? The ChatGPT moment? That’s not just AI. It’s Generative AI, and it’s what’s got everyone in a frenzy.
Generative AI refers to tools that don’t just process data, but create new content from it. Words, images, code, fake Drake songs… you name it. Tools like these can now write your emails, design your website, generate your headshot, and yes, even ghostwrite your memoir if you ask nicely.
We’ll admit it: AI is very good at what it does, when it’s doing the right things. And when used with intention, it helps teams move faster, iterate quicker, and offload the busywork so humans can focus on strategy and creativity.
But AI is not a mind reader, it isn’t your brand voice, and it definitely can’t replace the nuanced thinking, emotional intelligence, and strategic judgment that come from real human brains. In fact, some evidence suggests we need to rely on those human brains even more when working alongside Generative AI.
When AI Starts Thinking For Us
The Cognitive Impact
The tools we’ve historically relied on—like calculators or Excel spreadsheets—were designed to assist, not replace, our cognitive abilities. You still had to understand the formula, the logic, the objective. But today’s AI doesn’t just assist; it automates thinking. And that’s where things get complicated.
According to Forbes, “AI, on the other hand, is more complex in terms of its offerings—and cognitive impact.” Researchers at the National Institutes of Health (NIH) warn of “AI-induced skill decay,” where constant reliance on AI could weaken our ability to think critically, solve problems creatively, or innovate meaningfully—a gradual erosion of human innovation and independent thought.
In sectors like healthcare and finance, AI is already making high-stakes decisions like suggesting diagnoses or investment strategies. But outsourcing too much judgment to machines? That’s a slippery slope. Less human oversight means less opportunity for critical thinking and creative problem solving, and more trust in tools that, while powerful, aren’t infallible.
The takeaway? If we want big ideas, we need brains, not just bots. As the NIH notes, we must first “understand how to work independently of AI”; only then can we use it wisely.
The Bias Problem
The phrase “garbage in, garbage out” takes on new meaning in the era of generative AI. These tools are trained on massive datasets filled with human-created content, much of which already reflects existing stereotypes and systemic biases. As Bloomberg highlights, “We are essentially projecting a single worldview out into the world, instead of representing diverse kinds of cultures or visual identities.”
The consequences? More than just a lack of inclusion in digital spaces. In real-world applications like hiring, insurance, or even law enforcement, biased AI-generated images and videos can reinforce stereotypes and cause real harm.
Generative AI doesn’t just reflect bias, it reinforces it, potentially at a massive scale. The challenge for creators and companies isn’t just to acknowledge this issue, it’s to actively correct it. That means more representative datasets, better oversight, and a refusal to accept a “default” that leaves entire communities misrepresented or erased.
The Environmental Cost
Generative AI might live in the cloud, but its environmental footprint is very much grounded in reality. From rare earth mining to water consumption, the infrastructure behind AI is anything but invisible.
The United Nations Environment Programme warns that data centers powering AI consume massive amounts of energy and water. One estimate suggests AI infrastructure may soon use six times more water than Denmark, a nation of 6 million people!
Then there’s the hardware. Building the electronics that run AI models requires staggering resources—800 kg of raw materials to make a single 2 kg computer. Add in toxic e-waste and an ongoing reliance on fossil fuels, and the “sustainability” story starts to unravel, no matter how many carbon credits tech giants buy.
Even a simple ChatGPT prompt reportedly uses about 10 times more electricity than a standard Google search, according to the International Energy Agency. Multiply that by billions of queries. A little scared yet? As we race to build smarter machines, are we ignoring the environmental cost of their intelligence?
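To make that “multiply by billions” point concrete, here’s a rough back-of-envelope sketch. The per-query figures are illustrative assumptions drawn from the IEA’s widely cited comparison (roughly 2.9 Wh per ChatGPT query versus about 0.3 Wh per Google search), and the billion-queries-a-day volume is a hypothetical round number, not a measured statistic:

```python
# Back-of-envelope estimate with assumed, approximate figures.
GOOGLE_SEARCH_WH = 0.3   # ~energy per standard Google search, in watt-hours
CHATGPT_QUERY_WH = 2.9   # ~energy per ChatGPT query, in watt-hours
QUERIES_PER_DAY = 1_000_000_000  # hypothetical: one billion AI queries per day

# How many searches' worth of energy does one AI query cost?
ratio = CHATGPT_QUERY_WH / GOOGLE_SEARCH_WH

# Extra electricity consumed per day versus plain search, converted to MWh.
extra_wh_per_day = QUERIES_PER_DAY * (CHATGPT_QUERY_WH - GOOGLE_SEARCH_WH)
extra_mwh_per_day = extra_wh_per_day / 1_000_000

print(f"One AI query uses roughly {ratio:.0f}x the energy of a search")
print(f"At a billion queries a day, that's about {extra_mwh_per_day:,.0f} MWh of extra electricity daily")
```

The exact numbers will shift as hardware and models change, but the scaling logic is the point: a small per-query difference becomes a very large total once usage reaches planetary scale.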
Partnering People & Machines
When used intentionally, AI and humans can actually make a pretty solid team.
Think of AI as your super-speedy sidekick. It’s great at sifting through mountains of data, generating outlines, proofreading for grammar slips, or even automating those mind-numbing tasks like calendar scheduling or inbox cleanup. It’s the ultimate co-pilot for the boring stuff. But that’s where the line should be drawn. Because when AI starts creeping into judgment, taste, emotion, or nuance (the things that make us human), it tends to fall flat. Collaboration works best when you let AI handle the grunt work and let humans do the thinking, feeling, and connecting.
Because at the end of the day, people don’t just want information; they want to be understood.