For more than twenty-five years I’ve worked in technology—building computers, designing networks, writing code, and experimenting with electronics and microcontrollers. When you spend that much time around technology, you begin to see patterns.
Not just patterns in machines, but patterns in how people react to them.
Right now, artificial intelligence is the newest technology capturing the public’s attention. Some people are embracing it enthusiastically. Others are pushing back strongly against it. And many people seem to be feeling a kind of uneasiness that’s hard to define.
I’ve seen something like this before.
Years ago, when I became more aware of environmental issues, I experienced what many people now call eco-anxiety. When you start learning about climate systems, global resource use, pollution, and biodiversity loss, the scale of the problem can feel overwhelming.
You begin to realize how much mining is required to build our infrastructure.
How much farming is required to feed billions of people.
How much energy civilization consumes every single day.
The scale is enormous.
And when people first see that scale, something interesting often happens: their brains shut down. Not because they don’t care, but because the system suddenly feels too large to process.
I’m beginning to see the same reaction emerging with artificial intelligence.
Artificial intelligence is not a small technology. It touches computing, energy infrastructure, economics, education, and creativity. When people begin to grasp the scale of large data centers, their electricity consumption, and the potential for widespread automation, it can trigger a similar sense of anxiety.
Some people respond by embracing AI completely.
Others respond by rejecting it entirely.
Recently, another dynamic has started appearing. People who use AI tools are sometimes confronted or criticized for using them—whether it’s generating an image, writing text, or experimenting with new applications. The criticism might focus on environmental impacts, data center energy use, or concerns about low-quality “AI slop.”
Some of these concerns are legitimate and worth discussing. But socially, the conversation can quickly shift from examining the technology itself to judging the people who use it.
And that’s where something interesting happens.
The discussion stops being about understanding the tool and starts becoming about identity and labels.
Facing the scale of climate change or the rise of artificial intelligence can feel overwhelming. Pause. Reflect. Balance comes from understanding, not fear.
History shows that this isn’t unusual.
When the internet first began spreading in the 1990s, many people feared it would damage society. Others believed it would transform the world for the better.
When automobiles were first invented, there were no highways, no gas stations, and no global petroleum infrastructure. That entire system had to develop over time.
Every major technological shift has produced excitement, fear, resistance, and adaptation all at once.
Artificial intelligence appears to be following the same pattern.
Just as water must be balanced to sustain life, technology requires thoughtful use to serve humanity.
One simple way to think about technology is through something essential to life: WATER.
Too little water, and we die from dehydration.
Too much water, and we drown.
The same substance that sustains life can become dangerous when it is out of balance.
Technology works much the same way.
Tools can help us build, create, communicate, and understand the world in new ways. But they also require responsibility and thoughtful use.
The goal is not blind acceptance.
The goal is not total rejection.
The goal is balance.
One of the more unusual social dynamics emerging around AI is the possibility of a new type of division forming—not based on politics, religion, or culture, but based on technological preference.
Are you pro-AI?
Are you anti-AI?
Do you use AI tools, or avoid them?
These kinds of labels may simplify complex conversations, but they rarely help us solve real problems.
Artificial intelligence isn’t going away. The technology is already becoming integrated into research, medicine, science, education, and countless other fields.
The real challenge isn’t deciding whether AI exists.
The real challenge is deciding how we use it.
When tools become identity markers, the gap between understanding and judgment can grow wide.
If history teaches us anything, it’s that humans are remarkably adaptable when we approach change thoughtfully.
Every major technological shift brings trade-offs. Every system humanity builds has environmental and social consequences.
The question has never been whether technology has impacts.
The real question has always been whether we are wise enough to guide its use responsibly.
So perhaps the real question isn’t whether artificial intelligence is good or bad.
Perhaps the deeper question is this:
Should we avoid new tools out of fear of criticism, or should we focus on learning how to use them responsibly?
Because if history is any guide, the tools we create will continue to evolve.
The real challenge has never been the technology itself.
The real challenge has always been how humans choose to use it.
Stepping back to think about how we think is often the first step toward balance.
This article was written by Douglas E. Fessler. The ideas and reflections are my own, drawing on decades of experience in IT, environmental monitoring, STEM education, and community initiatives. AI-assisted tools were used to structure and clarify complex concepts — a reflection, in itself, of the subject explored.