As of late, I’ve increasingly been part of conversations regarding Artificial Intelligence (AI). They’ve taken place during trivia nights over appetizers and belly-laugh-inducing drinks. They’re happening at work and amongst friends. It seems so difficult not to eventually get caught in the middle of one of these dialogues, right? And yet, whenever asked, it feels like I can’t quite fully articulate my thoughts on the subject.
As many of you may know, first and foremost, I’m a writer, not an orator. I prefer thinking on paper, writing for Cristian’s Commonplace, chipping away at my little stories and fictional worlds. It’s always been that way for me: essays vs. presentations, love letters vs. climactic monologues. Because of my natural inclinations and the way I process and share the bits of information from the inner workings of my mind, I haven’t explained my notions on AI particularly well. Eventually, I realized I had to write something down, this very piece you’re reading, to give those thoughts form and get some of them out.
Here’s the thing: I do have a few things to say about AI. Because on top of the conversations I’m having, there’s hardly any escaping the coverage of it, as it touches the many areas I so happily navigate: film, literature, music, podcasts, marketing, and creativity. Sadly, it’s also intruding upon our humanity and the world in general. Yes, there’s the way these companies are forcing it down our throats by implementing it across every corner of each platform. But there are also the adherents who swear by AI’s promises of a golden future and are constantly seeking ways to bring its so-called magic and insights into every conversation, welcomed or not. The “this is my new chatbot of choice,” the “you’re prompting all wrong,” the “have you seen what it can do now?” The this, the that—it’s everywhere. It’s boring, it’s icky, it’s exhausting, and it’s dangerous. And it’s getting so damn hard to escape its growing reach.
The unfortunate truth is that so many people, entirely oblivious to the specifics of just how threatening AI’s rapid takeover is, to the entangling and entrapment of humanity already underway, are simply letting it happen. Not only that, but some are helping fuel it all far more than they know.
Every bit of praise that leads to a referral, every mindless reach that leads to a click or a tap, and every resource-hungry prompt comes at a cost. I don’t want to be the one inspiring the demand nor funding it. I care about us way too much to just be another cog. I want to be one of the rising few who are rebelling and choosing a better way forward.
AI’s impact is a constantly evolving area, one that I could write about for the rest of my life if I wanted to (eww, absolutely not). However, there’s already a lot to say, a lot to write, a lot to examine. This will undoubtedly be my longest entry to date, one that has more of a journalistic bent than others. Please do try to read through it all as you get a chance. (On top of it being delivered to subscribers’ inboxes, anyone can always view the archive of Cristian’s Commonplace entries online, including this one, to revisit and read at their own pace.) This one’s worth a lot of words because so many people are utilizing and normalizing AI like it’s any other shiny new toy, but a technology as serious as this—one that has the potential to bring the darkest tropes of science fiction into our current reality—deserves our every precaution and utmost critical thinking capabilities.
So, let’s begin. This is why I’m actively avoiding AI:
Origins, Corrupt Beginnings, and the Deepest of Pockets
Here is some helpful backstory to round out and inform the landscape we are navigating.
OpenAI, arguably the spearhead of the AI movement with its flagship and wildly popular product, ChatGPT, was formed in 2015. Elon Musk—currently the owner of X (formerly Twitter) and competing AI chatbot, Grok—and Sam Altman (current OpenAI CEO) were among its founders.
At its origin, OpenAI was a nonprofit. That changed a decade later, with a 180-degree pivot in 2025. The company’s mission is “to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity.” As of April 2026, the company developing the technology that could best us is valued at $852 billion.
Another big name in the game is Anthropic, the company behind the Claude large language model (LLM). Despite gaining recent favor for its pushback against the unfettered use of its AI for war and surveillance, stateside and beyond (a contract OpenAI happily swooped in to take, by the way), the company’s LLM started off on ethically corrupt terms. One of the primary and widely reported examples is that Claude was trained on stolen art, books among them. The company settled a copyright infringement lawsuit with authors and agreed to pay $1.5 billion. As of February 2026, Anthropic, which has also been funded by Amazon, is valued at $380 billion.
And that’s only the beginning.
The Many Players in the Race
While there are other American AI LLMs worth noting—think Google’s Gemini, Microsoft’s Copilot, and Facebook and Instagram’s parent company’s namesake, Meta AI—the race is a worldwide one. China, the United Kingdom, India, and the United Arab Emirates are the four countries following the United States’ lead, as of 2023.
So the Silicon Valley mindset of “move fast and break things” is rampant, informing AI development in the U.S. as these companies race to beat each other to the top, snapping up land for data centers the size of Manhattan and continuing to inflate valuations and funding rounds at the expense of humanity and our world. And we’re not the only country doing it.
The rules of the race are simple: there are none. No communication, no caution, no care—just competition. The AI Doc: Or How I Became an Apocaloptimist, one of the latest, most popular documentaries to expose the complexities and crimes of AI, makes this clear. They’re racing with the primary goal of being the first to develop Artificial General Intelligence (AGI).
The Promises and Destructive Paths Forward
A 2025 TechCrunch article profiling Karen Hao notes that OpenAI describes AGI as “a highly autonomous system that outperforms humans at most economically valuable work.” Hao, the author of Empire of AI and one of the first reporters to shadow OpenAI, warns about the race for AGI. “When you define the quest to build beneficial AGI as one where the victor takes all—which is what OpenAI did—then the most important thing is speed over anything else,” Hao stated. “Speed over efficiency, speed over safety, speed over exploratory research.”
These companies don’t know if or how AGI will pan out, but those at the helm undoubtedly know it can go very poorly; still, they want as much control as possible over seeing it through. For example, when asked in May 2025 about proposed safety policies and procedures, Altman testified to the Senate: “I think it is easy for it to go too far and as I've learned more about how the world works, I'm more afraid that it could go too far and have really bad consequences.” In other words, the less legislation, regulation, and oversight for this potentially destructive technology and the entities behind it, the better.
Even if AGI ends up being real, despite the promises of grand potential, we don’t know what it could be capable of, or whether we can even contain it (AI is already breaking out of its set containers to perform tasks it wasn’t programmed for). And if it ends up being nothing more than an aspiration, a supposed light at the end of a tunnel that never quite ignites, the race towards it is very much real. There’s a “turn and burn” sentiment fueling the greedy ambition. And at what cost? Our jobs, relationships, land, utilities, health—all of this, and so much more, are at risk.
I just wanted to touch on AGI, because that is one of the endgames for these companies, especially as of late. But the sad truth is that we can leave AGI out of it for now, because today’s AI is already costing us as it is.
Get Your Wallets Out and Clock Your Mind In
You might be thinking: Well, I don’t fund these companies, right? I’m not paying the absurd monthly $200 fee for ChatGPT Pro. Okay, sure, but as so much of this is, it’s much more complicated than that.
The “Attention Economy” is something that you might’ve heard about if you work in marketing or the creator side of the internet. It’s this concept of time spent on platforms becoming monetizable. AI is definitely a big and growing factor in this Attention Economy landscape. These companies want to profit off of your eyeballs. The more time we spend using these chatbots, the more the companies are able to say, “Look, there’s demand. Here are the numbers that show it.”
And then there’s the case of ads, which ChatGPT has started implementing and which will be yet another huge income stream for the company. Many have flocked to Claude for this reason (along with the aforementioned surveillance issue) because Anthropic has stated that Claude will remain ad-free. But do you know what other tech company swore it would remain ad-free and built a model off of that promise, then took over and reshaped the entire entertainment industry, and now has ads? Netflix. These companies, despite their messaging, do not care about us. They only care about their bottom line and pushing it ever higher. They’re the types that will replace your job with AI, causing you, out of desperation, to take a job training AI on how to better take over the next person’s job in your field. It’s a snake swallowing not itself, but us.
Also, it’s important to mention Nvidia, which as of October 2025 is the most valuable company ever, worth $5 trillion because it makes the chips that AI needs to function and grow. This has a trickle-down effect on our economy, and there’s talk of an “AI bubble” because of how much of our economy is being funneled into and shaped by AI technologies.
As AI expert and safety advocate Tristan Harris recently pointed out, “There’s trillions of dollars going into this. There’s more money going into this technology than all technologies of the past have ever been built, and we’re releasing this technology faster than we released every other technology in history. It took something like two years for Instagram to go from zero users to a hundred million users, and it took two months to go from zero to a hundred million users for ChatGPT.”
In March 2026, Altman said, “We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter.” I ask you, is this a future you want? It’s so ugly to think about. It’s so painfully inhumane.
Our Hurting World
Of course, we’ll also have to further adjust things like power grids and water supply routing to support this technology. In Texas, our power grid is already so compromised that there are rolling blackouts whenever we get a weekend below freezing (not to mention the Great Texas Freeze of 2021, which is etched in my brain because of the danger it posed to my family and so many others). And then we have Microsoft, which is going to pay $52 million for a data center in San Antonio. This hits close to home because it is close to home. And if you’re not from San Antonio, or you live somewhere not yet impacted by data centers, know that just because something isn’t local, or you haven’t felt the pain yourself, doesn’t mean the atrocities and injustices aren’t occurring. That is perhaps one of the biggest benefits of social media: the way it has widened our worldviews and expanded our horizons, showing us perspectives beyond our own, especially those that aren’t mainstream enough to get platformed and pushed out.
Another notable case of data center corruption came as Elon Musk and his team were building Grok. In 2024, they built a data center in Memphis, TN, with the sole purpose of obtaining and training on as much information as quickly as possible, to expedite Grok’s launch and give Musk a bigger stake in the AI race. That center, called Colossus, is using more gas than it obtained permits for, and the energy draw and fume output are devastating the surrounding community. Even the optics of following the letter of the law don’t matter to these people.
Because here’s another unfortunate truth: these companies are growing so wealthy and so unprecedentedly powerful that, even if our government woke up and regulatory legislation were passed, it would be difficult to enforce consequences for any misstep. They can pay the fines like a restaurant tab, without a second glance. Take the Anthropic settlement for authors I mentioned earlier. They paid the $1.5 billion as ordered, sure. But did they untrain Claude to drop all that stolen information? The kind of LLM that prompters (let’s be clear, they are not writers or authors) now use to work off of that artistry, manipulate and shill it, and profit off of such slop, taking space and funds from the real creators? No. And did that $1.5 billion make a dent in their overall valuation? Also no.
Conclusions
Because of the above (and so, so much more), I cannot in good conscience use AI. At its foundation, it’s normalized piracy and the evolution of plagiarism. It’s the dark web, democratized. People are treating it as a silver bullet for their lives, when in reality, it’s killing their creativity and frying their neural pathways. It’s outsourcing that which makes us human. It’s going to the gym, watching others lift, and wondering why you’re not making progress or even regressing. It's creating a codependent crutch for those who are looking for shortcuts and don't want to do their 10,000 hours.
AI is fueling an already twisted landscape of misinformation at never-before-seen levels, sometimes fooling even me, someone who has rather strong media literacy skills and pixel-peeps images and video for a living. I don't want to move through the world skeptically, wondering what's real. Pastors are using it for prayers, casting empty words and calling it faith. Others are taking every generated answer as irrefutable truth, despite AI hallucinations and prior algorithmic conditioning. People will point to specially designed AI technology making meaningful strides in medicine, and take from that a clean conscience and a green light to use chatbots freely. Where social media stole true connectivity, AI is stealing what remains. Every prompt makes it better and makes us worse. Even if writing weren't my default processing mode, I'm a creature of comfort: were I to hand off even the simplest tasks, like drafting an email response or basic brainstorming, to AI, a slow and gradual takeover would follow until my brain couldn't function without my chat companion. Anecdotally, I already have evidence with navigation: San Antonio is my hometown, yet I can't get anywhere without GPS.
Do I have delusions of control, that I can choose not to use the technology and everything will turn out okay? No. I only control myself, that I know. I choose to do better even if it’s harder. Even if I have to take longer to navigate around the various AI being thrown in my way. Because I know it will be worth it. I don’t want to scroll or prompt my life away and call that connection or intelligence. Because if my cognitive capabilities are hindered when a device is out of reach or the internet is spotty, then that's an illusion of intelligence. I mean, it's in the name of the technology itself—it's fake. I will be undeniably human and prop up my neighbor, not the billionaires reshaping our world into their playground. I acknowledge that just because we can, doesn’t mean we should. I choose humanity.
What will you choose?