The AI Drama: From Progress to Paranoia
- Mauro Longoni
- Mar 27
- 12 min read

Ah, our dear, beloved artificial intelligence. Poor, battered AI is a deeply divisive and controversial topic. People love it or hate it; they are disgusted by it or adore it. Some would do anything to avoid it like the plague, while others build entire empires upon it. Ever since intuitive chatbots and goofy social media videos aggressively forced their way into our lives, we’ve been grappling with a reality very different from anything we’ve known—one where we must find a new balance. I know AI has infinite uses, but for the average person (like us), it serves only two purposes: making funny TikToks and asking about the evolution of the fruit fly at 3 AM while completely wasted.
When I wrote about the fall into hell and the rise into heaven of technology, I knew I would eventually write this post. One completes the other; if you talk about tech, you talk about AI. Besides, it’s a subject far too fascinating for me to ignore. I just didn't know how to write it at first. Then it hit me. This post will be split into two parts: the first dedicated to history (since nobody seems to know where AI actually comes from), and the second to dismantling the "dark side" of a technology that, in fact, has no real dark side of its own.
For example, it’s only thanks to AI that I could have even written this: with a few prompts, I created the skeleton of this post and figured out exactly where and what to research.
Enjoy the read!
AI: An Old Lady with Potential.
If you asked anyone when AI was actually invented, most would point to the Covid-19 pandemic era—basically 2020 onwards. The reason is simple: before that, AI lived only in Star Trek. Only recently have we started shouting at our smartphones and having them actually understand us. But that answer is wrong—very wrong. 2020 isn't even remotely close.
The Dawn of the '40s and '50s
The concept of AI—machines that can think—was born during the era of Swing and Bebop. We’re talking about the dramatic 1940s. Between air raids (WWII was at its peak), mathematicians and philosophers asked a simple and yet fascinating question: "What would the world be like if a machine could 'think'?" Perhaps they saw how humanity was reasoning and figured machines could do better.
Luckily, they hadn't seen The Matrix yet.
For instance, Alan Turing published "Computing Machinery and Intelligence" in 1950, proposing what we now call the Turing Test. The question wasn't "Do machines think?" but "Can they behave in a way indistinguishable from a human?" It’s an old concept but damn relevant today, as companies race to close the emotional gap between computers and people.
Another genius of the time was Isaac Asimov, who explored the ethics of robotics through his "Three Laws of Robotics":
First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law: A robot must obey orders given by humans except where such orders would conflict with the First Law.
Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Zeroth Law (added by Asimov decades later): A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
If these sound familiar, they are the same laws from the movie "I, Robot" with Will Smith. That film (a great one, in my opinion) tried to visualize what happens when a robot has enough data to develop a consciousness and question those very rules.
From that moment on, the doors opened to something entirely new, which ironically didn't even have a name yet. People were discussing and theorizing about consciousness and robots, but there was no formal definition for the concept. Don't worry, though—it didn't take long to find a name. In 1956, during the Dartmouth Workshop, figures like John McCarthy, Marvin Minsky, and Claude Shannon gathered under the belief that every aspect of learning or intelligence could be described so precisely that a machine could simulate it. It wasn't a wild guess. If you describe to a machine how to do 1+1, it gives the same result as a human. That was the basis for computer design. It was then that Mr. McCarthy finally gave it a name: "Artificial Intelligence." Simple, yet effective.
From that year on, enthusiasm was through the roof. Among the many breakthroughs, the first programs capable of solving algebraic problems or proving logical theorems were created. One such example was the Perceptron in 1958, created by Frank Rosenblatt; it was the first elementary neural network, inspired by the way biological neurons function. Then came ELIZA in 1966—the first "chatbot" in history—created by Joseph Weizenbaum and capable of simulating a conversation with a psychotherapist.
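To give a concrete feel for what an "elementary neural network" means, here is a minimal sketch of the perceptron idea in modern Python (not Rosenblatt's original, which ran on purpose-built hardware): a single artificial neuron that nudges its weights every time it gets an example wrong. On a toy, linearly separable problem like the logical AND, it converges after a few passes, which is roughly the ceiling of what a 1958 Perceptron could manage.

```python
# A minimal sketch of the perceptron learning rule (illustrative, not historical code):
# one artificial "neuron" adjusts its weights whenever it misclassifies an example.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: list of feature lists; labels: 0 or 1."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(w * xi for w, xi in zip(weights, x)) + bias
            prediction = 1 if activation > 0 else 0
            error = target - prediction          # -1, 0, or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy usage: learn the logical AND function (linearly separable).
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in samples])
```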
Great pioneers, great talents, great ideas—there was only one small problem: the technology itself. The ideas were decades ahead of their time. It’s a pity that 1940s technology didn't just take an immediate leap into the 21st century. Computers as we know them didn't exist yet, and the machinery needed to crunch such massive amounts of data simply wasn't ready.
Since the results weren't optimal, governments cut research funding—as always happens when something doesn't bring in money or votes immediately. Governments will go to any lengths to fund failing military projects, but for non-military science, no one even blinks. Regardless, the failures and budget cuts cooled the initial enthusiasm, leading to growing general disinterest and a period of stagnation, the first of the so-called AI winters.
The '80s and '90s Revival.
Thirty years passed, and the timing seemed right again. Technology had made giant leaps compared to three decades prior. Computers—complex calculating machines—had been invented and, over time, became increasingly affordable and easy to use. All the prerequisites were there to finally get serious. And indeed, the first steps were taken once more. In the early '80s, AI came back into fashion thanks to "Expert Systems": programs that mimicked the decision-making process of a human expert in specific fields (e.g., medical diagnoses or chemical analysis). The idea was brilliant and is still applied today in medical research or the simple AI we use every day. However, as useful as computers were becoming to the cause, they still had performance limits. The systems were too rigid and difficult to update, leading to a second halt at the end of the '80s. But that was only a temporary standby.
With the increase in computing power and the availability of the first large datasets, the '90s finally seemed to offer that fertile ground that had been desperately sought for decades. Computers appeared mature enough to finally push AI forward, after 40 years of back-and-forth. Thanks to improved processing power, the focus (though limited, since everyone wanted the internet) shifted from logic-based AI to Machine Learning. With so "much" power and the rapid rise of the internet, AI could finally learn on its own. We saw the first voice assistants and speech recognition technologies—basically the mothers of Siri and Alexa.
And then we have Deep Blue. No, it’s not a '90s challenge or some secret military code. It’s simply the name of an IBM computer. What’s so special about it? Well, in 1997 it defeated the reigning world chess champion, Garry Kasparov. For the first time, a computer beat the human mind in a game of move calculation and strategic planning. Strictly speaking, Deep Blue relied on brute-force search and handcrafted evaluation rather than machine learning, but the message landed all the same: it was a massive media turning point.
The Post-Dot-Com Era: The Radiant 21st Century.
Then the world decided to go crazy and burst the Dot-Com bubble, abruptly slowing down a process that had successfully restarted after half a century. For the early years of the 2000s, there was almost total radio silence. Between wars, economic crises, and a general depression, the last thing anyone wanted was to invest in AI. After 2010, three years after the advent of smartphones, AI returned to the scene.
The first major result came with ImageNet in 2012. What is ImageNet, you might ask? A gigantic database of over 14 million images, manually cataloged by human beings. Every year a challenge was held on it—the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)—where competing software had to correctly recognize the objects in the photos. That year, a group of "evil geniuses" from the University of Toronto (led by Alex Krizhevsky and Geoffrey Hinton) presented AlexNet. The result? While the other algorithms had an error rate of 25-26%, AlexNet drastically lowered it to 15.3%. The secret? Using GPUs (borrowed from the gaming industry) to train deep neural networks (Deep Learning). It was a revolution. A neural network won an image recognition competition by a huge margin, proving that Deep Learning was the future.
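For the curious, the recipe behind that result is now a few lines in any modern framework. Below is a hedged sketch in today's PyTorch (the 2012 original was hand-written CUDA, and this toy network is far smaller than AlexNet, so treat it as an illustration of the idea rather than the actual model): define a convolutional network, move the model and the data onto the GPU, and let the gradients do the work.

```python
# Illustrative sketch of the AlexNet-era recipe in modern PyTorch:
# a small convolutional network trained on the GPU when one is available.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(                 # a toy CNN, far smaller than AlexNet
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 16 * 16, 10),       # assumes 32x32 input images, 10 classes
).to(device)                           # the "secret": run it on the GPU

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Fake batch standing in for labeled photos.
images = torch.randn(8, 3, 32, 32).to(device)
labels = torch.randint(0, 10, (8,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(images), labels)  # one training step
loss.backward()
optimizer.step()
print(loss.item())
```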
And then we have AlphaGo (2016). Go is an ancient game from Asia. It resembles chess, but with different rules and vastly more possible positions. For years, many groups tried to make machines compete with a person at this game. The machine always failed. Then the British arrived. The DeepMind group changed the rules. They did something very simple: instead of instructing the AI with hand-written rules, they let the AI make mistakes and learn on its own using Deep Learning. The AI studied the moves human players had made, then played against itself millions of times, and in 2016 the machine beat Lee Sedol, winner of 18 world titles, 4-1. Then they did even better a year later. In 2017, they told the machine: learn by yourself, no human data. After only 3 days of solitary training, it beat the version that had defeated Lee Sedol 100 to 0.
And then we have today. Every single attempt, failure, and success in 80 years of history has brought technology into the phase of Large Language Models (LLMs). AI is no longer limited to classifying data (e.g., "this is a cat"), but can create new content in a fluid, rapid, proactive, and creative way. We see it every day: in business, it can plan trips, schedules, and corporate action plans; in medicine, it can identify pathologies years before symptoms appear and help design new drugs; and in transport, it can report faults before they happen and pilot machines. This power comes from increasingly powerful computers and an ever-larger internet, which finally give AI what it has demanded from the very beginning: computing power and data.
The Modern Drama.
History was fascinating; today’s reality sucks. From here on out, we’re going to have some real fun. Up until now, AI has always brought something beautiful—one step forward after another that could open a new age in human history. We’ve talked about how an abstract, almost philosophical concept became a tangible reality that predicts and solves. Now, the same human mind that pushed for progress is pushing to take a step backward, trying in every way to destroy everything good that’s been done, whether out of ignorance or a stupidity bordering on involuntary comedy. Let's play for a moment and reason through what people are saying.
"AI is stealing our jobs!" Well, that was the plan from the very beginning. We knew perfectly well this would happen. It’s incredible that people are complaining. When we decided to use computers and robotics, the mantra was always: "technology will allow us to be free." And it’s true; it did. Just think about letters. Before email, it took us days to get written replies or signed documents. Now those replies are almost immediate. This time-saving is seen in every aspect of the working world. If, in the 90s, a job was done in 8 hours, with technology the same job could be done in two. Those six hours, in theory, should have been free time... time we should have used for ourselves. Instead, employers took advantage of it with the excuse: "well, I pay you for 40 hours a week, you can't just go home early." So, that "freeing humanity from the prison of work" did nothing but increase the workload. But the problem wasn't technology; it was man's obsession with making more and more money.
The other modern problem now is unemployment. Many positions are being replaced by machines. Again, this was the plan from the start. Machines were supposed to work for us while we enjoyed our lives. The fact that this unemployment is now a problem is not the fault of technology—which we voluntarily decided to use since that was our goal from the beginning—but of politics, which didn't think of a plan to support such a transformation... a transformation that was widely anticipated in previous decades.
"Whoever controls AI will be rich!" Absolutely true, and it’s right that it should be so. The controversy doesn't hold up because, since the dawn of commerce, the big fish has always eaten the small fish, and we all accepted it. Those who produce a lot and sell a lot should also have a lot. If with AI you produce double the amount of good and sell quadruple more, whoever owns those means should live great. Now, with AI, it has almost become a moral issue—the fact that a company can have better results by crushing the competition because it has more capital to invest in technology. But it’s always been this way. Just think of China and how it conquered the world market, or all those giants that destroyed the competition during the 21th century (see Amazon). No one ever complained, though. The reason? No AI. Now, if you use AI to crush the competition, you’re a bad person.
Furthermore, there is another drama: the "inequality" between those who produce and those who don't. The idea—which made my skin crawl—is that a person who produces nothing, due to a lack of AI use, should earn exactly as much as someone who uses AI massively. This is pure madness. I’ve never heard a local fruit seller cry and say he "should earn as much as a supermarket" because he lacks all the logistics a supermarket has. But with AI, people have to whimper.
"AI can become evil." Apparently, AI could become racist, homophobic, or generally evil. AI is not a 3-year-old anarchist child running around naked without any control. AI is a program that executes the orders of us humans, reads what we humans write, and works within the limits of us humans. If AI becomes homophobic or racist, it’s because man programmed it for that purpose, uses it for that purpose, and shares homophobic content that the AI finds. I use Gemini, and I’ve even asked controversial questions, yet Gemini hasn't started insulting me. It’s not the software becoming evil; it’s the person behind it who programs it for that purpose who makes it so. We shouldn't limit AI, but rather internet access for certain people.
"I don't know where the AI gets its information." This statement has a dual meaning. On one hand, people are worried about where the AI gets its info and are terrified that AI will generate Fake News or Deep Fakes. Now we’re worried about this problem? It seems a bit hypocritical to me. Before AI, we read anything, even the most absurd things, and believed them blindly. I’m talking about all the conspiracy theorists out there. They didn't need AI to find unreliable sources, believe them, and repeat them like parrots in this or that forum. As long as humanity turns itself into idiots outside of AI, everything is fine. If AI is used for that purpose, then it's all AI's fault.
Anyway, yes, AI can generate wrong news or information. It is still a program that is learning. But honestly, blindly believing everything the AI says—despite the AI itself warning "I can make mistakes!"—is truly laughable. You want the AI to give true news? Great—since the AI fishes from the internet and we control the internet, maybe we should start writing correct things. Just a thought!
"AI controls minds." I will never understand this one. How ChatGPT or Gemini can control the masses, someone needs to explain to me. If you don't give the program any command, that program stays off. It’s not like my television turns itself on, chains me to the armchair, and shows me the worst trash television has to offer. If I don't hit the power button, nothing happens. So, if you don't use AI, how can it influence you? Secondly, let's say that, for some incredible reason, the AI subvertly decided to control minds. If you let yourself be controlled, it's your fault. It’s your choice to believe and accept what the AI tells you. If I tell you to jump off a bridge, you’d tell me to get lost; but if the AI says it, you’d do it because you think it's right since the AI is correct? If you think that, it's your problem—a big one—not a program's. Anyone or anything can do and say whatever they want, but if we fall for it, it's only our fault.
"AI destroys the environment." The agricultural industry, livestock farming, and the mining industry do much more damage. It’s not a server farm that’s destroying the planet, but all the fields to grow soy by burning the Amazon rainforest, all the water consumed to produce a kilo of meat, the chemical industry with its waste, and all the coal and oil we burn to produce energy.
"AI steals from artists." To train AI, software companies use copyrighted content to make the software better. Here I have to speak for the artists. I’m fine with companies feeding the AI as much material as possible so that the AI is effective. But not paying the owners of that product isn't entirely correct either. However, it must also be said that if the AI fishes from the internet, how do you make the company pay? Let’s take the example of someone searching for a literary text. The AI proposes that text. Now, the question is: where did it get that text? If it took it from the source—meaning the artist—then the artist must get paid. We know, however, that on the internet, the same thing is published multiple times. If the AI fished from another source, what do we do?
Side Notes
Honestly, it’s incredible how humans can be so stupid as to not see how harmless and fundamental AI is. And it’s equally incredible how stupidity guides our judgment. All we need is a minimum of intelligence on our part. All the controversies we have today make no sense. What we’re complaining about now is exactly what we’ve always accepted—like ignorance, malice, and stupidity—yet suddenly with AI, it’s no longer acceptable. It’s almost a mass hysteria. Perhaps the fear isn’t of AI itself, but of our own awareness that we suck so much that we will corrupt even something potentially wonderful, which in turn will compromise us. We truly are number one!
M.









































