The Cost of Fear: Europe's Misguided Approach to AI Innovation

Opinion article written by Felipe Daragon without using any AI witchcraft like ChatGPT, with the exception of the small story at the end of the article. That one uses GPT dark art.

If you had time travelled to the 16th or 17th century, talked about ChatGPT, and shown people a paper about how it works, you would have been burned alive at the stake as a witch. European governments have since evolved: now they just ban new technology, driven by ungrounded fears and by data privacy laws that overnight became an impediment to innovation and to a future with limitless possibilities for their own citizens. Meanwhile, an "expert" suggested bombing data centers to stop the rise of AI [1]. If you are an AI witcher aiming for global satanic AI domination, you had better hide before the bombs fall on your sorcery center.

I'm afraid Europe may be headed towards a disaster as big as the Russia-Ukraine war, one that may end up negatively impacting European businesses not for a year, but for a decade. I'm talking about the irrational fear of artificial intelligence (AI) and the way the continent is starting to approach, and inadvertently prevent, AI innovation with its data privacy laws.

To start with, I strongly believe that the recent ban of ChatGPT in Italy, now being considered by other European countries, is excessive. As a cybersecurity and privacy expert, and as a developer with over two decades of experience, I must confess that I'm no longer sure that many of the data privacy laws put in place in the pre-AI era are adequate to deal with the many innovative and disruptive AI systems emerging daily. Some data protection authorities are likely to do more harm than good to their citizens by trying to enforce laws that do not appropriately address the fast-paced AI evolution we are seeing today.

I agree with what Matteo Salvini, Italy's transport minister, recently said in an interview: common sense is needed, since "privacy issues concern practically all online services", and the excessive ban [2] could harm business and innovation in Italy. Going beyond his opinion, I also believe that a pause or slowdown in AI adoption in any country would negatively affect that country's national security interests. It is very difficult to enact and enforce technology-related laws equally around the globe, and while one country presses the brake, another will be pressing the accelerator. That's for sure. In the end, a country taking such a drastic measure of slowing down AI adoption and innovation will end up having its competitiveness greatly harmed.

Ten years from now, many recognized AI companies around the world will be telling how they were in the right place at the right time when they developed their AI-related innovations, and how those innovations positively impacted their businesses, their communities, or the world. They will be telling a story about the balanced environment that allowed them to flourish. Or they will be telling a different kind of story, a sad one, about how they moved from one country to another because they could not operate in their native country with the freedom needed to innovate. Regulators, which one do you want your country to be in the future documentaries? Yes, data privacy laws are very important, but at the expense of your own progress, future and national security?

Now, to understand what I mean by irrational fear, you have to take a deep look at how GPT actually works. I'm a developer and a curious person, and I did just that recently. Any developer who has gone through this process knows that GPT is like an insanely powerful autocomplete machine [3]. It is autocomplete on steroids, trained to predict the next word (or token) after the previous ones, in a way that builds up a complete answer to the question that was asked [4].

Here lies a big difference in how it works compared to search engines. Search engines like Google crawl web sites, cache texts in a massive database, and show you links based on those cached contents matching your search keywords. A GPT-like model, on the other hand, is created by processing and learning from public texts from all over the web, but at the end of the process, when you use the chatbot, you are not querying cached texts that have been stored. Instead, you are querying a database of probabilities generated from what it learned about which words tend to appear after other words. This final, trained database does not actually contain full sentences or portions of the original processed texts, but probabilities describing how one word follows another, based on what it learned by reading the entire Internet. This is why, when you query it about a topic, it gives you a different answer each time, aligned with your query.

User: complete this quote, A man was swimming in a lake when

ChatGPT: suddenly he cried out for help. He was drowning and unable to keep himself afloat. A passerby on the shore heard his cries and immediately jumped into the water to rescue him. After the man was brought safely to the shore, he turned to his rescuer and said, "Thank you so much for saving me! I thought I was going to drown!" The passerby replied, "Don't thank me, thank the water. It taught me how to swim." This quote highlights the importance of difficult experiences in helping us develop valuable skills and knowledge.

User: complete this quote, A man was swimming in a lake when

ChatGPT: a crocodile grabbed his arm and tried to drag him underwater. He fought back with all his strength and eventually managed to pry open the crocodile's jaws and escape. Later, when asked about his harrowing experience, he said, "A man was swimming in a lake when a crocodile tried to drag him under. And that's when instincts kicked in."

That's very far from an AI about to become a sentient entity. It is amazing and scary, like a magic trick, only until you see what enables the trick. GPT is just a different kind of calculator: it takes an input, which happens to be words instead of numbers, and spits out impressive and coherent results based on models and probabilities. It connects dots at jaw-dropping speed, thanks to today's computing and storage power, in a way that non-technical humans can barely comprehend. Because the training process made the chatbot a polyglot, you can also query its knowledge in any widely known language, breaking language barriers for learning in an unprecedented way.
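To make the "database of probabilities" idea concrete, here is a toy sketch in Python. It is a drastic simplification: GPT uses a token-level neural network with billions of parameters, while this uses plain word-pair (bigram) counts, and the function names and tiny corpus are my own invention. Still, it shows the core trick of predicting the next word from probabilities learned from text:

```python
import random
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, how often each possible next word follows it,
    then normalize the counts into probabilities."""
    words = text.lower().split()
    counts = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return {
        w: {nxt: c / sum(followers.values()) for nxt, c in followers.items()}
        for w, followers in counts.items()
    }

def generate(probs, start, length=5, seed=0):
    """Sample a continuation word by word from the probability table."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = probs.get(out[-1])
        if not followers:
            break  # no known continuation for this word
        choices, weights = zip(*followers.items())
        out.append(rng.choices(choices, weights=weights)[0])
    return " ".join(out)

corpus = "a man was swimming in a lake when a man was fishing in a boat"
probs = train_bigram(corpus)
print(probs["a"])  # prints {'man': 0.5, 'lake': 0.25, 'boat': 0.25}
print(generate(probs, "a"))
```

The trained "model" stores no sentences from the corpus, only the odds of one word following another, which is why sampling from it produces a different continuation each run, just like the two lake-swimming completions above.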

One must be blindfolded not to see that, from global warming to new cancer treatments and much more, the increased productivity generated by humans working with such incredible AI tools can accelerate the creation of solutions to hard problems, just as the invention of the Internet did. Yes, AI will increasingly be used for attack purposes, in cybernetic and kinetic warfare, in cybercrime, and in dirty politics to amplify fake news, but we will be able to use AI for defensive purposes, and to counter the new era of fake news as well, essentially balancing the game. As a cybersecurity expert, I'm not worried about that, and I doubt most experienced colleagues in the field are worried about our ability to balance the game. AI-powered tools, products and methods will emerge for both defensive and offensive purposes, and we will adapt to this new scenario.

Compare this to the way we manage other inventions that impact our society today, like social networks. We all know how much social media can negatively affect people's mood and worsen symptoms of depression, anxiety, and stress; there are plenty of studies on that. We all know that fake news is a big problem around the world. Yet we are not discussing shutting down all social media, or Facebook; instead we are handling the problems case by case, with rationality and proportionality. That's because the invention's benefits outweigh its risks, and the same can be said about AI.

To deal with cyberharassment, fake news, fake accounts, cyber racism, and other issues, social media companies have created many mechanisms over the years, and these have significantly helped restrain social media's problems. In the same way, responsible AI companies like OpenAI created red teams tasked with testing their chatbot before release to make sure that no illegal, inappropriate or inaccurate answers are given to users. In my own personal experience working with ChatGPT, I came to the conclusion that on some topics they even went too far, censoring specific types of answers that would be useful for cyberdefensive and educational purposes.

Finally, in my opinion, the reported privacy breach that allowed a small percentage of ChatGPT users to temporarily see the titles and chat history of other users is the kind of accidental exposure that could happen with any website, including social networks, and has happened before at other companies. I'm not saying it should be ignored, but bringing this incident to the AI discussion table right now is just opportunism, perhaps motivated by irrational fear, and it does not justify a ban on the technology.

We must all start a responsible discussion about AI innovation and how to deal with the actual challenges this new technology poses to the world. We must figure out what we are going to do if we reach the conclusion that data privacy laws, as designed in the pre-AI era, will do more harm than good to the progress of humanity, and whether a revision of those laws, with the help of the companies at the forefront of AI innovation, is needed.

And now, the promised small story, courtesy of GPT dark art.

ChatGPT: Once upon a time, in a small village hidden in the depths of time, there lived a woman named Emily. Emily was a gifted scientist, working on a secret project that could change the course of human history. Little did she know, her experiments with time travel would lead her to a dangerous and unexpected destination.

One fateful day, as Emily was putting the final touches on her time machine, a surge of energy pulsed through her laboratory. In the blink of an eye, she found herself transported from the 21st century to the year 1550. Dressed in modern clothing and equipped with futuristic gadgets, Emily stood out like a sore thumb in the bustling market of a 16th-century village.

As she wandered through the village, she caught the attention of a curious man named Thomas. A scholarly fellow, Thomas was intrigued by Emily's appearance and the strange devices she carried. Unable to resist his curiosity, Thomas invited Emily to his home to learn more about her and her mysterious origins.

Emily: "Greetings, Thomas. I come from a time far into the future, where wondrous inventions exist. One such invention is called ChatGPT."

Thomas: "What is this ChatGPT, and how does it work?"

Emily: "ChatGPT is based on the GPT-4 architecture, a deep learning model that can generate human-like text. It understands and processes natural language, thanks to its vast number of layers and parameters."

Thomas: "I can hardly believe it! Can you demonstrate how it works?"

Emily: "Of course! Watch as I converse with this AI on my tablet."

(Emily demonstrates ChatGPT to Thomas, and word of her "magic" spreads through the village.)

Magistrate: "Emily, you stand accused of witchcraft. What say you in your defense?"

Emily: "I am no witch. The technology I shared is the product of human ingenuity, not dark arts."

Thomas: "I can attest to that! She comes from a time of great knowledge and invention."

Magistrate: "Your words fall on deaf ears. You shall be burned at the stake for your sorcery!"

(Emily is taken to the pyre, where she addresses the crowd before her execution.)

Emily: "I came to your time with the hope of sharing knowledge and fostering understanding. The 'magic' you fear is nothing more than the product of human progress. I hope that one day, you will learn to embrace the unknown instead of fearing it."

(Emily is burned at the stake, her final words a testament to the tragic consequences of fear and misunderstanding.)

This article was published by The Hunter on April 5, 2023.