First they came for the typists, and I did not speak out —
Because I was not a typist.
Then they came for the mailman and the cab driver, and I did not speak out —
Because I was neither a mailman nor a cab driver.
Then they came for the writers, and I began to worry —
Because I am a writer.
Now they are coming for the tech bros, and it’s time to pull the plug —
Because now this monster will speak for all of us (and put us out of business).
If you want a quick summary of what’s happening in the world of artificial intelligence (AI), my humble adaptation of German pastor Martin Niemöller’s famous prose on the silence of intellectuals in the face of Nazi brutality should tell you all you need to know.
The Future of Life Institute published an open letter in late March demanding a pause on AI experiments and it has already gained over 20,000 signatures, including the likes of Twitter CEO Elon Musk, Apple co-founder Steve Wozniak, author Yuval Noah Harari, Skype co-founder Jaan Tallinn, politician Andrew Yang, and a number of well-known AI researchers.
Are the alarm bells ringing because AI is now posing a threat to their jobs and livelihoods? Or is there really a need to pause AI research before it wipes out humanity itself? Let’s dig in with this week’s The Global Tiller as we take a closer look at what the open letter demands and why AI has suddenly sparked such fears. What will we gain by pausing AI research?
The main demand put forward by FLI is for all AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors; if it cannot be enacted quickly, governments should step in and institute a moratorium. During the pause, they suggest, AI labs and independent experts should develop a set of shared protocols for advanced AI design and development, ensuring the technology is safe beyond reasonable doubt.
The reason for the panic is that AI has only recently started outperforming humans across the usual points of comparison: handwriting, speech and image recognition, reading comprehension and language understanding. But instead of celebrating the victory and using these developments to improve current processes, AI labs are working even harder towards the ultimate goal: artificial general intelligence (AGI), AI that can learn intellectual tasks as capably as humans. Imagine ChatGPT, only not stupid.
For AI to become AGI, it still has some way to go. It would need to develop sensory perception, fine motor skills and a genuine understanding of natural language, all of which remain distant goals for generative AI in its current form. Yet the signatories of the letter want labs to stop training AI any further until they can set the parameters that ensure that when AI does become AGI, it will not wipe out our entire existence.
Not everyone is on board, however. Bill Gates, whose Microsoft is investing heavily in OpenAI, says pausing AI will not solve these challenges. For one, he doesn’t see the whole world agreeing to hold off on AI development. And besides, he still sees value in AI’s potential to address global inequalities. Even Meta’s chief AI scientist Yann LeCun and DeepLearning.AI founder Andrew Ng said a pause on developing or deploying AI models was unrealistic and counterproductive, though they agreed that some regulation was necessary.
But AI regulation was in the works even before this open letter was published. The European Union has led the charge by proposing regulations to ensure AI tools can be trusted by consumers, in response to widely known cases of bias in AI (we covered this in a previous issue). However, Bill Gates’ words already ring true: there is no global consensus on how to deal with AI. The US has so far been extremely slow in policing AI, while India has opted against regulating it at all. China’s approach, unlike the EU’s horizontal and comprehensive regulation, has been more vertical, targeting specific applications and types of AI.
Admittedly, AI regulation may not be keeping pace with the technology’s development, but what will a six-month pause achieve? Is the pause a call for regulation in general, or just a call for the tech bros to converge so that the final rules are set by them and not by governments? If ethics are such a big concern in AI, how come the first departments to be laid off at Elon Musk’s Twitter were the ethics ones? Why was Timnit Gebru fired unceremoniously from Google’s Ethical Artificial Intelligence Team?
Until next time, take care and stay safe!
Hira - Editor - The Global Tiller
Dig Deeper
New York Times opinion columnist Ezra Klein spends a lot of time with AI researchers and, in this podcast, he shares some insights into the thinking that’s driving this technology.
…and now what?
We’re living in unprecedented times. Everyone has said this since the dawn of time, but this time it may actually be true. Why? Because our own intelligence is forcing us to question our nature: not only what it means to be human but also how we maintain some sense of purpose in our lives.
I’m no AI expert and, for me, most current technology is already close to magic. AI is a whole other level: we’re leaving the realm of magic and entering one of mythology. But this feeling we have when facing our most extraordinary (in the proper sense of the word) invention actually blurs our judgment and analysis when it comes to interacting with it.
First, we tend to underestimate our own abilities when looking at our own creations. We marvel in front of these machines, these glorified mechanical Turks, and consider ourselves far beneath them. There is a fundamental flaw in this way of valuing the intelligence of AI. What is AI, after all, if not the accumulation of data and the work of dozens of people who managed to put together complex algorithms? It is incredible work, indeed, and it has reached the point where it does things that even the leading thinkers in the field struggle to understand.
Second, we compare AI to the intelligence of a single human and we fear for ourselves. Why? Because we forget that the individual human is not necessarily the unit of reference when it comes to appreciating our intelligence or our abilities. We are social animals. Culture and history may have made us think that the individual is the epitome of what it means to be human, but our intelligence, the immense talent that made us who we are today as a species, is our collective work. Consider that what our kids learn in school today does not come from the intelligence of the teacher alone; it comes from a long line of individual intelligences that have built on each other’s work for centuries. A doctor still benefits from the knowledge of Aristotle and Hippocrates, amplified by the millions of other thinkers who developed medicine. In short, one AI cannot be compared to one individual human intelligence. It should be compared to the intelligence of humanity as a collective species.
Why is this important? Because we can reposition ourselves in this conversation by understanding that we still have the means and abilities to keep a hand on our own creation and to maintain a sense of control over it. We still have agency in this conversation. We may be racing against the clock, but we haven’t been left behind.
Which means that we can act.
And that brings us to the second element of my perspective. We are actually not living in unprecedented times (I know I sound contradictory here, but hear me out). This is not the first time in our history that we have questioned our own nature. We did it when we invented farming, when some of our communities explored new territories and met different peoples, when we questioned the existence of God, and many other times.
During all these steps of our collective history, we have been forced to question what it means to be human. For a long time, it meant being pretty much like any other species; then we found ways to “control” natural processes. Some of us thought it was about being Christian and of fair skin, until we met other beliefs, cultures and skin complexions. We then thought we were the creatures of a formidable esoteric being, until we realised we could just be a cocktail of atoms randomly put together. Every time, we found ways to deal with this question: what does it mean to be human, and how do we adjust the interactions between ourselves and “others” in ways that benefit everyone (more or less)? And because we have lived through this in the past, we can not only find potential options but also know exactly what we shouldn’t do.
We shouldn’t let things run loose and give in to the will and greed of private actors who will focus on their own profits and be willing to take advantage of the most disadvantaged among us. Nor should we compensate with approximate new theories trying to define what we’re facing now; otherwise, we’ll fall for those (tech) gurus trying to sell us a new religion that, like many others, will simply try to enslave us in a thirst for power. We should reclaim the agency we have over what we have created and define not only its purpose but also the space in which it should be allowed to operate.
Yet, looking at the current conversation on AI, it seems we have lost that agency and don’t know what to do. This is dangerous: we are at the edge of a cliff at a moment that, once again, will define our future for generations to come. We need vision, we need leadership, we need courage and, more than anything, we need a clear understanding of what we are living through and what needs to be done.
My answer to this: we are living in a new defining era, one that challenges our empathy, our values and our ability to build a world fit for all. We need to make sure that the space being created is a safe and welcoming one, within which everyone can have a say, and one that offers opportunities instead of, once again, depriving in order to conquer and excluding in order to profit.
Will we have the intelligence required for this new artificial era? If “God does not play dice”, I think AI may, but let’s make sure the dice haven’t been programmed already.
Philippe - Founder & CEO - Pacific Ventury
Join us on Notes
The Global Tiller is on Substack Notes, and we would love for you to join us there!
Notes is a new space on Substack for us to share links, short posts, quotes, photos, and more. I plan to use it for things that don’t fit in the newsletter, like work-in-progress or quick questions.
Head to substack.com/notes or find the “Notes” tab in the Substack app. As a subscriber to The Global Tiller, you’ll automatically see our notes. Feel free to like, reply, or share them around!
You can also share notes of your own. We hope this becomes a space where every reader of The Global Tiller can share thoughts, ideas, and interesting quotes from the things we're reading on Substack and beyond.