If AI were writing today’s newsletter, you would be reading about ‘Exploring the Role of Technology in Sustainable Agriculture’. My work as the editor of The Global Tiller would have been vastly reduced and you’d have an 8-minute read starting with something like this:
The world population is expected to reach 9.7 billion by 2050, and with it, the demand for food will increase significantly. To meet this demand, the agricultural industry must evolve to become more efficient and sustainable. One way to achieve this is through the use of technology.
Precision agriculture is an innovative approach that utilizes technology such as sensors, drones, and GPS to gather data and optimize crop yields. This can lead to reduced use of resources such as water and fertilizer, and ultimately, a more sustainable agricultural system.
Another technology that has the potential to revolutionize agriculture is vertical farming. This method involves growing crops in stacked layers indoors, using LED lighting and controlled…
As much as I’d like to believe that I have better writing skills than AI, I’m not sure I could confidently make that claim for much longer. Granted, the text above sounds a little robotic (and why wouldn’t it?), but we’ve all written worse school papers than what ChatGPT produced in a few seconds.
This week in The Global Tiller, we look closely into ChatGPT, the latest AI breakthrough to have taken the world by storm. What is it capable of and which sectors does it pose a threat to? How will it impact our society?
To understand what ChatGPT is, I decided to ask the software itself.
ChatGPT is a state-of-the-art language model developed by OpenAI. It is a variant of the GPT (Generative Pre-trained Transformer) model, which is trained on a massive amount of data to generate human-like text. One of the key features of ChatGPT is its ability to understand and respond to natural language input, making it ideal for a wide range of applications, such as text generation, language translation, and conversational AI.
What makes ChatGPT stand out from previous AI interfaces is that it remembers what you’ve said earlier in the conversation, lets you correct it when it makes a mistake, and has been trained to decline inappropriate requests. The world of the movie Her doesn’t seem so far-fetched anymore.
Nevertheless, it may sometimes generate inaccurate information, or harmful and biased instructions. Since this AI was trained on data only up to 2021, it is also not very accurate if you ask about current developments. Yet it is already turning heads.
Universities and schools are worried that students will use ChatGPT to plagiarise written assignments and research papers. Companies are worried applicants will use it to write cover letters. Law enforcers are worried criminals will use it to look up instructions on how to make a bomb. Lawyers are worried their interns will use it to create legal documents, and news editors are worried it will copyedit a story faster and more efficiently than they can.
But you can look at these same concerns through a different lens. Students who don’t speak English as a first language will now be able to improve the grammar of their essays, and job applicants may finally succeed in writing cover letters that get them noticed. Paralegals may focus on more creative aspects of their jobs instead of spending nights perfecting contractual lingo. If law enforcers are worried about readily available bomb-making instructions, they should go after Google first. As for news editors, it hurts me to say it, but copyediting has always been a monotonous part of the job. Maybe now you can focus on the bigger picture, like which topics you should cover and which communities you should highlight.
This genie is not going back into the bottle. What we should focus on instead is how we are going to adapt. How are schools going to change their assessment criteria to make sure they are really testing what their students are learning, instead of lazily assigning topics to be graded on a predetermined rubric? How are companies going to redesign their hiring procedures so that, instead of shortlisting candidates based on who uses the best keywords in cover letters, they give more value to individual skills and suitability?
In much the same way that automated directions took us from efficient taxis to ride-sharing apps, generative AI like ChatGPT and its successors will lead to innovations that are hard for us to fathom at this point. When companies combine this technology with the data they already possess, it could lead to significant disruptions across industries.
What remains to be seen is how policymaking responds to these disruptions. How will our laws regulate data usage? What requirements can be made mandatory for training generative AI in the first place? How will we protect vulnerable sectors of society from suffering even more amid the system overhaul that AI is bound to bring?
If there is one task you’d rather have ChatGPT do for you, what would it be? Do leave a comment below and let us know.
Until next time, take care and stay safe!
Hira - Editor - The Global Tiller
Dig Deeper
What makes generative AI the technology of the year for 2022? BigThink digs into how generative AI is disrupting multiple industries, from artwork and poetry to essays and computer code.
…and now what?
When I was in high school, we were asked to prepare a presentation for geography class. My topic was on development issues in Africa, if I remember correctly, and that was the year when we got internet at home.
So I used a lot (like a lot!) of material taken out of the internet, from the few websites that were available back then. I printed everything, put it together nicely and handed it in. The teacher loved it and appreciated me for taking the opportunity to use this new tool.
My classmates, on the other hand, were less pleased. They told me I didn’t do the work myself and I was basically cheating. Was I? I didn’t think so. Looking at the discourse around ChatGPT, the story is not entirely new. Before AI, it was the internet, and before that it was called cheating if you used a calculator.
Now, as a teacher at the university, I’m very often asked by the administration to be careful about cheating, the use of phones, etc. But I have a trick for that: I tell students to bring whatever content they need or want because I’m grading their critical analysis skills, their ability to build a strong and structured argument. It’s not about the information they share, it’s about how they share it and what they do with it to produce additional reflections, questions and ideas.
Does ChatGPT threaten to expose my trick? Yes and no. Yes, because its abilities are far greater than those of any other generative AI to date, and it will become harder and harder to distinguish between human work and AI work. And no, because it’s just another tool to produce more, maybe to produce better, and if anything to do things differently.
The debate around ChatGPT is another Luddite fight. The technology is already here, as Hira said, so it’s more about how we adapt to it and take the best of it (through careful regulation, education and collaboration).
It’s already a game changer in many ways. Look at Google panicking and laying off workers after Microsoft announced it would integrate ChatGPT into its Bing search engine. Knowing how heavily criticised Google’s search engine has been, they have reason to be scared.
But, as it itself claims, ChatGPT is just another tool to produce “human-like text”. It helps us produce things that could look like something human-made, to some extent. So it will be good for many tasks and help us improve on many things, but it won’t replace us.
I may be biased here, but ChatGPT’s Global Tiller is nice, yet it doesn’t have any depth. It lacks the experience, the wisdom or, even more simply, the sensory interactions that make us react with more depth to whatever we’re talking about. Singer Nick Cave put it perfectly when someone asked ChatGPT to write a song in his style: “ChatGPT’s melancholy role is that it is destined to imitate and can never have an authentic human experience, no matter how devalued and inconsequential the human experience may in time become”.
Unlike what we always fear, machines don’t deprive us completely of our humanity. It is we humans who deprive ourselves of our shared humanity by putting people at the level of machines, treating them as such and preventing many of us from exploring our own potential, abilities and purpose. The risk for the future is not machines per se; it is how much we treat ourselves as machines.
Let me be clearer on this. We have had cooking machines for decades now, some highly exceptional ones, like the Magimix or the Instant Pot, which are quite magical indeed. But even if a Magimix makes wonderful cuisine, people still continue to create recipes and magnify the culinary experience. ChatGPT is just the Magimix of language: it makes good food but, for culinary delight, we will still go to our favourite chef.
And that is how the human adventure will probably continue. But don’t take my word for it; trust ChatGPT’s answer when I asked it what the future of humanity is:
Ultimately, the future of humanity will depend on the choices and actions that we take as a society in the present.
So, what will our choices be?
Philippe - Founder & CEO - Pacific Ventury