Three things from our pod this week on AI with Tyler Cowen:
The AI revolution is coming. Who would you rather have lead it: America or China?
It’s the middle of the pack that should worry: AI can’t hammer a nail, but it can write a mediocre op-ed or a run-of-the-mill analysis. AI already beats “good enough.”
This will upend teaching forever: Less homework! More actual teaching!
Tyler Cowen is an AI enthusiast. A BIG enthusiast. And his enthusiasm is infectious as he imagines the possibilities — new cures, new knowledge, ever-smarter users. But the other side, which now includes Elon Musk and Apple co-founder Steve Wozniak, thinks AI’s explosive growth needs a serious pause to contemplate the implications and risks of this fast-growing new technology.
Or as the tabloid Daily Mail put it even more succinctly: “Are we creating a monster that will enslave rather than serve us?”
On the one hand, tasking ChatGPT (or its newer versions) to write a poem in the style of Dr. Seuss about the risks of AI is epic:
Oh the risks of AI, oh my, oh my,
Bringing troubles we cannot deny,
With power to learn and power to grow,
We must be careful, this much we know.
Or imagine having the power of 100 assistants to write and think and bounce ideas off of. Imagine having one accountant tasking AI with the work of 50. It’s empowering.
On the other, in the hands of, say, ISIS or other malefactors, imagine AI’s destructive potential. Or just go back and watch the Terminator series (except the crappy weird last one). Or think about the more mundane — the decline of friendship, the atomization of our society, the end of human interactions, the end of human debate, a designer intellectual world in the shape of your interests, your obsessions, your passions, with no competing clutter. But it’s that competing clutter that enriches us, and enlightens us about other perspectives.
Cowen insists that all revolutionary developments come with risks and rewards. He also notes, correctly, that the AI revolution is here and can’t be postponed, pace Elon. So the choice is not whether it happens; it’s who manages the revolution. This is not a subject in your #WTH hosts’ sweet spot (unlike creative bickering and national security). Listen to the pod, read the Musk/Wozniak letter, give ChatGPT or the Bing AI a try, and see what you think. We’d love to hear from you; comment below.
HIGHLIGHTS
What’s the revolution happening here?
TC: I think, for the first time, human beings have created true intelligence. It's not conscious, it doesn't have feelings, it's not sentient, but it can perform a really large number and wide variety of human tasks at the level of, say, an IQ 130 or 140 person, plus it often seems to know everything. So this is just a momentous development in the history of mankind.
Is this good news or bad news?
TC: All major changes bring a lot of good and a pretty high degree of bad. I think of it as like the printing press. As you mentioned before, the printing press enabled the Scientific Revolution, later, the Industrial Revolution. We can't imagine modern life or long life expectancies without it, but the printing press also gave some spur to wars of religion, other wars, the writings of Lenin, of Hitler. So truly fundamental changes just disorient people, restructure many elements of reality. But, when push comes to shove, you have to do them. There's no particular reason to think AI as we have it now is going to destroy us. It's not that I think we have very specific, very firm predictions, but simply being a super intelligence does not give a computerized entity the ability to manipulate the world very easily.
So you see a better life with AI?
TC: Imagine that your kid now has a high-quality tutor they can ask questions endlessly and interact with on basically every single topic. Not everyone will use this properly, but it's a phenomenal advance. I'm happy about the complainers. I think they'll end up making this a better product, a better service. But one way to think about it is that, if we don't do this, China and other countries will. And who would you rather have in the lead here, us or them?
How is AI going to reshape learning?
TC: For one thing, our schools are so screwed up. So much of what they do is about ranking the students rather than teaching them. The panic there shows how screwed up they are. I say this will force good schools to reorient themselves almost fully toward teaching. Actually, what will rank you over time is the AI. That itself may be scary in some ways, but institutions such as take-home homework, that's over forever. I say it's a good thing. What we need to do is teach our students how to use GPT-4 and its successors to do their work and to build projects, because that's what the jobs of the very near future or even the present will be like.
How can you rely on the answers AI gives?
TC: [Y]ou really need to word your prompts very specifically to get the best answers. So, if you just say, "Oh, what should I ask Tyler Cowen about GPT," it's okay, but it's not really impressive. But, if you give it a lot of context, a lot of detail, and then say, "Give me the answers of someone who is expert in the other podcasts of Tyler Cowen,” something like that, it will do much better.
I just wrote a 40-page paper on how to give GPT better prompts, most of all in the area of economics. It's been a very popular paper. It's on SSRN, and it's called How to Use GPT to Learn and Teach Economics. But it's general knowledge you can apply to virtually any topic.
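Cowen's advice boils down to: same question, more framing. As a minimal sketch of the idea (the `build_prompt` helper and its persona/context fields are our illustration, not something from his paper), the two versions of his example prompt might be assembled like this:

```python
def build_prompt(question, persona=None, context=None):
    """Assemble a prompt, optionally framing the question with a persona
    and background context, as Cowen recommends."""
    parts = []
    if persona:
        parts.append(f"Answer as {persona}.")
    if context:
        parts.append(f"Background: {context}")
    parts.append(question)
    return " ".join(parts)

# The bare question: Cowen's "okay, but not really impressive" version.
vague = build_prompt("What should I ask Tyler Cowen about GPT?")

# The same question with a persona and context, per his advice.
detailed = build_prompt(
    "What should I ask Tyler Cowen about GPT?",
    persona="someone expert in the other podcasts of Tyler Cowen",
    context=("Cowen is an economist who hosts a podcast and has written "
             "a paper on prompting GPT to learn and teach economics."),
)
```

The point is simply that both strings ask the same thing; the second gives the model enough framing to answer at a much higher level.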
Isn’t this just another crutch? Another way to avoid reading books and talking to people?
TC: It's like sitting down in a room of Nobel laureates and having the ability to ask them questions. Not everyone will use that for good, but it has plenty of nuance, incredible knowledge of detail, of history. I just did a podcast with GPT where GPT played the role of Jonathan Swift, and the GPT did better, I would say, than about a third of my human guests, and my human guests are quite notable people. Again, I'm not saying everyone will use this well, but it's a remarkable opportunity.
If it makes you feel any better, GPT models are about to obliterate a lot of social media use. Rather than going onto the site and doing whatever, you'll just speak or write to your GPT, "Hey, tell me what's up in all my accounts. Condense it. Point me to what's important." It won't necessarily make things better, but I think on average it will make things a lot better, save people a lot of time. You could even give your GPT a command, "The material that might depress me or harm me in some way, take that away. Only send me the good news."
How is AI going to affect work — laborers and the economics of work?
TC: I think, if you're a company that organizes routine back-office work for American corporations using, say, the Philippines or India, that will be automated fairly quickly, and I would be bearish on that sector. That will be bad for some number of workers in other countries. I think the big gainers are, I wouldn't call it unskilled labor, it requires a lot of skill, but people who build things. Carpenters, gardeners, people in construction. We're going to have a lot more ideas, a lot more new projects, a lot of things we'll want to do. This should raise the wages of Americans who are not producing ideas but doing things with their hands, which GPT cannot do at all. And the chattering classes, people in law, also people who work with words, the "wordcels," as they're called, a lot of their jobs will become a lot more competitive. So it's going to be very interesting who it helps and who has a tougher time of it, but it will be inverting some of the other recent trends.
If AI can invent a new antibiotic, can’t it also invent a new biological weapon? A new nuclear weapon?
TC: I don't know exactly what data it's been trained on. I'm assuming that the people who built it did not train it on data for how to make a nuclear bomb. But what GPT models do not do is tell you how to run the lab, how to keep yourself safe, the inarticulable procedures that are a part of every production process that aren't simply written down. But, on national security as an issue, I would say it's a huge edge for America over China. We're well ahead of them, and they will be using our GPT models. So they will have access to Western ideas more than ever before. They'll have a big incentive to use VPNs, and the great firewall essentially has come down because of GPT models. So the one in trouble is China, not us.
[…L]ook at the point in its most general terms, again, the printing press, the internet, telephones, radio, automobiles, they all can be used for good and bad purposes. You have to ask yourself, do you think the forces of good are more cooperative, more productive, more innovative in such a way that new advancements generally help good more than they help evil? And I think that's the case here as well. But it is correct to think that resources more generally can help evil parties. I think that's something we have to deal with. We've been dealing with it since mankind invented fire. Things go on fire, fire leads to weapons, weapons kill people. But at the end of the day, it's still a good thing that we have fire and this will be like that.
Cui bono from AI?
TC: [T]hose who can manage GPT models will advance at the expense of those who cannot. And many people who work with words will have to find other jobs, [and] many of those people who, say, were hurt by trade with China in manufacturing are exactly the people who will be helped by this new technology, because it will bring a lot more projects to America, just new ideas for doing things, new plans. Think of it: what can GPT models not do? They can't hammer nails, they can't build a structure, they can't plant crops, they can't tend a garden. So in relative terms, the demand for those laborers will go up, and it will reverse some of the earlier trends.
Is government going to try to regulate AI?
TC: Well, government is so slow and so ill-informed, it's probably not going to matter. There are no meaningfully new international agreements. We can't even manage the WTO, UN, many other institutions. So to think some international body will be effective here, I mean maybe the EU will try to ban it in some way, but I don't even think they'll succeed. It's too useful. People will just access the US version of it. So governments will be very late to the party for better or worse, and these things will progress. We've seen incredible progress in the last two years. The US government, other governments essentially have done nothing. They've helped it in some background ways in terms of subsidizing their early R&D, but it's going to follow its own logic whether we like that or not. So I would say let's get used to it and let's work for the better version of it rather than the worst version.
People say that half of all computer coding will be done by computers within two years… What will that mean?
TC: That was an underestimate; I said that a few weeks ago. The pace has been faster than I thought, and it will be more than that, sooner than that. But the creative element of programming still will come from humans. You'll either have to be a super sharp, creative idea-generator programmer type, or you'll be a kind of editor and checker of the AI's code. One of those two things. I strongly suspect the number of programming jobs will go up, but they will be split. And again, you'll have to be good at one of those two things. You can't just be an ordinary 60th-percentile programmer, slogging away at your coding every night.
How is the loss of human debate going to affect us?
TC: I read the GPT every night; I love reading it. […I]f I ask it what causes inflation, give me an answer worthy of Milton Friedman, that's exactly what it gives me. I get Milton Friedman, in a sense, in my living room. And if you ask it for a comically stupid answer, such as I might hear from a circus clown, it will give you that too. So it is really up to the user. But I'm not so pessimistic about human nature. I think the smarter, more productive people will have a bigger impact with it than the idiots.
Full transcript here.
SHOWNOTES
Neglected Open Questions in the Economics of Artificial Intelligence (Tyler Cowen, 2019)
How to Learn and Teach Economics with Large Language Models, Including GPT (GMU, March 17, 2023)
Who gains and loses from the new AI? (Tyler Cowen, Marginal Revolution, December 16, 2022)
AI Is About to Transform Childhood. Are We Ready? (Tyler Cowen, Bloomberg, March 16, 2023)
Progress Studies: A Conversation with Tyler Cowen (The Point, February 22, 2023)
AI Is Improving Faster Than Most Humans Realize (Tyler Cowen, Bloomberg, January 24, 2023)
OpenAI, Open Research & UPenn Paper Considers How GPTs Will Impact the US Labour Market (Synced Review, March 23, 2023)
Michal Kosinski Thread on GPT-4 writing code to “escape” (March 17, 2023)
Lessons from the World’s Two Experiments in AI Governance (Carnegie Endowment for International Peace, February 14, 2023)
NIST’s AI Risk Management Framework plants a flag in AI debate (Brookings, February 15, 2023)
US Chamber of Commerce calls for AI regulation (Reuters, March 9, 2023)