I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day.
I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.”
I believe that when A.G.I. is announced, there will be debates over definitions and arguments about whether or not it counts as “real” A.G.I., but that these mostly won’t matter, because the broader point — that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it — will be true.
I believe that over the next decade, powerful A.I. will generate trillions of dollars in economic value and tilt the balance of political and military power toward the nations that control it — and that most governments and big corporations already view this as obvious, as evidenced by the huge sums of money they’re spending to get there first.
I believe that most people and institutions are totally unprepared for the A.I. systems that exist today, let alone more powerful ones, and that there is no realistic plan at any level of government to mitigate the risks or capture the benefits of these systems.
I believe that hardened A.I. skeptics — who insist that the progress is all smoke and mirrors, and who dismiss A.G.I. as a delusional fantasy — not only are wrong on the merits, but are giving people a false sense of security.
I believe that whether you think A.G.I. will be great or terrible for humanity — and honestly, it may be too early to say — its arrival raises important economic, political and technological questions to which we currently have no answers.
I believe that the right time to start preparing for A.G.I. is now.
In San Francisco, where I’m based, the idea of A.G.I. isn’t fringe or exotic. People here talk about “feeling the A.G.I.,” and building smarter-than-human A.I. systems has become the explicit goal of some of Silicon Valley’s biggest companies. Every week, I meet engineers and entrepreneurs working on A.I. who tell me that change — big change, world-shaking change, the kind of transformation we’ve never seen before — is just around the corner.
“Over the past year or two, what used to be called ‘short timelines’ (thinking that A.G.I. would probably be built this decade) has become a near-consensus,” Miles Brundage, an independent A.I. policy researcher who left OpenAI last year, told me recently.
Outside the Bay Area, few people have even heard of A.G.I., let alone started planning for it. And in my industry, journalists who take A.I. progress seriously still risk getting mocked as gullible dupes or industry shills.
I’ve also found many uses for A.I. tools in my work. I don’t use A.I. to write my columns, but I use it for lots of other things — preparing for interviews, summarizing research papers, building personalized apps to help me with administrative tasks. None of this was possible a few years ago. And I find it implausible that anyone who uses these systems regularly for serious work could conclude that they’ve hit a plateau.
If you really want to grasp how much better A.I. has gotten recently, talk to a programmer. A year or two ago, A.I. coding tools existed, but were aimed more at speeding up human coders than at replacing them. Today, software engineers tell me that A.I. does most of the actual coding for them, and that they increasingly feel that their job is to supervise the A.I. systems.
Jared Friedman, a partner at Y Combinator, a start-up accelerator, recently said a quarter of the accelerator’s current batch of start-ups were using A.I. to write nearly all their code.
“A year ago, they would’ve built their product from scratch — but now 95 percent of it is built by an A.I.,” he said.
Overpreparing is better than underpreparing.
Maybe A.I. progress will hit a bottleneck we weren’t expecting — an energy shortage that prevents A.I. companies from building bigger data centers, or limited access to the powerful chips used to train A.I. models. Maybe today’s model architectures and training techniques can’t take us all the way to A.G.I., and more breakthroughs are needed.
But even if A.G.I. arrives a decade later than I expect — in 2036, rather than 2026 — I believe we should start preparing for it now.
Most of the advice I’ve heard for how institutions should prepare for A.G.I. boils down to things we should be doing anyway: modernizing our energy infrastructure, hardening our cybersecurity defenses, speeding up the approval pipeline for A.I.-designed drugs, writing regulations to prevent the most serious A.I. harms, teaching A.I. literacy in schools and prioritizing social and emotional development over soon-to-be-obsolete technical skills. These are all sensible ideas, with or without A.G.I.