Make way for yet another headline-grabbing AI policy intervention: Hundreds of AI scientists, academics, tech CEOs and public figures - from OpenAI CEO Sam Altman and DeepMind CEO Demis Hassabis to veteran AI computer scientist Geoffrey Hinton, MIT's Max Tegmark and Skype co-founder Jaan Tallinn to Grimes the musician and populist podcaster Sam Harris, to name a few - have added their names to a statement urging global attention on existential AI risk.

The statement, which is hosted on the website of a San Francisco-based, privately funded not-for-profit called the Center for AI Safety (CAIS), seeks to equate AI risk with the existential harms posed by nuclear apocalypse and calls for policymakers to focus their attention on mitigating what they claim is "doomsday" extinction-level AI risk.

Here's their (intentionally brief) statement in full:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

Per a short explainer on CAIS' website, the statement has been kept "succinct" because those behind it are concerned to avoid their message about "some of advanced AI's most severe risks" being drowned out by discussion of other "important and urgent risks from AI" - which, they nonetheless imply, are getting in the way of discussion about extinction-level AI risk.

There was also the open letter signed by Elon Musk (and scores of others) back in March, which called for a six-month pause on development of AI models more powerful than OpenAI's GPT-4 to allow time for shared safety protocols to be devised and applied to advanced AI - warning over the risks posed by "ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control".

So, in recent months, there has been a barrage of heavily publicized warnings over AI risks that don't exist yet.

This drumbeat of hysterical headlines has arguably distracted attention from deeper scrutiny of existing harms. Such as the tools' free use of copyrighted data to train AI systems without permission or consent (or payment); the systematic scraping of online personal data in violation of people's privacy; and the lack of transparency from AI giants vis-a-vis the data used to train these tools.

Or, indeed, baked-in flaws like disinformation ("hallucination") and risks like bias (automated discrimination). Not to mention AI-driven spam! And the environmental toll of the energy expended to train these AI monsters.

It's certainly notable that after a meeting last week between the UK prime minister and a number of major AI execs, including Altman and Hassabis, the government appears to be shifting tack on AI regulation - with a sudden keen interest in existential risk, per the Guardian's reporting.

Talk of existential AI risk also distracts attention from problems related to market structure and dominance, as Jenna Burrell, director of research at Data & Society, pointed out in a recent Columbia Journalism Review article reviewing media coverage of ChatGPT - where she argued we need to move away from focusing on red herrings like AI's potential "sentience" and toward covering how AI is further concentrating wealth and power.

So of course there are clear commercial motivations for AI giants to want to route regulatory attention into the far-flung theoretical future, with talk of an AI-driven doomsday - a tactic to draw lawmakers' minds away from more fundamental competition and antitrust considerations in the here and now. (And data exploitation as a tool to concentrate market power is nothing new.)

Certainly it speaks volumes about existing AI power structures that tech execs at AI giants including OpenAI, DeepMind, Stability AI and Anthropic are so happy to band together when it comes to publicly amplifying talk of existential AI risk. And how much more reticent they are to get together to discuss the harms their tools can be seen causing right now.