From fake photos of Donald Trump being arrested by New York City police officers to a chatbot describing a very much alive computer scientist as having died tragically, the ability of the new generation of generative artificial intelligence systems to create convincing but fictional text and images set off alarms about fraud and misinformation on steroids. Indeed, on March 29, 2023, a group of artificial intelligence researchers and industry figures urged the industry to pause further training of the latest AI technology or, short of that, for governments to “impose a moratorium.”
These technologies – image generators such as DALL-E, Midjourney and Stable Diffusion, and text generators such as Bard, ChatGPT, Chinchilla and LLaMA – are now available to millions of people and require no technical knowledge to use.
Given the potential for widespread harm as tech companies roll out these AI systems and test them on the public, policymakers are faced with the task of determining whether and how to regulate the emerging technology. The Conversation asked three tech policy experts to explain why regulating AI is such a challenge – and why getting it right is so important.
Human frailties and a moving target
S. Shyam Sundar, professor of media effects and director, Center for Socially Responsible AI, Penn State
The reason to regulate AI is not because the technology is out of control, but because human imagination is out of proportion. Breathless media coverage has fueled irrational beliefs about AI’s capabilities and consciousness. Such beliefs build on “automation bias,” or the tendency to let your guard down when machines perform a task. An example is reduced vigilance among pilots when their planes are flying on autopilot.
Numerous studies in my lab have shown that when a machine, rather than a human, is identified as a source of interaction, it triggers a mental shortcut in the minds of users that we call a “machine heuristic.” This shortcut is the belief that machines are accurate, objective, unbiased, infallible and so on. It clouds the user’s judgment and leads the user to trust machines too much. However, it is not enough to simply disabuse users of AI’s infallibility, because humans are known to unconsciously assume competence even when the technology does not warrant it.
Research has also shown that people treat computers as social beings when the machines show even the slightest hint of humanity, such as the use of conversational language. In these cases, people apply social rules of human interaction, such as politeness and reciprocity. So, when computers seem sentient, people tend to trust them blindly. Regulation is needed to ensure that AI products earn this trust and do not exploit it.
AI presents a unique challenge because, unlike in traditional engineering systems, designers cannot be sure how AI systems will behave. When a traditional car is shipped from the factory, engineers know exactly how it will function. But with self-driving cars, the engineers can never be sure how they will perform in new situations.
Lately, thousands of people around the world have marveled at what large generative AI models like GPT-4 and DALL-E 2 produce in response to their requests. None of the engineers involved in developing these AI models could tell you exactly what the models will produce. To complicate matters, such models change and evolve with more and more interaction.
All this means there is a lot of potential for misfires. Therefore, much depends on how AI systems are deployed and what provisions are in place for recourse when human sensibilities or well-being are harmed. AI is more of an infrastructure, like a highway. You can design it to shape human behavior in the collective, but you will need mechanisms to deal with abuse, such as speeding, and unpredictable events, such as accidents.
AI developers will also have to be exceptionally creative in imagining ways in which the system might behave and try to anticipate possible violations of social standards and responsibilities. This means there is a need for regulatory or governance frameworks that rely on periodic audits and policing of AI’s outcomes and products, although I believe these frameworks must also recognize that the system designers cannot always be held responsible for mishaps.
The combination of ‘soft’ and ‘hard’ approaches
Cason Schmit, Assistant Professor of Public Health, Texas A&M University
Regulating AI is difficult. To regulate AI well, you must first define AI and understand the expected AI risks and benefits. Defining AI legally is important to identify what is subject to the law. But AI technology is still developing, so it is difficult to establish a stable legal definition.
It is also important to understand the risks and benefits of AI. Good regulations should maximize public benefits while minimizing risks. However, AI applications are still emerging, so it is difficult to know or predict what future risks or benefits may be. These kinds of unknowns make emerging technologies like AI extremely difficult to regulate with traditional laws and regulations.
Legislators are often too slow to adapt to the rapidly changing technological environment. Some new laws are obsolete by the time they are enacted, or even introduced. Without new laws, regulators must use old laws to address new problems. Sometimes this leads to legal barriers to social benefits or legal loopholes for harmful behavior.
“Soft laws” are the alternative to traditional “hard law” approaches to legislation intended to prevent specific violations. In the soft law approach, a private organization sets rules or standards for industry members. These can change faster than traditional legislation. This makes soft laws promising for emerging technologies because they can quickly adapt to new applications and risks. However, soft laws can mean soft enforcement.
Megan Doerr, Jennifer Wagner and I propose a third way: Copyleft AI with Trusted Enforcement (CAITE). This approach combines two very different concepts in intellectual property – copyleft licensing and patent trolls.
Copyleft licensing allows content to be easily used, reused or modified under the terms of a license – for example open source software. The CAITE model uses copyleft licenses to require AI users to follow specific ethical guidelines, such as transparent assessments of the impact of bias.
In our model, these licenses also transfer the legal right to enforce license violations to a trusted third party. This creates an enforcement entity that exists solely to enforce ethical AI standards and that could be partially funded by fines for unethical conduct. This entity is like a patent troll in that it is private rather than governmental and it supports itself by enforcing the legal intellectual property rights it collects from others. In this case, rather than enforcing for profit, the entity enforces the ethical guidelines defined in the licenses – a “troll for good.”
This model is flexible and adaptable to meet the needs of a changing AI environment. It also allows for substantial enforcement options, like a traditional government regulator. In this way, it combines the best elements of hard and soft law approaches to meet the unique challenges of AI.
Four key questions to ask
John Villasenor, Professor of Electrical Engineering, Law, Public Policy and Management, University of California, Los Angeles
The extraordinary recent advances in large-language model-based generative AI are prompting calls to create new AI-specific regulation. Here are four key questions to ask as that dialogue progresses:
1) Is new AI-specific regulation needed? Many of the potentially problematic outcomes of AI systems are already addressed by existing frameworks. If an AI algorithm used by a bank to evaluate loan applications leads to racially discriminatory lending decisions, it would violate the Fair Housing Act. If the AI software in a driverless car causes an accident, products liability provides a framework to pursue remedies.
2) What are the risks of regulating a rapidly changing technology based on a snapshot in time? A classic example of this is the Stored Communications Act, which was enacted in 1986 to address then-new digital communication technologies such as e-mail. With the enactment of the SCA, Congress provided significantly fewer privacy protections for emails that are more than 180 days old.
The logic was that limited storage space meant people were constantly cleaning out their inboxes by deleting older messages to make room for new ones. Consequently, messages stored for more than 180 days were deemed less important from a privacy point of view. It’s not clear that this logic ever made sense, and it certainly doesn’t make sense in the 2020s, when the majority of our emails and other stored digital communications are older than six months.
A common response to concerns about regulating technology based on a single snapshot in time is this: If a law or regulation becomes outdated, you need to update it. But that’s easier said than done. Most people agree that the SCA became obsolete decades ago. But because Congress couldn’t agree on specifically how to revise the 180-day provision, it’s still on the books more than a third of a century after its enactment.
3) What are the potential unintended consequences? The Allow States and Victims to Fight Online Sex Trafficking Act of 2017 was a law passed in 2018 that revised Section 230 of the Communications Decency Act with the goal of combating sex trafficking. While there is little evidence that it has reduced sex trafficking, it has had a hugely problematic impact on another group of people: sex workers who relied on the websites knocked offline by FOSTA-SESTA to exchange information about dangerous clients. This example shows the importance of taking a broad view of the potential effects of proposed regulations.
4) What are the economic and geopolitical implications? If regulators in the United States act to deliberately slow progress in AI, it will simply push investment and innovation—and the resulting job creation—elsewhere. While emerging AI raises many concerns, it also promises to bring enormous benefits in areas including education, medicine, manufacturing, transportation safety, agriculture, weather forecasting, access to legal services and more.
I believe AI regulations drafted with the above four questions in mind will be more likely to successfully address the potential harms of AI while also ensuring access to its benefits.