The rise of artificial intelligence has created increasing excitement and much debate about its potential to revolutionize entire industries. At its best, AI can improve medical diagnosis, more quickly identify potential national security threats, and solve crimes. But there are also significant concerns in areas such as education, intellectual property, and privacy.
Today’s WatchBlog post looks at our recent work on how generative AI systems (for example, ChatGPT and Bard) and other forms of AI have the potential to provide new capabilities, but also require responsible oversight.
The promise and perils of current AI use
Our recent work has looked at three main areas of AI advancement.
Generative AI systems can create text (in apps like ChatGPT and Bard, for example), images, audio, video, and other content when prompted by a user. These growing capabilities can be applied in a variety of fields, such as education, government, law, and entertainment. As of early 2023, some emerging generative AI systems had reached more than 100 million users. Advanced chatbots, virtual assistants, and language translation tools are examples of generative AI systems in widespread use. As news headlines indicate, this technology continues to gain worldwide attention for its benefits. But there are also concerns, such as how it could be used to replicate the work of writers and artists, generate code for more effective cyberattacks, and even help produce new chemical warfare compounds, among other things. Our recent Spotlight on Generative AI takes a deeper look at how this technology works.
Machine learning is a second application of AI that is growing in use. This technology is used in fields that require advanced image analysis, from medical diagnostics to military intelligence. In a report last year, we looked at how machine learning was used to aid the medical diagnostic process. It can be used to identify hidden or complex patterns in data, detect diseases earlier, and improve treatments. We found that its benefits include more consistent analysis of medical data and increased access to care, particularly for underserved populations. However, our work also found that limitations and bias in the data used to develop AI tools can reduce their safety and effectiveness and contribute to disparities in care for certain patient populations.
Facial recognition is another type of AI technology that has shown both promise and dangers in its use. Law enforcement—federal, state, and local—has used facial recognition technology to support criminal investigations and video surveillance. It is also used at ports of entry to match travelers to their passports. While this technology can more quickly identify potential criminals, or those who might not have been identified without it, our work also found concerns about its use. Despite improvements, inaccuracies and bias in some facial recognition systems can lead to more frequent misidentifications of certain demographic groups. There are also concerns about whether the technology violates individuals’ personal privacy.
Ensuring accountability and mitigating the risks of AI use
How can we reduce the risks and ensure these systems work appropriately for everyone as AI use continues its rapid expansion?
Appropriate oversight will be critical to ensuring that AI technology remains effective and that our data stays protected. We developed an AI Accountability Framework to help Congress address the complexities, risks, and societal consequences of emerging AI technologies. Our framework outlines key practices to ensure accountability and responsible AI use by federal agencies and other entities involved in the design, development, deployment, and ongoing monitoring of AI systems. It is built around four principles—governance, data, performance, and monitoring—that provide structures and processes to manage, operate, and oversee the implementation of AI systems.
AI technologies have enormous potential for good, but much of their power comes from their ability to outperform human capabilities and understanding. From commercial products to strategic competition between world powers, AI is poised to have a dramatic influence on both daily life and world events. This makes accountability critical to its application, and the framework can be used to ensure that people drive the system, not the other way around.