As generative AI gains a stronger foothold in the enterprise, managers are being urged to pay greater attention to AI ethics — a major challenge as many issues relate to bias, transparency, explainability and trust. To illuminate the various nuances of ethical AI, government agencies, regulators, and independent groups are developing ethical AI frameworks, tools, and resources.
“The most impactful frameworks or approaches to addressing ethical AI issues … take into account all aspects of the technology—its use, risks, and potential outcomes,” said Tad Roselund, managing director and senior partner at Boston Consulting Group (BCG). Many firms approach the development of ethical AI frameworks from a purely values-based position, he added. It is important to take a holistic approach to ethical AI that integrates strategy with process and technical controls, cultural norms and governance. These three elements of an ethical AI framework can help shape responsible AI policies and initiatives. And it all starts with establishing a set of principles around AI use.
“Often, businesses and leaders are narrowly focused on one of these elements when they should be focusing on all of them,” Roselund reasoned. Addressing any element can be a good starting point, but by considering all three elements—controls, cultural norms, and governance—businesses can design an all-encompassing ethical AI framework. This approach is especially important when it comes to generative AI and its ability to democratize the use of AI.
Businesses must also instill AI ethics in those who develop and use AI tools and technologies. Open communication, educational resources, and enforced guidelines and processes for the proper use of AI, Roselund advised, can further strengthen an internal AI ethics framework that addresses generative AI.
Top Resources for Forming an Ethical AI Framework
There are various standards, tools, techniques and other resources to help shape a company’s internal ethical AI framework. The following are listed alphabetically:
AI Now Institute focuses on the social implications of AI and policy research in responsible AI. Research areas include algorithmic accountability, antitrust issues, biometrics, worker data rights, large-scale AI models and privacy. The report “AI Now 2023 Landscape: Confronting Tech Power” offers a deep dive into many ethical issues that can be useful in developing a responsible AI policy.

Berkman Klein Center for Internet & Society at Harvard University promotes research on the big questions related to the ethics and governance of AI. It has contributed to the dialogue on information quality, influenced policymaking on algorithms in criminal justice, supported the development of AI governance frameworks, studied algorithmic accountability and collaborated with AI providers.

CEN-CENELEC Joint Technical Committee on Artificial Intelligence (JTC 21) is an ongoing EU initiative to develop responsible AI standards. The group plans to produce standards for the European market and inform EU legislation, policies and values. It also plans to specify technical requirements for characterizing transparency, robustness and accuracy in AI systems.

Institute for Technology, Ethics and Culture (ITEC) Handbook is a collaboration between Santa Clara University’s Markkula Center for Applied Ethics and the Vatican to develop a practical, incremental roadmap for technology ethics. The handbook includes a five-stage maturity model, with specific, measurable steps companies can take at each level of maturity. It also promotes an operational approach to implementing ethics as an ongoing practice, akin to DevSecOps for ethics. The core idea is to bring legal, technical and business teams together in the early stages of ethical AI development to root out mistakes when they are much cheaper to fix than after responsible AI deployment.

ISO/IEC 23894:2023 IT-AI guidance on risk management describes how an organization can manage risks specifically related to AI. It can help standardize the technical language characterizing underlying principles and how those principles apply to the development, provision or offering of AI systems. It also covers policies, procedures and practices for assessing, treating, monitoring, reviewing and recording risk. It is highly technical and aimed at engineers rather than business experts.

NIST AI Risk Management Framework (AI RMF 1.0) guides government agencies and the private sector on managing emerging AI risks and promoting responsible AI. Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, pointed to the depth of the NIST framework, particularly its specificity in implementing controls and policies to better govern AI systems within different organizational contexts.

Nvidia NeMo Guardrails provides a flexible interface for defining specific behavior patterns that bots should follow, using the Colang modeling language. One chief data scientist said his company uses the open source toolkit to prevent a support chatbot on a lawyer’s website from providing answers that could be construed as legal advice; a brief sketch of this pattern follows this list.

Stanford Institute for Human-Centered Artificial Intelligence (HAI) provides ongoing research and guidance on best practices for human-centered AI. One early initiative, in collaboration with Stanford Medicine, is Responsible AI for Safe and Equitable Health, which addresses ethical and safety issues surrounding AI in health and medicine.
“Towards unified objectives for self-reflective AI” is a paper by Matthias Samwald, Robert Praas and Konstantin Hebenstreit that takes a Socratic approach to identifying underlying assumptions, contradictions and errors through dialogue and questions about truth, transparency, robustness and alignment of ethical principles. One goal is to develop AI metasystems in which two or more component AI models complement, critique and improve their mutual performance; a rough illustration of this idea also appears after this list.

World Economic Forum’s “The Presidio Recommendations on Responsible Generative AI” white paper contains 30 “action-oriented” recommendations to “navigate AI complexities and ethically harness its potential.” It includes sections on responsible development and release of generative AI, open innovation and international collaboration, and social progress.
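To make the NeMo Guardrails entry concrete, here is a minimal sketch of a rail that keeps a support bot away from legal advice, in the spirit of the use case described above. The Colang flow, example utterances and model settings are illustrative assumptions rather than a real production configuration:

```python
# Minimal NeMo Guardrails sketch: refuse legal-advice questions.
# Assumes `pip install nemoguardrails` and an OPENAI_API_KEY in the
# environment; the utterances and model choice below are hypothetical.
from nemoguardrails import LLMRails, RailsConfig

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

# Colang 1.0: example user utterances, a canned bot response and a
# flow that connects them.
COLANG_CONFIG = """
define user ask legal advice
  "Can I sue my landlord over this?"
  "Is this contract enforceable?"

define bot refuse legal advice
  "I'm sorry, I can't provide legal advice. Please consult a licensed attorney."

define flow refuse legal advice
  user ask legal advice
  bot refuse legal advice
"""

config = RailsConfig.from_content(
    colang_content=COLANG_CONFIG, yaml_content=YAML_CONFIG
)
rails = LLMRails(config)

response = rails.generate(
    messages=[{"role": "user", "content": "Can I sue my landlord?"}]
)
print(response["content"])  # -> the canned refusal, not legal advice
```

Keeping refusals as canned responses in the rail, rather than relying on the underlying model’s judgment, makes the bot’s behavior auditable, which is the point of a guardrail layer.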
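The metasystem goal in the Samwald, Praas and Hebenstreit paper can be pictured as a critique-and-revise loop between two models. The sketch below is a loose interpretation, not the paper’s method; `llm_complete` is a hypothetical stand-in for whatever completion call an implementation would use:

```python
# Illustrative two-model "metasystem": a generator answers, a critic
# checks the answer against stated principles, and the generator
# revises. `llm_complete` is a hypothetical LLM completion function.
from typing import Callable

PRINCIPLES = "Be truthful and transparent, and acknowledge uncertainty."

def self_reflective_answer(
    question: str,
    llm_complete: Callable[[str], str],
    max_rounds: int = 2,
) -> str:
    answer = llm_complete(f"Question: {question}\nAnswer:")
    for _ in range(max_rounds):
        critique = llm_complete(
            f"Principles: {PRINCIPLES}\n"
            f"Question: {question}\nAnswer: {answer}\n"
            "List any violations of the principles, or reply 'OK':"
        )
        if critique.strip().upper() == "OK":
            break  # the critic found no violations
        answer = llm_complete(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite a revised answer:"
        )
    return answer
```

In practice the generator and critic would be different models, or at least differently prompted instances, so that their blind spots do not simply overlap.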
Ethical AI Best Practices
Ethical AI resources are a good starting point for customizing and establishing a company’s ethical AI framework and introducing responsible AI policies and initiatives. The following best practices can help achieve these goals:
Appoint an ethics leader. There are cases where many well-intentioned people sit around a table discussing various ethical AI issues but fail to make informed, decisive calls to action, Roselund noted. A single leader appointed by the CEO can drive decisions and actions.

Take a cross-functional approach. Implementing AI tools and technologies across the company requires cross-functional collaboration, so the policies and procedures that ensure AI’s responsible use should reflect that approach, Roselund advised. Ethical AI requires leadership, but its success is not the sole responsibility of one person or department.

Adapt the ethical AI framework. A generative AI ethics framework should be tailored to a company’s own unique style, goals and risks, without forcing a square peg into a round hole. “Overloaded program implementations,” Gupta said, “ultimately lead to premature termination due to inefficiency, cost overruns and burnout of staff tasked with putting the program in place.” Harmonize ethical AI programs with existing workflows and governance structures; Gupta compared this approach to preparing for a successful organ transplant.

Establish ethical AI measurements. For employees to buy into an ethical AI framework and responsible AI policies, companies need to be transparent about their intentions, expectations and corporate values, as well as their plans to measure success. “Employees must not only be made aware of these new ethical stresses, but they must also be measured in their adaptation and rewarded for adapting to new expectations,” explained Brian Green, director of technology ethics at the Markkula Center for Applied Ethics.

Be open to different opinions. It is essential to engage a diverse group of voices, including ethicists, field experts and those in surrounding communities who can influence AI deployment. “By working together, we gain a deeper understanding of ethical concerns and viewpoints and develop AI systems that are inclusive and respectful of diverse values,” said Paul Pallath, vice president of the applied AI practice at technology consulting firm Searce.

Take a holistic perspective. Legality does not always align with ethics, Pallath warned. Sometimes legally acceptable actions can raise ethical concerns, so ethical decision-making must address both legal and moral dimensions. This approach ensures that AI technologies meet legal requirements while upholding ethical principles to safeguard the well-being of individuals and society.
Future of Ethical AI Frameworks
Researchers, business leaders, and regulators continue to explore ethical issues related to responsible AI. Legal challenges involving copyright and intellectual property protection will need to be addressed, Gupta predicted. Issues related to generative AI and hallucinations will take longer to address as some of those potential problems are inherent in the design of today’s AI systems.
Businesses and data scientists will also need to better address issues of bias and inequality in training data and machine learning algorithms. In addition, issues related to AI system security, including cyber-attacks against large language models, will require continuous engineering and design improvements to keep pace with increasingly sophisticated criminal adversaries.
“AI ethics will only grow in importance,” Gupta surmised, “and will experience many more overlaps with adjacent fields to strengthen the contributions it can make to the broader AI community.” In the near future, Pallath sees AI evolving towards enhancing human capabilities in conjunction with AI technology rather than replacing humans entirely. “Ethical considerations,” he explained, “will revolve around optimizing AI’s role in enhancing human creativity, productivity and decision-making, all while maintaining human control and oversight.”
AI ethics will remain a rapidly growing movement for the foreseeable future, Green added. “[W]ith AI,” he admitted, “we’ve now created thinkers outside of ourselves and discovered that, unless we give them some ethical thoughts, they won’t make good choices.”
AI ethics is never done. Ethical judgments may need to change as conditions change. “We need to maintain our awareness and skill,” Green emphasized, “so that, if AI does not benefit society, we can make the necessary improvements.”