Managing the Risks of Generative AI

Generative artificial intelligence (AI) has become widely popular, but its adoption by businesses carries real ethical risk. Organizations need a clear framework that aligns generative AI goals with business objectives, along with processes for operationalizing ethical AI practices.

The guidelines for the ethical development of generative AI provide a comprehensive framework for organizations as they adopt this technology and cover five focus areas: accuracy, safety, honesty, empowerment, and sustainability.

  • To ensure accuracy, organizations should train AI models on their own data, communicate uncertainty, and enable validation.
  • Safety involves mitigating bias, toxicity, and harmful outputs, as well as protecting privacy and identifying vulnerabilities.
  • Honesty requires respecting data provenance, obtaining consent, and transparently disclosing AI-generated content.
  • Empowerment emphasizes AI's supportive role: keeping humans involved in decision-making, ensuring accessibility, and treating contributors with respect.
  • Sustainability addresses the environmental impact of large language models and encourages minimizing their size and energy consumption.

Integrating generative AI in business applications requires using zero-party and first-party data, keeping data fresh and well-labeled, and involving humans in the loop for context and accuracy. Testing and continuous oversight are essential, and feedback from employees, advisors, and communities helps identify risks and make improvements.
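The human-in-the-loop idea above can be sketched in code. This is an illustrative sketch only, not part of the guidelines themselves: the names (`Draft`, `route_draft`, `CONFIDENCE_THRESHOLD`) and the specific routing rules are hypothetical, chosen to show one way a generated draft could be gated behind human review when confidence is low or a separate content check raises flags.

```python
# Hypothetical sketch of a human-in-the-loop gate for AI-generated
# drafts. All names and thresholds here are illustrative assumptions,
# not from any specific product or framework.
from dataclasses import dataclass, field

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; tune per use case


@dataclass
class Draft:
    text: str
    model_confidence: float           # 0.0-1.0, reported by the model
    flagged_terms: list = field(default_factory=list)  # from a separate bias/toxicity check


def route_draft(draft: Draft) -> str:
    """Decide whether an AI draft ships directly or goes to a person.

    Any flagged content, or confidence below the threshold, routes to
    human review, keeping a person in the loop for context and accuracy.
    """
    if draft.flagged_terms:
        return "human_review"
    if draft.model_confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "auto_publish"
```

In this sketch, `route_draft(Draft("Welcome email copy", 0.92))` returns `"auto_publish"`, while a low-confidence or flagged draft is held for review; the decisions logged here could also feed the testing and feedback cycle described above.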

By following these guidelines, organizations can responsibly adopt generative AI, mitigate risks, reduce bias, and uphold ethical AI practices that serve both their own interests and broader societal responsibilities.