Developing Responsible AI - Avoiding Bias, Discrimination, and Inequity
1h 1m
Created on June 03, 2024
Intermediate
Overview
Developing AI tools and training AI models raises significant legal issues as well as equitable considerations, including the need to avoid biased models, discriminatory results, and other inequitable outcomes. The White House Executive Order on AI mandates that numerous agencies take action on these issues. Various states have passed laws limiting the use of AI in the workplace for employment decisions and employee monitoring, while others have passed laws to protect consumers from harmful uses of AI. This course will address how these issues arise, the legal exposure that can result if developers are not careful in developing AI, and the various ways in which the use of AI can lead to biased or discriminatory results for employees, consumers, and others. The course will also provide details on the tools, resources, and approaches for ensuring responsible AI.
This course will benefit in-house counsel, outside counsel, corporate executives, and AI developers.
Learning Objectives:
- Identify how bias and discrimination can arise with AI, including biased data sets, biased or discriminatory AI algorithms, and the use of AI in a manner that leads to bias and discrimination
- Break down the implications of the White House Executive Order on AI and the various agency mandates to ensure responsible AI
- Analyze the laws and regulatory guidance that relate to the biased and discriminatory development and use of AI
- Review the tools, resources, and approaches for ensuring the responsible development of AI