EP 377: Confronting AI Bias and AI Discrimination in the Workplace
Send Everyday AI and Jordan a text message
Think AI is neutral? Think again. This is the workplace impact you never saw coming. What happens when the tech we rely on to be impartial actually reinforces bias? Join us for a deep dive into AI bias and discrimination with Samta Kapoor, EY’s Americas Energy AI and Responsible AI Leader.
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Jordan and Samta questions on AI
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
1. Business Leaders Confronting AI Bias and Discrimination
2. AI Guardrails
3. Bias and Discrimination in AI Models
4. AI and the Future of Work
5. Responsible AI and the Future
Timestamps:
02:10 About Samta Kapoor and her role at EY
05:33 AI has risks, biases; guardrails recommended.
06:42 Governance ensures technology is scaled responsibly.
13:33 Models reflect biases; they mirror societal discrimination.
16:10 Embracing AI enhances adaptability, not job replacement.
19:04 Leveraging AI for business transformation and innovation.
23:05 Rapidly changing technology requires agile adaptation.
25:12 Address AI bias to reduce employee anxiety.
Keywords:
generative AI, AI bias, AI discrimination, business leaders, model bias, model discrimination, AI models, AI guardrails, AI governance, AI policy, Ernst and Young, AI risk, AI implementation, AI investment, AI hype, AI fear, AI training, workplace AI, AI understanding, AI usage, AI responsibilities, generative AI implementation, practical AI use cases, AI audit, AI technology advancement, multimodal models, AI tech enablement, AI innovation, company AI policies, AI anxiety.