
Psych Tech @ Work

Responsible AI In 2025 and Beyond – Three pillars of progress

Apr 15, 2025
54:44

"Part of putting an AI strategy together is understanding the limitations and where unintended consequences could occur, which is why you need diversity of thought within committees created to guide AI governance and ethics." 

– Bob Pulver

My guest for this episode is my friend in ethical/responsible AI, Bob Pulver, the founder of CognitivePath.io and host of the podcast "Elevate Your AIQ." 

Bob specializes in helping organizations navigate the complexities of responsible AI, from strategic adoption to effective governance practices.  

Bob was my guest about a year ago, and in this episode he drops back in to discuss what has changed in the fast-paced world of AI across three pillars of responsible AI usage:

* Human-Centric AI 

* AI Adoption and Readiness 

* AI Regulation and Governance

These three pillars frame the past year's progress in ethical AI. They are the themes we explore in our conversation, along with our thoughts on what has changed and evolved over the past year.

1. Human-Centric AI

Change from Last Year:

* Shift from compliance-driven AI towards a more holistic, human-focused perspective, emphasizing AI's potential to enhance human capabilities and fairness.

Reasons for Change:

* Increasing comfort level with AI and experience with the benefits that it brings to our work

* Continued exploration and development of low stakes, low friction use cases

* AI continues to be seen as a partner and magnifier of human capabilities

What to Expect in the Next Year:

* Increased experience with human-machine partnerships

* Increased opportunities to build superpowers

* Increased adoption of human-centric tools by employers

2. AI Adoption and Readiness

Change from Last Year:

* Organizations have moved from cautious, fragmented adoption to structured, strategic readiness and literacy initiatives.

* Significant growth in AI educational resources and adoption within teams, rather than just individuals.

Reasons for Change:

* Improved understanding of AI's benefits and limitations, reducing fears and resistance.

* Availability of targeted AI literacy programs, promoting organization-wide AI understanding and capability building.

What to Expect in the Next Year:

* More systematic frameworks for AI adoption across entire organizations.

* Increased demand for formal AI proficiency assessments to ensure responsible and effective usage.

3. AI Regulation and Governance

Change from Last Year:

* Transition from broad discussions about potential regulations towards concrete legislative actions, particularly at state and international levels (e.g., EU AI Act, California laws).

* Growing momentum to hold AI vendors accountable for ethical AI use.

Reasons for Change:

* Growing awareness of risks associated with unchecked AI deployment.

* An increased push to stay on the right side of AI, with legislative activity at the state and global levels addressing transparency, accountability, and fairness.

What to Expect in the Next Year:

* Implementation of stricter AI audits and compliance standards.

* Clearer responsibilities for vendors and organizations regarding ethical AI practices.

* Finally, some concrete standards that will require fundamental changes in oversight and will likely create some messy situations along the way.

Practical Takeaways:

What should I/we be doing to move the ball forward and realize AI's full potential while limiting collateral damage?

Prioritize Human-Centric AI Design

* Define Clear Use Cases: Ensure AI is solving a genuine human-centered problem rather than just introducing technology for technology’s sake.

* Promote Transparency and Trust: Clearly communicate how and why AI is being used, ensuring it enhances rather than replaces human judgment and involvement.

Build Robust AI Literacy and Education Programs

* Develop Organizational AI Literacy: Implement structured training initiatives that educate employees about fundamental AI concepts, the practical implications of AI use, and ethical considerations.

* Create Role-Specific Training: Provide tailored AI skill-building programs based on roles and responsibilities, moving beyond individual productivity to team-based effectiveness.

Strengthen AI Governance and Oversight

* Adopt Proactive Compliance Practices: Align internal policies with rigorous standards such as the EU AI Act to preemptively prepare for emerging local and global legislation.

* Vendor Accountability: Develop clear guidelines and rigorous vetting processes for vendors to ensure transparency and responsible use, preparing your organization for upcoming regulatory audits.

Monitor AI Effectiveness and Impact

* Continuous Monitoring: Shift from periodic audits to continuous monitoring of AI tools to ensure fairness, transparency, and functionality.

* Evaluate Human Impact Regularly: Regularly assess the human impact of AI tools on employee experience, fairness in decision-making, and organizational trust.

Email Bob: bob@cognitivepath.io

Listen to Bob's awesome podcast, Elevate Your AIQ.



This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit charleshandler.substack.com
