<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 10 May 2026 15:03:31 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Tech Transforms - Episodes Tagged with “AI Governance”</title>
    <link>https://techtransforms.fireside.fm/tags/aigovernance</link>
    <pubDate>Tue, 25 Nov 2025 10:00:00 -0500</pubDate>
    <description>Global technology is changing the way we live. Critical government decisions affect the intersection of technology advancement and human needs. This podcast talks to some of the most prominent influencers shaping the landscape to understand how they are leveraging technology to solve complex challenges while also meeting the needs of today's modern world.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Tech Transforms, brought to you by Owl Cyber Defense, talks to some of the most prominent influencers shaping government technology.</itunes:subtitle>
    <itunes:author>Carolyn Ford</itunes:author>
    <itunes:summary>Global technology is changing the way we live. Critical government decisions affect the intersection of technology advancement and human needs. This podcast talks to some of the most prominent influencers shaping the landscape to understand how they are leveraging technology to solve complex challenges while also meeting the needs of today's modern world.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/8/81d9d6b0-0045-48da-8495-fd87c4613d7f/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:owner>
      <itunes:name>Carolyn Ford</itunes:name>
      <itunes:email>Galadrielford@gmail.com</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <itunes:category text="Government"/>
<item>
  <title>Episode 111: One Woman’s Rebellion Against Reckless AI</title>
  <link>https://techtransforms.fireside.fm/111</link>
  <guid isPermaLink="false">bf31a00f-5ddb-4181-9e9d-b03daadfed94</guid>
  <pubDate>Tue, 25 Nov 2025 10:00:00 -0500</pubDate>
  <author>Carolyn Ford</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/81d9d6b0-0045-48da-8495-fd87c4613d7f/bf31a00f-5ddb-4181-9e9d-b03daadfed94.mp3" length="66886497" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Carolyn Ford</itunes:author>
  <itunes:subtitle>AI risks aren’t sci-fi — they’re already woven into our schools, healthcare, and public systems.
In this week’s Tech Transforms, Carolyn talks with Janet Kang, Executive Director at Just Horizons Alliance, about ethical AI, real-time risk, and why we need “circuit breakers” for AI before harm scales.</itunes:subtitle>
  <itunes:duration>46:24</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/8/81d9d6b0-0045-48da-8495-fd87c4613d7f/episodes/b/bf31a00f-5ddb-4181-9e9d-b03daadfed94/cover.jpg?v=1"/>
  <description>&lt;p&gt;In this thought-provoking episode of Tech Transforms, host Carolyn Ford welcomes Janet Kang, a Silicon Valley entrepreneur turned nonprofit leader whose work sits at the intersection of AI innovation, ethics, and long-term societal impact. After building companies since age 13, launching multiple ed-tech ventures, and incubating AI-powered products in a corporate venture studio, Janet experienced firsthand the exhilarating speed and unsettling risks of deploying AI in real-world environments. Those experiences ultimately led her to join Just Horizons Alliance, a nonprofit committed to developing open protocols, ethical frameworks, and real-time auditing tools that help organizations build and deploy AI responsibly.&lt;/p&gt;

&lt;p&gt;Janet shares candid stories from the early days of AI adoption, where models behaved inconsistently, guardrails lagged behind product timelines, and the pressure to scale fast often overshadowed deeper questions of safety and accountability. She explains why today’s biggest risk isn’t far-off superintelligence; it's the immediate, under-regulated integration of AI into education, healthcare, hiring systems, and public services. For younger users especially, she warns, AI already shapes communication, decision-making, confidence, and even identity, yet most tech leaders lack the tools to properly assess or mitigate those risks.&lt;/p&gt;

&lt;p&gt;Carolyn and Janet explore why ethical AI requires more than thought leadership and policy statements. It requires action: adversarial testing, real-world simulations, contextual frameworks, and independent audits that account for messy human behavior, not just ideal use cases. They also discuss the structural barriers women face in tech, the mentors who “give up their seat” to make space, and the mindset shift that comes with parenthood: thinking in decades, not quarters.&lt;br&gt;
Looking ahead, Janet envisions a future where AI becomes “infrastructure, not the main character,” as invisible and reliable as flipping a light switch, because circuit breakers, safety layers, and accountability systems are finally in place. Until then, she calls on builders, executives, educators, and policymakers to take practical steps now: test relentlessly, understand failure modes, prioritize vulnerable users, and choose impact over speed.&lt;/p&gt;

&lt;p&gt;This is an episode for leaders who want to innovate boldly and responsibly: those wrestling with how to balance progress with protection, and with how to shape an AI-powered future worthy of the next generation.&lt;/p&gt;

&lt;p&gt;Show Notes:&lt;br&gt;
&lt;a href="http://www.justhorizons.org" target="_blank" rel="nofollow noopener"&gt;www.justhorizons.org&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/kangjanet/" target="_blank" rel="nofollow noopener"&gt;https://www.linkedin.com/in/kangjanet/&lt;/a&gt;&lt;br&gt;
Pause superintelligence petition - &lt;a href="https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio" target="_blank" rel="nofollow noopener"&gt;https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Books Mentioned&lt;br&gt;
Empire of AI — Karen Hao&lt;br&gt;
The Alignment Problem — Brian Christian&lt;br&gt;
The Broken Earth Trilogy — N.K. Jemisin (recommended by Carolyn)&lt;/p&gt;
</description>
  <itunes:keywords>AI Ethics, TechTransforms, AI governance, AI for Good, AI safety frameworks</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>In this thought-provoking episode of Tech Transforms, host Carolyn Ford welcomes Janet Kang, a Silicon Valley entrepreneur turned nonprofit leader whose work sits at the intersection of AI innovation, ethics, and long-term societal impact. After building companies since age 13, launching multiple ed-tech ventures, and incubating AI-powered products in a corporate venture studio, Janet experienced firsthand the exhilarating speed and unsettling risks of deploying AI in real-world environments. Those experiences ultimately led her to join Just Horizons Alliance, a nonprofit committed to developing open protocols, ethical frameworks, and real-time auditing tools that help organizations build and deploy AI responsibly.</p>

<p>Janet shares candid stories from the early days of AI adoption, where models behaved inconsistently, guardrails lagged behind product timelines, and the pressure to scale fast often overshadowed deeper questions of safety and accountability. She explains why today’s biggest risk isn’t far-off superintelligence; it&#39;s the immediate, under-regulated integration of AI into education, healthcare, hiring systems, and public services. For younger users especially, she warns, AI already shapes communication, decision-making, confidence, and even identity, yet most tech leaders lack the tools to properly assess or mitigate those risks.</p>

<p>Carolyn and Janet explore why ethical AI requires more than thought leadership and policy statements. It requires action: adversarial testing, real-world simulations, contextual frameworks, and independent audits that account for messy human behavior, not just ideal use cases. They also discuss the structural barriers women face in tech, the mentors who “give up their seat” to make space, and the mindset shift that comes with parenthood: thinking in decades, not quarters.<br>
Looking ahead, Janet envisions a future where AI becomes “infrastructure, not the main character,” as invisible and reliable as flipping a light switch, because circuit breakers, safety layers, and accountability systems are finally in place. Until then, she calls on builders, executives, educators, and policymakers to take practical steps now: test relentlessly, understand failure modes, prioritize vulnerable users, and choose impact over speed.</p>

<p>This is an episode for leaders who want to innovate boldly and responsibly: those wrestling with how to balance progress with protection, and with how to shape an AI-powered future worthy of the next generation.</p>

<p>Show Notes:<br>
<a href="http://www.justhorizons.org" rel="nofollow">www.justhorizons.org</a><br>
<a href="https://www.linkedin.com/in/kangjanet/" rel="nofollow">https://www.linkedin.com/in/kangjanet/</a><br>
Pause superintelligence petition - <a href="https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio" rel="nofollow">https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio</a></p>

<p>Books Mentioned<br>
Empire of AI — Karen Hao<br>
The Alignment Problem — Brian Christian<br>
The Broken Earth Trilogy — N.K. Jemisin (recommended by Carolyn)</p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>In this thought-provoking episode of Tech Transforms, host Carolyn Ford welcomes Janet Kang, a Silicon Valley entrepreneur turned nonprofit leader whose work sits at the intersection of AI innovation, ethics, and long-term societal impact. After building companies since age 13, launching multiple ed-tech ventures, and incubating AI-powered products in a corporate venture studio, Janet experienced firsthand the exhilarating speed and unsettling risks of deploying AI in real-world environments. Those experiences ultimately led her to join Just Horizons Alliance, a nonprofit committed to developing open protocols, ethical frameworks, and real-time auditing tools that help organizations build and deploy AI responsibly.</p>

<p>Janet shares candid stories from the early days of AI adoption, where models behaved inconsistently, guardrails lagged behind product timelines, and the pressure to scale fast often overshadowed deeper questions of safety and accountability. She explains why today’s biggest risk isn’t far-off superintelligence; it&#39;s the immediate, under-regulated integration of AI into education, healthcare, hiring systems, and public services. For younger users especially, she warns, AI already shapes communication, decision-making, confidence, and even identity, yet most tech leaders lack the tools to properly assess or mitigate those risks.</p>

<p>Carolyn and Janet explore why ethical AI requires more than thought leadership and policy statements. It requires action: adversarial testing, real-world simulations, contextual frameworks, and independent audits that account for messy human behavior, not just ideal use cases. They also discuss the structural barriers women face in tech, the mentors who “give up their seat” to make space, and the mindset shift that comes with parenthood: thinking in decades, not quarters.<br>
Looking ahead, Janet envisions a future where AI becomes “infrastructure, not the main character,” as invisible and reliable as flipping a light switch, because circuit breakers, safety layers, and accountability systems are finally in place. Until then, she calls on builders, executives, educators, and policymakers to take practical steps now: test relentlessly, understand failure modes, prioritize vulnerable users, and choose impact over speed.</p>

<p>This is an episode for leaders who want to innovate boldly and responsibly: those wrestling with how to balance progress with protection, and with how to shape an AI-powered future worthy of the next generation.</p>

<p>Show Notes:<br>
<a href="http://www.justhorizons.org" rel="nofollow">www.justhorizons.org</a><br>
<a href="https://www.linkedin.com/in/kangjanet/" rel="nofollow">https://www.linkedin.com/in/kangjanet/</a><br>
Pause superintelligence petition - <a href="https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio" rel="nofollow">https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio</a></p>

<p>Books Mentioned<br>
Empire of AI — Karen Hao<br>
The Alignment Problem — Brian Christian<br>
The Broken Earth Trilogy — N.K. Jemisin (recommended by Carolyn)</p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
