<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Thu, 07 May 2026 16:50:09 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Tech Transforms - Episodes Tagged with “AI Ethics”</title>
    <link>https://techtransforms.fireside.fm/tags/aiethics</link>
    <pubDate>Tue, 25 Nov 2025 10:00:00 -0500</pubDate>
    <description>Global technology is changing the way we live. Critical government decisions affect the intersection of technology advancement and human needs. This podcast talks to some of the most prominent influencers shaping the landscape to understand how they are leveraging technology to solve complex challenges while also meeting the needs of today's modern world.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Tech Transforms, brought to you by Owl Cyber Defense, talks to some of the most prominent influencers shaping government technology.</itunes:subtitle>
    <itunes:author>Carolyn Ford</itunes:author>
    <itunes:summary>Global technology is changing the way we live. Critical government decisions affect the intersection of technology advancement and human needs. This podcast talks to some of the most prominent influencers shaping the landscape to understand how they are leveraging technology to solve complex challenges while also meeting the needs of today's modern world.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/8/81d9d6b0-0045-48da-8495-fd87c4613d7f/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:owner>
      <itunes:name>Carolyn Ford</itunes:name>
      <itunes:email>Galadrielford@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Technology"/>
<itunes:category text="Government"/>
<item>
  <title>Episode 111: One Woman’s Rebellion Against Reckless AI</title>
  <link>https://techtransforms.fireside.fm/111</link>
  <guid isPermaLink="false">bf31a00f-5ddb-4181-9e9d-b03daadfed94</guid>
  <pubDate>Tue, 25 Nov 2025 10:00:00 -0500</pubDate>
  <author>Carolyn Ford</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/81d9d6b0-0045-48da-8495-fd87c4613d7f/bf31a00f-5ddb-4181-9e9d-b03daadfed94.mp3" length="66886497" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Carolyn Ford</itunes:author>
  <itunes:subtitle>AI risks aren’t sci-fi — they’re already woven into our schools, healthcare, and public systems.
In this week’s Tech Transforms, Carolyn talks with Janet Kang, Executive Director at Just Horizons Alliance, about ethical AI, real-time risk, and why we need “circuit breakers” for AI before harm scales.</itunes:subtitle>
  <itunes:duration>46:24</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/8/81d9d6b0-0045-48da-8495-fd87c4613d7f/episodes/b/bf31a00f-5ddb-4181-9e9d-b03daadfed94/cover.jpg?v=1"/>
  <description>&lt;p&gt;In this thought-provoking episode of Tech Transforms, host Carolyn Ford welcomes Janet Kang, a Silicon Valley entrepreneur turned nonprofit leader whose work sits at the intersection of AI innovation, ethics, and long-term societal impact. After building companies since age 13, launching multiple ed-tech ventures, and incubating AI-powered products in a corporate venture studio, Janet experienced firsthand the exhilarating speed and unsettling risks of deploying AI in real-world environments. Those experiences ultimately led her to join Just Horizons Alliance, a nonprofit committed to developing open protocols, ethical frameworks, and real-time auditing tools that help organizations build and deploy AI responsibly.&lt;/p&gt;

&lt;p&gt;Janet shares candid stories from the early days of AI adoption, where models behaved inconsistently, guardrails lagged behind product timelines, and the pressure to scale fast often overshadowed deeper questions of safety and accountability. She explains why today’s biggest risk isn’t far-off superintelligence; it's the immediate, under-regulated integration of AI into education, healthcare, hiring systems, and public services. For younger users especially, she warns, AI already shapes communication, decision-making, confidence, and even identity, yet most tech leaders lack the tools to properly assess or mitigate those risks.&lt;/p&gt;

&lt;p&gt;Carolyn and Janet explore why ethical AI requires more than thought leadership and policy statements. It requires action: adversarial testing, real-world simulations, contextual frameworks, and independent audits that account for messy human behavior, not just ideal use cases. They also discuss the structural barriers women face in tech, the mentors who “give up their seat” to make space, and the mindset shift that comes with parenthood: thinking in decades, not quarters.&lt;br&gt;
Looking ahead, Janet envisions a future where AI becomes “infrastructure, not the main character”: as invisible and reliable as flipping a light switch, because circuit breakers, safety layers, and accountability systems are finally in place. Until then, she calls on builders, executives, educators, and policymakers to take practical steps now: test relentlessly, understand failure modes, prioritize vulnerable users, and choose impact over speed.&lt;/p&gt;

&lt;p&gt;This is an episode for leaders who want to innovate boldly and responsibly, for anyone wrestling with how to balance progress with protection and how to shape an AI-powered future worthy of the next generation.&lt;/p&gt;

&lt;p&gt;Show Notes:&lt;br&gt;
&lt;a href="http://www.justhorizons.org" target="_blank" rel="nofollow noopener"&gt;www.justhorizons.org&lt;/a&gt;&lt;br&gt;
&lt;a href="https://www.linkedin.com/in/kangjanet/" target="_blank" rel="nofollow noopener"&gt;https://www.linkedin.com/in/kangjanet/&lt;/a&gt;&lt;br&gt;
Pause superintelligence petition - &lt;a href="https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio" target="_blank" rel="nofollow noopener"&gt;https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Books Mentioned&lt;br&gt;
Empire of AI — Karen Hao&lt;br&gt;
The Alignment Problem — Brian Christian&lt;br&gt;
The Broken Earth Trilogy — N.K. Jemisin (recommended by Carolyn)&lt;/p&gt;
</description>
  <itunes:keywords>AI Ethics, TechTransforms, AI governance, AI for Good, AI safety frameworks</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>In this thought-provoking episode of Tech Transforms, host Carolyn Ford welcomes Janet Kang, a Silicon Valley entrepreneur turned nonprofit leader whose work sits at the intersection of AI innovation, ethics, and long-term societal impact. After building companies since age 13, launching multiple ed-tech ventures, and incubating AI-powered products in a corporate venture studio, Janet experienced firsthand the exhilarating speed and unsettling risks of deploying AI in real-world environments. Those experiences ultimately led her to join Just Horizons Alliance, a nonprofit committed to developing open protocols, ethical frameworks, and real-time auditing tools that help organizations build and deploy AI responsibly.</p>

<p>Janet shares candid stories from the early days of AI adoption, where models behaved inconsistently, guardrails lagged behind product timelines, and the pressure to scale fast often overshadowed deeper questions of safety and accountability. She explains why today’s biggest risk isn’t far-off superintelligence; it&#39;s the immediate, under-regulated integration of AI into education, healthcare, hiring systems, and public services. For younger users especially, she warns, AI already shapes communication, decision-making, confidence, and even identity, yet most tech leaders lack the tools to properly assess or mitigate those risks.</p>

<p>Carolyn and Janet explore why ethical AI requires more than thought leadership and policy statements. It requires action: adversarial testing, real-world simulations, contextual frameworks, and independent audits that account for messy human behavior, not just ideal use cases. They also discuss the structural barriers women face in tech, the mentors who “give up their seat” to make space, and the mindset shift that comes with parenthood: thinking in decades, not quarters.<br>
Looking ahead, Janet envisions a future where AI becomes “infrastructure, not the main character”: as invisible and reliable as flipping a light switch, because circuit breakers, safety layers, and accountability systems are finally in place. Until then, she calls on builders, executives, educators, and policymakers to take practical steps now: test relentlessly, understand failure modes, prioritize vulnerable users, and choose impact over speed.</p>

<p>This is an episode for leaders who want to innovate boldly and responsibly, for anyone wrestling with how to balance progress with protection and how to shape an AI-powered future worthy of the next generation.</p>

<p>Show Notes:<br>
<a href="http://www.justhorizons.org" rel="nofollow">www.justhorizons.org</a><br>
<a href="https://www.linkedin.com/in/kangjanet/" rel="nofollow">https://www.linkedin.com/in/kangjanet/</a><br>
Pause superintelligence petition - <a href="https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio" rel="nofollow">https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio</a></p>

<p>Books Mentioned<br>
Empire of AI — Karen Hao<br>
The Alignment Problem — Brian Christian<br>
The Broken Earth Trilogy — N.K. Jemisin (recommended by Carolyn)</p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>In this thought-provoking episode of Tech Transforms, host Carolyn Ford welcomes Janet Kang, a Silicon Valley entrepreneur turned nonprofit leader whose work sits at the intersection of AI innovation, ethics, and long-term societal impact. After building companies since age 13, launching multiple ed-tech ventures, and incubating AI-powered products in a corporate venture studio, Janet experienced firsthand the exhilarating speed and unsettling risks of deploying AI in real-world environments. Those experiences ultimately led her to join Just Horizons Alliance, a nonprofit committed to developing open protocols, ethical frameworks, and real-time auditing tools that help organizations build and deploy AI responsibly.</p>

<p>Janet shares candid stories from the early days of AI adoption, where models behaved inconsistently, guardrails lagged behind product timelines, and the pressure to scale fast often overshadowed deeper questions of safety and accountability. She explains why today’s biggest risk isn’t far-off superintelligence; it&#39;s the immediate, under-regulated integration of AI into education, healthcare, hiring systems, and public services. For younger users especially, she warns, AI already shapes communication, decision-making, confidence, and even identity, yet most tech leaders lack the tools to properly assess or mitigate those risks.</p>

<p>Carolyn and Janet explore why ethical AI requires more than thought leadership and policy statements. It requires action: adversarial testing, real-world simulations, contextual frameworks, and independent audits that account for messy human behavior, not just ideal use cases. They also discuss the structural barriers women face in tech, the mentors who “give up their seat” to make space, and the mindset shift that comes with parenthood: thinking in decades, not quarters.<br>
Looking ahead, Janet envisions a future where AI becomes “infrastructure, not the main character”: as invisible and reliable as flipping a light switch, because circuit breakers, safety layers, and accountability systems are finally in place. Until then, she calls on builders, executives, educators, and policymakers to take practical steps now: test relentlessly, understand failure modes, prioritize vulnerable users, and choose impact over speed.</p>

<p>This is an episode for leaders who want to innovate boldly and responsibly, for anyone wrestling with how to balance progress with protection and how to shape an AI-powered future worthy of the next generation.</p>

<p>Show Notes:<br>
<a href="http://www.justhorizons.org" rel="nofollow">www.justhorizons.org</a><br>
<a href="https://www.linkedin.com/in/kangjanet/" rel="nofollow">https://www.linkedin.com/in/kangjanet/</a><br>
Pause superintelligence petition - <a href="https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio" rel="nofollow">https://www.axios.com/2025/10/22/superintelligence-ai-pause-yoshua-bengio</a></p>

<p>Books Mentioned<br>
Empire of AI — Karen Hao<br>
The Alignment Problem — Brian Christian<br>
The Broken Earth Trilogy — N.K. Jemisin (recommended by Carolyn)</p>]]>
  </itunes:summary>
</item>
<item>
  <title>Episode 107: The Curious Case of AI - Part 1 (The Chills)</title>
  <link>https://techtransforms.fireside.fm/107</link>
  <guid isPermaLink="false">347de276-8530-4e39-b23c-fc3fe9dd8b25</guid>
  <pubDate>Tue, 07 Oct 2025 09:30:00 -0400</pubDate>
  <author>Carolyn Ford</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/81d9d6b0-0045-48da-8495-fd87c4613d7f/347de276-8530-4e39-b23c-fc3fe9dd8b25.mp3" length="42367496" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Carolyn Ford</itunes:author>
  <itunes:subtitle>In this Tech Transforms Halloween special, host Carolyn Ford and futurist Joseph Bradley explore the unsettling side of AI. From identic AI that mirrors human identity to the risks of bias and the rise of cognitive cities, they discuss why trust, ethics, and purpose must guide how we design and deploy artificial intelligence. This episode asks the critical question: will our AI future look more like The Borg… or The Federation?</itunes:subtitle>
  <itunes:duration>35:15</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/8/81d9d6b0-0045-48da-8495-fd87c4613d7f/episodes/3/347de276-8530-4e39-b23c-fc3fe9dd8b25/cover.jpg?v=1"/>
  <description>&lt;p&gt;The Curious Case of AI - A Two-Part Special (“The Chills”)&lt;br&gt;
In this Tech Transforms Halloween “Chills” episode, host Carolyn Ford and futurist Joseph Bradley explore the eerie, but essential, questions AI raises. Using Star Trek as their guide, they look at how “identic AI”, technology that mirrors identity, preferences, and even purpose, can feel both thrilling and unsettling.&lt;br&gt;
Bradley, a strong believer in AI’s potential, points out that every powerful tool comes with risks if it’s misused. Together, he and Carolyn discuss what happens when efficiency is valued over happiness, how bias can creep in if we aren’t intentional, and why cognitive cities must be built with trust and ethics at the core. Think less “the end is near” and more “what safeguards do we need to make sure this future works for people?”&lt;br&gt;
This episode sets the stage for leaders, innovators, and everyday users to think critically about how AI shapes identity, relationships, and society, while remembering that the choices we make now will decide whether the future feels like The Borg… or The Federation.&lt;/p&gt;

&lt;p&gt;Mentioned in this episode:&lt;br&gt;
Joseph Bradley’s book U to the Power of 2 (Pre-order: josephmbradley.com | &lt;a href="https://shop.u-x2.ai/" target="_blank" rel="nofollow noopener"&gt;https://shop.u-x2.ai/&lt;/a&gt;)&lt;br&gt;
Paperclip dilemma thought experiment - &lt;a href="https://nickbostrom.com/ethics/ai" target="_blank" rel="nofollow noopener"&gt;https://nickbostrom.com/ethics/ai&lt;/a&gt;&lt;br&gt;
Smart vs. Cognitive Cities &lt;a href="https://www.pwc.com/m1/en/publications/documents/cognitive-cities-a-journey-to-intelligent-urbanism.pdf" target="_blank" rel="nofollow noopener"&gt;https://www.pwc.com/m1/en/publications/documents/cognitive-cities-a-journey-to-intelligent-urbanism.pdf&lt;/a&gt;&lt;br&gt;
Questioneering: The New Model for Innovative Leaders in the Digital Age &lt;/p&gt;
</description>
  <itunes:keywords>Identic AI, Agentic AI, AI and happiness, Smart cities, cognitive cities</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>The Curious Case of AI - A Two-Part Special (“The Chills”)<br>
In this Tech Transforms Halloween “Chills” episode, host Carolyn Ford and futurist Joseph Bradley explore the eerie, but essential, questions AI raises. Using Star Trek as their guide, they look at how “identic AI”, technology that mirrors identity, preferences, and even purpose, can feel both thrilling and unsettling.<br>
Bradley, a strong believer in AI’s potential, points out that every powerful tool comes with risks if it’s misused. Together, he and Carolyn discuss what happens when efficiency is valued over happiness, how bias can creep in if we aren’t intentional, and why cognitive cities must be built with trust and ethics at the core. Think less “the end is near” and more “what safeguards do we need to make sure this future works for people?”<br>
This episode sets the stage for leaders, innovators, and everyday users to think critically about how AI shapes identity, relationships, and society, while remembering that the choices we make now will decide whether the future feels like The Borg… or The Federation.</p>

<p>Mentioned in this episode:<br>
Joseph Bradley’s book U to the Power of 2 (Pre-order: josephmbradley.com | <a href="https://shop.u-x2.ai/" rel="nofollow">https://shop.u-x2.ai/</a>)<br>
Paperclip dilemma thought experiment - <a href="https://nickbostrom.com/ethics/ai" rel="nofollow">https://nickbostrom.com/ethics/ai</a><br>
Smart vs. Cognitive Cities <a href="https://www.pwc.com/m1/en/publications/documents/cognitive-cities-a-journey-to-intelligent-urbanism.pdf" rel="nofollow">https://www.pwc.com/m1/en/publications/documents/cognitive-cities-a-journey-to-intelligent-urbanism.pdf</a><br>
Questioneering: The New Model for Innovative Leaders in the Digital Age </p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>The Curious Case of AI - A Two-Part Special (“The Chills”)<br>
In this Tech Transforms Halloween “Chills” episode, host Carolyn Ford and futurist Joseph Bradley explore the eerie, but essential, questions AI raises. Using Star Trek as their guide, they look at how “identic AI”, technology that mirrors identity, preferences, and even purpose, can feel both thrilling and unsettling.<br>
Bradley, a strong believer in AI’s potential, points out that every powerful tool comes with risks if it’s misused. Together, he and Carolyn discuss what happens when efficiency is valued over happiness, how bias can creep in if we aren’t intentional, and why cognitive cities must be built with trust and ethics at the core. Think less “the end is near” and more “what safeguards do we need to make sure this future works for people?”<br>
This episode sets the stage for leaders, innovators, and everyday users to think critically about how AI shapes identity, relationships, and society, while remembering that the choices we make now will decide whether the future feels like The Borg… or The Federation.</p>

<p>Mentioned in this episode:<br>
Joseph Bradley’s book U to the Power of 2 (Pre-order: josephmbradley.com | <a href="https://shop.u-x2.ai/" rel="nofollow">https://shop.u-x2.ai/</a>)<br>
Paperclip dilemma thought experiment - <a href="https://nickbostrom.com/ethics/ai" rel="nofollow">https://nickbostrom.com/ethics/ai</a><br>
Smart vs. Cognitive Cities <a href="https://www.pwc.com/m1/en/publications/documents/cognitive-cities-a-journey-to-intelligent-urbanism.pdf" rel="nofollow">https://www.pwc.com/m1/en/publications/documents/cognitive-cities-a-journey-to-intelligent-urbanism.pdf</a><br>
Questioneering: The New Model for Innovative Leaders in the Digital Age </p>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
