<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 10 May 2026 17:27:15 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>Tech Transforms - Episodes Tagged with “NIST”</title>
    <link>https://techtransforms.fireside.fm/tags/nist</link>
    <pubDate>Tue, 26 Aug 2025 10:00:00 -0400</pubDate>
    <description>Global technology is changing the way we live. Critical government decisions affect the intersection of technology advancement and human needs. This podcast talks to some of the most prominent influencers shaping the landscape to understand how they are leveraging technology to solve complex challenges while also meeting the needs of today's modern world.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>Tech Transforms, brought to you by Owl Cyber Defense, talks to some of the most prominent influencers shaping government technology.</itunes:subtitle>
    <itunes:author>Carolyn Ford</itunes:author>
    <itunes:summary>Global technology is changing the way we live. Critical government decisions affect the intersection of technology advancement and human needs. This podcast talks to some of the most prominent influencers shaping the landscape to understand how they are leveraging technology to solve complex challenges while also meeting the needs of today's modern world.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/8/81d9d6b0-0045-48da-8495-fd87c4613d7f/cover.jpg?v=3"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:owner>
      <itunes:name>Carolyn Ford</itunes:name>
      <itunes:email>Galadrielford@gmail.com</itunes:email>
    </itunes:owner>
    <itunes:category text="Technology"/>
    <itunes:category text="Government"/>
    <item>
  <title>Episode 104: Securing the Future: AI, Cyber Risk, and the Federal Mission</title>
  <link>https://techtransforms.fireside.fm/104</link>
  <guid isPermaLink="false">6e6b136b-79de-4ba1-870e-79089db0897c</guid>
  <pubDate>Tue, 26 Aug 2025 10:00:00 -0400</pubDate>
  <author>Carolyn Ford</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/81d9d6b0-0045-48da-8495-fd87c4613d7f/6e6b136b-79de-4ba1-870e-79089db0897c.mp3" length="69695506" type="audio/mpeg"/>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Carolyn Ford</itunes:author>
  <itunes:subtitle>AI isn’t coming to government; it’s already here. In this episode of Tech Transforms, host Carolyn Ford sits down with Martin Stanley, Senior Advisor at NIST, to explore how federal agencies can secure AI systems, protect sensitive training data, and prepare for “machine speed” cyber defense. From NIST’s new AI Risk Management Framework to guardrails for generative AI, Martin explains what leaders need to know to adopt AI responsibly without sacrificing security.</itunes:subtitle>
  <itunes:duration>48:21</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/8/81d9d6b0-0045-48da-8495-fd87c4613d7f/episodes/6/6e6b136b-79de-4ba1-870e-79089db0897c/cover.jpg?v=1"/>
  <description>&lt;p&gt;In this episode of Tech Transforms, host Carolyn Ford sits down with Martin Stanley, Senior Advisor at NIST, to explore how AI is reshaping federal cybersecurity. They dive into NIST’s AI Risk Management Framework, the growing need for secure and resilient AI systems, and what it takes to build a “risk-aware” culture in government. Stanley shares insights on guarding against threats like model theft and prompt injection, how agencies are adapting zero trust principles for AI, and why explainability is essential in machine learning models. Whether you're new to AI governance or advancing your cybersecurity strategy, this episode offers practical guidance for navigating the evolving AI risk landscape.&lt;/p&gt;

&lt;p&gt;Show Notes:&lt;br&gt;
NIST AI resources: &lt;a href="https://www.nist.gov/artificial-intelligence/ai-resources" target="_blank" rel="nofollow noopener"&gt;https://www.nist.gov/artificial-intelligence/ai-resources&lt;/a&gt;&lt;br&gt;
AI Risk Management Framework: &lt;a href="https://www.nist.gov/itl/ai-risk-management-framework" target="_blank" rel="nofollow noopener"&gt;https://www.nist.gov/itl/ai-risk-management-framework&lt;/a&gt;&lt;br&gt;
NIST-AI-600-1, AI RMF Generative AI Profile: &lt;a href="https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf" target="_blank" rel="nofollow noopener"&gt;https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf&lt;/a&gt;&lt;br&gt;
Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (an SSDF Community Profile): &lt;a href="https://doi.org/10.6028/NIST.SP.800-218A" target="_blank" rel="nofollow noopener"&gt;https://doi.org/10.6028/NIST.SP.800-218A&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Email: &lt;a href="mailto:martin.stanley@nist.gov" target="_blank" rel="nofollow noopener"&gt;martin.stanley@nist.gov&lt;/a&gt;&lt;br&gt;
LinkedIn: &lt;a href="https://www.linkedin.com/in/mcs729/" target="_blank" rel="nofollow noopener"&gt;https://www.linkedin.com/in/mcs729/&lt;/a&gt;&lt;/p&gt;
</description>
  <itunes:keywords>NIST, TechTransforms, Cybersecurity, AI, cyber defense</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>In this episode of Tech Transforms, host Carolyn Ford sits down with Martin Stanley, Senior Advisor at NIST, to explore how AI is reshaping federal cybersecurity. They dive into NIST’s AI Risk Management Framework, the growing need for secure and resilient AI systems, and what it takes to build a “risk-aware” culture in government. Stanley shares insights on guarding against threats like model theft and prompt injection, how agencies are adapting zero trust principles for AI, and why explainability is essential in machine learning models. Whether you&#39;re new to AI governance or advancing your cybersecurity strategy, this episode offers practical guidance for navigating the evolving AI risk landscape.</p>

<p>Show Notes:<br>
NIST AI resources: <a href="https://www.nist.gov/artificial-intelligence/ai-resources" rel="nofollow">https://www.nist.gov/artificial-intelligence/ai-resources</a><br>
AI Risk Management Framework: <a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="nofollow">https://www.nist.gov/itl/ai-risk-management-framework</a><br>
NIST-AI-600-1, AI RMF Generative AI Profile: <a href="https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf" rel="nofollow">https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf</a><br>
Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (an SSDF Community Profile): <a href="https://doi.org/10.6028/NIST.SP.800-218A" rel="nofollow">https://doi.org/10.6028/NIST.SP.800-218A</a></p>

<p>Email: <a href="mailto:martin.stanley@nist.gov" rel="nofollow">martin.stanley@nist.gov</a><br>
LinkedIn: <a href="https://www.linkedin.com/in/mcs729/" rel="nofollow">https://www.linkedin.com/in/mcs729/</a></p>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>In this episode of Tech Transforms, host Carolyn Ford sits down with Martin Stanley, Senior Advisor at NIST, to explore how AI is reshaping federal cybersecurity. They dive into NIST’s AI Risk Management Framework, the growing need for secure and resilient AI systems, and what it takes to build a “risk-aware” culture in government. Stanley shares insights on guarding against threats like model theft and prompt injection, how agencies are adapting zero trust principles for AI, and why explainability is essential in machine learning models. Whether you&#39;re new to AI governance or advancing your cybersecurity strategy, this episode offers practical guidance for navigating the evolving AI risk landscape.</p>

<p>Show Notes:<br>
NIST AI resources: <a href="https://www.nist.gov/artificial-intelligence/ai-resources" rel="nofollow">https://www.nist.gov/artificial-intelligence/ai-resources</a><br>
AI Risk Management Framework: <a href="https://www.nist.gov/itl/ai-risk-management-framework" rel="nofollow">https://www.nist.gov/itl/ai-risk-management-framework</a><br>
NIST-AI-600-1, AI RMF Generative AI Profile: <a href="https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf" rel="nofollow">https://airc.nist.gov/docs/NIST.AI.600-1.GenAI-Profile.ipd.pdf</a><br>
Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (an SSDF Community Profile): <a href="https://doi.org/10.6028/NIST.SP.800-218A" rel="nofollow">https://doi.org/10.6028/NIST.SP.800-218A</a></p>

<p>Email: <a href="mailto:martin.stanley@nist.gov" rel="nofollow">martin.stanley@nist.gov</a><br>
LinkedIn: <a href="https://www.linkedin.com/in/mcs729/" rel="nofollow">https://www.linkedin.com/in/mcs729/</a></p>]]>
  </itunes:summary>
    </item>
  </channel>
</rss>
