Episode 77

So What? It’s 5:05! Edition: Beyond the Headlines of AI, Election Disinformation and SpyGPT


December 13th, 2023

35 mins 22 secs


About this Episode

On this special So What? episode we go deeper into some of the top stories being covered on the It’s 5:05! podcast with It’s 5:05! contributing journalist, Tracy Bannon. How are cybersecurity stress tests battling misinformation and aiding in election security? Is AI contributing to election disinformation? How is the CIA using SpyGPT? Come along as Carolyn and Tracy go beyond the headlines to address all these questions and more.

Key Topics

  • 04:20 Proactive approach needed for software voting security.
  • 09:12 Deepfake technology can replicate voices and videos.
  • 12:38 Politics focuses on the presidential level, ignoring other races.
  • 15:53 Generative AI creates new content from data.
  • 17:19 New tool helps intelligence agencies process data.
  • 20:13 Bill Gates discusses future AI agents on LinkedIn.
  • 25:24 Navigating biases in AI towards democratic values.
  • 29:13 CISA promotes continuous learning and holistic approach.
  • 30:51 Demystifying and making security approachable for all.
  • 33:33 Open source, cybersecurity, diverse professional perspectives discussed.

Importance of Cybersecurity and Responsible AI Use

Embracing Cybersecurity Measures and Privacy Protections

In their conversation, Carolyn and Tracy discuss the imperative for both individuals and organizations to embrace robust cybersecurity measures. In an era where data breaches and cyber attacks are on the rise, implementing effective security protocols is not just a matter of regulatory compliance but also of safeguarding the privacy and personal information of users. Tracy emphasizes the continuous need for cybersecurity vigilance and education, highlighting that it is a shared responsibility. By making use of resources like CISA's cybersecurity workbook, Carolyn suggests, individuals and businesses can get guidance on developing a more secure online presence, which is crucial in a digital ecosystem where even the smallest vulnerability can be exploited.

Addressing Biases in AI to Align With Public Interest and Democratic Values

Tracy expresses concern over the biases that can be present in AI systems, which can stem from the people who design them or the data they are trained on. Such biases have the potential to influence a vast array of the decisions and analyses AI makes, leading to outcomes that may not align with the broad spectrum of public interest and democratic values. An important aspect of responsible AI use is ensuring that these systems are created and used in a way that is fair and equitable. That means actively working to identify and correct biases, ensuring transparency in AI operations, and constantly checking that AI applications serve the public good without infringing on civil liberties or creating divisions within society.

Demystifying Cybersecurity: "We need that public understanding, building this culture of security for everybody, by everybody. It becomes a shared thing, which should be something that we're teaching our children as soon as they are old enough to touch a device." — Tracy Bannon

The Proliferation of Personal AI Use in Everyday Tasks

The conversation shifts to the notion of AI agents handling tasks on behalf of humans, a concept both cutting-edge and rife with potential pitfalls. Carolyn and Tracy discuss both the ease and the risks of entrusting personal tasks to AI. On one hand, these AI agents can simplify life by managing mundane tasks, optimizing time and resources, and even curating experiences based on an in-depth understanding of personal preferences. Yet Tracy questions the trade-off, considering the amount of personal data that must be shared for AI to become truly "helpful." This gives rise to larger questions about the surrender of personal agency in decision-making, the erosion of privacy, and the ever-present threat of such tools being exploited for nefarious purposes.

CISA's Cybersecurity Workbook

Enhancing Accessibility with AI Use: Summarizing Complex Documents through Generative Tools

Tracy introduces the idea of leveraging generative AI tools such as ChatGPT to summarize lengthy documents, an approach that lets users digest complex material quickly and efficiently. For instance, users can feed a PDF or a website link into ChatGPT and request a summary, which the tool produces by analyzing the text and presenting the key points. Tracy presents this method as a step toward making dense content, like government reports or lengthy executive orders, more accessible. She then transitions to CISA's cybersecurity workbook, illustrating a movement toward disseminating important information in a format that a broader audience, not just tech experts, can understand and apply. Tracy appreciates CISA's effort to create resources that meet readers at every level of technical knowledge.
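
For readers who want to script this workflow rather than paste text into a chat window, here is a minimal sketch of the same idea in Python. It is an illustration, not anything shown on the podcast: it assumes the openai and pypdf packages are installed, an OPENAI_API_KEY environment variable is set, and the model and file names are placeholders.

    from openai import OpenAI
    from pypdf import PdfReader

    def summarize_pdf(path: str, model: str = "gpt-4o-mini") -> str:
        # Pull the raw text out of every page of the PDF.
        reader = PdfReader(path)
        text = "\n".join(page.extract_text() or "" for page in reader.pages)

        # Ask the model for a plain-language summary; the slice is a crude
        # guard against exceeding the model's context window.
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        response = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system",
                 "content": "Summarize documents as a short list of plain-language key points."},
                {"role": "user",
                 "content": f"Summarize this document:\n\n{text[:50000]}"},
            ],
        )
        return response.choices[0].message.content

    if __name__ == "__main__":
        print(summarize_pdf("executive_order.pdf"))  # hypothetical file name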

Comprehensive Guidance for Security Measures

The comprehensive guide provided by CISA, Tracy notes, offers detailed strategies for planning and implementing cybersecurity measures. The workbook does not shy away from diving deep into the assessment of potential cyber risks, and it details leading practices that organizations can adopt. Planning for incident response is a highlighted area, acknowledging that security breaches are a matter not of if but of when. The workbook thus serves as an invaluable reference for taking proactive steps to fortify against cyber threats, and as a learning resource that promotes a widespread understanding of cybersecurity best practices.

Government's AI Use

Potential Introduction of Generative AI by the CIA

Tracy and Carolyn discuss the CIA's plans to potentially introduce generative AI through a program dubbed "SpyGPT," with the goal of parsing and understanding extensive open-source data more efficiently.

Generative AI, similar in concept to models like ChatGPT, could revolutionize how intelligence agencies handle the vast amounts of data they collect. If implemented, this AI would be able to generate new content based on massive datasets, providing insights that could be invaluable for intelligence processing. Carolyn draws comparisons to traditional methods of intelligence gathering, noting that such technological advancements could have helped in past events had they been available. In response, Tracy emphasizes the historic struggle of intelligence agencies to rapidly sort through surveillance information, a challenge that tools like SpyGPT could mitigate.
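
Nothing public describes how SpyGPT actually works, but the general pattern Tracy describes, using a generative model to triage large volumes of open-source text, can be sketched. The Python example below is purely hypothetical: it assumes the openai package and an OPENAI_API_KEY, and the model name and prompt are placeholders.

    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def extract_connections(report_text: str) -> dict:
        # Ask the model to reduce free-form text to structured entities and
        # relationships, the triage an analyst might otherwise do by hand.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            response_format={"type": "json_object"},
            messages=[
                {"role": "system",
                 "content": ('Extract the people, organizations, and places in '
                             'the text and how they relate. Reply as JSON: '
                             '{"entities": [...], "relationships": [...]}')},
                {"role": "user", "content": report_text},
            ],
        )
        return json.loads(response.choices[0].message.content)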

The Double-Edged Sword of AI Use in Predictive Analysis

A tool like SpyGPT has the potential to rapidly identify patterns and connections within data, which could lead to quicker and more accurate intelligence assessments. Carolyn points to the crowdsourcing of information during the Boston Marathon bombing as an example of how rapid data correlation and analysis can be critical in national security efforts. The ability to predict, and possibly prevent, future threats could be significantly enhanced.

The Dangers of Internet Era Propaganda: "I can take any idea, and I can generate vast amounts of text in all kinds of tones, from all different kinds of perspectives, and I can make them pretty ideal for Internet era propaganda." — Tracy Bannon

However, as Tracy notes, the power of such technology is a double-edged sword, raising concerns about privacy, the potential for misuse, and ethical implications. The conversation conjures the specter of a "Minority Report"-esque future, where predictive technology verges on the invasive. Both Tracy and Carolyn agree on the tremendous responsibilities that come with implementing generative AI where it intersects with privacy, civil liberties, and security.

Election Security

The Critical Role of AI Use in Election Security Stress Testing

Stress testing in the context of election security means rigorously probing the voting system to uncover flaws or weaknesses. The process requires collaboration between various stakeholders, including the manufacturers of voting machines, software developers, and cybersecurity experts. Tracy emphasizes the crucial role of simulated attacks and real-world scenarios in revealing potential points of exploitation within the system. Identifying these vulnerabilities well before an election gives officials the time to address and reinforce weak spots, ensuring the reliability and resilience of the electoral process against cyber threats.

The Role of AI Use in Unveiling Election System Vulnerabilities

Tracy discusses the necessity of not just identifying but openly revealing discovered vulnerabilities in election systems as a means of fostering trust among the populace. Transparency about the security measures taken, and clear communication of the vulnerabilities found, when managed properly, instill greater confidence in the electoral system's integrity. This approach also plays a pivotal role in countering misinformation: by proactively conveying the true state of system security and the efforts underway to remedy issues, officials can help dismantle unfounded claims and skepticism about election infrastructure from various sectors of society.

Exploring the Impact of AI Use in Deepfake Technology and Artificial Persona Creation

Capabilities of Deepfake Technology and AI-Language Models

Recent advancements in AI and deepfake technology have brought breathtaking capabilities, primarily the power to manipulate audio and video content with astounding realism. Tracy emphasizes the profound implications of this tech, specifically pointing to language models such as Microsoft's VALL-E, which can simulate a person's voice from just a few seconds of audio input.

The Rise of Deepfakes: "Imagine what's gonna happen with the deepfake tech, right? I can take your video. I can take your voice." — Tracy Bannon

This technology uses sophisticated algorithms to detect the nuances of a person's speech patterns, allowing it to generate new audio that sounds like the targeted individual, effectively putting words into their mouth that they never actually said. The ability extends beyond simple mimicry: it opens the door to audio deepfakes that are nearly indistinguishable from genuine recordings. Such capabilities raise significant concerns about the reliability of auditory evidence and the ease with which public opinion could be manipulated.

Creation of Artificial Personas Using AI Tools

Tracy brings to light the increasingly effortless creation of false personas through AI tools such as ChatGPT, a language model capable of generating human-like text. These tools can fabricate compelling narratives, mimic specific writing styles, and create non-existent but believable social media profiles or entire personas. Tracy points out how these synthetic entities can be programmed to deliver credible-sounding propaganda, influence political campaigns, or sow discord by spamming internet platforms with targeted misinformation. The creation of artificial personas signifies a dramatic shift in how information can be disseminated, posing the risk of eroding trust in digital communication and complicating the battle against fake news.

About Our Guest

Tracy Bannon is a Senior Principal with MITRE Labs' Advanced Software Innovation Center and a contributor to the It’s 5:05! podcast. She is an accomplished software architect, engineer, and DevSecOps advisor who has worked across commercial and government clients. She thrives on understanding complex problems and working to deliver mission/business value at speed. She is passionate about mentoring and training, and enjoys community- and knowledge-building with teams, clients, and the next generation. Tracy is a long-time advocate for diversity in technology, helping to narrow the gaps as a mentor, sponsor, volunteer, and friend.

Episode Links