Cybersecurity Snapshot: NIST Offers Zero Trust Implementation Advice, While OpenAI Shares ChatGPT Misuse Incidents

Check out NIST best practices for adopting a zero trust architecture. Plus, learn how OpenAI disrupted various attempts to abuse ChatGPT. In addition, find out what Tenable webinar attendees said about their exposure management experiences. And get the latest on cyber crime trends, a new cybersecurity executive order and more!
Dive into six things that are top of mind for the week ending June 13.
1 - NIST issues practical implementation guidance for zero trust
The popularity of zero trust architectures (ZTAs) has grown as traditional on-prem security perimeters dissolve amid the adoption of cloud services, mobile devices, remote work, IoT devices and more. But ZTA implementations aren’t one-size-fits-all affairs: each must be customized to its environment.
To help organizations plan and deploy ZTAs, the U.S. National Institute of Standards and Technology (NIST) this week published a guide titled “Implementing a Zero Trust Architecture: Full Document (SP 1800-35).”
By offering 19 concrete ZTA implementation examples, the new guide is meant to complement NIST’s “Zero Trust Architecture (SP 800-207),” which was published in 2020 and unpacks what a ZTA is, as well as its components, benefits and risks.
“This guidance gives you examples of how to deploy ZTAs and emphasizes the different technologies you need to implement them,” Alper Kerman, a NIST computer scientist and one of the guide’s authors, said in a statement. “It can be a foundational starting point for any organization constructing its own ZTA.”

To craft the new ZTA guide, NIST enlisted the help of 24 technology partners, including Tenable. “Our role? Help ensure that every device, user, and system is verified, monitored, and protected. This is what public-private partnership looks like at its best,” Tenable Senior VP of Global Government Affairs James Hayes wrote in a LinkedIn post.
In addition to the 19 examples, the guide includes a description of these core steps, which apply to all ZTA implementations:
- Discovering and inventorying all of your environment’s IT assets, including hardware, software, applications, data and services
- Specifying the security policies the ZTA will enforce regarding who can access each resource, according to the principle of least privilege (see the Python sketch after this list)
- Identifying and inventorying your existing security tools, technologies and capabilities, and determining which ones will be part of the ZTA
- Designing your access topology based on risk and the value of your data
- Rolling out ZTA components involving people, processes and technologies, and deploying baseline security components for areas including:
  - continuous environment monitoring and asset detection
  - identity and access management
  - vulnerability scanning and assessment
  - endpoint protection
- Verifying that the implementation supports your zero-trust objectives by:
  - continuously monitoring network traffic for suspicious activity
  - auditing and validating access-enforcement decisions
  - verifying that enforced policies correlate with defined ones
  - performing periodic, scenario-based testing
- Continuously improving and evolving the ZTA based on changes to your goals, threat landscape, technology and requirements
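To make the policy step more concrete, here’s a minimal sketch in Python of a zero-trust policy decision point that denies by default and checks identity, device posture and role authorization on every request. Everything in it (the `AccessRequest` fields, the example policies and the resource names) is hypothetical and meant only to illustrate the least-privilege, per-request evaluation the guide describes; it is not code from SP 1800-35.

```python
from dataclasses import dataclass

# Hypothetical least-privilege policy: each resource maps to the only
# roles allowed to reach it. Anything not listed is denied by default.
POLICIES = {
    "payroll-db": {"hr-admin"},
    "build-server": {"engineer", "release-manager"},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    resource: str
    mfa_verified: bool       # identity verified with multi-factor auth
    device_compliant: bool   # endpoint passed posture/vulnerability checks

def evaluate(req: AccessRequest) -> bool:
    """Return True only if every zero-trust check passes; deny otherwise."""
    # 1. Never trust the network: verify the user's identity signal first.
    if not req.mfa_verified:
        return False
    # 2. Verify device posture before trusting the endpoint.
    if not req.device_compliant:
        return False
    # 3. Enforce least privilege: the role must be explicitly allowed.
    return req.role in POLICIES.get(req.resource, set())

# Example: a compliant, MFA-verified engineer still can't reach payroll data.
req = AccessRequest("alice", "engineer", "payroll-db",
                    mfa_verified=True, device_compliant=True)
print(evaluate(req))  # False
```

Note the design choice: because authorization is evaluated per request rather than per session, a device that falls out of compliance loses access on its next request, which is the continuous-verification behavior zero trust calls for.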
To get more details, read:
- The “Implementing a Zero Trust Architecture: Full Document (SP 1800-35)” guide
- The “Implementing a Zero Trust Architecture: High-Level Document (SP 1800-35)” complementary document
- The companion fact sheet
- The ZTA homepage of NIST’s National Cybersecurity Center of Excellence
- The statement “NIST Offers 19 Ways to Build Zero Trust Architectures”
For more information about zero trust, check out these Tenable resources:
- “Rethink security with a zero-trust approach” (solutions page)
- “What is zero trust?” (cybersecurity guide)
- “5 Things Government Agencies Need to Know About Zero Trust” (blog)
- “Making Zero Trust Architecture Achievable” (blog)
- “Security Beyond the Perimeter: Accelerate Your Journey to Zero Trust” (on-demand webinar)
2 - OpenAI details recent abuses of its AI products
Cyber espionage. Social engineering. Fraudulent employment schemes. Covert operations. Scams.
Those are some of the malicious uses of OpenAI’s artificial intelligence tools that the company has detected and halted in recent months.
“Every operation we disrupt gives us a better understanding of how threat actors are trying to abuse our models, and enables us to refine our defenses,” the company wrote in the report “Disrupting malicious uses of AI: June 2025,” published this week.
Specifically, OpenAI details 10 incidents, explaining how it flagged and defused each one in the hope that the lessons learned can benefit other AI defenders.

Here’s a quick glance at three of the malicious use cases the maker of ChatGPT discusses in the report:
- In a deceptive IT-worker employment scheme, cyber scammers likely based in North Korea used ChatGPT to automate, streamline and enhance a variety of fraudulent activities, such as the creation of false resumes and the recruitment of North American residents to participate in the scams.
- ChatGPT was also abused by fraudsters likely located in China to mass-produce social media posts intended to spread misinformation as part of covert schemes to influence people’s opinions on geopolitical issues related to China. Written mostly in English and Chinese, the posts appeared primarily on TikTok and X, as well as on Facebook and Reddit.
- Russian-speaking malicious actors used a cluster of ChatGPT accounts to develop and refine a multi-stage malware campaign, including the setup of the command-and-control infrastructure. The malware’s capabilities included credential theft, privilege escalation and attack obfuscation.
“We’ll continue to share our findings to enable stronger defenses across the internet,” the report reads.
For more information about AI security, check out these Tenable resources:
- “How to Discover, Analyze and Respond to Threats Faster with Generative AI” (blog)
- “Securing the AI Attack Surface: Separating the Unknown from the Well Understood” (blog)
- “Harden Your Cloud Security Posture by Protecting Your Cloud Data and AI Resources” (blog)
- “Tenable Cloud AI Risk Report 2025” (report)
- “Tenable Cloud AI Risk Report 2025: Helping You Build More Secure AI Models in the Cloud” (on-demand webinar)
3 - Tenable asks webinar attendees about exposure management
During our recent webinar “Security Without Silos: How to Gain Real Risk Insights with Unified Exposure Management,” we polled attendees about their exposure management knowledge, challenges and concerns. Check out what they said.

(Three poll-results charts appeared here, based on 44, 85 and 89 webinar attendees polled by Tenable; respondents to the first poll could choose more than one answer.)
Want to learn more about how unified exposure management works in the real world? Watch this webinar on-demand!
4 - How to prevent AI systems from acting on what they don’t know
As has been widely documented by researchers and experienced by users, AI systems often make mistakes — a major challenge for AI developers. What can be done?
A critical piece of this puzzle is to build AI systems that recognize when they’re presented with a task for which they haven’t been trained, and are able to say they don’t know how to proceed.
That’s according to the article “Out of Distribution Detection: Knowing When AI Doesn’t Know” published this week by two experts from Carnegie Mellon University’s Software Engineering Institute (SEI).
In the piece, Eric Heim, a senior machine learning research scientist, and Cole Frank, an AI workforce development engineer, explore the issue of out-of-distribution (OoD) detection — flagging when an AI system faces situations it’s not trained to tackle — with a focus on military AI applications.
“By understanding when AI systems are operating outside their knowledge boundaries, we can build more trustworthy and effective AI capabilities for defense applications — knowing not just what our systems know, but also what they don't know,” they wrote.

The authors offer three broad categories of OoD detection:
- Anomaly detection and density estimation, which try to model what “normal” data looks like
- Learning with rejection and uncertainty-aware models, which train models to abstain or express low confidence on inputs they can’t reliably handle
- Adding OoD detection capabilities to existing models (see the sketch after this list)
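As a rough illustration of the third category, the sketch below applies max-softmax-probability thresholding, a common post-hoc technique for adding OoD detection to an already-trained classifier: the model’s top softmax score serves as an in-distribution confidence score, and inputs scoring below a threshold are flagged as OoD. This is a generic example, not code from the SEI article, and the threshold value is an arbitrary assumption (in practice it would be tuned on held-out in-distribution data).

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert a classifier's raw logits into class probabilities."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(z)
    return exp / exp.sum(axis=-1, keepdims=True)

def is_ood(logits: np.ndarray, threshold: float = 0.75) -> bool:
    """Flag an input as out-of-distribution when the model's top softmax
    probability falls below the threshold (0.75 is an assumed value)."""
    return float(softmax(logits).max()) < threshold

# Hypothetical logits from a trained 3-class classifier:
in_dist_logits = np.array([6.0, 0.5, -1.0])  # confident prediction
ood_logits = np.array([1.1, 0.9, 1.0])       # near-uniform, likely OoD
print(is_ood(in_dist_logits), is_ood(ood_logits))  # False True
```

Thresholding the softmax score requires no retraining, which is why it’s often the first post-hoc method tried.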
The authors caution that all three OoD detection categories have pros and cons and that OoD detection methods aren’t foolproof; as such, these methods should be considered “a last line of defense in a layered approach to assessing the reliability of ML models during deployment.”
“Developers of AI-enabled systems should also perform rigorous test and evaluation, build monitors for known failure modes into their systems, and perform comprehensive analysis of the conditions under which a model is designed to perform versus conditions in which its reliability is unknown,” they wrote.
For more information about OoD detection and about AI model accuracy in general:
- “Never Assume That the Accuracy of Artificial Intelligence Information Equals the Truth” (United Nations University)
- “Accurate and reliable AI: Four key ingredients” (Thomson Reuters)
- “What do we need to know about accuracy and statistical accuracy?” (U.K. Information Commissioner’s Office)
- “Out-of-Distribution Detection Is Not All You Need” (Université de Toulouse)
- “Rule-Based Out-of-Distribution Detection” (IEEE)
5 - White House EO seeks modernization of fed agencies’ cybersecurity
The Trump administration has put the spotlight on boosting the U.S. federal government’s cybersecurity posture with the recently issued Executive Order (EO) 14306.
EO 14306 aims “to strengthen the nation’s cybersecurity by focusing on critical protections against foreign cyber threats and enhancing secure technology practices,” reads a complementary White House fact sheet.

The EO addresses topics including AI system vulnerabilities, IoT security, quantum computing risk, patch management, secure software development and critical infrastructure defense.
“This EO reinforces the importance of shifting from reactive to proactive cybersecurity,” Tenable Senior VP of Global Government Affairs James Hayes wrote in a blog.
“By addressing emerging risks — such as AI exploitation, post-quantum threats and software supply chain weaknesses — the administration is signaling the need for adaptability and continuous improvement,” he added.
To learn more about EO 14306 and about how Tenable can help federal agencies comply with the EO’s requirements, check out the blog “New Cybersecurity Executive Order: What You Need To Know.”
6 - Report: Cyber crooks feasting on stolen data
Leveraging AI in increasingly powerful ways, cyber criminals have ramped up data theft, which they’re using as the foundation for myriad cyber attacks, including online fraud, ransomware, child exploitation and extortion.
That’s a key takeaway from Europol’s “Internet Organised Crime Threat Assessment 2025” report, published this week. The report aims to highlight major trends in cyber crime in order to help law enforcement agencies, policy makers and the tech industry respond.
“From phishing to phone scams, and from malware to AI-generated deepfakes, cybercriminals use a constantly evolving toolkit to compromise systems and steal personal information,” reads a Europol statement.

Initial access brokers (IABs) then sell, resell and repackage stolen credentials and data in dark web forums and criminal marketplaces. Cyber criminals have also upped their use of communication apps that offer end-to-end encryption to negotiate deals and sell compromised data.
As for AI, cyber criminals continue to abuse it, generative AI tools in particular, to launch ever more sophisticated social engineering attacks. “Criminals now tailor scam messages to victims’ cultural context and personal details with alarming precision,” the statement reads.
For more information about data security, check out these Tenable resources:
- “Securing Financial Data in the Cloud: How Tenable Can Help” (blog)
- “CISA and NSA Cloud Security Best Practices: Deep Dive” (blog)
- “Know Your Exposure: Is Your Cloud Data Secure in the Age of AI?” (on-demand webinar)
- “Harden Your Cloud Security Posture by Protecting Your Cloud Data and AI Resources” (blog)
- “Stronger Cloud Security in Five: How DSPM Helps You Discover, Classify and Secure All Your Data Assets” (blog)