Cyber Threat Intelligence 13 February 2026
New Tooling
- OpenClaw Scanner: Open-Source Tool Detects Autonomous AI Agents
"A new free, open source tool is available to help organizations detect where autonomous AI agents are operating across corporate environments. The OpenClaw Scanner identifies instances of OpenClaw, an autonomous AI assistant also known as MoltBot, that can execute tasks, access local files, and authenticate to internal systems without centralized oversight. OpenClaw gained usage in the past few months as an AI agent capable of performing actions on behalf of users. The software can run locally or in the cloud, using messaging platforms as an interface and leveraging autonomous decision-making to carry out tasks across services."
https://www.helpnetsecurity.com/2026/02/12/openclaw-scanner-open-source-tool-detects-autonomous-ai-agents/
https://pypi.org/project/astrix-openclaw-scanner/
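Conceptually, this kind of agent discovery can start with something as simple as sweeping hosts for the agent's on-disk configuration artifacts. The sketch below illustrates the idea only; the directory and file names are hypothetical placeholders, not the actual indicators the OpenClaw Scanner uses.

```python
from pathlib import Path

# Hypothetical artifacts an autonomous-agent install might leave behind;
# these names are illustrative, not the scanner's real indicators.
SUSPECT_ARTIFACTS = {".openclaw", ".moltbot", "openclaw.json"}

def find_agent_artifacts(root: Path) -> list[Path]:
    """Walk *root* and return every path whose name matches a suspect artifact."""
    return [p for p in root.rglob("*") if p.name in SUSPECT_ARTIFACTS]
```

A real scanner would also correlate running processes, messaging-platform tokens, and cloud deployments, since file names alone are trivial to rename.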
Vulnerabilities
- Critical BeyondTrust RCE Flaw Now Exploited In Attacks, Patch Now
"A critical pre-authentication remote code execution vulnerability in BeyondTrust Remote Support and Privileged Remote Access appliances is now being exploited in attacks after a PoC was published online. Tracked as CVE-2026-1731 and assigned a near-maximum CVSS score of 9.9, the flaw affects BeyondTrust Remote Support versions 25.3.1 and earlier and Privileged Remote Access versions 24.3.4 and earlier. BeyondTrust disclosed the vulnerability on February 6, warning that unauthenticated attackers could exploit it by sending specially crafted client requests."
https://www.bleepingcomputer.com/news/security/critical-beyondtrust-rce-flaw-now-exploited-in-attacks-patch-now/
- CISA Adds Four Known Exploited Vulnerabilities To Catalog
"CISA has added four new vulnerabilities to its Known Exploited Vulnerabilities (KEV) Catalog, based on evidence of active exploitation.
CVE-2024-43468 Microsoft Configuration Manager SQL Injection Vulnerability
CVE-2025-15556 Notepad++ Download of Code Without Integrity Check Vulnerability
CVE-2025-40536 SolarWinds Web Help Desk Security Control Bypass Vulnerability
CVE-2026-20700 Apple Multiple Buffer Overflow Vulnerability"
https://www.cisa.gov/news-events/alerts/2026/02/12/cisa-adds-four-known-exploited-vulnerabilities-catalog
- Trust Me, I’m a Shortcut
"Windows’ primary mechanism for shortcuts, LNK files, is frequently abused by threat actors for payload delivery and persistence. This blog post introduces several new LNK file flaws that, amongst other things, allow attackers to fully spoof an LNK’s target. It also introduces lnk-it-up, a tool suite that can generate such deceptive LNK files, as well as detect anomalous ones."
https://www.wietzebeukema.nl/blog/trust-me-im-a-shortcut
https://www.bleepingcomputer.com/news/microsoft/microsoft-new-windows-lnk-spoofing-issues-arent-vulnerabilities/
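Since the spoofing flaws described make the parsed target field itself untrustworthy, detection can start from a cruder angle than structured LNK parsing. The sketch below is a naive byte-level heuristic, not lnk-it-up's actual logic: it flags shortcut bytes containing interpreter names that rarely belong in a benign document shortcut, checking both ASCII and the UTF-16LE encoding LNK structures commonly use.

```python
# Interpreter names that are suspicious inside a "document" shortcut.
SUSPECT_STRINGS = [b"powershell", b"cmd.exe", b"mshta", b"wscript"]

def suspicious_lnk_bytes(data: bytes) -> list[str]:
    """Return suspect interpreter names found in *data*, matched
    case-insensitively as raw ASCII or as UTF-16LE (the encoding LNK
    string fields typically use)."""
    findings = []
    lowered = data.lower()  # lowercases ASCII bytes; NULs are untouched
    for needle in SUSPECT_STRINGS:
        wide = needle.decode().encode("utf-16-le")
        if needle in lowered or wide in lowered:
            findings.append(needle.decode())
    return findings

print(suspicious_lnk_bytes(b"Invoice.pdf.lnk -> CMD.EXE /c start evil"))  # → ['cmd.exe']
```

String matching like this is easy to evade; it only illustrates why anomaly detection on shortcuts is worthwhile at all.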
Malware
- AMOS Infostealer Targets MacOS Through a Popular AI App
"Infostealers like Atomic macOS Stealer (AMOS) represent far more than standalone malware. They are foundational components of a mature cybercrime economy built around harvesting, trading, and operationalizing stolen digital identities. Rather than acting as the end goal, modern stealers function as large-scale data collection engines that feed underground markets, where stolen credentials, sessions, and financial data are bought and sold to fuel account takeovers, fraud, and follow-on intrusions. What makes these campaigns particularly effective is their highly opportunistic social engineering approach: attackers continuously adapt to technology trends, abusing trusted platforms, popular software, search engines, and even emerging AI ecosystems to trick users into executing malware themselves."
https://www.bleepingcomputer.com/news/security/amos-infostealer-targets-macos-through-a-popular-ai-app/
- “AiFrame”: Fake AI Assistant Extensions Targeting 260,000 Chrome Users Via Injected Iframes
"As generative AI tools like ChatGPT, Claude, Gemini, and Grok become part of everyday workflows, attackers are increasingly exploiting their popularity to distribute malicious browser extensions. In this research, we uncovered a coordinated campaign of Chrome extensions posing as AI assistants for summarization, chat, writing, and Gmail assistance. While these tools appear legitimate on the surface, they hide a dangerous architecture: instead of implementing core functionality locally, they embed remote, server-controlled interfaces inside extension-controlled surfaces and act as privileged proxies, granting remote infrastructure access to sensitive browser capabilities."
https://layerxsecurity.com/blog/aiframe-fake-ai-assistant-extensions-targeting-260000-chrome-users-via-injected-iframes/?utm_source=BC
https://www.bleepingcomputer.com/news/security/fake-ai-chrome-extensions-with-300k-users-steal-credentials-emails/
https://www.theregister.com/2026/02/12/30_chrome_extensions_ai/
- World Leaks Ransomware Group Adds Stealthy, Custom Malware ‘RustyRocket’ To Attacks
"World Leaks, the cyber-criminal data extortion group which has targeted some of the world’s biggest companies, has added a novel, never-before-seen malware to their arsenal, research by Accenture Cybersecurity has revealed. Accenture has named the malware ‘RustyRocket’. It allows World Leaks to stealthily maintain persistence on networks and forms a key part of the extortion group’s attacks. “The sophisticated toolset is a critical component of World Leaks’ operations and has functioned entirely under the radar, enabling affiliates to stealthily exfiltrate data and proxy traffic across victim environments,” T. Ryan Whelan, MD and global head of Accenture cyber intelligence, said in a LinkedIn post revealing the research."
https://www.infosecurity-magazine.com/news/world-leaks-ransomware-rustyrocket/
- Criminals Are Using AI Website Builders To Clone Major Brands
"AI tool Vercel was abused by cybercriminals to create a Malwarebytes lookalike website. Cybercriminals no longer need design or coding skills to create a convincing fake brand site. All they need is a domain name and an AI website builder. In minutes, they can clone a site’s look and feel, plug in payment or credential-stealing flows, and start luring victims through search, social media, and spam. One side effect of being an established and trusted brand is that you attract copycats who want a slice of that trust without doing any of the work. Cybercriminals have always known it is much easier to trick users by impersonating something they already recognize than by inventing something new—and developments in AI have made it trivial for scammers to create convincing fake sites."
https://www.malwarebytes.com/blog/news/2026/02/criminals-are-using-ai-website-builders-to-clone-major-brands
- Fake Recruiter Campaign Targets Crypto Devs
"The ReversingLabs research team has identified a new branch of a fake recruiter campaign conducted by the North Korean hacking team Lazarus Group. The campaign, which the team named graphalgo, based on the first package included in this campaign in the npm repository, has been active since the beginning of May 2025. It is a coordinated campaign targeting both Javascript and Python developers with cryptocurrency-related fake recruiter tasks. Developers are approached via social platforms like LinkedIn and Facebook, or through job offerings on forums like Reddit. The campaign includes a well-orchestrated story around a company involved in blockchain and cryptocurrency exchanges. The malicious functionality is hidden using several layers of indirection across public services which include GitHub, npm and PyPI."
https://www.reversinglabs.com/blog/fake-recruiter-campaign-crypto-devs
https://thehackernews.com/2026/02/lazarus-campaign-plants-malicious.html
- Active Ivanti Exploitation Traced To Single Bulletproof IP—Published IOC Lists Point Elsewhere
"The GreyNoise Global Observation Grid observed active exploitation of two critical Ivanti Endpoint Manager Mobile vulnerabilities, and 83% of that exploitation traces to a single IP address on bulletproof hosting infrastructure that does not appear on widely circulated IOC lists. Meanwhile, several of the most-shared IOCs for this campaign show zero Ivanti exploitation in GreyNoise data. They are scanning for Oracle WebLogic instead. Defenders blocking only the published indicators may be watching the wrong door."
https://www.greynoise.io/blog/active-ivanti-exploitation
https://thehackernews.com/2026/02/83-of-ivanti-epmm-exploits-linked-to.html
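The coverage gap GreyNoise describes is easy to reason about as set arithmetic: weight each attacking IP by its observed hit count and measure what share of exploitation traffic the published IOC list would actually block. The sketch below uses documentation-range placeholder IPs, not real indicators.

```python
from collections import Counter

def ioc_coverage(observed_hits: Counter, ioc_list: set[str]) -> float:
    """Fraction of observed exploitation events coming from IPs on the IOC list."""
    total = sum(observed_hits.values())
    if total == 0:
        return 0.0
    covered = sum(n for ip, n in observed_hits.items() if ip in ioc_list)
    return covered / total

# One bulletproof-hosted IP drives most traffic but is absent from the IOCs.
observed = Counter({"203.0.113.7": 83, "198.51.100.2": 10, "192.0.2.9": 7})
published_iocs = {"198.51.100.2", "192.0.2.9"}
print(f"IOC list covers {ioc_coverage(observed, published_iocs):.0%} of hits")  # → 17%
```

With numbers shaped like GreyNoise's (83% of traffic from one unlisted IP), blocking only the published indicators leaves most of the attack volume unblocked.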
Breaches/Hacks/Leaks
- Odido Data Breach Exposes Personal Info Of 6.2 Million Customers
"Dutch telecommunications provider Odido is warning that it suffered a cyberattack that reportedly exposed the personal data of 6.2 million customers. Odido is one of the largest mobile and telecommunications providers in the Netherlands, offering mobile, broadband, and television services to millions of customers nationwide. The company was formed in 2023 through the rebranding of T-Mobile Netherlands and Tele2 Netherlands. The company says they detected the incident on the weekend of February 7 and launched an investigation with internal and external cybersecurity experts."
https://www.bleepingcomputer.com/news/security/odido-data-breach-exposes-personal-info-of-62-million-customers/
https://therecord.media/dutch-telecom-giant-announces-data-breach
https://securityaffairs.com/187927/uncategorized/odido-confirms-massive-breach-6-2-million-customers-impacted.html
- When AI Secrets Go Public: The Rising Risk Of Exposed ChatGPT API Keys
"Cyble Research and Intelligence Labs (CRIL) observed large-scale, systematic exposure of ChatGPT API keys across the public internet. Over 5,000 publicly accessible GitHub repositories and approximately 3,000 live production websites were found leaking API keys through hardcoded source code and client-side JavaScript. GitHub has emerged as a key discovery surface, with API keys frequently committed directly into source files or stored in configuration and .env files. The risk is further amplified by public-facing websites that embed active keys in front-end assets, leading to persistent, long-term exposure in production environments."
https://cyble.com/blog/when-ai-secrets-go-public-chatgpt/
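Much of the exposure Cyble describes is catchable with a basic secret scan before code is committed. The sketch below uses an approximate regex for the common "sk-" OpenAI key shape; it is an illustration, not a production scanner, and newer project-scoped key formats need additional patterns.

```python
import re

# Approximate shape of an OpenAI-style API key ("sk-" prefix plus a long
# token); real keys and scanners vary, so treat this as illustrative only.
KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9_-]{20,}")

def find_exposed_keys(text: str) -> list[str]:
    """Return API-key-shaped tokens found in *text* (e.g. a source or .env file)."""
    return KEY_PATTERN.findall(text)

sample = 'OPENAI_API_KEY = "sk-abc123abc123abc123abc123"  # hardcoded!'
print(find_exposed_keys(sample))  # → ['sk-abc123abc123abc123abc123']
```

Running such a check in a pre-commit hook or CI step addresses the GitHub half of the problem; keys embedded in client-side JavaScript additionally require moving calls behind a server-side proxy.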
General News
- Cybercrime Ethos: The Shifting Sands Of Medical Neutrality
"I have always told myself that I never want to become a stereotypical "stuck in time" security graybeard, the infosec equivalent of "back in my day, we walked to school uphill, both ways, in the snow!" My fear is not of being nostalgic, but that I would become unknowingly rigid in my viewpoints and fail to adapt to the ever-changing threat landscape. After closing out the whirlwind of the last few years, I decided to reflect on the trends and patterns I have observed in my 25+ years within the cybersecurity realm. What I found was deeply unsettling."
https://cofense.com/blog/cybercrime-ethos-the-shifting-sands-of-medical-neutrality
- GTIG AI Threat Tracker: Distillation, Experimentation, And (Continued) Integration Of AI For Adversarial Use
"In the final quarter of 2025, Google Threat Intelligence Group (GTIG) observed threat actors increasingly integrating artificial intelligence (AI) to accelerate the attack lifecycle, achieving productivity gains in reconnaissance, social engineering, and malware development. This report serves as an update to our November 2025 findings regarding the advances in threat actor usage of AI tools. By identifying these early indicators and offensive proofs of concept, GTIG aims to arm defenders with the intelligence necessary to anticipate the next phase of AI-enabled threats, proactively thwart malicious activity, and continually strengthen both our classifiers and model."
https://cloud.google.com/blog/topics/threat-intelligence/distillation-experimentation-integration-ai-adversarial-use
https://www.bleepingcomputer.com/news/security/google-says-hackers-are-abusing-gemini-ai-for-all-attacks-stages/
https://thehackernews.com/2026/02/google-reports-state-backed-hackers.html
https://therecord.media/nation-state-hackers-using-gemini-for-malicious-campaigns
https://cyberscoop.com/state-hackers-using-gemini-google-ai/
https://www.infosecurity-magazine.com/news/nation-state-hackers-gemini-ai/
https://www.securityweek.com/hacktivists-state-actors-cybercriminals-target-global-defense-industry-google-warns/
https://www.theregister.com/2026/02/12/google_china_apt31_gemini/
- When Security Decisions Come Too Late, And Attackers Know It
"In this Help Net Security interview, Chris O’Ferrell, CEO at CodeHunter, talks about why malware keeps succeeding, where attackers insert malicious code in the SDLC, and how CI/CD pipelines can become a quiet entry point. He also breaks down the difference between behavioral detection and behavioral intent analysis, and why explainable results matter for security teams."
https://www.helpnetsecurity.com/2026/02/12/chris-oferrell-codehunter-behavioral-intent-analysis-malware-detection/
- Cloud Teams Are Hitting Maturity Walls In Governance, Security, And AI Use
"Enterprise cloud programs have reached a point where most foundational services are already in place, and the daily work now centers on governance, security enforcement, and managing sprawl across environments. Hybrid and multi-cloud architectures have become routine in large organizations, bringing new operational pressures around consistency and control. A new survey of cloud architects and enterprise cloud decision-makers found that Azure has become a dominant platform in enterprise environments, with 93.4% of respondents reporting an Azure presence. The same group reported strong adoption of resilience practices, extensive use of cloud-native security tooling, and growing dependence on AI workflows. At the same time, the data by theCUBE Research shows recurring gaps in infrastructure automation, cloud migration security, and enterprise governance over AI usage."
https://www.helpnetsecurity.com/2026/02/12/enterprise-cloud-governance-gaps-governance-security/
- How To Eliminate The Technical Debt Of Insecure AI-Assisted Software Development
"If we heed the warnings of industry forecasts, 2026 will be the year of artificial intelligence (AI)-driven technical debt: The tech debt for 75 percent of companies will rise to a “moderate” or “high” level of severity this year due to the rapid expansion of AI, according to Forrester. This extends to the software development community, which is seeing a near-ubiquitous presence of AI-coding assistants as teams face pressures to generate more output in less time. While the huge spike in efficiencies greatly helps them, these teams too often fail to incorporate adequate safety controls and practices into AI deployments. The resulting risks leave their organizations exposed, and developers will struggle to backtrack in tracing and identifying where – and how – a security gap occurred. All of which leads to excessive detection and remediation time that companies cannot afford."
https://www.securityweek.com/how-to-eliminate-the-technical-debt-of-insecure-ai-assisted-software-development/
- High-Tech Crime Trends Report 2026
"Cybercrime is no longer defined by isolated breaches. By compromising upstream vendors, SaaS platforms, open-source projects, and managed service providers, attackers inherit trusted access to hundreds of downstream organizations, transforming single intrusions into cascading, multi-victim incidents. The High-Tech Crime Trends Report 2026 reveals how this shift has industrialized cybercrime, exposed the limits of perimeter-based defenses, and elevated identity and trust as the new primary attack surfaces."
https://www.group-ib.com/landing/high-tech-crime-trends-report-2026/
https://www.theregister.com/2026/02/12/supply_chain_attacks/
- AI Skills Represent Dangerous New Attack Surface, Says TrendAI
"The so-called “AI skills” used to scale and execute AI operations are dangerously exposed to data theft, sabotage and disruption, TrendAI has warned. The newly named business unit of Trend Micro explained in a report published this week that AI skills are artifacts combining human-readable text with instructions that large language models (LLMs) can read and execute. “AI skills encapsulate everything, from elements like human expertise, workflows, and operational constraints, to decision logic,” the report explained. “By capturing this knowledge into something executable, AI skills enable organizations to achieve scalability and knowledge transfer at previously unattainable levels.”"
https://www.infosecurity-magazine.com/news/ai-skills-dangerous-new-attack/
- Time To Exploit Plummets As N-Day Flaws Dominate
"The time between vulnerability disclosure and exploitation has plunged 94% over the past five years as threat actors weaponize so-called “n-days,” according to a new Flashpoint study. The threat intelligence vendor claimed that “time to exploit” (TTE) dropped from 745 days in 2020 to just 44 days last year, dramatically reducing the time security and IT teams have to patch. Driving this trend is the growing use of n-day exploits, which relate to vulnerabilities that have been publicly disclosed but remain unpatched by organizations. Flashpoint claimed that n-days now represent over 80% of the CVEs listed in its Known Exploited Vulnerabilities (KEV) database, VulnDB."
https://www.infosecurity-magazine.com/news/time-exploit-plummets-nday-flaws/
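Flashpoint's headline figure checks out arithmetically: falling from a 745-day time-to-exploit in 2020 to 44 days last year is indeed a roughly 94% decline.

```python
# Verify the reported ~94% drop in time-to-exploit (TTE).
tte_2020, tte_now = 745, 44  # days, per the Flashpoint study
decline = (tte_2020 - tte_now) / tte_2020
print(f"TTE decline: {decline:.0%}")  # → TTE decline: 94%
```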
Reference
Electronic Transactions Development Agency (ETDA)