Attackers often go after outdated and unused test accounts, or stumble upon publicly accessible cloud storage holding critical data that nobody has touched in years. Sometimes an attack exploits a vulnerability in an app component that was patched by the vendor, say, two years ago — yet the fix was never applied. As you read these breach reports, a common theme emerges: the attacks leveraged something outdated: a service, a server, a user account… Pieces of corporate IT infrastructure that sometimes fall off the radar of IT and security teams. They become, in essence, unmanaged, useless, and simply forgotten. These IT zombies create risks for information security and regulatory compliance, and generate unnecessary operational costs. They're essentially a form of shadow IT — with one key difference: nobody wants, knows about, or benefits from these assets.
In this post, we try to identify which assets demand immediate attention, how to find them, and what a response should look like.
Physical and virtual servers
Priority: high. Vulnerable servers are entry points for cyberattacks, and they continue consuming resources while creating regulatory compliance risks.
Prevalence: high. Physical and virtual servers are commonly orphaned in large infrastructures following migration projects, or after mergers and acquisitions. Test servers no longer used after IT projects go live, as well as web servers for outdated projects left running without a domain, are also frequently forgotten. The scale of the problem is illustrated by Let's Encrypt statistics: in 2024, half of all certificate renewal requests came from devices no longer associated with the requested domain. And there are roughly a million of these devices in the world.
Detection: the IT department needs to implement an Automated Discovery and Reconciliation (AD&R) process that combines the results of network scanning and cloud inventory with data from the Configuration Management Database (CMDB). It enables the timely identification of outdated or conflicting information about IT assets, and helps locate the forgotten assets themselves.
This data should be supplemented by external vulnerability scans that cover all of the organization’s public IPs.
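Here's a minimal sketch of what the reconciliation step might look like in Python, assuming the scan results and CMDB records have been exported to CSV files (the file and column names are placeholders):

```python
import csv

def load_ips(path, column):
    # Load one column of a CSV export into a set for fast diffing
    with open(path, newline="") as f:
        return {row[column] for row in csv.DictReader(f)}

# Hypothetical exports: one from the network scanner, one from the CMDB
scanned = load_ips("network_scan.csv", "ip")
registered = load_ips("cmdb_export.csv", "ip_address")

print("On the network but missing from the CMDB (candidate zombies):")
for ip in sorted(scanned - registered):
    print(" ", ip)

print("In the CMDB but not seen on the network (stale records):")
for ip in sorted(registered - scanned):
    print(" ", ip)
```

Real AD&R tooling does far more (fingerprinting, ownership lookup, ticket creation), but the core of the process is exactly this kind of two-way diff.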
Response: establish a formal, documented process for decommissioning/retiring servers. The process must verify that all data has been fully migrated, and that the data remaining on the server is then securely destroyed. Following these steps, the server can be powered down, recycled, or repurposed. Until all procedures are complete, the server should be moved to a quarantined, isolated subnet.
To mitigate this issue for test environments, implement an automated process for their creation and decommission. A test environment should be created at the start of a project, and dismantled after a set period or following a certain duration of inactivity. Strengthen the security of test environments by enforcing their strict isolation from the primary (production) environment, and by prohibiting the use of real, non-anonymized business data in testing.
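As an illustration of automated teardown, here's a hedged sketch for AWS-hosted test environments. It assumes test VMs are tagged env=test with an expires date at creation time; that tagging convention is ours for the example, not a built-in AWS feature:

```python
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2")
now = datetime.now(timezone.utc)

# Find running instances tagged as test environments
reservations = ec2.describe_instances(
    Filters=[{"Name": "tag:env", "Values": ["test"]},
             {"Name": "instance-state-name", "Values": ["running"]}],
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        expires = tags.get("expires")  # e.g. "2025-06-30", set at creation
        if not expires:
            continue
        # Treat the tag as a UTC date; terminate anything past its expiry
        if datetime.fromisoformat(expires).replace(tzinfo=timezone.utc) < now:
            print("terminating expired test instance:", instance["InstanceId"])
            ec2.terminate_instances(InstanceIds=[instance["InstanceId"]])
```

Run on a schedule, a job like this makes "forgot to dismantle the test server" structurally impossible.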
Forgotten user, service, and device accounts
Priority: critical. Inactive and privileged accounts are prime targets for attackers seeking to establish network persistence or expand their access within the infrastructure.
Prevalence: very high. Technical service accounts, contractor accounts, and non-personalized accounts are among the most commonly forgotten.
Detection: conduct regular analysis of the user directory (Active Directory in most organizations) to identify all types of accounts that have seen no activity over a defined period (a month, quarter, or year). Concurrently, it’s advisable to review the permissions assigned to each account, and remove any that are excessive or unnecessary.
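For Active Directory, this check is easy to script. Below is a minimal sketch using the Python ldap3 library to list accounts with no logon in the past 90 days; the server address, credentials, and search base are placeholders:

```python
from datetime import datetime, timedelta, timezone
from ldap3 import Server, Connection

# lastLogonTimestamp is a Windows FILETIME: 100-ns intervals since 1601-01-01
cutoff = datetime.now(timezone.utc) - timedelta(days=90)
filetime = int((cutoff - datetime(1601, 1, 1, tzinfo=timezone.utc))
               .total_seconds() * 10_000_000)

server = Server("ldaps://dc.example.com")  # placeholder domain controller
with Connection(server, user="EXAMPLE\\auditor", password="...",
                auto_bind=True) as conn:
    conn.search(
        "dc=example,dc=com",
        f"(&(objectCategory=person)(lastLogonTimestamp<={filetime}))",
        attributes=["sAMAccountName", "lastLogonTimestamp"],
    )
    for entry in conn.entries:
        print(entry.sAMAccountName, entry.lastLogonTimestamp)
```

Keep in mind that lastLogonTimestamp replicates with a delay of up to two weeks, so treat the output as a candidate list for review, not a deletion queue.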
Response: after checking with the relevant business-side service owner or the employee's supervisor, simply deactivate or delete outdated accounts. A comprehensive Identity and Access Management (IAM) system offers a scalable solution to this problem by tightly integrating account creation, deletion, and permission assignment with HR processes.
For service accounts, it's also essential to routinely review both password strength and access-token expiration dates — rotating them as necessary.
Forgotten data stores
Priority: critical. Poorly controlled data in externally accessible databases, cloud storage and recycle bins, and corporate file-sharing services — even “secure” ones — has been a key source of major breaches in 2024–2025. The data exposed in these leaks often includes document scans, medical records, and personal information. Consequently, these security incidents also lead to penalties for non-compliance with regulations such as HIPAA, GDPR, and other data-protection frameworks governing the handling of personal and confidential data.
Prevalence: high. Archive data, data copies held by contractors, legacy database versions from previous system migrations — all of these often remain unaccounted for and accessible for years (even decades) in many organizations.
Detection: given the vast variety of data types and storage methods, a combination of tools is essential for discovery:
- Native audit subsystems within major vendor platforms, such as Amazon Macie and Microsoft Purview
- Specialized Data Discovery and Data Security Posture Management (DSPM) solutions
- Automated analysis of inventory logs, such as S3 Inventory
Unfortunately, these tools are of limited use if a contractor creates a data store within its own infrastructure. Controlling that situation requires contractual stipulations granting the organization’s security team access to the relevant contractor storage, supplemented by threat intelligence services capable of detecting any publicly exposed or stolen datasets associated with the company’s brand.
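To illustrate the automated-audit approach for storage you do control, here's a minimal sketch using boto3 that flags S3 buckets without a full public-access block; anything it prints deserves a manual review:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())  # all four block settings enabled
    except ClientError:
        fully_blocked = False  # no public-access block configured at all
    if not fully_blocked:
        print(f"review needed: bucket {name} is not fully blocked from public access")
```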
Response: analyze access logs and integrate the discovered storage into your DLP and CASB tools to monitor its usage — or to confirm it's truly abandoned. Restrict access to the storage using the controls available. If necessary, create a secure backup, then delete the data. At the organizational policy level, it's crucial to establish retention periods for different data types, mandating their automatic archiving and deletion upon expiry. Policies must also define procedures for registering new storage systems, and explicitly prohibit the existence of ownerless data that's accessible without restrictions, passwords, or encryption.
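Retention rules like these can often be enforced natively by the storage platform. A hedged example using boto3 to apply a lifecycle policy to an S3 bucket; the bucket name and retention windows are hypothetical:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical policy: archive objects to Glacier after 90 days,
# delete them entirely after a year
s3.put_bucket_lifecycle_configuration(
    Bucket="legacy-archive",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "retention-policy",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to every object in the bucket
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```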
Unused applications and services on servers
Priority: medium. Vulnerabilities in these services increase the risk of successful cyberattacks, complicate patching efforts, and waste resources.
Prevalence: very high. Services are often enabled by default during server installation, remain after testing and configuration work, and continue to run long after the business process they supported has become obsolete.
Detection: through regular audits of software configurations. For effective auditing, servers should follow a role-based model, with each server role having a corresponding list of required software. In addition to the CMDB, a broad spectrum of tools helps with this audit: tools like OpenSCAP and Lynis — focused on policy compliance and system hardening; multi-purpose tools like osquery; vulnerability scanners such as OpenVAS; and network traffic analyzers.
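As a simple illustration of such an audit, here's a sketch that compares the ports a server actually listens on (queried via osquery) against a per-role allowlist; the role names and allowlists are hypothetical:

```python
import json
import subprocess

# Hypothetical per-role allowlists of ports expected to be open
ALLOWED_PORTS = {"web": {22, 80, 443}, "db": {22, 5432}}

def listening_ports():
    # osquery exposes open sockets via its listening_ports table
    result = subprocess.run(
        ["osqueryi", "--json",
         "SELECT DISTINCT port FROM listening_ports WHERE port != 0;"],
        capture_output=True, text=True, check=True,
    )
    return {int(row["port"]) for row in json.loads(result.stdout)}

role = "web"  # in practice, looked up in the CMDB for this host
unexpected = listening_ports() - ALLOWED_PORTS[role]
if unexpected:
    print(f"unexpected listening ports for role '{role}': {sorted(unexpected)}")
```

Every unexpected port is either missing documentation or a service that shouldn't be running — both worth a ticket.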
Response: conduct a scheduled review of server functions with their business owners. Any unnecessary applications or services found running should be disabled. To minimize such occurrences, implement the principle of least privilege organization-wide and deploy hardened base images or server templates for standard server builds. This ensures no superfluous software is installed or enabled by default.
Outdated APIs
Priority: high. APIs are frequently exploited by attackers to exfiltrate large volumes of sensitive data, and to gain initial access into the organization. In 2024, the number of API-related attacks increased by 41%, with attackers specifically targeting outdated APIs, as these often provide data with fewer checks and restrictions. This was exemplified by the leak of 200 million records from X/Twitter.
Prevalence: high. When a service transitions to a new API version, the old one often remains operational for an extended period, particularly if it’s still used by customers or partners. These deprecated versions are typically no longer maintained, so security flaws and vulnerabilities in their components go unpatched.
Detection: monitor traffic to individual APIs at the WAF or NGFW level. This helps detect anomalies that may indicate exploitation or data exfiltration, and also identifies APIs receiving minimal traffic.
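A rough way to spot low-traffic API versions is to count requests per version prefix in the access logs. A minimal sketch, assuming a common-log-format file and /v1/-style path versioning (the log path is a placeholder):

```python
import re
from collections import Counter

# Match the version prefix in request lines like: "GET /v1/users HTTP/1.1"
version_re = re.compile(r'"[A-Z]+ (/v\d+)/')

counts = Counter()
with open("access.log") as log:  # hypothetical access-log export
    for line in log:
        match = version_re.search(line)
        if match:
            counts[match.group(1)] += 1

total = sum(counts.values())
for version, n in counts.most_common():
    print(f"{version}: {n} requests ({n / total:.1%})")
# Versions with a tiny share of traffic are decommissioning candidates
```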
Response: for the identified low-activity APIs, collaborate with business stakeholders to develop a decommissioning plan, and migrate any remaining users to newer versions.
For organizations with a large pool of services, this challenge is best addressed with an API management platform in conjunction with a formally approved API lifecycle policy. This policy should include well-defined criteria for deprecating and retiring outdated software interfaces.
Software with outdated dependencies and libraries
Priority: high. This is where large-scale, critical vulnerabilities like Log4Shell hide, leading to organizational compromise and regulatory compliance issues.
Prevalence: very high, especially in large-scale enterprise management systems, industrial automation systems, and custom-built software.
Detection: use a combination of vulnerability management (VM/CTEM) systems and software composition analysis (SCA) tools. For in-house development, it’s mandatory to use scanners and comprehensive security systems integrated into the CI/CD pipeline to prevent software from being built with outdated components.
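For Python-based projects, such a CI gate can be as simple as the sketch below, which blocks the build when pip-audit reports known-vulnerable dependencies (pip-audit is assumed to be installed in the build environment):

```python
import subprocess
import sys

# pip-audit checks installed dependencies against vulnerability databases;
# it exits non-zero when known-vulnerable packages are found
result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)

if result.returncode != 0:
    sys.exit("build blocked: vulnerable dependencies detected")
```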
Response: company policies must require IT and development teams to systematically update software dependencies. When building internal software, dependency analysis should be part of the code review process. For third-party software, it’s crucial to regularly audit the status and age of dependencies.
For external software vendors, updating dependencies should be a contractual requirement affecting support timelines and project budgets. To make these requirements feasible, it’s essential to maintain an up-to-date software bill of materials (SBOM).
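As a small illustration of how an SBOM feeds such audits, here's a sketch that reads a CycloneDX-format SBOM and lists its components; the file path is a placeholder, while the field names follow the CycloneDX JSON spec:

```python
import json

with open("sbom.json") as f:  # hypothetical CycloneDX export
    sbom = json.load(f)

for component in sbom.get("components", []):
    name = component.get("name", "?")
    version = component.get("version", "?")
    print(f"{name}=={version}")
# Feed this inventory to an SCA tool or a vulnerability-database lookup
# to flag outdated or vulnerable dependencies
```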
You can read more about timely and effective vulnerability remediation in a separate blog post.
Forgotten websites
Priority: medium. Forgotten web assets can be exploited by attackers for phishing, hosting malware, or running scams under the organization's brand, damaging its reputation. In more serious cases, they can lead to data breaches, or serve as a launchpad for attacks against the company itself. A specific subset of this problem involves forgotten domains that were used for one-time activities, expired, and weren't renewed — making them available for purchase by anyone.
Prevalence: high — especially for sites launched for short-term campaigns or one-off internal activities.
Detection: the IT department must maintain a central registry of all public websites and domains, and verify the status of each with its owners on a monthly or quarterly basis. Additionally, scanners or DNS monitoring can be utilized to track domains associated with the company’s IT infrastructure. Another layer of protection is provided by threat intelligence services, which can independently detect any websites associated with the organization’s brand.
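Part of this verification is easy to automate. A minimal sketch that checks whether each registered domain still resolves, and when its TLS certificate expires (the domain list stands in for the central registry):

```python
import datetime
import socket
import ssl

DOMAINS = ["promo-2023.example.com", "www.example.com"]  # hypothetical registry

for domain in DOMAINS:
    try:
        socket.gethostbyname(domain)
    except socket.gaierror:
        print(f"{domain}: no longer resolves; lapsed or deleted?")
        continue
    try:
        with ssl.create_default_context().wrap_socket(
            socket.create_connection((domain, 443), timeout=5),
            server_hostname=domain,
        ) as tls:
            cert = tls.getpeercert()
        expires = datetime.datetime.fromtimestamp(
            ssl.cert_time_to_seconds(cert["notAfter"]),
            tz=datetime.timezone.utc,
        )
        print(f"{domain}: certificate expires {expires:%Y-%m-%d}")
    except OSError as err:
        print(f"{domain}: TLS check failed ({err})")
```

A domain that stops resolving, or a certificate nobody renews, is often the first sign that a site has been orphaned.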
Response: establish a policy that schedules website shutdown after a fixed period once active use ends. Implement an automated domain registration and renewal system to prevent the loss of control over the company's domains.
Unused network devices
Priority: high. Routers, firewalls, surveillance cameras, and network storage devices that are connected but left unmanaged and unpatched make for the perfect attack launchpad. These forgotten devices often harbor vulnerabilities, and almost never have proper monitoring — no EDR or SIEM integration — yet they hold a privileged position in the network, giving hackers an easy gateway to escalate attacks on servers and workstations.
Prevalence: medium. Devices get left behind during office moves, network infrastructure upgrades, or temporary workspace setups.
Detection: use the same network inventory tools mentioned in the forgotten servers section, as well as regular physical audits to compare network scans against what’s actually plugged in. Active network scanning can uncover entire untracked network segments and unexpected external connections.
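For a quick reality check of a single subnet, an ARP sweep works well. A minimal sketch using scapy (requires root privileges; the subnet is a placeholder):

```python
from scapy.all import ARP, Ether, srp  # pip install scapy; run as root

# Broadcast an ARP who-has for every address in the (hypothetical) subnet
answered, _ = srp(
    Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst="192.168.1.0/24"),
    timeout=3,
    verbose=False,
)
for _, reply in answered:
    print(f"{reply.psrc}\t{reply.hwsrc}")  # IP and MAC to diff against the CMDB
```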
Response: ownerless devices can usually be pulled offline immediately. But beware: cleaning them up requires the same care as scrubbing servers — to prevent leaks of network settings, passwords, office video footage, and so on.