A new and deeply troubling artificial intelligence service has emerged from the depths of the dark web, rapidly drawing the attention of cybersecurity experts worldwide — and for all the wrong reasons. Known as DIG AI, this system represents a dangerous evolution in the misuse of generative AI: an unrestricted, anonymous, Tor-only AI service allegedly designed with no ethical or technical safeguards.
According to researchers at Resecurity, the first traces of DIG AI appeared on September 29, 2025. Within days, the service was being aggressively promoted across darknet forums by its administrator, operating under the alias "Pitch." The creator claimed that the platform processed around 10,000 prompts in its first 24 hours, signaling immediate adoption within underground communities.
What Makes DIG AI Different From Previous Criminal AIs
Unlike earlier malicious AI offerings such as FraudGPT or WormGPT, DIG AI breaks several “norms” of underground services:
No registration
No subscription or payment
No user accounts
Access exclusively via the Tor network
This frictionless model dramatically lowers the barrier to entry. Anyone with Tor access can immediately interact with the system, making it far more accessible than previous AI tools marketed to cybercriminals.
The administrator also claims that DIG AI runs entirely on self-hosted infrastructure, avoiding mainstream cloud providers. This design choice increases resilience against takedowns, monitoring, and external pressure — a familiar strategy in darknet service architecture.
Capabilities That Alarm Security Researchers
Resecurity conducted controlled tests on DIG AI and reported deeply concerning results. The system reportedly:
Responds without hesitation to prompts involving explosives, narcotics and other prohibited substances, and fraud
Generates functional malicious code, including backdoors and malware installers
Produces outputs assessed as immediately usable in real-world attacks
From a defensive perspective, this is a critical shift. The AI doesn’t merely assist ideation — it operationalizes cybercrime, automating steps that previously required technical skill and experience.
The Most Disturbing Aspect: Abusive and Illegal Content
Among all findings, analysts describe DIG AI's handling of sexual content involving minors as the most alarming dimension.
According to the investigation, the system was capable of:
Creating fully synthetic child sexual abuse material
Manipulating images of real minors, transforming benign photographs into illicit content
This places DIG AI in a category far beyond “dual-use” technology. It directly facilitates some of the most severe crimes recognized under international law, triggering serious ethical, legal, and enforcement concerns.
Limitations Today, Risks Tomorrow
Despite its dangerous capabilities, DIG AI is not without constraints. Some operations reportedly take several minutes to complete, suggesting limited computational resources.
However, this limitation is not structural — it’s economic. Analysts warn that introducing paid tiers or scaling hardware could easily eliminate current bottlenecks. In other words, DIG AI’s present inefficiencies should not be mistaken for safety.
An Ecosystem That Reveals Its Target Audience
DIG AI is already being advertised via banner ads across Tor-based marketplaces associated with:
Drug trafficking
Stolen payment card data
Compromised identity resale
This context leaves little ambiguity about the intended audience. The service is not a neutral experiment gone wrong — it appears deliberately positioned within the cybercrime economy.
Notably, the administrator claims that one of DIG AI’s three models is based on ChatGPT Turbo, though this assertion remains unverified. Regardless of the claim’s accuracy, it highlights a broader trend: criminal actors are increasingly adept at repurposing or replicating large language models.
Between 2024 and 2025, mentions of malicious AI tools on underground forums reportedly tripled, reflecting both growing demand and rising technical competence among threat actors.
Looking Ahead: 2026 and Beyond
Security analysts warn that the real impact of tools like DIG AI may unfold starting in 2026, when AI-driven cybercrime reaches new levels of scale and automation.
This concern is amplified by upcoming global events such as:
The 2026 Winter Olympics
The 2026 FIFA World Cup
Large international events have historically been magnets for cyberattacks, disinformation campaigns, and fraud. AI systems that lower skill barriers and multiply attack output could significantly expand the pool of capable attackers during these high-risk periods.
Why DIG AI Matters — Even If It Disappears
Even if DIG AI itself is eventually taken offline, its emergence sends a clear signal:
criminal AI is no longer experimental — it is operational.
By automating complex tasks, generating ready-to-use malware, and removing entry barriers, systems like DIG AI fundamentally change the threat landscape. The danger is not just what this single tool can do, but what it represents: a future where advanced cybercrime capabilities are available to virtually anyone, anonymously and at scale.
For defenders, policymakers, and AI developers alike, DIG AI is not an anomaly — it is a warning.