
Stop! I need to search your Downloads folder.

In cybersecurity, the battlefield is no longer just the firewall or the network: the new front is in the documents we exchange every day.


Macro-enabled files, disguised links, hidden scripts, and silent payloads let Red Teams slip in without raising suspicion.


The situation: you don’t need to be a big company to be a target.


Today, advanced AI models make it possible to craft personalized malicious documents that adapt to their victims in real time.


From GPTs trained to write convincing emails to adversarial generators that make unique files per session, automation is raising the sophistication of these attacks to levels unthinkable a few years ago.

Image #1 – So, how do we educate against the invisible? And we’re not talking about generic threats.

This review digs into the signs that reveal a tampered document, modern evasion techniques (fileless, living-off-the-land), and suggests concrete practices to build critical digital awareness in students, teachers, professionals, and everyday users.


Here’s a realistic scenario involving a fileless agent: a Red Team targets a university.


They use a language model trained on public institutional content (website, PDFs, press releases) to generate a perfectly written, credible email aimed at academic staff.


The message pretends to be from Academic Management, announcing an “internal regulations update” and includes an attached file: Reglamento_2025.docm.


This document carries a hidden macro that, once enabled, silently runs PowerShell to connect to a command-and-control (C2) server, handing the attacker remote access.
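On the defensive side, a file like Reglamento_2025.docm can be flagged before anyone opens it: modern Office files (.docm, .docx, .xlsm) are ZIP containers, and a VBA macro project is stored inside them as a `vbaProject.bin` part. Below is a minimal Python sketch of that check; `has_vba_macros` is a name chosen here for illustration, not a standard library function.

```python
import zipfile

def has_vba_macros(path: str) -> bool:
    """Return True if an Office OOXML file embeds a VBA macro project.

    OOXML documents (.docx, .docm, .xlsm, ...) are ZIP archives; the
    presence of a 'vbaProject.bin' entry means the file carries macros.
    """
    try:
        with zipfile.ZipFile(path) as zf:
            return any(name.endswith("vbaProject.bin") for name in zf.namelist())
    except zipfile.BadZipFile:
        # Not a ZIP container (e.g. a legacy binary .doc); this simple
        # check does not apply, so report no OOXML macro project found.
        return False
```

A real triage pipeline would go further (e.g. extracting and analyzing the macro source with tools such as oletools' olevba), but even this one check catches the common trick of a "regulations update" document quietly shipping with macros enabled.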


Because the file is uniquely generated, antivirus signatures don’t catch it.
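Why per-victim files defeat signatures is easy to demonstrate: hash-based antivirus signatures match an exact fingerprint, and changing even one byte of a file yields a completely different SHA-256 digest. The short Python sketch below illustrates the effect with two made-up byte strings (the payload contents are placeholders, not real malware).

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Compute the SHA-256 digest of a byte string as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Two payloads differing by a single byte produce unrelated digests,
# so a signature database keyed on known-bad hashes misses the variant.
original = b"MALICIOUS_DOCUMENT_TEMPLATE"
variant = b"MALICIOUS_DOCUMENT_TEMPLATF"  # one byte changed per victim

print(sha256_hex(original) == sha256_hex(variant))  # False
```

This is why defenders increasingly rely on behavioral detection and macro/content inspection rather than file hashes alone.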


Tools enabling these attacks

This mix of generative AI and proven offensive tools makes threats more personalized, evasive, and dangerous.


  1. GPT-4 or similar fine-tuned models for spear phishing

Models like OpenAI GPT-4, LLaMA 3, or Claude can be fine-tuned with institutional language to create highly believable emails, file names, and context.


With enough training, they can perfectly mimic the tone of an official email—even with internal details.


  2. Empire Framework or Mythic (offensive C2 frameworks)

Tools like Empire let attackers embed malicious code in Office macros or obfuscated scripts that run when the file opens.


Sources: MITRE ATT&CK Framework (https://attack.mitre.org), Red Team Field Manual (https://rtfm.dev), Empire Project Documentation (https://bc-security.org/empire), OpenAI GPT-4 Technical Report (https://openai.com/research/gpt-4), Cobalt Strike Documentation (https://www.cobaltstrike.com/help), LOLBAS GitHub Repository (https://github.com/LOLBAS-Project/LOLBAS), NIST SP 800-150 (https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-150.pdf).

 
 

