AI Engineer Guide

Malware at scale with LLM

I came across this really good article by Nicholas 👉 Are large language models worth it?

Here are some of the highlights

Previously, when malware developers wanted to monetize their exploits, they would do exactly one thing: encrypt every file on a person’s computer and demand a ransom to decrypt the files. In the future I think this will change.

LLMs allow attackers to instead process every file on the victim’s computer and tailor a blackmail letter specifically to that person. One person may be cheating on their spouse. Another may have lied on their resume. A third may have cheated on an exam at school. It is unlikely that any one person has done any of these specific things, but it is very likely that there exists something blackmailable for every person. Malware + LLMs, given access to a person’s computer, can find that and monetize it.

It feels like science fiction, but it might already be happening.

And things will only get worse.

I think, on our end, we need to be more mindful of what we install on our devices and take measures to avoid becoming a victim of such attacks.

Related paper: https://arxiv.org/abs/2505.11449

#Malware
