From my understanding, all of these language models can be simplified down to: “Based on all known writing, what’s the most likely next word or phrase given the current text?” Prompt engineering and other fancy terms amount to changing the averages that the statistics produce. So threatening these models shifts the weighting such that the generated text more closely resembles the threatening words and phrases that were used in the dataset (or something along those lines).
Attention Is All You Need: https://arxiv.org/abs/1706.03762
https://en.wikipedia.org/wiki/Attention_Is_All_You_Need
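The “most likely next word” idea above can be sketched as a toy example. This is not a real model: the context-to-score table and all the numbers are made up purely to illustrate how a model scores candidate next words and how a changed prompt shifts those scores.

```python
import math

# Toy stand-in for a language model: a table mapping the current
# context to raw scores (logits) for each candidate next word.
# All contexts and numbers here are invented for illustration.
LOGITS = {
    "the cat sat on the": {"mat": 2.5, "roof": 1.0, "moon": -1.0},
    # A different prompt wording nudges the scores slightly.
    "please, the cat sat on the": {"mat": 2.7, "roof": 1.1, "moon": -0.9},
}

def softmax(scores):
    """Turn raw scores into a probability distribution summing to 1."""
    m = max(scores.values())
    exps = {w: math.exp(s - m) for w, s in scores.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

def next_word(context):
    """Greedy decoding: pick the highest-probability next word."""
    probs = softmax(LOGITS[context])
    return max(probs, key=probs.get)

print(next_word("the cat sat on the"))  # -> mat
```

Real models compute those scores with a trained transformer over billions of parameters rather than a lookup table, but the final step (turn scores into probabilities, pick or sample a word, repeat) is the same shape.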
https://poloclub.github.io/transformer-explainer/
Modern systems are already beyond that; they’re an expansion of:
https://en.m.wikipedia.org/wiki/AutoGPT