by Pronetic | Jan 23, 2024 | News and Events, Newsletters
AI company Anthropic has published a research paper highlighting how large language models (LLMs) can be subverted so that at a certain point, they start emitting maliciously crafted source code. For example, this could involve training a model to write secure code...