Large language models are now capable of automating attacks that were previously possible only for human adversaries. In this talk, I discuss several ways that adversaries could misuse current models to cause harm at a larger scale and lower cost than is possible today. For example, we find that recent state-of-the-art models can now find 0-day vulnerabilities in large software projects that humans have extensively tested for decades. These new capabilities will alter the threat landscape and require us to rethink security in the coming years.