New pre-print: “Authorship Impersonation via LLM Prompting does not Evade Authorship Verification Methods”

I’m pleased to announce the pre-print of a new article on LLM impersonation, with Baoyi Zeng as first author. The paper shows that current state-of-the-art authorship verification methods are generally not fooled by an LLM prompted to impersonate someone. Several high-profile forensic linguistic cases have involved a perpetrator manually trying to impersonate another person, such as the victim. We show that if a perpetrator instead used an LLM to do so, these methods would not be misled. The paper is available on arXiv: https://arxiv.org/abs/2603.29454.