Researchers at Cornell University have discovered a new way for AI tools to steal your data — keystrokes. A new research paper details an AI-driven attack that can steal passwords with up to 95% accuracy by listening to what you type on your keyboard.
The researchers accomplished this by training an AI model on the sound of keystrokes and deploying it on a nearby phone. The phone's integrated microphone listened for keystrokes on a MacBook Pro and was able to reproduce them with 95% accuracy — the highest accuracy the researchers have seen without the use of a large language model.

The team also tested accuracy during a Zoom call, with the keystrokes recorded by the laptop's microphone during the meeting. In this test, the AI reproduced the keystrokes with 93% accuracy. Over Skype, the model was 91.7% accurate.
Before you throw away your loud mechanical keyboard, it's worth noting that the volume of the keyboard had little to do with the accuracy of the attack. Instead, the AI model was trained on the waveform, intensity, and timing of each keystroke to identify them. For instance, you may press one key a fraction of a second later than others due to your typing style, and the AI model takes that into account.
In the wild, this attack would take the form of malware installed on your phone or another nearby device with a microphone. The malware then listens through the microphone, gathers audio from your keystrokes, and feeds it into an AI model. The researchers used CoAtNet, an AI image classifier, for the attack, and trained the model on 36 keys on a MacBook Pro, each pressed 25 times.
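To see why an image classifier works here, it helps to know that keystroke audio is first turned into a spectrogram — a 2D picture of frequency over time — which a vision model can then classify like any other image. Below is a minimal, illustrative sketch of that idea in Python. It is not the researchers' pipeline: the audio is synthetic, and a simple nearest-template matcher stands in for CoAtNet; the `press` function and the three-key setup are invented for the demo, though the 25 presses per key mirrors the study's training setup.

```python
import numpy as np

def spectrogram(signal, frame=256, hop=128):
    """Short-time Fourier transform magnitudes: the 2D 'image'
    that a vision model like CoAtNet would classify."""
    frames = [signal[i:i + frame] * np.hanning(frame)
              for i in range(0, len(signal) - frame, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def nearest_key(spec, templates):
    """Toy stand-in for the paper's classifier: pick the key whose
    average training spectrogram is closest to the sample."""
    return min(templates, key=lambda k: np.linalg.norm(spec - templates[k]))

rng = np.random.default_rng(0)
t = np.linspace(0, 0.05, 2048)

def press(freq):
    # Hypothetical keystroke: a decaying tone plus background noise.
    return (np.sin(2 * np.pi * freq * t) * np.exp(-60 * t)
            + 0.05 * rng.standard_normal(t.size))

# 25 training presses per "key", echoing the study's setup.
templates = {key: np.mean([spectrogram(press(f)) for _ in range(25)], axis=0)
             for key, f in [("a", 900), ("b", 1400), ("c", 2100)]}

print(nearest_key(spectrogram(press(1400)), templates))
```

The averaging step is what makes the attack robust to background noise: random hiss cancels out across the 25 training presses, while each key's distinctive frequency signature remains.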
There are some ways around this kind of attack, as reported by Bleeping Computer. The first is to avoid typing your password in at all by leveraging features like Windows Hello and Touch ID. You can also invest in a good password manager, which not only avoids the threat of typing in your password but also allows you to use random passwords for all of your accounts.
What won't help is a new keyboard. Even the best keyboards can fall victim to the attack because of how it works, so quieter keyboards won't make a difference.
Unfortunately, this is just the latest in a string of new attack vectors enabled by AI tools, including ChatGPT. Just a week ago, the FBI warned about the dangers of ChatGPT and how it's being used to launch criminal campaigns. Security researchers have also seen new challenges, such as adaptive malware that can quickly change through tools like ChatGPT.