In a previous message, we discussed the revelation that various videoconferencing apps continue to collect audio data even when microphones are muted, contrary to the expectations of most users.
Taking those concerns a step further: if you want to guard against an unknown microphone surreptitiously listening in on your conversations, what can be done to make sure the recorded audio is worthless?
A research team from Columbia University set out to see if they could design a method to confuse AI systems trying to construct accurate transcripts from collected recordings. They call it Real-Time Neural Voice Camouflage. To measure how effectively their method scrambles spoken words, they targeted DeepSpeech, a widely used speech recognition system, and did so in real time, much as noise-cancellation systems act on sound the instant it occurs.
With microphones now embedded in so many devices around our homes and offices (computers, TVs, kitchen appliances) and on our bodies (smartphones, smart watches), it can be difficult to know where all the microphones are, whether they are active, and whether the audio they collect is being crunched by artificial intelligence programs such as DeepSpeech.
Unlike previous methods, the proposed solution does not rely on white noise to confuse speech recognition systems, an approach with the limitation that the original audible words can often be recovered. Instead, the solution uses AI to rapidly generate a speech-adaptive "smart noise" that obfuscates the sounds being produced in a way that is difficult to reverse, while remaining unobtrusive to the people speaking.
The researchers recognized that if the problem was artificial neural networks becoming adept at speech recognition through machine learning, the solution could also be a trained AI, one taught instead to introduce disruptive elements. Essentially, this is an AI arms race: fighting fire with fire.
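To make the "fighting fire with fire" idea concrete, here is a toy sketch of the classic building block behind such attacks: a gradient-based adversarial perturbation (FGSM-style) that nudges an input just enough to degrade a model's output. This is purely illustrative, using a stand-in linear "recognizer" and made-up numbers; it is not the Columbia system, which trains a predictive network to generate such perturbations for audio in real time.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(p, y):
    # Cross-entropy loss for a single binary prediction.
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # stand-in "recognizer" weights (hypothetical)
x = rng.normal(size=16)   # stand-in audio feature frame (hypothetical)
y = 1                     # the label the recognizer should output

p = sigmoid(w @ x)                 # recognizer's confidence on clean input
grad_x = (p - y) * w               # gradient of the loss w.r.t. the input
eps = 0.1
x_adv = x + eps * np.sign(grad_x)  # small perturbation chosen to raise the loss

p_adv = sigmoid(w @ x_adv)         # confidence on the perturbed input
print("clean loss:", logistic_loss(p, y))
print("adversarial loss:", logistic_loss(p_adv, y))
```

The key point is that the perturbation is tiny (bounded by `eps`) yet deliberately aimed along the model's loss gradient, so the model's confidence drops far more than random noise of the same size would cause. A real-time voice camouflage system must additionally predict what to play *before* the words are spoken, which is the hard part the Columbia work addresses.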
An underlying lesson to draw from this project's success is that we no longer live in a world where the answer to "what if AI is being used unethically?" is "stay away from AI." That is impossible. Instead, we must be ready to engage in ethical uses of AI, not just as a general principle of our profession, but as a defense against uses of AI that are unethical or even overtly malicious.