Apple recently came under fire after a viral TikTok video exposed a peculiar glitch in its iPhone voice-to-text feature.
The issue, highlighted by Fox News Digital, saw the word “Trump” momentarily appear when users dictated the word “racist,” before the transcription corrected itself to the intended word. The anomaly was inconsistent: other unrelated terms, such as “Reinhold” or “you,” also appeared sporadically.
Responding to the incident, an Apple spokesperson confirmed the glitch and attributed it to phonetic overlap in the speech recognition model that powers the dictation feature. “We are aware of an issue with the speech recognition model that powers Dictation, and we are rolling out a fix as soon as possible,” the spokesperson said. Apple clarified that the bug could affect other words starting with “r” and emphasized that it was purely technical, not an indication of political bias.
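For readers curious how such a substitution can happen mechanically, the toy sketch below illustrates the general idea behind Apple's explanation. Every phoneme string, weight, and prior here is invented purely for demonstration and does not describe Apple's actual model: it simply shows how a recognizer that blends weak early acoustic evidence with a language-model prior can briefly surface a phonetically overlapping, high-prior word in its interim transcript before more audio resolves the intended one.

```python
# Toy illustration only (not Apple's implementation): an ASR decoder's
# interim hypothesis can briefly differ from its final one when acoustic
# evidence is still weak and a language-model prior dominates.
from difflib import SequenceMatcher

# Hypothetical ARPAbet-style phoneme strings for a few candidate words.
PHONEMES = {
    "racist": "R EY S IH S T",
    "Trump": "T R AH M P",
    "Reinhold": "R AY N HH OW L D",
    "you": "Y UW",
}

def phonetic_similarity(a: str, b: str) -> float:
    """Crude acoustic-confusability proxy: phoneme-sequence overlap ratio."""
    return SequenceMatcher(None, PHONEMES[a].split(), PHONEMES[b].split()).ratio()

def top_hypothesis(target: str, audio_fraction: float, lm_prior: dict) -> str:
    """Score candidates as a mix of acoustic match and language-model prior.

    Early on (small audio_fraction), acoustic evidence carries little weight,
    so a high-prior candidate with some phonetic overlap can briefly outrank
    the word actually being spoken; later, acoustics dominate and the
    hypothesis settles on the intended word.
    """
    scores = {
        cand: audio_fraction * phonetic_similarity(target, cand)
        + (1 - audio_fraction) * lm_prior.get(cand, 0.0)
        for cand in PHONEMES
    }
    return max(scores, key=scores.get)

# Invented priors for demonstration: a frequently transcribed proper noun
# is given a high prior, which drives the transient mis-hypothesis.
priors = {"Trump": 0.9, "you": 0.6, "racist": 0.3, "Reinhold": 0.2}

print("interim:", top_hypothesis("racist", audio_fraction=0.2, lm_prior=priors))
print("final:  ", top_hypothesis("racist", audio_fraction=0.9, lm_prior=priors))
```

Run as written, the interim pass surfaces the wrong high-prior word and the final pass corrects to “racist,” mirroring the flicker users recorded; the real system is far more complex, but the transient-hypothesis dynamic is the same class of behavior Apple described.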
Users noticed that Apple's iPhone voice-to-text feature sometimes swaps out the word "racist" for "Trump" 👀
Details here 👉 https://t.co/TLkBEfqw38 pic.twitter.com/HNtIo4tSkf
— TMZ (@TMZ) February 25, 2025
The incident echoes earlier controversies over AI-driven features and political bias in big tech. In September 2024, for instance, Amazon faced backlash when its virtual assistant Alexa appeared to promote Kamala Harris while declining to offer similar content for Donald Trump or Joe Biden. The issue, traced to manually programmed overrides, was swiftly addressed by Amazon, which audited its election-related prompts to ensure neutrality.
While Apple maintains that the voice-to-text glitch does not reflect any political preference, such incidents continue to raise concerns about the role of artificial intelligence in shaping public discourse and about the need for tech companies to uphold neutrality and accuracy.