Google has been criticised for an “inclusive language” feature that will recommend word substitutions for people writing in Google Docs.
The tool will offer guidance to people writing in a way that “may not be inclusive to all readers” in a similar manner to spelling and grammar check systems.
Although the suggestions are just suggestions – they aren’t forced on writers and the tool may be turned off – critics have described it as “speech-policing” and “profoundly clumsy, creepy and wrong”.
The new feature is officially called assistive writing and will be on by default for enterprise users – business customers who may want to nudge their staff towards particular writing styles.
The language the system favours reflects decades of campaigning for gender-neutral terms (“crewed” instead of “manned”) and against phrases that reflect racial prejudice (“deny list” instead of “blacklist”), as well as more modern concerns about the impact of our vocabulary on how we identify people.
But despite enormous developments in how computers understand natural language, the technology is still in its infancy.
Among the words the system has flagged in tests are “mankind”, “housewife”, “landlord” and even a computer “motherboard” – terms that many readers are unlikely to find offensive.
Google states: “Potentially discriminatory or inappropriate language will be flagged, along with suggestions on how to make your writing more inclusive and appropriate for your audience.”
The tool is reminiscent of Microsoft’s infamously annoying assistant Clippy, which interrupted writers’ own prose stylings with often unwelcome suggestions.
Vice News tested the feature by submitting several famous speeches and literary passages, including the Sermon on the Mount from the Bible, and found that most of them prompted recommendations the testers considered unhelpful.
Notably, it also found that an interview with the former Ku Klux Klan leader David Duke – in which he spoke about hunting black people – prompted no inclusivity alerts or warnings.
Silkie Carlo, the director of Big Brother Watch, which campaigns for the protection of civil liberties, told The Telegraph: “Google’s new word warnings aren’t assistive, they’re deeply intrusive. With Google’s new assistive writing tool, the company is not only reading every word you type but telling you what to type.
“This speech-policing is profoundly clumsy, creepy and wrong, often reinforcing bias. Invasive tech like this undermines privacy, freedom of expression and increasingly freedom of thought.”
Lazar Radic of the International Centre for Law and Economics told the newspaper: “Not only is this incredibly conceited and patronising – it can also serve to stifle individuality, self-expression, experimentation, and – from a purely utilitarian perspective – progress.”
Google said: “Assisted writing uses language understanding models, which rely on millions of common phrases and sentences to automatically learn how people communicate. This also means they can reflect some human cognitive biases.
“Our technology is always improving, and we don’t yet (and may never) have a complete solution to identifying and mitigating all unwanted word associations and biases.”