By Laura Bennett | Thursday 19 Oct 2023 | Reading Time: 3 minutes
Since ChatGPT was unleashed on the world on November 30 last year, the implications of generative artificial intelligence – a form of AI that can create content – have been unfolding in real time.
Early models of the technology have existed since the 1960s; however, ChatGPT is the first time we’re experiencing AI at scale, with global availability.
As AI becomes more commonplace and immersive, questions are being asked about where it draws its source information from, and how that information is influenced by the inherent biases of the people constructing it.
At Austin’s famed SXSW Festival, which arrived in Sydney for the first time this week, Shark Tank Australia’s Dr Catriona Wallace joined journalist Tracey Spicer and Elladex CEO and founder Shivani Gopal to consider how gender inequity could be further fuelled by machine learning, and how that bias becomes entrenched in AI technologies.
[Photo: Tracey Spicer AM GAICD, Facebook]
“Entrepreneurs don’t know what the possibilities are and where it will go,” Dr Catriona said.
While AI itself may not be an issue, it enables a “multiplication effect” of whatever information or task it’s applied to.
“Will AI be built with the same biases that exist within humanity?” Tracey asked.
“The unconscious biases of the past [could] be embedded into the technology.”
According to a UN report referenced by the BBC in 2019, that’s already being witnessed in gendered, service-based technologies (think Siri or Alexa), which default to female voices, and which users opt to have voiced by women when given the choice.
The UN said female voicing reinforces existing beliefs that women are “subservient” and “obliging and eager to please”.
Another factor shaping the development of generative AI is the millions of data points it absorbs from the internet at large, and how that information is (or isn’t) filtered before it forms part of the “library” or “brain” the AI refers to when creating content itself.
There’s a risk of “digesting the information on Google as fact”, Shivani Gopal believes, “which can be extremely limiting when the reality is much bigger”.
For instance, if you currently ask Google to show you a “list of CEOs”, the results are heavily skewed toward males of a narrow racial background. When those results become the baseline knowledge for generative AI, any questions it’s asked – “how to be a CEO”, “who can be a CEO”, “what does a CEO look like” – will be answered on the assumption that CEOs are male and Caucasian.
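To make that dynamic concrete, here is a toy sketch in Python – not any real model’s code, and the data is invented for illustration – showing how a system that simply echoes the most common pattern in its source material will reproduce that material’s skew as its default answer:

```python
from collections import Counter

# A hypothetical, oversimplified "training corpus": imagine these records
# were scraped from skewed search results about CEOs.
corpus = [
    {"role": "CEO", "gender": "male", "ethnicity": "white"},
    {"role": "CEO", "gender": "male", "ethnicity": "white"},
    {"role": "CEO", "gender": "male", "ethnicity": "white"},
    {"role": "CEO", "gender": "female", "ethnicity": "white"},
    {"role": "CEO", "gender": "male", "ethnicity": "asian"},
]

def most_likely_answer(attribute: str) -> str:
    """A naive 'generator' that returns the most frequent value seen in
    the corpus -- with no filtering or re-weighting of the source data."""
    counts = Counter(person[attribute] for person in corpus)
    return counts.most_common(1)[0][0]

# Ask the toy model what a CEO "looks like": it repeats its data's skew.
print(most_likely_answer("gender"))     # -> "male"
print(most_likely_answer("ethnicity"))  # -> "white"
```

Real generative models are vastly more sophisticated than this, but the underlying point the panel made holds: if the source data is skewed and nothing corrects for it, the output inherits the skew.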
Citing a contributor to her book Man-Made, Tracey reminded the audience that “an algorithm is an opinion written in code”.
Retelling a story about the testing of automated soap dispensers – which use the same sensing technology found in driverless cars – Tracey explained that the dispensers worked when white hands were waved under them, or even when a white piece of paper was, but not when a person of colour tested them.
“That’s because the developers were all white,” Tracey said.
Taken to its logical end, that bias becomes a hazard when a driverless car doesn’t recognise someone as human and “fails to stop” because they don’t match the parameters it was given.
There was no suggestion developers weren’t doing their due diligence to rectify issues raised in testing, but the panellists all agreed AI can’t be viewed as an unbiased blank slate.
SXSW Sydney runs until October 22.