What would you think if you were told that Artificial Intelligence is the latest tool in fighting racism? Believe it or not, this is the latest ‘thing’ in AI: researchers from the University of Virginia in the U.S. have created a system which endeavours to assess one’s tendency to bigotry. As incredible as it sounds, the device they are developing will be able to raise the alarm when someone’s thoughts tend towards racial bias.
Disclaimer: this doesn’t mean the machine will simply detect bigots walking down the street. Nor will it be able to identify acts of racial prejudice or discrimination, or red-flag racial thought patterns through vital signs, the way a smartwatch monitors heart rate or steps. Perhaps that can be developed in the future, but right now, the scientists working on this project have something else in mind. The research is focused on how automatic racial reflexes, or racial conditioning, interact with actual racism.
What the team has come up with is a study built around the ‘Implicit Association Test’. It resembles traditional psychological association tests. In this case, the idea is to match up words and images as quickly as possible. For instance, ‘light skin’, ‘dark skin’, ‘good’ and ‘bad’ would be among the concepts used in the study.
Furthermore, the research hopes to find out how people automatically respond to difference. For instance, how do you view people who are not of the same race or ethnic background? Do they seem to represent danger or hostility, or provoke some desire for distance? Would an individual react differently to people of different races in the same situation?
What the research team did to answer these questions was take 76 volunteers and put them through the Implicit Association Test. While they answered the questions, their physiological responses were recorded by a wearable device. The experiment also involved building a machine learning system to draw inferences from the recorded data. The idea is to determine whether any pattern in the recorded physiological responses would indicate racist reflexes.
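The paper does not spell out the exact features or model here, but the general setup can be sketched as a supervised classifier trained on per-participant physiological features against an IAT-derived bias label. The sketch below uses synthetic data and hypothetical feature names (heart rate, skin conductance, response-time gap) purely for illustration; it is not the team’s actual pipeline.

```python
# Illustrative sketch only: features, labels, and model choice are assumptions,
# not the study's actual method. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 76  # matches the study's participant count

# Hypothetical per-participant physiological features:
# mean heart rate, skin conductance, and IAT response-time gap.
X = rng.normal(size=(n, 3))

# Synthetic binary label standing in for "implicit bias per the IAT".
y = (X[:, 2] + 0.5 * rng.normal(size=n) > 0).astype(int)

# Train a simple classifier and estimate accuracy with cross-validation,
# analogous to how the study reports a prediction accuracy figure.
clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.3f}")
```

On real wearable data the features would come from signal processing of the raw sensor streams, and the reported accuracy would be the analogue of the study’s 76.1% figure.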
Here’s what the team has found out so far: “Our machine learning and statistical analysis show that implicit bias can be predicted from physiological signals with 76.1% accuracy.”
An analysis of their research paper indicates that 76% is not enough to declare a win in a machine learning undertaking. Nor is a methodology that uses cartoon faces of people of different ethnicities sufficient to elicit authentic physiological responses. But the idea behind the study isn’t to create a gadget that would single out bigots. It’s more about understanding the associations between thought processes and conditioning to skin colour, as well as the physiological reactions they trigger.
What the research may eventually bring to light are the subconscious thinking patterns behind radical or paranoid behaviour towards a particular race. It intends to demonstrate that even those who are conscious of racial discrimination and profess to oppose it may in the end have automatic racist responses they are not aware of. Basically, it means one may be a bigot without intending or feeling it.
Moreover, rather than flagging racial prejudice, AI is used in this research to predict racist patterns. It would be like a doctor associating a symptom with an underlying ailment. In this sense, AI would be a tool to pre-empt as opposed to name and shame.