The influence of Artificial Intelligence (AI) on our lives has become too important to be ignored and too crucial to be left in the hands of a few.
When computers are relied upon to make vital decisions about whether you are invited to a job interview, eligible for a mortgage or a candidate for close surveillance by law enforcement, it is everybody’s business to know how they work.
AI is the ability of a computer programme or machine to think and learn.
AIs derive their thinking prowess from troves of data on which they feed. The more data an AI system is fed, the smarter it becomes.
The systems look for patterns in those data and use them to spit out decisions.
It, therefore, matters what data AI learns from. It matters how formulas that guide its decisions are tweaked.
As the old computing maxim goes, “garbage in, garbage out”: if the data fed into the systems are distorted, so will the results be.
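The point can be made concrete with a minimal sketch. The toy “model” below simply learns approval rates per group from hypothetical past decisions; because the historical data are already skewed against one group, its predictions reproduce that skew. All groups, records and thresholds here are invented for illustration.

```python
# Minimal sketch of "garbage in, garbage out": a toy model that learns
# approval rates per group from hypothetical historical records.
# All data below are invented for illustration.

from collections import defaultdict

historical_records = [
    # (group, approved) -- past decisions already skewed against group "B"
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", False), ("B", True),
]

def train(records):
    """Learn the historical approval rate for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approvals, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: a / t for g, (a, t) in counts.items()}

def predict(rates, group):
    """Approve whenever the learned rate exceeds 50% -- the old skew persists."""
    return rates[group] > 0.5

rates = train(historical_records)
print(predict(rates, "A"))  # True: group A keeps being approved
print(predict(rates, "B"))  # False: group B keeps being rejected
```

Nothing in the code is malicious; the bias lives entirely in the data the system was fed.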
There is a growing body of evidence showing that AI systems are being gamed to amplify biases relating to race, gender and socio-economic class.
Baked into AI systems, these deep-seated biases are difficult to detect and can cause far-reaching harm to affected individuals and even entire communities.
It is all too easy to resign ourselves to fate when a decision made by a computer system is treated as unquestionable, yet we rarely know whether the process behind a computer-generated decision was above board.
AI systems rely on past experiences and use them to predict the future. But future outcomes cannot just rely on the past alone.
The past is not always a perfect predictor of the future. Relying on uninterrogated data, or consciously changing data or algorithms to give an intended result, is not only socially unacceptable but can be illegal too.
As the use of AI becomes more widespread, biased AI systems are likely to become a pervasive problem.
There is a real danger that vulnerable groups could be hurt or have their rights infringed by prejudiced AI.
For example, recruitment AI systems may filter applicants by surname in ways that exclude people from certain regions or tribes from jobs or other opportunities.
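Such a filter need not be sophisticated to do damage. The sketch below shows how a crude surname screen could silently exclude applicants; the names and the blocklist are entirely invented stand-ins for illustration.

```python
# Hypothetical sketch of a biased recruitment filter: screening applicants
# by surname against an invented blocklist. Names are made up for illustration.

blocked_surnames = {"Example", "Sample"}  # invented stand-in for a real-world bias

applicants = ["Jane Example", "John Doe", "Mary Sample"]

def shortlist(names):
    """Drop anyone whose surname appears on the blocklist."""
    return [n for n in names if n.split()[-1] not in blocked_surnames]

print(shortlist(applicants))  # ['John Doe'] -- two applicants are never seen
```

The excluded candidates never reach a human reviewer, so the bias leaves no visible trace in the hiring process.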
They can be configured to show that people of certain communities, genders or age groups are more likely to commit crime when that may not be true.
Based on these results, authorities may surveil and arrest people who fit the profile depicted by the past data, even when those profiles no longer reflect reality.
These biases can affect one’s ability to access credit facilities, quality education or medical services.
Undoubtedly, AI systems can be incredibly helpful extensions of how humans work, but given that AI architects have inherent biases and blind spots, their outputs should not be taken as gospel truth.
Wambugu is an informatician. Email: [email protected] @samwambugu2