The dark side of artificial intelligence

What you need to know:

  • Whereas artificial intelligence (AI) as a discipline has been around since the 1960s, its pervasiveness has accelerated in recent times due to its subset discipline known as machine learning (ML).
  • With ML techniques, computer programs are able to improve outcomes beyond what was originally coded, since the machine is able to refine decisions based on experience and increasing volumes of data sets.
  • It is possible that some programmers and financial institutions could maliciously and deliberately introduce biases by ensuring that candidates from selected parts of the country are charged higher interest rates.
  • Perhaps the software industry should think about self-regulation and codes of practice to avoid government regulation, which often tends to be heavy-handed. If this is not forthcoming, government intervention may need to be triggered to safeguard the rights of citizens.

Whereas artificial intelligence (AI) as a discipline has been around since the 1960s, its pervasiveness has accelerated in recent times due to its subset discipline known as machine learning (ML).

In older AI applications, the programmer was solidly in charge, and the decisions or outcomes made by the computer were restricted to the instructions it was given.

With ML techniques, computer programs are able to improve outcomes beyond what was originally coded, since the machine is able to refine decisions based on experience and increasing volumes of data sets.

A simple example can be seen in the way the Google Maps application is able to dynamically advise road users on the shortest route to a destination, based on past and current traffic conditions on the various alternative routes.

Essentially, the application is “trained” on large historical data sets in order to pick out patterns and provide new insights that would otherwise not be visible to the human mind.
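
To make the idea concrete, here is a minimal sketch, in Python, of how such a recommendation could be derived from accumulated data. The route names, travel times and “current delay” figures are invented for illustration; this is not Google’s actual algorithm, only the general pattern of blending historical observations with live conditions.

    # Hypothetical sketch: recommend a route by blending historical average
    # travel times with a current-traffic reading. As more observations
    # accumulate, the recommendation improves without any change to the code.

    historical_minutes = {
        "Waiyaki Way": [42, 38, 55, 47],      # invented past trip times (minutes)
        "Southern Bypass": [35, 60, 33, 36],
    }
    current_delay = {"Waiyaki Way": 10, "Southern Bypass": 2}  # invented live data

    def expected_time(route: str) -> float:
        """Blend the historical average with today's observed delay."""
        past = historical_minutes[route]
        return sum(past) / len(past) + current_delay[route]

    best = min(historical_minutes, key=expected_time)
    print(f"Suggested route: {best} (about {expected_time(best):.0f} minutes)")

With the figures above, the Southern Bypass wins at roughly 43 minutes; feed in different histories and the answer changes, which is the whole point of learning from data rather than hard-coding a rule.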

This is all nice and progressive. Except when it is not.

UNANTICIPATED OUTCOMES

There are two instances in which ML applications can give outcomes that are not anticipated by the programmers or, in some cases, are deliberately discriminatory.

The first relates to the data sets that the program is mining to produce insights. If the data sets are biased or not representative, then the outcomes tend to be discriminatory or counterproductive.

Imagine an AI recruitment application that is used to identify the best candidate for a position. Assume the application is trained by scanning through 50 or more years of hiring data in order to learn from historically successful hires.

Given that historically, and to a large extent in Kenya, women have been discriminated against when it comes to hiring, the application will wrongly “learn” that women are not good candidates, simply because the historical data contains few records of women in employment.

Basically, the application would reinforce biases or negative outcomes because it is mining patterns from data that is historically biased.
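
A simplified, hypothetical illustration of that mechanism: if a naive scoring rule simply mirrors historical hire rates, and the historical records are skewed against women, the rule reproduces the skew. The figures below are invented purely to show the effect.

    # Hypothetical sketch of biased history becoming a biased model: the
    # "trained" rule below does nothing more than copy the historical hire
    # rate for each group, so past discrimination is carried forward.

    historical_hires = [
        # (gender, was_hired) -- invented records skewed by past discrimination
        ("male", True), ("male", True), ("male", True), ("male", False),
        ("female", False), ("female", False), ("female", True), ("female", False),
    ]

    def hire_rate(gender: str) -> float:
        """Fraction of past candidates of this gender who were hired."""
        outcomes = [hired for g, hired in historical_hires if g == gender]
        return sum(outcomes) / len(outcomes)

    print(f"score(male)   = {hire_rate('male'):.2f}")    # 0.75
    print(f"score(female) = {hire_rate('female'):.2f}")  # 0.25 -- bias reinforced

Real recruitment models are far more elaborate, but the underlying risk is the same: whatever pattern sits in the training data, fair or not, is what the system learns to repeat.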

DELIBERATELY INTRODUCED BIASES

The second instance that can result in AI “failures” may occur even in cases where the data sets are unbiased and representative. What is considered a failure from the customer’s perspective may be considered a positive from the implementer’s perspective.

A case in point relates to the many mobile loan applications that allow customers to apply for loans, mining their spending history, social networks, gender, education background, and location, among other parameters, before deciding whether the candidate qualifies for a loan.

Whereas such applications are widely used in the financial sector, the weighting of the various parameters is left to the discretion of the financial institution and is not visible to regulators since it is often considered intellectual property.

However, it is possible that some programmers and financial institutions could maliciously and deliberately introduce biases by ensuring that candidates from selected parts of the country are charged higher interest rates.

Whereas this may lead to better profits, it is unlikely to be fair treatment for those customers living in less-favoured regions.
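
To see how such a bias could be buried inside an opaque scoring rule, consider the following hypothetical sketch. The base rate, weights and region names are all invented; the point is that a hidden “location” weight can quietly raise the quoted interest rate for applicants from disfavoured regions, and because the weighting is treated as intellectual property, no regulator ever reviews it.

    # Hypothetical sketch of an opaque loan-pricing rule. All weights and
    # region names are invented for illustration only.

    BASE_RATE = 12.0  # hypothetical base annual interest rate (%)

    WEIGHTS = {
        "late_payment": 1.5,        # each late payment adds 1.5 percentage points
        "disfavoured_region": 4.0,  # a hidden premium applied by location
    }

    def quoted_rate(late_payments: int, region: str) -> float:
        """Return the interest rate quoted to the applicant."""
        rate = BASE_RATE + WEIGHTS["late_payment"] * late_payments
        if region in {"Region A", "Region B"}:   # hypothetical disfavoured regions
            rate += WEIGHTS["disfavoured_region"]
        return rate

    print(quoted_rate(late_payments=1, region="Region A"))  # 17.5
    print(quoted_rate(late_payments=1, region="Region C"))  # 13.5

Two applicants with identical repayment histories end up with different prices purely because of where they live, and nothing in the app’s user-facing behaviour reveals why.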

REGULATORY OVERSIGHT

A similar AI algorithm may decide to deny you medical insurance or charge you higher premiums based on your shopping habits. The shopping habits may have indicated that most of the items on your monthly shopping list fall into the category of “junk food” and you automatically get flagged as a risky candidate.

These examples emphasise the need to have frameworks for managing the AI industry, which seems to be evolving faster than regulatory oversight.

Perhaps the software industry should think about self-regulation and codes of practice to avoid government regulation, which often tends to be heavy-handed. If this is not forthcoming, government intervention may need to be triggered to safeguard the rights of citizens in the face of AI-driven solutions.

Mr Walubengo is a lecturer at Multimedia University of Kenya, Faculty of Computing and IT. Email: [email protected], Twitter: @Jwalu