As AI proliferates across our daily lives, it is increasingly used to evaluate applications for jobs, housing, college admission, and insurance.
The social welfare optimization method can help balance the scales, particularly for less advantaged populations. The researchers propose a welfare-oriented way to assess group decisions, offering a new perspective on what constitutes fairness. One of them described a potential case of “latent bias”: loan applicants from a minority group may be rejected when machine learning finds a correlation between that group and higher default rates. This correlation may arise because minorities disproportionately live in low-income neighborhoods, where residents default more often.
“Dozens of mutually incompatible parity measures have been proposed, and there is no consensus on which is the right one,” he wrote. Another example of AI bias arises in parole decisions. The COMPAS software, used by US courts to assess the likelihood that a defendant will reoffend (recidivism), aspires to so-called “predictive rate parity.” Critics, however, argued that it should instead have aimed for “equalized odds.”
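To make the distinction concrete, here is a minimal sketch with hypothetical, made-up confusion-matrix counts for two groups. It shows how a classifier can satisfy predictive rate parity (equal precision across groups) while violating equalized odds (unequal true-positive and false-positive rates), which mirrors the critique leveled at COMPAS. The group labels and numbers are purely illustrative assumptions, not real data.

```python
# Hypothetical per-group confusion-matrix counts (illustrative only):
# tp = true positives, fp = false positives,
# fn = false negatives, tn = true negatives
groups = {
    "A": {"tp": 40, "fp": 10, "fn": 20, "tn": 30},
    "B": {"tp": 20, "fp": 5,  "fn": 40, "tn": 35},
}

def predictive_rate(c):
    # Positive predictive value (precision):
    # P(actually positive | predicted positive)
    return c["tp"] / (c["tp"] + c["fp"])

def true_positive_rate(c):
    # P(predicted positive | actually positive)
    return c["tp"] / (c["tp"] + c["fn"])

def false_positive_rate(c):
    # P(predicted positive | actually negative)
    return c["fp"] / (c["fp"] + c["tn"])

for name, c in groups.items():
    print(name,
          round(predictive_rate(c), 2),      # A: 0.8, B: 0.8  -> parity holds
          round(true_positive_rate(c), 2),   # A: 0.67, B: 0.33 -> odds differ
          round(false_positive_rate(c), 2))  # A: 0.25, B: 0.12 -> odds differ
```

Here both groups have a predictive rate of 0.8, so predictive rate parity holds, yet group A's true-positive rate is twice group B's, so equalized odds fails. Which of these mutually incompatible criteria to enforce is exactly the kind of choice the parity measures above leave unresolved.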