AI bias solved? New study proposes radical fairness framework



A new study proposes a way to avoid bias in decisions made by AI, such as who is approved for a mortgage or hired for a job.

As AI proliferates across daily life, it is increasingly used to evaluate applications for jobs, housing, college admissions, and insurance.

The researchers propose social welfare optimization, a welfare-oriented way to assess group decisions that offers a new perspective on what constitutes fairness and can help balance the scales, particularly for less advantaged populations. One of the researchers described a potential case of “latent bias,” in which loan applicants from a minority group may be rejected because machine learning finds a correlation between that group and higher default rates. The correlation may arise because minority applicants disproportionately live in low-income neighborhoods, where residents default more often.
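The summary does not spell out the study’s exact formulation, but social welfare optimization is typically framed as maximizing an aggregate utility function under a resource constraint, with a concave utility so that gains to worse-off groups count more. Below is a minimal Python sketch of that general idea; the per-group numbers, the log utility, and the loan-budget constraint are all illustrative assumptions, not details from the paper.

```python
from math import log

# Hypothetical per-group statistics (illustrative only, not from the study).
groups = {
    "A": {"n": 800, "default_rate": 0.05, "base_wealth": 50.0},  # advantaged
    "B": {"n": 200, "default_rate": 0.10, "base_wealth": 20.0},  # disadvantaged
}

GAIN, LOSS = 10.0, 15.0   # wealth gained on repayment / lost on default
BUDGET = 500              # the lender can fund at most 500 loans in total

def expected_gain_per_loan(g):
    return (1 - g["default_rate"]) * GAIN - g["default_rate"] * LOSS

def welfare(approved):
    """Sum of log utilities: concave, so gains to the poorer group count more."""
    total = 0.0
    for k, g in zip(approved, groups.values()):
        # Spread each group's expected gain evenly over its members.
        per_person_gain = k / g["n"] * expected_gain_per_loan(g)
        total += g["n"] * log(g["base_wealth"] + per_person_gain)
    return total

# Exhaustively allocate the loan budget between the two groups.
best = max(
    ((a, BUDGET - a) for a in range(BUDGET + 1)
     if BUDGET - a <= groups["B"]["n"] and a <= groups["A"]["n"]),
    key=welfare,
)
print("welfare-maximizing allocation (A, B):", best)
```

With these made-up numbers, the welfare optimum funds every applicant in the smaller, lower-wealth group before filling the rest of the budget from the larger one, which is exactly the rebalancing toward less advantaged populations that a pure profit maximizer would not produce.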

“Dozens of mutually incompatible parity measures have been proposed, and there is no consensus on which is the right one,” he wrote. Another example of AI bias arises in parole decisions. The COMPAS software, used by US courts to assess the likelihood that a defendant will reoffend (recidivism), aspires to so-called “predictive rate parity.” Critics, however, argued that it should instead have aimed for “equalized odds.”
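The two criteria are standard and easy to state concretely: predictive rate parity asks that the precision (positive predictive value) of the risk score be equal across groups, while equalized odds asks that both the true-positive and false-positive rates be equal. The short sketch below uses made-up confusion-matrix counts, not real COMPAS data, to show how one criterion can hold while the other fails.

```python
# Per-group confusion-matrix counts (illustrative, not real COMPAS data):
# tp = flagged high-risk and reoffended, fp = flagged but did not,
# fn = not flagged but reoffended,      tn = not flagged and did not.
counts = {
    "group_1": {"tp": 40, "fp": 10, "fn": 20, "tn": 130},
    "group_2": {"tp": 80, "fp": 20, "fn": 10, "tn": 90},
}

for name, c in counts.items():
    ppv = c["tp"] / (c["tp"] + c["fp"])  # predictive rate parity compares this
    tpr = c["tp"] / (c["tp"] + c["fn"])  # equalized odds compares this...
    fpr = c["fp"] / (c["fp"] + c["tn"])  # ...and this, across groups
    print(f"{name}: PPV={ppv:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")

# Both groups have PPV = 0.80, so predictive rate parity holds, yet their
# TPR and FPR differ, so equalized odds fails -- the crux of the COMPAS debate.
```

Results like this are the source of the incompatibility the researcher describes: when base rates differ between groups, satisfying one parity measure generally forces violations of another.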
