April 26, 2024

Bias in human decision making is a well-known and oft-studied challenge that touches every facet of life. Artificial intelligence, which relies on machine learning to make the predictions that drive automated decisions, is subject to bias as well, depending on the data the system learns from.

A Chicago-based company that provides AI-based automation technology has updated its platform to reduce biased predictions and enable better automated decisions. InRule Technology said it has introduced bias detection to xAI Workbench, its suite of machine learning modeling engines.

Operating on “fairness through awareness” principles, the company said, the platform now considers all data elements relevant to a prediction, including data that is not used to train the model itself.

“Organizations leveraging machine learning need to be aware of the harmful ways that bias can creep into models, leaving them vulnerable to significant legal risk and reputational harm,” said David Jakopac, Ph.D., vice president of Engineering and Data Science at InRule Technology. “Our explainability and clustering engines provide unprecedented visibility that enables organizations to quantify and mitigate harmful bias through action in order to advance the ethical use of AI.”
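InRule has not published implementation details, but the “fairness through awareness” idea can be illustrated with a standard fairness check: compare a model’s positive-decision rates across groups defined by a sensitive attribute that was deliberately kept out of the training features. The minimal Python sketch below is a generic illustration of that approach, not InRule’s method; demographic parity is one common metric among several, and the data shown is invented.

```python
import numpy as np

def demographic_parity_difference(predictions: np.ndarray,
                                  sensitive: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between the two groups
    defined by a binary sensitive attribute. 0.0 means the model favors
    neither group; larger values signal potential bias."""
    rate_a = predictions[sensitive == 0].mean()
    rate_b = predictions[sensitive == 1].mean()
    return abs(rate_a - rate_b)

# The sensitive attribute (e.g., a protected class) is excluded from the
# model's training features, but retained alongside the evaluation set so
# that bias in the model's outputs can still be measured.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])      # model's binary decisions
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # group labels, unseen by the model

gap = demographic_parity_difference(preds, sensitive)
print(f"Demographic parity gap: {gap:.2f}")  # |0.75 - 0.25| = 0.50 here
```

A gap near zero suggests the model treats the two groups similarly on this metric; a large gap is a signal to investigate which features are acting as proxies for the sensitive attribute.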