InRule Technology has launched a bias detection report tool that enables businesses to evaluate their machine learning models for bias.
Adoption of technologies based on machine learning and artificial intelligence—notably RPA and intelligent automation solutions—often suffers when users, or even designers, can’t understand the decisions AI or ML systems make. As a result, a movement to make AI explainable and to detect bias in the underlying algorithms is gaining momentum in the automation space. InRule said the new tool makes automation more understandable and enables no-code, explainable solutions that protect organizations from risk stemming from opaque AI and ML models.
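To make the idea concrete: bias evaluation typically compares a model's outcomes across demographic groups. The sketch below is purely illustrative (it is not InRule's implementation) and shows one common metric, the demographic parity gap, which flags a model when its positive-prediction rate differs too much between groups.

```python
# Illustrative bias check using the demographic parity gap.
# NOT InRule's implementation -- a minimal sketch of the general technique.

def demographic_parity_difference(predictions, groups):
    """Return the gap in positive-prediction rates between groups.

    predictions: list of 0/1 model outputs
    groups: list of group labels of the same length, e.g. "A"/"B"
    """
    rates = {}
    for pred, group in zip(predictions, groups):
        totals = rates.setdefault(group, [0, 0])  # [positives, count]
        totals[0] += pred
        totals[1] += 1
    positive_rates = [pos / count for pos, count in rates.values()]
    return max(positive_rates) - min(positive_rates)

# Hypothetical loan-approval outputs: group "A" is approved 75% of the
# time, group "B" only 25% -- a gap of 0.50.
preds = [1, 1, 1, 0, 1, 0, 0, 0]
grps  = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"demographic parity gap: {demographic_parity_difference(preds, grps):.2f}")
```

A gap near zero suggests the model treats groups similarly on this metric; a large gap is the kind of signal a bias detection report would surface for review. Real tools check several such metrics, since no single one captures every form of bias.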
“Many organizations hesitate to take advantage of the power of machine learning as they are keenly aware that deploying biased models exposes them to a range of regulatory and reputational risks,” said Danny Shayman, InRule’s AI and machine learning product manager. “InRule’s bias detection report adds another layer to our bias detection capability, empowering teams to deploy machine learning models with confidence.”