Seven Principles of Building Fair Machine Learning Systems


This post is by Parinaz Sobhani from Georgian


I recently gave a lecture for the Bias in AI course launched by the Vector Institute for small-to-medium-sized companies. In the lecture, I introduced seven principles for building fair Machine Learning (ML) systems, a framework that helps organizations address bias in Artificial Intelligence (AI) systematically and sustainably, going beyond a mere desire to be ethical when deploying AI technologies.

We’ve all heard examples of unfair AI: job ads targeted at people similar to current employees that surface only young men in recruiter inboxes, or cancer detection systems that don’t work as well on darker skin. When building ML models, we need to do better at removing bias, not only for compliance and ethical reasons but also because fair systems earn trust, and trusted companies perform better.

All companies have an opportunity to differentiate themselves on trust. The most trusted companies will become leaders in their markets as customers stay loyal longer and become more willing to share data, creating more business value and attracting new customers. Even at startups, where limited resources are spread thin, it’s essential to establish trust now. That’s why trust is one of our key investment thesis areas. Our trust framework has several pillars: fairness, transparency, privacy, security, reliability and accountability. I’m excited to share more about how we help our portfolio companies create fairer AI solutions.

We care about fairness because we know machine learning models are not perfect. They make errors, just like us. In addition to inheriting bias from …