Algorithms can be trained to mimic human behavior, but what happens when that behavior is biased, or when the developer inadvertently introduces bias into the training process? In this blog post, Mosaic examines how MLOps can be used to monitor algorithmic decisions for bias.

Mosaic – Algorithm Quality Assurance via Auditing & MLOps