Bias

Intermediate

Systematic differences in model outcomes across groups; arises from data, labels, and deployment context.


Why It Matters

Addressing bias is crucial for creating fair and equitable AI systems, especially in sensitive areas like hiring, lending, and law enforcement. By mitigating bias, we can enhance the reliability and acceptance of AI technologies, ensuring they serve all segments of society without discrimination.

Bias in machine learning refers to systematic differences in model outcomes across demographic groups. It can arise from several sources: training data that reflects historical or societal prejudices, labeling processes, and the context in which a model is deployed. Fairness metrics such as demographic parity, equal opportunity, and disparate impact quantify how predictions differ across groups. Left unaddressed, biased models can perpetuate or amplify existing inequalities. Common mitigation techniques include re-sampling, re-weighting, and adversarial debiasing, which aim to produce more equitable models. Understanding and addressing bias is central to fairness in AI, since it directly shapes the ethical implications of deploying machine learning systems in real-world applications.
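The fairness metrics named above can be computed directly from binary predictions and group labels. The following is an illustrative sketch, not taken from any particular library; the function names, group labels, and data are hypothetical.

```python
# Sketch of three common group-fairness metrics for binary predictions.
# All names and data here are illustrative, not from a specific library.

def positive_rates(y_pred, groups):
    """Fraction of positive (1) predictions within each group."""
    rates = {}
    for g in set(groups):
        members = [p for p, gg in zip(y_pred, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return rates

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    r = positive_rates(y_pred, groups).values()
    return max(r) - min(r)

def disparate_impact_ratio(y_pred, groups):
    """Ratio of lowest to highest positive-prediction rate (1.0 = parity)."""
    r = positive_rates(y_pred, groups).values()
    return min(r) / max(r)

def equal_opportunity_difference(y_true, y_pred, groups):
    """Gap in true-positive rate (recall) between groups."""
    tprs = {}
    for g in set(groups):
        # Predictions for members of group g whose true label is positive.
        pos = [p for t, p, gg in zip(y_true, y_pred, groups)
               if gg == g and t == 1]
        tprs[g] = sum(pos) / len(pos)
    vals = tprs.values()
    return max(vals) - min(vals)
```

For example, if group A receives predictions [1, 1, 1, 0] and group B receives [1, 0, 0, 0], the positive rates are 0.75 and 0.25, giving a demographic parity difference of 0.5 and a disparate impact ratio of about 0.33; the widely cited "four-fifths rule" flags ratios below 0.8.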

