Artificial intelligence (AI) models are widely deployed across industries, yet their decision-making processes often exhibit biases that reflect and reinforce societal inequalities. This review investigates how biases emerge in AI systems, the consequences of biased decision-making, and strategies to mitigate these effects. It follows a systematic review methodology, applying the PRISMA guidelines to identify and analyze the existing literature. Key themes include data-driven biases, algorithmic influences, and ethical considerations in AI deployment. The review concludes with future research directions, emphasizing the need for fairness-aware AI models (for example, models evaluated against group-fairness metrics such as demographic parity, illustrated below), robust governance, and interdisciplinary approaches to bias mitigation.
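As a concrete illustration of what "fairness-aware" evaluation can mean in practice, the minimal sketch below computes the demographic parity difference, a standard group-fairness metric: the gap in positive-prediction rates between two groups defined by a protected attribute. The function name and toy data are illustrative assumptions, not drawn from the reviewed literature.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred : array of binary predictions (0/1)
    group  : array of binary group labels (0/1), e.g. a protected attribute
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive-prediction rate, group 0
    rate_1 = y_pred[group == 1].mean()  # positive-prediction rate, group 1
    return abs(rate_0 - rate_1)

# Toy example: a model that "approves" 80% of group 0 but only 40% of group 1.
y_pred = np.array([1, 1, 1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))  # ~0.4
```

A value near zero indicates parity; fairness-aware approaches surveyed in this literature typically constrain or penalize such gaps during training or post-process predictions to reduce them.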