Explainable AI: Bridging the Gap between Algorithms and Interpretability

Authors

  • Karpoor Gaurav

Abstract

The rapid adoption of machine learning and artificial intelligence (AI) models in various applications has highlighted the importance of understanding model decisions and making them interpretable. This paper explores the concept of Explainable AI (XAI) and its role in bridging the gap between complex algorithms and interpretability. We delve into various techniques and approaches that enhance the transparency and accountability of AI systems, making them more accessible to users and regulators. We discuss the significance of XAI in healthcare, finance, and autonomous systems and present case studies that demonstrate its practical utility. Additionally, we provide insights into the current challenges and future directions in XAI research.
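Among the model-agnostic techniques the paper surveys, one of the simplest to illustrate is permutation feature importance: shuffle one feature's values and measure how much the model's accuracy drops. The sketch below is a minimal, self-contained illustration under toy assumptions (a hand-written `model`, a tiny synthetic dataset, and hypothetical helper names); it is not code from the paper itself.

```python
import random

# Toy "black-box" model: the prediction is driven almost entirely by
# feature 0 (the 0.1 * x1 term can never flip the decision here).
def model(row):
    return 1 if 2 * row[0] + 0.1 * row[1] > 1 else 0

# Small synthetic dataset; labels come from the model, so base accuracy is 1.0.
X = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0], [0, 1]]
y = [model(r) for r in X]

def accuracy(rows, labels):
    return sum(model(r) == t for r, t in zip(rows, labels)) / len(labels)

def permutation_importance(X, y, feature, n_repeats=10, seed=0):
    """Average accuracy drop after shuffling one feature column."""
    rng = random.Random(seed)
    base = accuracy(X, y)
    drops = []
    for _ in range(n_repeats):
        col = [r[feature] for r in X]
        rng.shuffle(col)
        permuted = [r[:feature] + [v] + r[feature + 1:]
                    for r, v in zip(X, col)]
        drops.append(base - accuracy(permuted, y))
    return sum(drops) / n_repeats

# Feature 0 should matter; feature 1 should not.
imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
```

Because the explanation only queries the model through its predictions, the same procedure applies to any classifier, which is what makes such techniques attractive for auditing opaque systems.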

Published

2022-11-05

Issue

Section

Articles

How to Cite

Explainable AI: Bridging the Gap between Algorithms and Interpretability. (2022). International Journal of Machine Learning and Artificial Intelligence, 3(3). https://jmlai.in/index.php/ijmlai/article/view/8