Special Issues

Machine Learning Safety and Fairness in Medical Field

Submission Deadline: 04 September 2022 (closed)

Guest Editors

Dr. Mahendrakhan M, Hindusthan Institute of Technology, India.
Dr. Uma Maheshwari, Hindusthan Institute of Technology, India.
Dr. Paulchamy Balaiyah, Hindusthan Institute of Technology, India.

Summary

Safety and fairness have become increasingly important themes in machine learning (ML) in recent years, because ML is now an integral part of our everyday lives. ML is used in a wide range of applications, including traffic prediction, recommendation systems, marketing analysis, medical diagnosis, autonomous driving, robot control, corporate decision support, and even government decision-making. By harnessing the massive quantities of data available in the Big Data age, machine learning algorithms have driven a disruptive transformation of society, automating numerous tasks and, in some applications, even outperforming humans.

Despite these accomplishments, the use of machine learning in many real-world applications has raised new challenges for system trustworthiness. The potential for these algorithms to induce unwanted behaviours is a growing concern in the machine learning community, particularly when they are incorporated into safety-critical systems. ML deployed in the real world has been shown to delay medical diagnoses, cause environmental damage or injury to people, exhibit racist, sexist, and other discriminatory behaviours, and even cause traffic accidents.
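
As an aside on how such discriminatory behaviour is typically quantified, group-fairness metrics compare a model's outcomes across demographic groups. The Python sketch below computes one common diagnostic, the demographic parity difference; the function name and the data are purely illustrative and not a requirement of this call:

    # Demographic parity difference: the absolute gap in
    # positive-prediction rates between two groups (0 = parity).
    import numpy as np

    def demographic_parity_difference(y_pred, group):
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        rate0 = y_pred[group == 0].mean()  # positive rate, group 0
        rate1 = y_pred[group == 1].mean()  # positive rate, group 1
        return abs(rate0 - rate1)

    # Hypothetical predictions for six individuals in two groups:
    print(demographic_parity_difference([1, 0, 1, 1, 0, 0],
                                        [0, 0, 0, 1, 1, 1]))
    # -> 0.333..., a disparity large enough to warrant auditing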

Furthermore, learning algorithms are vulnerable to skilled attackers, who can gain a significant advantage by exploiting flaws in machine learning systems. In light of these issues, one important question arises: can we prevent unwanted behaviours by designing ML algorithms that are both safe and fair?
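
To make the threat concrete, the fast gradient sign method (FGSM) of Goodfellow et al. is a standard illustration of such an exploit: one signed gradient step produces a small perturbation, often imperceptible to humans, that can flip a classifier's prediction. The PyTorch sketch below is illustrative only; the model and the perturbation budget epsilon are placeholders:

    # FGSM: perturb the input in the direction that maximally
    # increases the classifier's loss.
    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon=0.03):
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)    # loss on the true labels
        loss.backward()                        # gradient w.r.t. the input
        x_adv = x + epsilon * x.grad.sign()    # one signed gradient step
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in valid range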

This special issue aims to bring together papers that outline the safety and fairness implications of using machine learning in real-world systems; papers that propose methods to detect, prevent, or mitigate undesired behaviours that ML-based systems may exhibit; and papers that analyse the vulnerability of ML systems to adversarial attacks and possible defence mechanisms. More broadly, we welcome any paper that stimulates progress on topics related to safe and fair ML.


Keywords

Contributions are sought in (but are not limited to) the following topics:
• Machine learning fairness and/or safety
• Safe reinforcement learning
• Safe robot control
• Machine learning biases
• Adversarial examples in machine learning and defence mechanisms
• Applications of transparency to machine learning safety and fairness
• Verification techniques to ensure safety and robustness
• Human-in-the-loop approaches for safety and interpretability
• Machine learning backdoors
• Machine learning transparency
• Robust and risk-sensitive decision making
