Trustworthy AI in neuroengineering: from data management to ethics assessment
This talk will explore the potential of artificial intelligence in neuroengineering. Artificial intelligence has revolutionized the field of biomedical engineering and can now provide healthcare professionals with new algorithms for decision support and context awareness. Artificial intelligence was first described in 1956, when it consisted of simple series of "if-then" rules. It advanced over several decades, and at the end of the 1990s data-driven machine-learning techniques were introduced. This represented a shift from systems designed entirely by humans to systems trained by computers on example data from which features are extracted. While the benefits of these systems are now recognized in the neuroengineering literature, their accountability has received little consideration. Accountability is one of the key requirements for realizing trustworthy (that is, human-centered, ethical, and robust) artificial intelligence. The lack of accountability and of its mechanisms (i.e., the accountability gap) may make it impossible to avoid a potential adverse impact and thus violate the core of artificial-intelligence ethics. This talk will present several case studies spanning the design through the deployment of trustworthy AI algorithms in neuroengineering, from both technical and ethical perspectives.
Lecture video: not available at this time