Autonomous Systems Functional Safety Overview with Multimodality and Explainability Perspectives

Authors

Pierre Tiako
CITRD Lab, Oklahoma City, OK, USA
Bernard Kamsu-Foguem
University of Toulouse, France

Keywords:

Functional Safety, Explainability, Machine Learning, Deep Learning, Autonomous Vehicles, Industrial Applications, Soft Error, Classification, Multimodality

Synopsis

This is a Chapter in:

Book:

Intelligent Computing and Consumer Support Applications

Series:
Chronicle of Computing

Chapter Abstract:

Functional safety is crucial in automation systems, particularly for autonomous vehicles, because it protects people, the systems or vehicles themselves, and their operating environments from harm. Automated systems can expose operators to severe safety risks. Functional safety aims to minimize the risks associated with autonomous systems in order to protect operators, the surrounding environment with its nearby infrastructure and people, and the systems themselves. This paper overviews several facets of Artificial Intelligence (AI) techniques that reduce functional safety risks. As AI and Machine Learning (ML) advance in both theory and application, new technical challenges arise around Multimodality and Explainability. We discuss these concepts before briefly outlining their perspectives for minimizing safety risks in autonomous systems.

Keywords:
Functional Safety, Machine Learning, Autonomous Vehicle, Soft Error, Explainability, Multimodality, Deep Learning, Neural Network

Cite this paper as:

Tiako P.F., Kamsu-Foguem B. (2023) Autonomous Systems Functional Safety Overview with Multimodality and Explainability Perspectives. In: Tiako P.F. (ed) Intelligent Computing and Consumer Support Applications. Chronicle of Computing. OkIP. https://doi.org/10.55432/978-1-6692-0003-1_1

Presented at:
The 2022 OkIP International Conference on Automated and Intelligent Systems (CAIS) in Oklahoma City, Oklahoma, USA, and Online, on October 3-6, 2022

Contact:
Pierre Tiako
tiako@ieee.org


Published

September 21, 2023

Online ISSN

2831-350X

Print ISSN

2831-3496