Study on Risk Assessment Methods and Multi-Dimensional Control Mechanisms in AI Systems

Authors

  • Chong Lam Cheong, TikTok – ByteDance, San Jose, CA, USA

DOI:

https://doi.org/10.71222/58dr7v22

Keywords:

AI security, quantitative risk assessment, adversarial machine learning, Defense-in-Depth, data poisoning, human-in-the-loop

Abstract

As Artificial Intelligence (AI) rapidly transitions from experimental prototypes to critical infrastructure, the historical "Performance-First" paradigm has left systems inherently vulnerable to adversarial attacks and data manipulation. This dissertation addresses the critical lack of standardized, quantitative methods for managing these risks by introducing the Risk Assessment Model for AI (RAM-AI). Utilizing a dual-domain simulation approach across Computer Vision and Financial datasets, the study empirically quantifies the "robustness boundary" of deep learning models. The findings reveal that single-layer defenses are inadequate; specifically, models exhibit "Data Hypersensitivity," suffering non-linear performance collapse under data poisoning rates as low as 3%. Furthermore, standard accuracy metrics fail to detect high-confidence evasion attacks. To mitigate these vulnerabilities, the research validates a Multi-dimensional Control Framework that integrates technical safeguards—such as adversarial training and input sanitization—with procedural governance, including Human-in-the-Loop (HITL) protocols. The results demonstrate that this Defense-in-Depth architecture significantly recovers system integrity, reducing critical error rates by 88% in high-stakes scenarios, and offers a strategic playbook for Enterprise Risk Management in the era of emerging AI regulations.
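The poisoning-sensitivity finding summarized above can be illustrated with a minimal sketch: flip a small fraction of training labels and re-measure held-out accuracy as the poisoning rate grows. The dataset, model, and rates below are illustrative assumptions for exposition, not the study's actual Computer Vision or Financial setups.

```python
# Illustrative label-flipping poisoning sweep (assumed setup, not the study's).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for rate in [0.0, 0.01, 0.03, 0.05]:            # poisoning rates to probe
    y_poisoned = y_tr.copy()
    n_flip = int(rate * len(y_tr))
    idx = rng.choice(len(y_tr), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]       # flip labels of selected samples
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"poison rate {rate:.0%}: test accuracy {acc:.3f}")
```

A sweep of this shape is one way to trace the "robustness boundary" the abstract refers to; deep models on real data would additionally require the adversarial-training and input-sanitization defenses described in the paper.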

Published

15 January 2026

Issue

Vol. 2 No. 1 (2026)

Section

Article

How to Cite

Cheong, C. L. (2026). Study on Risk Assessment Methods and Multi-Dimensional Control Mechanisms in AI Systems. European Journal of AI, Computing & Informatics, 2(1), 31-46. https://doi.org/10.71222/58dr7v22