Shaul Eliahou-Niv
doi.org/10.36647/CIML/06.02.A011
Abstract: In recent years, artificial intelligence systems have become an integral part of modern life, permeating both personal domains and a wide range of professional and technological fields. These technologies are increasingly being adopted across disciplines such as engineering, medicine, transportation, and defence, expanding the scope of automation and contributing to significant improvements in efficiency, precision, and performance. We are now entering an era in which artificial intelligence systems can perform tasks that were once considered uniquely human, including complex data analysis, decision-making, and even the generation of new knowledge. To fully harness the potential of artificial intelligence, organizations and individual users must undertake substantial adjustments to their operational models and workflows, integrating these technologies in a systematic, controlled, and meaningful manner. This evolution raises a fundamental question: to what extent can artificial intelligence systems be trusted to support decision-making processes, and what level of human intervention remains necessary to ensure quality, ethical integrity, and professional accountability? Moreover, it is essential to establish robust evaluation frameworks, including quantitative metrics and benchmarking methodologies, that enable objective comparisons and provide clear, evidence-based assessments of artificial intelligence performance. This study explores these issues within the context of systems engineering, with a particular focus on the field of aeronautics. The research aims to delineate the boundaries of autonomous artificial intelligence capabilities and to identify the conditions required for optimal integration between human judgment and advanced technological systems.
Keywords: Artificial intelligence, benchmark, decision-making, systems engineering.