Tackling system complexity with sophisticated evaluation solutions
Increasingly automated, software-based systems require stringent quality assurance which, despite growing demands, must above all be designed for efficiency and scalability with sustainability in mind. In this context, we are developing AI-supported methods for the dynamic configuration of virtual testbeds based on the established VCIP reference architecture and the FERAL simulation framework. Our goal is to improve the prediction quality of the underlying models while reducing resource consumption during the evaluation of the associated system functionality, achieved through optimized parameterization and coupling of the relevant simulation components. The excessive energy demand caused by suboptimal testbed configurations in the continuous operation of such complex input-output chains is to be reduced significantly by means of selected Machine Learning (ML) techniques and the deployment of Large Language Models (LLMs). To this end, pre-trained baseline LLMs are fine-tuned so that their parameter spaces focus on domain-specific expert knowledge; the resulting models require only a fraction of the original resources and, at the same time, enable more resource-efficient development environments.
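The optimized parameterization described above can be pictured as a search over candidate testbed configurations, scored by a cost model that trades prediction error against energy consumption. The sketch below is purely illustrative: the configuration space (simulation step size and number of coupled components), the surrogate functions, and the cost weighting are hypothetical placeholders, not part of VCIP or FERAL.

```python
from itertools import product

# Hypothetical discrete configuration space of a virtual testbed:
# simulation step size and number of coupled simulation components.
STEP_SIZES_MS = [1, 5, 10, 50]
COMPONENT_COUNTS = [2, 4, 8]

def predicted_error(step_ms: float, components: int) -> float:
    """Illustrative surrogate: finer steps and more coupled
    components are assumed to lower the prediction error."""
    return 1.0 / (components * (1.0 + 10.0 / step_ms))

def energy_cost(step_ms: float, components: int) -> float:
    """Illustrative surrogate: finer steps and more components
    are assumed to raise energy consumption per evaluation run."""
    return components * (100.0 / step_ms)

def cost(step_ms: float, components: int, weight: float = 0.5) -> float:
    """Weighted trade-off between prediction error and energy use."""
    return (weight * predicted_error(step_ms, components)
            + (1.0 - weight) * 0.01 * energy_cost(step_ms, components))

def best_configuration() -> tuple:
    """Exhaustively score all candidates and pick the cheapest one."""
    return min(product(STEP_SIZES_MS, COMPONENT_COUNTS),
               key=lambda cfg: cost(*cfg))

if __name__ == "__main__":
    step, comps = best_configuration()
    print(f"selected step size: {step} ms, components: {comps}")
```

In practice the exhaustive loop would be replaced by a learned model (e.g. a surrogate trained on past testbed runs), but the selection criterion, minimizing a joint error/energy cost, stays the same.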
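Focusing a pre-trained model's parameter space on domain-specific knowledge at a fraction of the original cost is commonly approached with parameter-efficient fine-tuning, for instance low-rank adaptation (LoRA). The text does not name a specific technique, so the NumPy sketch below only illustrates the general idea: the frozen pre-trained weight matrix stays fixed and only two small low-rank factors are trained, so the trainable fraction of parameters is tiny.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained weight matrix of one layer (illustrative size).
d_out, d_in, rank = 512, 512, 8
W = rng.standard_normal((d_out, d_in))

# Trainable low-rank factors: only these would be updated during
# fine-tuning. B starts at zero, so adaptation initially changes nothing.
A = rng.standard_normal((d_out, rank)) * 0.01
B = np.zeros((rank, d_in))

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Apply the effective weights W + A @ B without ever
    materializing the full-rank update matrix."""
    return W @ x + A @ (B @ x)

full_params = W.size
trainable_params = A.size + B.size
print(f"trainable fraction: {trainable_params / full_params:.1%}")
# → trainable fraction: 3.1%
```

With rank 8 on a 512x512 layer, the trainable factors hold about 3% of the layer's parameters, which is the mechanism behind the "fraction of their original resource requirements" claimed above.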