Ángela del Robledo Troncoso García earned a degree in Biomedical Engineering from the University of Seville in 2020 and a Master's in Biomedical Engineering and Digital Health from the same university in 2021. She is currently a PhD student in Computer Science at Universidad Pablo de Olavide in Seville, where she works as a researcher in the Data Science and Big Data Lab.
She completed a three-month internship on software and data science for healthcare at the Information and Communication Technology Department of the University Hospital Virgen del Rocío. She also spent one semester as a visiting lecturer and researcher at HTWG Konstanz (Germany), where her research focused on non-invasive techniques for respiratory sound monitoring using Internet of Things (IoT) devices and physiological data analysis.
Her major fields of research are data science, (bio)medical data analysis and bioinformatics.
Publications
2025
A. R. Troncoso-García, M. Martínez-Ballesteros, F. Martínez-Álvarez and A. Troncoso: A new metric based on association rules to assess explainability techniques for time series forecasting. Journal Article, forthcoming in IEEE Transactions on Pattern Analysis and Machine Intelligence. This paper introduces a new, model-independent metric, called RExQUAL, for quantifying the quality of explanations provided by attribution-based explainable artificial intelligence techniques and comparing them. The underlying idea is based on feature attribution, using a subset of the ranking of the attributes highlighted by a model-agnostic explainable method in a forecasting task. Association rules are then generated using these key attributes as input data. Novel metrics, including global support and confidence, are proposed to assess the joint quality of the generated rules. Finally, the quality of the explanations is calculated from a comprehensive combination of the global association rule metrics. The proposed method integrates local explanations through attribution-based approaches for evaluation and feature selection with global explanations for the entire dataset. The paper rigorously evaluates the new metric by comparing three explainability techniques: the widely used SHAP and LIME, and the novel methodology RULEx. The experimental design includes predicting time series of different natures, both univariate and multivariate, with deep learning models. The results underscore the efficacy and versatility of the proposed methodology as a quantitative framework for evaluating and comparing explainability techniques.
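The pipeline described in the abstract can be sketched in a few lines: keep the top-ranked attributions, mine association rules over those features, and aggregate the rules' support and confidence into a single score. The snippet below is only an illustration of that idea, assuming a pandas DataFrame `X` of inputs, an `attributions` array from any attribution-based explainer, and `mlxtend` for rule mining; the exact RExQUAL formulation, discretisation and rule-mining algorithm are those defined in the paper, not the stand-ins used here.

```python
# Illustrative sketch only: the exact RExQUAL definition is given in the paper.
import numpy as np
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

def explanation_quality(X: pd.DataFrame, attributions: np.ndarray, top_k: int = 5) -> float:
    # 1. Keep only the top-k attributed features from the explainer's ranking.
    top_features = X.columns[np.argsort(-np.abs(attributions))[:top_k]]

    # 2. Discretise them so classical rule mining can be applied
    #    (a stand-in for the quantitative rules used in the paper).
    binned = pd.get_dummies(
        X[top_features].apply(lambda col: pd.qcut(col, q=3, duplicates="drop"))
    ).astype(bool)

    # 3. Mine association rules over the selected features.
    itemsets = apriori(binned, min_support=0.1, use_colnames=True)
    if itemsets.empty:
        return 0.0
    rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)
    if rules.empty:
        return 0.0

    # 4. Aggregate rule-level support and confidence into global scores and
    #    combine them into one quality value (harmonic mean used here as an example).
    g_support, g_confidence = rules["support"].mean(), rules["confidence"].mean()
    return 2 * g_support * g_confidence / (g_support + g_confidence)
```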
2024
A. R. Troncoso-García, M. J. Jiménez-Navarro, M. L. Linares-Barrera, I. S. Brito, F. Martínez-Álvarez and M. Martínez-Ballesteros: Time Series Forecasting in Agriculture: Explainable Deep Learning with Lagged Feature Selection. Conference, SOCO: 19th International Conference on Soft Computing Models in Industrial and Environmental Applications, Lecture Notes in Networks and Systems, 2024.
F. Rodríguez-Díaz, A. M. Chacón-Maldonado, A. R. Troncoso-García and G. Asencio-Cortés: Olive grove and Grapevine pest forecasting through machine learning-based classification and regression. Journal Article, in: Results in Engineering, vol. 24, pp. 103058, 2024. Pests significantly impact agricultural productivity, making early detection crucial for maximizing yields. This paper explores the use of machine learning models to predict olive fly and red spider mite infestations in Andalusia. Four datasets on crop phenology, pest populations and damage levels were used, with models developed using the Python package H2O, which supports interpretability through SHAP values and ICE plots. The results showed high precision in predicting pest outbreaks, particularly for the olive fly, with minimal differences between models using feature selection. In the vineyard dataset, feature selection improved model performance by reducing the MAE and increasing R². Explainability techniques identified solar radiation and wind direction as key factors in olive fly predictions, while past pest occurrences and wind velocity were influential for red spider mites, providing farmers with actionable insights for timely pest control.
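As an illustration of the kind of workflow described above (the paper itself uses H2O models), the following sketch trains a scikit-learn regressor on a hypothetical trap-count dataset and inspects it with SHAP values and ICE curves; the file name and column names such as `solar_radiation` and `pest_count` are assumptions, not the paper's data.

```python
# Minimal sketch, not the H2O pipeline from the paper.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
from sklearn.model_selection import train_test_split

# Hypothetical dataset: phenology, weather and olive fly trap counts.
df = pd.read_csv("olive_fly_traps.csv")
X, y = df.drop(columns=["pest_count"]), df["pest_count"]
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False)

model = GradientBoostingRegressor().fit(X_train, y_train)

# SHAP values: contribution of each variable to each prediction.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
shap.summary_plot(shap_values, X_test)

# ICE curves: how each sample's prediction changes as one variable varies.
PartialDependenceDisplay.from_estimator(
    model, X_test, ["solar_radiation"], kind="individual"
)
```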
A. R. Troncoso-García, M. J. Jiménez-Navarro, F. Martínez-Álvarez and A. Troncoso: Ground-Level Ozone Forecasting using Explainable Machine Learning. Conference, CAEPIA: Conference of the Spanish Association for Artificial Intelligence, vol. 14640, Lecture Notes in Artificial Intelligence, 2024. The ozone concentration at ground level is a pivotal indicator of air quality, as elevated ozone levels can lead to adverse effects on the environment. In this study, various machine learning models for ground-level ozone forecasting are optimised using a Bayesian technique. Predictions are obtained 24 hours in advance using historical ozone data and related environmental variables, including meteorological measurements and other air quality indicators. The results indicate that the Extra Trees model emerges as the optimal solution, showing competitive performance alongside reasonable training times. Furthermore, an explainable artificial intelligence technique is applied to enhance the interpretability of model predictions, providing insights into the contribution of the input features to the predictions computed by the model. The features identified as important, namely PM10, air temperature and CO2 concentration, are validated in the literature as key factors for forecasting ground-level ozone concentration.
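A minimal sketch of Bayesian hyperparameter optimisation of an Extra Trees model for 24-hour-ahead forecasting, here using scikit-optimize's `BayesSearchCV` with a time-series-aware split; the dataset file, column names and search space are illustrative assumptions, not the configuration used in the paper.

```python
# Illustrative sketch of the forecasting setup described above.
import pandas as pd
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import TimeSeriesSplit
from skopt import BayesSearchCV

# Hypothetical dataset: hourly ozone plus meteorological and air-quality covariates.
df = pd.read_csv("ozone_station.csv", parse_dates=["timestamp"], index_col="timestamp")
df["target"] = df["O3"].shift(-24)          # value to predict 24 hours ahead
df = df.dropna()
X, y = df.drop(columns=["target"]), df["target"]

# Bayesian hyperparameter search with time-series-aware cross-validation.
search = BayesSearchCV(
    ExtraTreesRegressor(random_state=0),
    {"n_estimators": (100, 500), "max_depth": (3, 30), "min_samples_leaf": (1, 10)},
    n_iter=25,
    cv=TimeSeriesSplit(n_splits=5),
    scoring="neg_mean_absolute_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```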
2023
A. R. Troncoso-García, I. S. Brito, A. Troncoso and F. Martínez-Álvarez: Explainable hybrid deep learning and Coronavirus Optimization Algorithm for improving evapotranspiration forecasting. Journal Article, in: Computers and Electronics in Agriculture, vol. 215, pp. 108387, 2023. Reference evapotranspiration is a critical hydrological measurement closely associated with agriculture. Accurate forecasting is vital for effective water management and crop planning in sustainable agriculture. In this study, future values of reference evapotranspiration are forecast by applying a recurrent long short-term memory neural network optimized with the Coronavirus Optimization Algorithm, a novel bioinspired metaheuristic based on the spread of COVID-19. The input data are sourced from the Sistema Agrometeorológico para a Gestão da Rega no Alentejo, in Portugal, and include meteorological variables such as air temperature and wind speed. Several baseline models are applied to the same problem to facilitate comparisons, including support vector machines, multi-layer perceptron, Lasso and decision tree. The results demonstrate the successful forecasting performance of the proposed model and its potential in this field. In turn, to gain deeper insights into the model's inner workings, the SHapley Additive exPlanations tool is applied for explainability. Consequently, the study identifies the most relevant variables for reference evapotranspiration forecasting, including previously measured evapotranspiration values. Additionally, a univariate model is tested using historical evapotranspiration values as input, offering comparable performance with a considerable reduction in computational time.
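A rough sketch of the forecasting-plus-explainability step described above, assuming a fixed Keras LSTM (the paper tunes its architecture with the Coronavirus Optimization Algorithm) and SHAP's `GradientExplainer` as the attribution tool; the window length, feature count and random data are placeholders for the agrometeorological series.

```python
# Illustrative sketch only; architecture, data and hyperparameters are assumptions.
import numpy as np
import shap
import tensorflow as tf

N_STEPS, N_FEATURES = 7, 4          # e.g. one week of daily meteorological data

# Small LSTM mapping a window of past observations to next-day reference ET.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_STEPS, N_FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mae")

# X_train would hold sliding windows built from the real series; random here.
X_train = np.random.rand(200, N_STEPS, N_FEATURES).astype("float32")
y_train = np.random.rand(200, 1).astype("float32")
model.fit(X_train, y_train, epochs=5, verbose=0)

# Attribute the forecasts to the lagged input variables with SHAP.
explainer = shap.GradientExplainer(model, X_train[:100])
shap_values = explainer.shap_values(X_train[:10])
print(np.shape(shap_values))        # per-sample, per-timestep, per-feature attributions
```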
A. R. Troncoso-García, M. Martínez-Ballesteros, F. Martínez-Álvarez and A. Troncoso: A new approach based on association rules to add explainability to time series forecasting models. Journal Article, in: Information Fusion, vol. 94, pp. 169-180, 2023. Machine learning and deep learning have become the most useful and powerful tools of recent years for mining information from large datasets. Despite their successful application to many research fields, it is widely known that some of these artificial intelligence solutions are considered black-box models, meaning that most experts find it difficult to explain and interpret the models and why they generate such outputs. In this context, explainable artificial intelligence is emerging with the aim of providing black-box models with sufficient interpretability, so that models can be easily understood and further applied. This work proposes a novel method to explain black-box models, using numeric association rules to explain and interpret multi-step time series forecasting models. A multi-objective algorithm is used to discover quantitative association rules from the target model, and visual explanation techniques are then applied to make the rules more interpretable. Data on Spanish electricity consumption have been used to assess the suitability of the proposal.
A. R. Troncoso-García, M. Martínez-Ballesteros, F. Martínez-Álvarez and A. Troncoso: Deep Learning-Based Approach for Sleep Apnea Detection Using Physiological Signals. Conference, IWANN: International Work-Conference on Artificial Neural Networks, Lecture Notes in Computer Science, 2023.
A. R. Troncoso-García, M. Martínez-Ballesteros, F. Martínez-Álvarez and A. Troncoso: Evolutionary computation to explain deep learning models for time series forecasting. Conference, SAC: 38th Annual ACM Symposium on Applied Computing, 2023.
2022
A. R. Troncoso-García, M. Martínez-Ballesteros, F. Martínez-Álvarez and A. Troncoso: Explainable machine learning for sleep apnea prediction. Conference, KES: International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, 2022. Machine and deep learning have become some of the most useful diagnosis-decision-support tools in healthcare in recent years. However, it is widely known that artificial intelligence models are considered black boxes, and most experts experience difficulties explaining and interpreting the models and their results. In this context, explainable artificial intelligence is emerging with the aim of providing black-box models with sufficient interpretability so that models can be easily understood and further applied. Obstructive sleep apnea is a common chronic respiratory disease related to sleep. Nowadays its diagnosis relies on processing different data signals, such as the electrocardiogram or the respiratory rate, and the waveform of the respiratory signal is also of importance. Machine learning models can be applied to the analysis of these signals. Data from a polysomnography study for automatic sleep apnea detection have been used to evaluate the Local Interpretable Model-agnostic Explanations (LIME) library for explaining the health data models. The results obtained help to understand how several features are used by the model and their influence on sleep quality.
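The LIME workflow described above can be sketched as follows for a tabular apnea classifier; the feature names, the random forest model and the synthetic data are assumptions standing in for the polysomnography-derived features used in the paper.

```python
# Minimal sketch of explaining a tabular apnea classifier with LIME.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

feature_names = ["heart_rate", "respiratory_rate", "SpO2", "airflow_amplitude"]  # assumed
X_train = np.random.rand(500, len(feature_names))   # polysomnography-derived features (synthetic here)
y_train = np.random.randint(0, 2, 500)               # 1 = apnea event, 0 = normal

clf = RandomForestClassifier().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["normal", "apnea"],
    mode="classification",
)

# Explain a single epoch: which signals pushed the model towards "apnea"?
explanation = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(explanation.as_list())
```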
2021
A. R. Troncoso-García, J. A. Ortega, R. Seepold and N. Martínez-Madrid: Non-invasive devices for respiratory sound monitoring. Conference, KES: International Conference on Knowledge-Based and Intelligent Information and Engineering Systems, 2021.