Doctoral candidate Anna Hedström
Articles in peer-reviewed journals [3 results]
- Bommer, P.; Kretschmer, M.; Hedström, A.; Bareeva, D.; Höhne, M. (2024): Finding the right XAI Method - A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science. Artificial Intelligence for the Earth Systems (AIES), p. 1-55. Online: https://doi.org/10.1175/AIES-D-23-0074.1
- Hedström, A.; Weber, L.; Lapuschkin, S.; Höhne, M. (2023): Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test. arXiv, p. 1-19. Online: https://arxiv.org/abs/2401.06465
- Hedström, A.; Bommer, P.; Wickstrøm, K.; Samek, W.; Lapuschkin, S.; Höhne, M. (2023): The Meta-Evaluation Problem in Explainable AI: Identifying Reliable Estimators with MetaQuantus. Transactions on Machine Learning Research (06), p. 1-35. Online: https://openreview.net/forum?id=j3FK00HyfU
Contributions to edited volumes [7 results]
- Bareeva, D.; Ümit Yolcu, G.; Hedström, A.; Schmolenski, N.; Wiegand, T.; Samek, W.; Lapuschkin, S. (2024): Quanda: An Interpretability Toolkit for Training Data Attribution Evaluation and Beyond. In: NeurIPS ATTRIB 2024 Workshop, p. 1-16. Online: https://arxiv.org/abs/2410.07158
- Wickstrøm, K.; Höhne, M.; Hedström, A. (2024): From Flexibility to Manipulation: The Slippery Slope of XAI Evaluation. In: Explainable Computer Vision: Where are We and Where are We Going? eXCV Workshop at ECCV 2024, p. 1-19. Online: https://excv-workshop.github.io/publication/from-flexibility-to-manipulation-the-slippery-slope-of-xai-evaluation/paper.pdf
- Kopf, L.; Bommer, P.; Hedström, A.; Lapuschkin, S.; Höhne, M.; Bykov, K. (2024): CoSy: Evaluating Textual Explanations of Neurons. In: ICML Workshop Next Generation of AI Safety, p. 1-21. Online: https://arxiv.org/abs/2405.20331
- Kopf, L.; Bommer, P.; Hedström, A.; Lapuschkin, S.; Höhne, M.; Bykov, K. (2024): CoSy: Evaluating Textual Explanations of Neurons. In: ICML Workshop on Mechanistic Interpretability, p. 1-21. Online: https://arxiv.org/abs/2405.20331
- Hedström, A.; Weber, L.; Lapuschkin, S.; Höhne, M. (2024): A Fresh Look at Sanity Checks for Saliency Maps. In: 2nd World Conference on eXplainable Artificial Intelligence (XAI-2024). Springer, Heidelberg, p. 1-26. Online: https://doi.org/10.48550/arXiv.2405.02383
- Liu, S.; Hedström, A.; Hanike Basavegowda, D.; Weltzien, C.; Höhne, M. (2024): Explainable AI in grassland monitoring: Enhancing model performance and domain adaptability. In: Hoffmann, C.; Stein, A.; Gallmann, E.; Dörr, J.; Krupitzer, C.; Floto, H. (eds.): Informatik in der Land-, Forst- und Ernährungswirtschaft. Focus: Biodiversität fördern durch digitale Landwirtschaft: Welchen Beitrag leisten KI und Co?. 44. GIL-Jahrestagung. Gesellschaft für Informatik (GI), Bonn (ISSN 1617-5468, ISBN 978-3-88579-738-8), p. 143-154. Online: https://gil-net.de/wp-content/uploads/2024/02/GI_Proceedings_344-3.f-1.pdf
- Hedström, A.; Weber, L.; Lapuschkin, S.; Höhne, M. (2023): Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test. In: XAI in Action: Past, Present, and Future Applications. NeurIPS 2023 Workshop. Neural Information Processing Systems, San Diego, p. 1-19. Online: https://openreview.net/forum?id=vVpefYmnsG
Talks and posters [8 results]
- Hedström, A. (2024): Reimagining Explainable AI: Evaluation with LLMs.
- Hedström, A.; Weber, L.; Lapuschkin, S.; Höhne, M. (2024): Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test.
- Kopf, L.; Bommer, P.; Hedström, A.; Lapuschkin, S.; Höhne, M.; Bykov, K. (2024): CoSy: Evaluating Textual Explanations of Neurons.
- Liu, S.; Hedström, A.; Hanike Basavegowda, D.; Weltzien, C.; Höhne, M. (2024): Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability.
- Hedström, A.; Weber, L.; Lapuschkin, S.; Höhne, M. (2023): Sanity Checks Revisited: An Exploration to Repair the Model Parameter Randomisation Test.
- Hedström, A. (2023): Exercise 2: Advancements in XAI Evaluation: A Practical Tutorial using Quantus and MetaQuantus.
- Bommer, P.; Hedström, A. (2023): Tutorial: Quantus x Climate - Applying explainable AI evaluation in climate science.
- Bommer, P.; Kretschmer, M.; Hedström, A.; Bareeva, D.; Höhne, M. (2023): Evaluation of explainable AI solutions in climate science.