Kirill Bykov
Articles in Peer-Reviewed Journals [5 Results]
- Bareeva, D.; Höhne, M.; Warnecke, A.; Pirch, L.; Müller, K.; Rieck, K.; Bykov, K. (2024): Manipulating Feature Visualizations with Gradient Slingshots. arXiv, p. 1-19. Online: https://doi.org/10.48550/arXiv.2401.06122
- Bykov, K.; Kopf, L.; Nakajima, S.; Kloft, M.; Höhne, M. (2023): Labeling Neural Representations with Inverse Recognition. arXiv, p. 1-24. Online: https://doi.org/10.48550/arXiv.2311.13594
- Grinwald, D.; Bykov, K.; Nakajima, S.; Höhne, M. (2023): Visualizing the Diversity of Representations Learned by Bayesian Neural Networks. Transactions on Machine Learning Research (11), p. 1-25. Online: https://openreview.net/pdf?id=ZSxvyWrX6k
- Bykov, K.; Deb, M.; Grinwald, D.; Müller, K.; Höhne, M. (2023): DORA: Exploring Outlier Representations in Deep Neural Networks. Transactions on Machine Learning Research (06), p. 1-43. Online: https://doi.org/10.48550/arXiv.2206.04530
- Bykov, K.; Müller, K.; Höhne, M. (2023): Mark My Words: Dangers of Watermarked Images in ImageNet. arXiv, p. 1-10. Online: https://doi.org/10.48550/arXiv.2303.05498
Contributions to Edited Volumes [8 Results]
- Bareeva, D.; Höhne, M.; Warnecke, A.; Pirch, L.; Müller, K.; Rieck, K.; Bykov, K. (2024): Manipulating Feature Visualizations with Gradient Slingshots. In: Mechanistic Interpretability Workshop at ICML 2024. ICML 2024 - International Conference on Machine Learning. p. 1-18. Online: https://doi.org/10.48550/arXiv.2401.06122
- Kopf, L.; Bommer, P.; Hedström, A.; Lapuschkin, S.; Höhne, M.; Bykov, K. (2024): CoSy: Evaluating Textual Explanations of Neurons. In: Next Generation of AI Safety Workshop at ICML 2024. ICML Workshop on Next Generation of AI Safety. p. 1-21. Online: https://arxiv.org/abs/2405.20331
- Kopf, L.; Bommer, P.; Hedström, A.; Lapuschkin, S.; Höhne, M.; Bykov, K. (2024): CoSy: Evaluating Textual Explanations of Neurons. In: Mechanistic Interpretability Workshop at ICML 2024. ICML Workshop on Mechanistic Interpretability. p. 1-21. Online: https://arxiv.org/abs/2405.20331
- Kopf, L.; Bommer, P.; Hedström, A.; Lapuschkin, S.; Höhne, M.; Bykov, K. (2024): CoSy: Evaluating Textual Explanations of Neurons. In: Advances in Neural Information Processing Systems 37 (NeurIPS 2024 Proceedings). The 38th Annual Conference on Neural Information Processing Systems (NeurIPS 2024). p. 1-21. Online: https://arxiv.org/abs/2405.20331
- Bykov, K.; Müller, K.; Höhne, M. (2024): Mark My Words: Dangers of Watermarked Images in ImageNet. In: Nowaczyk, S.; et al. (eds.): Artificial Intelligence. ECAI 2023 International Workshops. Proceedings, Part I. ECAI 2023 XI-ML Workshops. Springer, Cham, Switzerland (ISSN 1865-0929 / ISBN 978-3-031-50396-2), p. 426-434. Online: https://doi.org/10.1007/978-3-031-50396-2_24
- Bykov, K.; Müller, K.; Höhne, M. (2023): Mark My Words: Dangers of Watermarked Images in ImageNet. In: ICLR 2023 Workshop on Pitfalls of limited data and computation for Trustworthy ML. ICLR 2023. p. 1-10. Online: https://openreview.net/forum?id=0stsgHlCxS
- Bykov, K.; Kopf, L.; Nakajima, S.; Kloft, M.; Höhne, M. (2023): Labeling Neural Representations with Inverse Recognition. In: Advances in Neural Information Processing Systems 36 (NeurIPS 2023). NeurIPS 2023. Neural Information Processing Systems, San Diego, p. 1-24. Online: https://doi.org/10.48550/arXiv.2311.13594
- Bykov, K.; Kopf, L.; Höhne, M. (2023): Finding Spurious Correlations with Function-Semantic Contrast Analysis. In: Longo, L. (ed.): Conference Proceedings, Part II: eXplainable Artificial Intelligence. First World Conference, xAI 2023, Lisbon, Portugal, July 26-28, 2023. 1st World Conference on eXplainable Artificial Intelligence (xAI 2023). Springer, Cham, Switzerland (ISSN 1865-0929 / ISBN 978-3-031-44066-3), p. 549-572. Online: https://doi.org/10.1007/978-3-031-44067-0_28
Talks and Posters [15 Results]
- Bykov, K. (2024): Introduction to Explainable AI: How Do We Explain Deep Neural Networks.
- Bykov, K. (2024): Explainable AI: from Local to Global.
- Bareeva, D.; Höhne, M.; Warnecke, A.; Pirch, L.; Müller, K.; Rieck, K.; Bykov, K. (2024): Manipulating Feature Visualizations with Gradient Slingshots.
- Kopf, L.; Bommer, P.; Hedström, A.; Lapuschkin, S.; Höhne, M.; Bykov, K. (2024): CoSy: Evaluating Textual Explanations of Neurons.
- Bykov, K. (2023): How much can I trust you? Towards Understanding Neural Networks.
- Bykov, K. (2023): DORA: Exploring Outlier Representations in Deep Neural Networks.
- Bykov, K.; Kopf, L.; Nakajima, S.; Kloft, M.; Höhne, M. (2023): Labeling Neural Representations with Inverse Recognition.
- Bykov, K.; Müller, K.; Höhne, M. (2023): Mark My Words: Dangers of Watermarked Images in ImageNet.
- Bykov, K. (2023): Explainable AI: from Local to Global.
- Bykov, K. (2023): Data-Aware and Data-Agnostic Representation Analysis.