Finalist of the category:
Outstanding Scientist in Slovakia Under the Age of 35

Daniela Vacek

Systematic Philosophy

Artificial intelligence is not something meant to replace us. It is an essentially collaborative project in which humans remain key and responsible for the outcomes.

Philosophy captivated Daniela Vacek back in high school. At university, she gradually moved from logic and semantics to questions of responsibility, eventually encountering a fundamental challenge - what to do about responsibility for the impact of artificial intelligence? Today, PhDr. Daniela Vacek, PhD., is a leading philosopher specializing in AI ethics, analytical aesthetics, and philosophical logic. She works as a researcher at the Department of Analytical Philosophy of the Institute of Philosophy of the Slovak Academy of Sciences (SAV) and, part-time, at the Department of Logic and Methodology of Sciences at the Faculty of Arts of Comenius University. Recently, she joined the Ethics and Human Values in Technology team at the Kempelen Institute of Intelligent Technologies, whose achievements are a major source of inspiration for her.

Daniela began exploring the issue of responsibility for the unintended impact of AI about seven years ago, at a time when the topic was considered science fiction. "They saw me as a futurist. Today, it's a highly relevant and hot issue, one we should have been seeking (and finding) answers to long ago," she notes.

Her core research focuses on both the positive and negative aspects of responsibility. The study of responsibility for harm caused by AI is central to AI ethics. Daniela also asks the reverse question: who is responsible for AI's successes? On the positive side of responsibility, questions arise around copyright and praise. "Who should be praised for a beautiful image generated by artificial intelligence?" she asks. Her approach highlights the collaborative nature of technology, which is often overlooked. She believes that praise and responsibility for successes should return to the people involved in the process - from developers and users to those who tested the system.

The philosopher proposes practical solutions through the concept of "indirect responsibility" - a legal concept she applies to ethical questions in AI. This approach addresses "responsibility gaps" that open up between who should bear responsibility and who actually does.

"A lot of academic effort today is invested not only in the positive use of AI tools but also in their reverse use to prevent threats of AI misuse," says Vacek, who almost always prefers a golden middle path when asked about her vision of life with AI. In this context, she recommends maintaining caution and critical thinking. In the future, she hopes to focus more on applying philosophical insights in practice.

She draws topics and ideas for philosophical research from leading international journals on AI ethics, in which she also publishes. "My research stems from problems and questions I don't consider satisfactorily resolved. When I don't find a theory I can identify with in my area of interest, I create my own. If you want to do original scientific research at a global level today, it's primarily about creativity and pushing the boundaries of knowledge," she emphasizes.

For Daniela Vacek, science is "knowledge par excellence" - the most prestigious and valuable source of information, and the one we should trust above all. In terms of skills, science has given her the ability to argue, which she enjoys using in her personal life. "Your judgment is not valid" is a useful phrase in the academic household of the young philosopher, whose husband is also a philosopher at SAV.

In her free time, Daniela enjoys spending time with her 19-month-old son Liam, her family, and friends. She likes working in the garden. "I have a large collection of roses from David Austin, the world-renowned English breeder. I love going to the forest and mountains. But my greatest hobby remains creative scientific research. When I can philosophize and write something that is original, timely, and globally relevant, it's an amazing feeling," declares the respected philosopher.

Daniela Vacek believes that the story of the most significant technology of the 21st century is being written by us. Her research shows that even here, there is room for human responsibility. While traditional philosophical theories focus primarily on the responsibility of isolated individuals, today's era and the collaborative nature of AI require examining responsibility in the context of collective actors, as well as the relationships between actors and a broader normative framework of duties, norms, rights, and expectations.
