The Great Ethical Question

In the context of Artificial Intelligence (AI), ethical standards and guidelines are frequently called into question. What constitutes ethical use of AI? What requirements must be met? And what do we consider ethical or unethical? These and similar questions come up in the media, in discussions with clients, and in our private lives. Above all, though, we ourselves continuously reflect on our own work and standards. Even before Splitbot GmbH was founded, our team jointly developed guidelines on this subject.

It became clear that defining ethical principles is not straightforward. While there is a broad societal understanding of morality and ethics, individual interpretations and personal judgments differ on certain points. One thing is clear to us: no software, with or without AI, inherently understands ethics. Ethical guidelines must, if at all possible, be set by humans. This is also reflected in the definition of ethics itself: ethics is the study of morality and thus of the evaluation of human action. It may therefore be less about building rules into our software and more about ensuring that it is deployed ethically.

The assumption that AI can think for itself is simply untrue. At its core, AI is not much more than very, very precise statistics: it determines probabilities based on data. Without suitable training data, an AI program cannot produce results. And it is precisely this data that occasionally makes AI programs appear unethical. If, for example, a system for screening applicants is trained only on data from male applicants, it will not be able to assess female applicants on an equal footing (a deliberately simplified sketch of this effect follows at the end of this post). Human ethical judgment is therefore required both when providing the data and when evaluating the results that are delivered.

It must also be permissible to ask to what extent using Artificial Intelligence to answer complex questions automatically might itself contradict ethical principles. The goal must therefore be to establish ethical guidelines for the people involved, not for the programs themselves. AI is just one of many tools that can be misused. But what might such guidelines look like? This is just one of many questions to which we have not yet found a definitive answer.

We are all the more grateful for the recently initiated collaboration with Prof. Dr.-Ing. Christian Herzog and the students of the Technology Ethics degree program at the University of Lübeck. Prof. Herzog offered us and other startups from the region the opportunity to introduce ourselves and present our ethical questions. Over the coming weeks, the students will work on a wide range of ethical topics and present their proposed solutions. We are very much looking forward to this intensive exchange, and especially to examining the topic from different perspectives, so that we can take as many aspects as possible into account in the further development of our product.
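To make the point about training data a little more concrete, here is a deliberately simplified, purely hypothetical sketch (synthetic data and a standard scikit-learn model; none of this is taken from our product). A screening model is trained only on applicants without career breaks and is then applied to applicants it has never seen: otherwise equally qualified people with a career break receive systematically lower scores.

```python
# Hypothetical illustration only: bias that comes from the training data,
# not from any "judgment" of the model itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, career_break_years):
    """Synthetic applicants. True suitability depends only on relevant
    experience; 'continuous employment' is experience minus any break."""
    experience = rng.normal(8.0, 3.0, n)
    continuous = experience - career_break_years
    suitable = (experience + rng.normal(0.0, 1.0, n) > 8.0).astype(int)
    return np.column_stack([experience, continuous]), suitable

# Historical training data: only applicants without career breaks
# (e.g. a predominantly male applicant pool).
X_train, y_train = make_group(5000, career_break_years=0.0)
model = LogisticRegression().fit(X_train, y_train)

# Evaluation: two groups that are equally suitable by construction,
# one of which has typical career breaks the model has never seen.
X_no_break, _ = make_group(5000, career_break_years=0.0)
X_break, _ = make_group(5000, career_break_years=3.0)
print("mean score, no career break:  ", model.predict_proba(X_no_break)[:, 1].mean().round(3))
print("mean score, with career break:", model.predict_proba(X_break)[:, 1].mean().round(3))
```

The specific numbers are beside the point; what matters is the mechanism. The model has learned to lean on a feature that only made sense for the group it was trained on, and the unequal treatment follows entirely from the data it was given. That is exactly why the ethical responsibility lies with the people who provide the data and evaluate the results.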