Category: News

  • OUR SPLITBLOG IN AUGUST: GREEN AI

    This month’s topic request comes from our apprentice Amirreza, and we are asking whether climate protection and the use of large AI models are compatible.

    Anyone who has recently looked more closely into CO2 emissions will certainly have become aware of the environmental impact of generative AI. Studies indicate that the electricity demand of AI data centers in 2030 will be eleven times what it was in 2023, and a threefold increase in water demand is predicted as well. A single ChatGPT query, for example, is estimated to require ten times as much energy as a simple Google search, and the longer the generated response from a language model, the higher the energy consumption. In April of this year, Sam Altman commented on the immense costs caused by technically unnecessary polite phrases such as “please” and “thank you”. There is even talk of large tech companies operating their own nuclear power plants in the future.
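
    To put these figures into perspective, a short back-of-the-envelope calculation helps. The per-query numbers below (about 0.3 Wh for a conventional web search and roughly ten times that for a ChatGPT query) are the commonly cited estimates behind the comparison above, not measurements of any specific deployment:

    ```python
    # Back-of-the-envelope energy comparison; the figures are the commonly
    # cited estimates behind the "ten times" claim, not measured values.
    WH_PER_SEARCH = 0.3     # ~0.3 Wh per conventional web search
    WH_PER_LLM_QUERY = 3.0  # roughly ten times that per ChatGPT-style query

    def annual_kwh(queries_per_day: float, wh_per_query: float) -> float:
        """Energy in kWh for a given daily query volume over one year."""
        return queries_per_day * wh_per_query * 365 / 1000

    # Example: 500 queries per day, answered by web search vs. a large model.
    print(f"search: {annual_kwh(500, WH_PER_SEARCH):.1f} kWh/year")    # ~54.8
    print(f"LLM:    {annual_kwh(500, WH_PER_LLM_QUERY):.1f} kWh/year") # ~547.5
    ```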

    All of this sounds as if companies striving to keep their CO2 footprint low would have to forgo generative AI. But is there really no alternative?

    In fact, before deploying generative AI, companies should ask themselves a few economic and ecological questions, for example: Is the use of generative AI proportionate to the task at hand? Could the tasks the model is meant to solve be handled by another, less resource-intensive technology?

    Apart from that, there are also ways to influence the climate impact of generative AI. One important factor is, of course, the choice of operator and its location, because there are indeed operators who run AI systems in climate-neutral data centers. We at Splitbot, for example, rely on data centers that are powered by renewable energy and sensibly reuse the waste heat they generate. In addition, we offer our clients the option of operating KOSMO on-premises, which is the ideal solution if your own IT or building is already climate-neutral.

    Another exciting aspect is the training of the models themselves. Researchers have discovered that during the training of AI models, parts of the calculations are performed unnecessarily fast. The computational work during AI training is distributed across multiple GPUs, but unevenly, so less utilized GPUs have to “wait” for the more heavily loaded ones. Since this waiting time occurs anyway, running the fast calculations at full speed is wasted effort, and adjusting the computing speed can reduce electricity consumption. The researchers also delivered a suitable solution: the open-source software Perseus controls the GPU frequency of each individual calculation, keeping power consumption as low as possible.
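
    Perseus itself plans frequencies from a profiled computation graph of the training job; the snippet below is only a minimal sketch of the core idea, with invented names: a GPU that would finish its share early is clocked down just enough to finish together with the slowest one. Because power drops faster than linearly with frequency, stretching out the fast workers saves energy without lengthening the training step.

    ```python
    # Minimal sketch of the idea behind Perseus-style savings: slow down
    # GPUs that would otherwise sit and wait for the straggler.
    # Illustrative only; Perseus plans frequencies per operation, not per step.

    def frequency_caps(step_times_s: list[float], f_max_mhz: int) -> list[int]:
        """Clock each GPU so that every worker finishes with the slowest one.

        A GPU needing t_i seconds at full clock can run at roughly
        f_max * t_i / t_max and still finish within t_max seconds.
        """
        t_max = max(step_times_s)
        return [round(f_max_mhz * t / t_max) for t in step_times_s]

    # Example: four GPUs with uneven per-step workloads, 1980 MHz ceiling.
    print(frequency_caps([0.8, 1.0, 0.6, 0.9], 1980))
    # -> [1584, 1980, 1188, 1782]: waiting time is converted into lower
    #    clocks (and lower power consumption) instead.
    ```

    On NVIDIA hardware, such caps could in principle be applied with NVML’s locked-clocks call (nvmlDeviceSetGpuLockedClocks); Perseus goes further and sets frequencies per individual computation.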

    Sources: https://t3n.de/news/ki-stromverbrauch-energie-perseus-1656380/

    https://reset.org/sprachmodelle-nachhaltig-nutzen-sparsamer-genai-gruen/

  • NEW FEATURES IN KOSMO

    Some of you have been waiting eagerly for this: with the latest release, KOSMO has gained new features. Today we reveal what they are.

    PDF Viewer

    When KOSMO generates a response, the sources used are always provided. If the source was a website, you could previously open it with a simple click. This functionality now extends to PDF files that you have provided to KOSMO. With a single click, the file opens in the PDF viewer. The text passages KOSMO utilized for the answer are highlighted. Additionally, you can directly print or download the file. This eliminates the lengthy search for documents!

    Scheduled tasks

    Do you regularly submit the same requests to KOSMO? Then we have the perfect solution for you: scheduled tasks. From now on, you can define what KOSMO should do for you, when, and how often. From weather reports to the latest posts from your favorite website – KOSMO summarizes your updates in a separate chat, ensuring you always stay informed.

    Push notifications

    The latest information is, of course, also available directly on your smartphone. KOSMO notifies you when scheduled tasks have been completed. This ensures you never miss any important information.

    E-Mail connection (beta)

    Currently in beta, but soon fully functional: the integration of your email inbox. Simply store your access credentials, and you can ask KOSMO about the content of your emails. This transforms your emails into a valuable source of information. This feature is already available for IMAP. Gmail users will need to exercise a little more patience.
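
    KOSMO handles this connection for you, so no setup code is needed. Still, for anyone curious what reading an inbox over IMAP involves under the hood, here is a minimal Python sketch using only the standard library; the server, credentials, and mailbox names are placeholders:

    ```python
    # Minimal sketch of reading unread mail over IMAP (Python stdlib only).
    # Server, credentials, and mailbox below are placeholders.
    import email
    import imaplib
    from email.header import decode_header

    with imaplib.IMAP4_SSL("imap.example.com") as imap:
        imap.login("user@example.com", "app-password")
        imap.select("INBOX", readonly=True)    # read without marking as seen

        _, data = imap.search(None, "UNSEEN")  # IDs of unread messages
        for msg_id in data[0].split():
            _, parts = imap.fetch(msg_id, "(RFC822)")
            msg = email.message_from_bytes(parts[0][1])
            subject, enc = decode_header(msg.get("Subject", ""))[0]
            if isinstance(subject, bytes):
                subject = subject.decode(enc or "utf-8")
            print(subject)
    ```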

    By the way: The email feature, as well as the familiar functions “Nextcloud”, “File Storage”, “Save Websites”, and “Standard Instructions”, can now be found under the menu item “External Resources”.

    And a small preview: The next release is already in the pipeline and is scheduled for late October. Among other things, it will include summaries at the push of a button – you can look forward to it!

  • OUR JULY SPLITBLOG: WHEN CHATBOTS BECOME POLITICAL

    This month, we highlight why it is important to question the origin of chatbots and AI models and to remain critical when interacting with them. The suggestion for this topic was provided by Mats from our backend team.

    Grok 4 has demonstrated impressively in recent weeks how the programming of an AI assistant or chatbot can influence its response behavior. Grok generated unfiltered antisemitic and racist statements that made headlines. The company xAI has since apologized, stating that Grok was programmed to respond “honestly” and “not be afraid to shock politically correct people”. As far as the latter instruction goes, the goal was certainly achieved; and even under the premise that bad press is good press, Grok may well have served its purpose. In any case, the headlines are reason enough to examine the various manufacturers and providers of chatbots and AI assistants seriously. Regardless of the area in which such systems are to be used, a thorough review and extensive testing beforehand are urgently necessary. Especially when companies let chatbots represent them publicly, serious damage to their reputation can otherwise result.

    But how can AI assistants be led to make such statements? The basis of every language model is training data of varying scope and origin; in other words, vast amounts of information are available for generating responses. How answers are generated from this information is a matter of programming and individual settings. For example, it can be specified that certain sources are to be used preferentially, or that the generated answers should be particularly humorous, scientific, long, or short. In Grok’s case, according to data scientist Jeremy Howard, there are also indications that the chatbot often adopts the opinions and statements of xAI owner Elon Musk on controversial topics. According to programmer Simon Willison, however, this could simply be attributed to Musk’s prominent role.
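
    What such “programming” often boils down to in practice is a system prompt that is silently placed ahead of every conversation. Here is a generic illustration in the widely used chat-message format; both instruction texts are invented for this example:

    ```python
    # Same model, same user question; only the system message differs.
    # Both instructions are invented for illustration.
    question = {"role": "user",
                "content": "What caused the 2008 financial crisis?"}

    messages_sober = [
        {"role": "system",
         "content": "Answer factually, cite sources, decline to speculate."},
        question,
    ]

    messages_edgy = [
        {"role": "system",
         "content": "Be provocative and do not shy away from shocking "
                    "politically correct people."},
        question,
    ]
    ```

    Training data and model weights are identical in both cases; the system message alone can push the generated answers in very different directions.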

    Similar tendencies to those currently seen with Grok can be observed in other chatbots as well. DeepSeek, too, does not answer a number of political questions neutrally. In some cases, the generated answers are deleted shortly after creation and replaced with a “Let’s talk about something else”. The bot’s answers appear to be at least somewhat more neutral in the English version than in the Chinese one. Extensive experiments with DeepSeek reveal a programmed “self-censorship”.

    In Europe, it is not uncommon to equip chatbots with certain ethical standards before they are unleashed upon humanity. For example, our chatbot KOSMO, which is based on the Mixtral language model from Mistral AI, responds politely but evasively when it comes to violence and crime. While this behavior is desirable, we believe that objectivity in the presentation of facts should always be ensured. The integrated source verification contributes to this, giving users the opportunity to check and evaluate the sources used.

    A certain bias in language models can never be completely ruled out. A chatbot’s knowledge is only as extensive as its training data and additional information, and its response behavior is often further shaped by user feedback during fine-tuning. Users themselves can also, often unconsciously, influence the response behavior significantly through the prompts they enter.

    Among other factors, the origin of the language model in use should therefore be examined thoroughly before relying too heavily on the correctness of its answers.

  • OUR SPLITBLOG IN JUNE: IS AI CHANGING ACADEMIC EXAMS?

    This month, we look into the future and address the question of how AI will impact examinations at universities and schools. This topic suggestion comes from our working student Vincent, who is currently completing an exchange semester in Sweden.

    Reports of AI-generated work by pupils and students are becoming more frequent, and the media increasingly discuss how educational institutions are supposed to identify which texts were actually written by humans. Despite some clues, such as characteristic phrasings, writing styles, and above-average flawlessness, it is already difficult to determine beyond doubt whether a particular text truly originates from a human. With ever-improving language models and prompting techniques (e.g., “Write as humanly as possible and include errors”), unambiguous detection will become progressively harder. This is a major problem, considering that a large part of academic assessment relies on the production of texts: whether for applications, examinations, master’s theses, or term papers, examiners everywhere rely on text-based methods. There is a real risk that these examination methods will no longer work reliably in the long term.

    Detector software, which promises to identify artificially generated texts, can provide clues but is itself not reliable enough and can often be circumvented with simple means. Particularly alarming: texts written by non-native speakers are frequently misclassified as AI-generated by these programs, which can significantly increase the risk of discrimination in selection processes. And it is not only difficult to prove that a text was created by AI; proving the opposite is just as hard.

    But how can universities and other educational institutions respond to this? Oral examinations could in most cases show clearly whether someone has truly thought for themselves and understood the material. However, they involve enormous time and personnel expenditure and cannot easily cover the same breadth of knowledge as written examinations.

    The majority of educational institutions currently still rely on an outright ban. However, some institutions are already exploring new approaches. Till Krause from the University of Landshut, for example, allows students to actively use AI as a source, as long as this is clearly indicated: a precise statement of the language model used and the prompt applied is required. For despite all the challenges that the use of AI brings to educational institutions, AI offers one thing above all: an incredibly vast wealth of information that can be used excellently for learning and provides a fantastic basis for developing one’s own ideas and thoughts.

    The University of Economics in Prague is also taking a pragmatic approach to the use of AI. Starting in autumn 2024, its Business Administration program will no longer require a traditional bachelor’s thesis; instead, there will be project work whose results are evaluated. Many consider this approach more sensible and practical than the previous assignments, especially for study programs whose primary focus is not flawless and artful writing. Perhaps this even presents an opportunity to highlight the talents of individuals who, for example, have a spelling disability.

    The fact is: academic examination procedures will have to change. AI, like other technological means before it, has already entered the daily lives of pupils and students. What is needed now are methods that assess human knowledge in other ways.

    An absolutely worthwhile podcast on this topic is available here: https://www.ardaudiothek.de/episode/11km-der-tagesschau-podcast/ki-or-not-ki-koennen-wir-ihre-texte-noch-enttarnen/tagesschau/13779441/