In this blog post, I will discuss the future of FLOSS in terms of licensing and AI integration, two topics I am interested in exploring.
Licenses
As I discussed in my previous blog post on the need for stricter guidelines in open source software (OSS) to prevent misuse, I decided to dive deeper into the emerging initiatives that address this issue. These initiatives have the potential to reshape the future of Free/Libre and Open Source Software (FLOSS).
The Rise of Ethical Open Source Licenses
Traditional open source licenses, like MIT or GPL, allow anyone to use the software for any purpose, even if that purpose is harmful. This open-ended freedom can lead to the unintended use of software in ethically questionable ways, such as powering military weapons, mass surveillance systems, or technologies that infringe on human rights. Recognising this risk, some developers have started to create ethical open-source licenses that explicitly restrict harmful uses.
One of the leading figures in this movement is Coraline Ada Ehmke, known for creating the Contributor Covenant and the Hippocratic License. In 2020, she also founded the Organisation for Ethical Source (OES) with the mission to “empower developers, giving us the freedom and agency to ensure that our work is being used for social good and in service of human rights.”
The Hippocratic License
The Hippocratic License is essentially a modified MIT license with additional conditions that explicitly restrict the use of the software for unethical purposes. According to the Hippocratic License: “the software may not be used by individuals, corporations, governments, or other organisations for systems or activities that knowingly endanger, harm, or otherwise threaten the physical, mental, economic, or general well-being of underprivileged individuals or groups in violation of the United Nations Universal Declaration of Human Rights.”
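In practice, a project adopts the license like any other. As a sketch only: the Hippocratic License 2.1 has the SPDX identifier `Hippocratic-2.1`, so a hypothetical Python package (the package name below is invented) could declare it in its `pyproject.toml`:

```toml
[project]
name = "example-tool"   # hypothetical package name
version = "0.1.0"
# SPDX short identifier for the Hippocratic License 2.1
license = { text = "Hippocratic-2.1" }
```

Note that because the license is not OSI-approved, some registries and license-scanning tools may flag or handle it differently from MIT or GPL code.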
The Debate Around Ethical Licensing
Despite the growing interest in ethical licensing, the concept remains controversial. Traditional free software advocates oppose restrictions on use: the Free Software Foundation (FSF) defines free software by the following four essential freedoms, and the Open Source Initiative (OSI) similarly requires that open source licenses not discriminate against fields of endeavour:
- Freedom 0: The freedom to run the program as you wish, for any purpose.
- Freedom 1: The freedom to study how the program works and change it as you wish. Access to the source code is a precondition for this.
- Freedom 2: The freedom to redistribute copies so you can help others.
- Freedom 3: The freedom to distribute copies of your modified versions to others, giving the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
The ethical source movement restricts Freedom 0 in order to protect the rights of people affected by the software. This intentional restriction means that ethical source licenses do not fully meet the FSF and OSI definitions of free and open source software, since they impose moral constraints on the software’s use. This fuels ongoing debate over whether a project under an ethical-use license can still be called open source.
The future
As the ethical source movement develops, it raises important questions about the future of open source software. In my opinion, the future of OSS licenses should prioritise responsibility and ethics, enforcing stricter rules to prevent misuse. This may require modifying the traditional definitions of OSS to reflect these evolving values. This ongoing debate will likely shape the future of open source in profound ways, pushing the community to reconsider the balance between freedom and responsibility in software development.
Artificial Intelligence
As AI continues its rapid rise, the role of open-source AI has become increasingly significant. Google recently highlighted how open-source communities have driven rapid advances by pushing, testing, integrating, and extending the capabilities of large language models in ways that private efforts alone might struggle to match. I previously discussed the advantages of open-source software (OSS) in AI development. However, open-source AI also presents a unique set of challenges that can slow its broader adoption.
While open-source AI provides transparency in its code, it often relies on data that isn’t open-source, such as proprietary or restricted datasets, which limits the full benefits of openness. This lack of transparency makes it harder to assess the quality of the AI and can reduce trust in its outcomes. Additionally, because open-source AI systems are publicly accessible, they are more vulnerable to cyberattacks. Malicious actors can exploit exposed code, system configurations, or even manipulate the data itself, leading to corrupted models or misleading outputs. Unlike traditional software, AI models are highly sensitive to the quality and diversity of their training data. Poor datasets can lead to biased or unreliable results. To avoid such risks, organisations sometimes withhold training data for legal, ethical, or safety reasons. For example, Meta kept the training data for Llama 2 confidential, highlighting the ongoing challenge of balancing transparency with security in open-source AI.
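The sensitivity of models to their training data can be seen even in a toy setting. The sketch below (illustrative only: synthetic 1-D data and a simple 1-nearest-neighbour classifier, not a real AI system) shows how flipping a fraction of training labels, one crude form of data poisoning, degrades accuracy on a held-out test set:

```python
import random

random.seed(0)  # make the toy experiment deterministic

def make_data(n=200):
    """Two 1-D Gaussian clusters: class 0 around -2.0, class 1 around +2.0."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(-2.0 if label == 0 else 2.0, 1.0)
        data.append((x, label))
    return data

def predict(train_set, x):
    """1-nearest-neighbour: copy the label of the closest training point."""
    return min(train_set, key=lambda p: abs(p[0] - x))[1]

def accuracy(train_set, test_set):
    correct = sum(1 for x, y in test_set if predict(train_set, x) == y)
    return correct / len(test_set)

def poison(data, rate=0.4):
    """Simulate label-flipping poisoning: corrupt a fraction of the labels."""
    return [(x, 1 - y if random.random() < rate else y) for x, y in data]

train, test = make_data(), make_data()
clean_acc = accuracy(train, test)
poisoned_acc = accuracy(poison(train), test)
print(f"clean: {clean_acc:.2f}, poisoned: {poisoned_acc:.2f}")
```

Because the nearest neighbour simply copies a training label, corrupting 40% of those labels directly corrupts predictions; real models are more robust than this toy, but the underlying dependence on data quality is the same.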
Open-source AI also faces legal and intellectual property challenges. Unclear origins in some models raise risks of unintentional copyright infringement. For example, AI code generators trained on large datasets often lack transparency about the source of their training data, potentially leading to disputes over proprietary code used without proper attribution.
While there are opportunities for financial support, a clear funding structure for open-source AI is essential. As EU policy discussions have noted, relying solely on volunteers creates sustainability and risk-management challenges. Governments should ensure fair compensation for developers and invest in resources for reviewing contributions, addressing feedback, and improving security. Without proper oversight, poorly managed open-source systems can pose significant security risks.
The future
While open-source AI offers huge potential for collaboration and innovation, it also presents unique challenges that must be carefully managed. I agree that training data sometimes needs to remain protected, but transparency, security, and ethical responsibility must be top priorities in open-source AI development. More efforts, like the EU’s regulatory approach, are needed to establish clear guidelines for safe and responsible progress.