A London-based AI startup faces scrutiny for allowing unrestricted harmful content creation.
- Tests show Haiper AI’s safeguards are weaker than its peers in preventing harmful content.
- Generated images include controversial depictions of public figures like Donald Trump and Taylor Swift.
- AI experts warn of risks in using technology for misinformation and deepfakes.
- Haiper AI’s terms prohibit the creation of non-consensual imagery, yet violations go unflagged.
Haiper AI, a London-based image and video generation platform, has come under criticism for its inadequate content safeguards. Unlike its competitors, Haiper AI permits the creation of potentially harmful content, raising concerns about its application in spreading misinformation and harmful messaging.
In tests conducted by UKTN, images were generated depicting real people, including Donald Trump and Taylor Swift, in misleading contexts. Such images could be misused to spread false narratives across digital platforms, underscoring the urgent need for stringent content guidelines in AI tools.
AI safety experts have consistently warned against the potential harms posed by generative technologies. The ease with which AI can mimic real personalities poses risks ranging from non-consensual deepfakes to misleading political endorsements. While other AI platforms enforce strict restrictions, Haiper AI’s allowances for creating images involving public figures in troubling scenarios stand out.
The fact that Haiper could generate images depicting figures such as Kamala Harris, as well as hypothetical scenarios involving British Prime Minister Keir Starmer, signals significant gaps in its content moderation systems. Other platforms, such as Meta AI and ChatGPT, block such prompts to prevent privacy and rights violations.
Haiper claims to use algorithms that detect and prevent breaches of its Acceptable Use Policy. However, UKTN’s tests suggest these measures are insufficient, as clear violations went unflagged. The company’s terms explicitly warn against inputting personal information without consent, yet enforcement appears ineffective in practice.
Recent incidents have amplified public concern about AI’s potential for misuse. AI-generated audio imitating UK politicians has circulated online in attempts to discredit them, demonstrating the technology’s unsettling power as a tool of political manipulation. Digital fabrications involving Taylor Swift and Donald Trump have likewise intensified debate over AI’s role in misinformation.
The issues uncovered at Haiper AI underscore the critical importance of robust, enforceable safeguards in generative AI platforms.