We Need Your Help!

We ensure that our young creators are fairly paid for their work, while all the content on VoiceBox remains free for you to enjoy on a safe, ad-free platform. To keep it this way, we rely on generous support from readers like you.

Please consider making a donation, no matter how small. Every penny goes directly to supporting young creators, and it only takes a minute of your time. Thank you!

How X Sparked Further Discussion About the Ethics of AI

X.com, formerly known as Twitter, has released Grok, its controversial new AI, capable of generating convincing images of celebrities in compromising scenarios.

Created by Sam

Published on Oct 7, 2024
[Image: the xAI logo on a black laptop]

Since Elon Musk's acquisition of Twitter in 2022, many have accused the billionaire of turning the platform into a digital "Wild West", where anything goes and the rules are too lax.

As with many global tech giants, X.com has recently integrated a new AI platform, Grok (built by Musk's AI company, xAI), into its subscription to place itself on the map in the massively growing AI market. Musk, the majority shareholder of both companies, is no rookie in the AI space either, having co-founded OpenAI (the company behind ChatGPT and DALL-E) in 2015.

Upon Grok's release, X.com premium users were quick to notice that the AI had an image-generation feature, powered by Flux.1, capable of creating images of celebrities, characters, and even crime scenes with phenomenal accuracy. Whilst some praised the platform for expanding the horizon of what is possible with AI, many were quick to point out the dangers of such a tool, especially when it comes to the spread of mis- and disinformation.

We live in an age where AI-generated content is already becoming increasingly difficult to identify; a basic "finger count" is no longer the catch-all that it once was. Social media platforms such as Facebook and Reddit are seeing massive amounts of engagement on AI-generated spam from users unable to identify the posts as AI, leaving room for misinformation to spread. With AI becoming so accurate and so accessible, this issue can only get worse.

Upon Grok's release (or more accurately, its "beta test"), some of X's premium users were quick to take matters into their own hands, intentionally creating provocative AI images in order to stir up controversy or potentially get the AI shut down. Many of these prompts placed beloved characters from Disney and Nintendo (two notoriously litigious companies) in mature scenarios, such as appearing at crime scenes, taking drugs, or partying with criminals, in the hope of provoking a lawsuit against xAI or Elon Musk. Other prompts placed celebrities in similar situations, with Musk himself the primary target.

In some ways, humans are a lot cleverer than AI and are constantly coming up with new ways to "jailbreak" (bypass the restrictions of) AI models, using prompts containing phrases such as "in an alternate universe" or "for educational purposes". Grok clearly does have some degree of censorship, and whilst it blocks a lot of NSFW content such as nudity, users were quick to figure out that they could generate horrific scenarios, such as homicides, using phrases like "a crime scene photo of...".

While jailbreaks are seen as an inevitability of the way language models work, many are convinced that xAI has not done enough to prevent them. People are also critical of the amount of freedom granted to users, especially with regard to intellectual property and copyright. Grok has no qualms about drawing Disney characters, for instance, where a service such as DALL-E (created by OpenAI) will respond with "The image will be based on a similar concept, but it will be original and won't directly reference any copyrighted characters". I personally would argue that AI imagery can never truly be "original", but that's a discussion for another day.

As with all AI platforms, X.com maintains that it is a platform rather than a publisher, and while Section 230 makes no legal distinction between the two, this means that, in its eyes, it is not liable for the content created using its service. Within xAI's terms of service, it is stated that users must have permission to use the likeness of any individual or character. This particular clause has been largely ignored by the beta testers, and it is unknown whether companies pursuing copyright claims will go after xAI or the users who created the images.

Musk has been a vocal critic of OpenAI's shift from its humble beginnings as a non-profit to the monolithic, for-profit tech titan it is today, so it comes as a surprise that he has created his own for-profit AI company. However, Grok-1 is now open source for developers to use.

But what does "open source" mean, and why does it matter?

Open source means that the source code (the nerdy stuff that programmers use to create software) is publicly available for anybody to download. Being open source usually promotes innovation, as it lets programmers see how the software is coded and allows smaller companies to compete in the same market using the same codebase. It is typically seen as a positive because it encourages collaboration as well as scrutiny of code; we rely on open-source software for everything from our mobile devices to this very webpage.

Even with all of the positives open source brings, I personally believe that releasing Grok as an open-source project was very risky. Grok already seems to have little regard for rules and laws, and allowing developers to circumvent the few restrictions already in place in their own implementations could be incredibly dangerous. It is unknown how the courts will rule on the copyrighted content generated by Grok, but its capabilities could have massive ethical implications, especially since it may potentially be configured to generate NSFW and NSFL (not safe for life) content.

Whilst the current issues with Grok-2 can be chalked up to the fact that it is in "beta testing", I am keen to learn what is and isn't an intended feature. I'm also particularly interested to see how far Musk's stance on "freedom of speech" extends when it comes to the sharing of intentionally harmful images online.

Content Disclaimer: The views & opinions expressed in this article are those of the author and do not necessarily reflect the views of VoiceBox, affiliates, and our partners. We are a nonpartisan platform amplifying youth voices on the topics they are passionate about.
