Avoiding Censorship with ChatGPT: A thought experiment

Daniel Fernandez
6 min read · Dec 16, 2022


Disclaimer: Parts of this post have been generated by ChatGPT. The code shared is incomplete and constitutes only part of what a working Telegram bot would require. I don’t endorse and take no responsibility for any malicious use of code generated by ChatGPT.

Picture by Torsten Dettlaff — Pexels.

Text Message Censorship in China and beyond

Instant messaging applications have become a popular means of communication in China, with many people using them to stay in touch with friends and family and access news and information. However, the Chinese government has implemented strict censorship measures on these platforms to control the flow of information and maintain social stability.

One of the main ways that the Chinese government censors content within instant messaging applications is by blocking specific keywords and phrases. These can include sensitive political terms, such as “democracy” and “human rights,” as well as terms related to social issues and others. When users try to send messages containing these keywords, they may receive a notification stating that the message cannot be delivered due to government regulations.
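As a rough illustration, naive keyword blocking of this kind can be sketched in a few lines of Go. The blocklist below is a hypothetical example of my own; real systems combine far more signals than simple substring matching.

```go
package main

import "strings"

// blocked is a hypothetical blocklist, used only for illustration.
var blocked = []string{"democracy", "human rights"}

// isCensored reports whether a message contains any blocked keyword,
// ignoring case, mimicking the delivery check described above.
func isCensored(msg string) bool {
	lower := strings.ToLower(msg)
	for _, kw := range blocked {
		if strings.Contains(lower, kw) {
			return true
		}
	}
	return false
}
```

A platform running such a check would simply refuse delivery whenever `isCensored` returns true, which matches the “message cannot be delivered” behavior users report.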

In addition to blocking specific keywords, the Chinese government monitors and censors user-generated content on instant messaging platforms. This can include deleting or blocking messages and accounts containing banned content and issuing warnings or fines to users who violate censorship rules. The government may also use software to automatically identify and censor messages containing banned keywords or hire human moderators to review and delete inappropriate content.

Human rights groups and internet freedom advocates have criticized the Chinese government’s censorship of instant messaging applications. Some argue that these measures restrict the free flow of information and hinder the ability of people to communicate and express themselves freely. Despite these criticisms, the Chinese government has shown no signs of easing its censorship of instant messaging applications. These measures will likely continue to be a significant feature of China’s internet landscape in the foreseeable future.

A “hacky” solution

Users may attempt to bypass these censorship measures by modifying images in ways that overwhelm the reviewers. This technique, sometimes known as “oversaturation,” involves adding so many modifications to an image that it becomes difficult or impossible for reviewers to identify and remove all of the banned content.

For example, a user might take an image of a banned political leader and add numerous copies of the image to the background, making it difficult for reviewers to identify and remove the original image. Alternatively, the user might add many unrelated pictures or text to the image, making it harder for reviewers to locate the banned content.

It’s worth noting that this technique is not foolproof and may not always be effective in bypassing censorship. Reviewers may still be able to identify and remove the banned content, or the user’s account may be flagged or suspended for attempting to evade censorship. Additionally, using this technique may violate the terms of service for the instant messaging platform and could result in further consequences.
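Part of why small tweaks can slip past automated matching is that exact-hash filters compare file digests, and changing even one byte changes the digest completely. Here is a minimal sketch, using SHA-256 as an assumed example of such a digest; note that real platforms may instead use perceptual hashes, which tolerate small changes far better.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
)

// digest returns the hex-encoded SHA-256 of raw image bytes.
// A filter matching exact digests will not recognize an image
// after even a single-byte change.
func digest(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}
```

Flipping one bit of one pixel yields a completely different digest, so an exact-match blocklist built from known-bad hashes no longer flags the image.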

Risks and Considerations

The same oversaturation technique can also be abused for nefarious purposes. While some individuals may use it to evade government censorship and express themselves freely, bad actors can exploit it to spread misinformation and malicious content.

For example, a bad actor might take an image containing false or misleading information and add numerous copies to the background, making it difficult for reviewers to identify and remove the original image. The bad actor could then send this image to many people through an instant messaging platform, potentially spreading misinformation.

Similarly, a bad actor might add many unrelated images or text to an image containing malicious content, such as malware or a phishing link. This could make it harder for reviewers to identify malicious content and prevent it from being disseminated.

The same caveats apply: reviewers may still identify and remove the banned content, and accounts may be flagged or suspended for attempting to evade censorship. More importantly, the potential for abuse highlights the need for caution when using this technique and the importance of verifying the accuracy and legitimacy of any shared information or content.

The Experiment

So I asked ChatGPT: “Write a Telegram bot in Go that modifies images by adding a black pixel to change the image hash. The bot then responds to the original message with the new image.” This is what I got:


// First, you will need to create a bot and obtain a bot token by following the instructions at https://core.telegram.org/bots#6-botfather.
// Next, install the Go library to work with the Telegram API by running the following command:

go get github.com/go-telegram-bot-api/telegram-bot-api

// Import the necessary libraries:
import (
	"bytes"
	"image"
	"image/color"
	"image/draw"
	"image/png" // needed for png.Encode below; missing from the generated output
	"log"

	"github.com/go-telegram-bot-api/telegram-bot-api"
)

// Initialize the bot by creating a new BotAPI instance and setting the bot token:

bot, err := tgbotapi.NewBotAPI("YOUR_BOT_TOKEN")
if err != nil {
	log.Panic(err)
}

// Set up a webhook or start a long-polling loop to receive updates from the Telegram API. In this example, we will use a long-polling loop:

u := tgbotapi.NewUpdate(0)
u.Timeout = 60

updates, err := bot.GetUpdatesChan(u)
if err != nil {
	log.Panic(err)
}

// Iterate over the updates channel and handle each update:

for update := range updates {
	if update.Message == nil {
		continue
	}

	// Check if the update contains an image
	if update.Message.Photo != nil {
		// Get the largest version of the image
		largestPhoto := (*update.Message.Photo)[len(*update.Message.Photo)-1]
		imageUrl, err := bot.GetFileDirectURL(largestPhoto.FileID)
		if err != nil {
			log.Println(err)
			continue
		}

		// Download the image
		imageData, err := downloadImage(imageUrl)
		if err != nil {
			log.Println(err)
			continue
		}

		// Decode the image
		img, _, err := image.Decode(bytes.NewReader(imageData))
		if err != nil {
			log.Println(err)
			continue
		}

		// Modify the image by adding a black pixel
		// (image.Image is read-only, so the helper must return a new image)
		img = addBlackPixel(img)

		// Encode the modified image
		modifiedImageData := new(bytes.Buffer)
		err = png.Encode(modifiedImageData, img)
		if err != nil {
			log.Println(err)
			continue
		}

		// Send the modified image back to the user
		msg := tgbotapi.NewPhotoUpload(update.Message.Chat.ID, tgbotapi.FileBytes{Name: "modified.png", Bytes: modifiedImageData.Bytes()})
		_, err = bot.Send(msg)
		if err != nil {
			log.Println(err)
			continue
		}
	}
}

Yes, this is not production code, and yes, some elements of the application still need to be added. But consider this: I got this far with a single sentence and in less than a minute. Getting to Step 1 on my own would have taken at least 30 minutes, and getting to Step 6 would have taken days, if not weeks, for someone like me who is not a developer. Also, who can dislike that level of in-code documentation?

Conclusion

This technology has the potential to be a source of innovation, as it could be used to create new and more efficient ways of communicating and of generating content, including software. However, ChatGPT also presents significant risks for society. One of the main concerns is the potential for ChatGPT to be used to create deepfake text, that is, text indistinguishable from human-generated content. Or, as a variation of this case shows, to create malicious code.

Another risk of ChatGPT is the possibility of it being used to automate tasks that currently require human labor. This could lead to job displacement and economic disruption, particularly if ChatGPT becomes widely adopted. Overall, ChatGPT has the potential to bring about significant innovation and convenience, but it also presents significant risks that must be carefully considered and addressed. It will be necessary for stakeholders, including developers, policymakers, and users, to work together to mitigate these risks and ensure that ChatGPT is used ethically and responsibly.



Written by Daniel Fernandez

Product Manager in Infosec. Cybersecurity Graduate Student. https://linktr.ee/dnlfdz
