Instagram AI appeals to users’ reason
Harassment on social media has unfortunately become an everyday phenomenon, and it is a major problem for the operators of social networks and messaging services. Facebook in particular has had to wade ever deeper into the swamp of hate speech and conspiracy theories and take more consistent action against it, using both artificial intelligence and human moderators. The latter work through content hell every day: viewing, evaluating, and blocking material, often under the adverse working conditions reported for Facebook's content cleaners.
Facebook’s subsidiary Instagram appears to be taking a different approach to combating inflammatory comments and posts on its platform. The photo-sharing app has now announced the rollout of an artificial intelligence that appeals to users’ capacity for self-reflection, offering the cerebral carrot before reaching for the whip and club. The new feature is meant to address two problems at once:
- Self-censorship and self-criticism: As soon as a user is about to post something that Instagram’s AI flags as offensive, inflammatory, or otherwise in violation of the community guidelines, the software appeals to the author’s reason by asking, via a pop-up, whether they are really sure they want to publish it. If they are no longer so sure, an “Undo” button lets them withdraw the post without anyone other than Instagram’s AI, and certainly not the intended recipient of the comment, ever knowing about it (a rough sketch of this flow follows after the list).
The tool also gives users the opportunity to contact Instagram if they believe content has been wrongly flagged as offensive. These appeals, in turn, provide valuable training material for the artificial intelligence: they enlarge its database and make the software smarter, so that in future it can recognize situational, contextual, or simply misunderstood content more quickly and more reliably.
- Restrict: This feature is primarily meant to let users keep hateful commenters and their verbal diarrhea away from their account. It is also intended to protect younger users in particular and to lower the barrier to reporting offensive content: precisely because many teens know their personal trolls in real life, they often shy away from involving the platform and do not report such offenses. Restrict is meant to remedy that. Once a commenter has been restricted by a user, they are the only one who can still see their own comments, without ever being told so. In addition, restricted users can no longer see when, or whether, the targets of their tirades are online, and they can no longer send them direct messages. The account owner can at any time make a restricted comment publicly visible, delete it, or lift the restriction altogether (the second sketch after this list spells out these rules).
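To make the first feature a little more concrete, here is a minimal sketch of how such a “flag, confirm, undo” step might look on the backend. Every name and threshold in it (`classify_offensiveness`, `CommentDraft`, the 0.8 cutoff) is an illustrative assumption, not Instagram’s actual API or model.

```python
# Hypothetical sketch of the "are you sure?" flow described above.
# All names and thresholds are illustrative assumptions, not Instagram's real API.

from dataclasses import dataclass

OFFENSIVENESS_THRESHOLD = 0.8  # assumed cutoff for showing the warning


@dataclass
class CommentDraft:
    author_id: int
    text: str


def classify_offensiveness(text: str) -> float:
    """Stand-in for the AI classifier; returns a score between 0 and 1."""
    # A real system would call a trained model here.
    return 0.0


def submit_comment(draft: CommentDraft, user_confirms: bool) -> str:
    """Flag the draft, ask the author to reconsider, and let them undo."""
    score = classify_offensiveness(draft.text)
    if score >= OFFENSIVENESS_THRESHOLD:
        # The warning is shown only to the author; the recipient never sees it.
        if not user_confirms:
            return "withdrawn"  # author tapped "Undo"
        # An author who thinks the flag is wrong could appeal here; that feedback
        # becomes labelled training data for the classifier, as described above.
    return "published"
```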
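The Restrict behaviour, in turn, boils down to a handful of visibility checks. The following sketch, again with invented class and method names, shows one way those rules could be expressed; it illustrates the behaviour described above and is not Instagram’s implementation.

```python
# Hypothetical sketch of the Restrict visibility rules described above.
# Class and method names are invented for illustration only.

class Account:
    def __init__(self, user_id: int):
        self.user_id = user_id
        self.restricted: set[int] = set()         # users this account has restricted
        self.approved_comments: set[int] = set()  # restricted comments made public again

    def restrict(self, other_id: int) -> None:
        self.restricted.add(other_id)

    def unrestrict(self, other_id: int) -> None:
        self.restricted.discard(other_id)

    def comment_visible_to(self, comment_id: int, author_id: int, viewer_id: int) -> bool:
        """A restricted author's comment stays visible only to the author,
        unless the account owner explicitly makes it public."""
        if author_id not in self.restricted:
            return True
        if comment_id in self.approved_comments:
            return True
        return viewer_id == author_id  # the author is never told they are restricted

    def can_receive_dm_from(self, sender_id: int) -> bool:
        return sender_id not in self.restricted

    def shows_online_status_to(self, viewer_id: int) -> bool:
        return viewer_id not in self.restricted
```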
Hope for insightful users
In its blog post, Instagram also pointed to the results of earlier tests with similar features. According to Instagram head Adam Mosseri, those tests showed that the AI’s nudges and its appeals to reason and good manners did, in some cases, lead users to reconsider and withdraw their comments.
That is undoubtedly good news. Less charitable observers, however, might argue that Instagram is partially shifting the responsibility for a hate-free, safe, and clean platform onto its users, at least where Restrict is concerned. And of course, wherever people interact with machines there is room for misunderstanding and misuse of the tools: comments may be wrongly flagged by the AI, leaving users to explain why they disagree with the verdict, and users may arbitrarily restrict, and thereby effectively mute, other users. That danger is ever-present.
In favor of Instagram’s AI offensive and its appeal to maintaining a civil tone, however, it should also be said that by involving users and their interaction with the platform’s AI, self-reflection could finally regain some ground against the impulsive posting of angry comments.