13th Jan 2021

Featured Article - Rules & Regs: Social Media

In this article, we look at the rules, policies and guidance around what types of content are (and aren't) allowed on social media websites.

Social Media Platforms

Social media platforms reflect many different views and motivations and can be used for harm (as well as good), e.g. cyberbullying, hate speech, grooming and more. By positioning themselves as platforms rather than publishers, social media companies are currently protected from the regulation that publishers are subject to. The platforms have instead introduced and published vast quantities of their own rules and guidelines to show that they can operate without the need for regulation or other intervention. This article focuses mainly on Facebook and Twitter as examples.

Safe Posting

Safe posting on platforms such as Facebook relies not just on users' own views and behaviour, but also on how well the platforms can detect posts that break the rules (mainly through algorithms, reports from users and some internal reviews), moderate them, and act on them.

Facebook – Community Standards

Facebook, for example, regards its social network as an online community and, as such, issues guidance about the types of behaviour that are and aren't permitted. These rules/standards are listed in its 'Community Standards'. Facebook says that the goal of these standards is to "create a place for expression and give people a voice".

Values Vs Expression

Facebook is keen to stress that it favours expression. When it does limit expression, it does so because that expression is at odds with its published values: the preserving and protecting of authenticity, safety, privacy and dignity.

Facebook justifies allowing some content that would appear to go against its Community Standards if it is deemed to serve a purpose for "public awareness", i.e. it is newsworthy or in the public interest. An example could be a graphic depiction of war shared to show the consequences of war.

Challenges

Some of the challenges of moderating a social media platform were revealed in November 2020, when Facebook disclosed (via its Community Standards Enforcement Report) that between July and September, 22.1 million pieces of hate speech content had been found on Facebook and 6.5 million on its Instagram platform. That works out at almost 10 million instances of hate speech per month across the two platforms.

The same report detailed 13 million pieces of child nudity and sexual exploitation content, and more than a million items of suicide and self-injury content, found across Facebook's platforms in that period.

Facebook has also reported recently that it is making efforts to crack down upon misinformation relating to coronavirus, conspiracy theories, the Holocaust and QAnon (a far-right conspiracy theory).

In addition to guarding against racism, bullying, hate speech and more, social media platforms also need to guard against state-sponsored political misinformation and influence, as was shown after the 2016 US presidential election.

Algorithms

Keeping up with policing a vast social media platform involves the use of complex algorithms designed to detect things like hate speech, racial slurs, bullying and more. Facebook, for example, is currently reported to be updating the algorithms that detect hate speech and racism as part of its "worst of the worst" project (the WoW Project). This project is reported to be designed to make the algorithms better at spotting abusive content aimed at people of colour, Muslims, the LGBTQ community and Jewish people.
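
To give a feel for the general idea (and only the general idea), here is a minimal sketch of keyword-based content flagging in Python. This is not Facebook's algorithm (real moderation systems combine large machine-learning models, user reports and human review), and every term, weight and threshold below is a hypothetical placeholder.

```python
# Minimal illustrative sketch of automated content flagging.
# This is NOT Facebook's algorithm; real systems use large machine-learning
# models, user reports and human review. All terms, weights and thresholds
# below are hypothetical placeholders.

import re
from dataclasses import dataclass

# Hypothetical blocklist: each pattern carries a severity weight.
FLAGGED_PATTERNS = {
    r"\bexample_slur\b": 3.0,    # placeholder for a severe term
    r"\bexample_insult\b": 1.0,  # placeholder for a milder term
}

REVIEW_THRESHOLD = 2.0  # scores at or above this are queued for human review


@dataclass
class ModerationResult:
    score: float
    matched: list
    needs_review: bool


def score_post(text: str) -> ModerationResult:
    """Score a post against the blocklist and decide whether to queue it."""
    lowered = text.lower()
    score = 0.0
    matched = []
    for pattern, weight in FLAGGED_PATTERNS.items():
        hits = re.findall(pattern, lowered)
        if hits:
            score += weight * len(hits)
            matched.extend(hits)
    return ModerationResult(score, matched, score >= REVIEW_THRESHOLD)


if __name__ == "__main__":
    result = score_post("This post contains example_slur and example_insult.")
    print(result)  # needs_review=True: queued for a human moderator
```

In practice, simple keyword lists generate many false positives and miss context, which is one reason the platforms pair automated detection with human review.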

Subjects

Facebook has Community Standards rules and guidelines that are intended to protect its platform and users.  For example, these standards cover:

– Violence and criminal behaviour. This covers preventing potential offline harm that may be related to content on Facebook; stopping organisations or individuals that proclaim a violent mission, or are engaged in violence (e.g. terrorists or human traffickers), from having a presence on Facebook; stopping posts relating to coordinating harm and publicising crime; prohibiting attempts to buy and sell regulated goods, e.g. drugs and firearms; and removing content relating to fraud and deception.

– Safety. This covers content related to child sexual exploitation, abuse and nudity; sexual exploitation of adults; bullying and harassment (with a distinction made between public figures and private individuals to allow for discussion; Facebook also has a Bullying Prevention Hub for teenagers, parents and educators); human exploitation; and privacy violations and image privacy rights, under which Facebook will "remove content that shares, offers or solicits personally identifiable information or other private information that could lead to physical or financial harm, including financial, residential and medical information, as well as private information obtained from illegal sources."

– Objectionable content. This relates to stopping 'hate speech', which Facebook defines as "a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, caste, sex, gender, gender identity and serious disease or disability". Facebook separates any such attacks on its platform into 3 tiers of severity. Facebook explains how it approaches many issues relating to hate speech at https://about.fb.com/news/2017/06/hard-questions-hate-speech/ and quotes its own 2017 figure showing that it deleted "288,000 posts a month globally" relating to hate speech.

– Integrity and authenticity. This relates to stopping fake accounts being created i.e., preventing impersonation and identity misrepresentation by removing accounts that are harmful to Facebook’s community.

Within this section, Facebook also covers spam; cyber-security, i.e. not allowing attempts to gather sensitive user information through abuse of the platform and its products; inauthentic behaviour (people misrepresenting themselves or using fake accounts for dishonest purposes); and false news, which Facebook does not remove altogether but instead reduces its distribution by showing it lower in the News Feed, partly to preserve satire and opinion (a simple sketch of this 'demote, don't remove' idea follows this list).

– Manipulated media – image, audio, or video (e.g. deepfakes).

– Memorialisation. When a Facebook user dies, friends and family can request that their account is memorialised, whereupon the word “Remembering” is added above the name on the person’s profile.

– Respecting intellectual property. This relates to Facebook users respecting other peoples’ copyrights, trademarks and other legal rights when posting on the platform.

– Content-related requests and decisions. This relates to requests for removal of accounts, additional protection of minors (e.g. removal of child abuse imagery) and decisions referred to Facebook’s Independent Oversight Board.

– Additional information. This relates to gathering information from ‘Stakeholders’ i.e., Facebook wanting to make policies based on feedback from community representatives and a broad spectrum of the people who use its service.
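
As a purely hypothetical sketch of the 'demote, don't remove' approach mentioned above, the Python snippet below lowers the ranking of posts flagged by fact-checkers instead of deleting them. The data model, scores and demotion factor are invented for illustration; Facebook's actual News Feed ranking is proprietary and far more complex.

```python
# Hypothetical sketch of demoting (not removing) flagged content in a feed.
# Facebook's real News Feed ranking is proprietary; the scores, factor and
# data model below are invented for illustration.

from dataclasses import dataclass

DEMOTION_FACTOR = 0.2  # flagged posts keep only 20% of their ranking score


@dataclass
class Post:
    post_id: str
    engagement_score: float         # hypothetical base ranking signal
    flagged_as_false: bool = False  # e.g. set after a fact-checker review


def rank_feed(posts: list) -> list:
    """Order posts by score, demoting (but not removing) flagged ones."""
    def effective_score(post: Post) -> float:
        if post.flagged_as_false:
            return post.engagement_score * DEMOTION_FACTOR
        return post.engagement_score
    return sorted(posts, key=effective_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("a", 9.0, flagged_as_false=True),  # viral but flagged
        Post("b", 5.0),
        Post("c", 3.0),
    ])
    print([p.post_id for p in feed])  # ['b', 'c', 'a']: flagged post sinks
```

The key design point this illustrates is that the flagged post stays visible to anyone who looks for it, but it no longer outranks ordinary posts in the feed.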

Reporting

Facebook offers fast ways for users to report posts and other users. For example, reporting a post involves clicking on the 3 dots (top right) and selecting “Find support or report post”.  Other ways of reporting are listed here: https://en-gb.facebook.com/help/reportlinks/

Twitter

Twitter, of course, has its own extensive rules and policies designed to "serve the public conversation", which are published online at https://help.twitter.com/en/rules-and-policies/twitter-rules.

Twitter's online guidance focuses very clearly on what is not allowed on its platform. For example, Twitter is very clear that users must not:

– Threaten violence against an individual or a group of people.

– Threaten or promote terrorism or violent extremism.

– Engage in the targeted harassment of someone or incite other people to do so.

– Promote violence against, threaten, or harass other people based on race, ethnicity, national origin, caste, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease.

– Promote or encourage suicide or self-harm.

– Publish or post other people’s private information.

– Post or share intimate photos or videos of someone that were produced or distributed without their consent.

– Use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes.

There are, of course, many other rules and guidelines.

Enforcement

Enforcement (i.e. action taken by Twitter when these rules and guidelines are broken) can be applied at Tweet level, Direct Message level and account level.

Tweet-level enforcement includes limiting Tweet visibility, requiring Tweet removal, hiding a violating Tweet while awaiting its removal, or placing a Tweet behind a notice explaining that it has been kept up as a public-interest exception.

Direct Message-level enforcement includes stopping conversations or placing a violating Direct Message behind a notice so that no one else in the group can see it again.

Account-level enforcement includes requiring media or profile edits, placing an account in read-only mode, verifying account ownership, and permanent suspension.

Trump – Permanent Suspension

One extremely high-profile recent permanent suspension of a Twitter account was that of President Donald Trump, following an initial temporary suspension, over what he said in Tweets prior to his supporters descending on Washington. Interestingly, Twitter has specific guidelines relating to how world leaders are permitted to use its platform. See: https://blog.twitter.com/en_us/topics/company/2019/worldleaders2019.html

Non-Violating Content

Twitter can also act against non-violating content.  This can include:

– Placing a Tweet behind a notice (e.g. adult content or graphic violence).

– Withholding a Tweet or account in a country, e.g. where laws in a specific country apply to Tweets and/or Twitter account content.

Reporting / Complaining

There are mechanisms built into Twitter that enable users to report a Tweet, account or conversation on the grounds that it is abusive. This usually involves clicking on the 3 dots/more icon and selecting the appropriate reporting link from there. More information can be found here: https://help.twitter.com/en/safety-and-security/report-abusive-behavior

Your Safety

Part of using social media safely involves taking steps to protect yourself in terms of privacy and data security.  Ways this can be achieved include:

– Not sharing personal information online, not ‘over-sharing’ and not sharing anything you wouldn’t want your family to see.

– Not sharing too many personal details that could be clearly linked with your identity (e.g. real date of birth, address details etc.).

– Checking privacy settings, reviewing what you make ‘public’, and being careful when sharing location information.

– Being very wary of accepting friend requests from complete strangers or requests from those you think you are already friends with (this could be a sign of a hacked account).

– Watching out for phishing scams, i.e. not following links that could direct you to malicious websites (the sketch below shows a few telltale signs a link can be checked for).
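
As a rough illustration of what 'checking a link' can mean in practice, the Python sketch below applies a few simple heuristics (non-HTTPS links, raw IP addresses as hosts, punycode lookalike domains, deep subdomain chains) to a URL before it is followed. These heuristics are illustrative assumptions only, not a complete phishing defence; real protection also relies on reputation databases and browser safe-browsing services.

```python
# Illustrative sketch: a few simple heuristics for spotting suspicious links.
# Not a complete phishing defence; these checks are examples only.

from urllib.parse import urlparse
import ipaddress


def suspicious_signs(url: str) -> list:
    """Return a list of reasons a URL looks suspicious (empty = no flags)."""
    reasons = []
    parsed = urlparse(url)
    host = parsed.hostname or ""

    # 1. Links that are not HTTPS offer no transport security.
    if parsed.scheme != "https":
        reasons.append("not using https")

    # 2. A raw IP address instead of a domain name is a common phishing sign.
    try:
        ipaddress.ip_address(host)
        reasons.append("host is a raw IP address")
    except ValueError:
        pass

    # 3. Punycode ('xn--') domains can disguise lookalike characters.
    if any(label.startswith("xn--") for label in host.split(".")):
        reasons.append("punycode (possible lookalike) domain")

    # 4. Many subdomains, e.g. paypal.com.example.net, can hide the real site.
    if host.count(".") >= 3:
        reasons.append("unusually deep subdomain chain")

    return reasons


if __name__ == "__main__":
    for link in ["https://example.com/login",
                 "http://192.0.2.1/verify-account",
                 "https://xn--pypal-4ve.com/signin"]:
        flags = suspicious_signs(link)
        print(link, "->", flags or "no obvious flags")
```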

Safe Posting

Advice for safe posting on social media websites, therefore, could include:

– Be aware of the rules of the platform.

– Keep it “light and interesting”, without revealing too many personal details.

– Post within the rules of any group.

– Avoid getting involved in heated arguments with members of groups or other less familiar Facebook friends.

– Be careful not to use language or express views that could upset others.

– Report abuse/offensive posts and behaviour.

– If you have children who want to use social media, set guidelines about social media use: make sure they're not posting personal details or photos of themselves, accepting friend requests from people they don't know, or joining inappropriate groups; keep their profile private and check the privacy settings; and keep an open dialogue with them about their digital activities.
