Happy New Year, and welcome to my first social media round-up of the year, covering December 2016 and the beginning of 2017. With Donald Trump in the White House and loose on Twitter, this year should be a social media year to remember.
Starting this month I will trial a slightly different format by focusing on fewer major current issues, a format better suited for handling a veritable tsunami of social media stories.
This month I will take another look at the fake news phenomenon, explore social media companies’ legal and public relations struggles with allegations that they support terrorist organisations and fail to do enough to combat trolling and hate speech, highlight the abuse of free speech by oppressive regimes, look at how social media can go wrong for users elsewhere, resulting in a multitude of legal issues, including whether a GIF sent on Twitter could constitute an assault, and review recent regulatory developments.
If you would like to attend one of my talks on social media, I will be presenting a professional skills seminar on ‘Social media and lawyers,’ as part of The College of Law’s Continuing Professional Development Program, on 22 March 2017. I will post further details closer to the date.
See all previous issues of ‘Social Media Round-Up’
Tweet of the month
Tweet of the month goes to a joint effort by the Scottish Sunday Herald and George Takei: Damien Love of the Scottish Sunday Herald for conceiving it, and George Takei for bringing it to the world.
You can’t say you haven’t been warned. I did flag Donald Trump would be featuring often in social media news this year.
Fake news | Social media, terrorism, and hate speech | Free speech | Social media gone wrong | Crime (and punishment) | Regulatory issues
Fake news

I first formally addressed fake news in August last year, after an alarming report by Adrian Chen in The New York Times Magazine on the activities of Russia’s shadowy professional troll army dedicated to creating and spreading fake news.
Complicating the landscape is the fact fake news is not manufactured by foreign provocateurs alone. There are also partisan hacks, and people simply looking to profit from traffic to fake news through online advertising.
Despite the initial vehement denials of a problem by social media companies and pundits alike, the subject has now come to public prominence, causing growing concern about the true influence of fake news on politics and society at large.
The security agencies of the United States have now publicly acknowledged that significant Russian efforts were made to influence the American presidential election, even if argument remains about the precise practical effect of that meddling on the election result.
At the end of last year, Germany’s spy chief also issued a warning about Russian operatives flooding the country with dangerous misinformation designed to interfere with its upcoming election, Justice Minister Heiko Maas called for the prosecution of people who spread fake news on social media sites, and it was suggested that the government is considering the introduction of new legislation that would see social media operators fined for distributing fake news.
Closer to home, concerns have been expressed about the use of the moniker ‘fake news,’ and about false claims of fake news, in the upcoming Western Australian State election campaign, with the Liberal Party of Western Australia setting up a webpage encouraging supporters to report ‘Labor lies’ on Facebook by tagging them as ‘inappropriate, annoying or not funny’ under the social network’s reporting system.
Reporting fake news is the responsible thing to do, but falsely reporting political communications as ‘fake’ or ‘false’ trivialises and confuses the real issue, and is highly irresponsible in the current climate.
If you continue to be inclined to dismiss the seriousness of fake news, ‘#Pizzagate’ offers a disturbing illustration of the potential everyday legal, insurance, workplace, and public safety implications of its unchecked proliferation.
‘#Pizzagate’ is a debunked conspiracy theory started by a fake news story alleging that a child abuse ring led by Hillary Clinton and her top campaign aide was being run out of a family pizza restaurant in Washington. The story went viral on social media.
Most people would immediately recognise such baseless allegations as absurd, and the conspiracy has in fact been thoroughly debunked.
Nevertheless, that didn’t stop 28-year-old Edgar Maddison Welch of Salisbury, North Carolina from travelling to Washington, entering Comet Ping Pong armed with a rifle on 4 December 2016 to ‘investigate’ the allegations, and discharging his gun in the packed restaurant.
This case illustrates how fake news can threaten public and workplace safety, cause business disruption and subsequent insurance claims, and give rise to defamation claims over scurrilous fake allegations, such as being involved in child abuse.
Fake news can ‘weaponise’ social media with highly undesirable consequences, from endangering lives to undermining the very foundations of our democratic societies.
Facebook’s latest responses to the issue include a personal note from Mark Zuckerberg on his Facebook page detailing a plan for fighting misinformation, testing new tools designed to help identify fake news with the assistance of users, launching a journalism project designed to deepen collaboration with news organisations, and updates to the ‘Trending’ feature to improve the quality of news items shown.
Facebook also announced it would specifically target fake news in Germany in response to government statements about the issue.
Google is also deeply involved in the fight – it permanently banned a further 200 ‘publishers’ from its ad network in the last two months of 2016 in a continuing effort to remove the financial incentive which is often one of the drivers for the creators and publishers of fake news.
From November to December 2016, we reviewed 550 sites that were suspected of misrepresenting content to users, including impersonating news organizations. We took action against 340 of them for violating our policies, both misrepresentation and other offenses, and nearly 200 publishers were kicked out of our network permanently.
How we fought bad ads, sites and scammers in 2016, Google (25 January 2017)
It remains to be seen, however, how successful social media companies will be in combating fake news when the new president of the United States himself has a long history of spreading fake news, pardon me, ‘alternative facts,’ and exploiting it for political purposes.
In the meantime a Cambridge University study indicates that all may not be lost, even if the social media companies fail us. The study suggests that it may be possible to fight fake news by ‘pre-emptively highlighting false claims and refuting potential counterarguments’ to make people less vulnerable to falsehoods. A kind of fake news ‘inoculation’ or ‘vaccine’ …
Social media, terrorism, and hate speech
Social media operators also continue to be pursued over allegations they provide a platform for terrorist organisations to recruit members and solicit funds, and struggle against hate speech on their networks.
In August I reported on Judge William Orrick dismissing a lawsuit by the wives of two contractors killed in a terrorist attack in Jordan, which claimed Twitter was liable for providing a platform for Daesh to spread its propaganda, raise funds, and attract recruits, while granting the plaintiffs leave to replead their case.
The case returned before Judge Orrick in November, and was dismissed again, this time with prejudice, bringing the case to an end. The Judge held the case barred by the so-called section 230 immunity offered by the Communications Decency Act (CDA). Section 230 provides a shield for online companies from being held responsible for third-party content. There is now a growing body of authority rejecting such lawsuits on the basis of section 230 of the CDA.
Nevertheless, similar lawsuits remain on foot against Twitter, Google, and Facebook, including Gonzalez et al v. Twitter, Inc. et al Case 4:16-cv-03282 filed last year, and a recent federal suit filed in Detroit, in the Eastern District of Michigan, by the families of three victims of the Orlando Pulse Nightclub massacre, claiming Twitter, Google, and Facebook provide ‘material support’ to Daesh.
Meanwhile, two weeks ago Dallas police officer Demetrick Pennie filed a suit in the Northern District of California, alleging the social media operators provide ‘material support’ to the Palestinian militant group Hamas by knowingly and recklessly providing the group with social media accounts as a tool for spreading extremist propaganda, which he alleges radicalised Micah Johnson, the army veteran who killed five police officers in an ambush last year.
Each of these suits targets the section 230 immunity, but their chances of success are questionable unless the plaintiffs can come up with an innovative new way to pin liability on the social media operators.
Relevantly, in December YouTube, Facebook, Twitter, and Microsoft announced the formation of a shared database of identifiers of online terror images and videos in an attempt to coordinate and improve their efforts in taking down terror related content.
Twitter also suspended the account of Jordanian cleric Abu Qatada, who is suspected of links to al-Qaeda, and two other Islamic scholars allegedly supportive of the terror group.
The major social media operators also struggle with hate speech on their platforms, amid increasing public perception that they are not doing enough about it and growing political pressure to act faster and more decisively.
In December the UK activist group Tell MAMA accused Twitter of failing to tackle online abuse against Jewish people, the LGBTI community, and Muslims.
Twitter also caused outrage when, after a brief period of crackdown on racist, white nationalist, and neo-Nazi accounts, it restored the personal account of Richard Spencer, the leader of an American white nationalist, neo-Nazi group.
You may know Mr Spencer from a couple of online videos that went viral over the last few months, one showing him leading a ‘Hail Trump’ Nazi-style salute at a white nationalist conference, and another in which he is punched in the face by a protester during a live TV broadcast.
Used to operating in the cultural and legal landscape of the United States where free speech is a revered constitutional concept, American social media companies continue to struggle to adjust to a markedly different cultural and legal landscape in Europe, where hate and discriminatory speech is illegal in a number of jurisdictions.
For example, section 130 of Germany’s Criminal Code criminalises ‘volksverhetzung’ or ‘incitement to hatred,’ and articles R. 624-3 and R. 624-4 of France’s Penal Code also contain provisions protecting individuals and groups from being defamed or insulted on the basis of their ethnicity, nationality, race, gender, sexual orientation, disability, or religion, while article R. 625-7 deals with incitement to discrimination, hatred, or violence in respect of the same.
In May last year social media companies signed up to a code of conduct requiring them to take action on hate speech in Europe within 24 hours, but in December the European Commission saw it fit to remind Facebook, Twitter, Google, and Microsoft of their obligations under the code after shortcomings were identified in their compliance.
Meanwhile leading academics warned Google its search algorithm is being manipulated by right-wing propagandists to ensure anti-Semitic, anti-Muslim, and anti-women results dominate, spurring Google into action to address the issue.
Fighting social media hate could also be complicated by the Trump presidency, given the new president is a well-known social media troll himself.
Thankfully, many innovative solutions are being suggested and developed, such as the concept of an ‘online mother’ social media account which came out of the recent four-day Hackathon 2.0 in Perth, Australia.
Free speech

In some parts of the world engaging with social media can be a dangerous decision. Oppressive regimes keep tight control over their citizens’ use of social media, and the price of free speech, or ‘misspeaking,’ can be very high.
Thailand is one good example of this with its extreme lese-majeste law which is designed to protect Thailand’s royal family from insults and threats. Article 112 of the Thai Criminal Code makes it an offence to insult or threaten ‘the King, the Queen, the Heir-apparent or the Regent,’ and prescribes imprisonment of three to fifteen years as punishment for such an offence.
Section 8 of the Thai Constitution also expressly enshrines the protection of the King:
The King shall be enthroned in a position of revered worship and shall not be violated.
No person shall expose the King to any sort of accusation or action.
Unhelpfully, there is no definition of what constitutes an insult under Article 112, and human rights groups assert the provision is often used against political opponents and dissidents as a weapon to stifle free speech.
This accusation was recently illustrated by the arrest of Thai political activist Jatupat Boonpattararaksa for sharing on Facebook the profile of the new King, Maha Vajiralongkorn, published by the BBC. The BBC profile of the new King itself is alleged to have breached Article 112 and is currently under investigation, thus its mere sharing on Facebook is considered a potential offence.
In Iran the Islamic government has been cracking down on ‘un-Islamic acts’ and imprisoning young Iranians over their Instagram posts. Reportedly at least 170 people have been arrested to date, with twelve people sentenced to prison for up to six years over posting their photos to Instagram.
In Saudi Arabia a woman, identified by various sources as Malak al-Shehri, was arrested for ‘violations of general morals’ over tweeting a photo of herself stepping out to breakfast without a hijab, in violation of the country’s oppressive moral code controlling women’s conduct.
In Bahrain, prominent activist Nabeel Rajab faced court, accused of spreading ‘false news and rumours and inciting propaganda,’ with his tweets being presented by the prosecution as evidence against him.
Meanwhile, Singapore’s Amos Yee is seeking asylum in the United States following his latest run-in with censorship in the city-state, having been jailed again for ‘wounding religious feelings.’
The Canadian court case worrying human rights groups
In the Supreme Court of Canada a lawsuit is playing out that worries human rights and civil liberties groups enough for them to intervene. In September I reported on Google’s upcoming appeal in the Equustek case, in which the Supreme Court of British Columbia ordered Google to stop displaying certain websites in its search results worldwide over an intellectual property dispute.
Activists are worried the case could set a precedent and enable nefarious operators to suppress free expression by regulating internet content outside their borders.
Australia, United States, and the United Kingdom
In Australia, Twitter came under fire after it suspended the account of the family of Julieka Dhu, an indigenous woman who controversially died in police custody.
The @JusticeForDhu account is being used by the family to shine a light on Ms Dhu’s tragic death.
The suspension came suspiciously close to the day the findings of a coronial inquiry into the death were due to be released. The account has since been restored.
Causing further controversy, Twitter also temporarily suspended the account of an American woman who complained on Twitter about anti-Semitic abuse directed at her by Twitter trolls, while allowing the trolls to remain online.
Facebook fared no better, after initially blocking a photograph of a 16th-century Italian statue of Neptune in Piazza del Nettuno for showing too much skin and being ‘sexually explicit,’ temporarily banning Australia’s own Mariam Veiszadeh for sharing an anti-Islamic rant aimed at her in order to highlight her daily experiences of abuse, and banning American author and journalist Kevin Sessums over a post in which he referred to Donald Trump’s supporters as ‘a nasty fascistic lot.’
Social media gone wrong
In Western democracies citizens generally enjoy greater freedoms, but there are still legal, cultural, and social limitations on what qualifies as acceptable conduct. And sometimes things can just go wrong …
There has been another concerning trend on social media: the growing phenomenon of lazy, ignorant mistaken identity, which can subject the wrong person, a completely innocent person, to a tsunami of hate.
A recent example of this trend is offered by New York University communications professor Kerry O’Grady, the unfortunate namesake of Secret Service agent Kerry O’Grady.
It was reported in the news that the namesake Secret Service agent is under investigation for posting a statement on Facebook indicating she would rather go to jail than be shot and killed for President Trump.
Cue the lazy, clueless, hate-filled social media lynch mob.
All it took was a few less than bright sparks misidentifying Ms O’Grady as the Secret Service agent in question, and suddenly she was subjected to a deluge of trolling and hate-filled social media messages on Facebook and Twitter.
All these people had to do was to read her social media profile to realise they had the wrong person, but clearly that would be too much to hope for from your average social media troll.
Professor O’Grady, who ironically made her career by managing high-profile public relations crises for corporate clients, managed the affair with grace and professionalism, but she is very lucky to have the skills to do so, and the support of friends and colleagues.
If you are experiencing déjà vu after reading this story, it is perhaps because you recall British IT worker Mark Horton, who last year was mistaken by Chinese fans for Australian Olympic swimmer Mack Horton after the swimmer criticised a Chinese competitor. Mark Horton became the subject of a social media trolling frenzy, and there was no convincing the misguided ‘lynch mob’ of the mistaken identity …
Social media and the workplace
In a recent unfair dismissal case before Australia’s Fair Work Commission (FWC) social media ‘tagging’ emerged as a significant factor in a case where a mental health nurse tagged two work colleagues on a sexually offensive video on his Facebook page, among other questionable workplace conduct.
In Michael Renton v Bendigo Health Care Group [2016] FWC 9089, Commissioner Bissett handed down a curious decision. While agreeing Mr Renton showed an ‘appalling lack of judgment,’ and describing his actions as crass and careless, she held that although the conduct provided a valid reason for his dismissal, the dismissal was nevertheless too harsh and ‘disproportionate to the gravity of the misconduct.’
The Commissioner agreed with Bendigo Health that reinstatement would not be appropriate in the circumstances, and directed the parties to file submissions on compensation.
A former American marine and firefighter in Belding, Michigan, Ryan Hudson, was fired recently over racist posts on Facebook made in response to a woman expressing support for American footballer Colin Kaepernick, who has been publicly protesting police brutality against African-Americans by taking a knee during the national anthem at his games. Mr Hudson responded to the expression of support with profanity-laden posts of his own, resulting in his dismissal.
Some feared the election of Donald Trump would lead to the immediate suspension of civility and good manners in the United States. Arkansas native Hunter Hatcher learned the hard way that, thankfully, that’s not quite the case.
Mr Hatcher, Outreach Coordinator at the Arkansas State Treasurer’s office, and a staff sergeant in the Kansas Army National Guard, was forced to resign from his state job after his off-colour social media posts on Facebook and Twitter drew unflattering public and media attention:
I love Subway cause I can tell a woman to make me a sandwich and she does it with a smile on her face. I wish all women had that Subway work ethic. And equality? Don’t get equal, get to cooking woman, get equal on your own time.
Facebook, 1 January 2017
Y’all in Trump’s America now! Time to flick that chip off ya shoulder and quit being so offended. Gay jokes are back on ya bunch of homos!
Twitter, 20 January 2017
British non-league footballer Alfie Barker is experiencing what it is like to receive social media abuse after unwisely dishing out some of his own. Mr Barker had tweeted a cruel message at footballer Harry Arter about his stillborn child, after Mr Arter’s team drew with Arsenal in a nail-biting Premier League game.
Mr Barker was promptly sacked by his football club, fired from his carpentry job, and has been inundated with abusive and threatening messages.
Social media and defamation
Closer to home an anti-Islam group, supported by Liberal Senator Cory Bernardi and Nationals MP George Christensen, is facing a defamation suit brought by a halal certifier over two YouTube videos. Mr Mohamed El-Mouelhy argues the videos make several defamatory imputations, including that he is un-Australian, seeks to mislead and deceive the public, pushes for sharia law to be introduced in Australia, promotes ‘a global push for Islamisation calculated to destroy Australian values of freedom and tolerance,’ and is ‘reasonably suspected’ of financially supporting terrorism. The trial is scheduled to go ahead later this year.
In Sydney, the defamation case by Mr Ali Ziggi Mosslmani continues to work its way through the District Court of New South Wales. I last reported on this case in November, when Judge Gibson allowed certain amendments to the claim.
At the latest hearing of the matter in December the ‘monster mullet claim’ got a big chop, including the claim against The Daily Telegraph over their online version of the article in question, leaving only the claim over the print edition on foot. Further, in respect of the claim over the print edition, Judge Gibson struck out the pleading that it conveyed that Mr Mosslmani was ‘stupid,’ among other things. The case will return to court next week.
You may also recall the defamation case by Ranjit Rana against Google Inc., which played out in the Australian Federal Court in South Australia back in 2013. I first referred to Mr Rana’s case in March 2015 in the context of looking at defamation cases against Google, and the 2013 decision of Justice Mansfield which halted the progress of Mr Rana’s claim.
Since then, Mr Rana has been busy with a number of related, and unrelated, proceedings in a range of forums. Indeed, the phrase ‘vexatious litigant’ has been mentioned in the same sentence as Mr Rana’s name more than once.
But he was back, having a fresh, ill-pleaded go at Google in the Federal Court. Last year Justice Mansfield refused him leave under section 21 of the Defamation Act 2005 (SA) for the claim to proceed, stood the matter over, and granted leave for Mr Rana to file an amended application and statement of claim to ensure the case was adequately pleaded.
In January this year the matter made another appearance, this time before Justice Charlesworth, who dismissed Mr Rana’s application to commence the proceeding against Google on the basis that his statement of claim was not sufficiently clear to fairly require Google to respond to it. Given Mr Rana’s litigation history, I suspect this may not be the last we hear of his adventures against Google …
A Syrian refugee, who took a celebratory selfie with German Chancellor Angela Merkel back in 2015, is suing Facebook in the Würzburg Landgericht (District Court) alleging the social network is failing to take appropriate steps to prevent the spread and sharing of defamatory posts, and fake news articles, using the picture falsely linking him to terrorist and criminal activities time-and-time again.
In New Jersey, a new defamation and invasion of privacy lawsuit is targeting social media users who pursued a hunter after the killing of a much-beloved local bipedal bear nicknamed ‘Pedals’. John DeFilippo argues he has been wrongly identified by social media users as the hunter responsible, resulting in ongoing harassment.
Actor James Woods is also continuing a long-running defamation action. I first reported on this case in November 2015, after the actor filed a $10 million lawsuit in July that year against an anonymous Twitter user who called him a ‘cocaine addict,’ among other things.
While the lawyer for the defendant insisted his client had passed away, Mr Woods refused to walk away, and in January obtained a judgment requiring the defendant’s lawyer to reveal the identity of his deceased client.
Learn this. Libel me, I’ll sue you. If you die, I’ll follow you to the bowels of Hell. Get it?
It will be interesting to see whether Mr Woods will now pursue the defendant’s estate.
Can a GIF sent on Twitter constitute an assault?
Giving James Woods a run for his money, in one of the most fascinating lawsuits to come out of social media yet, Newsweek journalist Kurt Eichenwald filed a lawsuit in the District Court of Dallas County in Texas, pursuing an anonymous Twitter troll who tweeted at him an image of a strobe flashing at rapid speed, accompanied by the message ‘[y]ou deserve a seizure for your post.’ It is a well-known public fact that Mr Eichenwald suffers from epilepsy.
In the context of his well-known health condition, and the message accompanying the image, Mr Eichenwald argues the tweet was sent ‘with the intent of causing a seizure,’ and the anonymous Twitter user succeeded in his effort ‘to use Twitter as a means of committing assault, causing [Mr Eichenwald] to have a seizure which led to personal injury.’
If successful, this novel claim would set a groundbreaking precedent for the potential consequences of social media trolling.
When your social media posts cost you a legal case
Meanwhile, a Pennsylvania woman found her social media posts cost her a medical malpractice lawsuit. Nancy Nicolaou was suing the doctors who allegedly misdiagnosed her Lyme disease. The Superior Court of Pennsylvania tossed out Ms Nicolaou’s case, finding it was time-barred by the two-year statute of limitations applicable to the claim.
The court found her Facebook posts demonstrated she was well aware of her injury long before she filed her lawsuit out of time, and there were no sufficient reasons to excuse her delay.
There is also at least one good legal news story this month. In November I reported on the Australian Government threatening to sue 66-year-old retired grandfather Mark Rogers over alleged breaches of copyright, arising from a website he set up to campaign against cuts to Medicare.
I am pleased to note that the Department of Human Services backed away from the threats, and the Government Solicitor sent a follow-up letter to Mr Rogers informing him the Department has no intention of suing him over the matter, but asking him to add a disclaimer to his site, making it clear it is not authorised by Medicare or the Department. The letter also conveyed an apology ‘if the more formal, legal nature of [their] original letter caused [Mr Rogers] unnecessary concern’.
Crime (and punishment)
Social media is pervasive and permissive in many ways, which is probably why so many people tend to forget that what’s criminal, or otherwise contrary to the law, remains criminal and contrary to the law on social media, even in democratic societies. Social media disputes can also lead to real-life confrontations with serious consequences.
Malicious communications in the United Kingdom
In the United Kingdom, a 25-year-old man was arrested on suspicion of sending malicious communications, after he tweeted ‘someone jo cox Anna sourby please’ (sic). Anna Soubry is a Conservative Member of Parliament.
Sending malicious communications in the United Kingdom is an offence under the Malicious Communications Act 1988, and the Act applies to electronic communications, including social media.
Especially after the shocking assassination of Jo Cox, a member of the British Parliament, by a far-right nationalist, police are unlikely to take any chances with threatening social media messages.
As one British man was arrested, another was found guilty over the anti-Semitic Twitter harassment of Labour MP Luciana Berger. Joshua Bonehill-Paine was found guilty of racially aggravated harassment and sentenced to two years’ imprisonment.
Still in the UK, a second man was arrested on suspicion of racially aggravated malicious communications over online behaviour towards Gina Miller, the woman behind the landmark Brexit legal challenge, who has been facing an onslaught of online abuse since taking up the cause. He joins a Swindon man arrested last December on the same suspicion, and eight other people who received ‘cease and desist’ notices over their online behaviour towards Ms Miller.
Arrests in Israel
In Israel, two people were arrested, accused of inciting violence on social media against three military judges who recently convicted an army medic of manslaughter in a highly controversial case involving the fatal shooting of an injured and incapacitated Palestinian terrorist.
When a social media tiff boils over into real world violence
In Hobart, a woman was admitted to Royal Hobart Hospital with stab wounds, and another woman was arrested over the stabbing, allegedly sparked by a Facebook tiff between the two.
The troubles of Facebook Live
Facebook Live made a few appearances in the news lately, and sadly not in a positive way. There were the young Rhode Island man and Pennsylvania woman who live streamed their respective car crashes in December after they foolishly started up Facebook Live while driving. In January four people were arrested in Chicago after they live streamed the beating and torture of an intellectually disabled man while laughing and taunting him with racist comments. And in Sweden, the police swooped on three men after the gang-rape of a young woman was live streamed by the perpetrators.
In the past month there have been at least two tragic instances of live streamed suicides on Facebook Live as well. One by 14-year-old Naika Venant at her Miami foster home, and another by aspiring actor Frederick Jay Bowdy on the other side of the country in North Hollywood, after he was arrested on a domestic violence complaint. There was at least one more similar incident in December using another live video streaming service, the Live.me app, involving a 12-year-old girl, Katelyn Nicole Davis, in Cedartown, Georgia, who alleged she had been physically and sexually abused by a family member. Police are investigating each of these terrible incidents.
If you or someone you know needs help, in Australia call Lifeline on 13 11 14, or beyondblue on 1300 22 4636 or visit their website. If it’s an emergency, call 000 immediately.
These are disturbing new developments in the use of social media, in particular the recent live streaming phenomenon.
Regulatory issues

Facebook’s troubles escalated in December, with the European Commission sending a Statement of Objections to Facebook alleging it provided misleading information to the Commission in the run-up to its acquisition of WhatsApp.
When reviewing Facebook’s planned acquisition of WhatsApp, the Commission looked, among other elements, at the possibility of Facebook matching its users’ accounts with WhatsApp users’ accounts. In its notification of the transaction in August 2014 and in a reply to a request of information, Facebook indicated to the Commission that it would be unable to establish reliable automated matching between the two companies’ user accounts. While the Commission took this information into account in its review of the transaction, it did not only rely on that information when clearing the transaction.
In today’s Statement of Objections, the Commission takes the preliminary view that, contrary to Facebook’s statements and reply during the merger review, the technical possibility of automatically matching Facebook users’ IDs with WhatsApp users’ IDs already existed in 2014. At this stage, the Commission therefore has concerns that Facebook intentionally, or negligently, submitted incorrect or misleading information to the Commission, in breach of its obligations under the EU Merger Regulation.
While the move will not unravel the merger, it could lead to Facebook facing a significant fine, up to 1% of its global turnover in 2014, under Article 14(1) of the EU Merger Regulation. Facebook had until 31 January to reply to the Commission, so watch this space.