Facebook has recently come under fire in the United Kingdom for not providing adequate user safety protections, which would help prevent pedophiles from targeting minors online. Specifically, Jim Gamble, of the Child Exploitation and Online Protection Centre (CEOP), has criticized Facebook for failing to install a “panic button” that would allow British children to get immediate police assistance if they feel that they are being contacted inappropriately.
Public pressure for this new feature has increased in the wake of the rape and murder of British teenager Ashleigh Hall, who was contacted by her killer via Facebook. Gamble has spoken about the responsibility of social networking sites to protect users and, in particular, their “duty of care to the young and vulnerable.” Additionally, Gamble charges that, “there is no legitimate reason for not taking [this feature] and placing it on a site.”
Facebook officials have responded that they have yet to see any evidence that CEOP’s panic button is an effective tool for combating this problem and point to their own extensive reporting mechanisms as proof of their dedication to user safety. One spokeswoman also questioned the practicality of such a feature, stating “one button in one country is not a scaleable solution when you’re a global platform with 400 million users.”
Incidents involving Facebook users in the United States illustrate the same connection between social networking sites and crime. In Wisconsin, a 19-year-old blackmailed male classmates after posing as a female Facebook user and soliciting nude photographs from them. In New York, a man accessed an ex-girlfriend’s Facebook account, changed her personal settings, and then demanded cash to change the settings back.
Although Facebook certainly has a responsibility to protect its users, it is difficult to see what a “panic button” would add to the measures already in place. Currently, users can report unwanted messages or content to Facebook (or if they feel truly threatened, file a police report). Additionally, Facebook has made it a violation of its Statement of Rights and Responsibilities to use a fake name on the site and warns that users should be cautious about accepting “friend requests” from people they do not know because, despite their efforts, not every user is who they appear to be.
Part of the problem with online predators is recognizing the danger. If Facebook users do not realize that they are engaged in potentially dangerous communications, they are unlikely to use this feature. Moreover, if people are not using the reporting mechanisms already available, why would they turn to this one?
Furthermore, the button requested by CEOP would send reports of inappropriate contacts to the CEOP Centre, an arrangement that does not seem easily transferable to other countries. Facebook would need to establish relationships with many different governments and identify the appropriate agency to receive these reports, and each agency would in turn have to establish a system for analyzing them.
The AOL-owned site Bebo was among the first to install CEOP’s button. Other social networking sites will be watching Bebo’s experience closely to gauge the feature’s effectiveness and to determine whether it is something Facebook should seriously consider.
– Joanna Barry