Content Regulation
The ability to publicize personal causes and messages has created a need to regulate what kinds of messages can and cannot be promoted. Facebook explicitly states its restrictions on user content, and its expectations of users, in its terms of use. In particular, Facebook does not allow users to "make available any content that we deem to be harmful, threatening, unlawful, defamatory, infringing, abusive, inflammatory, harassing, vulgar, obscene, fraudulent, invasive of privacy or publicity rights, hateful, or racially, ethnically or otherwise objectionable." Facebook has cracked down on users who have violated these restrictions. In January 2006, Facebook shut down a group titled "I Hate Jesus" after receiving complaints from other users about the group. As Facebook explained to the group's creator in an email, "Hate groups of any kind are not tolerated on the site, even if they are meant to be comedic" (DeCuir).
The incident surrounding the "I Hate Jesus" group raises an interesting question: what specifically does Facebook consider to be "objectionable" content? What is Facebook's definition of a hate group? There are a number of unrestricted groups on Facebook that preach "hate," ranging from "I Hate the Stanford Post Office" to "I Hate Lance Armstrong Bracelets." Although Facebook does not give specific examples of "objectionable" messages, the removal of the "I Hate Jesus" group seems to indicate that user complaints largely determine this definition. It is in the best interest of Facebook and other social networking sites to respond to user complaints - in doing so, they protect the ethical values of whatever communities they serve. Facebook is also able to withhold its own ethical judgments, which may be biased and may differ from those of its users. In essence, Facebook has placed itself in the position of enforcing - not creating - ethical regulations on its site.
Even in some situations where users might not explicitly complain about hateful content, Facebook reserves the right to prohibit "content that would constitute, encourage or provide instructions for a criminal offense, violate the rights of any party, or that would otherwise create liability or violate any local, state, national or international law." (Facebook, Terms of Use) In claiming this right, Facebook is able to automatically exclude messages such as those of white supremacist groups promoting violence toward minorities. Facebook's content regulation makes the site a "safer" version of the internet, one in which justifiably inappropriate content is prohibited.
There are some situations in which both legal restrictions and user complaints fail to properly regulate content. Such cases highlight the ethical dilemmas inherent in giving the general public the power of mass publicity. A homicide case near Toronto in early January 2008 created exactly this kind of problem. After several Facebook groups were created in memory of the victim, several users posted the names and pictures of the two suspects onto the groups' pages. However, because the two suspects in question were minors - a 17-year-old boy and a 15-year-old girl - this violated Canada's Youth Criminal Justice Act (YCJA), which "prevents media from publishing the names of the accused." Had any user or government official complained to Facebook about the violation, Facebook surely would have removed the content and respected its promise to disallow content that could "violate any local, state, national, or international law." However, because Facebook is incapable of instantly monitoring all of its content, laws like the YCJA are easily violated.
Part of the issue is user ignorance of what content is and isn't legal - in all likelihood, the users who posted the names of the suspects did not know they were breaking the law. One could argue that this case is evidence that Facebook's power of mass publicity is unethical in that it enables users to easily break the law. However, neither Facebook nor social networking in general is necessarily at fault. A similar situation could easily have arisen on any popular blog, website, or mailing list where users are able to post content. Facebook differs only in that it has the potential to spread this information to a much greater extent: for example, the users who leaked the suspects' names could have sent individual messages with this information to all members of the group. Still, Facebook is better positioned to safeguard laws like the YCJA on the internet: the company has oversight over what is posted on its site. A series of Facebook messages can be stopped and removed, unlike a series of emails. At the very least, Facebook can stop the spread of objectionable user content.
There are clearly a number of ethical and societal implications raised by the power of social networks to mass-publicize user content. Although Facebook makes it easier to publicize questionable content on the internet, it also sufficiently restricts this content.