Censoring the Web Isn’t the Solution to Terrorism or Counterfeiting. It’s the Problem.


In politics, as with Internet memes, ideas don’t spread because they are good—they spread because they are good at spreading. One of the most virulent ideas in Internet regulation in recent years has been the idea that if a social problem manifests on the Web, the best thing that you can do to address that problem is to censor the Web.

It’s an attractive idea because if you don’t think too hard, it appears to be a political no-brainer. It allows governments to avoid addressing the underlying social problem—a long and costly process—and instead simply pass the buck to Internet providers, who can quickly make whatever content has raised hackles “go away.” Problem solved! Except, of course, that it isn’t.

Amongst the difficult social problems that Web censorship is often expected to solve are terrorism, child abuse and copyright and trade mark infringement. In recent weeks some further cases of this tactic being vainly employed against such problems have emerged from the United Kingdom, France and Australia.

UK Court Orders ISPs to Block Websites for Trade Mark Infringement

In a victory for luxury brands and a loss for Internet users, the British High Court last month ordered five of the country’s largest ISPs to block websites selling counterfeit goods. Whilst alarming enough, this was merely a test case, leading the way for a reported 290,000 websites to be potentially targeted in future legal proceedings.

Do we imagine for a moment that, out of a quarter-million websites, none of them are false positives that actually sell non-infringing products? (If websites blocked for copyright infringement or pornography are any example, we know the answer.) Do we consider it a wise investment to tie up the justice system in blocking websites that could very easily be moved under a different domain within minutes?

The reason this ruling concerns us is not that we support counterfeiting of manufactured goods. It concerns us because it further normalizes the band-aid solution of content blocking, and deemphasises more permanent and effective solutions that would target those who actually produce the counterfeit or illegal products being promoted on the Web.

Britain and France Call on ISPs to Censor Extremist Content

Not content with enlisting major British ISPs as copyright and trade mark police, the UK government has also called upon them to block extremist content on the Web, and to provide a button that users can use to report supposed extremist material. The usual suspects—Google, Facebook and Twitter—have also been roped in by the government to carry out blocking of their own. Yet to date no details have been released about how these extrajudicial blocking procedures would work, or under what safeguards of transparency and accountability, if any, they would operate.

This fixation on solving terrorism by blocking websites is not limited to the United Kingdom. Across the channel in France, a new “anti-terrorism” law that EFF reported on earlier was finally passed this month. The law allows websites to be blocked if they “condone terrorism.” “Terrorism” is as slippery a concept in France as anywhere else. Indeed France’s broad definition of a terrorist act has drawn criticism from Human Rights Watch for its legal imprecision.

Australian Plans to Block Copyright-Infringing Sites

Finally—though, sadly, probably not finally—reports last week suggest that Australia will be next to follow the example of the UK and Spain in blocking websites that host or link to allegedly copyright-infringing material, following on from a July discussion paper that mooted this as a possible measure to combat copyright infringement.

How did this become the new normal? When did politicians around the world lose the will to tackle social problems head-on, and instead decide to sweep them under the rug by blocking evidence of them from the Web? It certainly isn’t due to any evidence that these policies actually work. Anyone who wants to access blocked content can trivially do so, using software like Tor.

Rather, it seems to be that it’s politically better for governments to be seen as doing something to address such problems, no matter how token and ineffectual, than to do nothing—and website blocking is the easiest “something” they can do. But not only is blocking ineffective, it is actively harmful—both at its point of application, due to the risk of over-blocking, and for the Internet as a whole, in the legitimization that it offers to repressive regimes to censor and control content online.

Like an overused Internet meme that deserves to fade away, website blocking is a cure for society’s ills that courts and regulators should now move past. If we wish to reduce political extremism, cut off the production of counterfeits, or prevent children from being abused, then we should be addressing those problems directly—rather than by merely covering up the evidence and pretending they have gone away.