Out of the Clouds and into the weeds: Cloudflare’s approach to abuse in new products

In a blogpost yesterday, we addressed the principles we rely upon when faced with numerous and various requests to address the content of websites that use our services. We believe the building blocks that we provide for other people to share and access content online should be provided in a content-neutral way. We also believe that our users should understand the policies we have in place to address complaints and law enforcement requests, the type of requests we receive, and the way we respond to those requests. In this post, we do the dirty work of addressing how those principles are put into action, specifically with regard to Cloudflare’s expanding set of features and products.

Abuse reports and new products

Currently, we receive abuse reports and law enforcement requests on fewer than one percent of the more than thirteen million domains that use Cloudflare’s network. Although the reports we receive run the gamut — from phishing, malware or other technical abuses of our network to complaints about content — the overwhelming majority are allegations of copyright violations or violations of other intellectual property rights. Most of the complaints that we receive do not identify concerns with particular Cloudflare services or products.

In the last year or so, we’ve also launched a variety of new products, including our video product (Cloudflare Stream), a serverless edge computing platform (Cloudflare Workers), a self-serve registrar service, and a privacy-focused recursive resolver (1.1.1.1), among others. Each of these services raises its own complex set of questions.  

There is no one-size-fits-all solution to address possible abuse of our products. Different types of services come with different expectations, as well as different legal and contractual obligations. Yet as we discussed in relation to our focus on transparency on Monday, being fully transparent means being consistent and predictable so our users can anticipate how we will respond to new situations.

Developing an approach to abuse

To help us sort through how to address both complaints and law enforcement requests, when we introduce new products or features, we ask ourselves four basic sets of questions about the relationship between the service we’re providing and potential complaints about content:

  • First, how are Cloudflare’s services interacting with the website content? For example, are we doing anything more than providing security and acting as a reliable conduit from one location to another?  Are we providing definitive storage of content? Did we provide the website its domain name through our registrar service? Is the Cloudflare service or product doing anything that could be seen as organizing, analyzing, or promoting content?
  • Second, what type of action might a law enforcement or private complainant want us to take and what are the consequences of it?  What sort of information might law enforcement request — private information about the user, content of what was sent over the Internet, or logs that would track activity?  Will third parties request information about a website; would they request removal of content from the Internet? Would removing our services address the problem presented?
  • Third, what laws, regulations or contractual requirements apply? Does the nature of our interaction with the online content impact our legal obligations? Has the law enforcement request or regulation satisfied basic principles of the rule of law or due process?
  • Fourth, will our response to the matter presented scale to address the variety of different requests or complaints we may receive over time, covering a variety of different subject matters and viewpoints? Can we craft a principled and content-neutral process to respond to the request? Would our response have an overbroad impact, either by impacting more than the problematic content or changing the Internet in jurisdictions beyond the one that has issued the law or regulation at issue?

Although those preliminary questions help us determine what actions we must take, we also do our best to think about the broader implications on the Internet of any steps we might take to address complaints.

So how does this work in practice?

Response to abuse complaints for customers using our proxy and CDN services

People often come to Cloudflare with abuse complaints because our network sits in front of our customers’ sites in order to protect them from cyber attacks and to improve the performance of their website.

There aren’t a lot of laws or regulations that impose obligations to address content on those providing security or CDN services, for good reason. Most people complaining about content are looking for someone who can take that content off the Internet entirely. As we’ve talked about on other occasions, Cloudflare is unable to remove content that we don’t host, so we try to make sure that the complaint gets to its intended audience — the hosting provider who has the ability to remove the material from the Internet. As described on our abuse page, complaining parties automatically receive information about how to contact the hosting provider, and unless the complaining party requests otherwise, abuse complaints are automatically forwarded to both the website owner and the hosting company to allow them to take action.
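The forwarding flow described above can be sketched as follows. This is a hypothetical illustration, not Cloudflare's actual implementation; the field names and contact-lookup details are assumptions made for the example.

```python
# Hypothetical sketch of routing a content complaint to the parties who
# can actually act on it. Names and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class AbuseComplaint:
    reporter_email: str
    target_url: str
    details: str
    forward_to_owner: bool = True  # the reporter may opt out of forwarding

def route_complaint(complaint, owner_contact, host_abuse_contact):
    """Return (recipients, acknowledgement) for a content complaint.

    A CDN/proxy cannot remove content it does not host, so the complaint
    is relayed to the website owner and the hosting provider, and the
    reporter is told how to reach the host directly.
    """
    recipients = [host_abuse_contact]
    if complaint.forward_to_owner:
        recipients.append(owner_contact)
    ack = f"Hosting provider abuse contact for {complaint.target_url}: {host_abuse_contact}"
    return recipients, ack
```

Keeping the complaint routing content-neutral — every complaint follows the same path regardless of subject matter — is what makes the process predictable in the way the post describes.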

This approach has another benefit, consistent with the fourth set of questions we ask ourselves: it prevents us from addressing content with an unnecessarily blunt tool. Cloudflare is unable to remove its security and CDN services from only a sliver of problematic content on a website. If we remove our services, we must remove them from an entire domain or subdomain, which may cause considerable collateral damage. For example, think of the vast array of sites that allow individual independent users to upload content (“user generated content”). A website owner or host may be able to curate or deal with specific content, but if companies like Cloudflare had to respond to allegations of abuse over a single user’s upload of a single piece of concerning content by removing our core services from an entire site, leaving it vulnerable to a cyberattack, those sites would be much more difficult to operate and the content contributed by all other users would be put at risk.

Similarly, there are a number of different infrastructure services that cooperate to make sure each connection on the Internet can happen successfully – DNS, registrars, registries, security, etc.  If each of the providers of those services, any one of which could put the entire transmission at risk, is applying blunt tools to address content, then the aperture of what content will stay online will get smaller and smaller. Those are bad results for the Internet. Actions to address troubling content online should focus narrowly on the actual concern to avoid unintended collateral consequences.

While we are unable to remove content we do not host, we are able to take steps to address abuse of our services, such as phishing and malware attacks. Phishing attacks typically fall into two buckets — a website that has been compromised (unintentional phishing) or a website solely dedicated to intentionally misleading others to gather information (intentional phishing). These buckets are treated differently.

We discussed earlier that we aim to use the most precise tools possible when addressing abuse, and we take a similar approach for unintentional phishing content. If a website has been compromised (typically through an outdated CMS), we can place a warning interstitial page in front of that specific phishing content to protect users from accidentally falling victim to the attack. In the majority of situations, this action is taken at a URL level of granularity.

In the case of intentional phishing attacks, such as a domain like my-totally-secure-login-page{.}com, once our Trust & Safety team is able to confirm the presence of phishing content on the website, we take broader action, including a domain-wide interstitial warning page (effectively *my-totally-secure-login-page{.}com/*), and in some cases we may terminate our services to the intentionally malicious domain. To be clear though, this does not remove the phishing content, which remains hosted by the website’s hosting provider. Ultimately, action still needs to be taken by the website owner or hosting provider to fully remove the underlying issue.
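The two levels of granularity can be sketched as a simple decision: warn on one exact URL for a compromised site, or warn on every path under a domain dedicated to phishing. This is an illustrative sketch only — the function name, the flag sets, and the string results are assumptions, not Cloudflare's production logic.

```python
# Illustrative sketch (not Cloudflare's actual code) of choosing the scope
# of a phishing interstitial: URL-level for unintentional (compromised)
# sites, domain-wide for intentional phishing domains.
from urllib.parse import urlparse

def interstitial_scope(request_url, flagged_urls, intentional_phishing_domains):
    """Return which warning, if any, applies to this request."""
    parsed = urlparse(request_url)
    # Intentional phishing: warn on every path under the domain
    # (effectively domain/*).
    if parsed.hostname in intentional_phishing_domains:
        return "domain-wide warning"
    # Unintentional phishing: warn only on the specific compromised URL,
    # leaving the rest of the legitimate site untouched.
    if request_url in flagged_urls:
        return "url-level warning"
    return "no warning"
```

The URL-level branch is what keeps the rest of a compromised but otherwise legitimate site reachable, in line with the narrow-tools principle above.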

Response to complaints about content stored definitively on our network

We think our approach requires a different set of responses for the small, but growing, number of Cloudflare products that include some sort of storage. Cloudflare Stream, for example, allows users to store, transcode, distribute and playback their videos. And Cloudflare Workers may allow users to store certain content at the edge of our network without a core host server. Although we are not a website hosting provider, these products mean we may be the only place where a certain piece of content is stored in some cases.

When we are the definitive repository for content through any of our services, Cloudflare will carefully review any complaints about that content and may disable access to it in response to a valid legal takedown request from either government or private actors. Most often, these legal takedown requests are from individuals alleging copyright infringement.  Under the U.S. Digital Millennium Copyright Act, there is a specific process online storage providers follow to remove or disable access to content alleged to infringe copyright and provide an opportunity for those who post the material to contest that it is infringing. We have already begun implementing this process for content stored on our network.  That’s why we’ve begun a new section of our transparency report on requests for content takedown pursuant to U.S. copyright law for content that is stored on our network.  
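The DMCA notice-and-counter-notice process described above can be viewed as a small state machine. This is a simplified sketch for illustration only: the states and event names are assumptions, and the statute itself sets the actual requirements and timelines.

```python
# Simplified, illustrative state machine for the DMCA takedown flow for
# stored content. States/events are hypothetical labels, not legal terms.
VALID_TRANSITIONS = {
    # A valid takedown notice requires disabling access to the content.
    ("live", "takedown_notice"): "access_disabled",
    # The poster may contest the claim with a counter-notice.
    ("access_disabled", "counter_notice"): "pending_restore",
    # If the claimant does not file suit, access is restored.
    ("pending_restore", "no_lawsuit_filed"): "live",
    # If the claimant files suit, access stays disabled pending the case.
    ("pending_restore", "lawsuit_filed"): "access_disabled",
}

def next_state(state, event):
    """Apply an event; unrecognized (state, event) pairs leave the state unchanged."""
    return VALID_TRANSITIONS.get((state, event), state)
```

The key property the process guarantees is that disabling access is reversible: the party who posted the material gets a defined path to contest the claim.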

We haven’t received any government requests yet to take down content stored on our network. Given the significant potential impact on freedom of expression from a government ordering that content be removed, if we do receive those requests in the future, we will carefully analyze the factual basis and legal authority for the request.  If we determine that the order is valid and requires Cloudflare action, we will do our best to address the request as narrowly as possible, for example, by clarifying overbroad requests or limiting blocking of access to the content to those areas where it violates local law, a practice known as “geo-blocking”. We will also update our transparency report on any government requests that we receive in the future and any actions we take.
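Geo-blocking, as described above, limits a block to the jurisdictions where a valid order actually applies rather than removing content globally. A minimal sketch of that decision, assuming a lookup of the requester's country and a hypothetical mapping of content to country codes covered by legal orders:

```python
# Hedged sketch of geo-blocking: serve content everywhere except the
# jurisdictions covered by a valid legal order. The order format and
# country detection are illustrative assumptions.
def is_blocked(request_country, content_id, legal_orders):
    """legal_orders maps a content identifier to the set of ISO country
    codes where a valid order requires blocking access."""
    return request_country in legal_orders.get(content_id, set())
```

Scoping the block per jurisdiction is what keeps one country's law from changing what the rest of the Internet can see — the overbreadth concern raised in the fourth set of questions.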

Response to complaints about our registrar service

If you sign up for our self-serve registrar service, you’re legally bound by the terms of our contract with the Internet Corporation for Assigned Names and Numbers (ICANN), a non-profit organization responsible for coordinating unique Internet identifiers across the world, as well as our contract with the relevant domain name registry.  

Our registrar-focused web page for abuse reporting does not reference abuse complaints about a website’s content. In our role as a domain registrar, Cloudflare has no control or ability to remove particular content from a domain. We would be limited to simply revoking or suspending the domain registration altogether, which would remove the website owner’s control over the domain name. Such actions would typically only be done at the direction of the relevant domain name registry, in accordance with their registration rules associated with the Top Level Domain, or more usually to address incidents of abuse as raised by the registry or ICANN. We therefore treat content-related complaints submitted based on our registrar services the same way we treat complaints about content for sites using our CDN or proxy services. We forward them to the website owner and the website hosting company to allow them to take action, or we work in tandem with the relevant registry and at their direction.

Running a registrar service comes with other legal obligations. As an ICANN accredited registrar, part of our contractual obligations include adhering to third party dispute resolution processes regarding trademark disputes, as handled by providers such as the World Intellectual Property Organization (WIPO) and the National Arbitration Forum. Also, we continue to be part of the ICANN community discussions on how best to handle the collection, publication and provision of access to personal data in the WHOIS database in a manner consistent with the EU’s General Data Protection Regulation (GDPR) and other privacy frameworks. We will provide more updates on that front when the discussions have ripened.

Response to complaints about IPFS

Back in September, we announced that Cloudflare would be providing a gateway to the InterPlanetary File System (IPFS). Cloudflare’s IPFS gateway is a way to access content stored on the IPFS peer-to-peer network. Because Cloudflare is not acting as the definitive storage for the IPFS network, we do not have the ability to remove content from that network. We simply operate as a cache in front of IPFS, much as we do for our more traditional customers.

Because content is stored on potentially dozens of nodes in IPFS, if one node that was caching content goes down, the network will just look for the same content on another node. That fact makes IPFS exceptionally resilient. That same resilience, however, means that unlike with our traditional customers, with IPFS there is no single host to inform of a complaint about content stored on the network. Cloudflare often has no knowledge of who owns content being accessed through the gateway, which makes it impossible to notify the specific owner when we receive a complaint.

The law hasn’t yet quite caught up with distributed networks like IPFS, and there’s a notable debate among IPFS users about how best to deal with abuse. Some argue that having problematic content stored on IPFS will discourage adoption of the protocol, and advocate for the development of lists of problematic hashes that  IPFS gateways could choose to block. Others point out that any mechanism intended to block IPFS content will itself be subject to abuse. We don’t have the answer to that debate, but it does demonstrate to us the importance of being thoughtful about how we proceed.
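The hash-blocklist mechanism some IPFS users advocate can be sketched in a few lines. This illustrates the proposal under debate, not anything Cloudflare has implemented; the CIDs and the blocklist source are hypothetical.

```python
# Sketch of the blocklist idea debated in the IPFS community: a gateway
# checks a requested content identifier (CID) against a set of known-bad
# hashes before serving. Purely illustrative; not Cloudflare gateway code.
def gateway_should_serve(cid: str, blocked_cids: set) -> bool:
    """IPFS content is addressed by the hash of its bytes, so a gateway
    can refuse a known-bad item by hash alone, without knowing who
    published it or which nodes store it."""
    return cid not in blocked_cids
```

Note that this only affects one gateway's cache; the content itself remains on whatever IPFS nodes hold it, which is exactly why the debate described above is unresolved.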

For the time being, our plan is to respond to U.S. court orders that require us to clear our cache of content stored on IPFS. More importantly, however, we intend to report in future transparency reports on any law enforcement requests we receive to clear our IPFS cache, to ensure continued public discussion.

Cloudflare Resolvers: 1.1.1.1 and Resolver for Firefox

In April of last year, we launched our first DNS resolver, 1.1.1.1.  In June, we partnered with Mozilla to provide direct DNS resolution from within the Firefox browser using the Cloudflare Resolver for Firefox. Our goal with both resolvers was to develop fast DNS services that were focused on user privacy.  

We often get questions about how we deal with both abuse complaints and law enforcement requests related to our resolvers. Both of our resolvers are intended to provide only direct DNS resolution. In other words, Cloudflare does not block or filter content through either 1.1.1.1 or the Cloudflare Resolver for Firefox. If Cloudflare were to receive a request from a law enforcement or government agency to block access to domains or content through one of our resolvers, Cloudflare would fight that request. At this point, we have not yet received any government requests to block content through our resolvers. Cloudflare would also document any request to block content from our resolvers in our semi-annual transparency report, unless we were legally prohibited from doing so.

Similarly, Cloudflare has not received any government requests for data about the users of our resolvers, and would fight such a request if necessary. Given our public commitment not to retain any personally identifiable information for more than 24 hours, we believe it is unlikely that we would have any information even if asked. Nonetheless, if we were to receive a government request for data about a resolver user, we would document the request in our transparency report, unless legally prohibited from doing so.    

The long road ahead

Although new products offered by Cloudflare in the future, as well as the legal and regulatory landscape, may change over the years, we expect that our approach to thinking about new products will stand the test of time. We’re guided by some central principles — allowing our infrastructure to be as neutral as possible, following the rule of law or requiring due process, being open about what we’re doing, and making sure that we’re consistent regardless of the wide variety of issues we face. And we will work hard to make sure that doesn’t change, because even the smallest tweaks to the way we do things can have a significant impact at the scale we operate.
