Possible Performance Degradation from CDN Usage
Most developers and administrators assume that serving static files from a CDN improves performance. The reasoning is that a CDN's edge servers cache content and deliver it from a location close to the user. These cache servers respond faster than a single traditional hosting server, and developers get the added benefit of convenience.
The same study also showed that slower mobile connections suffered higher latency when loading files from third-party servers. Over 3G, the same client's customers experienced a 1.765-second slowdown compared to self-hosted files. After migrating the files to a local server, the client's load time dropped from 5.4 seconds to 3.6 seconds.
This might not seem like much, but consider a large enterprise site with millions of visits a day, which can easily add up to hundreds of millions of visits a month. At that scale, latency issues quickly degrade the end-user experience. Site speed has been shown to affect bounce rate, customer satisfaction, and customer retention, not to mention that Google bakes it into its ranking algorithm.
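A quick back-of-envelope calculation makes the scale argument concrete. The per-visit saving comes from the 5.4-to-3.6-second figures above; the visit count is a hypothetical assumption for illustration, not a measurement:

```python
# Back-of-envelope estimate of aggregate time lost to third-party latency.
# The per-visit saving comes from the 5.4 s -> 3.6 s figures above;
# the visit count is an illustrative assumption, not a measurement.

SAVING_PER_VISIT_S = 5.4 - 3.6   # seconds saved per page load
VISITS_PER_DAY = 10_000_000      # hypothetical large enterprise site

daily_saving_s = SAVING_PER_VISIT_S * VISITS_PER_DAY
monthly_saving_hours = daily_saving_s * 30 / 3600

print(f"Seconds saved per day:  {daily_saving_s:,.0f}")
print(f"Hours saved per month:  {monthly_saving_hours:,.0f}")
```

Even under these rough assumptions, a sub-two-second per-visit difference compounds into six figures of user-hours every month.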
Avoid Single Points of Failure
If you’ve ever been through a disaster recovery exercise, you know that redundancy is the key to resilience against failure. Should these third-party servers fail, your application fails with them unless you have failover systems configured. Popular third-party CDNs and cloud services have failover baked into their infrastructure, but as we’ve seen repeatedly in recent years, even the biggest cloud providers occasionally have outages.
One school of thought holds that cloud services rarely, if ever, fail, but outages happen to even the biggest providers. For example, back in 2017, a simple operational error took down AWS S3 in the entire US-EAST-1 (Northern Virginia) region. S3 buckets are used as cloud storage, and the downtime affected thousands of AWS customers. Businesses that relied solely on AWS without failover configured would certainly have experienced downtime.
A smaller but related risk is the third-party host retiring services. This is rare with a large organization such as AWS, GCP, or Azure, but smaller hosts can shut down services at any time, leaving the application owner scrambling to find an alternative as quickly as possible.
How Network Penalties Can Impact Performance
The network penalties associated with third-party hosts tie into performance degradation, and they explain why it happens. For every new origin referenced during page load, the browser must perform a DNS lookup and open a new TCP connection. It’s not uncommon for a site to include several external scripts in a web application.
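The per-origin cost can be sketched with a simple round-trip model. The round-trip counts (one for the DNS lookup, one for the TCP handshake) and the 100 ms RTT are simplifying assumptions, and the model pessimistically treats setup as sequential, as happens when resources are discovered one after another:

```python
# Rough model of per-origin connection-setup cost, counting round trips.
# Assumes 1 round trip for the DNS lookup and 1 for the TCP handshake,
# and (pessimistically) that connections are opened one after another
# as resources are discovered. The RTT is a made-up mobile figure.

RTT_MS = 100  # hypothetical round-trip time on a slow mobile connection

def setup_cost_ms(new_origins: int, round_trips_per_origin: int = 2) -> int:
    """Latency spent just opening connections to `new_origins` hosts."""
    return new_origins * round_trips_per_origin * RTT_MS

print(setup_cost_ms(1))  # one new origin: 200 ms before any bytes arrive
print(setup_cost_ms(6))  # six third-party origins: 1200 ms of pure setup
```

Raise `round_trips_per_origin` to account for a TLS handshake and the numbers grow accordingly.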
In addition, most sites now use SSL/TLS, so each connection requires a handshake between the client and host to negotiate the cipher suite and establish the symmetric session keys. With dozens of third-party files, this quickly adds latency to load times. As a workaround, you can minimize the impact of fetching files from third-party domains with the preconnect resource hint, which tells the browser to set up the connection as early as possible so the file download can start without delay.
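The preconnect hint mentioned above is declared in the page’s head; the CDN hostname below is a placeholder:

```html
<!-- Tell the browser to resolve DNS and open the TCP/TLS connection early.
     https://cdn.example.com is a placeholder for the third-party host. -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
<!-- dns-prefetch is a cheaper fallback for browsers without preconnect. -->
<link rel="dns-prefetch" href="https://cdn.example.com">
```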
The second penalty is the loss of the prioritization available in HTTP/2, which most applications currently use. The HTTP/2 protocol lets the browser prioritize the streams within a connection, so critical files can be returned first while the loading of less important content is delayed.
Prioritization works well within a single connection to the same origin, but a separate dependency tree must be built for each new TCP connection opened for external files. This means you cannot build one dependency tree covering all resources when several third-party domains host your files, which adds latency. Note that connection coalescing can overcome this limitation, but the domains must resolve to the same IP address, and each browser handles it differently.
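A small sketch shows how splitting resources across third-party domains fragments prioritization: each distinct origin means another connection, and each connection carries its own priority tree. The hostnames here are illustrative:

```python
# Each HTTP/2 connection carries its own priority (dependency) tree, so
# resources split across N distinct origins fall into N separate trees
# that cannot be prioritized against one another. Hostnames are made up.
from urllib.parse import urlsplit

resources = [
    "https://www.example.com/app.js",
    "https://www.example.com/styles.css",
    "https://cdn-a.example.net/jquery.js",
    "https://cdn-b.example.org/font.woff2",
]
origins = {urlsplit(url).netloc for url in resources}
print(len(origins))  # 3 origins -> 3 connections -> 3 independent trees
```

Self-hosting the two third-party files would collapse this to a single origin and a single, coherent priority tree.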
Take Security Considerations Into Account
Third-party hosting also adds the risk of disclosing sensitive data to the third-party host when that data is sent in URLs. If OAuth tokens, for example, appear in URLs sent across a connection to a hosted script, those access tokens are disclosed to the third-party host, which could let an attacker execute commands as the victim using their tokens.
OWASP publishes a cheat sheet to help developers code around this issue, and self-hosted static files are not exposed to this specific threat. Secrets and access tokens should never be included in query string parameters, because they can be recorded in several locations (e.g., server logs and the browser cache). By self-hosting files, developers eliminate this avenue of sensitive data disclosure to third parties.
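To make the risk concrete, here is a minimal sketch (the token value and URLs are made up) of why a query-string secret ends up in places an Authorization header never reaches:

```python
# Secrets in a query string are part of the URL, so anything that records
# URLs (access logs, browser history, Referer headers) records the secret.
# The token value and URLs below are made up for illustration.
from urllib.parse import urlsplit, parse_qs

leaky_url = "https://api.example.com/data?access_token=abc123"
query = parse_qs(urlsplit(leaky_url).query)
print(query["access_token"])       # the secret is trivially recoverable

# Safer: keep the URL free of secrets and send the token in a header,
# which standard access logs do not record.
safe_url = "https://api.example.com/data"
headers = {"Authorization": "Bearer abc123"}
print("access_token" in safe_url)  # False: nothing sensitive in the URL
```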
If performance and security are concerns, self-hosted static files are the better option for supporting Ajax functionality. The few hundred milliseconds saved can improve user experience, reduce load times, and lower resource costs. On the security side, developers reduce the risk of sensitive data disclosure and can protect source code from outside tampering. Finally, organizations should eliminate the single point of failure created by reliance on a third-party host, so that an unforeseen cloud provider outage does not take the site down with it.