It's a particularly good implementation: it can block more of the connections based on which content isn't needed, and it has some extensions to the filter syntax. It also uses more filter lists than just EasyList and EasyPrivacy. Other implementations can't block as much.
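To make the mechanism concrete, here's a minimal sketch of how filter-list blocking works in general: requests are matched by URL pattern and resource type before the connection is made. The rules, domains, and types below are made up for illustration; this is not uBlock Origin's actual engine, which is far more sophisticated.

```typescript
// Minimal sketch of filter-list request blocking. Rules and domains
// here are illustrative, not taken from any real filter list.
type ResourceType = "script" | "image" | "xhr" | "frame";

interface FilterRule {
  pattern: RegExp;        // matched against the request URL
  types?: ResourceType[]; // if set, only block these resource types
}

const rules: FilterRule[] = [
  { pattern: /doubleclick\.net/ },                    // block everywhere
  { pattern: /\/analytics\.js$/, types: ["script"] }, // block only as a script
];

function shouldBlock(url: string, type: ResourceType): boolean {
  return rules.some(
    (r) => r.pattern.test(url) && (!r.types || r.types.includes(type))
  );
}

console.log(shouldBlock("https://stats.doubleclick.net/pixel", "image")); // true
console.log(shouldBlock("https://example.com/analytics.js", "script"));  // true
console.log(shouldBlock("https://example.com/app.js", "script"));        // false
```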
It's a best effort stopgap measure as a form of attack surface reduction, and uBlock Origin does it best. I don't believe in blacklisting as a way to fundamentally improve privacy since it's very incomplete, easily bypassed, and pushes advertising / tracking towards being 1st party.
I think the end result of increasing usage of client-side filtering is going to be a strong move towards the tracking / advertising being heavily integrated into site functionality, without an easy way to remove it. Server-side middleware makes this easy for the site operators.
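To show how little the site operator has to do, here's a hypothetical Express middleware sketch: the analytics happen as part of handling the request itself, so there's no separate third-party resource for the client to block. recordPageview and the fields it logs are invented stand-ins.

```typescript
// Hypothetical middleware illustrating server-side tracking integration:
// the pageview is recorded while serving the page, with nothing extra
// for the browser to fetch or filter.
import express from "express";

const app = express();

app.use((req, res, next) => {
  // recordPageview is a made-up helper standing in for whatever
  // analytics backend the operator uses.
  recordPageview({
    path: req.path,
    referrer: req.get("referer") ?? null,
    userAgent: req.get("user-agent") ?? null,
    ip: req.ip,
  });
  next();
});

app.get("/", (_req, res) => {
  res.send("<html><body>Page content</body></html>");
});

function recordPageview(event: Record<string, unknown>): void {
  // In practice this would write to a queue or analytics database.
  console.log("pageview", event);
}

app.listen(3000);
```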
But the party who wants the tracking can't trust that the site isn't falsifying data, and there's a huge commercial incentive to falsify it. That's why killing third party content is such a good approach.
It exploits the adversarial nature between the parties who are screwing the user over.
I don't think it makes that much difference whether the code is pulled in by middleware on the server or by the browser. Either way, the site can interfere with it, and in both cases the tracking provider can try to defend against that with obfuscation and constant churn in the code.
The older approach that they took of having sites include a script tag pulling in their third party analytics is not working well for them anymore. They're already getting inaccurate, biased data due to the huge assortment of ad-hoc best effort anti-tracking mechanisms deployed.
For example, ubuntu.com has Google analytics on it but I wouldn't be surprised if >50% of the visitors to the site are blocking it in their browsers. The groups of people blocking vs. not blocking it are also not the same, so these kinds of analytics aren't accurate.
On the other hand, if they do it via their servers and first party code, which they might already do, they are going to get far more accurate and representative data. Third parties can still provide the tracking code / middleware; the browser just never loads anything from them directly.
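As an illustration of that first-party proxying, here's a sketch under assumed names: the page posts events to the site's own domain and the server forwards them to the vendor. The /events route, vendor URL, and payload shape are all invented.

```typescript
// Sketch of first-party proxying of third-party analytics. The browser
// only ever talks to the site's own domain; the server relays to the
// vendor, so domain-based blocklists see ordinary site traffic.
import express from "express";

const app = express();
app.use(express.json());

app.post("/events", async (req, res) => {
  // Forward the event to the (invented) vendor endpoint server-side.
  await fetch("https://analytics-vendor.example/ingest", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ ...req.body, ip: req.ip }),
  });
  res.sendStatus(204);
});

app.listen(3000);
```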
It's in their interest to do it through the same domain / connections, integrated into how the site works. For example, Reddit's redesign, like the approach of many overhauled news sites, made things very dynamic in the client with lots of API calls to the server.
They get a ton of analytics data simply from the basic operation of the site, inseparable from basic functionality. They can and do also integrate third party analytics into what they serve from their own servers. Trying to identify and block it separately is a losing battle.
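Here's a sketch of what that looks like, with an invented endpoint and helper names: the same request that powers a basic feature doubles as the analytics event, so blocking it breaks the feature.

```typescript
// Sketch of analytics as a side effect of basic functionality. The
// route and helpers are made up; the point is that the tracking is
// inseparable from the request that loads the content.
import express from "express";

const app = express();

app.get("/api/comments/:threadId", (req, res) => {
  // The same request that loads the comments also tells the operator
  // exactly which thread this user read, and when.
  logEvent({ user: req.ip, thread: req.params.threadId, at: Date.now() });
  res.json({ comments: loadComments(req.params.threadId) });
});

function logEvent(event: Record<string, unknown>): void {
  console.log("event", event); // stand-in for a real analytics pipeline
}

function loadComments(threadId: string): string[] {
  return [`comment for ${threadId}`]; // stand-in for a database lookup
}

app.listen(3000);
```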
I also fully expect that sites are going to move towards serving obfuscated content that requires complex JavaScript to display anything, probably even with DRM for all the basic content, so they can use the terrible laws making it illegal to bypass DRM to forbid blocking the ads, etc.
Basically, I don't see a good alternative to an approach like the Tor Browser in the long term, but hopefully on a better base than Firefox and with a much more exhaustive approach to preventing fingerprinting and network-based attacks. I see this blacklisting as only a stopgap.