Where Angels Fear
177 min read · Jun 8, 2020


WWW.rong

A Beginner’s Guide To Reclaiming (some) Online Privacy

[Preamble]

I’ve been intending to write this one for a while now but kept putting it off because …

… because every time I thought about how daunting the prospect was, I quickly felt like doing something else (like going shopping).

It’s a … ‘herculean’ isn’t the word (it’s a First World Problem after all), but you get the idea … task covering it all — explaining not only the whats and hows but the whys takes time for such a complex topic. But, I’m going to have to pull my finger out and do it at last. And on the plus side, I suppose, I can be somewhat less dryly formal here than elsewhere, so here goes.

First things first.

I will not be telling you how to get a grip on your privacy with Google’s Chrome web browser. If you can’t think why that might be, you need to look at Google’s business model. In short, if you’re using Chrome, you have no privacy — nor, in all likelihood, will you ever do so.

I will not be recommending prepackaged ‘privacy’ browsers like Brave or what have you. They are seldom what they appear to be and almost invariably recommended by people who don’t understand the issues well enough to realise they don’t know enough¹. If you research them properly, you find there are all sorts of issues that give the lie to their claims of being privacy enhancing² at all, let alone ‘the most private’ anything. The fanbois and fangrrrlz who rave about them only do so because they’re easily convinced by other fanbois and grrrlz, because they aren’t themselves technically knowledgeable — it’s a vicious circle of ignorance. Furthermore, if you simply install a ‘privacy browser’ without understanding the issues or technology, then … even if it is, miraculously, the holy grail of privacy protection to start with … you learn nothing about either and you’re in no position, therefore, to assess when changes in its tech base stop it from being the useful tool it might once have been.

I will not be talking about browsers like Opera or Vivaldi; they are, in my experience, horrible … with shortcomings (such as no option to remove default search engines) that are, in this day and age, unforgivable. Again, fanbois and grrrlz rave about them out of ignorance; do not listen to them — albeit unwittingly, with the best of intent, they will do you wrong³.

So, if you’re after information on how to enhance your privacy with any of the above, forget it. It’s not that I don’t know how to, it’s that, having investigated, I know it to be a waste of time — one of those instances of “if you want to go to there, my advice would be don’t start your journey here.”

If you’re looking for some simple (and some not so simple) things you can do to enhance your privacy with Firefox, on the other hand, you’ve come to the right place.

[Foreword]

Still here?

I hope you’re sitting comfortably … this is going to be a loooooooong read.

Okay, let’s start with the fact that there are no simple solutions to the issue. Security and privacy are a moving target — at least until quantum computing becomes ubiquitous (and possibly even then). It’s like trying to keep an unmoored boat in place: you have to keep adjusting for the wind and the current/tide/waves and will seldom, if ever, find yourself more than close to (never exactly at) your goal.

If you want true privacy, you get rid of all your technology, move to a deep cave system and never leave it; subsisting on whatever lichens and creatures-of-the-deep you can find there, licking the condensation off rocks for hydration.

And even then, unless you control

  1. the temperature levels (so that your heat signature can’t be detected from outside)
  2. moisture/chemical levels (likewise)
  3. the vibration footprint of the devices that achieve this
  4. mask your electromagnetic signature
  5. whatever else I haven’t thought of

… some satellite/geologist/chemist/passing treasure-hunter with a metal-detector will locate you.

Otherwise you accept that you’ll forever need to keep an eye on the state of play, adjust your tactics to new circumstances accordingly and compromise between not only security and privacy themselves but also between them and an awful lot of inconvenience.

If you want convenience, you’re in the wrong place and should stop reading now, go into the street, drop your underwear and grasp your ankles … with a sign around your neck bearing the legend “Come and have a go if y̶o̶u̶ ̶t̶h̶i̶n̶k̶ you’re hard enough!” Not long after you’ve finished this and followed all my recommendations, you’ll hate me. If you stick with it a bit longer, you’ll have an epiphany and, instead, hate yourself for foolishly following my advice. If you’re sensible and stick with it after that, however, you’ll have a Damascene conversion and start hating everyone else for being

  1. sociopaths who oblige you to take all these inconvenient measures in the first place.
  2. lazy/incompetent, complacently relying on others’ code and resources rather than developing their own solutions.
  3. the W3C, for allowing things to get into the mess they’re in rather than encouraging good practice.

Also, you might learn some things along the way as you browse the Web and investigate who’s trying to do what in your browser.

[Introduction]

There’s a tradeoff to be made between privacy and anonymity. There is a further tradeoff between privacy/anonymity on the one hand and security on the other …

Do not be misled by the ‘many eyes’ theory that evangelists of OSS solutions proclaim. In theory, it’s a better approach than proprietary solutions but, in practice, what all too frequently happens is that few, if any, of those eyes are applied to anything much because everyone assumes that lots of other people are doing it, so there’s no need for them to — which is how things like Shellshock happen.

The fundamental reality is: if you didn’t design and manufacture your own hardware, operating system, software and (inter)network yourself, you’re taking it on trust that everything you use is aboveboard and does no more, nor any less, than is claimed. You actually have absolutely no idea whether it really is, though. You can read all the source code you like but, if you aren’t running the entire platform and codebase yourself, you have no idea what’s really going on — reading the source code is a waste of time if you have no way of monitoring what’s running on someone else’s server.

A lot more attention is paid to security/privacy solutions (by people with a great deal of knowledge) than a lot of other things, but not everything gains a profile and some of the things I will recommend later are the result of my applying knowledge and experience to half judge/half intuit which of them are safe to use. Do not assume, therefore, that I will not be in error about some (or even all) of them — I cannot be 100% certain from my analysis of the community responses to a project that they paint the whole picture … there may well be things that neither I nor anyone else notice(s).

Furthermore, as mentioned, security/privacy/anonymity are a moving target. Something you use might be a perfectly good solution and then be sold on to some entity that does not have the same goals or values — q.v. my remarks about StartPage here, for instance (which is the same reason why I advise people to steer clear of Ghostery for example). Or, the original team providing the solution is, over time, replaced by people who have a different agenda — cf. the transition of AdBlock Plus from ad-blocker to neo/quasi protection-racket. Or there could be no obvious point in Time when the rot set in, just that something touted as a privacy/security enhancement is anything but that (as in the case of WOT).

So, I will make recommendations based upon my appreciation of how reliable and useful things are, but both your and my own mileage may vary over time — I am not going to claim that my pronouncements are either gospel or not subject to change. As said, there’s a trade-off to be made … and the price we pay for not designing and developing all the hardware and software ourselves is the risk that a solution we use is actually detrimental to our privacy/security. The price we pay for using third party solutions is the potentially increased attack surface of our privacy/anonymity and security.

[Not Tor]

Why not go all-in and use TorBrowser instead of vanilla Firefox?

Well, I’m in two minds about Tor: on the one hand, it’s a good idea … but, on the other …

And even though NoScript is a default extension, it is, by default, set to allow all JavaScript to run in the browser!

Just think about that for a moment.

I’ve read the Tor Project’s explanation for this and, fundamentally, it makes no sense whatsoever: you cannot ensure anonymity, if you allow scripts to run that identify you as an individual — which you will (potentially) be doing by default on every site you visit. And it’s too late to block them afterwards.

Furthermore, there is the inherent issue of it limiting the use of other privacy enhancing solutions because they interfere with its approach to the provision of anonymity.

Then there’s the fact that true anonymity is impossible anyway …

Not only that but behavioural fingerprinting is a thing: no two people will look at the same webpages in the same order and linger on them for the same length of time — no matter what you do to obfuscate your identity, an adversary with the will and resources to do so will identify you with time. So, if the sites, pages and topics I browse when I exit the Tor network identify me as a person of interest then those with an interest will pay attention to my behaviour and watch me anyway.

Given that most (if not all) such analysis is automated we can be reasonably sure that even those of us who are not people of interest are monitored whether we use Tor or not, whether we can be directly identified or not. A behaviour pattern is a behaviour pattern and, if I want to know how a particular demographic is behaving, I don’t need to identify you personally in order to identify you as part of that demographic. So I’ll track you and observe you and find ways to identify you as an individual in that demographic. I don’t need to know who you are IRL, I just need to be able to pick you out in the crowd.

Picking you out in the crowd is what it’s all about.

People are creatures of habit and, given enough behavioural data, predictable. If you’re the kind of person who purchases a new television every five years, your purchase records will tell me that. I can easily determine where you live, where you work … your purchase history; it’s all a matter of either public record or obtainable from someone for the right price — the authorities can simply demand it (even if they do have to get a n̶o̶t̶e̶ ̶f̶r̶o̶m̶ ̶t̶h̶e̶i̶r̶ ̶p̶a̶r̶e̶n̶t̶s̶ court order first).

Putting it together, I can join the dots between the places you’ve lived and where you’ve worked. I can work out the behavioural relationship between the two (how often you change jobs, how far you’re willing to commute, anything and everything relevant, the more datapoints the better) and figure out when you’re due a change. I can then calculate the vector changes of each and predict where you’re likely to move to next (or whether you’ll stay in your current home). I can look at your purchase history and note that you travel no more than fifty miles to make purchases.

Given the data on where you are likely to be living two years from now, how far you travel to shop for things, what you purchase, when you purchase it, how much you spend, what options there are within your financial, physical and behavioural range … knowing that you will (with a 91% probability) be purchasing a new television in twenty-three months’ time, that you will (with a 95% probability) still be living at your current address, that you will (with a 97% probability) not travel further than fifty miles to purchase it, that you will (with a 99% probability) spend within a certain minimum/maximum range, that you will (with a 76% probability) make that purchase at 15:00/3pm … allowing for inflation and what stores there are within fifty miles of your address that will be in a position to cater to your behavioural needs … I can predict where you will be and what you will be doing at 15:00/3pm (+/- 14 minutes, 24 seconds) twenty-three months from now.

I know you better than you do yourself.

Sceptical?

Some years ago, I did some work for a company, testing a software solution they were developing that combed publicly available data (including on social networks) and combined it with their clients’ in-house data. Given no more than the purchase record of a book in a store on a certain date, in a certain place … thanks to another purchase record, of a pair of jeans in another store, in a different city, across the country, two years previously … I was able to determine (amongst a scary amount of other detail), that person’s inside leg measurement.

Given those two datapoints, I could, instead, have worked backwards and identified the individual in question — I did a few tests on that as well. I just need your behavioural data and, sooner or later, I will find out who you are.

Tor isn’t going to protect me from someone who really wants to know; even if they don’t actually care and just want the demographic data, it all gets collected, collated … it’s all there waiting to be mined in more depth — all they have to do is decide they want to.

So, as far as I’m concerned, it doesn’t matter if you can identify me in the crowd, so long as you can’t determine anything useful when you do so. And for that, I need other tools … tools that, as said, interfere with Tor’s model. Yes, I might stand out from the crowd, if I wear a mask but, if that mask hides my identity, what does it matter? Equally, if you can’t see what I’m doing then the fact that I stand out like a sore thumb is not a concern either — I might be changing my underwear in public but, so long as I’m hidden under something and nobody can see that, it isn’t a problem.

It’s not about seeking anonymity in the crowd but about sowing doubt — creating fake identities and false data trails to misinform and mislead.

[Configuring Firefox]

Firefox … and a number of extensions … it is then.

First up, let’s look at Firefox itself.

The irony there is that, whilst I (nearly) always recommend Linux over Windows (especially Windows 10), there is a privacy/security enhancing option on Windows that I can’t use on Linux: namely portable Firefox — get it here.

The reason why I wish I could use it on Linux too is that Linux is far from being as invulnerable as its fanbois/grrrlz would have you believe and, if you are compromised, a number of different portable installations (each with its own profile and data) … in atypical locations … is a much better way of obscuring things than a single profile in your home directory — no, multiple Firefox profiles aren’t the same thing … they’re all still stored in your home directory.

It’s not an insurmountable problem for anyone determined to extract your data, but, if you take the trouble to use different installations to create fake profiles then ascertaining which of them is the real you can be harder: am I the gamer who spends their time on the Steam forums … or am I the person with the arts and crafts hobby who frequents Etsy? The gamer may be more active and their profile more recently accessed, but is the crafter necessarily an irrelevance — could they be the real owner of the machine but busy elsewhere and the gamer simply borrowing it?

So, whether you’re an inveterate Windows lover, or reluctantly obliged to use it against your better judgement, your first step should be to start using portable Firefox rather than an installed version. Unpack however many copies you need to a secure folder somewhere outside your own account, that can be accessed by any user with an account on the machine and secure the copies you intend to use yourself so that only you can access them — this creates plausible deniability (so to speak), by breaking the link between your user account and the fake profiles, making the possibility that the others belong to other users more plausible. Then you simply browse around with the others to create the fake usage patterns for them … remembering to update those patterns from time to time.

The downside of the approach outlined in the previous paragraph is that files you create (or Firefox modifies) as you browse, will clearly be owned (modified) by you and not someone else … potentially giving the game away.

An alternative, therefore, is to create fake user accounts on your machine and switch accounts to use the ‘fake’ browsers … it’s up to you how far you want to take this — it depends upon your threat landscape: if you’re living under threat, it may be a suitable approach … if you’re living in a free democracy, not in danger from an abuser and just want to keep your data out of the hands of the thieves who hack the advertisers who slipped up and left the database containing your profile unprotected, it’s probably overkill. This trick works for all operating systems, however, so … if it would be a suitable approach for your needs … it’s worth considering even if you’re not using Windows.

From here on though, whichever platform you use, my recommendations will remain the same.

[Firefox Preferences]

General

Consider unchecking Restore Previous Session. You lose the convenience but, if you are likely to need to crash your browser on purpose … such as a journalist working in an oppressive regime receiving an unpleasant visit from the authorities … or living with an abusive partner … it will ensure that your activity cannot be recovered by simply relaunching the browser.

Leave language and appearance settings on their defaults. If it won’t interfere overly with your use, set the default language to English (US) — the less your configuration says about you the better.

If necessary, uncheck Check your spelling as you type. As much as anything else, for practical reasons, it makes more sense to write things in a local app first (like a word processor, such as LibreOffice) to eliminate the risk of a browser crash wiping out all your hard work. It will also improve performance.

In the Files and Applications section, set the default download directory to somewhere innocuous, like a secure download folder outside your home (or Windows’ %user%) folder. Tempting though it might be to choose Always ask you where to save files, the danger is that a feature (or exploit) on the site/page you’re visiting might record that location, thereby gaining information about your directory structure and, hence, how you think (psychologically profiling you as well as gaining information on where your files are stored).

Whether you choose the option Always ask rather than preferred default applications for file handling is up to you but, again, the less you stray from the defaults (always ask) the less chance there is for a rogue page or site to extract that information from your browser.

It’s up to you whether you enable Play DRM-controlled content. I don’t, but I don’t use online streaming or similar services and almost certainly never will — I don’t need Spotify’s help to find music (I can browse the ‘People also liked/bought’ links myself) and there’s no way I’d ever use Amazon to watch movies (if I can’t purchase something for cash, I get other people to purchase things on my behalf, thereby muddying their profile as well as eliminating my own).

I uncheck the Performance options myself — it’s a web browser, not a home cinema or games console and I just don’t need them, so it’s worth the minor hit in performance to keep that extra bit of information out of the hands of rogue sites/pages exploiting them. But your mileage may vary, so try it out and see if the impact is acceptable for your purposes.

It’s also worth considering how this may also be relevant to browser fingerprinting (discussed later).

In the Browsing section, uncheck

  1. Search for text when you start typing — again, whilst I’ve no doubt it’s all aboveboard from Mozilla’s end, there’s no way of knowing how it might be exploited by those with nefarious intent.
  2. Recommend extensions as you browse and Recommend features as you browse — they might be convenient, but those latter two are waaaay too likely to end in tears, if you’re not careful; besides which, you can always search https://addons.mozilla.org yourself … search the web for recommendations … and, furthermore, be confident that, if there’s a feature that a site thinks people might be interested in, it’ll be plugging it for all its might without any help from Mozilla.

Under Network Settings, unless you know what you’re doing here (in which case you won’t need me to explain it to you anyway), you want the default use system proxy settings option. You should review this from time to time anyway though; if someone does hijack your browser, redirecting your searches and pages, delivering adware and/or malware, this is one place they might do it on the basis that most people never investigate this feature — it’s an unsophisticated attack (and it would be far more likely that they’d attempt to change your proxy settings system-wide, rather than in your browser), but a potential one nevertheless, so it doesn’t hurt to take a look once in a while.
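
As an aside, nearly all of the options in this General section map onto about:config preferences, which is how hardening guides like the ghacks user.js set them. Purely for reference, here is a user.js-style sketch of the settings just discussed; the pref names come from my own notes and the ghacks project rather than having been re-verified for this article, so check them against about:config before relying on them (and the download path is obviously a placeholder):

```typescript
// A user.js-style sketch of the General settings above (pref names unverified here).
// In a real user.js file only the user_pref(...) lines appear; the declare stub
// keeps this valid as a standalone TypeScript snippet.
declare function user_pref(name: string, value: string | number | boolean): void;

user_pref("browser.startup.page", 0);                // 0 = start blank (so no "Restore Previous Session")
user_pref("layout.spellcheckDefault", 0);            // "Check your spelling as you type" off
user_pref("browser.download.useDownloadDir", true);  // always save to the fixed download directory…
user_pref("browser.download.folderList", 2);         // …which is a custom location…
user_pref("browser.download.dir", "/path/to/innocuous/folder"); // …set here (placeholder path)
user_pref("media.eme.enabled", false);               // no DRM-controlled content
user_pref("accessibility.typeaheadfind", false);     // "Search for text when you start typing" off
user_pref("browser.newtabpage.activity-stream.asrouter.userprefs.cfr.addons", false);   // no extension recommendations
user_pref("browser.newtabpage.activity-stream.asrouter.userprefs.cfr.features", false); // no feature recommendations
user_pref("network.proxy.type", 5);                  // 5 = use system proxy settings (the default)
```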

Home

For Homepage and New Windows … and for New Tabs … select Blank Page. You can use bookmarks for your favourite pages, if you must … or, better yet, simply type the addresses in yourself — unless you have accessibility issues, it takes no more time to type ‘medium.com’ for instance than it does to mouse over to the bookmark bar and click on a link (and you can save yourself the bother of mousing over to the address bar by pressing ALT+D). This prevents anyone with malign intent gleaning information about you by querying your browser (or, in the case of an abuser, simply looking).

Under Firefox Home Content, uncheck everything — again, you’re preventing the disclosure of information about your browsing habits.

Search

Although I’m having trouble tracking down any information on this right now, in the Past, searching from the address bar was not infrequently mentioned as a privacy risk. This may have changed in the meantime though and that might explain why I can’t find anything on this now. So, it’s up to you — if you want to play it safe and separate the two functions, select Add search bar in toolbar (use CTRL+J or CTRL+K to focus the search bar without having to mouse around).

Your default search engine is a matter of personal choice but, for now, we aren’t going to make a decision — we’ll come back to this at the end of this section.

For various reasons, I find it hard to recommend any of them, frankly. Obviously, you don’t want to use Google, Bing, Yahoo and the like, but what about DuckDuckGo, StartPage, Qwant, Ecosia, et al? Well, you can get an idea of why I have a hard time recommending anything, if you read this article and this one.

The situation isn’t great, is it?

And if you consider the CLOUD Act as well then the idea that your searches will be protected from the prying eyes of various US agencies by using servers located off US soil flies out of the window; so you want to avoid anything that is registered as/to a US company (which, fundamentally, means anything that is a .com as well) — the GDPR won’t help you ... especially not as the CLOUD Act was formulated pretty much in response to the GDPR (the US government wanted data from Microsoft, which was refusing to hand it over because it was stored on servers in Éire and, therefore, protected by the GDPR).

A note on Qwant. Whilst it might (for now) appear to get a clean bill of health compared to many of the others and not be based in the US, I wouldn’t trust it unquestioningly. Yes, France is signed up to the GDPR, but I still do not take it on trust that the French government is any less assiduous about monitoring searches than, say, the US — especially if a search query originates outside France/the EU.

Swisscows has the advantage of being based in Switzerland, but you may find its family friendly content policies a bit restrictive.

Personally, I favour self-hosted Searx. But that’s not necessarily an option for everyone; especially not if you’re not technically knowledgeable and/or don’t have a spare computer to dedicate as a separate proxy server.

You could use a public Searx instance with the caveat that you are running the risk of your searches being logged by someone. But, on the other hand, as long as they aren’t logged by the instance operator, spreading your searches across twenty (or more) different providers might mitigate that because your search pattern will be somewhat obfuscated.

All else being equal, therefore, when Searx isn’t an option for you, I’d have to say that … for now anyway … if you don’t think Ecosia, Qwant, Swisscows or any of the others is for you, DuckDuckGo looks to be the best of an imperfect lot — albeit, my more recent experience with it has been that the quality of its results have been pretty poor for a while now compared to previously, so your mileage may vary.

Under Search Suggestions, uncheck all the sub-options first and only then uncheck Provide search suggestions — the reason for this is that, if you don’t, a glitch/error that flipped it back on again would enable all the sub-options too (so disabling them first is the safest approach).

Under One-Click Search Engines, remove the ones you aren’t comfortable with (Google, etc.).

To add others, open a new tab and navigate to the engine you want to add. Click on the magnifying glass in the search bar (or click the ellipses to the right of the address bar) and select the option to add the engine. You aren’t limited to pure search engines and can add the likes of Amazon, Wikipedia and YouTube (even Medium) for instance (if a site can be added, there’ll be a green icon/symbol overlaid on the magnifying glass). Just think carefully about which sites you add — it’s all too easy to accidentally select the wrong option from a list and send your search term to a site you’d really rather you hadn’t (which is why we removed undesirable ones from the list).

When you’ve added all the engines you want, go back to the top of the search settings page and choose your default search engine.

Although you could always click on the Find more search engines link under the list of one-click search engines, this will take you to Mozilla’s site to add an extension and the problem with that is that you’re relying on the extensions to be maintained. This isn’t usually an issue with the larger engines, but the less popular ones may not be able to dedicate quite so much time or as many resources to them (and exploits may remain unfixed) … whereas a simple addition to the list of search engines ensures that you go directly to the most up-to-date version of the site without the need to worry about that possibility (so, I consider it preferable to adding an extension).

Privacy & Security

Under Enhanced Tracking Protection, select the Custom option.

Check the Cookies option and make your choice from the drop-down menu. I opt for All third-party cookies myself and have yet to experience any website breakages as a result — probably due to the fact that most sites ignore your preference here and set cookies anyway.

If necessary, uncheck the Tracking content option — we’ll handle this with extensions later.

Until I find a better third-party solution, I opt to block cryptominers here as well.

Leave Fingerprinters unchecked — we’ll deal with those by means of a more sophisticated solution later.

When it comes to deciding whether you want to send a DNT (Do Not Track) request to sites, it’s not easy to say what the best option is right now. Apart from the fact that barely any site will honour your request anyway, Mozilla haven’t really given us much of a choice — either we send one to all sites or only when Firefox is set to block known trackers, but we’re going to send one to at least some, if not most, sites whatever we do here.

We’re straying into slightly murky waters here. Sending a DNT request can help trackers create a fingerprint of your web browser (and, in 2019, Apple decided to remove the option from Safari for that very reason), so an ostensibly privacy enhancing option can in fact reduce your privacy!

It’s possible that (combined with the option to only send a DNT request when Firefox is set to block known trackers) choosing to block all third-party cookies (rather than only cross-site and social media trackers) reduces (if not eliminates) this, but I can’t say for certain. I err on the side of cautious optimism, however, and use exactly that combination, so you might as well opt to send a DNT request only when Firefox is set to block known trackers here. Either way around though, by the time we’re done it’ll be academic because, whilst our browser configuration will not entirely anonymise us, it will obfuscate a fair amount of what we’re doing and how; rendering the data we do leak less useful for the purposes of identifying who we are IRL (which is basically the point of the whole exercise).

If necessary, uncheck the option to delete cookies and site data when Firefox is closed — we’ll deal with that below and it’s best not to have competing approaches vying for control over the process.

Do. Not. Ever … let your web browser save logins or passwords!

Uncheck the two options below (Autofill logins and passwords / Suggest and generate strong passwords) and then uncheck Ask to save logins and passwords for websites — again, we do it this way to ensure that a later mishap doesn’t re-enable the sub-features at the same time.

You aren’t going to be using Firefox to manage passwords, so there’s nothing to be gained by checking the option to show alerts about passwords for breached websites. Mozilla might enhance that feature in the Future … notifying you of breaches, even if Firefox doesn’t know whether you have a password for a site (let alone manages it for you) … so, you might consider leaving that option checked but, as it stands, it will be of no benefit to you and is, furthermore, yet another potential vector of attack — uncheck it, therefore.

Don’t use a master password. You aren’t storing any passwords anyway … it adds to your cognitive load, requiring you to memorise yet another one … and this feature in Firefox does not have a good track record either — by using a master password, all you’re doing is adding another layer of complexity for little return and further adding a potential point of failure (if you need to leave your computer unattended you should be locking your whole account anyway, not just your web browser!)

To the right of the words Firefox will, drop the menu down and select Use custom settings for History.

Uncheck the option to remember your browsing and download history and, no, you certainly don’t want Firefox to remember your search and form history!

Check the box Clear history when Firefox closes.

Remember I said that we would deal with cookies and site data elsewhere? We’re going to do that now — but in a more fine-grained manner.

Click on the Settings button to the right and select all the options — if you’re concerned about your privacy, you’re navigating to sites from the address bar, not bookmarks … and remembering your passwords, not saving that information in your browser to be slurped by a zero-day exploit … so you’ve no reason to want to keep any of it in your cache or profile.

When using the address bar suggest … nothing at all … uncheck every option — those mechanisms are just waiting to be exploited!
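
The same goes for this section: the cookie, history, password and address bar choices above all have about:config equivalents. Another user.js-style sketch follows, with the same caveat that the pref names are from my notes and the ghacks user.js rather than re-checked for this article:

```typescript
// A user.js-style sketch of the Privacy & Security settings above (pref names unverified here).
declare function user_pref(name: string, value: string | number | boolean): void; // stub; a real user.js omits this

user_pref("network.cookie.cookieBehavior", 1);          // 1 = block all third-party cookies
user_pref("privacy.donottrackheader.enabled", true);    // send the (largely ignored) DNT request
user_pref("signon.rememberSignons", false);             // never save logins or passwords…
user_pref("signon.autofillForms", false);               // …never autofill them…
user_pref("signon.generation.enabled", false);          // …and don't suggest generated ones
user_pref("places.history.enabled", false);             // don't remember browsing history
user_pref("browser.formfill.enable", false);            // don't remember search and form history
user_pref("privacy.sanitize.sanitizeOnShutdown", true); // clear the selected data when Firefox closes
user_pref("browser.urlbar.suggest.history", false);     // the address bar suggests…
user_pref("browser.urlbar.suggest.bookmark", false);    // …nothing…
user_pref("browser.urlbar.suggest.openpage", false);    // …at all
```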

Under the Permissions section …

Click the Settings button for each option and check the box to block new requests.

Except for:

  1. Location — we’ll come back to this later, so leave it unchecked for now.
  2. Autoplay — the option you want here is Block Audio and Video, which you select from the Default for all websites drop-down menu at the top.

If you need to enable camera/microphone access at a later date and, for some reason, you need to use your web browser rather than a dedicated app¹⁰, you can enable the necessary request feature (don’t forget to disable it afterwards!)

Check Block pop-up windows and Warn you when websites try to install add-ons. And there are (almost certainly) no occasions on which you will need to say ‘yes’ to that latter once you’ve whitelisted those (few) reputable sites you use for whatever purposes you rely upon them¹².

Unless you have a need for them, check the option Prevent accessibility services from accessing your browser.

Uncheck all the Firefox Data Collection and Use options — quite apart from Mozilla making some dubious use of them in the Past (most notably studies), those mechanisms are simply crying out to be exploited.

You might have misgivings about the Deceptive Content and Dangerous Software Protection options but, on balance, I think it best to leave them checked — my more recent reading on the matter has led me to conclude that the benefits outweigh the privacy concerns I used to have, so, until I learn otherwise, I now check them.

Certificates: ask every time and query OCSP responder servers — certificates are a dreadful way to enforce security (because, basically, they don’t), but they’re what there is and better that than nothing.
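
And, to round the section off, the permissions, data collection and certificate options have pref equivalents too; the usual caveat about the names applies:

```typescript
// A user.js-style sketch of the permissions/data-collection/certificate settings above.
declare function user_pref(name: string, value: string | number | boolean): void; // stub; a real user.js omits this

user_pref("permissions.default.camera", 2);               // 2 = block new camera requests
user_pref("permissions.default.microphone", 2);           // 2 = block new microphone requests
user_pref("permissions.default.desktop-notification", 2); // block notification requests
// permissions.default.geo is deliberately left alone: Location Guard handles location (see below)
user_pref("media.autoplay.default", 5);                   // 5 = block audio and video autoplay
user_pref("dom.disable_open_during_load", true);          // block pop-up windows
user_pref("accessibility.force_disabled", 1);             // prevent accessibility services from accessing the browser
user_pref("toolkit.telemetry.unified", false);            // Firefox data collection off…
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("app.shield.optoutstudies.enabled", false);     // …including studies
user_pref("browser.safebrowsing.malware.enabled", true);  // keep Dangerous Software Protection…
user_pref("browser.safebrowsing.phishing.enabled", true); // …and Deceptive Content protection on
user_pref("security.OCSP.enabled", 1);                    // query OCSP responder servers
```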

Sync

No.

Just no.

If you’re concerned about privacy then the last thing you want to do is store revealing information ̶ ̶i̶n̶ ̶t̶h̶e̶ ̶C̶l̶o̶u̶d̶ on ̶t̶h̶e̶ ̶I̶n̶t̶e̶r̶n̶e̶t̶ someone else’s computer (probably Amazon’s).

[Firefox Extensions]

The caveat here is that we are going to be relying on a number of third parties to help us secure things. This means giving them intimate access to our browser and, therefore, data. So, I try to limit my exposure by only using those extensions I feel are

  1. necessary (or at least worth whatever tradeoff is made by using them)
  2. reputable

As I remarked above, some of the things I will recommend are the result of my using my knowledge and experience to half judge/half intuit which of them are safe to use and, however good I might be at it, there is no guarantee that I will not be in error about some (or even all) of them — so I will make recommendations based upon my appreciation of how reliable and useful things are, but both your and my own mileage may vary over time.

That said, however, as I observed here, not every member of the human race is a sociopath and some of us actually do things for the betterment of all mankind for no reason other than it pleases us to help people. So, I’m pretty confident that, on balance, most (if not all) of the ones I recommend are unlikely to be detrimental to your security/privacy except by accident (due to a bug in the code rather than nefarious intent). Just remember that good things can turn bad (like AdBlock Plus) … so keep an eye on the reviews on a regular basis (users occasionally post warnings and reports that are significant) and on who maintains them (if there is a transfer of ownership, you want to investigate the new owners/maintainers and make sure there are no untoward changes to any of your extensions’ behaviour).

As I mentioned at the start, there is a tradeoff to be made between privacy and anonymity. When you’ve finished following this guide, it’s entirely possible that you’ll be identified as a unique individual. This is particularly likely if you test yourself on Panopticlick or Amiunique. What needs to be borne in mind, however, is that

  1. both those sites use limited datasets of in-the-wild browser profiles and have also, not infrequently, been out of date, testing against browser versions so old that almost nobody still uses them (thus skewing the results, because the new version you are using is not in the dataset and is, therefore, considered … wait for it … unique); the sketch after this list shows the kind of attributes such tests read.
  2. being anonymous doesn’t help, if someone tracking you can get their hands on your behavioural data (behavioural fingerprinting is a thing, remember) and the loss of a certain amount of anonymity in return for privacy can be acceptable — if you can’t see my face, it doesn’t matter if I stand out in the crowd … but if you can’t see what I’m doing, it doesn’t matter if you can see my face.
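
For a sense of what those testing sites (and real trackers) are reading, here is a short sketch of the sort of passive attributes any page can collect with a few lines of script; it isn't anybody's actual fingerprinting code, just an illustration of how ordinary, permissionless browser properties add up to a fingerprint:

```typescript
// Sketch: a handful of the attributes fingerprinting scripts combine into a
// browser fingerprint. All of these are standard properties a page can read
// without asking for any permission.
function fingerprintSurface(): Record<string, string> {
  return {
    userAgent: navigator.userAgent,
    languages: navigator.languages.join(","),
    platform: navigator.platform,
    screen: `${screen.width}x${screen.height}x${screen.colorDepth}`,
    timezoneOffset: String(new Date().getTimezoneOffset()),
    cpuCores: String(navigator.hardwareConcurrency),
    doNotTrack: String((navigator as any).doNotTrack), // yes, the DNT setting is itself a fingerprint bit
    cookiesEnabled: String(navigator.cookieEnabled),
  };
}

// A tracker hashes something like this and correlates it across the sites it watches:
console.log(JSON.stringify(fingerprintSurface(), null, 2));
```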

So, let’s look at what we can do to achieve a reasonable balance.

First of all, there are many useful guides out there to hardening Firefox — the ghacks-user.js is probably the most ‘famous’ one (and very good it is too!)

But … they require an in-depth understanding of what the purposes and effects are and how the different tweaks interact with each other and with any other extensions/tricks you might use. I have, for instance, seen people recommend enabling FPI (First Party Isolation) in the same breath as extensions, features of which (at least at the time of writing this guide), will stop functioning if you enable FPI. So, following about:config hacking guides on the Internet can result in some decidedly untoward side-effects, if you’re not 100% certain of what you’re doing and why.

It can also be very difficult to determine exactly which changes resulted in undesirable results — especially if you don’t notice the untoward behaviour for a while and have forgotten precisely which changes you made in your browser’s configuration.

As a result, I’m going to limit the discussion here to browser extensions — if things go wrong, it’s easily remedied by changing whichever setting is the cause of the problem (or simply disabling the extension altogether), rather than having to delve back into about:config and track down all the relevant entries.

By all means take a look at the ghacks guide and any others you may find but, if you’re going to make use of them, I recommend you do so with either a separate portable instance of Firefox or (on Linux/Mac OS) that you create a separate user profile and start out by implementing only those tweaks that can be limited to that profile rather than applied browser-wide — it’s much easier to delete the offending profile than it is to uninstall your browser and completely sanitise your system before re-installing it (there are always hard to track down remnants left after the uninstall that can leave you tearing your hair out¹³).

Location Guard

I’ll start out with a ‘fun’ one.

Remember how, under the Privacy & Security/Permissions section, we blocked new requests for everything except location … this is why we made that exception.

It isn’t used by many people and that’s often a warning sign but, if you look at the reviews and the quality of the feedback, you get the sense that the developer is genuine: it’s not thousands of five-star reviews without commentary or full of fawning “great/fantastic/amazing/I’m going to marry the developer” comments … nor does the developer ignore/dismiss reviews but takes time to work with users to resolve things. I can’t put my finger on it precisely but, experience says to me that this is a genuine attempt to add value to your life rather than a scam. In any event, I’ve been using it for years and have yet to experience any browser behaviour that made me wonder whether it might be doing something undesirable in the background.

It’s not updated terribly frequently, but it doesn’t seem to need to be except when there’s a major update to Firefox (e.g. Quantum) — from the tech perspective, unless there’s a substantial change in how Firefox handles Javascript or in Javascript itself, it’s probably pretty much unnecessary.

After installing it and following the tutorial, the first thing to do is to go into the Options section of its settings and click the button Delete fake location cache, to clear out unwanted data.

Now set the default level to Use fixed location. This is the setting offering the most privacy, because you can choose a fake location rather than just adding noise to your real location data.

And this is where the (sort of) fun bit comes in: choosing a fake location.

Pick somewhere that isn’t too implausible. Nobody is going to believe you were on top of Mount Everest for the last six months, for instance. Likewise, pretending to be on a boat in the middle of the ocean might seem like a good idea but, trust me, quite apart from little things like questions concerning why there’s no record of any boat where you are pretending to be … or why you never go to port … you will, sooner rather than later, tire of going into the settings to update your location to reflect a plausible itinerary (thus giving the game away).

It’s better to pick a place sufficiently far from your real location that it genuinely obscures where you are, but somewhere you could plausibly be spending your life, or at least a few years at a stretch (at college/university for instance). Remember … a motivated adversary with the means to do so will find the truth (Location Guard isn’t going to fool the government), so this isn’t a means of anonymising yourself but of enhancing your privacy by limiting the harm that can be done by opportunistic attempts to get information as you browse. It’s not a foolproof way of doing so … much of the location tracking that goes on today is based upon your IP address and Location Guard can’t prevent that. But geolocation by IP isn’t incredibly accurate (see the value for City here), whereas determining geolocation by means of the Javascript API is very much so (compare the result here) … and that’s what Location Guard defends you against — it’s not necessarily called upon to do so very often but, when it does its job, it does it well.
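
To make concrete what Location Guard is actually standing in front of: the Javascript geolocation API hands any page you've granted permission to a latitude/longitude pair plus an accuracy figure, roughly as sketched below; Location Guard sits between the page and that API and substitutes your fixed (or noise-added) location instead.

```typescript
// Sketch: what a page receives from the Javascript geolocation API once you
// grant the permission. Location Guard intercepts this and hands the page your
// fixed/fuzzed location rather than the real one.
navigator.geolocation.getCurrentPosition(
  (pos) => {
    // Unprotected, this is typically accurate to metres or tens of metres,
    // whereas IP-based geolocation is usually only city-level.
    console.log("latitude:", pos.coords.latitude);
    console.log("longitude:", pos.coords.longitude);
    console.log("accuracy (m):", pos.coords.accuracy);
  },
  (err) => console.log("denied or unavailable:", err.message)
);
```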

If choosing a fixed, fake location isn’t something that you feel is a practical solution for you then there’s the matter of the Privacy Levels to consider. It might be tempting to ‘turn the dial up to eleven’, increasing them all to the maximum, but this might result in suspiciously anomalous data, potentially drawing attention. I suggest, if you can reasonably do so, setting your default level to High and adjusting the protection level to something suitable.

If you are situated next to an ocean, for instance, don’t necessarily max it out (resulting in your location potentially being in the ocean, thus indicating that something is awry with the data) … but, if you’re in a sparsely inhabited rural location, a 3KM protection zone with a 7KM accuracy might not be entirely unreasonable.

On the other hand, if you’re in a large urban location (or at least pretending to be), there are many more ways to pinpoint someone (telephone masts, multiple WiFi access points) and a tweaked Medium setting (or even Low, if you’re on a mobile data connection) might be more realistic.

Think about what you’re trying to achieve here: you’re telling a white lie about your location (pretending to have moved to a different address in the same area) rather than an outright fabrication (you’ve got the lead in a Hollywood blockbuster and moved to a penthouse in L.A.) — so, choose a setting that supports your cover story rather than one that makes people suspicious.

Don’t worry about it interfering with mapping services:

  1. You’re concerned with privacy, so you aren’t foolish enough to use them in the first place — even, if you do, you certainly don’t use them to plan journeys starting from your real location but rather from a suitable starting point you know how to get to without any help (and you vary that starting point as frequently as is practical too).
  2. You can select a different level for a specific website using the location icon, if you ever need to.

The FAQ is pretty good at explaining the ins and outs of your choices — read it.

If you use a VPN, consider setting your location to match that of your exit server — it’s more work to update it each time that changes, but the payoff is that you tell a more consistent story.

Disconnect

Install Disconnect next. I’ve been caught out in the Past when reinstalling this in a new browser installation because I forgot that (whilst it will happily coexist with other extensions performing similar tasks) it won’t show blocking information on the toolbar icon unless it’s the first of them to be installed — so, remember that if you ever find yourself in the same situation (it is tedious to have to re-install and reconfigure everything else again afterwards … so, you won’t).

That said, I seldom (if ever) pay any attention to its notifications: there aren’t any settings to fiddle with and it’s basically a set-and-forget option for me … a Ghostery without that latter’s dubious pedigree … doing its job in the background, catching things the other extensions might miss. It may be redundant, or not do much, but it doesn’t interfere with the others, so, at worst, it’s harmless and, at best, it does something useful — the belt and braces approach.

You can be pretty happy that this one is aboveboard, because it’s in the Recommended Extensions Program.

For those interested in such things (which you should be), its block lists can be found here, here and here.

Decentraleyes

Next, we’ll install another set-and-forget option.

This too is in the Recommended Extensions Program, so I’m happy I’m not taking any ill advised risks by using it.

As you may be aware … and, if you’re not, you certainly will be after using some of the other extensions I recommend here … a huge amount of what you see on the sites you browse isn’t sourced from the site you are browsing but is delivered from third-party CDNs (content delivery networks).

Decentraleyes comes bundled with commonly used files and serves them locally whenever a site tries to fetch them from a CDN.

Why is this important?

These two FAQ entries answer that:

Can CDNs track me even though they do not place tracking cookies?
Absolutely. Requests to Content Delivery Networks contain the “Referer” HTTP header (originally a misspelling of referrer) that reveals what page you’re visiting. Techniques like IP address tracking and browser fingerprinting can then be used to associate the aggregated data with your identity.

My browser caches downloaded CDN libraries, doesn’t that protect my privacy?
Sadly, no. Even if the file in question is stored inside of your cache, your browser might still contact the referenced Content Delivery Network to check if the resource has been modified.

Additionally, as you’re loading those resources locally and don’t need to request them over the Internet, it can result in a faster browsing experience — and we all like that.
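
Conceptually, the mechanism is the kind of thing sketched below. This is not Decentraleyes' actual code, just an illustration of the WebExtension machinery involved (it assumes an extension context with the webRequest/webRequestBlocking permissions, and the bundled resource path is made up): intercept requests to known CDN addresses and answer them from files shipped with the extension, so the CDN never sees the request, your IP address or your Referer.

```typescript
// Conceptual sketch (not Decentraleyes' real code): answer a request for a
// well-known CDN-hosted library from a copy bundled inside the extension.
declare const browser: any; // provided by Firefox in a WebExtension context

browser.webRequest.onBeforeRequest.addListener(
  (details: { url: string }) => {
    if (details.url.includes("/ajax/libs/jquery/3.5.1/jquery.min.js")) {
      // Serve the local copy: the CDN never sees the request or the Referer header.
      return { redirectUrl: browser.runtime.getURL("resources/jquery-3.5.1.min.js") };
    }
    return {}; // anything else passes through untouched
  },
  { urls: ["*://ajax.googleapis.com/*"] },
  ["blocking"]
);
```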

I recommend that the only option left unchecked in its settings be Block requests for missing resources.

HTTPZ

This one comes with a warning that it is not monitored by Mozilla. It also has very few users … which, as previously mentioned, is usually grounds for caution. And with so few reviews, I’d normally give it a wide berth even if only on the grounds that I’m not sure it will be supported for very long.

However, I stumbled upon it after reading a site that, in my opinion, is well written by a knowledgeable individual — the article in question was an FAQ update to two previous posts on hardening Firefox that explicitly stated that one ought to have followed one or other of those articles and/or incorporated the ghacks user.js file¹⁴ before reading the FAQ itself. So, I’m reasonably confident it’s a reliably useful one.

I used HTTPS Everywhere, from the EFF (Electronic Frontier Foundation) for many years but, reading the articles in question, I learned something I was entirely unaware of: namely that HTTPS Everywhere relies upon a database of sites that will deliver an https connection, so, if a site isn’t in the database, HTTPS Everywhere won’t attempt to request delivery by https — which is kind of an issue, when you think about it. Also, it seems that database may not be very large and the only reason it appears to work as frequently as it does is the fact that so many sites have upgraded their connection to https in the meantime that although HTTPS Everywhere isn’t responsible for every https connection, the user assumes it must be.

HTTPZ, on the other hand, requests an https connection and only falls back to http if one is not forthcoming. Moreover, unlike a static list (like the one used by HTTPS Everywhere) … which results in no request for https ever, if the site isn’t in the list … HTTPZ will wait a week and then re-request an https connection from the site in question — which means that a site that is updated to offer it later will do so (which, obviously, it won’t if it’s still not in the HTTPS Everywhere list).
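
The difference in approach is easy to sketch: rather than consulting a fixed list, you optimistically try https first, remember the failures for a while and retry them after a week. The snippet below is my own illustration of that idea, not HTTPZ's actual code:

```typescript
// Conceptual sketch of upgrade-then-fallback (not HTTPZ's real code).
const failedUpgrades = new Map<string, number>(); // hostname -> when the https attempt last failed
const RETRY_AFTER_MS = 7 * 24 * 60 * 60 * 1000;   // retry a failed upgrade after a week

async function upgradeIfPossible(url: string): Promise<string> {
  const u = new URL(url);
  if (u.protocol !== "http:") return url; // already https (or something else entirely)

  const lastFailure = failedUpgrades.get(u.hostname);
  if (lastFailure && Date.now() - lastFailure < RETRY_AFTER_MS) return url; // too soon to retry

  u.protocol = "https:";
  try {
    await fetch(u.href, { method: "HEAD", mode: "no-cors" }); // does the site answer over https?
    return u.href;                                            // yes: use the upgraded URL
  } catch {
    failedUpgrades.set(u.hostname, Date.now());               // no: note the failure, fall back to http
    return url;
  }
}
```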

There is one caveat: namely that it doesn’t handle sub-requests (yet) … meaning, if you click on a link on an http site, it won’t request an https link from the site behind that link. However, we’ll resolve that problem with another extension (Temporary Containers), so, it’s not quite as serious an issue as it might appear (at least not as far as our usage is concerned).

Cookie Quick Manager

I used to use Cookie Monster but, sadly, that has not seen a new lease of life in a WebEx version, so I was obliged to find an alternative. Fortunately, I found Cookie Quick Manager to be a reasonable alternative. It’s not as powerful, nor as flexible, and managing your cookies takes more steps, but it does perform the basic task well and without fuss. It’s also in the Recommended Extensions Program, so, once again, I’m as confident as I can be that it is free of nefarious behind-the-scenes activity.

In the preferences, I uncheck the option to delete all cookies on restart, because it is not as effective as Firefox’s own management of this process and there is, therefore, no point in having it attempt to delete cookies on restart if Firefox already deleted them on shutdown — it’s just an extra process that slows things down in order to do nothing at all, never mind anything useful.

Do not check the option to import cookies as protected cookies unless you have a very specific need to do so (like you’re a website developer and need to test things) — if it’s checked, uncheck it.

It’s up to you whether you check the option Prevent sites from clearing protected cookies, even if they are expired. If you’re concerned about privacy then, really, this should not be something you consider desirable. But you may have specific needs to protect certain cookies, so, it might be a useful option. Note, however, that, in that case, you’ll have to go back to the Privacy & Security section of the Firefox preferences, scroll down to the History settings, click on the Settings button to the right of the words Clear history when Firefox closes and uncheck Cookies — from a privacy perspective this is a weak option, because Cookie Quick Manager doesn’t delete them until two seconds after Firefox has restarted, meaning they are still in the cache until you next restart Firefox … but if you need this feature, it’s available.

Do not check the box First-Party Isolation! As mentioned before, it will play havoc with other measures we will be undertaking (ironically, Mozilla’s implementation is so draconian that it rules out a lot of techniques that we want to implement) — besides which, we’ll be using containers to isolate domains and it is, therefore, (to a certain extent) redundant.

I check the box Always open Cookie Quick Manager in a new tab — it saves frustration afterwards, because I don’t end up losing my place on a page (or even everything on that page in some unfortunate instances) just because I wanted to examine/delete some cookies.

I also check the option Display an alert before removing cookies from entire browsing context(s) — there’s a good reason why your computer asks you if you’re sure before it deletes things and, whilst deleting browser cookies (probably) won’t ruin your life the way deleting important files will, it never hurts to be safe rather than sorry.

Using it is pretty straightforward: domains on the left, list of cookies in the middle, details (cookie content) on the right … and some buttons at the bottom to do some things with/to them — be careful with the buttons (you don’t want to delete/protect anything inadvertently).

Right-clicking on a domain entry pops up a menu allowing you to do some other useful things — again, be careful … it won’t upload deepfakes of you to PornHub, if you click on something inadvertently, but you could find yourself logged out of something at an inconvenient moment … or your shopping basket empty of the carefully researched items you added to it before closing the relevant (and now forgotten) pages.

eCleaner Forget Button

The nuclear option when it comes to cookies and not essential, but it can be convenient.

Be aware that this really is the nuclear option and will delete absolutely everything you tell it to, from all containers across the board … it doesn’t limit itself to the container you’re in, the tab you’re in, or the window you’re in, it wipes out everything you tell it to — so, if you’re not completely confident you know what you’re doing, you can safely give this one a miss and rely upon things being cleaned up when you quit Firefox instead.

I don’t often need to use it, but it comes in handy when I’m feeling lazy: for instance, when I’ve been browsing a few sites, don’t care about keeping any settings on them, don’t want the cookies hanging around in my cache, waiting to be slurped by an exploit capable of ignoring container restrictions and can’t be bothered to delete them manually.

I check all the options except the Extensions Zone (I want to keep my extension settings for the session) and set the deletion period to be From day one (dangerous zone!), but you can tailor it to your needs. As said, I use it as the nuclear option for quick and dirty cleanup, but your needs may be different, so look at the options carefully before deciding.

Don’t worry about deleting

  1. Forms — unless you haven’t finished filling one in, you can safely (and should) delete these.
  2. Certificates, unless you need to keep using a site that issues its own (which is a dangerous thing but, sometimes, necessary¹⁵) — it only deletes temporary certificates, not permanent ones, and you can safely check that option unless you have a need to keep a specific temporary certificate active.
  3. Plugins Data, unless you are using some plugin to achieve something weird and/or wonderful … in which case, you are probably a developer and know when you do and don’t want to delete it — otherwise, you shouldn’t be using any plugins anyway … they are dangerous in terms of security (e.g. Flash or Java) … and you should delete the data and, ideally, uninstall them¹⁶.
  4. Service workers — unless you are a web developer/manager, the chances are good that you have no idea what service workers are. You don’t need to either … they’re boringly technical. Just rest assured that, unless you are on an employer/client/customer site (in which case, ask about them before you do anything drastic), connected to your bank, or making some other online transaction (likely involving the foolish use of your credit card), you (almost certainly) don’t want them hanging around and would like to delete them too (a short sketch of how to inspect them yourself follows this list).
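
If you're curious what's actually lurking there, you can list (and remove) the service workers a site has registered from that site's own developer console with a couple of lines; it's a reasonable way of seeing what that checkbox is sweeping away for you:

```typescript
// Sketch: list and unregister the service workers registered by the site in the
// current tab (run it from that site's developer console).
async function clearServiceWorkers(): Promise<void> {
  const registrations = await navigator.serviceWorker.getRegistrations();
  for (const registration of registrations) {
    console.log("unregistering service worker for scope:", registration.scope);
    await registration.unregister();
  }
}

void clearServiceWorkers();
```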

ClearURLs

What this does is very simple: it strips out tracking elements from URLs/links (see the extension’s description on the Mozilla site) — I’m sure you don’t need me to clarify why this is a good thing in terms of privacy.
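
The principle is simple enough to sketch for yourself: take a URL, drop the query parameters that exist only to track the click and keep everything else. The parameter list below is a small illustrative sample, not ClearURLs' actual rule set (which is far more extensive and site-specific):

```typescript
// Sketch of the principle (not ClearURLs' actual rules): strip common
// click-tracking parameters from a URL and leave the rest intact.
const TRACKING_PARAMS = [
  "utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content",
  "fbclid", "gclid",
];

function stripTrackingParams(link: string): string {
  const url = new URL(link);
  for (const param of TRACKING_PARAMS) url.searchParams.delete(param);
  return url.toString();
}

// "https://example.com/article?id=42&utm_source=newsletter&fbclid=abc123"
// becomes "https://example.com/article?id=42"
console.log(stripTrackingParams("https://example.com/article?id=42&utm_source=newsletter&fbclid=abc123"));
```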

One caveat: it will play havoc with your use of Medium, including ’tagging’ people with their ‘@<username>’, selecting tags for your post and any number of other things that I can’t remember any more because I disable it before I use Medium. So, you too will need to remember to disable it before using Medium and re-enable it afterwards.

This is particularly frustrating because there’s really no reason for it — it’s only because, for reasons I simply cannot fathom, Medium’s developers seem to have chosen to use techniques that nobody else in the entire World uses unless they are trying to track you. But that is an altogether different topic and I won’t go into it here because it will only make me angry.

TrackMeNot

It’s debatable whether there’s actually any point in using this … and the consensus amongst the privacy cognoscenti is pretty much that there isn’t.

The argument against is basically that an algorithm that makes random connections to random sites creates a profile that looks random, which is easy to filter out — if you read the Financial Times news site every morning at 8am, browse Amazon at 12pm and log in to Medium every evening at 7pm, a profile can still be built of you, because there’s a discernible pattern amidst the noise.

Other criticisms leveled at it, albeit a long time ago now, are that, if the searches are made at regular intervals, it’s easy to filter them out because they’re too regular.

The most scathing (although admittedly funny) takedown was on Bruce Schneier’s site: https://www.schneier.com/blog/archives/2006/08/trackmenot_1.html

But …

Those criticisms are somewhat mitigated by the fact that they are, largely, criticisms of its default settings.

Burst mode means that the searches are sent in conjunction with your own browsing, which mitigates the timing problem somewhat.

Adding the sites you frequently visit to the list and removing those you don’t makes the searches less ‘random’, so to speak — i.e. it becomes more difficult to filter them out because they fit the pattern of your real searches more closely.

You can add the RSS feeds of those sites and draw appropriate search terms from them, making the searches more plausible.

You can add your own words to the Use list field.

You don’t have to (and I don’t) use the dubious options of including keywords that the authorities look out for — unless, of course, you’re in the habit of (foolishly) searching those terms yourself anyway.

No, it won’t disguise your habits in a massive cloud of erroneous data linked to sites you don’t really visit — what sites you visit will still be obvious. But it might well mask what you’re doing on those sites to a certain degree by sending random numbers of queries to them, containing random searches that you could, plausibly, have made yourself.

So, I add it anyway — if it doesn’t work, it doesn’t matter and it doesn’t seem to impact upon my browser’s performance in any significant way, so, why not?

The Options interface is a bit clunky and sometimes it appears not to be working until you set your zoom level to something it’s happy to work with (trial and error is required here).

I add DuckDuckGo (with and without Javascript enabled), StartPage (with and without Javascript enabled), Qwant (with and without, you get the idea), Ecosia, Wikipedia, YouTube … and a number of other sites I habitually use that don’t threaten to block my IP address for attempting to perform a seeming DoS (Denial of Service) attack upon them (again, it takes trial and error to find out which ones work).

I enable burst mode and set the number of queries to ten per hour — which is reasonably realistic, when you consider that it amounts to one every six minutes on average.
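If you want a feel for why ‘ten per hour’ means an average rather than a metronomic query every six minutes, here is an illustrative sketch (my own, not TrackMeNot’s code) of decoy scheduling with jittered intervals:

// Schedule decoy queries at random intervals that *average* six minutes, so the
// timing doesn't give itself away by being perfectly regular.
const MEAN_INTERVAL_MS = 6 * 60 * 1000; // ten decoy queries per hour on average

function scheduleNextDecoy(sendQuery: () => void): void {
  // Exponentially distributed delay around the mean: occasional quick bursts,
  // occasional long gaps, nothing clockwork.
  const delay = -Math.log(1 - Math.random()) * MEAN_INTERVAL_MS;
  setTimeout(() => {
    sendQuery();
    scheduleNextDecoy(sendQuery);
  }, delay);
}

scheduleNextDecoy(() => {
  // In the real extension this would be a plausible search against one of your
  // chosen engines; here it's just a placeholder.
  console.log("decoy query sent at", new Date().toISOString());
});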

I’ve never managed to get the RSS Feed field to accept my own list of feeds, unfortunately … the extension just stops working whenever I try — this may, however, be more to do with the feeds than with TrackMeNot. I have, on the other hand, successfully added words to the Use list, making my searches more plausible.

So, you might want to think about it — like I said, it might be a waste of time, but it doesn’t seem to hurt either. If you’re on a mobile (or capped) data plan, however, you might want to think twice.

Firefox Multi-Account Containers and Temporary Containers

These are a significant step you can take in improving your privacy — they isolate different sites from each other (and, with careful usage, even pages on the same site), preventing them from tracking your movements around the Web.

I already wrote a detailed ‘howto’ …

… so, I won’t repeat myself here.

The only thing to be aware of is that, if you open a link by right-clicking it, there is the danger of opening it in the same container, because the first menu option is to do so — so, unless you want to open a new tab in the same container (like another Medium article) it’s more reliable to either click links normally (which either opens them in the same tab if the target is in the same domain, or in a new tab and container if in a different domain) … or CTRL+click (which achieves the same in a background tab).

One other useful effect of Temporary Containers is that it solves the problem I mentioned in the caveat to HTTPZ — because opening a new container opens the page in a new context, HTTPZ makes a new attempt to establish an https connection.

[A Note About Other Containers and Related Extensions]

If you search https://addons.mozilla.org for the word ‘container’, you will find a lot of very interesting and useful looking extensions listed in the results, many of which I would only too happily install myself, except …

Some of them haven’t been updated in a disturbingly long time and I wouldn’t, therefore, trust them to still function properly.

Some of them seem like a good idea until you stop and think about them.

An ‘unsafe container’, for instance, that places all your web activity in a container named “unsafe container” and allows you to move it afterwards.

Let’s just think about that for a moment.

If I have Firefox Multi-Account Containers installed, the only things that will end up in containers are the things I explicitly put in containers — either by setting them up to automatically open in a specific container every time, or else by right-clicking on a link and choosing to open it in one from the list. Everything else will not be. As a result, the browser outside those containers is already effectively a single unsafe container. Moreover, how does putting everything in one container (where every site that reaches into it can access all the data from every other site in the container) aid privacy and security? That’s right, it doesn’t.

So, all this extension will be doing is slowing my browser down whilst it redundantly puts pages into a container that does precisely what Firefox already does without it.

Try not to hurt yourself facepalming.

Some may be useful in and of themselves but not to us: whilst an extension to open bookmarks in containers might be useful if you are relying solely on Firefox Multi-Account Containers (and have not set sites to always open in a default container), our setup is such that any site we’d be likely to bookmark (but don’t because it’s a privacy concern) would automatically open in a default container that we set up, or else in a temporary container anyway, so it is surplus to our needs.

There are many others that seem to offer a desirable functionality but really don’t — unless you are a developer looking to test whether your container extension is working properly … which is a bit of a tautology.

Every time I investigate, I find that … with very few exceptions indeed (like the ones to isolate Facebook, Google, Twitter, et al) … my privacy needs are more than adequately met by Firefox Multi-Account Containers and Temporary Containers — and even the few exceptions wouldn’t meet any current needs.¹⁹

Imagine you are viewing a page on www.a.com in container 1.

All links to www.a.com/<something.html> open in container 1, irrespective of whether they are opened in the same tab or a new one.

As long as the links you click on are in a different domain, whenever you open them (whether in the foreground or background), Temporary Containers isolates them from each other, placing each newly opened site into its own container.

So, links to www.b.com will open in container 2, links to www.c.com in container 3 and so on.

Try it …

Open a tab, type in ‘wikipedia.org’ and press the [ENTER] key. Make a note of the container number. Click on the link to take you to the English language home page. If you click on a link on that page to another Wikipedia page, it will open in the same tab and container. If you hold down the [CTRL] key and click, the link is opened in a background tab, in the same container (take a look). If you right-click on a link on that page, you will be presented with the option to open it in a new tab in the same container (Open Link in New <number> Tab) or to Open Link in New Container Tab, which opens a sub-menu containing a list of all the containers (predefined and temporary) available to you.

Now open a link to an external domain and compare the container numbers. If you can find one on the Wikipedia page, open another link to the same external domain and you’ll see that it has opened the page in a new tab, but still in the same container as the previous link.

Now open a link to a second external domain and you’ll see that it has opened in yet another new container.

If you want to open a page in the same domain and don’t want to be tracked, right-click on the link, copy the address, open a new tab and paste the link.

Try it …

Right-click a link and select Copy Link Location. Open a new, blank tab, paste the address into the address bar and press the [ENTER] key (or select ‘Paste & Go’ from the context menu). You’ll note that the container number is different. And it doesn’t matter how often you do this — you can keep opening new tabs and going to the same URL until your browser crashes … each time you do, you will find yourself in a new container.

So, I’m not even sure I need to worry about Google, et al tracking me, because I can use exactly the same technique to prevent them from following me between temporary containers and ClearURLs ensures I don’t take the tracking code with me when I copy the link address.

Not only that but, given AdNauseam’s blocking lists as well (see below), it does sort of feel a bit like wearing a belt, braces and a second pair of trousers (pants) too … and, despite carefully configuring things to minimise conflict due to function overlap, all these things doing their own thing without regard for the others does run the risk of overloading the browser.

You may, however, find the above approach inconvenient — I did warn you that you’d come to the wrong place if convenience was what you were looking for, though.

So, how can you make things a little less inconvenient?

I used to use Twitter Disconnect and Google Disconnect as set-and-forget options. As time went by, however, they were no longer supported (long before Firefox Quantum arrived). Disconnect for Facebook continued to be supported and even saw a new lease of life as a WebEx extension but, unfortunately, it hasn’t been updated in over three years now and I can’t recommend it. Although it still works, inasmuch as it can, it’s just too out-of-date and won’t have kept up with all the additional domains Facebook have added to their stable in the meantime.

So, the next best thing would appear to be Facebook Container. As discussed above, I’m really not sure that it’s necessary but, if you’re concerned that you might forget to rigorously isolate things by hand and want an automated solution, you might consider it.

Naturally, I don’t use Facebook myself (I value my privacy) but, if I’ve correctly understood how it functions, the side effect of using it has somewhat the same end result: because Facebook can only be logged into within the container, embedded Facebook comments and ‘like’ buttons in tab/windows outside the container will not work, preventing Facebook from tracking your activity across sites and pages.

Be aware though that websites that allow you to create/log-into an account using your Facebook credentials (like Medium, for instance) will not work properly. But you won’t be bothered about that, because … as someone concerned about privacy …

  1. you wouldn’t dream of being so foolish as to log in to anything with your Facebook credentials.
  2. you aren’t foolish enough to use Facebook in the first place and don’t, therefore, have any Facebook credentials with which to log in anyway.

There are, as mentioned, other extensions you might consider (Google/Twitter/Amazon/Reddit/YouTube/Microsoft/Tumblr, to name just the ones in the first page of search results).

But take a good look at them before you decide to install them:

When were they last updated?
How many users are there?
How many reviews?
What kind of reviews? (blank/fawning, overwhelmingly negative, balanced, containing information/bug reports …)

When was the most recent review posted?
How actively does the developer engage with users?
What is the tone of their intervention (Un/Willing to accept criticism, un/willing to support users, informational/short on detail)?

Does the developer respond quickly to bug reports or do they take their time and only update intermittently?

If the answers to the above questions are largely poor (not updated in a long time, few users, few reviews, innumerable/mostly blank/fawning five-star reviews, nobody has reviewed in a long time, developer interacts infrequently and/or poorly with users and doesn’t update with any rapidity, not even in response to critical bugs), don’t risk it. It’s a mixture of judgement and intuition but always err on the side of caution, especially if you can’t quite put your finger on it but something about it seems ‘off’ — you’re thinking about giving someone you don’t know intimate access to your browser and data, potentially allowing them to exploit them, lifting passwords and other personal information as a result.

Some of them not only state that they are a fork of Facebook Container but have a word-for-word identical description, only with ‘Facebook’ replaced by ‘Google’, ‘Amazon’, or whatever. Normally, I wouldn’t be concerned by that. It makes sense, after all, for someone not to reinvent the wheel, does it not? Except that that’s precisely what they are doing here, as it were, so, if they haven’t bothered to do more than a cosmetic transformation of the description, have they done more than that for the containers themselves? The principles behind the technology will remain identical, but the details (and, I suspect, there will be many details) will vary based upon how the thing the container is meant to capture functions. And Facebook will no more willingly have given away its ‘like button’ technology to Google than Google will have given away the secrets of its search engine to Amazon, say. So, it’s unlikely to be a case of “This is how Facebook ‘like’ buttons work, so the same tricks will defeat Google”, is it? I suspect they might handle the most obvious things (like ‘Log in with your <Google/Twitter/whatever> account’), but I’m dubious about more than that — particularly as, if you read the reviews, you’ll find some pretty detailed bug reports indicating they don’t in fact work properly for various reasons, to which there is no developer response.

AdNauseam

Not just an ad-blocker.

Admit it, this is really what you thought you came here for, isn’t it? Well, if the only thing you take away from this is which ad-blocker to use, make sure it’s this one.

Google won’t list this, so, unless you’re using Firefox, your only other option is to go to the developer’s site, download it and sideload it into your browser. Which I don’t recommend, because

  1. if you’re concerned about privacy, you shouldn’t be using Chrome or anything like it anyway.
  2. if you are the kind of person who sideloads extensions into their browser then … unless you’re sufficiently technically knowledgeable to not need to read this article in the first place … you’re going to end up in trouble and I’m not encouraging you down that path.

The significant things about this extension are

  1. It’s built on uBlock Origin — which has one of the best ad/content blocker pedigrees there are (if not the best).
  2. It’s ethical — whilst you won’t see any ads you don’t want to, not only will you not be tracked by advertisers nor profiled by analytics, but you won’t necessarily be depriving website owners of their income²⁰ … and when you do, you do so in a manner that will (hopefully) encourage them to change that themselves.
  3. More than simply blocking things, it actively obfuscates your behaviour and makes it harder to profile you by, in the background, sending a click event to every ad on the pages you visit.

Once you’ve installed it, click on the icon on the toolbar.

In the bottom-left corner, you’ll see a μ symbol …

… which will open a panel with a few (uBlock Origin) tools that allow you to perform some general actions on the page — I don’t have any use for these tools myself (they’re not what I use it for), but you might want to take a look.

For our purposes here, we want to focus on the ad/tracker blocking/obfuscation, so click on the Settings button.

(If necessary) scroll to the bottom and check the box I am an advanced user.

If it isn’t enabled by default, toggle HIDE ADS to the right — it will turn blue, indicating that it is enabled.

Decide whether you want to see non-tracking ads or not and, if you do, check the box Don’t hide non-tracking Ads. One positive aspect of doing so is that website owners will notice that the non-tracking ads were viewed, which should (hopefully) encourage them to give more space to non-tracking ads and less to tracking ads. Another is that it rewards ethical advertisers, encouraging them to remain so.

Decide whether you want to block large media elements and what size they should be — if you’re on a sufficiently fast broadband connection with unlimited bandwidth, you needn’t worry, but you might want to if you face restrictions on either.

Decide whether you want to block remote fonts. Generally speaking, this is a good idea for the same reason we installed Decentraleyes, but you may find that certain sites (like my webmail service, for instance) become more inconvenient to use because, instead of images for icons, they use remote fonts which, when blocked, are replaced with those placeholder glyphs oddly reminiscent of domino tiles … meaning you can’t see what their function is at a glance but have to hover your mouse over them and squint at the alt text instead — it’s not a dealbreaker, but it does get tiresome after a while, so it’s up to you.

Toggle the CLICK ADS button to the on/enabled position, if it isn’t already, and select the frequency/probability with which it will click ads. Set it to Always to generate the maximum ‘noise’ on your profile, rendering it useless … or choose one of the other settings to create a potentially more realistic one — do you want to hide or wear a disguise?

Decide whether you want to click non-tracking ads. A positive aspect of checking the Don’t click non-tracking Ads option, is that it rewards ethical advertisers who respect DNT requests but, as we aren’t sending DNT requests for the reasons discussed above, it’s academic in this instance — you might, however, decide that you want to do this, in which case, you’ll need to enable DNT requests in the Enhanced Tracking Protection under Privacy & Security in Firefox’s settings.

In the EXTRA PRIVACY section, check all three (at the time of this writing) options

Pre-fetching is designed to speed browsing by making background requests for pages before they are actively requested by the viewer: “If you’re reading this page,” the web developer thinks, “it’s likely you’ll click on one of the links on it.” So they preload them for you. But it’s a security and privacy risk thanks to the potential downloading of pages from un-requested sites and drive-by downloads — so, no, we don’t want that, thank you!
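In case you are wondering what pre-fetching looks like from the page’s side, it is no more than a hint dropped into the markup, which the browser is then free to act on without any click from you (the URL below is hypothetical):

// A prefetch hint: if the browser honours it, the resource is fetched in the
// background before you've asked for anything, cookies, server logs and all.
const hint = document.createElement("link");
hint.rel = "prefetch";
hint.href = "https://third-party.example/next-page.html";
document.head.appendChild(hint);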

Hyperlink auditing? No, thanks!

What’s the difference between this and normal tracking?

Some of you might argue that there are other ways to track where we go and what we click. And you would be right. But these other methods use JavaScript, and browser users can choose whether they allow scripts to run or not. Hyperlink auditing does not give users this choice. If the browser allows it, it will work.

Which browsers allow link auditing?

Almost every browser allows hyperlink tracking, but until now they offered an option to disable it. However, now major browsers are removing the option for their users to disallow hyperlink auditing.

As of press time, Chrome, Edge, Opera, and Safari already allow link auditing by default and offer no option to disable it. Firefox has plans to follow suit in the near future, which is surprising as Firefox is one of the few browsers that has it disabled by default. Firefox users can check the setting for hyperlink auditing under about:config > browser.send_pings.
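For reference, hyperlink auditing is nothing more exotic than the ping attribute on a link: if the browser allows pings, following the link also notifies every URL listed in that attribute, no JavaScript required (the tracker URL below is hypothetical):

// When a ping-enabled browser follows this link, it also fires a background
// request to the ping URL, telling the tracker which link you clicked and where.
const link = document.createElement("a");
link.href = "https://example.com/interesting-article";
link.ping = "https://tracker.example/collect";
link.textContent = "Interesting article";
document.body.appendChild(link);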

Prevent my browser from leaking my IP address via WebRTC? Yes, please — if I’m using a VPN, it’s to hide my IP address, not give it away!
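Here is a minimal sketch of the classic leak, so you can see why this one matters: merely gathering ICE candidates can hand your local (and, via STUN, public) IP addresses to any script on the page, VPN or no VPN, unless the browser blocks or masks the candidates (the STUN server shown is simply a commonly cited public one):

// Create a throwaway peer connection and read the IP addresses out of the ICE
// candidates it gathers. No call is ever made; the addresses leak anyway.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
});
pc.createDataChannel("probe");
pc.onicecandidate = (event) => {
  if (event.candidate) {
    console.log(event.candidate.candidate); // candidate strings contain IP addresses
  }
};
pc.createOffer().then((offer) => pc.setLocalDescription(offer));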

Select whatever Interface options tickle your fancy.

As far as the FILTER LISTS section is concerned, a picture speaks a thousand words, as they say, so …

In order to select specific filter lists, you click on the ‘+’ to the left of the section title (e.g. MALWARE) and check the ones you want.

There are many more than the ones in the image above and you may be tempted to check them all.

Don’t.

First of all, if you check them all, you will soon find that your browser freezes every time you open or refresh a page as all the lists are applied, and the page sanitisation scripts run, one after the other, before it is loaded and rendered — and, by the time you’ve finished following this guide, you’re going to have more than enough reasons to hate me, yourself and browsing the Web, believe me … you don’t need more. Moreover, it’s redundant — there’s really no point checking for the same 84,000 items for the fiftieth time in a row before loading a page. The ones I’ve picked are pretty much the standard items everyone picks. The reason for this is that, in a way, it’s a popularity contest: the lists that are the most popular are the ones that get updated most frequently, because there’s a demand for them — so, if I want my lists to be up-to-date, I run with the crowd. But read around and decide which ones best suit your needs.

Secondly, we’ll be managing a lot of things manually later on and, by default, blocking everything that we don’t explicitly allow anyway — using these lists is just a way to reduce the number of items we have to examine before deciding which ones to allow … (I don’t know about you, but I really don’t want to have to scan through some 100,000 items each time, to find the needles in the haystack that are the things I do want to see in my browser) … and to reduce the risk that we will inadvertently allow something we shouldn’t.

I haven’t created any filters of my own, so we’ll skip the MY FILTERS section — if you know what you’re doing, however, it’s pretty self-explanatory what you need to do here.

I like to keep MY RULES clean, because not only do I not want there to be redundant application of the same rules by different extensions (slowing my browser to a crawl) but I also don’t want to have to hunt through multiple extensions in order to track down where a problem might lie. The best approach is KISS (Keep It Simple, Stupid): do one thing only and do it well. So, as we’ll be handling our explicit element blocking/unblocking with a different extension (uMatrix), we want the most basic ruleset here …

If you find you need to edit this, just copy and paste the below into a text file with a sensible title (e.g. my-AdNauseam-permanent-rules_2020–05–31_10.45.53.txt) …

no-large-media: behind-the-scene false
behind-the-scene * * noop
behind-the-scene * 1p-script noop
behind-the-scene * 3p noop
behind-the-scene * 3p-frame noop
behind-the-scene * 3p-script noop
behind-the-scene * image noop
behind-the-scene * inline-script noop

Then clear the temporary rules, import the text file and commit the new temporary rules to the permanent ruleset.

I don’t whitelist anything — sure, it might be fine now … but what about later, when it’s been compromised?

So, the defaults will do fine …

This is one area where the interface is a bit lacking and you can’t expand the box.

The full list is …

about-scheme
chrome-extension-scheme
chrome-scheme
moz-extension-scheme
opera-scheme
vivaldi-scheme
wyciwyg-scheme

As far as we’re concerned here, most of it is redundant anyway … simply there as a default list so that AdNauseam automatically ignores the configuration API of the browsers it is supported by (you don’t want an extension to prevent you from configuring your browser) and I only mention it to make you aware that, if you have a need to whitelist particular (typically employer/client/customer) sites and bathe in lurid advertising, this is where you do it — who am I to judge your degenerate peccadillos, and censor your knowledge, after all?

It’s largely a set-and-forget extension — most of the time, it simply works in the background, silently blocking undesirable elements on pages.

Occasionally, however, it will become very visible …

I don’t recommend unblocking such pages even temporarily — they’ve been blocked for a reason and unblocking them defeats the purpose of using AdNauseam in the first place.

Chameleon

This is a WebEx port of the Random Agent Spoofer extension from pre-Quantum days — it’s not a part of the Recommended Extensions Program but, again, after investigation, I’m pretty confident it’s aboveboard (Random Agent Spoofer certainly seemed to be).

It’s simultaneously

  1. easy to use
  2. complex, requiring careful configuration
  3. one of the extensions most likely to ruin your browsing experience
  4. one of the most useful in terms of improving privacy

Hovering the mouse over the toolbar icon will pop up a box displaying what your browser is currently pretending to be — in this instance, websites will be told I’m browsing them with Chrome v.81 on a phone running iOS 11.
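To be clear about what is being spoofed: any script on a page can read your ‘profile’ with a handful of one-liners, and those are the values Chameleon feeds false answers to (the commented values are merely examples):

// The low-hanging fruit a fingerprinting script reads; with Chameleon active,
// these report the fake profile rather than your real one.
console.log(navigator.userAgent);         // e.g. an iOS/Chrome string instead of your real browser
console.log(navigator.platform);          // e.g. "iPhone"
console.log(screen.width, screen.height); // needs to be consistent with the claimed device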

Click on the toolbar icon to configure it.

Click on the globe icon on the left

Decide whether you want it to report

  1. the (browser and operating platform) profile you are using.
  2. a random profile.
  3. a random desktop profile.
  4. a random mobile profile.

For maximum obfuscation, select Random, which will pick a random platform, operating system/version and browser/version.

Decide whether you want your profile to change periodically and, if so, the frequency …

It might seem to be obvious that the maximum privacy will be obtained by randomising the profile, but stop and think for a moment.

If you don’t linger long on any sites then a different profile every few minutes is fine for obscuring how you’re browsing them.

If, however, you spend longer on sites, then you have to consider that someone suddenly changing their entire profile in the middle of a session is so unlikely as to draw attention.

Doing so with a frequency of once per hour is unusual enough, let alone any more frequently than that. People don’t tend to switch devices every hour, never mind every minute … they pick a device and stick with it.

If they do start a new browsing session with a different device/profile, it also tends to be the case that they do not log into the same sites but have chosen the new device/profile specifically so that they can browse different things whilst leaving the previous session available for when they want to return to it. They might move from a desktop profile to a mobile profile because their activities dictate a need to do so if they wish to continue where they left off. They might do the opposite after returning to ‘base’ after being on the move. But even then, they tend to use a single profile for each platform. You don’t see many people log into Facebook on an Android 6 device, using Firefox v.76, suddenly switch to Chrome v.81 on an iPhone running iOS 11, then switch to Internet Explorer v.11 on Windows 8.1, then to Safari v.13 on MacOS 10.15 … ever … never mind on the hour every hour (let alone all within the space of an hour) — and that’s without factoring in that they seemingly did it without logging out of Facebook whilst doing so!

In fact, you don’t even see people logging in to a site and switching versions of the same browser on the same platform every hour, let alone more frequently.

Additionally, if there are multiple devices in use in your residence, they tend to be used by different people, so, again, it’s extremely unusual for the same user to log in to a site and then change to one of the other devices, even if you’re careful to limit the number of different profiles you use — if there’s an Android device in your home and an iPhone, then their users will tend to consistently log into sites with different user accounts, not share the same account.

And, remember, Location Guard … whether you’ve chosen a fake location or simply added noise to your real one … is broadcasting a fixed location — as far as the websites you are browsing are concerned, you’ve switched devices, but you’re not on the move.

As far as IP based geolocation is concerned, you haven’t left the general area.

As far as an ISP is concerned, you’re wherever you last connected from, not on the move.

If you’re using a data connection then, as far as your service provider is concerned, you‘re not on the move.

Equally, over time, you don’t tend to suddenly change your profile either. You might have a couple of PCs/laptops at home, running different versions of Windows, maybe a couple of different mobile devices in your home, possibly an iPad, maybe a Mac, who knows … but you probably don’t buy new ones even every year, never mind every week (forget every day, it doesn’t happen).

On top of all that, you can find that features suddenly stop working — if I log into Medium with a mobile profile, for instance, I can’t create new stories or reply to things, because you can only do that on a mobile platform if you use their app.

So, choose the frequency carefully, bearing in mind your activities and what you’re trying to achieve. If you’re just browsing around then random profile changes are probably fine — a different profile for each site can improve privacy. But, if you’re logged in to a site … or browsing somewhere like Amazon (even if not logged in) … their analytics will quickly detect that something is amiss and that might have undesirable consequences — at best, it might arouse suspicion, at worst it might result in a suspension of your access to the site, ‘due to suspicious activity’.

For those reasons, I prefer not to randomly change my profile but let Chameleon pick one at random when I launch Firefox and leave it at that until such time as I feel it appropriate (and ‘safe’, as it were) to change it — i.e. when I’ve logged out of one account before logging in to another … or have logged out of all accounts and am simply browsing.

You can do this by clicking the Change button on Chameleon’s home screen …

… which will generate a new, random profile.

You can limit the pool of profiles from which Chameleon chooses by selecting either desktop or mobile profiles only and/or excluding certain options — click on each of the operating system labels and check the boxes of the combinations you don’t want.

Again, bear in mind the above discussion and ask yourself how many different profiles (and which ones) will achieve your aim(s). If you want to give the impression that you live in a nice suburban community, as part of a nuclear family with the average two cars and 1.4 children … rather than a crackhouse in the wrong part of town, with constant comings and goings … or you want to disguise the fact that you are a single female, living alone and potentially vulnerable … so you decide to pretend there are three different Android devices, an iPhone, a Mac and two PCs/laptops in your residence … it’s probably wise to pick four phones you know work for most (if not all) sites, one Mac and two Windows profiles (or one Windows and one Linux) … and exclude the others.

This will also help, if you decide that you’ll always log into certain accounts with the same profile (or maybe one mobile and one desktop) — it’s much easier to cycle through four-to-eight profiles with the Change button than it is through all of them.

As I mentioned above, there will be some sites that will refuse to work 100% with certain profiles (like Medium and mobile browsers).

Others will refuse to work with a particular profile altogether. If you’re old enough to remember the bad old days when, depending on the site, it would ‘work best with’ certain browsers, you’ll know what I’m talking about when I warn you that differences in browser technology can render certain features inoperative: you’re using Firefox on Linux, but the site you’re connected to thinks you’re using Chrome on Windows 8.1 and it isn’t simply that the differences between the two are sufficient that things don’t work properly, the site actually tries to deliver you a different webpage and/or in a different way … ‘to optimise your experience’ … rather than sticking to universally agreed standards — it’s almost enough to make you wonder what the point of the W3C is </sarcasm>

So, you might find you need to engage in a bit of trial and error before finding suitable profiles for the sites you log into but, once you have, it’s almost certainly best to remain consistent — if you log into Medium with a Windows 8.1/Edge 8.1 profile and find it works then stick with that until such time as there’s a new version of Edge available … or you might switch to a different combination that works (say Windows 10/Edge 8.1) after a while but, if you do, stick with that for a while afterwards (remember, abnormally frequent change can arouse suspicion).

What you should also do is make sure to exclude whichever profile you are really using: if you’re on Windows 10/Firefox 76, exclude that profile — you don’t maintain your privacy by telling websites what you’re really using. If you can, exclude all profiles that include Firefox — unless you’re really unfortunate, you’ll find most sites will continue to work … and for those few that don’t, you can always add them to the whitelist, if you can’t find a fake profile that works.

Configure the Headers section as below. I don’t disable the referrer, because I need it to keep things working properly within domains for those sites I log into, so I reduce the quality of the data by setting the Referer X Origin Policy and Referer Trimming Policy instead. Furthermore, it draws less attention than completely withholding all information.

Note that, whilst the Referer X Origin Policy here will prevent the site you’re visiting from knowing where you came from, it won’t prevent it tracking you to an external site, if you click on a link to that site within it — the only way to prevent that is to copy the link address (by right-clicking on it and selecting Copy Link Location) and then pasting the link into a new tab or window (ClearURLs should see to it that tracking elements are stripped out, but, if not, simply edit the URL yourself before going to the new site).
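A quick sketch of what trimming changes, with hypothetical URLs: the destination still receives a Referer header (so in-domain navigation keeps working), but the revealing path and query detail is gone. Pages can ask for something similar themselves via the standard referrer policy mechanism; the extension simply imposes the restriction from your side instead.

// Untrimmed: Referer: https://example.com/account/settings?user=alice
// Trimmed:   Referer: https://example.com/
//
// The same idea, expressed by a page for one of its own requests:
fetch("https://example.com/api", { referrerPolicy: "origin" }); // send the origin only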

Configure the Options/Injection section at least as below, but …

… you might consider blocking media devices (webcam/microphone) to prevent websites detecting these. For general browsing (such as Medium), I use a laptop with no webcam and a disabled microphone, so I haven’t bothered to do so here, but it is a consideration. As previously discussed, there really should be no reason for most people to ever allow a website access to these devices. There are dedicated tools available for videoconferencing, etc. and really only those with very specific reasons would have any call to leave them enabled in their web browser. Moreover, even though we told Firefox to refuse requests to access them (so theoretically no harm can come) leaving them activated within your browser is still not a good idea from a security/privacy perspective — who knows when a zero-day exploit will ignore that setting?

However, later on I will discuss another anti-fingerprinting tool that spoofs the audio API (CanvasBlocker) and that is, for me at least, preferable in terms of privacy to simply disabling it outright. I prefer to feed sites plausible misinformation rather than block things outright: it’s not only less obvious but, furthermore, it creates a cloud of inaccurate data that hampers attempts to track me in a way that simply withholding data (and thereby encouraging trackers to focus on the data they do have) cannot (sometimes more, not less, is more) … and blocking the media interfaces with Chameleon would prevent that. Additionally, if I’m spoofing with one extension and blocking with another, there’s a conflict between the two which could potentially result in neither of them working, so I prefer to leave one of them to do its job and the other to do nothing in that area. You might want to consider it, however — it depends upon whether you decide to make use of the other extensions as well.
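For context, audio fingerprinting does not need your microphone at all; it renders a signal through the audio stack offline (nothing is ever played) and measures the tiny numerical differences your hardware/driver/browser combination produces. Here is a minimal sketch of the widely documented technique (not CanvasBlocker’s code, which is concerned with spoofing the results):

// Render a short oscillator offline and sum a slice of the output samples;
// the total differs subtly between machines, which is the 'fingerprint'.
const offlineCtx = new OfflineAudioContext(1, 44100, 44100);
const oscillator = offlineCtx.createOscillator();
oscillator.type = "triangle";
oscillator.frequency.value = 10000;

const compressor = offlineCtx.createDynamicsCompressor();
oscillator.connect(compressor);
compressor.connect(offlineCtx.destination);
oscillator.start(0);

offlineCtx.startRendering().then((buffer) => {
  const samples = buffer.getChannelData(0);
  let sum = 0;
  for (let i = 4500; i < 5000; i++) {
    sum += Math.abs(samples[i]);
  }
  console.log("audio fingerprint:", sum);
});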

I don’t limit my tab history because I prefer to have hands-on control of the values I report as far as possible, so I spoof my history with CanvasBlocker instead. You can do it here instead, however, if you aren’t bothered what the value will be.

Setting a keyboard fingerprint delay of greater than 1 (31ms) causes unacceptable lag on my system (I’m impatient, yes), but you’re not me, so experiment with it until you find a setting that works for you.

I don’t protect the window name with Chameleon because (like the audio API), I use CanvasBlocker to protect it.

Again, CanvasBlocker protects my audio API, so I don’t spoof the audio context with Chameleon.

I’m experimenting with spoofing the font fingerprint right now. I’m not sure why I wasn’t doing so before. I recently rebuilt my system and it may have simply been an oversight. That would be untypical of me though: when it comes to these things I usually err on the side of paranoia until I’m sure there’s an overriding reason not to, so I may have had a good reason not to do this with Chameleon that has simply slipped my mind in the meantime — if I find there’s a good reason to uncheck that option, I’ll update this section.

The screen size is set to default because I spoof it with CanvasBlocker, so there’s no need to do it here. But, if you decide not to use CanvasBlocker then here is the place to do it instead.

I’m spoofing my timezone, I’m just not prepared to publish that value on the Internet for all the World to see which timezone I’m not in — that would defeat the point of spoofing it.

In the Options/Standard section, as discussed, I don’t enable FPI because it interferes with the function of other extensions. Likewise, I don’t enable the Resist Fingerprinting option (which would defeat CanvasBlocker’s function).

If you recall, when configuring AdNauseam, we opted to disable WebRTC.

I also mentioned, above, how using multiple extensions to perform conflicting actions could result in neither of them working.

Well, we don’t want a conflict to arise between disabling it in AdNauseam and not disabling it with Chameleon, so it’s checked here as well.

Tracking Protection Mode is off because we’re managing this with other extensions, including AdNauseam (and Decentraleyes in its way) … will be explicitly handling others with another extension (uMatrix) later … and doing it here as well would simply lead to the aforementioned browser slowdown induced by redundant re-checking of things that have already been dealt with — whether Chameleon manages any trackers the others don’t isn’t clear from the FAQ, so I err on the side of caution.

Until such time as I find I need them (and I haven’t yet), I block all third-party websockets. Like service workers, websockets are boringly technical … (read the introduction on Wikipedia, you’ll see what I mean) … and I don’t want to bore you any more than I already have. All you really need to know is that, “[u]nlike regular cross-domain HTTP requests, WebSocket requests are not restricted by the Same-origin policy” — which is enough to have me reach for my third-party-websocket blocker faster than you can say “Wait … electric spiders? What?”²¹
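To see why that is worth a blanket block, here is a minimal sketch (with a hypothetical tracker URL): any page you have allowed to run script can open a connection like this to a completely unrelated host and stream data to it in real time, with none of the same-origin restrictions that rein in ordinary cross-domain requests.

// Open a third-party websocket and phone home; the same-origin policy does not
// stand in the way of establishing this connection.
const socket = new WebSocket("wss://tracker.example/beacon");
socket.addEventListener("open", () => {
  socket.send(JSON.stringify({ page: location.href, time: Date.now() }));
});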

We will set the base cookie policy with Chameleon because, as you’ll notice, there’s no way not to — and, as you already know, having conflicting settings across extensions and/or the browser is a bad idea, so there’s no option but to do it here.

As it happens, if you now go and look in your Firefox preferences, under Privacy & Security, you’ll note that it’s official … you have handed control of the base cookie management policy to Chameleon …

Muahahahaha! All your base cookie management are belong to Chameleon! Tremble, mortal!

… which is why we set it here — even if we had simply left it untouched, it would have overridden whatever policy we had previously set, so it’s best to set it here explicitly.

In any event, we won’t check the box to delete cookies and site data after the window is closed. It’s nice from a security/privacy perspective, but

  1. we’ll be handling them with another extension (uMatrix) anyway
  2. I occasionally accidentally shut the wrong window, and it would be really annoying to have to go and log in again, re-open my email, click the link that’s (hopefully) been sent to me and try to get back to where I was … when I could just go to my history for the session, reopen the closed window with all its tabs and cookies intact and pick up where I left off when I had my aneurysm.

Last, but not least, a disaster waiting to happen …

Add a context menu item to whitelist the entire domain? Are they mad!?

Do. Not. Touch. That. Checkbox!

Unless, for some bizarre reason it’s already checked — in which case, uncheck it now!

Seriously, if you ever have a major mental breakdown … a psychotic break that leaves you in a fugue state, with no idea whether Thursday is above or below you, or how many minutes there are in an orange, unable to remember what triangles taste like and you can’t think of a reason why you shouldn’t whitelist a site … you can still do it manually from within Chameleon!

Add a context menu item to whitelist the entire domain? In that long list of other options that have crept in over the years and that your mouse all too frequently slips over, clicking the wrong thing and … whoops … what did I just click on?

The Simpsons — Itchy & Scratchy Land

Anyway …

The Whitelist section is very useful.

Remember the earlier discussion about not changing your profile too frequently when using sites you log into and/or spend a lot of time on, in order not to arouse suspicion?

This is where you resolve that problem.

The previous discussion still stands … the points are still valid … but, you use the whitelist to always open a specific URL with a specific profile, thereby circumventing the problem.

Rather than re-invent the wheel here, however, I’ll just link to the FAQ entry for it — there’s nothing I can add to the discussion at this stage that I haven’t already said and it’s just a technical matter of how to configure things, not what and why.

An alternative to Chameleon is Random User Agent.

It does pretty much the same thing as Chameleon, in pretty much the same way, and I used it for a while after Firefox Quantum was released and Random Agent Spoofer wouldn’t work with it.

It’s pretty good (definitely adequate for performing the task of spoofing your profile) and one advantage it has is that, instead of having to exclude all the profiles you don’t want, you simply include the ones you do (which is quicker, easier and, to my mind, at least, more logical).

When I noticed that Chameleon was available, I installed that in a separate instance of portable Firefox and ran them both in tandem for a while.

I can’t remember now exactly why I ended up favouring Chameleon; possibly simply inertia, as it became tedious to keep switching between the two instances to see if I really felt more strongly in favour of Random User Agent after all and I simply ended up using Chameleon by default — I do seem to recall having a sense of RUA being better in some ways, but I can’t remember why any more.

You might want to give it a look though and see if you prefer it to Chameleon — as long as it does as good a job of improving your privacy and security, there’s (possibly? probably?) no reason why you shouldn’t.

NoScript

If NoScript isn’t the most famous web browser extension ever, it has to be in the top ten at the very least. It’s been around for years and was quite possibly the very first security extension of any serious worth — I certainly don’t recall noticing uBlock or any of the other, similar, extensions until much later.

It’s essential — so much so that it’s included with Torbrowser by default.

Its WebEx incarnation is clunkier than the old XUL version, but it still does the job you need it to do: block scripts, preventing them from doing anything at all until you explicitly authorise them, thus preventing you from harm that wasn’t self-inflicted — there’s nothing anyone can do to prevent you from authorising the ‘hijack-browser.js’ script, if that’s what you choose to do, but NoScript will prevent it from running until you do.

So, if you don’t want (amongst far too many others) google-analytics or adsense scripts running riot in your browser whilst you allow the scripts from the site you’re actually visiting to perform their (probably) genuinely useful task of making the site work properly, NoScript is a godsend and you should install it now. No ifs, no buts, just do it.

Remember that remark I made about not proclaiming my pronouncements either gospel or not subject to change and there was a footnote humorously implying that there might be exceptions to that rule? This is one of those exceptions — if you don’t install and make assiduous use of NoScript, you might as well do the whole dropping your underwear and bending over in the street thing … or use Chrome.

This pronouncement is gospel and not subject to change unless NoScript gets taken over by some entity that turns it into HereIsThePasswordToTheAdministratorAccountHelpYourselfToEverythingForeverAndWhilstYou’reHereLetMeGiveYouMyBankAccountPINMyFullNameDateOfBirthSocialSecurityNumberAndMother’sMaidenNameAsWellScript — that’s how important it is to install NoScript.

If you install nothing else, install NoScript.

It’s vital that you install NoScript.

Do it NOW!

In the Add-ons Manager, select the Extensions page and open NoScript’s preferences.

In the General tab, the first thing we will do is to uncheck every box — by default, we don’t want to let anything happen without our knowledge.

By default, NoScript allows everything through for trusted sites.

If you’re not versed in web design and development, this is where you will get your first insight into what kinds of things go on behind the scenes when you browse around … and why you might not want them to.

We’ll revisit this issue shortly, so, for now, leave the default settings.

In the Per-site Permissions tab, set all of the entries to Default — as said, we don’t want anything happening without our knowledge … and some of them are more than a little dubious as far as our privacy is concerned (at least with the default trust settings).

Don’t forget to scroll to the bottom of the list to catch them all.

Returning here later, you can see what you have authorised and in what manner — meaning you can clean up anything you don’t need/want by resetting it to Default if you forgot to unauthorise it previously (which is easily done when simply closing a tab or window in the middle of a session) … or whatever setting you consider appropriate under the circumstances (again, we’ll explore these choices later).

The choices you make in the Appearance tab are a matter of personal preference, but I recommend you uncheck the first two options and check List full addresses in the permissions popup.

Unchecking the context menu item saves frustration later when, as all too frequently happens, your mouse jumps and selects it instead of what you really wanted — it only opens the menu from the toolbar, so you’ll have to mouse over to interact with it anyway.

Unchecking the blocked items count saves distraction — you can see when there’s something to attend to thanks to the badge on the icon anyway … and you have to click on it in order to do so, so knowing how many there are in advance is redundant.

Listing the full addresses means we see more (important) detail.

Compare …

… with …

In the Advanced tab, check Sanitize cross-site suspicious requests.

Why?

Q: What is XSS and why should I care?
A: XSS stands for Cross site scripting, a web application vulnerability which allows the attacker to inject malicious code from a certain site into a different site, and can be used by an attacker to “impersonate” a different user or to steal valuable information. This kind of vulnerability has clear implications for NoScript users, because if a whitelisted site is vulnerable to a XSS attack, the attacker can actually run JavaScript code injecting it into the vulnerable site and thus bypassing the whitelist.
NoScript FAQ.
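To make that concrete, here is a deliberately broken, minimal sketch (a hypothetical Node endpoint, nothing to do with NoScript itself) of the kind of hole XSS exploits: the server echoes a query parameter straight into the page, so a crafted link runs attacker script in the context of a site you trust, which is exactly the sort of request the sanitisation option tries to intercept.

// BAD, for illustration only: the "q" parameter is interpolated into the HTML
// unescaped, so ?q=<script>...</script> executes in this (whitelisted) site's context.
import { createServer } from "node:http";

createServer((req, res) => {
  const q = new URL(req.url ?? "/", "http://localhost").searchParams.get("q") ?? "";
  res.setHeader("Content-Type", "text/html");
  res.end(`<h1>Results for ${q}</h1>`);
}).listen(8080);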

NoScript doesn’t prevent scripts from loading into your browser.

This is potentially a privacy concern. We installed Decentraleyes because we can be tracked by CDNs (which might very well, and often do, supply that information to the sites using them). Equally, when a script is requested by your browser, there is nothing stopping the host site from logging that request, and it would be unusual for it not to log at least some of that information … information to which we are not privy (so we cannot know whether we need be concerned or not). We can do something about this with another extension (uMatrix) later, so for now we won’t concern ourselves with it but focus on the benefit of NoScript: it prevents scripts from doing anything until we authorise them — until we authorise a script, any privacy or security risk it poses is contained.

To see the current rules NoScript is applying, click on the icon on the toolbar.

Here are the rules for the page I’m currently editing. I didn’t write this piece in strict order, so don’t worry if it looks daunting. In fact, those are all the things that are necessary for this page to display properly as I edit it, including things that follow this section — there are simply many that haven’t been authorised to run and are, instead, blocked by default.

Note that I have only temporarily trusted certain elements.

Given that Medium is a site I use regularly and inevitably end up having to trust certain elements of it (more about which below), why not simply trust them permanently?

Remember the earlier discussion about perfectly good extensions being sold on or the team maintaining them changing … the same thing happens to websites, or even (the companies that own) the companies developing and maintaining those websites — priorities and practices can change.

Now imagine, you’ve trusted a site, logged in and let everything through. Then, whilst browsing something else, you learn that just such a change to that site took place some time ago and you don’t like the ramifications. Well, it’s too late now — whatever damage might have been done has been done.

Now imagine that, instead, you discovered the information before you logged in. You’re in a position to review the elements you authorise before they can do any harm.

At the end of the day, trusting an element temporarily still allows it to do things and you’ve no control over what those things might be, only whether they do them — in that regard, temporarily trusting them is no different to doing so permanently. But I think it best to err on the side of caution … just in case I’m lucky and learn that I shouldn’t before it’s too late.

Regarding the specific elements you trust in the default Trusted policy, it is beyond the scope of this discussion to dig into the detail of how NoScript handles the individual elements in further depth. Moreover, doing so can interfere with the action of uMatrix (see below), resulting in confusion as things authorised by uMatrix continue to be blocked by NoScript … giving the appearance of one or other of them not functioning properly, when in fact, both are working exactly as they should.

However, when you have concerns that elements you allow through uMatrix may contain things you would rather not authorise … such as those contained within XHR object resources, for instance … using the Custom option in NoScript, rather than (temporarily) trusting them, can provide even greater control without the need to edit the default Trusted policy.

If we imagine that we have authorised medium.com to load XHR objects in uMatrix, before refreshing the page, we can select the Custom option in NoScript and authorise only those elements we wish.

Here, we have selected the Custom option for medium.com …

This will make more sense after reading about uMatrix.

Note that custom settings only last for the session and are reverted to the default values when the browser is next launched.

The things you can de/select are

  • Script — Any type of script the site attempts to execute.
  • Object — The HTML object tag.
  • Media — Media elements.
  • Frame — Frames that the site attempts to load.
  • Font — Font elements.
  • WebGL — WebGL elements.
  • Fetch — Requests that use fetch APIs.
  • Ping — A list of URLs to be notified if the user follows the hyperlink (potentially resulting in tracking).
  • Other — Unknown.

If you were going to use the custom settings to further limit things, then the most obvious ones that might be of concern (after scripts) would be WebGL and/or Fetch and/or Ping.

I said the list above might look daunting. For the purpose of this guide, I have unblacklisted the elements I would normally block by default. As time goes by and you decide which things you never want running in your browser (advertising networks, analytics and such), you can choose to block them permanently …

By blocking the base domain … in this case, doubleclick.net … you block all elements of it. Note how, after clicking away from NoScript and then re-opening the list, neither https://googleads.g.doubleclick.net nor https://static.doubleclick.net is listed …

<everything>.doubleclick.net is blocked by default on every site.

If you want to block only specific elements of a domain, you simply block the ones that concern you. Here, for instance, we are blocking https://googleads.g.doubleclick.net

… with the result that, subsequently, when we look at the list we see what we are both blocking and allowing from the domain …

In this instance, of course, we are not allowing anything from doubleclick.net, but you get the idea — if we wanted to allow scripts from https://static.doubleclick.net to run, we could authorise them specifically … or we could simply authorise the entire base domain and https://googleads.g.doubleclick.net would still be blocked.

Temporary authorisations remain active across all pages and sites for the session — if you close one page/site and browse to another, the decisions you made for the first will be applied to the new one. Bear that in mind, if you want to allow domains to run scripts on some sites but not on others. They will show up in NoScript’s list, where you can choose to block them again, or change the Custom choices, but they will have already run their scripts before then. The best thing to do, therefore, is to look in NoScript before browsing to another site (even one within the same domain) and restore the default before you do.

Generally, as long as you are careful about what you authorise, this isn’t an enormous problem, but it will allow sites to track you, if those scripts are present on other sites, so it’s worth keeping in mind — you may be happy to allow scripts to run on youtube.com when you watch something there, for instance, but you might not want Google knowing that you visited pages on another site (like Medium) on which the author(s) linked to videos you didn’t watch.

If you accidentally whitelist/blacklist a domain … or if you simply change your mind later … in the Add-ons Manager

  1. Select the Extensions page.
  2. Click on the ellipses to the right of the NoScript entry.
  3. Select Preferences (or Options) to open the Settings for NoScript.
  4. Select the Per-site Permissions tab.
  5. Select the permission you want to grant the domain.

[NOTE]

When you click away from the NoScript menu, it automatically reloads the page (from the cache), running any authorised scripts. It is better to take time to deliberate the ramifications of your choice(s) carefully beforehand, therefore — afterwards, it might be too late. Trackpads can be particularly sensitive, resulting in inadvertent cursor movement and clicks (be especially careful when using them).

uMatrix

If NoScript doesn’t make you hate me, uMatrix most definitely will.

It makes me hate me.

It makes me hate every single webdesigner in the entire World. It makes me hate Google, Facebook, Amazon and everyone else who turned the Web into the cesspool of Surveillance Capitalism it is today.

If there is one thing guaranteed to make my blood boil and me want to hurl my computer through the window and never go near the Web again, it’s uMatrix. There isn’t a day goes by on which it doesn’t make me utter a stream of invective that would make even the kids from South Park blush …

South Park — Kyle swears at the aliens

Multiple times a minute.

All day long.

One day, I swear, I’ll lose my cool altogether, have a mental breakdown and then …

Well, I don’t like to think really … but I suspect the words ‘spree’, ‘bloodbath’ and ‘web developers’ might feature prominently in a number of news stories around the globe — and the word ‘atrocities’ will probably have to be redefined to account for a level of wrath-induced violence the like of which the World has never before seen … not even in its foulest nightmares.

But I digress … where was I?

Oh, Lord …

Yes … uMatrix!

<sigh>

The problem is … you know how vital NoScript is … well, uMatrix is just as vital … possibly even more so, in fact — if I had to pick one (and only one) of them, I couldn’t swear, hand-on-heart, that it wouldn’t be uMatrix.

Without it, or something like it (say uBlock Origin), you’re in deep trouble vis-à-vis both security and privacy. And good though alternatives like uBlock might be, they don’t offer quite the same granular control or range as uMatrix. I used to swear by uBlock until I discovered uMatrix. They’re both by the same developer and he says himself that uMatrix is more powerful, so really, there’s no question that uMatrix is the one to choose — and that’s certainly borne out by my own experience as well.

But you will hate using it.

Every web page, on every site, becomes a Sisyphean nightmare of authorising elements and reloading the page only to discover that, as a result, there are new elements that (might) need to be authorised. And when you authorise those the same thing happens … again … and again … and again … until you just want to give up (or disable uMatrix).

Don’t … … …. (give up or disable uMatrix).

Because by Heaven, do you learn a lot along the way: about how poorly sites are designed … about how many of them rely upon third-party elements, that rely upon other third-party elements, that rely upon yet others … how so few designers/maintainers must actually have a clue what is going on on their sites because they are relying upon others to do the work for them and simply re-using it without (seemingly) really caring, let alone knowing, how it works. The article below reports on a story that highlights just how bad it has become …

You get to see how many sites rely upon content delivery networks and third-party resources (especially from Google) … how many include mountains of links to ad and analytics networks … and block them.

You see just how many sites are perfectly usable with no more than CSS and images loaded … and just how many of them refuse to work without Javascript, even though they could do so perfectly with just CSS and images.

You see that article about the node.js chaos I linked to above … well, take a look at this …

The only things missing, are the images.

The only things that don’t work are the hamburger menu (the three vertically stacked lines in the top-left corner) and the user-menu (the silhouette of a person’s head and chest in the top-right corner) — even the search function works.

What did I have to enable for this site to function so well?

Absolutely nothing — not even CSS!

And Quartz isn’t an amateur production making use of outmoded technology … (take a look at the source; you won’t see a single ‘<tr>’ or ‘<frameset>’ in it) … it’s a respected publication.

Compare it to other sites as you browse around with uMatrix enabled and blocking everything and you’ll quickly realise just how poorly designed they are … how much they rely upon code and resources supplied by third parties.

And, no, the defence that they are inherently more complex because they provide a more sophisticated user experience won’t wash. How sophisticated does a website need to be exactly?

It needs to present text (HTML), images (HTML) and (maybe) videos (HTML) with layout formatting (potentially, CSS) — HTML and (possibly) CSS then.

It needs to provide links (HTML) and (again, only possibly) menus (CSS).

The most complex thing it might need to do is provide formatted data (so, some Javascript/PHP/Python/Perl/SQL/whatever, but only server-side code is absolutely necessary for that) … login/authentication (the same applies).

Maybe some account facilities once a user has logged in (messaging/email/whathaveyou … but they don’t need to play any part in the page until I log in).

Maybe some widgets (like a Soundcloud player or whatever) — but that’s simply an HTML ‘<embed>’, not reams and reams of Javascript.

Maybe a slideshow — for which you don’t even need to embed a widget (you can achieve it with an HTML/CSS refresh).
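
To put a rough number on how little markup a widget like that actually needs, here’s a minimal sketch (the track URL is made up, and in practice such widgets tend to be an <iframe> rather than an <embed>, but the principle is the same) …

<!-- Illustration only: one element pulls in a hypothetical Soundcloud player; no Javascript of the embedding page’s own required -->
<iframe src="https://w.soundcloud.com/player/?url=https%3A//soundcloud.com/example-artist/example-track"
        width="100%" height="166" title="Example Soundcloud player"></iframe>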

All the so-called ‘extra value’ offered is, for the most part … for the casual browser who isn’t going to log in and make use of whatever whizzy features there are for account-holders, not try to create a ‘mashup’ site of their own, not collate snippets, not redesign the page on the fly … no value at all. And it comes at the cost of poor design, intrusive data collection/profiling and increased security risks.

And, unless you’re an experienced web developer, the only way you’ll learn all this, without following a course of (self) study, is by object lesson: watching how it’s done and trying to re-create it yourself — which, in a crude way, you do with uMatrix by trial-and-error authorisation of things until you learn what’s necessary on sites, what isn’t, and what it does either way. uMatrix provides an education as well as an important line of defence.

On top of all that, you notice trends.

Over the years … thanks to using various blockers (like NoScript, uBlock and uMatrix) … I’ve seen the rise of CDNs long before they became common knowledge, the rise of third-party frameworks and API resources (similarly, before most people even knew they existed, let alone what they were), the explosion in tracking and analytics … all simply by seeing what new things were suddenly appearing in my blockers — learning about the technology is not the same as seeing it explode all over the Internet.

You learn about what is going on in the World (because these things have a wider impact than simply the delivery of web pages and services) … like how little privacy you have on the Web — which leads you to investigate other dimensions of the phenomenon (such as facial recognition, in-store tracking, facial reconstruction from found DNA, deepfakes, vocal reconstruction, etc., etc., etc.)

So …

Like NoScript, uMatrix is an exceptional extension and my pronouncement that you have to install it NOW is gospel and not subject to change unless something very unfortunate happens in the Future.

[NOTE]

At this time, uMatrix must be the last extension of its type to be enabled, if it is to function correctly — therefore, after launching Firefox, and before browsing, disable it in the Extensions page of the Add-ons Manager and immediately re-enable it.

Just like NoScript, to see what the active policy is for the current page/site/domain, click on the icon on the toolbar.

‘Out of the box’, uMatrix is quite lenient with its blocking, allowing all first party elements … and CSS and images for all domains.

For Medium’s landing page, the default policy results in this …

Before even logging in, Medium has been allowed to

  • Set eight cookies
  • Load four CSS stylesheets
  • Load twenty-one images
  • Run eight scripts
  • Load four XHR objects
  • Load frames (none as it happens, but it’s still a potentially risky default)

… all of which is decidedly too lenient for me — in fact, I’d go so far as to say it isn’t simply lenient but lax. So, we’ll be locking things down a lot tighter before we’re done.
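
For reference, the matrix portion of uMatrix’s out-of-the-box ruleset amounts to something like the following (this is from memory, so check the My rules tab in your own installation; it may differ between versions) …

* * * block
* * css allow
* * image allow
* * frame block
* 1st-party * allow
* 1st-party frame allow

Reading from the top: block everything by default, then allow CSS and images from anywhere, block third-party frames and allow anything (frames included) from the first party. That is exactly the leniency described above.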

As you can see, uMatrix can block/allow rather more than NoScript‘s default operation and is more informative, in that it shows us how many scripts a page/site/domain will run, rather than simply that it will, if allowed to.

Furthermore, like NoScript, if two pages want to load different resources, you can see that by comparing them.

Naturally, my settings are stricter than the defaults, but here’s the list for my Medium home page …

And here’s the one for this page right now, whilst I’m editing it …

You’ll notice that it isn’t as informative as, nor does it offer quite the fine control of, NoScript with regard to whether a source is delivered via https or http, giving just the domain. With NoScript in our arsenal as well, this is less of a concern than it would otherwise be, but bear it in mind — if you have reason to be particularly concerned, you can always take a look in NoScript’s custom policy and see if you can’t restrict things further, but there are still limits to just how precise your level of control will be.

Also important to note is that, like NoScript, once you authorise something, that’s how it stays until you leave the site/domain. It’s not as universal as NoScript in that regard …permissions are scoped by domain … but, again, something to pay attention to — for the same reason you might want to be wary of allowing NoScript’s permissions for youtube.com to persist across domains, you might not want to allow some of uMatrix’s rules to persists within a domain (cookies from weather.example.com being available for inspection on news.example.com for instance).
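
In uMatrix’s rule syntax (which we’ll meet properly in a moment), a domain-scoped cookie permission for the made-up hosts above would look like this …

example.com example.com cookie allow

Because both the scope (first column) and the destination (second column) are the bare domain, it covers cookies for any *.example.com host whilst you are on any *.example.com page, which is precisely how the weather/news situation described above arises.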

Let’s lock things down a bit tighter then.

In the Add-ons Manager in Firefox

  1. Select the Extensions page
  2. Find uMatrix
  3. Open its Preferences/Options

Configure the Settings as below

You needn’t show the number of blocked resources on the icon, but it’s a useful indication of when there are resources to be unblocked — it’s not always immediately apparent.

You can hide the placeholder of blocked elements, but I find it has an undesirable effect on the page layout, so I don’t.

I do, however, hide the placeholder of blacklisted elements. Informative though it might be to see them and know they were potentially part of the page, the fact that a placeholder was rendered might be used to register that the element was referenced … resulting in tracking by that element. Moreover, there is the potential risk of a (future) exploit making use of the mere presence of a placeholder to execute its payload … so, I err on the side of “better safe than sorry”.

Whilst it has been known to result in some unwanted side-effects, I have yet to experience one of them myself, so I spoof the <noscript> tag when first party scripts are blocked. This has nothing to do with the NoScript extension: the HTML <noscript> tag defines alternative HTML to be rendered if scripting is not supported or has been disabled by the browser. Although not guaranteed to work with every site (many simply refuse to work at all if Javascript is disabled), on those sites that do support it, enough will load for me to see whether it’s worth authorising anything (if the page doesn’t contain what I’m looking for, there’s no point). If you spoof the <noscript> tag, those are the sites that appear to be CSS formatted and contain (at least some) images even though you haven’t authorised them (some elements of which may also be supplied by Decentraleyes).
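
For the avoidance of doubt, here’s a minimal illustration of the tag itself (the script filename is hypothetical) …

<!-- Illustration only: the <noscript> block is rendered when scripting is unavailable or blocked -->
<script src="site-app.js"></script>
<noscript>
  <p>Fallback content for visitors whose browsers won’t run the script above.</p>
</noscript>

As I understand it, uMatrix’s spoofing simply persuades the browser to render that fallback when it has blocked a page’s first-party scripts, even though Javascript itself is still nominally enabled.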

I don’t use cloud storage … and, if I did, I certainly wouldn’t run the risk of using any ‘clever’ cloud scripting to do so … so I don’t need the support. You can enable it here, if you need to.

My default scope is Domain. The more restrictive Site policy means there’s more chance of breakage on sites that rely on cross-domain communication (like Medium, for instance) … whilst a default of Global runs the risk of your choices being applied (similarly to NoScript) to sites/domains you’d rather they weren’t.
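
In the same rule syntax as the cookie sketch above, the difference between the three scopes looks like this (all the hostnames here are made up for illustration) …

example.com cdn.example.net script allow
www.example.com cdn.example.net script allow
* cdn.example.net script allow

The first column is the scope: the first rule applies anywhere under example.com (Domain), the second only on www.example.com itself (Site) and the third wherever you happen to be browsing (Global).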

I delete blocked cookies because ….

Blacklisted cookies are not prevented by uMatrix from entering your browser. However they are prevented from leaving your browser, which is what really matters. Not blocking cookies before they enter your browser gives you the opportunity to be informed that a site tried to use cookies, and furthermore to inspect their contents if you wish. Once these blacklisted cookies have been accounted for by uMatrix, they can be removed from your browser if you wish so.

Important note: Extensions can make web requests during the course of their normal operation. These requests can result in cookies being created in the browser. If the hostname from where a cookie originate is not whitelisted, the cookie will be removed from the browser by uMatrix if this option is checked. So be sure that the hostname(s) with which an extension communicate is whitelisted.

… and just explicitly authorise those that I need — no unwanted cookies for me!

I don’t delete non-blocked session cookies (I’d keep getting logged out of Medium, if I did), but you can set this as a way of ensuring session logout in the event you might be called away from your computer without getting the chance to do so manually. Although you can (and should) simply lock your computer when you walk away instead, if your circumstances are such that you may not have the opportunity to do so (emergency situations do arise) … and leaving your device unattended might result in someone else taking advantage of the situation … then, if you can live with the frustration of being regularly logged out (or your shopping basket being emptied just before you go to check out), this option is one to consider enabling.

We’re already spoofing the HTTP Referrer with Chameleon, so there’s no need to do it here.

I’m not currently applying strict HTTPS rules, because it causes breakage on a couple of sites I need to use but, as soon as they switch to https, I will do — in the meantime, HTTPZ keeps things clean for the most part. However, I recommend that you do apply strict HTTPS rules until such time as you find that it causes unacceptable breakage — and even then you should question very seriously indeed whether the site you are using really warrants the risk of http.

We’re already preventing hyperlink auditing with AdNauseam. Extensions competing with each other to perform the same task is a bad idea, so we don’t do it here as well. Moreover, leaving it to AdNauseam means that, even if we forget to disable/re-enable uMatrix (ensuring it is the last such extension to be activated), the auditing will still be prevented; AdNauseam suffers from no such limitation, so (for the time being at least) this configuration is more secure.

Don’t follow the next steps now, because you’ll get logged out of everything. Instead, copy these instructions and do it when you’ve finished everything else you’re doing: reading self-help listicles on Medium, foolishly giving Amazon your credit card details and buying stuff you don’t need, uploading cat videos to Youtube — oh, yeah … and reading this guide to the end (although you could always try saving the page).

Copy the following rules to a text editor and save them with a suitable name (global-strict-umatrix-rules.txt, for instance) …

https-strict: behind-the-scene false
matrix-off: about-scheme true
matrix-off: behind-the-scene true
matrix-off: chrome-extension-scheme true
matrix-off: chrome-scheme true
matrix-off: moz-extension-scheme true
matrix-off: opera-scheme true
matrix-off: vivaldi-scheme true
matrix-off: wyciwyg-scheme true
noscript-spoof: * true
referrer-spoof: behind-the-scene false
referrer-spoof: blank.about-scheme false
* * * block
* noscript-csp.invalid other allow

Select and delete all Temporary rules.

Import the text file into your Temporary rules.

Click the Commit button.

These rules are strict … blocking everything, everywhere, all the time until you authorise something temporarily/permanently.

As you become more familiar with uMatrix and make changes to your configuration … whitelisting certain elements for certain sites/domains and scopes … your permanent rules will be automatically modified to reflect that, but the above rules provide a solid base from which to depart — it doesn’t get more secure than “None shall pass!” / “If your name’s not on the guestlist, you’re not getting in.”

Okay, it’s safe to carry on changing things now.

Currently, assets consist of lists of block rules for undesirable sources (like in AdNauseam) and, as you make use of uMatrix, you’ll notice that certain domains/sources are highlighted in red, indicating they are blocked by default (or as the result of uMatrix’s rule-propagation logic).

As previously mentioned, different extensions competing to perform the same task can have undesirable to extremely negative results, so only check the items that are not replications of those in AdNauseam, or whatever other extensions you use that perform the same function — check back here after updates to the extension, to see what, if anything, has been added.

[NOTE]

I don’t recommend you do use any others, however, whatever the fanbois/grrrlz may tell you. If you’re in danger of being swayed by their evangelical zeal for <whatever it is>, first ask them to explain the precise technical methods their preferred solution uses to achieve better results. And in detail: does it take measures to mitigate the risks of not blocking third-party websockets, for instance, and, if they don’t think the risks of any concern, why not? If they just spout marketing hype about ‘the most private/secure/best/popular’ <whatever it is>, blather buzzwords, or plain don’t answer the question (referring to some other supposedly amazing feature instead) … it’s probably safe to assume they haven’t a clue what they’re on about and ignore them.

The specifics of what can be achieved by modifying the raw settings are beyond the scope of this piece — you can read about them on the wiki, if you are interested.

Just heed the warning that you change these raw configuration settings at your own risk.

The About tab is your portal to everything you could want to know about uMatrix (and probably more besides).

You can also back up and restore your settings (not your rules) here.

[NOTE]

In the Decentraleyes FAQ, you will find the following:

Why doesn’t it deliver resources from CDNs I block using a different add-on?

Most content blockers do not block delivery networks by default, as doing so breaks pages. This extension works out of the box, unless it’s asked to operate under strict blocking rules. Any policies set by other extensions are respected. As such, blocked resources will not be served locally.

It is followed by instructions on how to remedy this for uMatrix (and other blockers).

Many pages/sites will simply not work unless you authorise these elements — ajax.googleapis.com is often necessary, for instance. Consequently, however reluctantly, you may find yourself authorising those elements … as a result of which, they will be downloaded from the CDN — defeating Decentraleyes’ purpose.

If you have no intention of ever authorising those resources then this will not be a concern for you.

Otherwise, as advised in the FAQ, you should

  1. Complete any forms or other procedures you are working on and/or save any drafts open in your browser (feedback, applications, shopping carts, emails, etc.) — your temporary uMatrix policies will be reverted and you will find yourself logged out of sites, session cookies invalidated, unsaved work lost, anything you haven’t completed potentially set back to square one.
  2. open uMatrix’s preferences/options
  3. select the My rules tab
  4. copy any temporary rules to a text editor and save them to a file — you won’t be keeping the file, so name it what you like
  5. delete the temporary rules from uMatrix
  6. paste the below rules into the temporary rules section
    * ajax.googleapis.com * noop
    * ajax.aspnetcdn.com * noop
    * ajax.microsoft.com * noop
    * cdnjs.cloudflare.com * noop
    * code.jquery.com * noop
    * cdn.jsdelivr.net * noop
    * yastatic.net * noop
    * yandex.st * noop
    * apps.bdimg.com * noop
    * libs.baidu.com * noop
    * lib.sinaapp.com * noop
    * upcdn.b0.upaiyun.com * noop
    * cdn.bootcss.com * noop
    * sdn.geekzu.org * noop
    * ajax.proxy.ustclug.org * noop
  7. click the Save button
  8. click the Commit button
  9. open the previously saved text file and copy the rules into uMatrix’s temporary rules
  10. click the Save button

You should now no longer need to authorise those elements for a page/site/domain to function. There are no guarantees, however — it’s possible that the gap between changes being made on a page/site/domain and Decentraleyes’ next update might result in breakage for longer than is acceptable. So, be prepared to reverse this process, if necessary.

Right …

If you’ve followed the guide to the letter, the next part is academic, but we’ll go through it nevertheless, because you can use the techniques to reverse engineer the strict rules you imported to something you find more suitable.

If you’ve been reading without actually installing any of the extensions (or, at least, not necessarily configuring them as recommended) then this will be useful for deciding how you want to configure uMatrix from the ground up.

Let’s look at the default policy for Medium’s landing page (before logging in) again …

At the top-left (in blue) we see the scope of this policy (you may recall we set the default scope to be Domain, in uMatrix’s settings) and the domain to which it is being applied (medium.com).

If we browse to some other site … in the same or a different tab/window … what we see here will reflect that …

The scope here is still Domain, but the domain name is that of the new domain (youtube.com).

We may feel (and I do) that the default policy for a site, domain (or even for the scope itself) is too permissive for our liking and want to change it.

So, let’s do something about that.

Before making any changes though, let’s look at the panel’s interface.

Along the top, there are ten icons with which we can perform a range of tasks.

The first is a little deceptive in that it does not look like an icon but is the blue bar indicating the scope and domain to which it applies — this is the Scope Selector.

The uMatrix wiki describes how to make use of this, so, I’ll simply link to it here.

If you look at the previous two images, you’ll notice that the medium.com scope is entirely blue — it is not possible to select a broader scope.

The youtube.com scope, however, is only partially blue — we could further narrow it to www.youtube.com, if we wanted to apply rules to the web service but not the wider youtube domain.

The second allows us to select the global scope, enabling us to make changes that will apply to all pages, sites and domains in all scopes unless overridden by other, scope-specific, policies.

The third allows you to disable/(re)enable uMatrix for the current scope.

The fourth provides access to some features for which we set defaults in the main settings but might want to alter on a per site/domain basis (i.e. in the current scope).

By default, uMatrix will spoof both our referer header and <noscript> tag, but we have already disabled the referer spoofing to allow Chameleon to do it, so that will be disabled if you look here.

The fifth does something with rules, but I have not been able to determine precisely what from either the uMatrix wiki or anywhere else — it seems only the developer knows … and it seems he’s not telling.

Clicking on the plus ( ‘+’ ) symbol displays rules that I guess are associated with the source.

Clicking on the download icon to the right of the source seems to apply those rules to the source.

Clicking on the padlock to the right of the source after applying those rules, will save those rules.

I have not been able to discern a pattern. On some sites with external references (e.g. embedly.com) the icon is enabled and the actions can be performed, but not on all. On sites that have multiple external references, only one of them appears when the icon is clicked, so I cannot determine the criteria by which it is selected by comparing the rules that will be applied or by the kind of resources involved.

As I can apply such rules in other ways when I want to … and have yet to experience any sites not functioning as I want them to by using only those methods … I’m at a loss to say what functionality it might supply.

So, it’s there and I can see what it does, but not how or why.

I don’t appear to need it, however, so my advice is to leave it alone rather than do something untoward by playing with it.

The sixth saves temporary changes as permanent rules for the scope — be careful when working in the global scope as these changes will be permanently applied to all sites, domains and scopes.

The seventh undoes temporary changes for the scope.

The eighth reloads the page — unlike NoScript, uMatrix does not do this automatically when we click off elsewhere.

Observation of subsequent behaviour leads me to suspect that it does so from the cache rather than making a new request for data from the source.

The ninth reverts all temporary changes globally — be careful not to click this unless you want to set all scopes back to the global defaults; you’ll get logged out of sites, your shopping cart will be empty, cats and dogs will move in together, eyes will melt, flesh explode … everybody dead.

The tenth opens the logger — where you can see, and filter, a live update of all the events that take place from that moment until you close it again.

Down the left, starting with the label 1st party, are the sources.

To the right is a grid displaying what types of objects/elements the source will deliver (cookies, CSS, images, media, scripts, XHR, frame, other), if authorised, and how many of each …

Dark green indicates that a source is explicitly authorised (either due to a permanent rule or a temporary one).

Dark red indicates that the source is explicitly blocked.

Light green indicates that the source is authorised by virtue of inheriting the rules from its parent. In the above image, for example, as the first party domain, medium.com inherits the across-the-board allow policy for first party resources … as a result of which <everything>.medium.com has the same rules applied to it. Likewise, as the default policy for all is to block everything except CSS and images … and as there is no other policy applied to the source domains to override it (like there is for 1st party) … all other sources are only allowed to deliver images and CSS (and the other cells in the grid are, therefore, light red).

Clicking on a source toggles its status. The rule is then applied to all resources across the board unless there is a rule above it that overrides it.
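
As a sketch of that precedence (the hostnames are made up, patterned on the YouTube example coming up below) …

example.com googlevideo.com * block
example.com abc-aig6.googlevideo.com xhr allow

The broader rule blocks everything from anywhere under googlevideo.com, but the narrower rule carves out a single cell, allowing just the XHR objects from that one subdomain; more specific rules win.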

So …

In order to play the South Park video, the three resources picked out in green are necessary and are the only ones authorised.

Toggling the policy of www.youtube.com …

… allows those to enter the browser and also the cookies, XHR objects and ‘other’ objects.

Toggling the policy for <alphanumeric>-aig6.googlevideo.com …

… authorises the previously blocked images as well as the XHR.

Toggling the policy for googlevideo.com …

… authorises not only those images but also the XHR objects from the other <something>.googlevideo.com sources.

Toggling sources this way results in changes that are temporary for the scope — they will last only for the session and the default rules will be reapplied when the browser is next launched.

Whilst this can be convenient, I do not recommend it. As demonstrated later (when some example sites are browsed), this can have unintended side-effects that cannot be foreseen, potentially compromising both privacy and security … and I strongly advise you, therefore, to only authorise specific objects/elements rather than apply across-the-board policies to sources — ignore the ones applied by Decentraleyes, which let it deliver them locally.

As noted above, clicking on the padlock icon saves temporary changes as permanent rules for the scope. Once you have found a suitable policy that allows you to make use of a site to the extent that you need, saving the changes means that the site will always work the same way when you browse to it (at least until such time as the site is redesigned anyway) without your needing to make any changes in uMatrix.

As we will see below, however (when we start exploring some sites and applying policies to them), it’s not always as simple as that. Some sites make regular/fairly static use of certain sources and the above approach will work well. Others, on the other hand, do not, varying some of them every time. YouTube, for example, does not always reference all of the same sources, so applying the first policy that works would be a mistake, because there is no guarantee it would work a second time. Medium makes reliable use of a lot of its sources but there is no guarantee that the ones that result from a user posting a link to an external resource will even exist at some later date (they might delete them or even delete their account altogether) and those sources might subsequently be re-used by the external source hosting other (potentially undesirable) content.

It pays to be conservative about committing temporary changes to permanent rules, therefore … saving only those that can be determined to be reliably constant in their presence and behaviour.

Unlike NoScript, temporary changes made in uMatrix do not apply outside their scope, so … unless they are made to the global scope … they will not persist across sites/domains and (unless it’s on the same site/domain) you can browse to some other page without worrying that they will follow you and be applied to the new one.

Occasionally though, when trying to determine what is strictly necessary to see what we want on a page, we can find ourselves authorising all kinds of things before we finally authorise the right resources from the right sources. In such cases, once we have determined what it is strictly necessary to authorise, if we were quickly successful, it is a simple matter to unauthorise what is not necessary. If it was a long process, on the other hand, the number of things to unauthorise can be inconveniently large. In that latter case, so long as we have made a note of what we do need, it can be quicker to simply revert all temporary changes for the scope and then apply the necessary changes again. As mentioned, be careful to only do so for the scope, by clicking the blue arrow and not the black one — the latter will revert all changes in all scopes, meaning you will lose changes you might rather keep (on other pages in other tabs/windows).

Bearing the above in mind, you should find it easy enough to create your own policies that are more liberal than the strict one I recommended at the start and save them for those sites/domains/scopes you make regular use of, whilst ensuring that sites you may find yourself browsing to randomly (as the result of searches or links) remain strictly curtailed until you decide to allow them to do more than “absolutely nothing at all until I say otherwise, you hear” … thus minimising any security and privacy risks they might pose.

The uMatrix wiki has some very helpful articles on how to use and configure uMatrix that will take you into things in more depth, should you wish. When reading it, however, be aware that the developer uses different terminology than I do here and/or the same terms to mean the opposite of my use here. I would be willing to bet a reasonable sum that his usage reflects his thinking as a developer, in terms of messaging protocols and processes. For non-developers, this can be an unintuitive way of thinking about things and potentially lead to confusion and misunderstanding. As a result, I have favoured a more ‘concrete’ approach here: I am putting things (resources/objects/elements) into my browser from places (sources), rather than making network requests to destinations that return pointers/handles to resource references. I could be wrong, but I think it a more logical/intuitive approach when viewed from the browser-centric user’s perspective rather than through the web-technology focused eyes of a developer. In any case, even allowing for those discrepancies in terminology, the wiki is very well written, with lots of example images, so I don’t think it beyond anyone having got this far to understand the material there.

[A note about XHR objects]

These include XMLHttpRequest, Fetch and WebSockets.

I’ve already mentioned reasons to be wary of websockets and there is a potential for XMLHttpRequest to be abused as well.

As a result, although, technically, completely inaccurate … (I did say I was abusing the terminology for the purpose of giving it a more intuitively concrete dimension) … I find it useful to remember that they have the potential to be ‘Cross Hosted Resources’ — they aren’t … but they could be used to deliver them.

Let’s find you reasons to hate me by having a look at some of the frustrations that await, shall we?

Note that I’m going to stick to a single desktop Firefox profile here. The precise elements that you will need to deal with will sometimes be different from what you’ll see here, as Chameleon switches to different browser and platform profiles — especially when it’s a mobile profile, where a lot of elements start with m.something (m.youtube.com, for instance) … or an Apple profile (Mac OS and iOS).

So, you’ll need to go through a bit of tedious trial and error initially but, after a while you recognise the differences and what’s required in each case — it gets easier.

I’m also going to leave NoScript’s default Trusted policy of ‘trust everything’ (which is not the same thing as our defined default policy of ‘trust nothing’) unmodified and ignore custom options. There’ll be enough going on without drilling down that far — but, once you’ve got the hang of things, the extra facility offered by limiting what you trust by modifying the Trusted policy is something I advise you to explore. And, remember … unless on a site we genuinely trust (like Medium) … and perhaps not even then … we only ever temporarily trust, never outright trust, a site.

I’ll start with something simple: a search for Medium, using DuckDuckGo.

Pretty ugly, no?

But, functionally, perfectly adequate: I was only searching on a text string and the results I got back were all I needed — if I wanted to search for images, it would be a different matter, but I didn’t, so why waste time/bandwidth?

So, the only real frustration here is authorising the elements necessary to make it a fuller experience.

Let’s see what happens, if we let it load CSS.

Still a bit bland, but definitely easier to read.

How about some images?

Not a huge difference, but fair enough — it’s a search engine and I’m searching text, not images.

I know from experience what I’m going to need to authorise for the full-fat experience, so I’ll just cut to the chase …

Erm … wait a minute …

That’s not what I was anticipating!

Oh, yeah … NoScript.

But temporarily authorising DuckDuckGo’s scripts doesn’t seem to make any difference and it looks just the same afterwards …

What’s going on?

Ah … of course …

… I’ve been redirected to the non-Javascript site, so there aren’t actually any scripts to authorise.

As an aside (and this is why I did my search on DuckDuckGo) … you see that box in the top-left corner … with the text “Ignore this box please” …

That’s a bit weird, don’t you think?

Why is it there? What’s it for? Where does it go when I reach the non-Javascript site? Does it go anywhere … or is it just hidden? After all, CSS works whether there’s any Javascript or not, so it might well be hidden.

If it’s perfectly innocent, why not say so? Why not put some text following it, to the effect that “Don’t worry about this box, it’s just a spacing element” or something like that? What’s with the “Ignore the man behind the curtain” spiel?

Is it a tracker of some kind? I thought DuckDuckGo prided themselves on not tracking us.

And that is the reason why I don’t make my life easier by automatically authorising certain elements (like CSS) by default in uMatrix — I’d never see things like this, if I did.

It’s also another reason why, as I said, I’m not altogether happy about any of the search engines: they may be perfectly ethical and there nothing to worry about but, along with the other reasons given in the articles I linked to, things like this give me pause for thought — if that box is nothing to worry about, then what reason have they got for not pre-emptively allaying our concerns by being upfront about it? (Lying by omission is still lying).

Also … remember that the problem was that I’d been redirected to the non-Javascript site, so there weren’t any scripts to authorise. I do wonder why then, if there were no scripts to authorise, NoScript was detecting scripts and blocking them. Could it be the result of our <noscript> tag spoofing? No — I see the same behaviour when I perform the same search without <noscript> tag spoofing, so this is the page DuckDuckGo delivers when Javascript is simply disabled in the browser (in a new session with an empty cache … and cleared DOM).

It may be (probably is) nothing to be concerned about but, again, for all the vaunted transparency, there are things going on that give the lie to the idea that everything is as it seems — on a non-Javascript site, NoScript should find nothing to block, should it?

Anyway … searching again in a new tab takes us to the Javascript-enabled version of the site …

… and from now on every search will show results like that until we close Firefox and relaunch it … or change what we allow DuckDuckGo to do.

So, it was a bit of a faff, yes … and required that I use my brain to figure out what the issue was … but nothing too onerous (barely even a First World Problem).
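
For the record, the uMatrix side of what we ended up allowing boils down to something like this (a sketch; the script authorisation in NoScript sits on top of it) …

duckduckgo.com duckduckgo.com css allow
duckduckgo.com duckduckgo.com image allow
duckduckgo.com duckduckgo.com script allow

Three first-party cells; as far as I could see, nothing third party was needed at all.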

But a list of links isn’t exactly why we use the Web, is it? They’re just a means to an end — to deliver what we really want to see.

So, let’s try something we might want to look at.

How about Wikipedia?

Hmmmm … doesn’t look too auspicious, does it?

What does it look like further down?

Okay, that’s not so bad actually: it’s got text, links … everything we need in fact — sure, it’s a bit spartan, but it’s adequate for our purposes … and it loaded lightning fast and used virtually no bandwidth (which, if we’re on a mobile data connection, isn’t to be sneezed at).

But let’s say we’re in the mood for a bit more pizzazz from our Wikipedia experience … what do we need to do for that?

How about some CSS to lay it out neater?

Yes … that’s a bit more like what we’re used to.

But, wait … what are those two grey boxes? Where are the pictures?

Duh … I’m blocking images still.

Okay, it’s not significant when it comes to researching a website, perhaps, but what if I’m looking to confirm that an actor I’m researching really is who I think they are? Or what if I want to see where somewhere is on a map?

That’s weird.

The Wikipedia logo is now there … so, obviously, images are getting through now, but it doesn’t seem to have loaded anything into the panel on the right.

It doesn’t want any frames, so it’s not a matter of a nested element not being referenced.

Or could it be? We haven’t let any Javascript run yet.

No.

Authorising images from upload.wikimedia.org fixes it … no Javascript necessary.

There we go then … Wikipedia in all its glory and we haven’t had to allow any scripts — we haven’t even had to look at NoScript.

And every Wikipedia search we do from now until we close our browser (or unauthorise elements) will look just the same.

Knowing that, next time we can do it all in one visit to uMatrix, simply enabling CSS/images/uploads.

Result!
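
And if you did decide to save that as a permanent policy, the rules would amount to no more than something like this (a sketch, assuming the Domain scope) …

wikipedia.org wikipedia.org css allow
wikipedia.org wikipedia.org image allow
wikipedia.org upload.wikimedia.org image allow

Three rules and not a single script.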

But, look … text is all very well, but …. I don’t know about you, but I expect a bit more from my web browsing — I want pix and vids and music and stuff.

So YouTube then.

Oh, dear … that doesn’t look too promising.

Okay, well, let’s see what we need to do.

CSS.

Nope …. not enough. That’s hardly surprising, but it’s best to be cautious — who knows what lurks on YouTube?

YouTube’s full of images, so let’s try that …

No … no better.

Okay, look, it’s YouTube and our browser’s going to do some pretty heavy lifting. As a result, I wouldn’t take your money if you offered to bet me we wouldn’t need scripts. So, I might as well get it over with …

And voila!

Not all there, but not bad for just three quick guesses — it’s definitely looking more like we’re used to now.

What else do we need?

Scrolling down the uMatrix panel in the hope of finding something that isn’t too risky reveals the option of another source containing fourteen new images and …

Yes … that’s more like it!

Doesn’t look like it’s preloading the video though — and clicking on the [PLAY] button just results in the ‘spinning wheel’ animation.

What about NoScript?

Okay, we want to watch a video … YouTube is a Google company … we’ll give googlevideo.com a try — it’s certainly the least disconcerting option there anyway.

Okay … no change … what next?

Back to uMatrix.

I know from past experience that I’m going to have to authorise one of the XHR sources, so I’ll do that now.

And, there we go, it’s started buffering the video.

So, what did we have to do in the end?

Surprisingly little actually.

  1. CSS from www.youtube.com — not even from the whole domain but just from the web service
  2. Scripts for youtube.com in uMatrix
  3. Images from two ytimg.com services
  4. One externally hosted (XHR) resource — more on which in a moment
  5. Scripts for https://www.youtube.com in NoScript
  6. Scripts for googlevideo.com in NoScript

And that is all we will ever need to authorise when we want to watch something on YouTube — at least until the functionality is redesigned anyway (which, for our purposes, it hasn’t been in years).
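
Leaving aside the per-video XHR source (which, as we’ll see, varies from video to video) and the two NoScript script authorisations, the uMatrix half of that can be sketched as rules along these lines (I’ve used the parent ytimg.com domain to cover whichever of its image services happens to be in play) …

youtube.com www.youtube.com css allow
youtube.com youtube.com script allow
youtube.com ytimg.com image allow

The NoScript authorisations (items 5 and 6) and the per-video XHR sit on top of that.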

The faff arises because simply enabling them isn’t always enough.

I’ve cheated here, for the sake of brevity, but in practice it can be anything from that easy to somewhat trying. When it wants to, it can really test our patience …

Authorise the www.youtube.com CSS/images/scripts in uMatrix and the youtube.com scripts in NoScript.

Then we have to refresh the page — fortunately, as soon as we click away from the NoScript interface, it does that for us automatically.

Then we have to authorise the XHR content.

Then we refresh again — manually this time.

Then we attempt to play the video and it stalls.

Then we go into NoScript again and now we can authorise the googlevideo.com scripts; they weren’t there before and only got loaded after the previous page refresh — given that they’ve got to be pretty standard though, you’d think they could, surely, have been loaded with the first set (which makes you wonder about the site/service design).

NoScript refreshes the page again.

Then we attempt to play the video and it stalls again.

We sigh and refresh the page once more by either actively reloading it from the address box (not simply clicking the refresh button) … or by pressing CTRL+SHIFT+R (not simply CTRL+R) … to bypass the cache (which is empty for no good reason) and re-downloading the video data.

Then we attempt to play the video again.

It stalls again.

We mutter/shout something I’m not going to reproduce in print here and re-download the video data again!

This time (all else being equal), the wretched thing plays at last!

So, yeah, that’s a bit more of a pain in the proverbial than I’d like. And having to repeat the process for every single video I want to watch …having to authorise new XHR content each time … can be maddeningly frustrating — although at least I don’t have to go into NoScript every time, just keep refreshing the page until it plays (and sometimes you get lucky and the next vid you want to watch is from the same XHR and just loads first time).

But it doesn’t happen every time.

And, on the other hand, take a look at those images again.
You see all the stuff we didn’t authorise.
Yeah … that.
What is it and what’s it for?
It’s not for our benefit, that’s for sure — we don’t need it in order to watch anything.

And there’s a lot of it too.

I’ve written about this before …

All I’ll say here is that, since I wrote that, things seem to have settled down a bit (been somewhat rationalised) and it seems to be consistent now that the XHR resource reference that needs to be authorised always ends in -aig<one character>.googlevideo.com.

More often than not, that one character is a digit or else the letter ‘d’.

So …

7pgbpohxqp5-aig4.googlevideo.com
8pgbpohxqp5-aig6.googlevideo.com
9tgbpohxqp5-aigd.googlevideo.com
… and so forth

It’s possible that I might have forgotten occasions involving -aig<two characters>.googlevideo.com … but I don’t think so — it shouldn’t be too difficult to figure it out, if you do see any like that though (they seem to always be preceded with -aig, if they’re the video data).

[NOTE]

In the above example we were cautious about letting XHR objects into our browser and, instead, looked at NoScript to see if there was anything we could do there first.

As a result, we authorised googlevideo.com.

Since that had no effect, we then authorised some XHR objects after all.

The danger there is that we do not know what those XHR objects will themselves reference.

Looking at NoScript again afterwards reveals more detail …

… and it appears the XHR objects have imported some scripts along with anything else they might have referenced.

In authorising googlevideo.com, therefore, we have potentially authorised scripts we might not want to.

In this instance, because we only authorised the correct XHR objects, the risk was minimised, but what if we had authorised the wrong ones instead/as well? Then the scripts associated with them would have run instead/as well.

Now that we know the trick of only needing to authorise the XHR objects from -aig<character/number>.googlevideo.com, however, we can avoid this risk in future by authorising them before looking in NoScript … where we will only authorise the corresponding scripts, rather than all scripts from googlevideo.com …

Alright, YouTube can be a little frustrating but, now we know that there are actually very few things that need authorising and … as we know in advance what all of them will be (apart from the specific XHR source), … it’s really only a handful of mouseclicks, a few keypresses, (occasionally) a bit of sighing/cursing and it’s all good. And if you can’t be bothered to do that in return for staying safe from a webpage that appears to consist of 84% stuff that serves someone else’s interests and not your own then … really … how much do you actually care about your privacy and security? Have you thought about leaving your doors and windows open when you go out as well? I mean … it’s a lot less inconvenient than going around shutting and locking them all, isn’t it?

[NOTE]

Remember I said that making temporary rules permanent once we’d ascertained what was necessary for a page/site to work wasn’t quite as simple a decision as it might first appear? YouTube is a prime example of that.

Most of what we’ve learned will remain constant and saving the decisions we made here as a permanent policy will reliably deliver the site to us pretty much as we need it to be. But whitelisting the -aig<char/digit>.googlevideo.com source (either in uMatrix or NoScript) will not help in any way, because the specific page we browse to on the site will very possibly reference a different source for the XHR objects. Moreover, whitelisting that source will not only not necessarily deliver what we are expecting but may, furthermore, deliver us something we would rather it didn’t (who knows what it will contain next time?).

It’s best, therefore, to save a minimal ruleset for a site/domain that will result in it reliably delivering its core functionality. The specific tweaks required to make a given page on that site work can then be applied as and when necessary.

Of course … knowing what you now do … as someone who is concerned about their privacy and security, you will not be so foolish as to save any rules as permanent policies but continue to block everything and suspiciously investigate what the requirements are each time, to ensure nothing slips past you. After all, you know what you need for YouTube, so it’s easy enough to pre-enable it when you land on a page … and a lot less effort than having to delve through Add-ons manager/Extensions page/<extension>/ellipses/options/settings/<tab>/<box>/<field>/<entry> for both uMatrix / NoScript and make all the necessary changes when you learn that those rules don’t work any more — ain’t nobody got time for that … I just wanna watch a video! 😉

So much for text and video direct from the source. What about embedded objects?

Let’s go to Audjoo — there’s a demo I want to listen to.

Typically, the page is barren, but we’ll fix that.

Let’s see …

We might want the images and CSS, we might not, but it probably won’t hurt.

For obvious reasons, we almost certainly aren’t interested in the fonts and won’t authorise those unless we find we have no choice.

Looks better already, but there’s no sign of any demo.

w.soundcloud.com wants to load a frame though. Well, Soundcloud is a music hosting site, so we probably want to let stuff through from there … and the ‘w’ possibly indicates a widget (which the frame might contain), so, let’s give it a try.

It’s a bit prettier, but not what I’m after — I want to play the demo and there’s still no way to do that.

You can’t see frames but they sometimes contain scripts (which is why we normally block them by default), so let’s take a look.

Nope … nothing new in uMatrix … what about NoScript?

Okay, well, we’re probably going to have to authorise those sooner or later, so let’s temporarily authorise https://audjoo.com.

Hmmmm … nothing’s changed and there’s nothing new to see in either uMatrix or NoScript either.

Let’s refresh the page, bypassing the cache.

Okay, that unsightly, grey box is gone, but there’s still nothing new to see and nothing new in uMatrix.

NoScript it is then …

Aha!

Frames sometimes contain scripts (which is why we block them by default) but this seems to be our only option … and we did speculate earlier that the frame might load a widget, so, let’s try temporarily authorising https://w.soundcloud.com.

Yes … that gave us some more options …

I don’t know what role w.soundcloud.com plays in the soundcloud.com domain, but the css is probably pretty harmless.

And I’m pretty sure I want the widget, so okay, I’ll risk the script …

Ooh … something’s happening … I can see some colours and something’s loading and, oh, no, it’s gone again …

WHAT!?

NOOO …. OOO … OOo … Ooo … ooo …!!!

<sigh>

But now it’s saying it can’t find the playlist, so we’re probably on the right track (no pun intended) — it’s an indication that it’s trying to load some music from somewhere.

Oh, thank goodness for that … there’s still something we can do to resolve this …

Hmmm … a number of choices now.

I’ve no idea what i1.sndcdn.com is, so I think I’ll give that a miss, even though it’s just referencing an image. The fewer logs there are of my doing something here the better — it’s a CDN and, even though the ‘snd’ kind of hints that it might be ‘self’ hosted by Soundcloud, that doesn’t mean the infrastructure doesn’t belong to some other entity (Soundcloud hosts an enormous amount of audio data, so I’d be surprised if they didn’t make use of a third party).

I’ve no idea what the <longalphanumeric>.soundcloud.com is either and I’m not keen to let random XHR objects loose in my browser, so ‘no’ for now.

The widget wants to load an image, which seems logical (it’s going to have to display something), so I’ll give it a try …

Ooh … something … no, gone again <sigh>

Okay … the widget is a widget, so I’m probably going to have to authorise the api-widget.soundcloud.com XHR objects and let it load its API …

As before, a quick flash of something and then it’s gone again.

There’s nothing new to be seen in uMatrix, so let’s take a look in NoScript.

Sure enough, it wants to run some scripts …

Well, we’ve let it load its XHR resources, so …

Ooh!

Ooh, yes …. good call!

Let’s press the [PLAY] button!

Rats! The button works, but there’s no audio and the waveform isn’t doing anything.

But nor is the time counting up — so, it’s not a problem with my audio playback … there’s still something missing.

There’s nothing new in NoScript, but uMatrix has some more entries …

Let’s see now ….

wave.sndcdn.com wants to load an XHR object.

Well, I know what wave files are, so that looks promising … but, on the other hand, how likely is it that the site will be delivering full CD quality audio? On the other other hand, this is a demo of the audio quality of a softsynth they want to persuade people to buy … so anything less wouldn’t be good marketing, would it?

Hmmmmm …

cf-hls-media.sndcdn.com wants an XHR object as well.

Well, ‘media’ sounds promising … but, even though it could be audio, it could also be something else (‘media’ is a bit of catch-all, say-nothing term on the Web).

I can’t decide … maybe NoScript can help …

Nope … I just know that, whatever I authorise in uMatrix is going to want one or more scripts before I’m through here — so, the decision isn’t insignificant.

<sigh>

I’m going to take a chance on wave.sndcdn.com — it’s the more specific of the two likely candidates.

Authorising the XHR object in uMatrix doesn’t seem to make any difference, so it’s off to NoScript to authorise the scripts as well.

Now that looks promising …

Yep … that’s definitely a better looking waveform.

Does it play now though?

<sigh>

No.

There’s nothing new in either uMatrix or NoScript, but there’s still the cf-hls-media.sndcdn.com XHR object and scripts to consider — the wave.sndcdn.com stuff seems to have been to do with rendering the waveforms, so perhaps the ‘media’ are the sounds after all.

Let’s give it a try.

One quick trip to uMatrix and NoScript later …

Yes … it works!

YES!

Success!

So, what did we need in the end?

Most of it, but not everything, it seems.

Certainly not the fonts. And I’m especially pleased about that because there’s no way Google won’t be keeping track of their use. Note that, although it’s seemingly just a CSS reference, that’s a bit odd given that there’s no apparent delivery of any fonts themselves, which makes me wonder if they aren’t delivered as a data URL (which we’ll talk about later).

None of the eighteen images from i1.sndcdn.com — no potential tracking from there either.

Not the XHR from <obscure>.soundcloud.com.

And not the script from w.soundcloud.com.
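
Pulling the whole walkthrough together, the uMatrix side of the final policy amounts to something along these lines (a sketch reconstructed from the steps above; the handful of NoScript script authorisations sit on top of it, and the odd image from the widget’s own sources may also be needed) …

audjoo.com audjoo.com css allow
audjoo.com audjoo.com image allow
audjoo.com w.soundcloud.com css allow
audjoo.com w.soundcloud.com frame allow
audjoo.com api-widget.soundcloud.com xhr allow
audjoo.com wave.sndcdn.com xhr allow
audjoo.com cf-hls-media.sndcdn.com xhr allow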

The process wasn’t without its (brief) frustrations, but it wasn’t all that bad either — a bit of a faff, but nothing that would make me give up before I attained my goal.

Okay, so, how about a hybrid … text, images, videos and audio?

Medium itself then.

Here’s Medium when you first land on it with nothing enabled.

Not exactly a pretty sight, but it’s a cosmetic issue and easily remedied.

I know in advance I’m going to need to allow at least cookies, CSS, and images.

What about scripts?

Well, uMatrix is indicating that medium.com only wants to run one script right now, so we’ll be conservative …

There are no visual changes but uMatrix and NoScript are saying that cdn-client.medium.com wants to run some scripts (seven!) now too …

I’m not about to let stuff run in my browser unless I’m sure it’s necessary, so I’m going to see if I can log in without them first.

No, I can’t, but, interestingly, I don’t appear to need it in order to log in, because it’s not in the list any more …

So, that was a good call.

Go back two images and notice how uMatrix’s rule-propagation logic means that the seven scripts from cdn-client.medium.com were authorised by default.

Now look at the image after that again,

NoScript to the rescue!

Although we’ve told Firefox it can load the scripts, we haven’t told it that it can do anything with them. Until we tell NoScript to allow medium.com to actually run scripts, it doesn’t. So, whatever Medium might be loading by way of scripts, XHR, frames (anything with the potential to do, rather than simply display, something) isn’t going to be able to do much — it’s a security tool as well as a privacy tool (no running scripts means no script based exploits).

cdn-static-1.medium.com is still there though and now wants to run a script. And that’s borne out by its appearance in NoScript too …

So, as it’s a common denominator insofar as it appeared both on the landing page and on the login page, I’ll give that a try.

And there’s that helpful popup telling us that it really doesn’t matter what provision may be made for us by local legislation (such as the GDPR), Medium is going to ignore our preferences and set a cookie (or six) regardless, by simply subverting our browser settings — it’s a good job we told uMatrix to say it could, eh? </bitter, bitter sarcasm>

So, cdn-static-1.medium.com is good for something at least, eh?

Can I log in now though?

Indeed, it seems I can …

Uh-oh — that’s not good …

It’s nonsense, of course … they’ve no intention of fixing it at all — how could they?

So, what can I do about it instead?

Aha! Medium wants an XHR now too …

Well, I’m already letting it run scripts, so …

And, yes … that was the trick of it.

Alright elves … the trip’s off … back down the mines with you — Santa’s got quotas to meet!

I’m going to unauthorise cdn-static-1.medium.com again though — I didn’t need it on the landing page, so I probably don’t need it at all any more.

[A quick trip to my email later and …]

Oh, dear … Medium’s just going through the motions … threatening to log in but never doing so …

[Checks cookie status in uMatrix and Cookie Quick Manager — they’re all good]

I’d better authorise cdn-static-1.medium.com again then.

And, yes, it seems that did the trick, because I’m in.

I’ll try unauthorising it again now that I’m in.

Nope … that’s no good …

cdn-static-1.medium.com is a core element of the Medium experience then — perhaps I’d better leave it on.

After some investigation involving underhand ways of getting the better of Medium by logging out and back in to specific pages rather than the home page, with https://cdn-client.medium.com disabled by NoScript, it seems that it is involved in the loading of images … so, also pretty significant but not essential — I can still read articles without images … and it saves bandwidth/data, if that’s a concern.

Intriguingly, it only seems necessary on certain pages though.

On this page … the one I am currently editing … I can see images perfectly fine without it — you can see the top of the last image beneath NoScript there.

My profile page is a different matter, however …

And whilst clicking on the menu/action items on this page (or my Medium home page) works fine, doing so on my profile page results in nothing at all. Nor do they work if I open a story from there … indicating that (unless, for some reason, they’re presented on my home page) finished ‘stories’ are treated differently by Medium from stories under construction.

They don’t work on replies either.

I’ll leave that particular avenue of exploration there though — I just wanted to demonstrate how, along with being a security and privacy tool, NoScript can be used to investigate a site’s/domain’s inner workings (at least to some extent) without your needing to be a web developer yourself or understanding JavaScript.

Knowing, as I now do, that https://cdn-client.medium.com also plays a significant part in our everyday use of Medium, I’ll enable it.

What about that XHR? I’m not keen on things from potentially unknown sources in my browser. And, look, it’s gone from the one item I thought I was authorising at the start to six now!

That’s a bit underhand, don’t you think? We thought we were authorising an XHR and, all of a sudden it’s loaded five more from who knows where? It might’ve been nice if … oh, I don’t know … the W3C had done something sensible in all the time they’ve been fiddling with post hoc rationalising of their existence (by recognising de facto standards after the event) … like, say, stipulating that webpages should deliver a manifest of their intended content, so that people can decide just how happy they are to load them before they get caught unawares (because they weren’t informed). Then extensions like uMatrix/NoScript could list the effects of authorising various elements in a tree form, showing that one of them will load five others (and where from), for instance </just an idea>.

But it turns out that my own profile wants to load five …

And this very page wants to load fifty-five …

So, it looks as though disabling that would result in a much less fulsome Medium experience … possibly leading to frustration as the need to re-enable it arose in order to see certain things — the constant enable/disable cycle would become tedious.

Hmmmm …

Is there anything else we should know about?

What!?

Fifteen cookies!?

Well, okay, they’re not leaving my browser, so it’s possible they’re just the result of Medium repeatedly setting them in the mistaken belief the previous ones have been lost, but nevertheless … that’s a bit more tracking going on than I anticipated when I initially agreed to six.

But, okay … in the spirit of victim-shaming, let’s say I’m a big boy and should know what I’m doing …

Airplane — Counter Point

But … before you installed uMatrix or a similar tool … did you?

Moving on then …

[You can investigate your own cookies with Cookie Quick Manager]

Oh, dear Lord … look at all those scripts! Imagine what might’ve happened if we hadn’t had NoScript looking out for us!

Do we need any of them?

Google-analytics.com? Well, I’m pretty sure I don’t want Google analysing me — I’m on Medium … what business is it of Google’s what I do here?

Yes, I know Google aren’t doing the analysis for themselves, they’re just enabling Medium … except that’s like a fence claiming they didn’t rob anyone themselves, they just enabled the thieves by giving them a list of homes to rob and then selling the stolen loot for them afterwards.

The fact remains that why they are doing it is neither here nor there: Google are doing the analysis. The fact they subsequently share the results of that analysis with Medium is also neither here nor there: the data passes through a domain (google-analytics.com) that is not part of a site or group with whom I have (or wish to have) a relationship of any kind.

And, no, the idea that I should be reassured that the analysis is only of traffic and not personally identifiable in any way is utter nonsense. Of course it’s identifiable … that’s in its nature. There is no such thing as anonymous data on the Internet; unlike the trail left by paper records, it can’t be burnt or eaten out of existence — however hard it may be to follow, there is always a copy of the records somewhere that will point the dedicated detective in the right direction.

Oh, sure, I might be one of four people using the Internet connection at this address. But it wouldn’t be difficult to track down who the account holder is from the ISP records, get hold of the census data for their address and, from there, pretty accurately whittle down who I am.

Sure, it’s possible that one or more of us (plus a handful of non-residents) are part of a literary collective engaged in ‘Communal Blogging As An Exploration of the Perception of Identity in Western Versus Eastern Cultural Traditions’ and all contribute to the same Medium account, but … get real.

Behavioural fingerprinting, on the other hand, is real. Whether, or not, you can pinpoint exactly which of the eight-member ‘writing group’ engaged in a perversely Freudian act I am when I log into Medium from here is immaterial … you can pinpoint my behaviour and then follow me elsewhere. I can try to hide in the crowd but, as soon as I pop out of the other side of the Tor network and connect to the clearweb, behavioural analysis will tell you it’s me. Sooner or later, however long it takes, you will find a way to identify me IRL.

Secondly, the idea that I should blithely take on trust that there currently being no-one with ill-intent in a position to gain access to it means there never will be is insulting — I’m not five years old and don’t believe in the tooth fairy. You can take all kinds of measures, some of which may be very reliable but, however much they might be ‘calculated’ risks, they’re still always a risk at the end of the day. Man or woman, if you don’t want to have a child, either get sterilised or don’t have sex — they’re the only ways to be 100% certain. Likewise, if you don’t want people to abuse your data, don’t give it to them in the first place — they can’t abuse what they don’t have.

And I don’t know what things they might want to run from google.com, but I’m pretty sure I don’t want them to run those scripts either.

Who on Earth are Datadog and what do they want to do in my browser?

I’ve no idea … let’s investigate: https://en.wikipedia.org/wiki/Datadog

Datadog helps developers and operations teams see their full infrastructure — cloud, servers, apps, services, metrics, and more — all in one place. This includes real-time interactive dashboards that can be customized to a team’s specific needs, full-text search capabilities for metrics and events, sharing and discussion tools so teams can collaborate using the insights they surface, targeted alerts for critical issues, and API access to accommodate unique infrastructures.

Yeah? Well why don’t Medium monitor their infrastructure on their own systems, instead of using up my processing resources and bandwidth then?

Yeah, yeah, I’m sure there are all kinds of rationalisations but the fact remains that they can analyse resource usage server-side without stuffing my browser full of obscure code.

What’s that you say? Load balancing, client-server routing, blah blah blah? You can do that by simply determining where my request has come from and routing me to an optimally regionally located server with resources to spare (or whatever algorithm you use to determine where to send my request) on your own servers — you don’t need my browser to do it for you … get out of here.

No!

I’m not going to go through them all … you get the idea.

Okay then …

I want to insert a video into the piece I’m working on, so I copy/paste the link and press the [ENTER] key.

Hmmmm … where have we seen that look before?

Yep … just like on audjoo.com, we’re probably going to have to authorise some more stuff to get the mediaplayer to work …

As I suspected.

cdn.embedly.com looks a likely suspect … and I’ll hazard a guess that i.embedly.com might have something to do with images.

Let’s see now …

We’ve seen that behaviour before as well … so, we’ve probably done the right thing.

Yes, there’s a frame alright — this behaviour definitely looks familiar and we’re probably on the right track.

Do we need any scripts?

I’d say embedly.com looks a likely candidate, but I’m not keen to let the others loose — we’ll try just that one source for now.

There’s no visual change, so, what’s missing?

Sure enough, just like on audjoo.com, that seems to have been the cue for the local domain to inform us it needs to grab some more things.

What and where from though?

Some CSS and a script.

Hmmmm … well, they’re both from embedly.com … so, it’s probably okay. And probably necessary, given that we’re trying to embed something … although a simple <embed> tag would be fine with me as well, you know — I appreciate the effort, but I’m really not all that demanding (simple fare cooked well is good, I don’t need a banquet … really).

But okay, I did ask and Medium has made all this effort now (probably), so it seems churlish to stop it there.

Go on then.

Still no visible change.

What’s the holdup?

A script from www.youtube.com.

Okay, it’s from the webserver rather than some obscure location somewhere in the youtube.com domain and I allowed those on YouTube itself in order to check the video was what I wanted, so, alright, I’ll authorise it.

Still no visible change.

And now it wants to pull a script from s.ytimg.com.

Seriously!?

Given that, every time I want to play a video on YouTube itself, I have to authorise the ytimg.com scripts … and given that I am explicitly attempting to embed a video from www.youtube.com (I didn’t obscure that information by embedding a mediaplayer that would then just so happen to play videos from www.youtube.com) … you’d think embedly.com could have sorted this out already, no? Or is this a side-effect of somebody else’s code over which the embedly.com developers have no control, because they don’t know how it works any more than they know how to left-pad a text string? Not that I’m beginning to get cynical about other people’s technical knowhow or anything (I’m sure their mothers must love them).

<sigh>

Fine. Whatever.

Better check NoScript too.

Yep … thought so.

One page refresh later …

Ooh!

Wait. What? Isn’t this where we were back at the beginning!? I swear I’m going round in circles!

[Goes to check]

It is! This is exactly where I was right back at the beginning! What the …!?

What’s going on here!?

You what!?

A frame is required for www.youtube.com? I gave you a frame right back at the start! It was the first thing I gave you!

This isn’t the first time you’ll have been asked to embed a YouTube video … it isn’t even the first time I’ve asked you to do it … how can you not know by now that you’re going to want this one as well and ask for it in advance!? Who wrote this procedure and why, in the name of all that is good and holy, are they not Public Enemy Number One worldwide!? Why aren’t they on the run from the FBI, the CIA, Mossad, Interpol and … just everyone!?

GAA … AAA … AAH!!!!

Fine! Have it!

Oh, dear Lord!

Is that it? Is that the best you can do? After all that?

It’s not all that, is it? It’s as far from ‘all that’ as it could be, isn’t it?

I can do text without any help — all I have to do is absolutely nothing to get text!

Words fail me … really they do.

Alright … I’ve come this far … I might as well go on, even if only out of morbid fascination at just how awful this whole process can get — why stop now, just when I’m hating it?

What else does it want now?

Some CSS? Okay, whatever — I’m beyond caring now anyway.

Some XHR? From www.youtube.com? The place I wanted to embed a video from? You don’t say!?

And what’s that? An image from i.ytimg.com? I don’t know whether to laugh or cry.

Does anyone want to hazard a bet that … not long from now … or possibly in a year’s time … I mean … who knows when this will end, if ever? … perhaps I’m in the ninth circle of Hell right now … but anyway … if it does ever end … that, at some point in the procedure, it’s gonna make a request for an image from … let’s see now … I can’t remember any more … I’m not really sure I care any more, to be honest … erm … [scrolls back a lifetime] … ah, yes … s.ytimg.com?

Anybody?

One page refresh later …

Never mind taking the words out of my mouth … you took the expression right off my face as well!

I can’t help thinking that was a singularly inspired choice of video for this particular exercise … and, as a result, wondering if maybe, rather than it being a bunch of tree-hugging hippy cr*p, there might just be something to the concept of Karma after all.

<Heavy … HEAVY … sigh>

Right, so, there you go … the delights of using what are the most important privacy and security tools in your arsenal.

And that Medium adventure went quite well actually. Far more frequently than I care to think about, if I haven’t gone to YouTube and authorised absolutely everything necessary first, Medium can refuse to play ball at all. It can refuse to make the necessary requests and I can end up staring at a blank frame, no matter how many times I refresh the page whilst uMatrix and NoScript have nothing new to show me. Those are the days I hate uMatrix most of all — nearly as much as I hate the people responsible for the utterly appalling design of Medium, Embedly, YouTube, and whatever else is involved in the whole miserable affair.

I advise you to stick with it for a minimum of three months … better yet, six … ideally a year … before you switch to globally authorising CSS and images by default — you’ll learn a surprising amount, if you do.

It’s not impossible that you’ll end up in trouble if you do — it isn’t the jpg you load that’s the problem but the exploit used to subvert the mechanism by which your browser loads it (and who knows what people might do with some carefully crafted CSS?). But it’s distinctly less likely to happen than it is if you let scripts run amok or load XHR resources unthinkingly.

As for ‘other’ … with the exception of the <noscript> tag, I haven’t loaded any of that even once in all the years I’ve been using uMatrix … and nothing’s stopped working so far.

Just keep an eye on things and, if you see ‘ad’ or ‘analytics’ (or not altogether dissimilar) in a source name, you almost certainly don’t want to authorise anything it wants — there’s a reason why uMatrix automatically blocks some things. Do a bit of research on terms like “list of common web advertising networks” or “common web advertisers”, “internet advertising syndicates” … you get the idea … and see what you find. And the longer you use uMatrix, the more you get a sense for what things are undesirable without even having to investigate them — you learn to recognise them (perhaps unsurprisingly, ‘media’ is a popular one as well).

And remember …

If you visit newspaper.com and they have a cookie for weather.com, that cookie is a third-party cookie. If you block all third-party cookies, that cookie will be blocked. If you only block cookies from unvisited websites, however, that cookie is only blocked if you haven’t already visited weather.com (i.e. if there isn’t already a cookie from weather.com on your device), so you can end up being tracked around the Web by every site that sets a cookie (which is why we block them and/or prevent them leaving the browser).

By the same token, therefore, think carefully before you decide you want to authorise all first-party resources globally in uMatrix. They can include XHR objects. Since XHR objects can include third-party service workers and websockets, they aren’t necessarily first-party any more than is a cookie from weather.com delivered by newspaper.com. Globally authorising first-party resources can result, therefore, in things hanging around in the background even when you browse to another site/domain (where, under the right circumstances, they can start exchanging information with the new site/domain and/or other XHR objects).
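
To illustrate the point, here’s a minimal sketch (the tracker.example domain and everything else in it are made up, not anything any particular site actually does) of how a nominally first-party script or XHR can open third-party connections once it’s allowed to load and run:

    // A nominally 'first-party' script served by newspaper.com can still phone
    // home to anywhere it likes; tracker.example here is entirely hypothetical.
    const xhr = new XMLHttpRequest();
    xhr.open('POST', 'https://tracker.example/collect');
    xhr.send(JSON.stringify({ page: location.href, referrer: document.referrer }));

    // ... or hold open a WebSocket in the background ...
    const ws = new WebSocket('wss://tracker.example/feed');
    ws.onopen = () => ws.send('hello from ' + location.hostname);

    // ... or register a service worker (a first-party URL, but whatever code is
    // in it persists for the origin and can fetch from third parties later).
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }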

One final note on uMatrix: don’t forget that it has to be the last of this kind of extension to be activated, if it is to do its job properly — so, after launching Firefox … and before you do any browsing … go into your extensions manager, disable it and then immediately re-enable it.

CanvasBlocker

And finally … the tricky one.

So far, we’ve used a number of settings and extensions to help cover our tracks and defend ourselves from prying eyes, and we have also made a start on disguising ourselves with Chameleon but, whilst the latter will lie about what browser and platform we are using, it won’t prevent other information from being disclosed that can uniquely identify us. Well, okay, it can prevent some of it from being disclosed but, as you will recall, I said that we would use CanvasBlocker to do that instead.

CanvasBlocker fakes values supplied by your browser to websites that let them know your browser’s capabilities and configuration. Specifically, it fakes the values that relate to the canvas element in HTML5, which “allows for dynamic, scriptable rendering of 2D shapes and bitmap images [and] through WebGL it allows 3D shapes and images to be displayed. HTML5 Canvas also helps in making 2D games.” — Wikipedia.

That doesn’t sound like an obvious need, does it? Why would we want to do that?

The reason is that those values supply yet more of the information websites use to identify you from amongst the millions (hundreds of millions? thousands of millions?) of other browsers out there, marking you out as an individual in the crowd.

The privacy concerns regarding canvas fingerprinting centre around the fact that even deleting cookies and clearing the cache will not be sufficient for users to avoid online tracking.

Wikipedia.

Canvas fingerprinting works by exploiting the HTML5 canvas element. As described by Acar et al. in:[5]

When a user visits a page, the fingerprinting script first draws text with the font and size of its choice and adds background colors (1). Next, the script calls Canvas API’s ToDataURL method to get the canvas pixel data in dataURL format (2), which is basically a Base64 encoded representation of the binary pixel data. Finally, the script takes the hash of the text-encoded pixel data (3), which serves as the fingerprint …

Variations in which graphics processing unit (GPU) is installed or the graphics driver cause the variations in the fingerprint. The fingerprint can be stored and shared with advertising partners to identify users when they visit affiliated websites. A profile can be created from the user’s browsing activity allowing advertisers to target advertising to the user’s inferred demographics and preferences.[3][6]

Wikipedia.

Which is yet another reason to disable the Performance options in Firefox’s settings … and (except where the impact is too greatly negative) to block remote fonts.

Furthermore, supplying information about our hardware (specifically the GPU) and drivers is a good way to facilitate exploits for known bugs — which will not be fixed once an OEM (Original Equipment Manufacturer) stops supporting our hardware with updated drivers. Got an older computing device? It’s time to start faking your Canvas API then!
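
To make the Acar et al. description above a little more concrete, here’s a minimal sketch of what a canvas-fingerprinting script does (an illustration of the technique, not anyone’s actual tracking code). CanvasBlocker’s fake mode works by feeding scripts like this subtly randomised readouts, so the resulting hash never adds up to a stable fingerprint:

    // (1) draw some text and background colour ...
    const canvas = document.createElement('canvas');
    canvas.width = 220;
    canvas.height = 30;
    const ctx = canvas.getContext('2d');
    ctx.textBaseline = 'top';
    ctx.font = '14px Arial';
    ctx.fillStyle = '#f60';
    ctx.fillRect(100, 1, 62, 20);
    ctx.fillStyle = '#069';
    ctx.fillText('Cwm fjordbank glyphs', 2, 15);

    // (2) read the pixel data back out as a data URL ...
    const pixels = canvas.toDataURL();

    // (3) ... and hash it: the hash is the fingerprint.
    crypto.subtle.digest('SHA-256', new TextEncoder().encode(pixels)).then(buf => {
      const hash = [...new Uint8Array(buf)]
        .map(b => b.toString(16).padStart(2, '0')).join('');
      console.log(hash); // varies with GPU, driver and fonts ... unless it's being faked
    });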

The first problem is that CanvasBlocker’s interface and help are not very interesting to look at — if you’re not into reading, it’s a bit of a turnoff. Moreover, the things it deals with are quite dry and technical — if you don’t know or care about APIs and alpha channels, CanvasBlocker doesn’t really go out of its way to make you understand why you should. It’s on that cusp where geeky and nerdy are very close to being one-and-the-same thing … which isn’t a good way to encourage people who are neither to become interested — it’s hard enough getting people to pay enough attention to geeky stuff as it is, without it being offputtingly nerdy.

On top of that, thanks to what it does (fake your browser’s ‘physical’ configuration), even when the difference between your real profile and the one Chameleon is faking isn’t so drastic as to render a site impossible to use (as the Apple profiles on a Windows, or even Linux, based machine all too frequently do), it can have a not insignificant impact on performance — even if you turn Chameleon off (or tell it to report your real profile), CanvasBlocker can occasionally make browsing frustratingly slow. So, whilst you’re cursing me for ever introducing you to uMatrix, thinking you can’t stand it any more and uninstalling it, a not inconsiderable amount of your frustration at how painfully slow your browser is may actually be down to CanvasBlocker working away in the background.

If you ever see the dreaded yellow banner bearing the legend ‘A web page is slowing down your browser’ … or ‘A script is slowing down your browser’ … asking you whether you want to stop/close it or wait …

… and you will …

… whilst I have, on the odd occasion, seen that the problem was related to AdNauseam (probably struggling with one script fighting with another for the browser’s processing resources, because I’ve got far too many tabs open) … there’s also a not entirely insignificant chance that CanvasBlocker has a hand in it — combine it with Chameleon and the fact that the site is already under the delusion that you’re using an entirely different browser on an entirely different platform … it can be like waiting for paint to dry.

It’s tricky to recommend this one because, possibly more than any other, it’s likely to leave you hating me and everything I’ve recommended without your even realising it. Even when it isn’t obviously ruining your day with a banner, it can slow your system down thanks to the fact that what sites send your browser is inappropriately sized, incorrectly balanced … just doesn’t fit … and Firefox has to reprocess it to make it fit — which takes up processing resources and slows everything down.

It does seem to produce the greatest number of issues when I’m using Medium (which may say more about Medium than anything else I’ve mentioned along the way). And it does seem to happen most frequently when I’ve got a lot of browser windows and tabs open — which, given that the other extensions have to process them as well, means it’s probably not fair to blame it entirely on CanvasBlocker lying through its teeth about what size and shape those windows and tabs are (forcing the others to work all the harder to render them).

Most sites I struggle with are simply another dispute over what the site wants to do in my browser and what I’m willing to risk in return for access to “what I came here for”. Most days, I don’t see a yellow banner and most days, after a bit of cursing as uMatrix reveals the next reason to hate modern web design, I can just get on with things — mostly, it just does its job and I don’t notice it … mostly. But there are those days (mostly, as I said, on Medium) when it just seems that CanvasBlocker is determined to make Firefox down tools and take ‘industrial (in)action’ to a whole new level (especially on pages with a lot of images).

So, although it is an important tool in my arsenal … one I’m not sure I’d be happy to give up (I haven’t yet) … and, thankfully, by and large a set-and-forget option, I don’t recommend it lightly: because it’s a set-and-forget option, it’s all too easy to forget … and because it’s a dryly technical thing, it doesn’t necessarily spring straight to mind when you’re trying to troubleshoot a problem with a page (or even entire site) … meaning it can be the root cause of a problem but hiding under your radar, making determining what the issue is more difficult than it might otherwise be.

I still do recommend it though — unprepossessing though it may be (and not without its faults), for all the good the others do by way of helping you reclaim some security and privacy, it is precisely because it flies under the radar, lying through its teeth about your browser’s configuration and capabilities, that CanvasBlocker does more than any of them to disguise you.

So, don’t let my warnings put you off trying it — it’s just possible that even I underestimate it.

Let’s take a look under the bonnet (hood) then.

I said it was a bit offputting, didn’t I?

But don’t let that discourage you — it can be helpful, if you take the time to go through it carefully.

There are three tabs: General, APIs and Misc.

GENERAL

Check the options to Display Hidden Settings and Expert mode.

Select a Block mode.

The options are comprehensive, but the only one we are interested in is fake.

We could just block everything, but then there’d really be no point in installing CanvasBlocker in the first place — not only can we already do that in other ways … with Chameleon, for instance (or even in Firefox’s own Privacy & Security settings) … but, as previously discussed, I feel misinformation to be of more value than reducing the amount of data potential adversaries need to focus on.

Likewise, there’s really no point in installing CanvasBlocker, if we’re just going to allow everything.

Being asked for permission every time will quickly become tedious — maybe it would be useful to know which sites make requests (and when) … but I can’t be bothered to be that paranoid.

Allowing only the whitelist would introduce a number of potential points of failure.

  1. It requires us to whitelist every site that isn’t immediately an obvious threat to our privacy or it’s the same as blocking everything.
  2. Whitelisting every site that isn’t immediately obviously a threat doesn’t mean it won’t pose a threat later — only we’ll never find that out, because it’s already whitelisted.
  3. It gets tedious just whitelisting everything, let alone actively investigating the site (every page of it, just in case) before making our mind up.

Blocking only the blacklist means we let every site fingerprint us before we’ve even learned it poses a potential threat to our privacy — and it’s too late then, isn’t it?

Faking our output means that websites can request our information, but we lie about it so that a consistent fingerprint can’t be established.

So, if it isn’t already selected, choose fake here.

By clicking on the white triangle to the right of the block mode drop-down menu, we can tune things on a site-by-site basis, if we want … adding individual sites, elements and choosing an individual response.

But examining what you might want to treat individually (and why) to that level of detail is beyond the scope of this guide.

Under the Faking section, select the kind of random numbers you want to generate (I told you it was dry and technical).

If you want to really geek out … at least verging on nerding out … read this article about random number generation.

But we’re not going into that level of detail here — it’s enough to consider the types CanvasBlocker makes available and why we might favour one over another.

Which one you choose is a matter of weighing up the tradeoffs.

None returns ‘all white’ as the answer to the question of what colour an element is. Whilst it may be untrue in most cases, consistently doing so is, in itself, a reliable fingerprint and I don’t therefore recommend it.

Non-persistent returns a different answer each time CanvasBlocker supplies one. This generates the greatest amount of misinformation, but … just as frequent changes to our browser profile made by Chameleon can give the game away, so can this. Moreover, a canvas profile that is consistently that inconsistent is, in itself, a fingerprint. This option, then, is probably best for when you’re just browsing around rather than making use of sites you frequent regularly (such as Amazon) or, especially, log into (like Medium).

Constant fakes the same value every time a page requests the information and is a reasonable compromise between supplying misinformation and not drawing attention to yourself. Consider, however, that it might still be best suited to casual browsing sessions — although you might linger on a page for a while (thus potentially fending off multiple fingerprinting attempts) and then move on to another … if you browse multiple pages within the same domain then the discrepancy between your fingerprints on each of the pages might be noticed.

Persistent fakes the same value to the entire domain and is, therefore, the least noticeable approach, but does mean that, so long as you are within that domain, what you do there builds a consistent picture. Whilst that may not appear to be so much of an issue if you generate a different fingerprint each time you browse that domain, remember that behavioural fingerprinting means that frequent visits to that domain will result in identification that way, so, at that point, the question becomes less one of how to prevent it and more of just how much confusion you want to sow — of course, if your canvas fingerprint is not only different but fake each time, you may consider that sufficient confusion.

As noted under Offering a spoofed fingerprint in the Wikipedia article on device fingerprinting, “Spoofing some of the information exposed to the fingerprinter (e.g. the user agent) may allow to reduce diversity.[48]:13 The contrary could be achieved if the mismatch between the spoofed information and the real browser information differentiates the user from all the others who do not use such strategy.[11]:552 Spoofing the information differently at each site visit allow to reduce stability.[8]:820,823” … so think carefully about which approach to take.

This is probably the setting I agonise over most … struggling to decide which would be best … and where portable Firefox comes into its own again, because you can set a different canvas profile for each and simply choose the one most appropriate to your browsing on a given occasion. Bear in mind that “Different browsers on the same machine would usually have different fingerprints, but if both browsers aren’t protected against fingerprinting, then the two fingerprints could be identified as originating from the same machine.[3][49]” — Wikipedia. So, think carefully about your use of Chameleon at the same time when deciding how to configure CanvasBlocker in each of them.

The Notifications section is pretty self-explanatory.

I choose to show notifications so that I am alerted to fingerprinting attempts — the above settings mean they’re unobtrusive though.

I don’t ignore any of the APIs because … unless I’m engaged as a web developer/site manager, testing whether a particular fingerprinting attempt is working … or forensically investigating the use of a specific API by a site … I’m only interested in whether I’m being fingerprinted, not how — faking my canvas means I’m less concerned about the result of the attempt than I am interested in which sites are attempting to track me by this method.

The last entry in this section is slightly cut off due to having to compromise between the limits of my display resolution and legible text, but really … unless you’re the aforementioned site developer/manager (or going to forensically track down the ways in which a site attempts to fingerprint you) … there’s no need to display the complete calling stack.

There’s nothing to say about the (white/black)lists that I haven’t already said … and the final Settings section is self-explanatory.

APIs

Canvas API

The first section is self-explanatory and I choose readout because, without security, you have no privacy — you can hang net curtains, double-layer curtains (with extra backing) and fit external windowblinds … if someone gets in through the back door, they won’t do any good.

Select the API features you want to protect.

I, of course, select all of them.

Re WebGL specifically, the privacy concerns may no longer be the issue they were a few years ago (I certainly can’t find any recent articles discussing them), but it can open your machine up to abuse in other ways (see the last entry here, for example).

The next sections are self-explanatory and my choices should, therefore, be equally so.

Yes, that is one trillion pixels — I’m nothing if not cautious 😉

What WebGL values you report are up to you — for privacy reasons, I’ve blanked mine here.

Note, however, that reporting anything other than the original values will require a trial and error process that is likely to be more literal than figurative. The values you enter here may well result in not only your browser freezing but your whole device too, if you pick ones that don’t work!

Before doing so, therefore

  1. Configure everything else.
  2. Export your settings.
  3. Make sure you have investigated how to launch Firefox in Safe Mode.
  4. If things go drastically wrong and you can reimport your saved settings in Safe Mode, do so — otherwise try resetting them, then reimporting them in normal mode after relaunching Firefox.
  5. If the steps in point 4 don’t work, remove CanvasBlocker in Safe Mode, relaunch Firefox in normal mode, re-install CanvasBlocker and import your saved settings.

Keep trying until you find values that work most of the time on most sites. You might get lucky, but are unlikely to have much (if any) success by reporting that you have a GPU by a different OEM; so a different GPU in the same ‘family’ by your real OEM is your best bet — sorry I can’t be of more help here, but an exploration of GPU and videocard technology is well beyond the scope of this discussion.

Audio API

As a DJ/Producer, I need my audio subsystem to be as smooth as a baby’s behind (no latency, no crackle, no stutter, no freezes) — I haven’t got time to waste when auditioning new tracks or engaging in other music related activities.

So my settings here are conservative. If you don’t have the same need for 1000000% rock solid stability though, you can try being more duplicitous.
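
For the curious, the classic audio fingerprint (the kind of thing this protection is there to muddy) looks something like the following sketch: it renders a tone offline and sums some of the samples, which come out fractionally differently on different audio stacks:

    // Render a tone with an OfflineAudioContext and sum some of the samples.
    const actx = new OfflineAudioContext(1, 44100, 44100);
    const osc = actx.createOscillator();
    osc.type = 'triangle';
    osc.frequency.value = 10000;

    const comp = actx.createDynamicsCompressor();
    osc.connect(comp);
    comp.connect(actx.destination);
    osc.start(0);

    actx.startRendering().then(buffer => {
      const samples = buffer.getChannelData(0);
      let sum = 0;
      for (let i = 4500; i < 5000; i++) sum += Math.abs(samples[i]);
      console.log(sum); // fractionally different per audio stack: another fingerprint
    });

Note that nothing is actually played out loud (it’s all rendered offline), which is precisely why you’d never know it was happening without something like CanvasBlocker telling you.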

History API

It’s unusual for there to be no history at all (a browser that never reports any history is a remarkable thing indeed, noteworthy) but … with Chameleon’s Referer X Origin Policy set to prevent it … any site querying the History API can’t know how many other sites you’ve visited first, so it’s up to you how much you want to tell it.
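
For the avoidance of doubt, all a page can actually read here is a single number (the length of the tab’s session history) … but even a number is another datum:

    // The History API hands a page nothing more than a count of entries in
    // this tab's session history (which CanvasBlocker can fake).
    console.log(history.length); // 1 in a fresh tab; grows as you browse within it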

Bear in mind, however, that as you browse pages on the same site (or, potentially, domain) it’s possible that any new query sent to the History API that doesn’t tally with the site’s/domain’s internal log may be noted (you never can tell), so you might consider setting a higher value, with the tradeoff that, in return for drawing less attention, you will be giving away more data.

If you’re particularly concerned about this, you can always go back to Chameleon and, in the Headers section, set the Referer X Origin Policy to Match Host. It won’t guarantee a resolution to the problem (if there is one), but it could make it harder to keep a 100% accurate record of how many pages you looked at if you change hosts (e.g. from news.example.com to weather.example.com). The tradeoff there, of course, is that you mark yourself out in a different way by supplying noticeably reduced referer data. So it’s swings and roundabouts really — decide what you’re most comfortable with.

Personally, I feel allowing a domain to know what pages I visited whilst browsing it is fair enough (it probably knows that without my help anyway), so I have my Referer X Origin Policy as Match Domain and let the domain keep its own log of what pages I’ve visited, if it wants to (and can). My Chameleon setting ensures it knows nothing about what external site I came from, so I see no reason to give it any idea of how many pages I’ve browsed before I got there either. Maybe I stand out in the crowd with my mask on but (as the site can’t see whether I’ve even been anywhere else, let alone what I might’ve done there) I really don’t mind that (and I can’t prevent it keeping a log of what pages I look at on the domain, whatever I do).

Speaking of Chameleon, remember that you could limit your tab history there instead, but it’s an all-or-nothing option without the granularity offered by CanvasBlocker. If you do make that choice, don’t forget that competing extensions seldom produce good results, so remember to uncheck the option here.

Window API

If you protect the opener and name in the Window API, it’s possible that some pages/sites might not work properly, but I haven’t experienced that yet — whether that’s a function of the kinds of sites I tend to visit, the way I use them (links manually opened in new tabs and new windows opened in tabs by default) … or a sign that this feature of CanvasBlocker doesn’t work … I couldn’t say — I check both just in case.
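
By way of a sketch of what’s at stake (the identifier and domain below are entirely hypothetical): window.name can carry data along with you as you navigate within a tab, and window.opener hands a popup a handle on the page that opened it:

    // window.name survives navigations within the tab (same-site at the very
    // least; newer Firefox builds clear it when you change sites), so a script
    // can stash an identifier in it and read it back later.
    window.name = 'visitor-8c1f2a'; // hypothetical identifier
    console.log(window.name);       // still there after you click around

    // window.opener lets a popup reach back to the page that opened it;
    // redirecting it behind your back is the classic 'tabnabbing' trick.
    if (window.opener) {
      window.opener.location = 'https://phishing.example/login'; // hypothetical
    }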

Sadly, because some sites will demand a reCAPTCHA response as a result of our settings (not that many, but more than ‘none’) … and we can never know when the next site might be one of them … it’s really not worth the effort of blocking the window name in frames, as far as I’m concerned — there are enough uMatrix/NoScript hoops to jump through to get it to work as it is, without adding a trip to the extensions manager as well. But, this is one of those occasions on which I (hypocrite that I am) recommend you don’t follow my (lazy) example and, instead, do the sensible thing of making your life even more tortuous by protecting these — after all the other uMatrix/NoScript induced frustration, what’s an extra ten mouse-clicks? 😉

DOMRect API

This is one of the few things that CanvasBlocker could do for us that I don’t ask it to.

The reason for that is that there is a note in the Chameleon FAQ …

Exactly why I spoof the Client Rects with Chameleon rather than the DOMRect API with CanvasBlocker, I couldn’t say any more. I used Random Agent Spoofer for many years and it may be a legacy of that giving me faith in Chameleon to reliably provide the same performance. Or it could be that, at one time, with Chameleon managing it, the test on Browser Leaks produced a result I was happier with.

Either way around, as long as one of them does it and there are no conflicts between the two, it’s all good — so, take your pick.
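
Whichever of the two you let handle it, what’s being spoofed is the sub-pixel output of calls like getBoundingClientRect() and getClientRects(), which varies with your fonts, rendering engine and hardware. A minimal sketch:

    // Measure an off-screen element and harvest the sub-pixel values.
    const probe = document.createElement('div');
    probe.style.cssText = 'position:absolute; left:-9999px; font-size:14.7px;';
    probe.textContent = 'mmmmmmmmmmlli';
    document.body.appendChild(probe);

    const rect = probe.getBoundingClientRect();
    console.log(rect.width, rect.height); // fractional values that differ machine to machine
    probe.remove();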

TextMetrics API

There is no mention in the Chameleon FAQ indicating this might interfere with its font fingerprint spoofing, so I err on the side of caution and check all options — I really should get in touch with one/both of the developers and confirm this though.
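
For reference, the TextMetrics side of font fingerprinting is little more than repeated calls to measureText(): the widths come back differently depending on which of the probed fonts you actually have installed (a sketch, with an arbitrary font list):

    // The same string measures differently depending on which fonts are installed.
    const mctx = document.createElement('canvas').getContext('2d');
    for (const font of ['Calibri', 'Helvetica Neue', 'Ubuntu', 'Comic Sans MS']) {
      mctx.font = '16px "' + font + '", monospace';
      console.log(font, mctx.measureText('mmmmmmmmmmlli').width);
    }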

Navigator API

We are using Chameleon to fake our profile, so there’s no need to enable it in CanvasBlocker.

If you prefer to use CanvasBlocker to perform this task, however, you can do so.

Check Protect navigator API and check the Protected API features you want to spoof.

Note that it is not enough to simply protect the API features here — you will need to configure the values in the Navigator Settings.

Click the Open button.

In the CanvasBlocker navigator settings

  1. Select your preferred Operating system preset.
  2. Select your preferred Browser preset.
  3. Modify specific Navigator values to suit your needs.

Bear in mind, however, that these values remain static, so you will be identifiable across sites and sessions until you change them again.
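
For reference, the values being faked here are nothing more exotic than the following readouts … whichever of Chameleon or CanvasBlocker is doing the lying, they need to tell a consistent story:

    // What the Navigator API hands over without being asked twice.
    console.log(navigator.userAgent);           // browser, version and platform string
    console.log(navigator.platform);            // e.g. 'Linux x86_64' or 'Win32'
    console.log(navigator.oscpu);               // Firefox-specific OS/CPU string
    console.log(navigator.language, navigator.languages);
    console.log(navigator.hardwareConcurrency); // number of logical CPU cores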

To create consistent profiles for individual sites (emulating Chameleon’s whitelist feature), create containers for the sites in question in Firefox Multi-Account Containers first.

Then, in the CanvasBlocker navigator settings screen, for each site for which you want a consistent profile

  1. Drop down the menu next to the words Settings for the container and select the container for which you want to create the profile.
  2. Select your preferred Operating System preset.
  3. Select your preferred Browser preset.
  4. Modify specific Navigator values to suit your needs.

Screen API

The Screen API settings are self-explanatory. It is best to protect all API features, in my opinion (otherwise there’s little point in the exercise).

You can either

  1. Set a specific Screen size

OR

  2. Choose to Fake minimal screen size

If you need to use a page/site that requires specific values that are not delivered by the faked readout, you can simply unprotect the necessary features and return the true values.
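
As with the Navigator API, the values in question are simple readouts, but resolution, available area and viewport size are surprisingly distinctive in combination with everything else a site knows about you:

    // The Screen API readouts a page can fold into its profile of you.
    console.log(screen.width, screen.height);           // physical resolution
    console.log(screen.availWidth, screen.availHeight); // minus taskbars/docks
    console.log(screen.colorDepth);                     // usually 24
    console.log(window.innerWidth, window.innerHeight); // the actual viewport
    console.log(window.devicePixelRatio);               // HiDPI scaling factor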

MISC

Pick a c̶a̶r̶d̶ theme … any c̶a̶r̶d̶ theme — don’t show it to me.

Too right we want to Block data URL pages! Sure, it means we won’t be able to view those pages properly (if at all), but there had better be a seriously good reason before we even temporarily disable this feature, never mind add them to the exclusion list! I haven’t even once had my web browsing experience hindered in any way that I can recall by blocking them.

Pick your logging level — be aware though that anything other than ‘error’ will result in a lot of logging and an enormous file that will probably reach the limit for a log file in no time at all.

CONCLUSION

So, what have we achieved?

Apart from discouraging you from ever listening to me again that is?

  1. Our precise physical location is more difficult to determine (at least as long as we are not using a mobile device anyway).
  2. Wherever and whenever available, we are using a secure connection and preventing our communication from being eavesdropped on or subverted.
  3. We are not as easily tracked around the Web as before.
  4. Our interests and searches are less easily determined, making it more difficult to profile us.
  5. Insofar as we can set limits, only the things we want entering/leaving our browser do so.
  6. To some extent at least, we have disguised our browser, its configuration and its capabilities, increasing the difficulty with which we can be identified.

It’s not foolproof … and there are even ways for a page/site to determine precisely what extensions you have installed.

It’s a balancing act. My unique list of installed extensions might provide a way to track me … but not having them installed doesn’t prevent me from being tracked either. So, to me, privacy is more significant than anonymity. So long as you can’t see what lies beneath my underwear, I don’t really mind all that much if you catch a glimpse of it (it’s still doing its job of protecting my intimates) — likewise, therefore, your knowing what extensions I’m using is not as significant as your knowing what really lies behind the fake data they generate.

What is the tradeoff?

We are running the risk that the very tools we are employing to help secure our privacy can be used against us.

Is it worth the risk?

I think so … especially with regard to NoScript and uMatrix — if you use no others, use those two. And in all the years I’ve been using them, I have never once seen any indication that any of the others presents a threat (nor read any warnings by anyone else anywhere) either.

AFTERWORD

If you’ve made it this far, congratulations … there were a number of occasions on which I wasn’t sure I was going to make it myself — at just north of forty-one thousand words (including the addendum), it’s much longer than even I anticipated!

If there are any errors, I apologise — if you’d be so kind, please point them out to me and I’ll correct them.

I must also apologise for a possible omission.

On a number of occasions I have had to re-create screenshots and entire sections of the text thanks to lapses at times when I was too tired to notice that I was too tired and should have stopped long before I did. Returning to things the next day, I’d re-read what I’d previously written and think “That’s completely wrong! What on Earth was I thinking when I wrote that? And how did I manage to miss out all those steps when taking screenshots!?”

Having learned my lesson, I took to noticing when my focus started to wane and jotting down notes to remind myself, when I recommenced the next day, what had been on my mind when I stopped.

Unfortunately, last night, I seem to have been too tired to do even that properly and today I found two lines written to myself … one of which consisted simply of the words ‘Highlight Chameleon’. I now no longer have any idea of what I meant by that, nor whether I didn’t in fact already do whatever it was before finally signing off for the evening — it’s possible I did it but got distracted writing something else and just didn’t get around to deleting the note.

So, if I have neglected to mention something relevant to the use of Chameleon, I apologise — if it comes to mind at some future date, I’ll edit accordingly.

Right, that’s it. I hope it’s been informative, useful and, if not exactly entertaining, not too boring. But I’m burnt out and … you’ll no doubt be pleased to know … can’t write another word.

ADDENDUM

Whilst writing this piece, the below report appeared in The Register.

Unless you have sensibly opted to let Firefox check for updates but let you choose whether/when to install them (in which case, update it now), by the time you read this, your browser will have been updated already, so there is no need to panic.

But it’s worth reading nevertheless, because it is a prime example of why we don’t want things running in our browser arbitrarily:

“Of the five high-risk flaws, three are confirmed to allow arbitrary code execution, which in the case of a web browser means that simply loading up a malicious page could lead to malware running on your machine.”

Particularly relevant with regard to CanvasBlocker is CVE-2020-12407:

The moderate-rated flaw is a GPU memory leak bug that, interestingly enough, displays memory contents on the screen so that the local user can see them, but not to any web content.

This exploit didn’t leak the data to anyone but the user. What about the next one?

If you go to the testing sites (cf. the useful links), you’ll not altogether infrequently be informed that your profile is unique. Don’t worry about this too much. Quite apart from the aforementioned issues with how outdated the sites’ browser profile datasets can be, given your randomly spoofed profile, it’s not at all unusual for it to be unique (the chance of two people spoofing the exact same one is ridiculously low).

The thing is that it’s only unique now, as it were. Tomorrow you’ll have another one (which might also be unique). The day after you’ll have yet another (possibly unique) one. And the browsing habits of a string of unique profiles are of little use to someone trying to piece together those of one, single browser profile … because there is no one, single profile — you’re here today, gone tomorrow.

The Shawshank Redemption — The man up and vanished

If you see this screen along your trail (and travails) with uMatrix …

It’s not usually a warning that here be evildoers but, more often than not, the result of opening a resource (like an image) and easily remedied with uMatrix (just authorise the image and refresh the page).

You’ll notice as you use NoScript, and uMatrix in particular, that certain sites seem to crop up a lot.

It’s easy to spot some of the ones you don’t want to let through: facebook.com, google.com, twitter.com, etc. — although you’ll quickly discover that reCAPTCHA pretty much requires you to turn your browser into Chrome for the duration … and a lot of sites rely upon Amazon’s AWS ‘cloud’ services (Go, go, gadget eCleaner Forget Button!).

Some of them give themselves away by having ‘ad’ in them, like adnexus (and I’m pretty dubious about anything containing ‘nexus’ too) or adsense or the just plain blatant amazon-adsystem and googleads.

metric? Definitely measuring something … and whilst it might be innocuous, it could equally be used to track/profile you in the future, even if it isn’t now.

Others betray themselves in other ways; you can’t quite say what, but there’s something in the domain name that just makes you think ‘ad network’, ‘tracking services’, etc. I’m also pretty dubious of ‘services’, ‘technologies’ and (sometimes) ‘media’ for instance (especially if the word ‘new’/’nu’ appears in conjunction with it). ‘Global’ … I don’t much like the sound of that either — you get the idea²².

Some you have pretty much no choice but to allow, if you want to make use of the site … <something (most likely ‘ajax’)>.googleapis.com for instance … or jquery.com … because the designers of the site you’re on have either never heard of Azer Koçulu and/or are third parties and don’t care either way — contractors for whom, as long as they get paid at the end, the fallout is the client’s problem … or ‘service providers’ who will keep taking payment for as long as the client is too ignorant to know any better.

CDNs usually tell you they’re a CDN by having ‘cdn’ in their name (e.g. sndcdn). We installed Decentraleyes because we’re specifically not happy about being tracked by CDNs, but sometimes a site doesn’t give you much of a choice — you authorise the CDN and you get to see things … or you go elsewhere.

Cloudflare?

Let me see now … Cloudflare … Cloudflare … Cloudflare

Well, you frequently don’t have much of a choice if you want to see what a site contains. But, I predict that … if they haven’t already done so … like Google, Cloudflare will go bad and turn to the dark side, you mark my words — you read it here first.

I mean … static.cloudflareinsights.com — anything with ‘insights’ in it is analysing something … and it only takes one bright spark to observe some ̶p̶r̶o̶f̶i̶t̶a̶b̶l̶e̶ useful profiling that could be done at the same time (it’d be uncompetitive not to, right?).

Be wary is my advice … Google sounded very much like Cloudflare once — before they just up and stopped pretending … dropping their Don’t Be Evil motto in the waste²³.

Research can help, as we saw in the case of Datadog (which we got to by searching for datadoghq).

Whatever you do, never put one of them straight into your browser as an address … it’ll take you to the mothership, where, in all likelihood, you’ll find yourself authorising all kinds of stuff (like profiling.allyourdata.biz) just to be able to see the landing page’s lime green backdrop with diagonally descending white stripes animation (BOOM!). And if they provide those services for others, you can be sure they’re doing it on their own site! Put it in quotes (“dubiouslynameddomain.com”) and search for that instead.

If you can’t find anything on a domain then (as when we looked at various sites with uMatrix/NoScript), you see what it wants to load and make your best call. Images and CSS are the least likely to be a problem and, of the two, I’d go with CSS first and only load images if I really have a need (although, conversely, if what I’m after is images specifically, I might ignore the CSS) — scripts, XHR and frames, I’m less inclined to give the time of day to, unless, like on YouTube, for instance, they’re essential.

If you’re uncertain whether to authorise something in NoScript, check whether it is (dark red) blocked by uMatrix by default. You don’t want to authorise something undesirable — remember, even if the resource hasn’t been loaded, the fact that a request was made for it can be enough to add another piece of information to the profile the site/service is (trying) to build of you.

In my experience it’s necessary to allow at least a limited amount through from wp.com / wordpress.com, bp.com / blogger.com / blogblog.com / blogspot.com simply because, if you don’t, the page remains blank — don’t get overexcited though, just the minimum you need to determine that the article is ten years out of date, sigh heavily and try another link.

Likewise <anything>disqus<anything>.<something>, if you want to see/post comments.

<something>-usercontent.somewhere.com is probably fine — although, if it’s hosting some devious hacker’s ‘pwnuser.js’ script, that rule might not stand up in court.

What I definitely ignore are:

fonts — we don’t need no stinking fonts.

thumb[nail]s — I’ve got some, thanks.

favicons — I’m not an Egyptian pharaoh, I can read the description.

avatars, especially from gravatar.com — I really don’t need to see endless anime characters’ headshots (I’m not a virtual modelling agency and won’t be calling them for a trial shoot).

I don’t want to be tracked at all, but to be so by one of the above would just be embarrassing.

doubleclick.net — I avoid that one like the plague.

noscript.net

In the NoScript FAQ, in section 1.5 (What websites are in the default whitelist and why?), the answer to why noscript.net is whitelisted by default is “You just installed NoScript on your system, running with the privileges of your web browser. If you don’t trust it, you’ve got a much bigger problem than JavaScript on its website (strictly HTTPS) ;)”

Which is all well and good … until noscript.net gets hacked — trust nothing by default!

This is all you need in order to make use of the site for nearly everything you are likely to want to do there …

… no script is necessary (no pun intended).

To minimise the risks posed by extensions ‘going rogue’, in the Add-ons Manager in Firefox

Select Extensions

Click the cogwheel icon to the right of the words Manage Your Extensions and select Reset All Add-ons to Update Manually.

Periodically (whatever schedule suits you, but at least monthly), return to the Extensions Manager and Check for Updates.

If an update is available for an extension, there will be a blue dot in the top-right corner of the ellipses to the right of it.

Before updating it, go to https://addons.mozilla.org and search for the addon in question.

Read the Release notes panel to learn what the update changes about the extension and confirm you are confident it won’t do anything you are unhappy about — it almost certainly won’t but it doesn’t hurt to check.

Read the More information panel and check when the update was Last updated — if it was very recently (less than two weeks) then, unless there is any indication elsewhere on the page that the update is critical, consider waiting a week or two before updating (giving early updaters time to report problems).

Read all <number> reviews to confirm nobody has reported anything untoward.

When you’re confident that it’s safe to update it, in the Extensions Manager click on the ellipses next to the extension and select Update.

Yes, it’s yet more inconvenience, but I did warn you that you’d come to the wrong place if convenience was what you were looking for.

USEFUL LINKS

https://browserleaks.com

https://www.panopticlick.com

https://amiunique.com

https://www.dnsleaktest.com/

[NOTES]

¹ The same kind of person who recommends Telegram for instant messaging.

² Dodgy practices and connections … exaggerated claims that turn out to be little more than aspirational at best, outright Marketing at worst … etc.

³ For instance, they will excitedly tell you about Opera’s free VPN. It is not a VPN service but a proxy service and that difference is so significant that only someone without even the least clue would not be acutely aware of that fact, let alone make the error of conflating the two. And, even if they understand the difference between the two, they simply have not done their due diligence by investigating … and you don’t want to take advice about security and privacy from people who don’t do due diligence — next thing you know, they’ll be advising you to use Telegram!

⁴ It took twenty-five years … a quarter of a century … for someone to notice it!

⁵ Except for when I do, of course 😉

⁶ And I don’t get them delivered to my own address either, I collect them in person and hand over the payment in cash.

⁷ If you go to https://www.theregister.co.uk and trawl back a few years (maybe even ten, I can’t remember exactly when it was), you’ll find a story about the US government subpoenaing data from a server in Australia because a shareholder of the (Australian) company that owned it was a .com and, therefore, fair game — as far as the US government was concerned, all .com sites were the property of the US so, the argument went, since the Australian company was (at least in part) owned by a US company, the US had the right to the data.

⁸ Later on, we’ll install a cookie manager and you’ll see that … even on those rare sites that (thanks to the GDPR) pop up that irritating notification saying that they want to set cookies and actually give you the choice of saying ‘no’ to some of them … cookies get set regardless.

⁹ You can publish my bank statement if you want but, so long as you can’t identify to whom it belongs … and so long as the account number isn’t part of those details … you can’t do much with it beyond identifying a pattern of behaviour that can’t be attributed to anyone specific — I might stand out in the crowd with my mask on but you have no idea who I am as a result, so it doesn’t matter.

¹⁰ Which you shouldn’t. If you want/need live technical support, any reputable site/service will provide a text chat box and not require voice/video to do so¹¹. Any site that requires you to let it link to your camera or microphone rather than letting you upload files isn’t one you should be using. If you need to Skype someone (or whatever) then use Skype (or whatever) not some dodgy plugin or widget that the site designer/manager almost certainly doesn’t maintain themselves but imports from some third party source, with no idea how it works or how secure it is!

¹¹ Not even if they’re testing your microphone/webcam: you can test whether their troubleshooting steps have worked by recording something to a local app and reporting back to them whether it worked or not, or by initiating a videoconference with them — they don’t need direct access themselves.

¹² Random sites offering to install some wizzy feature, like a toolbar promising to enhance your life in a myriad of insignificant ways in return for all your search data/an excitingly sophisticated w̶a̶r̶e̶z̶ ̶d̶o̶w̶n̶l̶o̶a̶d̶ ̶m̶a̶n̶a̶g̶e̶r̶ malware installer/a p̶o̶r̶n̶ ̶s̶e̶a̶r̶c̶h̶ ̶e̶n̶g̶i̶n̶e̶ credit card skimmer … and not your regular online services that allow you to upload your photos to Instagram or latest copy to the newspaper you write for … are things you will be saying a firm “Get out of here” to.

¹³ The software equivalent of the Whissendine phenomenon …

Whissendine (n): The noise which occurs (often by night) in a strange house, which is too short and too irregular for you ever to be able to find out what it is and where it comes from.
— Douglas Adams & John Lloyd, The Meaning Of Liff

¹⁴ I also seem to recall that, if they weren’t part of the ghacks team themself, they knew the members of it well — which also lends weight to their being a good source of reliable and trustworthy information/advice.

¹⁵ Never. Ever. Accept certificates from sites — not even temporarily! The only time you’ll ever find yourself being given the option is when Firefox has just warned you that the certificate isn’t valid for some reason … and that’s a biiiiiig red flag right there, telling you to shut the tab/window and never return to this site again! The only exception to this rule is when it’s for a known entity, such as your employer’s website, or a client’s/customer’s site — and the first thing you do then is pause and get in touch with them to enquire why there’s <problem> with their self-signed certificate.

¹⁶ You know all those sites that say “You need to update your Flash Player to view this content” … no, you DON’T! Any reputable site these days makes use of HTML5, CSS, Javascript and standard media formats that don’t require browser plugins. Any site that requires the use of a browser plugin is either your technologically illiterate employer/client/customer or else seeking to install malware and ruin your life. DON’T DO IT!¹⁷ One exception to this rule is any plugins you have installed to secure online banking or similar¹⁸ but, otherwise, repeat after me: “We don’t need no stinking plugins!”

¹⁷ The same goes for Java: unless you have a very good reason (like employer/client/customer requirements) to accept a site’s use of Java, don’t. In fact, unless you have a locally installed app that requires it, completely uninstall Java from your system now — since the introduction of HTML5, no website needs to deliver anything via Java any more and that practice needs to die quickly!

¹⁸ Please don’t tell me you do online banking (you’ll make me weep, if you do), but, if you do, talk to your bank about finding a more secure way to deliver their services than Java — or find another bank.

¹⁹ If I ever need to assign different containers to different proxies then I’ll definitely take a closer look at Container Proxy, for instance, but I have no use for it at this time.

²⁰ As stated in the AdNauseam FAQ, it depends on the advertising business model and the degree of effort they are willing to put into filtering (some might, others might not), but if they do, your automated clicks can generate billable events, and hence revenue, for the website.

²¹ I swear this idea will turn out to be the ActiveX of its era.

²² I wouldn’t touch GlobalMediaTech.com with yours!

²³ Seriously? No … seriously … how far gone must things be for you to say, in a Board meeting, “Guys, you know our motto … Don’t Be Evil … … … Naaah” (let alone for everyone to agree with you)!? If nothing else, it’s terrible PR.
