I recently had a conversation with a friend in which I asked whether they still maintained some of their firm personal policies on privacy. My friend, somewhat disheartened, said no. I nodded in agreement, sharing some practices I had slackened on recently, such as storing my passwords with a cloud service rather than locally.

In response to this, my pal mentioned that they had come to a place where it felt like engaging with privacy had to be an "all-in or not-at-all" effort. I nodded because the phrase made sense to me, but I couldn't quite articulate why. We didn't dwell on the topic, and moved on to something else.

I've had some thoughts unfold on the topic in the weeks and months since that brief conversation. Before I share them, I should say that I'm no privacy expert. For context, I work as a programmer, so at best I'm familiar with the arguments for self-hosting your software, and I occasionally pay attention to privacy-oriented news. But I'm no expert.

Why might we feel the need to be all-in?

What might "being all-in" on privacy mean? Or the inverse? I take it to mean that if we use a well-vetted, private chat application like Signal, but also use Telegram (where chats are not end-to-end encrypted by default), we aren't all-in. In that situation, there is an open attack vector to at least some of our data. Further, it can feel as though the secure data in one application might metaphorically leak into a less-secure one, whether by human error or by some roundabout means outside our comprehension.

That is, I'm hypothesizing a kind of cognitive dissonance: because we are leaving one door open wider than the others, we feel that all the doors may as well be open.

Let's consider another example. Say you use a popular email provider, and the contents of your email are scanned to serve you ads. Not only that, but this hypothetical email service also emails you once a week with products it thinks will improve your life.

That's annoying, but we can just delete those emails. Migrating to a new email provider that engages in less shady behaviour is an arduous task, so we might just say to ourselves: forget it, I'm stuck. As soon as we admit that, it becomes easier to ask why we bother with the other tools we use, like the chat applications or social networks that fingerprint users and sell their data. In this way, we slip into the slippery-slope fallacy, and then get on with our lives in the most convenient manner possible.


At the heart of these concessions is a pull toward whatever makes our lives more convenient. For myself, I turned on Face ID when I switched from Android to iPhone this year. I had avoided using it on the other iDevice I've had for a while, finding it a bit creepy, but decided to try it because, well, see above.

Guess what? It is wonderfully convenient. I prefer it to typing in a PIN code. And since I live a psychologically safe, ordinary life that poses little risk to the status quo, it is easy for me to succumb to the "I have nothing to hide" fallacy. So what if biometric scans of my face and fingerprints live in my phone's hardware? (Apple says they never leave the device, but still.) How's that ever going to affect me?

Privacy: a spectrum

This sort of black-and-white thinking on privacy concerns me. Let's instead consider our personal efforts to maintain privacy as a spectrum. On it, we can progressively improve, using a mixture of tools, websites, and applications. If one service turns out to be a bad apple, that doesn't mean we need to throw out the entire batch (of efforts, software, tools, and so on).

Interestingly, privacy is becoming both somewhat hip and a selling point for some businesses. Some hot-shot billionaire says "use Signal" and hordes of people sign up for Signal. That's great, I guess. Apple's massive billboards tout privacy as a selling point for its platform. Features like email masking and the "Ask App Not to Track" prompt have been added to iOS. These actions do make a difference.

Maybe Apple does have a better track record of not selling users' data than other companies. Maybe more people are using end-to-end encrypted chat applications. These are steps along the spectrum toward greater privacy.

That's all well and good for the layperson, but if you are technical, or on the fringe of tech, you may have encountered privacy zealots. People like Richard Stallman are sometimes pointed to as examples of taking privacy to the extreme — or rather, of being all-in on privacy. Take a moment to peruse the "What's bad about" links on RMS's website, and you'll see a laundry list of faux pas committed by large corporations (Apple included), many of them violations of users' privacy.

I find it hard not to be cynical, whether from exposure to strong privacy advocates, a growing wariness of human nature (or is it just hyper-capitalism?), or simply from seeing how the inside of a tech organization works. When people point to a company's privacy initiatives (such as Apple's), I can't help but shrug. I'm just not optimistic or trusting. Sure, a company might genuinely offer some privacy features, and some may well work. But then I ask: how do you know what's going on behind the scenes with your data? What if a change in leadership brings in someone more willing to sell your data, or do whatever else with it? By then it's too late: your data has already been collected and is ready to be sold.

When I raise questions and concerns like these, I usually hit a dead end in the conversation. There isn't really any answer to them; my conversational partners and I all shrug as if to say: well, at that point, what can you do? And so we drop it, and go back to trading our data for free services.

Exposure to strong privacy advocates can make you feel that only an extreme stance will secure your digital life. An extreme stance likely means giving up the conveniences that keep you connected with loved ones and help you build new connections; and so, perhaps, using just one service with a bad reputation is enough to make us say to hell with it and open the door to the rest.

I think it's a matter of picking your battles. Privacy doesn't need to be all-or-nothing. While it might be difficult to quantify the positive results of choosing pro-privacy services, you can still choose. Choose private services for the most important parts of your digital life. Whatever the service may be, the adage goes: if you're not paying for it, you are the product. Personally, it took a while for that phrase to click, both in its literal meaning and in why it really matters.

While my mind often jumps to "self-host all your software", that is unrealistic even for me, with my limited time and interest in doing so. The other option, which I chronically forget, is paying smaller companies that are more likely to provide private services (provided that you are indeed paying for them). For some, the idea of paying for email is ludicrous; it has always been free, since the very first (cringey) email address I registered! At the end of the day it's still not open-source software, but there's a better chance that the service is not selling your data.

Where does that leave us? To be honest, I don’t know. I wrote this essay without any particular argument, hoping that one would emerge by the end. I suppose I have challenged myself not to think in black and white about my own personal privacy, and to take small steps when I can. But more so, I wonder how all this relates to the folks who don’t have the time, energy, finances, interest, or technical knowledge to implement similar measures for personal privacy. For so many people in these situations, having their data sold and being the product is the norm, and convenience feels fine when companies don’t seem to be affecting us in overtly nefarious ways.

Do we applaud companies like Apple just because they dare to market privacy as a feature? Is that the lowest barrier to entry toward getting the conversation going?

Perhaps approaching it from a regulation perspective will be more effective, but that in itself is debatable, and I’m not knowledgeable enough to offer much there. Some regulation and repercussions do appear to be being doled out, but if I were to guess, it’s a slow-moving system.

Personal privacy doesn’t have to be an all-or-nothing herculean effort, but it does need to be a mindful one. Each small step in this direction turns down the temperature on the pot we frogs find ourselves in. If we ever get out, I wonder whether we’ll even know what we avoided, in the end.