Posted on by Wladimir Palant

Nowadays it is common for locally installed applications to also offer installing browser extensions that will take care of browser integration. Securing the communication between extensions and the application is not entirely trivial, something that Logitech had to discover recently for example. I’ve also found a bunch of applications with security issues in this area. In this context, one has to appreciate RememBear password manager going to great lengths to secure this communication channel. Unfortunately, while their approach isn’t strictly wrong, it seems to be based on a wrong threat assessment and ends up investing far more effort into this than necessary.

The approach

It is pretty typical for browser extensions and applications to communicate via WebSockets. In the case of RememBear, the application listens on port 8734, so the extension creates a connection to ws://localhost:8734. After that, messages can be exchanged in both directions. So far it’s all pretty typical. The untypical part: RememBear runs TLS on top of this unencrypted connection.

So the browser extension contains a complete TLS client implemented in JavaScript. It generates a client key, and on first connection the user has to confirm that this client key is allowed to connect to the application. It also remembers the server’s public key and will reject connecting to another server.

Why use its own TLS implementation instead of letting the browser establish an encrypted connection? The browser would verify TLS certificates, whereas the scheme here is based on self-signed certificates. Also, browsers never managed to solve authentication via client keys without degrading the user experience.

The supposed threat

Now I could maybe find flaws in the forge TLS client they are using. Or criticize them for using deprecated 1024-bit RSA keys. But that would be pointless, because the whole construct addresses the wrong threat.

According to RememBear, the threat here is a malicious application disguising as RememBear app towards the extension. So they encrypt the communication in order to protect the extension, making sure that it only talks to the real application.

Now the sad reality of password managers is: once there is a malicious application on the computer, you’ve lost already. Malware does things like logging keyboard input and should be able to steal your master password this way. Even if malware is “merely” running with the user’s privileges, it can go as far as letting a trojanized version of RememBear run instead of the original.

But hey, isn’t all this at least setting the bar higher? Like, without this protection, messing with the local communication would be easier than installing a modified application? One could accept this line of argumentation of course. The trouble is: messing with that WebSocket connection is still trivial. If you check your Firefox profile directory, you will find the extension’s data under browser-extension-data/. Part of this data: the extension’s client key and the RememBear application’s public key, in plain text. Malware can easily read these out (if it wants to connect to the application) or modify them (if it wants to fake the application towards the extension). With Chrome the data format is somewhat more complicated but equally unprotected.

Rusty lock not attached to anything
Image by Joybot

The actual threat

It’s weird how the focus is on protecting the browser extension. Yet the browser extension has no data that a malicious application could steal. If anything, malware might be able to trick the extension into compromising websites. Usually however, malware applications manage to do this on their own, without help.

In fact, the far more interesting target is the RememBear application, the one with the password data. Yet protecting it against malware is a lost cause: whatever a browser extension running in the browser sandbox can do, malware can easily do as well.

The realistic threat here are actually regular websites. You see, same-origin policy isn’t enforced for WebSockets. Any website can establish a connection to any WebSocket server. It’s up to the WebSocket server to check the Origin HTTP header and reject connections from untrusted origins. If the connection is being established by a browser extension however, the different browsers are very inconsistent about setting the Origin header, so that recognizing legitimate connections is difficult.
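As a sketch of what such an Origin check could look like on the application side (the allow-list entries below are illustrative, not real extension origins):

```python
# Hypothetical allow-list; the actual values depend on the extension.
TRUSTED_ORIGINS = {
    "chrome-extension://abcdefghijklmnopabcdefghijklmnop",   # Chrome: fixed extension ID
    "moz-extension://01234567-89ab-cdef-0123-456789abcdef",  # Firefox: per-install UUID
}

def is_trusted(headers):
    """Reject any connection whose Origin header is missing or not allow-listed."""
    origin = headers.get("Origin")
    return origin is not None and origin in TRUSTED_ORIGINS
```

The sketch also shows why this is so brittle for extensions: the Firefox origin contains a UUID that differs per installation, and browsers are inconsistent about whether they send the header at all.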

In the worst case, the WebSocket server doesn’t do any checks and accepts any connection. That was the case with the Logitech application mentioned earlier: it could be reconfigured by any website.

Properly protecting applications

If the usual mechanisms to ensure connection integrity don’t work, what do you do? You can establish a shared secret between the extension and the application. I’ve seen extensions requiring you to copy a secret key from the application into the extension. Another option would be the extension generating a secret and requiring users to approve it in the application, much like RememBear does it right now with the extension’s client key. Add that shared secret to every request made by the extension and the application will be able to identify it as legitimate.
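A minimal sketch of the application-side check, assuming the extension sends the previously approved secret with every request (the names and the secret value are made up):

```python
import hmac

# Established once, e.g. when the user approved the extension in the
# application, and stored by both sides (illustrative value).
SHARED_SECRET = b"secret-approved-by-the-user"

def is_legitimate(request):
    """Accept a request only if it carries the shared secret."""
    supplied = str(request.get("secret", "")).encode("utf-8")
    # hmac.compare_digest avoids leaking the secret through timing differences.
    return hmac.compare_digest(supplied, SHARED_SECRET)
```

A website probing the WebSocket port cannot produce this secret, so its requests are rejected regardless of what the Origin header says.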

Wait, no encryption? After all, somebody called out 1Password for sending passwords in cleartext on a localhost connection (article has been removed since). That’s your typical bogus vulnerability report however. Data sent to localhost never leaves your computer. It can only be seen on your computer and only with administrator privileges. So we would again be protecting against either malware or a user with administrator privileges. Both could easily log your master password when you enter it and decrypt your password database; “protecting” localhost traffic wouldn’t achieve anything.

But there is actually an even easier solution. Using WebSockets is unnecessary: browsers implement the native messaging API, which is meant specifically to let extensions and their applications communicate. Unlike WebSockets, this API cannot be used by websites, so the application can be certain that any incoming request originates from the browser extension.

Conclusion and outlook

There is no reasonable way to protect a password manager against malware. With some luck, the malware functionality will be too generic to compromise your application. Once you expect it to have code targeting your specific application, there is really nothing you can do any more. Any protective measures on your end are easily circumvented.

Security design needs to be guided by a realistic threat assessment. Here, by far the most important threat is communication channels being taken over by a malicious website. This threat is easily addressed by authenticating the client via a shared secret, or simply using native messaging which doesn’t require additional authentication. Everything else is merely security theater that doesn’t add any value.

This isn’t the only scenario where bogus vulnerability reports prompted an overreaction however. Eventually, I want to deconstruct research scolding password managers for leaving passwords in memory when locked. Here as well, a threat scenario has been blown out of proportion.


Posted on by Wladimir Palant

After staying on Textpattern for more than ten years, the time was right for a new blog engine. It’s not that Textpattern is bad; it’s actually pretty good and rather sturdy security-wise. But perfect is the enemy of good, and a blog that consists of only static files on the server side is perfect security: no attack surface whatsoever. No PHP and no database on the server means far fewer security updates. And I can easily see locally what any modifications to the site will look like, then push to a repository that doubles as a backup, and the changes are deployed. Finally, I simply got fed up with writing Textile when everywhere else the format of choice is Markdown.

So now this blog is generated by Hugo, it’s all static files and the server can get some rest.

Screenshot of load average being 0.00

As an added bonus, the concept of page bundles means that adding images or PDFs to individual posts no longer results in an unmanageable mess. Migrating content and layout from Textpattern was fairly straightforward, with the custom RSS template that allows full blog posts in the RSS feed already being the “challenging” part.

But there are two inherently dynamic parts of a blog: search and comments. Very often, statically generated blogs will use something like Google’s custom search and Disqus to implement these. However, I didn’t want to rely on third parties, if only for privacy reasons. In addition, I’d much rather keep comments in the repository along with all the other content instead of breaking the beautiful concept with a remote, dynamically generated frame. So here is how I solved this.

Static search with lunr.js

The Hugo website has a few suggestions for implementing search functionality. After looking through these, I thought that lunr.js would be the simplest solution. However, the hugo-lunr package mentioned there turned out to be a waste of time. Its purpose is generating a list of all the content in the blog. Yet it tries to do that without considering site configuration, so it fails to guess page URIs correctly, exports the wrong taxonomy and adds binary files to the index. I eventually realized that it is much easier to generate the index with Hugo itself. The following layouts/index.json template does the job for me already:

{{ $scratch := newScratch -}}
{{ $scratch.Add "index" slice -}}
{{ range .Site.RegularPages -}}
  {{ $scratch.Add "index" (dict "uri" .RelPermalink
                                "title" .Title
                                "description" .Description
                                "categories" .Params.categories
                                "content" (.Plain | htmlUnescape)) -}}
{{ end -}}
{{ $scratch.Get "index" | jsonify -}}

You have to enable JSON format in the site configuration and you are done:

    outputs:
      home:
        - HTML
        - JSON
        - RSS

Now this isn’t an actual search index but merely a list of all content. I considered pre-building a search index but ended up abandoning the idea. A pre-built search index is larger, but that would still be acceptable thanks to compression. More importantly however, it no longer contains any of the original text. So lunr.js would give you a list of URIs as search results but nothing else: you would have neither a title nor a summary to show to the user.

End result: The search script currently used on this site will download the JSON file with all the blog contents on first invocation. It will then invoke lunr.js to build a search index and execute the search. For the search results it shows the title and summary, the latter being generated from the entire content in the same way Hugo does it. It would be nice to highlight the actual keywords found, but that would be far more complicated and lunr.js does nothing to help you with this task.

A concern I have about lunr.js is its awkward query language. While this allows for more flexibility in theory, in practice nobody will want to learn this only to use the search on some stupid blog. Instead, people might put search phrases in quotation marks, currently a certain way to get no search results.

Somewhat dynamic commenting functionality

The concept of page bundles also has the nice effect that you can put a number of comment files into an article’s directory, and a simple change to the templates will have them displayed under the article. So you can have comments in the same repository, neatly organized by article and generated statically along with all the other content. Nice!

Only issue: how do you get comments there? This is the part that’s no longer possible without some server-side code. Depending on how much you want this to be automated, it might not even be a lot of code. I ended up going for full automation, so right now I’ve got around 300 lines of Python code and additional 100 lines of templates.

Comments on my blog are always pre-moderated, which makes things easier. So when somebody submits a comment, it is merely validated and put into a queue. No connection to GitHub at this point; that would be slow and not entirely reliable. Contacting GitHub can be done when the comment is approved, and I have more patience than the average blog visitor.

Identifying the correct blog post

Each blog post has two identifiers: its URI and its directory path in the repository. Which one should be sent with the comment form and how to validate it? This question turned out to be less obvious than it seemed, e.g. because I wanted to see the title of the blog post when moderating comments, yet I didn’t want to rely on the commenter to send the correct title with the form. Getting data from GitHub isn’t an option at this stage, so I thought: why not get it from the generated pages on the server?

The comment form will now send the URI of the blog post. The comment server will use the URI to locate the corresponding index.html file, so here we already have validation that the blog post actually exists. From the file it can get the title and (via data-path attribute on the comment form) the article’s path in the repository. Another beneficial side-effect: if the blog post doesn’t have a comment form (e.g. because comments are disabled), this validation step will fail.
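A sketch of that lookup, assuming the comment form carries the repository path in a data-path attribute as described (the HTML structure below is illustrative):

```python
from html.parser import HTMLParser

class ArticleInfoParser(HTMLParser):
    """Extract the page title and the comment form's data-path attribute."""

    def __init__(self):
        super().__init__()
        self.title = None
        self.path = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "form" and "data-path" in attrs:
            self.path = attrs["data-path"]

    def handle_data(self, data):
        if self._in_title and self.title is None:
            self.title = data

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False
```

If the parser comes back without a path, the page either doesn’t exist or has no comment form, and the submission is rejected.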

Sanitizing content

Ideally, I would add comments to the repository exactly as entered by the user and leave conversion from Markdown up to Hugo. Unfortunately, Hugo doesn’t have a sanitizer for untrusted content, and the corresponding issue report is stale. So the comment server has to do the Markdown conversion and sanitization itself; comments are stored in the repository as already-safe HTML code, with rel="nofollow" added to all links. The good news: the Python-Markdown module allows disabling some syntax handlers, which I did for headings for example, since the corresponding HTML tags would have been converted to plain text by the sanitizer otherwise.
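The sanitize step can be illustrated with a tag allow-list. This stdlib-only sketch is not the actual server code (which uses Python-Markdown plus a proper sanitizer), but it shows the principle, including the rel="nofollow" treatment of links; the allowed tag set is an assumption:

```python
from html.parser import HTMLParser
from html import escape

ALLOWED = {"p", "em", "strong", "code", "pre", "blockquote", "a"}

class Sanitizer(HTMLParser):
    """Rebuild HTML keeping only allow-listed tags; escape everything else."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag not in ALLOWED:
            return  # drop the tag but keep its text content
        if tag == "a":
            href = dict(attrs).get("href", "")
            # Only keep http(s) targets, and mark every link as nofollow.
            if href.startswith(("http://", "https://")):
                self.out.append('<a href="%s" rel="nofollow">' % escape(href))
            else:
                self.out.append('<a rel="nofollow">')
        else:
            self.out.append("<%s>" % tag)

    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append("</%s>" % tag)

    def handle_data(self, data):
        self.out.append(escape(data))

def sanitize(html):
    s = Sanitizer()
    s.feed(html)
    s.close()
    return "".join(s.out)
```

Anything not on the allow-list, script tags included, degrades to plain text, which matches the behavior described above for disabled heading handlers.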

Securing moderation interface

I didn’t want to implement proper user management for the comment moderation mechanism. Instead I wanted to be given a link in the notification mail, and I would merely need to follow it to review the comment. Original thought: do some HMAC dance to sign comment data in the URL. Nope, comment data might be too large for the URL, so it needs to be stored in a temporary file for moderation. Sign comment ID instead? Wait, why bother? If the comment ID is some lengthy random string it will be impossible to guess.

And that’s what I implemented: comment data is stored in the queue under a random file name. Accessing the moderation interface is only possible if you know that file name. Bruteforcing it remotely is unrealistic, so no fancy crypto required here.
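The whole scheme boils down to very little code; secrets.token_urlsafe gives a file name that is both unguessable and safe to embed in the moderation link:

```python
import secrets

def new_comment_id():
    # 32 random bytes, i.e. 256 bits of entropy; guessing such an
    # identifier remotely is unrealistic.
    return secrets.token_urlsafe(32)
```

The queued comment is written to a file with this name, and the notification mail simply links to the moderation interface with the same identifier.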

Notifications and replies

Obviously, I wouldn’t want to put people’s email addresses into a public repository. Frankly however, I don’t think that subscribing to comments is terribly useful; comment sections of blogs simply aren’t a good place to have extended conversations. So already with Textpattern a direct reply to a comment could only come from me, and that’s the only scenario where people would get notified.

I’ve made this somewhat more explicit now, with the email field hint saying that filling it out is usually unnecessary. It is stored along with the comment data while the comment is in the moderation queue, so I can provide a reply during moderation and the comment author will receive a notification. Once moderation is done, the comment data is removed from the queue and the email address is gone forever. Works for me, your mileage may vary.

Adding a comment to GitHub

I’ve had some bad experiences with automating repository commits in the past; there are too many edge conditions here. So this time I decided to use the GitHub API instead, which turned out fairly simple. The comment server gets an access token and can then construct a commit to the repository.

Downside: adding a comment requires five HTTP requests, partly because one file needs to be modified (updating the lastmod setting of the article), but mostly because the API is very low-level. There is a high-level “all-in-one update” call only if you want to modify a single file. For a commit with multiple files you have to:

  • Create a new tree.
  • Create a commit for this tree.
  • Update master branch reference to point to the commit.
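The payloads for these calls can be built separately from the network code. A sketch following the GitHub REST API for Git data (owner, repository and file names would come from configuration; the values here are made up):

```python
def tree_payload(base_tree, files):
    """Payload for POST /repos/{owner}/{repo}/git/trees."""
    return {
        "base_tree": base_tree,
        "tree": [
            {"path": path, "mode": "100644", "type": "blob", "content": content}
            for path, content in files.items()
        ],
    }

def commit_payload(message, tree_sha, parent_sha):
    """Payload for POST /repos/{owner}/{repo}/git/commits."""
    return {"message": message, "tree": tree_sha, "parents": [parent_sha]}

def update_ref_payload(commit_sha):
    """Payload for PATCH /repos/{owner}/{repo}/git/refs/heads/master."""
    return {"sha": commit_sha}
```

Each payload feeds the next call: the tree response yields the tree SHA for the commit, and the commit response yields the SHA for the reference update.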

Altogether this means: approving a comment is expected to take a few seconds.


Posted on by Wladimir Palant

Dear Mozilla, of course I learned about your new file sharing project from the news. But it seems that you wanted to be really certain, so today I got this email:

Email screenshot

Do you still remember how I opted out of all your emails last year? Luckily, I know that email preferences of all your users are managed via Mozilla Basket and I also know how to retrieve raw data. So here it is:

Screenshot of Basket data

It clearly says that I’ve opted out, so you didn’t forget. So why do you keep sending me promotional messages? Edit (2019-04-05): Yes, that optin value is thoroughly confusing but it doesn’t mean what it seems to mean. Basket only uses it to indicate a “verified email,” somebody who either went through double opt-in once or registered with Firefox Accounts.

This isn’t your only issue however. A year ago I reported a security issue in Mozilla Basket (not publicly accessible). The essence is that subscribing anybody to Mozilla’s newsletters is trivial even if that person opted out previously. The consensus in this bug seems to be that this is “working as expected.” This cannot seriously be it, right?

Now there is some legislation that is IMHO being violated here, e.g. the CAN-SPAM Act and GDPR. And your privacy policy ends with the email address one can contact to report compliance issues. So I did.

Screenshot of Mozilla's bounce mail

Oh well…


Posted on by Wladimir Palant

TL;DR: Yes, very much.

The issue

I’ve written a number of blog posts on LastPass security issues already. The latest one so far looked into the way the LastPass data is encrypted before it is transmitted to the server. The thing is: when your password manager uploads all data to its server backend, you normally want to be very certain that the data visible to the server is useless, both to attackers who manage to compromise the server and to company employees running that server. Early last year I reported a number of issues that allowed subverting LastPass encryption with comparably little effort. The most severe issues have been addressed, so all should be good now?

Sadly, no. It is absolutely possible for a password manager to use a server for some functionality while not trusting it. However, LastPass has been designed in a way that makes taking this route very difficult. In particular, the decision to fall back to server-provided pages for parts of the LastPass browser extension functionality is highly problematic. For example, whenever you access Account Settings you leave the trusted browser extension and access a web interface presented to you by the LastPass server, something that the extension tries to hide from you. Some other extension functionality is implemented similarly.

The glaring hole

So back in November I discovered an API meant to accommodate this context switch from the extension to a web application and make it transparent to the user. Not sure how I managed to overlook it on my previous strolls through the LastPass codebase but the getdata and keyplug2web API calls are quite something. The response to these calls contains your local encryption key, the one which could be used to decrypt all your server-side passwords.

There have been a number of reports in the past about that API being accessible by random websites. I particularly liked this security issue uncovered by Tavis Ormandy which exploited an undeclared variable to trick LastPass into loosening up its API restrictions. Luckily, all of these issues have been addressed, and by now it seems that only LastPass’s own domains can trigger these calls.

Oh, but the chances of some page within those domains being vulnerable aren’t exactly low! Somebody thought of that, so there is an additional security measure. The extension will normally ignore any getdata or keyplug2web calls, only producing a response once after this feature is unlocked. And it is unlocked on explicit user actions such as opening Account Preferences. This limits the danger considerably.

Except that the action isn’t always triggered by the user. There is a “breach notification” feature where the LastPass server will send notifications with arbitrary text and link to the user. If the user clicks the link here, the keyplug2web API will be unlocked and the page will get access to all of the user’s passwords.

The attack

LastPass is run by LogMeIn, Inc., which is based in the United States. So let’s say the NSA knocks on their door: “Hey, we need your data on XYZ so we can check their terrorism connections!” As we know by now, the NSA does these things, and it happens to random people as well, despite them not having any ties to terrorism. LastPass data on the server is worthless on its own, but the NSA might be able to pressure the company into sending a breach notification to this user. It’s not hard to choose a message in such a way that the user will be compelled to click the link, e.g. “IMPORTANT: Your Google account might be compromised. Click to learn more.” Once they click, it’s all over: my proof-of-concept successfully downloaded all the data and decrypted it with the key provided. The page can present the user with an “All good, we checked it and your account isn’t affected” message while the NSA walks away with the data.

The other scenario is of course a rogue company employee doing the same on their own. Here LastPass claims that there are internal processes to prevent employees from abusing their power in such a way. It’s striking however how their response mentions “a single person within development” — does it include server administrators or do we have to trust those? And what about two rogue employees? In the end, we have to take their word on their ability to prevent an inside job.

The fix

I reported this issue via Bugcrowd on November 22, 2018. As of the LastPass release from February 28, 2019, this issue is considered resolved. The way I read the change, the LastPass server is still able to send users breach notifications with text and image that it can choose freely. Clicking the button (button text determined by the server) will still give the server access to all your data. Now there is additional text however, saying: “LastPass has detected that you have used the password for this login on other sites, too. We recommend going to your account settings for this site, and creating a new password. Use LastPass to generate a unique, strong password for this account. You can then save the changes on the site, and to LastPass.” Ok, I guess this limits the options for social engineering slightly…

No changes to any of the other actions which will provide the server with the key to decrypt your data:

  • Opening Account Settings, Security Challenge, History, Bookmarklets, Credit Monitoring
  • Linking to a personal account
  • Adding an identity
  • Importing data if the binary component isn’t installed
  • Printing all sites

Some of these actions will prompt you to re-enter your master password. That’s merely security theater however: you can check that the g_local_key global variable is already set, which is all they need to decrypt your data.

One more comment on the import functionality: supposedly, a binary component is required to read a file. If the binary component isn’t installed, LastPass will fall back to uploading your file to the server. The developers apparently missed that the API to make this work locally has been part of any browser released since 2012 (yes, that’s seven years ago).


I wrote the original version of this Stack Exchange answer in September 2016. Back then it already pointed out that mixing trusted extension user interface with web applications is a dangerous design choice. It makes it hard to secure the communication channels, something that LastPass has been struggling with a lot. But beyond that, there is also lots of implicit trust in the server’s integrity here. While LastPass developers might be inclined to trust their servers, users have no reason for that. The keys to all their online identities are data that’s too sensitive to entrust any company with it.

LastPass has always been stressing that they cannot access your passwords, so keeping them on their servers is safe. This statement has been proven wrong several times already, and the improvements so far aren’t substantial enough to make it right. LastPass design offers too many loopholes which could be exploited by a malicious server. So far they didn’t make a serious effort to make the extension’s user interface self-contained, meaning that they keep asking you to trust their web server whenever you use LastPass.


Posted on by Wladimir Palant

Every now and then, politicians will demand mandatory use of real names on the web. Supposedly, this will restrict hate speech and make the discourse more civilized overall. South Korea tried this approach already and realized that there was only a marginal effect if any. It has been argued again and again that this approach doesn’t help against hate speech but damages freedom of individuals [German], but why would anybody care about facts?

I have nothing to add to the debate as such, everything has been said already. But I, like probably many others, had the impression that the debate is going on because being anonymous on the web is so easy. You have to keep in mind that the last time I did something on the web without signing with my real name was more than a decade ago. So when I now tried to establish an identity on the web not tied to my real-life identity, I was in for a huge surprise: things have changed massively! As things stand right now, being truly anonymous on the web is hardly possible at all.

Why being anonymous?

A while ago I made the decision to sign all my online communication with my real name. Many (most?) people handle it similarly. So why would anybody want to stay anonymous if not for some shady or even criminal business? Turns out, there are many very valid reasons [German]. For example:

  • Writing anything under your real name would result in personal attacks and discrimination. The article mentions the name “Fatima” which people will immediately associate with a Muslim woman and act accordingly.
  • You want to write about topics that are “controversial” in the society (as in: there is a large group that will try to silence anybody talking about it). That affects feminists, gay and transsexual people, sex workers and many others. These people might not want the “debate” to spill over into real life and result in harassment, mobbing, even job loss.
  • Some people need to exchange information about their invisible disability. Yet they don’t want the whole world to know about such private matters; their neighbors, for example, should not be able to find out.

The basics: hiding your IP address

The first step in establishing an anonymous identity is always hiding your IP address. That’s not so much because somebody might get your ISP to disclose the name behind the IP address, but simply because you probably already have another identity on the web, if only for work. If the same IP address is associated with both identities, some entities will make the connection. That’s especially true for the big players like Google or Facebook, but probably for some smaller advertising and tracking services as well. Once the connection is established, you can never know who it will be shared with and whether it will leak in the next big data breach.

You might be inclined to go with a VPN provider to achieve this goal. However, as some people already learned, even if a VPN provider claims to not keep any logs, that’s not necessarily true. In general, not all VPN services are trustworthy and it is hard to know whether one is. Also, VPN browser extensions generally don’t provide real privacy.

So your best choice to achieve this goal is installing the Tor Browser. Not only will it route all your traffic through the Tor network so that the origin of a request cannot be traced back, it will also provide a number of additional protection measures. Essentially, Tor Browser is a modified Firefox running in Private Browsing mode by default, so no data is stored locally.

The downside of using Tor (and similarly a VPN) is that you make yourself suspicious. Websites will immediately suspect that you are a spammer, so they will be extra careful before accepting anything from you. This means in particular that you will see reCAPTCHA far more often than you used to, and making it accept you as a human will take far more time. Used to clicking “I’m not a robot” and that single click being accepted? Forget about it: with Tor Browser you will have to solve ten tasks before it believes you.

Creating new accounts

This results in immediate difficulties when you try to get an email address for your new identity. For example, a privacy-minded person might want to create a ProtonMail account. Yet ProtonMail will distrust Tor users and require an additional validation step. You have the choice between SMS validation and making a donation. The former requires providing your mobile phone number which is immediately tied to your real-life identity — more on that soon. The latter requires you to make an online payment which will normally be tied to your credit card or some other identifying token.

Sidenote: It seems that US citizens are better off here and at least in the past there was the option to purchase an anonymous prepaid debit card. This article is quite a bit older however, and while the website is still online, it would be nice for somebody to confirm that you can indeed still buy and activate these cards anonymously.

Having failed to create a ProtonMail account, I tried various smaller providers, but all of them required SMS validation. Eventually, I succeeded in creating an account that merely required me to solve a CAPTCHA correctly. I could then use this email address to create further accounts. However, the next day I found my account locked due to “suspicious activity” and once again requiring SMS validation to unlock.

But why bother creating a proper email account if there is Mailinator? One might be inclined to choose a long random inbox name that nobody would be able to guess and use that as an email “account.” Ok, Mailinator locks out Tor users, but similar services such as the German-language Wegwerf-eMail-Adresse work. And sometimes this will do. Unless you are registering at GitHub of course, which makes a massive effort to recognize these temporary email addresses and won’t let you verify them.

Either way, using such a temporary address I successfully created a Twitter account. And guess what: after only a few minutes that account was locked due to “suspicious activity.” The only way to unlock it: SMS validation. There appears to be a system here. Theoretically, Twitter doesn’t require accounts to be linked to mobile phone numbers. In practice however, when I created my regular Twitter account I was also locked out after a day and forced to use SMS validation.

The trouble with SMS validation

Wherever you go, it seems that the state of the art today is sending an SMS to a phone number in order to verify an account. If you didn’t have to do it, it’s likely because the website already knows your real-life identity. This is highly problematic privacy-wise of course, as that phone number becomes the single most reliable trait that various actors can use to combine records about a person. Not only Google, Facebook and a bunch of smaller internet companies will collect (and sell or leak) data about you this way, state-level actors such as NSA will do as well. So having your different online identities tied to different phone numbers is essential now.

Now there are plenty of websites that will give you access to a public SMS inbox. These sites own several phone numbers and will let anybody see the messages received by those. Problem solved? Not quite. First of all, many websites will actively block such phone numbers. But even if they don’t, chances are that you will receive a response like “an account is already registered for this number.” That’s because there are far too few public inboxes available for the number of people who would like to use them.

There is also another issue. The supposedly privacy-minded Signal messenger won’t let you register without proving control of a phone number. I managed to register a test account using a public inbox. However, after a day somebody kicked me out of that account. If your account is linked to a public inbox, anybody can start the account recovery process and will usually succeed in taking control of it merely by proving that they can receive an SMS meant for you. So public inboxes are only good for test accounts, not for anything you don’t want falling into the wrong hands.

The same websites advertise private SMS inboxes which you can rent for a certain monthly fee. Here again, the issue is paying for the service anonymously, which is usually impossible. Also, I was told that the Telegram messenger somehow managed to recognize and block these numbers even though they aren’t publicized or shared.

So it might seem that going into a shop and buying a prepaid SIM card with cash would be your best option. Ideally, you would also buy cheap hardware for it, so that this card cannot be linked to your regular mobile phone number via your IMEI number. However, many countries have closed this loophole under the premise of fighting terrorism. For example, in Germany you used to be able to activate prepaid SIM cards online, and the data you entered was barely verified. As of summer 2017, this is no longer possible; even online activation requires showing your ID via video chat. My understanding is that similar legislation exists in all of the EU, and I was told that India and China also won’t allow anonymous SIM cards.

So it seems that having your accounts linked to your real-life identity via your mobile phone number is usually unavoidable, even if it’s not the same number you normally use. Does being worried about this make you paranoid? In theory, only law enforcement authorities should be able to request the identity behind a certain number, and that should be rather unlikely if you aren’t doing anything illegal. Then again, can mobile providers really be trusted? If they sell your location data, can you trust them to keep your name private? And even if they have the moral integrity (or rather: legal obligation) to keep that data to themselves, the direction in which the web is heading makes the data guarded by mobile providers a huge target. Will they be able to repel attacks launched by all kinds of malicious actors? Looking at how they deal with SIM swapping scams, I consider that rather unlikely.

Conclusions

Initially, I was researching how one would stay anonymous on the web. I realized however that true anonymity on the web is already almost non-existent. Sure, you can probably be anonymous in some obscure corner of the web. But being part of the mainstream discussion without leaving hints towards your identity? No longer possible. So when German law enforcement claims to be unable to prosecute online crimes such as hate speech, that’s due to a lack of motivation and/or competence rather than to “too much” freedom of speech online.

I find it highly concerning how all of your online activity is increasingly tied to your mobile phone number. Not only does the assumption that everybody owns a mobile phone exclude people. Not only does this make it very hard to properly separate different facets of your online activity from each other. It also makes mobile providers the keepers of everyone’s privacy, a role that they seem ill-equipped to fulfill.

Either way, the point should no longer be preserving our right to use the web anonymously. We should rather fight to get this right back, because somewhere along the way we lost it and nobody noticed.
