Posted by Wladimir Palant

To English-language readers: This is about a German-language presentation on the high-level impact that digitalization has on social interactions.


I want to give a rough presentation, in a small group, on the influence that digitalization has on social interactions in our society. Maybe somebody will find the collected information useful.

Presentation

You can download the slides in PDF format (491.6 KB).

Sources


Posted by Wladimir Palant

Emails aren’t private, that much should be known by now. When you communicate via email, the contents are visible not only to your and the other side’s email providers, but potentially also to numerous others like the NSA, who intercept emails on the network. Encrypting emails is possible via PGP or S/MIME, but neither is particularly easy to deploy and use. Worse yet, both standards were recently found to have security deficits. So it is not surprising that people, and especially companies, look for better alternatives.

It appears that the German company FTAPI has gained a good standing in this market, at least in Germany, Austria and Switzerland. Their website keeps stressing how simple and secure their solution is. And the list of references is impressive, featuring a number of well-known names that should have very high standards when it comes to data security: Bavarian tax authorities, a bank, lawyers etc. A few years ago they even developed a “Secure E-Mail” service for Vodafone customers.

I’ve now had a glimpse at their product. My conclusion: while it definitely offers advantages in some scenarios, it also fails to deliver the promised security.

Quick overview of the FTAPI approach

The primary goal of the FTAPI product is easily exchanging (potentially very large) files. They solve it by giving up on decentralization: data is always stored on a server and both sender and recipient have to be registered with that server. This offers clear security benefits: there is no data transfer between servers to protect, and offering the web interface via HTTPS makes sure that data upload and download are private.

But FTAPI goes beyond that: they claim to follow the Zero Knowledge approach, meaning that data transfers are end-to-end encrypted and not even the server can see the contents. For that, each user defines their “SecuPass” upon registration which is a password unknown to the server and used to encrypt data transfers.

Why bother doing crypto in a web application?

The first issue is already shining through here: your data is being encrypted by a web application in order to protect it from the very server that delivers this web application to you. But the server can easily give you a slightly modified web application, one that will steal your encryption key for example! With several megabytes of JavaScript code executing here, there is no way you will notice the difference. So the server administrator can read your emails after all, e.g. when ordered to by the company management; the whole encryption voodoo doesn’t change that fact. Malicious actors who somehow gained access to the server will have even fewer scruples of course. Worse yet, malicious actors don’t need full control of the server: a single Cross-Site Scripting (XSS) vulnerability is sufficient to compromise the web application.

Of course, FTAPI also offers a desktop client as well as an Outlook add-in. While I haven’t looked at either, it is likely that they don’t share these drawbacks. The only trouble: FTAPI fails to communicate that the encryption is only secure outside of the browser. The standalone clients are promoted as convenience improvements, not as security enhancements.

Another case of a weak key derivation function

According to the FTAPI website, there is a whitepaper describing their SecuTransfer 4.0 approach. Too bad that this whitepaper isn’t public, and requesting it (at least in my case) didn’t yield any response whatsoever. Then again, figuring out the building blocks of SecuTransfer took merely a few minutes.

Your SecuPass is used as input to the PBKDF2 algorithm in order to derive an encryption key. That encryption key can be used to decrypt your private RSA key as stored on the server. And the private RSA key in turn can be used to recover the encryption keys for incoming files. So somebody able to decrypt your private RSA key will be able to read all your encrypted data stored on the server.

If somebody in control of the server wants to read your data, how do they decrypt your RSA key? Why, by guessing your SecuPass of course. While the advice is to choose a long password here, humans are very bad at choosing good passwords. In my previous article I already explained why LastPass doing 5,000 PBKDF2 iterations isn’t a real hurdle preventing attackers from guessing your password. Yet FTAPI is doing merely 1,000 iterations, which makes brute-force attacks even faster, by a factor of 5 at least (actually more, because FTAPI is using SHA-1 whereas LastPass is using SHA-256). This means that even the strongest passwords can be guessed within a few days.
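To make this concrete, here is a minimal sketch of the derivation step in Python. The hash function (SHA-1) and the iteration count (1,000) match what I observed; the salt handling and the key length are assumptions of this sketch.

import hashlib

def derive_secupass_key(secupass: str, salt: bytes) -> bytes:
    # Observed parameters: PBKDF2 with SHA-1 and merely 1,000 iterations.
    # Salt and key length (32 bytes) are assumptions of this sketch.
    return hashlib.pbkdf2_hmac("sha1", secupass.encode("utf-8"), salt, 1000, dklen=32)

# An attacker in control of the server simply runs this function over a list
# of password guesses and tries each resulting key against the stored
# encrypted RSA key. At 1,000 iterations, this loop is very cheap.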

Mind you, PBKDF2 isn’t a bad algorithm, and with 100,000 iterations (at the very least, more is better) it can currently be considered reasonably secure. These days there are better alternatives however — bcrypt and scrypt are the fairly established ones, whereas Argon2 is the new hotness.
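The cost of PBKDF2 scales linearly with the iteration count, for the defender and the attacker alike. A quick self-contained measurement (with hypothetical inputs) makes the difference tangible:

import hashlib
import time

def time_pbkdf2(iterations: int) -> float:
    # Time a single PBKDF2-SHA256 derivation with the given iteration count.
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"some password guess", b"some salt", iterations)
    return time.perf_counter() - start

# The second call takes roughly a hundred times longer than the first one,
# and that is exactly the extra cost an attacker pays per guess.
print(f"{time_pbkdf2(1_000):.4f}s vs. {time_pbkdf2(100_000):.4f}s")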

And the key exchange?

One of the big challenges with end-to-end encryption is always the key exchange — how do I know that the public key belongs to the person I want to communicate with? S/MIME solves it via a costly public trust infrastructure, whereas PGP relies on a network of key servers with its own set of issues. At first glance, FTAPI dodges this issue with its centralized architecture: the server makes sure that you always get the right public key.

Oh, but we didn’t want to trust the server. What if the server replaces the real public key with the server administrator’s (or worse: a hacker’s), making our files visible to them? There is also a less obvious issue: FTAPI still uses insecure email for bootstrapping. If you aren’t registered yet, email is how you are notified that you received a file. If somebody manages to intercept that email, they will be able to register at the FTAPI server and receive all the “secure” data transfers meant for you.

Final notes

While sharing private data via an HTTPS-protected web server clearly has benefits over sending it via email, the rest of FTAPI’s security measures are mostly the appearance of security right now. Partly, this is a failure on their end: 1,000 PBKDF2 iterations offered way too little protection even in 2009, back when the FTAPI prototype was created. But there are also fundamental issues here: real end-to-end encryption is inherently complicated, particularly when it comes to solving key exchange securely. And of course, end-to-end encryption is impossible to implement in a web application, so you have to choose between convenience (zero overhead: nothing to install, just open the site in your browser) and security.


Posted by Wladimir Palant

Disclaimer: I created PfP: Pain-free Passwords as a hobby; it could be considered a LastPass competitor in the widest sense. I am genuinely interested in the security of password managers, which is the reason both for my own password manager and for this blog post on LastPass shortcomings.

TL;DR: LastPass fanboys often claim that a breach of the LastPass server isn’t a big deal because all data is encrypted. As I show below, that’s not actually the case and somebody able to compromise the LastPass server will likely gain access to the decrypted data as well.

A while back I stated in an analysis of the LastPass security architecture:

So much for the general architecture, it has its weak spots but all in all it is pretty solid and your passwords are unlikely to be compromised at this level.

That was really stupid of me; I couldn’t have been more wrong. As it turned out, I had relied too much on the wishful thinking dominating the LastPass documentation. In January this year I took a closer look at the LastPass client/server interaction and found a number of unpleasant surprises. Some of the issues went very deep and it took LastPass a while to get them fixed, which is why I am only writing about this now. A bunch of less critical issues remain unresolved as of this writing, so I cannot disclose their details yet.

Cracking the encryption

In 2015, LastPass suffered a security breach. The attackers were able to extract some data from the server, yet LastPass was confident:

We are confident that our encryption measures are sufficient to protect the vast majority of users. LastPass strengthens the authentication hash with a random salt and 100,000 rounds of server-side PBKDF2-SHA256, in addition to the rounds performed client-side. This additional strengthening makes it difficult to attack the stolen hashes with any significant speed.

What this means: anybody who gets access to your LastPass data on the server will have to guess your master password. The master password isn’t merely necessary to authenticate against your LastPass account, it also serves to encrypt your data locally before it is sent to the server. The encryption key here is derived from the master password, and neither is known to the LastPass server. So attackers who managed to compromise this server will have to guess your master password. And LastPass uses the PBKDF2 algorithm with a high number of iterations (LastPass prefers calling them “rounds”) to slow down the verification of guesses. For each guess, one has to derive the local encryption key with 5,000 PBKDF2 iterations, hash it, then apply another 100,000 PBKDF2 iterations which are normally added by the LastPass server. Only then can the result be compared to the authentication hash stored on the server.

So far so good: 100,000 PBKDF2 iterations should be OK, and it is in fact the number used by the competitor 1Password. But that protection only works if the attackers are stupid enough to verify their master password guesses via the authentication hash. As mentioned above, the local encryption key is derived from your master password with merely 5,000 PBKDF2 iterations. And it is used to encrypt various pieces of data: passwords, private RSA keys, OTPs etc. The LastPass server stores these encrypted pieces of data without any additional protection. So a clever attacker would guess your master password by deriving the local encryption key from a guess and trying to decrypt some data. Worked? Great, the guess is correct. Didn’t work? Try another guess. This approach speeds up guessing master passwords by a factor of 21.
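A minimal sketch of the two verification paths, using Python’s hashlib; the iteration counts are the real ones, while the choice of the account name as PBKDF2 salt is an assumption of this sketch:

import hashlib

EMAIL = b"victim@example.com"  # hypothetical account name

def local_encryption_key(guess: bytes) -> bytes:
    # Client-side derivation: merely 5,000 PBKDF2-SHA256 iterations
    # (using the account name as salt is an assumption of this sketch).
    return hashlib.pbkdf2_hmac("sha256", guess, EMAIL, 5_000, dklen=32)

# Path 1, via the authentication hash: 5,000 client-side iterations plus
# 100,000 server-side iterations, 105,000 in total per guess.
# Path 2, via stolen encrypted data: derive local_encryption_key(guess) and
# attempt to decrypt some blob, only 5,000 iterations per guess.
print((5_000 + 100_000) / 5_000)  # -> 21.0, the speed-up factor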

So, what kind of protection do 5,000 PBKDF2 iterations offer? Judging by these numbers, a single GeForce GTX 1080 Ti graphics card (cost factor: less than $1000) can be used to test 346,000 guesses per second. That’s enough to go through a database of over a billion passwords known from various website leaks in barely more than an hour. And even if you don’t use any of the common passwords, the average password strength is estimated to be around 40 bits. So on average an attacker would need to try half of 2⁴⁰ passwords before hitting the right one, which can be achieved in roughly 18 days. Depending on who you are, spending that much time (or adding more graphics cards) might be worth it. Of course, the more typical approach would be for the attackers to test guesses on all accounts in parallel, so that the accounts with weaker passwords are compromised first.
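For the record, the back-of-the-envelope calculation behind those 18 days:

guesses_per_second = 346_000      # a single GTX 1080 Ti at 5,000 iterations
average_guesses = 2 ** 40 // 2    # half the keyspace of a ~40-bit password
seconds = average_guesses / guesses_per_second
print(seconds / 86_400)           # roughly 18.4 days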

Statement from LastPass:

We have increased the number of PBKDF2 iterations we use to generate the vault encryption key to 100,100. The default for new users was changed in February 2018 and we are in the process of automatically migrating all existing LastPass users to the new default. We continue to recommend that users do not reuse their master password anywhere and follow our guidance to use a strong master password that is going to be difficult to brute-force.

Extracting data from the LastPass server

Somebody extracting data from the LastPass server sounds too far-fetched? This turned out to be easier than I expected. When I tried to understand the LastPass login sequence, I noticed the script https://lastpass.com/newvault/websiteBackgroundScript.php being loaded. That script contained some data on the logged-in user’s account, in particular the user name and a piece of encrypted data (the private RSA key). Any website could load that script; the only protection in place was based on the Referer header, which was trivial to circumvent. So when you visited any website, that website could get enough data on your LastPass account to start guessing your master password (with only the weak client-side protection applying here, of course). And as if that wasn’t enough, the script also contained a valid CSRF token, which allowed that website to change your account settings for example. Ouch…

To me, the most surprising thing about this vulnerability is that no security researcher had found it before. Maybe nobody expected that a script request receiving a CSRF token doesn’t actually validate this token? Or were they confused by the inept protection used here? Beats me. Either way, I’d consider the likelihood rather high that some blackhat discovered this vulnerability independently. It’s up to LastPass to check whether it was already being exploited; this is an attack that would leave traces in their logs.

Statement from LastPass:

The script can now only be loaded when supplying a valid CSRF token, so 3rd-parties cannot gain access to the data. We also removed the RSA sharing keys from the scripts generated output.

The “encrypted vault” myth

LastPass consistently calls its data storage the “encrypted vault.” Most people assume, as I originally did myself, that the server stores your data as an AES-encrypted blob. A look at the output of https://lastpass.com/getaccts.php (you have to be logged in to see it) quickly proves this assumption incorrect however. While some pieces of data like account names or passwords are indeed encrypted, others like the corresponding URLs are merely hex-encoded. This 2015 presentation already pointed out that the incomplete encryption is a weakness (page 66 and the following ones). While LastPass has decided to encrypt more data since then, they still don’t encrypt everything.

The same presentation points out that using ECB as the block cipher mode for encryption is a bad idea. One issue in particular: while passwords are encrypted, with ECB it is still possible to tell which of them are identical. LastPass has mostly migrated to CBC since that publication, and a look at getaccts.php shouldn’t show more than a few pieces of ECB-encrypted data (you can tell them apart because ECB is encoded as a single base64 blob like dGVzdHRlc3R0ZXN0dGVzdA== whereas CBC is two base64 blobs starting with an exclamation mark like !dGVzdHRlc3R0ZXN0dGVzdA==|dGVzdHRlc3R0ZXN0dGVzdA==). It’s remarkable that ECB is still used for some (albeit less critical) data however. Also, the encryption of older credentials isn’t being “upgraded”, it seems: if they were encrypted with AES-ECB originally, they stay that way.
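This encoding difference means anybody can check their own getaccts.php output for leftover ECB data. A small sketch based on the format described above:

def classify_blob(blob: str) -> str:
    # CBC values: "!" + base64(IV) + "|" + base64(ciphertext).
    # Legacy ECB values: a single bare base64 blob.
    if blob.startswith("!") and "|" in blob:
        return "AES-CBC"
    return "AES-ECB (legacy)"

print(classify_blob("dGVzdHRlc3R0ZXN0dGVzdA=="))                            # AES-ECB (legacy)
print(classify_blob("!dGVzdHRlc3R0ZXN0dGVzdA==|dGVzdHRlc3R0ZXN0dGVzdA=="))  # AES-CBC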

I wonder whether the authors of this presentation got their security bug bounty retroactively now that LastPass has a bug bounty program. They uncovered some important flaws there, many of which still exist to some degree. This work deserves to be rewarded.

Statement from LastPass:

The fix for this issue is being deployed as part of the migration to the higher iteration count in the earlier mentioned report.

A few words on backdoors

People losing access to their accounts is apparently an issue with LastPass, which is why they have been adding backdoors. These backdoors go under the name “One-Time Passwords” (OTPs) and can be created on demand. Good news: LastPass doesn’t know your OTPs, they are encrypted on the server side. So far so good, as long as you keep the OTPs you created in a safe place.

There is a catch however: one OTP is created implicitly by the LastPass extension to aid account recovery. This OTP is stored on your computer and retrieved by the LastPass website when you ask for account recovery. As a consequence, whenever LastPass needs to access your data (e.g. because US authorities requested it), they can instruct their website to silently ask the LastPass extension for that OTP, and you won’t even notice.

Another consequence here: anybody with access to both your device and your email can gain access to your LastPass account. This is a known issue:

It is important to note that if an attacker is able to obtain your locally stored OTP (and decrypt it while on your pc) and gain access to your email account, they can compromise your data if this option is turned on. We feel this threat is low enough that we recommend the average user not to disable this setting.

I disagree with the assessment that the threat here is low. Many people have had their co-workers play a prank on them because they left their computer unlocked. Next time, one of these co-workers might not send a mail in your name but rather use account recovery to gain access to your LastPass account and change your master password.

Statement from LastPass:

This is an optional feature that enables account recovery in case of a forgotten master password. After reviewing the bug report, we’ve added further security checks to prevent silent scripted attacks.

Conclusion

As this high-level overview demonstrates: if the LastPass server is compromised, you cannot expect your data to stay safe. While in theory you shouldn’t have to worry about the integrity of the LastPass server, in practice I found a number of architectural flaws that allow a compromised server to gain access to your data. Some of these flaws have been fixed, but more remain. One of the more obvious flaws is the Account Settings dialog, which belongs to the lastpass.com website even if you are using the extension. That’s something to keep in mind whenever that dialog asks you for your master password: there is no way to know that your master password won’t be sent to the server without PBKDF2 protection applied to it first. In the end, the LastPass extension depends on the server in many non-obvious ways, too many for it to stay secure in case of a server compromise.

Statement from LastPass:

We greatly appreciate Wladimir’s responsible disclosure and for working with our team to ensure the right fixes are put in place, making LastPass stronger for our users. As stated in our blog post, we’re in the process of addressing each report, and are rolling out fixes to all LastPass users. We’re in the business of password management; security is always our top priority. We welcome and incentivize contributions from the security research community through our bug bounty program because we value their cyber security knowledge. With their help, we’ve put LastPass to the test and made it more resilient in the process.


Posted by Wladimir Palant

Today, I found this email from Google in my inbox:

We routinely review items in the Chrome Web Store for compliance with our Program policies to ensure a safe and trusted experience for our users. We recently found that your item, “Google search link fix,” with ID: cekfddagaicikmgoheekchngpadahmlf, did not comply with our Developer Program Policies. Your item did not comply with the following section of our policy:

We may remove your item if it has a blank description field, or missing icons or screenshots, and appears to be suspicious. Your item is still published, but is at risk of being removed from the Web Store.

Please make the above changes within 7 days in order to avoid removal.

Not sure why Google chose the wrong email address to contact me about this (the account is associated with another email address), but luckily this email found me. I opened the extension listing: the description is there, as is the icon. What’s missing is a screenshot, simply because creating one for an extension without a user interface isn’t trivial. No problem, I spent a bit of time making something that will do to illustrate the principle.

And then I got another mail from Google, exactly 2 hours 30 minutes after the first one:

We have not received an update from you on your Google Chrome item, “Google search link fix,” with ID: cekfddagaicikmgoheekchngpadahmlf, item before the expiry of the warning period specified in our earlier email. Because your item continues to not comply with our policies stated in the previous email, it has now been removed from the Google Chrome Web Store.

I guess Mountain View must be moving at extreme speeds, which is why time goes by way faster over there — relativity theory in action. Unfortunately, communication at near-light speed is also problematic, which is likely why there is no way to ask questions about their reasoning. The only option is resubmitting, but:

Important Note: Repeated or egregious policy violations in the Chrome Web Store may result in your developer account being suspended or could lead to a ban from using the Chrome Web Store platform.

In other words: if I don’t understand what’s wrong with my extension, then I better stay away from the resubmission button. Or maybe my update with the new screenshot simply didn’t reach them yet and all I have to do is wait?

Anyway, dear users of my Google search link fix extension: if you happen to use Google Chrome, I sincerely recommend switching to Mozilla Firefox. No, not only because of this simple extension of course. But Addons.Mozilla.Org policies happen to be enforced in a transparent way, and appealing is always possible. Mozilla also has a good track record of keeping malicious extensions out, something that cannot be said about the Chrome Web Store (a recent example).

Update (2018-07-04): The Hacker News thread lists a bunch of other cases where extensions were removed for unclear reasons without a possibility to appeal. It seems that having a contact within Google is the only way of resolving this.

Update 2 (2018-07-04): The extension is back, albeit without the screenshot I added (it’s visible in the Developer Dashboard but not on the public extension page). Given that I didn’t get any notification whatsoever, I don’t know who to thank for this and whether it’s a permanent state or whether the extension is still due for removal in a week.

Update 3 (2018-07-04): Now I got an email from somebody at Google, thanks to a Google employee seeing my blog post here. So supposedly this was an internal miscommunication, which resulted in my screenshot update being rejected. All should be good again now and all I have to do is resubmit that screenshot.


Posted by Wladimir Palant

The short version

Ryzom is an online role-playing game. If you happen to play it, using the in-game browser is a significant risk: there is a chance that somebody will run their Lua code in your client, and bad things will happen.

Explaining Ryzom’s in-game browser

Ryzom’s in-game browser is there so that you can open links sent to you without leaving the game. It is also used to display the game’s forum as well as various other web apps. The game even allows installing web apps created by third parties. This web browser is very rudimentary: it supports only a handful of HTML tags and nothing fancy like JavaScript. But it compensates for that lack of functionality by running Lua code.

You have to consider that the Lua programming language is what powers the game’s user interface. So letting the browser download and run Lua code allows for perfect integration between websites and the user interface; in many cases users won’t even be able to tell the difference. The game even uses this functionality to hot-patch the user interface and add missing features to older clients.

The flaws

The developers realize, of course, that letting arbitrary websites run Lua code in their game client is dangerous. So they created a whitelist of trusted websites that are allowed to do it; currently these are app.ryzom.com and api.ryzom.com. And that solution would have been mostly fine if these sites weren’t full of Cross-Site Scripting (XSS) vulnerabilities.

Having an XSS vulnerability in your website is normally bad enough on its own. In this case however, these vulnerabilities allow anybody to craft a link to a trusted website that contains malicious Lua code. No need to make things too obvious: the link can be hidden behind a URL shortener. Send this link to your target, add some text that will make them want to open it — you are done.

To add insult to injury, the game won’t use HTTPS when establishing connections to trusted websites, because the developers haven’t figured out SSL support yet. So if somebody can manipulate your traffic, e.g. if you are connected to an open WiFi, they will be able to inject malicious Lua code when your Ryzom client starts up.

How bad is it?

What’s the worst thing that could happen? Given that Lua code controls the game’s user interface, a very competitive player could scramble an adversary’s interface to gain an advantage over them, clearly a rather extreme action. The more likely exploit would involve tricking a game admin into running an admin command, e.g. one that gives you tons of skill points.

But the issue here extends far beyond the game itself. Lua code can read and write arbitrary files, and it can also run external applications. So the real risk is getting your machine infested with malware, just by clicking a link in the game or by playing on an open WiFi network.

The resolution

Notifying the Ryzom developers turned out to be rather non-trivial, which is surprising for an open-source project. Initially, I asked a gamemaster, who told me to write a support mail; supposedly, my mail would be forwarded to the developers. Nine days later I still hadn’t received any response, so I created a Bitbucket issue asking whether the developers got the info — they didn’t. The issue was deemed “resolved” on the same day, by means of fixing a bunch of server-side XSS vulnerabilities.

It’s impossible to tell how complete this resolution is, with the Ryzom server-side web apps not being open source. Given the obvious lack of good security practices, I wouldn’t rely on it too much. Also, the issue about adding SSL support is still just sitting there; the last activity was six months ago. So if you are playing Ryzom, I’d recommend disabling execution of remote Lua code altogether by removing the trusted domains from Ryzom’s configuration. For that, edit the client.cfg file while Ryzom isn’t running and add the following line:

WebIgTrustedDomains  = {};

Some game features will no longer work then, such as the Info window. Also, using apps will be more complicated or even impossible. But at least you can enjoy the game without worrying about your computer getting p0wned.
