Well, it’s official: the IETF has deprecated SSLv3.0, which means it’s now a protocol violation to fall back to it. This is good news, since the number and variety of attacks against the protocol have been on the rise for a while now. We’d like to take the opportunity to explore how to debug web applications that use HTTPS over SSL/TLS in CloudShark.
It’s undeniable that debugging HTTP traffic is one of the most common use-cases for a packet decoder. Fortunately for users (and unfortunately for developers), encrypting HTTP is reaching an inflection point. While we’re still some time from all traffic being TLS protected, it’s already a common occurrence that many web-based data transfers are available only under secured channels.
Capturing HTTPS traffic is a common scenario
Here’s the scene: a developer is tasked with debugging some property of the customer portal. Of course, their web stack runs entirely over HTTPS, with the exception of a quick redirection for HTTP requests. The developer knows that some obscure detail is conflicting with the browser’s behavior. Normally, a quick TCP stream decode would be the obvious answer, but since the data is encrypted, it’s all out of reach.
Each web server has its own certificate public/private keypair
To get around this issue, one technique is to get a copy of the PKI X.509 certificates (usually in .pem format) from the web server and store them. CloudShark makes this easy: administrators can import RSA keys directly into CloudShark. This allows a developer to investigate a single web server running on a single IP address. Server Name Indication (SNI) does allow additional hostnames to be served by the same HTTPS server, but in general an extra certificate keypair will be needed for each website the developer wishes to view.
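As a sketch of what that certificate material looks like, the commands below generate a throwaway RSA keypair and self-signed certificate in .pem format. The hostname is a placeholder; in practice you would use the keypair already deployed on the web server. This assumes the openssl command-line tool is available.

```shell
# Create a throwaway 2048-bit RSA key and a self-signed certificate for a
# placeholder hostname (a real server already has these):
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout server-key.pem -out server-cert.pem \
    -days 30 -subj "/CN=portal.example.com"

# CloudShark needs the private key to decrypt traffic; many servers keep
# the key and certificate together in one combined PEM file:
cat server-key.pem server-cert.pem > server.pem

# Sanity-check that the key is a usable RSA key before importing it:
openssl rsa -in server-key.pem -noout -check
```

The combined server.pem is the kind of file an administrator would then import into CloudShark’s RSA key store.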
Debugging on restricted devices against multiple sites
Let’s expand the above scenario. Now the developer is debugging some property but can’t tell which of the several dozen websites the customer portal sources data from is causing the conflict. Our developer has access to a few of the certificate keypairs (allowing some time for each request to come back approved!). However, none of the third-party sites are willing to share their keypairs, and with good reason.
Another developer is tasked with debugging why their mobile application is exhibiting the same issue. The smartphone being used as the client is under contract and can’t be jailbroken. There are no good development tools available for diagnosing the web data.
Fortunately both of these problems can be solved by introducing some extra open source software. We’ll be looking at mitmproxy: Man in the Middle Proxy, which works especially well with CloudShark due to the way CloudShark manages its certificate keypairs.
The mitmproxy software project is an HTTP/HTTPS proxy with a clever addition. A normal HTTP proxy is able to faithfully proxy a web request because it can observe the contents of the client request and then rewrite them in a new connection. It can do no such thing for HTTPS, since it cannot observe the contents of the payload. Instead, a regular HTTPS proxy is little more than a NAT (Network Address Translation): the payload is copied in its original, encrypted form into a new connection, and the encrypted response from the server is returned to the client, equally unobservable.
Mitmproxy provides an HTTPS proxy that is a true proxy. The client is manually configured to trust a root signing certificate provided by the mitmproxy software. Every HTTPS connection is channeled through the proxy, for which we have a copy of the private key. The proxy can emulate every remote HTTPS certificate, since its root certificate is trusted by the client for signing. With all of these combined, we are able to decrypt every HTTPS connection the client makes.
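The trick mitmproxy performs internally can be sketched with openssl: a root CA keypair is used to sign a fresh certificate for whatever hostname the client asks for. All filenames and the hostname below are placeholders; mitmproxy does this on the fly rather than with these exact commands.

```shell
# 1. Create a root CA keypair (mitmproxy generates its own on first run):
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout ca-key.pem -out ca-cert.pem -days 30 -subj "/CN=demo-mitm-ca"

# 2. Generate a key and signing request for the site being impersonated:
openssl req -newkey rsa:2048 -nodes \
    -keyout site-key.pem -out site.csr -subj "/CN=portal.example.com"

# 3. Sign the request with the CA key, producing an emulated server cert:
openssl x509 -req -in site.csr -CA ca-cert.pem -CAkey ca-key.pem \
    -CAcreateserial -out site-cert.pem -days 30

# 4. Any client that trusts ca-cert.pem will accept site-cert.pem:
openssl verify -CAfile ca-cert.pem site-cert.pem
```

This is also why the technique only works on clients you control: without ca-cert.pem installed as trusted, the emulated certificate is rejected.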
Another nice feature of this is that devices such as game consoles, smartphones, tablets, and VoIP phones, which generally do not have developer tools available, can still drive client traffic for debugging.
A quick mitmproxy installation primer
We will not provide exact instructions due to variations between mitmproxy releases and operating system components. The general procedure was quite straightforward following the mitmproxy documentation:
# Install Ubuntu 12.04 and all security updates on a separate system (we used a VM)
apt-get install build-essential python-pip python-dev libxml2 libxml2-dev libxslt1-dev
pip install mitmproxy
pip install pyasn1
pip install flask
pip install urwid
pip install lxml
pip install pyOpenSSL==0.13
Installing the root certificate on the clients and CloudShark
The PEM file containing the root certificate and private key is ~/.mitmproxy/mitmproxy-ca.pem on the system running mitmproxy. This certificate must be installed on each client using the operating system’s certificate import utility. Specific to Apple iOS, the easiest way is to mail the .pem file to yourself as an attachment; the built-in iOS mail client will let you install the certificate simply by tapping the attachment.
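Some import utilities are pickier and want a DER-encoded certificate, or reject a file that also contains the private key. A hedged sketch of the conversion with openssl follows; the first command merely fabricates a stand-in for ~/.mitmproxy/mitmproxy-ca.pem so the example is self-contained, and on a real proxy system you would start from the actual file.

```shell
# Stand-in for ~/.mitmproxy/mitmproxy-ca.pem (key and certificate in one
# PEM file), generated here only so this sketch runs anywhere:
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout /tmp/mitm-key.pem -out /tmp/mitm-cert.pem \
    -days 30 -subj "/CN=mitmproxy"
cat /tmp/mitm-key.pem /tmp/mitm-cert.pem > mitmproxy-ca.pem

# Extract just the certificate (the private key should never be shipped
# to client devices), then convert it to DER for pickier import tools:
openssl x509 -in mitmproxy-ca.pem -out mitmproxy-ca-cert.pem
openssl x509 -in mitmproxy-ca-cert.pem -outform DER -out mitmproxy-ca.der
```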
You must set a web proxy on each client pointing to the system running mitmproxy, port 8080. (Alternatively, you can do this transparently using layer 4 redirection with iptables et al. That is beyond the scope of this primer, but such a configuration allows any device without proxy support to still be a useful client.) On iOS, this setting is stored under the Wi-Fi connection properties. Simply enter the DNS hostname or IP address of the proxy, port 8080, and nothing else. Do not specify a transport protocol in the server property.
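For command-line clients on a desktop or server, the equivalent proxy setting is conventionally made with environment variables. A minimal sketch, using a placeholder address for the mitmproxy system:

```shell
# Placeholder address for your mitmproxy system; 8080 is mitmproxy's
# default listening port.
proxy_host=192.0.2.10
export http_proxy="http://${proxy_host}:8080"
export https_proxy="http://${proxy_host}:8080"

# Proxy-aware tools (curl, wget, pip, ...) now route through mitmproxy:
# curl https://portal.example.com/
echo "$https_proxy"
```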
While you have this .pem file, it is convenient to note that this is exactly the same file CloudShark needs to decrypt the HTTPS sessions you’re about to generate. Simply import this PEM file into CloudShark while you have it handy. Give it a relevant name such as ‘mitmproxy’ or the hostname of the proxy system.
Capturing HTTPS traffic
In a designated terminal window, run mitmproxy on the proxy system. This will start the HTTPS proxy. (When you are done for the day, pressing q will shut it down.)
If all is working, you can now watch your HTTPS sessions on the mitmproxy monitor window.
Sending to CloudShark
Importing this data into CloudShark requires a separate process to capture the entire packet data in pcap format, which mitmproxy and its counterpart mitmdump are not currently able to do.
On the proxy system, in a second terminal, run the following:
tcpdump -li eth0 -w capture.cap
Go back to your client and make a new request.
This will write all of the HTTPS data to a file, capture.cap. When you are done generating your session data, press Ctrl-C in the tcpdump process to flush the file to disk, and then either drag the file into CloudShark using a web browser, or use the CloudShark Upload API to push the file right from the command line. For example,
curl -F file=@capture.cap http://cloudshark/api/v1/<api-key>/upload
Or, just install the Wireshark CloudShark Plugin and use tshark to do it in one step instead of two:
tshark -i eth0 -w capture.cap
The pcap of the HTTPS proxy is sent directly to CloudShark and a capture ID is output.
Now that CloudShark has both the capture file and the public/private certificate keypair, the entire web browsing session is available with no further work. Every request over HTTP and HTTPS, to every server the web client reached, is available as a plain-text view using the CloudShark SSL Decode functionality available in every version of CloudShark. Simply apply a display filter of tcp.port == 8080 and only the HTTPS sessions to the proxy will be visible.
A few details are worth stating explicitly:
This all works because the client is configured with the proxy’s root certificate. There is no way to observe a third party’s data, because their clients do not have the certificate installed; in this respect, TLS is working exactly as designed. This is a developer tool, and cannot be used to capture client data without complete ownership of the client hardware.
The session data captured is between the client and the proxy, not between the proxy and the real web servers. Only the layer 7 data is of real use, since everything at the lower layers (IP addresses, TCP behavior) reflects the client reaching the proxy, while it is the proxy that reaches the real web servers.