
Periscope: web page sandbox tool

Last year I made a handy little tool called Periscope. It lets you sandbox a web page and see all of what happens, without any risk to your browser. Since I was giving it a little love with some updates and UI improvements recently I decided it was high time I made a post about it 😊

Periscope is a CentOS/Red Hat targeted web app, written in NodeJS. Behind the scenes it runs an instance of Chromium (or Chrome) using the automation APIs (specifically the Puppeteer library) to drive that browser to a site of your choosing; then it records the resulting requests and responses.

The core of the app is an HTTP API with endpoints to add a new site to be sandboxed and to retrieve the results.

> curl -XPOST http://localhost:3000/targets/add -H "Content-Type: application/json" -d '{"url": ""}'

A fragment of the recorded results:

      "userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36 Edg/88.0.705.74"

On top of this is a user interface in VueJS and Bootstrap, written to be responsive and mobile-friendly, and intuitive to use.

Features include:

  • Capturing a screenshot of the page
  • Recording all headers (names and values) in every request and response
  • Recording HTTP POST parameters
  • Storing full content of all results, downloadable by the user either individually or as a set (.tar.gz archive)
  • Indexed search of the header names and values
  • Using the Don’t FingerPrint Me extension to alert on attempts to fingerprint the browser

All of this lovely stuff is available for free (MIT license) on GitHub. Enjoy!

Zero-contact recon: a game to hone target research skills

A few weeks ago I was researching an entity and wanted to be completely hands-off with my recon methods and see what I could still obtain. The answer was: a lot. A heck of a lot, some of it incredibly valuable from an attacker’s perspective. I had also had in the back of my mind an intent to come up with some interesting workshops that my good friends in the Bournemouth Uni Computing and Security Society could have fun with. From these seeds, an idea was born: challenge people to find as much as possible using these methods, awarding points based on a finding’s value from the perspective of an attacker. Although it’s still very much a work in progress, I thought it might be worth sketching out my idea for people to try it, and get feedback.

This is something I hope can be of use to many people – the relevance to the pentesting community is pretty clear, but I also want to encourage my homies on the blue team to try this. If you use these techniques against your own organisation it has the potential to be of massive benefit.

Preparation

Select a target entity with an internet presence. The game’s organisers should research this entity themselves first, both to get an idea of the potential answers that competitors might give and assist scoring, and possibly to produce a map of known systems so that the no-contact rule can be enforced.

Set up an environment suitable for monitoring the competitors’ traffic and enforcing the rule. This might consist of an access point with some firewall logging to identify anyone connecting to the target’s systems directly, but organisers can choose any other method they deem suitable. If you trust participants to respect the rules enough, you could just decide not to do this part. It’s probably a good idea to at least have a quiet place with decent internet where you won’t be disturbed for the duration, and enough space for all the people taking part.

I would suggest that all participants should have a Shodan account. For students, good news – free! (at least in the US and UK, I don’t know about other locations)

Other tools that competitors should probably brush up on are whois, dig, and of course Google (or your favourite search engine).

Gameplay

Competitors have a fixed time period to discover and record information relevant to an attacker who might wish to break into the target’s network. Points are scored for each piece of information based on how useful it is to this task. The length of the period is up to the organisers, but I think between one and two hours is sensible. The highest score achieved in the time period wins.

At the end of the period, competitors must provide a list of all their discoveries. The list must include a description of how each was found or a link to the source, sufficient for the judges to validate the finding’s accuracy. If there is insufficient information to verify a discovery, it shall receive no points.

No active connection to the target’s networks is permitted; this includes the use of VPNs and open proxies. Making active connections will be penalised, either by a points deduction or disqualification, at the judges’ discretion.

Scoring

  • Domains other than the target’s primary domain – 1pt each
      * if a domain can be confirmed as an internal AD domain – 3pts each
  • Subsidiaries (wholly or majority owned) – 1pt each
  • Joint ventures – 2pts each
  • Public FQDNs/hostnames – 1pt each
  • Internet-exposed services:
      * per unique protocol presented externally – 1pt each
      * per unique server software identified – 1pt each
      * per framework or application running on the server (e.g. presence of Struts, ColdFusion, WebLogic, WordPress) – 3pts each
      * each version with a known vulnerability:
          * RCE – 10pts each
          * other vulns – 3pts each
  • ASNs – 1pt each
  • Format of employees’ emails – 3pts
  • Internal hostname convention – 5pts
  • Internal hostnames – 2pts each
  • Internal username convention – 5pts
  • Internal usernames – 2pts each
  • Configuration fails (e.g. exposed RDP/SMB/SQL/Elastic/AWS bucket) – 5pts each
  • Individual confidential documents on the internet – 3pts each
  • Large archive or collection of confidential documents on the internet – 8pts each
  • Vendor/product name of internal products relevant to security/intrusion (e.g. AV/proxy/reverse proxy/IDS/WAF in use) – 4pts each

Any items discovered that apply to a subsidiary or joint venture are valid and to be included in the score.
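If the judges want to keep score without a spreadsheet, the rubric could be captured in a few lines of Python. This is just a sketch – the short item names are my own shorthand for the scoring table, not part of the game:

```python
# Toy scorekeeper for the zero-contact recon game.
# Point values mirror the scoring table; keys are my own shorthand.
SCORES = {
    "extra_domain": 1,
    "internal_ad_domain": 3,
    "subsidiary": 1,
    "joint_venture": 2,
    "public_fqdn": 1,
    "exposed_protocol": 1,
    "server_software": 1,
    "framework": 3,
    "vuln_rce": 10,
    "vuln_other": 3,
    "asn": 1,
    "email_format": 3,
    "hostname_convention": 5,
    "internal_hostname": 2,
    "username_convention": 5,
    "internal_username": 2,
    "config_fail": 5,
    "confidential_doc": 3,
    "confidential_archive": 8,
    "security_product": 4,
}

def total(findings):
    """findings: iterable of item names; unverified items should be excluded."""
    return sum(SCORES[item] for item in findings)

# e.g. total(["extra_domain", "vuln_rce", "config_fail"]) scores 1 + 10 + 5 = 16
```

Findings that could not be verified simply never make it into the list, which matches the no-points rule above.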

A little help?

I’m intending to playtest this as soon as possible, but I would absolutely love to get feedback, regardless of whether I’ve already tested it. Please comment or tweet at me (@http_error_418) with suggestions for how to improve it, more items that can be scored, criticism of points balance… anything!

tstats: afterburners for your Splunk threat hunting

Recently, @da_667 posted an excellent introduction to threat hunting in Splunk. The information in Sysmon EID 1 and Windows EID 4688 process execution events is invaluable for this task. Depending on your environment, however, you might find these searches frustratingly slow, especially if you are trying to look at a large time window. You may also have noticed that although these logs concern the same underlying event, you are using two different searches to find the same thing. Is there anything we can do to improve our searches? Spoiler: yes!

One of the biggest advantages Splunk grants is in the way it turns the traditional model of indexing SIEM events on its head. Instead of parsing all the fields from every event as they arrive for insertion into a behemoth of a SQL database, they decided it was far more efficient to just sort them by the originating host, source type, and time, and extract everything else on the fly when you search. It’s a superb model, but does come with some drawbacks.

Some searches that might have been fast in a database are not so rapid here. Again, because there is no database, you are not constrained to predefined fields set by the SIEM vendor – but there is nothing to keep fields with similar data having the same name, so every type of data has its own naming conventions. Putting together a search that covers three different sources for similar data can mean having to know three different field names, event codes specific to the products… it can get to be quite a hassle!

The answer to these problems is datamodels, and in particular, Splunk’s Common Information Model (CIM). Datamodels allow you to define a schema to address similar events across diverse sources. For example, instead of searching

index=wineventlog EventCode=4688 New_Process_Name="*powershell.exe" 

and

index=sysmon EventCode=1 Image=*powershell.exe 

separately, you can search the Endpoint.Processes datamodel for process_name=powershell.exe and get results for both. The CIM is a set of predefined datamodels for, as the name implies, types of information that are common. Once you have defined a datamodel and mapped a sourcetype to it, you can “accelerate” it, which generates indexes of the fields in the model. This process carries a storage, CPU and RAM cost and is not on by default, so you need to understand the implications before enabling it.

Turn it up to 11

Let’s take this example, based on the fourth search in @da_667’s blog. In my (very limited) data set, according to the Job Inspector, it took 10.232 seconds to search 30 days’ worth of data. That’s not so bad, but I only have a few thousand events here, and you might be searching millions, or tens of millions – or more!

Splunk Job Inspector showing search time and cost breakdown

What happens if we try searching an accelerated datamodel instead? Is there much of a difference?

Splunk Job Inspector information for accelerated datamodel search

Holy shitballs, yes it does. This search returned in 0.038 seconds – nearly 270x faster! What sorcery is this? Well, the command used was:

| tstats summariesonly=true count from datamodel=Endpoint.Processes where Processes.process_name=powershell.exe by Processes.process_path Processes.process

What’s going on in this command? First of all, instead of going to a Splunk index and running every event in the time range through filters to find “*powershell.exe”, my tstats command searches just the tsidx files – the accelerated indexes mentioned earlier – related to the Endpoint datamodel. Part of the indexing operation has broken out the process name into a separate field, so we can search for an explicit name rather than wildcarding the path.

The statistics argument count and the by clause work similarly to the traditional stats command, but you will note that the search specifies Processes.process_name – a quirk of the structure of data models means that where you are searching a subset of a datamodel (a dataset in Splunk parlance), you need to specify your search in the form

datamodel=DatamodelName[.DatasetName] where [DatasetName.]field_name=somevalue by [DatasetName.]field2_name [DatasetName.]field3_name

The DatasetName components are not always needed – it depends whether you’re searching fields that are part of the root datamodel or not (it took me ages to get the hang of this so please don’t feel stupid if you’re struggling with it).

Filtered, like my coffee

Just as with the searches in the Hurricane Labs blog, the output of tstats can be filtered and manipulated with the same operations.

| tstats summariesonly=true count from datamodel=Endpoint.Processes where Processes.process_name=powershell.exe NOT Processes.parent_process_name IN ("code.exe", "officeclicktorun.exe") by Processes.process_path Processes.process | `drop_dm_object_name("Processes")`

You can filter on any of the fields present in the data model, and also by time, and the original index and sourcetype. The resulting data can be piped to whatever other manipulation/visualisation commands you want, which is particularly handy for charts and other dashboard features – your dashboards will be vastly sped up if you can base them on tstats searches.

You’ll also note the macro drop_dm_object_name – this reformats the field names to exclude the Processes prefix, which is handy when you want to manipulate the data further as it makes the field names simpler to reference.

A need for speed

How do I get me some of this sweet, sweet acceleration, I hear you ask? The first thing to understand is that it needs to be done carefully. You will see an increase in CPU and I/O on your indexers and search heads, because the method involves the search head running background searches that populate the index. There will also be a noticeable increase in storage use, with the amount depending on the summary range (i.e. the time period covered by detailed indexing) and how busy your data sources are.

With this in mind, you can start looking at the Common Information Model app and the documentation on accelerating data models. I highly recommend consulting Splunk’s Professional Services before forging ahead, unless your admins are particularly experienced. The basic process is as follows:

  • Ensure that your sourcetypes are CIM compliant. For most Splunk-supported apps, this is already done.
  • Ensure that you have sufficient resources to handle the increased load
  • Deploy the CIM app
  • Enable acceleration for the desired datamodels, and specify the indexes to be included (blank = all indexes. Inefficient – do not do this)
  • Wait for the summary indexes to build – you can view progress in Settings > Data models
  • Start your glorious tstats journey
Configuration for Endpoint datamodel in Splunk CIM app
Detail from Settings > Data models

Datamodels are hugely powerful and if you skim through the documentation you will see they can be applied to far more than just process execution. You can gather all of your IDS platforms under one roof, no matter the vendor. Get email logs from both Exchange and another platform? No problem! One search for all your email! One search for all your proxy logs, inbound and outbound! Endless possibilities are yours.

One search to rule them all, one search to find them… happy Splunking!

Collecting Netscaler web logs

A little while ago I wrote about collecting AppFlow output from a Citrix Netscaler and turning it into Apache-style access logs. Whilst that might technically work, there are a few drawbacks – first and foremost that Logstash gobbles CPU cycles like nobody’s business.

Furthermore, since the Netscaler outputs separate AppFlow records for request and response, if you want a normal reverse proxy log, you need to put them back together yourself. Although I have already described how to achieve that, as you can see above it is also not terribly efficient. So, is there a better way? There certainly is!

NetScaler Web Log Client

In order to deliver responses to requests correctly, the Netscaler must track the state of connections internally. Instead of creating our own Frankenstein’s Monster of a state machine to reassemble request and response from AppFlow, it would be much simpler if we could get everything from a place that already has the combined state. The good news is that Citrix have provided a client application to do just that. The bad news is that their documentation is a little on the shonky side, and it isn’t always clear what they mean. To fill in some of the gaps, I have written a brief guide to getting it running on CentOS 7. I will assume for this that you have installed CentOS 7 Minimal and updated it through yum.

Obtain the client

Citrix’s description of where to find the client on their site isn’t terribly helpful. Here’s how to get there at the time of writing:

    Citrix Portal > Downloads > Citrix Netscaler ADC > Firmware > [your version] > Weblog Clients

Prep the Netscaler

Ensure Web logging is turned on

    System > Settings > Configure Advanced Features > Web Logging

Ensure remote authentication is OFF for the nsroot user. Not many people will run into this problem, but it’s not easy to troubleshoot – the client just shows an authentication failure even if you entered the password correctly.

    System > User Administration > Users > nsroot > Enable External Authentication

Install and configure the NSWL client

Extract the .rpm from the zip downloaded from the Citrix portal and transfer it to your CentOS system. Run the following commands as root:

    $> yum install glibc.i686
    $> rpm -i nswl_linux-[citrix_version].rpm

You need to be able to connect from the system you are running the client on to your Netscaler reverse proxy on port 3011.

    $> nc -v [netscaler_ip] 3011

Add the target IP and nsroot account credentials to the config file as described in the Citrix docs (yes, some of their instructions are accurate – just not everything):

    $> /usr/local/netscaler/bin/nswl -addns -f /usr/local/netscaler/etc/log.conf

Edit the config file to set the format, log output directory, rotation settings etc.

----extract from /usr/local/netscaler/etc/log.conf----
logFormat    NCSA %h %v %l %u %p [%t] "%r" %s %j %J %{ms}T "%{referer}i" "%{user-agent}i"
logInterval			Daily
logFileSizeLimit		1024
logFilenameFormat		/var/log/netscaler/nswl-%{%y%m%d}t.log

Note: Citrix do not appear to provide a complete breakdown of what format strings are accepted, so I used the Apache documentation as a reference. However, not all of the variables are supported by the NSWL client, and some work in a different manner than expected. For example, %D does not output microseconds, but the %{UNIT}T style does work.

Configure a service to run the NSWL client

    $> vim /etc/systemd/system/nswl.service


ExecStart=/usr/local/netscaler/bin/nswl -start -f /usr/local/netscaler/etc/log.conf	
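The unit file only needs a handful of lines around that ExecStart. A minimal sketch – the [Unit]/[Install] boilerplate and the nswl user are my assumptions, matching the useradd below; adjust to taste:

```ini
# /etc/systemd/system/nswl.service – minimal sketch, not an official Citrix unit
[Unit]
Description=NetScaler Web Log (NSWL) client
After=network-online.target

[Service]
Type=simple
User=nswl
ExecStart=/usr/local/netscaler/bin/nswl -start -f /usr/local/netscaler/etc/log.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
```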


    $> useradd -d <log directory> -s /sbin/nologin nswl
    $> chown -R nswl:nswl <log directory>
    $> systemctl daemon-reload
    $> service nswl start

SIEM configuration and log rotation

The logFormat directive shown above is similar to the standard Apache Combined format, but not identical. To parse the output, a slightly tweaked version of the regex is necessary:

^(?<src_ip>\S+) (?<site>\S+) (?:-|(?<ident>\S+)) (?:-|(?<user>\S+)) (?<dest_port>\d+) \[[^\]]*] "(?<request>[^"]+)" (?<status>\d+) (?<request_bytes>\d+) (?<response_bytes>\d+) (?<response_time>\d+) "(?:-|(?<http_referer>[^"]*))" "(?:-|(?<http_user_agent>.*))"
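To sanity-check the pattern, here is the same regex in Python form (`(?<name>…)` becomes `(?P<name>…)`), run against a made-up log line in the logFormat shown earlier – the field values are purely illustrative:

```python
import re

# Python translation of the NSWL parsing regex above.
NSWL_RE = re.compile(
    r'^(?P<src_ip>\S+) (?P<site>\S+) (?:-|(?P<ident>\S+)) (?:-|(?P<user>\S+)) '
    r'(?P<dest_port>\d+) \[[^\]]*] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'(?P<request_bytes>\d+) (?P<response_bytes>\d+) (?P<response_time>\d+) '
    r'"(?:-|(?P<http_referer>[^"]*))" "(?:-|(?P<http_user_agent>.*))"'
)

# A fabricated line in the NCSA-style format configured in log.conf:
line = ('192.0.2.10 www.example.com - - 443 [01/Oct/2019:12:00:00 +0000] '
        '"GET /index.html HTTP/1.1" 200 321 5432 87 "-" "Mozilla/5.0"')

m = NSWL_RE.match(line)
# m.group("src_ip") == "192.0.2.10", m.group("status") == "200"
```

Note how the `(?:-|…)` alternations swallow the literal "-" placeholders, leaving those capture groups empty rather than storing a dash.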

You should use a prefix pattern to match files to collect – do NOT use a suffix pattern like '*.<extension>' to track files. The NSWL client creates a new file with '.<number>' appended under many circumstances, including when the service is restarted and when the logFileSizeLimit is reached. For example, if the service was restarted while writing to 'nswl-20191001.log', it would begin writing to 'nswl-20191001.log.0'.

Make sure to take this into account when configuring log rotation – e.g. move the files elsewhere before compressing them: '$> gzip nswl-20191001.log' results in 'nswl-20191001.log.gz', which still matches the pattern 'nswl-*', so SIEM agents may consider it a new file and index it again, resulting in duplicate data.
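A quick Python illustration of why compressing in place bites you – the prefix pattern happily matches the compressed copy too (filenames are examples):

```python
import fnmatch

# Files you might find in the log directory after a restart and an
# in-place gzip: the live log, a post-restart ".0" file, and a
# compressed copy of an older log.
files = ["nswl-20191001.log", "nswl-20191001.log.0", "nswl-20191001.log.gz"]

# A prefix-style collection pattern, as recommended above:
matched = [f for f in files if fnmatch.fnmatch(f, "nswl-*")]
# matched contains all three – including the .gz, which a SIEM agent
# would then treat as a brand-new file and index a second time.
```

Moving rotated logs out of the collection directory before compressing avoids the duplicate.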


Using 1% CPU and a single process as opposed to the previous method of attempting to melt a CPU into the motherboard substrate is a definite improvement. Another plus is that it’s an officially supported tool, so in theory if something’s not working you can actually get some help with it.

I’m pretty proud of my eldritch horror of a Python script – it ran for nearly two years in production with no significant problems (unlike Logstash, which needed CPR every six weeks or so) – but it’s high time my code was retired.

Remcos lettin’ it all hang out

I don’t post (or even tweet) very much about the malware going through my sandbox; in most cases it has been blogged about by more than one person in way more depth than I could. The other day, however, one stood out in my feed for a few reasons –

  • it was the first time I’d had a Remcos sample
  • contrary to what most blogs I read about it said, its C2 traffic was in the clear
  • despite knowing that Emerging Threats had multiple signatures explicitly for Remcos, I didn’t see any of them fire
Remcos C2 traffic

The content is exactly what one would expect – an array of values holding parameters of the infected host, separated by the string “|cmd|” – but no decryption was necessary.

Someone familiar with Remcos might also have spotted another odd feature from the above screenshot; the value “” is the address of its C2 server. Given that my sandbox doesn’t have any route to a device with that address, these connections inevitably failed.

What is the point of having a private IP as a C2 server? Two possibilities spring to mind:

  • The sample was intended for a target network with an established foothold serving as an internal C2
  • It is an unfinished or test sample and the address is for the creator’s lab C2 server

I lean strongly toward the latter, given who it was sent to and that the data was also unencrypted, though I’m open to other ideas.




Intercepting SSL with squid proxy and routing to tor

There was a time when practically all malware communicated with its command and control (C2) servers unencrypted. Those days are long gone, and now much of what we would wish to see is hidden under HTTPS.  What are we to do if we want to know what is going on within that traffic?

Background

(for those who are unfamiliar with the HTTPS protocol and public key encryption)

The foundation of HTTPS is the Public Key Infrastructure. When traffic is to be encrypted, the destination server provides a public key with which messages to it can be encrypted. Only that server, which holds the linked private key, can decrypt them. Public-key (asymmetric) encryption is relatively slow, so rather than securing all traffic this way, the client and server use this stage only to negotiate, in secret, a new key for a symmetrically encrypted connection. If we wish to read the traffic, we need to obtain that symmetric encryption key.

How can this be achieved? If we are in a position to intercept the traffic, we could provide a public key that we are in control of to the client, and establish our own connection to the server. The traffic would be decrypted at our interception point with our key, and re-encrypted as we pass it to the server with the server’s key. However, because HTTPS must be able to keep information confidential, it has defences designed with this attack in mind. A key issued by a server is normally provided along with the means to verify that it is genuine, not falsified as we wish to do. The key is accompanied by a cryptographic signature from a Certificate Authority (CA), and computers and other devices using HTTPS to communicate hold a list of CAs which are considered trustworthy and authorised to verify that keys are valid. Comparing the signature against the client’s stored list enables the client to verify the authenticity of the public key.

If we wish to inspect encrypted communication, we must both intercept the secret key during the exchange, and convince the client that the certificate it receives is genuine. This post will walk through the process needed to achieve those two goals.


Starting point

I have already been running a sandbox that routes traffic via tor. It is loosely based on Sean Whalen’s Cuckoo guide, and implements the tor routing without going via privoxy, as shown below.

Initial setup

Using this method allows me to run malware without revealing the public IP of my lab environment. It has certain drawbacks – some malware will recognise that it is being routed via tor and stop functioning – but the tradeoff is acceptable to me.

squid | tor

Using squid with tor comes with some caveats that make the eventual configuration a little complicated. The version of squid I am using (3.5.23) cannot directly connect to a tor process running on the local host. In order to route via tor locally you will need a parent cache peer to which the connection can be forwarded. Privoxy is capable of serving this purpose, so initially I attempted the setup shown below:

Via privoxy

This configuration will function just fine if all you want is to proxy via squid. Unfortunately, this version of squid does not support SSL/TLS interception when a parent cache is being used. So, since we cannot use privoxy, and squid cannot route to tor on the same host, what can we do? Run tor on a different host!

Via squid and second host running tor


squid with ssl intercept/ssl-bump

In order to use squid with ssl-bump, you must have compiled squid with the --with-openssl and --enable-ssl-crtd options. The default package on Debian is not compiled this way, so to save you some time I have provided the commands I used to compile it:

apt-get source squid
cd squid3-3.5.23/
./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --libexecdir=${prefix}/lib/squid3 --srcdir=. --disable-maintainer-mode --disable-dependency-tracking --disable-silent-rules 'BUILDCXXFLAGS=-g -O2 -fdebug-prefix-map=/build/squid3-4PillG/squid3-3.5.23=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wl,-z,relro -Wl,-z,now -Wl,--as-needed' --datadir=/usr/share/squid --sysconfdir=/etc/squid --libexecdir=/usr/lib/squid --mandir=/usr/share/man --enable-inline --disable-arch-native --enable-async-io=8 --enable-storeio=ufs,aufs,diskd,rock --enable-removal-policies=lru,heap --enable-delay-pools --enable-cache-digests --enable-icap-client --enable-follow-x-forwarded-for --enable-auth-basic=DB,fake,getpwnam,LDAP,NCSA,NIS,PAM,POP3,RADIUS,SASL,SMB --enable-auth-digest=file,LDAP --enable-auth-negotiate=kerberos,wrapper --enable-auth-ntlm=fake,smb_lm --enable-external-acl-helpers=file_userip,kerberos_ldap_group,LDAP_group,session,SQL_session,time_quota,unix_group,wbinfo_group --enable-url-rewrite-helpers=fake --enable-eui --enable-esi --enable-icmp --enable-zph-qos --enable-ecap --disable-translation --with-swapdir=/var/spool/squid --with-logdir=/var/log/squid --with-pidfile=/var/run/ --with-filedescriptors=65536 --with-large-files --with-default-user=proxy --enable-build-info='Debian linux' --enable-linux-netfilter build_alias=x86_64-linux-gnu 'CFLAGS=-g -O2 -fdebug-prefix-map=/build/squid3-4PillG/squid3-3.5.23=. -fstack-protector-strong -Wformat -Werror=format-security -Wall' 'LDFLAGS=-Wl,-z,relro -Wl,-z,now -Wl,--as-needed' 'CPPFLAGS=-Wdate-time -D_FORTIFY_SOURCE=2' 'CXXFLAGS=-g -O2 -fdebug-prefix-map=/build/squid3-4PillG/squid3-3.5.23=. -fstack-protector-strong -Wformat -Werror=format-security' --with-openssl --enable-ssl-crtd
make && make install

The configuration above is identical to the precompiled one in the Debian Stretch repository, apart from the addition of the SSL options. If you are using a different distro the above command may not work.

Most of my configuration is based on the guide in the official squid documentation. My squid configuration is as follows:

acl ftp proto FTP
acl SSL_ports port 443
acl SSL_ports port 1025-65535
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl LANnet src # local network for virtual machines
acl step1 at_step SslBump1
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost manager
http_access allow LANnet
http_access deny manager
http_access allow localhost
http_access deny all
http_port 3128 intercept # intercept required for transparent proxy
https_port 3129 intercept ssl-bump \
    cert=/etc/squid/antfarm.pem \
    generate-host-certificates=on dynamic_cert_mem_cache_size=4MB
ssl_bump peek step1
ssl_bump bump all
sslcrtd_program /usr/lib/squid/ssl_crtd -s /var/lib/ssl_db -M 4MB
sslcrtd_children 8 startup=1 idle=1
access_log daemon:/var/log/squid/access.log logformat=combined
pid_filename /var/run/squid/
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
request_header_access X-Forwarded-For deny all
httpd_suppress_version_string on
always_direct allow all

Use the SSL certificate generation process shown in the linked guide. Once you have created the .pem file, copy the section from -----BEGIN CERTIFICATE----- to -----END CERTIFICATE----- into a new file with the extension .crt.
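If you would rather script that copy-paste step, a small sketch (the file names are hypothetical – use whatever your .pem is actually called):

```python
import re

def extract_certificate(pem_text: str) -> str:
    """Pull the public certificate block out of a combined .pem file,
    leaving the private key behind."""
    m = re.search(
        r"-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----",
        pem_text,
        re.DOTALL,
    )
    if m is None:
        raise ValueError("no certificate block found")
    return m.group(0) + "\n"

# Usage (hypothetical file names):
# with open("antfarm.pem") as f:
#     crt = extract_certificate(f.read())
# with open("antfarm.crt", "w") as f:
#     f.write(crt)
```

Only the certificate block ends up in the .crt file, so the private key never touches the guest VM.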

A few notes here:

  • The ‘intercept’ keyword is necessary if you are using iptables to redirect ports to squid as a transparent proxy. If you configure your client to explicitly use a proxy, you should not use it.
  • The always_direct clause is used because we are routing squid’s output to another host (running tor) as the default gateway. If you wanted to use the squid → privoxy → tor configuration locally, you would use ‘never_direct’ instead.
  • The path for the ssl_crtd tool in Debian is /usr/local/squid/ssl_crtd – no libexec.
  • When setting permissions for the cache directories in Debian, use “proxy:proxy” instead of “squid:squid” as this is the default user that Debian creates to run the squid service.

In order for the virtual machine to treat the falsified public keys as genuine, we must instruct it to trust the certificate created above. For a Windows 7 host like mine, double-click the .crt file and import the certificate into the Trusted Root Certification Authorities store.

Importing a cert

With squid set up and certificate imported, you must then configure iptables on the hypervisor host to redirect traffic through squid.

iptables -t nat -A PREROUTING -i virbr0 -p tcp --dport 80 -j REDIRECT --to-port 3128
iptables -t nat -A PREROUTING -i virbr0 -p tcp --dport 443 -j REDIRECT --to-port 3129

where virbr0 is the name of the virtual interface in QEMU. You should adjust interface name and destination ports as required for your setup.

tor service

On the second host I have installed tor (version from Debian Stretch repo). This is configured with ports to listen for TCP and DNS connections in /etc/tor/torrc:


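A minimal torrc for this purpose might look like the following sketch – the TransPort value matches the iptables redirect in the next step, while the DNSPort number and listen addresses are assumptions you should adapt to your own network:

```
# /etc/tor/torrc – transparent TCP proxy and DNS listener (sketch)
TransPort 8081
DNSPort 5353
```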
Then with iptables, inbound traffic from the hypervisor host is redirected to tor:

-A PREROUTING -s -i eth0 -p tcp -j REDIRECT --to-ports 8081

Since the objective is to keep my real IP hidden, care must be taken to ensure the host’s routing does not leak information. In /etc/network/interfaces, instead of specifying a gateway, I added two routes:

up route add -net netmask gw
up route add -net netmask gw

This causes all traffic not intended for my internal network to be routed to the host running the tor service. I have then configured my firewall so that it only allows connections reaching into this VLAN, or from the tor host – not from the malware VM hypervisor. When updates are required, connectivity can be enabled temporarily, with the VMs paused or shut off. Alternative techniques include allowing the hypervisor host to update via tor (if I didn’t mind it being slow), or routing the traffic from the VMs without NAT and denying anything outbound from the VM network on my core router, but that’s something to look at another day.

With the gateways set up, the routing for the VM interface can then be applied on the hypervisor host:

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i virbr0 -j ACCEPT

After applying these rules you should have a fully functioning TLS/SSL intercept routed via tor. To test, start by attempting to resolve a few hostnames from the VM and verify that the traffic is hitting your tor service host BEFORE giving any web pages a spin. Move on to HTTP/HTTPS traffic once you are sure DNS is working correctly.


Once you have a functioning setup you should expect to see both HTTP and HTTPS URLs appearing in your squid access log. In addition, if you perform a packet capture on the hypervisor virtual interface (virbr0 in my case), you can use the key generated earlier to view the decrypted traffic in Wireshark. You will need to copy the private key section of the .pem file to a new file to use in Wireshark. When entering the protocol as described in the link above, use ‘http’ in lowercase – uppercase will not work.

importing an SSL key in wireshark

decrypted output of call to

BSides Workshop “My log obeys commands – Parse!”

Better late than never, as they say. Last week I went to BSides London, which was pretty awesome. In between hanging out with all sorts of awesome people and downing mojitos, I had the opportunity to present a workshop. It seemed to go pretty well – though I have definitely learned enough to improve it for next time.

The short version is that it was an introduction to the basic principles and techniques of log parsing, for people at the level of a junior SOC analyst. Minimal regex knowledge required.

Although I don’t have a recording of the workshop, I’m putting the slides up here in case they’re of use to anyone. Enjoy! If you have any questions, please tweet @http_error_418 😊

My log obeys commands – Parse!

Sandbox stealth mode: countering anti-analysis

20,000 Leagues Under The Sand – part 6

read part 5

As long as there are robbers, there are going to be cops. The bad guys know perfectly well that people will be trying to identify their malware, and have all sorts of anti-analysis tricks up their sleeves to evade detection. Malware will very often perform checks for common analysis tools and stop running if it identifies their presence. Since one of the most fundamental tools for a malware analyst is the use of a virtual machine, it is the subject of numerous and varied detection attempts in many families of malware.

Simple strings

In its default configuration, a virtual machine is likely to have a wide range of indicators of its true nature. For example, it is common for standard peripheral names to contain hints (or outright declarations) that they are virtual.

VirtualBox DVD indicator

VirtualBox DVD drive

This is likewise the case for QEMU/KVM among others. As well as peripheral devices, the CPU vendor information may also be an indicator:

Device Manager processor info in QEMU/KVM

Less obvious to casual browsing but still perfectly simple for code running on the system to detect are features such as the CPUID Hypervisor Bit, MAC address, and registry key indications such as the presence of BOCHS or VirtualBox BIOS information in the registry.
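To illustrate how little effort the MAC address check demands, here is a minimal Python sketch of the detection from the malware's side of the fence; the OUI list is illustrative, not exhaustive:

```python
import uuid

# OUI prefixes commonly assigned to virtual NICs (illustrative, not exhaustive)
VM_MAC_PREFIXES = {
    "08:00:27",              # VirtualBox
    "52:54:00",              # QEMU/KVM default
    "00:0c:29", "00:50:56",  # VMware
}

def looks_like_vm(mac: int) -> bool:
    """Return True if the MAC's vendor prefix matches a known hypervisor."""
    raw = "%012x" % mac
    prefix = ":".join(raw[i:i + 2] for i in range(0, 6, 2))
    return prefix in VM_MAC_PREFIXES

# Check this host's primary MAC as reported by the standard library
print(looks_like_vm(uuid.getnode()))
```

Randomising the guest's MAC outside these ranges defeats this particular check, but as with everything here it is only one indicator among many.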

SystemBiosVersion registry value

SystemBiosVersion registry value

These detections depend on the code of the hypervisor; in some cases they can be overcome by specifying particular configuration values, and in others they can only be solved by modifying the source code of the hypervisor and recompiling it. Fortunately for my choice of QEMU/KVM, many people have already looked at this particular problem and some have been generous enough to publish their solutions. There is also a fair amount of information out there for VirtualBox (partly because Cuckoo favours this hypervisor), and some for VMware ESXi.
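For QEMU/KVM under libvirt, one widely published configuration-level mitigation for the CPUID check is to hide the KVM signature in the domain XML. Note this addresses only the hypervisor CPUID leaf; device-name strings such as the DVD drive and BIOS registry values need separate treatment:

```xml
<features>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```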

Bad behaviour

Another means of detecting an analysis environment is to collect information indicating the behaviour and use of the system. As discussed in part 4 of this series, simulating the presence of a user is an important ability to counter this evasion method. You should also consider environmental factors such as uptime (it is improbable that a user would boot a system and immediately run malware; some samples look for a minimum uptime period).

Presence of the abnormal, absence of the normal

One of the side effects of Windows being engineered to be helpful is that it leaves behind traces of a user’s activity everywhere. In addition, people are messy. They leave crap scattered all over their desktop, fill their document folders with junk, and run all sorts of unnecessary processes. Looking for evidence of this is quite simple, and malware has been known to refuse to run if there are insufficient recent documents, or very few running processes.
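To make the "insufficient recent documents" check concrete, here is a hedged Python sketch of the kind of heuristic a sample might apply – useful to know when deciding how to populate your guest. The path and the 30-day threshold are illustrative assumptions, not taken from any particular family:

```python
import os
import time

def recent_docs_count(recent_dir, max_age_days=30):
    """Count files in a 'Recent items' folder modified within max_age_days.

    On Windows this folder would typically be
    %APPDATA%\\Microsoft\\Windows\\Recent; the path and threshold here
    are illustrative.
    """
    cutoff = time.time() - max_age_days * 86400
    try:
        entries = list(os.scandir(recent_dir))
    except OSError:
        return 0  # folder missing or unreadable
    return sum(1 for e in entries if e.is_file() and e.stat().st_mtime >= cutoff)
```

A sandbox image that scores well against checks like this – a scattering of recently touched documents, browser history, a dozen mundane processes – is far harder to fingerprint as sterile.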

Malware may also attempt to evade detection by searching for running and installed services and the presence of files linked to debuggers, analysis tools and the sandbox itself (e.g. VirtualBox Guest Additions).

Anti-analysis could be a series all of its own, and my understanding of it is still quite narrow. I strongly encourage you to research the topic yourself as there are tons of excellent articles out there written by authors with far more experience.


Although it is not specific to sandboxing, I do not feel this series would be complete without some mention of the delivery of the output. You can write the world’s best code to manage, control, and extract data from your sandbox, but it is worthless if you cannot deliver it to your users in a helpful fashion. Think about which types of data are most important (IDS alerts? process instantiation? HTTP requests?), what particular feature of that data makes it useful (HTTP hostname? destination IP? alert signature name, signature definition?) and make sure it is clearly highlighted – but you MUST allow the user to reach the raw data.

I cannot stress this enough. Sandboxes are a wonderful tool for getting the information you need as a defender (though not everyone is so enthusiastic), but they are imprecise instruments. The more summarised and filtered the information is, the greater the chance that it leads the analyst to false conclusions.

You should look at other sandboxes out there and draw inspiration, as well as learn what you want to avoid (whether because it’s too complicated for you right now, or you just think it’s a bad way of doing things) when making one yourself. Start by looking at Cuckoo, because it’s free and open source. Take a peek at the blogs and feature sheets of the commercial offerings like VMRay, Joe Sandbox, Hybrid Analysis, and the very new and shiny


Sandboxing is a huge topic and I haven’t begun to scratch the surface with this series. However, I hope that I have done enough to introduce the major areas of concern and provide some direction for people interested in dabbling in (or diving into) this fascinating world. I didn’t realise quite how much work it would be to reach the stage that I have; getting on for 18 months in, I’m still very much a novice and my creation, whilst operational, is distinctly rough around the edges. And on all of the flat surfaces. But it works! And I had fun doing it, and learned a ton – and that’s the main thing. I hope you do too.

Undocumented function “RegRenameKey”

Whilst learning some bits and pieces about the Windows API, I found a function that I wanted to use which does not appear to be documented on MSDN, “RegRenameKey”. Although Visual Studio will helpfully inform you of the datatypes required, it does not tell you what is expected within the datatypes. Since the answer doesn’t seem to be out there just yet I thought I’d do a quick writeup. The definition for the function is as follows:

 LSTATUS RegRenameKey(
   _In_         HKEY    hKey,
   _In_         LPCTSTR lpSubKeyName,
   _In_         LPCTSTR lpNewKeyName
 );

I assumed that lpSubKeyName and lpNewKeyName would accept the same form of input as the registry path expected by RegCreateKey, e.g. “Software\\MyProduct\\MyKey”. However, attempting to use this returns error code 0x57 “The parameter is incorrect”. This is because lpNewKeyName seems to expect just a name without the path. A valid call looks like this:

 TCHAR keyname[] = L"Software\\testkey";
 TCHAR newkeyname[] = L"testkey2";
 LSTATUS renameerr = RegRenameKey(HKEY_CURRENT_USER, keyname, newkeyname);

Not a particularly difficult one, but hopefully this will save people some time!

Mounting image with python-guestfs

Automating a sandbox: Evidence Collection

20,000 Leagues Under The Sand: Part 5

read part 4

You may have a tricked-out sandbox that logs host activity, does packet capture and IDS, and will make you a slice of toast, but none of the bells and whistles will do you any good without collecting the information and putting it in front of your eyes. The techniques required will test your knowledge of network and file system forensics, as well as your skill with code. Let’s start with an easy one.

Suricata logs

If you have followed the suggestions made earlier in this series, Suricata will be writing events to files in /var/log/suricata/ in JSON form, one object per line. This lends itself to ease of use; pretty much any language will have a good JSON parsing library. All you will need to do is filter for entries based on the timestamp being within the period you were running your malware sample.

Be aware that the Suricata log does not get truncated unless you have configured it to be. If you read and filter the log using the simplest method (a line-by-line read from the start, parsing and then filtering each event), this will eventually become very slow. You should consider rotating the file, either yourself or using Suricata’s built-in rotation, and make sure that your parsing and filtering take account of this rotation.
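A minimal sketch of the line-by-line approach in Python, assuming the default EVE timestamp format (e.g. 2021-02-27T18:04:05.123456+0000); file paths and field names beyond "timestamp" are up to your configuration:

```python
import json
from datetime import datetime

# Suricata EVE timestamps look like 2021-02-27T18:04:05.123456+0000
TS_FORMAT = "%Y-%m-%dT%H:%M:%S.%f%z"

def events_in_window(path, start, end):
    """Yield EVE JSON events whose timestamp falls within [start, end]."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue  # skip blank/partial lines
            event = json.loads(line)
            ts = datetime.strptime(event["timestamp"], TS_FORMAT)
            if start <= ts <= end:
                yield event
```

For anything beyond a toy setup, seek past previously processed offsets (or follow the rotated files) instead of re-reading from the start on every run.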

Packet capture

As mentioned in the post discussing networking, you can either create a per-run packet capture as part of your code (assuming your language has the appropriate libraries), or a systemwide one which you can then extract portions of.

If you only ever plan to have one guest VM sandboxing malware at a time, the per-run capture should be fine and relatively simple. If you are slightly nuts ambitious like me and want to design for the possibility of several in parallel, a systemwide capture would be more suitable. Again, depending on the way you have organised capture, you should make sure your code accounts for the rotation of the pcaps.
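For the systemwide case, one option is to carve each run's time window out of the rolling capture with editcap, which ships with Wireshark and accepts -A/-B start and stop times. A small Python wrapper might look like the following; the file names are placeholders, and building the argv list in a separate function keeps the command itself easy to test:

```python
import subprocess
from datetime import datetime

def editcap_cmd(src, dst, start, end):
    """Build an editcap command keeping only packets within [start, end].

    editcap's -A/-B options take local times as 'YYYY-MM-DD HH:MM:SS'.
    """
    fmt = "%Y-%m-%d %H:%M:%S"
    return [
        "editcap",
        "-A", start.strftime(fmt),
        "-B", end.strftime(fmt),
        src, dst,
    ]

def carve_run(src, dst, start, end):
    """Extract one sandbox run's packets from the systemwide capture."""
    subprocess.run(editcap_cmd(src, dst, start, end), check=True)
```

Whichever tool you use, leave a margin of a few seconds either side of the run, since clock skew between guest and host is common.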

Host activity/event logs

Early on in this series I waxed lyrical about the advantages of Sysmon. I am not going to contradict any of that here, but collecting its output is not as simple as you might think. Windows event logs get written to EVTX files, but not necessarily immediately. Therefore although an event may be generated, its presence in the EVTX file is not guaranteed. Under testing I have found that not even a shutdown is a guarantee of the events being written to the file. The only method I have found to be 100% reliable is to query the Windows Event Log API¹. Therefore, to collect Sysmon logs in a reliable fashion, you need to be able to use the Windows API.

I am aware of two methods for doing this. The first is to write a program which queries the API, and run that in your sandbox. You can then write the data to a file, or send it out of the sandbox immediately. To send it out of the sandbox you could have a service on the host listening on the virtual network interface, such as an FTP or HTTP server.

The second method would be to use Windows Event Forwarding. This is a tremendously useful technique for blue teamers and comes highly recommended by Microsoft staff. It does, however, require you to have a second Windows host on which to collect the events, which may not be an option for you. Most documentation you will find on this refers to setting it up in an Active Directory environment; however, it is also capable of running on workgroup-only systems.

¹ I strongly suspect that the events are being written to temporary files but at the time of writing this is little better than a hunch. I’ll chase down my suspicion at some point and if it’s right there’ll be a new post about my findings.

Filesystem collection

Getting events is a huge win, and might well be all you need; but why not go one step further? Malware drops and modifies files and writes to the registry, and if you could get your hands on that evidence, it could be invaluable. Another of the reasons for choosing LibVirt/QEMU as my hypervisor was the availability of Python bindings for LibGuestFS, allowing me to directly mount and read QEMU disk images. However, you should still be fine with other hypervisors: VMware also provides a utility for this, and VirtualBox can apparently be mounted as a… network block device? Please can I have some of whatever Oracle have been smoking, because it’s clearly the good shit.

Detailed coverage of the options for filesystem evidence collection could run to several blog posts of its own, so I won’t go into everything here. However, I will describe three approaches, each with their own advantages and drawbacks.

  • Diffing from a known-good state

The slowest, but most comprehensive method. Requires building a complete catalogue of the hashes of all files on the disk prior to malware execution, and another afterwards, then identifying the differences. Not recommended unless you are truly desperate to roast your CPU with hash calculations.

  • Metadata-based selection

Since you know the lower and upper time bounds for possible activity by the malicious sample, you can walk the directory tree and select only items which have been changed or created in that period. Relatively quick, but some malware is known to modify the MFT record with false created/modified values, known as ‘timestomping’.

  • Key items and locations

The majority of malware activity is limited to just a few locations. Taking a copy of the user directory, and SYSTEM and SOFTWARE registry hives, plus a couple of other items, would capture the traces left by most samples you might ever run.
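The metadata-based approach can be sketched in a few lines of Python, assuming the guest image is already mounted read-only on the host (e.g. via LibGuestFS). Note that on Unix hosts st_ctime is inode-change time rather than creation time, so treat it as an approximation, and remember the timestomping caveat above:

```python
import os

def changed_files(root, start_ts, end_ts):
    """Walk a mounted guest filesystem and return files whose modification
    or metadata-change time falls inside the sample's execution window.

    Timestamps here come from the host's view of the mounted image and
    can be forged by timestomping, so combine with other evidence.
    """
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # unreadable entry; skip
            if (start_ts <= st.st_mtime <= end_ts
                    or start_ts <= st.st_ctime <= end_ts):
                hits.append(path)
    return hits
```

In practice you would combine this with the key-locations list (user profile, SYSTEM and SOFTWARE hives) to keep the volume of copied data manageable.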

There is a final option for collection of file-based evidence, and that is to use a host agent which collects the files as the malware writes them. The above methods would fail to capture a file that has been created and subsequently removed. In an earlier post I mentioned that if you were so inclined, you could write code which would monitor API calls yourself. Doing this would also give you the ability to capture temporary files in addition to the ones which are left behind.

Hopefully you now have an idea of the approaches you can use to gather useful information from the execution of a malware sample without the need for manual intervention. The final post in my series considers anti-analysis techniques and countering sandbox evasion.