Wednesday, August 16, 2017

Toolsmith #127: OSINT with Datasploit

I was reading an interesting Motherboard article, Legal Hacking Tools Can Be Useful for Journalists, Too, that includes reference to one of my all-time OSINT favorites, Maltego. Joseph Cox's article also mentions Datasploit, a 2016 favorite of fellow tool aficionados at Toolswatch.org; see 2016 Top Security Tools as Voted by ToolsWatch.org Readers. Having not yet explored Datasploit myself, this proved to be a grand case of "no time like the present."
Datasploit is "an #OSINT Framework to perform various recon techniques, aggregate all the raw data, and give data in multiple formats." More specifically, as stated on Datasploit documentation page under Why Datasploit, it utilizes various Open Source Intelligence (OSINT) tools and techniques found to be effective, and brings them together to correlate the raw data captured, providing the user relevant information about domains, email address, phone numbers, person data, etc. Datasploit is useful to collect relevant information about target in order to expand your attack and defense surface very quickly.
The feature list includes:
  • Automated OSINT on domain / email / username / phone for relevant information from different sources
  • Useful for penetration testers, cyber investigators, defensive security professionals, etc.
  • Correlates and collates results, showing them in a consolidated manner
  • Tries to find out credentials, API keys, tokens, sub-domains, domain history, legacy portals, and more as related to the target
  • Available as a single consolidated tool as well as standalone scripts
  • Performs Active Scans on collected data
  • Generates HTML, JSON reports along with text files
Resources
Github: https://github.com/datasploit/datasploit
Documentation: http://datasploit.readthedocs.io/en/latest/
YouTube: Quick guide to installation and use

Pointers
A few pointers to keep you from losing your mind. This project is very much a work in progress, with lots of very frustrated users filing bugs and wondering where the support is. The team is doing their best; be patient with them, but read through the Github issues to be sure any bugs you run into haven't already been addressed.
1) Datasploit does not error gracefully; it just crashes. This can be the result of unmet dependencies or even a missing API key. Do not despair, take note, I'll talk you through it.
2) For ease, and the best match to the documentation, I suggest running Datasploit from an Ubuntu variant. Your best bet is to grab Kali, VM or dedicated, and load it up there, as I did.
3) My installation guidance and recommendations should hopefully get you running trouble-free; follow them explicitly.
4) Acquire as many API keys as possible, see further detail below.

Installation and preparation
From Kali bash prompt, in this order:

  1. git clone https://github.com/datasploit/datasploit /etc/datasploit
  2. apt-get install libxml2-dev libxslt-dev python-dev lib32z1-dev zlib1g-dev
  3. cd /etc/datasploit
  4. pip install -r requirements.txt
  5. mv config_sample.py config.py
  6. With your preferred editor, open config.py and add API keys for the following at a minimum; they are, for all intents and purposes, required. Detailed instructions to acquire each are here (a sketch of a populated config.py follows this list):
    1. Shodan API
    2. Censysio ID and Secret
    3. Clearbit API
    4. Emailhunter API
    5. Fullcontact API
    6. Google Custom Search Engine API key and CX ID
    7. Zoomeye Username and Password
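For reference, here is a sketch of what a populated config.py might look like. These variable names are illustrative only, so match them against the actual names in your config_sample.py:

# Illustrative sketch only: variable names here are hypothetical,
# match them against those in your config_sample.py.
shodan_api = "YOUR_SHODAN_API_KEY"
censysio_id = "YOUR_CENSYS_ID"
censysio_secret = "YOUR_CENSYS_SECRET"
clearbit_apikey = "YOUR_CLEARBIT_API_KEY"
emailhunter = "YOUR_EMAILHUNTER_API_KEY"
fullcontact_api = "YOUR_FULLCONTACT_API_KEY"
google_cse_key = "YOUR_GOOGLE_CSE_API_KEY"
google_cse_cx = "YOUR_GOOGLE_CSE_CX_ID"
zoomeye_user = "YOUR_ZOOMEYE_USERNAME"
zoomeye_pass = "YOUR_ZOOMEYE_PASSWORD"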
If, and only if, you've done all of this correctly, you might end up with a running instance of Datasploit. :-) Seriously, this is some of the glitchiest software I've tussled with in quite a while, but the results paid off handsomely. Run python datasploit.py domain.com, where domain.com is your target. Obviously, I ran python datasploit.py holisticinfosec.org to acquire results pertinent to your author.
Datasploit rapidly pulled results as follows:
211 domain references from Github:
Github results
Luckily, no results from Shodan. :-)
Four results from Paste(s): 
Pastebin and Pastie results
Datasploit pulled russ at holisticinfosec dot org as expected, per email harvesting.
Accurate HolisticInfoSec host location data from Zoomeye:

Details regarding HolisticInfoSec sub-domains and page links:
Sub-domains and page links
Finally, a good return on DNS records for holisticinfosec.org and, thankfully, no vulns found via PunkSpider.

Datasploit can also be integrated into other code and called as individual scripts for unique functions. I did a quick run with python emailOsint.py russ@holisticinfosec.org and the results were impressive:
Email OSINT
I love that the first query is of Troy Hunt's Have I Been Pwned. Not sure if you have been? Better check it out. Reminder here: you'll really want to have as many API keys as possible, or you may find these buggy scripts crashing. You'll definitely find yourself weighing the frustration against the rapid, detailed results. I put this offering squarely in the "shows much promise" category, if the devs keep focus on it, assess for quality, and handle errors better.
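Speaking of Have I Been Pwned, you can also check an address directly against Troy's API. A minimal Python sketch, assuming the v2 breachedaccount endpoint and the requests library; note that HIBP expects a descriptive User-Agent header:

import requests

# Minimal sketch against the HIBP v2 API; a descriptive User-Agent is expected.
email = "russ@holisticinfosec.org"
r = requests.get("https://haveibeenpwned.com/api/v2/breachedaccount/" + email,
                 headers={"User-Agent": "toolsmith-hibp-check"})
if r.status_code == 404:
    print("No breach found for " + email)
else:
    r.raise_for_status()
    print([breach["Name"] for breach in r.json()])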
Give Datasploit a try for sure.
Cheers, until next time...

Friday, July 07, 2017

Toolsmith #126: Adversary hunting with SOF-ELK

As we celebrate Independence Day, I'm reminded that we honor what was, of course, an armed conflict. Today's realities, when we think about conflict, are quite different than the days of lining troops up across the field from each other, loading muskets, and flinging balls of lead into the fray.
We live in a world of asymmetrical battles, often conflicts that aren't always obvious in purpose and intent, and likely fought on multiple fronts. For one of the best reads on the topic, the time is well spent reading TJ O'Connor's The Jester Dynamic: A Lesson in Asymmetric Unmanaged Cyber Warfare. If you're reading this post, it's highly likely that your front is that of 1s and 0s, either as a blue team defender, or as a red team attacker. I live in this world every day of my life as a blue teamer at Microsoft, and as a joint forces cyber network operator. We are faced, each day, with overwhelming, excessive amounts of data, of varying quality, where the answers to questions are likely hidden, but available to those who can dig deeply enough.
New platforms continue to emerge to help us in this cause. At Microsoft we have a variety of platforms that make the process easier for us, but no less arduous, to dig through the data, and the commercial sector continues to expand its offerings. For those with limited budgets and resources, but a strong drive for discovery, there have been outstanding offerings as well. Security Onion has been at the forefront for years, and is under constant development and improvement in the care of Doug Burks.
Another emerging platform, to be discussed here, is SOF-ELK, part of the SANS Forensics community, created by Phil Hagen, author and instructor of SANS FOR572, Advanced Network Forensics and Analysis. Count SOF-ELK as a strong player in the NFAT (Network Forensic Analysis Tool) family for sure.
SOF-ELK has a great README, don't be that person, read it. It's everything you need to get started, in one place. What!? :-)
Better yet, you can download a fully realized VM with almost no configuration requirements, so you can hit the ground running. I ran my SOF-ELK instance with VMware Workstation 12 Pro and had no issues other than needing to temporarily disable Device Guard and Credential Guard on Windows 10.
SOF-ELK offers some good test data to get you started right out of the gate, in /home/elk_user/exercise_source_logs, including Syslog from a firewall, router, converted Windows events, a Squid proxy, and a server named muse. You can drop these on your SOF-ELK server in the /logstash/syslog/ ingestion point for syslog-formatted data. Additionally, utilize /logstash/nfarch/ for archived NetFlow output, /logstash/httpd/ for Apache logs, /logstash/passivedns/ for logs from the passivedns utility, /logstash/plaso/ for log2timeline, and /logstash/bro/ for, yeah, you guessed it.
I mixed things up a bit and added my own Apache logs for the month of May to /logstash/httpd/. The muse log set in the exercise offering also included a DNS log (named_log); for grins, I threw that into /logstash/syslog/ as well just to see how it would play.
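Getting data in really is that easy; a minimal sketch, where the destination paths are SOF-ELK's documented ingestion points and the source file names are mine:

import shutil

# Drop logs on SOF-ELK's ingestion points; source file names are mine.
shutil.copy("access_log_2017_05", "/logstash/httpd/")  # Apache logs
shutil.copy("named_log", "/logstash/syslog/")          # syslog-formatted DNS log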
Run down a few data rabbit holes with me; I swear I can linger for hours on end once I latch on to something to chase. We'll begin with a couple of highlights from my Apache logs. The SOF-ELK VM comes with three pre-configured dashboards, including Syslog, NetFlow, and HTTPD. You can learn more in the start page for the SOF-ELK UI; my instance is http://192.168.50.110:5601/app/kibana. There are three panels, or blocks, for each dashboard's details, at the bottom of the UI. I drilled through to the HTTPD Log Dashboard for this experiment, and immediately reset the time period for analysis (click the time marker in the upper right hand part of the UI). It defaults to the last 15 minutes; if you're reviewing older data it won't show until you adjust to match your time stamps. My data is from the month of May, so I selected an absolute window from the beginning of May to its end. You can also select quick or relative time options; it's great to get comfortable here quickly and early. The resulting opening visualizations made me very happy, as seen in Figure 1.
Figure 1: HTTPD Log Dashboard
Nice! An event count summary, source ASNs by count (you can immediately see where I scanned myself from work), a fantastic Access Source map, a records graph by HTTP verbs, and one by response codes.
The beauty of these SOF-ELK dashboards is that they're immediately interactive and allow you to drill right in to interesting data points. The holisticinfosec.org website is intentionally flat and includes no active PHP or dynamic content. As a result, my favorite response code as a web application security tester, the 500 error, is notably missing. But, in both the timeline graphs we note a big traffic spike on 8 MAY 2017, which correlates nicely with my above-mentioned scan from work, as noted in the ASN hit count, and seen here in Figure 2.

Figure 2: Traffic spike from scan
This visualizes well but isn't really all that interesting or uncommon, particularly given that I know I personally ran the scan, and scans from the Intarwebs are a dime a dozen. What did jump out for me though, as seen back in Figure 1, was the presence of four PUT requests. That's usually a "bad thing", where some @$$h@t is trying to drop something on my server. Let's drill in a bit, shall we? After clicking the graph line with the four PUT requests, I quickly learned that two requests came from 204.12.194.234, AS32097: WholeSale Internet in Kansas City, MO, and two came from 119.23.233.9, AS37963: Hangzhou Alibaba Advertising in Hangzhou, China. This is well represented in the HTTPD Access Source panel map (Figure 3).

Figure 3: Access Source
The PUT requests from each included a txt file attempt, specifically dbhvf99151.txt and htjfx99555.txt; both were rejected, redirected (302), and sent to my landing page (200).
Research on the IPs found that 119.23.233.9 was on the "real time suspected malware list as detected by InterServer's intrusion systems" as seen 22 MAY, and 204.12.194.234 was found twice in the AbuseIPDB, flagged on 18 MAY 2017 for Cknife Webshell Detected. Now we're talking. It's common to attempt a remote file include attack or a PUT with what is, in fact, a web shell. I opened up SOF-ELK on that IP address and found eight total hits in my logs, all looking for common PHP opportunities with the likes of GET and POST for /plus/mytag_js.php, noted in PHP injection attack attempts.
SOF-ELK made it incredibly easy to hunt down these details, as seen in Figure 4 from the HTTPD Discovery panel.
Figure 4: Discovery
That's a groovy little hunting trip through HTTPD logs, but how about a bit of Syslog? I spotted a likely oddity that could be correlated across a number of the exercise logs; we'll see if the correlation is real. You'll notice tabs at the top of your SOF-ELK UI; we'll use Discover for this experiment. I started from the Syslog Dashboard with my time range set broadly on the last two months. 7606 records presented themselves, sliced neatly by hosts and programs, as seen in Figure 5.

Figure 5: Syslog Dashboard
Squid proxy logs showed the predominance of host entries (6778, or 57.95% of 11,696, to be specific), so I started there. Don't laugh, but I'll often do keyword queries just to see what comes up; sometimes you land a pointer to a good rabbit hole. Within the body of 6778 proxy events, I searched malware. Two hits came back for GET requests via a JS redirector to bleepingcomputer.com for your basic how-to based on "random websites opening in Chrome". Ruh-roh.
Figure 6: Malware keyword
More importantly, we have an IP address to pivot on: 10.3.59.53. A search of that IP across the same 6778 Squid logs yielded 3896 entries specific to this IP, and lots to be curious about:
  • datingukrainewomen.com 
  • anastasiadate.com
  • YouTube videos for hair loss
  • crowdscience.com for "random pop-ups driving me nuts"
Do I need to build this user profile out for you, or are you with me? Proxy logs tell us so much, and are deeply worthy of your blue team efforts to collect and review.
I jumped over to the named_log from the muse host to see what else might reveal itself. Here's where I jumped to Discover, the Splunk-like query functionality inherent to SOF-ELK (and ELK implementations). I did a reductive query to see what other oddities might surface: 10.3.59.53 AND dns_query: (*.co.uk OR *.de OR *.eu OR *.info OR *.cc OR *.online OR *.website). I used these TLDs based on the premise that bots using Domain Generation Algorithms (DGA) will often use them. See The DGA of PadCrypt to learn more, as well as ISC Diary handler John Bambenek's OSINT logic. The query results were quite satisfying: 29 hits, including a number of clearly randomly generated domains. Those that were most interesting all included the .cc TLD, so I zoomed in further. Down to five hits with 10.3.59.53 AND dns_query: *.cc, as seen in Figure 7.
Figure 7: .CC TLD hits
Oh man, not good. I had a hunch now, and went back to the proxy logs with 10.3.59.53 AND squid_request:*.exe. And there you have it, ladies and gentlemen, hunch rewarded (Figure 8).

Figure 8: taxdocs.exe
If taxdocs.exe isn't malware, I'm a monkey's uncle. Unfortunately, I could find no online references to these .cc domains or the .exe sample or URL, but you get the point. Given that it's exercise data, Phil may have generated it to entice us to dig deeper.
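If you'd like to replicate that sort of reductive TLD hunt programmatically, outside of Kibana, here's a minimal Python sketch; dns_queries.txt is a hypothetical export with one queried domain per line:

# Minimal sketch of the reductive TLD hunt, outside of Kibana.
# dns_queries.txt is a hypothetical export: one queried domain per line.
SUSPECT_TLDS = (".co.uk", ".de", ".eu", ".info", ".cc", ".online", ".website")

with open("dns_queries.txt") as f:
    for line in f:
        domain = line.strip().lower()
        if domain.endswith(SUSPECT_TLDS):  # endswith accepts a tuple
            print(domain)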
When we think about the IOC patterns for Petya, a hunt like this is pretty revealing. Petya's "initial infection appears to involve a software supply-chain threat involving the Ukrainian company M.E.Doc, which develops tax accounting software, MEDoc". This is not Petya specifically (as far as I know), but we see pattern similarities for sure; one can learn a great deal about the sheep and the wolves. Be the sheepdog!
There are few better tools in the free and open source arsenal to help you train and enhance your inner digital sheepdog than SOF-ELK. "I'm a sheepdog. I live to protect the flock and confront the wolf." ~ LTC Dave Grossman, from On Combat.

Believe it or not, there's a ton more you can do with SOF-ELK, consider this a primer and a motivator.
I LOVE SOF-ELK. Phil, well done, thank you. Readers rejoice, this is really one of my favorites for toolsmith, hands down, out of the now 126 unique tools discussed over more than ten years. Download the VM, and get to work herding. :-)
Cheers...until next time.

Sunday, May 21, 2017

Toolsmith #125: ZAPR - OWASP ZAP API R Interface

It is my sincere hope that when I say OWASP Zed Attack Proxy (ZAP), you say "Hell, yeah!" rather than "What's that?". This publication has been a longtime supporter, and so many brilliant contributors and practitioners have lent to OWASP ZAP's growth, in addition to @psiinon's extraordinary project leadership. OWASP ZAP has been 1st or 2nd in the last four years of @ToolsWatch best tool surveys for a damned good reason. OWASP ZAP usage has been well documented and presented over the years, and the wiki gives you tons to consider as you explore OWASP ZAP user scenarios.
One of the scenarios I've sought to explore more recently is use of the OWASP ZAP API. The OWASP ZAP API is also well documented, more than enough detail to get you started, but consider a few use case scenarios.
First, there is a functional, clean OWASP ZAP API UI that gives you a viewer's perspective as you contemplate programmatic opportunities. OWASP ZAP API interaction is URL based, and you can both access views and invoke actions. Explore any component and you'll immediately find related views or actions. Drilling into core via http://localhost:8067/UI/core/ (I run OWASP ZAP on 8067; your install will likely be different) gives me a ton to choose from.
You'll need your API key in order to build queries. You can find yours via Tools | Options | API | API Key. As an example, drill into numberOfAlerts (baseurl), which gets the number of alerts, optionally filtering by URL. You'll then be presented with the query builder, where you can enter your key, define specific parameters, and decide your preferred output format, including JSON, HTML, and XML.
Sure, you'll receive results in your browser; this query will provide answers in HTML tables, but those aren't necessarily optimal for programmatic data consumption and manipulation. That said, you've learned the most important part of this lesson, a fully populated OWASP ZAP API GET URL: http://localhost:8067/HTML/core/view/numberOfAlerts/?zapapiformat=HTML&apikey=2v3tebdgojtcq3503kuoq2lq5g&formMethod=GET&baseurl=.
This request would return the alert count in an HTML table. Very straightforward and easy to modify per your preferences, but HTML results aren't very machine friendly. Want JSON results instead? Just swap out HTML with JSON in the URL, or just choose JSON in the builder. I'll tell you that I prefer working with JSON when I use the OWASP ZAP API via the likes of R. It's certainly the cleanest, machine-consumable option, though others may argue with me in favor of XML.
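The API is just as approachable from languages other than R. A minimal Python sketch, using the requests library along with my port and API key (yours will differ); the alert key names reflect ZAP's JSON output, so verify them against your instance:

import requests

ZAP = "http://localhost:8067"       # I run OWASP ZAP on 8067; yours will likely differ
KEY = "2v3tebdgojtcq3503kuoq2lq5g"  # Tools | Options | API | API Key

# Number of alerts, optionally filtered by baseurl
counts = requests.get(ZAP + "/JSON/core/view/numberOfAlerts/",
                      params={"apikey": KEY, "baseurl": ""}).json()
print(counts)

# Full alert detail, the same content consumed via R below
alerts = requests.get(ZAP + "/JSON/core/view/alerts/",
                      params={"apikey": KEY, "baseurl": "", "start": "", "count": ""}).json()
for alert in alerts["alerts"]:
    print(alert["risk"], alert["alert"])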
Allow me to provide you an example with which you can experiment, one I'll likely continue to develop against, as it's kind of cool for active reporting on OWASP ZAP scans in flight, or on results when the session is complete. Note, all my code, crappy as it may be, is available for you on Github. I mean to say, this is really v0.1 stuff, so contribute and debug as you see fit. It's also important to note that OWASP ZAP needs to be running, either with an active scanning session, or a stored session you saved earlier. I scanned my website, holisticinfosec.org, and saved the session for regular use as I wrote this. You can even see reference to the saved session by location below.
R users are likely aware of Shiny, a web application framework for R, and its dashboard capabilities. I also discovered that rCharts are designed to work interactively and beautifully within Shiny.
R includes packages that make parsing from JSON rather straightforward, as I learned from Zev Ross. RJSONIO makes it as easy as fromJSON("http://localhost:8067/JSON/core/view/alerts/?zapapiformat=JSON&apikey=2v3tebdgojtcq3503kuoq2lq5g&formMethod=GET&baseurl=&start=&count=")
to pull data from the OWASP ZAP API. We use fromJSON, a "function and its methods to read content in JSON format and de-serialize it into R objects", where the ZAP API URL is that content.
I further parsed alert data using Zev's grabInfo function and organized the results into a data frame (ZapDataDF). I then further sorted the alert content from ZapDataDF into objects useful for reporting and visualization. Within each alert object are values such as the risk level, the alert message, the CWE ID, the WASC ID, and the Plugin ID. Defining each of these values into parameters useful to R takes just a few more lines of code.
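As an aside, should you prefer to pivot in Python rather than R, a hedged pandas equivalent of that reshaping (my sketch, not the post's R code) looks like the following, reusing the alerts object pulled in the earlier Python example:

import pandas as pd

# alerts["alerts"] is the list returned from /JSON/core/view/alerts/ above.
ZapDataDF = pd.DataFrame(alerts["alerts"])  # columns include risk, alert, cweid, wascid
print(ZapDataDF["risk"].value_counts())     # counts by risk level, akin to the bar chart to come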
I then combined all those results into another data frame I called reportDF, the results of which are seen in the figure below.
reportDF results
Now we've got some content we can pivot on.
First, let's summarize the findings and present them in their resplendent glory via ZAPR: OWASP ZAP API R Interface.
Code first, truly simple stuff it is:
Summary overview API calls

You can see that we're simply using RJSONIO's fromJSON to make specific ZAP API calls. The results are quite tidy, as seen below.
ZAPR Overview
One of my favorite features in Shiny is the renderDataTable function. When utilized in a Shiny dashboard, it makes filtering results a breeze, and thus it is utilized as the first real feature in ZAPR. The code is tedious; review or play with it from Github, but the results should speak for themselves. I filtered the view by CWE ID 89, which in this case is a bit of a false positive; I have a very flat web site, no database, thank you very much. Nonetheless, it's good to have an example of what would definitely be a high risk finding.


Alert filtering

Alert filtering is nice, and I'll add more results capabilities as I develop this further, but visualizations are important too. This is where rCharts really comes to bear in Shiny, as its charts are interactive. I've used the simplest examples, but you'll get the point. First, a few wee lines of R, as seen below.
Chart code
The results are much more satisfying to look at, and allow interactivity. Ramnath Vaidyanathan has done really nice work here. First, OWASP ZAP alerts pulled via the API are counted by risk in a bar chart.
Alert counts

Mousing over Medium, we can see that there were specifically 17 results from my OWASP ZAP scan of holisticinfosec.org.
Our second visualization is the CWE ID results by count, in an oft-disdained but interactive pie chart (yes, I have some work to do on layout).


CWE IDs by count

As we learned earlier, I only had one CWE ID 89 hit during the session, and the visualization supports what we saw in the data table.
The possibilities are endless for pulling data from the OWASP ZAP API and incorporating the results into any number of applications or report scenarios. I have a feeling there is a rich opportunity here with Power BI, which I intend to explore. All the code is here, along with the OWASP ZAP session I refer to, so you can play with it for yourself. You'll need OWASP ZAP, R, and RStudio to make it all work together; let me know if you have questions or suggestions.
Cheers, until next time.

Sunday, March 26, 2017

Toolsmith #124: Dripcap - Caffeinated Packet Analyzer

Dripcap is a modern, graphical packet analyzer based on Electron.
Electron, you say? "Electron is a framework for creating native applications with web technologies like JavaScript, HTML, and CSS. It takes care of the hard parts so you can focus on the core of your application."
We should all be deeply familiar with the venerable Wireshark, as it has long been the forerunner for packet analysts seeking a graphical interface to their PCAPs. Occasionally though, it's interesting to explore alternatives. I've long loved NetworkMiner, and the likes of Microsoft Message Analyzer and Xplico each have unique benefits.
For basic users comfortable with Wireshark, you'll likely find Dripcap somewhat rudimentary at this stage, but it does give you opportunities to explore packet captures at fundamental levels and learn without some of the feature crutches more robust tools offer.
However, for JavaScript developers, Dripcap opens up a whole other world of possibilities. Give the Create NTP dissector package tutorial a read; you can create, then publish and load, dissector (and other) packages of your choosing.

Installation
I built Dripcap from source on Windows as follows, using Chocolatey.
From an administrator PowerShell prompt (ensure Get-ExecutionPolicy is not Restricted), execute the following (restart your admin PS prompt after #2):
  1. iwr https://chocolatey.org/install.ps1 -UseBasicParsing | iex
  2. choco install git make jq nodejs
  3. git clone https://github.com/dripcap/dripcap.git
  4. cd dripcap
  5. npm install -g gulp node-gyp babel-cli
  6. npm install
  7. gulp
Step 1 installs Chocolatey, step 2 uses Chocolatey to install tools, step 3 clones Dripcap, step 4 changes into its directory, steps 5 & 6 install packages, and step 7 builds it all.
Execute dripcap, and you should be up and running.
You can also use npm, part of Node.js' package ecosystem, to install Dripcap CLI with npm install -g dripcap, or just download dripcap-windows-amd64.exe from Dripcap Releases.

Experiment 

I'll walk you through packet carving of sorts with Dripcap. One of Dripcap's strongest features is its filtering capabilities. I used an old PCAP with an Operation Aurora Internet Explorer exploit (CVE-2010-0249) payload for this tool test.
Ctrl+O will Import Pcap File for you.

Click Developer, then Toggle Log Panel for full logging.

Figure 1: Dripcap
You'll note four packets with lengths of 1514, as seen in Figure 1. Exploring the first of these packets indicates just what we'd expect: an Ethernet MTU (maximum transmission unit) of 1500 bytes, and a TCP payload of 1460 bytes, leaving 40 bytes for our header (20 byte IP and 20 byte TCP).

Figure 2: First large packet
 Hovering your mouse over the TCP details in the UI will highlight all the TCP specific data, but you can take such actions a step further. First, let's filter down to just the large packets with tcp.payload.length == 1460.
Figure 3: Filtered packets
 With our view reduced we can do some down and dirty carving pretty easily with Dripcap. In each of the four filtered packets I hovered over Payload 1460 bytes as seen in Figure 4, which highlighted the payload-specific hex. I then used the mouse to capture the highlighted content and, using Dripcap's Edit and Copy, grabbed only that payload-specific hex and pasted it to a text file.
Figure 4: Hex payload
I did this with each of these four packets and copied content, one hex blob after the other, into my text file, in tight, seamless sequence. I then used Python Tools for Visual Studio to do a quick hexadecimal-to-ASCII translation, as easily as bytearray.fromhex("my hex snippet here").decode(). The result, in Figure 5, shows the resulting JavaScript payload that exploits CVE-2010-0249.
Figure 5: ASCII results
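For repeatability, here's the same conversion as a tiny script; payload_hex.txt is a hypothetical file holding the pasted hex blobs:

# Convert concatenated payload hex (as copied from Dripcap) to ASCII.
# payload_hex.txt is a hypothetical file of the pasted hex blobs.
with open("payload_hex.txt") as f:
    hex_blob = "".join(f.read().split())  # drop any whitespace between blobs

payload = bytearray.fromhex(hex_blob).decode("ascii", errors="replace")

with open("payload.txt", "w") as out:
    out.write(payload)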
You can just as easily use online converters as well. I saved the ASCII results to a text file in a directory which I had excluded from my anti-malware protection. After uploading the file to VirusTotal as payload.txt, my expectations were confirmed: 32 of 56 AV providers detected the file as the likes of Exploit:JS/Elecom.D or, more to the point, Exploit.JS.Aurora.a.

In closing
Perhaps not the most elegant method, but it worked quickly and easily with Dripcap's filtering and editing functions. I hope to see this tool, and its community, continue to grow. Build dissector packages, create themes, become part of the process; it's always good to see alternatives available to security practitioners.
Cheers...until next time.

Sunday, February 19, 2017

Toolsmith Release Advisory: Sysmon v6 for Securitay

Sysmon just keeps getting better.
I'm thrilled to mention that @markrussinovich and @mxatone have released Sysmon v6.
When I first discussed Sysmon v2 two years ago it offered users seven event types.
Oh, how it's grown in the last two years, now with 19 events, plus an error event.
From Mark's RSA presentation we see the current listing with the three new v6 events highlighted.
 
Sysmon Events

"This release of Sysmon, a background monitor that records activity to the event log for use in security incident detection and forensics, introduces an option that displays event schema, adds an event for Sysmon configuration changes, interprets and displays registry paths in their common format, and adds named pipe create and connection events."

Mark's presentation includes his basic event recommendations so you can run Sysmon optimally.
Basic Event Recommendations
Basic Event Recommendations (Cont)

I strongly suggest you deploy using these recommendations.
A great way to get started is to use a Sysmon configuration template. Again, as Mark discussed at RSA, consider @SwiftOnSecurity's sysmonconfig-export.xml via Github. While there are a number of templates on Github, this one has "virtually every line commented and sections are marked with explanations, so it should also function as a tutorial for Sysmon and a guide to critical monitoring areas in Windows systems." Running Sysmon with it is as easy as:
sysmon.exe -accepteula -i sysmonconfig-export.xml

As a quick example of Sysmon capabilities, and why you should always run it everywhere, consider the following driver installation scenario. While this is a non-malicious scenario, one that DFIR practitioners will appreciate rather than the miscreants, the detection behavior resembles that which would result from kernel-based malware.
I fired up WinPMEM, the kernel mode driver for gaining access to physical memory included with Rekall, as follows:
WinPMEM
Upon navigating to Applications and Services Logs/Microsoft/Windows/Sysmon/Operational in the Windows Event Viewer, I retrieved the expected event:

Event ID 6: Driver loaded
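If you prefer the command line to Event Viewer, the same events are easy to pull with wevtutil; a quick sketch, wrapped in Python for consistency with this post's other examples:

import subprocess

# Query the five most recent Sysmon events, newest first, as readable text.
out = subprocess.check_output([
    "wevtutil", "qe", "Microsoft-Windows-Sysmon/Operational",
    "/c:5", "/rd:true", "/f:text"])
print(out.decode("utf-8", errors="replace"))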
The best way to leave you to scramble off and deploy Sysmon broadly is with Mark's Best Practices and Tips, again from his RSA presentation.

Go forth and deploy!
Cheers...until next time.

Wednesday, February 08, 2017

Aikido & HolisticInfoSec™

This is the 300th post to the HolisticInfoSec™ blog. Sparta, this isn't, but I thought it important to provide you with content in a warrior/philosopher mindset regardless. 
Your author is an Aikido practitioner, albeit a fledgling in practice, with so, so much to learn. While Aikido is often translated as "the way of unifying with life energy" or as "the way of harmonious spirit", I propose that the philosophies and principles inherent to Aikido have significant bearing on the practice of information security.
In addition to spending time in the dojo, there are numerous reference books specific to Aikido from which a student can learn. Among the best is Adele Westbrook and Oscar Ratti's Aikido and the Dynamic Sphere. All quotes and references that follow are drawn from this fine publication.
As an advocate for the practice of HolisticInfoSec™ (so much so, I trademarked it) the connectivity to Aikido is practically rhetorical, but allow me to provide you some pointed examples. I've tried to connect each of these in what I believe is an appropriate sequence to further your understanding, and aid you in improving your practice. Simply, one could say each of these can lead to the next.
  
The Practice of Aikido
"The very first requisite for defense is to know the enemy."
So often in information security, we see reference to the much abused The Art of War, wherein Sun Tzu stated "It is said that if you know your enemies and know yourself, you will not be imperiled in a hundred battles." Aikido embraces this as the first requisite, but also offers the importance of not underestimating your enemy or opponent. For information security, I liken it to this: if you are uninformed on adversary actor types and profiles, their TTPs (tactics, techniques, and procedures), as well as current vulnerabilities and exploits, along with more general threat intelligence, then you are already at a disadvantage before you even begin to imagine countering your opponent.

"A positive defensive strategy is further qualified as being specific, immediate, consistent, and powerful." 
Upon learning more about your adversary, a rehearsed, exercised strategy for responding to their attack should be considered the second requisite for defense. To achieve this, your efforts must include:
  • a clear definition and inventory of the assets you're protecting
  • threat modeling of code, services, and infrastructure
  • an incident response plan and SOP, and regular exercise of the IR plan
  • robust security monitoring to include collection, aggregation, detection, correlation, and visualization
  • ideally, a purple team approach that includes testing blue team detection and response capabilities in partnership with a red team. Any red team that follows the "you suck, we rock" approach should be removed from the building and replaced by one who espouses "we exist to identify vulnerabilities and exploits with the goal of helping the organization better mitigate and remediate".
As your detection and response capabilities improve with practice and repetition, your mean time to mitigate (MTM) and mean time to remediate (MTR) should begin to shrink, thus lending to the immediacy, consistency, and power of your defense.


The Process of Defense and Its Factors
"EVERY process of defense will consist of three stages: perception, evaluation-decision, and reaction."
These should be easy likenesses for you to reconcile.
Perception = detection and awareness
The better and more complete your threat intelligence collection and detection capabilities, the better your situational awareness will be, and as a result your perception of adversary behaviors will improve and become more timely.
Evaluation-decision = triage
It's inevitable...$#!+ happens. Your ability to quickly evaluate adversary actions and be decisive in your response will dictate your level of success as incident responders. Strength at this stage directly impacts the rest of the response process. Incorrect or incomplete evaluation, and the resulting ill-informed decisions, can set back your response process in a manner from which recovery will be very difficult.
Reaction = response
My Aikido sensei, typically right after he's hit one of us, likes to remind his students: "Don't get hit." :-) The analogy here is to react quickly enough to stay on your feet. Can you move quickly enough to not be hit as hard or as impactfully as your adversary intended? Your reaction and response will determine such outcomes. The connection between kinetic and virtual combat here is profound. Stand still, get hit. Feint or evade, and at least avoid some, or all, contact. In the digital realm, you're reducing your time to recover with this mindset.

Dynamic Factors
"A defensive aikido strategy begins the moment a would-be attacker takes a step toward you or turns aggressively in your direction. His initial motion (movement) in itself contains the factors you will use to neutralize the action of attack which will spring with explosive force from that motion of convergence."
Continuing on our theme of inevitability, digital adversaries will, beyond the shadow of a doubt, take a step toward you or turn aggressively in your direction. The question for you will be, do you even know when that has occurred in light of our discussion of requisites above? Aikido is all about using your opponent's energy against them, wherein, for those of us in DFIR, our adversary's movement in itself contains the factors we use to neutralize the action of attack. As we improve our capabilities in our defensive processes (perception, evaluation-decision, and reaction), we should be able to respond in a manner that begins the very moment we identify adversarial behavior, and do so quickly enough that our actions pivot directly on our adversary's initial motion.
As an example, your adversary conducts a targeted, nuanced spear phishing campaign. Your detective means identify all intended victims, you immediately react, and add all intended victims to an enhanced watch list for continuous monitoring. The two victims who engaged the payload are quarantined immediately, and no further adversarial pivoting or escalation is identified. The environment as a whole is raised to a state of heightened awareness, and your user base becomes part of your perception network.


"It will be immediate or instantaneous when your reaction is so swift that you apply a technique of neutralization while the attack is still developing, and at the higher levels of the practice even before an attack has been fully launched."
Your threat intelligence capabilities are robust enough that your active deployment of detections for specific Indicators of Compromise (IOCs) prevented the targeted, nuanced spear phishing campaign from even reaching the intended victims. Your monitoring active lists include known adversary infrastructure such that the moment they launch an attack, you are already aware of its imminence.
You are able to neutralize your opponent before they even launch. This may be unimaginable for some, but it is achievable by certain mature organizations under specific circumstances.

The Principle of Centralization
"Centralization, therefore, means adopting a new point of reference, a new platform from which you can exercise a more objective form of control over events and over yourself."
Some organizations decentralize information security, others centralize it with absolute authority. There are arguments for both, and I do not intend to engage that debate. What I ask you to embrace is the "principle of centralization". The analogy is this: large corporations and organizations often have multiple, and even redundant security teams. Even so, their cooperation is key to success.
  • Is information exchanged openly and freely, with silos avoided? 
  • Are teams capable of joint response? 
  • Are there shared resources that all teams can draw from for consistent IOCs and case data?
  • Are you and your team focused on facts, avoiding FUD, thinking creatively, yet assessing with a critical, objective eye?
Even with a logically decentralized security model, organizations can embrace the principle of centralization and achieve an objective form of control over events. The practice of a joint forces focus defines the platform from which teams can and should operate.

Adversarial conditions, in both the physical realm, and the digital realm in which DFIR practitioners operate, are stressful, challenging, and worrisome. 
Morihei Ueshiba, Aikido's founder, reminds us that "in extreme situations, the entire universe becomes our foe; at such critical times, unity of mind and technique is essential - do not let your heart waver!" That said, perfection is unlikely, even impossible; this is a practice you must exercise. Again, Ueshiba offers that "failure is the key to success; each mistake teaches us something."
Keep learning, be strong of heart. :-)
Cheers...until next time.
